Azure Data Engineer Job Vacancy at ECS ME LLC, Hyderabad, Telangana – Updated today

Are you looking for a new job or for better opportunities?
We have a new job opening for the following position.

Full Details :
Company Name : ECS ME LLC
Location : Hyderabad, Telangana
Position : Azure Data Engineer

Job Description :
- Responsible for implementing robust data pipelines using the Microsoft stack.
- Responsible for creating reusable and scalable data pipelines.
- Responsible for the development and deployment of new data platforms.
- Experience with cloud platforms such as Azure, AWS, Google Cloud, etc.
- Experience migrating SQL databases to Azure Data Lake, Azure SQL Database, Azure Synapse, Databricks and Talend.
- Experience deploying distributed messaging systems using Kafka, other messaging systems such as Azure IoT Hub / Event Hub, or CDC mechanisms like Striim or HVR.
- Experience building REST API pipelines and parsing complex nested JSON into a relational format (a first sketch follows this description).
- Experience developing Spark applications using Spark SQL in Databricks for data extraction, transformation and aggregation from multiple file formats, analysing and transforming the data to uncover insights into customer usage patterns (see the second sketch below).
- Good understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, driver node, worker nodes, stages, executors and tasks.
- Good understanding of Big Data Hadoop and YARN architecture, along with the various Hadoop daemons such as JobTracker, TaskTracker, NameNode, DataNode and Resource/Cluster Manager, as well as Kafka (distributed stream processing).
- Experience in database design and development with Business Intelligence using SQL Server 2014/2016, Integration Services (SSIS) and DTS packages.
- Expertise in the various phases of the project life cycle (design, analysis, implementation and testing).
- Analyse, design and build modern data solutions using Azure PaaS services to support visualisation of data. Understand the current production state of the application and determine the impact of new implementations on existing business processes.
- Create pipelines in ADF using Linked Services/Datasets/Pipelines to extract, transform and load data from different sources such as Azure SQL, Blob Storage and Azure SQL Data Warehouse, including write-back.
- Primary responsibilities include using services and tools to ingest, egress and transform data from multiple sources.

Qualification :
- Bachelor's/Master's in Computer Science/IT or equivalent.
- Azure certifications will be an added advantage (certification in AZ-900 and/or AZ-204, AZ-303, AZ-304 or AZ-400).

Job Type : Full-time
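To illustrate the nested-JSON requirement above, here is a minimal PySpark sketch of the kind of job that would run in a Databricks notebook. It is not part of the posting: the endpoint URL, the response shape and the field names (order_id, customer, items, sku, qty) are hypothetical assumptions used only to show the flattening pattern.

```python
import json

import requests
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, explode

spark = SparkSession.builder.appName("nested-json-flatten").getOrCreate()

# Pull records from a hypothetical REST endpoint; URL and payload shape are placeholders.
payload = requests.get("https://api.example.com/v1/orders").json()

# Let Spark infer the nested schema from the raw JSON strings.
raw = spark.read.json(
    spark.sparkContext.parallelize([json.dumps(r) for r in payload["results"]])
)

# Flatten: explode the nested 'items' array and project nested struct fields
# into flat relational columns.
flat = (
    raw.select("order_id", "customer.name", explode("items").alias("item"))
       .select(
           col("order_id"),
           col("name").alias("customer_name"),
           col("item.sku").alias("sku"),
           col("item.qty").alias("quantity"),
       )
)

# Persist as a table; Delta is assumed to be available (it is on Databricks).
flat.write.mode("overwrite").format("delta").saveAsTable("orders_flat")
```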
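A second minimal sketch covers the Spark SQL / multiple-file-format point: read two sources in different formats, register temporary views, and aggregate usage with plain Spark SQL. The mount paths, column names (user_id, segment) and the usage-pattern query are illustrative assumptions, not requirements from the posting.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("usage-aggregation").getOrCreate()

# Ingest two hypothetical sources in different file formats.
events = spark.read.parquet("/mnt/raw/events/")                               # Parquet
customers = spark.read.option("header", True).csv("/mnt/raw/customers.csv")   # CSV

# Expose both DataFrames to Spark SQL as temporary views.
events.createOrReplaceTempView("events")
customers.createOrReplaceTempView("customers")

# Aggregate usage per customer segment with Spark SQL.
usage_by_segment = spark.sql("""
    SELECT c.segment,
           COUNT(*)                  AS event_count,
           COUNT(DISTINCT e.user_id) AS active_users
    FROM events e
    JOIN customers c ON e.user_id = c.user_id
    GROUP BY c.segment
    ORDER BY event_count DESC
""")

usage_by_segment.show()
```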

This post is listed under Technology.
Disclaimer : Hugeshout works to publish the latest job information only and is not responsible for any errors. Users must do their own research before joining any company.
