Data Engineer – Spark/Scala/Python/Java, with Databricks, Cloud (Azure) – Job Vacancy at AT&T, Bengaluru, Karnataka
Are you looking for a new job or for better opportunities? We have a new job opening.
Full details:
Company Name : AT&T
Location : Bengaluru, Karnataka
Position : Data Engineer
Job Description : Description – External
Impact and Influence:
This position interacts regularly with architects, leads, data engineers, and data scientists to support cutting-edge analytics in the development and design of data acquisition, data ingestion, and data products. Developers will work on big data technologies and platforms, both on premises and in the cloud, and will follow and contribute to best practices for software development in the big data ecosystem. The roles and responsibilities are as follows:
Create and maintain optimal data pipeline architecture.
Assemble large, complex data sets that meet functional and non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
Build tools on top of the data pipeline to provide actionable insights into data acquisition, operational efficiency, and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
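The pipeline responsibilities above follow a classic extract-transform-load shape. In this stack the implementation would typically be PySpark on Databricks, but the flow can be sketched with the Python standard library alone; the feed contents, field names, and aggregation below are hypothetical, chosen only to illustrate the three stages.

```python
import csv
import io
import json

# Hypothetical raw extract: in practice this would arrive from a source
# system (landing-zone files, a Kafka topic, a JDBC pull, etc.).
RAW_CSV = """\
order_id,region,amount
1,south,120.50
2,north,75.00
3,south,310.25
"""

def extract(raw: str) -> list:
    """Extract: parse the raw feed into records."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(records: list) -> dict:
    """Transform: aggregate order amounts per region."""
    totals = {}
    for rec in records:
        totals[rec["region"]] = totals.get(rec["region"], 0.0) + float(rec["amount"])
    return totals

def load(totals: dict) -> str:
    """Load: serialize the aggregate; a real pipeline writes to a sink table."""
    return json.dumps(totals, sort_keys=True)

print(load(transform(extract(RAW_CSV))))  # → {"north": 75.0, "south": 430.75}
```

In a Spark pipeline the same three stages become `spark.read`, DataFrame transformations, and `DataFrame.write`; the point here is only the separation of stages, which keeps each step independently testable.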
Experience level – 5 to 6 years
Technical Skills Required:
In-depth knowledge of and experience with big data technologies (Spark and Python- or Java-based applications) and cloud platforms (Azure).
Good understanding of and experience with performance tuning for complex software projects, particularly large-scale, low-latency systems.
Experience with data flows.
Exposure to NoSQL databases such as MongoDB, or to PostgreSQL.
Experience reading and troubleshooting Java programs.
Able to understand MapReduce/Tez/Hive code and help convert it to Spark.
Excellent communication skills.
Ability to work in a fast-paced, team-oriented environment.
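Most Hive-to-Spark conversions amount to mapping SQL aggregations onto DataFrame operations while preserving the result semantics. As a sketch, the HiveQL in the comment below and the plain-Python function compute the same per-key aggregation; the table and column names are made up, and the PySpark shape shown in the comment is a rough equivalent, not code from this role.

```python
# Hypothetical HiveQL being converted:
#
#   SELECT region, COUNT(*) AS cnt, SUM(amount) AS total
#   FROM sales
#   GROUP BY region;
#
# A rough PySpark equivalent would be:
#
#   spark.table("sales").groupBy("region") \
#        .agg(F.count("*").alias("cnt"), F.sum("amount").alias("total"))
#
# The aggregation semantics that the conversion must preserve can be
# checked in plain Python against a small sample:

from collections import defaultdict

def group_by_region(rows):
    """COUNT(*) and SUM(amount) per region, like the GROUP BY above."""
    acc = defaultdict(lambda: [0, 0.0])  # region -> [cnt, total]
    for region, amount in rows:
        acc[region][0] += 1
        acc[region][1] += amount
    return {r: (c, t) for r, (c, t) in acc.items()}

sample = [("south", 10.0), ("north", 5.0), ("south", 2.5)]
print(group_by_region(sample))
```

Validating the converted Spark job against a reference like this, on a small fixed sample, is a common way to gain confidence that the rewrite is behavior-preserving.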
Mandatory Skills:
Unix/Linux shell scripting
Experience with the Hadoop ecosystem, Spark, Scala/Python/Java, NiFi, and cloud (Azure)
Databricks and Azure certifications
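Shell scripting in this kind of role typically wraps the ingestion jobs: staging files, validating feeds, and reconciling row counts. A minimal sketch (the feed path, layout, and check are hypothetical):

```shell
# Validate a delimited feed file before handing it to a Spark job.
set -eu

FEED=/tmp/sample_feed.csv
printf 'id,region,amount\n1,south,120.50\n2,north,75.00\n' > "$FEED"

# Fail fast if the feed is missing or empty.
[ -s "$FEED" ] || { echo "feed missing or empty" >&2; exit 1; }

# Count data rows (excluding the header) so the load can be reconciled later.
ROWS=$(( $(wc -l < "$FEED") - 1 ))
echo "rows=$ROWS"
```

A script like this would usually run as a pre-step in a scheduler (cron, Airflow, or an Azure Data Factory custom activity), with the row count compared against the loaded table afterwards.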
This post is listed under Technology.
Disclaimer: Hugeshout publishes the latest job information only and is not responsible for any errors. Users must research a company on their own before joining it.