Senior Data Engineer Job Vacancy at Randstad, Bengaluru, Karnataka – Updated today
Are you looking for a new job or for better opportunities? We have a new job opening.
Full details:
Company Name: Randstad
Location: Bengaluru, Karnataka
Position: Senior Data Engineer
Job Description: Summary
Randstad India
Job type: Permanent
Specialism: Information Technology
Reference number: JPC – 65543
Job details
Role: Senior Data Engineer
Programming languages: SQL, Python, Go, R, Scala, Java
Deriving warehousing solutions and using ETL (Extract, Transform, Load) tools
Understanding machine learning algorithms; writing Spark functions to clean raw data
Databases and cache: SQL / NoSQL DBs
Data API integration
Duties:
You will guide the data architecture strategy to design, build and maintain production-grade data pipelines
You will work with backend engineers to build APIs and pipelines that serve real-time analytics and predictive models used by cloud-based applications
You will create and maintain the core cloud applications that analyse large real-time and stored data sets, implement appropriate optimisations, and debug, triage, prioritise and fix issues quickly
You will prepare and implement tooling for ETL (extract, transform and load) procedures and architectures
You will develop and maintain data models for real-time time-series data and create dashboards and reports (using Tableau/Power BI/QlikView) to provide performance data and insights
You will collaborate with backend developers to build seamless interfaces with multiple systems and to maintain and improve the CI/CD pipeline
You will demonstrate that your code and solutions work, with 'show and tells' and documented, repeatable tests
You will set up monitoring and tune data loads and queries, ensuring the best possible performance, quality and responsiveness
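The ETL duties above can be sketched, in miniature, as a plain-Python extract/transform/load step (the role itself uses Spark and warehouse tooling; the sample data, field names and cleaning rules here are purely illustrative assumptions):

```python
import csv
import io

# Hypothetical raw input: one row is missing its id, one has a bad date,
# one is a duplicate with an unnormalised country code.
RAW_CSV = """user_id,signup_date,country
1,2023-01-05,IN
,2023-01-06,US
3,not-a-date,IN
3,2023-01-07,in
"""

def extract(text):
    """Extract: parse raw CSV text into a list of dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop rows missing an id or with a malformed date,
    normalise country codes, and de-duplicate on user_id."""
    seen, clean = set(), []
    for row in rows:
        if not row["user_id"]:
            continue  # missing key
        year, _, _ = row["signup_date"].partition("-")
        if not year.isdigit():
            continue  # malformed date
        if row["user_id"] in seen:
            continue  # duplicate
        seen.add(row["user_id"])
        clean.append({**row, "country": row["country"].upper()})
    return clean

def load(rows):
    """Load: return the rows; a real pipeline would write to a warehouse."""
    return rows

result = load(transform(extract(RAW_CSV)))
print(result)  # two rows survive: user_id 1 and user_id 3
```

In a Spark pipeline the same three stages would typically become a `spark.read`, a chain of DataFrame filters, and a writer to the target store.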
Experience: 9 years
Skills:
Python
SQL
NoSQL
PySpark
Spark
Data engineering
Data modelling
Qualifications:
B.E/B.Tech
MCA
BCA
MS/M.Sc (Science)
This post is listed under Software Development.
Disclaimer: Hugeshout publishes the latest job information only and is not responsible for any errors. Users must do their own research before joining any company.