Data Engineer Job Vacancy in NatWest Group, Gurgaon, Haryana

Are you looking for a new job or for better opportunities? We have a new job opening.

Full details:
Company name: NatWest Group
Location: Gurgaon, Haryana
Position: Data Engineer

Job Description: Our people work differently depending on their jobs and needs. From home working to job sharing, visit the remote and flexible working page on our website to find out more.
This role is based in India and as such all normal working days must be carried out in India.
Join us as a Data Engineer
This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences.
You'll be simplifying the bank by developing innovative data-driven solutions, aspiring to be commercially successful through insight, and keeping our customers' and the bank's data safe and secure.
By participating actively in the data engineering community, you'll deliver opportunities to support our strategic direction while building your network across the bank.
What you’ll do
We’ll look to you to lead and inspire a team of data engineers and drive value for the customer through modelling, sourcing and data transformation. You’ll be working closely with core technology and architecture teams to deliver strategic data solutions, while driving Agile and DevOps adoption in the delivery of data engineering.
We’ll also expect you to be:
Delivering the automation of data engineering pipelines through the removal of manual stages
Developing and sharing your knowledge of the bank’s data structures and metrics, advocating change where needed for product development
Developing a strategy for streaming data ingestion and transformations
Educating and embedding new data techniques into the business through role modelling, training and experiment design oversight
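The responsibilities above centre on replacing manual stages with automated pipeline steps that include their own quality gates. As a minimal illustrative sketch (all function and field names here are hypothetical, not NatWest code), each manual stage becomes a plain, testable function chained into a pipeline:

```python
# Minimal sketch of an automated pipeline: each formerly manual stage is a
# plain function, chained and checked automatically. All names are
# hypothetical and purely illustrative.

def extract(rows):
    """Source stage: drop records with no amount recorded."""
    return [r for r in rows if r.get("amount") is not None]

def transform(rows):
    """Model stage: normalise currency amounts to pence."""
    return [{**r, "amount_pence": round(r["amount"] * 100)} for r in rows]

def quality_check(rows):
    """Automated data-quality gate replacing a manual review step."""
    assert all(r["amount_pence"] >= 0 for r in rows), "negative amount"
    return rows

def run_pipeline(rows):
    """Run every stage in order; any failed check stops the run."""
    for stage in (extract, transform, quality_check):
        rows = stage(rows)
    return rows

if __name__ == "__main__":
    raw = [{"id": 1, "amount": 10.5}, {"id": 2, "amount": None}]
    print(run_pipeline(raw))
    # → [{'id': 1, 'amount': 10.5, 'amount_pence': 1050}]
```

Because each stage is an ordinary function, the same quality checks can run in CI rather than as a manual sign-off.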
The skills you’ll need
To be successful in this role, you'll need to be an expert-level programmer and data engineer with a qualification in Computer Science or Software Engineering. You'll also need a strong understanding of data usage and dependencies across wider teams and the end customer, as well as extensive experience in extracting value and features from large-scale data.
You’ll also demonstrate:
Deep exposure to Kubernetes, Docker and Containerisation
Design and architecture experience with scalable solutions serving a multi-million customer base
Strong technical knowledge of PySpark, Python and Cloud data warehousing
Proven architecture experience with NoSQL databases and unstructured data management
Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling and data wrangling
Extensive experience using RDBMS, ETL pipelines, Hadoop and SQL
A good understanding of modern code development practices
Good critical thinking and proven problem solving abilities
It would be ideal if you have experience using Oracle, Unix scripting, Java, cloud, APIs and Kafka, as well as expertise with orchestration tools such as Airflow.

This post is listed under Technology.
Disclaimer: Hugeshout publishes the latest job information only and is not responsible for any errors. Users must do their own research before joining any company.
