Data Engineer – ClienTech Job Vacancy at McKinsey & Company, Gurgaon, Haryana
Full Details:
Company Name: McKinsey & Company
Location: Gurgaon, Haryana
Position: Data Engineer – ClienTech
Job Description:
Who You’ll Work With
You will be based in our Gurugram office as part of the broader McKinsey technology team (ClienTech).
McKinsey’s technology team fosters innovation driven by analytics, design thinking, mobile and social by developing new products and services and integrating them into our client work. It is helping to shift our model toward asset-based consulting, and it is a foundation for, and expands our investment in, our entrepreneurial culture. Through innovative software-as-a-service solutions, strategic acquisitions, and a vibrant ecosystem of alliances, we are redefining what it means to work with McKinsey. As one of the fastest-growing parts of the firm, McKinsey’s technology team has more than 1,500 dedicated professionals (including more than 800 analysts and data scientists), and we are hiring more mathematicians, data scientists, designers, software engineers, product managers, client development managers and general managers.
You will collaborate with the product and delivery team on the definition and requirements of analytics products within an agile framework.
What You’ll Do
You’ll provide data engineering expertise in our client service.
In this role, you will be responsible for coding and testing tools and assets that deliver high-quality analytics to our customers. You will transform, filter and aggregate raw data into concise, accurate and focused data marts or client-specific data models. You will also be responsible for data-intensive ad-hoc analytics, including novel analyses for which existing tools are not sufficiently push-button.
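The transformation-and-aggregation work described above can be sketched in a few lines of pandas. This is a minimal illustration only: the client names, columns and filtering rule are hypothetical, not taken from the posting.

```python
import pandas as pd

# Hypothetical raw transaction records (names and columns are illustrative).
raw = pd.DataFrame({
    "client": ["acme", "acme", "globex", "globex", "acme"],
    "region": ["north", "north", "south", "south", "north"],
    "amount": [120.0, -15.0, 300.0, 80.0, 45.0],
})

# Filter out invalid rows (here, negative amounts such as refunds), then
# aggregate the cleaned data into a concise client-level data mart.
clean = raw[raw["amount"] > 0]
mart = (
    clean.groupby(["client", "region"], as_index=False)
         .agg(total_amount=("amount", "sum"),
              n_transactions=("amount", "count"))
)
print(mart)
```

In practice the same filter/aggregate pattern would run against a warehouse table via SQL rather than an in-memory frame, but the shape of the work is the same.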
You’ll be an integral part of our team with opportunities for coaching and mentoring from your senior colleagues.
Qualifications
Bachelor’s degree in engineering, computer science or an equivalent field
3+ years of experience in business intelligence, application development, database development, ETL and/or data analysis, with extensive SQL knowledge
Experience with cloud infrastructure such as AWS, Azure or Google Cloud
Experience with container technologies such as Docker/Kubernetes is a plus
Experience building data pipelines and workflows from the ground up, including defining exception-handling strategies
Experience storing and assembling large, complex data sets in various formats such as CSV, JSON, Avro and Parquet
Hands-on coding and application development experience with programming languages such as Python, Shell or Java
Familiarity with R, Scala or JavaScript is a big plus
Experience with relational SQL and NoSQL databases, and with tools such as SnapLogic, Alteryx, Tableau and Power BI
Good understanding of REST APIs, data modeling, data warehousing, data lakes and big-data concepts
Ability to proactively identify and advocate for improvements to technology and engineering methods
Demonstrated willingness and ability to engage with teams and collectively solve complex problems
Ability to effectively communicate business benefits and implications of technology initiatives to non-technical and more senior colleagues
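One qualification above asks for pipelines built from the ground up with defined exception-handling strategies. Below is a minimal sketch of what such a strategy can look like: malformed records are quarantined with their error, rather than failing the whole run. The function names and sample records are illustrative assumptions, not part of the posting.

```python
import json

def parse_record(line: str) -> dict:
    """Parse one JSON-lines record; raises ValueError on malformed input."""
    return json.loads(line)

def run_pipeline(lines):
    """Process records, quarantining bad ones instead of aborting."""
    good, quarantined = [], []
    for line in lines:
        try:
            rec = parse_record(line)
            if "id" not in rec:
                raise KeyError("missing 'id'")
            good.append(rec)
        except (ValueError, KeyError) as exc:
            # Exception-handling strategy: record the bad input and the
            # reason, then continue with the remaining records.
            quarantined.append({"raw": line, "error": str(exc)})
    return good, quarantined

good, bad = run_pipeline(['{"id": 1}', 'not json', '{"name": "x"}'])
print(len(good), len(bad))  # prints "1 2"
```

A real pipeline would add retries, dead-letter storage and alerting on the quarantine rate, but the core decision, fail the record rather than the run, is the same.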
Disclaimer: Hugeshout publishes the latest job information only and is in no way responsible for any errors. Users must do their own research before joining any company.