Data Engineer Job Vacancy at General Mills, Mumbai, Maharashtra

Are you looking for a new job or for better opportunities?
We have a new job opening for you.

Full Details :
Company Name : General Mills
Location : Mumbai, Maharashtra
Position : Data Engineer

Job Description :

India is among the top ten priority markets for General Mills and hosts our Global Shared Services Centre, the shared-services arm of General Mills Inc. that supports its operations worldwide. With over 1,300 employees in Mumbai, the centre has capabilities in Supply Chain, Finance, HR, Digital and Technology, Sales Capabilities, Consumer Insights, ITQ (R&D and Quality), and Enterprise Business Services. Learning and capacity-building are key ingredients of our success.

Job Overview
The Enterprise Data Development team is responsible for designing and architecting solutions that integrate and transform business data into the Data Lake, delivering the data layer for the enterprise using big data technologies such as Hadoop. We design solutions to meet the growing need to integrate internal and external information with existing sources, and we research, implement, and leverage new technologies to deliver more actionable insights to the enterprise. We build end-to-end solutions that combine process, technology landscapes, and business information from the core enterprise data sources that form our corporate information factory.
This position will develop solutions for the Enterprise Data Lake and Data Warehouse. You will be responsible for developing data lake solutions for business intelligence and data mining.
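
As a rough illustration of the kind of data lake pipeline work this role involves, here is a minimal PySpark sketch that ingests a raw extract into a partitioned lake table. The source path, schema handling, and table name are hypothetical placeholders, not actual General Mills systems:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: land a raw sales extract in the data lake.
# Paths and table names are illustrative only.
spark = (
    SparkSession.builder
    .appName("sales_ingest_example")
    .enableHiveSupport()  # allows writing managed Hive tables
    .getOrCreate()
)

raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("/landing/sales/2024/*.csv")  # hypothetical landing zone
)

cleaned = (
    raw.dropDuplicates()
       .withColumn("load_date", F.current_date())  # audit column
)

# Write partitioned Parquet so downstream BI queries can prune by date.
(
    cleaned.write
    .mode("append")
    .partitionBy("load_date")
    .format("parquet")
    .saveAsTable("datalake.sales_daily")  # hypothetical lake table
)
```

Partitioning by load date is one common design choice for data lake tables, because it lets business intelligence and data mining queries skip irrelevant partitions.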
Job Responsibilities

70% of time:
Create, code, and support a variety of Hadoop, ETL, and SQL solutions
Apply agile techniques and methods
Work effectively in a distributed, global team environment
Work on pipelines of moderate scope and complexity
Communicate effectively on technical and business topics, with good influencing skills
Analyze existing processes and user development requirements to ensure maximum efficiency
Participate in the implementation and deployment of emerging tools and processes in the big data space
Turn information into insight by consulting with architects, solution managers, and analysts to understand business needs and deliver solutions

20% of time:
Support existing data warehouses and related jobs
Job scheduling experience (Tidal, Airflow, Linux); a minimal Airflow sketch follows this list

10% of time:
Proactively research up-to-date technologies and techniques for development
Bring an automation mindset and a continuous improvement mentality to streamline processes and eliminate waste
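
To illustrate the job scheduling item above, here is a minimal Apache Airflow DAG that runs a nightly warehouse load. The dag_id, schedule, and script path are hypothetical placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Minimal example DAG: schedule a nightly data warehouse load.
# The dag_id and script path are hypothetical placeholders.
with DAG(
    dag_id="nightly_warehouse_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # run once per day
    catchup=False,               # do not backfill past runs
) as dag:
    load_warehouse = BashOperator(
        task_id="run_etl_script",
        bash_command="python /opt/etl/load_warehouse.py",  # hypothetical script
    )
```

Tidal and Linux cron can schedule the same kind of job; Airflow is shown here only because it is one of the tools named in the posting.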
Desired Profile
Education:
Minimum Degree Requirement: Bachelor's
Preferred Degree Requirement: Bachelor's
Preferred Major Area of Study: Engineering
Experience:
Minimum years of Hadoop experience required: 2 years
Preferred years of Data Lake/Data warehouse experience: 2-4+ years
Total experience required: 4-5 years
Specific Job Experience or Skills Needed

Skill : Level (scale: Beginner, Intermediate, Advanced, Expert)
HDFS, MapReduce : Beginner
Hive, Impala & Kudu : Beginner
Python : Beginner
SQL, PL/SQL : Proficient
Data Warehousing Concepts : Beginner
Other Competencies:

Demonstrates learning agility and inquisitiveness towards the latest technology
Seeks to learn new skills via experienced team members, documented processes, and formal training
Ability to deliver projects with minimal supervision
Delivers assigned work within the given parameters of time and quality
Self-motivated team player with the ability to overcome challenges and achieve desired results

This post is listed under Technology.
Disclaimer : Hugeshout publishes the latest job information only and is not responsible for any errors. Users must do their own research before joining any company.
