Bigdata Hadoop Module Lead Noida/Chennai/Bangalore Job Vacancy in Sampoorna Computer People Bengaluru, Karnataka
Are you looking for a new job or for better opportunities?
We have a new job opening.
Full Details:
Company Name : Sampoorna Computer People
Location : Bengaluru, Karnataka
Position :
Job Description : Job Summary
Experience:
4 – 6 Years
Location:
Bangalore, Chennai, Noida
Designation:
Bigdata Hadoop Module Lead Noida/Chennai/Bangalore
Degree:
BE-Comp/IT, BE-Other, BTech-Comp/IT, BTech-Other, MCA
Educational Level:
Graduate/Bachelors
Industrial Type:
IT-Software/Software Services
Functional Area:
IT Software – Application Programming / Maintenance
Key Skills:
Hadoop
Job Post Date:
Wednesday, February 23, 2022
Company Description
One of the top IT consultancy companies in Europe, with revenues in excess of €4.095 billion (2018) and a team of 44,000+ people. Founded in 1968, headquartered in Paris, France, and listed on Euronext. India offices are in Noida, Bangalore, Chennai and Pune.
Our client belongs to the Fortune 50 list and covers a wide range of domains and technologies. The Group's client base is spread across continental Europe and the UK and includes many world leaders and brands of international repute. The Group offers its clients an end-to-end approach based on a well-honed business model. Its ambition is to allow clients to focus on transformation projects that will give them a competitive edge and help them drive growth. The Group also pursues the worldwide deployment of its activities in both application integration and business process management through its subsidiary, a leading provider of Business Interaction Networks with a complete range of solutions and services.
The company's primary business areas include consulting services; systems integration and solutions; integration of ERP solutions; implementation of application solutions; subcontracting solutions providing technical support to users; and application maintenance, outsourcing services and operation of professional processes.
Job Description
Data Engineer – General
Extensive experience with Big Data / Hadoop frameworks.
Good understanding of the Spark framework with Scala.
Good working experience with Hive, Impala, Pig and MapReduce.
Sound knowledge of integrating Spark with Hive, Kafka, AWS and other structured data sources, and of processing data from them.
Good understanding and knowledge of NoSQL databases.
Good knowledge of the MapReduce framework and HDFS architecture.
Good understanding and knowledge of Amazon Web Services as a cloud computing platform.
Capable of processing large sets of structured and semi-structured data and supporting the systems application architecture.
Efficient in building Hive, Pig and MapReduce scripts.
Sound knowledge of Hive and HBase integration.
Experience with Agile methodologies.
Proficient in English, with the ability to lead stakeholder conversations.
This post is listed under Technology.
Disclaimer: Hugeshout publishes the latest job information only and is in no way responsible for any errors. Users must do their own research before joining any company.