Hadoop Technical Consultant Job Vacancy in Vupico Hyderabad, Telangana – Updated today
Are you looking for a new job or for better opportunities?
We have a new job opening.
Full details:
Company Name : Vupico
Location : Hyderabad, Telangana
Position : Hadoop Technical Consultant
Department : Hadoop
Job posted on : July 2020
Employment type : Permanent/Contract
Experience Range : 3+ years
About VUPICO
VUPICO, a leading enterprise business and technology solutions partner for small, medium, and large global customers, is seeking experienced professionals who thrive on challenge and want to make a real difference. With an environment of extraordinary innovation and unprecedented growth, this is an exciting opportunity for a self-starter who enjoys working in a quality-oriented, fast-paced team environment.
Vupico is a growing MNC operating from multiple locations across the globe (Japan, India, Singapore, Australia, and the USA).
About Role
Role: Hadoop Technical Consultant (Mid-level)
Skillset: Hadoop, Hive, Spark, Python, HBase, Kafka, Pig, Solr, NiFi, SVN, Git, data landscaping, and algorithms
Experience: 3+ years
Work Type: Permanent or Contract (both available)
Eligibility: Any Graduate/Post Graduate
About Responsibilities (JD)
Minimum 3 years of relevant experience in Big Data Technologies
Strong hands-on experience in Hadoop development and implementation
Knowledge of data pre-processing per business requirements using streaming APIs, Pig, Hive, and user-defined functions (UDFs)
Strong programming knowledge of Spark, Scala, and Python
Must have working knowledge of at least one NoSQL database (Cassandra, MongoDB, HBase, Redis, etc.); graph database knowledge is an added advantage
Hands-on experience managing Hadoop job flows using Oozie
Reviewing and managing Hadoop log files
Working experience with different file formats such as Parquet, Avro, SequenceFile, and ORC to speed up analytics
Writing high-performance, reliable, and maintainable code in Java, Python, and Scala
Exposure to loading data from disparate, complex data sets
Designing, building, installing, configuring and supporting Hadoop
Assess the quality of datasets for a Hadoop data lake
Fine-tune Hadoop applications for high performance and throughput
Troubleshoot and debug Hadoop ecosystem runtime issues
Maintain the privacy and security of Hadoop clusters
Utilize a diverse array of technologies and tools as needed to deliver insights, such as Python, Scala, and R
Analyze structured/unstructured data and implement algorithms to support analysis using advanced statistical and mathematical methods
Unix/Linux shell scripting is an added advantage
Coordinate with other teams to resolve Hadoop issues, such as system, functional, and technical issues
Excellent written and verbal communication skills
Comfortable with client interfacing
Excellent documentation skills
It is an added advantage if you have any of the below:
Hadoop Developer certification (Cloudera/Hortonworks/MapR)
Spark Certification (Databricks/MapR/Cloudera/Hortonworks)
Exposure to integration of Spark with Hadoop ecosystem components and NoSQL technologies
Exposure to visualization tools such as Tableau or QlikView
Japanese language (preferred) or any other foreign language (if any, please specify)
Should be a very good mentor and have strong leadership skills
Should have a good attitude and good work habits, and be loyal to the organization
This post is listed under Technology.
Disclaimer: Hugeshout works to publish the latest job information only and is in no way responsible for any errors. Users must do their own research before joining any company.