Hadoop Architect Job Vacancy at Vupico, Hyderabad, Telangana


Job Description
Location: Hyderabad, India
Department: Customer Success
Job posted on: July 2020
Employment type: Permanent
Experience range: 6+ years
About VUPICO
VUPICO, a leading enterprise business and technology solutions partner for small, medium, and large global customers, is seeking experienced professionals who thrive on challenge and want to make a real difference. With an environment of extraordinary innovation and unprecedented growth, this is an exciting opportunity for a self-starter who enjoys working in a quality-oriented, fast-paced team environment.

Vupico is a growing MNC, operating from multiple locations across the globe (Japan, India, Singapore, Australia, and the USA).
About Role
Role: Hadoop Architect

Skillset: Hadoop, Hive, Impala, Sqoop, Kafka, NiFi, Oozie, Spark, Python, Scala, Agile, AWS, Redshift, DynamoDB, HBase, Phoenix, Informatica Big Data Edition, Any ETL, Tableau, DevOps

Experience: 6 to 10 Years

Work Type: Permanent or Contract (both available)

Eligibility: Any Graduate/Post Graduate (preferably in Computer Science or Information Technology)
About Responsibilities (JD)
At least 10 years of total experience, including at least 5 years in the Hadoop technology stack
Provide technical guidance for real time data pipeline and analytic frameworks
Work on enhancing and maintaining existing analytic framework and real time data pipelines
Mentor junior team members across all levels of the big data technology stack
Expertise in business requirement gathering, analysis, and conceptualizing high-level architectural frameworks and designs
Familiarity with the values, principles, practices, methods, and tools used in DevOps; some experience designing solutions with reliability, availability, and serviceability (RAS) attributes
Must have experience in designing and architecting large scale distributed applications
Thorough understanding of Cloudera/Hortonworks/MapR distributions of Hadoop
Thorough understanding of Hadoop ecosystem components like Hive, Pig, Oozie, Flume, YARN, Zookeeper etc
Thorough understanding of NoSQL databases like HBase, MongoDB, Cassandra, DynamoDB, etc
Thorough understanding of Solution architecting & Estimation process
Exemplary general IT knowledge (applications development, testing, deployment, operations, documentation, standards, best practices, security, hardware, networking, OS, RDBMS, middleware, etc)
Strong technical skills on Spark, HBase, Hive, Sqoop, Oozie, Flume, Java, Pig, Python etc
Strong grasp of data warehousing concepts
Experience in security implementation across entire Hadoop stack
Good understanding of streaming technologies and real time analytics
Hands on programming experience with Python/R as well as shell scripts and SQL
Experience with agile or other rapid application development methodologies
Ability to learn quickly in a fast-paced, dynamic team environment
Highly effective communication and collaboration skills
Good experience with distributed systems, large scale non-relational data stores, map-reduce systems, performance tuning, and multi-terabyte data warehouses
Ability to hustle and think analytically
Team Player, Motivated & Career-Driven
Excellent consulting skills, oral and written communication, presentation, and analytical skills
Active involvement in thought leadership: best-practices development, white papers, and/or POVs on big data technology
Comfortable interfacing with clients
Excellent Documentation skills
Self-starter, with a keen interest to grow in Big Data space
Deep understanding of the pros and cons of the designed architecture, including contingency plans in case of failure
Excellent troubleshooting skills to detect problems in areas such as performance, scalability, security, and availability
An added advantage if you have any of the below:
Hadoop Developer certification (Cloudera/Hortonworks/MapR)
Spark Certification (Databricks/MapR/Cloudera/Hortonworks)
Exposure on Integration of Spark with Hadoop ecosystem components and NoSQL technologies
Exposure on visualization tools like Tableau/Qlikview
Japanese language (preferred), or any other foreign language (please specify)
Should be a very good mentor with strong leadership skills
Good attitude and work habits, and loyalty to the organization

Note: A recent Hadoop certification is a nice-to-have; a certification older than three years carries no weight.

Disclaimer: Hugeshout publishes the latest job info only and is not responsible for any errors. Users must research on their own before joining any company.
