Senior Data Engineer Job Vacancy in Verve Group GmbH Pune, Maharashtra – Updated today


Full Details:
Company Name: Verve Group GmbH
Location: Pune, Maharashtra
Position: Senior Data Engineer

Job Description : Smaato is a Verve Group company. Its Digital Advertising Technology Platform gives publishers the controls to deliver seamless, tailored, and engaging experiences for their audiences and advertisers. Advanced targeting capabilities and competitive intelligence help publishers optimize their monetization strategy to create data-driven experiences and reach their full revenue potential. Founded in 2005, Smaato is headquartered in San Francisco, California, with additional offices in Hamburg, New York, Shanghai, and Singapore.

Tasks


The Data Engineering team at Smaato presents exciting challenges in technologies such as big data, distributed computing, and storage. We build reliable, petabyte-scale, high-performance distributed systems using open-source technologies such as Spark, Hadoop, Kafka, and Druid. We work closely with the Apache community, adopting the latest from it. As part of the team, you will work on the application where all threads come together — streaming, processing, and storage: the ad exchange. Our ultra-efficient exchange is capable of processing more than 50 billion ad requests daily. Consequently, every line of code you write matters, as it is likely to be executed several billion times a day. We are one of the biggest AWS users, with a presence in four different regions. If you want your code to make an impact, this is the place to be.

As an engineer in a petabyte-scale streaming ad-tech environment, you will be part of an engineering team that uses Kafka, Spark, Flink, and Druid to build a containerized, highly scalable big-data platform on our hybrid cloud platform, supporting data warehousing, machine learning, and deep learning applications.

Our Data Engineers work on our large-scale analytical databases and the surrounding ingestion pipelines. The job involves constant feature development, performance improvements, and platform stability assurance. The mission of our analytics team is “data-driven decisions at your fingertips”. You own and provide the system that all business decisions will be based on. Precision and high-quality results are essential in this role.

What You’ll Do

Design, develop, deliver and operate scalable, high-performance, real-time data processing software
Be responsible for handling multiple terabytes of data at any given time
Maintain the current platform and shape its evolution by evaluating and selecting new technologies
Closely collaborate with stakeholders to gather requirements and design new product features in a short feedback loop
Interact with Smaato’s UI/UX Engineers to visualize terabytes of data
Contribute to open-source big-data technologies

Requirements

Qualifications

5–9 years of experience with big-data platforms, with a deep understanding of Apache Kafka and/or Apache Druid
Experience in building and maintaining any distributed computing data platforms.
Hands-on experience with a JVM language such as Java, plus Flink or Kafka, is required
Additional nice-to-have experience: Kubernetes, Kinesis or AWS-Stack
Experience with, and a passion for, OLAP/MPP database systems
Demonstrated experience in owning products and driving them end to end, from requirements gathering through development, testing, and deployment to ensuring high availability post-deployment
Ability to contribute to architectural and coding standards and to evolve engineering best practices
Nice to have: open-source committer, contributor, or PMC member in any of these Apache big-data technologies
Good understanding of cloud technologies such as storage and compute, and experience delivering cloud services in an engineering role
Good knowledge of container orchestration: automating deployment, management, scaling, and networking using Kubernetes
You enjoy operating your applications in production and strive to make on-call obsolete: debugging issues in live, production, and test clusters, implementing solutions to restore service with minimal disruption to the business, and performing root cause analysis

If interested, please send your resume to learn more.

This post is listed under Technology.