Principal Software Engineer – Data Engineering Job Vacancy at Algonomy, Bengaluru, Karnataka

Are you looking for a new job or for better opportunities? We have a new job opening for you.

Full Details:
Company Name: Algonomy
Location: Bengaluru, Karnataka
Position: Principal Software Engineer – Data Engineering

Job Description : Designation: Principal Software Engineer – Data Engineering
Experience: 12+ Years
Location: Bangalore
Education: B.E/B.Tech/Masters
At Algonomy, we believe the future of our economy is Algorithmic, where businesses will develop resilient, adaptive and agile decisioning abilities that will constantly test and refine AI-driven actions to create the best personal experience for every individual customer at scale.
We aim to become the algorithmic bridge between consumers and brands/retailers, and to lead our customers through the Algorithmic transformation imperative. The name Algo-nomy signifies an expertise in algorithms. As technology evolves our lives (and our clients’) at hyper-speed, Algonomy stands as a bold, creative and agile brand; these are also the very qualities that every digital-first business needs in order to be successful in the new normal. We are ambitious, we create category-leading solutions in our markets, and we are constantly learning, inventing and adapting to stay ahead of our industry’s needs.
Are you interested in building systems that handle petabytes of retail data while working in an agile and nimble organization? On our Data Platform team, you will work on everything from ingestion to storage to driving insights and analytics on high-velocity streaming data.
We’re looking for engineering leaders who are passionate about technology, love writing code, and like to build large-scale data systems. The ideal candidate will have strong problem-solving, analytical, and decision-making abilities, along with excellent communication and interpersonal skills. Candidates should be self-driven and motivated, with the desire to work in a fast-paced, results-driven agile environment with varied responsibilities. The candidate should be willing to provide technical leadership and mentoring to a small team of highly talented and motivated engineers to deliver these solutions with the highest quality.
Primary responsibilities:
Designing high-performance information architecture and data management to deliver on diverse needs from high-velocity data.
Designing and building systems with Hadoop and Spark (MLlib, GraphX), Flink, Mesos, Marathon, YARN, and Kafka.
Involvement in algorithm design, research, and production scalability.
Designing and implementing data pipelines to support data needs for machine learning (both batch and online), reporting, monitoring, and alerting (Crunch, Cassandra, Hive, Presto, NoSQL databases); see the sketch after this list.
Software engineering in Java and Scala
Automating tests at various levels, including end-to-end integration testing with known synthetic data, unit testing with JUnit, and performance testing and tuning.
Delivering analytics using standard business intelligence tools.
Working with rapid and innovative development methodologies such as Kanban, continuous integration, and daily deployments.
Mentoring and raising the bar by improving the team’s best practices and architecture with deep domain knowledge.
Engaging with Product Management and business stakeholders to create the product roadmap, and owning the technical backlog and roadmap for technology supremacy.
Evangelizing solutions with Professional Services and Customer Success teams to drive adoption.
Driving various organization-wide activities such as hackathons, ideathons, brown-bag sessions, and technical blogs.
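To make the pipeline responsibilities above concrete, here is a minimal, illustrative Scala sketch (not part of the posting itself) of one way a Spark Structured Streaming job might consume a high-velocity Kafka topic and land events for downstream machine learning and reporting. The broker address, topic name, event schema, and storage paths are hypothetical placeholders.

// Illustrative only: consume a retail clickstream topic from Kafka with
// Spark Structured Streaming and persist events as Parquet for batch ML
// and reporting. All names and paths below are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, from_json}
import org.apache.spark.sql.types.{StringType, StructType, TimestampType}

object ClickstreamIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("clickstream-ingest")
      .getOrCreate()

    // Hypothetical event schema for retail clickstream data.
    val schema = new StructType()
      .add("userId", StringType)
      .add("productId", StringType)
      .add("eventType", StringType)
      .add("eventTime", TimestampType)

    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // placeholder broker
      .option("subscribe", "clickstream")               // placeholder topic
      .load()
      .select(from_json(col("value").cast("string"), schema).as("e"))
      .select("e.*")

    // Land raw events as Parquet for downstream batch ML and reporting.
    events.writeStream
      .format("parquet")
      .option("path", "s3a://bucket/clickstream/")        // placeholder path
      .option("checkpointLocation", "s3a://bucket/chk/")  // needed for fault-tolerant sinks
      .start()
      .awaitTermination()
  }
}

A production pipeline of the kind described in the role would layer partitioning, schema evolution handling, monitoring, and alerting on top of a skeleton like this.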

Minimum requirements:
B.Tech/M.Tech in Computer Science Engineering or related fields, with at least 12 years of experience in a related field
At least 4 years of designing and managing systems with petabyte-scale data volumes and high-velocity streaming data
Extensive experience working with big data tools: HDFS/S3, Spark/Flink, Hive, HBase, Kafka, etc.
Hands-on experience in object-oriented or functional programming languages such as Scala, Java, or Python
Hands-on experience coding on distributed computing frameworks such as Spark
Good data analysis, correlation and reasoning skills
Knowledge of and experience working with cloud platforms.
Knowledge of container management frameworks such as Docker and Mesos, and of microservices frameworks for data as a utility
Proficient in data modeling with advanced knowledge of data structures
Additional language skills for scripting and rapid application development

Desired skills and experience:
Knowledge of numerical programming, data science, machine learning, and/or statistics is a strong plus
Familiarity with UNIX (systems skills a plus)
Experience working in a distributed environment and dealing with challenges around scaling and performance
Knowledge of business intelligence and data warehousing is an added advantage

