Senior Data Engineer (IoT) Job Vacancy at SPX FLOW, Pune, Maharashtra – Updated today

Are you looking for a new job or for better opportunities?
We have a new job opening for:

Full Details :
Company Name : SPX FLOW
Location : Pune, Maharashtra
Position : Senior Data Engineer – IoT

Job Description : JOB SUMMARY
An experienced Data Engineer will guide the data architecture strategy for the cloud-based IoT platform. You will be instrumental in building the data pipeline that cleans, transforms, and loads time-series data from its raw format to multiple endpoints and for multiple use cases, and you will define dimensions and facts for given business problems. You will create cloud-based applications using AWS/Azure to run data modelling on real-time time-series data as well as stored data from the data lake.
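For illustration, here is a minimal PySpark sketch of the kind of clean/transform/load step described above. The column names (device_id, ts, value), S3 paths, and value bounds are assumptions for the example, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("iot-timeseries-clean").getOrCreate()

# Read raw time-series data from the data lake (path is illustrative).
raw = spark.read.json("s3://example-data-lake/raw/telemetry/")

clean = (
    raw
    .withColumn("ts", F.to_timestamp("ts"))          # normalise timestamps
    .dropna(subset=["device_id", "ts", "value"])     # drop incomplete records
    .dropDuplicates(["device_id", "ts"])             # de-duplicate readings
    .filter(F.col("value").between(-1e6, 1e6))       # crude outlier guard (assumed bounds)
)

# Load the cleaned data to a curated zone, partitioned for downstream queries.
clean.write.mode("overwrite").partitionBy("device_id").parquet(
    "s3://example-data-lake/curated/telemetry/"
)
```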
PRINCIPAL DUTIES AND RESPONSIBILITIES
You will guide the data architecture strategy to design, build, and maintain production-grade data pipelines (see the streaming sketch after this list)
You will work with backend engineers to build APIs and pipelines that serve up real-time analytics and predictive models used by cloud-based applications
You will create and maintain the core cloud applications that analyse large real-time and stored data sets, implement appropriate optimisation, and debug, triage, prioritise, and fix issues quickly
You will prepare and implement tooling for ETL (extract, transform and load) procedures and architectures
You will develop and maintain data models for real-time time-series data and create dashboards and reports (using Tableau/Power BI/QlikView) to provide performance data and insights
You will collaborate with backend developers to build seamless interfaces with multiple systems and to maintain and improve the CI/CD pipeline
You will demonstrate that your code and solutions work, with 'show and tells' and documented repeatable tests
You will set up monitoring and tuning of data loads and queries, ensuring the best possible performance, quality, and responsiveness
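As referenced in the first duty above, here is a minimal sketch of a fault-tolerant streaming pipeline using Spark Structured Streaming reading from Kafka. The topic name, schema, broker address, and output paths are illustrative assumptions; checkpointing is what makes the stream restartable.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (StructType, StructField, StringType,
                               TimestampType, DoubleType)

spark = SparkSession.builder.appName("iot-stream").getOrCreate()

# Assumed JSON payload schema for the hypothetical "telemetry" topic.
schema = StructType([
    StructField("device_id", StringType()),
    StructField("ts", TimestampType()),
    StructField("value", DoubleType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "telemetry")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# One-minute averages per device, tolerating 5 minutes of late data.
agg = (
    events
    .withWatermark("ts", "5 minutes")
    .groupBy(F.window("ts", "1 minute"), "device_id")
    .agg(F.avg("value").alias("avg_value"))
)

# Checkpointing gives fault tolerance: the query resumes from its last state.
query = (
    agg.writeStream
    .outputMode("append")
    .format("parquet")
    .option("path", "s3://example-data-lake/metrics/")
    .option("checkpointLocation", "s3://example-data-lake/checkpoints/metrics/")
    .start()
)
query.awaitTermination()
```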
KNOWLEDGE, SKILLS & ABILITIES
Programming languages : SQL, Python, Go, R, Scala, Java
Deriving warehousing solutions and using ETL (extract, transform, load) tools
Understanding machine learning algorithms
Writing Spark functions to clean raw data
Databases and cache : SQL/NoSQL databases
Data API integration
Understanding of distributed systems
Understanding of data structures and algorithms
Problem-solving
Good communication and interpersonal skills
Experience of creating a data warehouse and directing the improvement of data management
Strong Python/Java coding skills, including knowledge of Jupyter notebooks and AWS SageMaker
Hands-on experience with tools used for low-latency event-based pipelines, such as Kafka, Flume, Kinesis, Fluentd, etc.
Solid understanding of SQL engineering and how to optimise query performance
Experience with Lambda/SQS/SNS/Step Functions to build and support near-real-time feeds of data into Redshift and other databases (see the Lambda sketch after this list)
Experience with Hadoop for data stores and Apache Spark (or similar tools) for fault-tolerant data stream processing
SQL databases : MySQL, Athena, MSSQL, PostgreSQL
Data lake storage solutions : S3, AWS Glue, Athena
AWS data solutions : Glue, EMR, Athena, Redshift, QuickSight, Lambda, EC2, RDS
Experience of Agile development, CI/CD pipelines and DevOps, Git, and version control best practices
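As referenced in the Lambda/SQS item above, here is a minimal sketch of a near-real-time feed into Redshift: an SQS-triggered Lambda that issues a COPY via the Redshift Data API. The cluster name, database, IAM role ARN, table, and message format are all assumptions for illustration.

```python
import json
import boto3

redshift = boto3.client("redshift-data")

def handler(event, context):
    # Each SQS record is assumed to carry the bucket/key of a newly landed file.
    for record in event["Records"]:
        body = json.loads(record["body"])
        s3_path = f"s3://{body['bucket']}/{body['key']}"

        # Issue a COPY so Redshift ingests the file directly from S3.
        redshift.execute_statement(
            ClusterIdentifier="example-cluster",
            Database="analytics",
            DbUser="loader",
            Sql=(
                "COPY telemetry FROM '" + s3_path + "' "
                "IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load' "
                "FORMAT AS PARQUET;"
            ),
        )
```

Using the Data API keeps the Lambda short-lived, since it does not hold a database connection open while the COPY runs.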

This post is listed under Software Development.
Disclaimer : Hugeshout works to publish the latest job info only and is not responsible for any errors. Users must do their own research before joining any company.
