Data Engineer Job Vacancy at Chubb INA Holdings Inc., Bengaluru, Karnataka


Full Details:
Company Name: Chubb INA Holdings Inc.
Location: Bengaluru, Karnataka
Position: Data Engineer

Job Description

Job Requirements

Apply data engineering skills within and beyond the evolving Chubb information ecosystem for discovery, analytics, and data management.
Work with data science team to deploy Machine Learning Models.
Use data-wrangling techniques to convert data from one "raw" form into another, including data aggregation, data visualization, and preparing data for statistical model training.
Work with various relational and non-relational data sources with the target being Azure based SQL Data Warehouse & Cosmos DB repositories.
Clean, unify and organize messy and complex data sets for easy access and analysis.
Create different levels of abstractions of data depending on analytics needs.
Hands-on data preparation experience with the Azure technology stack, especially Azure Databricks, is strongly preferred.
Implement discovery solutions for high speed data ingestion.
Work closely with the Data Science team to perform complex analytics and data preparation tasks.
Work with the Sr. Data Engineers on the team to develop APIs.
Source data from multiple applications, then profile, cleanse, and conform it to create master data sets for analytics use.
Utilize state-of-the-art methods for data mining, especially of unstructured data.
Experience with Complex Data Parsing (Big Data Parser) and Natural Language Processing (NLP) Transforms on Azure a plus.
Design solutions for managing highly complex business rules within the Azure ecosystem.
Performance-tune data loads.
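The cleaning, unification, and aggregation duties listed above can be sketched in plain Python. In practice this work would run on PySpark and Azure Databricks as the posting describes; the `clean_records` and `aggregate_by_region` helpers and the field names below are hypothetical, standard-library-only illustrations of the idea.

```python
from collections import defaultdict

def clean_records(raw_records):
    """Clean and unify messy 'raw' records: normalize keys and strings,
    drop rows missing required fields, and coerce amounts to float."""
    cleaned = []
    for rec in raw_records:
        # Normalize inconsistent key casing/whitespace across sources.
        rec = {k.strip().lower(): v for k, v in rec.items()}
        if not rec.get("region") or rec.get("amount") in (None, ""):
            continue  # drop incomplete rows
        cleaned.append({
            "region": str(rec["region"]).strip().title(),
            "amount": float(rec["amount"]),
        })
    return cleaned

def aggregate_by_region(records):
    """One level of abstraction for analytics: total amount per region."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec["region"]] += rec["amount"]
    return dict(totals)

raw = [
    {" Region ": "bengaluru", "Amount": "120.5"},
    {"region": "bengaluru ", "amount": 79.5},
    {"region": "", "amount": 10},          # missing region -> dropped
    {"region": "mumbai", "amount": None},  # missing amount -> dropped
]
print(aggregate_by_region(clean_records(raw)))  # {'Bengaluru': 200.0}
```

The same shape of pipeline (normalize, filter, conform, aggregate) maps directly onto PySpark DataFrame transformations when the data no longer fits on one machine.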

Work Experience

Knowledge of Python and PySpark is an absolute must.
Knowledge of Azure and the Hadoop 2.0 ecosystem (HDFS, MapReduce, Hive, Pig, Sqoop, Mahout, Spark) is important for this role.
Experience with Web Scraping frameworks (Scrapy or Beautiful Soup or similar).
Extensive experience working with Data APIs (Working with REST endpoints and/or SOAP).
Significant programming experience (with above technologies as well as Java, R and Python on Linux).
Knowledge of a commercial Hadoop distribution such as Hortonworks, Cloudera, or MapR.
Excellent working knowledge of relational databases such as MySQL and Oracle.
Experience with Complex Data Parsing (Big Data Parser) is a must; should have worked with XML, JSON, and other custom complex data formats.
Natural Language Processing (NLP) skills with experience in Apache Solr, Python a plus.
Knowledge of High-Speed Data Ingestion, Real-Time Data Collection and Streaming is a plus.
Bachelor’s degree in Computer Science or a related field.
3–5 years of solid experience in Big Data technologies is a must.
Microsoft Azure certifications a huge plus.
Data visualization tool experience a plus.
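The XML/JSON parsing requirement above amounts to normalizing documents in different formats into one record shape. The sketch below uses only the standard library; the `parse_policy_*` functions and field names are invented for illustration, not Chubb's actual schema.

```python
import json
import xml.etree.ElementTree as ET

def parse_policy_json(text):
    """Parse a JSON policy document into a flat record."""
    doc = json.loads(text)
    return {"policy_id": doc["policyId"], "premium": float(doc["premium"])}

def parse_policy_xml(text):
    """Parse the equivalent XML shape into the same flat record."""
    root = ET.fromstring(text)
    return {
        "policy_id": root.findtext("policyId"),
        "premium": float(root.findtext("premium")),
    }

json_src = '{"policyId": "P-100", "premium": "250.0"}'
xml_src = "<policy><policyId>P-200</policyId><premium>300.5</premium></policy>"

records = [parse_policy_json(json_src), parse_policy_xml(xml_src)]
print(records)
# [{'policy_id': 'P-100', 'premium': 250.0}, {'policy_id': 'P-200', 'premium': 300.5}]
```

Conforming heterogeneous sources to one target schema like this is the usual first step before loading into a warehouse such as Azure SQL Data Warehouse or Cosmos DB.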

This post is listed under Software Development.
Disclaimer: Hugeshout publishes the latest job information only and is not responsible for any errors. Users must do their own research before joining any company.
