Analytics Engineer Job Vacancy at Apollo Neuroscience (Remote)

Full Details:
Company Name: Apollo Neuroscience
Location: Remote
Position: Analytics Engineer
Job Description: Who are we?
Apollo Neuroscience has developed a first-of-its-kind wearable that is scientifically validated to improve people’s resilience to stress. A spinout of research between the University of Pittsburgh and the University of Pittsburgh Medical Center, our small company is already helping thousands of people all over the world.
As we grow, building a robust data infrastructure is a top priority, both for business intelligence and for product research and development. We’re in a unique position to help people better understand their own mental and physical health while simultaneously using that information to offer them direct therapeutic help through our revolutionary device.
Most of our team is split between California and Pittsburgh, PA, but like nearly everyone else, we operate remotely these days, and this position can be 100% remote. From the beginning, we’ve always had remote team members, and we’re always thinking about how we can improve our processes to make remote work as successful as possible.
Who are you?
As an Analytics Engineer, you will develop and support Apollo’s modern data stack. You’ll be taking existing data streams from our website, mobile app, customer service software, and the device itself and transforming those into information which can be used across the company.
At Apollo, our data platform is considered an internal product whose customers are found throughout the enterprise. As owner of the data product, you will investigate what these internal customers need and build a plan to deliver it.
As our business continues to grow, there will be opportunities to scale this system and to incorporate machine learning into business operations, customer retention, and the product itself, personalizing the therapy and its delivery.
The ideal candidate will show a strong passion for clean, simple yet flexible data models and architectures. They will be driven by a desire to ingest data from the wild and transform it into accurate, valid, and timely sources of intelligence.
In your day-to-day, you will work closely with representatives of each department from product, marketing, customer service, finance, and operations to create an enterprise data management system capable of generating meaningful data for each department as well as providing an overall picture for our Chief Operations Officer and Chief Executive Officer.
Responsibilities include, but are not limited to:
Work with the Strategy, Product, Project Management, UI/UX, and Technology disciplines to define the data requirements and technology stack for projects
Create and maintain optimal data pipeline architecture
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of sources, including Azure, Google Cloud, and AWS
Serve as a subject-matter expert, providing consulting to stakeholders (clients, product owners, and data, strategy, and design teams) on data-related technical questions
Keep up to date with the latest data practices and trends to help influence the direction of the Data & Analytics discipline
Document the project and program data architecture and environment in order to maintain a current, accurate view of the larger data picture: an environment that supports a single version of the truth and can scale to support future analytical needs
Your qualifications:
5+ years of SQL experience, including query authoring and working familiarity with a variety of relational databases
3+ years of experience building and optimizing ‘big data’ pipelines, architectures, and data sets
3+ years of working knowledge of message queuing, stream processing, and highly scalable ‘big data’ stores (Google Cloud or AWS experience is a bonus)
3+ years of experience with big data tools: Hadoop, Spark, Kafka, Azure big data services, etc.
3+ years of experience with AWS cloud services: EC2, EMR, RDS, Redshift
3+ years of experience with object-oriented/object-functional scripting languages: Python, Java, C++, Scala, etc.
Ability to manage multiple concurrent projects/initiatives
A bachelor’s degree in Computer Science or Engineering; a master’s degree is an asset
Solid communication skills and a sense of customer service


