Data Engineer

Company: Sharecare Health Data Services
Location: San Diego, CA 92111

Sharecare is the digital health company that helps people manage all their health in one place. The Sharecare platform provides each person - no matter where they are in their health journey - with a comprehensive and personalized health profile, where they can dynamically and easily connect to the information, evidence-based programs, and health professionals they need to live their healthiest, happiest, and most productive life. With award-winning and innovative frictionless technologies, scientifically validated clinical protocols, and best-in-class coaching tools, Sharecare helps providers, employers, and health plans effectively scale outcomes-based health and wellness solutions across their entire populations. We are always looking for people who value the opportunity to work hard, have fun on the job, and make a difference in the lives of others through their work every day!

Sharecare Health Data Services is a wholly owned subsidiary of Sharecare. The Data Engineer will demonstrate that they are culturally aligned with Sharecare Health Data Services by displaying and working within the values of Servant Leadership, Family, Sharing Care, Compassion, Accountability, and Respect for their leader and their peers. They will be innovative, open to change, and display honesty and integrity in all that they do.

  • This is a remote position and can be performed from anywhere in the United States.

Job Summary:
In partnership with senior leaders and data end users, the Analytics and Data Science Team is responsible for building out Sharecare's internal data infrastructure, iterating on data pipeline improvements, delivering on the analytics roadmap, and measuring its impact across the organization. The Data Engineers on the team are given direct ownership of key areas of Sharecare's business, and delivering meaningful outcomes is their primary measure of success. The team's stakeholders span the company and work across business areas and teams, including Sales, Marketing, Operations, Quality Control, Product Development, Human Resources, Finance, and Information Technology, as well as external clients.

In addition to developing reliable data pipelines for our end users, this role will initially focus on improving our innovative data architecture, helping the team integrate data from across the company, and building a single-pane-of-glass view to drive actionable insights and analytics. If you have a passion for solving complex data engineering issues and are interested in steering your career in a meaningful direction, with the opportunity to work in a fast-paced environment, we want to hear from you!

The ideal candidate will be a lifelong learner with intellectual humility and excellent programming and data engineering skills, able to tackle a wide variety of projects, including ETL/ELT development, data quality analysis, data modeling, monitoring, and automation, and to support the ongoing buildout of our data warehouse and analytics environments.

Essential Functions:
  • Lead the design, development, and maintenance of scalable data pipelines for structured and unstructured data from a variety of sources including raw files, source systems, databases, external APIs, and cloud services.
  • Add value at each step, from raw data to structured data, data modeling, reporting, analytics, and machine learning.
  • Monitor and continuously improve ETL/ELT processes to assemble large, complex data sets using clean Python and SQL code, focusing on data reliability, efficiency, and quality (see the illustrative sketch after this list).
  • Lead deep architectural discussions to build confidence with our stakeholders and ensure customer success when creating new data solutions and products.
  • Collaborate closely with the data analysts and scientists to strive for greater functionality in our data ecosystem and quickly resolve operational issues by working with upstream support groups and other engineering teams.
  • Continuously research and integrate new data management and software engineering technologies into existing data structures and create POCs to constantly evolve our technology stack.
  • Support compliance and regulatory standards through data stewardship and data security policies.
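
As a purely illustrative sketch of the kind of pipeline work described above, the example below shows a minimal daily ELT workflow using Apache Airflow (listed as a plus under Qualifications). The DAG name, task names, and step contents are hypothetical placeholders, not a description of this team's actual stack.

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract_members(**context):
        # Hypothetical extract step: pull raw records from a source system,
        # external API, or file drop and land them in a staging area.
        pass


    def load_and_transform(**context):
        # Hypothetical load/transform step: run clean, idempotent SQL against
        # the warehouse to produce modeled, analytics-ready tables.
        pass


    with DAG(
        dag_id="example_member_elt",  # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
    ) as dag:
        extract = PythonOperator(task_id="extract", python_callable=extract_members)
        transform = PythonOperator(task_id="load_and_transform", python_callable=load_and_transform)

        # The load/transform step runs only after extraction succeeds.
        extract >> transform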

Qualifications:
  • Bachelor's degree (or higher) in a quantitative field.
  • 2+ years of direct data engineering experience and proficiency with SQL and Python.
  • Working knowledge of data structures, data modeling, and data analysis techniques.
  • Proven experience building and optimizing data pipelines and data models.
  • Working knowledge of Apache Airflow and Vertica is a big plus.
  • Experience with REST APIs and cloud infrastructure and services (Azure, AWS).
  • Hands-on experience with Microsoft SQL data technologies (SQL Server/SSIS/SSRS).
  • Familiarity with Microsoft Power Platform: Power BI, Power Apps, Power Automate.
  • Superb communication skills and the ability to distill complex technical issues for business audiences.
  • Comfort in a fast-paced agile environment, and the ability to consistently revise your approach in response to new information.
  • Intellectual curiosity and creativity, with willingness and ability to learn and apply new skills.
