
Data Engineer
  • Spark
  • Big Data
  • ETL
  • Modeling
  • Hadoop
  • MapReduce
  • Scala
  • Kafka
  • Azure
Eden Prairie, MN
188 days ago

Projects the candidate will be working on:

  • Building solutions for our Analytical Platform that use Provider Call Data and Claims Data to measure provider performance across different network levers.

Ideal Background:

  • Strong Big Data skills.
  • 5+ years of hands-on experience with Big Data.

Top Requirements:

  • Big Data (Spark, Scala, Hive, Hadoop Ecosystem)
  • Data Streaming (Kafka)
  • Docker and Kubernetes (K8s)

Team and Team size:

  • Part of a team
  • PDP P360 Analytical Platform Team, Team Size - 5
  • Scrum team (1 Scrum Master, 1 Team Lead, 3-4 Developers).

Software tools/skills:

  • MS, good communication skills.
  • Build and support a cloud-based analytical platform for complex data systems.
  • Document technical applications, specifications, and enhancements.
  • Collaborate with business and technical stakeholders to define solutions.
  • Collaborate with data scientists to organize large data sets for data modeling.
  • Recommend ways to improve data reliability, quality and efficiency.
  • Lead engineers in making sound, sustainable, and practical technical decisions.
  • Foster high-performance, collaborative technical work resulting in high-quality output.
  • Compile and present key findings and reports to all levels of the organization, including senior leadership.


  • 5+ years of experience with Big Data (HDFS, MapReduce, Hive, HBase, Pig, Sqoop, Spark, Scala).
  • Experience building and optimizing data pipelines, architectures and data sets.
  • Experience with data modeling, data management, and ETL tools such as Kafka Connect, Informatica, etc.
  • Experience building and deploying applications to the cloud (AWS, Azure, etc.).
  • Experience working in an Agile environment.
  • Expertise with modern programming languages, systems, and architectures.

Nice to have:

  • BA/BS degree in Computer Science or equivalent experience.
  • Experience with the use of containers such as Kubernetes and Docker.
  • Experience with DevOps, Continuous Integration, and Continuous Delivery.
  • Expertise in performance and scalability optimization.
  • Knowledge of software and infrastructure security practices.


Interview process:

  • How many rounds? 2-3 rounds
  • Video vs. phone? Video
  • How technical will the interviews be? Very technical

Horizontal is proud to be an Equal Opportunity and Affirmative Action Employer. We seek to provide employment opportunities to talented, qualified candidates regardless of race, color, sex/gender including gender identity and/or expression, national origin, religion, sexual orientation, disability, marital status, citizen status, veteran status, or any other protected classification under federal, state or local law.

In addition, Horizontal will provide reasonable accommodations for qualified individuals with disabilities. If you need to request a reasonable accommodation in order to complete the application or interview process, please contact

All applicants applying must be legally authorized to work in the country of employment.
