Data Engineer

Location: Chicago, IL

*** Mention DataYoshi when applying ***

About CCC

CCC is the technology platform for the underwritten assets economy. CCC technology, insights, and support connect industries – insurers, automotive manufacturers, collision repairers, parts suppliers, lenders, fleet operators, and more – to advance decision-making, productivity, and customer experiences for thousands of clients worldwide. Clients leverage CCC’s network management, data management, AI, operational workflow, and customer experience solutions to efficiently scale, interact, transact, and achieve their unique business objectives. CCC was ranked a best mid-sized company to work for by Forbes (2019), and BuiltIn Chicago, Austin, and LA named CCC a top place to work in 2020. Diverse perspectives and experiences are core to CCC’s success and award-winning culture of more than 2,000 employees worldwide. We hold inclusion as a core value and are committed to celebrating and cultivating the diversity of our team. With a 40+ year track record of innovation, CCC’s tenacious spirit and growth mindset turn next-generation technology into real-world solutions and empower team members to expand their knowledge and potential. Headquartered in Chicago, CCC has 11 locations worldwide. CCC’s principal PE investors are Advent International, Technology Crossover Ventures, and Oak Hill Capital. Find out more about CCC by visiting our website.

Job Description Summary

The Enterprise Analytics team at CCC has an open position for a Data Engineer. The team builds platforms that provide insights to internal and external clients of CCC businesses in auto property damage and repair, medical claims, and telematics data. Our solutions include analytical applications covering claim processing, workflow productivity, financial performance, client and consumer satisfaction, and industry benchmarks. Our data engineers use big data technology to create best-in-industry analytics capabilities. This position is an opportunity to use Hadoop and Spark ecosystem tools and technology for micro-batch and streaming analytics. Data behaviors include ingestion, standardization, metadata management, business rule curation, data enhancement, and statistical computation against data sources that include relational, XML, JSON, streaming, REST API, and unstructured data. The role is responsible for understanding, preparing, processing, and analyzing data to drive operational, analytical, and strategic business decisions. The Data Engineer will work closely with product owners, information engineers, data scientists, data modelers, infrastructure support, and data governance roles. We look for engineers who start with 2-3 years of experience in big data and who also love to learn new tools and techniques in a big data landscape that is constantly changing.

Job Duties

  • Build end-to-end data flows from sources to fully curated and enhanced data sets. This includes locating and analyzing source data; creating data flows to extract, profile, and store ingested data; defining and building data cleansing and imputation; mapping to a common data model; transforming to satisfy business rules and statistical computations; and validating data content.
  • Modify, maintain and support existing data pipelines to provide business continuity and fulfill product enhancement requests.
  • Provide technical expertise to diagnose errors escalated by production support teams.
  • Participate as both leader and learner in team tasks for architecture, design and analysis.
  • Coordinate within on-site teams and work seamlessly with the US team.
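As a rough illustration of the stages named in the first duty above (ingest, profile, cleanse/impute, validate), here is a minimal, self-contained Python sketch. All record and field names are hypothetical, and a production flow would use Spark rather than plain Python; this only shows the shape of the work:

```python
from statistics import mean

# Hypothetical raw records as they might arrive from a JSON source;
# note the missing value the flow must handle.
raw = [
    {"claim_id": "C1", "repair_cost": "1200.50"},
    {"claim_id": "C2", "repair_cost": None},
    {"claim_id": "C3", "repair_cost": "950.00"},
]

def ingest(records):
    # Extract: parse numeric fields, keeping None for missing values.
    return [
        {"claim_id": r["claim_id"],
         "repair_cost": float(r["repair_cost"]) if r["repair_cost"] else None}
        for r in records
    ]

def profile(records):
    # Profile: count nulls in the field before cleansing.
    return sum(1 for r in records if r["repair_cost"] is None)

def impute(records):
    # Cleanse/impute: replace missing costs with the mean of known costs.
    known = [r["repair_cost"] for r in records if r["repair_cost"] is not None]
    fill = mean(known)
    return [dict(r, repair_cost=r["repair_cost"]
                 if r["repair_cost"] is not None else fill)
            for r in records]

def validate(records):
    # Validate: every record must now carry a positive cost.
    return all(r["repair_cost"] is not None and r["repair_cost"] > 0
               for r in records)

rows = ingest(raw)
nulls_before = profile(rows)
clean = impute(rows)
assert validate(clean)
```

In a real pipeline each stage would also emit metadata (row counts, null rates, rule failures) so downstream consumers can trust the curated data set.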


Required Qualifications:

  • Master’s or bachelor’s degree with an engineering, programming, or analytics specialization
  • 2-3 years’ experience building, maintaining, and supporting complex data flows with structured and unstructured data
  • Proficiency in Python and PySpark
  • Hands-on experience with HDFS, Hive, and Sqoop
  • Familiarity with Apache Kafka and Apache Airflow
  • Ability to use SQL for data profiling and data validation
  • Experience with Unix commands and shell scripting
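As one concrete example of the SQL profiling and validation skill listed above, the sketch below runs profiling and rule-check queries against a throwaway SQLite table. The table and column names are made up purely for illustration:

```python
import sqlite3

# In-memory table standing in for a real claims source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (claim_id TEXT, state TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO claims VALUES (?, ?, ?)",
    [("C1", "IL", 1200.5), ("C2", None, 950.0), ("C3", "IL", -10.0)],
)

# Profiling: row count, null count, and value range for key columns.
total, null_states, min_amt, max_amt = conn.execute(
    "SELECT COUNT(*), SUM(state IS NULL), MIN(amount), MAX(amount) FROM claims"
).fetchone()

# Validation: a business rule says claim amounts must be positive.
bad_rows = conn.execute(
    "SELECT COUNT(*) FROM claims WHERE amount <= 0"
).fetchone()[0]
```

The same pattern (aggregate queries for profiling, rule queries counting violations) carries over directly to Hive or any other SQL engine.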

Preferred Qualifications:

  • Understanding of the AWS ecosystem and services such as EMR and S3
  • Experience and understanding of Continuous Integration and Continuous Delivery (CI/CD)
  • Understanding of performance tuning in distributed computing environments (such as a Hadoop cluster or EMR)
  • Familiarity with BI tools (such as Tableau and MicroStrategy) and commonly used data modeling techniques

