Kunai Consulting

Senior Data Engineer

Job description

Kunai is a growing digital agency of 70+ engineers, designers, and architects. During the past decade, we've shipped over 150 products for a portfolio of renowned clients including Visa, the United Nations, the NBA, Wells Fargo, Ernst & Young, TOMS Shoes, and many fintech unicorn startups. Our founders built a previous agency (Monsoon) that was acquired by Capital One in 2015.

As one of Kunai's first Data Engineers, you and your team will work with other engineers to build data accessibility solutions for our client, one of the world's most famous social media networks. Your team will build both real-time and offline solutions that make petabytes of data accessible and reliable, applying the largest-scale data processing tools in the world to our client's most critical and fundamental data problems. Your role will be challenging, fun, and interesting.

If you are passionate about data and driven by working in the uncharted waters of data organization at an unthinkable scale, this role is for you.

As a fully remote member of the Kunai team, you will:

  • Build and own mission-critical data pipelines that serve as the ‘source of truth’ for our client's fundamental revenue data, as well as modern data warehouse solutions, while collaborating closely with one of the internet's most respected data science teams.
  • Be part of an early-stage team with a significant stake in defining its future and considerable potential to impact all of our client's revenue and hundreds of millions of users.
  • Be among the earliest adopters of bleeding-edge data technologies, working directly with Data Science and Platform engineering teams to integrate your services at scale.
  • Reveal invaluable business and user insights, leveraging vast amounts of our client's revenue data to fuel numerous Revenue teams.

You have:

  • Strong programming and algorithmic skills.
  • Experience with data processing frameworks (e.g., Hadoop, Spark, Pig, Hive, MapReduce).
  • Proficiency with SQL across relational databases and engines such as Redshift, Hive, Presto, and Vertica.

Extra credit for:

  • Experience writing big data pipelines and implementing and maintaining custom or structured ETL.
  • Experience with large-scale data warehousing architecture and data modeling.
  • Proficiency with Java, Scala, or Python.
  • Experience with GCP (BigQuery, Bigtable, Dataflow).
  • Experience with Druid or Apache Flink.
  • Experience with real-time streaming (Apache Kafka, Apache Beam, Heron, Spark Streaming).
  • Ability to manage and communicate data warehouse project plans to internal clients.
