Lead Data Engineer

Job description

At Protecht Inc, we are driven to continuously build and refine disruptive technologies for our clients, who range from local businesses to publicly traded companies. We give fans the ability to experience events with a feeling of safety and protection against unforeseen circumstances such as illness, work or travel issues, or other personal complications or liabilities that may affect attendance.

Protecht is looking for a hands-on Lead Data Engineer who can bring their ideas, vision, and experience to help build our data infrastructure. The ideal candidate has been a key contributor to building data pipelines (Kafka, Glue, Spark, Airflow, Kinesis, etc.) on AWS with Redshift. You'll take ownership of our data platform, develop a long-term data technology strategy, and implement key improvements. You have experience working closely with analytics teams and business stakeholders to deliver significant business impact using the data solutions you've built.

Essential duties and responsibilities may include, but are not limited to:

  • Own the strategy, implementation, and management of our data infrastructure.
  • Work with developers, stakeholders, and analysts to develop data models.
  • Build streaming and batch pipelines to ETL data from structured and unstructured sources.
  • Develop automation to support our data analysts.
  • Design and manage a framework for ensuring data quality and governance.
  • Help troubleshoot complex data issues and recommend solutions.
  • Communicate directions and decisions effectively with the team and stakeholders.
  • Build alignment with engineering, analytics, and product leadership.
  • Foster cross-team communication, ensuring data engineering works closely with the broader engineering team and the Protecht organization.


Qualifications and skills:

  • 3-5+ years of experience building streaming and batch ETL systems
  • Strong ability to write efficient SQL queries and develop data models
  • Strong coding ability (Python or Java)
  • Expertise with databases including AWS Redshift, AWS Athena, Postgres, MySQL, and DynamoDB
  • Experience with message distribution systems like Kafka or SQS
  • Familiarity with ETL frameworks such as Airflow, Spark, and Glue
  • Hands-on experience with serverless AWS technologies including Lambda, Step Functions, and Fargate
  • Experience with BI tools such as Domo or Tableau
  • Knowledge of testing, profiling, and debugging practices
  • Experience taking end-to-end ownership of your code from requirements and analysis to testing through deployment
  • Collaborative team player with strong communication skills (verbal and written)
  • Eagerness to solve challenging problems and a love of learning and trying new things
  • BS in Computer Science or related field or equivalent experience

