- Develop fast data infrastructure leveraging data streaming, batch processing, and machine learning to personalize experiences for our customers.
- Lead projects and deliver elegant, scalable solutions.
- Work and collaborate with a nimble, autonomous, cross-functional team of makers, breakers, doers, and disruptors who love to solve real problems and meet real customer needs.
Requirements
Technical Qualifications:
- Bachelor's degree in Engineering, Information Technology, or Computer Science
- 4 years of hands-on experience as a Data Engineer in a Big Data environment (e.g. Spark, Hive, HDFS, Sqoop)
- Strong SQL knowledge and data analysis skills for data anomaly detection and data quality assurance
- Programming experience in Scala and Python, plus shell scripting and automation
- Experience with modern workflow/orchestration tools (e.g. Apache Airflow, Oozie, Azkaban, etc.)
- Experience working with PostgreSQL, Teradata, Vertica and/or other DBMS platforms
Preferred Qualifications:
- Hadoop or Spark certification
- Experience with BI tools such as Tableau or Qlik for building visualizations and dashboards that track data quality metrics