The data engineer will primarily help the data team, and the company as a whole, build a sound data foundation.
Key Responsibilities
- Design, develop and deploy data pipelines
- Perform data modelling; design and maintain the various analytical layers in the Data Warehouse
- Design and implement data health checks to ensure data quality and consistency across systems
- Design and implement data extraction solutions in a distributed system
- Design and implement other solutions for processing geometry data
- Build monitoring solutions at various data snapshots / pipeline checkpoints
Skills required
- Minimum of 1 year of experience in Data Engineering, covering data pipelines end-to-end
- Familiar with Python 3 and able to deliver production-ready code
- Familiar with one modern Data Warehouse solution, e.g. Redshift, BigQuery, ClickHouse, etc.
- Familiar with one orchestration solution, e.g. Airflow, Luigi, or Azkaban
- Familiar with at least one relational database, such as Postgres
- Familiar with Git, with a good sense of documentation and the standard sprint planning process
- Experience with geolocation data will be an advantage
- Experience with other Data Engineering tools / frameworks such as dbt, Spark, or Kafka will be an advantage
- Experience with DevOps and AWS will be an advantage
- Experience writing analytical queries