- 6-month contract with a view to extend | Based in Sydney CBD
- Experience in SQL, Apache Spark, Python/PySpark, Kafka, Docker
- Work on an exciting, large-scale Big Data transformation project
Our client is looking for a Data Engineer to work on a large data transformation project, building a range of automation workflows. The primary tech stack includes Kubernetes, Apache Spark, Python/PySpark, Kafka, Docker, and ML workflow automation/orchestration via Airflow and Jenkins. Excellent big data engineering experience is essential.
Skills Required:
- Excellent SQL writing skills.
- Experience in ETL/ELT tools such as Teradata, Informatica and PySpark
- Experience in extraction, loading and transformation of data directly from heterogeneous sources
- Experience in implementing Teradata applications using the Teradata Control Framework (TCF)
- Experience in modeling methodologies such as Star Schema, Snowflake Schema and Data Vault
- Technology Stack Experience: SQL, Kubernetes, Apache Spark, Python/PySpark, Kafka, Docker, ML workflow automation/orchestration via Airflow, Jenkins and Tableau.
- Experience in Tableau or Qlik
Please send your CV as a Word document ONLY. Only successful candidates will be contacted.
Sumeet Kaur
Infrastructure
Recruitment Consultant
Let's Connect
https://www.linkedin.com/in/recruitersumeetsydney/
+61 2 8088 3616
BBBH105752_159304396396044