Job description

Our client, a worldwide media and entertainment brand, is looking to add a Data Engineer to their team!
Responsibilities:
  • Interface with key stakeholders to understand business requirements
  • Design and develop data models, ETL pipelines, reports, etc., per requirements
  • Work with big data technologies such as Hadoop and cloud database technologies like Snowflake to prepare the data and deliver the reports
  • Write test scripts and automate where possible (using PySpark, Bash scripting, and Python)
  • Participate in code reviews
  • Ensure code is checked in according to company guidelines
  • Participate in weekly scrum meetings and daily stand-up meetings
  • Keep status accurate and up to date in burndown charts
  • Ensure QA sign-off is obtained and fix any bugs discovered
  • Support the product sign-off process and resolve any issues discovered
  • Provide post-go-live support, fixing any P1 issues
Required Skills:
  • 5+ years of data engineering experience developing large data pipelines
  • Strong SQL skills and the ability to write queries that extract data and build performant datasets
  • Hands-on experience with distributed systems such as Spark, Hadoop (HDFS, Hive, Presto, PySpark) to query and process data
  • Experience with at least one major MPP or cloud data warehouse technology (Snowflake, Redshift, BigQuery)
  • Nice to have: experience with cloud technologies such as AWS (S3, EMR, EC2)
  • Solid experience with data integration toolsets (e.g., Airflow) and with writing and maintaining data pipelines
  • Familiarity with data modeling techniques and data warehousing best practices
  • Strong scripting skills, including Bash and Python
  • Familiarity with Scrum and Agile methodologies
  • A problem solver with strong attention to detail and excellent analytical and communication skills
