Job description

The Disney Decision Science and Integration (DDSI) analytics consulting team is responsible for supporting clients across The Walt Disney Company including Direct-to-Consumer & International, Media Networks (e.g., ABC, ESPN), Studio Entertainment (e.g., The Walt Disney Studios, Disney Theatrical Group), and Parks, Experiences & Consumer Products. DDSI leverages technology, data analytics, optimization, and statistical and econometric modeling to explore opportunities, shape business decisions, and drive business value.

The DDSI Data Engineering (DE) team is seeking a Senior Data Engineer to work on a Food and Beverage Revenue Management project. This position will work with the Decision Science Products and Science teams to design and implement a new system in the Food and Beverage space. The work will involve various data engineering activities such as data acquisition and validation, designing and implementing ETL/ELT data pipelines, and designing and implementing databases.

Responsibilities of the Role:

Work assignments may cover activities such as data requirements gathering, source-to-target mapping, data validation scripting and review, developing and monitoring ETL/ELT data pipelines, designing and implementing database schema/tables/views, and producing datasets as input to science models and visualizations.

Technologies generally leveraged to fulfill the work include, but are not limited to, SQL, Python, Docker, GitLab, Airflow, AWS Lambda, Snowflake, and PostgreSQL.

Basic Qualifications:

  • 3-5 years of experience developing and maintaining ETL/ELT data pipelines
  • Proven experience and expertise using Python, SQL, and cloud storage (such as AWS S3)
  • Experience developing in a multi-environment setup (Dev, QA, Prod, etc.) and with DevOps procedures for code deployment/promotion
  • Strong understanding of relational database design and proficiency with a database such as PostgreSQL or Snowflake

Desired Qualifications:

  • Master’s degree in Computer Science, Mathematics, Engineering, or a related field preferred
  • Experience working with large datasets and big data technologies, preferably cloud-based, such as Snowflake, Databricks, or similar
  • Knowledge of cloud architecture and product offerings, preferably AWS
  • Experience managing and deploying code using a source control product such as GitLab/GitHub
  • Experience leveraging containerization technologies such as Docker or Kubernetes
  • Hands-on knowledge of job scheduling software like Apache Airflow, Amazon MWAA, or UC4
