Team’s Vision
The Machine Learning (ML) Engineering team at Disney drives and enables ML usage across several domains in heterogeneous language environments and at all stages of a project’s life cycle, including ad-hoc exploration, preparing training data, model development, and robust production deployment. The team is invested in continual innovation of the ML infrastructure itself, carefully orchestrating a continuous cycle of learning, inference, and observation while maintaining high system availability and reliability. We seek new ways to scale with our guest and partner base and with the ever-growing need for ML and experimentation.
Role
In this role, you will work on event and context processors to federate context, as well as infrastructure and tooling to enable event-driven ML pipelines. In addition, you will partner with ML platform users to help automate and manage their ML applications. You will work on cross-functional projects and push the envelope on data and ML infrastructure.
Responsibilities:
- Design and develop an event and context processing ecosystem
- Collaborate with ML and data practitioners to automate their pipelines
- Build tooling and low-latency services to enable and support event-driven pipelines
- Work on multi-faceted projects across teams, with engineers from diverse backgrounds and heterogeneous skill sets
- Drive and maintain a culture of quality, innovation and experimentation
- Work in an Agile environment that focuses on collaboration and teamwork
Basic Qualifications:
- Bachelor’s degree in Computer Science, Information Systems, Software, Electrical or Electronics Engineering, or comparable field of study, and/or equivalent work experience
- 5+ years of software experience, with 3+ years of relevant data and software experience
- Experience in building large datasets and scalable services
- Experience deploying and running services in AWS/GCP/Azure, and engineering big-data solutions using technologies like Databricks, EMR, S3, Spark
- Experience loading and querying cloud-hosted databases such as Redshift and Snowflake
- Experience designing and developing backend microservices for large-scale distributed systems using gRPC or REST
- Experience with large-scale distributed data processing systems, cloud infrastructure such as AWS or GCP, and container systems such as Docker or Kubernetes
Preferred Qualifications:
- Knowledge of the Python/Scala/Java data ecosystem
- Experience building streaming pipelines using Kafka, Spark, Flink, or Samza
- Excellent communication and people engagement skills
- Mentor colleagues on best practices and technical concepts of building large scale solutions
The hiring range for this position in New York is $142,516 - $191,180 per year, in California is $129,560 - $199,870 per year and in Seattle is $142,516 - $191,180 per year. The base pay actually offered will take into account internal equity and also may vary depending on the candidate’s geographic region, job-related knowledge, skills, and experience among other factors. A bonus and/or long-term incentive units may be provided as part of the compensation package, in addition to the full range of medical, financial, and/or other benefits, dependent on the level and position offered.