Machine Learning Engineer

Job description

Machine Learning Engineer – IT Data Science Team


Warsaw, Poland (or remote within Poland, with willingness to visit the office up to twice per month)

As a key member of Equinix’s IT Data Science team, you will set standards in designing, building, scaling, and operating innovative machine learning solutions. You will work with other data scientists, machine learning engineers, and data engineers to design, develop, and deploy various ML/AI solutions. You will drive research and proofs of concept for state-of-the-art MLOps tools, then incorporate them into our platform and drive adoption across all our projects. This is a hands-on, individual contributor role.

Job Profile Summary

Our growing IT Analytics and Data Science team spans the US, Poland, India, and Singapore, with a diverse set of skills and backgrounds. We work with internal customers in departments such as Sales, Marketing, Finance, and Operations to deliver machine learning solutions to critical data-driven problems directly connected to the company’s top-level goals.

We work with a variety of datasets generated by our internal systems, including tabular/panel data, text, and even video, as well as datasets acquired from third-party providers. We have several ongoing projects related to revenue, retention, growth, and risk mitigation, plus a vast green field for exploring new ideas. To deliver these solutions we use multiple machine learning techniques, such as classification, regression, clustering, time series, anomaly detection, NLP, and graph models, and we are constantly exploring new ones.

We are building our ML platform with MLOps best practices in mind to increase productivity and reliability across our machine learning use cases. We use GCP (Google Cloud Platform) and its cloud components to architect our solutions, working mostly with Python, SQL, and popular data science libraries, while keeping the flexibility to try new technologies along the way.


  • Deliver ML projects: Build and improve data science use cases in multidisciplinary teams, from idea to code in production. This role sits at the cutting edge of our data and machine learning platform: as we push to solve more of our ML/AI challenges, you will prototype new features, tools, and ideas, innovating at a fast pace to maintain our competitive edge
  • ML engineering and operations: Research, design, and build an ML platform that sustains the full lifecycle of all our models. Interact with the data engineering and architecture teams to execute and optimize how we consume data and deploy models in production. The roadmap includes a feature store, model monitoring, and other MLOps concepts
  • Data expertise: Work with large volumes of data; extract and manipulate large datasets using tools such as Python and SQL. Leverage a state-of-the-art ML platform to train and deploy models. Drive scalability, reliability and efficiency across projects
  • Standardization: Optimize several processes, define standardized approaches to solving use cases and problems, and ensure that multiple projects follow the defined frameworks
  • Collaboration: Coordinate and work with cross-functional teams, often based in different locations

Required skills

  • CS fundamentals: You have earned at least a B.S. (M.S./Ph.D. desired) in Computer Science or a related field, and you have a strong ethos of continuous learning
  • Software engineering: You have 3+ years of professional software development experience using Python and SQL with version control (Git), and strong analytical and debugging skills. You have used automated code analysis and formatting tools. You are fluent in writing tests and documenting your code along the way.
  • Machine learning: You have 3+ years of experience with machine learning modeling, creating prototypes and evolving them into full-fledged products served in production environments. You have knowledge of methods such as classification, regression, time series, NLP, anomaly detection, clustering, and other common ML tools.
  • Cloud and ML environments: You have worked in at least one cloud environment such as GCP (preferred), AWS, or another cloud data platform, and you understand the different cloud components used to build ML solutions (Seldon.io, Kubeflow, Vertex AI, or similar)
  • Machine learning operations (MLOps): You have developed CI/CD workflows using GitHub Actions and/or ML workflows using ML lifecycle tracking and deployment frameworks (such as MLflow, Seldon.io, Kubeflow Pipelines, or TFX)
  • Data manipulation: You have a deep interest in understanding the datasets behind each project, the ability to spot and communicate data quality issues, and knowledge of feature engineering techniques to feed models the best possible data. Experience developing Spark scripts is a plus.
  • Project management: You demonstrate excellent project and time management skills and have exposure to Scrum or other agile practices (e.g., in Jira)
  • Fluent in English

Desired skills

  • Machine learning operations (MLOps): Practical experience in MLOps use cases such as: Feature Store, Model monitoring, Data and model versioning
  • AutoML: Experience and interest to work with AutoML software (like H2O Driverless AI, GCP AutoML or others)
  • Graph databases: Experience in evaluating, designing and implementing scalable solutions that leverage graph DBs (Neo4j, TigerGraph or similar)

The successful candidate will

  • Be a talent multiplier who gets the team around them to excel
  • Be persistent, creative, and relentlessly driven to get results
  • Exhibit a strong backbone to challenge the status quo, when needed
  • Exhibit a high level of curiosity, keeping abreast of the latest trends & technologies
  • Show pride of ownership and strive for excellence in everything undertaken