Manager, Data Engineer (ETL and Modeling)

Job description

About Us

About SATS – Feed and Connect Communities

SATS is a global leader in gateway services and Asia's pre-eminent provider of food solutions. Using innovative food technologies and resilient supply chains, we create tasty, quality food in sustainable ways for airlines, foodservice chains, retailers, and institutions. With heartfelt service and advanced technology, we connect people, businesses, and communities seamlessly through our comprehensive gateway services for customers such as airlines, cruise lines, freight forwarders, postal services and eCommerce companies.

Fulfilling our purpose to feed and connect communities, SATS delights customers in over 210 locations and 27 countries across the Asia Pacific, the UK, Europe, the Middle East, and the Americas. SATS has been listed on the Singapore Exchange since May 2000. For more information, please visit the SATS website.

Key Responsibilities

Objectives of this role

Work with teams from concept to operations, providing technical subject matter expertise for the successful implementation of data solutions in the enterprise using modern data technologies. This individual will be responsible for the planning, execution, and delivery of data initiatives, as well as for expanding and optimising the data pipelines and architecture. This is a hands-on development role, working primarily with Databricks and Microsoft Azure data engineering skill sets and on application development using Java and Python.


  • Work collaboratively with relevant teams to define functional and technical requirements
  • Document technical specifications, processes, and workflows for data pipelines and related systems
  • Design, develop, and maintain scalable data pipelines to ingest, process, and store data from various sources into the operational data platform
  • Design and develop intuitive, highly automated, self-service data platform functions for business users
  • Optimise data processing and storage infrastructure for mission-critical, high-volume, near-real-time & batch data pipelines
  • Implement data quality checks, monitoring, and alerting to ensure data accuracy and availability
  • Participate in code reviews, testing, and deployment processes to maintain high standards of data engineering practices
  • Troubleshoot and resolve data pipeline issues in a timely manner to minimise impact on business operations
  • Contribute to the overall data architecture and strategy for the operational data platform
  • Manage stakeholder expectations and ensure clear communication

Key Requirements

Required Skills And Qualifications

  • More than 7 years' data engineering experience, with strong knowledge of Python, Java, and SQL
  • Hands-on experience with big data frameworks such as Spark for large-scale data processing
  • Experience in .NET, C#, HTML5, CSS, and React will be an added advantage
  • Familiar with cloud computing platforms, specifically Microsoft Azure data engineering tools and services
  • Experience in data architecture, data lake and building data pipelines (including data collection, warehousing, processing, analysis, monitoring, and governance)
  • Fluent in structured and unstructured data, database management, and transformation methodologies
  • Familiar with technical integrations using microservices, APIs, message queues, stream processing, etc.
  • Exposure to CI/CD pipelines, Azure DevOps, or GitHub
  • Strong communication skills, with the ability to explain technical concepts to non-technical stakeholders

Preferred Skills And Qualifications

  • Bachelor’s degree (or equivalent) in computer science, information technology, engineering, or related discipline
  • Certifications on cloud technology and data engineering
  • Aviation domain knowledge will be an added advantage
