Job description

We are Merlin Digital Partner, a leading IT and digital headhunting company that stands out from the crowd, with over a decade of experience. We have collaborated with and played a pivotal role in the growth of industry heavyweights such as Wallapop, Glovo, Banc Sabadell, and Factorial, among others.

Our emphasis lies in people-centric approaches and optimized selection processes. Our mission is to revolutionize companies by seamlessly integrating top-tier talent. What sets us apart is our in-depth understanding of each partner (being their best influencer!), addressing not only their needs but also capturing their essence.

We are currently seeking a Data Engineer Lead to work with one of our partners, under contract with Merlin (maximum one year with possible client incorporation thereafter).

Our partner, a major multinational in the consumer goods sector, operates in a fast-changing environment, driven by big data and digital technologies that are transforming the business and generating innovations in areas such as e-commerce and IoT. They are looking to optimize enterprise data within the Data Operations team to achieve agility in decision-making.

As the Data Engineering Manager, you will be the key technical expert responsible for overseeing the construction and operation of our partner's data products, and for driving a strong vision of how data engineering can proactively create a positive impact on the business.

The Role:

  • Provide leadership and management to a team of data engineers, managing processes and their flow of work, vetting their designs, and mentoring them to realize their full potential.
  • Act as a subject matter expert across different digital projects.
  • Oversee work with internal clients and external partners to structure and store data into unified taxonomies and link them together with standard identifiers.
  • Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products.
  • Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance.
  • Implement best practices around systems integration, security, performance, and data management.
  • Empower the business by creating value through increased adoption of data, data science, and business intelligence.
  • Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners.
  • Develop and optimize procedures to “productionalize” data science models.
  • Define and manage SLA’s for data products and processes running in production.
  • Support large-scale experimentation done by data scientists.
  • Prototype new approaches and build solutions at scale.
  • Research state-of-the-art methodologies.
  • Create documentation for learnings and knowledge transfer.
  • Create and audit reusable packages or libraries.

Please consider applying if you have:

  • 8 years of overall technology experience that includes at least 6 years of hands-on software development, data engineering, and systems architecture.
  • 6 years of experience with Data Lake Infrastructure, Data Warehousing, and Data Analytics tools.
  • 6 years of experience in SQL optimization and performance tuning, and development experience in programming languages such as Python, PySpark, and Scala.
  • 4 years of cloud data engineering experience on at least one cloud platform (Azure, AWS, GCP).
  • Fluent with Azure cloud services. Azure Certification is a plus.
  • Experience scaling and managing a team of engineers.
  • Experience with integrating multi-cloud services with on-premises technologies.
  • Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines.
  • Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.
  • Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets.
  • Experience with at least one MPP database technology such as Redshift, Synapse, or Snowflake.
  • Experience running and scaling applications on cloud infrastructure and containerized services such as Kubernetes.
  • Experience with version control systems like GitHub and with deployment and CI tools.
  • Experience with Azure Data Factory, Azure Databricks, and Azure Machine Learning tools.
  • Experience with Statistical/ML techniques is a plus.
  • Experience building solutions in the retail or supply chain space is a plus.
  • Understanding of metadata management, data lineage, and data glossaries is a plus.
  • Working knowledge of agile development, including DevOps and DataOps concepts.
  • Familiarity with business intelligence tools (such as PowerBI).
  • BA/BS in Computer Science, Math, Physics, or other technical fields.
  • Preferably, some experience in the consumer goods industry.
  • Any experience with, passion for, or interest in Corporate Social Responsibility and Sustainability topics (spanning agriculture, climate, water, packaging, emissions, etc.) is considered an added advantage for the role.

We offer:

  • Initial 1-year project with the possibility of incorporation into the client.
  • The opportunity to work with one of the world's most powerful consumer goods corporations.
  • Collaborative and innovative work culture.

Please let the company know that you found this position on this Job Board as a way to support us, so we can keep posting cool jobs.