Proximus

Data Engineer

Job description

Role Description

As a Data Engineer, you will play a key role in designing the AI solution and preparing the infrastructure and data used to deliver high-quality data products for our customers. You will help us create, improve, and maintain the data pipelines that deliver insights. By following a DevOps approach, you will keep the overall system running and automate routine tasks, so you can spend your time creating rather than deploying.

You will also ensure the system is appropriately tested and monitored, using suitable methods and tools. You will collaborate with the other data engineers and data scientists to create the simplest effective data landscape possible, improving delivery speed for future AI use cases.

Responsibilities

  • Execute ETL (extract/transform/load) processes from complex and/or large data sets
  • Collaborate with machine learning engineers for the implementation, deployment, scheduling and monitoring of AI solutions
  • Ensure robust CI/CD processes are in place
  • Promote DevOps best practices in the team
  • Simplify & optimize existing pipelines if needed
  • Conceive and build data architectures to support data scientists
  • Participate in the architecture and planning of the big data platform to optimize the ecosystem's performance
  • Provide support for AI models in production & ensure operational excellence
  • Include security best practices by design & reinforce our defenses


Qualifications

  • Master's degree in computer science, engineering, or related field;
  • 3+ years of experience in data engineering or related roles;
  • Experience with AWS cloud services and tools (or, at minimum, with Azure).


Technical Skills

  • Proficient in Python & PySpark;
  • Strong knowledge of SQL (Teradata/Oracle);
  • Strong knowledge of CI/CD & DevSecOps / MLOps concepts;
  • Knowledge of containerized deployment of ML products (Docker, Podman, Kubernetes, ...);
  • Experience with on-premise Linux systems & cloud environments (AWS or Azure) for:
      ◦ database & storage management systems;
      ◦ data pipelines & ETL flows;
      ◦ infrastructure & cluster management;
      ◦ deployment, scheduling & integration of AI solutions;
  • High-level understanding of AI & ML concepts.


Nice to have:

  • Proficiency in Scala is a big plus;
  • Experience with Databricks is a plus;
  • Experience with stream processing and related tooling (e.g. Kafka, Kinesis, Elasticsearch, Grafana, NGINX) is a plus.


Attitude/Behavior

  • Quality oriented;
  • Excellent at analyzing and solving problems;
  • Open-minded, collaborative, a team player, and ready to adapt to changing needs;
  • Curious about new techniques and tools, eager to learn;
  • Convinced that good infrastructure is essential, and able to motivate and drive change when needed;
  • Committed to delivering, with a pragmatic, can-do attitude.


Languages

  • Native French or Dutch, plus fluent English.

