Do you like solving engineering puzzles? Do you like making things work? Do you want to contribute to a company that feeds millions of people every day? Then FrieslandCampina’s Global Data Science Chapter is looking for you to join our growing group of Data Engineers.
What we ask
As everybody in the company is currently moving into the data space, we need to do data engineering well and set the standard way of working for other teams. It helps if you understand data science and machine learning, as you will regularly work with our data scientist colleagues. That means we are looking for someone who not only loves to puzzle in Python/SQL/PySpark, but also has strong communication skills.
What do we expect from you?
- You know Python, Pandas, PySpark and SQL intimately.
- You can write object-oriented, modular code and know how to make it testable and reusable.
- You take full, end-to-end responsibility for development and alignment.
- You have at least 5 years of experience in technical, data-related roles.
- You have a bachelor’s degree in CS, AI, or another quantitative field.
- Data warehousing knowledge, deep PySpark expertise, and food/FMCG/supply-chain or dairy experience are a plus.
What we offer
What can you expect from us?
- We are a young, growing team with a global mindset and a diverse set of backgrounds.
- Work on diverse projects in different parts of the company. We have already done projects in factories, commercial departments, planning, finance, and HR.
- We are a small, growing team; you will not be a cog in a big machine, but get end-to-end responsibility.
- Support on your professional journey: training on hard and soft skills, mentoring and career opportunities depending on your individual growth plan
- Free cheese and milk at lunch (and yes, we will pay you too).
As Data Engineers, we are responsible for enabling business analysts and data scientists to get value from data in a painless, fast and reliable way.
Here are six examples of work that you could be doing:
- Collecting millions of data points from a factory into our AWS cloud, then preparing them for further analysis;
- Building a data pipeline that connects a variety of external and internal data sources into one pricing-prediction project;
- Using Python to develop a plugin on top of our enterprise-wide data science tool (DataIKU) that automatically feeds data into our data warehouse ingestion framework;
- Assisting and challenging data scientists to make their code modular and testable;
- Moving notebooks and POC web apps into products that scale well and can be used by business people to determine price points for some of our key commodities;
- Setting up a CI/CD pipeline, together with our AWS partners, for a global shared code repository.
Your primary role will be data engineer, but we strongly encourage you to pick up additional roles and skills such as data science, agile project management, business analysis, change management, or training.