Job description

Custimy: who are we?

In a time of recession and disruption, we believe there is a place for innovation and growth. We have therefore embarked on the journey of creating the next-generation Customer Data Platform for all e-commerce businesses.

Currently, 96% of e-commerce businesses lack the infrastructure software needed to compete with enterprises. Why?

With Custimy, e-commerce businesses have the infrastructure software required to activate customer data like never before: the ability to automatically collect, consolidate, validate, and analyse all customer data from multiple channels. This ensures there is never again a meaningless marketing campaign, email, newsletter, or product recommendation sent to your customers.

In this modern digital age, we have a clear mission and purpose: empowering SMBs and minimizing the competitive gap between SMBs and enterprises. We believe SMBs have been forgotten in the technological evolution, and that there is a lack of solutions enabling them to consolidate and analyse their first-party data in a secure, compliant and value-creating way.

If you believe in:

  • Customer Centricity above all
  • One team, one mission
  • Always strive for excellence
  • Technology for all

Then you are exactly the right fit for Custimy :-D

Key Responsibilities:

  • Design, implement, monitor and optimize data pipelines and extract, transform and load processes within our product architecture
  • Assemble large and complex data sets to meet business requirements; work with different data integrations, their data formats and documentation
  • Optimize data delivery and redesign infrastructure for greater scalability where you consider it relevant
  • Explore ways to enhance data quality and reliability, participate in designing the testing suite for data pipelines
  • Collaborate with software engineers and data scientists to optimize the data flows, participate in MLOps architecture design
  • Work with internal stakeholders to assist with data-related technical issues and support data infrastructure needs

Skills / Requirements:

  • A master's degree in Computer Science, Informatics, Information Systems or another quantitative field, with 3+ years of professional experience as a data engineer
  • Solid experience designing scalable and reliable data processes, including data pipelines, ETL processes, data warehouses and data lakes
  • Experience with AWS cloud services such as Glue, S3, Batch and RDS. Our architecture is built on AWS, so hands-on work with Amazon services would be a fundamental part of your day-to-day
  • Experience working with relational databases like MySQL, MariaDB or PostgreSQL
  • Solid experience in Python, along with embedded SQL, SQLAlchemy and ORMs
  • Working experience with a modern software development stack, including CI/CD, repositories, version control and containerization
  • Basic understanding of REST APIs, data formats such as JSON, and the basics of HTTP requests
  • Basic understanding of distributed systems; Spark experience is appreciated, as we work with PySpark on AWS

We strive to be as remote as possible; however, we would like you to be close to one of our offices so we can meet on a regular basis.

If this sounds right to you, don't hesitate to send us your application.

We are super excited to hear from you, and we will fill the role as soon as we find the best match for our team!
