Job description

We are Addepto, where you can feel a startup atmosphere! We believe the only constant in life is change, so we keep developing and improving to get better at what we do every day. We think outside the box to create and deliver the best solutions in Big Data, Machine Learning, and Artificial Intelligence.

For our team based in Warsaw and working remotely, we are looking for a Data Engineer to focus mainly on designing and building data processing architectures.

We are open to candidates with different levels of expertise (Mid/Senior) who want to further develop their skills and experience in this role.

Some of our recent Big Data projects

  • Data lakes that store terabytes of data and run machine learning tasks for a major telecom company
  • Streaming applications that serve real-time data analytics to manufacturing companies
  • Systems that support the decision-making process and help controlling and operations departments analyze data in a unified format
  • Real-time machine learning prediction on massive datasets that prevents losses for pharmaceutical companies
  • And more!


Responsibilities

  • Design and construction of scalable data processing architecture
  • Using Big Data and BI technologies (Spark, Hadoop)
  • Building applications that aggregate, process, and analyze data from various sources
  • Cooperation with the Data Science department in the field of Machine Learning projects (including text/image analysis, building predictive models)
  • Manage distributed database systems such as ClickHouse, BigQuery, Teradata, Oracle Exadata, and PostgreSQL + Citus; data modeling with Star and Snowflake schemas
  • Develop and organize data transformations in DBT and Apache Airflow
  • Gather business requirements and translate them into technical code
  • Ensure the best possible performance and quality of delivered packages
  • Manage business user’s expectations

Requirements

  • Higher education in a technical or mathematical field (or final year of studies)
  • Commercial experience in the implementation, development, or maintenance of Business Intelligence or Big Data systems
  • Knowledge of Python (or Java/Scala)
  • Hands-on experience with Spark, Cloudera Data Platform, Airflow, and NiFi
  • Familiarity with Big Data technologies (Databricks, Docker, Kubernetes, Iceberg, Trino, Hudi)
  • Good command of the English language (min. B2+)
  • Experience with cloud services (AWS, Azure or GCP)
  • Independence and responsibility for delivering a solution
  • Excellent knowledge of dimensional data modeling
  • Good communication and soft skills
  • Ability to lead discussions and requirements sessions, and to comprehend, summarize, and finalize requirements


What We Offer

  • Work in a well-coordinated team of passionate enthusiasts of Big Data & Artificial Intelligence
  • Fast career path and the opportunity to develop your qualifications through sponsored training, conferences, and many other development opportunities in various areas
  • Challenging international projects for global clients and innovative start-ups
  • Friendly atmosphere, outstanding people and great culture – autonomy and supportive work environment are crucial for us
  • Flexible working hours – you can adjust your schedule to better fit your daily routine
  • Possibility of both remote and office-based work – modern office space available in Warsaw, Cracow, Wroclaw, Bialystok or coworking space in any place in Poland if needed
  • Your choice of employment form – we offer B2B, an employment contract, or a contract of mandate
  • Paid vacation – 20 fully paid days off if you choose B2B or contract of mandate
  • Other benefits – e.g. great team-building events, language classes, training & workshops, knowledge-sharing sessions, a medical & sports package, and more
