Make an impact
by working for sectors where technology is the enabler, everything is ground-breaking and there's a constant need to be innovative.

Create and enhance projects
in Java, Python, Angular, PHP, .NET and so much more while diving into the world of Blockchain, Artificial Intelligence, Data Science, Security and Internet of Things.

Be part of the team
that combines business knowledge, technological edge and design experience. Our different backgrounds and know-how are key in developing solutions and experiences for digital clients.

Face challenges
and learn other ways of thinking and seeing the world - there's always room for your energy and creativity.

About The Role
The Data Engineer is responsible for building and maintaining data platforms. They recognize the importance of data in the areas where it is key to the organization's success, and ensure those success factors are realized in the best possible way. Keeping an eye on the big picture while knowing the details of the business is decisive for this role.
This role is focused on designing, developing, and maintaining the data platform required for data storage, processing, orchestration, and analysis.
Their mission involves implementing scalable and performant data pipelines and data integration solutions, agnostic of data sources and technologies, to ensure efficient data flow and high data quality, enabling data scientists, analysts, and other stakeholders to access and analyze data effectively.

As part of your job, you will:
- Design, build, and maintain scalable data platforms;
- Collect, process, and analyze large and complex data sets from various sources;
- Develop and implement data processing workflows using frameworks such as Spark and Apache Beam;
- Collaborate with cross-functional teams to ensure data accuracy and integrity;
- Ensure data security and privacy through proper implementation of access controls and data encryption;
- Extract data from various sources, including databases, file systems, and APIs;
- Monitor system performance and optimize for high availability and scalability.

What are we looking for?
- Proficiency in programming languages like Python, Java, or Scala;
- Experience with Big Data tools such as Spark, Flink, Kafka, Elasticsearch, Hadoop, Hive, Sqoop, Flume, Impala, Kafka Streams and Connect, Druid, etc.;
- Knowledge of data modeling and database design principles;
- Familiarity with data integration and ETL tools (e.g., Apache Kafka, Talend);
- Understanding of distributed systems and data processing architectures;
- Strong SQL skills and experience with relational and NoSQL databases;
- Familiarity with cloud platforms and services for data engineering (e.g., AWS S3, Azure Data Factory);
- Experience with version control tools such as Git;
- Knowledge of agile methodologies such as Scrum, Kanban, etc.;
- Strong problem-solving and analytical skills;
- Strong communication competencies;
- Ability to adapt to different contexts, teams, and Clients;
- Teamwork skills but also a sense of autonomy;
- Motivation for international projects and willingness to travel when required;
- Willingness to collaborate with other players.
We want people who like to roll up their sleeves and open their minds. Believe this is you? Come join the Team!

Celfocus is a European high-tech system integrator, providing professional services focused on creating business value through Analytics and Cognitive solutions – addressing Telecommunications, Energy & Utilities, Financial Services and other markets' strategic opportunities.
Serving Clients in 25+ countries, Celfocus delivers solutions such as accelerating digital network transformation in Autonomous Networks, elevating and monetising business services in B2B2x ecosystems, and providing highly relevant customer experiences through Hyper-personalisation solutions.