Data Engineer - AWS and Databricks

Job description

About Capgemini:
Capgemini is a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The Group is guided every day by its purpose of unleashing human energy through technology for an inclusive and sustainable future. It is a responsible and diverse organisation of over 360,000 team members in more than 50 countries. With its strong 55-year heritage and deep industry expertise, Capgemini is trusted by its clients to address the entire breadth of their business needs, from strategy and design to operations, fuelled by the fast-evolving and innovative world of cloud, data, AI, connectivity, software, digital engineering and platforms. The Group reported 2022 global revenues of €22 billion.
About the Service Line:
Our Insights and Data team helps our clients make better business decisions by transforming an ocean of data into streams of insight. Our clients are among Australia's top performing companies and they choose to partner with Capgemini for a very good reason - our exceptional people.

Job Overview: 

The Data Engineer role involves leading complex business re-engineering development projects and solving organisational problems through IT solutions, using Big Data concepts, platform tools and software engineering languages as required.

Key Responsibilities include:

• Designing, developing and implementing solutions to customers' organisational problems;
• Providing high quality service delivery on engagements;
• Designing and developing data reconciliation & metadata driven data processing frameworks;

Key Skills/Experience:

• Working experience and strong knowledge of Databricks;
• Knowledge of cloud computing, preferably AWS;
• Knowledge of data warehousing, including slowly changing dimensions (SCD);
• Knowledge of programming and shell scripting, e.g. Core Java, Python or Scala;
• Knowledge of SQL;
• Knowledge of GitHub and Jenkins;
• Knowledge of change data capture in a big data ecosystem;
• Knowledge of building real-time or batch ingestion and transformation pipelines;
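
By way of illustration of the data-warehousing skill above, a Type 2 slowly changing dimension (SCD) update, which keeps history by closing out changed rows rather than overwriting them, can be sketched in plain Python. The row layout and column names here are hypothetical, not part of the role description:

```python
from datetime import date

def scd2_merge(dim_rows, incoming, today):
    """Apply a Type 2 SCD update: close out changed rows, append new versions.

    dim_rows -- list of dicts with keys: key, value, valid_from, valid_to
                (valid_to is None for the current version of a row)
    incoming -- dict mapping business key -> latest source value
    today    -- effective date of this load
    """
    out = []
    seen = set()
    for row in dim_rows:
        if row["valid_to"] is None and row["key"] in incoming:
            seen.add(row["key"])
            new_value = incoming[row["key"]]
            if new_value != row["value"]:
                # Attribute changed: close the current version...
                out.append({**row, "valid_to": today})
                # ...and open a new current version effective today.
                out.append({"key": row["key"], "value": new_value,
                            "valid_from": today, "valid_to": None})
                continue
        out.append(row)
    # Brand-new business keys become current rows with no history.
    for key, value in incoming.items():
        if key not in seen:
            out.append({"key": key, "value": value,
                        "valid_from": today, "valid_to": None})
    return out
```

On Databricks this pattern is typically expressed as a Delta Lake MERGE rather than hand-rolled Python, but the versioning logic is the same.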

Additional skills/responsibilities of this role:

• Knowledge of supporting a production service in a DevOps-friendly environment;
• Knowledge of working in Agile/Scrum teams;
• Knowledge of API and cloud-native architecture;
• Knowledge of container technology (e.g. Docker, Kubernetes);
• Knowledge of wrangling data using libraries such as Pandas, Scikit-learn or NumPy;
• Knowledge of using notebooks such as Jupyter or Polyglot;
• Knowledge of test-driven development, test automation and continuous delivery.
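
As a small, hypothetical example of the data-wrangling skill above (the feed and column names are invented for illustration), a typical Pandas cleanup step might look like:

```python
import pandas as pd

# Hypothetical raw feed: inconsistent casing and missing values.
raw = pd.DataFrame({
    "customer": ["alice", "BOB", "alice", None],
    "amount":   [10.0, None, 5.0, 3.0],
})

cleaned = (
    raw
    .dropna(subset=["customer"])                           # drop rows with no business key
    .assign(customer=lambda d: d["customer"].str.title())  # normalise casing
    .fillna({"amount": 0.0})                               # default missing amounts
    .groupby("customer", as_index=False)["amount"].sum()   # aggregate per customer
)
```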

The following qualifications would be advantageous:

• Education level: degree in Computer Science, Information Systems or a related field;
• Databricks Associate certification and AWS certification;
• Understanding of the Financial Services industry.

