- Design, development, and deployment of a large-scale, AWS cloud-based enterprise data integration solution.
• Design, construct, install, test and maintain data management systems.
• Support the design and implementation of emerging technologies to solve use cases across our clients' Process Automation programme.
• Build high-performance algorithms, predictive models, and prototypes.
• Implement data solutions and automate data processing.
• Develop and implement a strategy for compiling process data from various platforms and aggregating it into a format suitable for searching and analysis.
- BA or MA in a quantitative or hard-science discipline: statistics, operations research, computer science, informatics, engineering, applied mathematics, economics, physics, or chemistry.
• 2+ years' experience in the data warehouse space.
• 2+ years' experience in custom ETL design, implementation, and maintenance.
• 2+ years' experience working with either a MapReduce or an MPP system.
• 2+ years' experience with programming languages; R or Python preferred.
• 2+ years' experience working with and analyzing large data sets to solve problems.
• Deep, hands-on experience with schema design and dimensional data modelling.
• Ability to write efficient SQL statements.
• Ability to analyze data to identify deliverables, gaps and inconsistencies.
• Ability to engage directly with business stakeholders (marketing, sales, finance, etc.) to understand their business objectives, and to work with our data analysts and data scientists to implement the most appropriate solutions for clients' needs.
• Self-motivated, creative, and collaborative.
To apply for this position, send your CV and a cover letter to firstname.lastname@example.org