[CANDIDATES WHO REQUIRE WORK PASSES NEED NOT APPLY]
Responsibilities
Collaborate with teams of experienced actuaries, developers, and business experts to deliver projects
Design and propose end-to-end data flows for data projects as well as databases to support web-based applications
Design and implement data warehouses and data marts to serve data consumers
Carry designs through implementation, operationalization, and ongoing maintenance
Design and model databases, organizing data at both the macro and micro level and providing logical data models for consumers
Database performance tuning and data lifecycle management
Assist in supporting and enhancing existing data pipelines and databases
Requirements
3+ years of in-depth experience working with MS SQL Server or another relational database management system
Highly knowledgeable in ETL/ELT design and implementation, using SSIS or a comparable ETL/ELT tool
Skilled in both complex query performance tuning and database performance tuning for MS SQL Server
Have experience in data validation and data QA as data flows through the pipeline
Have experience in building and designing data warehouses/data marts
Understand the importance of performance and be able to apply best practices to ensure it in data-centric projects
Dynamic and a quick, eager learner, as knowledge of new technologies will need to be acquired on the job
Some hands-on experience developing data pipelines in a Hadoop-based or Apache Spark environment (Cloudera preferred, but similar stacks considered) will be an advantage
Some experience developing data solutions using native IaaS and PaaS services on AWS (EMR, Redshift, RDS, S3) is a strong advantage
Have strong, clear communication skills, as this person will present, propose, and advise on data solutions to both management and the technical team