Our purpose in the Business & Private Bank Data Engineering Team is to develop and deliver best-in-class data assets and data domain management for our Business and Private Banking customers and colleagues. We are passionate about simplicity and meeting the needs of our stakeholders, whilst innovating to drive value.
About the role:
As a Senior Consultant Data Engineer, you will bring extensive expertise in data handling and curation capabilities to the team. You’ll be responsible for designing and building intelligent domains using market-leading tools, ultimately improving the way we work in B&PB. Experience with and knowledge of AWS, Data Vault and Star Schema is highly desired.
- Actively engage in the whole lifecycle (inception, design, development, testing, deployment, operation, monitoring and refinement) of data services.
- Monitor, measure and maintain availability and health of our data assets and associated platforms, working closely with our technology partners.
- Manage incidents and problems, apply fixes and resolve systemic issues; triage issues with stakeholders, then identify and implement solutions to restore productivity.
- Advocate for changes that improve overall platform health and performance.
- Maintain continual oversight of the end-to-end data lifecycle, identifying opportunities to automate, build and maintain our data pipelines and data assets.
The Engineering Gurus we seek possess:
- Proven experience in the same or a similar role, with a tertiary qualification in Computer Science or equivalent.
- Mature problem-solving and critical-thinking skills.
- Strong communication skills, with experience explaining complex issues to senior stakeholders.
- An understanding of risk in data: building outcomes that protect stakeholders and customers, and addressing risks by improving the bank’s overall risk and control environment.
Key Skills and Competencies you will bring with you:
- Hands-on design, build and implementation experience with data engineering pipelines using SQL, Python and ETL/ELT/data-prep tools.
- Implementation experience designing and building data solutions in the cloud (preferably AWS and Azure) as well as on-premises assets.
- Demonstrated development and performance-tuning capability with RDBMS (e.g. Oracle, Teradata, Snowflake, Redshift) and NoSQL databases.
- Experience with API-based interactions and bulk integration with on-premises and cloud-based platforms.
- Hands-on development and implementation experience with batch and event-based compute using Apache Spark, Hive, Sqoop, Beam, etc.
- Exposure to an event streaming platform is a definite plus, especially Apache Kafka or AWS.
To be eligible to apply, you must have Australian or New Zealand citizenship or permanent residency. Please note candidate screening and interviews may be conducted prior to the closing date of the job advert; apply today!
We value and embrace diversity of thought, style and working arrangements to ensure our workforce is representative of the communities that we serve and that our thinking, solutions, and products are the best they can be.