Key role in driving the Bank’s Data Strategy to build cloud-based data assets
Cutting-edge technology - Be part of an evolving team as we transform
Permanent role (salary plus benefits)
About us
With more than 160 years of history, we are proud of our position in the community, with more satisfied customers than any other Australian bank. Every day, we work hard to bring our company purpose to life, feeding into the success of our customers and communities, not off it.
We're more than just a bank with banking products. We change the lives of customers and communities. Commercial actions with heart!
Our time is now. We are challenging the status quo and we're excited about our future!
About the role
The Data Cloud Engineer is responsible for developing, constructing, testing and maintaining the Bank’s Enterprise Data Cloud Assets to meet the organisation’s data and reporting requirements. This includes importing, transforming, curating and loading structured and unstructured data from internal or external, on-premises or cloud-based source systems to build cloud-based data assets.
You will play a key role in driving the Bank’s Data Strategy to build cloud-based data assets. These assets include a Data Lake, a Cloud Data Warehouse and Reporting, built using cloud technologies such as data pipelines, Change Data Capture (CDC), Kafka/Spark, Stitch and Databricks.
Working in a cross-functional team, you will engage and communicate widely across the business, drawing on strong stakeholder engagement skills to understand functional and technical requirements and deliver high-performing, end-to-end solutions for various data initiatives.
Key responsibilities include:
Manage all delivery risks, meet deadlines and proactively manage the health and vitality of Data Cloud Assets
Develop and maintain optimal data assets by extracting, transforming and loading data from a wide variety of structured and unstructured sources
Develop, test, and deploy code to a high standard in a variety of programming environments
Provide application-specific system analysis services to support discovery, architectural and solution design activities
Support iteration management with story sizing, flow of work, technical risk and sequencing of work
Perform release planning, implementation, and code reviews
About you
Proven development experience in Python
Demonstrated experience with cloud-based Data Warehouse solutions, using technologies such as Change Data Capture (CDC), Kafka/Spark, Stitch and Databricks
Experience deploying data workloads using CI/CD tooling (GitLab, Terraform or Ansible)
Experience with streaming data pipelines as well as batch-oriented data processing
Experience with Cloud EDWs, Data Lakes and Information Governance suites (Lineage, Glossary, Reference Data, etc.)
Experience in API development for analytics/ML endpoints
Hands-on experience working with large data sets and high volumes, with a focus on optimal performance for loading and retrieving data
Well-developed skills in data modelling, defect analysis and prioritisation
Exposure to practices that support an agile way of working (e.g. pair programming, continuous build and integration, test-driven development)
Why us?
There's so much more to a career with Bendigo and Adelaide Bank than just banking.
Get real benefits, work-life balance and flexibility. You bring your brilliant mind and we’ll help you take your learning to the next level with on-the-job training and external development opportunities - we want you to shine. After all, YOU are the difference that makes us the better big bank.
At Bendigo and Adelaide Bank we believe a diverse workforce supported by an inclusive culture is central to our success, and we actively encourage applications from those who bring diversity of thought to our business. We support candidate requests for adjustment to accommodate an illness, injury or disability so they can equitably participate in the selection process.
Closing date: Feb 11, 2021