- Use the latest cloud and data engineering tools to build out a new platform
- 6-month initial contract with likely extensions
- 50% work from home, 50% office-based
Are you a Data Engineer with strong Azure or AWS experience? If the answer is yes, this large Financial Services organisation is looking for you...
You will help build out a new data platform; the foundations are already in place and need fleshing out. The platform sits on the Microsoft stack, including Azure, with a whole host of other technology utilised to help advance it as the company moves its data into a future state.
Your responsibilities will include:
- Manage data pipelines
- Drive automation through effective metadata management
- Learn and use modern data preparation, integration and AI-enabled metadata management tools and techniques, including:
  - Tracking data consumption patterns
  - Performing intelligent sampling and caching
  - Monitoring schema changes
  - Recommending, and where possible automating, existing and future integration flows
- Collaborate across departments with data science teams and business (data) analysts to refine their data and consumption requirements for various data and analytics initiatives
- Educate and train
- Help ensure compliance and governance during data use
- Be a data and analytics evangelist
You should have a good mix of the following skills and experience (and I appreciate this list is lengthy):
- Experience with advanced analytics tools and object-oriented/functional scripting languages such as R, Python or Java
- Strong ability to design, build and manage data pipelines encompassing data transformation, data models, schemas, metadata and workload management
- Experience with popular database programming languages (e.g. SQL, PL/SQL) for relational databases, and experience with or certification in NoSQL/Hadoop-oriented databases such as MongoDB or Cassandra
- Experience building and optimizing data pipelines, pipeline architectures and integrated datasets from large, heterogeneous datasets using traditional data integration technologies
- Experience working with and optimizing existing ETL, data integration and data preparation flows, and helping to move them into production
- Experience with open-source and commercial message queuing technologies (e.g. Kafka, JMS, Azure Service Bus) and stream data integration and analytics technologies (e.g. Databricks)
- Experience with popular data discovery, analytics and BI tools such as Tableau, Qlik or Power BI
- Experience working with data science teams to refine and optimize data science and machine learning models and algorithms
- Demonstrated success working with large, heterogeneous datasets to extract business value using popular data preparation tools such as Trifacta, Paxata or Unifi
- Basic experience working with data governance, data quality and data security teams
- Demonstrated ability to work across multiple deployment environments (cloud, on-premises and hybrid) and multiple operating systems
- Adept in agile methodologies and capable of applying DevOps and, increasingly, DataOps principles to data pipelines
To be considered for the role, click the 'apply' button. For more information about this and other opportunities, please contact James Perry on 07 3339 5611 or email firstname.lastname@example.org, quoting the above job reference number.
Paxus values diversity and welcomes applications from Indigenous Australians, people from diverse cultural and linguistic backgrounds, and people living with a disability. If you require an adjustment to the recruitment process, please contact me using the above details.