- Great data science mentors!
- GCP, Python, Spark, Machine Learning in production!
- Build recommendation and next-best-action models!
This is an excellent opportunity for a Junior Data Scientist to join a team where you will have great mentors, challenging work, access to big data, and best-in-class data science tools such as Spark, Google Cloud Platform, BigQuery, various machine learning packages, and Python.
This top data science team builds commercial analytics and machine learning solutions that have a big impact on a large number of Australian customers. They've had excellent results and are growing to improve the scalability of their models, extend their work, and automate various aspects of their data and ML processes.
What business problems does this team solve?
The team uses customer, sales transaction and digital data sets to provide better deals, offers and recommendations to customers and improve sales! They work across multiple squads and projects, but everyone is using machine learning to make marketing and customer experiences more personalised.
If you were working for this team, here are some of the things you might have done in the last 6 months:
- Gain a solid understanding of the code base for one or more production machine learning models and data pipelines built with Python, SQL and Apache Spark and deployed on Google Cloud Platform. If you notice any bugs, you'll fix them proactively.
- Look for ways to introduce automation, which could involve small enhancements to code, investigating or adding new features to a model, or optimising and performance-tuning models.
- Assist a senior data scientist to scale a model using Kubernetes or Kubeflow, so it is easier to move into production and scale from X million customers to 5X million customers.
- In another squad, you might develop APIs to help other analytics or technology teams in the business consume and use your data science products.
- The datasets this team works on are huge, so the typical tools of the trade will be Python, SQL, Spark, Google Cloud Platform, Kubernetes / GKE / Kubeflow, APIs and more from day one!
Sounds great, so what skills and experience do you need to apply?
Experience is important. The more you can demonstrate your ability to do similar or transferable tasks or projects in a real business, the easier it will be.
What will set you apart from all the other applicants?
- You have 1 to 3 years of data science experience and are already a strong practitioner with Python, SQL and tree-based algorithms
- At least a bachelor's degree, preferably in Computer Science, Software Engineering or a similar field (perhaps Aeronautical Engineering?) that combines strong numerical, data and technical skills: understanding data structures, algorithms, CI/CD, etc.
- We prefer candidates who have already used Kubernetes or Kubeflow to containerise data science solutions, along with GCP, AWS or Azure
- You will definitely need full, unrestricted Australian work rights, and you will be based in Sydney
- Heavy preference will be given to candidates with good commercial data science experience using real customer data at big companies or start-ups
- Preferred availability is within 1 to 4 weeks; no relocation or visa sponsorship is available
We will look first at your commercial experience: where have you worked, what have you built, and how do you communicate that in your CV and cover letter?
Due to the volume of applications, we will contact shortlisted candidates as soon as possible for more detailed interviews and briefings.
Please feel free to send your CV and cover letter directly to firstname.lastname@example.org