The candidate develops and drives self-built models that consume large internal and external data sets to provide actionable insight into short- to medium-term market trends in the dry bulk market.
The ideal candidate is responsible for the cross-functional analysis of business requirements for improving systems, and handles Data Lake support and enhancements.
Strategy & Planning
- Provide consultancy and expert advice on Data Lake architecture and its integration with other systems, including feasibility assessments.
- Partner with business representatives to understand business processes and the underlying informational and/or process-automation needs, then translate those needs into formal, documented business requirements following established methodologies and processes.
- Collaborate on IT and Business Innovation projects to research, plan and propose solutions, recommending software or hardware components that meet the company’s business and operational requirements.
Regional Freight Indicators
- In close cooperation with Business Innovation and the Regional Chartering Managers, develop reporting platforms with self-built data science models that provide indicators on short- to medium-term freight movements.
- Build and backtest theories on market drivers with other members of the Business Innovation team.
- Work with external data providers to ensure the data flows needed by the models are timely, accurate and cost-efficient.
- Continuously refine models to reflect changes in Swire strategy or shifts in market levers.
External Key Customers
- Act as the key resource on other outward-facing data projects with customers.
- Refine the way Swire Bulk communicates data and reports to key customers.
- Support business analytics applications by providing operational support, including analysis and resolution of issues relating to supported applications and interfaces.
- Extract, transform and load data into the database, making it available as analytics-ready datasets.
- Develop and maintain modern data-driven models and the report visualisation layer.
- Document the specifications, design, features and operation of analytics systems in a Functional/Technical Specification used by the application development team for translation into program code.
- Test and debug systems and perform quality-control checks before releasing them for UAT.
- Maintain good working relationships with 3rd-party vendors for application support and/or changes.
Requirements
- Bachelor’s degree in Information Systems, Computer Science or a related field
- Relevant work experience in data science, machine learning and business analytics.
- Practical experience in programming languages, e.g. Python (with pandas), R, Scala, etc. (Python preferred).
- Strong proficiency in database and ETL technologies, e.g. SQL, AWS Glue (preferred), Redshift, and Big Data technologies, e.g. PySpark, Hive, etc.
- Experience with machine-learning modelling techniques, including guarding against data leakage and fine-tuning models.
- Experience with AWS Cloud technologies, e.g. S3, SageMaker.
- Understanding of scripting languages such as Python, Perl, PHP and/or shell scripts.
- Experience with data visualisation tools (Power BI preferred)