Job description

The role

This role focuses on improving our data and data pipeline architecture, as well as the data flow and collection for cross-functional teams. The ideal candidate will also have the chance to build new component integrations from the ground up.


The role will also include:

  • Monitoring and managing data pipelines, ensuring accuracy and stability
  • Developing and supporting business-as-usual (BAU) tasks
  • Running proofs of concept (POCs) and adopting new technologies to improve data platform management at large scale and high throughput
  • Identifying, designing and implementing improvements, e.g. automating manual processes and optimising data delivery
  • Overall, maintaining a robust data platform that can support activities ranging from BI to AI

At LoveBonito, we continuously look for ways to improve. As a Data Engineer, you will enable better measurement and ensure measurement accuracy, so that we know how we are doing and where we want to improve.


Main Responsibilities:

  • Understand both business demands and available upstream system assets
  • Understand structured and unstructured datasets
  • Apply good knowledge of big data systems (Hadoop, HDFS, Hive, Apache Spark, Apache Flink, Kafka, etc.), with experience handling batch and real-time data streaming
  • Be skilled in the Python and Scala programming languages
  • Have strong knowledge of SQL and a good understanding of DBMSs
  • Propose new technologies, middleware, tools, etc. to improve system architecture
  • Create and automate data workflows such as extract, transform, load (ETL) pipelines
  • Identify gaps or opportunities in existing systems and contribute to their improvement
  • Manage stakeholders
  • Bring good knowledge and experience to the team

Qualifications & Experience

  • 2 to 5 years' experience in the design and development of data warehouse systems
  • About 1 to 2 years' experience developing big data solutions
  • BS/MS in engineering or another technical discipline
  • Proven experience delivering production-ready data engineering solutions, including requirements definition, architecture selection, prototype development, debugging, unit-testing, deployment, support, and maintenance

Experience in the following will be considered advantageous:

  • Airflow Orchestration
  • Redshift/BigQuery
  • AWS/GCP cloud
  • Deploying machine learning models and frameworks
  • Databricks
  • Building APIs

