Big Data Engineer

Company:
Location: Singapore

*** Mention DataYoshi when applying ***

  • Join one of the fastest-growing companies in the industry

  • Excellent opportunities for learning and development

  • Be a part of a global and diverse work environment

Our client is an international product design and development company that has, over the last few decades, built a reputation for crafting innovative solutions that turn complex business challenges into real business outcomes.

The Job

In this role, you will be responsible for:

  • Writing high-quality, testable code following clean-code principles

  • Implementing functionality by following the defined software development process without direct supervision

  • Understanding project and requirement documentation

  • Creating documentation describing your code

  • Supporting existing and potential customers with requirements capture, solution architecture, system design, and solution prototyping

  • Participating in Agile Scrum activities: daily standups, demo sessions, retrospectives, planning, etc.

The Profile

  • You have at least a Diploma in Computer Science

  • You have a minimum of 3 years of experience in Software Engineering and Big Data

  • You possess advanced knowledge of at least one programming language such as Java, Python, or Scala, as well as SQL and Bash.

  • You have experience with any of the following big data technologies and frameworks: Spark, Spark Streaming, Kafka/Kafka Streams, Hadoop, Yarn, HDFS, Hive, Airflow, Jupyter, Parquet, ORC, Avro, CarbonData, Lambda, Kappa, Data Lake.

  • You are familiar with NoSQL databases, data platforms, and machine learning, and have experience with JVM-based technologies and frameworks

  • You have a background across different platforms, with a strong focus on backend, Big Data, and analytics solutions

  • Your core professional expertise includes platform architecture, data pipeline architecture, and infrastructure deployment and management

  • You have experience building traditional cloud data warehouses and data lakes, and have worked on projects with container and resource management systems: Docker, Kubernetes, Yarn.

  • You have designed, prototyped, and adjusted end-to-end solutions, and have developed independent CI/CD pipelines

  • You have designed and implemented ingestion workflows to parse, transform, and validate input data

  • You have provided technical guidance to the support team and communicated with customers during every phase.

  • You pay strong attention to detail and deliver work of a high standard

  • You are results-driven and show a high level of resilience

  • You enjoy finding creative solutions to problems.

Ref: 16635039


'N' Levels / 'O' Levels, Diploma, Bachelor's / Honours
Minimum of 3 year(s) experience needed for this position

