Job description

We are looking for strong developers with solid hands-on experience in Big Data technologies who have worked end-to-end in application development.


In this role, the candidate will design and develop robust, scalable analytics processing applications and implement data integration pipelines that operate with high throughput and low latency. They will bring experience in solving big data analytics use cases and in working with frameworks for managing high-volume data processing and analytics.


The candidate will be responsible for:

  • understanding data patterns and solving analytics use cases
  • converting solutions into implementations based on Spark and other big data tools
  • building data pipelines and ETL from heterogeneous sources into Hadoop using Kafka, Flume, Sqoop, Spark Streaming, etc.
  • tuning and optimising the performance of Spark data pipelines
  • managing data pipelines and delivering enhancements to resolve technical issues
  • automating day-to-day tasks
  • providing technical support and guidance to business users

To excel in this role, the candidate will ideally bring:

  • 3+ years of experience with the Hadoop ecosystem and big data technologies, within 5-8 years of overall software development experience
  • experience reading and writing Spark transformation jobs in Java or Scala (preferably Java)
  • hands-on experience with Hadoop and Spark
  • good knowledge of the core AWS services (S3, IAM, EC2, ELB, CloudFormation)
  • good understanding of Linux and networking concepts
  • good understanding of data architecture principles, including data access patterns and data modelling
  • a tertiary degree in IT or a similar field
