Job description

  • Experience building and maintaining ETL and data pipelines
  • Working knowledge of Apache Sqoop, Flume, Spark, Airflow, Hive, etc.
  • Experience configuring and managing monitoring and visualization tools such as Grafana/Kibana, Prometheus, and the ELK stack
  • Must have solid experience with system integration across cloud and on-premises vendors
  • Must have experience in designing and proposing data solutions
  • Exposure to NoSQL databases preferred
  • Good team player with experience managing teams and building client rapport
  • Must have exposure to cluster-based hyperscalers
  • Experience with Kafka streaming platforms
  • Cloud and containerization exposure is a plus
  • Analyzing user requirements, envisioning system features and functionality.
  • Design, build, and maintain efficient, reusable, and reliable software solutions by setting expectations and feature priorities throughout the development life cycle
  • Identify bottlenecks and bugs, and recommend system solutions by weighing the advantages and disadvantages of custom development
  • Contributing to team meetings, troubleshooting development and production problems across multiple environments and operating platforms
  • Understand Architecture Requirements and ensure effective Design, Development, Validation and Support activities
Besides the candidates' professional qualifications, we also place great importance on various aspects of the personality profile. These include:
  • High analytical skills
  • A high degree of initiative and flexibility
  • High customer orientation
  • High quality awareness
  • Excellent verbal and written communication skills

