Job description

BA/BS or Master's in computer science or equivalent practical experience


  • At least 3 years of experience as a data engineer
  • Solid experience with Apache Hadoop / Spark platforms like Hortonworks
  • Experience in deployment, maintenance, and administration tasks related to Cloud (Azure, AWS, GCP or Private Cloud), OpenStack, Docker, Kafka, Airflow, NiFi and Kubernetes
  • Familiarity with monitoring and log management tools like Splunk, AppDynamics, App Insights, ELK
  • Familiarity with networking including DNS, Virtual Networks, WAF, and VPN
  • Familiarity with network and platform security strategies, algorithms, and implementation practices
  • Robust object-oriented design pattern knowledge and implementation experience using one or more languages like Java, Scala or Python
  • Experience with API design using REST / SOAP and OAuth 2.0
  • Experience with databases such as MySQL, Sybase, MongoDB, InfluxDB, Cassandra or HBase
  • Experience with DevOps tools like Git, Maven, and Jenkins
