Role – Data Engineer
Job Location – Mumbai/Pune
This position is for a Data Engineer specializing in data ingestion (Kafka/Spark) projects. Strong experience in Kafka, Java, and Spark development is a key competency for this role.
3+ years of IT experience
Minimum 2-3 years of relevant experience with Kafka, HDFS, Hive, MapReduce, and Oozie
Data Integration experience required
Experience designing and developing data ingestion pipelines
At least 1 year of working experience in Spark development
Good hands-on experience with Core Java and other programming languages such as Scala or Python
Working experience with Kafka
Excellent understanding of object-oriented design and design patterns
Experience working independently and as part of a team to debug application issues using configuration files, databases, and application log files
Good knowledge of optimization and performance tuning
Working knowledge of an IDE (Eclipse or IntelliJ)
Experience working with a shared code repository (VSS, SVN, or Git)
Experience with a software build tool (Ant, Maven, or Gradle)
Knowledge of web services (SOAP, REST)
Good experience with basic SQL and shell scripting
Databases: Teradata, DB2, PostgreSQL, MySQL, Oracle (one or more required).
Able to work with and enhance predefined frameworks
Able to communicate effectively with customers
Must have experience with, or an understanding of, promoting Big Data applications into production
Willingness to travel to customer locations and core GDC sites (Pune/Mumbai/Manila) when required.
Nice-to-have experience:
Working experience with Apache NiFi
Exposure to XML and JSON processing
Awareness of Big Data security and related technologies
Experience with web servers: Apache Tomcat or JBoss
Preferably, at least one project the candidate has worked on should be in production
Understanding of DevOps tools such as Git, Jenkins, Docker, etc.