Minimum 5-12 years of professional experience as an SRE handling L2/L3 production support, or in IT operations at a financial institution.
Candidates with 2+ years of hands-on operational experience with any one of the following tools will be given preference:
o Kafka
o CDH
o OpenShift
o J2EE/Spring Boot
o Ansible
o Kubernetes
o MySQL
o Hive
o Tableau
o Druid
o Jenkins Pipeline
Good knowledge of Big Data querying tools such as Presto, Hive, and Druid
Knowledge of Hadoop, Spark, Hive, S3
Knowledge of cloud-native applications on AWS, Azure, or GCP
Good knowledge of stream-processing systems such as Flink or Spark Streaming
Good knowledge of Kubernetes
Basic knowledge of databases (MySQL, PostgreSQL, Hive, etc.), including DDL and DML
Experience with integration of data from multiple data sources
Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
Knowledge of various ETL techniques and frameworks
Experience with various messaging systems, such as Kafka or RabbitMQ
Storage - SQL databases, MariaDB, Apache HBase
Knowledge of CI/CD tooling - Maven, Git, Jenkins
Monitoring - ELK, Grafana, Prometheus
Good understanding of Lambda Architecture, along with its advantages and drawbacks
Good to have: 3+ years of Big Data operations experience
Good to have: knowledge of file systems and file types
High-level analytical and problem-solving skills.
Good project management and communication skills.
Well versed with SRE practices
Proficient understanding of distributed computing principles
Bachelor's degree in Computer Science or a related field. 5-12 years of IT experience, of which at least 5 years in a similar role, ideally with Big Data tools.