Work in interdisciplinary teams that combine technical, business and data science competencies.
Design and implement data warehouse solutions, spanning architecture, ETL processes, multidimensional modelling, and data mart implementation.
Integrate datasets and dataflows using a variety of best-in-class software, and profile and analyze large, complex datasets from disparate sources.
Guide and direct junior developers.
Shape and advise on detailed technical design decisions.
Develop scheduling scripts or configure load schedules.
Design and run unit tests.
Diagnose and fix bugs.
Migrate code between development and test environments.
Participate in supporting the development environment.
1–4+ years of data engineering experience.
High proficiency in a data integration package.
Must have experience with at least one of: Cloudera, Informatica, or Denodo.
Cloud development experience (e.g. AWS, Azure).
Experience implementing solutions using Hadoop/NoSQL technologies (e.g. HDFS, HBase, Hive, Sqoop, Flume, Spark, MapReduce, Cassandra, MongoDB).
Deep familiarity with relational database management systems (RDBMS).
Strong proficiency in SQL.
Able to design and implement relational data models.
Ideally a graduate in Computer Science or Information Systems.