Job Description
- Create and maintain an optimal data lake setup.
- Assemble large and complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from various sources using SQL (a minimal ETL sketch follows this list).
- Advanced working knowledge of SQL, including query authoring, experience with relational databases, and working familiarity with a variety of database systems.
- Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Build processes supporting data transformation, data structures, metadata, dependency and workload management.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
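As a concrete illustration of the extraction, transformation, and loading work described above, the following is a minimal sketch in Python. It uses the standard-library sqlite3 module as a stand-in for a production warehouse; the table and column names (raw_orders, customer_spend) are hypothetical, not taken from this posting.

```python
# Minimal ETL sketch: extract rows from a source table, aggregate them,
# and load the result into a reporting table. sqlite3 stands in for a
# real warehouse; table and column names are hypothetical.
import sqlite3

def run_etl(db_path: str) -> None:
    conn = sqlite3.connect(db_path)
    try:
        # Extract: read raw order events landed by an upstream ingest job
        # (assumes the raw_orders table already exists).
        rows = conn.execute(
            "SELECT customer_id, amount_cents FROM raw_orders"
        ).fetchall()

        # Transform: total spend per customer; in practice this step is
        # often pushed down into the SQL itself.
        totals: dict[int, int] = {}
        for customer_id, amount_cents in rows:
            totals[customer_id] = totals.get(customer_id, 0) + amount_cents

        # Load: upsert the aggregates into a reporting table.
        conn.execute(
            "CREATE TABLE IF NOT EXISTS customer_spend "
            "(customer_id INTEGER PRIMARY KEY, total_cents INTEGER)"
        )
        conn.executemany(
            "INSERT INTO customer_spend (customer_id, total_cents) "
            "VALUES (?, ?) ON CONFLICT(customer_id) "
            "DO UPDATE SET total_cents = excluded.total_cents",
            totals.items(),
        )
        conn.commit()
    finally:
        conn.close()
```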
Requirements
- Experience with AWS data analytics services such as EMR, Glue, Athena, Kinesis, MSK, Elasticsearch, QuickSight, and Redshift.
- Experience with big data tools: Hadoop, Spark, Kafka, Hive, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (a DAG sketch follows this list).
- Experience with stream-processing systems: Storm, Spark Streaming, etc. (a streaming sketch follows this list).
- Experience with object-oriented/functional scripting languages: Python, Scala, etc.
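To make the workflow-management requirement concrete, here is a minimal orchestration sketch, assuming Airflow 2.x; the DAG id, schedule, and task callables are placeholders rather than anything specified by this posting.

```python
# Illustrative Airflow 2.x DAG: an extract task followed by a transform
# task on a daily schedule. All names and task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source systems")

def transform():
    print("reshape the extracted data for the warehouse")

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    # transform runs only after extract succeeds.
    extract_task >> transform_task
```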
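Likewise, a minimal stream-processing sketch using Spark Structured Streaming with a Kafka source; the broker address and topic name are placeholders, and running it requires the spark-sql-kafka connector package on the Spark classpath.

```python
# Illustrative Spark Structured Streaming job: read a Kafka topic and
# maintain a running count of messages per key, printed to the console.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "events")                        # placeholder topic
    .load()
)

# Kafka keys arrive as bytes; cast to string and count messages per key.
counts = events.select(col("key").cast("string")).groupBy("key").count()

query = (
    counts.writeStream
    .outputMode("complete")  # emit the full updated counts each trigger
    .format("console")
    .start()
)
query.awaitTermination()
```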