Work in collaboration with engineers and stakeholders to build a platform for enabling data-driven decisions.
Build reliable, scalable, CI/CD driven streaming and batch data engineering pipelines.
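As a toy illustration of the kind of batch pipeline this role involves, the sketch below chains extract, transform, and load stages as plain Python functions. All names (extract, transform, load, run_pipeline) are hypothetical; in production an orchestrator such as Airflow would schedule and retry each stage.

```python
# Minimal sketch of a batch ETL pipeline as composable stages.
# All function names are illustrative, not tied to any framework.

def extract():
    """Pull raw records (hard-coded sample data stands in for a source)."""
    return [
        {"user": "alice", "amount": "19.99"},
        {"user": "bob", "amount": "5.00"},
    ]

def transform(records):
    """Normalize types: parse decimal strings into integer cents."""
    return [
        {"user": r["user"], "amount_cents": int(round(float(r["amount"]) * 100))}
        for r in records
    ]

def load(records, sink):
    """Append transformed records to a sink (a list stands in for a table)."""
    sink.extend(records)
    return len(records)

def run_pipeline(sink):
    """Wire the stages together end to end."""
    return load(transform(extract()), sink)

warehouse = []
rows_loaded = run_pipeline(warehouse)
```

Keeping each stage a pure function makes the pipeline easy to test in isolation and to port to an orchestration framework later.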
Oversee and govern the expansion of the existing data architecture and the optimization of query performance and data warehouse design.
Create conceptual data models that identify key business entities and visualize their relationships.
Create detailed logical data models, identifying all entities, attributes, and relationships needed to support business intelligence.
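The modeling work above can be sketched in code: the example below expresses a logical model as Python dataclasses, with a foreign-key-style attribute capturing a one-to-many relationship. The entities (Customer, Order) and the helper orders_for are purely hypothetical, not drawn from any real schema.

```python
from dataclasses import dataclass

# Hypothetical logical model: entities as dataclasses, attributes as
# typed fields, relationships as foreign-key-style references.

@dataclass
class Customer:
    customer_id: int
    name: str

@dataclass
class Order:
    order_id: int
    customer_id: int  # relationship: each Order belongs to one Customer
    total_cents: int

def orders_for(customer: Customer, orders: list) -> list:
    """Resolve the one-to-many Customer -> Order relationship."""
    return [o for o in orders if o.customer_id == customer.customer_id]

alice = Customer(customer_id=1, name="Alice")
orders = [Order(1, 1, 1999), Order(2, 2, 500), Order(3, 1, 250)]
```

In a real warehouse these entities would become dimension and fact tables, but the structure (entities, attributes, relationships) is the same one a logical model documents.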
Design and maintain data architecture components, including storage (cloud data warehouse, S3 data lake), orchestration (Airflow), processing (Spark, Flink), streaming (Kafka), BI tools, a graph database, and a real-time, large-scale event aggregation store.
Work on cloud data warehouses, data as a service, business intelligence, and machine learning solutions.
Wrangle data across a diverse set of sources and environments.
Deliver cutting-edge data and analytics solutions.
Identify strategic and operational KPIs for the team and drive delivery of committed targets.
Desired Candidate Profile
Strong SQL knowledge and programming skills in Scala or Python.
3+ years of applicable data warehousing, data engineering, or data architecture experience.
Experience with the GCP stack (BigQuery, Databricks on GCP) is a plus.
Ability to design data analytics solutions to meet performance and scaling requirements.
Demonstrated analytical and problem-solving abilities, particularly with large data sets.
Solid understanding of data warehousing concepts and modern data warehouse/Lambda architectures.
Good understanding of the Machine Learning and Artificial Intelligence (AI) solution space.
Strong communication and interpersonal skills across all levels of management.
Detail-oriented, collaborative, and a strong team player.