We are actively seeking a highly skilled Senior Data Engineering Developer to lead the design, development, scaling, and maintenance of our cutting-edge SaaS infrastructure. In this pivotal role, you will collaborate closely with cross-functional teams encompassing Data Science, Engineering, and Product/Business Technology. Your primary mission will be to construct a robust data infrastructure, establish streamlined processes, and enhance our toolset.
Essential Job Duties And Responsibilities
Visionary Leadership: Spearhead the strategic plan for Business Intelligence (BI) and Data Warehousing, turning your vision into a reality.
Team Building: Assemble and nurture a high-caliber BI and Data Warehouse team, fostering their growth and skill development.
Collaborative Innovation: Cultivate collaborative relationships with Product Managers, Analysts, and Software Engineers to decipher data requirements and deliver impactful solutions.
Infrastructure Expertise: Architect, construct, oversee, and optimize foundational data infrastructure to drive our success.
Real-time Insights: Implement a monitoring infrastructure to provide real-time insights into the status of our data pipelines.
Process Enhancement: Implement and supervise processes that improve solution performance.
Optimization: Optimize schemas, including partitions, compression, and distribution, to balance costs and performance.
Bespoke Solutions: Craft custom data infrastructure solutions that are not readily available off-the-shelf.
Data Integration: Create and maintain custom data ingestion pipelines and seamless integrations with third-party platforms.
Data Quality: Champion Data Quality and the creation of high-impact dashboards.
SLA Management: Define and manage Service Level Agreements (SLAs) for all production datasets and processes.
Team Support: Provide guidance and support to our data team, assisting with design decisions and performance optimization strategies.
Minimum Qualifications, Job Skills, Abilities:
Bachelor's degree in a technical or quantitative field of study (e.g., computer science, mathematics, physics, or statistics), or equivalent substantial related experience.
8+ years of experience with distributed data technologies.
Demonstrable experience in ETL and ELT in cloud SaaS/PaaS infrastructures.
Proficiency with serverless microservices such as GCP Cloud Functions and AWS Lambda.
Hands-on experience with streaming and batch data pipelines.
Expertise in databases such as BigQuery and MS SQL Server, with a strong grasp of data and domain architecture.
Expert-level SQL for transforming raw source data into well-modeled columns and tables.
Experience with GCP solutions such as Dataflow and Pub/Sub is a significant plus.
Understanding of and experience with AI/ML platforms and pipelines, such as Vertex AI.
Benefits And Perks
Competitive compensation packages, including bonus and options
Medical, dental, and vision benefits
Paid time off
Telecommuting and remote-work options
Support for continuing education
Team off-sites, social events, annual company events, and frequent extracurricular activities
Unlimited snacks and drinks
This position is eligible for remote work.
Employment Type: Full-Time