Job description

Job Title: Staff Data Engineer

FTE, hybrid - 2 days/week in office (Tuesdays & Thursdays)

Our client is a leader in cloud-first networking and security services, providing solutions that simplify, scale, and deliver reliable networking experiences for organizations worldwide. Named a Top 25 Cyber Security Company by The Software Report and one of Inc. magazine’s Best Workplaces for 2020, they are committed to empowering enterprises, including 70% of the Fortune 500, to take full advantage of the cloud.

They are seeking a Staff Data Engineer to join their Cloud Engineering team in Burnaby, BC. You will be responsible for developing platforms and products that enable next-level networking for their SaaS product line. You will work closely with data scientists and product teams to curate and refine the data that powers the organization's cloud products. If you're passionate about the nexus of data and computer science, and excited about driving new products through data insights, this is the role for you!

Key Responsibilities

  • Data Curation: Curate and aggregate large-scale data from multiple sources into datasets suitable for research and development by data scientists, threat analysts, and developers.
  • Storage Solutions: Design, test, and implement scalable data storage solutions, particularly using data stores such as ClickHouse and OpenSearch.
  • Data Monitoring: Develop mechanisms for monitoring data sources over time using statistical methods, summarization, and data quality checks.
  • API Development: Design, develop, and maintain secure, scalable APIs to enable seamless data integration and retrieval processes for internal and external applications.
  • Algorithm Implementation: Leverage computer science algorithms and probabilistic data structures to distill large datasets into insightful analytics.
  • Production Systems: Convert prototypes into production-level data engineering solutions by employing best software engineering practices and modern deployment pipelines.
  • Collaboration: Work closely with software engineering, data science, and product teams to deploy data pipelines and applications in Spark and other modern frameworks.
  • Automation & Operations: Build and maintain automation tools for deployment, monitoring, and system operations.
  • Testing: Create test plans, run tests using automated tools, and ensure the integrity and accuracy of data products.

Qualifications

Experience:

  • 12+ years of experience in Python 3 and 2+ years of experience with Spark (Scala is a plus).
  • 5+ years of experience in data engineering or data science in large-scale data environments.
  • 3+ years of experience with SQL, relational databases (MySQL, PostgreSQL), and developing ETL pipelines.
  • Experience with ClickHouse data warehouse is highly desired.
  • Expertise in designing and developing APIs, preferably RESTful, for distributed data access.
  • Experience with AWS (EMR, S3, VPC, EC2, Athena) is a must; GCP experience is a plus.
  • Proficiency in object-oriented design, SOLID principles, and unit testing.
  • Experience with Docker or Kubernetes for containerization and deployment.

Education

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related fields, or equivalent work experience.

Benefits & Perks

  • Competitive salary range for BC: $124,600 - $187,770 plus bonus or commission potential.
  • Generous health, wealth, and wellness benefits, including 401k matching and paid time off.
  • Onsite perks like massages, free lunches, wellness rooms, and more.
  • Career development reimbursement of up to $5,000 annually.
  • Inclusive and supportive work culture, with opportunities for team outings, flexible work schedules, and professional growth.

