APRIL

Senior Data Engineer

Job description

This is a remote position.

Description

As a Senior Data Engineer for Commercial Analytics, you will drive deliverables to completion with hands-on development responsibilities and partner with the Lead Engineer to provide thought leadership and innovation on those deliverables.

In this role, you will be part of an agile scrum team, closely partnering with the Product Manager, Lead Engineer, and other team members to deliver data and analytics capabilities to the client’s Commercial Analytics organization. You will collaboratively partner with business stakeholders and other technology solution delivery, architecture, and platform teams.

Responsibilities
  • Design and build reusable components, frameworks, and libraries at scale to support analytics products.
  • Design and implement product features in collaboration with business and technology stakeholders.
  • Identify and solve data management issues to improve data quality.
  • Clean, prepare, and optimize data for ingestion and consumption.
  • Collaborate on the implementation of new data management projects and the restructuring of the current data architecture.
  • Implement automated workflows and routines using workflow scheduling tools.
  • Build continuous integration, test-driven development, and production deployment frameworks.
  • Collaboratively review designs, code, test plans, and dataset implementations produced by other data engineers to maintain data engineering standards.
  • Analyze and profile data to design scalable solutions.
  • Troubleshoot data issues and perform root cause analysis to proactively resolve product and operational issues.
  • Develop architecture and design patterns to process and store high-volume data sets.
  • Participate in an Agile/Scrum methodology to deliver high-quality software releases every two weeks through sprints.
Qualifications
  • Bachelor's degree in IT or a related field. One of the following alternatives may be accepted: PhD or law degree plus 3 years' experience; Master's plus 4 years; Associate's plus 6 years; high school diploma plus 7 years.
  • 5+ years of experience with detailed knowledge of data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools.
  • 3+ years' experience in Big Data stack environments (Hadoop, Spark, Hive, and Delta Lake).
  • 3+ years' experience working with multiple file formats (Parquet, Avro, Delta Lake) and APIs.
  • 3+ years' experience in cloud environments such as AWS (serverless technologies such as AWS Lambda and API Gateway, NoSQL stores such as DynamoDB, EMR, and S3).
  • Experience with relational (SQL) and non-relational (NoSQL) databases.
  • Strong coding experience in languages such as Python, Scala, and Java.
  • Experience building real-time streaming data pipelines.
  • Experience with pub/sub messaging systems such as Kafka.
  • Strong understanding of data structures and algorithms.
  • Experience building Lambda, Kappa, microservice, and batch architectures.
  • Experience with CI/CD processes and source control tools such as GitHub, and with related development processes.
  • A passion for data solutions and a willingness to pick up new programming languages, technologies, and frameworks.

