Job description

About Grainger

Grainger is a leading broad line distributor with operations primarily in North America, Japan and the United Kingdom. We achieve our purpose, We Keep the World Working®, by serving more than 4.5 million customers with a wide range of products that keep their operations running and their people safe. Grainger also delivers services and solutions, such as technical support and inventory management, to save customers time and money.

We're looking for passionate people who can move our company forward. As one of the 100 Best Companies to Work For, we have a welcoming workplace where you can build a career for yourself while fulfilling our purpose to keep the world working. We embrace new ways of thinking and recognize everyone is an individual. Find your way with Grainger today.

Position Details

The Matching team builds a platform consisting of data pipelines, machine learning models, APIs, a feedback UI, and an in-house application for Sales. Our mission is to use ML to identify internal and external products that match other Grainger products, replacing manual matching curation. The Data Engineer will work with the team to design, build, and operate the data pipelines that feed these models. You will work with your team of Software Engineers and Data Engineers to shape team strategy, evaluate and integrate data patterns and technologies, and build data products alongside domain experts.

You will report to the Director of Product Engineering.

Pay

This position is salaried and will pay between $104,000 and $160,140, with a target bonus of 10%.

The range provided is a guideline and not a guarantee of compensation. Other factors that are involved in offer decisions include, but are not limited to: a candidate's experience, qualifications, geographical area, and internal equity of the team.

You Will

  • Design efficient and scalable data processing systems and pipelines on Databricks, Airflow, APIs, and AWS Services.
  • Create technical solutions that solve business problems and are well engineered, operable, and maintainable.
  • Design and implement tools to detect data anomalies. Ensure that data is accurate, complete, and consistent across all platforms.
  • Develop data models and mappings and build new data assets required by users. Perform exploratory data analysis on existing products and datasets.
  • Provide technical guidance to help data users adopt new data pipelines and tools.
  • Develop scalable and re-usable frameworks for ingestion and transformation of large datasets.
  • Stay current with trends and emerging technologies. Evaluate the performance and applicability of potential tools for our requirements.
  • Work within an Agile delivery / DevOps methodology to deliver product increments in iterative sprints.
  • Work with our AI, Platform, and Business Analytics teams to build useful pipelines and data assets.
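As an illustrative sketch only (hypothetical code, not Grainger's actual pipeline), the kind of data-anomaly check described above could be as simple as flagging columns whose null rate spikes past a threshold:

```python
# Illustrative sketch only -- hypothetical names, not Grainger's pipeline.
# Flags columns whose fraction of missing values exceeds a threshold,
# a basic building block of data-anomaly detection.

def null_rate_anomalies(rows, threshold=0.1):
    """Return column names whose null rate exceeds `threshold`.

    rows: list of dicts, one per record; a None value counts as missing.
    """
    if not rows:
        return []
    counts = {}  # column -> (total observations, null observations)
    for row in rows:
        for col, value in row.items():
            total, nulls = counts.get(col, (0, 0))
            counts[col] = (total + 1, nulls + (value is None))
    return sorted(col for col, (total, nulls) in counts.items()
                  if nulls / total > threshold)

rows = [
    {"sku": "A1", "price": 9.99},
    {"sku": "A2", "price": None},
    {"sku": None, "price": 4.50},
    {"sku": "A4", "price": 2.25},
]
print(null_rate_anomalies(rows, threshold=0.2))  # ['price', 'sku']
```

In a production pipeline this logic would typically run as a Spark aggregation or a Databricks expectation rather than in-memory Python, but the idea is the same.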

You Have

  • 5+ years of experience in batch and streaming ETL using Spark, Python, or Scala on Databricks for Data Engineering or Machine Learning workloads.
  • Familiarity with AWS services including, but not limited to, Glue, Athena, Lambda, S3, and DynamoDB.
  • Experience preparing structured and unstructured data for data science models.
  • Demonstrated experience implementing the data management life cycle, using data quality functions such as standardization, transformation, rationalization, linking, and matching.
  • Familiarity with containerization and orchestration technologies (Docker, Kubernetes) and experience with shell scripting in Bash, Unix, or Windows shells is preferable.
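To make the "standardization, linking and matching" requirement concrete, here is a minimal, purely illustrative sketch (hypothetical helpers, not Grainger's matching system): standardize product names into token sets and score candidate pairs with Jaccard similarity.

```python
# Illustrative sketch only -- hypothetical helpers, not Grainger's matcher.
# Standardizes product names, then scores a candidate pair with Jaccard
# similarity over the resulting token sets.

def standardize(name):
    """Lowercase, strip punctuation, and tokenize a product name."""
    cleaned = "".join(c if c.isalnum() else " " for c in name.lower())
    return set(cleaned.split())

def match_score(name_a, name_b):
    """Jaccard similarity between the token sets of two product names."""
    a, b = standardize(name_a), standardize(name_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

score = match_score("DeWalt 20V MAX Drill/Driver", "dewalt drill driver 20v max")
print(round(score, 2))  # 1.0 -- same tokens after standardization
```

Real product-matching systems layer ML models on top of standardization steps like this; the sketch shows only the classic rule-based baseline.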

Rewards And Benefits

With benefits starting day one, Grainger is committed to your safety, health and wellbeing. Our programs provide choice to meet our team members' individual needs. Check out some of the rewards available to you at Grainger.

  • Paid time off (PTO) days and 6 company holidays per year
  • Benefits starting on day one, including medical, dental, vision, and life insurance
  • 6% 401(k) company contribution each pay period with no personal contribution required
  • Employee discounts, parental leave, tuition reimbursement, student loan refinancing, free access to financial counseling, education and more.

DEI Statement

We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender, gender identity or expression, or veteran status. We are proud to be an equal opportunity workplace.

We are committed to fostering an inclusive, accessible environment that includes both providing reasonable accommodations to individuals with disabilities during the application and hiring process as well as throughout the course of one’s employment. With this in mind, should you need a reasonable accommodation during the application and selection process, please advise us so that we can provide appropriate assistance.

