Job description

General Summary: The Data Engineer partners with analytics teams and business partners across the Customer Operations area to gather data requirements and to design and build data pipelines and architecture following the internal Data Engineering team's defined best practices. This includes building a data warehouse using the Kimball methodology, designing extraction and loading functions, testing designs, modeling dimension and fact data, and keeping legacy applications running smoothly. It also includes building analytic tools, assembling and cleaning big data, designing and building ETL in a variety of languages, and identifying opportunities to automate manual data wrangling, optimize performance, and improve data quality. The Data Engineer may seek guidance from Senior/Principal Data Engineers and Data Architects.

What we are looking for: Data Engineer

*Locations will include Jackson (OEP) or Grand Rapids

We encourage you to apply even if you do not meet every qualification below; an equivalent combination of education and experience will be considered and reviewed.

  • Bachelor’s degree in Computer Science, Engineering, Data Science, or equivalent, with 2-4 years of software engineering experience. Beginner-level experience with analytic tool builds, data architecture/design, user requirements definition, building big data pipelines, ETL tool extraction, basic data testing, and analytic tool deployment processes and best practices.

  • [OR] Associate degree in Computer Science, Engineering, Data Science, or equivalent, with 6 or more years of software engineering experience, plus the beginner-level experience described above.

  • [OR] High school diploma with 10 or more years of software engineering experience, plus the beginner-level experience described above.

  • Master’s degree preferred.

In this role, you will:

  • Capture and evaluate requirements from the data architect or business partner, weigh development alternatives, and establish timelines, seeking guidance from senior or principal data engineers as needed.
  • Design Kimball-based dimension and fact tables from business requirements; new designs must fit into the overall team data model.
  • Build analytic tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Provide incident management and direct technical consulting and support for current applications/solutions coded in Python, SQL, and R (the team is migrating away from R, but working knowledge would be valuable).
  • Assemble and clean big data sets from various data sources to meet functional business requirements for small to large enhancements.
  • Work with various stakeholders and IT to design and build efficient data pipelines for small to large enhancements using Python, SQL, ADF, DBT, and Databricks.
  • Provide technical guidance for small to large enhancements in the areas of solution alternatives, design, testing, and documentation.
  • Identify opportunities to improve internal processes by automating manual data wrangling, optimizing performance, and improving data quality.
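For candidates unfamiliar with the Kimball designs mentioned above, here is a minimal, hypothetical sketch (not part of the posting) of a star schema with one dimension and one fact table, using Python's built-in sqlite3 for illustration; all table and column names are invented:

```python
import sqlite3

# Hypothetical Kimball-style star schema: a dimension table with a
# surrogate key, and a fact table holding foreign keys plus measures.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE dim_customer (
    customer_key INTEGER PRIMARY KEY,   -- surrogate key
    customer_id  TEXT NOT NULL,         -- natural/business key
    segment      TEXT
);

CREATE TABLE fact_interaction (
    interaction_key INTEGER PRIMARY KEY,
    customer_key    INTEGER REFERENCES dim_customer(customer_key),
    interaction_dt  TEXT,
    handle_seconds  INTEGER             -- additive measure
);
""")

# Load one dimension row, then a fact row that references it.
cur.execute("INSERT INTO dim_customer VALUES (1, 'C-1001', 'retail')")
cur.execute("INSERT INTO fact_interaction VALUES (1, 1, '2024-07-01', 240)")

# A typical star-join rollup: total handle time by customer segment.
rows = cur.execute("""
    SELECT d.segment, SUM(f.handle_seconds)
    FROM fact_interaction f
    JOIN dim_customer d USING (customer_key)
    GROUP BY d.segment
""").fetchall()
print(rows)  # [('retail', 240)]
```

The surrogate key on the dimension (rather than the business key) is the Kimball convention that lets the warehouse track changes to customer attributes independently of the source system.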

Knowledge/Skills/Abilities:

  • Excellent verbal and written communication skills; able to work with all levels of the organization.
  • Proficient in establishing and maintaining good working relationships with business and IT teams.
  • Knowledge of project planning/full lifecycle delivery using Agile framework, preferably using ADO.
  • Understanding of data test methodologies and testing tools
  • Understanding of database management principles and methodologies, including data structures, data modeling, data warehousing, and transaction processing.
  • Demonstrable proficiency in Python, SQL, and Kimball design concepts.
  • Knowledge of data design principles (Kimball-based dimension and fact tables), methods, and approaches, applying systems engineering concepts such as structured data design, supportability, survivability, reliability, scalability, and maintainability.
  • Knowledge of change and release tools and processes utilized to implement solutions across multiple teams and technologies.
