Job description

Ready to take the next step in your career? Maybe you’re looking for advancement opportunities, a better work-life balance, or just something new and exciting.

At Pekin Insurance, we strive to go Beyond the expected® in everything we do.

The Data Engineer is responsible for designing and implementing the data and analysis infrastructure and for determining the appropriate data management systems for analysis. The Data Engineer builds, maintains, and optimizes data pipelines and moves them effectively into production for key data consumers. The Data Engineer also provides data expertise when building and testing stories, features, and components, and participates in the development and management of application programming interfaces (APIs) that access key data sources. The Data Engineer ensures compliance with data governance and data security requirements while creating, improving, and operationalizing integrated and reusable data pipelines, enabling faster data access, integrated data reuse, and improved time-to-solution for Pekin Insurance’s data initiatives.

The Data Engineer contributes to the development of the team backlog and architectural runway, the management of work-in-process (WIP) levels, and the support of engineering aspects of program and solution Kanbans. The Data Engineer may also participate in program increment (PI) planning, pre- and post-PI planning, system and solution demos, and inspect-and-adapt events.

What You'll Do

  • Plays an active role in all Agile Team activities and is accountable for regularly producing product increments that effectively contribute to solution features and/or components
  • Participates in Agile Release Train (ART) events
  • Works closely with product teams to define product requirements
  • Performs physical design and develops/evaluates product requirements related to data
  • Builds and maintains complex data management systems that combine core data sources into data warehouses or other accessible structures
  • Manages data pipelines consisting of a series of stages through which data flows
  • Drives automation through effective metadata management
  • Learns and uses modern data preparation, integration, and artificial intelligence (AI)-enabled metadata management tools and techniques
  • Performs data conversions, imports, and exports of data within and between internal and external software systems
  • Develops programs to optimally extract, transform, and load data between complex data sources
  • Creates data transformation processes (extract, transform, load (ETL) jobs, SQL stored procedures, etc.) to support complex to highly complex business systems and operational data flows
  • Contributes to the design and management of APIs
  • Designs and implements processes to ensure data integrity and standardization
  • Updates the data dictionary
  • Assists in maintaining the quality of the metadata repository by adding, modifying, and deleting data
  • Recommends and implements data reliability, efficiency, and quality improvements
  • Ensures that collected data meets required quality standards
  • Resolves conflicts between models, ensuring that data models are consistent with the enterprise model (e.g., entity names, relationships, and definitions)
  • Documents and reviews new and existing models, solutions, and implementations, such as data mappings, technical specifications, production support documentation, data dictionaries, test cases, etc.
  • Proactively troubleshoots, diagnoses, documents, and resolves escalated support problems
  • Supports innovative efforts by driving creativity, acting with agility and thinking outside current boundaries
  • Evaluates services provided by vendors and may recommend changes
  • Uses technology to implement automation and orchestration
  • Performs other duties as assigned

What You'll Need

Required:

  • Bachelor’s degree in IT Engineering, Computer Science, Business Management, Mathematics, Information Technology, Computer Engineering, or Information Sciences, or equivalent experience
  • Typically requires 3 or more years of work experience in systems administration, networking, database administration (DBA), database management system (DBMS) design and support, and/or personal computer (PC) support roles

Preferred:

  • Experience in data management disciplines including data integration, modeling, optimization and data quality, and/or other areas directly relevant to data engineering responsibilities and tasks
  • Experience in an agile environment strongly preferred
  • Experience with SAFe® framework preferred

Knowledge, Skills & Abilities

  • Ability to learn and use advanced analytics tools for object-oriented/object function scripting
  • Ability to work across multiple deployment environments, including cloud and on-premises, and multiple operating systems
  • Basic knowledge of popular database programming languages for relational and non-relational databases
  • Basic knowledge of representational state transfer (RESTful) API design
  • Ability to work with popular data discovery, analytics, and business intelligence (BI) software tools for semantic-layer-based data discovery
  • Basic knowledge of agile methodologies and the ability to apply DevOps principles to data pipelines to improve the communication, integration, reuse, and automation of data flows
  • Ability to collaborate with both business and IT stakeholders
  • Ability to use judgment to form conclusions that may challenge conventional wisdom
  • Ability to apply original thinking to produce new ideas and innovate

What We Offer

We get it: you’re looking for a career with a company that invests in you. Our desire to enhance the employee experience through our benefits, work perks, and team-oriented environment made us one of 2019’s and 2020’s “Best Places to Work in Illinois.”
