Ready to take the next step in your career? Maybe you’re looking for advancement opportunities, a better work-life balance, or just something new and exciting.
At Pekin Insurance, we strive to go Beyond the expected® in everything we do.
What You'll Do
The Data Engineer is responsible for designing and implementing the data and analysis infrastructure, as well as determining the appropriate data management systems for analysis. The Data Engineer builds, maintains, and optimizes data pipelines and moves them effectively into production for key data consumers. The Data Engineer also provides data expertise when building and testing stories, features, and components, and participates in the development and management of application programming interfaces (APIs) that access key data sources. The Data Engineer works to guarantee compliance with data governance and data security requirements while creating, improving, and operationalizing integrated and reusable data pipelines, enabling faster data access, integrated data reuse, and improved time-to-solution for Pekin Insurance’s data initiatives.
The Data Engineer contributes to the development of the team backlog and architectural runway, management of work in process (WIP) levels, and support of engineering aspects of program and solution Kanbans. The Data Engineer may also participate in program increment (PI) planning, pre- and post-PI planning, system and solution demos, and inspect and adapt events.
- Participates and plays an active role in all Agile Team activities and is accountable for regularly producing product increments that effectively contribute to solution features and/or components
- Participates in Agile Release Train (ART) events
- Works closely with product teams to define product requirements
- Performs physical design and develops/evaluates product requirements related to data
- Assists in building and maintaining complex data management systems that combine core data sources into data warehouses or other accessible structures
- Assists in managing data pipelines consisting of a series of stages through which data flows
- Drives automation through effective metadata management
- Learns and uses modern data preparation, integration, and artificial intelligence (AI)-enabled metadata management tools and techniques
- Performs data conversions, imports, and exports within and between internal and external software systems
- Develops programs to optimally extract, transform, and load data between data sources
- Creates data transformation processes (extract, transform, load (ETL), SQL stored procedures, etc.) to support moderately complex to complex business systems and operational data flows
- Contributes to the design and management of APIs
- Designs and implements processes to ensure data integrity and standardization
- Updates data dictionary
- Assists in maintaining the quality of the metadata repository by adding, modifying, and deleting data
- May recommend and implement data reliability, efficiency, and quality improvements
- Ensures collected data meets required quality standards
- Resolves low to moderately complex conflicts between models, ensuring that data models are consistent with the enterprise model (e.g., entity names, relationships, and definitions)
- Documents new and existing models, solutions, and implementations such as Data Mapping, Technical Specifications, Production Support, Data Dictionaries, Test Cases, etc.
- Troubleshoots, diagnoses, documents and resolves escalated support problems
- Supports innovative efforts by driving creativity, acting with agility and thinking outside current boundaries
- Notifies more senior engineers when contract requirements are not met
- Uses technology to implement automation and orchestration
- Performs other duties as assigned
What You'll Need
- Bachelor’s degree in IT Engineering, Computer Science, Business Management, Mathematics, Information Technology, Computer Engineering or Information Sciences preferred, or equivalent experience
- Typically requires 3 or more years of work experience in systems administration, networking, database administration (DBA), database management system (DBMS) design and support, and/or personal computer (PC) support roles
- Experience in data management disciplines including data integration, modeling, optimization, and data quality, and/or other areas directly relevant to data engineering responsibilities and tasks
- Experience in an agile environment strongly preferred
- Experience with SAFe® framework preferred
Knowledge, Skill & Abilities
- Ability to learn and use advanced analytics tools for object-oriented/object function scripting
- Ability to work across multiple deployment environments including cloud and on-premises, and multiple operating systems
- Basic knowledge of popular database programming languages for relational and non-relational databases
- Basic knowledge of representational state transfer (RESTful) API design
- Ability to work with popular data discovery, analytics, and business intelligence (BI) software tools for semantic-layer-based data discovery
- Basic knowledge of agile methodologies and capable of applying DevOps principles to data pipelines to improve the communication, integration, reuse and automation of data flows
- Ability to collaborate with both business and IT stakeholders
- Ability to use judgment to form conclusions that may challenge conventional wisdom
- Ability to apply original thinking to produce new ideas and innovate
What We Offer
We get it: you’re looking for a career with a company that invests in you. Our desire to enhance the employee experience through our benefits, work perks, and team-oriented environment made us one of 2019’s and 2020’s “Best Places to Work in Illinois.”
Some of what we offer includes paid volunteer time, reimbursement for industry certifications, flex time, and more.