At Northwestern Mutual, we are strong, innovative and growing. We invest in our people. We care and make a positive difference.
We are looking for a savvy Data Engineer to join our growing team. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. Our Field Information Management System team is accountable for providing key data elements about Northwestern Mutual's field force to support operational activities and strategic decision making, both today and in the future.
This role will partner with peers and senior engineers to define and build out a new data architecture, capabilities, data store(s), and services to meet organizational needs. It will be responsible for capturing and modeling data requirements, data definitions, business rules, data quality requirements, and logical/physical data models to ensure data solutions meet our customers' data quality needs. Deployment environments include both on-premises and AWS cloud.
- Designs and performs data extraction, assessment, translation, transformation, and load (ETL or ELT) processes
- Builds data equity, including its underlying code, tools, databases, and related infrastructure
- Administers and maintains existing data engineering pipelines across the full range of engineering needs
- Translates strategic needs into engineering goals and milestones
- Analyzes data needs and identifies available internal and external sources.
- Utilizes business and analytical data modeling skills to design data integration and structure approaches.
- Utilizes programming skills to access and extract data from diverse sources residing on multiple platforms and implement large, complex data models by combining, synthesizing and structuring data from databases, files and spreadsheets.
- Assures data consistency and reliability. Performs quality checks, contributes to metadata and data dictionaries, documents repeatable extract, transform and load processes.
- 2+ years data engineering or modeling
- 2+ years professional Java experience
- Experience with data pipeline and workflow management tools.
- Strong SQL background: writing and reviewing complex SQL statements, performance tuning
- Experience with SQL and NoSQL databases, and AWS cloud data services
- Able to execute independently while being a strong team player
- Fast learner and self-starter.
- Strong problem-solving skills
Nice To Have:
- Full-stack developer experience.
- Erwin Data Modeling experience.
- Experience with Cloud (AWS) services: EC2, EMR, RDS, Redshift, Aurora
- Containerized development experience (Docker/Kubernetes)
- Working knowledge of message queuing, stream processing, and highly scalable data stores
- Experience designing microservices
- Experience working on a Scrum/Agile team
- Experience with DevOps processes and tools to build and deploy applications
- Familiarity with SCM tools (GitLab)
- CI-CD tools (GitLab CI)
- Batch and near real-time data integration experience using the Informatica and Kafka platforms
- Experience with DB2 and Sybase databases
This job is not covered by the existing Collective Bargaining Agreement.
Grow your career with a best-in-class company that puts our clients' interests at the center of all we do. Get started now!
We are an equal opportunity/affirmative action employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender identity or expression, sexual orientation, national origin, disability, age or status as a protected veteran, or any other characteristic protected by law.