NAVIENT

Data Engineer II (REMOTE)

Job description

The Data Engineer II at Navient will be involved in developing services that support predictive models. These models support the product either by optimizing underwriting operations or by reducing risk and optimizing risk-adjusted revenue and return metrics. The Engineer will also work with BI technologies and tools, developing ETL frameworks and abstractions that make it easier for Data Scientists, Analysts, and Engineers to write data pipelines.

Responsibilities include:

Software Engineering: data pipelines, data warehouses, and data analytics tools

Our Data Engineer II will be involved in writing ETL and ETL tooling for the following purposes:

  • Automatic ingestion of data from production and other external services into the Data Lake and Data Warehouse.
  • Setting up tooling and abstraction patterns for orchestrating ETLs and other batch jobs that move data into or out of the data warehouse.
  • Building data quality tooling and test cases to meet business requirements.
  • Supporting schema changes and communication with all stakeholders.

The position will build backend services, in either Python or Scala, that support predictive models or third-party integrations. This will involve the following:

  • Writing the ETL to expose the data from predictive services.
  • Writing tests (unit, integration, regression, property, smoke, and e2e tests).
  • Writing the business execution logic for the service.
  • Troubleshooting, identifying, and fixing defects alongside a senior engineer, using standard techniques such as debugging, profiling, logging, and log analysis via Splunk.
  • Writing database migrations and setting up database models.

Data Architecture and documentation

  • Identify opportunities for automation where manual processes are in place.
  • A data engineer will be involved in architecture discussions and will work with analysts and data scientists to support ETL needs.
  • Document, decompose, and size projects in JIRA, and communicate acceptance criteria for when a project can be considered complete.

Learning and development

  • Work on technical PoCs with senior engineers and validate data solutions and requirements.
  • Support cross-functional projects with product, engineering, finance, ops, and marketing teams.
  • Provide coding and architecture feedback via GitHub or through pair programming.

MINIMUM REQUIREMENTS

  • Bachelor's degree in Computer Science or a related field of study (additional experience may substitute).
  • 2+ years of professional development experience with server-side concepts such as microservices, databases, caching, monitoring, and scalability.
  • 2+ years in at least one language used on the team (Scala, Python).
  • 1+ years with OLTP databases such as PostgreSQL, MS SQL.
  • 1+ years with OLAP databases such as Cloudera, BigQuery, Redshift.
  • 1+ years with Azure cloud technologies.
  • Additional education may substitute for the experience noted above.

PREFERRED QUALIFICATIONS

  • Master's degree in Computer Science or a related field
  • Experience working in the financial industry
  • Software design principles
  • Software testing principles
  • Databases and SQL
  • GitHub usage
  • ServiceNow
  • Cloudera / Big Data
  • ETL workflows
  • Azure

Please let the company know that you found this position on this Job Board as a way to support us, so we can keep posting cool jobs.