Synchrony

Data Engineer (L 09)

Job Description

Role Summary/Purpose:

The candidate Big Data Engineer will join an Agile scrum team and perform functional and system development for Synchrony’s Enterprise Data Lake. Key skills for this role include the ability to integrate data across internal and external sources, provide analytical insights, and integrate with our critical systems. The engineer will participate in data analysis efforts to ensure the delivery of high-quality data ingestion, standardization and curation, and maintain compliance with the applicable Data Sourcing, Data Quality, and Data Governance standards. The engineer will drive quality through the entire software development lifecycle with a focus on functional requirements, efficiency, and methodology. The engineer will work cross-functionally with operations, other data engineers and the product owner to ensure the capabilities delivered meet business needs.

This position is remote, where you have the option to work from home. On occasion we may ask you to commute to our nearest office for in-person engagement activities such as team meetings, training and culture events. To ensure the safety of our colleagues and communities, we require employees who come together in person to be fully vaccinated. We’re proud to offer you choice and flexibility.

Essential Responsibilities
  • Develop big data applications for Synchrony in Hadoop ecosystem
  • Participate in the agile development process including backlog grooming, coding, code reviews, testing and deployment
  • Work with team members to achieve business results in a fast paced and quickly changing environment
  • Work independently to develop analytic applications leveraging technologies such as: Hadoop, NoSQL, In-memory Data Grids, Kafka, Spark, Ab Initio
  • Provide data analysis for Synchrony’s data ingestion, standardization and curation efforts ensuring all data is understood from a business context
  • Identify enablers and level of effort required to properly ingest and transform data for the data lake
  • Profile data to assist with defining the data elements, propose business term mappings, and define data quality rules
  • Work with the Data Office to ensure that data dictionaries for all ingested and created data sets are properly documented in data dictionary repository
  • Ensure the lineage of all data assets is properly documented in the appropriate enterprise metadata repositories
  • Assist with the creation and implementation of data quality rules
  • Ensure the proper identification of sensitive data elements and critical data elements
  • Create source-to-target data mapping documents
  • Test current processes and identify deficiencies
  • Investigate program quality to make improvements to achieve better data accuracy
  • Understand functional and non-functional requirements and prepare test data accordingly
  • Plan, create and manage test cases and test scripts
  • Identify process bottlenecks and suggest actions for improvement
  • Execute test scripts and collect test results
  • Present test cases, test results, reports and metrics as required by the Office of Agile
  • Perform other duties as needed to ensure the success of the team and application and ensure the team’s compliance with the applicable Data Sourcing, Data Quality, and Data Governance standards
Qualifications/Requirements
  • Bachelor's degree in a quantitative field (such as Engineering, Computer Science, Statistics, Econometrics) with 3+ years of Information Technology experience; OR in lieu of degree, 5+ years of Information Technology experience.
  • Hands-on experience writing shell scripts, complex SQL queries, Hive scripts, Hadoop commands and Git
  • Ability to write abstracted, reusable code components
  • Programming experience in at least one of the following languages: Scala, Java or Python
  • Analytical mindset
  • Willingness and aptitude to learn new technologies quickly
  • Superior oral and written communication skills
  • Ability to collaborate across teams of internal and external technical staff, business analysts, software support and operations staff.
  • For Internal Applicants: Understand the criteria or mandatory skills required for the role, before applying.
  • Inform your Manager or HRM before applying for any role on Workday.
  • Ensure that your Professional Profile is updated (fields such as Education, Prior experience, Other skills) and it is mandatory to upload your updated resume (Word or PDF format)
  • Must not be on any corrective action plan (First Formal/Final Formal, PIP)
  • Only employees who have completed 18 months in the organization and 12 months in their current role and level are eligible
  • Level 8+ employees can apply
Desired Characteristics
  • Performance tuning experience
  • Exposure to the following Ab Initio tools: GDE (Graphical Development Environment); Co>Operating System; Control Center; Metadata Hub; Enterprise Meta>Environment; Enterprise Meta>Environment Portal; Acquire>It; Express>It; Conduct>It; Data Quality Environment; Query>It
  • Familiar with Ab Initio, Hortonworks/Cloudera, Zookeeper, Oozie and Kafka
  • Familiar with public cloud (e.g., AWS, GCP, Azure) data engineering services
  • Familiar with data management tools (e.g., Collibra)
  • Background in ETL, data warehousing or data lake
  • Strong business acumen including a broad understanding of Synchrony business processes and practices
  • Demonstrated ability to work effectively in an agile team environment
  • Financial Industry or Credit processing experience
  • Experience with working on a geographically distributed team managing onshore/offshore resources with shifting priorities
  • Previous experience working in client facing environment
  • Proficient in the maintenance of data dictionaries and other information in Collibra
  • Excellent analytical, organizational and influencing skills with a proven track record of successfully executing on assignments and initiatives
Grade/Level: 09

Job Family Group

Information Technology

