
Data Engineer
  • Python
  • Spark
  • SQL
  • Big Data
  • Tableau
  • Excel
  • Matlab
  • Database
  • ETL
  • Kafka
  • NoSQL
BNY Mellon
Pittsburgh, PA
140 days ago

Mellon is a global multi-specialist manager dedicated to serving our clients with a full spectrum of single and multi-asset investment strategies and solutions. With roots dating back to 1933, Mellon has been innovating across asset classes for generations and has the combined scale and capabilities to offer clients a broad range of solutions. From asset class expertise to broad market exposures, clients drive what we do. We are holistic in approach, client driven and committed to investment excellence. We aim to be a key partner for our clients by delivering customized investment outcomes and best-in-class service.

Role Overview

The Data Engineer is responsible for building and supporting systems to transform, store, and improve processes around data for Mellon Research. This role focuses on the Mellon research data pipeline, warehouse, databases, and BI tooling. The engineer will work with business analysts, data scientists, and other data engineers to facilitate ETL/ELT processes that move, clean, and store data, and will create data accessibility points and tooling that enable reporting insights with ease of use and maintenance in mind. The Data Engineer is also expected to provide input on end-state design and schema while enforcing best practices.


Design, build, and maintain efficient, progressive data infrastructure for Mellon research across disparate research silos in San Francisco, Boston, and Pune, with a focus on creating a transparent data environment.

  • Engage in a variety of tactical projects, including but not limited to ETL, storage, visualization, reporting, web scraping, and dashboard development

  • Support, document, and evolve (re-architect as needed) existing core data stores

  • Utilize ETL tooling to build, template, and rapidly deploy new pipelines for gathering and cleaning data

  • Analyze existing data stores / data marts, clean, and migrate into a centralized data lake

  • Work with Technology and Research leads to implement central and/or virtualized warehousing solutions

  • Develop APIs for accessing data, for use by business users (e.g., researchers and portfolio managers)

  • Configure Tableau dashboards and reports while serving as SME for end consumers of data

  • Identify and deploy advanced BI tooling on top of datasets including AI/ML/DL techniques and algorithms

  • Assist in the design and development of enterprise data standards and best practices

  • Use modern tooling to adopt progressive technology, expand business capabilities, and reduce time to market

Work closely with business analysts, data scientists, and technologists through full project lifecycles, gaining deep insight into research needs, business processes, and research practices.

  • Gather requirements and analyze solution options

  • Develop solutions and define and execute test plans

  • Define and implement operational procedures

  • Automate the research and review of data quality issues to ensure data accuracy and reliability

  • Resolve data integrity and data validation issues

  • Produce ad-hoc queries and reports for non-standard requests from data scientists and data consumers

  • Become an SME on the full suite of solutions delivered by the Research Data Engineering team, with an eye to identifying, analyzing, and interpreting trends or patterns in order to surface new solution options, define process-improvement opportunities, and generate value for our business partners


Qualifications:

  • Bachelor's degree or equivalent work experience required

  • 6+ years of experience as a data engineer, software engineer, or similar

  • Strong experience building ETL pipelines and knowledge of ETL best practices

  • Experience with overall data architecture and data routing design

  • Familiarity with data quality control tools and processes

  • Strong communication skills and keen attention to detail

Technical Qualifications:

The candidate is not expected to have expertise in all technical areas listed, but should be highly proficient in several of the following:

  • SQL, R, Python, Matlab, SSIS, Pentaho/Kettle, Excel, Tableau, MongoDB, Kafka, Hive/Spark, Parquet

  • Experience with CI/CD, containers, and related frameworks: GitLab, Selenium, Docker, Kubernetes

  • Disciplines: Microservice Architecture, Design Patterns

  • Environment Tooling: Agile, JIRA, Confluence

  • Familiarity with RDBMS and/or NoSQL databases and related best practices

Nice to Have Qualifications:

  • Experience working in investment research and/or quantitative finance

  • Advanced degree or CFA

  • Development experience with R or Python in a data-science or research setting

  • Knowledge of or experience with financial data provider APIs (Bloomberg/FactSet/Datastream/MSCI)

  • Experience in EAGLE PACE Access and Oracle

  • Knowledge of or experience with the following technologies:

    • Symphony (STC)
    • .NET Core
    • Snowflake
    • Dataiku
    • Cloud and distributed computing
    • Big Data

BNY Mellon is an Equal Employment Opportunity/Affirmative Action Employer.
Minorities/Females/Individuals With Disabilities/Protected Veterans.
Our ambition is to build the best global team – one that is representative and inclusive of the diverse talent, clients and communities we work with and serve – and to empower our team to do their best work. We support wellbeing and a balanced life, and offer a range of family-friendly, inclusive employment policies and employee forums.

Primary Location: United States-Pennsylvania-Pittsburgh
Internal Jobcode: 45144
Job: Asset Management
Organization: Mellon With TOH ADJ-HR13428
Requisition Number: 2006649
