Mellon is a global multi-specialist manager dedicated to serving our clients with a full spectrum of single and multi-asset investment strategies and solutions. With roots dating back to 1933, Mellon has been innovating across asset classes for generations and has the combined scale and capabilities to offer clients a broad range of solutions. From asset class expertise to broad market exposures, clients drive what we do. We are holistic in approach, client driven and committed to investment excellence. We aim to be a key partner for our clients by delivering customized investment outcomes and best-in-class service.
The Data Engineer is responsible for building and supporting systems that transform, store, and improve processes around data for Mellon Research. This role focuses on the Mellon research data pipeline, warehouse, databases, and BI tooling. The engineer will work with business analysts, data scientists, and other data engineers to facilitate ETL/ELT processes that move, clean, and store data. The engineer will also create data access points and tooling that enable reporting insights, with ease of use and maintenance in mind. The data engineer is expected to provide input on end-state design and schema while enforcing best practices.
Responsibilities
Design, build, and maintain efficient and progressive data infrastructure for Mellon research across disparate research silos in San Francisco, Boston, and Pune focusing on creating a transparent data environment.
Engage in a variety of tactical projects including, but not limited to, ETL, storage, visualization, reporting, web-scraping, and dashboard development
Support, document, and evolve (re-architect as needed) existing core data stores
Utilize ETL tooling to build, template, and rapidly deploy new pipelines for gathering and cleaning data
Analyze existing data stores / data marts, clean, and migrate into a centralized data lake
Work with Technology and Research leads to implement central and/or virtualized warehousing solutions
Develop APIs for accessing data, for use by business users (i.e., researchers and portfolio managers)
Configure Tableau dashboards and reports while serving as SME for end consumers of data
Identify and deploy advanced BI tooling on top of datasets including AI/ML/DL techniques and algorithms
Assist in the design and development of enterprise data standards and best practices
Use modern tooling to adopt progressive technology, expand business capabilities, and reduce time to market
Work closely with business analysts, data scientists, and technologists through full project lifecycles, gaining deep insight into research needs, business processes, and research practices
Gather requirements and analyze solution options
Develop solutions and define and execute test plans
Define and implement operational procedures
Automate the research and review of data quality issues to ensure data accuracy and reliability
Resolve data integrity and data validation issues.
Produce ad-hoc queries and reports for non-standard requests from Data Scientists and Data Consumers.
Become an SME on the full suite of solutions delivered by the Research Data Engineering team, identifying, analyzing, and interpreting trends or patterns to surface new solution options, define process improvement opportunities, and generate value for our business partners
Qualifications:
Bachelor's degree or equivalent work experience required
6+ years of experience as a data engineer, software engineer, or similar
Strong experience building ETL pipelines and knowledge of ETL best practices
Experience with overall data architecture and data routing design
Familiarity with data quality control tools and processes
Strong communication skills and a keen attention to detail
The candidate is not expected to have expertise in all technical areas listed but should be highly proficient in several, including:
SQL, R, Python, Matlab, SSIS, Pentaho/Kettle, Excel, Tableau, MongoDB, Kafka, Hive/Spark, Parquet
Experience with CI/CD, containers, and related frameworks: GitLab, Selenium, Docker, Kubernetes
Disciplines: Microservice Architecture, Design Patterns
Environment Tooling: Agile, JIRA, Confluence
Familiarity with RDBMS and/or NoSQL databases and related best practices
Nice to Have Qualifications:
Experience working in investment research and/or quantitative finance
Development experience with R or Python in a data-science or research setting
Knowledge of or experience with financial data provider APIs (Bloomberg/FactSet/Datastream/MSCI)
Knowledge of or experience with the following technologies: