Senior Data Engineer
  • Python
  • SQL
  • SAS
  • Database
  • ETL
  • Azure
  • Perl
AllianceBernstein
Nashville, TN
145 days ago

Group Description:

Fixed Income Technology (“FIT”) is a strong team of over 115 members who build software that helps the Fixed Income business of AllianceBernstein perform functions such as fundamental research, quantitative research, portfolio management, order generation, trading, and middle-office and back-office operations. This group is in charge of all technology needs for Fixed Income: it sets strategic direction, then executes and maintains solutions in collaboration with business partners. We act as a central nervous system that connects the various operations of the Fixed Income business to other areas at AllianceBernstein. Our data warehouse ingests critical reference and transaction data from upstream systems and egresses it to downstream systems.

Job Description:

We are seeking a Nashville-based Senior Data Engineer to join our FIT team.

Describe the role:

The candidate in this role is expected to work with the architecture team and other senior team members to modernize and maintain our data warehouse and real-time transaction systems.

As part of this role, you would provision and set up data platform technologies both on-premises and on the Azure cloud platform. The data platforms we use can include relational databases, non-relational databases, data streams, and file stores. Primary responsibilities include using services and tools to ingest, egress, and transform data from multiple sources. You would also collaborate with business stakeholders to identify and meet data requirements, design and implement solutions, and manage, monitor, and ensure the security and privacy of data to satisfy business needs. You would primarily provision data stores and make sure that massive amounts of data are securely and cost-effectively extracted, loaded, and transformed.
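
For illustration, the sketch below shows one way a raw file landed in Azure Blob Storage could be read and persisted to a data lake with PySpark on Databricks; the storage accounts, containers, and paths are hypothetical placeholders, not AB systems, and credentials and cluster configuration are assumed to be handled by the workspace.

```python
# Illustrative sketch only - not AB's actual pipeline code.
# Storage accounts, containers, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("fi-ingest-sketch").getOrCreate()

# Extract: read a raw reference-data file landed in Azure Blob Storage
raw = (
    spark.read
    .option("header", "true")
    .csv("wasbs://landing@examplestorage.blob.core.windows.net/reference/securities/")
)

# Load: persist to the data lake in a columnar format for downstream consumers
(
    raw.write
    .mode("overwrite")
    .parquet("abfss://curated@exampledatalake.dfs.core.windows.net/reference/securities/")
)
```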

As part of this role, you would also design reusable frameworks for Extract-Transform-Load (ETL) and Extract-Load-Transform (ELT) scenarios.
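
A minimal sketch of what such a reusable framework could look like is shown below, assuming PySpark on Databricks; it is one possible shape rather than the team's actual design, and the feed paths and de-duplication rule are hypothetical.

```python
# Illustrative sketch of one possible reusable ETL/ELT step - not AB's actual framework.
from dataclasses import dataclass
from typing import Callable

from pyspark.sql import DataFrame, SparkSession


@dataclass
class PipelineStep:
    """A single extract -> transform -> load unit that can be reused across feeds."""
    extract: Callable[[SparkSession], DataFrame]
    transform: Callable[[DataFrame], DataFrame]
    load: Callable[[DataFrame], None]

    def run(self, spark: SparkSession) -> None:
        self.load(self.transform(self.extract(spark)))


# Example wiring for a hypothetical trades feed; step.run(spark) would run it end to end.
step = PipelineStep(
    extract=lambda spark: spark.read.parquet("/mnt/landing/trades/"),
    transform=lambda df: df.dropDuplicates(["trade_id"]),
    load=lambda df: df.write.mode("append").parquet("/mnt/curated/trades/"),
)
```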

Describe the applications and business or enterprise functions the role supports:

A part of the FIT team, Data Transaction and Services (DT&S), builds and operates the data warehouse, data marts, transaction hubs, and operational data store for Fixed Income. This team is responsible for ensuring that the mission-critical data required for portfolio management, trading, research, operations, and settlements is distributed in a timely fashion. A few vendor systems are in use, and this group performs the necessary ETL / ELT / ELTL functions to keep the respective target systems updated on a near-real-time basis.

The candidate will play a key role within the DT&S group in understanding the current ecosystem and helping to modernize it by moving it to Azure. They will work on a greenfield project that will define the future state of Fixed Income's data needs: a multi-year effort to migrate the legacy Sybase and Sybase IQ-based data warehouse to a modern Azure tech stack.

The key job responsibilities include, but are not limited to:

  • Design and develop a data ingestion pipeline framework using Azure Data Factory and Databricks
  • Design and develop an event-driven ingestion and egression framework to consume and publish data
  • Design and develop streaming ingestion solutions to process live data from various vendors
  • Design and develop Continuous Integration and Continuous Deployment strategies
  • Set up and deploy cloud-based data services such as blob services, databases, and analytics.
  • Secure the platform and the stored data. Make sure only the necessary users can access the data.
  • Ensure business continuity in uncommon conditions by using techniques for high availability and disaster recovery.
  • Monitor to ensure that the systems run properly and are cost-effective.
  • Define the data: Identify the data to be extracted. Define the data by using a database query, a set of files, or an Azure Blob Storage name.
  • Define the data transformation: Data transformation operations can include splitting, combining, deriving, adding, removing, or pivoting columns. Map fields between the data source and the data destination. You might also need to aggregate or merge data (a rough sketch of these operations appears after this list).
  • Test the ETL / ELT / ELTL job in a development or test environment. Then migrate the job to a production environment to load the production system.
  • Monitor the job: ETL operations can involve many complex processes. Set up a proactive and reactive monitoring system to provide information when things go wrong. Set up logging according to the technology that will use it.
  • Document and articulate technology choices and approaches to the team
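
As referenced in the transformation bullet above, the following is a minimal PySpark sketch of those operations (deriving, splitting, removing, pivoting, and aggregating columns); the column names, paths, and business rules are assumptions for illustration only, not the team's actual logic.

```python
# Illustrative sketch of the transformation operations named above.
# Column names, paths, and rules are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("transform-sketch").getOrCreate()
trades = spark.read.parquet("/mnt/curated/trades/")  # assumed curated source

shaped = (
    trades
    # derive a new column from existing ones
    .withColumn("notional", F.col("quantity") * F.col("price"))
    # split a combined field and keep one part
    .withColumn("currency", F.split(F.col("instrument_id"), "-").getItem(1))
    # remove a column downstream consumers do not need
    .drop("raw_payload")
)

# pivot: one row per trade date, one column per currency, aggregated notionals
pivoted = (
    shaped.groupBy("trade_date")
    .pivot("currency")
    .agg(F.sum("notional"))
)

pivoted.write.mode("overwrite").parquet("/mnt/marts/notional_by_currency/")
```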

What makes this role unique or interesting?

This is a unique opportunity to be part of an organization that manages $596 billion (as of May 2020) and to help define the future state of the data space for the entire Fixed Income business of AB. This is a greenfield project that will leverage state-of-the-art technologies.

What is the professional development value of this role?

There is enormous growth opportunity in this role. From a functional point of view, the candidate will learn how a fixed income investment process works from front to back, including research, portfolio management, order sizing, and trading. From a technical point of view, they will work with the latest technology stack and will be given ample opportunities to research and suggest technologies that solve business problems.

Job Qualifications

The ideal candidate should have a background in computer science and the following skill sets and experience:

  • Bachelor’s degree in computer science or a related field; Master’s degree preferred
  • 12+ years of experience in coding and building software
  • 8+ years of experience programming with SQL Server, Oracle, or Sybase
  • 3+ years of experience using Python
  • 3+ years of in-depth working knowledge of Azure Data Factory
  • 3+ years of working experience with Azure Databricks
  • 2+ years of experience with CI/CD processes using Azure DevOps
  • Proficient with concepts such as resource groups, Key Vault, Blob Storage, Table Storage, and data lakes
  • Exposure to container technologies such as Kubernetes / Docker

Our employees typically have track records of outstanding professional performance and academic achievement, excellent analytical and financial skills, and strong verbal communication skills. Candidates should have a strong ability to work in a collaborative environment and to present results to both expert and non-expert audiences.

Skills:

  • Strong, effective communication skills (oral and written), combined with the ability to engage the business in substantive discussion and resolution of issues
  • Equally strong analytical skills to map data elements between systems, develop transformation logic, and resolve issues with data integrity.
  • Demonstrated ability to meet deadlines
  • Excellent interpersonal skills in order to interface with multiple constituent groups with potentially conflicting priorities and perspectives; the ability to support the entire software development and implementation lifecycle; work on multiple streams simultaneously; and guide users through complex application launches.
  • Proven track record of owning the entire project lifecycle
  • Excellent documentation skills to document all relevant processes, architecture, and design decisions
  • Ability to coach and mentor other team members

Special Knowledge:

  • Finance knowledge a plus
  • Completion of DP-201: Designing an Azure Data Solution or similar certification is a strong plus
