Designs, develops, modifies, adapts and implements solutions to information technology needs through new and existing applications, systems architecture and applications infrastructure.
Reviews system requirements and business processes; codes, tests, debugs and implements software solutions.
- Optimize data lake performance by identifying and resolving production and application development problems.
- Act as Data Steward, working with developers to ensure Data Governance best practices. Help investigate and resolve data anomalies, including data quality issues and ambiguous data definitions. Recommend data integrity checks and controls to ensure data quality.
- Define data/information architecture standards, structure, attributes and nomenclature of data elements, and apply accepted data content standards to technology projects.
- Develop data retention practices and system lifecycle for Nasdaq's index applications based on Nasdaq's data retention policies.
- Set up reporting tools and build reports and ad hoc functionality, giving users access to their data.
- Experience with master data management, data governance, data security, data quality and related tools desired.
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional/non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
- Work with QA test analysts to ensure test coverage (including integration and regression testing).
- Develop new program logic and/or assemble standard logic modules to create new applications.
Qualifications & Experience
- Good understanding of AWS and open-source big data technologies such as Airflow, Terraform, AWS Glue, AWS Lambda and Spark SQL
- Ability to implement both batch and streaming data pipelines in AWS, plus change data capture (CDC) experience
- Experience with Databricks preferred
- Experience programming Spark in Scala, and proficiency in SQL, including writing complex queries
- Strong data analysis and troubleshooting skills
- Domain knowledge of capital markets is a plus
- Knowledge of shell scripting and other languages, including Python, R and Java, is a plus
- At least 2 years of hands-on experience with data engineering programs on AWS
- Minimum 1 year of experience with Spark and Scala
- Experience with relational databases (preferably Oracle)
- Good knowledge of Linux OS and Shell scripting
- Experience working on complex distributed information systems
- Experience with version control systems, preferably SVN or Git
- Strong work ethic in a diverse, mission-critical 24x7 environment with multiple vendor and customer relationships
Nasdaq is an equal opportunity employer. We positively encourage applications from suitably qualified and eligible candidates regardless of age, color, disability, national origin, ancestry, race, religion, gender, sexual orientation, gender identity and/or expression, veteran status, genetic information or any other status protected by applicable law.