Job description

The role is part of a team supporting the integration of internal systems and external data sources, as well as the data and machine learning pipelines that deliver an array of data and analytics solutions to our clients and the industry. This position can be 100% remote on a permanent basis.

What you bring

  • Experience leading and performing cloud-native data engineering, preferably on Azure (5+ years)
  • Deep experience with relational, analytical, and columnar databases (5+ years)
  • Proven strength in SQL, data modeling, data engineering, and data warehousing (5+ years)
  • Deep experience in SQL Server (3+ years)
  • Proficiency in streaming, micro-batch, and batch data transport pipelines (3+ years)
  • Programming in Python and/or Java (3+ years)
  • Experience with Git, Continuous Integration/Delivery, and related tools (3+ years)
  • Experience designing solutions for structured and unstructured data (2+ years)
  • Experience with Azure Data Factory (1+ years)
  • Strong attention to unit testing, integration testing, and data quality testing
  • Experience with workflow orchestration tools
  • Experience delivering in a team environment using the SAFe Agile framework
  • Experience as a Data Engineer in a large enterprise or commercial data environment
  • Experience delivering machine learning pipelines, including knowledge of Software 2.0 concepts
  • Knowledge and experience setting up deep learning pipelines and applying analytics libraries (e.g., PyTorch, TensorFlow)
  • Familiarity with the SQL Server BI stack (SSIS, SSAS, SSRS); reporting tools (e.g., Tableau, Sisense, or Power BI); and analytics platforms (e.g., Databricks, H2O.ai)
  • Strong analytical, troubleshooting, and problem-solving skills; experience analyzing and understanding business/technology system architectures, databases, and client applications
  • Ability to work with business or technology users to define and gather analytics and data requirements

Job Responsibilities

  • Design, develop, and implement large-scale, high-volume, high-performance data lakes and data warehouses
  • Build ETL/ELT pipelines that take data from various sources and create a unified enterprise data model for analytics and reporting
  • Build, deploy, and support batch and real-time, fault-tolerant, self-healing data pipelines
  • Work in close collaboration with product management and peer system and software engineering teams to clarify requirements and translate them into robust, scalable, operable solutions that work well within the overall data architecture
  • Actively pursue modernization and automation opportunities, even when new information and emerging technology capabilities (e.g., Software 2.0, specialized hardware) challenge the current strategy and direction
  • Conduct research on emerging technology solutions and standards in support of business needs
  • Collaborate with team leaders to design solutions and manage technology change

Education

  • Bachelor's or Master's degree in computer science, analytics, mathematics, or engineering