Data Engineer

Company: Oneview Healthcare
Location: Dublin, County Dublin

*** Mention DataYoshi when applying ***

The Company

Oneview Healthcare is a global software company working with hospitals and senior care facilities around the world. Our platform helps caregivers make real-time care decisions while improving care coordination and workflows. At Oneview, we empower patients to become active participants in their own healthcare. With hospital installations all over the globe, we know what it takes to organize systems, data, and people to add value and improve the quality of the healthcare experience.


The Role

The successful candidate will join our engineering team, based across Blackrock (Dublin), Melbourne, and Kyiv.


Data is recognized as invaluable to what we deliver as a product. We are building a solid, automated data platform that will deliver data to both clients and internal customers. The successful candidate will be involved in building a quality data product that will make a positive impact for our customers, our patients, and our product team. Our products are all about making patients' hospital stays better, and we want to use data to drive our decision making.


You should be genuinely interested in data technology. You are naturally curious, like to learn, and love solving problems. You will challenge the status quo, look for opportunities for improvement everywhere, and be a great team member.


Focusing on best practice, we operate in agile, cross-functional teams that are empowered to make decisions and deliver fast and often, shrinking the gap between deliveries with each release of our data platform.


Objective:

With data in multiple client sites all over the world and in the cloud, we are building a centralized data platform that will enable customers and internal business users to gain valuable insights.


Responsibilities & Accountabilities:


  • You will be responsible for building an end-to-end data platform that spans ELT/ETL through to reporting.
  • You will primarily focus on building ETL pipelines on Azure, specifically with tools such as Azure Data Factory, Azure SQL, Azure Databricks (PySpark), and Azure Synapse Analytics (see the sketch after this list).
  • You will be modelling and ingesting data into a dimensional data warehouse (Kimball/star schema).
  • You will be working with batch exports, real-time streams (Stream Analytics, Event Hubs), and APIs as data sources. The primary data formats we use are JSON and Parquet.
  • Power BI is our visualization tool; you will be building data models, dataflows, dashboards, visuals, and reports in Power BI.
  • You will generate metrics and reports and present them periodically.
  • You will combine analytics methods with advanced data visualizations to produce dashboards using third-party tools.
  • You will have the opportunity to explore AI/ML, identifying use cases and building out solutions.
  • Business as usual needs to happen, so we are looking for someone with an inquisitive mindset who just wants to get the job done.
  • You will work within an agile environment, so the nature of our work can pivot.
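
For flavour, here is a minimal sketch of the kind of pipeline this role involves: ingesting raw JSON with PySpark on Databricks and writing a Parquet dimension table for a star schema. It is purely illustrative; the paths, column names, and table name are hypothetical, not part of our actual platform.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dim_patient_load").getOrCreate()

# Read raw JSON exported from a source system (hypothetical landing path).
raw = spark.read.json("/mnt/landing/patients/*.json")

# Shape a simple Kimball-style dimension: deduplicate on the business key
# and stamp a load timestamp for auditing.
dim_patient = (
    raw.select("patient_id", "ward", "admitted_at")
       .dropDuplicates(["patient_id"])
       .withColumn("load_ts", F.current_timestamp())
)

# Persist as Parquet, one of the primary formats mentioned above.
dim_patient.write.mode("overwrite").parquet("/mnt/warehouse/dim_patient")
```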

Skills & Requirements


Data Modelling

  • Strong dimensional data modelling (Kimball, SCDs, auditing, etc.)
  • Good understanding of relational models, normal forms, and JSON structures.
  • Strong SQL knowledge: recursion, window functions, CTEs, etc. (see the illustration below)
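
As a hedged illustration of the SQL skills above, the snippet below runs a CTE with a window function through Spark SQL to pick each patient's latest reading. The `readings` table and its columns are hypothetical and assumed to be already registered in the Spark catalog.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql_skills_demo").getOrCreate()

# Assumes a hypothetical table `readings` is registered in the Spark
# catalog with columns patient_id, reading_value, reading_ts.
latest = spark.sql("""
    WITH ranked AS (                       -- common table expression (CTE)
        SELECT patient_id,
               reading_value,
               reading_ts,
               ROW_NUMBER() OVER (         -- window function
                   PARTITION BY patient_id
                   ORDER BY reading_ts DESC
               ) AS rn
        FROM readings
    )
    SELECT patient_id, reading_value, reading_ts
    FROM ranked
    WHERE rn = 1                           -- latest reading per patient
""")
latest.show()
```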


Data Pipelines / ETL

  • Working knowledge of tools such as Databricks, Spark, PySpark, Spark SQL, Hive (HQL), Azure HDInsight, or Cloudera
  • Working knowledge of orchestration tools such as Azure Data Factory, Apache Airflow, AWS Glue, SSIS, etc.
  • Previous experience with RDBMSs (SQL Server, Oracle, Sybase, PostgreSQL), MPP systems (Teradata, Netezza), or NoSQL/big-data platforms (Azure HDInsight, Azure SQL Data Warehouse/Azure Synapse Analytics, Cloudera Impala) will also be considered.


Databases / Data Warehousing

  • Database and data warehouse development, including creating complex functions, stored procedures, materialized views, and partition schemes.
  • Experience with basic DBA tasks (DR, indexing, performance tuning, partitioning)
  • Understanding of column-oriented data formats (Parquet, ORC).
  • Good understanding of optimization concepts such as indexes, partitioning, and compression

Cloud Technology

  • A good background in, or working knowledge of, Azure
  • Working knowledge of Storage Accounts, Containers, SAS keys, etc.
  • Knowledge of Azure Functions, Azure Automation, and Logic Apps

Visualisation


  • Working knowledge of OLAP and previous work experience using Power BI.
  • A good understanding of which visuals to use based on the data at hand.
  • A good understanding of the DAX and M query languages used in Power BI.
  • Strong knowledge of data modelling, data cleansing, and data manipulation in the Power Query Editor.
  • Knowledge of performance tuning and ML visuals in Power BI would be beneficial.

DevOps

  • Good understanding of Git: branches, commits, pull requests
  • Knowledge of a scripting language such as PowerShell, Bash, or cmd
  • A good understanding of DevOps/CI-CD principles
  • DevOps tools such as Azure DevOps, Octopus Deploy, TeamCity, Bamboo, etc.


Soft Skills

  • Good communication skills (ability to present reports and findings) are a must
  • Appreciation of data and an understanding of how good data can benefit us as a business.
  • Strong attention to detail in all elements of your work
  • Degree in Computer Science or a related discipline
  • Good English skills are essential

