Job Responsibilities
Architect real-time data streaming solutions on AWS
Design and implement a streaming ingestion pipeline that loads data from bedside devices (both discrete vitals and waveform data)
Perform throughput analysis to determine whether streaming or batch processing is more cost-effective for each data source
Apply software development best practices, such as version control with Git, CI/CD, and release management, to build and deploy streaming services and pipelines
Build automated data validation processes to ensure the quality and integrity of the datasets
Implement appropriate monitoring and observability solutions
Monitor cloud application performance, identify potential bottlenecks, and resolve performance issues
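The automated data validation responsibility above could, for illustration, take a shape like the following minimal Python sketch. The field names and plausibility ranges here are hypothetical assumptions, not taken from the posting; real bounds would come from clinical requirements.

```python
# Hypothetical plausibility bounds for discrete vitals from bedside devices.
VITAL_RANGES = {
    "heart_rate": (20, 300),   # beats per minute
    "spo2": (50, 100),         # percent oxygen saturation
    "resp_rate": (4, 80),      # breaths per minute
}

def validate_vital(record: dict) -> list[str]:
    """Return a list of validation errors for one vitals record (empty if valid)."""
    errors = []
    # Every record must identify its source device and observation time.
    for field in ("device_id", "timestamp"):
        if field not in record:
            errors.append(f"missing required field: {field}")
    # Range-check any vitals present in the record.
    for name, (lo, hi) in VITAL_RANGES.items():
        if name in record and not (lo <= record[name] <= hi):
            errors.append(f"{name}={record[name]} outside plausible range [{lo}, {hi}]")
    return errors
```

A check like this would typically run inside the streaming pipeline itself, routing failing records to a dead-letter queue rather than dropping them silently.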
Job Requirements
Experience working with Kafka, Spark Structured Streaming, Snowflake, Grafana, Prometheus, and AWS services (S3, Kinesis, Timestream, Redshift, CloudWatch, EKS)
Experience working with HL7 data
Experience coding in Python and/or Java
Strong fundamentals in SQL, data warehousing, and data lakes
Hands-on experience with Linux operating systems (RHEL/Debian)
Knowledge of version control systems such as Git
Experience consuming and building APIs
Experience working within Agile development methodologies
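As an illustration of the HL7 requirement, here is a minimal sketch of parsing an HL7 v2 message into segments and fields without a library (the sample ORU^R01 fragment is fabricated for illustration; production code would normally use a dedicated HL7 library):

```python
def parse_hl7_v2(message: str) -> dict[str, list[list[str]]]:
    """Split an HL7 v2 message into segments keyed by segment ID.

    HL7 v2 separates segments with carriage returns and fields with '|'.
    Repeated segments (e.g. multiple OBX observations) are preserved as a list.
    """
    segments: dict[str, list[list[str]]] = {}
    for line in filter(None, message.split("\r")):
        fields = line.split("|")
        segments.setdefault(fields[0], []).append(fields)
    return segments

# A fabricated ORU^R01 fragment carrying one heart-rate observation.
sample = (
    "MSH|^~\\&|MONITOR|ICU|EHR|HOSP|202401010000||ORU^R01|123|P|2.5\r"
    "PID|1||12345^^^HOSP^MR||DOE^JANE\r"
    "OBX|1|NM|8867-4^Heart rate^LN||72|/min|||||F"
)
parsed = parse_hl7_v2(sample)
```

Structures like this map naturally onto the discrete-vitals ingestion pipeline described in the responsibilities: each OBX segment becomes one validated record on the stream.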