Who You'll Work With
You will work in multi-disciplinary, global, Life Sciences-focused environments, harnessing data to deliver real-world impact for organizations worldwide. Our Life Sciences practice focuses on helping clients bring life-saving medicines and medical treatments to patients. It is one of the fastest-growing practices at the Firm and comprises a tight-knit community of consultants, research, solution, data, and practice operations colleagues. It is also one of the most globally connected sector practices, offering ample global exposure.
The LifeSciences.AI (LS.AI) team is the practice's assetization arm, focused on creating reusable digital and analytics assets to support our client work. LS.AI builds and operates tools that support senior executives in pharma and device manufacturers, for whom evidence-based decision-making and competitive intelligence are paramount. The team works directly with clients across Research & Development (R&D), Operations, Real World Evidence (RWE), Clinical Trials, and Commercial to build and scale digital and analytical approaches that address their most persistent priorities.
What You'll Do
You are a highly collaborative individual and enjoy solving problems that focus on adding business value. You have a sense of ownership and enjoy hands-on technical work. Our values resonate with yours.
You will collaborate with business stakeholders, engineers, and internal teams to build and implement pharma-focused data products (reusable assets) and solutions, and to deliver them directly to clients.
You will also be responsible for developing deep Life Sciences domain understanding in at least one of the following areas: Manufacturing, Procurement, Supply Chain, Chemical Discovery, Molecular/Materials Optimization, Clinical Trial Design and Operationalization, Real World Evidence, and Commercial.
Other Key Responsibilities Will Include
- Work with our clients to model their data landscape, obtain data extracts, and define secure data-exchange approaches
- Acquire, ingest, and process data from multiple sources and systems into Big Data platforms
- Understand, assess, and map the data landscape
- Maintain our Information Security standards on the engagement
- Collaborate with our data scientists to map data fields to hypotheses, and curate, wrangle, and prepare data for use in their advanced analytical models
- Define the technology stack to be provisioned by our infrastructure team
- Build modular pipelines to construct features and modelling tables
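As an illustrative sketch (not the team's actual codebase), the modular-pipeline responsibility above might look like the following, where each stage is a small, independently testable function; all field names (`patient_id`, `dose_mg`, `high_dose`) are hypothetical:

```python
# Hypothetical sketch of a modular feature pipeline: ingest raw records,
# apply a data-quality rule, then construct a modelling table.
# Every stage can be swapped, reused, or unit-tested on its own.

def ingest(raw_rows):
    """Parse raw source records (e.g. CSV rows) into typed dicts."""
    return [{"patient_id": r[0], "dose_mg": float(r[1])} for r in raw_rows]

def clean(rows):
    """Drop records failing a simple quality rule (non-positive dose)."""
    return [r for r in rows if r["dose_mg"] > 0]

def build_features(rows):
    """Construct a modelling table: one feature row per patient."""
    return [{**r, "high_dose": r["dose_mg"] >= 100.0} for r in rows]

def run_pipeline(raw_rows):
    """Compose the stages into one end-to-end run."""
    return build_features(clean(ingest(raw_rows)))

raw = [("p1", "120.0"), ("p2", "-5"), ("p3", "40")]
table = run_pipeline(raw)
```

Keeping ingestion, cleaning, and feature construction as separate functions is what makes a pipeline "modular": stages can be recombined for new clients or data sources without rewriting the whole flow.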
What You’ll Learn
- How successful projects across Life Sciences use cases are delivered, drawing on past deliveries of end-to-end pipelines
- How to build products alongside the core engineering team and evolve the engineering process to scale with data, handling complex problems and advanced client situations
- How to wrangle, clean, and transform data by working alongside the Data Science team, which focuses on modelling the data
- How to apply new technologies and problem-solving skills in a multicultural and creative environment
You will work on the frameworks and libraries that our teams of Data Scientists and Data Engineers use to progress from data to impact. You will guide global companies through data science solutions to transform their businesses and enhance performance across industries including healthcare, automotive, energy and elite sport.
- Real-World Impact – We provide unique learning and development opportunities internationally.
- Fusing Tech & Leadership – We work with the latest technologies and methodologies and offer first class learning programs at all levels.
- Multidisciplinary Teamwork - Our teams include data scientists, engineers, project managers, UX and visual designers who work collaboratively to enhance performance.
- Innovative Work Culture – Creativity, insight and passion come from being balanced. We cultivate a modern work environment through an emphasis on wellness, insightful talks and training sessions.
- Striving for Diversity – With colleagues from over 40 nationalities, we recognize the benefits of working with people from all walks of life.
Qualifications
- Bachelor's degree in computer science or related field
- 4+ years of relevant hands-on experience dealing with data and/or software
- Meaningful experience with at least two of the following programming languages: Python, Scala, Java
- Knowledge of ETL concepts and SQL is mandatory
- Proven experience with a distributed processing framework such as Spark or Dask
- Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets; experience working on Generative AI modules is a plus
- Ability to write clean, maintainable, scalable and robust code in an object-oriented language (Python, Java) in a professional setting
- Practical knowledge of software engineering concepts and best practices, inc. testing frameworks / libraries, automation frameworks
- Understanding of Information Security principles to ensure compliant handling and management of client data
- Familiarity or hands-on experience with testing libraries (e.g. Pytest)
- Familiarity or hands-on experience with any cloud platforms (AWS, Azure, or GCP)
- Familiarity or hands-on experience with containerization technologies (Docker, Docker-compose)
- Familiarity or hands-on experience with automation frameworks (CircleCI, Jenkins, GitHub Actions, Drone, etc.)
- Familiarity or hands-on experience with data visualization tools (PowerBI, Tableau, etc.)
- Familiarity or hands-on experience with pipeline orchestration frameworks or orchestrators in general (Airflow, Argo Workflows, Kedro, Dagster etc.)
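To make the ETL-and-SQL expectation above concrete, here is a minimal, hedged sketch using Python's built-in `sqlite3` module; the table and column names (`raw_trials`, `curated_trials`, `enrolled`) are purely illustrative:

```python
import sqlite3

# Minimal extract-transform-load sketch: pull rows from a source table,
# derive a feature in Python, and load the result into a target table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_trials (site TEXT, enrolled INTEGER)")
conn.executemany("INSERT INTO raw_trials VALUES (?, ?)",
                 [("Berlin", 40), ("Boston", 0), ("Tokyo", 25)])

# Extract: keep only sites with active enrolment.
rows = conn.execute(
    "SELECT site, enrolled FROM raw_trials WHERE enrolled > 0").fetchall()

# Transform: derive a simple size band for each site.
curated = [(site, enrolled, "large" if enrolled >= 30 else "small")
           for site, enrolled in rows]

# Load: write the curated table for downstream analytics.
conn.execute(
    "CREATE TABLE curated_trials (site TEXT, enrolled INTEGER, band TEXT)")
conn.executemany("INSERT INTO curated_trials VALUES (?, ?, ?)", curated)
total = conn.execute("SELECT COUNT(*) FROM curated_trials").fetchone()[0]
```

In practice the same extract/transform/load pattern scales up to distributed engines such as Spark and to orchestrators such as Airflow, where each step becomes a scheduled, monitored task.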