Data Engineer

Location: Lisboa

*** Mention DataYoshi when applying ***

At Bose Corporation (Permanent), in Lisbon, Portugal
Expires at: 2021-04-11

About Us

At Bose, better sound is just the beginning. We’re passionate engineers, developers, researchers, retailers, marketers … and dreamers. One goal unites us — to create products and experiences our customers simply can’t get anywhere else. We are driven to help people reach their fullest human potential. Creating technology to help people to feel more, do more, and be more. We are highly motivated and curious, and we come to work every day looking to solve real problems and make the best experiences for our customers possible.
The Bose Data Engineering team is responsible for the design, development, and enhancement of Bose Data Platforms (Analytics & Customer Data Platforms), leading and supporting Advanced Analytics & AI/ML workloads. This team is highly impactful and a key enabler of Bose's digital journey, playing a central role in the data-driven transformation.
What will you be working on?
As a Data Engineer focusing on Big Data, you will develop Data Platforms that turn data into actionable insights as part of our digital journey. You will enable capabilities and provide business partners with the tools to make their decision-making process faster and more efficient. As part of an agile delivery team, you will design, develop, deploy, and support the data ingestion pipelines and data access solutions for our Data Platform ecosystem. This role requires knowledge and hands-on experience with large-scale data processing (real-time/batch) and ML technologies used throughout the entire application stack, including Spark, Databricks, Snowflake, and Python/Airflow.

  • Design and develop ETL pipelines for connected devices, web applications, and mobile applications that support the customer experiences.
  • Work with platform architects on software and system optimizations, helping to identify and remove potential performance bottlenecks.
  • Innovate new and better ways to create solutions that add value and delight the end user, with a penchant for simple, elegant design in every aspect, from data structures to code to UI and systems architecture.
  • Stay up to date on relevant technologies, plug into user groups, and understand trends and opportunities to ensure we are using the best techniques and tools.
  • Collaborate with AWS Cloud Architects to optimize and evaluate scalable and serverless solutions.
  • Work in multi-functional agile teams to continuously experiment, iterate and deliver on new data product objectives.

Main requirements

Qualifications (demonstrated competence):

  • You have developed and implemented full data lifecycle management for multiple data pipelines in a complex Data Platform environment.
  • You know how to work with high-volume heterogeneous data, preferably in distributed systems.
  • You are knowledgeable about data modeling, data access, and data storage techniques.
  • You have a command of programming languages used to collect and manipulate data, such as Python and SQL.
  • You have worked with a variety of cloud and data solutions, such as AWS, Snowflake, Kafka, Databricks, Hadoop, Spark, and Airflow.
  • You appreciate agile software processes, data-driven development, reliability, and responsible experimentation.
  • You have some experience or knowledge implementing data security and privacy in a Cloud environment.
  • Experience architecting, designing, and building a scalable ML pipeline using AWS SageMaker or MLflow is desirable.
  • Experience building Machine Learning deployment and monitoring pipelines as part of this role's overall MLOps focus.


  • 3 - 5 years working in a Data Engineering / Machine Learning or software systems development environment.
  • 2+ years developing, deploying, and maintaining high-volume production systems supporting Advanced Analytics and ML workloads.
  • Bachelor's or Master's degree in Computer Science, or equivalent work experience.

Nice to have

  • Experience implementing a Cloud Data Platform, with emphasis on building out a Customer 360 platform.
  • Experience using and building out a graph database (Neptune or Neo4j).

Benefits & Perks

  • Highly competitive benefit package
  • State of the art technological environment
  • Employee product discounts
  • Free coffee and fruit
  • Create and shape our local company culture with the support of a fantastic global group
  • Continuous training and career development