Data Engineer (Stream Processing)

Location: Lisboa

*** Mention DataYoshi when applying ***


Must have:

Apache Spark

Apache Kafka

Amazon Web Services

Other Required:

Data-driven Decision Making

Data Center

Data Integration

Big Data



About Us

At Bose, better sound is just the beginning. We’re passionate engineers, developers, researchers, retailers, marketers … and dreamers. One goal unites us — to create products and experiences our customers simply can’t get anywhere else. We are driven to help people reach their fullest potential, creating technology that helps them feel more, do more, and be more. We are highly motivated and curious, and we come to work every day looking to solve real problems and create the best possible experiences for our customers.

The Bose Data Engineering team is responsible for the design, development, and enhancement of the Bose Data Platforms (Analytics & Customer Data Platforms), leading and supporting Advanced Analytics and AI/ML workloads. This team is highly impactful and a key enabler of Bose’s digital journey, playing a central role in the data-driven transformation.

What will you be working on?

As a Data Engineer focusing on Stream Processing, you will help develop Data Platforms that turn data into actionable insights as part of our digital journey. You will enable new capabilities and provide business partners with tools that make their decision-making faster and more efficient. As part of an agile delivery team, you will design, develop, deploy, and support the data ingestion pipelines and data access solutions for our Data Platform ecosystem. This role requires knowledge of and hands-on experience with large-scale data processing and ML technologies, in a tech stack composed of the Kafka ecosystem, Spark, Snowflake, and Python/Scala.

  • Build and maintain data infrastructure focused on Stream Processing, making use of the Kafka ecosystem.
  • Implement and maintain producers and consumers of data sources owned by the Data Engineering team, ensuring data quality and schema management.
  • Support other teams in the implementation of producers and consumers for their data sources, facilitating the creation of stream processing data pipelines.
  • Stay up to date on relevant technologies, plug into user groups, and understand trends and opportunities to ensure we are using the best techniques and tools.
  • Collaborate with AWS Cloud Architects to optimize and evaluate scalable and serverless solutions.


What we offer:
  • Highly competitive benefits package
  • State-of-the-art technological environment
  • Employee product discounts
  • Create and shape our local company culture with the support of a fantastic global group
  • Continuous training and career development


Thank you in advance for your application. After reviewing all applications, we will contact only the candidates who fulfill the requirements for this position. If you are not contacted within the next 15 days, we will keep your application on file for future opportunities that fit your profile.


Qualifications (Demonstrated Competence):
  • Degree in Computer Science (or equivalent).
  • You have developed and implemented full data lifecycle management for multiple stream processing data pipelines in a complex Data Platform environment.
  • You know how to work with high-volume heterogeneous data, preferably with distributed systems.
  • You are knowledgeable about data modeling, data access, and data storage techniques.
  • You have a command of programming languages used to collect and manipulate data, such as Python and SQL.
  • You have worked with a variety of cloud and data solutions, such as the Kafka ecosystem, AWS, Snowflake, Spark, and Airflow.
  • You appreciate agile software processes, data-driven development, reliability, and responsible experimentation.
  • You have some experience with, or knowledge of, implementing data security and privacy in a cloud environment.
Highly Desirable But Not Required Skills Include:
  • Experience with cloud computing (Amazon Web Services preferred)
  • Experience implementing a Cloud Data Platform, with emphasis on building out a Customer 360 platform.
  • Experience using and building out a graph database (Neptune or Neo4j)

To apply for this job, you must be willing to work within the time zones between UTC+0 and UTC-5.

