Job description

WE EMPOWER, YOU CREATE

We at Music Tribe believe that our sole purpose is to empower you to become the most creative you can be. We believe in obsessively empowering you through our Brand Tribes – Midas, Klark Teknik, Lab Gruppen, Lake, Tannoy, Turbosound, TC Electronic, TC Helicon, Behringer, Aston Microphones, Bugera, Oberheim, Auratone and Coolaudio. Empowering you to create and receive appreciation is the key to our happiness. That's why we exist.

Music Tribe celebrates over 30 years as a provider of Music Products and Solutions and works across 12 countries, employing 2,000 Customer-Obsessed Tribers. We want you to be part of our ambitious growth journey, empowering your team and fellow Tribers to create the absolute best.

Our People, Our Tribe

TRIBE - Teamwork, Respect, Integrity, Bold & Engage are our cultural cornerstones. We respect all our people and aspire to provide inclusive working experiences and an environment that reflects the audience we serve.

We are diverse; we come from different backgrounds and different countries. We are software engineers, designers, researchers, marketers, accountants, customer service agents, production operatives, technologists and more. We believe that a diverse organisation makes us stronger.

Our Purpose

We at Music Tribe's Advanced Signal Processing and Artificial Intelligence research team (RESE ASPAI) believe that delivering life-changing Signal Processing and Machine Learning Algorithms will empower our Customers as well as Music Tribe.

We are looking for a talented data engineer to join our highly skilled Advanced Signal Processing and Artificial Intelligence (ASPAI) research team. In the context of the digital transformation of the music and audio equipment industry, together we research and develop the audio processing features of the future, with direct impact across the whole range of Music Tribe products. This involves managing large amounts of audio data to serve machine learning R&D.

If you have a Bachelor's, Master's or higher degree in computer science, electrical engineering or data science, followed by two or more years of industry experience in data management and data engineering for machine learning, please join our team of talented individuals.

We look forward to talking to you soon!

Roles and Responsibilities

  • Design, develop and maintain database solutions for a large corpus of data to support the machine learning, DSP and task evaluation aspects of research in the music domain
  • Strategize, plan and develop the database infrastructure for data acquisition, storage and management
  • Develop data labelling infrastructure and methods, including collaboration with overseas workforce, crowdsourcing and other scalable resourcing methods
  • Develop efficient pipelines (Extract-Transform-Load, ETL) for data pre-processing and loading
  • Create and update policies and guidelines to ensure data governance, security and compliance in the context of data processing for machine learning
  • Plan and manage the required resources

Qualifications, Minimum

  • Minimum BSc or BEng in Engineering or a relevant field, e.g. Statistics, Maths, Data Science or Computer Science, with relevant study projects, and/or a minimum of two years' industry experience in the field
  • Excellent Python programming skills
  • Deep knowledge of database management, e.g. SQL/NoSQL databases
  • Knowledge of distributed storage and database solutions (e.g. Hadoop, S3)
  • Knowledge of system design, algorithms and data structures
  • Personal attributes: strategic, structured, collaborative, resilient

Qualifications, Preferred

  • MSc/PhD in Engineering or a relevant field, e.g. Statistics, Maths, Data Science or Computer Science
  • Experience in building and maintaining scalable ETL pipelines, ideally for machine learning applications
  • Experience in data processing frameworks such as Spark/Beam
  • Experience in data visualization
  • Experience in managing audio related data
  • Knowledge of data-related laws and regulations, such as copyright restrictions and GDPR
  • Knowledge of multithreading for optimizing data pipelines

Tools

  • Python, SQL, MongoDB, Hadoop, Azure/AWS/GCP, Agile, Git, Azure DevOps, Spark, Databricks

Metrics

  • Delivery against agreed specifications for the design, development and maintenance of databases
  • Delivery to agreed deadlines

Why work for us?

  • Annual leave provision, plus public holidays
  • Pension / retirement fund contributions
  • Health Care
  • Hybrid & remote working options in some locations
  • We measure our People Engagement
  • We run quarterly team building events
  • We are invested in learning & development
  • We reward daily through digital recognition systems
