Our client is a pioneer in digital music and a leading provider of music streaming technologies and services for businesses, including some of the world's most recognized and respected consumer services companies. The client operates their platform in 35 countries, with a premium subscription service giving millions of subscribers unlimited ad-free access to the music they love anywhere, anytime, and on any device – online or offline. The idea is to combine the iconic history of one of the most recognizable music brands in the world with their vast experience, their technological platform, and their position as the industry's first subscription streaming service provider.
We invite you to our company, not a project
Continuing to innovate, the client has just entered into an exciting partnership with a music distribution company – the only licensed Virtual Reality music platform – to create a more immersive and connective experience, harnessing the power of virtual reality. We are entering a new chapter and looking to build on both companies' experience in the digital music & entertainment sector, creating a differentiated product offering that will appeal to true music fans. This is an opportunity to help a long-established business undertake a digital transformation using industry best practices.
We create a restriction-free environment
We let you concentrate on your job
Multi-team structure. This particular data team will consist of four people: a Data Analyst, a Data Engineer, a Back-end Developer, and a DevOps Engineer.
By the way, do you speak English?
We’re looking for an experienced GCP Data Engineer to join the team to support an ongoing large-scale migration from on-premises to GCP and build the next generation of data delivery systems. The Data team will be responsible for migrating data to production; the historical data amounts to terabytes. The client's internal team will support the current version during the migration process.
Design and implement a new version of the Data Platform based on GCP
Migrate from Teradata to BigQuery, moving from on-premises to the cloud and decomposing legacy systems
Implement and support near-real-time and batch data processing pipelines, including back-end integration
Work collaboratively on designing and implementing modern, scalable data solutions using a range of new and emerging technologies from the Google Cloud Platform
Manage both real-time and batch data pipelines. Our technology stack includes Spark, Kafka, GCP Pub/Sub, and Teradata, among others
Showcase your GCP data experience when communicating with stakeholders, turning their requirements into technical data solutions
Expertise in the main components of continuous data delivery: setup, design, and delivery of data pipelines (testing, deployment, monitoring, and maintenance)
Expertise in data architecture within web/mobile environments, web/internet-related technologies, architecture across Software-as-a-Service, Platform-as-a-Service, Infrastructure-as-a-Service and cloud productivity suites
Strong engineering background, with experience in Python, SQL, and Spark ML or similar frameworks used to ship data processing pipelines at scale
Demonstrated experience with and solid knowledge of Google Cloud Platform: Cloud Composer, BigQuery, Dataproc, etc.
Demonstrated experience with Infrastructure-as-Code (IaC) using Terraform
Basic knowledge of Teradata
Capable of working independently with minimal supervision in a remote team configuration
Ability to work with, communicate effectively with, and influence stakeholders across internal and external engineering teams, product development teams, sales operations teams, and external partners and consumers
Good spoken English
Nice to have
Demonstrated experience in data migration projects, ideally migration from Teradata to BigQuery