Confluent is pioneering a fundamentally new category of data infrastructure focused on data in motion. Have you ever found a new favourite series on Netflix, picked up groceries curbside at Walmart, or paid for something using Square? That’s the power of data in motion in action—giving organisations instant access to the massive amounts of data constantly flowing throughout their business. At Confluent, we’re building the foundational platform for this new paradigm of data infrastructure. Our cloud-native offering is designed to be the intelligent connective tissue enabling real-time data, from multiple sources, to constantly stream across the organisation. With Confluent, organisations can create a central nervous system to innovate and win in a digital-first world.
We’re looking for self-motivated team members who crave a challenge and feel energised to roll up their sleeves and help realise Confluent’s enormous potential. Chart your own path and take healthy risks as we solve big problems together. We value having diverse teams and want you to grow as we grow—whether you’re just starting out in your career or managing a large team, you’ll be amazed at the magnitude of your impact.
About The Team
The mission of the Data Science/Data Engineering team at Confluent is to serve as the central nervous system for all things data at the company: we build analytics infrastructure, insights, models, and tools to empower data-driven thinking and optimize every part of the business. This position offers limitless opportunities for an ambitious data engineer to make an immediate and meaningful impact within a hyper-growth start-up, and to contribute to a highly engaged open source community.
About The Role
This is a partnership-heavy role. As a member of the Data team, you will enable various functions of the company, such as People, Talent, and Workplace, to be data-driven.
As a Data Engineer, you will take on big data challenges in an agile way. You will build data pipelines that make data accessible to the entire company, enabling data scientists, analytics and operations teams, and executives alike. You will also build data models to deliver insightful analytics while ensuring the highest standards of data integrity. You are encouraged to think outside the box and play with the latest technologies while exploring their limits. Successful candidates will have strong technical capabilities, a can-do attitude, and a highly collaborative working style.
Your Responsibilities
Here are some examples of our work:
- Designing, building, and launching extremely efficient and reliable data pipelines to move data across a number of platforms, including data warehouses and real-time systems.
- Developing strong subject matter expertise and managing the SLAs for those data pipelines.
- Setting up and improving BI tooling and platforms to help the team create dynamic tools and reporting.
- Assessing options and opportunities in order to provide recommendations to business partners and stakeholders.
- Partnering with Data Scientists and business partners, like analytics teams and system administrators, to develop internal data products that improve operational efficiency across the organization.
What We're Looking For
- Data Pipelines - Create new pipelines or rewrite existing pipelines using SQL, Python, or Spark
- Data Quality and Anomaly Detection - Improve existing tools to detect anomalies in real time and through offline metrics
- Data Modeling - Partner with analytic consumers to improve existing datasets and build new ones
What Gives You An Edge
- 3 to 6 years of experience in a Data Engineering role, with a focus on data warehouse technologies, data pipelines and BI tooling.
- Bachelor's or advanced degree in Computer Science, Mathematics, Statistics, Engineering, or a related technical discipline.
- Expert knowledge of SQL and of relational database systems and concepts.
- Strong knowledge of data architectures, data modeling, and the data infrastructure ecosystem.
- Experience with enterprise business systems such as Workday, Jobvite, or other comparable HR information systems (HRIS), applicant tracking systems (ATS), learning management systems (LMS), and survey tools.
- Experience with ETL pipeline tools like Airflow, and with code version control systems like Git.
- The ability to communicate cross-functionally, derive requirements, and architect shared datasets; the ability to synthesize, simplify, and explain complex problems to different types of audiences, including executives.
- The ability to thrive in a dynamic environment. That means being flexible and willing to jump in and do whatever it takes to be successful.
Come As You Are
- Experience with Apache Kafka
- Experience with Workato and Slack bots
- Experience partnering with People teams and/or working with People data, systems and analytics
- Interest in leveraging automation to drive productivity, improve processes, and create joyful employee experiences
- Knowledge of batch and streaming data architectures
- Product mindset to understand business needs and come up with scalable engineering solutions
At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact. Employment decisions are made on the basis of job-related criteria without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other classification protected by applicable law.
Click HERE to review our Candidate Privacy Notice, which describes how and when Confluent, Inc. and its group companies collect, use, and share certain personal information of California job applicants and prospective employees.