Founded in 2015, our company exists to empower emerging-market entrepreneurs with financial choices.
We believe that we have the opportunity in our lifetime to connect every entrepreneur to the financial products they need to grow and prosper. We need exceptional people in key roles to make that happen. Joining us requires boldness, resilience, and innovation. You will need to embrace change and operate comfortably in uncharted territory.
Our platform acquires data from millions of GSM and mobile money users via globally distributed sources. Beyond ingestion, the data is processed, stored, and distributed to support event-driven analytics and ML pipelines on cloud-based infrastructure. As a Data Engineer, you’ll contribute to our mission of financial inclusion by extracting maximum predictive value from our data assets.
Our environment is designed to foster innovation and collaboration, anchored by our Data Hubs in South Africa and Kenya. We are remote-first: working remotely is our default. We also have co-working spaces in Cape Town and Nairobi for collaboration and connection, and for those who prefer to work from an office.
If you join us, you’ll:
- Design, implement, and maintain the data pipelines that constitute our data platform, enabling effective use of data across the organisation
- Provide feedback on team members’ output, encouraging skills development within the team
- Be responsible for creating robust, mission-critical batch and streaming data processing capabilities
- Work closely with Portfolio Managers and Data Scientists to understand the real-world problems we’re trying to solve
- Provide technical leadership and mentorship to junior engineers
- Be supported by senior leaders as you drive your own development
What you’ll need:
- BSc in Computer Science, Electrical Engineering, or an equivalent tertiary degree
- Real-world understanding of data processing and storage
- 2+ years of data pipeline design and development experience
- 2+ years of experience processing data with big-data technologies such as Cassandra, DynamoDB, InfluxDB, MongoDB, Presto, Apache Spark, Hadoop, Beam, Flink, Kafka, or Kinesis
- Experience in application design and development with at least one of the following languages: Python (preferred), Scala, Java
- RDBMS experience with any relevant technology, such as MySQL, PostgreSQL, Redshift, or SQL Server
- Working knowledge of the data product lifecycle
- Command of productionising and monitoring data pipeline workflows
- Experience with relational database administration, technical architectures and infrastructure components
- Understanding of CI/CD practices
- Productive within a Linux command-line environment
- Experience designing systems to process and curate large data sets
- Proven ability to contribute software as part of a team
- Effective communication of technical concepts
- Critical thinking under pressure
Bonus if you have:
- Experience working with messaging systems (RabbitMQ, SNS)
- Experience working with data pipeline orchestration (Airflow, Nifi, Streamsets)
- Experience working with production BI environments and tools (Tableau, Superset, Looker)
We ask a lot of each other, but we give a lot too.
Things you’ll love:
- Collaborating with smart, engaging people
- Working for impact
- Growing and learning continuously, with loads of encouragement and support
- Boldly taking risks as we navigate new challenges
- Flexible work practices enabling your best delivery
- Being autonomous and empowered to lead
- A stack of leading-edge technologies