Keyless is a deep-tech cybersecurity company founded by renowned security experts, experienced technologists, and business leaders, bringing more than 10 years of research to life.
Keyless is pioneering the world's first privacy-preserving biometric authentication and personal identity management platform, which combines multi-modal biometrics with advanced cryptography and leverages a distributed cloud architecture.
Keyless's zero-knowledge biometrics solution eliminates the need to store and manage sensitive information, enabling businesses to adopt passwordless authentication, protect their remote workforce, and enable strong customer authentication with just one look.
As a Data Scientist, you will dive deep into the technology at the foundation of Keyless. You can exploit data to extract meaning and interpretation; you love to build mathematical and statistical models and to design and implement algorithms that yield insights and support data-driven decisions and solutions. You will be part of the Machine Learning Team at Keyless, working on vision and biometric data pipelines and deep learning models. This position requires strong analytical rigor and a hacker mindset.
You know what is needed to build data-driven features that positively impact, improve, and evolve the Keyless products. You are a data storyteller: you see information and patterns where others see just bits.
What will you do?
Be an early member of a high-performing team of software engineers and machine learning researchers building a privacy-first biometric authentication and identity management platform.
Take ownership, be creative, and think outside the box to invent and build solutions to real-world customer problems.
Work with datasets: scraping, downloading, exploring, processing, and understanding them.
Build and improve deep learning models for biometrics (detection/recognition tasks with face, liveness, behavioral data, and more) using common Python frameworks and tools (e.g. TensorFlow, Keras, MXNet, JupyterLab, Papermill, Google Cloud, AWS) in Linux cloud environments.
Run cross-platform machine learning experiments, evaluate and analyse results, and prepare reports.
Script and automate data pipelines.
Work in the dynamic environment of an early-stage startup, with a focus on prototyping and fast delivery.
Nice to have: