Our client is an international data science company mapping the world's information on startups, scaleups, and their technologies.
They operate a proprietary DAG-based, stateful stream-processing information retrieval system (think of a search engine), running containerized with distributed, scalable agents.
The stack of choice is Python, Kafka, Scrapy, PostgreSQL, Elasticsearch, and Docker.
This is a remote, full-time position.

IF YOU ARE:
Passionate about working on the world's largest startup search engine, and keen to use your talent planning, building, and operating critical information systems that convert raw data into knowledge - then you are in the right place!

YOU BRING:
- 5+ years of work experience with big data / data engineering / DAG stateful stream processing / data pipelines
- Strong practical knowledge of Python and its libraries
- Experience with Elasticsearch, PostgreSQL, and Docker
- Familiarity with streaming platforms, e.g. Kafka
- Enjoyment of processing large, unstructured datasets from multiple sources
- A thorough understanding of software development principles
- Good written and spoken English
WHAT YOU GET IN RETURN:
- Lead the design and operation of scalable data systems that map the world's information on startups, scaleups, and technologies
- Work with the global innovation and startup intelligence leader
- Join an environment where your decisions shape the future of the entire organization
- Work fully remote, with a strong distributed team that stays connected
- Regular visits to the HQ in Vienna (AT) and other company hotspots
- Enjoy 25 paid work-free days per year
- Full-time remote position (40 hrs/week)
- Competitive package