Job description

About our client:
Fast-growing blockchain startup.

Responsibilities:
Create and maintain optimal data pipelines to streamline the ingestion and organization of crypto-related data.
Design, build and maintain real-time data pipelines that process blockchain transactions from dozens of different blockchain networks.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
Work alongside the Data Science team to curate and prototype new datasets to tackle emerging problems.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Transform, clean, and maintain the data so that other business units, such as Research and Finance, can easily access and analyze it.
Develop data models that translate complex, esoteric blockchain data into standardized formats that are analytics-ready.
Design automated systems that evaluate and parse the results of smart contract calls.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies.

Requirements / Skill Sets:
Strong technical background that includes 2+ years of experience working in a senior engineering position with data infrastructure/distributed systems.
A good understanding of blockchain and cryptocurrencies is preferred.
Advanced SQL knowledge and experience with relational databases, including query authoring, as well as working familiarity with a variety of databases.
Experience building and optimizing 'big data' pipelines, architectures, and datasets.
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Strong analytical skills for working with unstructured datasets.
Experience building processes that support data transformation, data structures, metadata, dependency management, and workload management.
A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.
Strong project management and organizational skills.
Experience supporting and working with cross-functional teams in a dynamic environment.

Skillset (any subset of the combinations below):
Databases - SQL, NoSQL, GraphQL, etc.
Blockchain - Solidity, Rust, etc.
Data handling - Spark, Apache Airflow, including Postgres and Cassandra.
Big data tools - Azkaban, Luigi, Airflow, Hadoop, Spark, Kafka, etc.
Cloud computing - AWS cloud services: EC2, EMR, RDS, Redshift.
Scripting languages - Python, Java, C++, Scala, etc.

Working visa sponsorship and relocation allowance will be provided for non-Hong Kong candidates.

Apply Today
To apply online (Word attachment only), please click the 'Apply' button. Please note that only short-listed candidates will be contacted. Reach out to Nora Li for details (Nora.Li@roberthalf.com.hk).
