Nivoda

Data Engineer (Remote)

Job description

Nivoda’s B2B diamond and gemstone marketplace allows jewelry retailers to save time and money whilst gaining access to a global diamond supply at the best prices, with zero inventory risk.

With a team of over 300 dedicated employees around the world and a wealth of experience in the industry, Nivoda has developed an award-winning solution that enables jewelry businesses of any size, in any location, to buy and sell diamonds in the most profitable, efficient and hassle-free manner.

Over the course of the last six years, Nivoda has evolved into a global platform recognised for its innovation, customer service and ability to deliver a seamless, reliable and efficient experience.

We offer:

  • A dynamic working environment in an extremely fast-growing company
  • An international team
  • A pleasant environment with very little hierarchy
  • Intellectually challenging work and a major role in Nivoda’s success and scalability
  • Flexible working hours


We are seeking a talented Data/Analytical Engineer with a background in software development and a passion for building data-driven solutions: someone who stays ahead of trends and works at the forefront of AWS, Snowflake, dbt, data lake and data warehouse technologies.

The ideal candidate thrives when working with large volumes of data, enjoys the challenge of highly complex technical contexts, and is passionate about data and analytics. They are an expert in data modeling, ETL design and cloud/big-data technologies.

The candidate is expected to have strong experience with all standard data warehouse/data lake components (e.g. ETL, reporting and data modelling), as well as the underlying infrastructure (hardware and software) and its integration.

Key job responsibilities:

  • Implement ETL/ELT pipelines within and outside of a data warehouse using Python, PySpark and Snowflake’s SnowSQL (a minimal sketch follows this list).
  • Support the migration of the Redshift data warehouse to Snowflake.
  • Design, implement, and support data warehouse/data lake infrastructure using the AWS big data stack: Python, Redshift, Snowflake, Glue/Lake Formation, EMR/Spark/Scala, etc.
  • Work with data analysts to scale value-creating capabilities, including data integrations and transformations, model features, and statistical and machine learning models.
  • Work day-to-day with Product Managers and the Finance, Service Engineering and Sales teams to support their new analytics requirements.
  • Implement and uphold data quality and data governance measures, including data profiling and data validation procedures, to maintain integrity and security throughout the data lifecycle.
  • Leverage open-source technologies to build robust and cost-effective data solutions.
  • Develop and maintain streaming pipelines using technologies such as Apache Kafka.
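
To give a flavour of this work, below is a minimal sketch of such an ELT step: read raw order events from the lake, aggregate them with PySpark, and load the result into Snowflake via the Spark-Snowflake connector. All table names, paths and connection options are hypothetical placeholders, not Nivoda’s actual pipeline.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily-order-elt").getOrCreate()

    # Extract: raw events previously landed in the data lake (hypothetical path).
    orders = spark.read.parquet("s3://example-lake/raw/orders/")

    # Transform: total order value and order count per day and supplier.
    daily = (
        orders
        .withColumn("order_date", F.to_date("created_at"))
        .groupBy("order_date", "supplier_id")
        .agg(
            F.sum("order_value_usd").alias("total_value_usd"),
            F.count("*").alias("order_count"),
        )
    )

    # Load: write to Snowflake with the Spark-Snowflake connector. Credentials
    # are omitted here; in practice they would come from a secrets manager.
    sf_options = {
        "sfURL": "example.snowflakecomputing.com",
        "sfUser": "ETL_USER",
        "sfDatabase": "ANALYTICS",
        "sfSchema": "MARTS",
        "sfWarehouse": "ETL_WH",
    }
    (daily.write
        .format("net.snowflake.spark.snowflake")
        .options(**sf_options)
        .option("dbtable", "DAILY_SUPPLIER_ORDERS")
        .mode("overwrite")
        .save())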


Skills and Qualifications:

  • Must have 5+ years of total IT experience, including 3+ years’ experience in data integration, ETL/ELT development, and database or data warehouse design.
  • Broad expertise and experience with distributed systems, streaming systems, and data engineering tools such as Kubernetes, Kafka, Airflow and Dagster.
  • Experience with data transformation and ETL/ELT tools and technologies, such as AWS Glue and dbt, for transforming structured, semi-structured and unstructured datasets.
  • Experience ingesting and integrating data from API, JDBC and CDC sources.
  • Deep knowledge of Python, SQL, relational/non-relational database design, and master data strategies.
  • Experience defining, architecting, and rolling out data products, including ownership of data products through their entire lifecycle.
  • Deep understanding of Star and Snowflake dimensional modeling. Experience with relational databases, including SQL queries, database definition, and schema design.
  • Experience with data warehouses, distributed data platforms, and data lakes.
  • Strong proficiency in SQL and at least one programming language (e.g., Python, Scala, JS).
  • Familiarity with data orchestration tools, such as Apache Airflow, and the ability to design and manage complex data workflows (a minimal DAG sketch follows this list).
  • Familiarity with agile methodologies, sprint planning, and retrospectives.
  • Proficiency with version control systems (Bitbucket/Git).
  • Ability to work in a fast-paced startup environment and adapt to changing requirements across several concurrent projects.
  • Excellent verbal and written communication skills.
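
As an illustration, here is a minimal Apache Airflow DAG sketch showing the shape of such a workflow: a daily extract, transform and validate sequence. The DAG ID, task names and callables are hypothetical placeholders, not an existing Nivoda pipeline.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull new records from source systems")

    def transform():
        print("apply business transformations")

    def validate():
        print("run data quality checks on the output")

    with DAG(
        dag_id="orders_daily_elt",       # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",               # 'schedule_interval' on Airflow < 2.4
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_validate = PythonOperator(task_id="validate", python_callable=validate)

        # Linear dependency chain: extract, then transform, then validate.
        t_extract >> t_transform >> t_validate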


Preferred/Bonus Skills

  • Redshift to Snowflake migration experience.
  • Experience with DevOps technologies such as Terraform, CloudFormation, and Kubernetes.
  • While not mandatory, experience with or knowledge of machine learning techniques is highly preferable, as it enriches our data engineering capabilities.
  • Experience with non-relational databases/data stores (object storage, document or key-value stores, graph databases, column-family databases).
