DealShare has a vision of enabling ecommerce for the next 500 million users in India in non-metro and rural markets. DealShare has raised Series C funding of USD 21 million with key investors including WestBridge Capital, Falcon Edge Capital, Matrix Partners India, Omidyar Network, Z3 Partners, and partners of DST Global, bringing total funding to USD 34 million.
DealShare has 2 million customers across Rajasthan, Gujarat, Maharashtra, Karnataka, and Delhi NCR, with 1.2 million monthly transactions and an annual GMV of USD 100 million. Our aim is to expand operations to 100 cities across India and reach an annual GMV of USD 500 million by the end of 2021.
DealShare started in September 2018 and had 5,000 active customers in the first three months. Today we have 25K transactions per day, 1 lakh DAU, and 10 lakh MAU, with a monthly GMV of INR 100 crores and 50% growth MoM. We aim to hit 2 lakh transactions per day with an annual GMV of USD 500 million by 2021.
We are looking for a senior data engineer to lead the design and implementation of ETL pipelines, analytics, reporting, and visualization solutions, and to meet the data requirements of our data science teams.
Responsibilities:
- Lead the solutioning, design, and implementation of ETL pipelines and data systems needed for analytics, business intelligence, reporting, and data science use cases.
- Ensure the data systems created are scalable and easy to evolve, debug, and experiment with.
- Optimise storage, compute, and job run times. Strive to reduce infrastructure cost.
- Contribute to tech stack choice and system architecture.
- Handle the ever-increasing scale of data; ensure data availability, correctness, completeness, and freshness. Add the necessary alerting to ensure data quality.
- Solve for both snapshot and incremental updates, batch and streaming use cases, fast access to data, etc.
- Contribute to design and architecture reviews, suggest data engineering best practices that should be adopted within the org.
- Contribute to hiring outstanding engineers and suggest improvements to the hiring process.
- Mentor junior engineers and help them grow to the next level.
Requirements:
- Bachelor's (4 years) or higher in Computer Science or a related engineering discipline.
- 3+ years of experience in data engineering on very large data sets.
- Experience with data modelling and building/scaling/evolving/maintaining large-scale ETL pipelines in a public cloud environment.
- Experience with big data technologies (Hadoop, MapReduce, HDFS, Hive, HBase, Presto, Spark, Flink, Avro, etc.) and streaming technologies (Storm, Kafka, etc.).
- Experience with one or more workflow management tools: Airflow, Azkaban, Luigi, Oozie, etc.
- Experience in Python and at least one object-oriented language (Java, Scala, etc.), with good expertise in object-oriented design.
- Experience with SQL and NoSQL data stores, including tuning and scaling them.
- Experience with one or more visualization tools
- Experience with one or more analytics/slice-and-dice tools.
- Experience working in one or more cloud computing environments such as AWS, Azure, or GCP.
- Experience in using CI/CD pipelines
- Extremely good at problem solving; an independent thinker.
- Ability to multitask and thrive in a fast-paced, timeline-driven environment.
- A good team player with the ability to collaborate with others.
- Self-driven and motivated, with a very high sense of ownership.
The following are a plus:
- Exposure to Kubernetes/Docker/Containers.
- Understanding of ML model lifecycle.