- Bachelor's degree in Computer Science, Engineering, Mathematics, or a related technical discipline.
- 4+ years of industry experience in Software Development, Data Engineering, Business Intelligence, Data Science, or a related field, with a track record of manipulating, processing, and extracting value from large datasets.
- Hands-on experience and advanced knowledge of SQL.
- Experience in Data Modeling, ETL Development, and Data Warehousing.
- Experience using business intelligence reporting tools (Power BI, Tableau, Cognos, etc.).
- Experience using big data technologies (Hadoop, Hive, HBase, Spark, EMR, etc.).
- Knowledge of Data Management fundamentals and Data Storage tenets.
- Experience coding and automating processes using Python or R.
- Strong customer focus, ownership, urgency, and drive.
- Excellent communication skills and the ability to work well in a team.
- Effective investigative, troubleshooting, and problem-solving skills.
Our team is passionate about the Brands that sell on Amazon - we help them grow their businesses, tell their story, and serve their customers. How do we do this? Data! Help us serve this valuable data to our Brands in digestible ways so they can run their businesses more effectively.
The candidate will need to investigate complex problems and synthesize data into efficient, reusable datasets. To be successful in this role, you should have broad skills in database design, be comfortable dealing with large and complex data sets, have experience building self-service dashboards, be comfortable using visualization tools, and be able to apply your skills to generate insights that help solve business problems.
In this role, you will work closely with scientists, product managers, and software engineers to build out infrastructure, data pipelines, and reporting mechanisms for our team and our Brands.

Our Data Engineer duties and responsibilities include:
- Design and deliver big data architectures for experimental and production consumption by scientists and software engineers.
- Develop end-to-end automation of data pipelines, making datasets readily consumable by visualization tools and notification systems.
- Create automated alarming and dashboards to monitor data integrity.
- Create and manage capacity and performance plans.
- Act as the subject matter expert for the data structure and usage.
- Master's degree in Computer Science, Mathematics, Statistics, Economics, or another quantitative field.
- Experience working with AWS big data technologies (Redshift, S3, EMR, Glue).
- Proven success in communicating with users, other technical teams, and senior management to collect requirements and describe data modeling choices and data engineering strategy.
- Background in Big Data, non-relational databases, Machine Learning and Data Mining is a plus.