Our global house of brands inspires and empowers youth culture. Relentlessly committed to fueling a shared passion for self-expression, we create unrivaled experiences at the heart of the sport and sneaker communities through the power of our people. If you want to be a part of something bigger than you can imagine, you’ve come to the right place.
Foot Locker, Inc. is seeking an innovative individual with a proven track record of building enterprise-level platform components that support product development across multiple teams and lines of business. This role is expected to drive innovation through collaboration with our data science teams and the business to help push Foot Locker, Inc. to the next level. The team is embarking on a journey of building a brand-new data lake platform using cloud-native concepts and the latest tech stacks.
- Build new data sets and products that support Foot Locker business initiatives
- Help grow our data catalog through the ingestion of a variety of data sources, both internal and third party
- Contribute to self-organizing teams with minimal supervision, working within the Agile / Scrum project methodology
- Build production-quality ingestion pipelines with automated quality checks to enable the business to access all of our data sets in one place
- Participate in the continuous evolution of our schema / data model as we find more data sources to pull into the platform
- Support our Data Scientists by enhancing their modeling jobs to scale across the entire data set
- Participate in a collaborative, peer-review-based environment, fostering new ideas via cross-team guilds / specialty groups
- Maintain comprehensive documentation around our processes / decision making
- Bachelor's degree in Computer Science or a related field, preferred
- Minimum of 1 year of related information technology experience
- Minimum of 1 year of experience directly related to Big Data technologies
- Intermediate knowledge of and hands-on experience with Spark running on multiple platforms; Spark Certified Developer a plus
- Knowledge of Hive, Hadoop, Parquet, HDFS, Python, Scala, Data Lake, NoSQL
- Public cloud experience, preferably Azure or AWS
- Understanding of proper executor / driver sizing, caching / object serialization, Spark shuffle optimization, and streaming
- Industry-recognized certifications (similar to below)
- Demonstrated experience with Agile / Scrum methodology
- Strong desire to learn new technologies and keep up with the latest developments in the big data space
- Solid leadership and mentorship skills
- Must possess well-developed verbal and written communication skills.
- Experience using CI / CD tools and project build automation (Git, Jenkins, Maven, PyPI, etc.), preferred
- Experience with Airflow, a plus
- Previous experience at a Fortune 500 company, a plus
- Experience enabling data science and self-service product development with clean, reliable data sets, a plus