Big Data Engineer

Location: Warszawa, mazowieckie

*** Mention DataYoshi when applying ***

Schneider Electric's purpose is to empower everyone to make the most of our energy and resources, ensuring Life Is On everywhere, for everyone, at every moment. Along the way, we create and provide equal opportunities for everyone, everywhere. We continuously foster an inclusive environment and welcome people from all walks of life. We are empowered to do our best and innovate while living our unique lives and work. Together, we dare to disrupt and turn our bold ideas into reality.

Great people make Schneider Electric a great company.

Join an exciting team and help drive transformation on our digital journey.

As a Data Engineer for Finance Data Operations, you will build robust, fault-tolerant, near-real-time, and scalable data pipelines that supply our Finance community with operational-, analytics-, and reporting-ready data.

Key responsibilities include:

  • Design, build, test and operate robust and scalable data pipelines for streaming and batch data
  • Design, build, test and maintain a scalable and reliable data repository for operational, analytical and reporting uses
  • Develop best practices and frameworks for automation and CI/CD for data pipelines
  • Refactor legacy pipelines for modern infrastructure and frameworks
  • Create and maintain an optimized cloud-based infrastructure
  • Embed end-to-end security and privacy
  • Tenaciously solve problems and make decisions under uncertainty
  • Actively contribute to team ceremonies including: sprint planning, daily stand-up, sprint review and retrospectives
  • Demonstrate a strong ownership in understanding business needs and driving business valued outcomes

Required qualifications:

  • University degree in a quantitative concentration: Computer Science, Engineering, Mathematics, Economics or equivalent
  • 3 years of experience designing, building, testing and operating data pipelines in AWS
  • 3 years of hands-on experience building data pipelines with Python, Scala, or Java
  • 3 years of hands-on experience building on the Hadoop stack, especially Spark
  • 3 years of experience modelling data in normalized, denormalized and graph structures
  • Experience developing and operating workflow orchestration (Apache Airflow)
  • Excellent proficiency with Python, SQL, PySpark, Spark SQL
  • Strong experience with schema design and data modelling
  • Fluent in English

Preferred qualifications:

  • AWS Certification: Solutions Architect, SysOps Administrator, Developer, DevOps Engineer
  • Experience building secure systems compliant with ISO 27001 and NIST 800-53
  • Experience with streaming data services (Kafka, Kinesis, MSK, DMS)
  • Experience with data warehousing solutions (Redshift, Snowflake)
  • Experience with log-based change data capture extraction (Qlik Data Integration, Attunity Replicate, AWS DMS)
  • Experience with developing containerized solutions (Docker)
  • Experience programming in Java and Scala
  • Experience with Delta Lake
  • Experience with threat modeling

Benefits we offer:

  • bonus
At Schneider Electric, we believe access to energy and digital is a basic human right. We empower all to do more with less, ensuring Life Is On everywhere, for everyone, at every moment. We provide energy and automation digital solutions for efficiency and sustainability.

