The Intel Manufacturing & Operation Automation team is looking for a highly motivated big data engineer with strong data engineering skills for data integration of various manufacturing data. You will be responsible for engaging with customers and driving development from ideation to deployment and beyond. This position is a technical role that requires the direct design and development of robust, scalable, performant systems for world-class manufacturing data engineering, together with supporting collaterals such as unit tests and build automation pipelines.
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Work with stakeholders, including users and cross-functional teams, to assist with data-related technical issues and support their data infrastructure needs.
- Keep data secure with appropriate access controls and authorization.
- Develop semiconductor-related context data analytics.
- Focus on automated testing and robust monitoring.
The ideal candidate must exhibit the following behavioral traits:
- Excellent problem-solving and interpersonal communication skills.
- Strong desire to learn and share knowledge with others.
- Inquisitive, innovative, and a team player with a strong focus on quality workmanship.
- Strong troubleshooting and root cause analysis skills for performance issues.
- Candidate must possess a Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related field with 7+ years of industry/post-university experience, or a PhD with 4+ years of experience, along with strong data engineering experience and the following skills:
- Experience building and optimizing big data pipelines.
- Experience handling unstructured data.
- Experience with data transformations, structures, metadata, workload management.
- Experience working with cross-functional teams in a dynamic environment.
- Experience with big data tools: Hadoop, Spark, Kafka, Storm.
- Experience with object-oriented languages: Python, Java, C#, Scala.
- Experience with relational SQL and NoSQL databases.
- Experience leveraging open-source packages.
- Experience with data pipeline and workflow management tools.
- Experience with semiconductor manufacturing.
- Experience with big data engineering in the cloud.
Inside this Business Group
Non-Volatile Memory Solutions Group: The Non-Volatile Memory Solutions Group is a worldwide organization that delivers NAND flash memory products for use in Solid State Drives (SSDs), portable memory storage devices, digital camera memory cards, and other devices. The group is responsible for NVM technology design and development and complete Solid State Drive (SSD) system hardware and firmware development, as well as wafer and SSD manufacturing.
All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.