What Will You Be A Part Of?
When you’re part of the team at Thermo Fisher Scientific, you’ll do important work, like helping customers find cures for cancer, protect the environment, or make sure our food is safe. Your work will have real-world impact, and you’ll be supported in achieving your career goals.
Where Will You Be?
Launched in 2011 to enhance customer productivity, the Instrument and Enterprise Services (IES) division provides a single source for integrated lab services, support, and supply management. IES encompasses over 3,600 hard-working global service professionals, offering the broadest service portfolio in the market coupled with a deep bench of domain expertise.
How Will You Make An Impact?
Enabling our internal teams at scale, the Data Engineer will join our IES Data and Analytics team and will be responsible for data engineering and analytics for the IES organization, working with Corporate IT team members to develop data-driven applications and automations across a variety of infrastructure (both on-premises and cloud). The Data Engineer must be able to work collaboratively in an Agile team to design, develop, and maintain data structures for the enterprise data platform. This position offers an exciting opportunity to work on processes that interface with multiple systems, including AWS, Oracle, middleware, and ERPs. The candidate will lead development projects and pilots and advance best design practices.
What Will You Do?
- Develop, improve, and support data engineering solutions for the IES organization
- Investigate data quality issues and partner effectively with the core team to address them
- Design, develop, test, deploy, and maintain mission-critical IES data applications
- Design, develop, test, deploy, support, and enhance data integration solutions that seamlessly connect enterprise systems to our Enterprise Data Warehouse and data platforms
- Innovate on data integration within our Apache Spark-based platform to ensure technology solutions leverage cutting-edge integration capabilities
- Facilitate requirements-gathering and process-mapping workshops; review business/functional requirements documents; author technical design documents, testing plans, and scripts
- Assist with implementing standard operating procedures, facilitate review sessions with functional owners and end-user representatives, and leverage technical knowledge and expertise to drive improvements.
How Will You Get Here?
Tool & Technology Skills
- Strong experience with data lake, data analytics, big data, or business intelligence products
- Ability to work independently and as a member of a cross-functional team
- Willingness to learn, be mentored, and improve
- Passion for applications, data analytics, and end-user productivity
- Exceptional customer focus
- Desire to teach other team members about technology in your area of expertise
- Experience with the Agile project management framework (Jira toolset)
- Master's degree in computer science/engineering from an accredited university (desired)
- 4-year degree with a major in computer science/engineering (or equivalent) from an accredited university (preferred); such a degree will substitute for the minimum of 5 years’ professional IT experience
- Experience in ETL/ELT (data extraction, data transformation, and data load processes)
- Experience with data lakes, analytics, and visualizations
- Experience with databases such as Oracle, SQL Server, or AWS Redshift
Knowledge, Skills, Abilities
- Overall 5-6 years of experience in an enterprise data warehouse development environment
- 3-5 years of experience in enterprise-level ETL development and architecture in Informatica PowerCenter 10.x and Informatica Cloud environments
- Ability to use Microsoft Power BI and Microsoft Excel (Pivot Tables, Power Query, Power Pivot, and Data Models) to gather information from multiple sources and deliver it to end users
- Ability to act as a Microsoft Power BI, visualization, and dashboarding subject matter expert
- 2+ years of working experience in data integration and pipeline development
- 2+ years of experience with AWS cloud data integration using Apache Spark, Databricks, EMR, Glue, Kafka, Kinesis, and Lambda in S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems
- 2-3 years of experience working in Agile/Waterfall environments
- 1-2 years of computer programming experience in Python, R, MATLAB, etc.
- Demonstrated skill and ability in the development of data warehouse projects/applications (Oracle & SQL Server)
- Strong real-world experience in Python development, especially PySpark in an AWS cloud environment
- Ability to design, develop, test, deploy, maintain, and improve data integration pipelines
- Experience with Python and common Python libraries
- Strong analytical database experience: writing complex SQL queries, query optimization, debugging, user-defined functions, views, indexes, etc.
- Highly self-driven and execution-focused, with a willingness to “do what it takes” to deliver results, as you will be expected to rapidly cover a considerable volume of data integration demands
- Strategic Thinking - Think big picture. Set priorities aligned with major goals. Encourage innovation by backing good people who take smart risks.
- Critical Thinking - Question conventional wisdom by identifying and challenging the assumptions that drive action or inaction. Strive to inject independent thinking, check biases, and promote action and decision-making.
- Communication - Communicate effectively and reach audiences with ease, clarity, and transparency: one-to-one, small group, full staff, email, social media, and, of course, listening.
- Support relationship development across the organization to foster teamwork and promote collaboration, cultivating and strengthening a network for the exchange of ideas
- Problem-solving (analytical)