Interface, Inc. is a global flooring company specializing in carbon-neutral carpet tile and resilient flooring, including luxury vinyl tile (LVT) and nora® rubber flooring. We help our customers create high-performance interior spaces that support well-being, productivity, and creativity, as well as the sustainability of the planet. Our mission, Climate Take Back™, invites you to join us as we commit to operating in a way that is restorative to the planet and creates a climate fit for life.
The Data Engineer will collaborate with our Business Intelligence, Infrastructure, Business Analytics, and global business and IT teams to implement a modern data warehouse. By developing high-performance pipelines and critical workloads, they will strengthen the foundation of our company's primary information source, enabling business applications that drive essential decision-making on a global scale. The role ensures safe production deployments of high-quality data and is an integral part of the Data Engineering team, playing a key part in reshaping our approach to data and its underlying infrastructure to support advanced analytics.
At Interface, we are deeply committed to the professional development of our employees. We believe in nurturing talent, encouraging personal growth, and fostering a culture of continuous learning. To facilitate this, we provide access to LinkedIn Learning, where you can expand your skill set, keep abreast of industry trends, and even explore new areas of interest. Our work structure is a hybrid model, combining the best of remote and in-office work. You'll have the flexibility to work from home, while also benefiting from in-person collaboration during three office days each week. This balance fosters both efficiency and camaraderie, contributing to an empowering and dynamic work environment.

Essential Functions:
- Must be able to commute to Atlanta office 2-3 days per week
- Design and develop scalable data models, pipelines, and infrastructure in Azure, driving insights, reporting, mobile/web applications, and machine learning
- Collaborate with development teams and solution architects to define infrastructure and deployment requirements for data warehousing and data modeling.
- Develop and automate high-volume, batch and real-time ETL pipelines using Azure Data Factory, Azure SQL Databases, Databricks, and Python
- Collaborate with cross-functional teams to understand business requirements, create comprehensive test plans, and translate them into technical solutions
- Design and execute test plans to validate the accuracy and completeness of data flowing through ETL pipelines and reports
- Troubleshoot data processing performance issues and data quality problems through collaboration with both technical and functional personnel
- Monitor and automate data quality checks and ensure data is delivered on time and without error
- Collaborate with data engineers, BI developers, data scientists, and analysts to test, troubleshoot and resolve issues
- Implement machine learning models in collaboration with data scientists
- Identify and effectively communicate quality risks and mitigation strategies so that appropriate measures can be implemented
- Deploy backend production services with an emphasis on high availability, robustness, and monitoring
- Set up, maintain, and improve CI/CD build and release pipelines in Azure DevOps, specifically focused on data warehouse and data modeling processes using Azure SQL Server and Power BI
- Establish and maintain source code management tools, repositories, workflows, and pipelines, and automate CI/CD processes using Git and Azure DevOps
- Use Power BI for analytics, reporting, and data quality visualizations
Preferred Skills and Experience:
- Bachelor's degree in Computer Science, Engineering, Data Science or a related discipline
- 4+ years of experience in Data Engineering, Software Engineering, Data Science, Data Warehousing, Azure or AWS cloud technologies
- 4+ years of experience in processing and modeling data with tools such as Python, SQL, Azure Synapse Analytics, AWS Redshift, Azure Data Factory, AWS Glue, Azure Databricks, AWS EMR, Apache Spark or Qlik with a strong understanding of star and snowflake schemas, OLAP/OLTP
- 3+ years of experience working on an engineering team building out QA practices
- Experience in writing complex SQL queries and debugging
- Strong understanding of data structures, data types, data transformation, and data performance tuning
- Experience with Python and data transformation and quality check libraries such as PySpark, pandas, and Great Expectations
- Strong Excel knowledge for validating data (VBA, macros, pivot tables, formulas, etc.)
- Strong analytical, problem-solving, and debugging skills, with the ability to learn and comprehend business processes quickly
- Experience with data integration and management tools
- Knowledge of Power BI or other data visualization tools such as Tableau
- Hands-on experience with Azure DevOps, Git, and other CI/CD tools
- Knowledge of Infrastructure as Code (IaC) and provisioning tools like Terraform, Ansible, Jenkins, or ARM in Azure
- Experience with scripting languages such as PowerShell, and with JSON or YAML file formats
- Experience with machine learning is a plus
- Experience supporting and working with cross-functional teams in a dynamic environment
- Exceptional verbal and written communication skills, with an ability to express complex technical concepts in business terms
- Solid organizational skills while working on multiple projects, and the ability to meet deadlines
We are a VEVRAA Federal Contractor. We desire priority referrals of Protected Veterans for job openings at all locations within the State of Georgia. An Equal Opportunity Employer including Veterans and Disabled.