PRADCO Outdoor Brands (POB), a division of EBSCO Industries Inc., is the largest company in the world that manufactures and markets major hunting and fishing brands and products under one parent organization. We are a leader in producing game calls, scents, attractants, game feeders, game cameras, tree stands and fishing lures. PRADCO Hunting owns the brands Moultrie, Summit, Knight & Hale, Code Blue, Texas Hunter Products and Whitetail Institute. PRADCO Fishing owns more than 20 brands, including Rebel, YUM, Booyah, Lindy and Bomber Saltwater Grade. For more information on PRADCO products, please visit our website at www.pradcooutdoorbrands.com. As a member of the EBSCO family of companies, PRADCO team members participate in a selection of outstanding benefits, including the EBSCO Profit Sharing Trust and excellent medical, dental, drug, and vision coverage, among many others.
At PRADCO Outdoor Brands, we rely on powerfully insightful data to inform our systems and solutions, and we're seeking an experienced, database-centric Data Engineer to build and manage the infrastructure that helps the company get the most out of our data. Our ideal hire will have the mathematical and statistical expertise you'd expect, combined with rare curiosity and creativity. Beyond technical prowess, you will need the soft skills it takes to communicate highly complex data trends to organizational leaders in a way that is easy to understand. The Data Engineer will be responsible for the continuous improvement and management of the organization's data architecture through the extraction, transformation, and loading of data from heterogeneous sources for downstream ease of use, ensuring best practices around data integrity, consistency, and structure. This role is based out of PRADCO's headquarters in Birmingham, AL, but may offer the ability to work remotely.
Scope and Responsibilities:
- Develop & maintain ETL procedures for data ingestion; configure pipelines, apply transformations and decoding; integrate and fuse data; move and securely deliver data to warehouse(s), CDP, and ultimately internal customers
- Design and build data products and data pipelines, ensuring the robust flow of data from acquisition through curation and governance
- Build expertise across a diverse collection of data, including point-of-sale, customer history, lifecycle marketing, etc., allowing for a unified view & single source of truth
- Understand and manage the data landscape and environments: sources, elements, update frequency, completeness, stewards/contacts, platforms, etc.
- Translate business requirements to build repeatable, sustainable, efficient, coded processes that can be productionized
- Optimize ETL processes and data warehousing to continuously maximize efficiency, speed, stability, system resources and capabilities
- Validate data products and pipelines are functioning as expected following system or application upgrades, source changes, etc.
- Incorporate data quality and privacy checks/alerts to minimize bad data being consumed by end users, models and dashboards
- Maintain feedback loop with analysts on data issues, standards and fit for use
- Work closely with analysts to align systems, tools and applications being utilized with business use case and performance requirements
- Communicate with end users to set expectations and ensure alignment around data accuracy, completeness, timeliness and consistency
- Establish, track and monitor KPIs related to specific data products and deliverables
- Understand “what” your customer needs and “how” you need to provide it for them
- Perform other duties as assigned and required
Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field; an equivalent combination of experience, education and training will be considered
- 3 years’ experience in ETL processes
- Ability to design & maintain data warehouse, whether hosted in-house or via a third party
- Certification in Spark, Azure, or another cloud platform
- Experience in developing and implementing enterprise-scale data architecture and pipelines; knowledge of logical and physical data modeling concepts (relational and dimensional)
- Proficiency in writing, optimizing, and maintaining SQL queries to resolve complex data design problems, using tools such as Visual Studio (SSIS), SQL Server Management Studio (SSMS), and Microsoft Azure, along with a successful history of manipulating, processing, and extracting value from large heterogeneous datasets
- Demonstrated knowledge of and proficiency in relational databases such as MS SQL Server, MS Azure, and various other database platforms
- Understanding of data integration issues (validation and cleaning), familiarity with complex data and structures
- Programming/scripting experience and knowledge of the software development life cycle are preferred
- Excellent interpersonal (verbal and written) communication skills are required to support working in project environments that include internal, external, and customer teams
- Strong analytical, conceptual and problem-solving abilities
- Ability to manage multiple priorities and to assess and adjust quickly as priorities change
EBSCO Industries, Inc. is an equal opportunity employer and complies with all applicable federal, state, and local fair employment practices laws. EBSCO strictly prohibits and does not tolerate discrimination against employees, applicants, or any other covered persons because of race, color, sex (including pregnancy), age, national origin or ancestry, ethnicity, religion, creed, sexual orientation, gender identity, veteran status, disability, or any other federal, state, or local protected class. This policy applies to all terms and conditions of employment, including, but not limited to, hiring, training, promotion, discipline, compensation, benefits, and termination of employment.
EBSCO complies with the Americans with Disabilities Act (ADA), as amended by the ADA Amendments Act, and all applicable state or local law.