Hybrid Position (3 days per week average in Downtown Austin, TX or Grand Rapids, MI office)
Note: This is a full-time, in-house position. We do not offer C2C or C2H employment and are not able to sponsor visas for this position.
Acrisure Technology Group (ATG) is a fast-paced, AI-driven team building innovative software to disrupt the $6T+ insurance industry. Our mission is to help the world share its risk more intelligently to power a more vibrant economy. To do this, we are transforming insurance distribution and underwriting into a science.
At the core of our operating model is our technology: we're building the premier AI Factory in the world for risk and applying it at the center of Acrisure, a privately held company recognized as one of the world's top 10 insurance brokerages and the fastest growing insurance brokerage globally. By using the latest technology and advances in AI to push the boundaries of understanding risk, we are systematically converting data into predictions, insights, and choices, and we believe we can remove the constraints associated with scale, scope, and learning that have existed in the insurance industry for centuries.
We are a small team of extremely high-caliber engineers, technologists, and successful startup founders, with diverse backgrounds across industries and technologies. Our engineers have worked at large companies such as Google and Amazon, hedge funds such as Two Sigma and Jump Trading, and a variety of smaller startups that quickly grew such as Indeed, Bazaarvoice, RetailMeNot, and Vrbo.

The Role
The Business Intelligence team's mission is to unify data across the enterprise to optimize business decisions made at the strategic, tactical, and operational levels of the organization. We accomplish this by providing an enterprise data warehouse, data lake, reporting platform, and business processes that deliver quality, timely data from any channel of the company and present it in a way that maximizes its value for both internal and external customers.
The Data Engineer is responsible for designing and developing moderate to complex ETL processes required to populate a data lake and structured data warehouse that supply data to the machine learning, AI, and BI teams. Responsibilities include working with a team of contracted developers as well as coaching and mentoring junior and mid-level developers. Ensuring high quality and adherence to best practices throughout the development cycle is key to this position.
In this role, you will:
- Leverage established guidelines and custom designs to create complex ETL processes to meet the needs of the business
- Develop from strategic and non-strategic data sources including data preparation/ETL and modeling for data visualizations in a self-service platform
- Contribute to the definition and development of the overall reporting roadmap
- Translate reporting requirements into reporting models, visualizations and reports by having a strong understanding of the enterprise architecture
- Standardize reporting that helps generate efficiencies, optimization, and end user standards
- Integrate dashboards and reports from a variety of sources, ensuring that they adhere to data quality, usability, and business rule standards
- Independently determine methods and procedures for new or existing requirements and functionality
- Work closely with analysts and data engineers to identify opportunities and assess improvements of our products and services
- Contribute to workshops with the business user community to further their knowledge and use of the data ecosystem
- Produce and maintain accurate project documentation
- Collaborate with various data providers to resolve dashboard, reporting and data related issues
- Perform Data Services reporting benchmarking, enhancements, optimizations, and platform analytics
- Participate in the research, development, and adoption of trends in reporting and analytics
- Mentor BI Developers and BI Analysts
- Take on other projects as assigned to support business goals across teams
You may be a fit for this role if you have:
- Minimum 5 years of data engineering experience, particularly in an Azure environment with Azure Databricks, Azure Data Factory, and Azure Data Lake
- Minimum 5 years designing data warehouses, data modeling, and end-to-end ETL processes in a MS-SQL environment
- Minimum 2 years developing machine learning models with Azure ML, MLflow, or BQML
- Expert working knowledge of SQL, Python, and Spark (ideally PySpark), with a demonstrated ability to create ad hoc SQL queries to analyze data, create prototypes, etc., is required
- Successfully delivered 2+ end-to-end projects (from inception to execution) in Data Engineering / Data Science / Data Integration as a Senior or Principal engineer
- Ability to analyze, summarize, and characterize large or small data sets of varying fidelity or quality, and to identify and explain insights or patterns within them
- Experience with multi-source data warehouses
- Strong skills in data analytics and reporting, particularly with Power BI
- Experience with other cloud environments (GCP, AWS) a definite plus
- Strong experience creating reports and dashboards and summarizing large amounts of data into actionable intelligence to drive business decisions is required
- Strong understanding of core principles of data science and machine learning; experience developing solutions using related tools and libraries
- Hands-on experience building logical and physical data models using tools such as Idera ER/Studio
- Write SQL fluently, recognize and correct inefficient or error-prone SQL, and perform test-driven validation of SQL queries and their results
- Proficient in writing Spark SQL using complex syntax and logic, such as analytic functions
- Well versed in data lake and Delta Lake concepts
- Well versed in using Databricks to work with Delta tables (external / managed)
- Well versed with Azure Key Vault: creating and maintaining secrets and using them in both Databricks and ADF
- Knowledgeable in stored procedures and functions, and able to invoke them from ADF and Databricks, as this is a widely used practice internally
- Familiar with DevOps process for Azure artifacts and database artifacts
- Well versed with ADF concepts such as chaining pipelines, passing parameters, and using the ADF and Databricks APIs to perform various activities
- Experience creating and sharing standards, best practices, documentation, and reference examples for data warehouse, integration/ETL systems, and end user reporting
- Apply a disciplined approach to testing software and data, identifying data anomalies, and correcting both data errors and their root causes
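To illustrate the kind of analytic (window) function SQL this role calls for, here is a minimal sketch; the table, columns, and data are hypothetical, and SQLite is used only for portability (the same query shape works in Spark SQL):

```python
import sqlite3

# Hypothetical example: rank each policy's premium within its line of
# business using an analytic (window) function.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE policy (id INTEGER, line TEXT, premium REAL);
    INSERT INTO policy VALUES
        (1, 'auto', 1200.0),
        (2, 'auto', 900.0),
        (3, 'home', 2100.0),
        (4, 'home', 1800.0);
""")
rows = conn.execute("""
    SELECT id, line, premium,
           RANK() OVER (PARTITION BY line ORDER BY premium DESC) AS rnk
    FROM policy
    ORDER BY line, rnk
""").fetchall()
for row in rows:
    print(row)
```

The `PARTITION BY` clause restarts the ranking for each line of business, so premiums are compared only against peers in the same line.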
Undergraduate degree preferred, or equivalent experience, along with a demonstrated desire for continuing education and improvement.

Location:
Austin, TX or Grand Rapids, MI

We are interested in every qualified candidate who is eligible to work in the United States. We are not able to sponsor visas for this position.