PremFina is a technology-led, private-equity-backed, London-based firm operating in the $80 billion global premium finance industry. We supply insurance brokers and companies with financing facilities as well as white-label, cloud-based Software-as-a-Service (SaaS).
We're revolutionising the way people pay for insurance. We have a team of over 80 people, located across the UK, Bulgaria and Poland - together we exist to help customers, empower our partners and transform the insurance industry for the better by creating a world where insurance is more accessible and affordable for everyone. We’re a fast-growing team, united by our belief in positive disruption to help contribute to a more inclusive society.
We do something amazing. You can do something amazing. Join us today!
Your contribution to something big
Reporting to the Chief Product Officer, you’ll be at the forefront of delivering data-centric solutions. Working as part of a small team, you’ll help transform the organisation’s data management, increasing automation, efficiency and quality, and unlocking value from data.
What you’ll be doing
- Be a Data & Analytics evangelist: Act as a blend of evangelist, data guru and fixer, promoting the available data and analytics capabilities and expertise to business unit leaders and educating them in using these capabilities to achieve their business goals.
- Collaborate across departments: Work closely with business stakeholders, the Data & Analytics Development Team, Enterprise Architecture and Business Analysts to refine their data requirements for various data and analytics initiatives, as well as their data consumption requirements.
- Build data pipelines: Create, maintain and optimise managed data pipelines, the series of stages through which data flows (for example, from data sources or acquisition endpoints, through integration, to consumption for specific use cases), as workloads move from development to production.
- Build reports and analytics: Create targeted, high-quality interactive reports that help business users to solve real-world problems. Develop branding and standards to provide a consistent look and feel.
- Drive automation: Use modern tools, techniques and architectures to partially or completely automate the most common, repeatable and tedious data preparation and integration tasks, minimising manual, error-prone processes and improving productivity. Assist with renovating the data management infrastructure to drive automation in data integration and management.
- Educate and train: Stay curious and knowledgeable about new data initiatives, applying data and domain understanding to new data requirements and proposing appropriate (and innovative) data ingestion, preparation, integration and operationalisation techniques to address them.
- Ensure compliance and governance during data use: Ensure that data users and consumers use the data provisioned to them responsibly, in line with data governance and compliance initiatives. Work with data governance teams (and the information stewards within them) to vet and promote content created in the business and by data scientists to the curated data catalogue for governed reuse.
- Strategic direction: Shape the strategic direction of the Data & Analytics team, liaising with senior stakeholders and ensuring close collaboration with technology partners.
Our ideal teammate has experience with/knowledge in
- A bachelor's or master's degree in computer science, statistics, applied mathematics, data management, information systems, information science or a related quantitative field, or equivalent work experience, is required.
- A combination of IT, data governance and analytics skills.
- Ability to collaborate with technical and business stakeholders
- Demonstrated success in working with both IT and business while integrating analytics and data output into business processes and workflows.
- Experience with popular data discovery, analytics and BI tools such as Power BI for semantic-layer-based data discovery.
- Strong experience with various Data Management architectures like Data Warehouse, Data Lake, Data Hub and the supporting processes like Data Integration, Governance, Metadata Management
- Strong ability to design, build and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata and workload management.
- Strong experience in working with large, heterogeneous datasets in building and optimizing data pipelines, pipeline architectures and integrated datasets using traditional data integration technologies.
- Basic experience working with data governance, data quality and data security teams, and specifically with information stewards and privacy and security officers, to move data pipelines into production with appropriate data quality, governance and security standards and certification.
- Ability to build quick prototypes and translate them into data products and services in a diverse ecosystem.
- Strong experience with popular database programming languages for relational databases (SQL, T-SQL).
- Knowledge of technologies including Hive, Spark, Azure Synapse Analytics (SQL Data Warehouse), Azure Data Factory (ADF) and Databricks.
- Basic experience with advanced analytics tools and object-oriented/functional scripting languages such as Python, Java, C++, Scala and R.
- Basic experience with both open-source and commercial message queuing technologies such as Kafka, JMS, Azure Service Bus and Amazon Simple Queue Service (SQS); stream data integration technologies such as Apache NiFi, Apache Beam, Apache Kafka Streams and Amazon Kinesis; and stream analytics technologies such as Kafka KSQL, Apache Spark Streaming and Apache Samza.
- Ability to automate pipeline development.
- Strong experience with DevOps practices such as version control, automated builds, testing and release management, using tools such as Git, Azure DevOps, Jenkins, Puppet and Ansible.
- Exposure to hybrid deployments: Cloud and On-premise.
- Adept in agile methodologies and capable of applying DevOps principles to data & analytics development to improve communication between data managers and consumers across the organisation.
- Highly creative and collaborative: able to work with both business and IT teams to define the business problem, refine the requirements, and design and develop data deliverables accordingly, and to hold regular discussions with data consumers on refining the data pipelines developed in non-production environments and deploying them to production.
- Is a confident, energetic self-starter, with strong interpersonal skills.
- Has good judgment, a sense of urgency and has demonstrated commitment to high standards of ethics, regulatory compliance, customer service and business integrity.
- At least six years of work experience in data management disciplines including data integration, modelling, optimisation and data quality, and/or other areas directly relevant to data engineering responsibilities and tasks.
- At least three years of experience working in cross-functional teams and collaborating with business stakeholders in support of a departmental and/or multi-departmental data management and analytics initiative.
Extra brownie points for:
- Experience working with data science teams to refine and optimise data science and machine learning models and algorithms.
What we offer
- Competitive salary
- Discretionary bonus
- 25 days holiday per year
- ‘All About Me’ day
- Private health insurance
- Pension contribution
- Flexible working
- Professional development/qualifications support
- All the latest tech you need
- Social events