Senior Data Engineer - Python - Databricks - TDD - CI/CD - London
Posted on Mar 25, 2025 by Mentmore Recruitment
My client is looking for a Senior Data Engineer who can drive innovation and modernise their data platforms to meet the demands of a fast-moving market.
As a Senior Data Engineer, you will play a pivotal role in transforming their current data lakehouse with Databricks, designing and building data pipelines from scratch, and driving a large-scale data transformation programme. This is a high-impact role where your work will directly influence business-critical decisions across underwriting, claims, and risk management.
Experience Required:
- Experience within the Insurance or Insurtech industry: deep understanding of insurance data models, policy data, claims, and risk-related data.
- 3-5 years' experience with Python/PySpark, Databricks, TDD, and CI/CD
- Experience working with Kafka, Amazon Kinesis, and Spark Streaming
- Experience with OLAP data cubes
- Must come from a data engineering or software engineering background
- Experience with both batch and streaming data
- Data Pipeline Development: Proven track record of building robust, scalable pipelines from scratch.
- Collaborative Mindset: Experience working in cross-functional teams, liaising with business and technology teams to meet project goals.
Reference: 2919606942
