We're looking for a Senior Data Engineer to help build and scale an enterprise-wide Centralised Data Platform on Databricks. This role sits within a global financial services environment where data engineering is a top technical priority, underpinning analytics, APIs and future AI initiatives across the organisation.
The role
1. Build and optimise data pipelines on the Databricks Lakehouse Platform
2. Design scalable ETL/ELT and structured streaming pipelines
3. Develop enterprise-grade data processing and analytics solutions
4. Optimise Spark jobs and Databricks clusters for performance and cost
5. Implement data quality, monitoring and governance standards
6. Apply security, access control and cataloguing best practices
7. Work closely with data scientists, analysts and business stakeholders
8. Contribute to Agile delivery, code reviews and technical knowledge sharing
Experience
1. 6+ years' experience in data engineering roles
2. Hands-on experience with Databricks and Apache Spark
3. Strong Python and SQL skills with solid data modelling knowledge
4. Experience building ETL/ELT pipelines and lakehouse architectures
5. Cloud experience, ideally AWS
6. Familiarity with Delta Lake, Unity Catalog and governance frameworks
7. Experience with real-time or streaming data is a plus
8. Exposure to AI/ML use cases, or experience using AI tools in development, is advantageous
9. Strong problem-solving skills and the confidence to work in complex, regulated environments