Key Responsibilities
Build, enhance, and support robust data pipelines for large-scale data processing and transformation.
Maintain and optimize cloud-based data lake environments with a focus on performance, reliability, and security.
Work with modern data platforms and orchestration tools to support analytics and reporting use cases.
Partner with analytics and business teams to deliver clean, well-structured datasets.
Improve scalability and efficiency of big data workloads using distributed processing frameworks.
Support and optimize NoSQL data stores used within the wider data ecosystem.
Required Skills & Experience
Strong professional background in data engineering within Azure or AWS environments.
Hands-on experience with Databricks, Delta Lake, Data Factory, and NoSQL databases.
Proficiency in Python, SQL, or Scala for data processing and pipeline development.
Solid understanding of ETL/ELT patterns, data modelling, and large-scale data processing concepts.
Experience working with distributed processing technologies such as Apache Spark.
What We Offer
Benefits include pension, healthcare, dental, and 25 days annual leave.
Hybrid working model for flexibility.
Opportunities for professional development and work on cutting-edge projects.