Our client, a large-scale data-driven organisation, is seeking a Data Engineer to develop, maintain, and optimise cloud-based data pipelines and ETL processes. This role will be instrumental in building scalable data solutions for analytics, reporting, and machine learning initiatives. The ideal candidate will have strong cloud experience, excellent SQL skills, and a passion for working with large datasets.

Key Responsibilities
- Design, develop, and manage data ingestion and transformation pipelines.
- Work with cloud platforms such as Azure Data Factory, AWS Glue, or similar.
- Optimise data processing performance, reliability, and scalability.
- Collaborate with analysts, architects, and data scientists to meet business needs.
- Implement data quality controls, validation rules, and documentation.
- Participate in code reviews and promote engineering best practices.
- Troubleshoot data flow issues and contribute to continuous enhancements.
- Support the migration of legacy ETL processes to cloud platforms.

Required Experience
- 4+ years as a Data Engineer.
- Strong SQL and data modelling experience.
- Hands-on experience with cloud-based ETL/ELT tools.
- Proficiency with Python or similar scripting languages.
- Experience handling large and complex datasets.

Desirable Skills
- Experience with Snowflake, Databricks, or BigQuery.
- Knowledge of CI/CD for data pipelines.

Education
- Degree in Data Engineering, Computer Science, or a related field.

Benefits & Package
- 12-month contract with extension potential.
- Hybrid Dublin-based role.
- Exposure to large-scale cloud transformation projects.

How to Apply
Please apply with your updated CV.