Senior Data Engineer
Hybrid (Dublin, Ireland)

About the Company
Our client is a global technology-driven organization focused on improving lives through data, innovation, and digital transformation. With a culture rooted in collaboration, diversity, and continuous learning, the company builds scalable solutions that drive real-world impact across industries. Employees enjoy strong professional growth opportunities, flexible hybrid work arrangements, and a people-first culture that values initiative, creativity, and excellence.

About the Position
We are seeking an experienced Senior Data Engineer to join a growing analytics and engineering team. In this role, you will design, build, and maintain modern data pipelines and architectures that support analytics, reporting, and advanced data initiatives. You'll work with cloud-based technologies, develop efficient ETL processes, and ensure the highest standards of data quality, performance, and security.

Key Responsibilities
- Design, build, and optimize data pipelines that extract, transform, and load data from multiple sources into centralized repositories (data lakes, data warehouses, etc.)
- Manage data integration across systems, APIs, logs, and external sources, ensuring data is consistent and reliable
- Implement data transformation and cleaning routines to ensure data integrity and usability for analytics, reporting, and machine learning initiatives
- Collaborate with cross-functional teams to establish data architecture standards and best practices in pipeline automation, deployment, and monitoring
- Contribute to frameworks that support data governance and compliance with organizational and regulatory standards
- Work closely with analytics, product, and infrastructure teams to design scalable data solutions using cloud platforms such as Azure and Snowflake
- Implement monitoring and alerting mechanisms to ensure high availability, accuracy, and reliability of data systems
- Participate in continuous improvement efforts by exploring new technologies and optimizing existing data processes

Experience/Requirements

Required:
- Bachelor's degree (or equivalent experience) in Computer Science, Information Technology, Data Management, or a related field
- Proven experience designing and implementing data solutions and performing data modeling
- Strong hands-on experience with PySpark and SQL, including advanced queries and window functions
- Experience with Azure Data Factory or Airflow for orchestration
- Proficiency with Azure Databricks and data pipeline development in cloud environments
- Familiarity with CI/CD and DevOps tools for automated deployment and testing
- Strong understanding of data governance, security, and compliance best practices
- Advanced Python skills for data manipulation and transformation

Preferred:
- Experience working in agile/scrum environments
- Knowledge of Snowflake or similar modern data warehouse technologies
- Exposure to machine learning (ML) or AI model deployment in production settings
- Self-motivated and proactive, with the ability to manage priorities and deliver results independently
- Excellent communication skills and a collaborative mindset