Position: Data Engineer
Work Type: Hybrid
Salary: €42,000 per annum
Location: Dublin, Ireland

We are looking for a highly skilled Data Engineer to design, build, and optimize scalable data pipelines and enterprise data platforms. The ideal candidate brings deep expertise in ETL development, data modeling, workflow orchestration, and modern data engineering tools such as Databricks, PySpark, Airflow, and Pentaho. This role is critical in enabling high-quality, reliable data delivery across analytics, reporting, and business operations.

Primary Responsibilities:

Data Pipeline Development & ETL Engineering
· Design and build scalable ETL pipelines to support enterprise-wide data ingestion, transformation, and delivery.
· Develop and optimize workflows using Apache Airflow, Pentaho Data Integration, and Python-based orchestration.
· Implement advanced transformations using PySpark, SQL, and Databricks Lakehouse tools.
· Ensure high-performance data processing through indexing, partitioning, and incremental loading strategies.

Data Architecture & Modelling
· Design and maintain data models including star schemas, medallion architecture, and analytical data structures.
· Build and enhance data lakes and data marts to support analytics, BI, and reporting needs.
· Collaborate on Lakehouse architecture using Databricks, Spark, and cloud platforms.

Data Quality, Governance & Validation
· Implement data quality frameworks ensuring accuracy, completeness, and consistency across systems.
· Develop reconciliation and validation processes for cross-system data alignment (e.g., SAP, Manufacturing, Oracle).
· Introduce lineage, audit, and monitoring mechanisms to strengthen governance and transparency.

Integration & Performance Optimization
· Integrate diverse data sources, including Oracle, Teradata, Hadoop, and APIs, into unified platforms.
· Optimize ETL and data workflows to reduce cycle times and improve system performance.
· Troubleshoot and resolve pipeline issues, ensuring reliable, uninterrupted data delivery.

Required Qualifications:
· Bachelor's degree in Computer Science, Data Engineering, or a related field.
· years of experience in Data Engineering or related fields.
· Strong proficiency in Python, SQL, PySpark, and shell scripting.
· Hands-on experience with Databricks (Lakehouse, Spark, Lakeflow Jobs, Delta).
· Expertise in ETL tools such as Pentaho Data Integration and Apache Airflow.
· Solid understanding of data modeling (star schema, dimensional modeling).
· Experience with Oracle, Teradata, MySQL, and PostgreSQL.
· Familiarity with the Hadoop ecosystem (Hive, Sqoop, HDFS, Spark).
· Knowledge of BI tools such as Power BI and Tableau.
· Cloud experience with AWS, Azure, or Databricks.
· Strong communication and stakeholder management skills.
· Databricks certifications (Lakehouse Fundamentals, Data Engineering) are preferred.
· Experience in manufacturing or semiconductor data environments is preferred.
· Background in Agile delivery and team leadership.
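
To illustrate the kind of pipeline work described under Data Pipeline Development & ETL Engineering, the sketch below shows a minimal incremental load of changed rows into a Delta table with PySpark. It is an illustrative example only: the paths, table names, JDBC connection, and the updated_at watermark column are hypothetical placeholders, not references to any actual system in this role.

```python
# Minimal illustrative sketch: incremental (watermark-based) load of changed
# rows into a Delta target. All paths, tables, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("incremental_orders_load").getOrCreate()

TARGET_PATH = "/lake/silver/orders"  # hypothetical Delta target

# Determine the last processed watermark so only new/changed rows are read.
last_watermark = (
    spark.read.format("delta").load(TARGET_PATH)
    .agg(F.max("updated_at"))
    .collect()[0][0]
)

# Read the source rows, here via JDBC from a hypothetical Oracle schema.
source = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//source-host:1521/ORCL")
    .option("dbtable", "sales.orders")
    .option("fetchsize", "10000")
    .load()
)
if last_watermark is not None:
    source = source.filter(F.col("updated_at") > F.lit(last_watermark))

# Merge (upsert) the changed rows into the Delta target, keyed on order_id.
(
    DeltaTable.forPath(spark, TARGET_PATH)
    .alias("t")
    .merge(source.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

In practice, a job like this would typically be scheduled through Apache Airflow or a Databricks Lakeflow Job rather than run ad hoc.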
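
In the same spirit, a minimal reconciliation check of the kind mentioned under Data Quality, Governance & Validation might compare a source extract against the Delta table it feeds. Again, the paths and column names (order_id, amount) are illustrative assumptions only.

```python
# Minimal illustrative reconciliation check between a source extract and the
# Delta table it feeds. Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_reconciliation").getOrCreate()

source = spark.read.parquet("/landing/orders_extract")           # hypothetical
target = spark.read.format("delta").load("/lake/silver/orders")  # hypothetical

src_total = source.agg(F.sum("amount")).collect()[0][0] or 0.0
tgt_total = target.agg(F.sum("amount")).collect()[0][0] or 0.0

checks = {
    "row_count_match": source.count() == target.count(),
    "no_null_keys": target.filter(F.col("order_id").isNull()).count() == 0,
    "amount_totals_match": abs(src_total - tgt_total) < 0.01,
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    raise ValueError(f"Reconciliation checks failed: {failed}")
```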