Role: Senior Data Engineer
Duration: 12 months with possible extension (long-term project)
Engagement: Contract/freelance (full-time, 5 days per week, 8-hour days)
Location: Onsite in Cork 3-4 days per week, remainder remote
General Summary: This position exists to design, build, and maintain the data infrastructure and pipelines that enable the organization to manage and use data effectively for analytics, reporting, and machine learning applications. The Senior Data Engineer develops scalable data solutions that ensure data is accurate, accessible, and delivered efficiently to business and technical stakeholders. The role also contributes technical expertise to data platform initiatives and helps advance data engineering capabilities across the organization.
Duties & Responsibilities:
Design, develop, and maintain scalable data pipelines and ETL/ELT processes to extract, transform, and load data from multiple sources into target systems (30%)
Build and optimize data models, schemas, and database structures to support analytics, reporting, and machine learning workloads (20%)
Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver reliable data solutions (15%)
Implement data quality checks, monitoring, and validation processes to ensure the accuracy and reliability of data pipelines (15%)
Optimize data pipeline performance, troubleshoot issues, and maintain data infrastructure to ensure high availability and efficiency (10%)
Document data engineering processes, technical specifications, and best practices for knowledge sharing and team development (5%)
Contribute to the evaluation and adoption of new data technologies and tools to improve team capabilities and platform performance (5%)
Knowledge, Skills and Abilities (KSAs):
Strong proficiency in programming languages such as Python, SQL, and/or Scala for data engineering tasks
Knowledge of data pipeline orchestration tools such as Airflow, Prefect, or similar workflow management platforms
Experience with cloud data platforms and services on AWS, Azure, or GCP
Ability to design and implement ETL/ELT processes using modern data integration tools and frameworks
Strong SQL skills and experience with relational databases and data warehouse platforms such as Snowflake, Redshift, or BigQuery
Knowledge of big data technologies such as Spark, Kafka, or Hadoop for distributed data processing
Understanding of data modeling principles for both relational and dimensional models
Ability to optimize query performance and troubleshoot data pipeline issues
Strong problem-solving skills and attention to detail in ensuring data quality and accuracy
Effective communication skills to work collaboratively with technical and non-technical stakeholders
Ability to write clear technical documentation and follow software engineering best practices
Work Experience and/or Education:
Bachelor's degree in Computer Science, Engineering, Information Systems, or a related technical field
5+ years of experience in data engineering or related technical roles
Demonstrated experience building and maintaining production data pipelines and ETL processes
Proficiency with cloud-based data platforms and modern data engineering tools
Experience with version control systems and CI/CD practices