Location: Dublin City Centre
Work Arrangement: Hybrid (4 days in-office, 1 day from home)
Role Description:
=================
This position requires a Senior Data Engineer with strong experience in Apache Spark, Databricks, or AWS Glue to design, build, and maintain data pipelines and infrastructure.
The role involves working within a team using the Scrum framework and collaborating with various stakeholders on data requirements.
Key Responsibilities:
=====================
Develop and maintain data pipelines using Spark (PySpark) and Python.
Utilise AWS services, including AWS Glue, Step Functions, Lambda, IAM, and S3 for data processing and analytics tasks.
Manage data warehousing solutions, incorporating technologies such as Apache Iceberg.
Participate in the Scrum process by estimating and articulating effort for sprint tasks.
Required Experience and Skills:
===============================
Demonstrable experience as a Senior Data Engineer.
Deep knowledge of Spark (PySpark).
Proficiency in Python for data engineering purposes.
General understanding of AWS services related to data and analytics (e.g., AWS Glue, Step Functions/Lambda, IAM, S3).
Familiarity with Apache Iceberg.
Experience working in a Scrum/Agile environment.
Ability to estimate task effort and communicate effectively within a sprint structure.
Strong communication and collaboration skills.
Desirable Experience:
=====================
A background in the finance industry.
Job Type: Specified-purpose
Contract length: 12 months
Pay: €1.00-€1,000.00 per day
Work Location: Hybrid remote in Arbour Hill, Dublin, Co. Dublin