Work Arrangement: Hybrid (4 days in-office, 1 day from home)
Role Description
This position is for a Senior Spark Data Engineer who will design, build, and maintain data pipelines and infrastructure. The role sits within a SCRUM team and involves collaborating with stakeholders on data requirements.
Key Responsibilities
* Develop and maintain data pipelines using Spark (PySpark) and Python (see the illustrative sketch after this list).
* Utilise AWS services, including AWS Glue, Step Functions, Lambda, IAM, and S3, for data processing and analytics tasks.
* Manage data warehousing solutions, incorporating technologies such as Apache Iceberg.
* Participate in the SCRUM process by estimating and articulating effort for sprint tasks.
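
For illustration only, the sketch below shows the kind of PySpark pipeline work these responsibilities describe: reading raw data from S3 and appending it to an Apache Iceberg table registered in the AWS Glue catalog. The bucket, catalog, database, table, and column names are hypothetical placeholders, not project specifics.

```python
# Minimal, illustrative PySpark job (assumed setup: Spark 3.x with the Iceberg
# runtime and AWS bundle jars on the classpath). All names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("example-ingest")
    # Register a Glue-backed Iceberg catalog named "glue".
    .config("spark.sql.catalog.glue", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue.warehouse", "s3://example-warehouse-bucket/iceberg/")
    .config("spark.sql.catalog.glue.io-impl", "org.apache.iceberg.aws.s3.S3FileIO")
    .getOrCreate()
)

# Read a raw drop from S3 and apply light cleaning (hypothetical schema).
raw = spark.read.json("s3://example-raw-bucket/trades/2024-01-01/")
clean = (
    raw.dropDuplicates(["trade_id"])
       .withColumn("ingested_at", F.current_timestamp())
)

# Append into an existing Iceberg table; Iceberg manages snapshots and schema evolution.
clean.writeTo("glue.finance.trades").append()

spark.stop()
```

In practice a job like this would typically run as an AWS Glue job or on EMR, with Step Functions handling orchestration and retries, but that wiring is outside the scope of this sketch.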
Required Experience and Skills
* Demonstrable experience as a Senior Data Engineer.
* Deep knowledge of Spark (PySpark).
* Proficiency in Python for data engineering purposes.
* General understanding of AWS services related to data and analytics (e.g., AWS Glue, Step Functions, Lambda, IAM, S3).
* Familiarity with Apache Iceberg.
* Experience working in a SCRUM/Agile environment.
* Ability to estimate task effort and communicate effectively within a sprint structure.
* Strong communication and collaboration skills.
* A background in the finance industry.
Job Details
* Seniority level: Mid-Senior level
* Employment type: Contract
* Job function: Information Technology
* Location: Dublin, County Dublin, Ireland