About the Role
We are looking for a Big Data Engineer to join one of our leading clients on an exciting project. The ideal candidate will have hands-on experience with large-scale data processing, the Hadoop ecosystem, and cloud platforms, and will play a key role in building and optimizing data pipelines.
Tech Stack
* Programming Languages: Java / Scala / Python
* Data Processing Framework: Spark
* Big Data / Workflow Frameworks: Hive, Impala, Oozie, Airflow, HDFS
* Cloud Experience: AWS, Azure, or GCP (services such as S3, Athena, EMR, Redshift, Glue, Lambda)
* Data & AI Platform: Databricks
Roles & Responsibilities
* Build, optimize, and maintain ETL pipelines using Hadoop ecosystem tools (HDFS, Hive, Spark).
* Collaborate with cross-functional teams to ensure efficient and reliable data processing workflows.
* Perform data modelling, implement data quality checks, and tune system performance.
* Support modernization efforts, including migration to and integration with cloud platforms and Databricks.
Preferred Qualifications
* Hands-on experience with large-scale data processing and distributed systems.
* Strong problem-solving and analytical skills.
* Familiarity with CI/CD pipelines and version control tools is a plus.
Job Types: Full-time, Permanent
Pay: €70,000.00-€85,000.00 per year
Work Location: In person
Application deadline: 10/10/2025
Reference ID: IJP - SBDE - DUBIR - 01
Expected start date: 19/10/2025