Job Overview:
A unique opportunity exists to design and implement cutting-edge data solutions on Databricks and AWS within a global financial services organization.
Data Engineering Responsibilities:
* Design, build, and optimize large-scale data architectures on AWS using Databricks.
* Develop and maintain ETL/ELT pipelines for efficient data processing using Python, SQL, and Spark (see the sketch after this list).
* Create and refine database schemas, tables, indexes, and stored procedures.
* Collaborate with stakeholders to gather requirements and deliver reliable, production-grade data solutions.
* Work closely with cross-functional teams to maintain end-to-end data pipelines from source systems to the analytics layer.
* Implement orchestration, version control (Git), and CI/CD processes within an Agile delivery model.
* Apply best practices in data modeling, security, and performance tuning.
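For illustration only (not part of the role's requirements): a minimal sketch of the kind of PySpark ETL pipeline described above, assuming a Databricks runtime (or local Spark with the delta-spark package). The S3 paths, column names, and schema are hypothetical.

```python
# Illustrative sketch only. Paths, column names, and schema are hypothetical;
# assumes Delta Lake is available (Databricks runtime, or delta-spark locally).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw source data from a hypothetical S3 landing zone.
raw = spark.read.json("s3://example-bucket/raw/transactions/")

# Transform: enforce types, derive a partition column, and de-duplicate.
cleaned = (
    raw
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .withColumn("txn_date", F.to_date("txn_timestamp"))
    .dropDuplicates(["txn_id"])
    .filter(F.col("amount").isNotNull())
)

# Load: append to a Delta table, partitioned for downstream analytics.
(
    cleaned.write
    .format("delta")
    .mode("append")
    .partitionBy("txn_date")
    .save("s3://example-bucket/curated/transactions/")
)
```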
Required Skills and Qualifications:
The ideal candidate will have 5+ years of experience in data engineering, business intelligence (BI), and data warehouse (DW) development, including 3+ years of hands-on experience with Databricks on AWS. Strong skills in Python, SQL, and modern data frameworks (Spark, Delta Lake) are also required, along with a solid understanding of data modeling, ETL, and data architecture principles.
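As a small illustration of the Delta Lake experience named above, here is a hedged sketch of an incremental upsert using the delta-spark Python API. The table paths and the join key (txn_id) are hypothetical, and the target Delta table is assumed to already exist.

```python
# Illustrative sketch only: hypothetical paths and join key; assumes the
# target Delta table already exists and delta-spark is configured.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("example-upsert").getOrCreate()

# New or changed rows staged by an upstream job (hypothetical location).
updates = spark.read.format("delta").load("s3://example-bucket/staging/transactions/")
target = DeltaTable.forPath(spark, "s3://example-bucket/curated/transactions/")

# Incremental upsert: update rows that match on the key, insert the rest.
(
    target.alias("t")
    .merge(updates.alias("u"), "t.txn_id = u.txn_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```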
About the Role:
This position offers the chance to contribute to innovative data solutions that drive business growth.