Superb opportunity for a leader in the Data Engineering space. In this role, you'll lead a team of data engineers building scalable, secure, high-performance data solutions on Databricks and AWS. You'll architect modern data platforms, guide their implementation, and ensure best-in-class engineering practices across a global enterprise.
Responsibilities
1. Architect and manage Databricks-based Lakehouse platforms (Delta Lake, Spark, MLflow)
2. Integrate with AWS services including S3, Glue, Lambda, and Step Functions
3. Design and optimize scalable ETL/ELT pipelines using Spark (Python/Scala)
4. Automate infrastructure with Terraform or CloudFormation
5. Ensure robust performance tuning of Spark jobs and cluster configurations
6. Implement strong security governance using IAM, VPC, and Unity Catalog
7. Lead a high-performing engineering team through Agile delivery cycles
Skills
1. Proven data engineering expertise and leadership skills
2. Extensive Databricks experience in production environments
3. Advanced AWS knowledge: S3, Glue, Lambda, VPC, IAM, EMR
4. Strong coding skills in Python (PySpark), Scala, and SQL
5. Expertise in CI/CD pipelines, Git-based workflows, and automated testing
6. Familiarity with data modeling and warehousing (e.g., Redshift, Postgres)
7. Proficiency with orchestration and workflow tools (e.g., Airflow, Step Functions)
#LI-LDM