This is a superb opportunity for a leader in the Data Engineering space. In this role, you'll lead a team of data engineers building scalable, secure, and high-performance data solutions on Databricks and AWS. You'll architect modern data platforms, guide implementation, and ensure best-in-class engineering practices across a global enterprise.
Responsibilities
* Architect and manage Databricks-based Lakehouse platforms (Delta Lake, Spark, MLflow)
* Integrate with AWS services including S3, Glue, Lambda, and Step Functions
* Design and optimize scalable ETL/ELT pipelines using Spark (Python/Scala) — see the illustrative sketch after this list
* Automate infrastructure provisioning and deployment through infrastructure-as-code and CI/CD
* Ensure robust performance tuning of Spark jobs and cluster configurations
* Implement strong security and governance controls using IAM, VPC, and Unity Catalog
* Lead a high-performing engineering team through Agile delivery cycles
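To give a flavor of the day-to-day engineering work, here is a minimal, illustrative PySpark sketch of the kind of ETL step the role involves: reading raw data from S3, applying basic transformations, and appending to a Delta table. The bucket path, table name, and catalog layout are hypothetical placeholders, and cluster and IAM configuration are assumed to already be in place.

```python
# Minimal sketch of an ETL step on Databricks: S3 -> transform -> Delta table.
# Bucket, path, and table names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

# Extract: read raw JSON landed in S3 (assumes IAM/instance-profile access is configured)
raw = spark.read.json("s3://example-raw-bucket/orders/2024/")

# Transform: basic cleansing, typing, and de-duplication
orders = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .dropDuplicates(["order_id"])
)

# Load: append into a governed Delta table (Unity Catalog three-part name assumed)
(
    orders.write.format("delta")
          .mode("append")
          .saveAsTable("main.sales.orders_daily")
)
```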
Skills
* Data engineering expertise and proven leadership skills
* Extensive Databricks experience in production environments
* Advanced AWS knowledge: S3, Glue, Lambda, VPC, IAM, EMR
* Strong coding skills in Python (PySpark), Scala, and SQL
* Expertise in CI/CD pipelines, Git-based workflows, and automated testing
* Familiarity with data modeling and warehousing (e.g., Redshift, Postgres)
* Proficient in orchestration and workflow tools (e.g., Airflow, Step Functions) — see the sketch below
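On the orchestration side, the following is a minimal Airflow sketch of how a pipeline like the one above might be scheduled. The DAG id, schedule, connection id, and Databricks job id are hypothetical, and the apache-airflow-providers-databricks package is assumed to be installed.

```python
# Minimal sketch of an Airflow DAG that triggers an existing Databricks job nightly.
# DAG id, schedule, job_id, and connection id are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

with DAG(
    dag_id="orders_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",  # run nightly at 02:00
    catchup=False,
) as dag:
    # Trigger the Databricks job configured for the ETL step
    run_orders_etl = DatabricksRunNowOperator(
        task_id="run_orders_etl",
        databricks_conn_id="databricks_default",
        job_id=12345,  # hypothetical Databricks job id
    )
```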