Job Summary:
We are seeking an experienced Data Engineer to design, implement, and maintain scalable data pipelines as a key member of our team.
Key Responsibilities:
* Design large-scale data systems and architectures.
* Develop and deploy data processing pipelines using Python and SQL.
* Collaborate with cross-functional teams to ensure data quality and availability.
Requirements:
* 5+ years of experience as a Data Engineer or in a similar role.
* Strong proficiency in SQL and Python.
* Experience with data warehousing and pipeline tools such as Airflow, dbt, and Spark.
* Expertise in the Azure cloud stack.
* Familiarity with data governance best practices.
Desirable Skills:
* Background in real-time data processing using Kafka and Flink.
* Understanding of DevOps concepts and data infrastructure.
* Knowledge of machine learning workflows.
Note: We are looking for an experienced professional who can hit the ground running.