Principal Data Engineer – Azure Databricks | Python PySpark Databricks Engineer

Databricks Data Engineer / Databricks Architect - Azure Databricks, Data Lake, Apache Spark, Python Data Engineer - $1200 to $1344 per day - WFH Hybrid
Listed 17 October 2025 by Evolut

  • $1200 to $1344 per day depending on experience
  • WFH hybrid: 1 to 2 days in the office, the rest WFH
  • End-user organisation
  • Initially to 30 January, with prospect of renewal

  
You are a Principal Data Engineer with deep expertise in Azure Databricks, Python, and PySpark, ready to lead large-scale data engineering initiatives within a modern cloud environment.
  
You’ll play a key role in designing, building, and optimising high-performance data pipelines and platforms that enable advanced analytics, machine learning, and data-driven decision-making across the enterprise.
  
Key Responsibilities

  • Architect and develop data pipelines and data integration frameworks using Azure Databricks, PySpark, and Azure Data Factory (see the illustrative sketch after this list).
  • Design and maintain data lakehouses leveraging Delta Lake.
  • Lead the design and implementation of ETL/ELT pipelines that process large-scale structured and unstructured data.
  • Optimise Databricks clusters and workloads for performance, scalability, and cost efficiency.
  • Collaborate with data scientists, analysts, and stakeholders to deliver reliable, production-ready data solutions.
  • Drive continuous improvement across data architecture, security, governance, and CI/CD automation.
  • Mentor and guide a team of data engineers on cloud engineering best practices.
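
For context on the kind of hands-on work this involves, here is a minimal, illustrative PySpark and Delta Lake sketch of a bronze-to-silver pipeline; the storage path, table names, and columns are hypothetical and not the organisation's actual environment.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # already provided in Databricks notebooks

# Ingest raw events (e.g. landed by Azure Data Factory) into a bronze Delta table,
# stamping ingestion time and deriving a partition column.
raw = (spark.read
       .format("json")
       .load("abfss://landing@examplestorage.dfs.core.windows.net/events/"))

(raw.withColumn("ingested_at", F.current_timestamp())
    .withColumn("event_date", F.to_date("event_ts"))
    .write
    .format("delta")
    .mode("append")
    .partitionBy("event_date")
    .saveAsTable("bronze.events"))

# Curate a deduplicated silver table for analytics and ML consumers.
(spark.table("bronze.events")
      .dropDuplicates(["event_id"])
      .write
      .format("delta")
      .mode("overwrite")
      .saveAsTable("silver.events"))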

About You
  
You’re an experienced and strategic Data Engineering leader who thrives in a fast-paced, collaborative environment. You’ll bring:

  • 8+ years’ experience in data engineering, including at least 3 years on Azure Databricks.
  • Strong expertise in Python and PySpark programming.
  • Proven experience with Azure ecosystem tools: Data Factory, Synapse Analytics, Key Vault, Event Hubs, and DevOps pipelines.
  • Deep understanding of data modelling, partitioning, schema design, and performance tuning in big-data environments (see the sketch after this list).
  • Experience with Delta Lake, Parquet, and Lakehouse architecture.
  • Strong grasp of CI/CD, Git, and infrastructure-as-code concepts (Terraform, ARM templates).
  • Excellent communication skills and the ability to lead technical discussions with senior stakeholders.
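
As a rough illustration of the partitioning and tuning knowledge expected, the sketch below assumes hypothetical table and column names.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Partition a large fact table by date so queries filtering on order_date prune files.
(spark.table("silver.orders")
      .withColumn("order_date", F.to_date("order_ts"))
      .write
      .format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .saveAsTable("gold.orders"))

# Compact small files and co-locate rows on a common filter column (Databricks Delta SQL).
spark.sql("OPTIMIZE gold.orders ZORDER BY (customer_id)")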

Bonus points for:

  • Knowledge of machine learning pipelines, DataOps, or MLOps integration within Databricks.
  • Familiarity with Snowflake or AWS Glue environments.

The best method to apply is via the application button on this advert. For a confidential discussion, contact us on (02) 9687 1025, but please ensure your resume has been submitted first.
  

Please ensure all documents are sent in Microsoft Word format.