Role: Databricks Data Engineer
Location: Melbourne
Role Type: Contract
Please find below the job description:
We are looking for a Databricks Data Engineer responsible for designing, developing, and maintaining scalable data solutions on the Databricks platform. You will work closely with data scientists, analysts, and engineering teams to build robust pipelines, transform large datasets, and support analytics and ML initiatives.
Key Responsibilities:
* Design, develop, and maintain end-to-end data pipelines and ETL/ELT workflows using Databricks and Apache Spark (a minimal sketch follows this list).
* Develop and optimize Spark jobs using PySpark, Spark SQL, or Scala for efficient data processing.
* Integrate and ingest data from diverse sources into the Lakehouse architecture.
* Build, schedule, and monitor production jobs and workflows using Databricks Workflows/Jobs.
* Ensure data quality, reliability, and governance across the data platform.
* Troubleshoot performance issues; optimize pipeline performance and Spark jobs.
* Collaborate with cross-functional teams to understand business needs and deliver data solutions.
* Document technical designs, data models, and pipeline processes.
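To give a flavour of the day-to-day work, here is a minimal PySpark sketch of the kind of ETL pipeline described above: ingest raw files, apply basic cleansing, and load the result into a Delta table. All paths, column names, and table names are hypothetical placeholders, not specifics of this role.

```python
# Minimal PySpark ETL sketch (all paths and table names are hypothetical).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Ingest: read raw JSON files from an illustrative landing zone.
raw = spark.read.json("s3://landing-zone/orders/")

# Transform: de-duplicate, enforce types, and drop invalid rows.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Load: append into a Delta table, partitioned for downstream queries.
(clean.write
      .format("delta")
      .mode("append")
      .partitionBy("order_date")
      .saveAsTable("analytics.orders_clean"))
```

In practice a job like this would be scheduled and monitored via Databricks Workflows, per the responsibility above.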
Required Skills & Qualifications:
* Databricks platform experience (clusters, notebooks, Workflows) — essential.
* Proficiency in Apache Spark, PySpark, and Spark SQL; Scala is a plus.
* Strong programming in Python and/or SQL.
* Experience with Amazon Redshift.
* Experience building and maintaining ETL pipelines, including data ingestion, transformation, and data modeling.
* Familiarity with cloud platforms (AWS/Azure/GCP) and Databricks cloud setup.
* Understanding of data governance on the lakehouse (Unity Catalog, Delta Lake); a short sketch follows this list.
* Excellent analytical, debugging, and problem-solving skills.
* Good communication skills for collaborating with stakeholders.
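For the governance point above, this is roughly what Unity Catalog-style control looks like in practice: registering a Delta table under a three-level catalog.schema.table namespace and granting read access with standard SQL GRANTs. The catalog, path, and group names here are hypothetical, chosen only for illustration.

```python
# Unity Catalog governance sketch (catalog, schema, path, and group names are hypothetical).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Register a Delta table under a three-level Unity Catalog namespace.
spark.sql("""
    CREATE TABLE IF NOT EXISTS main.analytics.orders_clean
    USING DELTA
    AS SELECT * FROM delta.`s3://lake/clean/orders/`
""")

# Grant read-only access to an analyst group using standard SQL GRANTs.
spark.sql("GRANT SELECT ON TABLE main.analytics.orders_clean TO `analysts`")
```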