Mid‑Senior Software Developer – Melbourne Based
A$150,000 – A$170,000 per year
What You’ll Do
* Design, develop, and optimise distributed applications using Java and Scala for large‑scale data processing.
* Build efficient, high‑performance data pipelines, applying deep knowledge of Spark internals.
* Work with open‑source table formats such as Apache Iceberg, Delta Lake, and Apache Hudi to manage large datasets effectively (see the sketch after this list).
* Implement open Lakehouse solutions with catalogs such as Unity Catalog and Polaris Catalog, and manage ML workflows with MLflow.
* Partner with data engineers, ML engineers, and stakeholders to deliver solutions that meet business requirements.
* Identify and resolve complex issues in large‑scale distributed systems.
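As a rough illustration of the pipeline work above, here is a minimal Scala sketch that reads raw events with Spark and writes a daily aggregate to a Delta Lake table. It assumes Spark 3.x with the Delta Lake connector on the classpath; the paths, object name, and column names are hypothetical and only indicate the shape of the work, not a prescribed implementation.

```scala
import org.apache.spark.sql.SparkSession

object DailyEventCounts {
  def main(args: Array[String]): Unit = {
    // Assumes the Delta Lake connector (delta-spark) is available on the classpath.
    val spark = SparkSession.builder()
      .appName("daily-event-counts")
      .getOrCreate()

    import spark.implicits._

    // Hypothetical raw event data in Parquet; column names are illustrative only.
    val events = spark.read.parquet("s3://example-bucket/raw/events/")

    // Aggregate events per day and event type.
    val daily = events
      .groupBy($"event_date", $"event_type")
      .count()

    // Write the result as a Delta table; the format could be swapped for
    // Iceberg or Hudi if those connectors are configured instead.
    daily.write
      .format("delta")
      .mode("overwrite")
      .save("s3://example-bucket/curated/daily_event_counts")

    spark.stop()
  }
}
```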
What We’re Looking For
* 3+ years of professional Scala development experience.
* Strong knowledge of data structures, caching, networking, and database systems.
* In‑depth understanding of Spark internals, job execution, query optimisation, and distributed data processing.
* Hands‑on experience with Apache Iceberg, Delta Lake, Apache Hudi, or similar table formats.
* Familiarity with Lakehouse architecture and ML workflow management.
Preferred
* Experience with cloud platforms (AWS, GCP, Azure) and distributed systems.
* Familiarity with CI/CD pipelines and Git.
* Understanding of ML workflows integrated into data pipelines.
Why Join
* Work on cutting‑edge data technologies in a collaborative environment.
* Build large‑scale, real‑world systems with the latest open‑source tools.
* Growth opportunities including career development and professional learning.
Location
Melbourne – 4 days a week in office
Interested?
If you’re a passionate developer with strong Scala and Apache Spark skills looking for your next challenge in the data and analytics space, we’d love to hear from you. Apply now for a confidential conversation.