Senior Data Engineer (Databricks)

Sydney
Nuix
Posted: 4 February
Offer description

We're on a mission to be a Force For Good through our People, Products and Purpose at Nuix. Nuix is one of the great comeback technology success stories in Australia, and we're making massive waves each day. Nuix is, and will be, a pioneer in the Australian technology space, and we're carrying the torch on what "good" looks like.

This extends to our People. We're fiercely passionate, love working at pace, thrive in ambiguity, live and breathe outside the box, and above all are good humans. Our impact extends beyond our working hours, and our place in society isn't always defined by corporate metrics. We're determined to make a positive difference in the world, whether through our solutions, which help leading companies, governments and agencies find the truth and combat illegal activities, or through our people, who care about contributing and giving back both within and outside of Nuix. We are a Force For Good.

We're selective about who comes on board, and you should be too. But if the above sounds like a match, get in touch today and get ready for the possibility of starting a once-in-a-career journey.

Role Overview

The Senior Data Engineer will design and oversee data pipelines in Databricks on AWS, manage integrations with SaaS platforms and implement robust data quality and observability frameworks. This role ensures reliable, high-performance data delivery for enterprise analytics and AI workloads.

Purpose

As a Senior Data Engineer, your primary focus will be designing and delivering scalable, reliable data ingestion and transformation pipelines that power our analytics, reporting and AI use cases.

You will play a key role in turning raw, complex data from multiple systems into high-quality, governed, analytics-ready datasets using Databricks and modern Lakehouse practices.

This role sits within our broader Data, Analytics & AI team, where responsibilities are shared across specialized focus areas. While other team members lead platform architecture, cloud infrastructure and visualization, your core contributions will be in pipeline engineering and data transformation, the foundation that everything else relies on.

Growth & Collaboration

While this role is not primarily responsible for platform engineering or dashboard development, experience in these areas is highly valued and will help you collaborate effectively across the team.

If you're interested in expanding your skills into these areas, you will have opportunities to learn, contribute and be mentored as part of a collaborative, cross-functional team.

Location

This position will be based in our Sydney office. The candidate is required to attend the office a minimum of 3 days per week but may voluntarily elect to work either remotely or from the Sydney office for the remaining days of the week.

Key Responsibilities

* Design, build and maintain scalable ETL/ELT pipelines that ingest, transform and deliver trusted data for analytics and AI use cases.
* Build data integrations with well-known SaaS platforms such as Salesforce, NetSuite, Jira and others.
* Implement incremental and historical data processing to ensure accurate, up-to-date data sets.
* Ensure data quality, reliability and performance across pipelines through validation, testing and continuous code optimization.
* Contribute to data governance and security by supporting data lineage, metadata management and data access controls.
* Support production operations, including monitoring, alerting and troubleshooting.
* Work with stakeholders to translate business and technical requirements into well-structured, reliable datasets.
* Share knowledge and contribute to team standards, documentation and engineering best practices.
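To give a flavour of the data-quality work described above, here is a minimal plain-Python sketch of row-level validation in a pipeline. In Databricks this would typically be expressed as DLT expectations; the rule names and sample rows below are illustrative assumptions, not part of the role description.

```python
# Illustrative sketch: split incoming rows into valid and rejected sets
# according to named data-quality rules (the kind of check that DLT
# expectations or Great Expectations would enforce in a real pipeline).

def validate_rows(rows, rules):
    """Return (valid_rows, rejected) where rejected pairs each bad row
    with the list of rule names it failed."""
    valid, rejected = [], []
    for row in rows:
        failures = [name for name, check in rules.items() if not check(row)]
        if failures:
            rejected.append((row, failures))
        else:
            valid.append(row)
    return valid, rejected

# Hypothetical rules for a billing feed.
rules = {
    "id_present": lambda r: r.get("id") is not None,
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
}

rows = [
    {"id": 1, "amount": 10.0},
    {"id": None, "amount": 5.0},
    {"id": 2, "amount": -3.0},
]

good, bad = validate_rows(rows, rules)
print(len(good), len(bad))  # 1 valid row, 2 rejected
```

Quarantining rejected rows alongside the rule names they failed (rather than silently dropping them) is what makes pipeline monitoring and troubleshooting tractable downstream.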

Skills, Knowledge And Expertise

* Data Ingestion & Integration: hands-on experience building robust ingestion pipelines using tools and patterns such as Databricks Auto Loader, Lakeflow Connectors, Fivetran and/or custom API / file-based integrations.
* Core Data Engineering: strong development experience using SQL, Python and Apache Spark (PySpark) for large-scale data processing.
* Data Pipeline Orchestration: proven experience developing and operating data pipelines using Databricks Workflows & Jobs, Delta Live Tables (DLT) and/or Lakeflow Declarative Pipelines.
* Incremental Processing & Data Modelling: deep understanding of incremental data loading, including Change Data Capture (CDC), MERGE operations and Slowly Changing Dimensions (SCD) in a Lakehouse environment.
* Data Transformation & Lakehouse Design: experience in designing and implementing Medallion Architecture (bronze, silver and gold) using Delta Lake.
* Data Quality, Testing and Observability: experience implementing data quality checks with tools and frameworks such as DLT expectations, Great Expectations or similar, including pipeline testing and monitoring.
* Data Governance & Lineage: hands-on experience with data cataloguing, lineage and metadata management within Unity Catalog to support governance, auditing and troubleshooting.
* Performance Optimization: experience tuning Spark and Databricks workloads, including partitioning strategies, file sizing, query optimization and efficient use of Delta Lake features.
* Production Engineering Practices: experience working with code versioning (Git), peer review and promoting pipelines through development, test and production environments.
* Security & Access Control Awareness: understanding of data access control, sensitive data handling and working with Unity Catalog in the context of governed environments.
* Stakeholder & Team Collaboration: strong communication and analytical skills working with business and technical stakeholders to gather requirements, explain data concepts and support downstream users such as analysts and dashboard developers.

Desired Expertise

* Experience with Amazon Web Services (AWS).
* Understanding of DevOps best practices and solutions such as:
  * Infrastructure-as-code (Terraform).
  * Databricks Asset Bundles.
  * CI/CD pipelines (Jenkins).
* Familiarity with data warehousing and dimensional modelling methodologies (e.g. Kimball, facts & dimensions, star schemas, data marts).
* Basic understanding of AI & ML, including preparation of structured and unstructured data for ML use cases and AI agents.

Nuix is an Equal Opportunity Employer. We welcome all applications and are a flexible employer.

We strive to make any required adjustments where possible to make the process fair and equitable for everyone. If you need any accommodations throughout the interview process, please note this in your job application.

Love the role, but not the right fit for you? Know someone who might be awesome for this role? We're always looking for talented people who want to make a real impact. If you refer someone and we successfully hire them, you'll receive a $1,000 gift card.

Send referrals to recruitment@nuix.com or message us on LinkedIn. T&C apply.

To all recruitment agencies: Nuix does not accept agency CVs unless we have an existing agreement. Please do not forward CVs to us or to Nuix employees directly. Nuix is not responsible for any fees related to unsolicited resumes or submissions made without an agreement in place.

We are a leading provider of investigative analytics and intelligence software that empowers our customers to be a force for good by finding truth in the digital world. We help customers collect, process and review massive amounts of structured and unstructured data, making it searchable and actionable at scale and speed, and with forensic accuracy. Our users rely on Nuix software to assist with challenges as diverse as criminal investigations, data privacy, eDiscovery, regulatory compliance and insider threats.

Powered by AI.

Our solutions are powered by our patented data processing engine and enhanced with AI such as Natural Language Processing. Our AI capabilities supercharge our software to identify patterns and correlations that no human could find, so that our customers get to the most relevant or risky data faster, saving on time, cost, reputation damage and even lives.
