Data Engineer

Sydney
beBeeData
Posted: 14 September
Offer description

Unlocking Data Potential


We are seeking a highly skilled Data Engineer to design, build, and operate scalable data platforms that power real-time and analytical use cases. The ideal candidate will have strong experience with AWS services and be proficient in data modelling, SQL, and building ETL/ELT pipelines for structured and semi-structured data.



Key Responsibilities:

Design & Build AWS Data Platforms

* Architect, implement, and operate data lakes/lakehouses on S3 with Glue/Athena/Redshift and Iceberg/Hudi/Delta; optimise storage layout, partitioning, compaction, and schema evolution.
* Build batch and streaming pipelines using Glue/EMR/Lambda/Step Functions and Kafka (nice to have); design for idempotency, replay, dead-letter queues (DLQs), and exactly-once/at-least-once semantics.
* Productionise Airflow orchestration with robust DAG design, SLA management, retry/backoff, and per-environment configuration (see the sketch after this list).
* Implement CI/CD pipelines for data workflows and services using AWS-native tools (CodePipeline, CodeBuild) or GitHub Actions, including automated testing and deployment strategies.
* Ensure data governance, security, and compliance: lineage, cataloguing, data quality (DQ) checks, PII protection, and privacy by design.
* Expose curated data and real-time context through API services (REST/GraphQL) with appropriate security, caching, and versioning.
* Support AI/ML integration for channel use cases by enabling model endpoints and feature pipelines.
* Collaborate with cross-functional teams to ensure solutions meet performance, reliability, and operational-excellence standards.
* Mentor engineers on data engineering best practices, orchestration, and automation.
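
For illustration, a minimal sketch of the kind of Airflow DAG the orchestration bullet describes, assuming Airflow 2.4+; the DAG ID, task, and partition scheme are hypothetical, not part of the role:

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Retry/backoff and SLA policy applied to every task in the DAG.
    default_args = {
        "owner": "data-platform",
        "retries": 3,
        "retry_delay": timedelta(minutes=5),
        "retry_exponential_backoff": True,  # 5m, 10m, 20m between attempts
        "sla": timedelta(hours=2),          # flag runs that exceed two hours
    }

    def load_partition(ds, **_):
        # Writing to a date-keyed partition (e.g. s3://.../dt=<ds>/) keeps the
        # task idempotent: a retry or backfill overwrites the same partition
        # rather than appending duplicate rows.
        print(f"loading partition dt={ds}")

    with DAG(
        dag_id="daily_sales_elt",
        start_date=datetime(2025, 1, 1),
        schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
        default_args=default_args,
    ) as dag:
        PythonOperator(
            task_id="load_partition",
            python_callable=load_partition,  # receives ds from the task context
        )

Per-environment configuration would typically come from Airflow Variables or environment-specific deployment values rather than hard-coded names.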



Required Skills:

Core Data Engineering & AWS Expertise

* Strong experience with AWS services: S3, Glue, EMR, Lambda, Step Functions, Redshift, Athena, and Lake Formation
* Proficiency in data modelling, SQL, and building ETL/ELT pipelines for structured and semi-structured data
* Familiarity with lakehouse technologies (Iceberg/Hudi/Delta) and metadata management

Orchestration & CI/CD

* Hands-on experience with Apache Airflow for workflow orchestration
* Experience designing and implementing CI/CD pipelines using AWS CodePipeline/CodeBuild or GitHub Actions
* Experience developing automated testing and deployment strategies
* Experience with Infrastructure as Code tools such as Terraform and CloudFormation
* Proficiency in Python and SQL for scripting, automation, and operational tasks (see the example after this list)
* Experience with containerisation technologies (e.g., Docker) and an understanding of Kubernetes concepts for scalable deployments
* Exposure to API design and integration (REST/GraphQL) and API gateways
* Understanding of AI/ML workflows, including model deployment, monitoring, and basic MLOps practices
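
As a flavour of the Python/SQL work involved, a minimal sketch that runs an Athena query over a Glue-catalogued table with boto3; the database, table, and bucket names are hypothetical:

    import time

    import boto3

    athena = boto3.client("athena", region_name="ap-southeast-2")

    # Start an asynchronous query against a Glue-catalogued table.
    qid = athena.start_query_execution(
        QueryString=(
            "SELECT event_type, COUNT(*) AS n "
            "FROM events WHERE dt = '2025-01-01' "
            "GROUP BY event_type"
        ),
        QueryExecutionContext={"Database": "analytics"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )["QueryExecutionId"]

    # Poll until the query reaches a terminal state; production code would
    # add a timeout and exponential backoff.
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state == "SUCCEEDED":
        rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
        for row in rows[1:]:  # the first row is the column header
            print([col.get("VarCharValue") for col in row["Data"]])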



About the Opportunity:

We're hiring engineers from across Australia and have opened technology hubs in Melbourne and Perth. We support our people with flexible work options, including part-time arrangements and job share where possible.



What We Offer:

* A collaborative and innovative environment
* Opportunities for professional growth and development
* A competitive salary and benefits package
