Senior Data Engineer (AWS Cloud, API Integration) at Commonwealth Bank
We're embarking on an exciting Digital Transformation program and are ready to push boundaries, applying engineering best practices to elevate our customers' digital experience.
Overview
This Senior Data Engineer role focuses on building scalable, secure, and high-performance data platforms that power real-time and analytical use cases. It also involves contributing to API services and AI-enabled features that enhance customer experiences.
Roles & Responsibilities
* Design & Build AWS Data Platforms
* Architect, implement, and operate data lakes/lakehouses on S3 with Glue/Athena/Redshift and Iceberg/Hudi/Delta; optimise storage layout, partitioning, compaction, and schema evolution (see the table-management sketch after this list).
* Build batch and streaming pipelines using Glue/EMR/Lambda/Step Functions (Kafka experience is a plus); design for idempotency, replay, DLQs, and exactly-once/at-least-once semantics (see the consumer sketch after this list).
* Productionise Airflow orchestration with robust DAG design, SLA management, retry/backoff, and per-environment configuration (see the DAG sketch after this list).
* Implement CI/CD pipelines for data workflows and services using AWS-native tools (CodePipeline, CodeBuild) or GitHub Actions, including automated testing and deployment strategies.
* Ensure data governance, security, and compliance: lineage, cataloguing, data quality checks, PII protection, and privacy-by-design.
* Expose curated data and real-time context through API services (REST/GraphQL) with proper security, caching, and versioning (see the endpoint sketch after this list).
* Support AI/ML integration for channel use cases by enabling model endpoints and feature pipelines (see the endpoint-invocation sketch after this list).
* Collaborate with cross‑functional teams to ensure solutions meet performance, reliability, and operational excellence standards.
* Mentor engineers on data engineering best practices, orchestration, and automation.
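To make the lakehouse responsibility concrete, here is a minimal sketch of creating and compacting a partitioned Iceberg table through Athena. The database, table, columns, and S3 locations are all hypothetical; this is one possible approach, not the bank's actual schema or tooling.

```python
import boto3

athena = boto3.client("athena", region_name="ap-southeast-2")

def run_query(sql: str) -> str:
    """Submit a query to Athena (asynchronously) and return its execution id."""
    resp = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "curated"},  # hypothetical database
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    return resp["QueryExecutionId"]

# Partition by day of the event timestamp so queries prune by date.
run_query("""
    CREATE TABLE curated.transactions (
        txn_id    string,
        amount    decimal(18, 2),
        event_ts  timestamp
    )
    PARTITIONED BY (day(event_ts))
    LOCATION 's3://example-curated-bucket/transactions/'
    TBLPROPERTIES ('table_type' = 'ICEBERG')
""")

# Periodic compaction: rewrite small files into larger ones via bin packing.
run_query("OPTIMIZE curated.transactions REWRITE DATA USING BIN_PACK")
```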
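The idempotency and DLQ requirements might look like the following at-least-once SQS/Lambda consumer: each event id is recorded with a conditional write so replays and redeliveries are skipped, and unexpected failures are re-raised so the queue's redrive policy can route the message to a DLQ. The table name and event shape are assumptions.

```python
import json

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.resource("dynamodb")
processed = dynamodb.Table("processed-events")  # hypothetical idempotency table

def process(body: dict) -> None:
    # Placeholder for the actual business logic.
    print(f"Processing event {body['event_id']}")

def handler(event, context):
    for record in event["Records"]:             # SQS batch
        body = json.loads(record["body"])
        try:
            # Succeeds only the first time this event id is seen.
            processed.put_item(
                Item={"event_id": body["event_id"]},
                ConditionExpression="attribute_not_exists(event_id)",
            )
        except ClientError as err:
            if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
                continue                        # duplicate delivery: safe to skip
            raise                               # other failures retry, then hit the DLQ
        process(body)
```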
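The Airflow patterns named above (retry/backoff, SLAs, per-environment configuration) are sketched below, assuming Airflow 2.4+; the DAG id, task, Variable key, and SLA value are illustrative only.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.models import Variable
from airflow.operators.python import PythonOperator

# Per-environment configuration pulled from an Airflow Variable (hypothetical key).
ENV = Variable.get("deploy_env", default_var="dev")

default_args = {
    "owner": "data-platform",
    "retries": 3,                         # retry failed tasks
    "retry_delay": timedelta(minutes=2),  # base delay between retries
    "retry_exponential_backoff": True,    # back off 2m, 4m, 8m, ...
    "sla": timedelta(hours=1),            # flag tasks that overrun the SLA
}

def load_curated_layer(**context):
    # Placeholder for the actual load step; kept trivial for illustration.
    print(f"Loading curated layer in {ENV} for {context['ds']}")

with DAG(
    dag_id="curated_load",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    PythonOperator(task_id="load_curated_layer", python_callable=load_curated_layer)
```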
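For the API-exposure item, here is a minimal sketch of a versioned REST endpoint over curated data, using FastAPI as one possible framework. The path, response shape, and cache policy are illustrative, and authentication is deliberately elided.

```python
from fastapi import FastAPI, Response

app = FastAPI(title="curated-data-api")

@app.get("/v1/customers/{customer_id}/context")
def customer_context(customer_id: str, response: Response):
    # Version lives in the path; short-lived caching suits near-real-time reads.
    response.headers["Cache-Control"] = "private, max-age=30"
    # In practice this would read from a feature store or curated table.
    return {"customer_id": customer_id, "segments": ["example"], "api_version": "v1"}
```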
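Enabling model endpoints for channel use cases could be as simple as the following invocation sketch, assuming a deployed SageMaker real-time endpoint; the endpoint name and payload shape are hypothetical.

```python
import json

import boto3

runtime = boto3.client("sagemaker-runtime", region_name="ap-southeast-2")

def score(features: dict) -> dict:
    """Call a deployed model endpoint and return its JSON prediction."""
    resp = runtime.invoke_endpoint(
        EndpointName="example-propensity-model",  # hypothetical endpoint
        ContentType="application/json",
        Body=json.dumps(features),
    )
    return json.loads(resp["Body"].read())
```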
Skills Required
* Core Data Engineering & AWS Expertise
* Strong experience with AWS services: S3, Glue, EMR, Lambda, Step Functions, Redshift, Athena, Lake Formation
* Proficiency in data modelling, SQL, and building ETL/ELT pipelines for structured and semi‑structured data
* Familiarity with lakehouse technologies (Iceberg/Hudi/Delta) and metadata management
* Orchestration & CI/CD
* Hands‑on experience with Apache Airflow for workflow orchestration
* Experience designing and implementing CI/CD pipelines using AWS CodePipeline/CodeBuild or GitHub Actions (see the test sketch after this list)
* Ability to develop automated testing and deployment strategies
* Experience with Infrastructure as Code tools such as Terraform and CloudFormation (see the IaC sketch after this list)
* Proficiency in Python and SQL for scripting, automation, and operational tasks
* Experience with containerisation (e.g., Docker) and an understanding of Kubernetes concepts for scalable deployments
* Exposure to API design and integration (REST/GraphQL) and API gateways
* Understanding of AI/ML workflows, including model deployment, monitoring, and basic MLOps practices
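As one example of the automated testing a CI pipeline (CodeBuild or GitHub Actions) might run before deploying Airflow code, the sketch below imports every DAG and fails the build on import errors; the dags folder path is an assumption.

```python
from airflow.models import DagBag

def test_dags_import_cleanly():
    bag = DagBag(dag_folder="dags/", include_examples=False)
    # import_errors maps file path -> traceback for any DAG that failed to load.
    assert bag.import_errors == {}, f"DAG import failures: {bag.import_errors}"
```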
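And for Infrastructure as Code, a minimal Python-native sketch using AWS CDK, which synthesises CloudFormation (Terraform would be an equally valid route); the stack and bucket names are illustrative only.

```python
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3

class DataLakeStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        # Versioned, encrypted, non-public raw-zone bucket for the lake.
        s3.Bucket(
            self,
            "RawZoneBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
        )

app = cdk.App()
DataLakeStack(app, "data-lake-dev")
app.synth()
```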
Working with us
We're hiring engineers from across Australia and have opened technology hubs in Melbourne and Perth. We support our people with flexible work options, including part-time arrangements and job share where possible. If you require additional support, please contact HR Direct.
Expressions of Interest
We're interested in hearing from people who are passionate about building next‑generation data platforms and data pipeline solutions across the bank, and who enjoy collaborating to advance Data Engineering practices.
Seniority level
* Mid-Senior level
Employment type
* Full-time
Job function
* Information Technology
Advertising End Date: 13/09/2025
Location: Sydney, New South Wales, Australia