Data Engineer - Data Platforms - Azure

Sydney
IBM
Posted: 20 December
Offer description

Introduction
IBM is the largest technology and consulting employer in the world, serving clients in 170 countries. In this new era of Cognitive Business, IBM is helping to reshape industries by bringing together our expertise in Cloud, Analytics, Security, Mobile, and the Internet of Things. We are changing how we create. How we collaborate. How we analyse. How we engage. IBM is a leader in this global transformation, so there is no better place to launch your career.

IBM Global Business Services (GBS) is a team of business, strategy and technology consultants enabling enterprises to make smarter decisions and providing unparalleled client and consumer experiences in cognitive, data analytics, cloud technology and mobile app development. With global reach, outcome-focused methodologies and deep industry expertise, IBM GBS empowers clients to digitally reinvent their business and get the competitive edge. We outthink ordinary. Discover what you can do at IBM. We are hiring.

Your Role And Responsibilities
We are currently recruiting a Data Engineer - Data Platforms - Azure for a 17-month fixed-term contract based in Sydney.

You will be part of an Application Development team responsible for supporting the full life cycle of development and enhancements. As a Data Engineer, your responsibilities include:

* Collaborate effectively with team members and work independently to deliver high-quality solutions that ensure customer satisfaction and success.
* Partner with Technical Business Analysts and Solution Designers to gather, analyze, and validate business requirements, ensuring alignment with project objectives and regulatory standards.
* Lead development efforts for the FinCrime Line of Business (LoB) project, managing Bitbucket branches and overseeing code integration, reviews, and deployment workflows.
* Design and implement robust Big Data ETL solutions to support FinCrime and CTM initiatives, ensuring data quality, regulatory compliance, and audit readiness.
* Build PySpark-based ADAPT data pipelines and develop Enterprise Data Warehouse (EDW) solutions using tools such as Teradata; create Data Lakes leveraging HDFS, Hive, Sqoop, and HBase (a minimal sketch of such a pipeline follows this list).
* Provide production release support, including preparation and delivery of release instructions for PIAT activities, IRB model runs, and RWA execution cycles.
* Monitor and maintain team lab environments, ensuring consistent setup and availability during code merges, testing phases, and critical deployment windows.
* Develop and deploy automation utilities using Python, significantly reducing manual intervention and accelerating delivery of business-critical processes.
* Participate in solution design discussions, review technical design documents, and secure necessary endorsements from Solution Architects.
* Ensure adherence to project timelines and quality standards by actively contributing to planning, execution, and review phases of development cycles.
* Demonstrate strong expertise in Financial Crime Risk domain and regulatory frameworks, applying business knowledge to deliver scalable and reliable solutions.
* Support and guide development teams through peer reviews, sharing best practices, and ensuring compliance with Westpac's data engineering standards.
* Maintain strong awareness of timelines and budgets to ensure timely and cost-effective project delivery.
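
For illustration only, a minimal sketch of the kind of PySpark pipeline described above might look like the following. All specifics (paths, table and column names, the application name, and the high-value threshold) are assumptions made for the example, not details of this role.

    from pyspark.sql import SparkSession, functions as F

    spark = (
        SparkSession.builder
        .appName("adapt-pipeline-sketch")  # hypothetical app name
        .enableHiveSupport()               # required to write Hive tables
        .getOrCreate()
    )

    # Ingest raw transactions from HDFS (path is an assumption).
    raw = spark.read.parquet("hdfs:///data/raw/transactions")

    # Quality gate: drop records missing mandatory keys and flag high-value
    # transfers for downstream FinCrime screening (threshold is illustrative).
    curated = (
        raw.dropna(subset=["txn_id", "account_id"])
           .withColumn("high_value_flag", F.col("amount") > F.lit(10000))
           .withColumn("load_date", F.current_date())
    )

    # Publish to a partitioned Hive table for EDW and reporting consumers.
    (curated.write
        .mode("overwrite")
        .partitionBy("load_date")
        .saveAsTable("curated.fincrime_transactions"))

In practice a job like this would be parameterized and scheduled (for example through Control-M, mentioned below) rather than run by hand.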

Preferred Education
Bachelor's Degree

Required Technical And Professional Expertise
To ensure success in the role, you will possess the following skills:

* Possess 8+ years of technical expertise as a PySpark Developer with strong knowledge of Financial Crime and the Banking domain.
* Big Data & Distributed Systems: PySpark/Spark, Databricks, Delta Lake, HiveQL, HDFS, Hadoop, HBase, Cloudera; strong focus on code optimization, runtime performance, and cost efficiency.
* Cloud & Hybrid Environments: Azure, AWS, Hortonworks; hands-on experience orchestrating workloads across on‑prem and cloud stacks, including Azure Synapse.
* Data Engineering & Tooling: DataIKU (profiling/validation/transformation assurance), Control‑M (scheduling), Informatica BDM 10.2 (ingestion/ETL), Azure Synapse (analytics).
* Programming & Scripting: Python, PySpark, HiveQL, Unix Shell; working knowledge of C, Java, Scala (basics).
* Databases: MySQL, Postgres, Teradata—versatile across relational ecosystems for diverse data needs.
* Version Control & DevOps: Git/Bitbucket, CI/CD best practices, clean branching strategies, rigorous PR hygiene; Jira/Confluence for governance and documentation.
* Breadth Across the Stack: End‑to‑end coverage from data ingestion and transformation to scheduling, quality checks, deployments, and operational support.
* Deep System Knowledge (Westpac | 4+ years): Lead Data Engineer (IBM) since September 2021, with deep familiarity with Westpac's data landscape; accelerates troubleshooting, reduces delivery risk, and ensures seamless integration of new functionality.
* Leadership & Squad Coordination: Drives Scrum ceremonies; aligns engineering, QA, operations, and business stakeholders; anticipates risks, resolves issues quickly, and maintains transparent status/reporting to meet compliance and client specifications.
* Governance & Reliability: Enforces best practices in data lineage, documentation, and audit readiness; maintains high standards of data quality and platform reliability.
* Requirements & Design: Gathered requirements; prepared IFS, HLD/LLD, and mapping sheets.
* Build & Optimization: Designed and developed PySpark code and Hive queries; optimized jobs/pipelines for stability and performance.
* Scheduling & Operations: Built Control‑M workflows; managed deployments; handled incidents and production support.
* Data Assurance: Performed analysis in DataIKU to validate transformations and downstream KPIs.
* Testing & Quality: Led BUILD, UT, ST; drove SIT/UAT readiness and communicated results.
* Governance: Enforced Git/Bitbucket best practices; maintained documentation in Jira/Confluence.
* Experience developing Informatica ETL flows along with Teradata, and SME-level expertise in data warehouse applications.
* In-depth understanding and practical experience in SRDE architecture, including upstream and downstream data flows, transformation logic, and integration points.
* Well-versed with Agile methodologies and tools such as Bitbucket for version control and Shell scripting for automation and process efficiency.
* Actively contribute to supporting regulatory and business reporting needs.
* Proven ability to work collaboratively within SDLC, liaising effectively with development teams, QA, Solution Designers, and cross-functional stakeholders.
* A dependable team player with strong communication skills, capable of engaging with all levels of technical and business teams to drive project success.
* Strong commitment to continuous improvement and quality delivery in the data engineering space within the Banking and Finance industry.
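
As a hedged illustration of the runtime-performance focus called out in the Big Data bullet above, the sketch below broadcasts a small reference table so a join against a large fact table avoids a full shuffle. All table and column names are invented for the example.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("join-optimization-sketch").getOrCreate()

    transactions = spark.table("curated.fincrime_transactions")  # large fact table
    watchlist = spark.table("reference.sanctions_watchlist")     # small lookup table

    # Broadcasting the small side ships it whole to every executor, so the
    # large table's partitions never move; this cuts shuffle I/O and runtime.
    screened = transactions.join(
        F.broadcast(watchlist),
        on="account_id",
        how="left",
    )

    # Rows that matched the watchlist carry its columns; the rest are null.
    print(screened.where(F.col("list_name").isNotNull()).count())

Spark often picks a broadcast join on its own below spark.sql.autoBroadcastJoinThreshold; the explicit hint matters when table statistics are missing or the threshold is set conservatively.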
