We're seeking an experienced Senior Data Engineer to spearhead our data pipeline infrastructure. As a key member of our engineering team, you'll play a critical role in designing and managing end-to-end data pipelines that enable real-time supply-demand matching, power personalised experiences, and equip our Product, AI/ML, and Analytics teams to create life-changing features for millions of Australians.
Job Overview
* Data Engineering Expertise: Develop and refine scalable, high-performance data pipelines that collect, transform, and load data from diverse sources (e.g., transactional systems, 3rd-party APIs, event streams).
* Cloud Infrastructure: Build and maintain robust cloud infrastructure (AWS, GCP, or Azure), leveraging containers, serverless, and microservices to deliver fault-tolerant data workflows.
* Collaboration: Collaborate with ML Engineers and Data Scientists to ensure data readiness for personalised recommendations, predictive forecasting, and reinforcement learning models.
Key Responsibilities
* Develop Real-Time Streaming Solutions: Build and maintain real-time streaming solutions (e.g., Kafka, Kinesis, Spark Streaming) enabling instant financial insights for our members.
* Data Governance: Implement best-in-class data governance practices to meet CDR, privacy, and regulatory requirements in a high-stakes fintech environment.
* Performance Optimisation: Drive continuous improvement in data engineering by introducing new tools, automating processes, and enhancing performance benchmarks.
Fintech Innovation Leader
* Spearhead Supply-Demand Matching: Develop the backbone for supply-demand matching algorithms, ensuring seamless integration between data streams, ML models, and user-facing applications.
* Apply Advanced Database Solutions: Use modern database and storage technologies (NoSQL, columnar, time-series) to maximise throughput and minimise latency.
Mission-Driven Focus
* Empower Better Financial Decisions: Ensure every data pipeline and model integration reflects our commitment to positive financial outcomes for our members.
* Cultivate Data Culture: Foster a growth-oriented data culture that welcomes diversity of thought across our organisation.
What We're Looking For
* Data Engineering Mastery: 5+ years in data engineering roles, with a track record of delivering large-scale, real-time data pipelines.
* Technical Skills: Proficient in Python, SQL, and frameworks like Apache Spark, Kafka, Kinesis, or Flink.
* Ecosystem Knowledge: Familiarity with cloud-based architectures (AWS, GCP, or Azure) and modern deployment strategies (Docker, Kubernetes, Terraform).