Job Opportunity
We are seeking a skilled professional to join our team as a Data Engineer. In this role, you will be responsible for designing and building data pipelines, from ingestion through to consumption, within a hybrid big data architecture, using cloud-native products such as DBT and Apache Airflow.
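To give a flavour of the kind of pipeline work involved, below is a minimal, illustrative sketch of an Apache Airflow DAG that runs a placeholder ingestion step and then invokes a DBT transformation. Every name in it (the DAG id, task ids, and dbt project path) is a hypothetical example, not something specific to this role.

```python
# Minimal sketch of an ingestion-to-consumption pipeline in Apache Airflow.
# All DAG, task, and path names here are illustrative assumptions, not part
# of the role description.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def ingest_raw_data():
    # Placeholder for an ingestion step, e.g. landing source files
    # into a raw zone in cloud storage.
    print("Ingesting raw data...")


with DAG(
    dag_id="example_ingest_to_consumption",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(
        task_id="ingest_raw_data",
        python_callable=ingest_raw_data,
    )

    # DBT handles the transform/aggregate layer; here it is simply
    # invoked via the CLI (assumes a dbt project at the given path).
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/my_project",
    )

    ingest >> transform
```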
About the Role
The ideal candidate will have hands-on experience with cloud platforms, specifically GCP, and prior experience designing and building data pipelines. They should also have strong programming skills in languages such as Python, along with knowledge of data warehousing projects and ETL tools.
Key responsibilities include:
* Designing and implementing robust, fault-tolerant data pipelines that clean, transform, and aggregate data into databases or other data stores (a minimal sketch follows this list).
* Collaborating with team members to ensure successful project outcomes and capability building.
* Driving the engineering agenda and ensuring delivery of best-practice, end-to-end engineering in line with ANZ architecture standards.
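As a purely illustrative example of the "clean, transform, and aggregate" work described in the first responsibility, the sketch below uses pandas on a hypothetical orders dataset; the file, column names, and aggregation are invented for the example.

```python
# Illustrative sketch of a clean -> transform -> aggregate step, assuming a
# hypothetical orders dataset; the CSV source and column names are invented
# for the example.
import pandas as pd


def build_regional_summary(path: str) -> pd.DataFrame:
    orders = pd.read_csv(path)

    # Clean: drop rows missing the fields the aggregation depends on.
    orders = orders.dropna(subset=["region", "amount"])

    # Transform: normalise region labels and coerce amounts to numeric.
    orders["region"] = orders["region"].str.strip().str.upper()
    orders["amount"] = pd.to_numeric(orders["amount"], errors="coerce")
    orders = orders.dropna(subset=["amount"])

    # Aggregate: total and average order value per region, ready to load
    # into a reporting table.
    return orders.groupby("region")["amount"].agg(total="sum", average="mean")
```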
What You Will Bring
To succeed in this role, you will need to bring the following skills and qualifications:
* Hands-on experience with cloud platforms, GCP preferred (a brief GCP loading example follows this list).
* Prior experience in designing and building data pipelines from data ingestion to consumption within a hybrid big data architecture.
* Strong programming skills in languages such as Python.
* Ability to optimise data flows by building robust, fault-tolerant data pipelines that clean, transform, and aggregate data into databases or other data stores.
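As a small illustration of the GCP experience listed above, the sketch below loads an aggregated DataFrame into BigQuery using the google-cloud-bigquery client library; the project, dataset, and table names are hypothetical, and the snippet assumes credentials are already configured.

```python
# Minimal sketch of loading an aggregated result into BigQuery on GCP.
# The project, dataset, and table names are invented for illustration,
# and the snippet assumes the google-cloud-bigquery client library
# (plus pandas/pyarrow) is installed and default credentials are set up.
import pandas as pd
from google.cloud import bigquery


def load_summary(summary: pd.DataFrame) -> None:
    client = bigquery.Client()  # picks up default GCP credentials

    # Move the dataframe index (e.g. region) into a regular column so it
    # is loaded as a field rather than dropped.
    table_id = "my-project.analytics.regional_summary"  # hypothetical
    job = client.load_table_from_dataframe(summary.reset_index(), table_id)
    job.result()  # block until the load job completes
```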
Our Offer
We offer a range of benefits including flexible working arrangements, access to health and wellbeing services, and discounts on selected products and services.
We are committed to building a workplace that reflects the diversity of the communities we serve. We welcome applications from everyone and encourage you to talk to us about any adjustments you may require to our recruitment process or the role itself.
Location and Work Hours
Location: Melbourne
Work Hours: Full Time (Hybrid)