Data Engineer – Contract | Location: Australia
Our client is seeking a Data Engineer to join their team and contribute to the design, development, and optimisation of data pipelines and cloud-based data solutions. This role is ideal for someone with strong hands-on experience across PySpark, AWS Glue, SQL, and modern data architectures.
Key Responsibilities
* Design, develop, and maintain data pipelines and ETL processes using PySpark, SparkSQL, SQL, and AWS Glue.
* Build scalable solutions on AWS cloud, with a focus on Lakehouse and Data Warehouse architectures.
* Apply dimensional modelling principles to deliver robust data solutions.
* Contribute to DevOps, CloudOps, DataOps, and CI/CD practices with a Site Reliability Engineering (SRE) mindset.
* Collaborate with cross-functional teams to translate business needs into system and data requirements.
* Apply version control (Git) and software engineering practices, including building APIs and microservices.
Skills & Experience
* 1–3 years' experience in SQL, PySpark, and AWS Glue.
* Strong understanding of AWS cloud services and data architecture (Lakehouse & DW).
* Experience with software engineering concepts (APIs, microservices).
* Familiarity with DevOps practices and CI/CD pipelines.
* Knowledge of version control systems, specifically Git.
* Strong analytical skills with the ability to interpret and solve complex data challenges.
Soft Skills
* Excellent written and verbal communication skills.
* Ability to collaborate effectively with stakeholders and translate business needs into technical requirements.
* Strong problem-solving skills and a proactive mindset.
Duration: Contract role
Location: Flexible (Australia-based)
If you are a motivated Data Engineer with a passion for delivering high-quality data solutions, we'd love to hear from you.
At QIX Consulting, we are committed to fostering a diverse and inclusive workplace. We encourage applications from people of all backgrounds, experiences, and identities.