Role: Data Engineer
6-month contract opportunity. Daily rate: circa $850 to $1,000 per day, including super.
About the Role
As an experienced Data Engineer, you'll be responsible for low-level solution design, detailed design interpretation, and building data pipelines to support transaction-monitoring scenarios. This role suits someone who is hands-on with modern data engineering tools, comfortable working in distributed environments, and able to collaborate closely with architects, analysts, and engineering teams.
Key Responsibilities
* Develop and maintain scalable data pipelines using Spark (PySpark), Python, and Airflow
* Interpret and deliver low-level and detailed solution designs
* Build and optimise ETL/ELT workflows across large-scale Big Data environments
* Support transaction-monitoring scenario builds and data processing requirements
* Work with cloud-native tooling on AWS and infrastructure-as-code patterns
* Collaborate with cross-functional teams across Data, Engineering, and Architecture
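To give a flavour of the transaction-monitoring scenario work above: such scenarios typically reduce to rule-based filters over transaction records. A minimal, stdlib-only Python sketch is below; the column names, threshold value, and function name are illustrative assumptions, not part of the role description, and in practice this logic would run as a PySpark job orchestrated by Airflow per the stack listed here.

```python
import csv
import io

# Hypothetical threshold for flagging large transactions (illustrative only).
LARGE_TXN_THRESHOLD = 10_000.0

def flag_large_transactions(csv_text, threshold=LARGE_TXN_THRESHOLD):
    """Extract transactions from CSV text, transform amounts to floats,
    and return the rows whose amount meets or exceeds the threshold."""
    reader = csv.DictReader(io.StringIO(csv_text))
    flagged = []
    for row in reader:
        amount = float(row["amount"])
        if amount >= threshold:
            flagged.append({"txn_id": row["txn_id"], "amount": amount})
    return flagged

sample = """txn_id,amount
t1,250.00
t2,15000.00
t3,9999.99
"""

print(flag_large_transactions(sample))  # → [{'txn_id': 't2', 'amount': 15000.0}]
```

A production version would express the same filter as a Spark DataFrame transformation scheduled by an Airflow DAG, but the scenario logic itself is the same shape.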
Skills & Experience
Data Processing & Engineering
* Spark, PySpark, Python
* Airflow, ETL/ELT, distributed computing
* Data modelling and Big Data frameworks
Data Analytics
* SQL, DuckDB
* Jupyter, Superset, Power BI
* Banking/financial domain knowledge (advantage)
Software Development
* Unit testing, OOP
* Git/source control, CI/CD pipelines
Integration & Infrastructure
* APIs, networking
* Terraform, Docker, AWS
* Kubernetes (nice to have)
Why Join
* Work with a major financial services client on high-visibility data initiatives
* Modern tech stack across cloud, data engineering, and analytics
* Strong potential for extension in a growing Data & AI environment