We're partnering with a leading financial services organisation to hire a Data Engineer to join a high-performing engineering squad delivering production-grade data and software solutions on AWS.
This is an opportunity to work in a modern, cloud-native environment where quality, automation, and resilient design are embedded from day one. You'll own features end-to-end — from design and development through to deployment, monitoring, and production support — while contributing to engineering standards and continuous improvement.
The Role
You will design and build scalable data pipelines and services using Python and PySpark on AWS, working closely with product, platform, and QA teams to deliver reliable and observable systems.
Key responsibilities include:
Designing, building, and maintaining robust data processing pipelines on AWS (Glue, EMR, S3)
Orchestrating workflows using Airflow, ensuring reliability, efficiency, and adherence to clear SLAs
Embedding automation across testing, quality, security, and deployment (CI/CD)
Implementing unit, integration, and contract testing within Git-based workflows
Driving shift-left quality practices and applying DevSecOps principles
Monitoring production health using logging, metrics, and tracing tools
Contributing to performance tuning, cost optimisation, and cloud-native best practices
Mentoring engineers and promoting engineering excellence within the squad
About You
You are a hands-on engineer who thrives in cloud environments and takes ownership of solutions from design to production.
Essential experience:
3–5 years' hands-on software development experience
Strong Python development and Linux shell scripting skills
Commercial experience with AWS Glue, Spark/PySpark, and S3
Airflow pipeline design and operational experience (DAGs, retries, SLAs)
Solid SQL skills, with exposure to relational databases such as Teradata or Oracle
CI/CD experience with tools such as GitHub Actions, Jenkins, TeamCity, or Octopus
Observability experience (CloudWatch, Splunk, logging, metrics, tracing)
Experience designing scalable, reliable cloud-native solutions on AWS
Exposure to automation frameworks and test-driven practices
Nice to have:
Experience migrating or integrating legacy ETL tools (e.g., Ab Initio, SAS)
Knowledge of Redshift, Athena, EMR, and the Apache Iceberg table format
API and microservices integration experience
Test automation frameworks (e.g., Playwright, Appium, Sahi)
Dashboarding and engineering metrics reporting
Experience mentoring engineers and uplifting engineering standards across teams
Why Apply?
Join a high-performing, collaborative engineering squad
Work on modern AWS data platforms and cloud-native architectures
Strong focus on automation, DevSecOps, and engineering excellence
Opportunity to make a real impact within a financial services environment
If you're a passionate Data Engineer who enjoys building resilient, scalable data systems and driving quality through automation, we'd love to hear from you.
Apply now or reach out for a confidential discussion.