Company Description
Showtime Consulting is a leading provider of Shielded Cloud Solutions. Based in Australia and New Zealand, we specialise in secure cloud deployments and in extending the cloud to the intelligent edge. Our expertise spans Azure, Azure Stack, AWS, OpenStack, and Red Hat Linux, serving both commercial and government environments, including government-classified clouds. Our mission is to forge the future with secure, innovative technologies that protect and uplift communities, while our vision is to revolutionise seamless collaboration across domains on an ultra-secure platform.
We have delivered outcomes on programs such as Azure Stack Hub, Azure Protected, OpenStack Protected, Secret Cloud, and major national security system uplifts, and we continue to play a pivotal role in evolving sovereign cloud capability.
The Role
We are seeking an experienced Azure Data Engineer to join a high-profile data and analytics program. You will be a hands-on engineer, responsible for designing, building, and optimising data pipelines using Azure Databricks, PySpark, Python, SQL, and Teradata, while applying DevOps best practices to ensure reliable, automated delivery.
This role is well suited to someone who enjoys working across the full data engineering lifecycle, from ingestion and transformation through to optimisation, governance, and operational support.
Key Responsibilities
* Design, develop, and maintain scalable data pipelines using Azure Data Factory, Azure Databricks, PySpark, and Python
* Build and optimise SQL-based transformations, including working with Teradata and cloud data platforms
* Deliver and support data migration initiatives, including legacy, on-premises, and cross-platform migrations
* Apply DevOps practices including CI/CD pipelines, Git version control, automation, and environment management
* Optimise performance and cost across Databricks compute, SQL workloads, and data storage
* Work closely with architects, analysts, and stakeholders to translate business requirements into technical solutions
* Ensure data quality, security, and governance standards are met across all data pipelines
What You'll Need
* Experience in Data Engineering roles within enterprise environments
* Strong hands-on experience with Azure Databricks, PySpark, Python, and Spark SQL
* Solid SQL skills, including experience with Teradata or similar enterprise data platforms
* Experience working with Azure Data Factory and cloud data lakes
* Proven experience delivering or supporting data migration projects
* Familiarity with DevOps practices (CI/CD, Git, automation, infrastructure pipelines)
* Strong communication skills and the ability to work effectively with cross-functional teams
* Ability to work independently while contributing to a collaborative delivery environment