The Company
Our client is launching a significant initiative to evaluate, optimize, and prepare their HRIS data for transition to a new system.
As part of their ongoing evolution, they are migrating from a traditional on-premises data warehouse to a modern cloud-based solution built on Snowflake. This position will be instrumental in driving and supporting that transformation.
The Candidate
The ideal candidate will be responsible for designing and delivering scalable, high-performance data solutions aligned with business needs. We're looking for someone passionate about data, with a strong grasp of both established and emerging trends in the field.
Key Responsibilities:
* Partner with multidisciplinary teams to architect, develop, and sustain robust, reusable, and high-performance data pipelines.
* Propose and implement enhancements that improve data accuracy, efficiency, and consistency while keeping the pipeline architecture streamlined.
* Gather and organize extensive, complex datasets to align with both technical specifications and business objectives.
* Design and optimize data processing solutions using modern big data tools and frameworks.
* Write and optimize advanced SQL queries to manage, retrieve, and transform data across databases.
* Drive internal efficiency by automating routine processes, improving data flow, and reconfiguring infrastructure to support scalability and better data access.
* Construct the foundational infrastructure for seamless data extraction, transformation, and loading (ETL) from diverse sources using scalable data technologies.
* Collaborate with internal teams and external partners, including project managers, analysts, data scientists, and developers, to resolve technical challenges and ensure a solid data infrastructure is in place to meet project demands.
What We're Looking For:
* Bachelor's degree in Computer Science, Engineering, Information Technology, or a related discipline, or equivalent hands-on experience.
* Strong proficiency in SQL, CI/CD pipelines, and data integration and delivery methods.
* Proven experience in data engineering and building robust ETL processes.
* Solid background in data modeling and database design best practices.
* Familiarity with Snowflake and its ecosystem is highly desirable.
* Experience using Qlik Data Integration or similar data integration tools.
* Working knowledge of Progress (OpenEdge) database systems.
* Exposure to Qlik Replicate or similar data replication technologies is a plus.
* Comprehensive understanding of the full software development lifecycle, including requirements gathering, system architecture, development, testing, deployment, release management, and ongoing support.
Apply Today
Learn more about our Brisbane recruitment services.
By clicking 'apply', you give your express consent that Robert Half may use your personal information to process your job application and to contact you from time to time for future employment opportunities. For further information on how Robert Half processes your personal information and how to access and correct your information, please read the Robert Half privacy notice.
Please do not submit any sensitive personal data to us in your resume (such as government ID numbers, ethnicity, gender, religion, marital status or trade union membership) as we do not collect your sensitive personal data at this time.