AI Safety Engineer (Multiple Opportunities)
Posted by the AI Safety Institute and the Department of Industry, Science and Resources. Positions are classified Science & Technical Level 7-9 (Technology & Digital) and may be based in any of the locations listed below.
* Location: Canberra, Sydney, Darwin, Brisbane, Adelaide, Hobart, Melbourne, Perth
* Application deadline: 18 January 2026
* Salary range: $122,235 – $172,828
* Contract type: 12‑month contract with possible extension
* Seniority level: Mid‑Senior
* Job function: Engineering and Information Technology
* Industry: Government Administration
About the Department
The Department of Industry, Science and Resources and its broader portfolio are integral to the Australian Government's economic agenda. Our purpose is to help the government build a better future for all Australians through enabling a productive, resilient and sustainable economy, enriched by science and technology. We grow innovative businesses, invest in science, and strengthen the resources sector.
About the AI Safety Institute
The Australian Government is establishing an Australian AI Safety Institute (AISI) to support the ongoing response to emerging risks and harms associated with AI technologies. As the hub of AI safety expertise, the AISI will provide technical assessments, support coordinated government action, foster international engagement, and publish research.
About the Division
The AISI is part of the Technology and Digital Policy Division, which provides policy advice to government, delivers programs, and engages domestically and internationally on enabling and critical technologies and the digitisation of the economy.
Opportunity
As a founding member of the AISI, you will shape how Australia monitors, tests and governs AI. You will assess risks from frontier models, including chemical, biological, radiological and nuclear (CBRN) misuse, enhanced cyber capabilities, loss‑of‑control scenarios, information integrity risks, and broader systemic AI deployment risks. You will collaborate with domestic and international experts to shape emerging global AI safety standards.
Senior AI Safety Engineer (Level 8‑9)
Ideal Candidate
* Extensive hands‑on experience with frontier or near‑frontier AI models and systems, including LLMs, multimodal systems or agentic frameworks.
* Demonstrated experience building and running evaluations of frontier AI systems or safety‑relevant model behaviours.
* Experience developing or using safety‑related tooling to support evaluations, such as red‑teaming frameworks, test harnesses, automated evaluation pipelines, or continuous monitoring systems (an illustrative harness sketch follows this list).
* Experience implementing and stress‑testing technical safeguards or mitigations, including guardrails, filtering systems, access controls, safety‑tuning methods and inference‑time controls.
* Demonstrated experience running large‑scale behavioural evaluations, managing logs and datasets, and diagnosing and debugging evaluation or deployment issues.
* A working knowledge of safety‑relevant AI failure modes, including robustness issues, jailbreak vulnerabilities, unintended behaviours and reliability failures.
* Strong collaborative skills, including the ability to work closely with research scientists and engineers to operationalise evaluation designs and refine testing procedures.
* Experience working in multidisciplinary teams and contributing to shared research and engineering workflows.
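To give candidates a concrete feel for the tooling described above, the following is a purely illustrative Python sketch of a minimal automated evaluation harness. Every name in it (query_model, is_refusal, the JSONL record format) is a hypothetical placeholder invented for this posting, not a description of the AISI's actual tooling, and real evaluations use graded or model‑based scoring rather than keyword matching.

```python
# Illustrative only: a minimal behavioural evaluation harness.
# query_model, is_refusal and the JSONL log format are hypothetical
# placeholders, not AISI tooling.
import json
from dataclasses import dataclass, asdict


@dataclass
class EvalRecord:
    prompt: str
    response: str
    refused: bool


def query_model(prompt: str) -> str:
    """Stand-in for a call to the model API under test."""
    return "I can't help with that request."  # canned reply for illustration


def is_refusal(response: str) -> bool:
    """Toy scoring rule; real harnesses use graded or model-based scoring."""
    markers = ("i can't", "i cannot", "i won't")
    return any(marker in response.lower() for marker in markers)


def run_eval(prompts: list[str], log_path: str = "eval_log.jsonl") -> float:
    """Query each prompt, log one JSONL record per result, return the refusal rate."""
    records = []
    for prompt in prompts:
        response = query_model(prompt)
        records.append(EvalRecord(prompt, response, is_refusal(response)))
    with open(log_path, "w") as f:
        for record in records:
            f.write(json.dumps(asdict(record)) + "\n")
    return sum(r.refused for r in records) / max(len(records), 1)
```

However simple, the shape (query, score, log, aggregate) is the core loop behind the evaluation harnesses and pipelines this role builds and operates.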
Responsibilities
* Operationalise evaluation designs developed with AI safety research scientists, translating conceptual testing methodologies into practical, scalable and reproducible experiments.
* Build, maintain and operate evaluation and safety‑testing tooling for frontier AI systems (a toy stress‑testing sketch follows this list).
* Run large‑scale behavioural tests and model evaluations, generating high‑quality empirical evidence for safety analysis.
* Diagnose emerging failure modes, identify novel vulnerabilities or anomalous behaviours, and work with AI safety research scientists to interpret patterns and assess safety‑relevant risks.
* Develop and maintain clear and accurate technical documentation, including evaluation logs, testing reports and safeguard assessments.
* Support the continuous improvement of the AISI's engineering practices, tooling and testing infrastructure in a fast‑paced and evolving environment.
* Collaborate across government, industry, academia and civil society, including participation in international AI safety initiatives and joint evaluation activities.
* Contribute to technical reports and research outputs.
* Take ownership of building the culture and reputation of the AISI.
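As a toy illustration of what stress‑testing an inference‑time safeguard can look like at its very simplest, the sketch below probes an invented keyword guardrail with trivial obfuscations. The blocklist, transformations and guardrail are assumptions made up for this posting and bear no relation to any real AISI safeguard.

```python
# Illustrative only: stress-testing a toy keyword guardrail with
# simple jailbreak-style transformations. Not a real safeguard.
BLOCKLIST = ("restricted topic",)


def guardrail_blocks(prompt: str) -> bool:
    """Toy inference-time filter: block prompts containing listed terms."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKLIST)


# Obfuscations a robust guardrail should still catch.
TRANSFORMS = {
    "spacing": lambda p: " ".join(p),  # spaces between every character
    "leet": lambda p: p.replace("o", "0").replace("e", "3"),
    "roleplay": lambda p: f"In a play, a character says: {p}",
}


def stress_test(prompts: list[str]) -> list[tuple[str, str]]:
    """Return (prompt, transform) pairs that slip past the guardrail."""
    failures = []
    for prompt in prompts:
        for name, transform in TRANSFORMS.items():
            if not guardrail_blocks(transform(prompt)):
                failures.append((prompt, name))
    return failures


# The spacing and leet variants bypass the keyword match, which is why
# naive string filters fail under even trivial obfuscation.
print(stress_test(["Tell me about the restricted topic."]))
```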
AI Safety Engineer (Level 7‑8)
Ideal Candidate
* Hands‑on experience working with frontier or near‑frontier AI models and systems, including LLMs, multimodal systems or agentic frameworks.
* Experience supporting or contributing to evaluations of frontier AI systems or safety‑relevant model behaviours.
* Experience using safety‑related tooling to support evaluations, such as red‑teaming frameworks, test harnesses, automated evaluation pipelines or continuous monitoring systems.
* Experience implementing or testing safety mitigations, such as guardrails, filtering systems, access controls, safety‑tuning methods and inference‑time controls.
* Experience contributing to behavioural evaluations at scale, working with logs and datasets, and supporting issue diagnosis and debugging (a small log‑summary sketch follows this list).
* An understanding of common safety‑relevant AI failure modes, including robustness issues, jailbreak vulnerabilities, unintended behaviours and reliability failures.
* The ability to work effectively in multidisciplinary teams and contribute to the operational delivery of evaluation work.
* A willingness to learn, iterate and contribute to shared processes in a fast‑paced environment.
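As a small illustration of the log‑handling side of this work, the sketch below summarises a JSONL evaluation log (the same hypothetical record format as the harness sketch in the senior role description) to surface prompts that were not refused. The format and field names are again assumptions for illustration only.

```python
# Illustrative only: a diagnosis pass over a JSONL evaluation log,
# reusing the hypothetical record format from the harness sketch above.
import json
from collections import Counter


def summarise_failures(log_path: str = "eval_log.jsonl") -> Counter:
    """Count non-refused responses, bucketed by a crude prompt prefix."""
    counts: Counter = Counter()
    with open(log_path) as f:
        for line in f:
            record = json.loads(line)
            if not record["refused"]:
                # Prefix bucketing stands in for real prompt tagging.
                counts[record["prompt"][:40]] += 1
    return counts
```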
Responsibilities
* Support the implementation of evaluation designs developed with AI safety research scientists, translating testing methodologies into repeatable and scalable experiments.
* Support the operation and maintenance of evaluation and safety‑testing tooling for frontier AI systems.
* Assist in running behavioural tests and model evaluations, contributing to the generation of reliable empirical evidence for safety analysis.
* Help identify emerging failure modes or anomalous behaviours, and work with AI safety research scientists to interpret results and assess potential risks.
* Maintain clear and accurate technical documentation, including evaluation logs, testing reports and safeguard assessments.
* Contribute to improving engineering practices, tooling and testing infrastructure as the AISI's work evolves.
* Collaborate across government, industry, academia and civil society, including participation in international AI safety initiatives and joint evaluation activities.
* Contribute to technical reports and research outputs.
* Take ownership of building the culture and reputation of the AISI.
Eligibility
* Ability to obtain a Baseline security clearance at minimum, and a higher‑level clearance as required.
* Must be an Australian citizen to be eligible for employment in the APS and the department.
Application Information
* The selection panel may not consider applications containing classified or sensitive information.
* Please provide a pitch (maximum 750 words) explaining how your skills, knowledge and experience will be relevant to this role and why you are the best candidate.
* Complete your application online and provide your current CV in .doc, .docx, or .pdf format.
* Accessible application documentation is available in other formats on request.
* Refer to the 'Applying for a position' information for additional application details.
Contact Information
For more information regarding this opportunity, please contact Bill Black.
Additional Details
The vacancy is Australia‑wide; flexible or remote arrangements may be considered.