Head of AI Safety Research and Testing – AI Safety Institute, Science

Adelaide
Department of Industry, Science and Resources
Posted: 8 January
Offer description

* Canberra, Sydney, Darwin, Brisbane, Adelaide, Hobart, Melbourne, Perth
* Applications close 18 January 2026
* The salary range for this role is $179,608 to $199,950

About the department
The Department of Industry, Science and Resources and our broader portfolio are integral to the Australian Government's economic agenda. Our purpose is to help the government build a better future for all Australians through enabling a productive, resilient and sustainable economy, enriched by science and technology. We do this by:

* Growing innovative and competitive businesses, industries and regions
* Investing in science and technology
* Strengthening the resources sector.

The APS and the department offer a clear direction and meaningful work. You will be able to create a positive impact in people's lives while contributing to improved outcomes for Australia and our people.

If you would like to feel a strong connection to your work and you are accountable, committed and open to change, join us in shaping Australia's future.

Please see the APSC's APS Employee Value Proposition for more information on the benefits and value of employment within the APS.

About the team
About the AI Safety Institute
The Australian Government is establishing an Australian AI Safety Institute (AISI) to support the Government's ongoing response to emerging risks and harms associated with AI technologies. The AISI will be the government's hub of AI safety expertise, operating with transparency, responsiveness and technical rigour. The AISI will conduct technical assessments, support coordinated government action, foster international engagement on AI safety, and publish research to inform industry, academia and the Australian people.

About the division
The AISI is part of the department's Technology and Digital Policy Division. The division is responsible for providing policy advice to government, delivering programs and engaging domestically and internationally on enabling and critical technologies as well as the digitisation of the economy.

The division's priorities include implementing the National AI Plan; providing advice on the safe and responsible use of AI, robotics and automation; the role of critical technologies in supporting economic security; and data policy and emerging digital economy issues.

The opportunity

We're building a motivated and capable team that will define the AISI's future. As a founding member of the team, you will help shape how Australia monitors, tests and governs AI. You will assess risks from frontier models, including CBRN misuse, enhanced cyber capabilities, loss-of-control scenarios, information integrity and influence risks, and broader systemic risks arising from the deployment of increasingly capable general-purpose AI systems. This is a unique opportunity to work at the frontier of AI, collaborate with domestic and international experts to shape emerging global AI safety standards, and help keep Australians safe from AI-related risks and harms.

You'll have the opportunity to drive positive change, contribute to impactful projects, and develop your expertise in a rapidly evolving field.

Our ideal candidate

We are seeking a dynamic and strategic leader with deep technical AI safety expertise and experience shaping and delivering complex AI safety research programs.

Our ideal candidate for this role would have:

* Demonstrated experience designing and delivering a program of AI safety research.
* Demonstrated experience leading empirical AI research on frontier AI systems and safety-relevant behaviours. This could include work on model evaluation, adversarial testing, safety tuning, interpretability, robustness, agentic behaviour or human influence.
* Demonstrated experience designing safety-relevant evaluations for frontier AI models, including assessments of model behaviour, reliability, robustness and other risk-relevant capabilities.
* A track record of rigorous research contributions. This could include peer-reviewed publications, conference papers, high-quality preprints or equivalent research outputs.
* A track record of successfully leading ambitious, multidisciplinary AI research teams.
* Demonstrated experience building effective partnerships across government, industry, academia or civil society.
* Experience leading international research collaborations or standards development.
* A demonstrated ability to manage competing priorities, deliver complex projects and thrive in a fast-paced, constantly changing environment.
* A demonstrated ability to communicate complex ideas to diverse audiences.
* A deep understanding of frontier AI risks and mitigation strategies.

We expect these skills will be held by people with 5+ years of rigorous empirical research experience, typically in machine learning, data science, computer science or related quantitative fields, or through relevant empirical work in adjacent disciplines including applied statistics, psychometrics, behavioural science, cognitive science, human-computer interaction, cybersecurity research or systems engineering.

Our department has a commitment to inclusion and diversity, with an ambition of being the best possible place to work. This reflects the importance we place on our people and on creating a workplace culture where every one of us is valued and respected for our contribution. Our ideal candidate adds to this culture and our workplace in their own way.

What you will do

As the Head of AI Safety Research and Testing, you will:

* Drive the AISI's technical work, designing and leading the day-to-day delivery of a program of testing and research.
* Lead the design of empirical methods to assess the safety of frontier AI models and systems.
* Oversee the analysis and interpretation of results from evaluations to identify safety-relevant behaviours and generate clear findings and insights.
* Represent Australia in international AI safety engagements.
* Lead Australia's technical contributions to international AI safety collaborations, including joint testing exercises.
* Build and maintain high-trust relationships with domestic and international stakeholders across governments, industry, civil society and academia to strengthen the science and practice of AI safety.
* Provide strategic advice to senior government officials, policymakers and regulators on emerging AI capabilities, risks and harms.
* Manage the development of research publications and technical reports.
* Contribute to setting the strategic direction of the AISI.
* Take ownership in building the culture and reputation of the AISI.

Eligibility

Positions require the ability to obtain a Baseline security clearance at a minimum, and the ability to obtain a higher security clearance if required.

To be eligible for employment in the APS and the department, candidates must be Australian Citizens.

Notes

The successful candidate will be engaged on an initial 12-month contract, with the possibility of extension. Salaries range up to $199,950 for an Executive Level 2 position.

Please provide a CV of up to 2 pages (or up to 4 pages if including a list of publications).

A Bachelor's degree or equivalent qualification in a relevant field of study is highly desirable.

A merit pool may be established and used to fill future vacancies within 18 months from the date the vacancy was first advertised in the Gazette.

The department currently offers flexible work opportunities for many roles. This vacancy can be based anywhere in Australia, and flexible or remote work arrangements may be considered. Please reach out to the contact officer to discuss this further.

Application information

Your application must not contain any classified or sensitive information. This includes in your application responses, CV and any other documents. The selection panel may not consider applications containing classified information.

Please provide a pitch explaining how your skills, knowledge and experience will be relevant to this role and why you are the best candidate for the position. Your pitch can contain no more than 750 words and should align to the key duties listed above.

Please complete your application online and provide your current CV with your application. (CVs must be in .doc, .docx, or .pdf format).

Accessible application documentation is available in other formats on request. Please contact the contact officer if you require assistance with your application.

Please refer to our Applying for a position information for further guidance on how to apply.

Contact information

For more information regarding this opportunity, please contact Bill Black.
