Research Scientist in AI Safety
We are seeking a highly motivated Research Scientist to join our team at the University of Oxford.
The successful candidate will contribute to the systematic evaluation of the safety of multi-agent systems powered by large language models (LLMs) and vision-language models (VLMs).
This is an exceptional opportunity to work at the forefront of machine learning research within a world-class academic environment.
About the Role
* We are looking for a candidate with a PhD (or near completion) in Machine Learning or a closely related discipline, preferably with experience in agentic systems powered by LLMs or VLMs.
* The ideal candidate will have demonstrated knowledge of safety evaluation, adversarial attacks, or defensive mechanisms in AI systems, and will be able to apply this expertise to design and execute capability evaluations, attacks, and defences for safe multi-agent systems.
Key Responsibilities
* Developing and implementing methods for evaluating the safety of multi-agent systems.
* Designing and executing attacks on multi-agent systems to identify vulnerabilities.
* Creating defensive mechanisms to mitigate risks associated with multi-agent systems.
Requirements
* PhD (or near completion) in Machine Learning or a closely related field.
* Proven expertise in safety evaluation, adversarial attacks, or defensive mechanisms in AI systems.
* Strong programming skills, preferably in Python.
Benefits
* The post is funded by Toyota Motor Europe, reflecting the high industrial relevance and real-world impact of the research.
* The appointment is full-time and fixed-term, with a competitive salary and benefits package.
The University of Oxford is one of the world's most prestigious academic institutions, located in the historic city of Oxford, United Kingdom. Renowned for its rigorous research environment and innovative contributions across disciplines, Oxford consistently ranks among the top universities globally.