
Melbourne
Monash University
IT
Posted: 26 February
Offer description

PhD Scholarship in Bias Propagation in Agentic and Generative AI-Driven Decision Systems and Its Implications for Trust, Governance, and Regulatory Compliance

Job No.: 680086

Location: Caulfield campus (with possible collaboration activities at Clayton campus)

Employment Type: Full-time

Duration: 3-year fixed-term appointment (subject to satisfactory progress)

Remuneration: The successful applicant will receive a tax-free stipend at the current 2026 full-time rate of $37,145 per annum, as per the Monash Research Training Program (RTP) Stipend: www.monash.edu/study/fees-scholarships/scholarships/find-a-scholarship/research-training-program-scholarship#scholarship-details

This opportunity invites applications from outstanding domestic and international candidates who are interested in undertaking a PhD focused on bias propagation in agentic and generative AI-driven decision systems and its implications for trust, governance, and regulatory compliance. The project is part of an interdisciplinary research team led by experts from the Opportunity Tech Lab within the Monash Business School and the Faculty of IT at Monash University.

The PhD candidate will be working with a team of distinguished researchers:

* Professor Charmine Härtel (Department of Management, Monash Business School; Director, Opportunity Tech Lab)
* Professor Kristian Rotaru (Department of Accounting, Monash Business School; Associate Director, Opportunity Tech Lab; Chair of the Steering Committee, Monash Business Behavioural Laboratory)
* Dr Mor Vered (Department of Data Science & AI, Faculty of IT)
* Dr Estelle Wallingford (Department of Business Law & Taxation, Monash Business School; Opportunity Tech Lab)

The Opportunity

This project addresses a pressing public policy and social issue: the propagation of bias in agentic and generative AI systems and its impact on human decision‐making, trust, and regulatory design. The rapid evolution of AI, from generative models that produce text and recommendations to agentic AI systems that autonomously plan, act, and make decisions with limited human oversight, has transformed how critical choices are made across high‐stakes domains including financial markets, healthcare, and legal practice. These systems' capacity to produce biased, yet seemingly neutral, outputs poses a significant and growing risk to fairness, accountability, and public trust.

A core focus of the project is understanding how AI‐generated explanations and autonomous AI actions influence trust formation, cognitive effort, and user behaviour, including the risk of over‐reliance (automation bias) or inappropriate scepticism when AI outputs are misleading, hallucinated, or generated through opaque multi‐step reasoning. As agentic AI systems increasingly operate across organisational processes with minimal human intervention, understanding how bias propagates through chains of autonomous decisions becomes essential.

The candidate will contribute to the development of empirically validated methods for identifying, measuring, and mitigating bias propagation effects. The research will involve experimental studies utilising the advanced neurophysiological infrastructure of the Monash Business Behavioural Laboratory (MBBL) – including EEG, eye‐tracking, pupillometry, fNIRS, and psychophysiological assessment – alongside cognitive modelling and regulatory analysis. This combination of cutting‐edge neuroscience methods with legal and governance scholarship is a distinctive feature of the project.

The project is situated within a dynamic and rapidly evolving regulatory landscape. In Australia, the Privacy Act reforms introducing new automated decision‐making transparency obligations take effect in December 2026, the Australian AI Safety Institute becomes operational in early 2026, and ongoing policy development under the National AI Plan (2025) signals increasing regulatory attention to high‐risk AI applications. Internationally, the EU AI Act is moving into enforcement, and jurisdictions worldwide are grappling with how existing legal frameworks apply to autonomous AI systems. The PhD candidate will have the opportunity to contribute to this critical policy discourse through empirically grounded research.

This doctoral project will conduct experimental research on generative and agentic AI systems, with a focus on the experiences and risks faced by vulnerable populations. The research program is designed to produce empirically grounded insights that inform harm mitigation strategies, transparency mechanisms, and responsible deployment protocols relevant to both policymakers and organisational decision‐makers. Rather than pursuing broad ethical theorising, the project emphasises causal evidence on how AI design and deployment choices shape downstream social and economic outcomes for populations with asymmetric power, information, or risk exposure. We are looking for a researcher who wants to produce empirical evidence that informs real policy and organisational decisions, rather than work that is mainly philosophical or purely technical.

Essential Skills and Experience

* A background in a relevant field such as behavioural science, cognitive science, data science, psychology, human‐computer interaction, law, or a related discipline
* Demonstrated experience in empirical research (quantitative, qualitative, or mixed methods)
* Strong written communication skills
* A clear interest in interdisciplinary research on AI, decision‐making, governance and ethics

Desirable Skills

* Experience with experimental design and behavioural data collection
* Familiarity with generative or agentic AI systems, explainability (XAI), or algorithmic fairness
* Skills in statistical analysis and/or coding (e.g., R, Python, C++)
* Exposure to neurophysiological measurement methods (e.g., EEG, eye‐tracking, pupillometry, fNIRS)
* Interest or training in technology law, digital regulation, or AI ethics
* An ability to engage in legal research, including familiarity with legislation, regulations and case law (advantageous but not essential)

To Apply

This position has a two‐stage selection process.

Stage 1: Expression of Interest

To apply, please submit an Expression of Interest (EOI) via email to Professor Kristian Rotaru (kristian.rotaru@monash.edu). Please use the following subject line: EOI – PhD Scholarship – AI Bias – [Your Full Name].

Your EOI must include the following:

* Curriculum Vitae (CV), including details of academic qualifications, research experience, publications (if any), and relevant professional experience
* Certified copies of academic transcripts from all tertiary qualifications
* Evidence of English language proficiency (where applicable, e.g., IELTS, TOEFL, or equivalent)
* A brief research proposal (maximum 3 pages) that clearly aligns with the project's focus on bias in agentic and generative AI‐driven decision systems. The proposal should outline your interest in the topic, specify the theoretical and methodological approaches you are considering, and describe how your academic background and prior experience equip you to contribute to the project's objectives. Where possible, please highlight relevant skills – such as experimental design, data analysis, neurophysiological methods, or legal/regulatory analysis – that can be directly cross‐referenced against the project's interdisciplinary goals. The proposal should also reflect your enthusiasm for working at the intersection of business, IT, cognitive science, and law.

Incomplete applications (e.g., EOIs submitted without a CV or academic transcripts) will not be considered.

Stage 2: Interview and Formal Application

Candidates shortlisted from the EOI stage will be invited to discuss their ideas via Zoom prior to submitting a formal PhD application to the Faculty of Business and Economics. The successful candidate will enrol in an interdisciplinary cross‐faculty project, with the PhD degree to be awarded by the Faculty of Business and Economics upon completion of the project and the Monash doctoral requirements.

Enquiries

Professor Kristian Rotaru, kristian.rotaru@monash.edu, +61 3 9903 4567

Applications Close

Friday 17 April 2026, 11:55pm AEST

Supporting a diverse workforce

For the full position description and instructions on how to apply for Monash jobs, click the "Apply" button above.
