Artificial intelligence and psychiatry
For the medical profession, the use of artificial intelligence (AI) is still an emerging area. Regulators and medical defence organisations emphasise that doctors remain legally and ethically accountable for decisions informed by AI tools.
Other professions, including law, accounting and financial services, are also grappling with liability and governance questions; their professional bodies warn that AI must not replace expert judgement and that misuse can carry professional or legal consequences.
AI is available in numerous forms (e.g. generative AI, machine learning, natural language processing and computer vision) and has the capacity to support clinicians in managing their business and clinical workloads.
The Productivity Commission’s report Leveraging Digital Technology in Healthcare highlighted that up to 30 per cent of a clinician's time could be refocused on consumer care by automating low-complexity tasks.
The Royal Australian and New Zealand College of Psychiatrists (RANZCP) is committed to supporting its members and consumers in understanding their rights and obligations regarding the use of AI by healthcare professionals.
A clinician's professional obligations
As specialist medical practitioners, psychiatrists must adhere to multiple guidelines and codes of conduct during their practice, including:
- The Medical Board of Australia’s Code of Conduct
- Aotearoa New Zealand Health Information Privacy Code 2020
- The Medical Council of New Zealand’s (MCNZ) policy on patient records
- RANZCP Code of Ethics
- RANZCP Code of Conduct
- The World Psychiatric Association (WPA) Code of Ethics
- any other relevant state, federal or national legislation.
Clinicians are obligated to handle patient data, including data that interacts with AI tools, in line with the relevant local, state and national legislation in Australia and Aotearoa New Zealand. It is the responsibility of practising psychiatrists to ensure their use of AI does not breach any professional or legal obligation, including in relation to the storage of patient data, notes and any saved transcripts.
Case examples – using AI in practice
The following case examples illustrate a range of scenarios in which clinicians may use AI tools. Each example depicts either good practice or malpractice using a fictional AI tool, Psyche-AI.
The case examples should be used alongside ethical and practice guidelines to help clinicians understand AI use in their practice.
1) Augmenting formulation and care planning
Dr. L uses Psyche-AI to rapidly summarise a patient’s longitudinal notes, extract key diagnostic features, and draft a biopsychosocial formulation.
- How it’s used: Dr. L reviews and edits Psyche-AI’s draft, adds clinical nuance, and discusses the plan with the patient.
- Why it works: The psychiatrist remains the decision-maker, while the AI saves time and reduces clerical burden.
- Lesson: Safe when AI is a supportive tool, not a replacement for clinical judgement.
2) Overreliance in crisis response
Dr. K, managing after-hours calls, asks Psyche-AI for guidance on how to respond to a patient expressing suicidal thoughts. They copy the AI’s suggested wording directly into a text reply without reviewing it. The reply does not escalate the situation to emergency services, and the patient later deteriorates.
- Problem: The psychiatrist delegated a critical safety decision to AI.
- Lesson: AI may assist with phrasing, but crisis care demands direct clinician oversight and established protocols.
3) Bias-aware prescribing decision support
Dr. M queries Psyche-AI about medication options for ADHD in a patient with comorbid anxiety. The AI produces a ranked list with citations. Dr. M cross-checks against guidelines, considers the patient’s personal history, and documents the rationale.
- Outcome: A tailored prescription choice that balances evidence and patient context.
- Lesson: AI can speed evidence retrieval, but only when psychiatrists critically appraise and contextualise the output.
4) Documentation shortcut without verification
Dr. S uses Psyche-AI to auto-generate psychotherapy progress notes after each session. They paste the text directly into the electronic medical record without review. Later, a patient requests their records and disputes inaccurate content.
- Problem: The psychiatrist failed to verify, creating medico-legal exposure.
- Lesson: AI-generated notes must be edited and verified; the clinician is legally responsible for accuracy.
5) Population-level insights for service planning
A community psychiatrist, Dr. T, feeds anonymised caseload data into Psyche-AI to identify service gaps (e.g., long waits for youth ADHD assessments). Dr. T presents AI-generated trend analysis to the local hospital board, supplementing it with lived-experience input.
- Outcome: Informed advocacy for new youth services.
- Lesson: AI can empower psychiatrists to take a systems-level view, but only with strict anonymisation, governance, and human interpretation.
Guiding principles
The following principles can guide AI use in your practice:
- Keep human oversight: Treat AI as decision support. Do not delegate clinical judgement. Record your independent reasoning.
- Inform patients: Disclose AI use and material risks in plain language. Note the tool’s limits and what a clinician reviewed.
- Verify outputs: Cross-check AI findings against source data and guidelines. Correct the record if it is wrong.
- Lock in governance: Approve an AI policy, risk register and procurement checklist aligned to AHPRA and TGA rules. Include bias, safety and data-handling controls.
- Choose regulated tools: Prefer TGA-listed software as a medical device (SaMD) with change control and monitoring. Keep an inventory of the AI systems in use.
- Train and audit: Educate prescribers and reviewers. Audit telehealth and AI-supported decisions for adherence to codes and documentation standards.
Related resources
The following resources have been reviewed by the RANZCP and are recommended to help psychiatrists understand their obligations when using AI tools in their practice:
- AHPRA’s Meeting your professional obligations when using artificial intelligence in health webpage
- Te Whatu Ora’s Generative AI and Large Language Models webpage
- Medical Council of New Zealand’s Artificial Intelligence Guidelines
Medical defence organisations, governmental organisations and peak bodies have developed an array of helpful resources that can inform and support clinicians’ understanding of how to use AI tools safely, ethically and strategically within their practice:
- Avant’s Artificial intelligence for medical documentation
- Australian Commission on Safety and Quality in Health Care’s AI Clinical Use Guide
- Therapeutic Goods Administration’s Artificial intelligence chat, text, and language
- Medical Indemnity Protection Society Using generative AI to respond to patient complaints
For any questions regarding AI use and psychiatry, please contact policy@ranzcp.org.
RANZCP governance of artificial intelligence
The RANZCP Committee for Professional Practice (the CPP) oversees policy and governance work within the College relating to the use of AI by psychiatrists.
Disclaimer
This webpage provides general information about AI use in psychiatry and is intended for educational purposes only; it does not constitute medical, professional or legal advice. Clinicians have a responsibility to ensure their practice is safe and ethical, and that they provide their best standard of practice.