Understanding and applying the organisation's AI Clinical Use Policy — for practitioners, support workers, and clinical supervisors.
AI tools are increasingly available in clinical and support work contexts. This module explains how the organisation expects you to use them — and why those expectations exist.
"AI tools are decision-support resources only. Practitioners retain full responsibility for all clinical decisions, quality of documentation, and compliance with NDIS, organisational, PBS, and behaviour analytic practice standards. AI-generated output must always be critically reviewed before use."
By the end of this module, you will be able to:

- Explain the core principles that govern AI use within the organisation
- Distinguish approved from prohibited uses of AI in clinical and support work
- Apply the mandatory de-identification steps before entering any participant-related information into an AI tool
- Identify which AI tools are approved for clinical work and the conditions attached to their use
- Raise concerns or report a potential policy breach through the appropriate channels
This policy applies to all clinical and support staff — behaviour practitioners, behaviour support practitioners, support workers, and clinical supervisors — across all work contexts: in-person, remote, telehealth, documentation, planning, and professional development.
AI tools can genuinely help you work more efficiently and effectively. This policy creates a structured, ethical, and accountable framework within which AI tools may support your work — not a blanket prohibition.
AI tools present real clinical and privacy risks in a disability support context. These include:

- Disclosure of participants' personal or health information to systems outside the organisation's control
- Re-identification of participants from combinations of seemingly generic details
- Inaccurate or fabricated AI output being carried into clinical documentation or decisions
- Erosion of person-centred, consultative practice if AI output substitutes for professional judgement
Every use of AI within the organisation must be guided by eight core principles. These are not aspirational — they are the standard against which your practice will be assessed.
AI is a tool. The practitioner is the professional. Clinical responsibility remains entirely with the practitioner at all times. No AI tool, however sophisticated, changes this.
The following uses are approved — subject to de-identification, practitioner review, and transparency requirements being met in every case.
AI must not be used to generate, draft, or produce in full: formal clinical or assessment reports, behaviour support plans, treatment or intervention protocols, clinical recommendations, or NDIS outcomes reporting. The practitioner must substantially author these documents. AI may assist with editing, structure, or language — but the substantive clinical content must come from the practitioner's own assessment and reasoning.
For each scenario below, decide whether the described use is approved or prohibited under the policy.
The following uses are strictly prohibited. Breaches may result in disciplinary action, mandatory reporting obligations, and/or referral to relevant professional or regulatory bodies.
The NDIS Quality and Safeguards Commission does not endorse the use of AI for developing or reviewing Behaviour Support Plans. Their position statement (February 2026) makes clear that where AI is used, providers must de-identify all participant information and must not disclose personal information to AI systems.
Plans must remain person-centred, evidence-informed, and developed in genuine consultation with participants, families, and relevant stakeholders. AI cannot substitute for this process.
The Commission's position directly shapes several of the prohibitions in this policy — including the prohibition on using AI to author BSPs and the prohibition on entering participant information into consumer-grade tools.
Read each scenario and decide whether it is approved or prohibited.
De-identification must be completed as a mandatory step before entering any participant-related information into an AI tool. It is not optional and cannot be skipped.
| Category | Information to Remove or Replace |
|---|---|
| Direct Identifiers | Full name, preferred name or nickname, initials, date of birth, NDIS number, Medicare number, address, phone number, email address |
| Geographical Identifiers | Suburb, school name, day program name, specific location of service delivery if identifiable |
| Family / Network Identifiers | Names of parents, carers, siblings, support workers, or any person connected to the participant |
| Health & Diagnostic Information | Named diagnoses linked to the participant, specific medication names (where identifiable), treating clinician names |
| Photographic / Biometric Data | Photos, videos, voice recordings, or any biometric data — these must never be entered into AI tools |
| Financial / Legal Information | Plan funding amounts, plan dates, tribunal references, court orders, or legal proceedings |
| Dates (where identifiable) | Dates of assessments, incidents, or episodes of care that — in context — could identify an individual |
| Third-Party Reports | Reports, assessments, or correspondence from other providers or agencies must not be uploaded to any AI tool, even with identifiers removed |
Replace identifying details with generic labels that preserve the clinical meaning without identifying the person. For example:

- A participant's name becomes "the participant"
- A parent's or carer's name becomes "[parent]" or "[carer]"
- A school or day program name becomes "[school]" or "[day program]"
- A specific medication name becomes "[prescribed medication]"
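As a purely illustrative sketch of this substitution pattern (not an approved tool, and no substitute for the pre-submission check described below), the Python snippet here replaces hypothetical identifiers with generic labels. Every name, school, and medication in it is invented:

```python
# Illustrative only: the kind of generic-label substitution the policy describes.
# All identifiers below are invented. A real de-identification pass must be
# performed and manually checked by the practitioner before any prompt is sent.

replacements = {
    "Jordan Smith": "the participant",        # hypothetical participant name
    "Jordan": "the participant",
    "Hillcrest Primary School": "[school]",   # hypothetical school name
    "Maria": "[parent]",                      # hypothetical carer name
    "risperidone": "[prescribed medication]", # hypothetical medication
}

def apply_generic_labels(text: str) -> str:
    """Replace known identifiers with generic labels, longest match first."""
    for identifier in sorted(replacements, key=len, reverse=True):
        text = text.replace(identifier, replacements[identifier])
    return text

note = ("Jordan became distressed when Maria arrived to collect him "
        "from Hillcrest Primary School.")
print(apply_generic_labels(note))
# -> the participant became distressed when [parent] arrived to collect him
#    from [school].
```

Note that simple string matching misses nicknames, misspellings, and contextual clues, which is one reason the manual re-read in the next step is mandatory.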
Before submitting any prompt to an AI tool, re-read it as if you were a stranger who doesn't know this participant. Ask: could anyone — including the AI tool's systems — identify who this person is from this information? If yes, remove or change that information before proceeding. If uncertain, consult your clinical supervisor first.
Even with de-identification complete, only enter the minimum information necessary to complete the task. Do not include background context, history, or supporting detail that isn't required for the specific AI-assisted task.
Research in data re-identification has shown that combining just three or four seemingly generic data points — such as age, gender, approximate suburb, and diagnosis — can be sufficient to uniquely identify a person in a small population. In the disability support sector, where participant cohorts may be small and well-known to their communities, this risk is particularly acute.
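A rough back-of-envelope calculation shows how quickly these filters narrow a population. Every figure below is an assumption chosen for illustration, not a sector statistic:

```python
# Back-of-envelope illustration of re-identification risk.
# All figures are assumptions for illustration only.

population = 5_000    # people in a service catchment area (assumed)
p_age_band = 0.10     # share within a given five-year age band (assumed)
p_gender = 0.50       # share of one gender (assumed)
p_diagnosis = 0.01    # prevalence of a specific diagnosis (assumed)

expected_matches = population * p_age_band * p_gender * p_diagnosis
print(f"Expected people matching all three attributes: {expected_matches:.1f}")
# -> 2.5. In a cohort that small, one further detail (a date, a day program)
#    is often enough to single out exactly one person.
```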
This is why the policy requires removing geographical identifiers, dates, and diagnostic labels, not just names and numbers. The combination of remaining details is what creates re-identification risk — not any single data point in isolation.
The OAIC's guidance on AI and personal information (2024) specifically addresses this, noting that organisations must consider not just whether information has been de-identified, but whether it could be re-identified when combined with other available data.
Using an AI tool does not transfer, reduce, or share any part of your professional or ethical responsibility for the quality, accuracy, and appropriateness of your clinical work.
All clinical decisions — including behaviour support strategies, capacity building goals, intervention modifications, and risk assessments — remain the sole responsibility of the practitioner. AI may generate ideas, drafts, or summaries to inform thinking, but must not be relied upon as the basis for clinical decisions. You must be able to articulate the clinical reasoning behind every decision you make, independent of any AI assistance.
If you are uncertain about a particular use of AI, identify a potential policy breach, or have concerns about AI outputs, raise the matter with your clinical supervisor or Practice Lead as soon as practicable.
No practitioner will be disadvantaged for raising a genuine concern in good faith.
Only AI tools reviewed and approved by the organisation may be used for work involving participant-related content, even where that content has been de-identified. Use of non-approved tools for clinical work is a policy breach.
| Tool / Platform | Approved Use & Conditions |
|---|---|
| Microsoft Copilot for M365 | Document drafting, summarisation, and grammar checking within the organisational M365 tenant. Data is processed within the organisational data boundary. Full practitioner review required. |
| Microsoft Dictate for M365 | Voice-to-text dictation within the organisational M365 tenant. Data is processed within the organisational data boundary. Full practitioner review of the transcribed text required. |
Note: This list is subject to update. Always check with your Practice Lead if you are uncertain whether a tool is approved.
Practitioners wishing to use an AI tool not listed above must submit a request to their Practice Lead and the Lizard Product Manager for review prior to use. Approval will consider data sovereignty, privacy compliance, security, and clinical appropriateness.
Use of Microsoft Teams (e.g. for transcribing, recording, generating meeting recaps or minutes) and organisational systems such as Lizard on a personal device must comply with organisational device and data security policies. Before using any transcription or AI recap feature on a personal device, confirm with your supervisor that your device is approved and the feature has been authorised.
Consumer AI products like ChatGPT, Gemini, and similar tools — regardless of whether you have a paid subscription — are not approved for clinical work. The key reasons are:

- Prompts and uploaded content may be stored, reviewed, or used for model training outside the organisation's control
- There is no organisational data boundary or contractual data handling guarantee
- These tools have not been assessed against the organisation's privacy, security, and data sovereignty requirements
The distinction that matters is not whether you're paying for the tool — it is whether the tool has been assessed and approved for your intended use, with appropriate data handling guarantees in place.
10 questions drawn from all sections. Select the best answer for each question. A score of 8/10 (80%) is required to unlock the acknowledgement step. Feedback is provided after each answer.
Please complete all fields below to confirm you have read, understood, and agree to comply with the AI Clinical Use Policy.
You have completed the AI in Clinical Practice induction module. Download your completion certificate below and email the PDF to your supervisor ahead of your next supervision meeting, where you will discuss your reflection responses.
AI Practice Series · Policy Induction Module · Version 1.0