AI Policy Induction
Mandatory Induction · Version 1.0

AI in Clinical Practice

Understanding and applying the organisation's AI Clinical Use Policy — for practitioners, support workers, and clinical supervisors.

8 Sections
~40 minutes
Pass mark: 80%
Completion certificate included

Module Overview

Select any section below to navigate directly. Completed sections are marked in green.

Welcome & Why This Matters

AI tools are increasingly available in clinical and support work contexts. This module explains how the organisation expects you to use them — and why those expectations exist.

"AI tools are decision-support resources only. Practitioners retain full responsibility for all clinical decisions, quality of documentation, and compliance with NDIS, organisational, PBS, and behaviour analytic practice standards. AI-generated output must always be critically reviewed before use."

What this module covers

By the end of this module, you will be able to:

  • Explain the purpose and scope of the AI Clinical Use Policy
  • Identify approved and prohibited uses of AI in your role
  • Apply de-identification requirements before using any AI tool
  • Describe your personal responsibilities and accountability obligations
  • Identify the approved AI tools and how to use them correctly
  • Recognise your transparency and disclosure obligations
  • Know how and when to raise concerns about AI use
Who this applies to

This policy applies to all clinical and support staff — behaviour practitioners, behaviour support practitioners, support workers, and clinical supervisors — across all work contexts: in-person, remote, telehealth, documentation, planning, and professional development.

This policy does not prohibit AI use

AI tools can genuinely help you work more efficiently and effectively. This policy creates a structured, ethical, and accountable framework within which AI tools may support your work — not a blanket prohibition.

Why governance matters here

AI tools present real clinical and privacy risks in a disability support context. These include:

  • Re-identification risk: combining seemingly generic details can identify a participant even without their name
  • AI errors: AI tools produce plausible but sometimes factually wrong, biased, or clinically inappropriate outputs
  • Data sovereignty: consumer-grade AI tools may process and store data outside Australian jurisdiction
  • Professional accountability: submitting AI-generated content without review may constitute a professional conduct breach
  • NDIS obligations: the Commission has specific expectations about how AI may and may not be used in behaviour support

The 8 Key Principles

Every use of AI within the organisation must be guided by eight core principles. These are not aspirational — they are the standard against which your practice will be assessed.

01 · Human Primacy
Practitioners retain full clinical authority. AI output does not replace professional judgement. You remain responsible for every decision you make.
02 · Privacy First
Participant identifying information must be de-identified before being entered into any AI tool. This is mandatory, not optional.
03 · Accuracy & Integrity
All AI-generated content must be critically reviewed and verified before use in clinical documentation or communication. AI can produce plausible but incorrect outputs.
04 · Transparency
Practitioners must disclose AI use to supervisors and, where appropriate, to participants and families. You must be able to say how and where AI was used in your work.
05 · Non-Maleficence
AI must never be used in ways that could harm participants, compromise their dignity, or disadvantage them. This aligns with NDIS and behaviour analytic ethical obligations.
06 · Competence
Practitioners must complete required training before using AI tools in clinical work and must operate within their scope of practice at all times.
07 · Equity
AI use must not introduce bias or create disparate outcomes for participants based on disability, culture, or communication style. Review outputs with a cultural lens.
08 · Auditability
AI use is subject to supervision and quality review, consistent with all other clinical activities. Your supervisor is entitled to ask how AI was used in any piece of work.
The fundamental principle

AI is a tool. The practitioner is the professional. Clinical responsibility remains entirely with the practitioner at all times. No AI tool, however sophisticated, changes this.

✏️ Reflection Prompt 1
Looking at the eight principles, which one will require the most conscious effort to apply consistently in your day-to-day work, and why? What would it look like in practice for you to meet that standard?
Reflection prompts are for personal use only and are not assessed. Your responses appear in your completion certificate so you can discuss them in supervision.

What AI Can Help With

The following uses are approved — subject to de-identification, practitioner review, and transparency requirements being met in every case.

3.1 — Documentation & Report Writing (Approved)
  • Drafting or editing progress notes
  • Improving the clarity, structure, or readability of de-identified text from clinical documentation
  • Proofreading and grammar checking of de-identified text from clinical documents
  • Improving the structure or readability of a practitioner-authored draft report — where the substantive clinical content has already been written by the practitioner
Important limit on report writing

AI must not be used to generate, draft, or produce in full: formal clinical or assessment reports, behaviour support plans, treatment or intervention protocols, clinical recommendations, or NDIS outcomes reporting. The practitioner must substantially author these documents. AI may assist with editing, structure, or language — but the substantive clinical content must come from the practitioner's own assessment and reasoning.

3.2 — Behaviour Analytic Practice Support (Approved)
  • Searching for or summarising research literature (always verify sources independently)
  • Generating ideas for antecedent strategies, reinforcement schedules, or skill-building activities to consider during clinical planning
  • Drafting visual supports, social stories, or instructional scripts for practitioner review and adaptation
  • Structuring data collection frameworks or creating initial templates
  • Assisting with treatment integrity checklists or fidelity guides
3.3 — Capacity Building Program Development (Approved)
  • Drafting task analyses for practitioner review
  • Generating parent or carer training session structures or handout drafts
  • Assisting with plain-language adaptations of de-identified text from clinical content for participants, families, or support workers
3.4 — Professional Learning & Development (Approved)
  • Researching and summarising clinical topics, theoretical frameworks, or emerging practices
  • Exploring case conceptualisation ideas during supervision — using de-identified scenarios only
  • Preparing reflective questions or discussion prompts for supervision
  • Generating study aids, summaries, or practice vignettes for professional development
3.5 — Administrative & Communication Tasks (Approved)
  • Drafting internal communications, meeting agendas, or professional correspondence — non-participant-specific content only, or fully de-identified content
  • Creating structured templates, forms, or checklists not linked to individual participants
📋 Try It — Approved Use Scenarios

For each scenario below, decide whether the described use is approved or prohibited under the policy.

Scenario A
A behaviour support practitioner copies a participant's progress note (with the participant's name and NDIS number visible) into Microsoft Copilot to improve its readability.
Scenario B
A practitioner uses Microsoft Copilot to generate ideas for antecedent strategies for a participant, referring to them only as "the participant" with no identifying details in the prompt.
Scenario C
A practitioner writes the full clinical content of a behaviour support plan themselves, then uses Microsoft Copilot to improve the plan's readability and structure.

What AI Must Never Do

The following uses are strictly prohibited. Breaches may result in disciplinary action, mandatory reporting obligations, and/or referral to relevant professional or regulatory bodies.

Prohibited — Do Not Do These Things

  • Entering identifiable participant information (names, NDIS numbers, dates of birth, addresses, or any other detail that could identify a person) into any AI tool
  • Using AI to generate, draft, or produce in full: formal clinical or assessment reports, behaviour support plans, treatment or intervention protocols, clinical recommendations, or NDIS outcomes reporting
  • Uploading reports, assessments, or correspondence from other providers or agencies to any AI tool, even with identifiers removed
  • Entering photos, videos, voice recordings, or any other biometric data into any AI tool
  • Using non-approved, consumer-grade AI tools (such as ChatGPT or Gemini) for any work involving participant-related content
  • Relying on AI output as the basis for clinical or risk decisions in place of your own professional judgement, supervision, or escalation
  • Misleading participants, families, or supervisors about the nature of clinical work or the extent to which AI tools have been used
🔎 Dive Deeper: The NDIS Commission's Position on AI

The NDIS Quality and Safeguards Commission does not endorse the use of AI for developing or reviewing Behaviour Support Plans. Its position statement (February 2026) makes clear that where AI is used, providers must de-identify all participant information and must not disclose personal information to AI systems.

Plans must remain person-centred, evidence-informed, and developed in genuine consultation with participants, families, and relevant stakeholders. AI cannot substitute for this process.

The Commission's position directly shapes several of the prohibitions in this policy — including the prohibition on using AI to author BSPs and the prohibition on entering participant information into consumer-grade tools.

📋 Try It — Prohibited Use Scenarios

Read each scenario and decide whether it is approved or prohibited.

Scenario D
A support worker feels uncertain about a risk situation and asks Microsoft Copilot what they should do, then follows the AI's advice without contacting their supervisor.
Scenario E
A practitioner uploads a PDF of another provider's OT assessment report into ChatGPT Free to extract key goals, because they want to cross-reference it with their own assessment.
Scenario F
A practitioner uses AI to generate a full first draft of a behaviour support plan, reviews it, and makes some edits before submitting it as their own clinical work.

De-Identification in Practice

De-identification must be completed as a mandatory step before entering any participant-related information into an AI tool. It is not optional and cannot be skipped.

What must be removed or changed
  • Direct Identifiers: Full name, preferred name or nickname, initials, date of birth, NDIS number, Medicare number, address, phone number, email address
  • Geographical Identifiers: Suburb, school name, day program name, specific location of service delivery if identifiable
  • Family / Network Identifiers: Names of parents, carers, siblings, support workers, or any person connected to the participant
  • Health & Diagnostic Information: Named diagnoses linked to the participant, specific medication names (where identifiable), treating clinician names
  • Photographic / Biometric Data: Photos, videos, voice recordings, or any biometric data — these must never be entered into AI tools
  • Financial / Legal Information: Plan funding amounts, plan dates, tribunal references, court orders, or legal proceedings
  • Dates (where identifiable): Dates of assessments, incidents, or episodes of care that — in context — could identify an individual
  • Third-Party Reports: Reports, assessments, or correspondence from other providers or agencies must not be uploaded to any AI tool, even with identifiers removed
What to use instead

Replace identifying details with generic labels that preserve the clinical meaning without identifying the person:

  • "The participant" or "the individual" instead of their name
  • "Parent/carer" or "sibling" instead of family names
  • "School A" or "Day program B" instead of named settings
  • A pseudonym clearly understood to be fictional (e.g. "Alex" with a note that this is a pseudonym)
The Stranger Check

Before submitting any prompt to an AI tool, re-read it as if you were a stranger who doesn't know this participant. Ask: could anyone — including the AI tool's systems — identify who this person is from this information? If yes, remove or change that information before proceeding. If uncertain, consult your clinical supervisor first.

Use the minimum information necessary

Even with de-identification complete, only enter the minimum information necessary to complete the task. Do not include background context, history, or supporting detail that isn't required for the specific AI-assisted task.

🔎 Dive Deeper: Why De-Identified Doesn't Always Mean Safe

Research in data re-identification has shown that combining just three or four seemingly generic data points — such as age, gender, approximate suburb, and diagnosis — can be sufficient to uniquely identify a person in a small population. In the disability support sector, where participant cohorts may be small and well-known to their communities, this risk is particularly acute.

This is why the policy requires removing geographical identifiers, dates, and diagnostic labels, not just names and numbers. The combination of remaining details is what creates re-identification risk — not any single data point in isolation.

The OAIC's guidance on AI and personal information (2024) specifically addresses this, noting that organisations must consider not just whether information has been de-identified, but whether it could be re-identified when combined with other available data.

✏️ Reflection Prompt 2
Think about the last time you wrote a clinical document that you might want AI to help improve. Walk yourself through the de-identification process for that document: what would you need to remove or change, and what would be left? Would the remaining information still be clinically useful for your purpose?
Reflection prompts are for personal use only and are not assessed. Your response appears in your completion certificate so you can discuss it in supervision.

Your Responsibilities

Using an AI tool does not transfer, reduce, or share any part of your professional or ethical responsibility for the quality, accuracy, and appropriateness of your clinical work.

Core accountability statement

All clinical decisions — including behaviour support strategies, capacity building goals, intervention modifications, and risk assessments — remain the sole responsibility of the practitioner. AI may generate ideas, drafts, or summaries to inform thinking, but must not be relied upon as the basis for clinical decisions. You must be able to articulate the clinical reasoning behind every decision you make, independent of any AI assistance.

Before you use any AI output in your work, it must be:
  • Read in full by you
  • Critically evaluated for clinical accuracy, factual correctness, and suitability for the individual participant
  • Edited to reflect your own professional judgement and knowledge of the participant
  • Checked for bias, stereotype, or culturally inappropriate content
  • Verified against primary sources where factual claims (research citations, regulatory requirements) are included
Transparency obligations
  • You must be prepared to disclose which AI tools you used in the preparation of any clinical documentation when asked by a supervisor or reviewer
  • Supervisors are entitled to ask you to demonstrate or describe how AI was used in any piece of work
  • Participants and their authorised representatives have the right to request information about how their data is processed
  • Participants and families must not be misled about the nature of clinical work or the extent to which AI tools have been used
Reporting obligations
  • Any breach of participant confidentiality — including accidental entry of identifying data into an AI tool — must be reported immediately to your supervisor
  • An incident report must be filed for any such breach
  • If you identify a potential policy breach by a colleague, raise it with your supervisor
For supervisors — your specific obligations
  • Model responsible AI use and articulate clear professional expectations
  • Include AI use as a standing agenda item in supervision (quarterly minimum)
  • Review AI-assisted documentation and verify it accurately reflects the participant's presentation and the practitioner's own clinical reasoning
  • Address signs of over-reliance on AI: lack of clinical specificity, inability to articulate reasoning independently, minimal evidence of critical review
  • Escalate systemic concerns or policy breaches to the National Clinical Director
  • For deliberate or repeated non-compliance: escalate to the HR manager and National Clinical Director for formal performance management consideration
Raising concerns

If you are uncertain about a particular use of AI, identify a potential policy breach, or have concerns about AI outputs:

  • Step 1: Raise the concern with your clinical supervisor at the earliest opportunity
  • Step 2: Consult the National Clinical Director or Practice Lead where the concern relates to clinical practice or participant safety. Consult the HR manager where the concern relates to conduct or deliberate non-compliance.
  • Step 3: Document the concern if it relates to a potential privacy breach

No practitioner will be disadvantaged for raising a genuine concern in good faith.

Approved Tools & How to Use Them

Only AI tools reviewed and approved by the organisation may be used for work involving participant-related content — even if it has been de-identified. Use of non-approved tools for clinical work is a policy breach.

Approved tools
  • Microsoft Copilot for M365: Document drafting, summarisation, and grammar checking within the organisational M365 tenant. Data is processed within the organisational data boundary. Full practitioner review required.
  • Microsoft Dictate for M365: Voice dictation (speech-to-text) for drafting documents within the organisational M365 tenant. Data is processed within the organisational data boundary. Full practitioner review of the dictated text required.

Note: This list is subject to update. Always check with your Practice Lead if you are uncertain whether a tool is approved.

Adding new tools

Practitioners wishing to use an AI tool not listed above must submit a request to their Practice Lead and the Lizard Product Manager for review prior to use. Approval will consider data sovereignty, privacy compliance, security, and clinical appropriateness.

When using any approved tool, you must:
  • Log in using your organisational credentials — not a personal account
  • Use only organisationally approved devices and networks, or an approved VPN (Lizard staff may have a personal device approved by submitting a request to their Practice Lead)
  • Not enable features that store conversation history in consumer cloud accounts
  • Log out when finished
  • Report any unexpected data retention or privacy concerns immediately to your supervisor and/or the Privacy Officer
Teams, Lizard systems & personal devices

Use of Microsoft Teams (e.g. for transcribing, recording, generating meeting recaps or minutes) and organisational systems such as Lizard on a personal device must comply with organisational device and data security policies. Before using any transcription or AI recap feature on a personal device, confirm with your supervisor that your device is approved and the feature has been authorised.

🔎 Dive Deeper: Why Consumer AI Tools Are Not Approved

Consumer AI products like ChatGPT, Gemini, and similar tools — regardless of whether you have a paid subscription — are not approved for clinical work. The key reasons are:

  • Data sovereignty: these tools may process and store data on servers outside Australia, outside the control of your organisation
  • Training data: some consumer tools use conversation data to improve their models, which could expose participant information even if de-identified
  • No organisational data boundary: unlike M365 enterprise tools, consumer tools do not operate within a controlled organisational data environment
  • No audit trail: the organisation has no visibility into what you enter or receive from consumer tools

The distinction that matters is not whether you're paying for the tool — it is whether the tool has been assessed and approved for your intended use, with appropriate data handling guarantees in place.

Knowledge Check

10 questions drawn from all sections. Select the best answer for each question. A score of 8/10 (80%) is required to unlock the acknowledgement step. Feedback is provided after each answer.

Staff Acknowledgement

Please complete all fields below to confirm you have read, understood, and agree to comply with the AI Clinical Use Policy.

I, the undersigned, confirm that I have read and understood the AI Clinical Use Policy. I agree to comply with its requirements, including de-identification obligations, approved tool use, critical review of AI output, transparency with my supervisor, and my ongoing responsibility for all clinical decisions and documentation I produce or contribute to.
I have completed all sections of this module and understand the AI Clinical Use Policy.
I understand that I retain full professional responsibility for all AI-assisted work, including clinical decisions and documentation.
I will de-identify all participant information before entering it into any AI tool, and will use only approved tools on approved devices.
I understand that breaches of this policy may result in disciplinary action and/or referral to relevant professional or regulatory bodies.

Download the certificate and email it to your supervisor ahead of your next supervision meeting.

Module Complete

You have completed the AI in Clinical Practice induction module. Download your completion certificate below and email it to your supervisor to discuss your reflection responses at your next supervision meeting.

Certificate Preview

The certificate records your name, position, supervisor, and assessment score, together with your responses to Reflection 1 (Principles in Practice) and Reflection 2 (De-Identification in Practice).

Email the downloaded PDF to your supervisor ahead of your next supervision meeting.

AI Practice Series · Policy Induction Module · Version 1.0