But the reality is mixed. Some AI tools can be genuinely helpful when they are well-designed, evidence-informed, and used responsibly. Others can introduce new risks: inaccurate outputs, bias, privacy concerns, and over-reliance on automated recommendations. This post shares a straightforward, teacher-friendly overview of what AI can do to support learners with SEN, along with practical guardrails for using it safely and effectively.
What “AI” means in classroom tools
In education, “AI” is a broad label. It can refer to systems that:
- provide recommendations (for example, suggesting the next activity or level),
- offer adaptive practice (adjusting difficulty based on learner responses),
- support accessibility (speech-to-text, text-to-speech, captioning, translation),
- or generate new content (generative AI, such as tools that create text, questions, or summaries).
For SEN support, it helps to think less about the “AI” label and more about the teaching purpose: what barrier are you trying to reduce, and what learning goal are you trying to progress?
Where AI can help in SEN teaching (practical examples)
Across different needs and contexts, AI tools tend to be most useful when they support one of these aims:
1) Improving access and participation
Accessibility features can remove barriers that prevent students from showing what they know. Depending on the learner, this might include:
- Text-to-speech to reduce decoding load and support comprehension,
- Speech-to-text to reduce transcription barriers,
- Captioning and audio supports,
- Alternative formats (simplified layouts, adjustable fonts, spacing, contrast, and reading modes).
These supports are often most effective when used as part of an agreed plan (for example, alongside a learner’s support plan, classroom routines, and targeted teaching).
2) Personalised practice and feedback for specific learning differences
Many AI-enabled programmes aim to adapt tasks in real time to a student’s current performance. In theory, this can help students practise at an appropriate level and receive feedback that supports progress.
Examples of where this approach is commonly used include:
- Literacy support (including reading practice for learners with dyslexia or other literacy needs),
- Handwriting and writing skills (where tools may analyse patterns like speed, pressure, or consistency to inform practice),
- Numeracy support (where systems may adapt sequencing and representation to strengthen number sense and calculation skills).
Teacher tip: Treat “personalisation” as a hypothesis, not a guarantee. Look for tools that explain what they measure, what they recommend, and how you can override or adjust the pathway.
3) Supporting communication and structured skill-building
Some AI initiatives explore how technology can support communication development or structured practice for social interaction skills (for example, through guided environments, prompts, or coaching-style feedback). These approaches can be promising, but they also highlight a key message for schools: evidence quality can vary widely, and small-scale research pilots do not always translate neatly into everyday classroom practice.
What to be careful about (especially with generative AI)
AI is not “neutral,” and it is not automatically inclusive. In SEN contexts, the stakes are higher because tools may influence learning pathways, shape expectations, or process sensitive information. Here are the main cautions teachers should keep in mind:
Accuracy and reliability
Generative AI tools can produce confident-sounding answers that are incorrect, incomplete, or not aligned with evidence-based practice. If you use generative AI to create tasks, explanations, or adaptations, you still need to apply professional judgement and verify outputs before using them with students.
Bias and fairness
AI systems reflect patterns in the data they were trained on. If that data under-represents certain learners or contexts, the tool may perform less well for the students who most need reliable support. In education, this can show up as inappropriate difficulty levels, flawed recommendations, or unhelpful “norms” about behaviour and communication.
Privacy and data protection
Some AI tools require large amounts of data to function, especially those promising high levels of personalisation. In SEN contexts, data may be particularly sensitive. Before adopting a tool, check:
- what data is collected (and whether it includes sensitive categories),
- where it is stored and processed,
- who has access,
- how long it is retained,
- and how families are informed in clear, accessible language.
Over-reliance and reduced independence
Scaffolds are useful until they become a crutch. If a tool is doing too much of the thinking, planning, or producing, students may have fewer opportunities to practise the very skills you are trying to build. Aim for tools that support progressive independence: fading prompts, encouraging self-monitoring, and strengthening core skills over time.
A practical classroom checklist for responsible use
- Start with the barrier. Define the challenge (e.g., decoding, transcription, working memory, language processing) and connect it to a clear learning goal.
- Prefer tools with evidence and transparency. Look for clear information on outcomes, learner groups, and evaluation methods. Be wary of sweeping claims that are not backed by evidence.
- Keep the teacher in the loop. Choose tools that allow teacher review, adjustment, and professional judgement—not hidden automated decisions.
- Minimise data. Only use what is necessary. Avoid high-risk data collection unless there is a strong, justified reason and robust protections.
- Pilot small and monitor. Trial with a small group, gather feedback (including student voice), and watch for unintended effects (stress, stigma, dependency, misplacement).
- Plan for inclusion. Consider how the tool supports participation, dignity, and belonging—not just performance metrics.
Watch: OECD webinar on inclusive AI and SEN
If you’d like to explore the topic further, this webinar shares insights from an OECD working paper on AI tools designed to support learners with diverse needs, followed by an expert panel discussion on benefits, risks, and responsible use:
Speakers include Andreas Schleicher (OECD), Kate Griggs (Made By Dyslexia), and Prof. Jinjun Xiong (University at Buffalo / National AI Institute for Exceptional Education). Moderated by Duncan Crawford (OECD).
Final thought
AI can support more inclusive teaching when it is used to remove barriers, broaden access, and strengthen high-quality instruction—not when it replaces professional judgement or introduces new risks. The most effective use cases are usually the simplest: tools that help students access content, practise at the right level, and communicate more easily, while keeping teachers firmly in control of decisions.
Reference
Linsenmayer, E. (2025), “Leveraging artificial intelligence to support students with special education needs”, OECD Artificial Intelligence Papers, No. 46, OECD Publishing, Paris, https://doi.org/10.1787/1e3dffa9-en.