AI Speech Therapy: SLPs at the center of clinical decisions


AI is transforming the world as we speak, and tools like ChatGPT are democratizing access to it. AI speech therapy may someday become a reality for assessments, continuous therapy monitoring, or delivering adaptive therapy exercises.

However, today there are few, if any, AI speech therapists or AI speech therapy solutions available beyond the experimental stage. Some attempts have been made, such as Jessica by Better Speech, but there are few clinically validated solutions available.

How can AI be used in speech therapy?

Today, AI is mostly used for:

  • Generating personalized AI speech therapy materials like word lists, stories, or comprehension questions.
  • Transcribing and documenting parental feedback during assessments with speech-to-text transcription.
  • Analysing patterns in large datasets to identify therapy trends or gaps in service access.
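To make the first of these uses concrete, here is a minimal sketch of how a clinician might assemble a personalized prompt for a general-purpose LLM. The function name, template wording, and client fields are illustrative assumptions, not part of any real product or API:

```python
# Sketch: building a personalized prompt for generating therapy word lists.
# The template and fields below are illustrative assumptions, not a real API.

def build_materials_prompt(client_age: int, target_sound: str, interests: list[str]) -> str:
    """Assemble an LLM prompt requesting a personalized practice word list."""
    interest_text = ", ".join(interests)
    return (
        f"Generate a list of 10 practice words containing the /{target_sound}/ sound "
        f"in the initial position, suitable for a {client_age}-year-old child "
        f"who enjoys {interest_text}. Keep vocabulary age-appropriate."
    )

prompt = build_materials_prompt(6, "r", ["dinosaurs", "soccer"])
print(prompt)
```

The resulting string could be pasted into ChatGPT or sent to any LLM; the point is that the personalization lives in the clinician-supplied details, with the SLP still reviewing every generated word.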

Most of today’s usage centres on Generative AI’s ability to personalize speech therapy to meet individual needs with tools like ChatGPT, Bing Copilot, or Perplexity.ai.

Some tools, such as Yoodli, ELSA Speak, or Duolingo, offer pronunciation feedback using AI-powered speech transcription, though these are designed for accent reduction or foreign-language fluency rather than clinically effective speech therapy.
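Why transcription-based feedback falls short of clinical assessment can be shown with a toy example. The sketch below (an assumption for illustration, not how any of these apps actually work) scores a "pronunciation" by string similarity between the target word and what a speech recognizer heard, using Python's standard-library `difflib`:

```python
# Sketch: a naive transcription-based "pronunciation check" using string
# similarity. It scores spelling distance, not articulation quality, which
# is one reason transcription alone is too coarse for clinical use.
from difflib import SequenceMatcher

def similarity_score(target: str, transcribed: str) -> float:
    """Return a ratio in [0, 1] between the target word and the ASR output."""
    return SequenceMatcher(None, target.lower(), transcribed.lower()).ratio()

# A gliding error (/r/ -> /w/, "rabbit" heard as "wabbit") barely lowers the
# score, even though it is exactly what an SLP would want flagged.
print(round(similarity_score("rabbit", "wabbit"), 2))
```

A clinically significant substitution changes one character and scores above 0.8, so a threshold-based app would likely pass it; an SLP listening to the same production would not.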

You can read our full guide on how AI can be used by speech-language pathologists.

Future Promise of AI Speech Therapy

ASHA predicts that someday AI will be used for:

  • Assessments: Automating baseline evaluations while flagging results for SLP review.
  • Progress Monitoring: Offering data-driven insights to track therapy outcomes over time.
  • Supplementing Therapy: Creating adaptive practice tools tailored to the client’s goals.

These are strong potential use cases, though much care must be taken to ensure that AI usage in speech therapy:

  • Respects ethical AI practices and avoids causing harm to already vulnerable populations.
  • Complies with the highest standards of data privacy and security, minimizing data usage and protecting sensitive user information.
  • Relies on strong clinical safeguards, reducing risks of misdiagnosis or inappropriate recommendations.

Will AI replace SLPs?

At Chatter Labs, we believe that SLPs—the human experts in communication disorders—must remain central to any clinical decision-making and judgment. SLPs consider multiple data points, including audio, visual, and contextual information (e.g., bilingual household dynamics), to provide comprehensive assessments.

Emerging AI tools will someday help SLPs:

  • Increase productivity by reducing admin tasks.
  • Collect valuable data points during home practice sessions, even when the SLP isn’t present.
  • Enhance therapy planning with AI-driven insights.

However, expert clinical judgment will not—and should not—be replaced by AI anytime soon. Every step in this evolution must benefit both SLPs and clients while adhering to strong ethical and regulatory standards.

Regulations and Ethical Frameworks on AI for speech therapy

Governments are introducing robust AI guidelines, such as the European AI Act, to ensure public protection, especially for vulnerable populations like those with communication disorders. The Food and Drug Administration’s Software as a Medical Device (SaMD) framework is also evolving quickly to cover artificial intelligence and machine learning in SaMD. Any company venturing into AI-powered speech therapy will need to adhere strictly to these rules to ensure client protection.

Artificial intelligence’s current limitations in speech therapy

While AI for speech therapy holds promise, there are clear limitations:

  • AI cannot account for nuances like a child with cleft palate producing a different acceptable range of sounds compared to a typical articulation disorder.
  • AI would struggle to differentiate an articulation disorder from a child with an accent from a bilingual household or with a strong regional accent.
  • AI has not been designed at scale to adapt to the prosody differences of children of various ages.
  • AI lacks the ability to consider non-verbal cues or the context of a child’s behaviour during a session.

For example, mature AI tools like speech-to-text transcription cannot currently assess or diagnose speech and language disorders because they miss critical layers of complexity that human SLPs naturally incorporate.

Our Vision of AI speech therapy at Chatter Labs

We are committed to creating AI-powered tools that support, not replace, SLPs. By combining cutting-edge technology with expert clinical judgment, we aim to bridge the gap between demand and access, delivering better outcomes for families and reducing the workload on SLPs.
