PrAACtical Research: Improving Accessibility for People with Significant Speech Disabilities

July 23, 2020 by Carole Zangari


When clinicians, researchers, and individuals with AAC needs come together to work on a problem, good things can happen. In today’s post, SLP Katie Seaver tells us about her experiences with Project Euphonia.

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

My name is Katie Seaver and I have been an SLP for 16 years. For the past 10 years I have been an SLP and AAC Specialist at the ALS Residence of the Leonard Florence Center in Chelsea, MA (see ALSRI.org for more information). The center is uniquely built to meet the extensive accessibility obstacles that pALS experience. Each room is private, with fully automated environmental controls run by a program called PEAC. Once residents have a device that allows them to access WiFi (e.g., an iPhone or even a Tobii Dynavox SGD), they have access to their environment as well, from heating and TV to doors and elevators.

My passion in AAC has been finding ways for users to maintain the spontaneity of communication and preserve social closeness. For typical speakers, social communication makes up the largest share of our talking time. As people transition to AAC, however, their expression leans more toward basic wants and needs. With my clients, we focus on short-term goals to increase the number of communication exchanges, with a long-term goal of increasing the number of partners across a variety of settings. This means that pre-scripted phrases are helpful but not always the perfect fit, and AAC remains slow. There is a delay in greetings, calling attention to someone as they walk by, laughing or cheering, telling jokes, and more. For those with even moderately or severely dysarthric speech, imagine a way to transcribe their speech and provide a sort of subtitle. Imagine using facial expressions or sounds (e.g., /p/) to trigger a communication event. All of this could be possible if we employ the right machine learning.

A year ago, Google came to visit the Leonard Florence Center ALS Residence to discuss Project Euphonia. Project Euphonia is a Google research project focused on increasing accessibility for people who have impaired speech. Its first aim is to improve speech recognition, like that used by Google Assistant or Google Home, for people with impaired speech. The second is to explore solutions for those who may have no speech at all. Watch this short introduction to Project Euphonia to see examples of their AI research.

Standard speech recognition is built with machine learning tools trained on hundreds of thousands of hours of ‘typical’ speech recordings. But Google doesn’t have samples of impaired speech with which to train or improve the machine learning algorithms. Thus, in May 2019, Google launched Project Euphonia, asking people (over 18 years old) to share their speech. Although it began with a partnership with ALS-TDI, it sought speech samples from all disorder types, including dysarthria, speech sound disorders (e.g., developmental speech characteristics of Down syndrome), stuttering, and dysphonia. With voice access becoming more and more ubiquitous, it is imperative that it be universally accessible. Smart speakers and smart homes are improving independence for those with mobility limitations every day, but they aren’t taught to understand atypical speech patterns. This is the research that Project Euphonia is targeting.

So, step 1: Provide Project Euphonia with more samples of impaired speech. Participants who are interested in sharing their speech can sign up at g.co/euphonia and click on the “Record phrases for Google Research” button. No medical information is collected, only consent to share speech samples with Google Research. Participants will need to provide an email address for ongoing communication with the team, the same one they use to log into a Google account. About 20 minutes after completing the form, an email will arrive with an initial set of phrases to record on an online tool called ChitChat. Participants are provided a gift card as a thank-you for their contributions; this has been $300 for 1,500 phrases. Accessibility Allies (OTs, PTs, ATPs, HHAs, etc.) and SLPs are also provided a gift card of $50 for every hour they assist a participant. All gift cards are limited to $550 per calendar year. Accessibility Allies and SLPs can complete this interest form to stay in touch with the progress of Project Euphonia and to log their hours assisting participants.

Project Euphonia continues to be a small research project with a big goal. There has never been an impaired-speech data collection project of this size. Along the way, Project Euphonia has been asking a small group of Trusted Testers, who have shared over a thousand samples each, to test the machine learning models, provide feedback, and help steer Project Euphonia toward becoming the most useful tool possible.

To learn more about the project, visit g.co/euphonia.

  • Google AI Blog on Project Euphonia
  • Watch The Age of A.I., Episode 2, on YouTube.
  • Speech Uncensored Podcast interviewed Project Euphonia’s Julie Cattiau and Katie Seaver. 
  • Katie Seaver, MS, CCC-SLP also hosts weekly Office Hours for SLPs, allies, or participants to join a video chat and discuss the project via Q&A and conversation. Follow me on Twitter @know_ks.
  • Join via Google Meet on Tuesdays from 3:30 to 4:00 EDT by clicking: meet.google.com/bop-phed-bks
  • Financial Disclosure: I am paid by Adecco for Google as an hourly consultant to Project Euphonia. I am also paid by Chelsea Jewish Lifecare at the Leonard Florence Center as an SLP.


This post was written by Carole Zangari
