Technology & Aphasia
What do we study?
Technology offers powerful opportunities to improve the assessment and treatment of acquired language disorders, such as aphasia, by increasing real-world relevance, expanding therapy access, and enhancing treatment effectiveness. Since 2012, I have explored the intersection of technology and language rehabilitation, publishing one of the first clinical trials on tablet-based speech therapy and leading remote assessment studies that have recruited over 250 individuals with aphasia. My research spans virtual co-design with persons with aphasia, the role of multimodal communication—including co-speech gestures—and innovative approaches like ecological momentary assessment to capture real-time language use. In Spring 2025, as a Fulbright Scholar at the University of Technology Sydney, I began investigating how immersive virtual reality influences user experience and narrative production in aphasia, further advancing technology-driven solutions for communication support.
Research from our lab
I first became interested in how technology could assist in aphasia recovery during my PhD at the University of Cambridge. Stark & Warburton (e-version 2016, republished 2018) investigated the effectiveness of self-delivered iPad-based speech therapy for individuals with chronic expressive aphasia. In a crossover design, participants engaged with a language therapy app or a non-language mind game for four weeks each, and results showed significant language improvements only after the therapy app arm. The findings suggested that self-delivered digital therapy can enhance language function, particularly for those with more severe baseline impairments, supporting its potential as a cost-effective supplement to traditional rehabilitation.
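For readers unfamiliar with crossover designs, the logic is that each participant serves as their own control: you compare the gain made during the therapy arm with the gain made during the control arm, within the same person. Here is a minimal sketch of that comparison; the scores and the paired t-test below are purely illustrative, not the published analysis:

```python
# Hypothetical sketch of a crossover comparison (not the published analysis).
# Each participant completes both arms: a language-therapy app and a
# non-language mind game, with naming scores measured pre and post each arm.
from scipy import stats

# Illustrative pre/post naming scores (percent correct) for six participants.
therapy_pre  = [42, 55, 38, 61, 47, 50]
therapy_post = [51, 63, 49, 66, 58, 57]
control_pre  = [43, 54, 40, 60, 48, 51]
control_post = [44, 55, 41, 61, 47, 52]

# Within-arm change scores.
therapy_gain = [post - pre for pre, post in zip(therapy_pre, therapy_post)]
control_gain = [post - pre for pre, post in zip(control_pre, control_post)]

# Paired comparison of gains across arms (each participant is their own control).
t_stat, p_value = stats.ttest_rel(therapy_gain, control_gain)
print(f"Mean therapy gain: {sum(therapy_gain) / len(therapy_gain):.1f}")
print(f"Mean control gain: {sum(control_gain) / len(control_gain):.1f}")
print(f"Paired t = {t_stat:.2f}, p = {p_value:.3f}")
```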
Like many worldwide, when the COVID-19 pandemic landed, we had to stop in-person research activities. We had just received funding from the ASHFoundation to collect data for an innovative test-retest study (see Spoken Discourse page!) and we didn't want to lose momentum. So, we learned everything we could at the time about videoconferencing and about optimizing the environment for language assessment. A fantastic group of persons with and without aphasia contributed to this effort. Because remote videoconferencing for language assessment was still new (this was before teletherapy took off), we published a technical report detailing what we did during our study (Doub, Hittson & Stark, 2020). We still use many of the same materials to this day! Indeed, we have now collected data from >250 persons with aphasia and >100 cognitively healthy adults using remote videoconferencing, as well as other methods such as ecological momentary assessment with smartphones (see Inner Speech) and assessing cognition with Gorilla Experiment Builder. If you have any questions about integrating this type of tech into your lab, reach out to me!
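If you're curious how ecological momentary assessment works in practice, here is a toy sketch of one common approach, signal-contingent sampling, where a smartphone prompts participants at random times during waking hours. All of the parameters here (number of prompts, waking window, minimum gap) are illustrative, not our study's actual protocol:

```python
# Illustrative sketch of an ecological momentary assessment (EMA) schedule:
# random prompt times within waking hours, spaced at least an hour apart.
# Parameters are hypothetical, not our study's actual protocol.
import random
from datetime import datetime, timedelta

def ema_schedule(day: datetime, n_prompts: int = 5,
                 wake_hour: int = 9, sleep_hour: int = 21,
                 min_gap_minutes: int = 60) -> list[datetime]:
    """Draw n_prompts random times between wake and sleep, min_gap apart."""
    window_start = day.replace(hour=wake_hour, minute=0, second=0, microsecond=0)
    window_minutes = (sleep_hour - wake_hour) * 60
    while True:
        # Rejection sampling: redraw until all prompts are far enough apart.
        offsets = sorted(random.sample(range(window_minutes), n_prompts))
        if all(b - a >= min_gap_minutes for a, b in zip(offsets, offsets[1:])):
            return [window_start + timedelta(minutes=m) for m in offsets]

for prompt in ema_schedule(datetime(2024, 3, 1)):
    print(prompt.strftime("%H:%M"))
```

The random spacing is the point: it lets EMA capture language use as it unfolds in daily life, rather than only at scheduled assessment sessions.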
My more recent endeavors have involved collaborations using technology that is new to me, but that I've very much enjoyed learning about. Dr. Seungbae Kim (University of South Florida) and I are developing an AI-powered system that integrates speech and gestures, using a multi-modal fusion model and zero-shot inference to enhance communication for individuals with aphasia. Current assistive technologies focus primarily on speech and neglect non-verbal communication, which limits their real-world effectiveness. Our prototype, trained on video samples from individuals with aphasia, demonstrates the feasibility of synchronizing speech and gestures to accurately infer communicative intent. You can read more about preliminary results here.
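To give a flavor of what multi-modal fusion means here, below is a generic late-fusion sketch in PyTorch. It is not Dr. Kim's actual architecture, and all dimensions, layer choices, and intent labels are hypothetical: speech and gesture embeddings are projected into a shared space, fused, and then scored against text embeddings of candidate intents, which is what makes zero-shot inference possible.

```python
# Generic late-fusion sketch (hypothetical, not our actual model): speech and
# gesture streams are embedded separately, concatenated, and projected into a
# shared space where candidate intents can be scored zero-shot.
import torch
import torch.nn as nn

class SpeechGestureFusion(nn.Module):
    def __init__(self, speech_dim=768, gesture_dim=256, shared_dim=512):
        super().__init__()
        self.speech_proj = nn.Linear(speech_dim, shared_dim)
        self.gesture_proj = nn.Linear(gesture_dim, shared_dim)
        self.fusion = nn.Sequential(
            nn.Linear(2 * shared_dim, shared_dim),
            nn.ReLU(),
            nn.Linear(shared_dim, shared_dim),
        )

    def forward(self, speech_emb, gesture_emb):
        # Concatenate the two projected modalities, then fuse.
        fused = torch.cat([self.speech_proj(speech_emb),
                           self.gesture_proj(gesture_emb)], dim=-1)
        return self.fusion(fused)

# Zero-shot intent inference: compare the fused embedding against text
# embeddings of candidate intents (random stand-ins for a real text encoder).
model = SpeechGestureFusion()
speech, gesture = torch.randn(1, 768), torch.randn(1, 256)
utterance = model(speech, gesture)          # shape: (1, 512)
intent_embs = torch.randn(4, 512)           # e.g., "greet", "request", ...
scores = torch.cosine_similarity(utterance, intent_embs, dim=-1)
print("Best intent index:", scores.argmax().item())
```

Because intents are matched by embedding similarity rather than a fixed classifier head, new intents can be added at inference time without retraining.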
Spring 2025 took me to Sydney, Australia as a Fulbright Research Scholar and Honorary Faculty at the University of Technology Sydney in the Speech Therapy department, working alongside Dr. Lucy Bryant. I proposed a project to study the potential of virtual reality (VR) to enhance the generalization of language therapy in individuals with aphasia by facilitating immersive, naturalistic communication practice. Using a mixed-methods design, the study (ongoing!) will compare the linguistic complexity of narratives produced in VR versus a traditional clinical setting and examine how immersion, presence, and comfort influence language production. Data will be collected from individuals with post-stroke aphasia using the Meta Quest 2 headset. The findings will establish the feasibility of VR as a tool for language rehabilitation and inform future clinical applications to improve real-world communication outcomes for individuals with aphasia. I also wrote a small tutorial on the potential of immersive VR for speech and language interventions in the adult population.
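As one concrete example of what "linguistic complexity" can mean for narratives, here is a minimal sketch of two common discourse measures, mean length of utterance in words (MLU-w) and the moving-average type-token ratio (MATTR). The transcript and window size are toy examples; this is not the study's actual analysis pipeline:

```python
# Illustrative narrative-complexity measures (not the study's actual pipeline):
# mean length of utterance in words (MLU-w) and moving-average type-token
# ratio (MATTR), computed on a toy transcript split into utterances.
def mlu_words(utterances: list[str]) -> float:
    """Mean number of words per utterance."""
    lengths = [len(u.split()) for u in utterances]
    return sum(lengths) / len(lengths)

def mattr(tokens: list[str], window: int = 10) -> float:
    """Moving-average type-token ratio over fixed-size windows."""
    if len(tokens) < window:
        return len(set(tokens)) / len(tokens)
    ratios = [len(set(tokens[i:i + window])) / window
              for i in range(len(tokens) - window + 1)]
    return sum(ratios) / len(ratios)

transcript = [
    "the boy climbed the tree",
    "he wanted the kite",
    "then he fell and hurt his arm",
]
tokens = " ".join(transcript).lower().split()
print(f"MLU-w: {mlu_words(transcript):.2f}")
print(f"MATTR: {mattr(tokens):.2f}")
```

Measures like these, computed on transcripts from the VR and clinic conditions, are one way to quantify whether immersive settings elicit richer language.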