Multimodal Communication and Cognition: The Role of Gesture in Language Processing and Word Learning in Traumatic Brain Injury

FUNDING

NIDCD 1F31DC020388-01; 1/2022 – 12/2024

PRINCIPAL INVESTIGATOR

Sharice Clough

PROJECT INFORMATION

Communication is multimodal, combining speech and gesture. When people talk, co-speech gestures (spontaneous movements of the hands and arms) can visually depict information conveyed in speech, but they often communicate unique information not conveyed in speech. For example, a speaker might say, “I searched for a new recipe,” while making a typing gesture, conveying only in gesture that the speaker searched online rather than through a cookbook. Listeners must bind linguistic information from speech with visuospatial information from gesture to generate an integrated representation of a message.

The benefits of gesture for communication and cognition are well documented in neurotypical individuals. For example, gesture improves comprehension and memory for spoken information and facilitates word learning, abilities critical for academic and vocational success. However, gesture has not yet received the same attention in clinical populations with cognitive-communication disorders, such as traumatic brain injury (TBI). We examine whether the benefits of gesture extend to individuals with TBI, or whether the very nature of their deficits prevents gesture’s facilitatory role in communication and cognition.

Using a novel approach combining methods and theory from speech-language pathology, gesture studies, psycholinguistics, and neuropsychology, we test the ability of individuals with TBI to use gesture during multimodal language processing and word learning. Studying gesture alongside speech is critical for providing ecologically valid assessments of language that more closely approximate the real-world communication contexts that characterize and enrich everyday life. The proposed research will directly advance the study of gesture in clinical populations by providing new insight into the ability to integrate speech and gesture in multimodal language contexts and by testing whether gesture can be leveraged to support new learning in individuals with TBI.