Workshop on Language, Cognition, and Computation
The role of phonetic detail, auditory processing and language experience in the perception of assimilated speech
MEGHAN CLAYARDS
(Communication Sciences and Disorders, McGill)
Friday, May 7 at 3:30pm, in Harper 130
The speech signal is notoriously variable and complex. Not only do listeners cope well with this variability and complexity, they also display exquisite sensitivity to the co-occurrence and predictability of fine-grained aspects of the speech signal. In this talk I will discuss one such example: place assimilation at word onsets and offsets, and listeners’ ability to make use of this information (compensation for assimilation). Models of spoken-word recognition differ on whether compensation for assimilatory changes is a knowledge-driven, language-specific phenomenon or relies more on general auditory processing mechanisms. Both English and French exhibit some assimilation of sibilants (e.g., /s/ becomes like /ʃ/ in “dress shop”), but they differ in the strength and directionality of these shifts. We taught English and French participants novel words that began or ended with /s/ or /ʃ/. After training, participants heard these novel words embedded in native-language sentences that could engender assimilation. The sentences were produced by both French and English speakers and used a continuum of sibilant sounds between the two phonemic endpoints. Listeners’ perception of the potential assimilations was examined with a visual-world eye-tracking paradigm in which the listener clicked on a picture matching the novel word. The results suggest that French and English participants treated these assimilatory sequences differently. Furthermore, there was evidence for low-level auditory processing in cases with weak or no assimilation patterns in the language (/ʃ/-/s/ sequences in both languages), as well as knowledge-driven compensation in response to patterns of strong assimilation in the language (/s/-/ʃ/ sequences in English).