Neural temporal dynamics of continuous degraded speech processing

Ya-Ping Chen*, Fabian Schmidt, Anne Keitel, Sebastian Rösch, Anne Hauswald, Nathan Weisz

*Corresponding author for this work

Research output: Contribution to conference › Poster


Listening to speech with poor signal quality is challenging. Vocoding is a popular technique for studying the processing of degraded speech. We have previously shown (Hauswald et al., 2020) that for continuously vocoded speech, theta-band coherence increases as long as the speech remains comprehensible. However, that approach yields only a static measure of neural activity. Here, we applied temporal response functions (TRFs) to derive the spatio-temporal sequence of neural activity elicited by changes in the speech envelope. Participants with normal hearing listened to audiobooks at six levels of vocoding (original, 7-channel, 5-channel, 3-channel, 2-channel, and 1-channel) during MEG recording. We derived TRFs at each MEG sensor and applied a nonparametric cluster permutation test. Interestingly, an early effect (~55-95 ms) was observed, in which vocoding in general markedly increased the peak response. The effect was source-localized to auditory cortical regions, suggesting that vocoding leads to increased gain or phase resetting. Around 165-245 ms, the TRF response declined with decreasing intelligibility. In contrast to these two effects, which directly reflect the physical manipulation of the signal, the maximum response at late latencies (~335-375 ms) occurred for degraded speech that was still comprehensible (i.e., 5-channel vocoding) and then declined with reduced intelligibility. Finally, as a complementary approach, we also reconstructed the speech envelopes from the MEG data. Reconstruction accuracy decreased nonlinearly, supporting our findings in Hauswald et al. (2020). Our results shed light on the brain's representation of vocoded speech at a fine temporal scale and underline that the brain tracks linearly degraded speech in a nonlinear fashion.
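The TRF approach described above can be sketched as a regularized linear regression from time-lagged copies of the speech envelope onto the recorded signal. The following minimal sketch assumes a single sensor and ridge regularization; the function names, lag window, and regularization value are illustrative assumptions, not details of the study's actual pipeline:

```python
import numpy as np

def lag_matrix(stim, lags):
    """Design matrix of time-lagged copies of the stimulus (one column per lag)."""
    n = len(stim)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stim[:n - lag]   # response depends on past stimulus
        else:
            X[:n + lag, j] = stim[-lag:]  # acausal lags (used e.g. for checks)
    return X

def estimate_trf(stim, resp, fs, tmin, tmax, reg=1.0):
    """Forward TRF via ridge regression: resp ≈ lagged(stim) @ weights.

    stim : 1-D stimulus feature (e.g. speech envelope)
    resp : 1-D neural signal from one sensor, same sampling rate fs
    tmin, tmax : lag window in seconds
    reg : ridge regularization strength (hypothetical default)
    """
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    X = lag_matrix(stim, lags)
    XtX = X.T @ X + reg * np.eye(len(lags))
    weights = np.linalg.solve(XtX, X.T @ resp)
    times = lags / fs  # latency axis of the TRF, in seconds
    return times, weights
```

In this framing, each element of `weights` is the brain's average response to the envelope at one latency, so peaks at ~55-95 ms, ~165-245 ms, and ~335-375 ms correspond to values along the `times` axis; the complementary envelope-reconstruction analysis is the same regression run backwards (neural signals predicting the envelope).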
Original language: English
Publication status: Published - 22 Oct 2020
Event: APAN 2020 - Advances and Perspectives in Auditory Neuroscience, United States
Duration: 22 Oct 2020 - 23 Oct 2020


Online conference: APAN 2020 - Advances and Perspectives in Auditory Neuroscience
Abbreviated title: APAN 2020
Country: United States

Fields of Science and Technology Classification 2012

  • 501 Psychology


  • brain processing of speech and language
