Auditory cognitive neuroscience, with a focus on spoken language
A spoken syllable may persist in the world for a mere tenth of a second. Yet, as adult listeners, we are able to gather a great deal of information from these fleeting acoustic signals. We may apprehend the physical location of the speaker, as well as the speaker's gender, regional dialect, age, emotional state, and identity, in addition to the linguistic message itself. The ease of everyday conversation belies the complexity involved.
Research in my lab focuses on the cognitive processes that underlie this feat, using speech processing as a platform for investigating learning, plasticity, categorization, cross-modal processing, object recognition, memory, attention, and development. Among our current projects, we are investigating the learning that occurs in acquiring the sounds of a second language and how representations of the native language interact with this learning; how listeners "tune" their auditory perception to the statistical regularities of the sound environment; and how higher-level knowledge may influence early auditory object recognition and speech categorization. Our major approach is to study human adult (and sometimes child) participants using perception and learning tasks. In addition, we make use of eye-tracking and of behavioral methods with nonhuman animals. Students in the lab are also using functional imaging to address the neural bases of auditory processing.
Lim, S.-J., & Holt, L. L. (2011). Learning foreign sounds in an alien world: Videogame training improves non-native speech categorization. Cognitive Science, 35, 1390-1405.
Liu, R., & Holt, L. L. (2011). Changes in mismatch negativity reflecting the acquisition of complex, non-speech auditory categories. Journal of Cognitive Neuroscience, 23, 683-698.
Idemaru, K., & Holt, L. L. (2011). Word recognition reflects dimension-based statistical learning. Journal of Experimental Psychology: Human Perception & Performance, 37, 1939-1956.
Leech, R., Holt, L. L., Devlin, J. T., & Dick, F. (2009). Expertise with artificial non-speech sounds recruits speech-sensitive cortical regions. Journal of Neuroscience, 29, 5234-5239.