Our research focuses on how the nervous system integrates sensory information from different sources, and on the links between sensory processes and motor function in speech. To date, a satisfactory characterization of sensorimotor processes in a goal-directed task, from behavior to neural systems, remains a major challenge in neuroscience.
We aim to characterize the core processes of sensorimotor integration in speech using a variety of psychophysical techniques within a paradigm of altered sensory feedback.
In recent years, robotic devices have found wide-ranging application in studies of limb motor control; their use in perturbing speech movements, and thereby somatosensory feedback, is a more recent development. We use robots to perturb the motion path of the jaw during speech production in order to explore somatosensory function in both speech control and speech learning (Figure 1).
A schematic of a robotic device delivering a load to the jaw.
The technique is particularly appealing because it allows us to dissociate auditory and somatosensory feedback: the robot deflects the jaw, and hence alters proprioception, without affecting the speech acoustics.
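As an illustrative sketch only (not the lab's actual controller), a common choice in robotic perturbation studies is a velocity-dependent load; the gain value and the function below are assumptions for illustration:

```python
# Illustrative sketch: a velocity-dependent jaw load, a common perturbation
# in robotic motor-control studies. The gain value is a hypothetical choice
# for illustration, not the lab's actual parameters.

def perturbation_force(jaw_velocity, gain=-15.0):
    """Return a load proportional to instantaneous jaw velocity.

    jaw_velocity is in m/s; gain is in N per (m/s). A field of this kind
    deflects the jaw's motion path, and hence proprioceptive feedback,
    while producing no sound of its own.
    """
    return gain * jaw_velocity

# Example: a jaw opening at 0.2 m/s receives a 3 N resistive load.
print(perturbation_force(0.2))  # -3.0
```

Because the load depends on velocity, it vanishes when the jaw is still, so static postures are unaffected and only the movement itself is perturbed.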
We can also perturb auditory feedback online during speech by altering the sound of the voice, in particular of vowels, through shifts in formant frequencies. For example, we can shift the first formant frequency upward during production of the word “head”, while leaving the other formants and the fundamental frequency unchanged. To the subject, the produced word then sounds more like “had” (Figure 2).
Online alteration of formant frequency: as a subject utters the word “head”, the first formant frequency is shifted such that the subject hears the word as “had.”
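The perceptual consequence of the F1 shift can be sketched numerically. The formant values below are textbook approximations for an adult male speaker, and the nearest-neighbor classification is purely illustrative; the lab's real-time manipulation operates on the acoustic signal itself:

```python
# Illustrative sketch of why an upward F1 shift moves "head" toward "had".
# (F1, F2) pairs in Hz are approximate textbook values, for illustration only.

REFERENCE_F1_F2 = {
    "hid": (400, 1900),   # /ɪ/ — lowest F1 of the three
    "head": (550, 1800),  # /ɛ/
    "had": (700, 1700),   # /æ/ — highest F1 of the three
}

def nearest_vowel(f1, f2):
    """Classify a formant pair by the nearest reference vowel (Euclidean)."""
    return min(REFERENCE_F1_F2,
               key=lambda w: (REFERENCE_F1_F2[w][0] - f1) ** 2
                           + (REFERENCE_F1_F2[w][1] - f2) ** 2)

# A produced "head" with F1 shifted up by 150 Hz lands closer to "had".
f1, f2 = REFERENCE_F1_F2["head"]
print(nearest_vowel(f1 + 150, f2))  # had
```

Note that shifting F1 downward instead would move the percept toward “hid”, since /ɪ/ has the lowest F1 of the three vowels.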
We also use EEG techniques to trace the temporal patterns of neural dynamics, as well as the neural structures involved, helping to elucidate the neural underpinnings of the behaviors studied in the lab.