Publications
Ebrahimi S, van der Voort B, Ostry DJ (2024) The consolidation of newly learned movements depends upon the somatosensory cortex in humans. J Neurosci 44(32): e0629242024.
Abstract | PDF
Studies
using magnetic brain stimulation indicate the involvement
of somatosensory regions in the acquisition and retention
of newly learned movements. Recent work found an
impairment in motor memory when retention was tested
shortly after the application of continuous theta-burst
stimulation (cTBS) to the primary somatosensory cortex,
compared with stimulation of the primary motor cortex or a
control zone. This finding that the somatosensory cortex
is involved in motor memory retention whereas the motor
cortex is not, if confirmed, could alter our understanding
of human motor learning. It would indicate that plasticity
in sensory systems underlies newly learned movements,
which differs from the commonly held view that
adaptation learning involves updates to a motor
controller. Here we test this idea. Participants were
trained in a visuomotor adaptation task, with visual
feedback gradually shifted. Following adaptation, cTBS was
applied either to M1, S1, or an occipital cortex control
area. Participants were tested for retention 24 h later.
It was observed that S1 stimulation led to reduced
retention of prior learning, compared with stimulation of
M1 or the control area (with no significant difference
between M1 and control). In a further control, cTBS was
applied to S1 following training with unrotated feedback,
in which no learning occurred. This had no effect on
movement in the retention test indicating the effects of
S1 stimulation on movement are learning specific. The
findings are consistent with the participation of S1 in the
encoding of learning-related changes to movements and in
the retention of human motor memory.
Ebrahimi S, Ostry DJ (2024) The human somatosensory cortex contributes to the encoding of newly learned movements. Proc Natl Acad Sci U S A 121: e2316294121.
Abstract | PDF
Recent
studies have indicated somatosensory cortex involvement in
motor learning and retention. However, the nature of its
contribution is unknown. One possibility is that the
somatosensory cortex is transiently engaged during
movement. Alternatively, there may be durable
learning-related changes which would indicate sensory
participation in the encoding of learned movements. These
possibilities are dissociated by disrupting the
somatosensory cortex following learning, thus targeting
learning-related changes which may have occurred. If
changes to the somatosensory cortex contribute to
retention, which, in effect, means aspects of newly
learned movements are encoded there, disruption of this
area once learning is complete should lead to an
impairment. Participants were trained to make movements
while receiving rotated visual feedback. The primary motor
cortex (M1) and the primary somatosensory cortex (S1) were
targeted for continuous theta-burst stimulation, while
stimulation over the occipital cortex served as a control.
Retention was assessed using active movement reproduction,
or recognition testing, which involved passive movements
produced by a robot. Disruption of the somatosensory
cortex resulted in impaired motor memory in both tests.
Suppression of the motor cortex had no impact on retention
as indicated by comparable retention levels in control and
motor cortex conditions. The effects were learning
specific. When stimulation was applied to S1 following
training with unrotated feedback, movement direction, the
main dependent variable, was unaltered. Thus, the
somatosensory cortex is part of a circuit that contributes
to retention, consistent with the idea that aspects of
newly learned movements, possibly learning-updated sensory
states (new sensory targets) which serve to guide
movement, may be encoded there.
Darainy M, Manning TF, Ostry DJ (2023) Disruption of somatosensory cortex impairs motor learning and retention. J Neurophysiol 130: 1521-1528.
Abstract | PDF
This
study tests for a function of the somatosensory cortex: that, in addition to its role in processing somatic afferent information, somatosensory cortex contributes both to motor learning and to the stabilization of motor
memory. Continuous theta-burst magnetic stimulation (cTBS)
was applied before force-field training to disrupt
activity in either the primary somatosensory cortex,
primary motor cortex, or a control zone over the occipital
lobe. Tests for retention and relearning were conducted
after a 24 h delay. Analysis of movement kinematic
measures and force-channel trials found that cTBS to
somatosensory cortex disrupted both learning and
subsequent retention, whereas cTBS to motor cortex had
little effect on learning but possibly impaired retention.
Basic movement variables were unaffected by cTBS, suggesting
that the stimulation does not interfere with movement but
instead disrupts changes in the cortex that are necessary
for learning. In all experimental conditions, relearning
in an abruptly introduced force field, which followed
retention testing, showed extensive savings, which is
consistent with previous work suggesting that more
cognitive aspects of learning and retention are not
dependent on either of the cortical zones under test.
Taken together, the findings are consistent with the idea
that motor learning is dependent on learning-related
activity in the somatosensory cortex. NEW & NOTEWORTHY
This study uses noninvasive transcranial magnetic
stimulation to test the contribution of somatosensory and
motor cortex to human motor learning and retention.
Continuous theta-burst stimulation is applied before
learning; participants return 24 h later to assess
retention. Disruption of the somatosensory cortex is found
to impair both learning and retention, whereas disruption
of the motor cortex has no effect on learning. The
findings are consistent with the idea that motor learning
is dependent upon learning-related plasticity in
somatosensory cortex.
Franken M, Liu B, Ostry DJ (2022) Towards a somatosensory theory of speech perception. J Neurophysiol 128: 1683-1695.
Abstract | PDF
Speech perception is known to be a multimodal process,
relying not only on auditory input but also on the visual
system and possibly on the motor system as well. To date
there has been little work on the potential involvement of
the somatosensory system in speech perception. In the
present review, we identify the somatosensory system as
another contributor to speech perception. First, we
argue that evidence in favor of a motor contribution to
speech perception can just as easily be interpreted as
showing somatosensory involvement. Second, physiological
and neuroanatomical evidence for auditory-somatosensory
interactions across the auditory hierarchy indicates the
availability of a neural infrastructure that supports
somatosensory involvement in auditory processing in
general. Third, there is accumulating evidence for
somatosensory involvement in the context of speech
specifically. In particular, tactile stimulation modifies
speech perception, and speech auditory input elicits
activity in somatosensory cortical areas. Moreover,
speech sounds can be decoded from activity in
somatosensory cortex; lesions to this region affect
perception, and vowels can be identified based on somatic
input alone. We suggest that the somatosensory involvement
in speech perception derives from the
somatosensory-auditory pairing that occurs during speech
production and learning. By bringing together findings
from a set of studies that have not been previously
linked, the present article identifies the somatosensory
system as a presently unrecognized contributor to speech
perception.
Ebrahimi S, Ostry DJ (2022) Persistence of adaptation following visuomotor training. J Neurophysiol 128:1312-1323.
Abstract | PDF
Retention tests conducted after sensorimotor adaptation
frequently exhibit a rapid return to baseline performance
once the altered sensory feedback is removed. This
so-called washout of learning stands in contrast with
other demonstrations of retention, such as savings on
re-learning and anterograde interference effects of
initial learning on new learning. In the present study, we
tested the hypothesis that washout occurs when there is a
detectable discrepancy in retention tests between visual
information on the target position and somatosensory
information on the position of the limb. Participants were
tested following adaptation to gradually rotated visual
feedback (15 or 30 degrees). Two different types of
targets were used for retention testing, a point target in
which a perceptual mismatch is possible, and an arc-target
that eliminated the mismatch. It was found that, except
when point targets were used, retention test movements
were stable throughout aftereffect trials, indicating
little loss of information. Substantial washout was only
observed in tests with a single point target, following
adaptation to a large amplitude 30 degree rotation. In
control studies designed to minimize the use of explicit
strategies during learning, we observed similar patterns
of decay when participants moved to point targets, which
suggests that the effects observed here relate primarily
to implicit learning. The results suggest that washout in
aftereffect trials following visuomotor adaptation is due
to a detectable mismatch between vision and
somatosensation. When the mismatch is removed
experimentally, there is little evidence of loss of
information. NEW & NOTEWORTHY Aftereffects following
sensorimotor adaptation are important because they bear on
the understanding of the mechanisms that subserve
forgetting. We present evidence that information loss
previously reported during retention testing occurs only
when there is a detectable discrepancy between vision and
somatosensation and, if this mismatch is removed, the
persistence of adaptation is observed. This suggests that
washout during aftereffect trials is a consequence of the
experimental design rather than a property of the memory
system itself.
Kumar N, Sidarta A, Smith C, Ostry DJ (2022) Ventrolateral prefrontal cortex contributes to human motor learning. eNeuro
Abstract | PDF
This study assesses the involvement of the ventrolateral prefrontal cortex (BA 9/46v), a somatic region in the middle frontal gyrus, in human motor learning. The
potential involvement of this cortical area in motor
learning is suggested by studies in nonhuman primates
which have found anatomic connections between this area
and sensorimotor regions in frontal and parietal cortex,
and also with basal ganglia output zones. It is likewise
suggested by electrophysiological studies which have
shown that activity in this region is implicated in
somatic sensory memory and is also influenced by reward.
We directly tested the hypothesis that area 9/46v is involved in reinforcement-based motor learning in humans.
Participants performed reaching movements to a hidden
target and received positive feedback when successful.
Before the learning task, we applied continuous theta-burst stimulation (cTBS) to disrupt activity in 9/46v in
the left or right hemisphere. A control group received
sham cTBS. The data showed that cTBS to left 9/46v almost
entirely eliminated motor learning, whereas learning was
not different from sham stimulation when cTBS was applied
to the same zone in the right hemisphere. Additional
analyses showed that the basic reward-history-dependent
pattern of movements was preserved but more variable
following left hemisphere stimulation, which suggests an
overall deficit in somatic memory for target location or target-directed movement rather than reward processing per se. The results indicate that area 9/46v is part of the
human motor learning circuit.
Sidarta A, Komar J, Ostry DJ (2022) Clustering analysis of movement kinematics in reinforcement learning. J Neurophysiol 127:341-353.
Abstract | PDF
Reinforcement learning has been used as an experimental
model of motor skill acquisition, where at times movements
are successful and thus reinforced. One fundamental
problem is to understand how humans select exploration
over exploitation during learning. The decision could be
influenced by factors such as task demands and reward
availability. In this study, we applied a clustering
algorithm to examine how a change in the accuracy
requirements of a task affected the choice of exploration
over exploitation. Participants made reaching movements
to an unseen target using a planar robot arm and received
reward after each successful movement. For one group of
participants, the width of the hidden target decreased
after every other training block. For a second group, it
remained constant. The clustering algorithm was applied to
the kinematic data to characterize motor learning on a
trial-to-trial basis as a sequence of movements, each
belonging to one of the identified clusters. By the end of
learning, movement trajectories across all participants
converged primarily to a single cluster with the greatest
number of successful trials. Within this analysis
framework, we defined exploration and exploitation as
types of behavior in which two successive trajectories
belong to different or similar clusters, respectively. The
frequency of each mode of behavior was evaluated over the
course of learning. It was found that by reducing the
target width, participants used a greater variety of
different clusters and displayed more exploration than
exploitation. Excessive exploration relative to
exploitation was found to be detrimental to subsequent
motor learning. NEW & NOTEWORTHY The choice of
exploration versus exploitation is a fundamental problem
in learning new motor skills through reinforcement. In
this study, we employed a data-driven approach to
characterize movements on a trial-by-trial basis with an
unsupervised clustering algorithm. Using this technique,
we found that changes in task demands and, in particular,
in the required accuracy of movements, influenced the
ratio of exploration to exploitation. This analysis
framework provides an attractive tool to investigate
mechanisms of explorative and exploitative behavior while
studying motor learning.
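The trial-by-trial classification described above can be illustrated with a short sketch. This is not the authors' code: the clustering algorithm (k-means here), the trajectory features, and the number of clusters are assumptions made only for illustration.

import numpy as np
from sklearn.cluster import KMeans

def label_exploration(trajectories, n_clusters=5, seed=0):
    # trajectories: (n_trials, n_samples, 2) x-y hand paths resampled to a
    # common length so that each trial can be flattened into a feature vector
    X = trajectories.reshape(len(trajectories), -1)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)
    # exploration: trial i+1 falls in a different cluster than trial i;
    # exploitation: two successive trials fall in the same cluster
    exploration = labels[1:] != labels[:-1]
    return labels, exploration

# Example with simulated reach paths
rng = np.random.default_rng(0)
trajectories = rng.normal(size=(100, 50, 2)).cumsum(axis=1)
labels, exploration = label_exploration(trajectories)
print(f"exploration on {exploration.mean():.0%} of trial pairs")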
Sedda G, Ostry DJ, Sanguineti V, Sabatini SP (2021) Self-operated stimuli improve subsequent visual motion integration. J Vision 21:13,1-15.
Abstract | PDF
Evidence of perceptual changes that accompany motor activity has been limited primarily to audition and
somatosensation. Here we asked whether motor learning
results in changes to visual motion perception. We
designed a reaching task in which participants were
trained to make movements along several directions, while
the visual feedback was provided by an intrinsically
ambiguous moving stimulus directly tied to hand motion. We
find that training improves coherent motion perception and
that changes in movement are correlated with perceptual
changes. No perceptual changes are observed in passive
training even when observers were provided with an
explicit strategy to facilitate single motion perception.
A Bayesian model suggests that movement training promotes
the fine-tuning of the internal representation of stimulus
geometry. These results emphasize the role of sensorimotor
interaction in determining the persistent properties in
space and time that define a percept.
Ohashi H, Ostry DJ (2021) Neural development of speech sensorimotor learning. J Neurosci 41:4023-4035.
Abstract | PDF
The development of the human brain continues through to
early adulthood. It has been suggested that cortical
plasticity during this protracted period of development
shapes circuits in associative transmodal regions of the
brain. Here we considered how cortical plasticity during
development might contribute to the coordinated brain
activity required for speech motor learning. Specifically,
we examined patterns of brain functional connectivity
whose strength covaried with the capacity for speech
audio-motor adaptation in children ages 5-12 and in young
adults of both sexes. Children and adults showed distinct
patterns of the encoding of learning in the brain. Adult
performance was associated with connectivity in transmodal
regions that integrate auditory and somatosensory
information, whereas children relied on basic somatosensory
and motor circuits. A progressive reliance on transmodal
regions is consistent with human cortical development and
suggests that human speech motor adaptation abilities are
built on cortical remodeling that is observable in late
childhood and is stabilized in adults.
Kumar N, van Vugt FT, Ostry DJ (2021) Recognition memory for human motor learning. Curr Biol 31:1678-1686.
Abstract | PDF
Motor skill retention is typically measured by asking
participants to reproduce previously learned movements
from memory. The analog of this retention test (recall
memory) in human verbal memory is known to underestimate
how much learning is actually retained. Here we asked
whether information about previously learned movements,
which can no longer be reproduced, is also retained.
Following visuomotor adaptation, we used tests of recall
that involved reproduction of previously learned movements
and tests of recognition in which participants were asked
whether a candidate limb displacement, produced by a robot
arm held by the subject, corresponded to a movement
direction that was experienced during active training. The
main finding was that 24 h after training, estimates of
recognition memory were about twice as accurate as those
of recall memory. Thus, there is information about
previously learned movements that is not retrieved using
recall testing but can be accessed in tests of
recognition. We conducted additional tests to assess
whether, 24 h after learning, recall for previously learned
movements could be improved by presenting passive
movements as retrieval cues. These tests were conducted
immediately prior to recall testing and involved the
passive playback of a small number of movements, which
were spread across the workspace and included both adapted
and baseline movements, without being marked as such. This
technique restored recall memory for movements to levels
close to those of recognition memory performance. Thus,
somatic information may enable retrieval of otherwise
inaccessible motor memories.
van Vugt FT, Near J, Hennessy T, Doyon J, Ostry DJ (2020) Early stages of sensorimotor map acquisition: neurochemical signature in primary motor cortex and its relation to functional connectivity. J Neurophysiol 122: 1708-1720.
Abstract | PDF
Ito T, Bai J, Ostry DJ (2020) Contribution of sensory memory to speech motor learning. J Neurophysiol 124:1103-1109.
Abstract | PDF
Speech
learning requires precise motor control, but it likewise
requires transient storage of information to enable the
adjustment of upcoming movements based on the success or
failure of previous attempts. The contribution of somatic
sensory memory for limb position has been documented in
work on arm movement; however, in speech, the sensory
support for speech production comes from both
somatosensory and auditory inputs, and accordingly sensory
memory for either or both of sounds and somatic inputs
might contribute to learning. In the present study,
adaptation to altered auditory feedback was used as an experimental model of speech motor learning. Participants
also underwent tests of both auditory and somatic sensory
memory. We found that although auditory memory for speech
sounds is better than somatic memory for speech-like
facial skin deformations, somatic sensory memory predicts
adaptation, whereas auditory sensory memory does not.
Thus even though speech relies substantially on auditory
inputs and in the present manipulation adaptation requires
the minimization of auditory error, it is somatic inputs
that provide the memory support for learning. NEW &
NOTEWORTHY: In speech production, almost everyone achieves
an exceptionally high level of proficiency. This is
remarkable because speech involves some of the smallest and most carefully timed movements of which we are
capable. The present paper demonstrates that sensory
memory contributes to speech motor learning. Moreover, we
report the surprising result that somatic sensory memory
predicts speech motor learning, whereas auditory memory
does not.
Patri JF, Ostry DJ, Diard J, Schwartz JL, Trudeau-Fisette P, Savariaux C, Perrier P (2020) Speakers are able to categorize vowels based on tongue somatosensation. Proc Natl Acad Sci U S A. 117:6255-6263.
Abstract | PDF
Auditory speech perception enables listeners to access
phonological categories from speech sounds. During speech
production and speech motor learning, speakers' experience
matched auditory and somatosensory input. Accordingly,
access to phonetic units might also be provided by
somatosensory information. The present study assessed
whether humans can identify vowels using somatosensory
feedback, without auditory feedback. A tongue-positioning
task was used in which participants were required to
achieve different tongue postures within the /e, ɛ, a/
articulatory range, in a procedure that was totally
non-speech-like, involving distorted visual feedback of tongue shape. Tongue postures were measured using
electromagnetic articulography. At the end of each
tongue-positioning trial, subjects were required to
whisper the corresponding vocal tract configuration with
masked auditory feedback and to identify the vowel
associated with the reached tongue posture. Masked
auditory feedback ensured that vowel categorization was
based on somatosensory feedback rather than auditory
feedback. A separate group of subjects was required to
auditorily classify the whispered sounds. In addition, we
modeled the link between vowel categories and tongue
postures in normal speech production with a Bayesian
classifier based on the tongue postures recorded from the
same speakers for several repetitions of the /e, ɛ, a/
vowels during a separate speech production task. Overall,
our results indicate that vowel categorization is possible
with somatosensory feedback alone, with an accuracy that
is similar to the accuracy of the auditory perception of
whispered sounds, and in congruence with normal speech
articulation, as accounted for by the Bayesian classifier.
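As a rough illustration of how a Bayesian classifier can relate tongue postures to vowel categories, the sketch below fits one multivariate Gaussian per vowel to production data and returns posterior probabilities for a new posture. It is a stand-in under stated assumptions, not the published model: the Gaussian likelihoods, flat priors, and four-dimensional posture vectors are choices made only for the example.

import numpy as np
from scipy.stats import multivariate_normal

def fit_classifier(postures_by_vowel):
    # postures_by_vowel: dict mapping a vowel label to an array
    # (n_repetitions, n_features) of tongue-posture measurements
    return {v: multivariate_normal(X.mean(axis=0), np.cov(X, rowvar=False))
            for v, X in postures_by_vowel.items()}

def classify(model, posture):
    # flat prior over vowels; posterior proportional to the Gaussian likelihood
    post = np.array([model[v].pdf(posture) for v in model])
    post /= post.sum()
    return dict(zip(model, post))

# Example with synthetic posture data for /e/, /ɛ/, /a/
rng = np.random.default_rng(0)
data = {v: rng.normal(loc=i, size=(30, 4)) for i, v in enumerate(["e", "ɛ", "a"])}
model = fit_classifier(data)
print(classify(model, rng.normal(loc=1.0, size=4)))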
Darainy M, Vahdat S, Ostry DJ (2019) Neural basis of sensorimotor learning in speech motor adaptation. Cereb Cortex 29:2876-2889.
Abstract | PDF
Motor learning is associated with plasticity in both motor
and somatosensory cortex. It is known from animal studies
that tetanic stimulation to each of these areas
individually induces long-term potentiation in its
counterpart. In this context it is possible that changes
in motor cortex contribute to somatosensory change and
that changes in somatosensory cortex are involved in
changes in motor areas of the brain. It is also possible
that learning-related plasticity occurs in these areas
independently. To better understand the relative contribution to human motor learning of motor cortical and
somatosensory plasticity, we assessed the time course of
changes in primary somatosensory and motor cortex
excitability during motor skill learning. Learning was
assessed using a force production task in which a target
force profile varied from one trial to the next. The
excitability of primary somatosensory cortex was measured
using somatosensory evoked potentials in response to
median nerve stimulation. The excitability of primary
motor cortex was measured using motor evoked potentials
elicited by single-pulse transcranial magnetic
stimulation. These two measures were interleaved with
blocks of motor learning trials. We found that the
earliest changes in cortical excitability during learning
occurred in somatosensory cortical responses, and these
changes preceded changes in motor cortical excitability.
Changes in somatosensory evoked potentials were correlated
with behavioral measures of learning. Changes in motor
evoked potentials were not. These findings indicate that
plasticity in somatosensory cortex occurs as a part of the
earliest stages of motor learning, before changes in motor
cortex are observed. NEW & NOTEWORTHY: We tracked somatosensory and motor cortical excitability during motor skill acquisition. Changes in both motor cortical and somatosensory excitability were observed during learning; however, the earliest changes were in somatosensory cortex, not motor cortex. Moreover, the earliest changes in somatosensory cortical excitability predict the extent of subsequent learning; those in motor cortex do not. This is consistent with the idea that plasticity in somatosensory cortex coincides with the earliest stages of human motor learning.
Ohashi H, Valle-Mena R, Gribble P, Ostry DJ (2019) Movements following force-field adaptation are aligned with altered sense of limb position. Exp Brain Res 237:1303-1313.
Abstract | PDF
Previous work has shown that motor learning is associated
with changes to both movements and to the somatosensory
perception of limb position. In an earlier study that
motivates the current work, it appeared that following
washout trials, movements did not return to baseline but
rather were aligned with associated changes to sensed limb
position. Here, we provide a systematic test of this
relationship, examining the idea that adaptation-related
changes to sensed limb position and to the path of the
limb are linked, not only after washout trials but at all
stages of the adaptation process. We used a force-field
adaptation paradigm followed by washout trials in which
subjects performed movements without visual feedback of
the limb. Tests of sensed limb position were conducted at
each phase of adaptation, specifically before and after
baseline movements in a null field, after force-field
adaptation, and following washout trials in a null field.
As in previous work, sensed limb position changed in
association with force-field adaptation. At each stage of
adaptation, we observed a correlation between the sensed
limb position and associated path of the limb. At a group
level, there were differences between the clockwise and
counter-clockwise conditions. However, whenever there were
changes in sensed limb position, movements following
washout did not return to baseline. This suggests that
adaptation in sensory and motor systems is not independent
processes but rather sensorimotor adaptation is linked to
sensory change. Sensory change and limb movement remain in
alignment throughout adaptation such that the path of the
limb is aligned with the altered sense of limb position.
van Vugt FT, Ostry DJ (2019) Early stages of sensorimotor map acquisition: learning with free exploration, without active movement or global structure. J Neurophysiol 122:1708-1720.
Abstract | PDF
One
of the puzzles of learning to talk or play a musical
instrument is how we learn which movement produces a
particular sound: an audiomotor map. The initial stages of
map acquisition can be studied by having participants
learn arm movements to auditory targets. The key question
is what mechanism drives this early learning. Three
learning processes from previous literature were tested:
map learning may rely on active motor outflow (target), on
error correction, and on the correspondence between
sensory and motor distances (i.e., that similar movements
map to similar sounds). Alternatively, we hypothesized
that map learning can proceed without these. Participants
made movements that were mapped to sounds in a number of
different conditions that each precluded one of the
potential learning processes. We tested whether map
learning relies on assumptions about topological
continuity by exposing participants to a permuted map that
did not preserve distances in auditory and motor space.
Further groups were tested who passively experienced the
targets, kinematic trajectories produced by a robot arm,
and auditory feedback as a yoked active participant (hence
without active motor outflow). Another group made
movements without receiving targets (thus without
experiencing errors). In each case we observed substantial
learning; therefore, none of the three hypothesized processes is required for learning. Instead, early map acquisition can occur with free exploration, without target error correction, is based on sensory-to-sensory correspondences, and is possible even for discontinuous maps.
The findings are consistent with the idea that early
sensorimotor map formation can involve instance-specific
learning. NEW & NOTEWORTHY This study tested learning
of novel sensorimotor maps in a variety of unusual
circumstances, including learning a mapping that was
permuted in such a way that it fragmented the
sensorimotor workspace into discontinuous parts, thus not
preserving sensory and motor topology. Participants could
learn this mapping, and they could learn without motor
outflow or targets. These results point to a robust
learning mechanism building on individual instances,
inspired by the machine learning literature.
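The instance-based account suggested above can be sketched in a few lines. This is a toy illustration under assumed details (a one-dimensional pitch target, a nearest-neighbour look-up, a randomly permuted movement-to-sound map), not the experimental task or the authors' model.

import numpy as np

rng = np.random.default_rng(0)
angles = np.linspace(0, 90, 16)                              # candidate reach directions (deg)
pitch_of = dict(zip(angles, rng.permutation(len(angles))))   # permuted, discontinuous map

# Free exploration: store (experienced sound, movement) instances, with no targets or errors
memory = [(pitch_of[a], a) for a in rng.choice(angles, size=200)]

def reach_for(target_pitch):
    # instance-based read-out: reuse the movement whose stored sound is closest
    pitches, moves = np.array(memory).T
    return moves[np.argmin(np.abs(pitches - target_pitch))]

correct = sum(reach_for(pitch_of[a]) == a for a in angles)
print(f"targets reproduced correctly after exploration: {correct}/{len(angles)}")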
Kumar N, Manning TF, Ostry DJ (2019) Somatosensory cortex participates in the consolidation of human motor memory. PLoS Biol 17:e3000469.
Abstract | PDF
Newly learned motor skills are initially labile and then
consolidated to permit retention. The circuits that enable
the consolidation of motor memories remain uncertain. Most
work to date has focused on primary motor cortex, and
although there is ample evidence of learning-related
plasticity in motor cortex, direct evidence for its
involvement in memory consolidation is limited.
Learning-related plasticity is also observed in
somatosensory cortex, and accordingly, it may also be
involved in memory consolidation. Here, by using
transcranial magnetic stimulation (TMS) to block
consolidation, we report the first direct evidence that
plasticity in somatosensory cortex participates in the
consolidation of motor memory. Participants made movements
to targets while a robot applied forces to the hand to
alter somatosensory feedback. Immediately following
adaptation, continuous theta-burst transcranial magnetic
stimulation (cTBS) was delivered to block retention; then,
following a 24-hour delay, which would normally permit
consolidation, we assessed whether there was an
impairment. It was found that when mechanical loads were
introduced gradually to engage implicit learning
processes, suppression of somatosensory cortex following
training almost entirely eliminated retention. In
contrast, cTBS to motor cortex following learning had
little effect on retention; retention following cTBS to motor cortex was not different from that following sham
TMS stimulation. We confirmed that cTBS to somatosensory
cortex interfered with normal sensory function and that it
blocked motor memory consolidation and not the ability to
retrieve a consolidated motor memory. In conclusion, the
findings are consistent with the hypothesis that in
adaptation learning, somatosensory cortex rather than
motor cortex is involved in the consolidation of motor
memory.
Ohashi H, Gribble PL, Ostry DJ (2019) Somatosensory cortical excitability changes precede those in motor cortex during human motor learning. J Neurophysiol 122:1397-1405.
Abstract | PDF
Motor learning is associated with plasticity in both motor
and somatosensory cortex. It is known from animal studies
that tetanic stimulation to each of these areas
individually induces long-term potentiation in its
counterpart. In this context it is possible that changes
in motor cortex contribute to somatosensory change and
that changes in somatosensory cortex are involved in
changes in motor areas of the brain. It is also possible
that learning-related plasticity occurs in these areas
independently. To better understand the relative
contribution to human motor learning of motor cortical and
somatosensory plasticity, we assessed the time course of
changes in primary somatosensory and motor cortex
excitability during motor skill learning. Learning was
assessed using a force production task in which a target
force profile varied from one trial to the next. The
excitability of primary somatosensory cortex was measured
using somatosensory evoked potentials in response to
median nerve stimulation. The excitability of primary
motor cortex was measured using motor evoked potentials
elicited by single-pulse transcranial magnetic
stimulation. These two measures were interleaved with
blocks of motor learning trials. We found that the
earliest changes in cortical excitability during learning
occurred in somatosensory cortical responses and these
changes preceded changes in motor cortical excitability.
Changes in somatosensory evoked potentials were correlated
with behavioral measures of learning. Changes in motor
evoked potentials were not. These findings indicate that
plasticity in somatosensory cortex occurs as a part of the
earliest stages of motor learning, before changes in motor
cortex are observed.
Vahdat S, Darainy M, Thiel A, Ostry DJ (2018) A single session of robot-controlled proprioceptive training modulates functional connectivity of sensory motor networks and improves reaching accuracy in chronic stroke. Neurorehabil Neural Repair 33:70-81.
Abstract | PDF
Background. Passive robot-generated arm movements in conjunction with proprioceptive decision making and feedback modulate functional connectivity (FC) in sensory motor networks and improve sensorimotor adaptation in normal individuals. This proof-of-principle study investigates whether these effects can be observed in stroke patients. Methods. A total of 10 chronic stroke patients with a range of stable motor and sensory deficits (Fugl-Meyer Arm score [FMA] 0-65, Nottingham Sensory Assessment [NSA] 10-40) underwent resting-state functional magnetic resonance imaging before and after a single session of robot-controlled proprioceptive training with feedback. Changes in FC were identified in each patient using independent component analysis as well as a seed region-based approach. FC changes were related to impairment, and changes in task performance were assessed. Results. A single training session improved average arm reaching accuracy in 6 and proprioception in 8 patients. Two networks showing training-associated FC change were identified. Network C1 was present in all patients and network C2 only in patients with FM scores >7. Relatively larger C1 volume in the ipsilesional hemisphere was associated with less impairment (r = 0.83 for NSA, r = 0.73 for FMA). This association was driven by specific regions in the contralesional hemisphere and their functional connections (supramarginal gyrus with FM scores r = 0.82, S1 with NSA scores r = 0.70, and cerebellum with NSA score r = −0.82). Conclusion. A single session of robot-controlled proprioceptive training with feedback improved movement accuracy and induced FC changes in sensory motor networks of chronic stroke patients. FC changes are related to functional impairment and comprise bilateral sensory and motor network nodes.
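A seed-based functional connectivity map of the kind mentioned above is simply the correlation between a seed region's mean time course and every other voxel's time course. The sketch below illustrates the computation on synthetic data; the region, dimensions, and preprocessing are placeholders, not the study's pipeline.

import numpy as np

def seed_connectivity(bold, seed_mask):
    # bold: (n_voxels, n_timepoints) preprocessed BOLD signals
    # seed_mask: boolean (n_voxels,) selecting the seed region
    seed = bold[seed_mask].mean(axis=0)
    v = bold - bold.mean(axis=1, keepdims=True)
    s = seed - seed.mean()
    return (v @ s) / (np.linalg.norm(v, axis=1) * np.linalg.norm(s))

rng = np.random.default_rng(0)
bold = rng.normal(size=(1000, 240))      # 1000 voxels, 240 volumes (synthetic)
seed_mask = np.zeros(1000, dtype=bool)
seed_mask[:20] = True                    # hypothetical seed region (e.g., an S1 cluster)
fc_map = seed_connectivity(bold, seed_mask)
print(fc_map[:5])                        # connectivity of the first five voxels with the seed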
Sidarta A, van Vugt FT, Ostry DJ (2018) Somatosensory working memory in human reinforcement-based motor learning. J Neurophysiol 120:3275-3286.
Abstract | PDF
Recent studies using visuomotor adaptation and sequence
learning tasks have assessed the involvement of working
memory in the visuospatial domain. The capacity to
maintain previously performed movements in working memory
is perhaps even more important in reinforcement-based
learning to repeat accurate movements and avoid mistakes.
Using this kind of task in the present work, we tested the
relationship between somatosensory working memory and
motor learning. The first experiment involved separate
memory and motor learning tasks. In the memory task, the
participant's arm was displaced in different directions by
a robotic arm, and the participant was asked to judge
whether a subsequent test direction was one of the
previously presented directions. In the motor learning
task, participants made reaching movements to a hidden
visual target and were provided with positive feedback as
reinforcement when the movement ended in the target zone.
It was found that participants that had better
somatosensory working memory showed greater motor
learning. In a second experiment, we designed a new task
in which learning and working memory trials were
interleaved, allowing us to study participants' memory for
movements they performed as part of learning. As in the
first experiment, we found that participants with better
somatosensory working memory also learned more. Moreover,
memory performance for successful movements was better
than for movements that failed to reach the target. These
results suggest that somatosensory working memory is
involved in reinforcement motor learning and that this
memory preferentially keeps track of reinforced movements.
NEW & NOTEWORTHY The present work examined
somatosensory working memory in reinforcement-based motor
learning. Working memory performance was reliably
correlated with the extent of learning. With the use of a
paradigm in which learning and memory trials were
interleaved, memory was assessed for movements performed
during learning. Movements that received positive feedback
were better remembered than movements that did not. Thus
working memory does not track all movements equally but is
biased to retain movements that were rewarded.
Bernardi NF, Van Vugt FT, Valle-Mena R, Vahdat S, Ostry DJ (2018) Error-related persistence of motor activity in resting-state networks. J Cogn Neurosci 20:1-19.
Abstract | PDF
The relationship between neural activation during movement training and the plastic changes that survive beyond movement execution is not well understood. Here we ask whether the changes in resting-state functional connectivity observed following motor learning overlap with the brain networks that track movement error during training. Human participants learned to trace an arched trajectory using a computer mouse in an MRI scanner. Motor performance was quantified on each trial as the maximum distance from the prescribed arc. During learning, two brain networks were observed, one showing increased activations for larger movement error, comprising the cerebellum, parietal, visual, somatosensory, and cortical motor areas, and the other being more activated for movements with lower error, comprising the ventral putamen and the OFC. After learning, changes in brain connectivity at rest were found predominantly in areas that had shown increased activation for larger error during the task, specifically the cerebellum and its connections with motor, visual, and somatosensory cortex. The findings indicate that, although both errors and accurate movements are important during the active stage of motor learning, the changes in brain activity observed at rest primarily reflect networks that process errors. This suggests that error-related networks are represented in the initial stages of motor memory formation.
Milner TE, Firouzimehr Z, Babadi S, Ostry DJ (2018) Different adaptation rates to abrupt and gradual changes in environmental dynamics. Exp Brain Res 236:2923-2933.
Abstract | PDF
Adaptation to an abrupt change in the dynamics of the
interaction between the arm and the physical environment
has been reported as occurring more rapidly but with less
retention than adaptation to a gradual change in
interaction dynamics. Faster adaptation to an abrupt
change in interaction dynamics appears inconsistent with
kinematic error sensitivity, which has been shown to be
greater for small errors than large errors. However, the
comparison of adaptation rates was based on incomplete
adaptation. Furthermore, the metric which was used as a
proxy of the changing internal state, namely the linear
regression between the force disturbance and the
compensatory force (the adaptation index), does not
distinguish between internal state inaccuracy resulting
from amplitude or temporal errors. To resolve the apparent
inconsistency, we compared the evolution of the internal
state during complete adaptation to an abrupt and gradual
change in interaction dynamics. We found no difference in
the rate at which the adaptation index increased during
adaptation to a gradual compared to an abrupt change in
interaction dynamics. In addition, we separately examined
amplitude and temporal errors using different metrics, and
found that amplitude error was reduced more rapidly under
the gradual than the abrupt condition, whereas temporal
error (quantified by smoothness) was reduced more rapidly
under the abrupt condition. We did not find any
significant change in phase lag during adaptation under
either condition. Our results also demonstrate that even
after adaptation is complete, online feedback correction
still plays a significant role in the control of reaching.
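The adaptation index referred to above is the slope of a regression relating the force a participant produces against a force channel to the force that would be needed to fully compensate the field. A minimal sketch follows, with an assumed velocity-dependent field gain and noise level chosen only for illustration.

import numpy as np

def adaptation_index(ideal_force, measured_force):
    # slope of the regression of measured compensatory force on the force
    # that would exactly cancel the disturbance (1.0 = complete compensation)
    slope, _intercept = np.polyfit(ideal_force, measured_force, 1)
    return slope

# Example: a participant compensating roughly 60% of a velocity-dependent field
rng = np.random.default_rng(1)
velocity = np.linspace(0.0, 0.5, 200)                     # hand speed (m/s)
ideal = 15.0 * velocity                                   # field gain of 15 N per (m/s), an assumed value
measured = 0.6 * ideal + rng.normal(0.0, 0.2, ideal.size)
print(f"adaptation index = {adaptation_index(ideal, measured):.2f}")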
van Vugt FT, Ostry DJ (2018) The structure and acquisition of sensorimotor maps. J Cogn Neurosci 30:290-306.
Abstract | PDF
One of the puzzles of learning to talk or play a musical
instrument is how we learn which movement produces a
particular sound: an audiomotor map. Existing research has
used mappings that are already well learned such as
controlling a cursor using a computer mouse. By contrast,
the acquisition of novel sensorimotor maps was studied by
having participants learn arm movements to auditory
targets. These sounds did not come from different
directions but, like speech, were only distinguished by
their frequencies. It is shown that learning involves
forming not one but two maps: a point map connecting
sensory targets with motor commands and an error map
linking sensory errors to motor corrections. Learning a
point map is possible even when targets never repeat.
Thus, although participants make errors, there is no
opportunity to correct them because the target is
different on every trial, and therefore learning cannot be
driven by error correction. Furthermore, when the
opportunity for error correction is provided, it is seen
that acquiring error correction is itself a learning
process that changes over time and results in an error
map. In principle, the error map could be derived from the
point map, but instead, these two maps are independently
acquired and jointly enable sensorimotor control and
learning. A computational model shows that this dual
encoding is optimal and simulations based on this
architecture predict that learning the two maps results in
performance improvements comparable with those observed
empirically.
Sidarta A, Vahdat S, Bernardi NF, Ostry DJ (2016) Somatic and reinforcement-based plasticity in the initial stages of human motor learning. J Neurosci 36:11682-11692.
Abstract | PDF
As
one learns to dance or play tennis, the desired
somatosensory state is typically unknown. Trial and error
is important as motor behavior is shaped by successful and
unsuccessful movements. As an experimental model, we
designed a task in which human participants make reaching
movements to a hidden target and receive positive
reinforcement when successful. We identified somatic and
reinforcement-based sources of plasticity on the basis of
changes in functional connectivity using resting-state
fMRI before and after learning. The neuroimaging data
revealed reinforcement-related changes in both motor and
somatosensory brain areas in which a strengthening of
connectivity was related to the amount of positive
reinforcement during learning. Areas of prefrontal cortex
were similarly altered in relation to reinforcement, with
connectivity between sensorimotor areas of putamen and the
reward-related ventromedial prefrontal cortex strengthened
in relation to the amount of successful feedback received.
In other analyses, we assessed connectivity related to
changes in movement direction between trials, a type of
variability that presumably reflects exploratory
strategies during learning. We found that connectivity in
a network linking motor and somatosensory cortices
increased with trial-to-trial changes in direction.
Connectivity varied as well with the change in movement
direction following incorrect movements. Here the changes
were observed in a somatic memory and decision making
network involving ventrolateral prefrontal cortex and
second somatosensory cortex. Our results point to the idea
that the initial stages of motor learning are not wholly
motor but rather involve plasticity in somatic and
prefrontal networks related both to reward and
exploration.
Ito T, Coppola JH, Ostry DJ (2016) Speech motor learning changes the neural response to both auditory and somatosensory signals. Sci Rep 6:25926
Abstract | PDF
In
the present paper, we present evidence for the idea that
speech motor learning is accompanied by changes to the
neural coding of both auditory and somatosensory stimuli.
Participants in our experiments undergo adaptation to
altered auditory feedback, an experimental model of speech
motor learning which, like visuomotor adaptation in limb
movement, requires that participants change their speech
movements and associated somatosensory inputs to correct
for systematic real-time changes to auditory feedback. We
measure the sensory effects of adaptation by examining
changes to auditory and somatosensory event-related
responses. We find that adaptation results in progressive
changes to speech acoustical outputs that serve to correct
for the perturbation. We also observe changes in both
auditory and somatosensory event-related responses that
are correlated with the magnitude of adaptation. These
results indicate that sensory change occurs in conjunction
with the processes involved in speech motor adaptation.
Ostry DJ, Gribble PL (2016) Sensory plasticity in human motor learning. Trends Neurosci 39:114-123
Abstract | PDF
There is accumulating evidence from behavioral,
neurophysiological, and neuroimaging studies that the
acquisition of motor skills involves both perceptual and
motor learning. Perceptual learning alters movements,
motor learning, and motor networks of the brain. Motor
learning changes perceptual function and the sensory
circuits of the brain. Here, we review studies of both
human limb movement and speech that indicate that
plasticity in sensory and motor systems is reciprocally
linked. Taken together, this points to an approach to
motor learning in which perceptual learning and sensory
plasticity have a fundamental role. Trends: Sensorimotor
adaptation results in changes to sensory systems and
sensory networks in the brain. Perceptual learning
modifies sensory systems and directly alters the motor
networks of the brain. Perceptual changes associated with
sensorimotor adaptation are durable and occur in parallel
with motor learning.
Ito T, Ostry DJ, Gracco VL (2015) Somatosensory event-related potentials from orofacial skin stretch stimulation. J Vis Exp e53621.
Abstract | PDF
Cortical processing associated with orofacial
somatosensory function in speech has received limited
experimental attention due to the difficulty of providing
precise and controlled stimulation. This article
introduces a technique for recording somatosensory
event-related potentials (ERP) that uses a novel
mechanical stimulation method involving skin deformation
using a robotic device. Controlled deformation of the
facial skin is used to modulate kinesthetic inputs through
excitation of cutaneous mechanoreceptors. By combining
somatosensory stimulation with electroencephalographic
recording, somatosensory evoked responses can be
successfully measured at the level of the cortex.
Somatosensory stimulation can be combined with the
stimulation of other sensory modalities to assess
multisensory interactions. For speech, orofacial
stimulation is combined with speech sound stimulation to
assess the contribution of multi-sensory processing
including the effects of timing differences. The ability
to precisely control orofacial somatosensory stimulation
during speech perception and speech production with ERP
recording is an important tool that provides new insight
into the neural organization and neural representations
for speech.
Bernardi NF, Darainy M, Ostry DJ (2015) Somatosensory contribution to the early stages of motor skill learning. J Neurosci 35:14316-14326.
Abstract | PDF
The early stages of motor skill acquisition are often
marked by uncertainty about the sensory and motor goals of
the task, as is the case in learning to speak or learning
the feel of a good tennis serve. Here we present an
experimental model of this early learning process, in
which targets are acquired by exploration and
reinforcement rather than sensory error. We use this model
to investigate the relative contribution of motor and
sensory factors to human motor learning. Participants make
active reaching movements or matched passive movements to
an unseen target using a robot arm. We find that learning
through passive movements paired with reinforcement is
comparable with learning associated with active movement,
both in terms of magnitude and durability, with
improvements due to training still observable at a 1 week
retest. Motor learning is also accompanied by changes in
somatosensory perceptual acuity. No stable changes in
motor performance are observed for participants that
train, actively or passively, in the absence of
reinforcement, or for participants who are given explicit
information about target position in the absence of
somatosensory experience. These findings indicate that the
somatosensory system dominates learning in the early
stages of motor skill acquisition.
Lametti DR, Rochet-Capellan A, Neufeld E, Shiller DM, Ostry DJ (2014) Plasticity in the human speech motor system drives changes in speech perception. J Neurosci 34:10339-10346.
Abstract | PDF
Recent studies of human speech motor learning suggest that
learning is accompanied by changes in auditory perception.
But what drives the perceptual change? Is it a consequence
of changes in the motor system? Or is it a result of
sensory inflow during learning? Here, subjects
participated in a speech motor-learning task involving
adaptation to altered auditory feedback and they were
subsequently tested for perceptual change. In two separate
experiments, involving two different auditory perceptual
continua, we show that changes in the speech motor system
that accompany learning drive changes in auditory speech
perception. Specifically, we obtained changes in speech
perception when adaptation to altered auditory feedback
led to speech production that fell into the phonetic range
of the speech perceptual tests. However, a similar change
in perception was not observed when the auditory feedback
that subjects received during learning fell into the
phonetic range of the perceptual tests. This indicates
that the central motor outflow associated with vocal
sensorimotor adaptation drives changes to the perceptual
classification of speech sounds.
Lametti DR, Krol SA, Shiller DM, Ostry DJ (2014) Brief periods of auditory perceptual training can determine the sensory targets of speech motor learning. Psychol Sci. 25:1325-1336.
Abstract | PDF
The perception of speech is notably malleable in adults,
yet alterations in perception seem to have little impact
on speech production. However, we hypothesized that speech
perceptual training might immediately influence speech
motor learning. To test this, we paired a speech
perceptual-training task with a speech motor-learning
task. Subjects performed a series of perceptual tests
designed to measure and then manipulate the perceptual
distinction between the words head and had. Subjects then
produced head with the sound of the vowel altered in real
time so that they heard themselves through headphones
producing a word that sounded more like had. In support of
our hypothesis, the amount of motor learning in response
to the voice alterations depended on the perceptual
boundary acquired through perceptual training. The studies
show that plasticity in adults' speech perception can have
immediate consequences for speech production in the
context of speech learning.
Ito T, Johns AR, Ostry DJ (2014) Left lateralized enhancement of orofacial somatosensory processing due to speech sounds. J Speech Lang Hear Res. 56:1875-81.
Abstract | PDF
PURPOSE:
Somatosensory information associated with speech articulatory movements affects the perception of speech sounds and vice versa, suggesting an intimate linkage between speech production and perception systems. However, it is unclear which cortical processes are involved in the interaction between speech sounds and orofacial somatosensory inputs. The authors examined whether speech sounds modify orofacial somatosensory cortical potentials that were elicited using facial skin perturbations.
METHOD:
Somatosensory event-related potentials in EEG were recorded in 3 background sound conditions (pink noise, speech sounds, and nonspeech sounds) and also in a silent condition. Facial skin deformations that are similar in timing and duration to those experienced in speech production were used for somatosensory stimulation.
RESULTS:
The authors found that speech sounds reliably enhanced the first negative peak of the somatosensory event-related potential when compared with the other 3 sound conditions. The enhancement was evident at electrode locations above the left motor and premotor area of the orofacial system. The result indicates that speech sounds interact with somatosensory cortical processes that are produced by speech-production-like patterns of facial skin stretch.
CONCLUSION:
Neural circuits in the left hemisphere, presumably in left motor and premotor cortex, may play a prominent role in the interaction between auditory inputs and speech-relevant somatosensory processing.
Vahdat S, Darainy M, Ostry DJ (2014) Structure of plasticity in human sensory and motor networks due to perceptual learning. J Neurosci 34:2451-63.
Abstract | PDF
As
we begin to acquire a new motor skill, we face the dual
challenge of determining and refining the somatosensory
goals of our movements and establishing the best motor
commands to achieve our ends. The two typically proceed in
parallel, and accordingly it is unclear how much of skill
acquisition is a reflection of changes in sensory systems
and how much reflects changes in the brain's motor areas.
Here we have intentionally separated perceptual and motor
learning in time so that we can assess functional changes
to human sensory and motor networks as a result of
perceptual learning. Our subjects underwent fMRI scans of
the resting brain before and after a somatosensory
discrimination task. We identified changes in functional
connectivity that were due to the effects of perceptual
learning on movement. For this purpose, we used a neural
model of the transmission of sensory signals from
perceptual decision making through to motor action. We
used this model in combination with a partial correlation
technique to parcel out those changes in connectivity
observed in motor systems that could be attributed to
activity in sensory brain regions. We found that, after
removing effects that are linearly correlated with
somatosensory activity, perceptual learning results in
changes to frontal motor areas that are related to the
effects of this training on motor behavior and learning.
This suggests that perceptual learning produces changes to
frontal motor areas of the brain and may thus contribute
directly to motor learning.
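The partial-correlation logic described above can be illustrated with a minimal sketch: remove the component of each motor-area time course that is linearly predicted by a somatosensory seed, then correlate the residuals. The data, region labels, and regression setup below are hypothetical and are not the authors' analysis pipeline.

```python
# Minimal sketch of removing linearly correlated somatosensory activity
# before correlating two motor-area time courses (simulated data).
import numpy as np

rng = np.random.default_rng(0)
n = 200                                 # number of resting-state volumes
s1 = rng.standard_normal(n)             # somatosensory seed time course
m1 = 0.6 * s1 + rng.standard_normal(n)  # frontal motor region 1 (hypothetical)
m2 = 0.5 * s1 + rng.standard_normal(n)  # frontal motor region 2 (hypothetical)

def residualize(y, x):
    """Remove the component of y linearly predicted by x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

r_partial = np.corrcoef(residualize(m1, s1), residualize(m2, s1))[0, 1]
print(f"motor-motor correlation, controlling for the S1 seed: {r_partial:.3f}")
```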
Darainy M, Vahdat S, Ostry DJ (2013) Perceptual learning in sensorimotor adaptation. J Neurophysiol 110: 2152-2162.
Abstract | PDF
Motor learning often involves situations in which the
somatosensory targets of movement are initially poorly
defined, as, for example, in learning to speak or learning
the feel of a proper tennis serve. Under these conditions,
motor skill acquisition presumably requires perceptual as
well as motor learning. That is, it engages both the
progressive shaping of sensory targets and associated
changes in motor performance. In the present paper, we
test the idea that perceptual learning alters
somatosensory function and in so doing produces changes to
motor performance and sensorimotor adaptation. Subjects in
these experiments undergo perceptual training in which a
robotic device passively moves the arm on one of a set of
fan shaped trajectories. Subjects are required to indicate
whether the robot moved the limb to the right or the left
and feedback is provided. Over the course of training both
the perceptual boundary and acuity are altered. The
perceptual learning is observed to improve both the rate
and extent of learning in a subsequent sensorimotor
adaptation task and the benefits persist for at least 24
hours. The improvement in the present studies is obtained
regardless of whether the perceptual boundary shift serves
to systematically increase or decrease error on subsequent
movements. The beneficial effects of perceptual training
are found to be substantially dependent upon reinforced
decision-making in the sensory domain. Passive-movement
training on its own is less able to alter subsequent
learning in the motor system. Overall, this study suggests
perceptual learning plays an integral role in motor
learning.
Bernardi NF, Darainy M, Bricolo E, Ostry DJ (2013) Observing motor learning produces somatosensory change. J Neurophysiol 110: 1804-1810.
Abstract | PDF
Observing the actions of others has been shown to affect
motor learning, but does it have effects on sensory
systems as well? It has been recently shown that motor
learning that involves actual physical practice is also
associated with plasticity in the somatosensory system.
Here, we assessed the idea that observational learning
likewise changes somatosensory function. We evaluated
changes in somatosensory function after human subjects
watched videos depicting motor learning. Subjects first
observed video recordings of reaching movements either in
a clockwise or counterclockwise force field. They were
then trained in an actual force-field task that involved a
counterclockwise load. Measures of somatosensory function
were obtained before and after visual observation and also
following force-field learning. Consistent with previous
reports, video observation promoted motor learning. We
also found that somatosensory function was altered
following observational learning, both in direction and in
magnitude, in a manner similar to that which occurs when
motor learning is achieved through actual physical
practice. Observation of the same sequence of movements in
a randomized order did not result in somatosensory
perceptual change. Observational learning and real
physical practice appear to tap into the same capacity for
sensory change in that subjects that showed a greater
change following observational learning showed a reliably
smaller change following physical motor learning. We
conclude that effects of observing motor learning extend
beyond the boundaries of traditional motor circuits, to
include somatosensory representations.
Ito S, Darainy M, Sasaki M, Ostry DJ (2013) Computational model of motor learning and perceptual change. Biol Cybern 107:653-667.
Abstract | PDF
Motor learning in the context of arm reaching movements
has been frequently investigated using the paradigm of
force-field learning. It has been recently shown that
changes to somatosensory perception are likewise
associated with motor learning. Changes in perceptual
function may be the reason that when the perturbation is
removed following motor learning, the hand trajectory does
not return to a straight line path even after several
dozen trials. To explain the computational mechanisms that
produce these characteristics, we propose a motor control
and learning scheme using a simplified two-link system in
the horizontal plane. We represent learning as the
adjustment of desired joint-angular trajectories so as to
achieve the reference trajectory of the hand. The
convergence of the actual hand movement to the reference
trajectory is proved by using a Lyapunov-like lemma, and
the result is confirmed using computer simulations. The
model assumes that changes in the desired hand trajectory
influence the perception of hand position and this in turn
affects movement control. Our computer simulations support
the idea that perceptual change may come as a result of
adjustments to movement planning with motor learning.
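A highly simplified sketch of the central idea, adjusting a planned trajectory from trial to trial so that the hand reaches the reference path, is given below. It is a one-dimensional toy with an invented learning rate and perturbation size, not the two-link joint-space model or the Lyapunov analysis in the paper; it only illustrates why an after-effect remains once the perturbation is removed.

```python
# Toy illustration: learning as adjustment of a planned (desired) path
# so that the actual hand path matches a reference path under a
# constant lateral perturbation. All values are hypothetical.
import numpy as np

reference = np.linspace(0.0, 0.1, 50)   # reference lateral hand path (m)
desired = reference.copy()              # internally planned path
force_bias = 0.02                       # lateral displacement from the load (m)
alpha = 0.5                             # learning rate (hypothetical)

for trial in range(30):
    actual = desired + force_bias       # load displaces the hand laterally
    error = reference - actual          # kinematic error on this trial
    desired = desired + alpha * error   # shift the planned path to cancel it

print(f"residual error after training: {np.abs(reference - (desired + force_bias)).max():.4f} m")
print(f"after-effect if load removed:  {np.abs(reference - desired).max():.4f} m")
```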
Nasir SM, Darainy M, Ostry DJ (2013) Sensorimotor adaptation changes the neural coding of somatosensory stimuli. J. Neurophysiol. 109:2077-85.
Abstract | PDF
Motor learning is reflected in changes to the brain's
functional organization as a result of experience. We show
here that these changes are not limited to motor areas of
the brain and indeed that motor learning also changes
sensory systems. We test for plasticity in sensory systems
using somatosensory evoked potentials (SEPs). A robotic
device is used to elicit somatosensory inputs by
displacing the arm in the direction of applied force
during learning. We observe that following learning there
are short latency changes to the response in somatosensory
areas of the brain that are reliably correlated with the
magnitude of motor learning: subjects who learn more show
greater changes in SEP magnitude. The effects we observe
are tied to motor learning. When the limb is displaced
passively, such that subjects experience similar movements
but without experiencing learning, no changes in the
evoked response are observed. Sensorimotor adaptation thus
alters the neural coding of somatosensory stimuli.
Mattar AAG, Darainy M, Ostry DJ (2013) Motor learning and
its sensory effects: The time course of perceptual change,
and its presence with gradual introduction of load. J.
Neurophysiol. 109:782-91. Abstract | PDF
A
complex interplay has been demonstrated between motor and
sensory systems. We showed recently that motor learning
leads to changes in the sensed position of the limb (Ostry
DJ, Darainy M, Mattar AA, Wong J, Gribble PL. J Neurosci
30: 5384-5393, 2010). Here, we document further the links
between motor learning and changes in somatosensory
perception. To study motor learning, we used a force field
paradigm in which subjects learn to compensate for forces
applied to the hand by a robotic device. We used a task in
which subjects judge lateral displacements of the hand to
study somatosensory perception. In a first experiment, we
divided the motor learning task into incremental phases
and tracked sensory perception throughout. We found that
changes in perception occurred at a slower rate than
changes in motor performance. A second experiment tested
whether awareness of the motor learning process is
necessary for perceptual change. In this experiment,
subjects were exposed to a force field that grew gradually
in strength. We found that the shift in sensory perception
occurred even when awareness of motor learning was
reduced. These experiments argue for a link between motor
learning and changes in somatosensory perception, and they
are consistent with the idea that motor learning drives
sensory change.
Lametti DR, Nasir S, Ostry DJ (2012) Sensory preference in
speech production revealed by simultaneous alteration of
auditory and somatosensory feedback. J Neurosci
32:9351-9359. Abstract | PDF
The idea that humans learn and maintain accurate speech by
carefully monitoring auditory feedback is widely held. But
this view neglects the fact that auditory feedback is
highly correlated with somatosensory feedback during
speech production. Somatosensory feedback from speech
movements could be a primary means by which cortical
speech areas monitor the accuracy of produced speech. We
tested this idea by placing the somatosensory and auditory
systems in competition during speech motor learning. To do
this, we combined two speech-learning paradigms to
simultaneously alter somatosensory and auditory feedback
in real time as subjects spoke. Somatosensory feedback was
manipulated by using a robotic device that altered the
motion path of the jaw. Auditory feedback was manipulated
by changing the frequency of the first formant of the
vowel sound and playing back the modified utterance to the
subject through headphones. The amount of compensation for
each perturbation was used as a measure of sensory
reliance. All subjects were observed to correct for at
least one of the perturbations, but auditory feedback was
not dominant. Indeed, some subjects showed a stable
preference for either somatosensory or auditory feedback
during speech.
Rochet-Capellan A, Richer L, Ostry DJ (2012) Non-homogeneous
transfer reveals specificity in speech motor learning, J
Neurophysiol 107(6):1711-1717. Abstract | PDF
Does motor learning generalize to new situations that are
not experienced during training, or is motor learning
essentially specific to the training situation? In the
present experiments, we use speech production as a model
to investigate generalization in motor learning. We tested
for generalization from training to transfer utterances by
varying the acoustical similarity between these two sets
of utterances. During the training phase of the
experiment, subjects received auditory feedback that was
altered in real time as they repeated a single
consonant-vowel-consonant utterance. Different groups of subjects
were trained with different consonant-vowel-consonant
utterances, which differed from a subsequent transfer
utterance in terms of the initial consonant or vowel.
During the adaptation phase of the experiment, we observed
that subjects in all groups progressively changed their
speech output to compensate for the perturbation (altered
auditory feedback). After learning, we tested for
generalization by having all subjects produce the same
single transfer utterance while receiving unaltered
auditory feedback. We observed limited transfer of
learning, which depended on the acoustical similarity
between the training and the transfer utterances. The
gradients of generalization observed here are comparable
to those observed in limb movement. The present findings
are consistent with the conclusion that speech learning
remains specific to individual instances of learning.
Mattar AAG, Nasir SM, Darainy M, Ostry DJ (2011) Sensory
change following motor learning. in Green AM, Chapman CE,
Kalaska JF and Lepore F (Eds), Progress in Brain Research,
Volume 191 (pp 29-42). Abstract | PDF
Here we describe two studies linking perceptual change
with motor learning. In the first, we document persistent
changes in somatosensory perception that occur following
force field learning. Subjects learned to control a
robotic device that applied forces to the hand during arm
movements. This led to a change in the sensed position of
the limb that lasted at least 24 h. Control experiments
revealed that the sensory change depended on motor
learning. In the second study, we describe changes in the
perception of speech sounds that occur following speech
motor learning. Subjects adapted control of speech
movements to compensate for loads applied to the jaw by a
robot. Perception of speech sounds was measured before and
after motor learning. Adapted subjects showed a consistent
shift in perception. In contrast, no consistent shift was
seen in control subjects and subjects that did not adapt
to the load. These studies suggest that motor learning
changes both sensory and motor function.
Vahdat S, Darainy M, Milner TE, Ostry DJ (2011) Functionally
specific changes in resting-state sensorimotor networks
after motor learning. J Neurosci. 31:16907-16915. Abstract | PDF
Motor learning changes the activity of cortical motor and
subcortical areas of the brain, but does learning affect
sensory systems as well? We examined in humans the effects
of motor learning using fMRI measures of functional
connectivity under resting conditions and found persistent
changes in networks involving both motor and somatosensory
areas of the brain. We developed a technique that allows
us to distinguish changes in functional connectivity that
can be attributed to motor learning from those that are
related to perceptual changes that occur in conjunction
with learning. Using this technique, we identified a new
network in motor learning involving second somatosensory
cortex, ventral premotor cortex, and supplementary motor
cortex whose activation is specifically related to
perceptual changes that occur in conjunction with motor
learning. We also found changes in a network comprising
cerebellar cortex, primary motor cortex, and dorsal
premotor cortex that were linked to the motor aspects of
learning. In each network, we observed highly reliable
linear relationships between neuroplastic changes and
behavioral measures of either motor learning or perceptual
function. Motor learning thus results in functionally
specific changes to distinct resting-state networks in the
brain.
Rochet-Capellan A, Ostry DJ (2011) Simultaneous acquisition
of multiple auditory-motor transformations in speech. J
Neurosci. 31:2648-2655. Abstract | PDF
The brain easily generates the movement that is needed in
a given situation. Yet surprisingly, the results of
experimental studies suggest that it is difficult to
acquire more than one skill at a time. To do so, it has
generally been necessary to link the required movement to
arbitrary cues. In the present study, we show that speech
motor learning provides an informative model for the
acquisition of multiple sensorimotor skills. During
training, subjects were required to repeat aloud
individual words in random order while auditory feedback
was altered in real-time in different ways for the
different words. We found that subjects can quite readily
and simultaneously modify their speech movements to
correct for these different auditory transformations. This
multiple learning occurs effortlessly without explicit
cues and without any apparent awareness of the
perturbation. The ability to simultaneously learn several
different auditory-motor transformations is consistent
with the idea that, in speech motor learning, the brain
acquires instance-specific memories. The results support
the hypothesis that speech motor learning is fundamentally
local.
Ito T, Ostry DJ (2010) Somatosensory contribution to motor
learning due to facial skin deformation. J Neurophysiol
104:1230-1230. Abstract | PDF
Motor learning is dependent on kinesthetic information
that is obtained both from cutaneous afferents and from
muscle receptors. In human arm movement, information from
these two kinds of afferents is largely correlated. The
facial skin offers a unique situation in which there are
plentiful cutaneous afferents and essentially no muscle
receptors and, accordingly, experimental manipulations
involving the facial skin may be used to assess the
possible role of cutaneous afferents in motor learning. We
focus here on the information for motor learning provided
by the deformation of the facial skin and the motion of
the lips in the context of speech. We used a robotic
device to slightly stretch the facial skin lateral to the
side of the mouth in the period immediately preceding
movement. We found that facial skin stretch increased lip
protrusion in a progressive manner over the course of a
series of training trials. The learning was manifest in a
changed pattern of lip movement, when measured after
learning in the absence of load. The newly acquired motor
plan generalized partially to another speech task that
involved a lip movement of different amplitude. Control
tests indicated that the primary source of the observed
adaptation was sensory input from cutaneous afferents. The
progressive increase in lip protrusion over the course of
training fits with the basic idea that change in sensory
input is attributed to motor performance error. Sensory
input, which in the present study precedes the target
movement, is credited to the target-related motion, even
though the skin stretch is released prior to movement
initiation. This supports the idea that the nervous system
generates motor commands on the assumption that sensory
input and kinematic error are in register.
Lametti DR, Ostry DJ (2010) Postural constraint on movement
variability. J Neurophysiol 104:1061-1067. Abstract | PDF
Movements are inherently variable. When we move to a
particular point in space, a cloud of final limb positions
is observed around the target. Previously we noted that
patterns of variability at the end of movement to a
circular target were not circular, but instead reflected
patterns of limb stiffness: in directions where limb
stiffness was high, variability in end position was low,
and vice versa. Here we examine the determinants of
variability at movement end in more detail. To do this, we
have subjects move the handle of a robotic device from
different starting positions into a circular target. We
use position servo-controlled displacements of the robot's
handle to measure limb stiffness at the end of movement
and we also record patterns of end position variability.
To examine the effect of change in posture on movement
variability, we use a visuomotor transformation in which
we change the limb configuration and also the actual
movement target, while holding constant the visual
display. We find that, regardless of movement direction,
patterns of variability at the end of movement vary
systematically with limb configuration and are also
related to patterns of limb stiffness, which are likewise
configuration dependent. The result suggests that postural
configuration determines the base level of movement
variability, on top of which control mechanisms can act to
further alter variability.
Mattar AAG, Ostry DJ (2010) Generalization of dynamics
learning across changes in movement amplitude. J
Neurophysiol 104:426-438.
Abstract | PDF
Studies on generalization show the nature of how learning
is encoded in the brain. Previous studies have shown
rather limited generalization of dynamics learning across
changes in movement direction, a finding that is
consistent with the idea that learning is primarily local.
In contrast, studies show a broader pattern of
generalization across changes in movement amplitude,
suggesting a more general form of learning. To understand
this difference, we performed an experiment in which
subjects held a robotic manipulandum and made movements to
targets along the body midline. Subjects were trained in a
velocity-dependent force field while moving to a 15 cm
target. After training, subjects were tested for
generalization using movements to a 30 cm target. We used
force channels in conjunction with movements to the 30 cm
target to assess the extent of generalization. Force
channels restricted lateral movements and allowed us to
measure force production during generalization. We
compared actual lateral forces to the forces expected if
dynamics learning generalized fully. We found that, during
the test for generalization, subjects produced reliably
less force than expected. Force production was appropriate
for the portion of the transfer movement in which
velocities corresponded to those experienced with the 15
cm target. Subjects failed to produce the expected forces
when velocities exceeded those experienced in the training
task. This suggests that dynamics learning generalizes
little beyond the range of one's experience. Consistent
with this result, subjects who trained on the 30 cm target
showed full generalization to the 15 cm target. We
performed two additional experiments that show that
interleaved trials to the 30 cm target during training on
the 15 cm target can resolve the difference between the
current results and those reported previously.
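For readers unfamiliar with the paradigm, a velocity-dependent (curl) force field of the kind used in these studies can be written as F = Bv, a force proportional and perpendicular to hand velocity. The sketch below uses an illustrative gain; the value is not taken from the paper.

```python
# Sketch of a velocity-dependent curl field: the robot pushes
# perpendicular to the hand velocity, F = B @ v. Gain is illustrative.
import numpy as np

B = np.array([[0.0, 15.0],
              [-15.0, 0.0]])    # curl-field gain (N per m/s), hypothetical

def field_force(hand_velocity):
    """Force applied by the robot for a given hand velocity (m/s)."""
    return B @ hand_velocity

v = np.array([0.0, 0.4])        # forward movement at 0.4 m/s
print(field_force(v))           # lateral push of about 6 N
```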
Ostry DJ, Darainy M, Mattar AAG, Wong J, Gribble PL (2010)
Somatosensory plasticity and motor learning. J Neurosci
30:5384-5393. Abstract | PDF
Motor learning is dependent upon plasticity in motor areas
of the brain, but does it occur in isolation, or does it
also result in changes to sensory systems? We examined
changes to somatosensory function that occur in
conjunction with motor learning. We found that even after
periods of training as brief as 10 min, sensed limb
position was altered and the perceptual change persisted
for 24 h. The perceptual change was reflected in
subsequent movements; limb movements following learning
deviated from the prelearning trajectory by an amount that
was not different in magnitude and in the same direction
as the perceptual shift. Crucially, the perceptual change
was dependent upon motor learning. When the limb was
displaced passively such that subjects experienced similar
kinematics but without learning, no sensory change was
observed. The findings indicate that motor learning
affects not only motor areas of the brain but changes
sensory function as well.
Nasir SM, Ostry DJ (2009) Auditory plasticity and speech
motor learning. Proc Natl Acad Sci U S A 106:20470-20475. Abstract | PDF
Is
plasticity in sensory and motor systems linked? Here, in
the context of speech motor learning and perception, we
test the idea that sensory function is modified by motor
learning and, in particular, that speech motor learning
affects a speaker's auditory map. We assessed speech motor
learning by using a robotic device that displaced the jaw
and selectively altered somatosensory feedback during
speech. We found that with practice speakers progressively
corrected for the mechanical perturbation and after motor
learning they also showed systematic changes in their
perceptual classification of speech sounds. The perceptual
shift was tied to motor learning. Individuals that
displayed greater amounts of learning also showed greater
perceptual change. Perceptual change was not observed in
control subjects who produced the same movements but in
the absence of a force field, nor in subjects who
experienced the force field but failed to adapt to the
mechanical load. The perceptual effects observed here
indicate the involvement of the somatosensory system in
the neural processing of speech sounds and suggest that
speech motor learning results in changes to auditory
perceptual function.
Laboissiere R, Lametti DR, Ostry DJ (2009) Impedance control
and its relation to precision in orofacial movement. J
Neurophysiol 102:523-531. Abstract | PDF
Speech production involves some of the most precise and
finely timed patterns of human movement. Here, in the
context of jaw movement in speech, we show that spatial
precision in speech production is systematically
associated with the regulation of impedance and in
particular, with jaw stiffness, a measure of resistance to
displacement. We estimated stiffness and also variability
during movement using a robotic device to apply brief
force pulses to the jaw. Estimates of stiffness were
obtained using the perturbed position and force trajectory
and an estimate of what the trajectory would be in the
absence of load. We estimated this reference trajectory
using a new technique based on Fourier analysis. A
moving-average (MA) procedure was used to estimate
stiffness by modeling restoring force as the moving
average of previous jaw displacements. The stiffness
matrix was obtained from the steady state of the MA model.
We applied this technique to data from 31 subjects whose
jaw movements were perturbed during speech utterances and
kinematically matched nonspeech movements. We observed
systematic differences in stiffness over the course of
jaw-lowering and jaw-raising movements that were
correlated with measures of kinematic variability. Jaw
stiffness was high and variability was low early and late
in the movement when the jaw was elevated. Stiffness was
low and variability was high in the middle of movement
when the jaw was lowered. Similar patterns were observed
for speech and nonspeech conditions. The systematic
relationship between stiffness and variability points to
the idea that stiffness regulation is integral to the
control of orofacial movement variability.
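The moving-average idea can be sketched in one dimension: model the restoring force as a weighted sum of current and past displacements, fit the weights by least squares, and take their sum as the steady-state stiffness. The simulated data, lag count, and weights below are hypothetical; the actual analysis was multidimensional and used measured jaw perturbation data.

```python
# 1-D sketch of a moving-average (MA) stiffness estimate on simulated data.
import numpy as np

rng = np.random.default_rng(1)
n_lags, n_samples = 5, 500
true_weights = np.array([40.0, 25.0, 15.0, 10.0, 10.0])  # N/m per lag, hypothetical

disp = rng.standard_normal(n_samples) * 0.002             # displacement record (m)
force = np.zeros(n_samples)
for t in range(n_lags - 1, n_samples):
    lags = disp[t - n_lags + 1:t + 1][::-1]                # disp[t], disp[t-1], ...
    force[t] = true_weights @ lags + rng.normal(0, 0.01)   # restoring force + noise

# Fit the MA coefficients by least squares on the lagged design matrix.
rows, targets = [], []
for t in range(n_lags - 1, n_samples):
    rows.append(disp[t - n_lags + 1:t + 1][::-1])
    targets.append(force[t])
X, y = np.array(rows), np.array(targets)
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Steady-state stiffness = sum of the MA coefficients (true value: 100 N/m).
print(f"steady-state stiffness estimate: {w_hat.sum():.1f} N/m")
```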
Darainy M, Mattar AAG, Ostry DJ (2009) Effects of human arm
impedance on dynamics learning and generalization. J
Neurophysiol 101:3158-3168. Abstract | PDF
Previous studies have demonstrated anisotropic patterns of
hand impedance under static conditions and during
movement. Here we show that the pattern of kinematic error
observed in studies of dynamics learning is associated
with this anisotropic impedance pattern. We also show that
the magnitude of kinematic error associated with this
anisotropy dictates the amount of motor learning and,
consequently, the extent to which dynamics learning
generalizes. Subjects were trained to reach to visual
targets while holding a robotic device that applied forces
during movement. On infrequent trials, the load was
removed and the resulting kinematic error was measured. We
found a strong correlation between the pattern of
kinematic error and the anisotropic pattern of hand
stiffness. In a second experiment subjects were trained
under force-field conditions to move in two directions:
one in which the dynamic perturbation was in the direction
of maximum arm impedance and the associated kinematic
error was low and another in which the perturbation was in
the direction of low impedance where kinematic error was
high. Generalization of learning was assessed in a
reference direction that lay intermediate to the two
training directions. We found that transfer of learning
was greater when training occurred in the direction
associated with the larger kinematic error. This suggests
that the anisotropic patterns of impedance and kinematic
error determine the magnitude of dynamics learning and the
extent to which it generalizes.
Ito T, Tiede M, Ostry DJ (2009) Somatosensory function in
speech perception. Proc Natl Acad Sci U S A 106:1245-1248. Abstract | PDF
Somatosensory signals from the facial skin and muscles of
the vocal tract provide a rich source of sensory input in
speech production. We show here that the somatosensory
system is also involved in the perception of speech. We
use a robotic device to create patterns of facial skin
deformation that would normally accompany speech
production. We find that when we stretch the facial skin
while people listen to words, it alters the sounds they
hear. The systematic perceptual variation we observe in
conjunction with speech-like patterns of skin stretch
indicates that somatosensory inputs affect the neural
processing of speech sounds and shows the involvement of
the somatosensory system in the perceptual processing in
speech.
Nasir SM, Ostry DJ (2008) Speech motor learning in
profoundly deaf adults. Nat Neurosci 11:1217-1222. Abstract | PDF
Speech production, like other sensorimotor behaviors,
relies on multiple sensory inputs-audition, proprioceptive
inputs from muscle spindles and cutaneous inputs from
mechanoreceptors in the skin and soft tissues of the vocal
tract. However, the capacity for intelligible speech by
deaf speakers suggests that somatosensory input alone may
contribute to speech motor control and perhaps even to
speech learning. We assessed speech motor learning in
cochlear implant recipients who were tested with their
implants turned off. A robotic device was used to alter
somatosensory feedback by displacing the jaw during
speech. We found that implant subjects progressively
adapted to the mechanical perturbation with training.
Moreover, the corrections that we observed were for
movement deviations that were exceedingly small, on the
order of millimeters, indicating that speakers have
precise somatosensory expectations. Speech motor learning
is substantially dependent on somatosensory input.
Darainy M, Ostry DJ (2008) Muscle cocontraction following
dynamics learning. Exp Brain Res 190:153-163. Abstract | PDF
Coactivation of antagonist muscles is readily observed
early in motor learning, in interactions with unstable
mechanical environments and in motor system pathologies.
Here we present evidence that the nervous system uses
coactivation control far more extensively and that
patterns of cocontraction during movement are closely tied
to the specific requirements of the task. We have examined
the changes in cocontraction that follow dynamics learning
in tasks that are thought to involve finely sculpted
feedforward adjustments to motor commands. We find that,
even following substantial training, cocontraction varies
in a systematic way that depends on both movement
direction and the strength of the external load. The
proportion of total activity that is due to cocontraction
nevertheless remains remarkably constant. Moreover, long
after indices of motor learning and electromyographic
measures have reached asymptotic levels, cocontraction
still accounts for a significant proportion of total
muscle activity in all phases of movement and in all load
conditions. These results show that even following
dynamics learning in predictable and stable environments,
cocontraction forms a central part of the means by which
the nervous system regulates movement.
Andres M, Ostry DJ, Nicol F, Paus T (2008) Time course of
number magnitude interference during grasping. Cortex
44:414-419. Abstract | PDF
In
the present study, we recorded the kinematics of grasping
movements in order to measure the possible interference
caused by digits printed on the visible face of the
objects to be grasped. The aim of this approach was to test the
hypothesis that digit magnitude processing shares common
mechanisms with object size estimate during grasping. In
the first stages of reaching, grip aperture was found to
be larger consequent to the presentation of digits with a
high value rather than a low one. The effect of digit
magnitude on grip aperture was more pronounced for large
objects. As the hand got closer to the object, the
influence of digit magnitude decreased and grip aperture
progressively reflected the actual size of the object. We
concluded that number magnitude may interact with grip
aperture while programming the grasping movements.
Tremblay S, Houle G, Ostry DJ (2008) Specificity of speech
motor learning. J Neurosci 28:2426-2434. Abstract | PDF
The idea that the brain controls movement using a neural
representation of limb dynamics has been a dominant
hypothesis in motor control research for well over a
decade. Speech movements offer an unusual opportunity to
test this proposal by means of an examination of transfer
of learning between utterances that are to varying degrees
matched on kinematics. If speech learning results in a
generalizable dynamics representation, then, at the least,
learning should transfer when similar movements are
embedded in phonetically distinct utterances. We tested
this idea using three different pairs of training and
transfer utterances that substantially overlap
kinematically. We find that, with these stimuli, speech
learning is highly contextually sensitive and fails to
transfer even to utterances that involve very similar
movements. Speech learning appears to be extremely local,
and the specificity of learning is incompatible with the
idea that speech control involves a generalized dynamics
representation.
Darainy M, Towhidkhah F, Ostry DJ (2007) Control of hand
impedance under static conditions and during reaching
movement. J Neurophysiol 97:2676-2685. Abstract | PDF
It
is known that humans can modify the impedance of the
musculoskeletal periphery, but the extent of this
modification is uncertain. Previous studies on impedance
control under static conditions indicate a limited ability
to modify impedance, whereas studies of impedance control
during reaching in unstable environments suggest a greater
range of impedance modification. As a first step in
accounting for this difference, we quantified the extent
to which stiffness changes from posture to movement even
when there are no destabilizing forces. Hand stiffness was
estimated under static conditions and at the same position
during both longitudinal (near to far) and lateral
movements using a position-servo technique. A new method
was developed to predict the hand "reference" trajectory
for purposes of estimating stiffness. For movements in a
longitudinal direction, there was considerable
counterclockwise rotation of the hand stiffness ellipse
relative to stiffness under static conditions. In
contrast, a small counterclockwise rotation was observed
during lateral movement. In the modeling studies, even
when we used the same modeled cocontraction level during
posture and movement, we found that there was a
substantial difference in the orientation of the stiffness
ellipse, comparable with that observed empirically.
Indeed, the main determinant of the orientation of the
ellipse in our modeling studies was the movement direction
and the muscle activation associated with movement.
Changes in the cocontraction level and the balance of
cocontraction had smaller effects. Thus even when there is
no environmental instability, the orientation of stiffness
ellipse changes during movement in a manner that varies
with movement direction.
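The orientation of a hand stiffness ellipse of the kind discussed above can be obtained from a 2 x 2 endpoint stiffness matrix by eigendecomposition, as in the sketch below; the matrix entries are illustrative values, not measurements from the study.

```python
# Sketch: principal axes of a hand stiffness ellipse from a symmetric
# 2x2 endpoint stiffness matrix (illustrative values).
import numpy as np

K = np.array([[300.0, 80.0],
              [80.0, 500.0]])               # endpoint stiffness (N/m), hypothetical

evals, evecs = np.linalg.eigh(K)            # principal stiffnesses and axes
major_axis = evecs[:, np.argmax(evals)]     # direction of maximum stiffness
orientation = np.degrees(np.arctan2(major_axis[1], major_axis[0])) % 180.0

print("principal stiffnesses (N/m):", evals)
print(f"orientation of the maximum-stiffness axis: {orientation:.1f} deg")
```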
Lametti DR, Houle G, Ostry DJ (2007) Control of movement
variability and the regulation of limb impedance. J
Neurophysiol 98:3516-3524. Abstract | PDF
Humans routinely make movements to targets that have
different accuracy requirements in different directions.
Examples extend from everyday occurrences such as grasping
the handle of a coffee cup to the more refined instance of
a surgeon positioning a scalpel. The attainment of
accuracy in situations such as these might be related to
the nervous system's capacity to regulate the limb's
resistance to displacement, or impedance. To test this
idea, subjects made movements from random starting
locations to targets that had shape-dependent accuracy
requirements. We used a robotic device to assess both limb
impedance and patterns of movement variability just as the
subject reached the target. We show that impedance
increases in directions where required accuracy is high.
Independent of target shape, patterns of limb stiffness
are seen to predict spatial patterns of movement
variability. The nervous system is thus seen to modulate
limb impedance in entirely predictable environments to aid
in the attainment of reaching accuracy.
Mattar AAG, Ostry DJ (2007) Neural averaging in motor
learning. J Neurophysiol 97:220-228. Abstract | PDF
The capacity for skill development over multiple training
episodes is fundamental to human motor function. We have
studied the process by which skills evolve with training
by progressively modifying a series of motor learning
tasks that subjects performed over a 1-mo period. In a
series of empirical and modeling studies, we show that
performance undergoes repeated modification with new
learning. Each in a series of prior training episodes
contributes such that present performance reflects a
weighted average of previous learning. Moreover, we have
observed that the relative weighting of skills learned
wholly in the past changes with time. This suggests that
the neural substrate of skill undergoes modification after
consolidation.
Mattar AAG, Ostry DJ (2007) Modifiability of generalization
in dynamics learning. J Neurophysiol 98:3321-3329. Abstract | PDF
Studies on plasticity in motor function have shown that
motor learning generalizes, such that movements in novel
situations are affected by previous training. It has been
shown that the pattern of generalization for visuomotor
rotation learning changes when training movements are made
to a wide distribution of directions. Here we have found
that for dynamics learning, the shape of the
generalization gradient is not similarly modifiable by
the extent of training within the workspace. Subjects
learned to control a robotic device during training and we
measured how subsequent movements in a reference direction
were affected. Our results show that as the angular
separation between training and test directions increased,
the extent of generalization was reduced. When training
involved multiple targets throughout the workspace, the
extent of generalization was no greater than following
training to the nearest target alone. Thus a wide range of
experience compensating for a dynamics perturbation
provided no greater benefit than localized training.
Instead, generalization was complete when training
involved targets that bounded the reference direction.
This suggests that broad generalization of dynamics
learning to movements in novel directions depends on
interpolation between instances of localized learning.
Nasir SM, Ostry DJ (2006) Somatosensory precision in speech
production. Curr Biol 16:1918-1923. Abstract | PDF
Speech production is dependent on both auditory and
somatosensory feedback. Although audition may appear to be
the dominant sensory modality in speech production,
somatosensory information plays a role that extends from
brainstem responses to cortical control. Accordingly, the
motor commands that underlie speech movements may have
somatosensory as well as auditory goals. Here we provide
evidence that, independent of the acoustics, somatosensory
information is central to achieving the precision
requirements of speech movements. We were able to
dissociate auditory and somatosensory feedback by using a
robotic device that altered the jaw's motion path, and
hence proprioception, without affecting speech acoustics.
The loads were designed to target either the consonant- or
vowel-related portion of an utterance because these are
the major sound categories in speech. We found that, even
in the absence of any effect on the acoustics, with
learning subjects corrected to an equal extent for both
kinds of loads. This finding suggests that there are
comparable somatosensory precision requirements for both
kinds of speech sounds. We provide experimental evidence
that the neural control of stiffness or impedance--the
resistance to displacement--provides for somatosensory
precision in speech production.
Darainy M, Malfait N, Towhidkhah F, Ostry DJ (2006) Transfer
and durability of acquired patterns of human arm stiffness.
Exp Brain Res 170:227-237. Abstract | PDF
We
used a robotic device to test the idea that impedance
control involves a process of learning or adaptation that
is acquired over time and permits the voluntary control of
the pattern of stiffness at the hand. The tests were
conducted in statics. Subjects were trained over the
course of three successive days to resist the effects of
one of three different kinds of mechanical loads, single
axis loads acting in the lateral direction, single axis
loads acting in the forward/backward direction and
isotropic loads that perturbed the limb in eight
directions about a circle. We found that subjects in
contact with single axis loads voluntarily modified their
hand stiffness orientation such that changes to the
direction of maximum stiffness mirrored the direction of
applied load. In the case of isotropic loads, a uniform
increase in endpoint stiffness was observed. Using a
physiologically realistic model of two-joint arm movement,
the experimentally determined pattern of impedance change
could be replicated by assuming that coactivation of elbow
and double joint muscles was independent of coactivation
of muscles at the shoulder. Moreover, using this pattern
of coactivation control we were able to replicate an
asymmetric pattern of rotation of the stiffness ellipse
that was observed empirically. The present findings are
consistent with the idea that arm stiffness is controlled
through the use of at least two independent cocontraction
commands.
Shiller DM, Houle G, Ostry DJ (2005) Voluntary control of
human jaw stiffness. J Neurophysiol 94:2207-2217. Abstract | PDF
Recent studies of human arm movement have suggested that
the control of stiffness may be important both for
maintaining stability and for achieving differences in
movement accuracy. In the present study, we have examined
the voluntary control of postural stiffness in 3D in the
human jaw. The goal is to address the possible role of
stiffness control in both stabilizing the jaw and in
achieving the differential precision requirements of
speech sounds. We previously showed that patterns of
kinematic variability in speech are systematically related
to the stiffness of the jaw. If the nervous system uses
stiffness control as a means to regulate kinematic
variation in speech, it should also be possible to show
that subjects can voluntarily modify jaw stiffness. Using
a robotic device, a series of force pulses was applied to
the jaw to elicit changes in stiffness to resist
displacement. Three orthogonal directions and three
magnitudes of forces were tested. In all conditions,
subjects increased the magnitude of jaw stiffness to
resist the effects of the applied forces. Apart from the
horizontal direction, greater increases in stiffness were
observed when larger forces were applied. Moreover,
subjects differentially increased jaw stiffness along a
vertical axis to counteract disturbances in this
direction. The observed changes in the magnitude of
stiffness in different directions suggest an ability to
control the pattern of stiffness of the jaw. The results
are interpreted as evidence that jaw stiffness can be
adjusted voluntarily, and thus may play a role in
stabilizing the jaw and in controlling movement variation
in the orofacial system.
Malfait N, Gribble PL, Ostry DJ (2005) Generalization of
motor learning based on multiple field exposures and local
adaptation. J Neurophysiol 93:3327-3338. Abstract | PDF
Previous studies have used transfer of learning over
workspace locations as a means to determine whether
subjects code information about dynamics in extrinsic or
intrinsic coordinates. Transfer has been observed when the
torque associated with joint displacement is similar
between workspace locations rather than when the mapping
between hand displacement and force is preserved, which is
consistent with muscle- or joint-based encoding. In the
present study, we address the generality of an intrinsic
coding of dynamics and examine how generalization occurs
when the pattern of torques varies over the workspace. In
two initial experiments, we examined transfer of learning
when the direction of a force field was fixed relative to
an external frame of reference. While there were no
beneficial effects of transfer following training at a
single location (Experiments 1 and 2), excellent
performance was observed at the center of the workspace
following training at two lateral locations (Experiment
2). Experiment 3 and associated simulations assessed the
characteristics of this generalization. In these studies,
we examined the patterns of transfer observed following
adaptation to force fields that were composed of two
subfields that acted in opposite directions. The
experimental and simulated data are consistent with the
idea that information about dynamics is encoded in
intrinsic coordinates. The nervous system generalizes
dynamics learning by interpolating between sets of control
signals, each locally adapted to different patterns of
torques.
Della-Maggiore V, Malfait N, Ostry DJ, Paus T (2004)
Stimulation of the posterior parietal cortex interferes with
arm trajectory adjustments during the learning of new
dynamics. J Neurosci 24:9971-9976. Abstract | PDF
Substantial neurophysiological evidence points to the
posterior parietal cortex (PPC) as playing a key role in
the coordinate transformation necessary for visually
guided reaching. Our goal was to examine the role of PPC
in the context of learning new dynamics of arm movements.
We assessed this possibility by stimulating PPC with
transcranial magnetic stimulation (TMS) while subjects
learned to make reaching movements with their right hand
in a velocity-dependent force field. We reasoned that, if
PPC is necessary to adjust the trajectory of the arm as it
interacts with a novel mechanical system, interfering with
the functioning of PPC would impair adaptation. Single
pulses of TMS were applied over the left PPC 40 msec after
the onset of movement during adaptation. As a control,
another group of subjects was stimulated over the visual
cortex. During early stages of learning, the magnitude of
the error (measured as the deviation of the hand paths)
was similar across groups. By the end of the learning
period, however, error magnitudes decreased to baseline
levels for controls but remained significantly larger for
the group stimulated over PPC. Our findings are consistent
with a role of PPC in the adjustment of motor commands
necessary for adapting to a novel mechanical environment.
Darainy M, Malfait N, Gribble PL, Towhidkhah F, Ostry DJ
(2004) Learning to control arm stiffness under static
conditions. J Neurophysiol 92:3344-3350. Abstract | PDF
We
used a robotic device to test the idea that impedance
control involves a process of learning or adaptation that
is acquired over time and permits the voluntary control of
the pattern of stiffness at the hand. The tests were
conducted in statics. Subjects were trained over the
course of three successive days to resist the effects of
one of three different kinds of mechanical loads, single
axis loads acting in the lateral direction, single axis
loads acting in the forward/backward direction and
isotropic loads that perturbed the limb in eight
directions about a circle. We found that subjects in
contact with single axis loads voluntarily modified their
hand stiffness orientation such that changes to the
direction of maximum stiffness mirrored the direction of
applied load. In the case of isotropic loads, a uniform
increase in endpoint stiffness was observed. Using a
physiologically realistic model of two-joint arm movement,
the experimentally determined pattern of impedance change
could be replicated by assuming that coactivation of elbow
and double joint muscles was independent of coactivation
of muscles at the shoulder. Moreover, using this pattern
of coactivation control we were able to replicate an
asymmetric pattern of rotation of the stiffness ellipse
that was observed empirically. The present findings are
consistent with the idea that arm stiffness is controlled
through the use of at least two independent cocontraction
commands.
Malfait N, Ostry DJ (2004) Is interlimb transfer of
force-field adaptation a "cognitive" response to the sudden
introduction of load? J Neurosci 24:8084-8089. Abstract | PDF
Recently, Shadmehr and colleagues (Criscimagna-Hemminger
et al. 2003) reported a pattern of generalization of
force-field adaptation between arms that differs from the
pattern that occurs across different configurations of the
same arm. While the intralimb pattern of generalization
points to an intrinsic encoding of dynamics, the interlimb
transfer described by these authors indicates that
information about force is represented in a frame of
reference external to the body. In the present study,
subjects adapted to a viscous curl-field in two
experimental conditions. In one condition, the field was
introduced suddenly and produced clear deviations in hand
paths; in the second condition, the field was introduced
gradually so that at no point during the adaptation
process could subjects observe or had to correct for a
substantial kinematic error. In the first case, a pattern
of interlimb transfer consistent with
Criscimagna-Hemminger et al. was observed, whereas no
transfer of learning between limbs occurred in the second
condition. The findings suggest that there is limited
transfer of fine compensatory force adjustment between
limbs. Transfer when it does occur may be largely the
results of a "cognitive" strategy that arises as a result
of the sudden introduction of load and associated
kinematic error.
Petitto LA, Holowka S, Sergio LE, Levy B, Ostry DJ (2004)
Baby hands that move to the rhythm of language: hearing
babies acquiring sign languages babble silently on the
hands. Cognition 93:43-73. Abstract | PDF
The "ba, ba, ba" sound universal to babies' babbling
around 7 months captures scientific attention because it
provides insights into the mechanisms underlying language
acquisition and vestiges of its evolutionary origins. Yet
the prevailing mystery is what is the biological basis of
babbling, with one hypothesis being that it is a
non-linguistic motoric activity driven largely by the
baby's emerging control over the mouth and jaw, and
another being that it is a linguistic activity reflecting
the babies' early sensitivity to specific
phonetic-syllabic patterns. Two groups of hearing babies
were studied over time (ages 6, 10, and 12 months), equal
in all developmental respects except for the modality of
language input (mouth versus hand): three hearing babies
acquiring spoken language (group 1: "speech-exposed") and
a rare group of three hearing babies acquiring sign
language only, not speech (group 2: "sign-exposed").
Despite this latter group's exposure to sign, the motoric
hypothesis would predict similar hand activity to that
seen in speech-exposed hearing babies because language
acquisition in sign-exposed babies does not involve the
mouth. Using innovative quantitative Optotrak 3-D
motion-tracking technology, applied here for the first
time to study infant language acquisition, we obtained
physical measurements similar to a speech spectrogram, but
for the hands. Here we discovered that the specific
rhythmic frequencies of the hands of the sign-exposed
hearing babies differed depending on whether they were
producing linguistic activity, which they produced at a
low frequency of approximately 1 Hz, versus non-linguistic
activity, which they produced at a higher frequency of
approximately 2.5 Hz, the identical class of hand activity
that the speech-exposed hearing babies produced nearly
exclusively. Surprisingly, without benefit of the mouth,
hearing sign-exposed babies alone babbled systematically
on their hands. We conclude that babbling is fundamentally
a linguistic activity and explain why the differentiation
between linguistic and non-linguistic hand activity in a
single manual modality (one distinct from the human mouth)
could only have resulted if all babies are born with a
sensitivity to specific rhythmic patterns at the heart of
human language and the capacity to use them.
Ostry DJ, Feldman AG (2003) A critical evaluation of the
force control hypothesis in motor control. Exp Brain Res
221:275-288. Abstract | PDF
The ability to formulate explicit mathematical models of
motor systems has played a central role in recent progress
in motor control research. As a result of these modeling
efforts and in particular the incorporation of concepts
drawn from control systems theory, ideas about motor
control have changed substantially. There is growing
emphasis on motor learning and particularly on predictive
or anticipatory aspects of control that are related to the
neural representation of dynamics. Two ideas have become
increasingly prominent in mathematical modeling of motor
function: forward internal models and inverse dynamics. The
notion of forward internal models, which has drawn from
work in adaptive control, arises from the recognition that
the nervous system takes account of dynamics in motion
planning. Inverse dynamics, a complementary way of
adjusting control signals to deal with dynamics, has
proved a simple means to establish the joint torques
necessary to produce desired movements. In this paper, we
review the force control formulation in which inverse
dynamics and forward internal models play a central role.
We present evidence in its favor and describe its
limitations. We note that inverse dynamics and forward
models are potential solutions to general problems in
motor control: how the nervous system establishes a mapping
between desired movements and associated control signals,
and how control signals are adjusted in the context of
motor learning, dynamics and loads. However, we find
little empirical evidence that specifically supports the
inverse dynamics or forward internal model proposals per
se. We further conclude that the central idea of the force
control hypothesis, that control levels operate through the
central specification of forces, is flawed. This is
specifically evident in the context of attempts to
incorporate physiologically realistic muscle and reflex
mechanisms into the force control model. In particular,
the formulation offers no means to shift between postures
without triggering resistance due to postural stabilizing
mechanisms.
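For reference, the inverse-dynamics computation discussed in this abstract is usually written for a planar multi-joint arm in the standard rigid-body form (a textbook expression, not a formula taken from the paper):

```latex
\tau \;=\; M(q)\,\ddot{q} \;+\; C(q,\dot{q})\,\dot{q} \;+\; J^{\mathsf{T}}(q)\,F_{\mathrm{ext}}
```

Here \tau denotes the joint torques, q the joint angles, M(q) the configuration-dependent inertia matrix, C(q,\dot{q}) the Coriolis and centripetal terms, J(q) the Jacobian relating joint and hand velocities, and F_ext any external force applied at the hand; gravity terms drop out for movements in the horizontal plane.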
Tremblay S, Shiller DM, Ostry DJ (2003) Somatosensory basis
of speech production. Nature 423:866-869. Abstract | PDF
The hypothesis that speech goals are defined acoustically
and maintained by auditory feedback is a central idea in
speech production research. An alternative proposal is
that speech production is organized in terms of control
signals that subserve movements and associated vocal-tract
configurations. Indeed, the capacity for intelligible
speech by deaf speakers suggests that somatosensory inputs
related to movement play a role in speech production-but
studies that might have documented a somatosensory
component have been equivocal. For example, mechanical
perturbations that have altered somatosensory feedback
have simultaneously altered acoustics. Hence, any
adaptation observed under these conditions may have been a
consequence of acoustic change. Here we show that
somatosensory information on its own is fundamental to the
achievement of speech movements. This demonstration
involves a dissociation of somatosensory and auditory
feedback during speech production. Over time, subjects
correct for the effects of a complex mechanical load that
alters jaw movements (and hence somatosensory feedback),
but which has no measurable or perceptible effect on
acoustic output. The findings indicate that the positions
of speech articulators and associated somatosensory inputs
constitute a goal of speech movements that is wholly
separate from the sounds produced.
Malfait N, Shiller DM, Ostry DJ (2002) Transfer of motor
learning across arm configurations. J Neurosci 22:9656-9660.
Abstract | PDF
It has been
suggested that learning of new dynamics occurs in
intrinsic coordinates. However, it has also been
suggested that elements that encode hand velocity and
hence act in an extrinsic frame of reference play a role
in the acquisition of dynamics. In order to reconcile
claims regarding the coordinate system involved in the
representation of dynamics, we have used a procedure
involving the transfer of force-field learning between
two workspace locations. Subjects made point-to-point
movements while holding a two-link manipulandum.
Subjects were first trained to make movements in a
single direction at the left of the workspace. They were
then tested for transfer of learning at the right of the
workspace. Two groups of subjects were defined. For
subjects in Group J, movements at the left and right
workspace locations were matched in terms of joint
displacements. For subjects in Group H, movements in the
two locations had the same hand displacements. Workspace
locations were chosen such that for Group J, the paths
(for training and testing) that were identical in joint
space were orthogonal in hand space. Subjects in Group J
showed good transfer between workspace locations,
whereas subjects in Group H showed poor transfer. The
results are in agreement with the idea that new dynamics
are encoded in intrinsic coordinates and that this
learning has a limited range of generalization across
joint velocities.
Abstract | PDF
Humans produce speech by controlling a complex
biomechanical apparatus to achieve desired speech sounds.
We show here that kinematic variability in speech may be
influenced by patterns of jaw stiffness. A robotic device
was used to deliver mechanical perturbations to the jaw to
quantify its stiffness in the mid-sagittal plane. Measured
jaw stiffness was anisotropic. Stiffness was greatest
along a protrusion-retraction axis and least in the
direction of jaw raising and lowering. Consistent with the
idea that speech movements reflect directional asymmetries
in jaw stiffness, kinematic variability during speech
production was found to be high in directions in which
stiffness is low and vice versa. In addition, for higher
jaw elevations, stiffness was greater and kinematic
variability was less. The observed patterns of kinematic
variability were not specific to speech: similar patterns
appeared in speech and nonspeech movements. The empirical
patterns of stiffness were replicated by using a
physiologically based model of the jaw. The simulation
studies support the idea that the pattern of jaw stiffness
is affected by musculo-skeletal geometry and
muscle-force-generating abilities, with jaw geometry being
the primary determinant of the orientation of the
stiffness ellipse.
Shiller DM, Ostry DJ, Gribble PL, Laboissiere R (2001)
Compensation for the effects of head acceleration on jaw
movement in speech. J Neurosci 21:6447-6456. Abstract | PDF
Recent studies have demonstrated the ability of subjects
to adjust the control of limb movements to counteract the
effects of self-generated loads. The degree to which
subjects change control signals to compensate for these
loads is a reflection of the extent to which forces
affecting movement are represented in motion planning.
Here, we have used empirical and modeling studies to
examine whether the nervous system compensates for loads
acting on the jaw during speech production. As subjects
walk, loads to the jaw vary with the direction and
magnitude of head acceleration. We investigated the
patterns of jaw motion resulting from these loads both in
locomotion alone and when locomotion was combined with
speech production. In locomotion alone, jaw movements were
shown to vary systematically in direction and magnitude in
relation to the acceleration of the head. In contrast,
when locomotion was combined with speech, variation in jaw
position during both consonant and vowel production was
substantially reduced. Overall, we have demonstrated that
the magnitude of load associated with head acceleration
during locomotion is sufficient to produce a systematic
change in the position of the jaw. The absence of
variation in jaw position during locomotion with speech is
thus consistent with the idea that in speech, the control
of jaw motion is adjusted in a predictive manner to offset
the effects of head acceleration.
Petitto LA, Holowka S, Sergio LE, Ostry DJ (2001) Language
rhythms in baby hand movements. Nature 413:35-36. Abstract | PDF
Suzuki M, Shiller DM, Gribble PL, Ostry DJ (2001) Relationship between cocontraction, movement kinematics and phasic muscle activity in single-joint arm movement. Exp Brain Res 140:171-181.
Abstract | PDF
Patterns of muscle coactivation provide a window into
mechanisms of limb stabilization. In the present paper
we have examined muscle coactivation in single-joint
elbow and single-joint shoulder movements and explored
its relationship to movement velocity and amplitude, as
well as phasic muscle activation patterns. Movements
were produced at several speeds and different
amplitudes, and muscle activity and movement kinematics
were recorded. Tonic levels of electromyographic (EMG)
activity following movement provided a measure of muscle
cocontraction. It was found that coactivation following
movement increased with maximum joint velocity at each
of two amplitudes. Phasic EMG activity in agonist and
antagonist muscles showed a similar correlation that was
observable even during the first 30 ms of muscle
activation. All subjects but one showed statistically
significant correlations on a trial-by-trial basis
between tonic and phasic activity levels, including the
phasic activity measure taken at the initiation of
movement. Our findings provide direct evidence that
muscle coactivation varies with movement velocity. The
data also suggest that cocontraction is linked in a
simple manner to phasic muscle activity. The similarity
in the patterns of tonic and phasic activation suggests
that the nervous system may use a simple strategy to
adjust coactivation and presumably limb impedance in
association with changes in movement speed. Moreover,
since the pattern of tonic activity varies with the
first 30 ms of phasic activity, the control of
cocontraction may be established prior to movement
onset.
Ostry DJ, Romo R (2001) Tactile shape processing. Neuron 31:173-174.
Abstract | PDF
Neuroimaging techniques may aid in the identification of
areas of the human brain that are involved in tactile
shape perception. Bodegard et al. (2001) relate
differences in the properties of tactile stimuli to
differences in areas of cortical activation to infer
tactile processing in the somatosensory network.
Gribble PL, Ostry DJ (2000) Compensation for loads during
arm movements using equilibrium-point control. Exp Brain Res
135:474-482. Abstract | PDF
A
significant problem in motor control is how information
about movement error is used to modify control signals to
achieve desired performance. A potential source of
movement error and one that is readily controllable
experimentally relates to limb dynamics and associated
movement-dependent loads. In this paper, we have used a
position control model to examine changes to control
signals for arm movements in the context of
movement-dependent loads. In the model, based on the
equilibrium-point hypothesis, equilibrium shifts are
adjusted directly in proportion to the positional error
between desired and actual movements. The model is used to
simulate multi-joint movements in the presence of both
"internal" loads due to joint interaction torques, and
externally applied loads resulting from velocity-dependent
force fields. In both cases it is shown that the model can
achieve close correspondence to empirical data using a
simple linear adaptation procedure. An important feature
of the model is that it achieves compensation for loads
during movement without the need for either coordinate
transformations between positional error and associated
corrective forces, or inverse dynamics calculations.
Gribble PL, Ostry DJ (1999) Compensation for interaction
torques during single- and multijoint limb movements. J
Neurophysiol 82:2310-2326. Abstract | PDF
During multi-joint limb movements such as reaching,
rotational forces arise at one joint due to the motions of
limb segments about other joints. We report the results of
three experiments in which we assessed the extent to which
control signals to muscles are adjusted to counteract
these "interaction torques". Human subjects performed
single- and multi-joint pointing movements involving
shoulder and elbow motion, and movement parameters related
to the magnitude and direction of interaction torques were
systematically manipulated. We examined electromyographic
(EMG) activity of shoulder and elbow muscles, and
specifically, the relationship between EMG activity and
joint interaction torque. A first set of experiments
examined single-joint movements. During both single-joint
elbow (Experiment 1) and shoulder (Experiment 2)
movements, phasic EMG activity was observed in muscles
spanning the stationary joint (shoulder muscles in
Experiment 1 and elbow muscles in Experiment 2). This
muscle activity preceded movement, and varied in amplitude
with the magnitude of upcoming interaction torque (the
load resulting from motion of the non-stationary limb
segment). In a third experiment, subjects performed
multi-joint movements involving simultaneous motion at the
shoulder and elbow. Movement amplitude and velocity at one
joint were held constant, while the direction of movement
about the other joint was varied. When the direction of
elbow motion was varied (flexion vs extension) and
shoulder kinematics were held constant, EMG activity in
shoulder muscles varied depending on the direction of
elbow motion (and hence the sign of the interaction torque
arising at the shoulder). Similarly, EMG activity in elbow
muscles varied depending on the direction of shoulder
motion, for movements in which elbow kinematics were held
constant. The results from all three experiments support
the idea that central control signals to muscles are
adjusted, in a predictive manner, to compensate for
interaction torques, that is, loads arising at one joint which
depend on motion about other joints.
Shiller DM, Ostry DJ, Gribble PL (1999) Effects of
gravitational load on jaw movements in speech. J Neurosci
19:9073-9080. Abstract | PDF
External loads arising due to the orientation of body
segments relative to gravity can affect the achievement of
movement goals. The degree to which subjects adjust
control signals to compensate for these loads is a
reflection of the extent to which forces affecting motion
are represented neurally. In the present study we assessed
whether subjects, when speaking, compensate for loads due
to the orientation of the head relative to gravity. We
used a mathematical model of the jaw to predict the
effects of control signals that are not adjusted for
changes to head orientation. The simulations predicted a
systematic change in sagittal plane jaw orientation and
horizontal position resulting from changes to the
orientation of the head. We conducted an empirical study
in which subjects were tested under the same conditions.
With one exception, empirical results were consistent with
the simulations. In both simulation and empirical studies,
the jaw was rotated closer to occlusion and translated in
an anterior direction when the head was in the prone
orientation. When the head was in the supine orientation,
the jaw was rotated away from occlusion. The findings
suggest that the nervous system does not completely
compensate for changes in head orientation relative to
gravity. A second study was conducted to assess possible
changes in acoustical patterns due to changes in head
orientation. The frequencies of the first (F1) and second
(F2) formants associated with the steady-state portion of
vowels were measured. As in the kinematic study,
systematic differences in the values of F1 and F2 were
observed with changes in head orientation. Thus the
acoustical analysis further supports the conclusion that
control signals are not completely adjusted to offset
forces arising due to changes in orientation.
Gribble PL, Ostry DJ (1998) Independent coactivation of
shoulder and elbow muscles. Exp Brain Res 123:355-360. Abstract | PDF
The aim of this study was to examine the possibility of
independent muscle coactivation at the shoulder and elbow.
Subjects performed rapid point-to-point movements in a
horizontal plane, from different initial limb
configurations to a single target. EMG activity was
measured from flexor and extensor muscles that act at the
shoulder (pectoralis clavicular head and posterior
deltoid) and elbow (biceps long head and triceps lateral
head) and flexor and extensor muscles that act at both
joints (biceps short head and triceps long head). Muscle
coactivation was assessed by measuring tonic levels of
electromyographic (EMG) activity after limb position
stabilized following movement end. It was observed that
tonic EMG levels following movements to the same target
varied as a function of the amplitude of shoulder and
elbow motion. Moreover, for the movements tested here, the
coactivation of shoulder and elbow muscles was found to be
independent: tonic EMG activity of shoulder muscles
increased in proportion to shoulder movement, but was
unrelated to elbow motion, whereas elbow and double-joint
muscle coactivation varied with the amplitude of elbow
movement, and were uncorrelated with shoulder motion. In
addition, tonic EMG levels were higher for movements in
which the shoulder and elbow rotated in the same
direction, as compared to those in which the joints
rotated in opposite directions. In this respect, muscle
coactivation may reflect a simple strategy to compensate
for forces introduced by multijoint limb dynamics.
Feldman AG, Ostry DJ, Levin MF, Gribble PL, Mitnitski A
(1998) Recent tests of the equilibrium point hypothesis.
Motor Control 2:189-205. Abstract | PDF
The lambda model of the equilibrium-point hypothesis
(Feldman & Levin, 1995) is an approach to motor
control which, like physics, is based on a logical system
coordinating empirical data. The model has gone through an
interesting period. On one hand, several nontrivial
predictions of the model have been successfully verified
in recent studies. In addition, the explanatory and
predictive capacity of the model has been enhanced by its
extension to multimuscle and multijoint systems. On the
other hand, claims have recently appeared suggesting that
the model should be abandoned. The present paper focuses
on these claims and concludes that they are unfounded.
Much of the experimental data that have been used to
reject the model are actually consistent with it.
Gribble PL, Ostry DJ, Sanguineti V, Laboissiere R (1998) Are
complex control signals required for human arm movement? J
Neurophysiol 79:1409-1424. Abstract | PDF
It
has been proposed that the control signals underlying
voluntary human arm movement have a "complex"
non-monotonic time-varying form, and a number of empirical
findings have been offered in support of this idea (Gomi
and Kawato, 1996; Latash and Gottlieb, 1991). In this
paper we address three such findings using a model of
two-joint arm motion based on the lambda version of the
equilibrium-point hypothesis. The model includes six one-
and two-joint muscles, reflexes, modeled control signals,
muscle properties and limb dynamics. First, we address the
claim that "complex" equilibrium trajectories are required
in order to account for non-monotonic joint impedance
patterns observed during multi-joint movement (Gomi and
Kawato, 1996). Using constant-rate shifts in the neurally
specified equilibrium of the limb, and constant
cocontraction commands, we obtain patterns of predicted
joint stiffness during simulated multi-joint movements
which match the non-monotonic patterns reported
empirically. We then use the algorithm proposed by Gomi
and Kawato (1996) to compute a hypothetical equilibrium
trajectory from simulated stiffness, viscosity and limb
kinematics. Like that reported by Gomi and Kawato (1996),
the resulting trajectory was non-monotonic, first leading
then lagging the position of the limb. Second, we address
the claim that high levels of stiffness are required to
generate rapid single-joint movements when simple
equilibrium shifts are used. We compare empirical
measurements of stiffness during rapid single-joint
movements (Bennett, 1993) with the predicted stiffness of
movements generated using constant-rate equilibrium shifts
and constant cocontraction commands. Single-joint
movements are simulated at a number of speeds, and the
procedure used by Bennett (1993) to estimate stiffness is
followed. We show that when the magnitude of the
cocontraction command is scaled in proportion to movement
speed, simulated joint stiffness varies with movement
speed in a manner comparable to that reported by Bennett
(1993). Third, we address the related claim that
non-monotonic equilibrium shifts are required to generate
rapid single-joint movements. Using constant-rate
equilibrium shifts and constant cocontraction commands,
rapid single-joint movements are simulated in the presence
of external torques. We use the procedure reported by
Latash and Gottlieb (1991) to compute hypothetical
equilibrium trajectories from simulated torque and angle
measurements during movement. As in Latash and Gottlieb
(1991), a non-monotonic function is obtained, even though
the control signals used in the simulations are
constant-rate changes in the equilibrium position of the
limb. Differences between the "simple" equilibrium
trajectory proposed in the present paper and those which
are derived from the procedures used by Gomi and Kawato
(1996) and Latash and Gottlieb (1991) arise from their use
of simplified models of force-generation.
Sanguineti V, Laboissiere R, Ostry DJ (1998) A dynamic
biomechanical model for neural control of speech production.
J Acoust Soc Am 103:1615-1627. Abstract | PDF
A
model of the midsagittal plane motion of the tongue, jaw,
hyoid bone, and larynx is presented, based on the lambda
version of equilibrium point hypothesis. The model
includes muscle properties and realistic geometrical
arrangement of muscles, modeled neural inputs and
reflexes, and dynamics of soft tissue and bony structures.
The focus is on the organization of control signals
underlying vocal tract motions and on the dynamic behavior
of articulators. A number of muscle synergies or "basic
motions" of the system are identified. In particular, it
is shown that systematic sources of variation in an x-ray
data base of midsagittal vocal tract motions can be
accounted for, at the muscle level, with six independent
commands, each corresponding to a direction of articulator
motion. There are two commands for the jaw (corresponding
to sagittal plane jaw rotation and jaw protrusion), one
command controlling larynx height, and three commands for
the tongue (corresponding to forward and backward motion
of the tongue body, arching and flattening of the tongue
dorsum, and motion of the tongue tip). It is suggested
that all movements of the system can be approximated as
linear combinations of such basic motions. In other words,
individual movements and sequences of movements can be
accounted for by a simple additive control model. The
dynamics of individual commands are also assessed. It is
shown that the dynamic effects are not negligible in speech-like movements because of the different dynamic
behaviors of soft and bony structures.
Guiard-Marigny T, Ostry DJ (1997) A system for
three-dimensional visualization of human jaw motion in
speech. J Speech Lang Hear Res 40:1118-1121. Abstract | PDF
With the development of precise three dimensional motion
measurement systems and powerful computers for three
dimensional graphical visualization, it is possible to
record and fully reconstruct human jaw motion. In this
paper, we describe a visualization system for displaying
three dimensional jaw movements in speech. The system is
designed to take as input jaw motion data obtained from
one or multi-dimensional recording systems. In the present
application, kinematic records of jaw motion were recorded
using an optoelectronic measurement system (Optotrak). The
corresponding speech signal was recorded using an analog
input channel. The three orientation angles and three
positions which describe the motion of the jaw as a rigid
skeletal structure were derived from the empirical
measurements. These six kinematic variables, which in
mechanical terms account fully for jaw motion kinematics,
act as inputs that drive a real-time three dimensional
animation of a skeletal jaw and upper skull. The
visualization software enables the user to view jaw motion
from any orientation and to change the viewpoint during
the course of an utterance. Selected portions of an
utterance may be re-played and the speed of the visual
display may be varied. The user may also display, along
with the audio track, individual kinematic degrees of
freedom or several degrees of freedom in combination. The
system is presently being used as an educational tool and
for research into audio-visual speech recognition.
Ostry DJ, Vatikiotis-Bateson E, Gribble PL (1997) An
examination of the degrees of freedom of human jaw motion in
speech and mastication. J Speech Lang Hear Res 40:1341-1351.
Abstract | PDF
The kinematics of human jaw movements were assessed in
terms of the three orientation angles and three positions
that characterize the motion of the jaw as a rigid body.
The analysis focused on the identification of the jaw's
independent movement dimensions, and was based on an
examination of jaw motion paths that were plotted in
various combinations of linear and angular coordinate
frames. Overall, both behaviors were characterized by
independent motion in four degrees of freedom. In general,
when jaw movements were plotted to show orientation in the
sagittal plane as a function of horizontal position,
relatively straight paths were observed. In speech, the
slopes and intercepts of these paths varied depending on
the phonetic material. The vertical position of the jaw
was observed to shift up or down so as to displace the
overall form of the sagittal plane motion path of the jaw.
Yaw movements were small but independent of pitch,
vertical and horizontal position. In mastication, the
slope and intercept of the relationship between pitch and
horizontal position were affected by the type of food and
its size. However, the range of variation was less than
that observed in speech. When vertical jaw position was
plotted as a function of horizontal position, the basic
form of the path of the jaw was maintained but could be
shifted vertically. In general, larger bolus diameters
were associated with lower jaw positions throughout the
movement. The timing of pitch and yaw motion differed. The
most common pattern involved changes in pitch angle during
jaw opening followed by a phase predominated by lateral
motion (yaw). Thus, in both behaviors there was evidence
of independent motion in pitch, yaw, horizontal position
and vertical position. This is consistent with the idea
that motions in these degrees of freedom are independently
controlled.
Ostry DJ, Gribble PL, Levin MF, Feldman AG (1997) Phasic and
tonic stretch reflexes in muscles with few muscle spindles:
human jaw-opener muscles. Exp Brain Res 116:299-308. Abstract | PDF
We
investigated phasic and tonic stretch reflexes in human
jaw opener muscles, which have few, if any, muscle
spindles. Jaw unloading reflexes were recorded for both
opener and closer muscles. Surface electromyographic (EMG)
activity was obtained from left and right digastric and
superficial masseter muscles, and jaw orientation and
torques were recorded. Unloading of jaw opener muscles
elicited a short-latency decrease in EMG activity
(averaging 20 ms) followed by a short duration silent
period in these muscles and sometimes a short burst of
activity in their antagonists. Similar behavior in
response to unloading was observed for spindle-rich jaw
closer muscles although the latency of the silent period
was statistically shorter than that observed for jaw
opener muscles (averaging 13 ms). Control studies suggest
that the jaw opener reflex was not due to inputs from
either cutaneous or periodontal mechanoreceptors. In the
unloading response of the jaw openers, the tonic level of
EMG activity observed after transition to the new jaw
orientation was monotonically related to the residual
torque and orientation. This is consistent with the idea
that the tonic stretch reflex may mediate the change in
muscle activation. In addition, the values of the static
net joint torque and jaw orientation after the dynamic
phase of unloading were related by a monotonic function
resembling the invariant characteristic recorded in human
limb joints. The torque-angle characteristics associated
with different initial jaw orientations were similar in
shape but spatially shifted, consistent with the idea that
voluntary changes in jaw orientation may be associated
with a change in a single parameter, which may be
identified as the threshold of the tonic stretch reflex.
It is suggested that functionally significant phasic and
tonic stretch reflexes may not be mediated exclusively by
muscle spindle afferents. Thus, the hypothesis that
central modifications in the threshold of the tonic
stretch reflex underlie the control of movement may be
applied to the jaw system.
Gribble PL, Ostry DJ (1996) Origins of the power law
relation between movement velocity and curvature: modeling
the effects of muscle mechanics and limb dynamics. J
Neurophysiol 76:2853-2860. Abstract | PDF
1.
When subjects trace patterns such as ellipses, the
instantaneous velocity of movements is related to the
instantaneous curvature of the trajectories according to a
power law: movements tend to slow down when curvature is
high and speed up when curvature is low. It has been
proposed that this relationship is centrally planned. 2.
The arm's muscle properties and dynamics can significantly
affect kinematics. Even under isometric conditions, muscle
mechanical properties can affect the development of muscle
forces and torques. Without a model which accounts for
these effects, it is difficult to distinguish between
kinematic patterns which are attributable to central
control and patterns which arise due to dynamics and
muscle properties and are not represented in the
underlying control signals. 3. In this paper we address
the nature of the control signals that underlie movements
which obey the power law. We use a numerical simulation of
arm movement control based on the lambda version of the
equilibrium-point hypothesis. We demonstrate that
simulated elliptical and circular movements, and
elliptical force trajectories generated under isometric
conditions, obey the power law even though there was no
relation between curvature and speed in the modeled
control signals. 4. We suggest that limb dynamics and
muscle mechanics, specifically the spring-like properties of muscles, can contribute significantly to the emergence
of the power law relationship in kinematics. Thus without
a model that accounts for these effects, care must be
taken when making inferences about the nature of neural
control.
Ramsay JO, Munhall KG, Gracco VL, Ostry DJ (1996) Functional
data analyses of lip motion. J Acoust Soc Am 99:3718-3727. Abstract | PDF
The vocal tract's motion during speech is a complex
patterning of the movement of many different articulators
according to many different time functions. Understanding
this myriad of gestures is important to a number of
different disciplines including automatic speech
recognition, speech and language pathologies, speech motor
control, and experimental phonetics. Central issues are
the accurate description of the shape of the vocal tract
and determining how each articulator contributes to this
shape. A problem facing all of these research areas is how
to cope with the multivariate data from speech production
experiments. In this paper techniques are described that
provide useful tools for describing multivariate
functional data such as the measurement of speech
movements. The choice of data analysis procedures has been
motivated by the need to partition the articulator
movement in various ways: end effects separated from shape
effects, partitioning of syllable effects, and the
splitting of variation within an articulator site from
variation from between sites. The techniques of functional
data analysis seem admirably suited to the analyses of
phenomena such as these. Familiar multivariate procedures
such as analysis of variance and principal components
analysis have their functional counterparts, and these
reveal in a way more suited to the data the important
sources of variation in lip motion. Finally, it is found
that the analyses of acceleration were especially helpful
in suggesting possible control mechanisms. The focus is on
using these speech production data to understand the basic
principles of coordination. However, it is believed that
the tools will have a more general use.
Laboissiere R, Ostry DJ, Feldman AG (1996) The control of
multi-muscle systems: human jaw and hyoid movements. Biol
Cybern 74:373-384. Abstract | PDF
A
model is presented of sagittal plane jaw and hyoid motion
based on the lambda model of motor control. The model
which is implemented as a computer simulation includes
central neural control signals, position and velocity
dependent reflexes, reflex delays, and muscle properties
such as the dependence of force on muscle length and
velocity. The model has seven muscles (or muscle groups)
attached to the jaw and hyoid as well as separate jaw and
hyoid bone dynamics. According to the model, movements
result from changes in neurophysiological control
variables which shift the equilibrium state of the motor
system. One such control variable is an independent change
in the membrane potential of alpha-motoneurones (MNs);
this variable establishes a threshold muscle length
(lambda) at which MN recruitment begins. Motor functions
may be specified by various combinations of lambdas. One
combination of lambdas is associated with the level of
coactivation of muscles. Others are associated with
motions in specific degrees of freedom. Using the model,
we study the mapping between control variables specified
at the level of degrees of freedom and control variables
corresponding to individual muscles. We demonstrate that
commands can be defined involving linear combinations of
lambda change which produce essentially independent
movements in each of the four kinematic degrees of freedom
represented in the model (jaw orientation, jaw position,
vertical and horizontal hyoid position). These linear
combinations are represented by vectors in lambda space
which may be scaled in magnitude. The vector directions
are constant over the jaw / hyoid workspace and result in
essentially the same motion from any workspace position.
The demonstration that it is not necessary to adjust
control signals to produce the same movements in different
parts of the workspace supports the idea that the nervous
system need not take explicit account of musculo-skeletal
geometry in planning movements.
Ostry DJ, Gribble PL, Gracco VL (1996) Coarticulation of jaw
movements in speech production: is context sensitivity in
speech kinematics centrally planned? J Neurosci
16:1570-1579. Abstract | PDF
Coarticulation in speech production is a phenomenon in
which the articulator movements for a given speech sound
vary systematically with the surrounding sounds and their
associated movements. Although these variations may appear
to be centrally planned, without explicit models of the
speech articulators, the kinematic patterns which are
attributable to central control cannot be distinguished
from those which arise due to dynamics and are not
represented in the underlying control signals. In the
present paper, we address the origins of coarticulation by
comparing the results of empirical and modeling studies of
jaw motion in speech. The simulated kinematics of sagittal
plane jaw rotation and horizontal jaw translation are
compared to the results of empirical studies in which
subjects produce speech-like sequences at a normal rate
and volume. The simulations examine both "anticipatory"
and "carryover" coarticulatory effects. In both cases, the
results show that even when no account is taken of context
at the level of central control, kinematic patterns vary
in amplitude and duration as a function of the magnitude
of the preceding or following movement in the same manner
as one observes empirically in coarticulation. Since at
least some coarticulatory effects may arise from muscle
mechanics and dynamics and not from central control, these
factors must be considered before drawing inferences about
control in coarticulation.
Bonda E, Petrides M, Ostry DJ, Evans A (1996) Specific
involvement of human parietal systems and the amygdala in
the perception of biological motion. J Neurosci
16:3737-3744. Abstract | PDF
To
explore the extent to which functional systems within the
human posterior parietal cortex and the superior temporal
sulcus are involved in the perception of action, we
measured cerebral metabolic activity in human subjects by
positron emission tomography during the perception of
simulations of biological motion with point-light
displays. The experimental design involved comparisons of
activity during the perception of goal-directed hand
action, whole body motion, object motion, and random
motion. The results demonstrated that the perception of
scripts of goal-directed hand action implicates the cortex
in the intraparietal sulcus and the caudal part of the
superior temporal sulcus, both in the left hemisphere. By
contrast, the rostrocaudal part of the right superior
temporal sulcus and adjacent temporal cortex, and limbic
structures such as the amygdala, are involved in the
perception of signs conveyed by expressive body movements.
Perrier P, Ostry DJ, Laboissiere R (1996) The equilibrium
point hypothesis and its application to speech motor
control. J Speech Hear Res 39:365-377. Abstract | PDF
In
this paper, we address a number of issues in speech
research in the context of the equilibrium point
hypothesis of motor control. The hypothesis suggests that
movements arise from shifts in the equilibrium position of
the limb or the speech articulator. The equilibrium is a
consequence of the interaction of central neural commands,
reflex mechanisms, muscle properties and external loads,
but it is under the control of central neural commands.
These commands act to shift the equilibrium via centrally
specified signals acting at the level of the motoneurone
(MN) pool. In the context of a model of sagittal plane jaw
and hyoid motion based on the lambda version of the
equilibrium point hypothesis, we consider the implications
of this hypothesis for the notion of articulatory targets.
We suggest that simple linear control signals may underlie
smooth articulatory trajectories. We explore as well the
phenomenon of intra-articulator coarticulation in jaw
movement. We suggest that, even when no account is taken of upcoming context, apparent anticipatory changes in
movement amplitude and duration may arise due to dynamics.
We also present a number of simulations that show in
different ways how variability in measured kinematics can
arise in spite of constant magnitude speech control
signals.
Sergio LE, Ostry DJ (1995) Coordination of multiple muscles
in two degree of freedom elbow movements. Exp Brain Res
105:123-137. Abstract | PDF
The present study quantifies electromyographic (EMG)
magnitude, timing, and duration in one and two degree of
freedom elbow movements involving combinations of flexion
/ extension and pronation / supination. The aim is to
understand the organization of commands subserving motion
in individual and multiple degrees of freedom. The muscles
tested in this study fell into two categories with respect
to agonist burst magnitude: those whose burst magnitude
varied with motion in a second degree of freedom at the
elbow, and those whose burst magnitude depended on motion
in one degree of freedom only. In multiarticular muscles
contributing to motion in two degrees of freedom at the
elbow, we found that the magnitude of the agonist burst
was greatest for movements in which a muscle acted as
agonist in both degrees of freedom. The burst magnitudes
for one degree of freedom movements were, in turn, greater
than for movements in which the muscle was agonist in one
degree of freedom and antagonist in the other. It was also
found that for movements in which a muscle acted as
agonist in two degrees of freedom, the burst magnitude
was, in the majority of cases, not different than the sum
of the burst magnitudes in the component movements. When
differences occurred, the burst magnitude for the combined
movement was greater than the sum of the components. Other
measures of EMG activity such as burst onset time and
duration were not found to vary in a systematic manner
with motion in these two degrees of freedom. It was also
seen that several muscles which produced motion in one
degree of freedom at the elbow, including triceps brachii
(long head), triceps brachii (lateral head), and pronator
quadratus displayed first agonist bursts whose magnitude
did not vary with motion in a second degree of freedom.
However, for the monoarticular elbow flexors brachialis
and brachioradialis, agonist burst magnitude was affected
by pronation or supination. Lastly, it was observed that
during elbow movements in which muscles acted as agonist
in one degree of freedom and antagonist in the other, the
muscle activity often displayed both agonist and
antagonist components in the same movement. It was found
that, for pronator teres and biceps brachii, the timing of
the bursts was such that there was activity in these
muscles concurrent with activity in both pure agonists and
pure antagonists. The empirical summation of EMG burst
magnitudes and the presence in a single muscle of both
agonist and antagonist bursts within a movement suggest
that central commands associated with motion in individual
degrees of freedom at the elbow may be superimposed to
produce two degree of freedom elbow movements.
Bateson EV, Ostry DJ (1995) An analysis of the
dimensionality of jaw movement in speech. J Phon 23:101-117.
Abstract | PDF
The human jaw moves in three spatial dimensions, and its
motion is fully specified by three orientation angles and
three positions. Using OPTOTRAK, we characterize the basic
motions in these six degrees of freedom and their
interrelations during speech. As has been reported
previously, the principal components of jaw motion fall
primarily within the midsagittal plane, where the jaw
rotates downward and translates forward during opening
movements and follows a similar path during closing. In
general, the relation between sagittal plane rotation and
horizontal translation (protrusion) is linear. However,
speakers display phoneme-specific differences in the slope
of this relation and its position within the
rotation-translation space. Furthermore, instances of pure
rotation and pure translation are observed. These findings
provide direct support for the claim that jaw rotation and
translation are independently controlled (Flanagan, Ostry
& Feldman, 1990). Rotations out of the midsagittal
plane are also observed. Yaw about the longitudinal body
axis is approximately three degrees and roll usually less
than two degrees. The remaining non-sagittal component,
lateral translation, is small in magnitude and
uncorrelated with other motions.
Ostry DJ, Munhall KG (1994) Control of jaw orientation and
position in mastication and speech. J Neurophysiol
71:1528-1545. Abstract | PDF
1.
The kinematics of sagittal-plane jaw motion were assessed
in mastication and speech. The movement paths were
described in joint coordinates, in terms of the component
rotations and translations. The analysis focused on the
relationship between rotation and horizontal translation.
Evidence was presented that these can be separately
controlled.
2. In speech, jaw movements were studied during consonant-vowel utterances produced at different rates and volumes. In mastication, bolus placement, compliance and size as well as chewing rate were manipulated. Jaw movements were recorded using the University of Wisconsin X-ray microbeam system. Jaw rotation and translation were calculated on the basis of the motion of X-ray tracking pellets on the jaw.
3. The average magnitudes of jaw rotation and translation were greater in mastication than in speech. In addition, in speech, it was shown that the average rotation magnitude may vary independently of the horizontal translation magnitude. In mastication, the average magnitude of vertical jaw translation was not dependent on the magnitudes of jaw rotation or horizontal jaw translation.
4. The magnitude of rotation and horizontal jaw translation tended to be correlated when examined on a trial by trial basis. Some subjects also showed a correlation between jaw rotation and vertical jaw translation. However, the proportion of variance accounted for was greater for all subjects in the case of rotation and horizontal translation.
5. Joint space paths in both mastication and speech were found to be straight. The pattern was observed at normal and fast rates of speech and mastication and for loud speech as well. Straight line paths were also observed when subjects produced utterances that had both the syllabic structure and the intonation pattern of speech. The findings suggest that control may be organized in terms of an equilibrium jaw orientation and an equilibrium jaw position.
6. Departures from linearity were also observed. These were typically associated with differences during jaw closing in the end time of rotation and translation. Asynchronies were not observed at the start of jaw closing in either mastication or speech and the movement paths were typically linear within this region.
Sergio LE, Ostry DJ (1994) Coordination of mono- and bi-
articular muscles in multi-degree of freedom elbow
movements. Exp Brain Res 97:551-555.
Abstract | PDF
We
investigated the coordination of mono- and bi-articular
muscles during movements involving one or more degrees of
freedom at the elbow. Subjects performed elbow flexion (or
extension) alone, forearm pronation (or supination) alone,
and combinations of the two. In bi-articular muscles such
as biceps and pronator teres, the amplitude of agonist EMG
activity was dependent on motion in both degrees of
freedom. Agonist burst amplitudes for combined movements
were approximately the sum of the agonist burst amplitudes
for movements in the individual degrees of freedom.
Activity levels in individual degrees of freedom were, in
turn, greater than activity levels observed when a muscle
acted as agonist in one degree of freedom and antagonist
in the other. Other muscles such as triceps, brachialis,
and pronator quadratus, acted primarily during motion in a
single degree of freedom. The relative magnitude and the
timing of activity between sets of muscles also changed
with motion in a second degree of freedom. These patterns
are comparable to those reported previously in isometric
studies.
Parush A, Ostry DJ (1993) Lower pharyngeal wall coarticulation in VCV syllables. J Acoust Soc Am 94:715-22.
Abstract | PDF
Sergio LE, Ostry DJ (1993) Three-dimensional kinematic
analysis of frog hindlimb movement in reflex wiping. Exp
Brain Res 94:53-64. Abstract | PDF
The three-dimensional kinematics of the hindlimb back-wipe
were examined in spinal frogs. The component movements
were identified and the relationship between stimulus
position and hindlimb configuration was assessed. The
planes of motion of the hindlimb were examined throughout
the movement. The back-wipe comprises three essential
phases: a placing phase (I), in which the foot is drawn
over the back of the frog and placed in a position near to
the stimulus; a pre-whisk phase (II), in which the
endpoint of the foot moves away from the stimulus; and a
whisk/extension phase (III), in which the stimulus is
removed. The pre-whisk phase contributes to force
production for the whisk/extension (III). In the placing
phase a systematic relationship was found between limb
endpoint position and stimulus position in the
rostro-caudal direction. The hip, knee and metatarsal
joint angles were related to the position of the endpoint
in the rostro-caudal direction. However, different frogs
tended to adopt different strategies to remove the
stimulus. In one strategy, when the knee angle was
strongly related to the rostro-caudal stimulus position,
the metatarsal angle was weakly related and vice versa.
Other strategies were observed as well. There was no
adjustment in limb endpoint position for stimulus
placement in the medial-lateral direction. Consistent with
this finding, the point on the foot at which stimulus
contact occurred changed systematically as a function of
medial-lateral stimulus placement. Thus, in order to
remove the stimulus in different medial-lateral positions,
the frog used a different part of the foot rather than
moving the foot in the direction of the stimulus. In two
frogs a relationship was observed between the elevation of
the femur and the medial-lateral stimulus position. The
motion planes of the hindlimb were studied by examining
the instantaneous plane of motion of the endpoint and the
planes of motion of adjacent limb segments. The motion of
the endpoint was found not to be planar in any phase of
the wipe. In contrast, planar motion of the femur and
tibia was observed for all phases. Systematic changes in
the orientation of these planes characterized the
different phases. The position of the hindlimb was found
to be variable prior to the placing phase. This
variability was not related to stimulus position. However,
in trials with multiple wipes, once an initial limb
configuration was assumed, the limb returned to this
configuration before each wipe in the sequence. Evidence
for motor equivalence was sought in two ways.
Ostry DJ, Feldman AG, Flanagan JR (1991) Kinematics and
control of frog hindlimb movements. J Neurophysiol
65:547-562. Abstract | PDF
The determinants of the motion path of the hindlimb were
explored in both intact and spinal frogs. In the spinal
preparations the kinematic properties of withdrawal and
crossed-extension reflexes were studied. In the intact
frog the kinematics of withdrawal and swimming movements
were examined. Frog hindlimb paths were described in joint
angle (intrinsic) coordinates rather than limb endpoint
(extrinsic) coordinates. 2. To study withdrawal and
crossed-extension reflexes, the initial angles at the hip,
knee, and ankle were varied. Withdrawal and crossed
extension were recorded in three dimensions (3-D) with
use of an infra-red spatial imaging system. Swimming
movements against currents of different speeds were
obtained with high-speed film. 3. Three strategies were
considered related to the form of the hypothesized
equilibrium paths specified by the nervous system: all
trajectories lie on a single line in angular coordinates;
all trajectories are directed toward a common final
position; and all trajectories have the same direction
independent of initial joint configuration. 4. Joint space
paths in withdrawal were found to be straight and parallel
independent of the initial joint configuration. The hip
and knee were found to start simultaneously and in 75% of
the conditions tested to reach maximum velocity
simultaneously. Hip-knee maximum velocity ratios were
similar in magnitude over differences in initial joint
angles. This is consistent with the observation of
parallel paths and supports the view that the nervous
system specifies a single direction for equilibrium
trajectories. 5. Straight line paths with slopes similar
to those observed in withdrawal in the spinal preparation
were found in swimming movements in the intact frog.
Straight line paths in joint space are consistent with the
idea that swimming and withdrawal are organized and
controlled in a joint-level coordinate system. The
similarities observed between spinal and intact
preparations suggest that a common set of constructive
elements underlies these behaviors. 6. Path curvature was
introduced when joint limits were approached toward the
end of the movement. Depending on the initial joint
angles, the joint movements ended at different times. When
initial joint angles were unequal, joints moving from
smaller initial angles reached their functional limits
earlier and stopped first. 7. In withdrawal and crossed
extension in the spinal frog, velocity profiles at a given
joint were similar over the initial portion of the curve
for movements of different amplitude. This is consistent
with the idea that withdrawal and crossed-extension
movements of different amplitude are produced by a
constant rate of shift of the equilibrium position.
Ostry DJ, Flanagan JR (1989) Human jaw movement in
mastication and speech. Arch Oral Biol 34:685-693. Abstract | PDF
The study of jaw movement in humans is a primary source of
information about the relationship between voluntary
movement and more primitive motor functions. This study
focused on the geometric form of the velocity function, as
measured by a linear voltage displacement transducer.
Movement amplitudes, maximum velocities and durations were
greater in mastication than in speech. Nevertheless, there
were detailed similarities in the shape of the normalized
velocity functions. In jaw-closing movements, the
normalized functions were similar in form over differences
in rate, movement amplitude (speech movements) and the
compliance of the bolus (mastication). In opening
movements, the functions for mastication and speech were
again similar over differences in amplitude and
compliance. However, they differed in shape for fast and
slow movements. Normalized acceleration and deceleration
durations were approximately equal in rapid movements,
whereas, for slower movements, deceleration took
substantially longer.
Ostry DJ, Cooke JD, Munhall KG (1987) Velocity curves of
human arm and speech movements. Exp Brain Res 68:37-46. Abstract | PDF
The velocity curves of human arm and speech movements were
examined as a function of amplitude and rate in both
continuous and discrete movement tasks. Evidence for
invariance under scalar transformation was assessed and a
quantitative measure of the form of the curve was used to
provide information on the implicit cost function in the
production of voluntary movement. Arm, tongue and jaw
movements were studied separately. The velocity curves of
tongue and jaw movement were found to differ in form as a
function of movement duration but were similar for
movements of different amplitude. In contrast, the
velocity curves for elbow movements were similar in form
over differences in both amplitude and duration. Thus, the
curves of arm movement, but not those of tongue or jaw
movement, were geometrically equivalent in form.
Measurements of the ratio of maximum to average velocity
in arm movement were compared with the theoretical values
calculated for a number of criterion functions. For
continuous movements, the data corresponded best to values
computed for the minimum energy criterion; for discrete
movement, values were in the range of those predicted for
the minimum jerk and best stiffness criteria. The source
of a rate dependent asymmetry in the form of the velocity
curve of speech movements was assessed in a control study
in which subjects produced simple raising and lowering
movements of the jaw without talking. The velocity curves
of the non-speech control gesture were similar in form to
those of jaw movement in speech. These data, in
combination with similar findings for human jaw movement
in mastication, suggest that the asymmetry is not a direct
consequence of the requirements of the task. The
biomechanics and neural control of the orofacial system
may be possible sources of this effect.
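The comparison of the measured ratio of maximum to average velocity with theoretical criterion values can be made concrete for the minimum-jerk case, where the predicted ratio is 1.875. The numerical check below is an illustrative sketch, not the analysis used in the paper.

# Illustrative check (not the paper's analysis): for a minimum-jerk movement,
# x(t) = A * (10*tau**3 - 15*tau**4 + 6*tau**5) with tau = t/D, the ratio of
# peak to average velocity is 1.875, one of the theoretical values against
# which measured arm-movement ratios can be compared.
import numpy as np

D, A = 1.0, 1.0                       # duration and amplitude (arbitrary units)
t = np.linspace(0.0, D, 100001)
tau = t / D
x = A * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
v = np.gradient(x, t)                 # velocity profile

peak_over_average = v.max() / (A / D)
print(round(peak_over_average, 3))    # ~1.875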
Parush A, Ostry DJ (1986) Superior lateral pharyngeal wall
movements in speech. J Acoust Soc Am 80:749-756. Abstract | PDF
Medial movements of the lateral pharyngeal wall at the
level of the velopharyngeal port were examined by using a
computerized ultrasound system. Subjects produced CVNVC
sequences involving all combinations of the vowels /a/ and
/u/ and the nasal consonants /n/ and /m/. The effects of
both vowels on the CVN and NVC gestures (opening and
closing of the velopharyngeal port, respectively) were
assessed in terms of movement amplitude, duration, and
movement onset time. The amplitude of both opening and
closing gestures of the lateral pharyngeal wall was less
in the context of the vowel /u/ than the vowel /a/. In
addition, the onset of the opening gesture towards the
nasal consonant was related to the identity of both the
initial and the final vowels. The characteristics of the
functional coupling of the velum and lateral pharyngeal
wall in speech are discussed.
Munhall KG, Ostry DJ, Parush A (1985) Characteristics of velocity profiles of speech movements. J Exp Psychol Hum Percept Perform 11:457-474.
Abstract | PDF
The control of individual speech gestures was investigated
by examining laryngeal and tongue movements during vowel
and consonant production. A number of linguistic
manipulations known to alter the durational
characteristics of speech (i.e., speech rate, lexical
stress, and phonemic identity) were tested. In all cases a
consistent pattern was observed in the kinematics of the
laryngeal and tongue gestures. The ratio of maximum
instantaneous velocity to movement amplitude, a kinematic
index of mass-normalized stiffness, was found to increase
systematically as movement duration decreased.
Specifically, the ratio of maximum velocity to movement
amplitude varied as a function of a parameter, C, times
the reciprocal of movement duration. The conformity of the
data to this relation indicates that durational change is
accomplished by scalar adjustment of a base velocity form.
These findings are consistent with the idea that kinematic
change is produced by the specification of articulator
stiffness.
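The relation reported above can be written compactly; the notation below is an illustrative restatement, with v_max the maximum instantaneous velocity, A the movement amplitude, and D the movement duration:

\[
  \frac{v_{\max}}{A} = \frac{C}{D}
  \quad\Longleftrightarrow\quad
  v_{\max} = C\,\frac{A}{D},
\]

so peak velocity is a fixed multiple C of average velocity A/D, which is what scalar rescaling of a single base velocity form predicts.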
Ostry DJ, Munhall KG (1985) Control of rate and duration of
speech movements. J Acoust Soc Am 77:640-648. Abstract | PDF
A
computerized pulsed-ultrasound system was used to monitor
tongue dorsum movements during the production of
consonant-vowel sequences in which speech rate, vowel, and
consonant were varied. The kinematics of tongue movement
were analyzed by measuring the lowering gesture of the
tongue to give estimates of movement amplitude, duration,
and maximum velocity. All three subjects in the study
showed reliable correlations between the amplitude of the
tongue dorsum movement and its maximum velocity. Further,
the ratio of the maximum velocity to the extent of the
gesture, a kinematic indicator of articulator stiffness,
was found to vary inversely with the duration of the
movement. This relationship held both within individual
conditions and across all conditions in the study such
that a single function was able to accommodate a large
proportion of the variance due to changes in movement
duration. As similar findings have been obtained both for
abduction and adduction gestures of the vocal folds and
for rapid voluntary limb movements, the data suggest that
a wide range of changes in the duration of individual
movements might all have a similar origin. The control of
movement rate and duration through the specification of
biomechanical characteristics of speech articulators is
discussed.
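The claim that a single function accommodates most of the duration-related variance can be illustrated by regressing the kinematic stiffness index on reciprocal duration. The snippet below is a hedged sketch with synthetic numbers, not the study's data or analysis.

# Illustrative sketch with synthetic numbers (not the study's data): fitting the
# single function  v_max / A = C * (1 / D)  across conditions by least squares.
import numpy as np

rng = np.random.default_rng(0)
durations = rng.uniform(0.1, 0.4, size=50)           # synthetic movement durations (s)
C_true = 1.9
stiffness_index = C_true / durations + rng.normal(0.0, 0.3, size=50)  # v_max / A

# Least-squares estimate of C from the regression of v_max/A on 1/D (no intercept).
x = 1.0 / durations
C_hat = np.sum(x * stiffness_index) / np.sum(x * x)
r = np.corrcoef(x, stiffness_index)[0, 1]
print(f"C ~ {C_hat:.2f}, r ~ {r:.2f}")                # one function, most of the variance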
Parush A, Ostry DJ, Munhall KG (1983) A kinematic study of
lingual coarticulation in VCV sequences. J Acoust Soc Am
74:1115-1125. Abstract | PDF
Intra-articulator anticipatory and carryover
coarticulation were assessed in both temporal and spatial
terms. Three subjects produced VCV sequences with velar
stop consonants and back vowels. Pulsed ultrasound was
used to examine the vertical displacement, duration, and
maximum velocity of the tongue dorsum raising (VC
transition) and lowering (CV transition) gestures.
Anticipatory coarticulation was primarily temporal for two
subjects, with decreases in the duration of the VC
transition accompanying increases in displacement for the
CV transition. Carryover coarticulation was primarily
spatial for all three subjects, with decreases in CV
displacement and maximum velocity accompanying increases
in VC displacement. It is suggested that these
intra-articulator patterns can be accounted for in terms
of an interaction between the raising gesture and a
vowel-specific onset time of the lowering gesture towards
the vowel. The implications of this kinematic
characterization are discussed.
Ostry DJ, Keller E, Parush A (1983) Similarities in the
control of the speech articulators and the limbs: kinematics
of tongue dorsum movement in speech. J Exp Psychol Hum
Percept Perform 9:622-636. Abstract | PDF
The kinematics of tongue dorsum movements in speech were
studied with pulsed ultrasound to assess similarities in
the voluntary control of the speech articulators and the
limbs. The stimuli were consonant-vowel syllables in
which speech rate and stress were varied. The kinematic
patterns for tongue dorsum movements were comparable to
those observed in the rapid movement of the arms and
hands. The maximum velocity of tongue dorsum raising and
lowering was correlated with the extent of the gesture.
The slope of the relationship differed for stressed and
unstressed vowels but was unaffected by differences in
speech rate. At each stress level the correlation between
displacement and peak velocity was accompanied by a
relatively constant interval from the initiation of the
movement to the point of maximum velocity. The data are
discussed with reference to systems that can be described
with second-order differential equations. The increase in
the slope of the displacement/peak-velocity relationship
for unstressed versus stressed vowels is suggestive of a
tonic increase in articulator stiffness. Variations in
displacement are attributed to the level of phasic
activity in the muscles producing the gesture.
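The appeal to second-order systems can be unpacked with a standard textbook sketch (not taken from the paper). For an undamped second-order articulator model,

\[
  m\,\ddot{x} + k\,(x - x_{\mathrm{eq}}) = 0, \qquad \omega_n = \sqrt{k/m},
\]

a shift of the equilibrium by \(\Delta x\) starting from rest yields a peak velocity

\[
  v_{\max} = \omega_n\,\Delta x,
\]

so the slope of the displacement/peak-velocity relation indexes mass-normalized stiffness, while phasic muscle activity sets the size of \(\Delta x\).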
Keller E, Ostry DJ (1983) Computerized measurement of tongue
dorsum movements with pulsed-echo ultrasound. J Acoust Soc
Am 73:1309-1315. Abstract | PDF
A
computerized system for the measurement of tongue dorsum
movements with pulsed echo ultrasound is described. The
presentation focuses on technical and methodological
considerations in the on-line acquisition of vertical
tongue movement information, its digital processing and
display. Problems associated with transducer placement,
peak detection, data averaging, and curve fitting are
considered, and validation procedures based on x ray and
indicators of measurement reliability are reported. The
discussion centers on advantages and disadvantages of the
technique and its applications.
Ostry DJ (1983) Determinants of interkey times in typing. In W. E. Cooper (ed.), Cognitive Aspects of Skilled Typewriting, Springer-Verlag New York Inc.
Abstract | PDF
Typewriting, musical instrument playing, spoken language,
and dance involve sophisticated motor skills and
associated symbol schemes. Researchers in cognition have
been interested in these abilities because they enable the
study of relations between the structure of motor behavior
and the organization of the associated formal system.
Typewriting, in particular, is of interest because of the
remarkable rate and complexity of finger and hand
movements involved and because its performance is readily
quantifiable. However, if typewriting is to be used to
study either cognitive or motor organization, the factors
contributing to its temporal structure must be identified.
In this chapter I present the findings of several studies
that examine variables that influence the pattern of
interkey times in typing. In addition to providing
evidence on the constituents of control in typing, the
studies provide a basis for the examination of proposals
by Ostry (1980) and Sternberg, Monsell, Knoll, and Wright
(1978) that certain aspects of typing control are
inherently tied to the execution of the sequence. The
suggestions arise from observations that patterns of
initial latency and interkey time are not changed by the
introduction of a delay between stimulus presentation and
a response signal. The inability to take advantage of a
preparation interval seems to indicate that the
programming of typing movements is intimately linked to
their execution.