Background: How does the brain convert sounds and phonemes into comprehensible speech? In the present magnetoencephalographic study we examined the hypothesis that the coherence of electromagnetic oscillatory activity within and across brain areas indicates neurophysiological processes linked to speech comprehension.

Results: Amplitude-modulated (sinusoidal, 41.5 Hz) auditory verbal and nonverbal stimuli served to drive steady-state oscillations in neural networks involved in speech comprehension. Stimuli were presented to 12 subjects in the following conditions: (a) an incomprehensible string of words, (b) the same string of words after being introduced as a comprehensible sentence by proper articulation, and (c) nonverbal stimulations that included a 600-Hz tone, a scale, and a melody. Coherence, defined as correlated activation of magnetic steady-state fields across brain areas and measured as simultaneous activation of current dipoles in source space (Minimum-Norm Estimates), increased within left-temporal-posterior areas when the sound string was perceived as a comprehensible sentence. Intra-hemispheric coherence was larger within the left than the right hemisphere for the sentence (condition (b) relative to all other conditions), and tended to be larger within the right than the left hemisphere for nonverbal stimuli (condition (c), tone and melody, relative to the other conditions), leading to a more pronounced hemispheric asymmetry for nonverbal than verbal material.

Conclusions: We conclude that coherent neuronal network activity may index encoding of verbal information on the sentence level and can be used as a tool to investigate auditory speech comprehension.

Background
One key function of the cerebral cortex involves the integration of elements into a percept that separates them from the background. In this process, changes in cortical networks are formed and modified by experience through the simultaneous excitation of groups of neurons [1-3]. These "long-range connections formed by excitatory cortical neurons" [[4], p. 3] are considered the anatomical substrate of this integrative capability. This integration has been modeled in detail for the visual system [e.g., [4]], and similar principles should also describe other sensory functions such as auditory speech perception and comprehension. This assumption was tested in the present study by probing patterns of co-activation within and across hemispheres during the processing of verbal and nonverbal acoustic material. Intra-hemispheric co-activation was taken as a large-scale measure of functional network activation, and coherence of oscillatory electromagnetic activity served as a measure of co-activation in time. Coherence is defined as the correlated activity between two locations within a distinct frequency range.
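Readers who want to see the measure concretely may consult the short sketch below. It is written in Python rather than the MATLAB-based tools used in the study, and the two simulated signals, the noise level, and the segment length are purely illustrative: the point is only that coherence is the frequency-resolved correlation of two time series, read off here at the 41.5-Hz modulation frequency used throughout the experiment.

# Minimal sketch (Python, not the authors' MATLAB pipeline): magnitude-squared
# coherence between two noisy signals that share a 41.5-Hz oscillatory drive,
# evaluated at the modulation frequency. All signal parameters are illustrative.
import numpy as np
from scipy.signal import coherence

fs = 1017.25                              # MEG sampling rate used in the study (Hz)
f_mod = 41.5                              # steady-state modulation frequency (Hz)
t = np.arange(0, 4.419, 1 / fs)           # one 4419-ms stimulus epoch

rng = np.random.default_rng(0)
drive = np.sin(2 * np.pi * f_mod * t)                  # shared oscillatory component
x = drive + 0.5 * rng.standard_normal(t.size)          # signal at "location x"
y = 0.8 * drive + 0.5 * rng.standard_normal(t.size)    # signal at "location y"

f, cxy = coherence(x, y, fs=fs, window='hann', nperseg=613)
print(f"coherence near {f_mod} Hz: {cxy[np.argmin(np.abs(f - f_mod))]:.2f}")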
Event-related brain responses, traditionally used in the study of cognitive processes, have been found to result from regional perturbations in ongoing brain activities in a self-organizing system rather than constituting a response set from an otherwise silent system. For example, Makeig and coworkers [5-7] showed that event-related potentials (ERP) must be viewed as perturbations in the oscillatory dynamics of the ongoing EEG. The response of successively activated groups of neurons is governed by an attractor, which means that different neuron groups, one after the other, contribute to large-scale changes in the magnetic field that move across brain areas, indicating spatio-temporal changes on a macroscopic level. The basin of attraction guarantees robustness of the propagating synchrony. Therefore, the activation of functional cortical networks may best be determined by examining the pattern of dynamic co-activation of groups of neurons [8,9]. As such, whenever neuronal cell assemblies fire 'in phase', the amplitude of oscillatory activity will increase.

On a macroscopic level, oscillatory coupling between large neuronal populations can be examined by externally driving the nervous system using oscillatory stimulation and then measuring the regional coherence of the electromagnetic activity [10]. Amplitude modulation of the stimuli induces the oscillatory pattern of the Steady-State Response (SSR). For auditory stimuli the SSR is most prominent at modulation frequencies around 40 Hz [11]. Patel & Balaban [12] assessed the synchronization of the magnetoencephalographic SSR at this frequency over time (i.e., coherence) in order to investigate neural correlates of musical comprehension. When the stimulus sequences formed a percept (a melody relative to a random sequence), coherence increased between left-posterior and right-frontal nodes. Similarly, Srinivasan et al. [13] found increased inter- and intra-hemispheric coherence in the visual SSR when subjects consciously recognized visual stimuli in their field of view. Coherence measures have also been employed in the investigation of complex networks involved in the processing of nouns [14,15], music [16], the perception of Necker cube reversals [17], and in the acquisition of contingencies in a conditioning paradigm [18].

The present study investigated coherence patterns of the auditory evoked magnetic Steady-State Field (SSF), specifically coherence among SSF generators within and across hemispheres, as a measure of neural networks involved in speech comprehension. If, as we hypothesized, the comprehension of speech was related to the activation of neuronal assemblies in the left hemisphere, then we should see increased coherence in this region with the recognition of a meaningful sentence as compared to an incomprehensible string of sounds. We further hypothesized that meaningful verbal stimuli should be processed differently from musical melodies. That is to say, verbal material should affect the coherence of electromagnetic signals more in the left than in the right hemisphere, whereas listening to a nonverbal complement of a meaningful sentence, like a melody, will activate more right- than left-hemispheric neuronal networks and influence coherence patterns involving the right hemisphere. Given that language and music share components, we assumed only a relative dominance in the interconnection of networks toward left- or right-hemispheric activity.

Results and discussion
The present study examined co-activated cortical networks involved in speech comprehension by using auditory steady-state (41.5-Hz amplitude-modulated) stimuli and measuring the coherence of generator activity of the magnetic steady-state response. Steady-state stimulus modulation was used for a sentence which – following a German play on words – was first presented as an incomprehensible string of sounds but became a comprehensible sentence after the sentence's meaning was explained to the subjects and it was properly articulated. In addition to verbal stimuli, nonverbal stimuli were also studied, which included a 600-Hz tone, a scale, and a melody-like combination of the scale's tones. The present analysis of SSF coherence in the source space (see Methods) extended previous approaches [12], which employed the SSR in the signal space to disclose networks involved in auditory perception.
Figure 1 (lower part) gives an example of the evoked magnetic 41.5-Hz SSF, averaged for the tone condition at the 148 sensors across the 12 subjects. The sinusoidal 41.5-Hz oscillation is evident at all 148 sensors, and a change in polarity over temporal areas suggests generator sources in the temporal cortices of each hemisphere. The Fourier transform confirms the peak at the modulation frequency of 41.5 Hz for all stimulus conditions in the sensor space (Fig. 1, upper left graph) and in the source space (mid-right graph in Fig. 1; illustrated for a selected dipole in the expected generator structure of the SSF, as indicated by the filled circle). No such peak was observed during the baseline.

Figure 1. Spectral power and topography of the Steady-State Field (averages of 12 subjects). Bottom graph: topography of the magnetic SSF evoked by the 600-Hz tone (top view, nose up). Upper left graph (green box): spectral power of the magnetic SSF at a selected sensor (left anterior, indicated by a green frame) for all conditions and the baseline. Upper right graph (blue box): spectral power of the magnetic SSF in the source space (MNE) for one selected dipole location; this dipole is approximately located in the area of the left auditory cortex and is indicated by a blue filled circle. Conditions: Tone = 600-Hz tone, NoComp = incomprehensible word string, Comp = word string after comprehension as sentence, T.seq = melody-like sequence of tones, Scale = tone scale.

A comparison of the grand averages of the power spectra in sensor and source space (see Fig. 2A) demonstrates that conversion using the Minimum Norm Estimate (see Methods) preserves the basic profile across conditions.

Figure 2. A: Topographical distributions of the SSF in source space (averages of 12 subjects) separately for the five conditions; orthographical top view, spherical spline interpolation, nose indicated by a small triangle. Each contour line represents 0.000125 nAm/cm (rounded values at scale); dark grey indicates higher values of activity. B: Asymmetry of activity in the source space as indicated by the Laterality Index (all left- minus all right-hemispheric MNE amplitude values divided by their sum), averaged across subjects. Positive values indicate left-lateralized and negative values right-lateralized activity; T-bars represent the standard error of the mean. Tone = 600-Hz tone, NoComp = incomprehensible word string, Comp = word string after comprehension as sentence, T.seq = melody-like sequence of tones, Scale = tone scale.

As expected for acoustic stimulation, overall MNE amplitudes were most pronounced in auditory areas of both hemispheres, with a varying degree of laterality. For the Laterality Index (see Methods and Fig. 2B), an interaction of CONDITION × HEMISPHERE (F(4,44) = 3.06, p < 0.05, ε = 0.69) verified that the nonverbal conditions, as compared to the verbal ones, induced a more pronounced asymmetry, with more activity in the right compared to the left hemisphere (main effect of HEMISPHERE, F(1,11) = 3.33, p < 0.1; main effect of CONDITION, F(4,44) = 12.65, p < 0.0001, ε = 0.57). Planned comparisons confirmed significant effects of HEMISPHERE only for the nonverbal conditions (tone, t(1,11) = 4.5, p < 0.0001; scale, t(1,11) = 4.3, p < 0.000; and melody-like tone sequence, t(1,11) = 3.8, p < 0.0005).
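Stated as a formula, the Laterality Index used here and in the coherence analyses below is simply the difference between the summed left- and right-hemispheric values divided by their sum. The helper below is a generic restatement in Python with illustrative inputs, not part of the published analysis.

# Laterality Index as defined for Fig. 2B and Fig. 3A: (left - right) / (left + right),
# computed over left- and right-hemispheric values (MNE amplitudes or coherence).
import numpy as np

def laterality_index(left_values, right_values):
    left, right = np.sum(left_values), np.sum(right_values)
    return (left - right) / (left + right)   # > 0: left-lateralized, < 0: right-lateralized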
Intra-hemispheric coherence was specifically affected by the conditions (CONDITION × HEMISPHERE, F(4,44) = 3.72, p < 0.05, ε = 0.46): as illustrated in Fig. 3A for the Laterality Index, higher intra-hemispheric coherence in the left than in the right hemisphere was induced when the string of words became a comprehensible sentence (planned comparison: t(1,11) = 2.7, p < 0.01), whereas the tone induced higher intra-hemispheric coherence in the right as compared to the left hemisphere (t(1,11) = 2.3, p < 0.05). The main effect of CONDITION was significant for intra-hemispheric coherence (F(4,44) = 8.35, p < 0.001, ε = 0.62) and inter-hemispheric coherence (F(4,44) = 10.79, p < 0.001, ε = 0.61), indicating that higher coherence was induced by nonverbal than by verbal conditions.

Since inter-hemispheric coherence may depend on the different generator strength, which was higher in the right than in the left hemisphere, the coherence measures were normalized in order to compensate for an effect of the signal-to-noise ratio. For normalization, the inter-hemispheric coherence measures were divided by the intra-hemispheric coherence measure of each condition. Still, a main effect of CONDITION (F(4,44) = 12.1, p < 0.0001, ε = 0.76) indicates that coherence was larger for nonverbal than for verbal conditions.

Given that the major goal was to depict network signatures specifically involved in sentence comprehension, we applied an ANOVA to compare the coherence measure of the two verbal conditions. These were identical with respect to the physical stimulation but differed in meaningful comprehension. For intra-hemispheric coherence, a significant interaction involving CONDITION × HEMISPHERE × GRADIENT (F(1,11) = 7.37, p < 0.05) reflected a relatively higher coherence in the left-posterior area after the string of words had been made comprehensible by explaining the sentence's meaning, as opposed to a higher coherence in the right-posterior area for the incomprehensible word string. Profiles of intra- and inter-hemispheric coherence were similar, thereby resulting in similar statistical power for the CONDITION effect. This cannot be explained simply by a reduced signal-to-noise ratio in the verbal conditions, because normalized values show the same effect. We rather assume that increased laterality varies with decreased inter-hemispheric communication (coherence).

Inter-hemispheric coherence between dipoles located in the left (left cortical input) and right (right cortical input) auditory cortex and the remaining dipole sites is characterized (Fig. 3B) by larger coherence of activity across areas including the left auditory, occipital, and right-posterior regions in response to the comprehensible sentence relative to the incomprehensible word string.

Figure 3. Topography of the coherence measure in source space (averages of 12 subjects). A: Asymmetry of intra-hemispheric coherence measures as indicated by the Laterality Index (all left- minus all right-intra-hemispheric coherence values divided by their sum) for the five conditions. Positive values indicate left-lateralized and negative values right-lateralized coherence; T-bars represent the standard error of the mean. Tone = 600-Hz tone, NoComp = incomprehensible word string, Comp = word string after comprehension as sentence, T.seq = melody-like sequence of tones, Scale = tone scale. B: Difference maps and t-maps of the coherence values comparing the two verbal conditions (Comp minus NoComp). Maps are shown in 110° top view and are spherical-spline interpolated; nose indicated by a triangle. The left graphs show the coherence between the area of the left auditory cortex and all 77 dipole locations (left cortical input), the right graphs the coherence between the area of the right auditory cortex and all 77 dipole locations (right cortical input). For difference maps, each contour line represents a step of 0.025 (without units); pink and red colours illustrate higher coherence values after sentence comprehension (Comp > NoComp), grey and black colours higher coherence values for the incomprehensible word string (Comp < NoComp). For t-maps, significant differences are shown at the 5% level in red (Comp > NoComp) and black (Comp < NoComp).
Considering coherent activity, i.e., synchronized oscillations between spatially distributed maps, as the representation of a percept, we followed Makeig et al. [6,7], who discuss evoked activity in terms of oscillatory perturbations, i.e., alterations of synchrony in ongoing activity. The comparison of two conditions with identical physical stimulation but different degrees of integration into a percept revealed that the synchronicity of the auditory SSF increased among areas in the posterior left-temporal and right-occipital cortex when a sentence was comprehensible compared to the same material being incomprehensible. This suggests that a network was activated when an intelligible sentence was being processed. This assumption is in line with previous research in which a left-posterior activity focus was found during semantic processing [19-23], a left-lateralized auditory-conceptual interface was localized at the temporal-parietal-occipital junction [24], and an occipital focus of oscillatory activity was found for the processing of (visually presented) content words relative to verbs [25].

Whereas Scott et al. [26] reported an increase in regional cerebral blood flow in the anterior part of the left superior temporal sulcus for intelligible sentences compared to acoustically equivalent non-intelligible sentences, the present results indicated such a pattern – enhanced left-anterior coherence – to be induced by the incomprehensible string of words (see Fig. 3B). At this point, hypotheses to resolve this discrepancy must remain provisional. However, it seems possible that the speech-like – though incomprehensible – stimuli activated syntactical processing, which has been associated with frontal activity [27]. In addition, the attempt to determine a syntactical structure has been found to activate the right temporal area [39], which would be in line with the right temporal coherence found for the present condition of incomprehensible word-string processing (see Fig. 3B). Patel and Balaban [12] discussed increased coherence between left-posterior and right-frontal areas for melody-like stimuli as a correlate of integrative processing of local and global pitch information. Thus, it seems possible that in our study the condition of the incomprehensible word string similarly activated pitch processing.

Finally, there is the possibility that the order of stimulus presentations may have affected the results. While counterbalancing was not possible for the specific verbal stimulus condition (see Methods), we would not have expected order effects to be large, since similar temporal dynamics were not observed for the nonverbal conditions. However, an effect of time cannot be ruled out, as steady-state responses and their generator activity were largest for a simple 600-Hz tone, which was presented first.
SSF were larger for the nonverbal conditions (tone, scale, melody) than for the verbal material, particularly in the right hemisphere. While right-hemispheric processing of tonal perception has frequently been reported [28-31], the general dominance of right-hemispheric SSF remains to be explained. As mentioned before, it seems possible that it reflects a carry-over effect from the sequence of conditions, which invariably started with the tone. It may also reflect bilateral processing of verbal material, which has been indicated by various imaging approaches [19]. The combination of verbal and nonverbal conditions within one experimental session may have blurred rather than elucidated the co-activation of material-specific networks.

Still, a greater right- over left-hemispheric asymmetry of generator activity was found in the nonverbal conditions and less asymmetry in the verbal conditions. Moreover, intra-hemispheric coherence showed distinct, hemisphere-specific patterns for verbal (more pronounced left-hemispheric) and nonverbal (more pronounced right-hemispheric coherence) processing. When lateralized coherence patterns were examined by a laterality index, the clearest left-hemispheric coherence focus emerged for the comprehensible sentence and the clearest right-hemispheric coherence focus emerged for the tone. While we had expected a melody-induced dominant right-hemispheric activation, a more bilateral activation was found for the melody-like tone sequence. For the scale, there was a shift towards left-hemispheric asymmetry of coherence. An explanation for this finding might be that the 'melody' was constructed to include the tones of the scale, which may have resulted in a melody-like tone sequence even though it did not resemble common melodies or songs. This processing of an unfamiliar 'melody' might have activated temporal (left) and spectral (right) processing, as suggested by [28,29], resulting in a more bilateral activation. While a simple tone contains only spectral information, a melody also contains temporal information.

Conclusions
In sum, the present study demonstrates that the analysis of the synchronization of evoked magnetic steady-state fields in the source space can map neuronal networks (co-)activated during speech comprehension. Our techniques add spatial information to evidence on left-hemispheric areas involved in language processing, and support co-activation or synchronization within complex neuronal networks as a cortical substrate of integration in perception – such as speech comprehension.
Methods
Subjects
Data of twelve German native-speaking subjects (7 female, mean age 25.3 ± 6.3 years) were included in the analysis. (Of the 14 subjects who had participated in the study, data from one had to be discarded because of frequent movement artifacts and from another who recognized the play on words; see below.) It was ascertained by interview that the subjects did not suffer from any language, audiological, or neurological dysfunction. Right-handedness was assessed by a modified version of the Edinburgh handedness questionnaire [32] to be 97.1 ± 4.3. Moreover, all subjects reported having first-degree right-handed relatives. None of the subjects reported being a professional musician, and none reported being particularly involved in hearing or practicing music. Prior to the experimental session, subjects were informed about the procedure and were given informed consent forms. After the experiment, each subject received a financial bonus of 15 €.

Material and design
All stimuli were amplitude modulated at 41.5 Hz (sinusoidal amplitude) with a modulation depth of 90%. Verbal stimuli consisted of words composing a sentence; nonverbal stimuli consisted of tones forming a scale, a tune, or a simple tone. A German play on words served as the template for the two verbal conditions. In the first case, a sentence is spoken without spacing between words and without accents, which creates an incomprehensible word string. The German sentence 'Mähn Äbte Heu? Heu mähn Äbte nie! Äbte mähn Gras' means in English 'Do abbots cut hay? Abbots never cut hay, abbots mow lawns'. If pronounced as a string, 'MähnÄbteHeuHeumähnÄbtenieÄbtemähnGras', this utterance, due to a lack of non-phonetic context [33], sounds like speech although meaning cannot be inferred. When the sentence is properly pronounced in the second case, the meaning becomes clear and can be used to parse the information on subsequent trials, allowing a listener to comprehend the sound string as a sentence. For the present study, the incomprehensible string-of-words version was generated synthetically (software: MBROLA) with a female voice and a fundamental frequency of 200 Hz. None of the 12 subjects included in the data analyses knew the play on words, and they were unable to recognize the meaning of the sentence before it was properly articulated and explained.

The three nonverbal conditions comprised a 600-Hz sinusoidal tone, a descending major scale (C6 B5 A5 G5 F5 E5 D5 C5, 1034–517 Hz), and an arrangement of the same tones (C5 E5 G5 C6 A5 F5 D5 B5). All stimuli of all conditions were adjusted to the length of the sentence and lasted 4419 ms (sample rate 16 kHz/16 bit, mono), and each of the five conditions comprised 15 repetitions that were separated by inter-stimulus intervals of 4419 ms. This long inter-stimulus interval allowed the same signal-to-noise ratio for the baseline and the stimulus conditions, which should prevent habituation effects on the SSF. Stimuli were adjusted to have the same average loudness by normalizing to the root-mean-square (RMS) and were presented at 50 dB above the individually assessed hearing threshold, balanced for both ears. In each subject, the hearing threshold was assessed by presenting short 600-Hz beeps with ascending and descending intensity; for each subject and ear, the mean hearing threshold was determined from the ascending and descending sequence.
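As an illustration of this stimulus construction, the sketch below generates a sinusoidally amplitude-modulated tone with the parameters given above. It is a simplified stand-in written in Python; the study's stimuli were prepared from spoken, synthesized, and tonal sound files rather than generated this way, and the carrier choice, the peak normalization, and the target RMS value are assumptions made for the example only.

# Illustrative sketch of the amplitude modulation described above: a 600-Hz
# carrier modulated sinusoidally at 41.5 Hz with 90% modulation depth.
import numpy as np

fs = 16000                                   # stimulus sample rate (16 kHz)
t = np.arange(int(4.419 * fs)) / fs          # 4419-ms stimulus duration

carrier = np.sin(2 * np.pi * 600 * t)                  # 600-Hz tone condition
envelope = 1 + 0.9 * np.sin(2 * np.pi * 41.5 * t)      # sinusoidal AM, 90% depth
stimulus = envelope * carrier / np.max(np.abs(envelope * carrier))

# Equalise average loudness across conditions via RMS normalisation
target_rms = 0.1                             # arbitrary illustrative target
stimulus *= target_rms / np.sqrt(np.mean(stimulus ** 2))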
Task and procedure
During the experiment, which lasted about 45 minutes, the subject lay in a supine position. Subjects were asked to listen carefully to the stimuli while fixating a point at the ceiling of the chamber in order to avoid head and eye movements. They were further informed that they would be asked questions about the stimuli during the experimental session, and that they should reply by saying 'yes' or 'no'.

All stimuli were presented in blocks of 15 repetitions, and conditions were separated by breaks of about 5 min each. For every subject, the experimental session started with the 600-Hz sine tone (15 repetitions, condition 1), followed by the word string (condition 2). After 5 repetitions, the subject was asked whether he/she understood what he/she was hearing and could reproduce the meaning of the speech (none of the subjects could). Subsequently, the stimulus presentation was continued, and the subject was asked again after the 10th and the 15th presentation whether he/she understood the meaning of the speech (none of them could). Then, the experimenter entered the room and pronounced the sentence properly and slowly, so that its meaning became clear. Each subject was asked to reproduce the sentence, in order to ascertain that it was properly understood. After the experimenter had left the subject chamber, the experiment continued with condition 3, which comprised the identical physical stimulation as condition 2, differing only in that the subject now listened to the string of words knowing its meaning. Again, the subjects were asked after 5 repetitions whether they could reproduce the meaning of the sentence, which now they all could. Given that once the sentence's meaning is obvious one can easily grasp the sentence, the sequence of conditions 2 and 3 could not be reversed, and thus the sequence of presentation could not be randomized across subjects.

Conditions 4 (scale) and 5 (melody-like tone sequence) were arranged in a similar way, in that the subject was asked after 5 repetitions of each whether or not s/he perceived the sequence of tones as a melody. Eleven of the twelve subjects indicated that the tone sequence sounded like a melody, and one was not sure about it. None of them perceived the scale or the simple tone as melodic.

Data acquisition and analysis
The magnetoencephalogram (MEG) was recorded with a 148-channel whole-head system (MAGNES 2500WH, 4D Neuroimaging, San Diego, USA) installed in a magnetically shielded room (Vacuumschmelze, Hanau, Germany). Data were recorded continuously with a sampling rate of 1017.25 Hz and a 0.1–100 Hz band-pass filter. The electrooculogram (EOG) and the electrocardiogram (EKG) were recorded and stored together with the MEG data for offline artifact control. Silver-silver chloride electrodes were placed on the outer canthi for the monitoring of horizontal eye movements and above and below the right eye for vertical eye movements; EKG electrodes were placed on the right collarbone and below the left costal arch.

Prior to data analysis, the trials for each condition were submitted to a noise-reduction procedure that subtracted the external noise recorded by the MEG reference channels. These noise-corrected data were then band-pass filtered (28–60 Hz, 48 dB/oct, zero-phase) and averaged across epochs separately for each condition (epoch length: 8838 ms, 4419-ms pre-stimulus baseline). Epochs were visually inspected for EOG and EKG artifacts, and epochs with magnetic fields greater than 5 pT were rejected. A minimum of 13 (of the total 15) epochs per subject were available for further analyses.
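The zero-phase band-pass filtering step described above can be sketched in a few lines. The version below uses a Butterworth filter applied forward and backward in Python; the filter family, its order, and the simulated single-channel epoch are illustrative assumptions, not the acquisition software's exact filter.

# Sketch of zero-phase 28-60 Hz band-pass filtering of one noise-corrected channel.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1017.25                                   # MEG sampling rate (Hz)
b, a = butter(4, [28 / (fs / 2), 60 / (fs / 2)], btype='bandpass')

raw = np.random.randn(int(8.838 * fs))         # placeholder 8838-ms epoch
filtered = filtfilt(b, a, raw)                 # forward-backward = zero phase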
The steady-state field (SSF) in response to the 41.5-Hz amplitude-modulated stimuli was extracted using a moving-average procedure. A window of 5 cycles (120.5 ms) of the 41.5-Hz steady-state signal was shifted 179 times, cycle by cycle (24.5 ms), across the averaged epochs (separately for the 4419-ms baseline and the 4419-ms stimulus duration, the moving-average procedure starting 144.5 ms post stimulus). The resulting moving-average epoch was detrended. Figure 1 illustrates that a SSF was successfully induced by the stimulation.
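A compact way to see what this step produces is sketched below. The sketch follows the description above (5-cycle windows stepped by one modulation cycle, then detrending) but is a generic Python illustration rather than the authors' implementation; the placeholder epoch, the rounding of the cycle length to whole samples, and the handling of windows that would run past the end of the epoch are assumptions.

# Sketch of the moving-average extraction of the steady-state field: successive
# 5-cycle windows, shifted cycle by cycle across the averaged epoch, are averaged
# into one 5-cycle segment and detrended. Illustrative only.
import numpy as np
from scipy.signal import detrend

fs = 1017.25                                 # MEG sampling rate (Hz)
cycle = int(round(fs / 41.5))                # samples per 41.5-Hz cycle (~24 ms)
window = 5 * cycle                           # 5-cycle analysis window (~120.5 ms)

def extract_ssf(avg_epoch, n_shifts=179):
    """Average up to n_shifts five-cycle windows, stepped one cycle at a time."""
    segments = [avg_epoch[i * cycle: i * cycle + window]
                for i in range(n_shifts)
                if i * cycle + window <= avg_epoch.size]
    return detrend(np.mean(segments, axis=0))

avg_epoch = np.random.randn(int(4.419 * fs))  # placeholder for an averaged epoch
ssf_segment = extract_ssf(avg_epoch)          # one detrended 5-cycle SSF segment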
The generators of the SSF were determined in the source space for each epoch using the minimum norm estimate (MNE) [34-37], with an algorithm implemented in MATLAB-based in-house software developed by Hauk [35,36]. The MNE is an inverse method that reconstructs the primary current underlying an extracranially recorded, time-locked magnetic field. The procedure is based on the assumption that the data vector d, which contains the recorded magnetic activity at the given sensor sites, can be described as the product of the leadfield matrix L, which specifies the sensors' sensitivity to the sources, and the source current vector j [34], plus a noise component ε. Since L and d are known, and ε is treated as if estimated with an accuracy of ~0.05, the MNE for j is the mathematically unique solution that minimizes the squared current density (‖j‖² = min). This solution is obtained by multiplying the pseudo-inverse of the leadfield matrix L with the data. Given the high number of sensors and the presence of noise, spatial regularization is performed with the factor λ. This algorithm allows sources to be omitted if they do not contribute to the measured magnetic field; a priori information about the number or locations of cortical sources is not required. Following Hauk et al. [35,36], who evaluated the dependence of the accuracy of inverse solutions on the depth of the source for concentric shells, solutions for a shell at 60% radius were determined as a compromise between blurring and depth sensitivity (approximately the average radius of the cortex); 77 equidistant dipole locations, covering the lateral surface of the brain, were chosen. That is, the recorded data were projected to a source space consisting of 350 evenly distributed dipoles with three orthogonal orientations at each dipole location, and for every location the two tangentially oriented dipole components were included in further analysis. The mean MNE amplitude, corresponding to the dipole strength in nAm/cm, was determined as the mean vector length of both tangentially oriented dipole components across 5 cycles.
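For orientation, a generic regularized minimum-norm solution of the model d = Lj + ε can be written in a few lines. The sketch below is a textbook (Tikhonov-style) formulation in Python, not the in-house MATLAB implementation used here, and the leadfield matrix, data vector, and regularization value are random stand-ins.

# Generic sketch of a regularised minimum-norm estimate for d = L j + eps.
# Not the authors' implementation; all inputs below are illustrative stand-ins.
import numpy as np

def minimum_norm_estimate(L, d, lam=0.05):
    """Return the source vector j with minimal ||j||^2 that explains d ~= L @ j.

    L   : (n_sensors, n_sources) leadfield matrix
    d   : (n_sensors,) measured field vector
    lam : regularisation factor (the lambda above), damping noise sensitivity
    """
    gram = L @ L.T + lam * np.eye(L.shape[0])   # regularised sensor-space matrix
    return L.T @ np.linalg.solve(gram, d)       # j = L^T (L L^T + lam I)^-1 d

# Example dimensions loosely following the Methods: 148 sensors, two tangential
# dipole components at each of 77 lateral locations.
rng = np.random.default_rng(1)
L = rng.standard_normal((148, 2 * 77))
d = rng.standard_normal(148)
j_hat = minimum_norm_estimate(L, d)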
This algorithm tional alpha correction. estimates the coherence of two vectors x and y by comput- ing the ratio of the squared cross power spectra (Pxy), List of abbreviations divided by the product of the power spectra for each vec- ANOVA: Analysis of variance Page 9 of 11 (page number not for citation purposes) BMC Neuroscience 2004, 5:40 http://www.biomedcentral.com/1471-2202/5/40 crete and abstract nouns in humans. Neuroscience Letters 1996, EEG: Electroencephalogram 209:17-20. 15. Weiss S, Rappelsberger P: Long-range EEG-synchronisation MEG: Magnetoencephalogram during word encoding correlates with successful memory performance. Cognitive Brain Research 2000, 9:299-312. 16. Bhattacharya J, Petsche H, Perda E: Long-range synchrony in the MNE: Minimum Norm Estimate gamma band: Role in music perception. Journal of Neuroscience 2001, 21:6329-6337. 17. Gaetz M, Weinberg H., Rzempoluck E, Jantzen KJ: Neural network RMS: Root-mean square classifications and correlation analysis of EEG and MEG activity accompanying spontaneous reversals of the Necker cube. Cogn Brain Res 1998, 6:335-346. SSF: Steady-State- (magnetic) Field 18. Miltner WHR, Braun C, Arnold M, Witte H, Taub E: Coherence of γ-EEG activity as a basis of associative learning. Nature 1999, SSR: Steady-State-Response 397:434-436. 19. Newman AJ, Pancheva R, Ozawa K, Neville HJ, Ullman MT: An event-related fMRI study of syntactic and semantic Authors' contributions violations. Journal of Psycholinguistic Research 2001, 30:339-364. 20. Binder J: The new neuroanatomy of speech perception. Brain MH developed the experimental design, carried out the 2000, 123(Pt 12):2371-2372. experimental study and developed and accomplished the 21. Binder JR, Frost JA, Hammeke TA, Rao SM, Cox RW: Function of data analyses, BR supervised the study and composed the the left planum temporale in auditory and linguistic processing. Brain 1996, 119:1239-1247. paper, AK advised and assisted the SS-design and SSF anal- 22. Demb JB, Desmond JE, Wagner AD, Vaidya CJ: Semantic encoding ysis, CW supervised the MEG measurements and advised and retrieval in the left inferior prefrontal cortex: A func- tional MRI study of task difficulty and process specificity. Jour- the coherence analyses, TE provided the experimental nal of Neuroscience 1995, 15:5870-5878. idea, advised the experimental design, the MEG methods 23. Demonet JF, Chollet F, Ramsay S, Cardebat D, Nespoulous JL, Wise and analyses. R, Rascol A, Frackowiak R: The anatomy of phonological and semantic processing in normal subjects. Brain 1992, 115:1753-1768. Acknowledgment 24. Hichok G, Poeppel D: Towards a functional neuroanatomy of Research was supported by the Deutsche Forschungsgemeinschaft and the speech perception. Trends Cogn Sci 2000, 4:131-138. 25. Pulvermüller F, Preissel H, Lutzenberger W, Birbaumer N: Brain Volkswagen-Stiftung. We are grateful to Dr. William J. Ray for correcting rhythms of language: Nouns vs verbs. European Journal of the revision. Neuroscience 1996, 8:937-941. 26. Scott SK, Blank CC, Rosen S, Wise RJS: Identification of a path- References way for intelligible speech in the left temporal lobe. Brain 1. Hebb DO: The organization of behavior; a neuropsychologi- 2000, 123:2400-2406. 27. Hahne A, Jescheniak JD: What's left if the Jabberwock gets the cal theory. Wiley, New York; 1949. 2. Singer W: Search for coherence: A basic principle of cortical semantics? An ERP investigation into semantic and syntactic processes during auditory sentence comprehension. 
Effects of the five conditions on the distribution of MNE amplitudes and on the coherence measure were evaluated by means of repeated-measures analyses of variance (ANOVA) with the factors CONDITION, HEMISPHERE (comparing all left and all right dipole locations, excluding midline locations), and GRADIENT (comparing left- and right-anterior versus left- and right-posterior dipole locations, excluding midline locations). For inspection of the hemispheric asymmetry of the MNE, the ANOVA was performed on the Laterality Index (LI: left- minus right-hemispheric MNE divided by their sum, resulting in an index without units). For the evaluation of intra-hemispheric coherence, the first-order coherence between the respective left- or right-hemispheric ROI and the other 34 locations of the same hemisphere entered the ANOVA comparing conditions. For the evaluation of inter-hemispheric coherence, the coherence between the left-hemispheric ROI and the 34 locations of the right hemisphere and between the right-hemispheric ROI and the 34 locations of the left hemisphere was submitted to the ANOVA comparing conditions. A separate ANOVA of the two verbal conditions with the factors CONDITION, HEMISPHERE and GRADIENT probed the hypothesis of a change in coherence topography induced by sentence comprehension. Where appropriate, significance levels are reported with Greenhouse-Geisser-corrected degrees of freedom. Interactions were verified by planned post-hoc comparisons (two-tailed paired t-tests) and displayed in t-maps without additional alpha correction.

List of abbreviations
ANOVA: Analysis of variance
EEG: Electroencephalogram
MEG: Magnetoencephalogram
MNE: Minimum Norm Estimate
RMS: Root mean square
SSF: Steady-State Field (magnetic)
SSR: Steady-State Response

Authors' contributions
MH developed the experimental design, carried out the experimental study, and developed and carried out the data analyses. BR supervised the study and composed the paper. AK advised and assisted with the steady-state design and the SSF analysis. CW supervised the MEG measurements and advised on the coherence analyses. TE provided the experimental idea and advised on the experimental design, the MEG methods, and the analyses.

Acknowledgment
Research was supported by the Deutsche Forschungsgemeinschaft and the Volkswagen-Stiftung. We are grateful to Dr. William J. Ray for correcting the revision.
References
1. Hebb DO: The organization of behavior: A neuropsychological theory. Wiley, New York; 1949.
2. Singer W: Search for coherence: A basic principle of cortical self-organization. Concepts Neurosci 1990, 1:1-26.
3. Singer W: Development and plasticity of cortical processing architectures. Science 1995, 270:758-764.
4. Löwel S, Singer W: Experience-dependent plasticity of intracortical connections. In Perceptual Learning. Edited by Fahle M, Poggio T. Cambridge: MIT Press; 2002:3-18.
5. Makeig S, Westerfield M, Townsend J, Jung TP, Courchesne E, Sejnowski TJ: Functionally independent components of the late positive event-related potential during visual spatial attention. J Neurosci 1999, 19:2665-2680.
6. Makeig S, Westerfield M, Jung TP, Enghoff S, Townsend J, Courchesne E, Sejnowski TJ: Dynamic brain sources of visual evoked responses. Science 2002, 295:690-693.
7. Makeig S, Delorme A, Westerfield M, Jung TP, Townsend J, Courchesne E, Sejnowski TJ: Electroencephalographic brain dynamics following manually responded visual targets. PLoS Biol 2004, 2:747-762.
8. Singer W: Neuronal synchrony: A versatile code for the definition of relations? Neuron 1999, 24:49-65.
9. Varela F, Lachaux JP, Rodriguez E, Martinerie J: The brainweb: Phase synchronization and large-scale integration. Nature Reviews Neuroscience 2001, 2:229-238.
10. Elbert T, Keil A: Imaging in the fourth dimension. Nature 2000, 404:29-30.
11. Galambos R, Makeig S, Talmachoff PJ: A 40-Hz auditory potential recorded from the human scalp. Proc Natl Acad Sci USA 1981, 78:2643-2647.
12. Patel AD, Balaban E: Temporal patterns of human cortical activity reflect tone sequence structure. Nature 2000, 404:80-84.
13. Srinivasan R, Russell DP, Edelman GM, Tononi G: Increased synchronization of neuromagnetic responses during conscious perception. Journal of Neuroscience 1999, 19:5435-5448.
14. Weiss S, Rappelsberger P: EEG coherence within the 13–18 Hz band as a correlate of a distinct lexical organisation of concrete and abstract nouns in humans. Neuroscience Letters 1996, 209:17-20.
15. Weiss S, Rappelsberger P: Long-range EEG synchronization during word encoding correlates with successful memory performance. Cognitive Brain Research 2000, 9:299-312.
16. Bhattacharya J, Petsche H, Pereda E: Long-range synchrony in the gamma band: Role in music perception. Journal of Neuroscience 2001, 21:6329-6337.
17. Gaetz M, Weinberg H, Rzempoluck E, Jantzen KJ: Neural network classifications and correlation analysis of EEG and MEG activity accompanying spontaneous reversals of the Necker cube. Cognitive Brain Research 1998, 6:335-346.
18. Miltner WHR, Braun C, Arnold M, Witte H, Taub E: Coherence of gamma-band EEG activity as a basis for associative learning. Nature 1999, 397:434-436.
19. Newman AJ, Pancheva R, Ozawa K, Neville HJ, Ullman MT: An event-related fMRI study of syntactic and semantic violations. Journal of Psycholinguistic Research 2001, 30:339-364.
20. Binder J: The new neuroanatomy of speech perception. Brain 2000, 123(Pt 12):2371-2372.
21. Binder JR, Frost JA, Hammeke TA, Rao SM, Cox RW: Function of the left planum temporale in auditory and linguistic processing. Brain 1996, 119:1239-1247.
22. Demb JB, Desmond JE, Wagner AD, Vaidya CJ: Semantic encoding and retrieval in the left inferior prefrontal cortex: A functional MRI study of task difficulty and process specificity. Journal of Neuroscience 1995, 15:5870-5878.
23. Démonet JF, Chollet F, Ramsay S, Cardebat D, Nespoulous JL, Wise R, Rascol A, Frackowiak R: The anatomy of phonological and semantic processing in normal subjects. Brain 1992, 115:1753-1768.
24. Hickok G, Poeppel D: Towards a functional neuroanatomy of speech perception. Trends Cogn Sci 2000, 4:131-138.
25. Pulvermüller F, Preissl H, Lutzenberger W, Birbaumer N: Brain rhythms of language: Nouns versus verbs. European Journal of Neuroscience 1996, 8:937-941.
26. Scott SK, Blank CC, Rosen S, Wise RJS: Identification of a pathway for intelligible speech in the left temporal lobe. Brain 2000, 123:2400-2406.
27. Hahne A, Jescheniak JD: What's left if the Jabberwock gets the semantics? An ERP investigation into semantic and syntactic processes during auditory sentence comprehension. Cognitive Brain Research 2001, 11:199-212.
28. Zatorre RJ, Evans AC, Meyer E: Neural mechanisms underlying melodic perception and memory for pitch. Journal of Neuroscience 1994, 14:1908-1919.
29. Zatorre RJ, Belin P, Penhune VB: Structure and function of auditory cortex: music and speech. Trends Cogn Sci 2002, 6:37-46.
30. Liégeois-Chauvel C, Peretz I, Babaï M, Laguitton V, Chauvel P: Contribution of different cortical areas in the temporal lobes to music processing. Brain 1998, 121:1853-1867.
31. Evers S, Dannert J, Rödding D, Rötter G, Ringelstein EB: The cerebral haemodynamics of music perception: A transcranial Doppler sonography study. Brain 1999, 122(Pt 1):75-85.
32. Oldfield RC: The assessment and analysis of handedness: The Edinburgh Inventory. Neuropsychologia 1971, 9:97-113.
33. Kohler KJ: The disappearance of words in connected speech. ZAS Working Papers in Linguistics 1998, 11:21-34.
34. Hämäläinen M, Ilmoniemi RJ: Interpreting measured magnetic fields of the brain: Estimates of current distributions. Technical Report TKK-F-A559, Helsinki University of Technology, Finland; 1984.
35. Hauk O, Berg P, Wienbruch C, Rockstroh B, Elbert T: The minimum norm method as an effective mapping tool for MEG analysis. In Recent Advances in Biomagnetism. Edited by Yoshimoto T, Kotani M, Kuriki M, Karibe H, Nakasato N. Tohoku University Press; 1999:213-216.
36. Hauk O, Keil A, Elbert T, Müller MM: Comparison of data transformation procedures to enhance topographical accuracy in time-series analysis of the human EEG. J Neurosci Methods 2002, 113:111-122.
37. Grave de Peralta Menendez R, Hauk O, Gonzalez Andino S, Vogt H, Michel C: Linear inverse solutions with optimal resolution kernels applied to electromagnetic tomography. Human Brain Mapping 1997, 5:454-467.
38. Proakis JG, Manolakis DG: Digital Signal Processing: Principles, Algorithms, and Applications. 3rd edition. London: Prentice-Hall.
39. Knösche TR, Maess B, Friederici AD: Processing of syntactic information monitored by brain surface current density mapping based on MEG. Brain Topogr 1999, 12:75-87.
BMC Neuroscience 2004, 5:40 (http://www.biomedcentral.com/1471-2202/5/40). Published: 24 October 2004.