Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language
Section snippets
Introduction and preliminaries
The functional anatomical framework for language presented in this paper is based on a rather old insight in language research, dating back at least to the 19th century (e.g. Wernicke, 1874/1969): sensory speech codes must minimally interface with two systems, a conceptual system and a motor–articulatory system. The existence of an interface with the conceptual system requires no motivation; such an interface is required if we are to comprehend the meaning of the words we …
Overview of the framework
The framework we have proposed (Hickok & Poeppel, 2000) and further develop here draws heavily on what is known about the functional anatomy of vision, and more recently audition, particularly the distinction that has been made between dorsal and ventral streams. Most of the discussion of dorsal and ventral streams in the literature centers on the concept of “where” and “what” pathways (Ungerleider & Mishkin, 1982). The fundamental distinction proposed by Ungerleider and Mishkin was that visual …
Task dissociations in “speech perception”
One central thesis of our approach is that the execution of different linguistic tasks (functions) involves non-identical neural networks, even with stimulus conditions held constant. In this section we review evidence that supports this assumption in the domain of speech perception. In particular, the evidence shows that the ability to perform sub-lexical speech tasks (phoneme identification, rhyming tasks, and so on) double-dissociates from the ability to comprehend words (which presumably …
The ventral stream
The ventral stream, which one can broadly conceptualize as an auditory ‘what’ system, deals with the conversion of sensory speech input into a format suitable for linguistic computation. As such, this pathway handles (probably multiple levels of) acoustic–phonetic processing, the interface of acoustic–phonetic representations with lexical representations, and the interface of lexical items or roots with the computational system responsible for syntactic and …
The dorsal stream
Using the organization of the visual system as a guide, we have hypothesized the existence of a dorsal auditory stream which is critical for auditory–motor integration (Hickok & Poeppel, 2000). In this section we first outline current views on dorsal-stream sensory–motor integration networks in vision, and then specify the role that an auditory–motor integration system might play in speech/language. Finally, we turn to neural evidence relevant to mapping the spatial distribution of this network.
Perception–production overlap in posterior “sensory” cortex
A critical component of Wernicke's 1874 model was that auditory representations of speech play an important role in speech production; this is how he explained speech production errors (paraphasias) in aphasias caused by lesions to left auditory areas. Available evidence suggests he was correct (Buchsbaum et al., 2001; Hickok, 2001; Hickok et al., 2000).
Some of the best evidence comes from conduction aphasia (Hickok, 2000). Such patients have two primary deficits, phonemic paraphasias in …
Understanding aphasia
The framework outlined in this article and schematized in Fig. 1 is in part motivated by findings from the deficit-lesion literature and should account in natural ways for relevant aphasic syndromes. Here we summarize how the proposal provides a framework to discuss deficit-lesion data using four types of clinical deficits.
Summary and conclusions
The framework for the functional anatomy of language which we have outlined here has strengths and weaknesses. The limitations are straightforward. It is very broad in scope, and therefore glosses over many important details: what exactly is an “acoustic–phonetic representation of speech”? What are the computations involved in mapping sound onto meaning, or auditory onto motor representations? (However, some existing models may fit well into the current framework, as suggested above.) It does …
Acknowledgements
This work has benefited from many discussions and correspondence with colleagues and students, including Kathy Baines, Laura Barde, Brad Buchsbaum, Hugh Buckingham, Nina Dronkers, Nicole Gage, Jack Gandour, Colin Humphries, John Jonides, Sophie Scott, and Richard Wise. We are also grateful to four Cognition reviewers who provided excellent and constructive comments on this manuscript. This work was supported by NIH grant R01DC0361 (G.H.) and by NIH grant R01DC05660 (D.P.). During the preparation of …
References (121)
- et al. (1997). The evolutionary origin of language areas in the human brain: a neuroanatomical perspective. Brain Research Reviews.
- et al. (1995). Size-contrast illusions deceive the eye but not the hand. Current Biology.
- et al. (1999). Conduction aphasia and the arcuate fasciculus: a reexamination of the Wernicke-Geschwind model. Brain and Language.
- et al. (1981). Interaction between phonological and semantic factors in auditory comprehension. Neuropsychologia.
- et al. (2001). Parametrically dissociating speech and nonspeech perception in the brain using fMRI. Brain and Language.
- et al. (1977). Phonological factors in auditory comprehension in aphasia. Neuropsychologia.
- et al. (1977). The perception and production of voice-onset time in aphasia. Neuropsychologia.
- et al. (2001). Role of left posterior superior temporal gyrus in phonological processing for speech perception and production. Cognitive Science.
- (2001). Do action systems resist visual illusions? Trends in Cognitive Sciences.
- (1998). Cognitive reality and the phonological lexicon: a review. Journal of Neurolinguistics.