Language processing in the brain


In psycholinguistics, language processing refers to the way humans use words to communicate ideas and feelings, and to how such communications are processed and understood. Language processing is considered a uniquely human ability: it is not produced with the same grammatical understanding or systematicity even in humans' closest primate relatives.
Throughout the 20th century the dominant model for language processing in the brain was the Wernicke–Lichtheim–Geschwind model, which is based primarily on the analysis of brain-damaged patients. However, thanks to improvements in intra-cortical electrophysiological recordings of monkey and human brains, as well as non-invasive techniques such as fMRI, PET, MEG and EEG, a dual auditory pathway has been revealed and a two-streams model has been developed. In accordance with this model, there are two pathways that connect the auditory cortex to the frontal lobe, each pathway accounting for different linguistic roles. The auditory ventral stream (AVS) is responsible for sound recognition, and is accordingly known as the auditory 'what' pathway. The auditory dorsal stream (ADS) in both humans and non-human primates is responsible for sound localization, and is accordingly known as the auditory 'where' pathway. In humans, this pathway is also responsible for speech production, speech repetition, lip-reading, and phonological working memory and long-term memory. In accordance with the 'from where to what' model of language evolution, the reason the ADS is characterized by such a broad range of functions is that each indicates a different stage in language evolution.
The division into the two streams first occurs in the auditory nerve, where the anterior branch enters the anterior cochlear nucleus in the brainstem, giving rise to the auditory ventral stream. The posterior branch enters the dorsal and posteroventral cochlear nuclei, giving rise to the auditory dorsal stream.
Language processing can also occur in relation to signed languages or written content.

Early neurolinguistic models

Throughout the 20th century, our knowledge of language processing in the brain was dominated by the Wernicke–Lichtheim–Geschwind model. This model is primarily based on research conducted on brain-damaged individuals who were reported to have a variety of language-related disorders. In accordance with this model, words are perceived via a specialized word-reception center (Wernicke's area) located in the left temporoparietal junction. This region then projects to a word-production center (Broca's area) located in the left inferior frontal gyrus. Because almost all language input was thought to funnel via Wernicke's area and all language output to funnel via Broca's area, it became extremely difficult to identify the basic properties of each region. This lack of clear definition for the contribution of Wernicke's and Broca's regions to human language rendered it extremely difficult to identify their homologues in other primates. With the advent of fMRI and its application to lesion mapping, however, it was shown that this model is based on incorrect correlations between symptoms and lesions. The refutation of such an influential and dominant model opened the door to new models of language processing in the brain.

Current neurolinguistic models

Anatomy

In the last two decades, significant advances have occurred in our understanding of the neural processing of sounds in primates. Initially through recordings of neural activity in the auditory cortices of monkeys, and later through histological staining and fMRI scanning studies, three auditory fields were identified in the primary auditory cortex, and nine associative auditory fields were shown to surround them. Anatomical tracing and lesion studies further indicated a separation between the anterior and posterior auditory fields, with the anterior primary auditory fields projecting to the anterior associative auditory fields, and the posterior primary auditory field projecting to the posterior associative auditory fields. Recently, evidence has accumulated indicating homology between the human and monkey auditory fields. In humans, histological staining studies revealed two separate auditory fields in the primary auditory region of Heschl's gyrus, and by mapping the tonotopic organization of the human primary auditory fields with high-resolution fMRI and comparing it to the tonotopic organization of the monkey primary auditory fields, homology was established between the human anterior primary auditory field and monkey area R, and between the human posterior primary auditory field and monkey area A1. Intra-cortical recordings from the human auditory cortex further demonstrated patterns of connectivity similar to those of the monkey auditory cortex. Recordings from the surface of the auditory cortex reported that the anterior Heschl's gyrus (area hR) projects primarily to the middle-anterior superior temporal gyrus (mSTG-aSTG) and the posterior Heschl's gyrus (area hA1) projects primarily to the posterior superior temporal gyrus (pSTG) and the planum temporale. Consistent with connections from area hR to the aSTG and from hA1 to the pSTG is an fMRI study of a patient with impaired sound recognition, who showed reduced bilateral activation in areas hR and aSTG but spared activation in the mSTG-pSTG.
This connectivity pattern is also corroborated by a study that recorded activation from the lateral surface of the auditory cortex and reported simultaneous non-overlapping activation clusters in the pSTG and mSTG-aSTG while listening to sounds.
Downstream to the auditory cortex, anatomical tracing studies in monkeys delineated projections from the anterior associative auditory fields to ventral prefrontal and premotor cortices in the inferior frontal gyrus (IFG) and to the amygdala. Cortical recording and functional imaging studies in macaque monkeys further elaborated on this processing stream by showing that acoustic information flows from the anterior auditory cortex to the temporal pole and then to the IFG. This pathway is commonly referred to as the auditory ventral stream. In contrast to the anterior auditory fields, tracing studies reported that the posterior auditory fields project primarily to dorsolateral prefrontal and premotor cortices. This pathway is commonly referred to as the auditory dorsal stream. Comparing the white matter pathways involved in communication in humans and monkeys with diffusion tensor imaging techniques indicates similar connections of the AVS and ADS in the two species. In humans, the pSTG was shown to project to the parietal lobe, and from there to dorsolateral prefrontal and premotor cortices, and the aSTG was shown to project to the anterior temporal lobe and from there to the IFG.

Auditory ventral stream

The auditory ventral stream connects the auditory cortex with the middle temporal gyrus (MTG) and temporal pole (TP), which in turn connect with the inferior frontal gyrus. This pathway is responsible for sound recognition, and is accordingly known as the auditory 'what' pathway. The functions of the AVS include the following.

Sound recognition

Converging evidence indicates that the AVS is involved in recognizing auditory objects. At the level of the primary auditory cortex, recordings from monkeys showed a higher percentage of neurons selective for learned melodic sequences in area R than in area A1, and a study in humans demonstrated more selectivity for heard syllables in the anterior Heschl's gyrus than in the posterior Heschl's gyrus. In downstream associative auditory fields, studies from both monkeys and humans reported that the border between the anterior and posterior auditory fields processes pitch attributes that are necessary for the recognition of auditory objects. The anterior auditory fields of monkeys also demonstrated selectivity for conspecific vocalizations in intra-cortical recording and functional imaging studies. One monkey fMRI study further demonstrated a role of the aSTG in the recognition of individual voices. The role of the human mSTG-aSTG in sound recognition was demonstrated via functional imaging studies that correlated activity in this region with the isolation of auditory objects from background noise, and with the recognition of spoken words, voices, melodies, environmental sounds, and non-speech communicative sounds. A meta-analysis of fMRI studies further demonstrated a functional dissociation between the left mSTG and aSTG, with the former processing short speech units and the latter processing longer units. A study that recorded neural activity directly from the left pSTG and aSTG reported that the aSTG, but not the pSTG, was more active when the patient listened to speech in her native language than to an unfamiliar foreign language. Consistently, electrical stimulation to the aSTG of this patient resulted in impaired speech perception. Intra-cortical recordings from the right and left aSTG further demonstrated that speech is processed laterally to music.
An fMRI study of a patient with impaired sound recognition due to brainstem damage also showed reduced activation in areas hR and aSTG of both hemispheres when the patient heard spoken words and environmental sounds. Recordings from the anterior auditory cortex of monkeys while they maintained learned sounds in working memory, and the debilitating effect of induced lesions to this region on working memory recall, further implicate the AVS in maintaining perceived auditory objects in working memory. In humans, area mSTG-aSTG was also reported active during rehearsal of heard syllables with MEG and fMRI. The latter study further demonstrated that working memory in the AVS is for the acoustic properties of spoken words and that it is independent of working memory in the ADS, which mediates inner speech. Working memory studies in monkeys also suggest that in monkeys, in contrast to humans, the AVS is the dominant working memory store.
In humans, downstream to the aSTG, the MTG and TP are thought to constitute the semantic lexicon, a long-term memory repository of audio-visual representations that are interconnected on the basis of semantic relationships. The primary evidence for this role of the MTG-TP is that patients with damage to this region are reported to have an impaired ability to describe visual and auditory objects, and a tendency to commit semantic errors when naming objects. Such semantic paraphasias were also expressed by aphasic patients with left MTG-TP damage, and were shown to occur in non-aphasic patients after electro-stimulation to this region or to the underlying white matter pathway. Two meta-analyses of the fMRI literature also reported that the anterior MTG and TP were consistently active during semantic analysis of speech and text, and an intra-cortical recording study correlated neural discharge in the MTG with the comprehension of intelligible sentences.