Word Comprehension

  1. How do we understand the words we hear?
  2. Semantic memory
  3. Organization of the lexicon

1. How do we understand the words we hear?
Language comprehension occurs rather effortlessly for most of us, yet it is the result of complex mechanisms. These mechanisms have been identified through the three main approaches to discovering cognitive mechanisms: behavioral studies, neuroimaging, and research with patients. As in most areas of cognition, the bulk of the evidence for language comprehension mechanisms comes from reaction time and accuracy studies, although some of these findings have been replicated with neuroimaging techniques, whose importance has grown tremendously in a relatively short period of time. Research with patients who have language impairments has also been conducted with reaction time and other performance tasks, and these tasks have begun to be carried out in the scanner with patients as well.

Conducting imaging studies with patients who have had impairments but later recover some function allows us to identify areas that may take over processing when the normal pathways have been lesioned, as well as to monitor any recovery of the lesioned area itself. In people whose impairments are permanent, one might expect to see little or no activation in the regions that other imaging studies have found to be active; however, these are hypotheses waiting to be explored.

The three methods of inquiry provide information about how comprehension occurs. If you were building a comprehension machine, what components would you think necessary to include? First, it would need a way to take the sounds it hears and recognize them as a word rather than as mere noise. It would need a collection of recognizable words already stored in its memory, and it would need to be able to connect those words to the mental representations (stored knowledge) of the objects they name.

Additionally, to facilitate comprehension, you would need to understand words in relation to each other. For example, if someone told you that she was going to the store, your comprehension would be more complete if you knew not only that a store is a place to buy goods rather than a place to go skiing, but also what kinds of items the store carries. You would also need to know that a place where gas is sold is not called a store but has its own name, "gas station"; that is, you would have to build in the ability to distinguish among different stored words (lexical entries) that share the same or similar senses. When your friend arrived at the store, she would also need to know that toasters can be found in the appliances section and flashlights in the hardware section. Thus, your comprehension machine would need to represent relationships among concepts as well.

2. Semantic memory
As mentioned earlier, words can be thought of as stored in networks in memory. The relationship between a word and the object it represents is called reference. When a word refers to an object, the word connects to a concept of that object in your mental dictionary, or lexicon; for example, the sound of the word dog refers to the concept of domestic canines, along with the attributes you have stored for the concept dog, such as being furry or possessing a tail.

The relationship that words have with other words is called sense. Words can be related to each other in many ways, including as antonyms, synonyms, hypernyms and hyponyms (if X is a kind of Y, then Y is a hypernym of X and X is a hyponym of Y), and meronyms (if X is a part of Y, then X is a meronym of Y). WordNet, created by George Miller and his colleagues at Princeton University, is a large database of words that are linked to each other according to their senses, primarily through synonym, hypernym, meronym, and hyponym relations.
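As a concrete illustration of the kinds of sense relations WordNet encodes, the short sketch below queries a few relations for the noun "dog". It assumes Python with the NLTK package and its WordNet data installed (for example, via nltk.download('wordnet')); the word chosen and the relations printed are only examples, not part of the original discussion.

```python
# A minimal sketch of querying WordNet sense relations with NLTK
# (assumes the WordNet corpus has been downloaded).
from nltk.corpus import wordnet as wn

# Take the most common noun sense ("synset") of "dog".
dog = wn.synsets("dog", pos=wn.NOUN)[0]

print(dog.definition())                        # gloss for the concept
print(dog.hypernyms())                         # what "dog" IS-A (e.g., canine)
print(dog.hyponyms()[:3])                      # kinds of dog (e.g., puppy)
print(dog.part_meronyms())                     # part-whole relations, if any
print([lemma.name() for lemma in dog.lemmas()])  # synonyms grouped in the synset
```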

The store of concepts and the words that name them comprise the lexicon. The process by which we call up a word from memory is called lexical access.

A number of factors can influence the rate at which words are retrieved, including how often a word appears in print or speech, commonly referred to as frequency (e.g., Rubenstein, Garfield, & Millikan, 1970), and how familiar people rate a word to be (Gilhooly & Logie, 1980). Although familiarity plays a large role in language processing and can account for effects that frequency does not, the effects of frequency have been reported most extensively and are robust.

One task in which the influence of frequency has been investigated is lexical decision. In one version of this task, words and strings of letters that form non-words (e.g., SHRUK) appear one at a time on a computer screen. For example, research participants might see SHOE, NEWT, BARL, FROMP. After seeing each letter string, participants decide as quickly and as accurately as they can whether the string of letters forms a word or a non-word, and they press a key to indicate their decision.
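To make the procedure concrete, here is a toy, console-based sketch of a lexical decision trial loop using the example strings from the passage. It only illustrates the logic of the task; real experiments use presentation software with precise timing and randomized stimulus lists.

```python
# Toy lexical decision loop: present a letter string, record the
# word/non-word keypress and the response time. Illustrative only.
import time

# Stimulus list from the passage: (letter string, is it a real word?)
trials = [("SHOE", True), ("NEWT", True), ("BARL", False), ("FROMP", False)]

for letter_string, is_word in trials:
    start = time.perf_counter()
    response = input(f"{letter_string}  word (w) or non-word (n)? ").strip().lower()
    rt_ms = (time.perf_counter() - start) * 1000
    correct = (response == "w") == is_word
    print(f"RT = {rt_ms:.0f} ms ({'correct' if correct else 'error'})")
```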

In the lexical decision task, participants respond to both words and non-words. A robust finding in the literature is that words are responded to much more quickly than non-words, because words have already been encountered and have formed representations in memory. In addition, words that are accessed more often are accessed more quickly; hence words that are high in frequency are typically responded to faster than other words and produce fewer errors.

3. Organization of the lexicon
Collins and Quillian (1969) proposed that concepts are stored hierarchically in our mental dictionary of words, or mental lexicon. Category relations are represented by the structure of the hierarchy; for example, the concept "animal" is stored at a node above "bird", which in turn is stored above "canary". Connected to each category node are properties, such as "has skin" and "can move around" for "animal".

Figure 1. Collins and Quillian's hierarchical model of the mental lexicon

Collins and Quillian predicted that the more closely concepts were situated in the hierarchy, the faster they would activate each other; for example, people should be faster to verify the proposition "Birds have feathers" than to verify "Birds eat", because eating is a property stored with "animal", one level above "bird". This pattern of reaction times in the verification task was obtained, although for a reason different from the one Collins and Quillian proposed. Conrad (1972) conducted a similar property verification task but manipulated the frequency with which the category and its properties were associated; the frequency of the association, rather than distance in the hierarchy, was responsible for the faster responses.
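The logic of the hierarchical model and its distance prediction can be sketched as a simple data structure: each concept stores its own properties plus a link to its superordinate category, and verifying a property means climbing the hierarchy until the property is found. The node names follow the examples in the text; placing "eats" with "animal", and the traversal itself, are illustrative assumptions rather than Collins and Quillian's actual implementation.

```python
# A minimal sketch of a Collins-and-Quillian-style hierarchy: each concept
# stores its own properties and a link to its superordinate category.
network = {
    "animal": {"parent": None,     "properties": {"has skin", "can move around", "eats"}},
    "bird":   {"parent": "animal", "properties": {"has feathers", "can fly"}},
    "canary": {"parent": "bird",   "properties": {"can sing", "is yellow"}},
}

def verify(concept, prop):
    """Return how many levels up the hierarchy the property is found
    (None if absent); more levels should predict a slower 'true' response."""
    level = 0
    while concept is not None:
        if prop in network[concept]["properties"]:
            return level
        concept = network[concept]["parent"]
        level += 1
    return None

print(verify("bird", "has feathers"))  # 0 levels: predicted to be fast
print(verify("bird", "eats"))          # 1 level up to "animal": predicted slower
```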

When you comprehend a word, other words can become active in your mental lexicon as well. This finding was made famous by a study by Meyer and Schvaneveldt in 1971. Meyer and Schvaneveldt presented research participants with words and non-words, one item at a time. Participants were instructed to press one key if they saw a word and a different key if they saw a non-word, and their response times were measured. This is the lexical decision task described above, used here as a measure of the concepts activated in memory. Response times to a word can be influenced by the word that precedes it: when the word NURSE appeared before the word DOCTOR, participants were faster to indicate that DOCTOR is a word than when the word BUTTER preceded DOCTOR. The fact that responses were facilitated suggests that the relatedness of words is represented in the mental lexicon; in particular, once a word is activated in memory, it can spread activation to related words.
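Spreading activation is often described as a prime passing some of its activation to associated words, so that a related target needs less time to reach recognition. The sketch below uses the NURSE/DOCTOR/BUTTER example from the study; the association strengths, baseline response time, and size of the priming boost are made-up illustrative numbers, not parameters from Meyer and Schvaneveldt.

```python
# Toy spreading-activation sketch: presenting a prime raises the activation
# of associated words, so a related target is recognized more quickly.
associations = {
    "NURSE":  {"DOCTOR": 0.6, "HOSPITAL": 0.5},
    "BUTTER": {"BREAD": 0.6, "MARGARINE": 0.5},
}

def lexical_decision_time(prime, target, base_ms=600.0, boost_ms=150.0):
    """Hypothetical response time (ms): related primes subtract a boost
    proportional to their association strength with the target."""
    strength = associations.get(prime, {}).get(target, 0.0)
    return base_ms - boost_ms * strength

print(lexical_decision_time("NURSE", "DOCTOR"))   # faster: related prime
print(lexical_decision_time("BUTTER", "DOCTOR"))  # slower: unrelated prime
```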

Although we are still investigating how spreading activation is actually implemented in the brain, it is possible that related concepts are stored in nearby places, or that neural connections that are made frequently operate quickly. Consistent findings have been obtained suggesting that nouns are stored in the temporal lobe and verbs in the frontal lobe. Additionally, ERP studies have suggested that, when processed in sentence contexts, function words activate brain areas separate from those activated by content words, with greater left-hemisphere activation for function words.

English contains a great many ambiguous words; that is, many of its words have more than one meaning. According to Merriam-Webster's online dictionary, for example, the verb "to run" has 68 different meanings. Chinese languages such as Mandarin are even more ambiguous than English; the prevalence of ambiguous words is not a universal characteristic but varies from language to language. How do we access the appropriate meaning of "run" when we hear the sentence, "Bert likes to run in the museum"? Right away you might think that frequency plays a role, and you would be right, to an extent. The most common meaning of "run" is the physical exercise sense, but a secondary sense is readily available when you read, "The optimistic undergraduate plans to run for office". Although you may experience no confusion, research suggests that multiple meanings of a word can be active in memory during the first few hundred milliseconds after reading it. Thus we are not always conscious of having to deal with multiple meanings, and that is quite a good thing if your language is English or Mandarin!

Research on lexical ambiguity has examined this question for more than twenty-five years, and there is strong evidence that both frequency and the context in which an ambiguous word appears determine your success at arriving at the intended meaning. (So the next time someone tells you a bad pun and you don't get it, don't feel bad: the teller might not have given you the appropriate context and may be relying on an infrequent sense.) The findings of one respected researcher indicate that an ambiguous word presented outside of any biasing context activates multiple meanings regardless of their frequency; when a context is provided, however, only the appropriate meaning is activated if the dominant sense is intended, whereas multiple meanings are activated when the subordinate sense is intended.
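The three-way pattern just described (no context, context supporting the dominant sense, context supporting the subordinate sense) can be summarized in a small sketch. The rule encoded below is simply a paraphrase of the passage, not a published model of ambiguity resolution, and the two senses of "run" are the ones discussed above.

```python
# Toy summary of which meanings of an ambiguous word become active,
# depending on context. Paraphrases the pattern described in the text.
meanings = {"run": {"dominant": "physical exercise", "subordinate": "seek office"}}

def active_meanings(word, context_supports=None):
    senses = meanings[word]
    if context_supports is None:            # word presented in isolation
        return set(senses.values())         # all meanings activated
    if context_supports == "dominant":      # context fits the dominant sense
        return {senses["dominant"]}         # only the appropriate meaning
    return set(senses.values())             # subordinate intended: both active

print(active_meanings("run"))                  # isolation: both meanings
print(active_meanings("run", "dominant"))      # "Bert likes to run in the museum"
print(active_meanings("run", "subordinate"))   # "... plans to run for office"
```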

The time course of meaning activation in the two hemispheres has been studied by presenting ambiguous words exclusively to the right or the left visual field and measuring response times to dominant and subordinate meanings (for example, "bank" is presented and is followed by either "money" or "river"). Immediately after the presentation of an ambiguous word, both meanings are activated in both hemispheres. After a short time, however, only the dominant meaning remains available in the left hemisphere, whereas both meanings are still available in the right. Evidence from neuroimaging studies indicates that word processing in the right hemisphere occurs in regions analogous to those in the left hemisphere.