Why is it important to validate the information presented in the material we read or listen to?

In psychological research, the comprehension of linguistic information and the knowledge-based assessment of its validity are often regarded as two separate stages of information processing. Recent findings in psycholinguistics and text comprehension research call this two-stage model into question. In particular, validation can affect comprehension outcomes, and the comprehension process involves a routine and early validation of the communicated information. These findings suggest that the comprehension and validation of information are more closely interwoven than traditionally assumed. Relationships of validation with integration and prediction and the broader implications of the concept for text comprehension research are discussed.

Chicago is located in Southern Italy.

The emancipation of women is a negative development.

Tom likes sweet cookies. He eats them with tomatoes and basil.

Finding meaning in the three examples above, that is, integrating the word meanings into a coherent semantic structure, is fairly routine and can be accomplished quickly with little cognitive effort. Readers are also likely to quickly and effortlessly notice that the assertion from each sentence conflicts with prior knowledge and beliefs about the world.

These examples (two of them taken from experiments discussed later) illustrate some basic facts about linguistic communication. The information communicated in a text or discourse can be inaccurate, that is, it can contain factual errors or even intentional lies that are inconsistent with the recipient's knowledge and beliefs. Factual errors occur unsystematically as slips of the pen (or tongue) and seem to appear more regularly in texts that deal with complex topics. Examples of texts with blatant factual errors include scientific publications later retracted because of errors in the research process (Fanelli, 2013), middle-school physics textbooks (e.g., Hubisz, 2003), and popular science texts presenting scientific results for a nonexpert audience (for a thorough analysis of an informative case, see Chabris, 2012). Intentional lies are prevalent in everyday conversations (depending on to whom individuals are talking; Serota, Levine, & Boster, 2010) and in political propaganda, but they even appear as forged data in scientific publications, which most recipients expect to be bound to objective truth. Texts can also be internally inconsistent, for example, when a text contains contradictory information (O'Brien & Albrecht, 1992), and a text's message can conflict with information communicated by other texts on the same topic (e.g., Perfetti, Rouet, & Britt, 1999) or with prior beliefs of the reader (e.g., Maier & Richter, 2013a).

The pervasiveness of inaccuracies and inconsistencies in text-based communication raises the question of how individuals process such information, a question that attracts growing interest among text comprehension researchers (Rapp & Braasch, 2014; Richter & Rapp, 2014). Is there a general psychological mechanism that protects mental representations constructed during comprehension from being contaminated by such information, at least to some extent? The notion of validation refers to such a mechanism: the idea that comprehenders use their prior knowledge and beliefs to monitor incoming linguistic information for consistency with previous text information and with their own knowledge and beliefs (Singer, 2013). In this article, I discuss recent findings from our lab and from other researchers that focus on two aspects of the relationship between validation and comprehension. The first line of research has investigated the extent to which validation affects comprehension outcomes; the second, more ambitious, focus has been on the underlying cognitive processes. The ability of readers to strategically use their knowledge and beliefs to evaluate text information after they have comprehended it is beyond dispute. However, whether validation is a routine part of regular comprehension processes is a point of contention.

The notion of validation implies that readers are sensitive to the plausibility of text information. From a psychological perspective, plausibility may be defined as the “acceptability or likelihood of a situation or a sentence describing it” (Matsuki et al., 2011, p. 926) or as “the degree of fit between a given scenario and prior knowledge” (Connell & Keane, 2006, p. 98). Validation is assumed to generate plausibility judgments as a byproduct of comprehension processes. Based on this assumption, a straightforward hypothesis about how validation affects comprehension is that readers use perceived plausibility as a heuristic to select or weigh information for deeper processing. This hypothesis implies a plausibility bias: Plausible information should be more likely than implausible information to be integrated into the mental representation of the text content.

In an earlier experiment (Schroeder, Richter, & Hoever, 2008), we tested the assumption of a plausibility bias with university students who read typical expository texts from their field of study (psychology). Plausibility was manipulated by inserting strong (plausible) versus weak (implausible) arguments. The implausible arguments contained common argumentation errors, so that the claim was only weakly supported but comprehensibility was left intact. For example, participants read the following sentence about nicotine addiction: If the father shows that his cigarette after dinner tastes wonderful, the children develop a positive attitude toward smoking. The sentence was made implausible by switching cause and effect: If the children develop a positive attitude toward smoking, the father shows that his cigarette after dinner tastes wonderful. After reading the texts, participants provided recognition judgments (Did the information appear in the text?) and comprehension judgments (Does the information go with the state of affairs described in the text?) for paraphrase, inference, and distracter items (Schmalhofer & Glavanov, 1986) and also judged the plausibility of each item. An analysis of these responses with multinomial models revealed a clear-cut pattern: Plausible information was far more likely to be judged as consistent with the state of affairs described in the text (plausibility bias). However, information that had been judged as matching the situation described in the text was also far more likely to be judged as plausible (independent of its objective plausibility). This bidirectional relationship between plausibility and comprehension provides evidence for a strong role of validation as a type of epistemic gatekeeper in situation model construction. The plausibility of incoming information seems to be evaluated against background knowledge as well as against the current state of the situation model, resulting in a situation model that is biased toward plausible information.

In more recent research, we investigated whether a similar plausibility bias occurs when nonexperts read multiple texts on controversial socioscientific topics. This comprehension situation has several features that make it a particularly interesting environment for examining the validation–comprehension relationship. Socioscientific topics are not only debated in science but also have a strong ethical or political relevance (e.g., global warming, the effects of educational measures, or medical topics). Although nonexpert readers may not be expected to possess strong prior knowledge on such issues, they often hold strong prior beliefs and favor one position in the debate over the other. Moreover, they usually rely on heterogeneous web-based sources (e.g., science blogs, magazine articles, wiki entries, and scientific articles) that present divergent viewpoints and differ in quality. Hence, nonexpert readers using multiple texts to learn about highly debated socioscientific topics face the problem of developing a coherent and consistent mental representation on the basis of conflicting and inaccurate information.

Readers can apply several strategies to solve this problem. The optimal strategy for achieving coherence would be to construct a full documents model that represents not only the content of the multiple texts but also information about each source (and possibly its credibility) and the argumentative relationships between the texts, which usually need to be inferred (Perfetti et al., 1999). However, this strategy is highly demanding, requiring cognitive effort as well as prior knowledge, knowledge of text genres, and knowledge of disciplinary conventions. Thus, it seems to be a normative standard rather than a strategy that nonexperts may be expected to apply spontaneously.

A simpler and cognitively less demanding strategy is content-based and builds on the validation of text information. Readers might use the perceived plausibility of the text content as a heuristic to select information for deeper processing. However, research in social psychology suggests that under certain circumstances readers elaborate specifically on belief-inconsistent information in order to refute it (Edwards & Smith, 1996; Kruglanski & Webster, 1996). Thus, if plausibility is monitored routinely during comprehension, how readers deal with information that has been detected as implausible will depend on their processing goals. In particular, reading goals associated with a high need for validity should reduce the plausibility bias.

We tested this assumption in several experiments with multiple texts presenting opposing views on one of three socioscientific topics: the interpretation of the PISA results (Maier & Richter, 2013b), biomass as the energy source of the future, and neuroscientific evidence for and against the existence of free will (Maier & Richter, unpublished data). The student participants read these texts either with the goal of developing a justified point of view or with the goal of memorizing as much information from the texts as possible. Afterward, they worked on a comprehension task with different types of test items (paraphrases, inferences, and distracters) and provided plausibility judgments for these items (similar to the experiment by Schroeder et al., 2008, discussed above). Across all three topics, we found a strong plausibility bias in comprehension: Inference items were much more likely to be correctly recognized as matching the situation described in the text when they were judged as plausible (for an example, see Figure 1). It is important to note that this relationship held even when we controlled for individual response tendencies. In addition, distracter items judged as plausible were not more likely to be classified as inferences. Thus, it seems unlikely that the plausibility bias arises from a metacognitive effect at the time of test. Rather, the plausibility bias seems to be due to cognitive processes that occur during comprehension. The second major finding across all three topics was that the plausibility bias was attenuated (but not removed completely) when participants read the texts with the goal of developing a justified point of view.

In summary, the strong association between perceived plausibility and comprehension on the situation model level—in the comprehension of single expository texts as well as in multiple text comprehension—is consistent with the notion of validation. However, this association seems to be reduced when goals are pursued that require a thorough processing of belief-inconsistent information. This latter finding suggests that readers can strategically control the heuristic use of plausibility.

One limitation of these studies is that the evidence for the plausibility–comprehension link is only correlational, leaving open alternative causal interpretations. One alternative explanation is based on processing fluency: The greater ease of retrieval or familiarity of correctly recognized inferences might have increased their perceived plausibility (although fluency effects on plausibility are typically much smaller; Reber & Schwarz, 1999; Schwarz, Sanna, Skurnik, & Yoon, 2007). A more general alternative explanation is that background knowledge might be the common factor that increases both comprehension and perceived plausibility. When text information triggers the activation of background knowledge, readers might comprehend this information to a greater extent and simultaneously perceive it as more plausible. To rule out these alternative explanations and to provide stronger evidence for a causal link between perceived plausibility and comprehension, we attempted to replicate the plausibility bias experimentally (Maier & Richter, unpublished data). To dissociate perceived plausibility from background knowledge, we used two texts arguing for opposing positions on the controversy of whether spider silk can (eventually) be used for repairing torn nerves. This controversial topic was chosen because our student participants had very little prior knowledge and weak prior beliefs about it but found it highly interesting. Before reading the two texts, participants watched a 7-minute video on spider silk and its use in neuroregeneration. The video was presented in one of two versions that were fully identical in factual information but contained opposing claims concerning the controversial topic. By using this method, we manipulated the perceived plausibility of information in the two experimental texts, creating either pro or contra beliefs in our participants. As expected, information from the text that was consistent with the position of the belief-manipulation video was judged as more plausible than information from the incongruent text. More importantly, we found a similar effect on situation model integration, that is, on the proportion of paraphrases and inferences correctly classified as matching the state of affairs described in the text. The latter effect was fully mediated by perceived plausibility.

We have come full circle with these results. The effect of the experimental plausibility manipulation was found to be fully mediated by perceived plausibility, the variable for which we have found a close relationship with comprehension outcomes in the earlier studies. Replications with more texts on different topics, with different samples and different operational definitions of plausibility and comprehension, are clearly needed. Nevertheless, the results by Maier and Richter (unpublished data) are an encouraging first test of the conjecture that validation causally affects comprehension outcomes.

Demonstrating that plausibility is closely associated with comprehension outcomes is not sufficient to establish that validation should be regarded as a component of comprehension. The mainstream view in psychology is that comprehension and validation form separate stages of information processing. Many examples of this view can be found in language and text comprehension research. In sentence comprehension research, for example, some researchers have adopted a distinction between lexical (semantic) knowledge, which is immediately relevant for comprehension, and world knowledge, which is not part of the lexicon and is used only for subsequent pragmatic processes (Katz, 1972). If world knowledge is not used in the initial interpretation of a sentence, validation must also be deferred to the later stage at which pragmatic processes take effect. In text comprehension research, world knowledge is regarded as highly relevant for comprehension. However, its role is usually confined to that of a knowledge base for interpreting and enriching information, whereas the use of knowledge for evaluating incoming text information is rarely considered. Accordingly, the focus of the major theories in the field is on coherence rather than on consistency or on the truth and veridicality of the mental representations constructed during comprehension (for an overview, see McNamara & Magliano, 2009).

An explicit distinction between comprehension and validation has been drawn in two-stage models of comprehension and validation (Connell & Keane, 2006; Gilbert, 1991). According to Gilbert (1991), readers automatically and invariably accept information as true when they comprehend an assertion. Validation occurs only later, at a second stage of information processing, when information initially accepted as true can be actively unbelieved. The cognitive processes located at this validation stage are assumed to be strategic and resource demanding, implying that individuals who are not able or not motivated to spend the required cognitive effort will not engage in validation. The primary evidence for Gilbert's two-stage model comes from experiments that have shown an affirmation bias in learning assertions about fantasy facts marked as true or false (e.g., A twyrin is a doctor [FALSE]; Gilbert, Krull, & Malone, 1990). When participants were put under cognitive load or time pressure during the learning task, they mistakenly remembered assertions marked as false as being true. The problem with these experiments is that fantasy facts are not very informative for investigating validation, because participants lack the knowledge they could use to evaluate the truth or plausibility of the presented information (the label FALSE functions in these experiments rather as an abstract negation operator). In fact, later experiments using real assertions with Gilbert's original paradigm have shown that the affirmation bias disappears when participants possess relevant background knowledge (Richter, Schroeder, & Wöhrmann, 2009, Experiments 1 and 2).

These results point to the alternative view that some form of validation is a component of regular comprehension processes, or at least closely related to such processes. Singer's validation theory of bridging inferences (Singer, 1993; Singer, Halldorson, Lear, & Andrusiak, 1992) can take credit for having prepared the ground for this view. In a number of inventive reaction time experiments, Singer and colleagues showed that readers not only infer but also validate the missing premise needed to establish a causal link between two events. For example, participants read the causally consistent sequence Dorothy poured the bucket of water on the fire. The fire went out or the causally inconsistent sequence Dorothy poured the bucket of water on the fire. The fire grew hotter. Responses to the subsequent question Does water extinguish fire? were faster after these sequences than after similar sentence pairs that did not prompt a causal inference.

The widespread view that comprehension is essentially the construction of a situation model (van Dijk & Kintsch, 1983) or mental model (Johnson-Laird, 1983) also has implications for the comprehension–validation relationship, although several variants of the situation model construct have been proposed over the years, and not all of them bear on the issue of validation. Some authors, however, have conceptualized situation/mental models as referential representations of the state of affairs described in the text (van Dijk & Kintsch, 1983; Johnson-Laird, 1983) that are iconic or analogue representations (Glenberg, Kruley, & Langston, 1994; Zwaan & Radvansky, 1998) and that represent extensional aspects of meaning (i.e., reference and truth; Johnson-Laird, Herrmann, & Chaffin, 1984). In short, situation/mental models specify how linguistic expressions relate to the world. It is difficult to conceive, however, how this kind of mental representation could be constructed without some kind of validation of the incoming information. This raises the distinct possibility that readers' world knowledge activated during situation/mental model construction is used not only to interpret and enrich text information but simultaneously to validate it (Figure 2). The theory of mental models explicitly endorses the assumption that the integration of new information into a mental model during comprehension entails “a procedure which, if all the entities referred to in the assertion are represented in the current model, verifies whether the asserted properties or relations hold in the model” (Johnson-Laird, 1983, p. 249). Wyer and Radvansky (1999) made a similar suggestion in their application of the situation model concept to social information processing. Thus, the proposition that validation and the construction of a situation model are closely intertwined is not entirely new, although the relationship has received relatively little attention in the field.

In our lab, we developed an experimental paradigm to examine the routine and involuntary character of validation processes during comprehension. The procedure is called the epistemic Stroop paradigm, because its logic resembles that of the experiments on interference effects of color words on color naming conducted by Stroop (1935). In a typical experiment based on the epistemic Stroop paradigm, participants see words appearing sequentially on the computer screen for a fixed amount of time (e.g., for 300 ms; Figure 3). These words progressively form sentences. At specific words, the presentation stops and participants are prompted to give a response that is unrelated to the content of the sentence. For example, they are prompted to provide a spelling judgment (Is the word spelled correctly? Yes/No). In experimental trials, the prompt appears at the critical word, that is, the word at which the truth value of the sentence can be computed. The rationale of the paradigm is as follows: If the assertion is false and its content is validated nonstrategically, a negative response tendency should occur, even though the task is unrelated to the sentence content. This negative response tendency should interfere with positive (affirmative) responses in the task and slow down these responses.
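The predicted interference pattern can be made concrete with a minimal sketch. This is not the authors' experimental software; the baseline response time, the interference magnitude, and the function name are hypothetical values chosen only to illustrate the paradigm's logic.

```python
# Illustrative sketch of the epistemic Stroop rationale (hypothetical values).
# Assumption: nonstrategic validation yields a "yes"/"no" response tendency
# that interferes with the response required by the unrelated task.

BASE_RT_MS = 600         # hypothetical baseline response time in the task
INTERFERENCE_MS = 50     # hypothetical slowdown from a conflicting tendency

def predicted_rt(sentence_is_true: bool, required_response: str) -> int:
    """Predicted response time for the unrelated task (e.g., spelling judgment)."""
    validation_signal = "yes" if sentence_is_true else "no"
    conflict = validation_signal != required_response
    return BASE_RT_MS + (INTERFERENCE_MS if conflict else 0)

# The epistemic Stroop effect: yes responses after false sentences are slower
# than yes responses after true sentences and than no responses after false ones.
assert predicted_rt(False, "yes") > predicted_rt(True, "yes")
assert predicted_rt(False, "yes") > predicted_rt(False, "no")
```

The point of the sketch is that the slowdown is predicted even though the task itself (spelling, color change, probe identification) never asks about truth: the conflict arises solely from the validation signal.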

We have used this paradigm with a variety of tasks and experimental materials. In the first experiments, the experimental stimuli were simple assertions associated with strong knowledge (Richter et al., 2009, Experiments 3 and 4). These assertions were either true (e.g., Cognac contains alcohol) or false (e.g., Computers have emotions). The experimental task was a spelling judgment task. We found a clear epistemic Stroop effect: Yes responses after false sentences took longer than yes responses after true sentences and also longer than no responses after false sentences (Figure 4). Given that validating the assertions was irrelevant to the spelling judgment task, this finding clearly indicates that readers routinely validate incoming information and detect false information that is inconsistent with their world knowledge.

In a second series of experiments, we applied the paradigm to sentences that were not clearly true or false but merely plausible and implausible (Isberner & Richter, 2013). Plausibility was manipulated through a context sentence. For example, the plausible version of one experimental text was Frank has a broken pipe (context sentence). He called the plumber. In the implausible version, the word pipe was replaced by the word leg. In Experiment 2 these stimuli were combined with a color change task. That is, participants were instructed to respond with yes when the color of the critical word changed from black to blue and no when it remained black. Again, we found a strong epistemic Stroop effect: Yes responses were prolonged in implausible sentences compared with plausible sentences. Potential alternative explanations in terms of semantic priming through the context sentence can be ruled out, because a nonlinguistic task was used.

In another experiment with the epistemic Stroop paradigm, we tested whether the epistemic Stroop effect also appears in a nonevaluative task that does not require a yes or no judgment (Isberner & Richter, 2014a). In this experiment, we again used true versus false assertions, but this time with a simple probe task. After the words of each trial, participants saw either the word TRUE or the word FALSE on the screen and responded to these probe words with different keys. After false sentences, responses to the TRUE probe word were prolonged compared with responses to the FALSE probe word, indicating that an epistemic Stroop effect occurred even with a nonevaluative task. This finding provides further evidence for the involuntary character of validation processes, and it implies that validation does not require an evaluative mindset of the kind possibly created by the yes/no tasks used in the previous experiments (Wiswede, Koranyi, Müller, Langner, & Rothermund, 2013).

The experiments with the epistemic Stroop paradigm highlight the involuntary, nonstrategic aspect of validation, but the results provide no information about the time course of validation relative to comprehension. If validation and comprehension are closely related (or if validation can even be regarded as a component process of situation model construction), validation processes should occur early in sentence processing. Evidence from experiments investigating event-related potentials (ERPs) shows that violations of world knowledge are detected as quickly as semantic anomalies (indicated by strongly overlapping N400 effects; Hagoort, Hald, Bastiaansen, & Petersson, 2004; for similar results with assertions that conflict with a reader's value system, see Van Berkum, Holleman, Nieuwland, Otten, & Murre, 2009; for an overview, see Isberner & Richter, 2014b). Ferretti, Singer, and Patterson (2008) used ERPs to show that implausible information is detected within 200 ms after the critical target word is encountered rather than at sentence completion. Interestingly, the effects of implausibility were modulated by verb factivity (factive verbs such as to know imply the truth of their complements, whereas nonfactive verbs such as to think do not), which suggests implicit validation processes similar to those that occur during explicit verification judgments (see also Singer, 2006). In addition, eye-tracking experiments have demonstrated effects of (im)plausibility on indicators of early comprehension processes (e.g., Matsuki et al., 2011; Staub, Rayner, Pollatsek, Hyönä, & Majewski, 2007). The ERP and eye-tracking findings also indicate that validation occurs regularly under normal reading conditions and not only in the relatively artificial situation created by the epistemic Stroop paradigm.

Validation processes occur regularly, involuntarily, and quickly during reading, supporting the assumption that validation is closely linked to or even part of comprehension. When and exactly how validation takes effect in the comprehension process and how the concept relates to existing theoretical conceptions still needs to be explicated (Kendeou, 2014). A broad consensus exists in the text comprehension literature that comprehension involves the activation of prior text information and world knowledge and the integration of text information with activated knowledge (McNamara & Magliano, 2009). Activation is widely accepted to be based predominantly on a passive resonance process (memory-based text processing, e.g., O'Brien & Albrecht, 1992). Validation is unlikely to affect passive activation processes, although it might affect strategic memory retrieval, which might partly explain the plausibility effects reviewed above (Maier & Richter, 2013b, unpublished data; Schroeder et al., 2008). Instead, a reverse causal relationship seems more plausible. Knowledge that is used for routine validation during comprehension seems to be supplied quickly and efficiently by a resonance-like mechanism (for a similar argument, see Singer, 2006).

Different theoretical views have been proposed regarding the nature of knowledge integration, and not all of them fit naturally with the notion of validation. Some theoretical views describe integration as a passive, text-driven process. For example, according to the Construction-Integration Model (Kintsch, 1988), text information and activated knowledge are integrated by a convergence process based on associationist principles of spreading activation and constraint satisfaction. Validation, which essentially involves a check of the consistency of text information with prior knowledge and beliefs, cannot be fully explained by such associationist principles. In contrast, validation fits well with the constructionist idea that integration involves an active evaluation of the activated information (Long & Lea, 2005). According to the constructionist view, readers “evaluate the content of activated knowledge structures to construct coherent text representations” (Long & Lea, 2005, p. 293). In the framework advocated here, validation is conceived as an evaluative process that checks the consistency of information with previous text information and with a reader's knowledge and beliefs. Consistency is a narrower construct than coherence (completely unrelated sentences or cognitions are incoherent but not inconsistent), but it nevertheless covers central aspects of coherence. Accordingly, validation is a prime candidate for an evaluative process that is central to forming a coherent representation. Consistent with this notion, Cook and O'Brien (2014) proposed that activation, integration, and validation operate in parallel but asynchronously, with activation starting earlier than integration and integration starting earlier than validation.

The central role of validation in building a coherent representation is illustrated well by Singer's experiments on causal bridging inferences (Singer, 1993; Singer et al., 1992). Numerous studies based on the inconsistency paradigm have demonstrated that inconsistent information leads to prolonged reading times when it is reactivated by memory-based processes (O'Brien & Albrecht, 1992). These studies not only support the idea of knowledge activation by memory-based processes; they may also be regarded as instances of validation in the service of building coherent mental representations (see also Cook & O'Brien, 2014). It should be noted that readers not only integrate text information that they have already encountered but also make knowledge-based predictions about upcoming text if the semantic context is sufficiently constraining (e.g., Calvo & Castillo, 1996; Fincher-Kiefer, 1993; Van Berkum, Brown, Zwitserlood, Kooijman, & Hagoort, 2005; for an overview, see Pickering & Garrod, 2007). Validation is also likely to play a major role in such predictive processes, because predictions can fail, and readers need to detect prediction failures to construct a coherent and adequate representation of the text content. Predictions rely on forming representations of hypothetical facts (Campion, 2004). Validation might be the process whereby these hypotheses are tested.

The assumption that validation is an evaluative process tied to integration and prediction suggests that a negative outcome of validation should co-occur with an integration or prediction failure. If so, a negative validation response should be elicited not only by information that contradicts readers' knowledge and beliefs but also by information that readers cannot integrate because they lack the relevant knowledge. Isberner and Richter (2014a) provided tentative support for this possibility using the epistemic Stroop paradigm. Their experiment included a control condition with assertions that participants could not validate because they lacked the relevant knowledge (e.g., Toothpaste contains sulfur). Compared with responses to the TRUE probe word after true sentences (e.g., Libraries have books), responses to the TRUE probe word were prolonged not only after false sentences (e.g., Computers have emotions) but also after the no-knowledge sentences in the control condition.

Recent results reported by Ozuru, Bowie, and Kaufman (2015) are also consistent with the assumption that integration failures evoke a negative validation response. Ozuru et al. collected metacognitive comprehension judgments and evaluative judgments (agreement/disagreement) for one-sentence assertions. Two findings are of particular relevance in the present context. First, assertions that participants reported not understanding also consistently evoked a disagree response. Second, when participants chose between metacognitive and evaluative responses, they provided evaluative judgments far more often than metacognitive judgments, which is consistent with the assumption that validation is a regular component of comprehension. At this point, however, the cognitive processes that might link integration failures to a negative validation response are unclear. One candidate mechanism is processing fluency: disfluent cognitive processing is known to be associated with lower perceived plausibility of information (Alter & Oppenheimer, 2009; Reber & Schwarz, 1999), and integration failures clearly disrupt processing fluency during comprehension.

Even if validation is a regular component of comprehension, it might still depend on the standards of coherence a reader adopts during reading (Van den Broek, Risden, & Husebye-Hartmann, 1995). Experiments based on the epistemic Stroop paradigm suggest that validation is tied to comprehension, implying that a minimal comprehension goal is required to evoke a validation response. For example, Wiswede et al. (2013) found no epistemic Stroop effects when they paired the epistemic Stroop paradigm with a second task that merely required participants to detect whether the words in an experimental sentence matched the words in a test sentence. However, when the same paradigm was paired with a task that required participants to superficially comprehend the experimental sentences (participants were asked to indicate whether or not a sentence referred to an animate object), an epistemic Stroop effect emerged (Isberner & Richter, 2014a). Apparently, validation processes can be prevented by a very low standard of coherence that forgoes the need for semantic processing. Validation processes might even be goal-dependent, as suggested by studies showing that readers' sensitivity to false or implausible information varies with their goals (e.g., Rapp, Hinze, Kohlhepp, & Ryskin, 2014), and they also seem to depend on text characteristics, most notably text genre. For example, narratives seem to invoke an immersed state of processing (transportation; Gerrig, 1993) that makes readers particularly susceptible to false or implausible information (e.g., Appel & Richter, 2007). In short, validation is arguably a component of comprehension, but reader and text characteristics commonly assumed to affect readers' standards of coherence (Van den Broek et al., 1995) seem to also moderate the scrutiny of validation processes and their effects on comprehension outcomes. Strategic memory retrieval might be involved in these validation processes: readers pursuing a higher standard of coherence might invest more cognitive effort to search for and retrieve relevant information, which is then used for a more complete validation of incoming text information. Another possible explanation is based on the time course of processing (Cook & O'Brien, 2014): readers pursuing a lower standard of coherence might not wait until the validation process has been completed. However, these possible explanations need to be tested in further research.

The research reviewed in this article suggests that validation serves two basic functions during comprehension. First, being tied to integration, validation is important for establishing and monitoring local and global coherence. A second function might be that validation protects the mental system against false and implausible information (epistemic vigilance; Sperber et al., 2010). However, this protection seems to be limited in several ways. Failures of validation are numerous, for example, when readers apparently ignore inconsistencies in a text (e.g., Otero & Kintsch, 1992) or incorporate false information (Rapp, 2008; Rapp et al., 2014). One limitation is that only knowledge that is available and activated during comprehension by memory-based or strategic processes can be used for validation. If inconsistent information is not co-activated, validation cannot occur and readers will not notice the inconsistency. Likewise, if the relevant (adequate) knowledge is not sufficiently activated, comprehension as well as validation can fail, leading to semantic illusions such as the Moses illusion (Bredart & Modolo, 1988): most participants fail to notice the anomaly in the question How many animals did Moses take on the ark? In addition, inconsistencies in argumentative discourse can be detected only when readers invest cognitive effort and possess the relevant strategies. As a result, readers frequently check the consistency of claims with their prior knowledge and beliefs when evaluating arguments but neglect to scrutinize the relevance and sufficiency of the reasons given for the claim (Shaw, 1996). A final limitation is that validation may be based on faulty knowledge or partial beliefs (e.g., Maier & Richter, 2013a). Under these conditions, epistemic vigilance backfires and can contribute to the persistence of misconceptions rather than protect the mental system from false information.

The goal of this article was to summarize empirical and theoretical arguments for why text comprehension researchers should include validation in their conceptual toolbox. The notion of validation can help researchers gain a better understanding of the long-standing problem of how readers form a coherent representation based on text information and prior knowledge. Moreover, investigating validation draws attention to further, often overlooked aspects of language and text comprehension. For example, sentences can be true or false of the world they describe, which can be real, possible, or fictitious. Philosophers of language have acknowledged this fundamental property of language by proposing that the meaning of linguistic expressions depends on the truth conditions of the sentences that include them (e.g., Davidson, 1967).

In contrast, much research in the field seems to be based on the tacit premise that the veridicality of textual information is irrelevant for comprehension. Related to this point, the information processing approach to cognitive psychology has been criticized for operating with abstract mental representations without explaining how these representations acquire meaning in the first place and how they are connected to the world (symbol-grounding problem; Searle, 1980). Validation may be regarded as a top-down process that contributes to grounding by aligning linguistic expressions and cognitive representations with perceptual experiences (complementing bottom-up processes that link perceptual experiences with categorical representations; e.g., Harnad, 1990).

Finally, the main function of assertive statements in a text, pragmatically understood, is to convince readers that the statement is true, often by providing more or less complete arguments to support this claim. From this perspective, the assumption that recipients possess a basic capacity to routinely check the validity of communicated information makes sense (Sperber et al., 2010). Thus, although much work lies ahead to specify the details of the underlying cognitive processes, the concept of validation has the potential to broaden and enrich our understanding of text comprehension.
