
CH 7

Chapter 7 discusses coherence in discourse, emphasizing that understanding messages requires more than grammar and syntax; it relies on contextual knowledge and shared conventions. It explores the role of communicative functions, speech acts, and background knowledge in interpreting discourse, highlighting the interplay between bottom-up and top-down processing. Additionally, it introduces concepts like frames, scripts, and scenarios to explain how knowledge is structured and activated during comprehension, illustrating the complexity of language understanding.


Radwa Ayman

2nd year
Discourse Analysis

Chapter 7: Coherence in the Interpretation of Discourse

7.1 Coherence in Discourse


Coherence in discourse refers to our ability to understand a message not only from the words
and grammar used but from a broader understanding of meaning beyond the literal content of a
sentence. While the syntactic structure and lexical choices are important, it is a mistake to think
that these alone allow us to interpret messages fully. A sentence may be grammatically correct
and still feel incomplete or vague unless we use additional contextual information to understand
it properly. For example, the opening sentence of Tom Wolfe’s novel The Right Stuff—"Within
five minutes, or ten minutes, no more than that, three of the others had called her..."—is
grammatically well-formed, but it only partially communicates what is happening. The reader is
invited to read on to discover the full context, highlighting that understanding often goes
beyond the surface structure of language.

At the other extreme, there are discourse fragments, such as notices or advertisements, which
do not form complete sentences but are still interpreted easily. A notice like “Epistemics
Seminar: Thursday 3rd June, 2.00 p.m. Steve Harlow...” communicates a lot despite not being a
full sentence. We infer that Steve Harlow will give a talk on the mentioned topic at the
University of Edinburgh, even though these details are not stated explicitly. This kind of
understanding relies on shared knowledge and conventions. Similarly, advertisements such as
“Self Employed Upholsterer. Free estimates. 332 5862.” are understood to be offering services,
and the reader knows who to contact and for what. Though the literal message does not state
that the advertiser is offering to do upholstery work, this is easily inferred.

In these examples, we see that understanding discourse often depends on more than grammar.
Readers rely on expectations of coherence, where even unconnected phrases or fragments are
interpreted as meaningful when placed together. The assumption that the message is coherent
leads readers to mentally fill in missing links. This principle can guide interpretations, even
when formal cues are missing. Thus, coherence is not simply a feature of sentence structure but
is deeply tied to the reader's background knowledge, assumptions about communication, and
willingness to connect the dots. It allows us to make sense of incomplete or context-dependent
discourse and is essential to effective communication in both spoken and written forms.
7.2 Computing Communicative Function
Understanding a discourse involves more than recognizing the grammatical form of an
utterance; it also includes computing the communicative function of that utterance—what the
speaker or writer is trying to do with it. Researchers such as social anthropologists and
sociolinguists have long emphasized that language does not only convey propositional content,
but also serves social purposes. This perspective considers language as a form of social action,
where utterances are treated as acts—such as a greeting, a promise, or an announcement—
depending on the context in which they are produced. For instance, a message about an
academic seminar may be interpreted as an announcement rather than a warning or promise
based on its form and placement. This interpretation depends not only on discourse knowledge
but also on socio-cultural understanding. A reader who knows that “Steve Harlow” is more
likely a person than “Epistemics Seminar” is relying on conventional knowledge to interpret the
communicative function correctly.

Labov (1970) explained this further by showing that coherence in conversation relies not just
on the structure of language, but on how utterances function as actions. In one example, a
doctor asks a patient’s name, and the patient gives a response that lacks clear relevance. This is
seen as incoherent, not because of sentence structure, but because the patient's reply does not
perform the expected social action—namely, answering a question. Brown and Levinson (1978)
provided another example: if someone responds to “What time is it?” by saying “The postman
has already been,” we understand this as a kind of answer due to our assumption that the
speaker is rational and cooperative. This means we interpret the utterance functionally rather
than literally. Widdowson (1978) also emphasized how actions within a conversation follow a
conventional sequence, even when there are no formal cohesive links. For example, if one
person says, “That’s the telephone,” and another replies, “I’m in the bath,” followed by “OK,”
we understand this sequence through shared knowledge of typical interaction patterns, not
through sentence grammar alone.

Sinclair and Coulthard (1975) extended this by analyzing classroom talk, identifying a
hierarchy of discourse categories like “transaction,” “exchange,” and “act.” Their model treats
each utterance as performing a specific function within a structured social interaction, whether
in classrooms or even in non-verbal interactions like sports. Ultimately, computing the
communicative function requires attention to both context and social convention. It helps
explain how listeners derive meaning that goes far beyond the literal sentence-level content of
an utterance.
7.3 Speech Acts
Speech act theory explores how language is not only used to describe things but also to perform
actions. This idea was introduced by Austin (1962), who noticed that some utterances, such as
“I name this ship the Queen Elizabeth,” are not simply describing reality—they are performing
an act, like naming a ship. He called these “performatives.” For such utterances to be
successful, specific conditions must be met, known as felicity conditions. Some performatives
are explicit, containing a verb like “promise” or “warn.” Others are implicit and lack such verbs
but still perform acts. For example, saying “Out!” in cricket functions as an act of dismissal.
“Sixpence” in a card game may indicate a bet, while “I’ll be there at 5 o’clock” serves as a
promise. These examples are known as illocutionary acts, where a speaker does something by
saying something. In addition, utterances also have a perlocutionary effect, which is the
response or impact they produce on the hearer. Searle (1975) expanded on Austin’s work by
introducing indirect speech acts, where one act is performed by doing another. For example,
“Can you speak a little louder?” is technically a question about ability but is usually understood
as a polite request.
Speech act theory helps explain how unrelated sentences in conversation can still be connected
meaningfully. However, there are difficulties in identifying speech acts in real conversations.
Levinson (1980) argues that it’s often unclear how to classify speech acts accurately without
guessing. Also, one sentence can do several things at once. For instance, “Hey, Michele, you’ve
passed the exam” could assert, congratulate, or even apologize. Although useful, speech act
theory does not yet offer a complete system for identifying the intended meaning of an utterance
based solely on its form or position in a conversation.
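The gap between an utterance's surface form and its communicative function can be illustrated with a small Python sketch. The classification rules here are invented for illustration and do not come from Austin or Searle; they simply show how a conventional indirect form ("Can you...?") is read as a request rather than as the literal question its grammar suggests.

```python
# Toy illustration of indirect speech acts (rules invented for this sketch):
# the literal form of an utterance and its usual function can differ.
def literal_act(utterance):
    """Classify by surface form alone."""
    if utterance.endswith("?"):
        return "question"
    if utterance.endswith("!"):
        return "exclamation"
    return "statement"

def likely_act(utterance):
    """Reinterpret a conventional indirect form as the act it performs."""
    if utterance.startswith(("Can you", "Could you")) and utterance.endswith("?"):
        return "request"  # a question about ability, read as a polite request
    return literal_act(utterance)

u = "Can you speak a little louder?"
print(literal_act(u))  # question  (what the form says)
print(likely_act(u))   # request   (what the speaker is doing)
```

The sketch also mirrors Levinson's objection: any such rule table is incomplete, since one utterance may perform several acts at once and context can override the conventional reading.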

7.4 Using Knowledge of the World


Understanding discourse relies not only on language knowledge but also on broader
knowledge of the world. As language users, we bring a wide range of general social and cultural
knowledge to any interaction. This background knowledge allows us to make sense of discourse
and interpret meaning even when things are not fully stated. As de Beaugrande (1980) noted,
understanding a text is just a special example of understanding the world in general. Earlier in
the book, it was suggested that much of our interpretation is based on past experience. Adults
typically have a large store of experience and knowledge. But how do we manage all of this
information and only use the parts we need in the moment? This is a central question in
discourse analysis. One possible answer lies in the way knowledge is stored and organized in
our minds—a topic discussed in the next section.

In short, when we understand a text, we often rely on analogy: we relate what we’re reading to
things we’ve experienced before. Our world knowledge helps us interpret unclear or unfamiliar
discourse by providing a familiar framework to work from.

7.5 Top-down and Bottom-up Processing


One way to understand how people process discourse is through a model drawn from
computational language processing. This model suggests that understanding discourse involves
two main types of activity: bottom-up and top-down processing. In bottom-up processing, the
reader works out the meaning of individual words and sentence structures and builds meaning
from these smaller units. This approach is widely used in traditional linguistic analysis and
Artificial Intelligence (AI), where focus is placed on grammar and sentence structure. In such
approaches, if a sentence is ungrammatical, it is rejected, and no attempt is made to interpret its
meaning. However, human readers do not reject texts that have errors. Instead, they use top-
down processing to predict or guess the likely meaning based on the discourse context and prior
knowledge. This strategy allows readers to make sense of a sentence even if it is poorly written
or ungrammatical. For example, when a person reads the first part of a sentence, they begin to
anticipate what the next part will mean. These expectations are formed using both previous
sentences and their own world knowledge.

Top-down processing enables the interpretation of unclear or unexpected language input, whereas bottom-up processing provides structure and accuracy. Both processes work together.
While bottom-up processing depends on sentence-level rules, top-down processing relies on
experience, context, and stored background knowledge. For instance, if someone has read
similar texts before or is familiar with the topic, they will use that knowledge to anticipate what
is coming next. Together, these two processes make human comprehension more flexible than
machine parsing. Bottom-up processing constructs meaning step by step, and top-down
processing adjusts or supplements that meaning based on what the reader already knows or
expects.
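Since this section borrows its model from computational language processing, the interplay of the two strategies can be sketched in a toy Python fragment. The lexicon and expectation table below are invented for this sketch, not taken from any real parser: bottom-up steps look each word up directly, while a top-down step uses the preceding context to guess at a word the lexicon cannot resolve, just as a human reader copes with a misspelling.

```python
# Toy illustration of bottom-up vs. top-down processing
# (lexicon and expectation table invented for this sketch).
LEXICON = {"the": "DET", "dog": "NOUN", "chased": "VERB", "cat": "NOUN"}

# Top-down expectations: given the category just seen,
# which category do we predict next?
EXPECT = {"DET": "NOUN", "NOUN": "VERB", "VERB": "DET"}

def tag(sentence):
    """Tag each word bottom-up; fall back on a top-down prediction
    for unknown (e.g. misspelled) words."""
    tags = []
    prev = None
    for word in sentence.split():
        if word in LEXICON:                  # bottom-up: use the word itself
            tags.append(LEXICON[word])
        elif prev in EXPECT:                 # top-down: use the context
            tags.append(EXPECT[prev] + "?")  # '?' marks a guess
        else:
            tags.append("UNK")
        prev = tags[-1].rstrip("?")
    return tags

print(tag("the dog chased the czt"))  # 'czt' is guessed from context
```

A strictly bottom-up system would reject "czt" outright; the top-down fallback is what lets the sketch, like a human reader, assign it a plausible interpretation anyway.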

7.6 Representing Background Knowledge


In discourse analysis, understanding what people mean often depends on how background
knowledge is stored and used. Research has shown that this knowledge is typically organized in
a fixed way, not as separate pieces but as complete units of stereotypical knowledge. For
example, when people read about a restaurant, they don’t recall random facts about restaurants;
instead, they retrieve a full “restaurant scene” from memory. This suggests that comprehension
is essentially a memory-based process.
In computational linguistics and artificial intelligence, researchers have tried to replicate this
by creating large, fixed memory systems for computers. However, it quickly became clear that
representing general world knowledge is difficult due to its vastness. To solve this, some
researchers designed smaller knowledge systems focused on specific domains, like colored
blocks or airline booking systems. This led to the idea that human knowledge is divided into
specialized, interconnected sets that are activated only when needed. For instance, when reading
about visiting a dentist, people automatically access their “dentist-visit” knowledge and not
unrelated topics like typing a letter—unless the text demands it.

Thus, discourse comprehension is closely linked to how knowledge is mentally stored and
retrieved for interpreting language meaningfully.

7.6.1 Frames
One influential way of representing background knowledge in discourse understanding is
through Minsky’s frame theory. According to Minsky, knowledge is stored in memory in the
form of frames, which are structured mental representations of stereotypical situations. When
someone encounters a new situation or reinterprets a current one, a relevant frame is activated
from memory. The person then adapts this existing framework by filling in or adjusting certain
details to match the new context. Although Minsky’s theory mainly focuses on visual memory
and perception, it has implications for language understanding. For instance, he compares a
visual frame for a room with a linguistic frame for a noun phrase. Both contain required
components (like walls in a room or a noun in a phrase) and optional elements (such as
decorations or determiners). A frame includes labeled slots, and each slot can be filled with
specific expressions or even other frames. For example, a "house" frame might include slots
labeled “kitchen,” “bathroom,” or “address,” which become populated when referring to a
specific house.
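Minsky's slot-and-filler idea can be sketched in a few lines of Python. The slot names and default values below are invented for illustration: a frame carries labeled slots with stereotypical defaults, and a specific instance overrides only the slots the discourse actually mentions, leaving the rest to be assumed.

```python
# A minimal frame: labeled slots with stereotypical defaults
# (slot names and values invented for this sketch).
HOUSE_FRAME = {"kitchen": "yes", "bathroom": "yes", "address": None}

def instantiate(frame, **mentioned):
    """Fill a copy of the frame with details mentioned in the discourse;
    unmentioned slots keep their stereotypical defaults."""
    instance = dict(frame)
    instance.update(mentioned)
    return instance

my_house = instantiate(HOUSE_FRAME, address="12 Elm Street")
print(my_house["address"])   # filled from the discourse
print(my_house["kitchen"])   # default: assumed even though never stated
```

The point of the sketch is the second print: the reader "knows" the house has a kitchen although no text supplied that fact, because the frame's default fills the slot.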

Frames serve as fixed representations of conventional knowledge. Some researchers define a frame as a static structure containing facts about a single stereotypical topic. Others suggest that
frames can function dynamically, executing retrieval processes and inferences during
comprehension. A practical example involves a “voting frame.” If someone receives a voter
notification card, they might already assume the existence of a polling station and a clerk,
without needing that explicitly stated. Their existing voting frame fills in those details. This
enables discourse to be both efficient and informative, acting as a reminder for those with
existing knowledge and an explanation for those without it. However, a key issue in frame-
based understanding is over-activation. A single cue in a sentence might trigger multiple
frames. For instance, the phrase “The Cathedral congregation watched…” could activate a
"cathedral" frame, a "television" frame, or a "meeting" frame. But most of these frames are not
needed for understanding the main point, leading to the idea that "many frames are called, but
few are chosen”. Despite such challenges, frame theory has remained valuable in linguistics,
sociology, and artificial intelligence for modeling how stereotypical knowledge shapes
discourse understanding.

7.6.2 Scripts
Following the introduction of frames as structured memory units representing stereotypical
situations, the concept of scripts was developed to deal specifically with predictable sequences
of events. While frames capture static elements of a scene (like objects in a room), scripts focus
on dynamic, ordered actions typically associated with a situation. Schank and Abelson (1977)
introduced the notion of scripts as structured representations that encapsulate routine behavioral
patterns, such as visiting a restaurant or attending a class. These were derived from Abelson's
(1976) earlier work on how attitudes and behaviors are mentally organized and connected.

In this model, understanding a sentence involves more than just interpreting its words; it
involves placing it within a larger expected context. For example, the sentence "John ate the ice
cream with a spoon" is understood not merely by parsing its syntax but by drawing on world
knowledge. Schank reformulated such sentences into conceptual dependencies, where "John ate
the ice cream with a spoon" becomes "John ingested the ice cream by transferring it with a
spoon to his mouth". This elaboration captures unstated but understood aspects of the action,
showing how language comprehension relies on background knowledge.

Riesbeck and Schank (1978) further argued that readers and listeners are expectation-based
parsers—we predict what’s coming next in a discourse based on stored scripts. This prediction
can lead to errors when expectations are violated, as in the example where a hunting story ("We
shot two bucks") turns unexpectedly into a financial story ("That was all the money we had").
Such shifts force readers to reassess their understanding, highlighting the script’s role in setting
expectations. Scripts function like programs: they activate default expectations about people,
places, and actions involved in common situations. They are essential in making inferences that
are not explicitly stated in texts. For example, if a news article states that someone was taken to
the hospital and released, readers infer that the person was injured—based not on the sentence’s
surface structure but on the hospital-visit script. Despite their usefulness, scripts face criticism.
Scholars question how to limit the amount of background knowledge activated, or how to
account for idiosyncratic scripts, such as those shaped by personal experience. Nevertheless,
experimental evidence supports their role in memory and comprehension, indicating that
readers often recall actions suggested by a script even if they weren’t in the original text.
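Schank and Abelson's idea that a script supplies an ordered sequence of default events can be sketched as follows. The event list is a drastic simplification invented for this example (their actual scripts encode roles, props, and entry conditions): events explicitly mentioned in a text are located in the script, and earlier unmentioned events are inferred as having occurred, matching the experimental finding that readers recall script-implied actions the text never stated.

```python
# A stereotyped 'restaurant' script: an ordered list of default events
# (invented simplification; Schank & Abelson's scripts are far richer).
RESTAURANT_SCRIPT = ["enter", "sit down", "order", "eat", "pay", "leave"]

def infer_events(mentioned):
    """Given events explicitly stated in a text, infer the full default
    sequence up to the last mentioned event (expectation-based filling)."""
    last = max(RESTAURANT_SCRIPT.index(e) for e in mentioned)
    return RESTAURANT_SCRIPT[: last + 1]

# The text says only that John ordered and paid; readers also
# infer that he entered, sat down, and ate.
print(infer_events(["order", "pay"]))
# → ['enter', 'sit down', 'order', 'eat', 'pay']
```

The sketch also hints at the criticism raised above: a fixed list cannot decide how much background to activate, nor accommodate idiosyncratic scripts shaped by personal experience.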

7.6.3 Scenarios
Following the development of scripts, the concept of scenarios was introduced by Sanford and
Garrod (1981) to explain how background knowledge influences the interpretation of texts.
While scripts describe fixed, stereotypical event sequences, scenarios are broader cognitive
structures that represent settings and contexts related to a situation. They act as frameworks that
guide comprehension by supplying likely participants, goals, and objects when a reader
processes a text.

A scenario helps the reader build a coherent mental representation of a discourse by supplying
unstated but expected elements. For instance, in a text about being “in court,” a reader activates
a legal scenario, bringing in roles such as judges, lawyers, and defendants. Sanford and Garrod
demonstrated this through experiments measuring reading times. Participants read passages
such as: “Fred was being questioned. He had been accused of murder. The lawyer was trying to
prove his innocence.” When preceded by a title like “In court,” readers processed the text more
quickly, as the legal scenario had already been activated. Without this context (e.g., under a
vague title like “Telling a lie”), reading times increased, indicating a slower or less effective
comprehension process. The authors emphasize that texts must contain enough cues to activate
the correct scenario. For example, just naming a character or describing an isolated event might
not be sufficient unless it directly links to the larger background situation. Text structure,
particularly thematic elements introduced early in the discourse, can help guide the reader
toward the intended scenario. This aligns with broader ideas about thematic staging discussed
earlier in discourse analysis, where themes serve to orient readers to upcoming content. While
scenarios aid in efficient processing, Sanford and Garrod do not claim that texts lacking a clear
scenario are impossible to understand—only that they may be processed more slowly. For
example, a complex, multi-faceted description like the one involving the Pope, the Archbishop,
and a helicopter on a recreation ground challenges readers because it doesn’t neatly fit a single,
familiar scenario. In such cases, comprehension relies more on assembling elements piece by
piece rather than relying on a pre-activated knowledge structure.

In sum, scenarios serve as extended domains of reference. They provide psychological insight
into how readers make sense of texts by drawing on situational knowledge that supports
coherence, expectation, and speed of processing.

7.6.4 Schemata
Building on the previous discussion of frames, scripts, and scenarios, the concept of schemata
provides another important way of understanding how background knowledge influences
discourse interpretation. Schemata (singular: schema) are structured mental frameworks that
organize and store general knowledge, allowing individuals to comprehend new information by
relating it to prior experience. Unlike scripts, which focus on predictable sequences of events, or
scenarios, which are specific to particular settings, schemata can be broader, more abstract
knowledge structures that guide both comprehension and memory. In discourse studies,
schemata help explain how people make sense of stories or texts based on prior exposure to
conventional structures. For instance, in story-grammar theory, a story-schema includes
elements like setting, characters, problem, and resolution. Readers familiar with these elements
can more easily follow and recall stories that conform to this pattern. However, it is not texts
that have schemata — people do. A schema exists in the mind of the reader, and it influences
how the discourse is processed and understood. Schemata are often described as “structures of
expectation” (Tannen, 1979). When we encounter a new discourse, we use existing schemata to
predict its content and structure. This has been demonstrated through studies where participants'
interpretations of the same text varied based on their background and interests. In Anderson et
al. (1977), students with different interests (music vs. sports) read the same ambiguous passage.
Music students interpreted it as describing a musical gathering, while sports-oriented students
believed it depicted a card game. Their personal histories influenced the schema they activated,
which shaped how they understood the discourse. Bartlett (1932), who first introduced the term
schema in psychological research, argued that memory is not reproductive but reconstructive.
That is, people do not recall texts word for word; instead, they rebuild meaning by combining
the information encountered with relevant past experiences. Bartlett described schemata as
"active and developing", constantly shaped by interaction with new experiences. In this way, a
schema is not merely a fixed container of facts, but a dynamic mental tool that aids
comprehension.

Still, not all scholars define schemata as flexible or developing. For instance, Rumelhart and
Ortony (1977) define schemata more rigidly, as “stereotypes of concepts.” Their example of a
FACE schema includes subschemata like EYE and MOUTH, resembling the slot-filler structure
of frames. Likewise, some researchers treat schemata as prototypes—fixed structures similar to
Minsky’s frames. These contrasting views illustrate the theoretical tension between seeing
schemata as static or dynamic. Despite these differences, the role of schemata in discourse
interpretation is widely recognized. They help individuals generate expectations, process
familiar structures more efficiently, and even misremember information that fits the schema but
wasn’t actually present in the text. This was demonstrated by Bower et al. (1979), who found
that subjects often recalled unstated script-actions simply because they were implied by the
schema.

Overall, schemata function as powerful cognitive tools that shape how we interpret and
remember discourse. They bridge the gap between textual input and mental representation by
anchoring new experiences in familiar cognitive structures.

7.6.5 Mental Models


The concept of mental models offers a different and important perspective on how individuals
understand discourse. While frames, scripts, scenarios, and schemata all focus on stored, often
stereotypical knowledge, mental models refer to internally constructed representations of a
specific situation, based on the content of discourse and one's existing knowledge. Unlike static
storage structures, mental models are dynamic, situation-specific, and constructed in real time
during discourse processing.

Psychologist Philip Johnson-Laird is closely associated with this concept. He argues that we
comprehend discourse by forming mental simulations of the events and relationships described,
rather than analyzing language by decomposing word meanings into smaller parts. For example,
instead of mentally breaking down the word “man” into features like “human,” “adult,” and
“male” (as in Katz and Fodor’s semantic theory), Johnson-Laird suggests that we understand a
sentence like “The man is walking his dog in the park” by creating a mental image or model of
that scene in our minds. Importantly, these models are not direct linguistic representations;
rather, they are constructed from linguistic cues and enriched with our world knowledge. When
we hear or read a sentence, we do not usually analyze its literal logical structure in detail.
Instead, we quickly form a plausible interpretation, often without realizing it. For instance,
Johnson-Laird gives the example: “This book fills a much-needed gap.” At first glance, most
people interpret this as praise for the book. However, when analyzed logically, it suggests that
the “gap” (not the book) was much-needed — a less flattering view. This example shows how
readers build practical, familiar models of meaning rather than rely on logical analysis.

Mental models are flexible and varied, allowing different people to interpret the same
discourse in different ways. This is especially relevant when the discourse is vague or
ambiguous. The variability also explains why different readers may construct slightly different
interpretations of a text, depending on their experiences, assumptions, and prior knowledge.
While other structures like frames and schemata draw on predefined templates, mental models
are constructed on the spot, tailored to the demands of the discourse. Another strength of mental
model theory is its psychological grounding. Experimental evidence supports the idea that
people recall and interpret discourse in ways consistent with constructed models. For instance,
in studies by Anderson and colleagues, participants better recalled words like “shark” than
“fish” after reading the sentence “The fish attacked the swimmer”. This shows that people
imagined a specific instance, a shark, even though the sentence didn’t explicitly state it.
According to Johnson-Laird, the word “fish” triggered the creation of a mental model that
included a large, aggressive fish — typically a shark — based on context and general
knowledge.

In contrast to formal semantic models, which are abstract and not psychologically based,
Johnson-Laird emphasizes that mental models are psychological representations. They operate
within the mind and represent how the world might be, as understood by a particular individual.
These models help people draw inferences, understand implications, and even detect
inconsistencies or surprises in texts. Still, mental models are not without challenges. One issue
concerns the lack of a clear definition for what makes a model “familiar,” and how exactly the
mind selects which details to include. Additionally, while mental models avoid the problem of
overgeneralization seen in script-based systems, they must still manage cognitive load, since
constructing a new model for every situation requires mental effort. Despite these challenges,
mental models are highly valuable because they allow for individualized and context-sensitive
interpretations of discourse.

In summary, mental models are internal, flexible simulations that people create to represent
and understand the meaning of discourse. Unlike stereotyped knowledge structures, they are
built dynamically during processing and offer a psychologically realistic view of how we
comprehend complex and variable language.

7.7 Determining the Inference to Be Made


A crucial step in interpreting discourse is determining which inferences should be made by the
reader or listener. While earlier sections have shown that people draw on stored knowledge and
build mental models to interpret texts, the challenge lies in selecting the relevant inferences at
any given moment. Not all possible background knowledge is activated; rather, only the most
contextually appropriate inferences are used to ensure coherence.

Inference-making is not a mechanical process. It involves assessing the communicative intention of the speaker or writer. For example, a sentence like “John was late to the meeting
because he missed the train” invites the inference that missing the train caused the lateness. But
if the sentence was “John was late to the meeting. He missed the train,” the reader still
connects the two events, based on assumptions about cause and effect and the expectation of
coherence. Discourse understanding often depends on guessing the speaker’s intended meaning,
which is guided by context, shared knowledge, and discourse structure. The reader must decide
what assumptions the speaker expects them to make, and which connections are meant to be
drawn. As such, determining the correct inference is a process deeply tied to pragmatic
reasoning and the assumption of coherence in discourse.

7.8 Inferences as Missing Links


In many instances, inferences function as the missing links that enable a reader or listener to
form a coherent understanding of a discourse, even when certain details are not explicitly stated.
The linguistic surface of a text may present ideas or events in a sequence, but the logical or
causal connections between them are often left unstated. It is the responsibility of the interpreter
to supply those missing links through inference. Consider the following example: “John picked
up his keys. A few minutes later, he was stuck in traffic.” There is no explicit statement that
John left the house, got into a car, and started driving. However, readers typically infer this
sequence because they assume coherence and apply background knowledge about everyday
behavior. This illustrates how inferences allow the filling in of conceptual gaps between
adjacent sentences or events in a text. The need for inferences arises partly from the economy of
language. Speakers and writers often omit information that they expect their audience already
knows or can easily supply. Including every detail would make communication unnecessarily
long and cumbersome. As such, inferences operate as an essential mechanism that lets language
users streamline messages while still conveying complete ideas.

These inferences are not random or infinite; they are constrained by context, world knowledge,
and the discourse function. For example, in a news story reporting, “The suspect was arrested
near the scene. A weapon was also found,” readers infer that the weapon may have belonged to
the suspect, even though this link is never directly stated. Such inferences are guided by cultural
and situational knowledge — in this case, assumptions about criminal investigations. This idea
of inference as a missing link is especially significant in narrative texts, where readers construct
storylines and character motivations based on implied content. For instance, when a character
slams a door and then refuses to speak, readers may infer anger or frustration. These emotional
and motivational connections are rarely made explicit, yet they are crucial for comprehension.

The same principle applies to more formal or academic texts, where logical relations such as
cause, contrast, or elaboration may be inferred from the order of sentences rather than signaled
by cohesive devices. A pair of sentences like "The results were surprising. The theory did not
predict them" encourages the reader to infer a contrast or a failure of the theory without explicit
connectives. It is important to note that inference-making can lead to misunderstanding if the
reader supplies the wrong missing link. This happens when the background knowledge or
assumptions used are not aligned with the speaker’s intentions. Therefore, the reliability of
inferences depends on how well the speaker and listener share common ground — a set of
shared beliefs, knowledge, and expectations.

In conclusion, inferences as missing links are not just add-ons to understanding; they are
central to discourse comprehension. They fill in gaps left by language and ensure that
communication remains efficient and meaningful, even when explicit connections are absent.

7.9 Inferences as Non-Automatic Connections


While many inferences in discourse are made quickly and effortlessly, not all interpretive
connections are automatic. In fact, some inferences require deliberate reasoning and are highly
dependent on context, intention, and the reader’s or listener’s knowledge base. These are
referred to as non-automatic inferences, which differ from routine background assumptions
because they involve a conscious interpretive effort.

A non-automatic inference occurs when the surface meaning of a sentence is not sufficient to
explain the relationship between ideas or when the discourse presents a kind of interpretive
tension. For example, in the exchange:
A: “Can you go to Edinburgh tomorrow?”
B: “B.E.A. pilots are on strike.”
The response does not directly answer the question. However, the hearer infers that the strike
will prevent B from going. This inference is not automatic because the connection depends on
the listener’s ability to interpret the relevance of the reply within the social and situational
context. Such inferences depend on pragmatic reasoning — figuring out what the speaker
intended by saying something indirectly. Gricean principles, particularly the Cooperative
Principle, explain that speakers aim to be relevant, informative, and truthful. When a response
seems unrelated at first glance, listeners infer that it must be connected to the original question
in some way. This process is cognitively active and shaped by expectations about how
conversation works.

These types of inferences often involve recognizing implications, reading between the lines, or
interpreting non-literal language. Consider sarcasm or irony: “Oh, that’s just great,” might
literally express approval, but in a given context — say, after a car breaks down — it likely
expresses frustration. The hearer must actively interpret tone, context, and likely intentions to
arrive at the appropriate meaning. Again, this inference is not automatic; it demands contextual
reasoning and a flexible interpretive process.

Moreover, non-automatic inferences become especially relevant in cross-cultural
communication or unfamiliar situations, where the assumptions needed for automatic
understanding are not shared. In these cases, listeners cannot rely on default scripts or schemata.
Instead, they must engage more deeply to construct coherence. For example, reading a story
from a different culture may require learning new values or norms to correctly interpret
character actions and narrative structure. Additionally, some texts are deliberately written to
provoke non-automatic interpretation. Literary texts, for example, often resist straightforward
readings and invite readers to explore multiple layers of meaning. Poetry, metaphor, and
complex argumentation require the reader to go beyond literal interpretation and supply
inferences that are not immediately obvious. These inferences are also important in persuasive
writing or argumentation. A speaker may hint at a conclusion without stating it, leading the
reader to infer the intended point. For instance, by presenting a list of negative consequences, a
writer might guide the reader to conclude that a policy is flawed, without ever making that
claim directly.

In summary, non-automatic inferences are essential in situations where meaning is indirect,
implied, or ambiguous. They require readers to engage deeply with context, intention, and
broader knowledge. Unlike default assumptions or culturally fixed scripts, these inferences are
shaped by active reasoning, playing a crucial role in how we construct meaning beyond what is
explicitly stated.

7.10 Inferences as Filling Gaps or Discontinuities in Interpretation
In natural discourse, speakers and writers often leave parts of the message unstated, whether
intentionally or unintentionally. These missing elements can result in gaps or discontinuities in
interpretation, which listeners and readers must bridge through inference. This process is
fundamental to our ability to make sense of communication, especially when the linguistic
surface alone does not provide a complete or coherent message. Gaps may occur in a variety of
ways — through omitted information, sudden topic shifts, or unconventional sequencing of
events. For example, in the sequence “Karen sat down. The music started playing,” the
connection between the two actions is not explicitly stated. The reader might infer that Karen
turned on a stereo or that music began playing in response to her sitting. The exact causal or
temporal link is absent from the text but filled in by the reader’s background knowledge and
assumptions about coherence.

These inferences are especially important when dealing with fragments, ellipsis, or abbreviated
text forms. In everyday conversation or public signage, we frequently encounter messages that
are far from grammatically complete yet are still fully interpretable. For instance, a sign reading
“Children under 5 free” lacks a verb or full structure, but we infer it means “Children under 5
years old are allowed in without charge.” Such examples illustrate how inferences restore
completeness to structurally reduced discourse. Discontinuities may also emerge when
discourse violates typical expectations, such as when the narrative sequence is disrupted or
essential context is missing. In these cases, the reader actively constructs connections to
maintain coherence. This may involve inserting actions or events into a mental model that are
not in the text. For example, if a text reads: “John ordered a steak. He paid the bill and left,”
the reader assumes that the steak was served and eaten, even though these steps are not
mentioned. The inference fills a logical and experiential gap. Inferences of this kind reflect the
broader cognitive strategy of maintaining interpretive coherence. Rather than treating a gap as a
breakdown in communication, the reader assumes that something relevant and recoverable has
been omitted, and seeks to reconstruct it. This process relies on the assumption of relevance and
communicative intent — the belief that the speaker or writer is trying to be understood.

Thus, inferences that fill gaps or discontinuities are not optional add-ons to comprehension.
They are core mechanisms that ensure discourse is understood as coherent, complete, and
meaningful — even when parts are missing or only partially stated.

7.11 Conclusion
In this chapter, we have explored various theoretical perspectives and empirical findings
concerning the interpretation of discourse, focusing especially on the concept of coherence. Our
discussion began by questioning the assumption that understanding a linguistic message relies
solely on the literal meaning of its syntactic and lexical components. We have seen that
coherence, far from being a property of sentences alone, emerges from the interaction of
linguistic input with contextual knowledge, conventional social understandings, and
expectations formed through experience. We examined how discourse participants make use of
socio-cultural knowledge, conventional communicative functions, and principles such as
analogy, inference, and local interpretation to compute meaning. Notably, different approaches
such as speech act theory, script theory, frame theory, schema theory, and mental model theory
have attempted to explain how listeners and readers interpret discourse beyond the literal level.
Each model demonstrates how meaning is not simply derived from sentences but is also shaped
by the background knowledge and expectations of the interpreter.

These perspectives converge in showing that discourse comprehension is a dynamic process
that combines top-down and bottom-up processing. While linguistic input plays a role, much of
our understanding depends on pre-existing structures of knowledge and our ability to activate
appropriate representations during interpretation. As we have seen in the discussion of scripts,
frames, and mental models, this activation is selective and context-sensitive. However,
challenges remain in constraining the amount of background information used and determining
what counts as the speaker’s or writer’s intended meaning.

Overall, the study of coherence in discourse interpretation emphasizes that understanding is not
merely about decoding isolated sentences but about constructing meaning from interconnected
elements within a discourse. This process involves inferring intentions, drawing on shared
knowledge, and recognizing how discourse structures align with familiar patterns. Thus,
coherence is not inherent in the text alone but is co-constructed by both the discourse producer
and the interpreter through active cognitive engagement.
