Reasoning as a tool in the service of our goals.
Amelia Gangemi
Department of Cognitive Science, University of Messina.
[email protected]

Abstract
What makes the difference between rational and irrational reasoning? In this chapter, I will try to answer this question by offering a different view of what we can mean by the term rationality. In particular, a functional and pragmatic account of rational reasoning will be proposed. According to it, the best (i.e., the most rational) kind of thinking is whatever kind of thinking best helps people to achieve or protect their goals and to reduce the costs of crucial errors. Therefore, contrary to normative theories, such as logic or probability theory, rationality is not the same as accuracy, and irrationality is not the same as error. Rationality can instead be considered a matter of degree: we can say that a way of reasoning is “more rational” or “less rational” than another, depending on how helpful it is for our goals. Nor need there be a single “best” way of reasoning: there may be different ways of reasoning that are comparable in terms of their value in helping people to achieve their goals, depending on the beliefs, contexts, or domains in which we are reasoning. Finally, these kinds of reasoning do not deny emotions but give them a relevant role. Emotions sometimes even improve our reasoning when we want to achieve or defend our own goals and interests.
1. Introduction
“Our aim is to reach new and parsimonious conclusions that are plausible given the premises
and our knowledge. Our reasoning in everyday life doesn’t occur in a vacuum. We reason because
we have a goal beyond that of drawing a conclusion. We have a problem to solve, a decision to
make, a plan to devise. The goal constrains the sorts of strategies that we draw” (Johnson-Laird,
2006, p. 12). This is what Johnson-Laird writes when he introduces his view on rationality in
reasoning, and it is the starting point of this chapter. With it, I would like to present a different view of what we can mean by the term rationality. Thanks to scholars like Baron (2008), Johnson-Laird (2006), and Mancini (e.g., Mancini, Gangemi & Johnson-Laird, 2007), we now know that rational thinking is, in general, the kind of thinking that best achieves or protects the reasoner’s goals, where rationality concerns the methods of thinking used, not the conclusions of thinking.
This perspective goes against the normative theories of reasoning, which state that we are rational only if we use mathematical norms, such as the rules of conditional probability or of logic. If we do not, we are considered irrational. Rationality is, therefore, the same as accuracy, and irrationality is the same as error.
In what follows, I will argue that rational thinking is instead the kind of thinking that does not deny emotions and desires; it is the kind of thinking we try to use when we want to achieve our own goals and interests. Our goals are, indeed, what we want to pursue, even when we are not aware of them. They are the criteria by which we evaluate everything in our life. Thus, we would want to think “rationally,” in this sense. It is worth noting that we adopt this point of view in everyday life, even if we are not aware of it, and above all, even if we are not experts in theories of thinking and reasoning. It is a sort of naïve theory of our own or others’ minds.
2. Normative theories and rational thinking
The term rational comes from the Latin word rationalis, meaning reasonable or logical. As already explained, we are considered rational if our thinking is based on mathematical, logical, or reasoning norms. Indeed, normative theories of reasoning (i.e., theories of probabilistic or logical reasoning) seem to explain well the rules or criteria that an “ideal reasoner” should use in order to think rationally. For example, probabilistic reasoning has been studied according to the theory of probability. Probability theory is a normative theory used for evaluating numerically the strengths of our beliefs in various propositions. The probability calculus, as every gambler knows, was invented in 1654, when the Chevalier de Méré wrote to Pascal about a gambling game and Pascal, in turn, consulted Fermat. The probability calculus enables us to deduce the right answers about certain probabilities. Certainly, people also reasoned about probabilities before the 1650s, but no one knew how they ought to reason about them. The probability calculus thus became what psychologists call a “normative” theory. The most common normative theory of probabilistic reasoning is based on the so-called Bayes’ theorem, named in honor of the 18th-century British mathematician Thomas Bayes. The theorem tells us how to compute a conditional probability, defined as the likelihood of an event or outcome occurring given that a previous event or outcome has occurred. In particular, it relates the probability of a hypothesis given some evidence to the prior probability of that hypothesis and to the probability of the evidence given the hypothesis. The theorem thus provides a way to revise existing predictions or beliefs (to update probabilities) in the light of new or additional evidence.
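For reference, a standard statement of the theorem for a hypothesis H and observed data D (the notation is the conventional one, not specific to this chapter) is:

\[
P(H \mid D) \;=\; \frac{P(D \mid H)\,P(H)}{P(D \mid H)\,P(H) + P(D \mid \neg H)\,P(\neg H)}
\]

Here P(H) is the prior probability of the hypothesis, P(D | H) is the hit rate, and P(D | ¬H) is the false-alarm rate; these are exactly the quantities used in the example that follows.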
Let’s see a realistic example taken from Baron (2008). Suppose you are a physician and a woman in
her thirties comes to you saying that she has found a small lump in her breast, and she is worried
that it might be cancerous. After you examine her, you think—on the basis of everything you know
about breast cancer, women with her kind of medical history, and other relevant information — that
the probability of cancer is .10—that is, 1 in 10. You recommend a mammogram, which is an X-ray
study of the breast. You know that in women of this type who have cancer, the mammogram will
indicate cancer 90% of the time. In women who do not have cancer, the mammogram will indicate
cancer falsely 20% of the time. We can say that the hit rate is 90% and the false alarm rate is 20%.
The mammogram comes out positive. What is the probability that the woman actually has cancer?
Try to answer this by yourself (intuitively, without calculating) before going on. Many people say
that the probability is 90%; many others say that it is over 50%. Surprisingly, even some medical
textbooks make this mistake (Eddy, 1982). Let’s see what it would actually be, if our probability
judgments were coherent. First, let’s simply calculate it carefully; later, let’s see a general formula
for this sort of calculation. Suppose that there are 100 women of this type, and the numbers apply
exactly to them. We know that 10 of them have cancer and 90 do not. Of the 90 without cancer,
20% will show a positive mammogram, and 80% will not. Of the 10 that do have cancer, 90% will
show a positive mammogram and 10% will not. Thus we have the following:
                               Mammogram
                          Positive          Negative
No cancer (90 patients)   90 x .20 = 18     90 x .80 = 72
Cancer (10 patients)      10 x .90 = 9      10 x .10 = 1
Because the patient had a positive mammogram, she must fall into one of the two groups with a
positive mammogram. In the combined group consisting of these two groups, there are 27 patients,
9 affected by cancer. Thus, the probability that our patient has cancer is 9/27, or .33. It is far more
likely that she was one of those women without cancer but with a positive result, than that she
really had cancer. Of course, a .33 chance of cancer may well be high enough to justify the next
step, probably a biopsy, but the chance is far less than .90.
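The same calculation can be written as a short computation; the sketch below (in Python, with a function name of my own choosing) simply plugs the numbers given above into Bayes’ theorem:

    def posterior(prior, hit_rate, false_alarm_rate):
        """Probability of the hypothesis given a positive test (Bayes' theorem)."""
        true_positives = prior * hit_rate                 # P(cancer and positive test)
        false_positives = (1 - prior) * false_alarm_rate  # P(no cancer and positive test)
        return true_positives / (true_positives + false_positives)

    print(posterior(prior=0.10, hit_rate=0.90, false_alarm_rate=0.20))  # 0.333...

The posterior of one third agrees with the frequency count above: 9 cancer cases among the 27 positive mammograms.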
Since the 1950s, and for decades, this way of calculating conditional probabilities has been considered by psychologists the ideal rule for evaluating individuals’ naïve probabilistic judgments under uncertainty. The use of these calculations, and thus of a mathematical norm, would make individuals’ probabilistic thinking rational. Yet a wide body of evidence shows the occurrence of systematic errors in reasoning called, as we will see below, biases. Biases arise from the application of reasoning strategies called heuristics, which are quick and economical strategies of reasoning (see below for a detailed definition). According to the normative point of view, these simple and quick strategies would make us irrational.
Formal logic has also often been used to define problems of deductive and inductive reasoning. The term “logic” refers to the science that studies the principles of correct reasoning. The foundation of a logical argument is its proposition, or statement. A proposition is either accurate (true) or not accurate (false). An argument is then built on premises, propositions used to support it. An inference is made from the premises, and, finally, a conclusion is drawn. A simple example of deductive reasoning is: all squares are rectangles; all rectangles have four sides. Logic, therefore, tells us that all squares have four sides.
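In standard first-order notation (a conventional formalization, not one used by the author), the argument has the valid form

\[
\forall x\,(S(x) \rightarrow R(x)),\qquad \forall x\,(R(x) \rightarrow F(x)) \;\;\therefore\;\; \forall x\,(S(x) \rightarrow F(x)),
\]

where S(x) stands for “x is a square”, R(x) for “x is a rectangle”, and F(x) for “x has four sides”; the conclusion follows from the premises whatever the content of S, R, and F.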
For well over a century, theorists have held that logic plays a central role in reasoning. Many logicians have thought, for example, that the formal rules of logic were the laws of thought. Hilbert, one of the greatest mathematicians of the twentieth century, wrote: “The fundamental idea of my proof theory is none other than to describe the activity of our understanding, to make a protocol of the rules according to which our thinking actually proceeds” (Hilbert, 1928/1967, p. 475). Some other theorists made little or no distinction between logic and thinking. For instance, the logician George Boole (1854/1951, cited in Dellarosa, 1988), describing a system of propositional reasoning, entitled his seminal work “An investigation into the laws of thought”. In any case, the most influential godfather of this approach is Jean Piaget, the pioneering Swiss investigator of children’s psychology. He argued that the mental construction of formal logic is the last great step in children’s intellectual development and that it occurs at about the age of twelve. With Inhelder, Piaget claimed in 1958 that thought is the ability to perform the 16 operations of propositional logic.
Later, other theorists postulated formal rules for deductive reasoning. For example, Wason (e.g., 1966) considered formal logic a normative model of thinking. During the 1980s, the development of logic and of its psychology provided an example of the approach that compares human reasoning to normative models. Even nowadays, people often use the word “logical” as a synonym for “reasonable” or “rational.” However, as with probabilistic reasoning, a huge body of evidence shows the occurrence of biases in logical reasoning (see below), due to the use of heuristics. Again, according to the normative point of view, these simple and quick strategies would make us irrational. But is this actually true?
3. Rational thinking and reasoners’ errors
According to normative theories, we are rational when our thinking is based on mathematical or logical rules; otherwise, we are thought to be irrational. Certainly, all of us can learn and use these rules to solve hard problems. The question, in this case, is whether those of us with any training in formal logic or the probability calculus actually follow these rules of thought (cf. Johnson-Laird, 2006).
As anticipated above, starting in the 1980s a large number of studies showed that when we reason about deductive or probabilistic problems, most of us do not use the normative rules but other, simpler strategies named heuristics. But what is a heuristic? A heuristic is any approach to thinking that employs a practical method which is not guaranteed to be optimal, perfect, or rational (in the normative sense), but which is nevertheless sufficient for reaching an immediate, short-term goal. Heuristic methods can be used to speed up the process of finding a satisfactory solution. Heuristics can be mental shortcuts that ease the cognitive load of making a decision or drawing any kind of conclusion. Simple examples of heuristics include trial and error and common sense.
The study of heuristics in human reasoning was developed in the 1970s and 1980s by the two well-known psychologists Amos Tversky and Daniel Kahneman, although the concept had originally been introduced by Herbert A. Simon. Simon’s original primary object of research was problem solving. Simon held that we operate within what he called “bounded rationality”: rationality is limited by the cognitive limitations of the mind and by the time available to solve a problem. In particular, investigating the decision-making process, he maintained that decision-makers usually act so as to find a satisfactory solution, accepting choices or judgments that are “good enough” for their purposes rather than optimal ones. In this perspective, Simon proposed bounded rationality as an alternative basis for the mathematical modeling of decision-making, as used in economics, political science, and related disciplines. It contrasts with the concept of “rationality as optimization”, which treats decision-making as a fully rational process of finding the best choice given the information available.
Extending these concepts to probabilistic reasoning, Kahneman and Tversky carried out a pioneering series of experiments that examined intuitive judgments of probabilities. The results overturned the classical conception of human rationality, the view that the probability calculus is just common sense. Their studies showed, for example, that if knowledge makes a possibility available to us, we tend to think that it has a higher probability than events that are less available. For example, suppose we ask: which is the most probable cause of death in the US: (a) car accident, (b) stroke, or (c) stomach cancer? We may answer “car accident” because it seems more frequent to us, given that it is a common topic in the news. However, the most frequent cause of death is stroke, followed by stomach cancer and, finally, by car accident. Availability is a heuristic that we tend to use in everyday intuitive judgments about probabilities. Another example is when we are asked the probability that an individual is a member of a particular set or category. In this case, we would use how representative of the category he or she seems to be. This heuristic is sensible, but it can lead
us into error if we fail to test our intuitions. A famous example from Tversky and Kahneman’s
studies concerns a woman called Linda: Linda is 31 years old, single, outspoken, and very bright.
She majored in philosophy. As a student, she was deeply concerned with issues of discrimination
and social justice and also participated in antinuclear demonstrations. From this description,
participants in many experiments gave a higher ranking to the probability that Linda is a feminist
bank teller than to the probability that Linda is a bank teller. Her description is more representative
of the former than the latter (as independent ratings showed). Yet, the set of bank tellers includes
the set of feminist bank tellers, and so the rankings of their probabilities violate a basic principle of
probability. Our intuitions are answering the wrong question, and our deductive powers are
overlooking the error.
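In probabilistic terms, the ranking violates the conjunction rule, a basic consequence of probability theory (stated here in standard notation):

\[
P(\text{bank teller} \wedge \text{feminist}) \;\le\; P(\text{bank teller}),
\]

since every feminist bank teller is also a bank teller, so no description of Linda can reverse the inequality.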
All these studies thus show that there is a gap between our knowledge and our performance. As Johnson-Laird states, this “is the fundamental paradox of rationality. On the one hand, we can understand the principles of rationality—the canons of logic and the calculus of probabilities. On the other hand, we fail to live up to them” (2006, p. 74). And above all, we all fail to live up to them in the same way. Indeed, we make errors in reasoning, a lot of errors, and we mainly make the same errors (i.e., biases). This changes the perspective, leading researchers to investigate not only how we should reason and which strategies we should use (competence), but also how we actually do reason, what kinds of processes we actually follow, and, finally, why we make mistakes (performance). The explanation of these errors thus becomes the focus of the studies, together with the strategies that lead to them. In line with this, in recent decades attention has also turned to understanding why we use these strategies and whether they always make us irrational (and not only from a normative point of view). This is still a subject of debate among scholars: should or must we use alternative criteria, different from the traditional normative ones (formal logic or the probability calculus), according to which we could evaluate a strategy or a performance as rational or irrational?
4. What makes us rational or irrational? Our goals.
In what follows, I would like to give a deeper understanding of what we mean by rational thinking. In particular, I would like to answer the question: by what criteria can we say that we are thinking in a rational way? According to logic, we are rational if we follow, for example, the “laws of thought”. These laws are the same ones made explicit in formal logic (Johnson-Laird, 2006). We can certainly learn them, and we often use them to solve many difficult problems. However, do those of us with any training in formal logic actually follow these laws of thought? Indeed, we often use other, simpler strategies, such as heuristics, and the errors or biases in this area of reasoning come precisely from these other methods of thinking. The question at issue in this section is thus: can we be considered irrational when we do not follow these “laws of thought” or other normative laws, such as the probabilistic ones?
According to a number of the scholars quoted above (e.g., Baron, Johnson-Laird, and Mancini), the answer is no. In fact, we can use other criteria to decide whether a reasoning strategy is rational or irrational. The main criteria are those related to our goals and interests. In line with this view, I will adopt a perspective that is functional and pragmatic, related to the kind of thinking we try to elaborate when we want to achieve our goals and wishes. Our goals are, indeed, what we focus on, even when we are not aware of them. They are the criteria by which we evaluate everything we try to achieve in our life. Thus, we tend to think “rationally,” in this sense. These criteria of rationality help us to define Good Thinking (Baron, 2008) as the best kind of thinking. We can define it as rational because it is whatever kind of thinking best helps people to achieve their goals. Moreover, this does not exclude that it may in some cases correspond to the rules of formal logic. Indeed, it is possible that following the rules of formal logic leads to achieving a relevant goal, such as happiness (see Baron, 2008). In this case, it is “rational” (i.e., Good Thinking) to follow these laws of logic. On the other hand, if, in another context, violating the laws of logic leads to achieving the same goal, happiness, then it is these violations that we can call “rational” (i.e., Good Thinking). It is worth noting that rational thinking is defined relative to a person at a given time, a specific context, and a set of beliefs and goals. It does not matter if these beliefs are false: people may still think rationally. For example, as Baron suggests, if I believe (a delusion) that Isis is pursuing me, I might still make rational decisions for coping with that situation. Similarly, people who pursue irrational goals may still think well about how to achieve them. For instance, if my goal is to escape from Isis, I may pursue it well or badly. It is also worth underlining that rationality concerns the methods of thinking we use, not the conclusions of our thinking. Therefore, rational methods are those that are generally best at achieving our goals. That is why, when we say a friend is “irrational,” we usually disagree with her or his conclusion in a peculiar way: we think that she or he could have used better methods, more functional in reaching that conclusion. When we call someone “irrational,” we are focusing on how she or he ought to have thought (cf. Baron, 2008).
4.1 What about heuristics and rationality?
Therefore, according to Baron and his notion of Good Thinking, when we define rational or irrational thinking we move from normative criteria to functional and pragmatic ones. We can indeed distinguish between "good" and "bad" thinking, meaning by the former what allows us to achieve our goals and, by the latter, what could prevent us from achieving them. In general, good thinking consists of (1) search that is thorough in proportion to the importance of the question, (2) confidence that is appropriate to the amount and quality of the thinking done, and (3) fairness to possibilities other than the one we initially favor (Baron, 2008). At this point, it remains to be clarified what position heuristics and biases occupy with respect to these criteria of rationality and irrationality. In other words, as simple cognitive shortcuts, can heuristics be defined as irrational in an absolute sense? On closer inspection, it would seem not. In line with our functional and pragmatic perspective, which takes our goals as the criteria by which we evaluate everything we do in our life, heuristics and biases cannot be defined as rational or irrational in an absolute sense, even though they seem to possess all the characteristics of irrationality: as fast procedures, they do not systematically consider all the possibilities, and they lead us to accept conclusions reached quickly, based on earlier beliefs or on a limited search for information. However, in particular conditions and contexts, heuristics can be considered useful strategies, and therefore rational according to Baron. These conditions are:
1. When time is limited. Heuristics allow us to arrive quickly at a solution, in contrast to algorithms. The latter guarantee that the correct response will be reached, but they require very long times, as they systematically consider all the possibilities relevant to the goal and demand an exhaustive search for specific information.
2. When resources are limited. Heuristics, as economical but non-systematic procedures, are effective in situations where we do not have sufficient energy to consider all the possibilities or all the data. Moreover, possibilities and data are not always cognitively accessible or simply available. In all these cases, algorithms would not allow us to make any decision.
3. When the costs of finding information are high. Algorithms, as systematic resolution procedures, require an exhaustive search for information before a judgment or decision is formulated, and for this reason they can lead the individual to miss out on good opportunities.
In summary, when a quick decision is required, our goal is to think quickly. In this case, good decision-making involves a sufficient search for possibilities and evidence. Searching is sufficient when it best serves the thinker’s personal goals, including the goal of minimizing the cost of thinking. Accordingly, a heuristic is “useful”, functional, and rational if it is not possible to act otherwise, that is, under conditions of short time, limited resources, and high costs. Conversely, it is irrational to use it when this is not necessary.
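A minimal sketch in Python (an illustration of the contrast, not a model taken from the literature) makes the trade-off concrete: the exhaustive procedure inspects every option and is guaranteed to find the best one, whereas the satisficing heuristic stops at the first option that is “good enough”, trading optimality for speed:

    def exhaustive_choice(options, value):
        """Algorithm: evaluate every option and return the best one."""
        return max(options, key=value)

    def satisficing_choice(options, value, aspiration):
        """Heuristic (Simon's satisficing): accept the first option that is good enough."""
        for option in options:
            if value(option) >= aspiration:
                return option
        return None  # no option met the aspiration level

    # Hypothetical example: choosing among job offers on the basis of salary alone.
    offers = [28000, 35000, 31000, 42000]
    print(exhaustive_choice(offers, value=lambda x: x))                     # 42000, after checking all four
    print(satisficing_choice(offers, value=lambda x: x, aspiration=30000))  # 35000, after checking only two

When options are few and cheap to evaluate, the exhaustive procedure costs little; when they are many, hard to evaluate, or costly to obtain, the heuristic is the only workable strategy, which is exactly the point of the three conditions listed above.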
In what follows, I will try to show how and why, in specific contexts, for one of the most investigated reasoning processes, hypothesis-testing, people are mainly motivated by the expected costs of erroneous decisions when they test the validity of a hypothesis, and how, in these contexts, the confirmation bias and the strategies leading to it can thus prove rational.
5. Are logical errors in hypothesis-testing actually errors?
In everyday life, we often ask questions about ourselves and others and try to answer them by seeking relevant evidence or information. We might want to answer questions about another person’s personality (“Is Ellen really a friend?”), about an explanation of their behavior (“Did Ellen offend me because she was angry?”), or about some rule (“Does Ellen get a reward if she passes the exam?”). These are all examples of hypotheses or beliefs we may generate and try to evaluate by seeking evidence concerning their validity. Through hypothesis-testing, we actively relate our stored knowledge to immediate judgments about reality. We generate hypotheses from preexisting mental models of our world (e.g., Johnson-Laird, 2006; Trope & Liberman, 1996), and once they are generated, we try to test their validity through information search and processing. Scientists try to do this when they run experiments. According to Popper’s view of science (i.e., falsificationist methodology; e.g., 1959), scientists should test their hypotheses by trying to falsify them. In line with logical theory and with Popper, those of us who are trying to test a hypothesis should reason like a scientist; that is, we should first try to find counterexamples to our hypothesis or belief, in order to try to falsify it. Counterexamples are indeed crucial for logic: a counterexample to a general proposition shows that the hypothesis is false. If the hypothesis resists every attempt to falsify it, then we can say that it is true. Yet we often fail to search for counterexamples. So it seems that we do not actually test our hypotheses or beliefs as scientists do when we reason in everyday life. What we see is that certain biases cause us to depart from this normative model. These biases are always irrational for logic. But, from a pragmatic perspective and according to Good Thinking, some of them can be functional; that is, they can help us to better achieve our relevant goals, including survival, especially if we are in danger. One of these biases, perhaps the most investigated in the thinking and reasoning literature, is the confirmation bias.
5.1. The four card problem and the better safe than sorry strategy
In cognitive psychology, researchers have often used variants of Wason’s hypothesis-testing problem (Wason’s selection task, WST; Wason, 1966) in order to investigate participants’ hypothesis-testing strategies. The task invites people to check whether a conditional rule of the form “if p then q” has been violated by one of four instances about which they have incomplete information. Each instance is represented by a card. One side of a card tells whether the antecedent is true or false (i.e., whether p or not-p is the case), and the other side of the card tells whether the consequent is true or false (i.e., whether q or not-q is the case). People, who are permitted to see only one side of each card, are asked to say which card(s) must be turned over to see whether any of them violates the rule. The four cards from which participants must choose show the values p, not-p, q, and not-q. From the point of view of formal logic, only the combination on the same card of a true antecedent (p) with a false consequent (not-q) can falsify a conditional rule, so the p and not-q cards are the only ones with the potential to reveal a falsifying instance (Popper, 1959). However, as few as 4% of participants make this response; other responses are far more common: the p and q cards (46%); the p card only (33%); the p, q, and not-q cards (4%) (Johnson-Laird & Wason, 1970).
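A small Python sketch (my own illustration, not part of the original task materials) enumerates what each visible card face could conceal and shows why only the p and not-q cards can expose a violation of “if p then q”:

    # The rule "if p then q" is violated only by a card with p true and q false.
    def violates_rule(p, q):
        return p and not q

    # Each visible face fixes one side of the card; the hidden side can take either value.
    visible_faces = {"p": ("p", True), "not-p": ("p", False),
                     "q": ("q", True), "not-q": ("q", False)}

    for name, (side, value) in visible_faces.items():
        if side == "p":
            possible_cards = [(value, hidden_q) for hidden_q in (True, False)]
        else:
            possible_cards = [(hidden_p, value) for hidden_p in (True, False)]
        informative = any(violates_rule(p, q) for p, q in possible_cards)
        print(f"{name:>5} card: can reveal a violation -> {informative}")
    # Only the 'p' and 'not-q' cards can reveal a falsifying instance.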
The term “confirmation bias” has been used for this kind of behavior (Mynatt, Doherty, & Tweney, 1977). The idea is that subjects systematically try to confirm their hypothesis rather than to test or falsify it. The distinction between logic and reasoning, which was presaged in Wason’s findings, has now proceeded to a far greater degree than anyone could have imagined. Logic is an essential tool for all sciences, but it is not a psychological theory of reasoning. This robust and reliable effect has indeed been widely interpreted as casting doubt on human rationality (Cohen, 1981; Manktelow & Over, 1993; Stich, 1985, 1990), and it has been the basis of several general theories about reasoning and rationality. In general, they all seem to agree, in one way or another, that individuals do not reason according to the normative rules of formal logic and that their answers are errors (e.g., Evans, 1989; Johnson-Laird & Byrne, 1991; Rips, 1994). But is this true? Every time we try to confirm a hypothesis, are we always irrational, and every time we try to falsify it, are we always rational? If we turn back to Good Thinking, we can answer no. Indeed, if we are rational every time we think in order to achieve a relevant goal or to avoid compromising it, then our thinking depends on the relevance of the hypothesis we have to test and on the context in which we test it. In other words, what makes our hypothesis-testing process rational or irrational is not the systematic use of a falsification strategy, but the choice of the strategy (confirmatory vs. falsificatory) that allows us to avoid committing crucial errors from the point of view of our goals. Our hypothesis-testing is indeed a pragmatically directed process, mainly motivated by the costs of our inferential errors. Several studies have clearly demonstrated that people’s hypothesis-testing is domain-specific and guided by their relevant goals: individuals’ reasoning performance depends on the perceived relevance of the hypothesis to their personal interests (Evans & Over, 1996; Kirby, 1994; Manktelow & Over, 1991; Smeets, de Jong & Mayer, 2000).
In line with this perspective, performance in a selection task is driven by this factor: what individuals say about the truth or falsity of conditional rules depends on their preferences among the various possible outcomes or states of affairs. These preferences fix the utilities we attach to those outcomes or states of affairs. By implication, a positive hypothesis-testing strategy in reasoning (seeking confirming information) coexists with more normative test strategies (seeking falsifying information), and these variations in testing strategy (confirmation vs. falsification) depend first and foremost on the perceived utility and costs of the outcomes. For instance, Smeets and colleagues (2000) showed that, in a context of general threat, with a series of selection tasks containing safety rules (i.e., if p then safe) and danger rules (i.e., if p then danger), participants adopted a verificationist strategy in the case of danger rules and tended to look for falsifications in the case of safety rules. Indeed, in potentially dangerous situations it is more adaptive, more functional, to rely on confirming information concerning danger rules. For example, given the rule “if the alarm bell rings, then there is a fire”, one is well advised to check whether the alarm is indeed followed by a fire and whether a fire is indeed preceded by the alarm. The logical option of a false alarm (the bell rings in the absence of fire) is less relevant for survival. Thus, one’s interests are better served by knowing whether the bell rings when there is a fire than by knowing whether the bell sometimes rings in the absence of fire. The opposite is true for safety rules such as “if the monkeys scream, then it is safe”. In this instance, it is adaptive to check whether it is indeed safe when the signal is present. That is, in the case of safety rules, one’s interests are better served by searching for potentially disconfirming information (Are there screaming monkeys, and is it perhaps not safe?). Thus, the hypothesis-testing process seems to be largely guided by individuals’ perceived utilities and thus by their goals and interests.
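The cost asymmetry behind this pattern can be made explicit with a toy expected-cost calculation (the numbers below are purely illustrative assumptions, not data from the studies cited): when missing a real danger is far more costly than a false alarm, the check that can catch missed dangers is worth far more than the check that can only catch false alarms:

    # Toy comparison for the danger rule "if the alarm bell rings, then there is a fire".
    # Both costs and probabilities are hypothetical, chosen only to show the asymmetry.
    COST_MISSED_DANGER = 1000   # acting as if safe when there is a real fire
    COST_FALSE_ALARM = 1        # acting as if there were a fire when there is none

    def expected_cost_avoided(p_error, cost_of_error):
        """Expected cost saved by a check that can detect this type of error."""
        return p_error * cost_of_error

    p_missed_danger = 0.05   # assumed chance of a real fire the rule fails to signal
    p_false_alarm = 0.20     # assumed chance that the bell rings without a fire

    # Confirmatory checking targets missed dangers; falsifying checking targets false alarms.
    print(expected_cost_avoided(p_missed_danger, COST_MISSED_DANGER))  # 50.0
    print(expected_cost_avoided(p_false_alarm, COST_FALSE_ALARM))      # 0.2

Under these assumptions the confirmatory, better-safe-than-sorry check dominates; for safety rules the costly error is believing oneself safe when one is not, which is why the disconfirming check becomes the prudent one.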
According to these findings, the confirmation strategy can be as rational as the falsificationist strategy, if it is functional to individuals’ goals, as in the example of the ringing bell. We can call this strategy the better-safe-than-sorry reasoning strategy (e.g., de Jong, Haenen, Schmidt, & Mayer, 1998; Smeets, de Jong, & Mayer, 2000). All human beings tend to adopt this strategy when exposed to a threat: it focuses them on the danger and leads them to search for examples confirming it, whereas if they are focusing on safety, they search for counterexamples to it. This strategy reflects a prudential style of reasoning that helps us to protect our goals and interests (Mancini & Gangemi, 2004; Johnson-Laird, Mancini & Gangemi, 2006).
This prudential strategy is especially relevant to anxiety disorders, which are marked by an intense emotional reaction to disorder-specific threats. For example, de Jong and colleagues (1998; de Jong, Mayer, & van den Hout, 1997; Smeets, de Jong, & Mayer, 2000) found that individuals with hypochondriasis are more likely to search selectively for confirming information when asked to judge the validity of a danger conditional hypothesis in the context of health threats (e.g., if a person suffers from a headache, then that person has a brain tumor). The threat can also be related to guilt and responsibility, which are central features of OCD (Arntz, Voncken, & Goosen, 2007; Mancini & Gangemi, 2014, 2018; Niler & Beck, 1989; Rachman, 1993; Salkovskis, 1985; Salkovskis & Forrester, 2002; Van Oppen & Arntz, 1994). Specifically, studies have demonstrated that people check safety and danger hypotheses related to an outcome for which they feel responsible more prudently than people who are not made to feel responsible (e.g., Gangemi, Mancini & Dar, 2015; Gangemi, Tenore & Mancini, 2019). The persistent doubt in OCD (e.g., is the door really locked?) could, therefore, result from the motivation to minimize the perceived threat.
In summary, people are motivated to test their hypotheses effortfully in order to pursue their goals, thus reducing the likelihood of crucial errors and thereby avoiding their costs (see Friedrich, 1993). In this pragmatic framework, hypothesis testing aims at accuracy in the sense of reducing the errors that would compromise a relevant goal, and people’s apparent failure to falsify a hypothesis in every case is not the result of a general problem in their cognitive processes, but rather reflects the “better safe than sorry” policy.
6. Can emotions make us rational?
Finally, I want to focus on the relation between rational thinking and emotion. A wide literature tends to see this relationship as a simple contrast between the two. Indeed, many scholars tend to use the term “emotion” as a substitute for the word “irrationality”. They say that rational thinking always needs to be cold: we cannot reason well, and reach our goals, if we are affected by our emotions. In this perspective, emotions should always worsen our reasoning, making us irrational. Yet this is not the position of the appraisal theories of emotion. In general, these theories assume that an individual’s goals (i.e., desires, needs, values) and beliefs (i.e., cognitions, representations, assumptions) are proximal determinants of behavior (cf. Castelfranchi & Paglieri, 2007). In particular, appraisal-based theories claim that all emotional states and behaviors are based on “a person’s subjective evaluation or appraisal of the personal significance of a situation, object, or event on a number of dimensions or criteria” (Scherer, 1999, p. 637). Among these dimensions or criteria, our own goals and interests are the most important. When we believe that they could be compromised, our reasoning becomes a tool to keep them safe. Accordingly, these theories claim that our emotions can even improve our capacity to protect our goals by driving our reasoning processes. And emotions do improve our reasoning in some contexts, as demonstrated by several studies (e.g., Blanchette & Richards, 2010). Let us see what these contexts are.
Some research shows that when emotions are incidental (e.g., induced by a movie), they actually burden the system and lead to poorer performance in a reasoning task (e.g., Blanchette, 2006; Blanchette & Richards, 2004). But when emotions are integral, arising from the topic of reasoning, they enhance reasoning (e.g., Johnson-Laird et al., 2006; Gangemi, Mancini & Johnson-Laird, 2013). For example, Blanchette and her colleagues have shown such effects in a study conducted with war veterans suffering from post-traumatic stress disorder: the veterans evaluated syllogisms better when the conclusions referred to war than when they referred to neutral topics (Blanchette & Campbell, 2005). Analogous effects occurred in the evaluation of syllogisms after the terrorist attacks in London, UK, in July 2005 (Blanchette, Richards, Melnyk, & Lavda, 2007). Participants resident in London were more accurate in evaluating syllogisms concerning terrorism than participants in Manchester, UK, who in turn were more accurate than participants living in a city in Canada. Thus, the closer the participants’ geographical proximity to the attacks, the greater the proportion of them who correctly evaluated the syllogisms. The difference between the Mancunians and the Canadians disappeared six months later, but the Londoners still reasoned more accurately about terrorism than the other two groups. The effect depended on the emotion related to the terrorist attack, because the three groups differed in the reported intensity of their basic emotions.
So, how do emotions explain an improvement in reasoning? The mental model theory of reasoning offers a hypothesis. The theory postulates that reasoning depends on envisaging possibilities, and so emotions induced by the topic lead individuals to make a more comprehensive search for the possibilities pertinent to their cause than the search they make in other cases (e.g., Bucciarelli & Johnson-Laird, 1999; Johnson-Laird et al., 2006; Gangemi et al., 2013; Gangemi et al., 2015; Gangemi et al., 2019). In this way, our emotions may help us to achieve or protect our goals by orienting our reasoning processes. Reasoning thus becomes a tool to keep them safe. This means that rational reasoning does not need to be cold. In many contexts, such as a dangerous one, emotion can even improve our capacity to reason about the threat. An example of a functional reasoning strategy activated by our emotions is the above-mentioned better-safe-than-sorry strategy. In a context of danger, we focus on the threat, and this leads us to feel a congruent emotion, such as anxiety. This emotion leads us to search prudentially for evidence confirming the danger hypothesis. The confirmation of the threat protects us from crucial errors, i.e., undervaluing a danger when it is real. For example, if we are worried because of a persistent symptom, such as liver pain, we may make a transition to great anxiety, focusing on the worst case as a result of our own anxiety: we could have a serious illness. This danger hypothesis is likely to start a confirmatory pattern of inferences, rather than a falsification process. We thus search for evidence confirming the hypothesis from an available source of information, such as an analogy with a friend, a relative, or a case in a newspaper, and this strengthens our belief in the worst-case scenario (see, e.g., Gangemi et al., 2019). We then infer that we should consult a doctor. If we are mistaken about our illness, no harm is done, but if we fail to consult a doctor and we do have the illness, the consequences will be disastrous. We are adopting the better-safe-than-sorry reasoning strategy (de Jong, Haenen, Schmidt, & Mayer, 1998), which helps us to focus on the danger and leads us to search for examples confirming it.
In summary, the relationship between rational thinking and emotion is more complex than a simple contrast between the two, once we avoid using the term “emotion” as a substitute for the word “irrationality”. Emotions can make our reasoning rational when they orient it so as to serve our relevant goals, for example by helping us to achieve or protect them.
7. Conclusions
What makes the difference between rational and irrational reasoning? In this chapter, I have defended a functional and pragmatic account of rational reasoning. According to it, the best (rational) kind of thinking is whatever kind of thinking best helps people to achieve or protect their goals and to reduce the costs of crucial errors. Therefore, contrary to normative theories, such as logic or probability theory, rationality is not the same as accuracy, and irrationality is not the same as error. Rationality can be considered a matter of degree: we can say that a way of reasoning is “more rational” or “less rational” than another, depending on how helpful it is for our goals. Nor need there be a single “best” way of reasoning: there may be different ways of reasoning that are comparable in terms of their value in helping people to achieve their goals, depending on the beliefs, contexts, or domains in which we are reasoning. Finally, these kinds of reasoning do not deny emotions but give them a relevant role. Emotions sometimes even improve our reasoning when we want to achieve or defend our own goals and interests.
References
Arntz, A., Voncken, M., Goosen, A. C. A. (2007). Responsibility and obsessive-compulsive
disorder: An experimental test. Behaviour Research and Therapy, 45, 425–435.
Baron, J. (2008). Thinking and deciding. Cambridge: Cambridge University Press.
Blanchette, I. (2006). The effect of emotion on interpretation and logic in a conditional reasoning
task. Memory & Cognition, 34, 1112-1125.
Blanchette, I., & Campbell, M. (2005). The effect of emotion on syllogistic reasoning in a group
of war veterans. In Proceedings of the XXVIIth Annual Conference of the Cognitive Science
Society (p. 1401). Mahwah, NJ: Erlbaum.
Blanchette, I., & Richards, A. (2004). Reasoning about emotional and neutral materials. Is logic
affected by emotion? Psychological Science, 15, 745–752.
Blanchette, I., & Richards, A. (2010). The influence of affect on higher level cognition: A review
of research on interpretation, judgement, decision-making and reasoning. Cognition and
Emotion, 24, 561-595.
Blanchette, I., Richards, A., Melnyk, L., & Lavda, A. (2007). Reasoning about emotional
contents following shocking terrorist attacks: a tale of three cities. Journal of Experimental
Psychology. Applied, 13, 47–56.
Bucciarelli, M., & Johnson-Laird, P.N. (1999). Strategies in syllogistic reasoning. Cognitive
Science, 23, 247–303.
Castelfranchi, C., Paglieri, F. (2007). The role of beliefs in goal dynamics: Prolegomena to a
constructive theory of intentions. Synthese, 155, 237-263.
Cohen, L. J. (1981). Can human irrationality be experimentally demonstrated? Behavioral and
Brain Sciences, 4, 317 – 370.
de Jong, P. J., Haenen, M., Schmidt, A., & Mayer, B. (1998). Hypochondriasis: The role of fear-
confirming reasoning. Behaviour Research and Therapy, 36, 65–74.
de Jong, P. J., Mayer, B., & van den Hout, M. (1997). Conditional reasoning and phobic fear:
Evidence for a fear-confirming reasoning pattern. Behaviour Research and Therapy, 35, 507–
516.
Dellarosa, D. (1988). A history of thinking. In R.J. Sternberg & E.E. Smith (eds.), The Psychology
of human thought (pp. 1-18). Cambridge: Cambridge University Press.
Eddy, D. M. (1982). Probabilistic reasoning in clinical medicine: Problems and opportunities. In D.
Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases
(pp. 249–267). Cambridge University Press.
Evans, J. St. B. T. (1989). Bias in human reasoning: Causes and consequences. Hove, UK:
Lawrence Erlbaum Associates Ltd.
Evans, J. St. B. T., & Over, D. E. (1996). Rationality in the selection task: Epistemic utility versus
uncertainty reduction. Psychological Review, 103, 356 – 363.
Friedrich J. (1993). Primary Error Detection and Minimization (PEDMIN) strategies in social
cognition: A reinterpretation of confirmation bias phenomena. Psychological Review, 100, 298-
319.
Gangemi, A., Mancini, F. & Dar, R. (2015). An experimental re-examination of the inferential
confusion hypothesis of obsessive-compulsive doubt. Journal of Behaviour Therapy and
Experimental Psychiatry, 48, 90-97.
Gangemi, Mancini & Johnson-Laird, 2013
Gangemi, A., Tenore, K. & Mancini, F. (2019). Two reasoning strategies in patients with
psychological illnesses. Frontiers in Psychology, 10.
Hilbert, D. (1967) The foundations of mathematics. In van Heijenoort, J. (ed.), From Frege to
Gödel: a Source Book in Mathematical Logic, 1879–1931. Cambridge, MA: Harvard University
Press, pp. 464–479. (Originally published in German in 1928.)
Inhelder, B., & Piaget, J. (1958). The growth of logical thinking from childhood to adolescence (A. Parsons & S. Milgram, Trans.). New York: Basic Books. (Original work published 1955.)
Johnson-Laird, P. N., & Byrne, R. M. J. (1991). Deduction. Hove, UK: Lawrence Erlbaum
Associates Ltd.
Johnson-Laird, P.N. (2006). How we reason. Oxford: Oxford University Press.
Johnson-Laird, P.N. & Wason, P. C. (1970). A theoretical analysis of insight into a reasoning task.
Cognitive Psychology, 1, 134-148.
Kirby, K.N. (1994). Probabilities and utilities of fictional outcomes in Wason’s four card selection
task. Cognition, 51, 1 – 28.
Mancini, F., & Gangemi, A. (2006). Role of fear of guilt at behaving irresponsibly in hypothesis-testing. Journal of Behavior Therapy and Experimental Psychiatry, 37, 333-346.
Mancini, F. & Gangemi, A. (2014). Deontological guilt and obsessive compulsive disorder. Journal
of Behavior Therapy and Experimental Psychiatry, 49, 157-163.
Mancini, F. & Gangemi, A. (2018). Deontological guilt and altruistic guilt: A dualistic approach.
Giornale Italiano di Psicologia, 45, 3, 483-510
Mancini, F., & Gangemi, A. (2004). Fear of guilt of behaving irresponsibly in obsessive-
compulsive disorder. Journal of Behavior Therapy and Experimental Psychiatry, 35, 109–120.
Mancini, F., Gangemi, A. & Johnson-Laird, P.N. (2007). Il ruolo del ragionamento nella
psicopatologia secondo la Hyper Emotion Theory. Giornale Italiano di Psicologia, 4, 763-794.
Manktelow, K. I., & Over, D. E. (1991). Social roles and utilities in reasoning with deontic
conditionals. Cognition, 39, 85 – 105
Manktelow, K. I., & Over, D. E. (1993). Rationality: Psychological and philosophical perspectives.
London: Routledge.
Mynatt, C. R., Doherty, M. E., & Tweney, R. D. (1977). Confirmation bias in a simulated research environment: An experimental study of scientific inference. Quarterly Journal of Experimental Psychology, 29, 85–95.
Niler, E. R., & Beck, S. J. (1989). The relationship among guilt, dysphoria, anxiety and obsessions
in a normal population. Behaviour Research and Therapy, 27, 213–220.
Popper K.R. (1959). The logic of scientific discovery. New York: Basic Books.
Rachman, S. (1993). Obsessions, responsibility and guilt. Behaviour Research and Therapy, 31,
149–154.
Salkovskis, P. M. (1985). Obsessional-compulsive problems: A cognitive-behavioural analysis.
Behaviour Research and Therapy, 23, 571–583.
Salkovskis, P. M., & Forrester, E. (2002). Responsibility. In R. O. Frost, & G. Steketee (Eds.),
Cognitive Approaches to Obsessions and Compulsions. Oxford: Pergamon.
Scherer, K. R. (1999). Appraisal theory. In T. Dalgleish & M. J. Power (Eds.), Handbook of
cognition and emotion (p. 637–663). John Wiley & Sons Ltd.
Smeets G., de Jong P. J., & Mayer B. (2000). If you suffer from a headache, then you have a brain
tumor: Domain-specific reasoning ‘‘bias’’ and hypochondriasis. Behaviour Research and
Therapy, 38, 763 – 776.
Stich, S. (1985). Could man be an irrational animal? Synthese, 64, 115 – 135.
Stich, S. (1990). The fragmentation of reason. Cambridge, MA: MIT Press.
Trope, Y., & Liberman, A. (1996). Social hypothesis testing: Cognitive and motivational
mechanisms. In E. T. Higgins & A. W. Kruglanski (Eds.), Social psychology: Handbook of basic
principles (p. 239–270). Guilford Press.
Van Oppen, P., & Arntz, A. (1994). Cognitive therapy for obsessive-compulsive disorder. Behavior
Research and Therapy, 32, 79–87.
Wason, P.C. (1966) Reasoning. In Foss, B.M. (ed.), New Horizons in Psychology. Harmondsworth,
Middx: Penguin, pp. 135–151.
Amelia Gangemi, PhD, Full Professor of General Psychology and Cognitive
Psychology
at the Dept. of Cognitive Science, Psychology, Education and Cultural Studies of the
University of Messina, Italy.