Chapter 14 - Eysenck - Cognitive Psychology A Students Handbook
by Routledge
© 2020 Michael W. Eysenck and Mark T. Keane
First edition published by Lawrence Erlbaum Associates 1984
Seventh edition published by Routledge 2015
INTRODUCTION
For hundreds of years, philosophers have distinguished between two kinds
of reasoning. One is inductive reasoning, which involves drawing general
conclusions from premises (statements) referring to particular instances.
A key feature of inductive reasoning is that the conclusions of inductively
valid arguments are probably (but not necessarily) true.
The philosopher Bertrand Russell provided the following example. A
turkey might use inductive reasoning to draw the conclusion “Each day I
am fed”, because that has always been the case in the past. However, there
is no certainty that the turkey will be fed tomorrow. Indeed, if tomorrow is
Christmas Eve, the conclusion is likely to be proven false.
Scientists very often use inductive reasoning similarly to Russell’s
hypothetical turkey. A psychologist may find across numerous experiments
that reinforcement (reward) is needed for learning. This might lead them
to use inductive reasoning to propose the hypothesis that reinforcement is
essential for learning. This conclusion is not necessarily true because future
experiments may not replicate past ones.

KEY TERMS
Inductive reasoning: Forming generalisations (that may be probable but are not certain) from examples or sample phenomena; see deductive reasoning.
Deductive reasoning: Reasoning to a conclusion from a set of premises or statements where that conclusion follows necessarily from the assumption the premises are true; see inductive reasoning.

The other kind of reasoning identified by philosophers is deductive
reasoning. Deductive reasoning allows us to draw conclusions that are
definitely or certainly valid provided other statements are assumed to be
true. For example, the conclusion Tom is taller than Harry is necessarily
true if we assume Tom is taller than Dick and Dick is taller than Harry.
Deductive-reasoning problems owe their origins to formal logic.

An important issue is whether the distinction between inductive and
deductive reasoning is as clear-cut in practice as it appears above. There is
increasing evidence that similar processes are involved in both cases. For
example, Stephens et al. (2018) asked participants to evaluate identical sets
of arguments after receiving inductive- or deductive-reasoning instructions.
With the former instructions, participants decided whether the conclusion
was plausible, strong or likely to be true. With the latter instructions, they
decided whether the conclusion was necessarily true.
Stephens et al. (2018) found participants used the same processes
whether instructed to reason inductively or deductively. The only major
difference was that greater argument strength was required to decide that a
conclusion was necessarily true (deductive condition) than to decide it was
strong or likely to be true.

KEY TERMS
Informal reasoning: A form of reasoning based on one’s relevant knowledge and experience rather than logic.
Falsification: Proposing hypotheses and then trying to falsify them by experimental tests; the logically correct means by which science should work, according to Popper (1968).

The wide chasm between the artificial, logic-driven, deductive-reasoning
tasks traditionally used in the laboratory and everyday reasoning in the
form of argumentation has led to a rapid increase in research on informal
reasoning. Informal reasoning (discussed later, pp. 694–701) is based on
our knowledge and experience rather than logic. It is a form of inductive
reasoning that resembles our everyday reasoning.

A major consequence of this shift in research is that it is increasingly
accepted that reasoning processes often resemble those used in judgement
and decision-making. For example, the Bayesian approach, according to
which our subjective probabilities (i.e., that X is dishonest) are adjusted in
the light of new information, plays a prominent role in theorising about
judgements (see Chapter 13). In a similar fashion, the Bayesian approach is
increasingly applied to reasoning (Navarrete and Mandel, 2016).
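As a hypothetical numerical illustration of such Bayesian updating (the probabilities below are invented for illustration; they come from no study):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior probability of hypothesis h after observing evidence e."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Prior subjective probability that X is dishonest
prior = 0.10
# New information: X is caught lying, which is likely if X is dishonest
# (p = 0.8) and unlikely otherwise (p = 0.1)
posterior = bayes_update(prior, p_e_given_h=0.8, p_e_given_not_h=0.1)
print(round(posterior, 2))  # 0.47
```

On these made-up numbers, one piece of evidence raises the subjective probability from 0.10 to about 0.47 — the kind of adjustment in the light of new information that the Bayesian approach describes.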
HYPOTHESIS TESTING
Karl Popper (1968) distinguished between confirmation and falsifica-
tion. Confirmation involves the attempt to obtain evidence confirming
or supporting one’s hypothesis. In contrast, falsification involves the
attempt to falsify hypotheses by experimental tests. Popper claimed we
cannot achieve confirmation via hypothesis testing. Even if all the avail-
able evidence supports a hypothesis, future evidence may disprove it.
He argued falsifiability (the potential for falsification) separates scientific
from unscientific activities such as religion or pseudo-science (e.g.,
psychoanalysis).
According to Popper, scientists should focus on falsification. In fact,
as discussed later, they often seek confirmatory rather than disconfirmatory
evidence when testing their hypotheses. It has also been claimed the same
excessive focus on confirmatory evidence is found in laboratory studies on
hypothesis testing – research to which we now turn.
In Wason’s (1960) 2-4-6 task, participants are told that the triple 2-4-6 conforms to a rule the experimenter has in mind (in fact, any three numbers in ascending order). They generate their own triples, receiving the answer “Yes” or “No” depending on whether each triple conforms to the rule, and announce the rule when confident they have identified it. Participants’ test triples are of two types:
(1) those (e.g., 12-8-4) where participants expect to receive the answer
“No” and which are therefore confirmatory;
(2) those (e.g., 1-4-9) where participants expect to receive the answer
“Yes” and which are therefore disconfirmatory.
Findings
Participants typically engage in very few falsification attempts on the 2-4-6
task and have a low success rate. Tweney et al. (1980) enhanced perfor-
mance by telling participants the experimenter had two rules in mind and
they had to identify both. One rule generated DAX triples and the other
MED triples. They were also told 2-4-6 was a DAX triple. After generating
each test triple, participants were informed whether the set fitted the DAX
or MED rule. The DAX rule was any three numbers in ascending order and
the MED rule covered all other sets of numbers.
Over 50% of participants produced the correct answer on their first
attempt (much higher than with the standard problem). Of importance,
participants could identify the DAX rule by using positive testing to
confirm the MED rule, and so they did not have to try to disconfirm the
DAX rule.
Gale and Ball (2012) carried out a study resembling that of Tweney
et al. (1980). They always used 2-4-6 as an example of a DAX triple,
but the example of a MED triple was 6-4-2 or 4-4-4. Success in identify-
ing the DAX rule was much greater when the MED example was 6-4-2
(75%) rather than 4-4-4 (23%). The greatest difference between solvers and
non-solvers of the DAX rule was the number of descending triples they
produced. This indicates the importance of participants focusing on the
ascending/descending dimension, which was difficult to do when the MED
example was 4-4-4.
Cowley and Byrne (2005) argued people show confirmation bias because
they are loath to abandon their initial hypothesis. They suggested people
might be much better at managing to falsify a given incorrect hypothesis
if told it was someone else’s. As predicted, 62% of participants abandoned
the other person’s hypothesis compared to only 25% who abandoned their own.
Theoretical analysis
Most hypotheses are sparse or narrow (applying to under half the possi-
ble entities in any given domain: Navarro & Perfors, 2011). For example,
Perfors and Navarro (2009) asked people to generate all the rules and
hypotheses applying to numbers in a given domain (numbers 1 to 1,000).
The key finding was that 83% of the rules (e.g., two-digit numbers; prime
numbers) applied to fewer than 20% of the numbers.
With sparse hypotheses, positive testing is optimal “because there
are so many ways to be wrong and so few to be right”. In such circum-
stances, the learner will discover “the world has a bias towards saying
‘no’, and asking for ‘yes’ is the best way to overcome it” (Perfors &
Navarro, 2009, p. 2746). Thus, positive testing is typically successful. In
contrast, the 2-4-6 task penalises positive testing because the target rule is
so general.
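The sparsity claim is easy to check numerically. The short sketch below (not from the text) computes the coverage of the two example rules over Perfors and Navarro’s 1-to-1,000 domain:

```python
# Fraction of the numbers 1-1000 covered by two of the example rules; both
# are "sparse" in Navarro and Perfors' sense (under 20% of the domain).
def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

domain = range(1, 1001)
rules = {
    "two-digit numbers": lambda n: 10 <= n <= 99,
    "prime numbers": is_prime,
}
for name, rule in rules.items():
    coverage = sum(map(rule, domain)) / len(domain)
    print(f"{name}: {coverage:.1%}")
# two-digit numbers: 9.0%
# prime numbers: 16.8%
```

Against such sparse rules, a triple your hypothesis predicts to be a “yes” is far more diagnostic than yet another “no”, which is why positive testing is typically successful outside the 2-4-6 task.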
Evaluation
Wason’s 2-4-6 task has been “the classic test-bed reasoning task for investi-
gations of hypothesis falsification for over forty years” (Cowley, 2015, p. 2),
with research having clarified the strengths and limitations of human induc-
tive reasoning. The processes involved in the 2-4-6 task are of relevance to
understanding scientists’ hypothesis testing.
What are the limitations of Wason’s approach? First, his task differs
from real-life hypothesis testing. Participants given the 2-4-6 task receive
immediate accurate feedback but are not told why the numbers they pro-
duced attracted a “yes” or “no” response. In the real world (e.g., scientists
testing hypotheses), the feedback is much more informative, but is often
delayed in time and sometimes inaccurate.
Second, the correct rule or hypothesis in the 2-4-6 task (three numbers
in ascending order of magnitude) is very general because it applies to a
fairly high proportion of sets of three numbers. In contrast, most rules
or hypotheses apply to only a smallish proportion of possible objects or
events. Positive testing works poorly on the 2-4-6 task but well with most
other forms of hypothesis testing.
Third, Wason argued most people show confirmation bias and find a
falsification approach very hard to use. However, there is much less con-
firmation bias but more evidence of falsification when testing someone
else’s hypothesis (Cowley, 2015; Cowley & Byrne, 2005). This is consistent
with scientists’ behaviour. For example, at a conference in 1977 on the
levels-of-processing approach to memory (see Chapter 6), nearly all the
research presented identified limitations with that approach.
DEDUCTIVE REASONING
In deductive reasoning, conclusions can be drawn with certainty. In this
section, we will mostly consider conditional and syllogistic reasoning prob-
lems based on traditional logic. In the next section, we consider general
theories of deductive reasoning. As we will see, theory and research increas-
ingly focus on the non-logical strategies and processes used when people
solve deductive-reasoning problems.
Conditional reasoning involves “if . . . then” statements. Consider the following problem:
Premises
If Nancy is angry, then I am upset.
I am upset.
Conclusion
Therefore, Nancy is angry.
Many people accept the above conclusion as valid. However, it is not valid
because I may be upset for some other reason (e.g., my football team has
lost).
Here is another problem in conditional reasoning:
Premises
If it is raining, then Nancy gets wet.
It is raining.
Conclusion
Nancy gets wet.
This valid inference is known as modus ponens. Now consider a problem requiring modus tollens:
Premises
If it is raining, then Nancy gets wet.
Nancy does not get wet.
Conclusion
It is not raining.
People consistently perform much better with modus ponens than modus
tollens: many people argue incorrectly that the conclusion to the above
problem is invalid.
Another inference involves denial of the antecedent:
Premises
If it is raining, then Nancy gets wet.
It is not raining.
Conclusion
Therefore, Nancy does not get wet.
This conclusion is invalid: Nancy may get wet for some reason other than rain.
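These inference patterns (modus ponens, modus tollens, affirmation of the consequent, denial of the antecedent) can be checked mechanically by enumerating truth assignments. A minimal sketch, not part of the text:

```python
from itertools import product

def implies(p, q):
    # Material implication: "if p then q" is false only when p is true and q is false.
    return (not p) or q

def valid(premises, conclusion):
    # An argument is valid if the conclusion holds in every assignment of
    # truth values under which all the premises hold.
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# p = "it is raining", q = "Nancy gets wet"
rule = lambda p, q: implies(p, q)
print(valid([rule, lambda p, q: p], lambda p, q: q))          # modus ponens: True
print(valid([rule, lambda p, q: not q], lambda p, q: not p))  # modus tollens: True
print(valid([rule, lambda p, q: q], lambda p, q: p))          # affirming the consequent: False
print(valid([rule, lambda p, q: not p], lambda p, q: not q))  # denying the antecedent: False
```

The two invalid patterns fail because an assignment exists (it is not raining, yet Nancy is wet) where the premises hold but the conclusion does not — exactly the counterexample logic the chapter returns to later.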
Theories
Here we briefly discuss theories of conditional reasoning. More general
theories of deductive reasoning are discussed later. Klauer et al. (2010)
proposed a dual-source model of conditional reasoning. There is a
knowledge-based process influenced by premise content where the subjec-
tive probability of the conclusion depends on individuals’ relevant knowledge.
There is also a form-based process influenced only by the form of the
premises.
Verschueren et al. (2005) also proposed a dual-process model (other
more general dual-process models are discussed later, pp. 683–690). They
focused on individual differences in conditional reasoning more than
Klauer et al. (2010). Some reasoners use a relatively complex counterexample
strategy: a conclusion is considered invalid if the reasoner can
find a counterexample to it (this process is discussed later in the section
on mental models, pp. 681–683). The other process is an intuitive
statistical strategy based on probabilistic reasoning triggered by relevant
knowledge.
Findings
Singmann et al. (2016) tested the dual-source model using many condition-
al-reasoning problems. For each problem, participants rated the likelihood
of the conclusion being true on a probability scale running from 0% to
100%. Knowledge-based processes were more important than form-based
ones. Overall, the model accounted very well for participants’ performance.
De Neys et al. (2005) obtained evidence relevant to Verschueren
et al.’s (2005) dual-process model. On some trials, participants were pre-
sented with few or many counterexamples conflicting with valid conclu-
sions (modus ponens and modus tollens). According to classical logic, these
counterexamples should have been ignored. In fact, however, participants
were more likely to decide wrongly the conclusions were invalid when there
were many counterexamples.
Markovits et al. (2013) tested the dual-process model using problems
involving affirmation of the consequent (where the conclusion is invalid).
Here are two examples:
(1) If a rock is thrown at a window, the window will break.
A window is broken.
Conclusion: The window was broken by a rock.

(2) If a finger is cut, it will bleed.
A finger is bleeding.
Conclusion: The finger was cut.
Reasoners using the statistical strategy were influenced by the fact that
the subjective probability that “If a finger is bleeding, it was cut” is greater
than the probability that “If a window is broken, it was broken by a rock”.
As a result, such reasoners accepted the invalid conclusion more often in
problem (2) than (1).
In contrast, reasoners using the counterexample strategy accepted the
conclusion if no counterexample came to mind. They also accepted the
invalid conclusion more often in problem (2). This was because it was
easier to find counterexamples with respect to the conclusion of problem
(1) (the window might have been broken by several objects other than a
rock) than the conclusion of problem (2).
According to the model, the counterexample strategy is more cogni-
tively demanding than the statistical strategy. Accordingly, Markovits
et al. (2013) predicted it would be used less often when participants had
limited time. This prediction was supported: that strategy was used on 49%
of trials with unlimited time but only 1.7% of trials with limited time.
Markovits et al. (2017) gave their participants modus ponens inferences
which are always valid. Each problem was presented with additional infor-
mation indicating the relative strength of evidence supporting the inference
(50%; 75%; 99%; or 100%). Markovits et al. compared groups of partici-
pants previously identified as using counterexample or statistical strategies.
Markovits et al. (2017) found clear differences between the two groups
(see Figure 14.1). Statistical reasoners were strongly influenced by relative
strength when deciding whether to accept modus ponens inferences. In con-
trast, counterexample reasoners showed a sharp reduction in acceptance
when some evidence failed to support the inference (e.g., the 99% and 75%
conditions). These findings were as predicted.
Figure 14.1
Mean number of MP (modus ponens) inferences accepted as a function of relative strength of evidence (100%; 99%; 75%; 50%), for statistical and counterexample reasoners. Based on Markovits et al. (2017).
Summary
Research on conditional reasoning has become much more realistic. For
example, reasoners are encouraged to use their relevant knowledge on rea-
soning tasks. They also assign probabilities to the correctness of conclu-
sions rather than simply deciding conclusions are valid or invalid.
Theoretically, it is assumed reasoners are influenced by the form of the
premises. More importantly, however, their relevant knowledge and expe-
rience lead them to engage in probabilistic reasoning (dual-source model)
or to try to find counterexamples to the stated conclusion (dual-process
model).
Wason selection task

In the famous Wason selection task, participants see four cards together with a rule such as “If there is an R on one side of the card, then there is a 2 on the other side”. Each card has a letter on one side and a number on the other, and the visible faces show R, another letter, 2 and 7. The task is to select only those cards that must be turned over to decide whether the rule is valid. Most participants select the R card alone, or the R and 2 cards. However, the 2 card is uninformative: if it has an R on the other side this is consistent with the rule, but if it has some
other letter on the other side, we have discovered nothing about the rule’s
validity. The correct answer is to select the R and 7 cards, an answer given
by only about 10% of university students. The 7 is necessary because it
would definitely disprove the rule if it had an R on the other side.

KEY TERMS
Matching bias: The tendency on the Wason selection task to select cards matching the items explicitly mentioned in the rule.
Deontic rules: Rules relating to obligation and permissibility.

How can we explain performance on the Wason selection task?
Several factors are involved. First, performance is worse with abstract
versions of the task (as above) compared to concrete versions referring
to everyday events (e.g., “Every time I travel to Manchester, I travel by
train”). Ragni et al. (2018) found in a meta-analysis that the percentage
of correct answers increased from 7% with abstract versions to 21% with
concrete versions.
Second, there is matching bias (the tendency to select cards matching
the items named in the rule). Thompson et al. (2013a) obtained strong evi-
dence for matching bias on the selection task. In addition, cards named in
the rule were selected faster than other cards and produced greater feelings
of rightness.
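The logic of the correct answer can be made concrete with a short sketch. (The letter “G” on the second card is an assumed filler; the text specifies only the R, 2 and 7 cards.)

```python
# Which cards on "If there is an R on one side, there is a 2 on the other"
# could possibly falsify the rule?
def can_falsify(visible_face):
    if visible_face.isalpha():
        # A letter card falsifies the rule only if it shows R and hides a non-2.
        return visible_face == "R"
    # A number card falsifies the rule only if it is not a 2 and hides an R.
    return visible_face != "2"

for face in ["R", "G", "2", "7"]:
    print(face, can_falsify(face))
# R True, G False, 2 False, 7 True -> the logically correct selection is R and 7
```

Notice that the 2 card, which matching bias encourages people to select, is precisely one of the cards that can never falsify the rule.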
Third, the logical solution to the Wason selection task conflicts with
everyday life (Oaksford, 1997). According to formal logic, we should test
the rule “All swans are white” by searching for swans and non-white birds.
However, this would be extremely time-consuming because only a few
birds are swans and the overwhelming majority of birds are non-white. It
would be preferable to adopt a probabilistic approach based on the likely
probabilities of different kinds of events or objects.
The problem of testing the above rule resembles the Wason selection
task, which has the form “If p, then q”. We should choose q cards (e.g., 2)
when the expected probability of q is low but not-q cards when q’s expected
probability is high to maximise information gain. As predicted, far more
q cards were selected when the percentage of q cards was low (17%) than
when it was high (83%) (Oaksford et al., 1997).
Fourth, motivation is important. People are more likely to select the
potentially falsifying card (7 in the original version of the task) if moti-
vated to disprove the rule. Dawson et al. (2002) gave some participants the
rule that individuals high in emotional lability experience an early death.
The four cards showed high emotional lability, low emotional lability,
early death and late death, with the correct answer involving selecting the
first and last cards. Of participants led to believe they had high emotional
lability (and so motivated to disprove the rule), 38% solved the problem
(versus 9% of control participants).
Motivation is also involved with deontic rules (rules concerned with
obligation or permission). Sperber and Girotto (2002) used a deontic rule
relating to cheating. Paolo must decide whether he is being cheated when
buying things through the internet: the answer is to select the “item paid
for” and “item not received” cards (selected by 68% of participants). This
unusually high level of performance was achieved because the motiva-
tion to detect cheating led participants to select the “item not received”
card. In a meta-analysis, Ragni et al. (2018) found the correct answer was
selected by 61% of participants with deontic versions but 7% using abstract
versions.
Marrero et al. (2016) proposed a general motivational approach
accounting for the above findings. Individuals concerned about potential
costs focus on disconfirming evidence whereas those concerned about
potential benefits focus on confirming evidence.

Fifth, Ragni et al. (2018) evaluated 15 theories of relevance to Wason’s
selection task. In a large-scale meta-analysis, they found Johnson-Laird’s
(1983) mental model theory (discussed shortly, pp. 681–683) best predicted
performance. In essence, this theory assumes that selections on Wason’s
selection task depend on two processes:

(1) There is an intuitive process producing selections matching the reasoners’ hypothesis (e.g., selection of R in the version of the task shown in Figure 14.2).
(2) There is a more deliberate process producing selections of potential counterexamples to the hypothesis (e.g., selection of 7 in the same version).

KEY TERMS
Syllogism: A type of problem used in deductive reasoning; there are two statements or premises and a conclusion that may or may not follow logically from the premises.
Belief bias: In syllogistic reasoning, the tendency to accept invalid but believable conclusions and reject valid but unbelievable ones.
Syllogistic reasoning
Syllogistic reasoning has been studied for over 2,000 years. A syllogism
consists of two premises or statements followed by a conclusion. Here is
an example: “All A are B; all B are C. Therefore, all A are C”. A syllogism
contains three items (A, B and C), with one (B) occurring in both premises.
The premises and conclusion all contain one of the following quantifiers:
all; some; no; and some . . . not.
When presented with a syllogism, you must decide whether the conclusion
is valid assuming the premises are true. The validity (or otherwise)
of the conclusion depends only on whether it follows logically from the
premises – the conclusion’s truth or falsity in the real world is irrelevant.
Consider the following example:
Premises
All children are obedient.
All girl guides are children.
Conclusion
Therefore, all girl guides are obedient.
The conclusion follows logically from the premises. Thus, it is valid regard-
less of your views about children’s obedience.
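Validity in this set-theoretic sense can be checked by exhaustive search. A minimal sketch (not from the text), interpreting “All X are Y” as “X is a subset of Y” over a tiny universe; two elements happen to suffice for these examples, though in general a larger universe may be needed to find a counterexample:

```python
from itertools import combinations, product

# Enumerate every assignment of the three categories A, B, C to subsets of a
# two-element universe and search for a counterexample (premises true,
# conclusion false).
universe = (0, 1)
subsets = [frozenset(c) for r in range(len(universe) + 1)
           for c in combinations(universe, r)]

def all_are(xs, ys):
    return xs <= ys  # "All X are Y": X is a subset of Y

def valid(premises, conclusion):
    return all(conclusion(a, b, c)
               for a, b, c in product(subsets, repeat=3)
               if all(p(a, b, c) for p in premises))

# "All A are B; all B are C; therefore all A are C" -- valid
print(valid([lambda a, b, c: all_are(a, b), lambda a, b, c: all_are(b, c)],
            lambda a, b, c: all_are(a, c)))   # True

# "All A are B; all C are B; therefore all A are C" -- invalid
print(valid([lambda a, b, c: all_are(a, b), lambda a, b, c: all_are(c, b)],
            lambda a, b, c: all_are(a, c)))   # False
```

The second syllogism fails because an assignment exists (e.g., A = {0}, B = {0, 1}, C = {1}) where both premises hold but the conclusion does not; the believability of the conclusion plays no role in this check, which is exactly what belief bias violates.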
Findings
Various biases cause errors in syllogistic reasoning. Of special impor-
tance is belief bias, the tendency to accept invalid conclusions as valid
if believable and to reject valid (but unbelievable) conclusions as invalid
(theoretical explanations are discussed later). Klauer et al. (2000) investi-
gated belief bias thoroughly. The conclusions of half their syllogisms were
believable (e.g., “Some fish are not trout”) whereas the others were unbe-
lievable (e.g., “Some trout are not fish”). Half the syllogisms were valid
Invalid syllogistic reasoning also occurs frequently in everyday life. Here is a real-world example:
Premises
We must do something to save the country.
Our policy is something.
Conclusion
Our policy will save the country.
Invalid conditional reasoning occurs frequently in everyday life. Earlier we discussed the logical
fallacy known as denial of the antecedent. Here is a real-world example:
Premises
If you have got nothing to hide, you have nothing to fear.
You have nothing to hide.
Conclusion
You have nothing to fear.
The above argument is invalid because it implies that people are only interested in privacy
because they have something to hide. In fact, of course, we all have a basic human right to
privacy. Ironically, the authorities who argue strongly in favour of greater surveillance of the public
are often notoriously reluctant to provide information about their own activities!
Finally, we consider an everyday example of the logical fallacy known as affirmation of the con-
sequent (discussed earlier, p. 673).
Premises
If the Earth’s climate altered throughout pre-human history, this was due to natural climate
change.
The Earth’s climate is currently altering.
Conclusion
Natural climate change is occurring currently.
This is clearly an invalid argument. The fact that past climate change was not due to humans
does not necessarily mean that current climate change is not due to human intervention.
In sum, many groups in society (e.g. politicians; climate change deniers) are strongly motivated
to persuade us of the rightness of their beliefs. This often leads them to engage in invalid forms of
reasoning. The take-home message is that we all need to be sceptical and vigilant when exposed
to their arguments.
attention to the increasingly popular dual-process approach. The word
deductive in the title of this section has been put in quotation marks to
indicate that individuals presented with deductive-reasoning problems often
fail to use deductive processes when trying to solve them.

KEY TERMS
Mental models: An internal representation of some possible situation or event in the world having the same structure as that situation or event.
Principle of truth: The notion that assertions are represented by forming mental models concerning what is true while ignoring what is false.

Mental models
Johnson-Laird (e.g., 1983; Johnson-Laird et al., 2018) argues that reasoning
involves constructing mental models. What is a mental model? According
to Johnson-Laird et al. (2015, p. 202), a mental model is “an iconic representation
of a possibility that depicts only those clauses in a compound
assertion that are true. The mental models of a disjunction, ‘A or B but
not both’ accordingly represent two possibilities: possibly (A) and possibly
(B)”. It is iconic because its structure corresponds to what it represents.
Here is a concrete example of a mental model:
Premises
The lamp is on the right of the pad.
The book is on the left of the pad.
The clock is in front of the book.
The vase is in front of the lamp.
These premises support a mental model with the following spatial layout (left–right positions on the top row; items in front on the bottom row):

book    pad    lamp
clock          vase
Conclusion
The clock is to the left of the vase.
The conclusion that the clock is to the left of the vase clearly follows from
the mental model. The fact we cannot construct a mental model consistent
with the premises but inconsistent with the conclusion (i.e., we cannot
construct a counterexample) indicates the conclusion is valid.
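A sketch of the iconic idea (not from the text): encode each premise as coordinates, with x increasing to the right and y increasing toward the front, and then read the conclusion directly off this simple “model” of the situation.

```python
# Build the spatial model premise by premise.
pos = {"pad": (0, 0)}
pos["lamp"] = (pos["pad"][0] + 1, 0)   # the lamp is on the right of the pad
pos["book"] = (pos["pad"][0] - 1, 0)   # the book is on the left of the pad
pos["clock"] = (pos["book"][0], 1)     # the clock is in front of the book
pos["vase"] = (pos["lamp"][0], 1)      # the vase is in front of the lamp

# Conclusion: the clock is to the left of the vase
print(pos["clock"][0] < pos["vase"][0])  # True
```

The representation is iconic in Johnson-Laird’s sense: left–right relations in the world are captured by left–right (x-axis) relations in the representation, so the conclusion can simply be read off rather than derived by formal rules.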
Here are the theory’s main assumptions:
KEY TERM
Meta-reasoning: Monitoring processes that influence the time, effort and strategies used during reasoning and problem solving.

(1) The more intelligent reasoners had greater difficulty resolving conflict
when providing belief-based responses rather than logic-based
responses.
(2) The less intelligent reasoners exhibited the opposite pattern.

These findings suggest more intelligent individuals generate logic-based
responses faster than belief-based ones, whereas less intelligent individuals
generate belief-based responses faster.
What conclusions should we draw? First, rather than arguing belief-
based responses involve fast Type 1 processing whereas logic-based
responses involve slow Type 2 processing, we need to consider individ-
ual differences. Second, “If responses (belief- or logic-based) can be gen-
erated either quickly and effortlessly or slowly and deliberately, perhaps
these responses merely differ on a single dimension, namely, complexity”
(Newman et al., 2017, p. 1165).
Figure 14.6
The approximate time courses of reasoning and meta-reasoning processes (meta-cognitive monitoring and meta-cognitive control) during reasoning and problem solving. From Ackerman & Thompson (2017).
ratings were higher when the first response was produced rapidly rather
than slowly, indicating the importance of response fluency.
Most research indicates response fluency is a fallible measure of response
accuracy. For example, people give higher feeling-of-rightness ratings to
reasoning problems having familiar rather than unfamiliar content even
when the former problems are harder (Ackerman & Thompson, 2017).
Evaluation
What are the strengths of the contemporary dual-process approach? First,
dual-process theories have become increasingly popular and wide-ranging.
For example, they provide explanations for syllogistic reasoning and con-
ditional reasoning (Verschueren et al., 2005, discussed earlier, pp. 674–675).
In addition, such theories account for findings in problem solving (see
Chapter 12), judgement and decision-making (see Chapter 13). Second,
“Dual-process theory . . . provides a valuable high-level framework within
which more specific and testable models can be developed” (Evans, 2018,
p. 163).
Third, there have been increasingly sophisticated attempts to clarify
the relationship between Type 1 and Type 2 processes (e.g., whether they
are used serially or in parallel). Fourth, we have an enhanced understand-
ing of meta-reasoning processes (especially those involved in monitoring).
Fifth, recent theory and research are starting to take account of the flexi-
bility of processing on reasoning problems due to the precise form of the
problem and individual differences (Thompson et al., 2018).
What are the limitations with the dual-process approach? First, the pro-
cesses used by reasoners vary depending on their abilities and preferences,
their motivation and their task requirements. Melnikoff and Bargh (2018;
see Chapter 13) identified two ways many dual-process theories are over-
simplified: (1) they often imply that Type 1 processes are “bad” and error-
prone, whereas Type 2 processes are “good”; and (2) they assume many
cognitive processes can be assigned to just two types.
Second, “The absence of a clear and general definition of a Type 1
or Type 2 response does create difficulty for researchers wishing to test
[dual-process] theories” (Evans, 2018, p. 163). For example, it is often
assumed theoretically that fast responses reflect Type 1 processing and are
error-prone whereas slow responses reflect Type 2 processing and are gen-
erally accurate. However, we have discussed various studies disconfirming
those assumptions.
Third, there has been a rapid increase in the findings requiring theoretical
explanation, and theories have not kept pace with this increase.
For example, meta-reasoning often plays an important role in influenc-
ing reasoners’ processing strategies and performance. As yet, however, no
theorists have integrated meta-reasoning processes into a comprehensive
dual-process theory of reasoning.