The Tools of Neuroscience Experiment
Philosophical and Scientific Perspectives
Edited by
John Bickle, Carl F. Craver,
and Ann-Sophie Barwich
First published 2022
by Routledge
605 Third Avenue, New York, NY 10158
and by Routledge
2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN
Routledge is an imprint of the Taylor & Francis Group, an informa business
© 2022 selection and editorial matter, John Bickle, Carl F.
Craver, and Ann-Sophie Barwich; individual chapters, the
contributors
The right of John Bickle, Carl F. Craver, and Ann-Sophie
Barwich to be identified as the authors of the editorial
material, and of the authors for their individual chapters, has
been asserted in accordance with sections 77 and 78 of the
Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this book may be reprinted
or reproduced or utilised in any form or by any electronic,
mechanical, or other means, now known or hereafter invented,
including photocopying and recording, or in any information
storage or retrieval system, without permission in writing from
the publishers.
Trademark notice: Product or corporate names may be
trademarks or registered trademarks, and are used only for
identification and explanation without intent to infringe.
Library of Congress Cataloging-in-Publication Data
A catalog record for this title has been requested
DOI: 10.4324/9781003251392
Typeset in Sabon
by codeMantra
Contents
Foreword xi
Editors’ Introduction 1
John Bickle, Carl Craver and Ann-Sophie Barwich
SECTION 1
Research Tools in Relation to Theories 11
SECTION 2
Research Tools and Epistemology
SECTION 3
Research Tools, Integration, Circuits, and Ontology 219
SECTION 4
Tools and Integrative Pluralism 285
SECTION 5
Tool Use and Development Beyond Neuroscience 319
Foreword
Stuart Firestein
The comic Emo Phillips has a very wry comment about the brain. He
muses that he always thought his brain was the most wonderful organ in
his body. Until one day he asked himself, “Wait a minute, who’s telling me
that?” Who indeed. This quip points out the hopelessness of understand-
ing the brain by introspection. Indeed, having a brain may be the biggest
obstacle to understanding it. Not that the organ is not smart enough to
understand how it works, but that the first-person experience of having a
brain, what it feels like it’s doing, is rarely a very good indication of what
is actually happening up there. In fact, it is usually dead wrong.
Thus, there is the need for instruments to visualize the structures of
the brain, to measure the activity of the brain and to probe its work-
ing at every level from single molecules to the whole buzzing, whirring
3.5 pounds of electrified pate. These instruments allow us to have a kind
of third-person disinterested view of the brain that is not as likely to be
swayed by its pernicious self-promoting propaganda. Or such would be
the ideal.
Of course, there always lurks the danger that the choice of measuring
device will reflect the answer we expect to get. As the geneticists warn,
you always get what you screen for. As neuroscientists, if we use elec-
trodes and amplifiers we will see the brain as a fundamentally electric
organ and believe we can understand it by recording voltages and ion
currents. The chemists and pharmacologists on the other hand will be
measuring neurotransmitters and binding kinetics revealing a picture
of a massive chemical factory. Then the anatomists with their golden
rule – function follows structure – will convince you that it’s all in the
connections and the vast parts list. And so it goes.
A hopeful solution to this is to take a pluralistic approach – not
wholistic – but pluralistic. Recognizing that the brain is a subject (or is it
an object?) that can be examined at many different levels and from many
different perspectives. It is important to recognize that these multiple
perspectives are not likely to add up to a complete picture. Indeed, they
are more likely to maintain a very fragmented view of the brain. I for one
believe this is the correct view of the brain. It will not be summed up in
a neat formula or model. Its very interest lies in its multiplicity.
When I first began in Neuroscience there were no departments of neu-
roscience at any major university. There were loose confederations of
faculty whose research was focused on the brain, but they all had their
primary appointments in various departments – anatomy, molecular
biology, physiology, evolution, psychology, computer science, electrical
engineering – all over the campus. Not to strike a nostalgic note here,
but there was something particularly correct about having brain research
distributed in that way. I’m still a bit uneasy about the proliferation of in-
stitutes dedicated to BRAIN RESEARCH, as if that was some monolithic
endeavor like building pyramids or cathedrals. It does seem like some-
thing a brain would be very pleased about – and that may not be good.
An alternative and more positive view of this concentration of brain
scientists into single departments or institutes is that neuroscience may
now find outreach to other fields even more beneficial. And those other
fields could even be outside the traditional scientific silos. They might in-
clude philosophy, literature, law, the arts, economics, sociology – indeed
many of these sorts of overtures are already in existence. Writers or art-
ists in residence at Neuroscience institutes have become more common;
books on music and the brain have proliferated; there are conferences
on ethics and law in the light of neuroscience findings. Perhaps counter-
intuitively neuroscience has become more pluralistic by becoming more
monolithic. After all, what we do with this brain is just as important to
understand as to how the parts fit together. The importance of instru-
ments and technology in driving the evolution of neuroscience both as
science and cultural phenomenon is the main subject of this collection,
so I will not go any further with this argument and allow the authors
herein to make it more thoroughly.
The pleasure of this volume is the inclusion of so many and varied
ideas about the brain, not only from practicing neuroscientists but also
philosophers and historians of neuroscience, all grounded in a wealth, an
abundant wealth, of technical, intellectual and instrumental approaches
that provide a wonderfully complicated view of this miraculous organ.
But who’s telling you that?
Editors’ Introduction
John Bickle, Carl Craver and
Ann-Sophie Barwich
DOI: 10.4324/9781003251392-1
Ramón y Cajal described “instrument addiction” as one ever-present form of
malaise threatening to derail the career of the young investigator.
Yet, even if we acknowledge Ramón y Cajal’s concern, there can be
no scientific theory of the brain or mind without the existence of instru-
ments that hone and refine our ability to intervene in and measure brains
and behavior. Good theory requires good data, and good data requires
instruments that work. Ramón y Cajal, himself a pioneer and devotee of
his own instruments (in his case, stains and light microscopes), would
surely acknowledge that nobody can understand contemporary neuro-
science or its history without attending to the arsenal of tools neurosci-
entists use to probe nature — to, in words often attributed to Francis
Bacon, “twist the lion’s tail.”
Specific research tools inform each of neuroscience’s many “levels” of
investigation, from the designer pharmacology used to study molecular
activities in intra- and inter-neuron signaling to the electrophysiology
rigs used to probe individual neuronal activities, to the functional imag-
ing modalities used to study neural systems, to the tasks and protocols
used to study behaving organisms and their cognitive processes. Indeed,
the often ambiguous notion of “levels” in neuroscience is closely tied to
the observational affordances of experimental tools and their heuristics.
Sometimes the localization of a particular event or item to a given level
is merely shorthand for describing the research tools and techniques em-
ployed to study it.
Some contend that every landmark discovery of contemporary neu-
roscience can be tied directly to the development and ingenious use of
one or more research tools (see Bickle this volume). Perhaps this claim is
exaggerated, but if so, only slightly. This centrality of research tools in
current neuroscience practice even motivates one of today’s most well-
funded and influential government-private enterprise partnerships, the
BRAIN Initiative. Its name is the acronym for Brain Research through
Advancing Innovative Neurotechnologies®, and it aims explicitly at “rev-
olutionizing our understanding of the human brain… [b]y accelerating
the development and application of innovative technologies,” (https://
braininitiative.nih.gov). The aim is to revolutionize neuroscience by
funding research into the development of new research tools and instru-
ments. The sheer dollar amounts associated with this initiative speak
volumes about neuroscientists’ recognition of the importance of new re-
search tools in their professional activities. Through 2019, BRAIN Ini-
tiative research awards already exceeded 1.3 billion US dollars (https://
braininitiative.nih.gov/about/overview).
Neuroscience is not unique in its reliance on experimental tools.
Many landmark discoveries in other sciences can likewise be tied di-
rectly to their innovations and developments. Telescopes guided Galil-
eo’s heliocentrism, the microscope was crucial to the development of the
germ theory of disease, and Rosalind Franklin’s X-ray crystallography
led directly to the double helix model of DNA, to name just a few of the
most obvious examples. However, there is one crucial difference between
neuroscience and other tool-reliant sciences: the target of inquiry. Neu-
roscience’s ultimate explanatory target is our brains, the organ driving
our behaviors. This target catapults the investigation of research tools in
neuroscience into a brighter philosophical spotlight.
Philosophers are well suited to join neuroscientists in reflecting on the
development and use of research tools. Increasingly, young philosophers
are acquiring backgrounds in neuroscience as part of their undergrad-
uate, graduate, and postgraduate training. Acquiring this background
often involves using the tools involved in neuroscience research. Still,
sustained philosophical attention to the development and uses of exper-
iment tools in neuroscience remains rare. When new functional neuro-
imaging technologies such as positron emission tomography (PET) and
functional magnetic resonance imaging (fMRI) developed toward the
end of the twentieth century, there was a flurry of philosophical interest
in how these techniques worked and were being used in experiments in
the then-new cognitive neurosciences (e.g., Bogen, Roskies, cited in these
chapters). Unfortunately, those earlier discussions remained relatively
self-contained, as philosophers often linked their analysis of these new
techniques primarily to orthodox philosophical questions about the na-
ture of mind. Around that same time, the science-in-practice movement
across all of philosophy of science was successfully directing more philo-
sophical attention to aspects of day-to-day scientific activities and away
from philosophy of science’s traditional focus on the justification of sci-
entific theories and the relationship between theory and world. Many
philosophers of neuroscience came to count their research as part of the
broader science-in-practice movement, or the “practice turn.” Philoso-
phers of neuroscience have also been at the forefront of recent broader
interest in the philosophy of experiment. Nevertheless, the published
literature from this important refocusing on practices and experimenta-
tion still has one glaring lacuna. It has not yet addressed systematically
how the tools of scientific experiment come to be, how they are justified,
adopted, and proliferated, and finally how our theories change through
the uses of these tools, leading to cycles of change in the ability to inter-
vene into and detect brain function. In addition to epistemic and prag-
matic considerations about their application, another emerging question
concerns the explicitly cognitive role of tools in this context. Just how do
tools facilitate new thinking in scientific practice?
This gap in the philosophical literature presents philosophers with a
rare opportunity: to contribute fresh and novel ideas by reflecting on
how these research tools provide access to the brain and nervous system.
Furthermore, such philosophical contribution can play a notably active
role in the ongoing neuroscientific debate, given the dynamic nature of
the field and the relative recency with which many experimental tools
have entered the brain sciences. (After all, fMRI is a child of the 1990s.)
Making such an active contribution will require detailed knowledge of
tool use in neuroscience practice. The project calls for collaborations
between philosophers and neuroscientists. Moreover, the work of the
handful of philosophers who have wandered onto this topic must even-
tually be combined into a coherent research program. This is what we
are trying to facilitate with this volume. As reflected in the chapters of
this volume, the work of a cadre of philosophers and philosophically in-
terested neuroscientists has begun to coalesce. We showcase their work
here in an effort to spawn more effective collaboration, to encourage
more consolidation into a recognized research area, and to instigate dis-
cussions and arguments that, we believe, will contribute meaningfully
and uniquely to our understanding of science as practiced.
The idea for this volume developed over the past half-decade. When
a handful of us first noticed our shared interest in a new tool in 21st-
century neurobiology, two of the editors of this volume (Bickle and
Craver) joined up with two other philosophers of neuroscience (Sarah
Robins and Jacqueline Sullivan) to organize a submitted session on “op-
togenetics” at the 2016 Philosophy of Science Association meeting. Co-
incidentally, neuroscientist Stuart Firestein, who wrote the Foreword to
this volume, and the third editor, Barwich, were in the session’s audi-
ence. Attention to those discussions on the nature of new technologies in
the brain sciences seemed to persist, so Bickle and Craver co-organized
two recent professional workshops on tool development in neuroscience;
Barwich presented her research at both. The first was held in Pensacola
Beach, Florida, in September 2019. We expected 10–12 submissions to
our Call for Abstracts; by the submission deadline, we had received 25.
We held two ten-hour days of presentations and discussions. The second
conference was an Early Career Workshop on Neurotechnology, hosted
by the Center for Philosophy of Science and the Center for the Neural
Basis of Cognition at the University of Pittsburgh in January 2020 (for-
tunately, just before the Covid-19 pandemic shut down in-person con-
ferences). Bickle and Craver co-organized this second workshop with
Colin Allen and two History and Philosophy of Science Ph.D. candidates
at the time, Mahi Hardalupas and Morgan Thompson. Although the
general topic of this second workshop was broader than just research
tool development, that theme was prominent in our call for submissions.
In the end, we accepted eight submitted abstracts for presentation, each
of which was assigned a senior commentator; we invited four keynote speakers
and held a poster session. This workshop likewise comprised two full
days of presentations and discussions. And here our focus explicitly was
to promote work by junior investigators, the philosophers and scientists
who will be working on these topics for the next three to four decades.
These workshops, just like this volume, reflect the youth and diversity of
the philosophers and scientists who’ve taken interest in this topic. The
philosophy of neuroscience is fortunate to have attracted not only highly
talented scholars but an impressively diverse group as well.
Those events led to this volume. Our fundamental goal is to promote
sustained investigations into tool development in neuroscience by phi-
losophers and neuroscientists. We hope this volume encourages more re-
searchers to reflect on and perhaps extend this discussion to other sciences.
In these essays, broad epistemic considerations juxtapose seamlessly
with practical concerns. New insights emerge about the special place of
research tools in science, the norms of experimental practices, and the
strategies experimentalists use for teasing out nature’s secrets about that
wonderfully complex organ of thought, affect, and consciousness, the
human brain.
We’ve divided the 15 chapters in this volume into five sections. The
first section, Research Tools in Relation to Theory, has chapters orga-
nized around several case studies of tool development, some historical,
some recent. Each chapter discusses the development of revolutionary
(or potentially revolutionary) research tools across neuroscience and
supports different conclusions concerning the relationship between new
research tools and theory progress. Bickle describes the late-1950s de-
velopment of the metal microelectrode for single-cell neurophysiology
and the late-1970s development of the patch-clamp. He argues that the
historical details of these cases show that theory in neurobiology is dou-
bly dependent, not only on new research tools but also on atheoretical
“laboratory tinkering” through which these tools developed. Johnson
looks at examples of tools Bickle has stressed previously, especially gene
targeting and optogenetics. He finds a much less tidy picture than Bick-
le’s “tools-first” account proposes, and one that reflects an important
ambiguity in the claim that “theories depend on tools.” He argues that
sometimes theories and tools work in tandem, while in other cases, tools
are “first” and theory has little or no role in the investigation. Ata-
nasova, Williams, and Vorhees recount the process of designing and
continuously refining the Cincinnati water maze, a tool utilized in
the study of rat egocentric learning and memory. This case study exem-
plifies the interplay between toolmaking, experiment, and theory where
tool availability often determines the kinds of experiments performed
in a certain laboratory. The authors introduce the notion of theories as
interpretative devices to capture cases in the practice of neuroscience
in which multiple experiments precede, rather than succeed, the the-
ory for which they provide empirical support. Barwich and Xu present
a first-hand account of Xu’s recent application of a new tool, SCAPE
microscopy, which uncovered a new mechanism of mixture coding at
the olfactory periphery. SCAPE (Swept, Confocally-Aligned Planar
Excitation) microscopy allows for fast three-dimensional, high-
resolution imaging of entire small organisms (e.g., larvae) or large
intact tissue sections (e.g., in mice). Barwich and Xu illustrate how
technologies facilitate a broader and different theoretical perspective by
being an integral part of the thinking process itself and by co-creating
mental structures. Hardcastle and Stewart show how “tinkering” with
19th-century manual “handheld” autopsy techniques led to discoveries
about otherwise puzzling “brain fog” reported in some advanced cases
of COVID-19 infection. This revamped technology enabled research-
ers to get around CDC (and common sense) restrictions on using aero-
sol sprays in brain autopsies of COVID patients and to find in infected
brains a class of large platelet-producing cells normally found in the
bone marrow. In this case, the “tinkering” with tools has led to new
implications about the blood-brain barrier, viral activity and the rela-
tionships between nervous and other bodily tissues.
We title the second section Research Tools and Epistemology. It in-
cludes chapters that show how a focus on neuroscience’s research tools
and their development transforms some traditional, much-discussed
epistemological issues in the philosophy of science, including the dis-
semination of knowledge, the integration of knowledge across fields of
neuroscience, and the relationship between explanation and prediction.
Silva argues that the power of a new research tool to capture scientists’
interests depends not only on its ability to reveal new and transforma-
tive phenomena, but also on its ease of use and transferability across
labs. He illustrates these dual(ing) concerns by discussing recent devel-
opments of “miniscopes,” miniaturized head-mounted fluorescent mi-
croscopes capable of recording from hundreds of cells for days or weeks
in freely-moving, behaving animals. Besides their potential impact on
science, miniscopes are also inexpensive, require no previous experience
with in vivo recordings, and can even be manufactured in the lab at
very low costs. How exactly do these epistemic and practical concerns
interact in daily laboratory science? Craver uses the early history of op-
togenetics technologies to study the norms governing the evaluation of
new interventionist technologies. Craver integrates this discussion with
James Woodward’s much-discussed manipulationist view of causal rele-
vance and lays out some foundational assumptions for an epistemology
of intervention. Craver calls attention to crucial normative distinctions
between those (the modelers) who use new tools for the purposes of ex-
ploring the brain and those (the makers) who use those tools to engineer
brain systems for our own ends, arguing for some significant epistemic
differences between these two groups of scientists. Tramacere analyzes
various cases of triangulation, where multiple tools have provided both
discordance and concordance of evidence. She argues that the use of
two or more measurement techniques (such as electroencephalogra-
phy (EEG), magnetic resonance imaging (MRI), and functional MRI
(fMRI)), has proven useful to minimize the limitations of each tool and
to refine experimental hypotheses about the role of the brain in cogni-
tion. Finally, she argues that triangulation, especially within the same
experimental setting, is helpful with the integration of data and find-
ings, contributing to the understanding of how the mind works. Nathan
examines neuroimaging techniques in cognitive neuroscience and their
use in “reverse inferencing” psychological states from locations or pat-
terns of brain activity. Using these strategies, the measured neural states
predict psychological features, but provide no causal-mechanistic ex-
planations of the psychological processes. This aspect of contemporary
cognitive neuroscience investigations, which Nathan also finds in other
human brain-mapping endeavors, does not sit easily with traditional
philosophy of science’s related treatments of explanation and prediction.
He suggests a “black-box” solution, according to which prediction and
explanation are grounded in the same underlying causal network, but
require different degrees of mechanistic detail.
Section 3 concerns Research Tools, Integration, Circuits and Ontol-
ogy. New research tools in neuroscience speak to the most general of
philosophical questions about how we characterize the nature of the
portion of the world that neuroscience addresses. The chapters in this
section cover three such questions: do research tools help or hinder at-
tempts to integrate the many fields and “levels” of neuroscientific inves-
tigation (or both)? The great increase in the contents of neuroscience’s
“toolkit” over the recent decades seems not yet to have yielded compar-
atively great explanatory increases in uncovering the mechanisms of be-
havior in even our simpler animal models. Is this because new tools need
the coincidental development of new neuroscientific concepts? How do
our research tools help us characterize the correct set of basic categories,
the “ontology,” of the brain and its activities? Colaço discusses CLAR-
ITY, a tissue clarifying tool that extends the scale at which optical mi-
croscopy can be used and so has successfully contributed what he calls
“local” integrations. However, his analysis of CLARITY also addresses
why the constraints and productivity of a research tool can contribute to
the difficulty, acknowledged by philosophers and neuroscientists alike,
of “systematic integration” of methods, data and explanatory schema
across neuroscience’s numerous fields. Parker considers the use of ex-
perimental tools to understand neural circuits. In the last 25 years, the
range of tools available for these analyses has increased markedly, but
he argues that neural circuit understanding has not increased signifi-
cantly despite techniques that allow us to examine more components
with greater precision and in more detail than before. He considers
whether we will have to develop tools that will allow us to address as-
pects of neurobiology that have not been considered in traditional circuit
approaches. Burnston investigates recent novel data-analysis tools that
some have proposed to offer illuminating explanations of how mental
functions are realized in the brain. He rejects this conclusion and instead
insists that results with these new tools recommend switching to a “task-
based cognitive ontology” of neuroscience’s basic explanatory target.
Section 4, Tools and Integrative Pluralism, explores how new com-
puting technologies have combined with more standard neuroscientific
wet-lab and behavioral laboratory tools and protocols to yield powerful
new approaches that integrate laboratory and computational meth-
ods in exciting new ways. We organize these two chapters historically.
Favela presents Hodgkin and Huxley’s Nobel Prize-winning work as
an instance of research constrained by both technological limitations
and mathematical assumptions. He then argues that the discovery of
scale-invariant neuronal dynamics required both technological and
mathematical advances, and points out how these cases conflict with
the priority on technological dependence that some philosophers of
neuroscience have urged. Prinz describes the dynamic clamp technique,
which interfaces living neurons and circuits with computational mod-
els in real-time and at multiple levels, ranging from models of cellular
components and synapses to models of individual neurons to entire cir-
cuits. This technique creates hybrid in vivo-in silico systems where living
brains and computer models directly “talk to each other,” to take advan-
tage of combining experimental investigation of living neural systems
with the precise, theoretically-guided control over neural, synaptic, and
circuit parameters that computational models provide.
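To make the core idea concrete, the following is a minimal sketch of the feedback loop that dynamic clamp implements; the conductance model, reversal potential, and update interval below are illustrative assumptions for exposition, not Prinz's actual implementation.

# Minimal dynamic-clamp loop (illustrative sketch; parameters are assumptions).
# Each cycle: read the neuron's membrane voltage, compute the current a modeled
# conductance would carry at that voltage, and inject that current back into
# the cell before the next cycle.
import numpy as np

E_REV_MV = -70.0   # assumed reversal potential of the virtual conductance (mV)
DT_MS = 0.05       # assumed update interval; real systems run at tens of kHz

def model_conductance(v_mv, t_ms):
    # Placeholder: a constant 10 nS "virtual synapse". Real models update
    # gating variables (e.g., Hodgkin-Huxley-style kinetics) at every step.
    return 10.0

def dynamic_clamp_step(v_mv, t_ms):
    g_ns = model_conductance(v_mv, t_ms)
    return g_ns * (v_mv - E_REV_MV)   # I = g * (V - E_rev); nS * mV gives pA

# Offline demo: a sinusoidal trace stands in for the amplifier's voltage readout.
t = np.arange(0.0, 5.0, DT_MS)
v = -65.0 + 5.0 * np.sin(t)
injected_pA = [dynamic_clamp_step(vi, ti) for vi, ti in zip(v, t)]
print(f"mean injected current: {np.mean(injected_pA):.1f} pA")

In an actual rig, the voltage readings arrive from the amplifier in real time and the computed current is written back out through it, closing the hybrid living-neuron/computer-model loop the chapter describes.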
We end the volume with a section on Tool Use and Development Be-
yond Neuroscience. Neuroscience is hardly the only scientific endeavor
driven by its research tools. And yet none of the philosophies of specific
sciences has emphasized explicitly the place of tools in that science’s ac-
tivities or the implications tool use and development hold for philosoph-
ical reflection on its practices and products. Our hope is that this volume
spurs philosophical interest in research tools across all of science. To
illustrate that potential, in our final chapter Baxter investigates biolo-
gists conducting loss of function experiments, where they often single
out the genes they manipulate to explain the observable differences these
manipulations produce. She notices that the logical structure of the re-
sulting explanations varies in a regular fashion and argues that the types
of experimental tools that biologists use to intervene on genes account
for this variety.
Our broader goal of introducing a new concern into the philosophy
of neuroscience predates our specific interests in neuroscience’s research
tools. In mid-February 2008, two of us (Bickle and Craver) were in-
vited speakers at a one-day “Symposium on Neurophilosophy” hosted
by the then-new Ludwig-Maximilians-Universität Munich Center for
Neuroscience. Rather than immediately fly back to the States, we spent
the weekend at the Tegelberg in southern Bavaria. On the train back
to Munich late Sunday evening, before a long day of air travel back
to the States the next day, we decided that the ruthless reductionism-
mechanism disagreement we had both been so committed to fighting
for nearly a decade was growing stale. (That was just as the issue of
mechanisms was becoming the clearly dominant view in the philosophy
of neuroscience.) We wanted to find a new topic, perhaps even one on
which we could agree. It took us a while to find tool development; it
was another seven years after that first discussion until the idea for the
optogenetics symposium at PSA 2016 took shape. But it was fun to see
a new topic attract the attention of two longtime philosophical antag-
onists (and personal friends), especially one that also captured the at-
tention of Barwich and this outstanding collection of contributors from
philosophy and neuroscience.
By integrating these pluralistic approaches and interdisciplinary works
on tool use and development in neuroscience, we hope with this volume
to give this emerging research further momentum and to facilitate grow-
ing interest in this area.
We thank Andrew Weckenmann, editor of the Routledge Studies in
Philosophy of Science series, for his interest in pursuing a volume of
essays that seeks to introduce a new topic for further investigation, and
for the speed with which he and Routledge got our project into press. We
also are thankful for the helpful and positive comments of two anony-
mous reviewers on the book’s proposal.
Note
1 And recently made available in paperback by MIT Press (Cambridge: MIT
Press/Bradford Book, 2004).
Section 1
Research Tools in Relation to Theories

1 Tinkering in the Lab
John Bickle
DOI: 10.4324/9781003251392-3
read about in the next section) once remarked that “our science seemed
not to conform to the science that we are taught in high school, with its
laws, hypotheses, experimental verification, generalizations, and so on”
(Hubel 1996, 312). Okay! So what did their science “conform to”? Our
philosophical view of what laboratory science is all about may hang in
the balance of these investigations.
It is important to clarify a number of points at the outset. First, re-
jecting theory-centrism about science is not tantamount to claiming that
theory has no place in wetlab sciences like neurobiology. Such a claim is
not only preposterous but also conflicts with the simple observation that
neuroscience’s best-confirmed current theories are from wetlab neurobi-
ology. The structure and function of ion channels and active transport-
ers that underlie neural conductance of action potentials; the detailed
molecular mechanisms of chemical neurotransmission in both pre- and
post-synaptic neurons; the intra- and inter-neuronal signaling pathways
leading to new gene expression, protein synthesis, and ultimately plas-
ticity of structure and responses in individual synapses, neurons, cir-
cuits, and ultimately organism behavior; these are current neuroscience’s
experimentally best-confirmed theories. The rejection of mainstream
philosophy of science’s theory-centrism is instead an attempt to put
theory in its proper place. Theory in wetlab neurobiology is dependent
upon the development and ingenious use of new research tools. The gen-
esis of every piece of theory just mentioned can be traced back directly
to the development of some new research tools. Theoretical progress in
this paradigmatic science of our times is secondary to and dependent
upon new tool development, both temporally and epistemically. Not the
other way around.
In this chapter, I will urge one more step away from theory-centrism
through further reflection on research tool development in neurobiology.
Theory progress therein is also dependent on what I will here call athe-
oretical laboratory tinkering. Theory progress in neurobiology is thus
doubly dependent, and hence tertiary in both epistemic and temporal
priority: theory progress depends on the development of new research
tools, which in turn depends on atheoretical laboratory tinkering.
My goal in this chapter is to argue for this additional step toward “put-
ting theory in its place,” in laboratory sciences like neurobiology.
Second important clarification: A common challenge to my attacks
on theory-centrism has been: “Surely the development of these research
tools presupposed theory! So how can theory be so dependent on them?”
Of course, theory informed the work that produced these research tools.
But as the detailed history of each tool shows, the theory progress that re-
sults didn’t guide the tool’s development. Here a distinction that emerged
in the recent philosophical literature on “exploratory experimentation”
is helpful. Franklin (2005) distinguishes experiments directed by both
theoretical background and local theory about the specific objects be-
ing investigated from experiments directed only by the theoretical back-
ground. Burian (2007) distinguishes theory-driven experiments from
theory-informed experiments depending upon whether or not the the-
ory provides expectations about what experimenters will find. Like ex-
ploratory experiments, tool development experiments appear to be these
latter types.1
A final clarification: Everyone concerned about the place of theory
in science should embrace philosopher Wilfrid Sellars’ quip, “the term
‘theory’ is one of those accordion words which, by their expansion
and contraction, generate so much philosophical music” (1965, 172).
I won’t “provide an analysis” of “theory” here. Sellars’ quip explains
why any “analysis” will simply be an explication of one of any number
of concepts that ambiguous word signifies. Instead, I’ll pursue my usual
metascientific approach (Bickle 2003; Silva et al. 2014) and present some
little-discussed history of laboratory neurobiology, namely, the devel-
opment of two of its most impactful research tools: the metal micro-
electrode for extended sessions of single-cell neurophysiology; and the
patch clamp, the tool that opened the door to the molecular revolution
that has guided mainstream neurobiology for the last four decades. The
mainstream philosophical sense(s) of “theory” that I’m trying to “put in
its (their) place” will hopefully become clearer in contrast with some his-
torical details about the kinds of atheoretical solutions that developers of
these laboratory research tools hit upon to get them to work.
The italics I’ve added emphasize the many practical considerations that
drove Hubel’s discovery. These are the phrasings of a skilled tinker, not
a physical chemist.
Hubel’s remembrances of inventing the hydraulic microelectrode ad-
vancer, required to drive the metal microelectrode to precise locations
in brain tissue in vivo, are equally revealing of the practical problems he
confronted and the tinkering solutions he found. “The problem was not
entirely simple … a hydraulic system seemed to be the best bet, but one
had to make the piston and cylinder compatible with a chamber closed to
the atmosphere” (1996, 304). A closed chamber was required to prevent
movements of the cortex caused by pulsations in an open environment,
which would alter the electrode penetration path. Hubel found himself
“having to continually mollify machinists” when he brought back their
“skillfully built … latest model” and explained to them “why it couldn’t
work” (1996, 304). His solution? “Finally I decided I must learn how
to operate a lathe, and went to night school in downtown Washington,
D.C.,” (1996, 304). When all else fails, do it yourself (in Hubel’s case,
on your own time and dime). Note that this attitude about tool develop-
ment was from a future Nobel laureate.
Hubel’s tinkering attitude likewise drove his (and Wiesel’s) addition
to the histological “reconstruction technique” pioneered by Mountcas-
tle for confirming the exact placement of the recording electrode where
single-neuron recordings were taken. “Our addition … was the strat-
egy of making multiple small (roughly 100 μm diameter) electrolytic
lesions along each track by passing small currents through the tungsten
electrodes” (1981, 37). Did “theory” inform Hubel’s discovery of this
“addition”? Not hardly: “I worked out this method at Walter Reed by
watching the coagulation produced at the electrode tip on passing cur-
rents through egg white” (1981, 37). Tinkering redux.
And the rest really was history, in terms of progress in the neurobiol-
ogy of vision. Hubel and Wiesel’s work with Hubel’s tinkering-generated
metal microelectrode and hydraulic microdriver on single neuron re-
sponses in cat primary visual cortex (striate, V1), then on into extra-
striate cortex (V2, V3), won them their shares of the 1981 Nobel Prize
for Physiology or Medicine “for their discoveries concerning informa-
tion processing in the visual cortex” (https://www.nobelprize.org/prizes/
medicine/1981/summary/). The Press Release announcing the award
ends by noting the principle of cortical organization based on individual
neuron function that directed subsequent neuroscience research for the
next quarter-century:
By following the visual impulses along their path to the various cell
layers of the optical cortex, Hubel and Wiesel have been able to
demonstrate that the message about the image falling on the retina
undergoes a step-wise analysis in a system of nerve cells stored in
columns. In this system each cell has its specific function and is
responsible for a specific detail in the pattern of the retinal image.
(https://www.nobelprize.org/prizes/medicine/1981/press-release/;
emphases in original)
The last sentence expresses the key theoretical concept of a sensory neu-
ron’s “receptive field,” which soon found application in motor and even
associational “cognitive” cortices. This experiment tool and approach
not only informed neuroscientific research through the 1980s that I sur-
veyed briefly at the beginning of this section, but its use also generated
the “subway map” models of visual cortical processing, and the distinc-
tion between the “dorsal visual stream” through the posterior parietal
cortex and the “ventral visual stream” through the inferior temporal cor-
tex (a distinction first found in nonhuman primates using Hubel’s metal
microelectrodes). All of this has been textbook neuroscience for three
decades. In his Nobel Award Ceremony Speech introducing Hubel and
Wiesel, Karolinska Institutet physiologist David Ottoson compared the
pair’s “translation” of the “symbolic calligraphy of the brain cortex” to
Later in 1980, the group published their initial recordings of Na+ cur-
rents through the gigaseal-achieving patch clamp, through single ace-
tylcholine receptors in rat muscle fibers (Sigworth and Neher 1980).
By 1981 they published a full report of the improved new technique,
including how to mass-produce patch micropipettes, procedures for re-
liably achieving gigaseals, recording circuits and designs for improving
frequency responses, and improved procedures for preparing cell mem-
branes and physically isolating patches (Hamill et al. 1981). That paper
reported recordings using the improved technique from small-diameter
cells, including Na+ currents from single channels in bovine chromaffin
cells. Resolving currents from these channels had been impossible due
to too-weak seals, because of the currents’ tiny size and short duration.
The trial-and-error atheoretical tinkering attitude I am stressing is ap-
parent in Neher’s remarks quoted above (“by chance,” “it turned out”).
It is also explicit in his remark on the group’s capacity to amplify these
tiny measured currents. “Fortunately Fred Sigworth had just joined the
laboratory. With his experience in engineering, he improved the elec-
tronic amplifiers to match the advances in recording conditions” (1991,
15). Science writer (and biochemistry Ph.D.) Karen Hopkin has docu-
mented further remarks by Neher, Sakmann, and collaborators on their
achievement of the gigaseal (Hopkin 2010). Their “tinkering-first” at-
titude is unmistakable in those remarks. For example, Hopkin quotes
Neher suggesting that experimental know-how led to his breakthrough:
“you had to apply a little bit of suction in order to pull some mem-
brane into the orifice of the pipette … if you did it the right way it
worked” (Hopkin 2010; my emphases). She quotes lab member Owen
Hamill recalling “a weird period where we could no longer get gigaseals
… Then Bert [Sakmann] suggested that you have to blow before you
suck” (Hopkin 2010). Blowing solution through the clean pipette tip as
it approached the membrane apparently ejected any minute particles in
the tip that weakened the seal. No theory guided that discovery; it had
nothing to do with membrane polishing, Sakmann’s expertise. Rather,
this discovery was made “on the bench” by an experienced (and first-
rate) laboratory tinker.
Armed with their now-reliable new tool, Neher, Sakmann, and collab-
orators quickly demonstrated experimentally that the macroscopic ionic
Na+ and K+ currents in neurons measured in voltage clamp experiments
going back to Hodgkin and Huxley’s work were the sum of a huge num-
ber of microcurrents through individual channels specific to each ion.
The latter were now reliably measurable by the gigaseal patch clamp.
Either current could be induced through the voltage clamp stimulating
electrode by changing macroscopic membrane current. During multiple
current inductions, Na+ or K+ currents could be measured through the
patch clamp, with the other type of channels blocked pharmacologi-
cally by antagonists. An average of all of those (thousands of) patch
clamp recordings could then be computed and graphed. The resulting graph
of ion microcurrents over time exactly matched that for the mem-
brane macrocurrents. The activities of Hodgkin and Huxley’s specula-
tive “voltage gates” in neuron membranes, and their role in generating
action potentials, were now observable on the lab bench. Coupling these
patch clamp discoveries with ones using an even older experimental tool,
X-ray crystallography (which had been around for nearly a century by
that time), quickly revealed channel structure. Channels turned out to be
proteins configured with differently charged amino acids at specific locations. And
the theory of ion channels and active transporters in neural conductance
that we possess today was quickly fleshed out experimentally. There is
an often-traced direct line of theory progress in neurobiology from Hod-
gkin and Huxley’s Nobel Prize to Roderick MacKinnon’s half-share of
the Nobel Prize in Chemistry in 2003 for his work on the structure and
mechanistic functioning of K+ channels. That path runs directly through
Neher, Sakmann and results obtained using their patch clamp.
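The logic of that comparison between averaged microcurrents and macroscopic currents can be illustrated with a toy simulation; the two-state channel, rate constant, and single-channel current below are invented for illustration and are not drawn from the original papers.

# Toy illustration: averaging many simulated single-channel "sweeps" recovers
# a smooth macroscopic current time course. All parameters are made up.
import numpy as np

rng = np.random.default_rng(0)
DT_MS = 0.01
t = np.arange(0.0, 10.0, DT_MS)   # time after the voltage step (ms)
TAU_MS = 1.5                      # assumed mean latency to channel opening
I_SINGLE_PA = 1.5                 # assumed current through one open channel

def single_channel_sweep():
    # One sweep: the channel opens at a random exponential latency and stays open.
    latency = rng.exponential(TAU_MS)
    return np.where(t >= latency, I_SINGLE_PA, 0.0)

n_sweeps = 5000
average_of_sweeps = np.mean([single_channel_sweep() for _ in range(n_sweeps)], axis=0)
macroscopic_expectation = I_SINGLE_PA * (1.0 - np.exp(-t / TAU_MS))

# The averaged microcurrents track the macroscopic curve; the difference
# shrinks as the number of sweeps grows.
print(np.max(np.abs(average_of_sweeps - macroscopic_expectation)))

Real records are square pulses of openings and closings rather than a single opening per sweep, but the averaging step is the same one described above.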
And that was just the beginning of the patch clamp’s contributions to
neurobiology. As Salk Institute neurobiologist Charles Stevens, whose
Yale lab had hosted Neher’s 1975–1976 visit, remarked a decade ago,
“everything that we’ve learned about the nervous system over the past
25 years we could not have done without patch clamping. Erwin’s dis-
covery of the gigaseal made all the difference in the world” (quoted in
Hopkin 2010). The next few years following the reliable establishment
of the gigaseal bear out Stevens’ assessment. Ingenious experimenters
quickly discovered numerous ways the isolated patches of neuron mem-
brane could be manipulated. In his Nobel Prize address, Neher refers
to these further discoveries as “Unexpected Benefits” (1991, 15). They
were “unexpected” because they were not predicted by any theory; they
were discovered purely by chance or trial-and-error experimentation;
nobody saw these coming. In the words of this paper, they were the re-
sult of extended laboratory tinkering. First, numerous variations on cell
surface patching were found. Hamill et al. (1981) reported that a strong
pulse of suction through the pipette would not only establish the gigas-
eal but would also lesion the sealed membrane. This made the cytoplasm
of the cell continuous with the interior of the pipette and afforded ex-
perimenters a means to manipulate interior neuron function molecularly
in real time. These so-called “whole cell recordings” via the patch clamp
remain a laboratory standard in electrophysiology to this day.
Hamill et al. (1981) also reported that by applying suction to es-
tablish the gigaseal, then slowly lifting the pipette, the patch of mem-
brane sealed inside the pipette could be detached from the cell. This
“inside-out” patch-clamp variation exposed the cytoplasmic surface of
the membrane and its channels or receptors to molecular manipulation
via the solution in the nutrient bath. Finally, and most remarkably, the
gigaseal could be established with the cell membrane and strong suction
applied to lesion the membrane, as with the whole-cell recording tech-
nique; the pipette could then be lifted to rupture the cell membrane,
slightly more slowly than with the inside-out technique, so as to remove
slightly more cell membrane on either side of the seal. The two detached
strands of the membrane would anneal together. This “outside-out”
patch clamp variation made the extracellular domain and its channel or
receptor components accessible to molecular manipulations for the first
time via the nutrient bath. All of the experimentally-confirmed theory
we now possess about receptors, including metabotropic receptors and
the variety of intra- and intercellular signaling their activation leads to,
traces back to these innovations with basic patch clamp techniques. All
these variations had been developed, refined, and put to use in numerous
laboratories worldwide by the time Sakmann and Neher published their
first comprehensive scientific review paper of the patch clamp technique
and its principal results, in 1984.
Neher himself, in his Nobel Prize address one decade later, remarked
on how transformative the patch clamp had already been in electrophys-
iology. He notes that the whole-cell recording variant quickly became
“the method of choice for recording from most cell culture prepara-
tions” because of its range of experimental applications: “many cell
types, particularly small cells of mammalian origin, became accessible
to biophysical analysis for the first time” (1991, 17). These cells simply
“would not tolerate multiple conventional impalements” that previous
tools required (1991, 17). Patch clamp whole-cell recording, inside-out,
and outside-out variants could also be used in combinations in detailed
multiple-experiment designs, and suddenly “individual current types
could be separated through control of solution composition on both
sides of the membrane” (1991, 17). The patch clamp radically shifted
experimental practices in laboratory neurobiology. It made the cells of
real interest, including mammalian neurons, experimentally accessible,
which technical limitations had precluded previously. And experiments
changed drastically, almost overnight. Neher reports some details:
In the same paragraph, Hubel also remarks, “It has always surprised
me how few attempts are made to devise new methods—perhaps it is
because one is generally rewarded not for inventing new methods but
for the research which results from their use” (1996, 304). The lament
of a tinker!
This attitude pervaded Hubel’s entire approach to science. Speaking
for himself and Wiesel, he remarks that “we felt like 15th century ex-
plorers, like Columbus sailing West to see what he might find” (1996,
312; my emphasis). The impact of theory on their landmark research
was minimal:
And he generalizes this last point beyond himself and Wiesel: “I suspect
that much of science, especially biological science, is primarily explor-
atory in this sense” (1996, 313).
Hubel’s account of science’s “main pleasures” will be unfamiliar to
theorists: “To me the main pleasures of doing science are in getting ideas
for experiments, doing surgery, designing and making equipment, and
above all the rare moments in which some apparently isolated facts click
into place like a Chinese puzzle” (1981, 48). The first joy on Hubel’s list
is that of experimentalists, the second that of lab technicians, and the
third and fourth those of the tinkers, both mechanical and intellectual.
The task of writing Hubel’s obituary for Neuron, published a few
days after his death in October 2013, fell to Margaret Livingstone, his
second long-time research collaborator. She notes that he was “a giant in
our field,” but also “the guy in the lab who did the work that made him
great, and there is surely some connection between the way he daily went
about doing science and how successful he was” (2013, 735). She notes
the “two characteristics” that were most important to his scientific suc-
cess: “his mechanical inventiveness and his perseverance” (2013, 735).
She closes his obituary by remarking on the teaching Hubel undertook
after ending his research career:
Notes
1 A detailed investigation of this connection must await future work.
2 Microns, μm, millionths of one meter. The necessary tip dimensions thus
ranged from less than 4 one-hundred-thousandths of an inch to less than 2
ten-thousandths of an inch.
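For readers who want to check the arithmetic, the following worked conversion assumes only that 1 inch = 25,400 μm; the micron values (roughly 1 μm and 5 μm) are inferred from the note's inch figures rather than taken from Hubel's paper:
\[
1\ \mu\text{m} = \tfrac{1}{25{,}400}\ \text{in} \approx 3.9 \times 10^{-5}\ \text{in},
\qquad
5\ \mu\text{m} = \tfrac{5}{25{,}400}\ \text{in} \approx 2.0 \times 10^{-4}\ \text{in}.
\]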
3 By passing a 2- to 6-volt alternating current between the tungsten wire and
a nearby carbon rod, with the wire inserted into a 27-gauge hypodermic
needle, and its tip exposed and inserted into a saturated aqueous potassium
nitrite solution. I draw all details in this paragraph from Hubel’s (1957)
paper.
4 In the terminology of my metascientific model of tool development exper-
iments (Bickle 2016), these three publications present the metal microelec-
trode’s second-phase hook experiments, which bring the tool and its usage
potential to a wider range of scientists, and sometimes to the broader general
public, beyond specialists in the field in which the tool is developed.
5 These numbers were obtained from PubMed Central (https://www.ncbi.
nlm.nih.gov/pmc/), queried May 12, 2021.
6 Crick maintained this judgment to the end of his life. See one of his last pub-
lications before his death, co-authored with Christof Koch, where the pair
propose “a framework” for a neurobiology of consciousness, emphasizing
that a framework is not a theory, but rather just a “suggested point of view
for an attack on a scientific problem” (2003, 119).
7 See, e.g., my discussion (Bickle 2019) of Mario Capecchi’s remark about
his experiences with an NIH panel on his initial funding request for devel-
oping techniques of gene targeting by homologous recombination, followed
by subsequent remarks from the same panel a couple of years later after
he had persisted in his research despite that panel’s initial mistaken—and
theory-grounded!—pessimistic judgment about its potential.
8 Hubel’s wording here should remind the reader of one of Ian Hacking’s
(1983) most quoted remarks about the development of microscopes: “We
should not underestimate … the pretheoretical role of … fooling around.”
References
Atanasova, N. (2015). “Validating animal models.” Theoria: An International
Journal for Theory, History and Foundations of Science 30: 163–181.
Bickle, J. (2003). Philosophy and Neuroscience: A Ruthlessly Reductive
Account. Dordrecht: Springer.
Bickle, J. (2015). “Marr and reductionism.” Topics in Cognitive Sciences (TOP-
ICS) 7: 299–311.
Bickle, J. (2016). “Revolutions in neuroscience: Tool development.” Frontiers in
Systems Neuroscience 10: 24. doi: 10.3389/fnsys.2016.00024.
Bickle, J. (2018). “From microscopes to optogenetics: Ian Hacking vindicated.”
Philosophy of Science 85: 1065–1077.
Bickle, J. (2019). “Linking mind to molecular pathways: The role of experiment
tools.” Axiomathes: Where Science Meets Philosophy 29(6): 577–597.
Bickle, J. (2020). “Laser lights and designer drugs: New techniques for descend-
ing levels of mechanisms in a ‘single bound’?” Topics in Cognitive Sciences
(TopiCS) 12: 1241–1256.
Burian, R.M. (2007). “On microRNA and the need for exploratory experimentation
in post-genomic molecular biology.” History and Philosophy of the Life
Sciences 29(3): 283–310.
Churchland, P.S. (1986). Neurophilosophy: Toward a Unified Science of the
Mind-Brain. Cambridge: MIT Press.
Churchland, P.S. and Sejnowski, T.J. (1992). The Computational Brain. Cam-
bridge: MIT Press.
Churchland, P.S. and Sejnowski, T.J. (2016). “Blending computational and
experimental neuroscience.” Nature Reviews Neuroscience 17(11): 567–568.
Crick, F.H. (1979). “Thinking about the brain.” Scientific American 241(3)
(September): 219–232.
Crick, F.H. and Koch, C. (2003). “A framework for consciousness.” Nature
Neuroscience 6: 119–126.
Desimone, R., Albright, T.D., Gross, C.G., and Bruce, C. (1984). “Stimulus-
selective properties of inferior temporal neurons in the macaque.” Journal of
Neuroscience 4(8): 2051–2062.
Fenwick, E.M., Marty, A., and Neher, E. (1982). “Sodium and calcium currents
in bovine chromaffin cells.” Journal of Physiology 331: 595–635.
Fesenko, E.E., Kolesnikov, S.S., and Lyubarsky, A.L. (1985). “Induction by cy-
clic GMP of cationic conductance in plasma membrane of retinal rod outer
segment.” Nature 313: 310–313.
Franklin, L. (2005). “Exploratory experiments.” Philosophy of Science (Pro-
ceedings) 72: 888–899.
Funahashi, S., Chafee, M.V. and Goldman-Rakic, P.S. (1993). “Prefrontal neu-
ronal activity in rhesus monkeys performing a delayed anti-saccade task.”
Nature 365: 753–756.
Gold, I. and Roskies, A.L. (2008). “Philosophy of neuroscience.” In M. Ruse
(ed.), The Oxford Handbook of Philosophy of Biology. New York: Oxford
University Press, 349–380.
Gross, C.G., Rocha-Miranda, C.E., and Bender, D.B. (1972). “Visual proper-
ties of neurons in inferotemporal cortex of the macaque.” Journal of Neuro-
physiology 35(1): 96–111.
Hacking, I. (1983). Representing and Intervening. Cambridge: Cambridge Uni-
versity Press.
Hamill, O.P., Marty, A., Neher, E., Sakmann, B., and Sigworth, F.J. (1981). “Im-
proved patch clamp techniques for high-resolution current recording from
cells and cell-free membrane patches.” Pflügers Archiv (European Journal of
Physiology) 391: 85–100.
Hodgkin, A.L. and Huxley, A.F. (1952). “A quantitative description of mem-
brane current and its application to conduction and excitation in nerve.”
Journal of Physiology 117: 500–544.
Hopkin, K. (2010). “It’s electric.” The Scientist. https://www.the-scientist.com/
uncategorized/its-electric-43471.
Hubel, D.H. (1957). “Tungsten microelectrode for recording from single units.”
Science 125(3247): 549–550.
Hubel, D.H. (1981). “Evolution of ideas of the primary visual cortex, 1955–
1978. A biased personal history.” Nobel Lecture. https://www.nobelprize.
org/nobel_prizes/medicine/laureates/1981/hubel-lecture.pdf.
Hubel, D.H. (1996). “David H. Hubel.” In L. Squire (ed.), The History of Neu-
roscience in Autobiography, vol. 1. Washington, DC: Society for Neurosci-
ence, 294–317.
Hubel, D.H. and Wiesel, T.N. (1959). “Receptive fields of single neurons in the
cat’s striate cortex.” Journal of Physiology 148: 574–591.
Hubel, D.H. and Wiesel, T.N. (1962). “Receptive fields, binocular interaction,
and functional architecture in the cat’s striate cortex.” Journal of Physiology
160: 106–154.
Hubel, D.H. and Wiesel, T.N. (1965). “Receptive fields and functional archi-
tecture in two non-striate visual areas (18 and 19) of the cat.” Journal of
Neurophysiology 28: 229–289.
Kameyama, M., Hescheler, J., Hofmann, F., and Trautwein, W. (1986). “Mod-
ulation of Ca current during the phosphorylation cycle of the guinea pig
heart.” Pflügers Archiv (European Journal of Physiology) 407: 123–128.
Kuhn, T. (1962). The Structure of Scientific Revolutions. Chicago, IL: Univer-
sity of Chicago Press.
Livingstone, M. (2013). “David H. Hubel, 1926–2013.” Neuron 80: 735–737.
Mishkin, M., Ungerleider, L.G., and Macko, K.A. (1983). “Object vision and
spatial vision: Two cortical pathways.” Trends in Neurosciences 6: 414–417.
Neher, E. (1991). “Ion channels for communication between and within
cells.” Nobel Lecture. https://www.nobelprize.org/nobel_prizes/medicine/
laureates/1991/neher-lecture.pdf.
Neher, E. and Sakmann, B. (1976). “Single-channel currents recorded from
membrane of denervated frog muscle fibers.” Nature 260 (April 29): 799–802.
Neher, E., Sakmann, B., and Steinbach, J.H. (1978). "The extracellular patch
clamp: A method for resolving currents through individual open channels in
biological membranes.” Pflügers Archiv (European Journal of Physiology)
375: 219–228.
Penner, R., Matthews, G., and Neher, E. (1988). "Regulation of calcium influx
by second messengers in rat mast cells.” Nature 334: 499–504.
Purves, D., Augustine, G.J., Fitzpatrick, D., Hall, W.C., LaMantia, A.-S.,
Mooney, R.D., Platt, M.L., and White, L.E. (2017). Neuroscience, 6th Ed.
New York: Oxford University Press.
Robins, S.K. (2016). “Optogenetics and the mechanism of false memory.” Syn-
these 193: 1561–1583.
Robins, S.K. (2018). “Memory and optogenetic intervention: Separating the en-
gram from the ecphory.” Philosophy of Science 85(5): 1078–1089.
Romo, R., Hernández, A., Zainos, A., and Salinas, E. (1998). “Somatosensory
discrimination based on cortical microstimulation.” Nature 392: 387–390.
Sakmann, B. and Neher, E. (1984). “Patch clamp techniques for studying
ionic channels in excitable membranes.” Annual Review of Physiology 46:
455–472.
Salzman, C.D., Murasugi, C.M., Britten, K.H., and Newsome, W.T. (1992).
“Microstimulation in visual area MT: Effects on direction discrimination
performance.” Journal of Neuroscience 12(6): 2331–2355.
Sellars, W. (1965). “Scientific realism or irenic instrumentalism.” In R.S. Cohen
and Marx W. Wartofsky (eds.), Boston Studies in the Philosophy of Science,
vol. II. New York: Humanities Press, 171–204.
Sigworth, F.J. and Neher, E. (1980). “Single Na+ channel currents observed in
cultured rat muscle cells.” Nature 287: 447–449.
Silva, A.J., Landreth, A., and Bickle, J. (2014). Engineering the Next Revolu-
tion in Neuroscience. New York: Oxford University Press.
Sullivan, J.A. (2009). “The multiplicity of experimental protocols: A challenge
to reductionist and non-reductionist models of the unity of neuroscience.”
Synthese 167: 511–539.
Sullivan, J.A. (2010). “Reconsidering ‘spatial memory’ and the Morris water
maze.” Synthese 177: 261–283.
Sullivan, J.A. (2018). “Optogenetics, pluralism and progress.” Philosophy of
Science 85(5): 1090–1101.
Van Essen, D.C., Anderson, C.H., and Felleman, D.J. (1992). “Information
processing in the primate visual system: An integrated systems perspective."
Science 255(5043): 419–423.
2 Tools, Experiments, and
Theories
An Examination of the Role
of Experiment Tools
Gregory Johnson
1 Introduction
John Bickle (2016, 2018, 2019) offers two frameworks for thinking
about the role of experiment tools in neurobiology. First, he argues that
“revolutions in neuroscience” do not proceed in a Kuhnian manner such
that a dominant paradigm is replaced in response to an accumulation
of anomalies. Rather, revolutions in this science begin with motivating
problems that spur the development of new experiment tools, the impor-
tance of which is revealed in initial- and second-phase hook experiments
(2016). The initial-phase hook experiments demonstrate the feasibility
of the new tool. The second-phase hook experiments demonstrate its
usefulness in a wider range of experimental contexts and bring it to the
attention of a much larger audience—both scientific and more general.
Bickle extends this to a second framework, which is grounded in a
critique of “theory-centrism” in the philosophy of neuroscience (2018,
2019). As Bickle has it, theory-centrism is the view that neuroscience
needs theories—on the model of physics, early modern astronomy, or
evolutionary biology—that will drive successful research efforts. One
advocate of this view is Patricia Churchland. She writes,
And then, more than two decades later, Ian Gold and Adina Roskies say,
complex biological organ which has evolved to perform a variety
of sophisticated tasks. It yields its secrets grudgingly. Nonetheless,
there is no principled reason why we cannot expect that, in time,
we will be able to formulate more general theories about the neural
processing that underlies these diverse functions.
(2008, p. 353)
Bickle aims to dispel this view that theory should have a central po-
sition in our understanding of neuroscience. Rather, he argues that we
should appreciate the central role of tool development and use. To the
extent that theory has a role in contemporary neuroscience, it is “ter-
tiary” in importance:
Rather than being the crux point on which everything else depends,
… theory turns out to be doubly dependent, and hence of tertiary,
not primary, importance. Our best confirmed theory is totally de-
pendent on what our experiment tools allow us to manipulate. And
those tools developed by way of solving engineering problems, not
by applying theory.
(2019, p. 578)
This can be interpreted in two ways. On the one hand, when a theory is
proposed, the theory depends on experiments and the tools used in those
experiments for its confirmation. Hence, without those tools, the theory
would fail to be confirmed. Arguably, though, dependence in this sense
doesn’t detract from a central role for theory in the scientific process.
This sense of dependence, however, doesn’t seem to be what Bickle has
in mind. For instance, in one place, he writes,
Here it seems that we are meant to understand that the temporal order,
as well as the order of importance, is, as Bickle later lays it out, “en-
gineering solutions → new experiment tools → better theory” (2019,
p. 595). I will call this the tools first (or anti-theory-centric) method with
the idea that, as Bickle stresses, the application of an experiment tool is,
along with other factors, central to the investigation—but one of those
other factors is not the testing of a theory.
Surprisingly, given the role that it has in his analysis, Bickle is stu-
diously coy about what he means by theory. For the most part, rather
than use this term, I will use hypothesis, defined here as an explana-
tion for a process or phenomenon that still requires confirmation. By
focusing on hypotheses, I am deliberately setting aside theory used in
the sense of understanding, knowledge of the discipline, or completed
explanation—or, as Churchland says, “this conglomeration of back-
ground assumptions, intuitions, and assorted preconceptions” (1986,
p. 405). I will take it for granted that theory in this latter sense is perva-
sive at all stages of neurobiological investigations.1
Bickle’s assertion that the tools first method is always used in con-
temporary neurobiology is a strong claim, and it will be our focus. In
Sections 2 and 3, I will look at two cases. The first, gene targeting and
investigations of the relationship between memory and long-term poten-
tiation, is extensively discussed by Bickle (2016, 2019). I find, however,
that a well-defined hypothesis does have a prominent role in these in-
vestigations. In short, a hypothesis was developed and then confirmed
by experiments using gene targeting. The second case, however—an op-
togenetic investigation of neurons in the extended amygdala that were
found to drive both anxiety and anxiety-reduction—illustrates the ap-
plication of Bickle’s tools first method.
The takeaway, then, is twofold. First, scientific method in contempo-
rary neurobiology is more varied than Bickle suggests, and sometimes
theory does have a central role. But, second, there are important investi-
gations in neurobiology that proceed without a hypothesis or theory as
the starting point (and without either coming into play at any point, for
that matter). This is, in part, a consequence of experiment tools that, as
Bickle argues, allow for ever more precise investigations of cellular and
molecular processes. It is also a consequence of the explanatory goals in
neurobiology, namely, the description of mechanisms. When these two
consequences come together, there is no longer an apparent need for the-
ories of the sort encountered in physics or evolutionary biology.
In 1992, this work led to the report of the first knockout mice in neu-
roscience. It was shown that deletion of the α-CaMKII gene causes
a deficit in LTP at Schaffer collateral-CA1 synapses and an impair-
ment of spatial learning. Even before publication, however, we were
aware of the limitations of this approach in conventional knockout
mice. These limitations were primarily because the gene of interest
is deleted in the entire animal throughout the animal’s life. Although
no obvious developmental defects were observed in the α-CaMKII
knockout mice, more subtle defects could not be excluded. Further-
more, the universal absence of the protein in question (in this case,
α-CaMKII) certainly did not permit the establishment of a causal
relationship between CA1 LTP and spatial learning.
(2001; see also Tsien et al. 1996, p. 1327)
So, while Silva et al. showed that gene targeting could be used in neuro-
biological investigations and their study added some degree of confirma-
tion to the LTP → spatial memory hypothesis, their results encountered
much the same problem as did Morris’s. Like Morris, Silva and his col-
leagues disrupted one component in the process that gives rise to LTP
and, at the same time, this intervention impaired spatial learning. But,
in both cases, the intervention was broad enough to raise questions
about whether it was the absence of LTP or the impairment of some
other process that was responsible for the poor performance on the spa-
tial memory task. Four years later, however, further work on the gene
targeting technique by Joe Tsien—who was also working in Tonegawa’s
lab—made it possible to draw a stronger conclusion.
Silva et al. had introduced the non-functional copy of the α-CaMKII
gene into embryonic stem cells that were then injected into fertilized
mouse embryos. The embryos were implanted in the uteruses of surro-
gate females, and some of the mice that were born contained the mutant
gene. Mating those mice with wild types and then mating mice that
were heterozygous for the mutant gene eventually yielded mice that were
homozygous for the mutant gene. This ensured that these mice lacked
α-CaMKII in CA1 neurons in the hippocampus. But these mice also
lacked this protein at all stages of development and in all brain areas
where it would normally be expressed: throughout the hippocampus, in
the cortex, septum, striatum, and amygdala.
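To make the breeding logic concrete, the following minimal Python sketch (a hypothetical illustration, not part of Silva et al.'s work) simulates the expected Mendelian outcome of crossing two mice heterozygous for the mutant α-CaMKII allele: roughly a quarter of the offspring are homozygous mutants that lack the functional gene everywhere.

```python
import random

# Toy Mendelian sketch (hypothetical, for illustration only): crossing mice
# heterozygous for a non-functional alpha-CaMKII allele ("m") with a wild-type
# allele ("+") yields roughly 1/4 homozygous mutants (m/m), the animals that
# lack alpha-CaMKII in all cells and at every developmental stage.

def offspring_genotype(parent1=("+", "m"), parent2=("+", "m")):
    """Each parent passes one randomly chosen allele to the offspring."""
    return tuple(sorted((random.choice(parent1), random.choice(parent2))))

def genotype_ratios(n=10_000):
    counts = {("+", "+"): 0, ("+", "m"): 0, ("m", "m"): 0}
    for _ in range(n):
        counts[offspring_genotype()] += 1
    return {genotype: count / n for genotype, count in counts.items()}

if __name__ == "__main__":
    # Expected: ~0.25 wild type, ~0.50 heterozygous, ~0.25 homozygous knockout.
    print(genotype_ratios())
```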
Shortly after Silva et al.’s study was published, Tsien used the same em-
bryonic stem cell method to create two lines of mice. One line expressed
an enzyme derived from the bacteriophage P1 only in pyramidal cells in
the hippocampus’s CA1 region. This enzyme, Cre recombinase, targets
specific sequences of DNA, 34 base pair “loxP” sites, and it will excise
a sequence of DNA that is flanked by these 34 base pair sequences. (Or,
in a procedure that was, at this point, not yet developed, Cre recombi-
nase can flip the orientation of a gene so that, depending on its original
orientation, it either will be transcribed or won’t be transcribed.) In the
other mouse line, loxP sites flanked the NMDAR1 gene, a gene that
encodes an essential subunit of the NMDA receptor. But besides the in-
clusion of the loxP sites, these mice had normal genomes and developed
normally with a functioning NMDAR1 gene. Mating mice from these two
lines produced offspring in which the Cre recombinase would excise the
NMDAR1 gene in CA1 neurons, but nowhere else (since Cre recombinase
wasn't expressed in any other neurons). And, since Cre recombinase only
begins to be expressed several weeks after birth, the loss of the NMDAR1
gene did not interfere with prenatal or perinatal development.
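The conditional logic of this Cre/loxP strategy can be summarized in a short, hypothetical sketch. The cell-type labels and the assumed onset age of Cre expression below are illustrative placeholders, not values reported by Tsien et al.

```python
# Hypothetical sketch of the Cre/loxP logic described above: the NMDAR1 gene
# is excised only where Cre recombinase is expressed (here, CA1 pyramidal
# cells) and only after Cre expression begins. The onset age is an assumed
# placeholder, not a figure from the cited studies.

CRE_EXPRESSING_CELLS = {"CA1_pyramidal"}   # cells in which the Cre line expresses Cre
CRE_ONSET_WEEKS = 3                        # assumed postnatal onset of Cre expression

def nmdar1_present(cell_type: str, age_weeks: float, floxed: bool = True) -> bool:
    """Return True if a functional NMDAR1 gene remains in this cell."""
    cre_active = cell_type in CRE_EXPRESSING_CELLS and age_weeks >= CRE_ONSET_WEEKS
    # Excision happens only when loxP sites flank the gene AND Cre is active.
    return not (floxed and cre_active)

# Adult CA1 cells in the double-transgenic mice lose NMDAR1 ...
assert nmdar1_present("CA1_pyramidal", age_weeks=8) is False
# ... but cortical cells, and CA1 cells during early development, keep it.
assert nmdar1_present("cortex", age_weeks=8) is True
assert nmdar1_present("CA1_pyramidal", age_weeks=1) is True
```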
Tsien and his colleagues found that the mice that lacked the NMDAR1
gene exhibited impaired spatial learning on the Morris water maze with-
out an impairment to non-spatial learning, and, in vitro, stimulating
CA3 axons did not induce LTP in CA1 neurons. Tsien et al. conclude:
“these results provide strong support for the hypothesis that NMDAR-
mediated LTP in the CA1 region is crucially involved in the formation
of certain types of memory” (1996, p. 1328). And a little later they add:
3 Optogenetics
The second “neuroscientific revolution” that Bickle (2016) describes
is the development and subsequent use of optogenetics. This tool,
which was initially developed in the early 2000s by Karl Deisseroth,
Edward Boyden, and colleagues working at Stanford University, uses
light to control the activation or inactivation of neurons (Boyden et al.
2005). This is done by taking genes for light-activated proteins from
algae, archaebacteria, and other microbial organisms and expressing
them in the neurons of interest. For instance, the frequently used ion
channel channelrhodopsin-2 occurs naturally in the green alga C. re-
inhardtii. When it is expressed in the neural membrane and exposed
to blue light, channelrhodopsin-2 allows positively charged ions to
enter the cell, which causes an excitatory response. Conversely, halor-
hodopsin, which is found in the single-celled archaeon N. pharaonis,
pumps negatively charged chloride ions into the cell when it is ex-
posed to yellow or orange light. This hyperpolarizing current inhibits
the cell. Once the light-sensitive protein is expressed in neurons, an
optical fiber that has been implanted in the animal’s brain over the
area of interest is used to deliver a specific wavelength of light. Thus,
neurons can be activated or inhibited while still allowing the animal
to move freely.
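As a rough illustration of this logic, the toy sketch below (with made-up numbers, not a biophysical model) shows how an opsin's preferred wavelength determines whether illumination depolarizes or hyperpolarizes a simple stand-in for the neuron's membrane.

```python
# Toy sketch (illustrative numbers only) of the optogenetic logic described
# above: channelrhodopsin-2 passes a depolarizing current under blue light,
# while halorhodopsin pumps chloride in under yellow light and hyperpolarizes
# the cell. A crude leaky integrator stands in for the neuron's membrane.

OPSINS = {
    # opsin: (activating wavelength band in nm, current in arbitrary units)
    "ChR2": ((450, 490), +1.0),    # excitatory (cation influx)
    "NpHR": ((570, 630), -1.0),    # inhibitory (chloride influx)
}

def opsin_current(opsin: str, wavelength_nm: float) -> float:
    (lo, hi), amplitude = OPSINS[opsin]
    return amplitude if lo <= wavelength_nm <= hi else 0.0

def membrane_potential(opsin, wavelength_nm, steps=50, v_rest=-70.0, leak=0.1, gain=5.0):
    """Integrate a toy membrane potential while the light is on."""
    v = v_rest
    for _ in range(steps):
        v += gain * opsin_current(opsin, wavelength_nm) - leak * (v - v_rest)
    return v

print(membrane_potential("ChR2", 470))   # depolarized well above -70
print(membrane_potential("NpHR", 590))   # hyperpolarized below -70
print(membrane_potential("ChR2", 590))   # no response to the wrong wavelength
```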
Bickle describes the motivating problem that spurred the development
of optogenetics this way:
This captures the central issue, but we can give a less hypothesis-centric
characterization of the problem. Here is how Boyden puts it:
The extended amygdala, including the bed nucleus of the stria ter-
minalis (BNST), modulates fear and anxiety, but also projects to the
ventral tegmental area (VTA), a region implicated in reward and
aversion, thus providing a candidate neural substrate for integrating
diverse emotional states. However, the precise functional connectiv-
ity between distinct BNST projection neurons and their postsynap-
tic targets in the VTA, as well as the role of this circuit in controlling
motivational states, have not been described.
(2013, p. 224)
4 Conclusion
Scientific method in contemporary neurobiology is varied. Hypotheses
may be formulated and tested, but they need not be. Ian Hacking (1983)
discusses this point, and it is instructive to see how far he is willing to
take it. He begins by quoting the 19th-century chemist Justus von Liebig
who wrote,
you must have some ideas about nature and your apparatus before
you conduct an experiment. A completely mindless tampering with
nature, with no understanding or ability to interpret the result,
would teach almost nothing.
(1983, p. 153)6
Acknowledgments
This chapter has benefited from many discussions of John Bickle’s work
on tool development at Mississippi State University and at the 2018 and
2019 Philosophy and Neuroscience at the Gulf annual meetings. I would
also like to thank John Bickle for his comments on a draft of this chapter.
Notes
1 I am not claiming that theory in the sense of a hypothesis (or a collection
of hypotheses) can be cleanly distinguished from any of the other senses of
theory or that we should always want to make such a distinction. I am only
claiming that the distinction will be useful here.
Bickle, for his part, uses theory in different ways. When he invokes and
rejects “theory-centrism,” he is adopting Churchland, Gold, and Roskies’
call for theory in the sense of proposed explanations that direct experimen-
tal investigations and are confirmed or disconfirmed by those investigations.
But he also uses theory in the sense of well-confirmed and mainly completed
explanations—for instance, when he says,
neuroscience, I claim (Bickle 2015, 2016, 2018), has lots of good,
well-confirmed theory; … the mechanisms of neuronal conductance
and transmission at chemical synapses, receptor and ion channel func-
tion, mechanisms of synaptic plasticity, including its molecular-genetic
and epigenetic mechanisms, details of anatomical circuitries linking
neurons to other neurons, and ultimately to sensory receptors and mus-
cle tissue.
(2019, p. 580).
Then, in his description of the development of gene targeting and optogenet-
ics, he is at pains to show that these tools were developed by "catch-as-catch-can
laboratory tinkering" (2019, p. 594), not by the considered application
of theory—that is, not by the application of well-confirmed scientific expla-
nations (2018, pp. 1071–1074; 2019, pp. 591–594).
2 Silva et al., in the introduction to their own study, make the point plainly:
Although LTP has been studied as a mechanism responsible for some
types of learning and memory, the actual evidence for this hypothesis is
not extensive. The main support for LTP as a memory mechanism is the
observation that pharmacological agents that block hippocampal gluta-
mate receptors of the N-methyl-D-aspartate (NMDA) class and thus pre-
vent the induction of LTP also impair spatial learning in rodents (Morris
et al. 1991). The problem with this evidence is that blocking NMDA
receptors disrupts synaptic function and thus potentially interferes with
the in vivo computational ability of hippocampal circuits. Perhaps the
failure of learning results not from the deficit in LTP but simply from
some other incorrect operation of hippocampal circuits that lack NMDA
receptor function.
(1992b, p. 201)
3 Bickle’s analysis is different. He focuses on the theory, or lack thereof, that
was used by Capecchi to develop gene targeting in mice (2019, pp. 591–594).
That, however, switches the focus from the theory that is stated in the mo-
tivating problem, and which clearly concerned these investigators, to theory
in a different context.
4 Control animals were from the same two lines of mice. Each received the injec-
tion of the viral construct and the optical fiber was implanted in its VTA. But, for
these mice, the viral construct did not contain the gene for channelrhodopsin-2,
and so their vBNST → VTA neurons could not be activated by light.
5 For the curious, using immunohistochemical staining, photostimulation,
and patch-clamp recording in brain slices, Jennings et al. found that the
vBNST → VTA neurons “formed functional synapses primarily onto
non-dopaminergic and medially located dopaminergic neurons.” They then
add, “these data provide a circuit blueprint by which vBNST subcircuits
interact with VTA-reward circuitry” (2013, pp. 225–226).
6 Although, he later adds,
Indeed even the weak version is not beyond doubt. The physicist George
Darwin used to say that every once in a while one should do a completely
crazy experiment, like blowing the trumpet to the tulips every morning
for a month. Probably nothing will happen, but if something did happen,
that would be a stupendous discovery.
(1983, p. 154)
7 Bickle, in another context, has made this same connection between new
experiment tools that began to be developed in the 1980s and 1990s and
mechanistic explanations (2015; see especially Section 4).
References
Adamantidis, A. R., Tsai, H.-C., Boutrel, B., Zhang, F., Stuber, G. D., Budygin,
E. A., Touriño, C., Bonci, A., Deisseroth, K., & de Lecea, L. (2011). Opto-
genetic interrogation of dopaminergic modulation of the multiple phases of
reward-seeking behavior. Journal of Neuroscience, 31(30), 10829–10835.
Bechtel, W. (2005). The challenge of characterizing operations in the mecha-
nisms underlying behavior. Journal of the Experimental Analysis of Behav-
ior, 84(3), 313–325.
Bechtel, W., & Abrahamsen, A. (2005). Explanation: A mechanist alternative.
Studies in History and Philosophy of Biological and Biomedical Sciences,
36, 421–441.
Bickle, J. (2015). Marr and reductionism. Topics in Cognitive Science, 7(2),
299–311.
Bickle, J. (2016). Revolutions in neuroscience: Tool development. Frontiers in
Systems Neuroscience, 10, 1–13.
Bickle, J. (2018). From microscopes to optogenetics: Ian Hacking vindicated.
Philosophy of Science, 85(5), 1065–1077.
Bickle, J. (2019). Linking mind to molecular pathways: The role of experiment
tools. Axiomathes, 29(6), 577–597.
Bliss, T. V. P., & Lømo, T. (1973). Long-lasting potentiation of synaptic trans-
mission in the dentate area of the anaesthetized rabbit following stimulation
of the perforant path. The Journal of Physiology, 232(2), 331–356.
Boyden, E. S. (2011). A history of optogenetics: The development of tools for
controlling brain circuits with light. F1000 Biology Reports, 3(11), 1–12.
Boyden, E. S., Zhang, F., Bamberg, E., Nagel, G., & Deisseroth, K. (2005).
Millisecond-timescale, genetically targeted optical control of neural activity.
Nature Neuroscience, 8(9), 1263–1268.
Churchland, P. S. (1986). Neurophilosophy: Toward a unified science of the
mind-brain. Cambridge: MIT Press.
Craver, C. F. (2013). Functions and mechanisms: A perspectivalist view. In
P. Huneman (Ed.), Functions: Selection and mechanisms (pp. 133–158). Dor-
drecht: Springer.
Crow, T. J. (1972). A map of the rat mesencephalon for electrical self-stimulation.
Brain Research, 36(2), 265–273.
Deisseroth, K. (2014). Circuit dynamics of adaptive and maladaptive behaviour.
Nature, 505(7483), 309–317.
Gardner-Medwin, A. R. (1969). Modifiable synapses necessary for learning.
Nature, 223(5209), 916–919.
Gold, I., & Roskies, A. L. (2008). Philosophy of neuroscience. In M. Ruse (Ed.),
Oxford handbook of philosophy of biology (pp. 349–380). Oxford: Oxford
University Press.
Hacking, I. (1983). Representing and intervening: Introductory topics in the
philosophy of natural science. Cambridge: Cambridge University Press.
Harris, E. W., Ganong, A. H., & Cotman, C. W. (1984). Long-term potentia-
tion in the hippocampus involves activation of N-methyl-D-aspartate recep-
tors. Brain Research, 323(1), 132–137.
Hebb, D. O. (1949). The organization of behavior: A neuropsychological the-
ory. New York: Wiley.
Jennings, J. H., Sparta, D. R., Stamatakis, A. M., Ung, R. L., Pleil, K. E., Kash,
T. L., & Stuber, G. D. (2013). Distinct extended amygdala circuits for diver-
gent motivational states. Nature, 496(7444), 224–228.
Lømo, T. (1966). Frequency potentiation of excitatory synaptic activity in the
dentate area of the hippocampal formation. Acta Physiologica Scandinavica,
68(S277), 128.
Morris, R. G. M. (1989). Synaptic plasticity and learning: Selective impair-
ment of learning in rats and blockade of long-term potentiation in vivo by the
N-methyl-D-aspartate receptor antagonist AP5. Journal of Neuroscience,
9(9), 3040–3057.
Morris, R. G. M., Anderson, E., Lynch, G. S., & Baudry, M. (1986). Selective
impairment of learning and blockade of long-term potentiation by an N-meth-
yl-D-aspartate receptor antagonist, AP5. Nature, 319(6056), 774–776.
Morris, R. G. M., Davis, S., & Butcher, S. P. (1991). Hippocampal synaptic
plasticity and N-methyl-D-aspartate receptors: A role in information storage?
In M. Baudry & J. L. Davis (Eds.), Long-term potentiation: A debate of cur-
rent issues (pp. 267–300). Cambridge: MIT Press.
Morris, R. G. M., & Kennedy, M. B. (1992). The Pierian Spring. Current Biology, 2(10), 511–514.
Olds, J., & Milner, P. (1954). Positive reinforcement produced by electrical stimulation of septal area and other regions of rat brain. Journal of Comparative and Physiological Psychology, 47(6), 419–427.
Rodriguez-Romaguera, J., Ung, R. L., Nomura, H., Otis, J. M., Basiri, M. L.,
Namboodiri, V. M. K., Zhu, X., Robinson, J. E., Munkhof, H. E. van den,
McHenry, J. A., Eckman, L. E. H., Kosyk, O., Jhou, T. C., Kash, T. L., Bru-
chas, M. R., & Stuber, G. D. (2020). Prepronociceptin-expressing neurons
in the extended amygdala encode and promote rapid arousal responses to
motivationally salient stimuli. Cell Reports, 33(6), 108362.
Silva, A. J., Paylor, R., Wehner, J. M., & Tonegawa, S. (1992a). Impaired spa-
tial learning in alpha-calcium-calmodulin kinase II mutant mice. Science,
257(5067), 206–211.
Silva, A. J., Stevens, C. F., Tonegawa, S., & Wang, Y. (1992b). Deficient hippo-
campal long-term potentiation in alpha-calcium-calmodulin kinase II mutant
mice. Science, 257(5067), 201–206.
Stamatakis, A. M., & Stuber, G. D. (2012). Activation of lateral habenula in-
puts to the ventral midbrain promotes behavioral avoidance. Nature Neuro-
science, 15(8), 1105–1107.
Stamatakis, A. M., Sparta, D. R., Jennings, J. H., McElligott, Z. A., Decot,
H., & Stuber, G. D. (2014). Amygdala and bed nucleus of the stria terminalis
circuitry: Implications for addiction-related behaviors. Neuropharmacology,
76, 320–328.
Stamatakis, A. M., Van Swieten, M., Basiri, M. L., Blair, G. A., Kantak, P., &
Stuber, G. D. (2016). Lateral hypothalamic area glutamatergic neurons and
their projections to the lateral habenula regulate feeding and reward. Journal
of Neuroscience, 36(2), 302–311.
Stuber, G. D. (2016, September 28). Functional and molecular dissection of a
neural circuit for anxiety [Conference presentation]. DECODE Summit, Palo
Alto, CA. https://youtu.be/lj4_54w91wI.
Stuber, G. D., Hnasko, T. S., Britt, J. P., Edwards, R. H., & Bonci, A. (2010).
Dopaminergic terminals in the nucleus accumbens but not the dorsal striatum
corelease glutamate. Journal of Neuroscience, 30(24), 8229–8233.
Tonegawa, S. (2001). Roles of hippocampal NMDA receptors in learning and
memory. Riken BSI News, 12. https://bsi.riken.jp/bsi-news/bsinews12/no12/
issue1e.html.
Tsai, H.-C., Zhang, F., Adamantidis, A., Stuber, G. D., Bonci, A., de Lecea, L.,
& Deisseroth, K. (2009). Phasic firing in dopaminergic neurons is sufficient
for behavioral conditioning. Science, 324(5930), 1080–1084.
Tsien, J. Z., Huerta, P. T., & Tonegawa, S. (1996). The essential role of hippo-
campal CA1 NMDA receptor–dependent synaptic plasticity in spatial mem-
ory. Cell, 87(7), 1327–1338.
Ungerstedt, U. (1971). Adipsia and aphagia after 6-hydroxydopamine induced
degeneration of the nigro-striatal dopamine system. Acta Physiologica Scan-
dinavica, 82(S367), 95–122.
van Zessen, R., Phillips, J. L., Budygin, E. A., & Stuber, G. D. (2012). Activa-
tion of VTA GABA neurons disrupts reward consumption. Neuron, 73(6),
1184–1194.
Wise, R. A., Spindler, J., deWit, H., & Gerberg, G. J. (1978). Neuroleptic-
induced “anhedonia” in rats: Pimozide blocks reward quality of food. Sci-
ence, 201(4352), 262–264.
3 Science in Practice in
Neuroscience
Cincinnati Water Maze in the
Making
Nina A. Atanasova, Michael T.
Williams and Charles V. Vorhees
1 Introduction
Historically, philosophy of science was largely preoccupied with ques-
tions regarding the nature of scientific theories, what makes one theory
better than another, and what implications scientific theories have for
traditional philosophical questions. Philosophers conceptualized scien-
tific change in terms of theory change. Theory was considered a finished
product independent from its social and institutional context. However,
the process of theory in the making has yet to be carefully studied in
philosophy of science.
In our view, an integrated approach viewing science as a human prac-
tice embedded in a historical and institutional context should be ad-
opted for the study of multiple relevant factors involved in the making
of theories. This approach requires zooming out of the narrow view of
science which reduces it to its theories and looking at the bigger picture
of science as a practice, which generates theories along with technolog-
ical, medical, and social innovations motivated by societal needs and
available funding. Here we present a case study from contemporary ex-
perimental neuroscience which focuses on the routine practices of grant-
funded research and tool development which drive the research process
in biomedical science. We propose an account of science in which the-
ories serve as interpretative devices of experimental data. This result
contradicts some philosophical accounts of science that tend to reduce
it to its theories. By theory, we mean abstract models and narratives
which postulate comprehensive ontologies for the explanation of target
phenomena and the mechanisms that produce them.
Our approach recognizes the multiple functions of theories such as
prediction, explanation, and modeling, which have been extensively an-
alyzed in philosophy of science. While we recognize the importance of
all these functions, we show that the role theory plays in contempo-
rary neuroscience is as a device for interpretation of data generated by
exploring focused hypotheses about specific interventions, for example,
lesions of the hippocampus and their corresponding deficits in spatial
navigation. Contemporary neuroscience is data-driven. Scientific inno-
vation consists in the development and consolidation of ensembles of
experimental tools and techniques. It depends largely on the availability
of funding and relevant technology.
Our goal is to present a case study which utilizes an integrated meth-
odology in which philosophers and scientists reflect together on the prac-
tice of science. The authors are a philosopher and two neuroscientists in
whose lab the philosopher worked while a graduate student. Although
the majority of the narrative is a first-person account by the two scientists,
we use a third-person structure in order to ease the flow of the text.
In what follows, we provide a brief characterization of two traditional
philosophical views of science which we challenge. We then turn to the
philosophy of science-in-practice. Consistent with the turn to the philos-
ophy of science-in-practice, philosophers of neuroscience have focused
their attention on experimentation and tool development (Bickle 2006;
Sullivan 2009; Silva et al. 2014; Atanasova 2015). This motivated our
choice to trace the invention, development, and the continuous refine-
ments of an experimental tool in neuroscience, the Cincinnati water
maze (CWM). The case demonstrates the significance of the turn to the
philosophy of science-in-practice by exemplifying the notion of research
repertoires advanced by Leonelli and Ankeny (2015) and Ankeny and
Leonelli (2016). We show how factors such as availability of funding,
technology, and collaborations determine many rational choices in re-
search practice. Finally, we introduce the notion of theories as interpreta-
tive devices to capture the role of theory in contemporary neuroscience.
The society has grown and given rise to a trend surpassing the frame-
work of postpositivism. Prominent leaders of the society, Rachel Ankeny
and Sabina Leonelli, introduced the notion of research repertoire to ac-
count for the collaborative practices in the contemporary life sciences.
They define research repertoire as follows:
Our case study exemplifies the enactment of one such repertoire in be-
havioral neuroscience. It shows the superiority of this account of science
over the, still influential, notion of scientific paradigm first advanced by
Thomas Kuhn and Paul Feyerabend in the 1960s–1970s.
Figure 1a Drawing of Cincinnati water maze featuring the training path which begins at S (start) and ends at G (goal). The two points are reversed during reversal tests.
Figure 2a Drawing of Morris water maze featuring different positions of the escape platform during different training and testing phases.
after other treatments, the performance in one but not the other was
impaired (Williams et al. 2002, 2003). Vorhees and Williams realized
that such data pointed to a corresponding difference in the brain areas
and the neurotransmitters that were involved in solving the two tasks.
This inspired theoretical questions, but the maze would undergo sev-
eral modifications in response to calibration issues before they could be
addressed.
Vorhees and Williams discussed what makes the CWM unique and
what brain regions were involved in solving it. They knew that distal
cues are important for spatial memory, but the room that housed the
CWM had no prominent cues, though it did have many standard room features
(such as air intake ducts and electrical outlets on the wall). Williams
wondered, “Are the rats using background spatial cues as well as prox-
imal cues in the CWM?” The task had presumably been solved on the
basis of egocentric route-based proximal cues. So, they explored ways to
eliminate spatial cues and the first thing they tried was hoods to cover
the rats’ eyes so that they could not see distal cues. Whishaw and Maas-
winkel (1998) made hoods to obstruct rats’ vision using a large circular
platform maze where they first found food visually, then had to re-find
it blindfolded.
Inspired by this publication, a student in the Vorhees/Williams lab,
Tracy Blankemeyer, made rat hoods. However, when the rats were
tested in the maze and the hoods got wet, they pried the hoods off, then
searched for the goal. Efforts to get the rats accustomed to the hoods by
having them wear the hoods for a week prior to maze testing also failed.
The rats did adjust to the hoods and left them on, but when put in the
maze, they again pried them off.
Therefore, Vorhees and Williams considered surgically blinding the
rats but found this objectionable. Neither thought they could sew the
eyelids shut even though this is a published procedure. They settled on a
more humane solution: they turned off the lights but provided a red light
because rat vision is reported to be deficient at the red end of the visible
spectrum. The maze was harder than under white light, but the rats still
solved the task rapidly. This suggested that there was enough light that
the rats were still able to see room cues. At that time, it was known that a
single landmark can be sufficient to solve an MWM. Therefore, Vorhees
and Williams thought rats might be using even minimal red light as an
orientation beacon to navigate to the goal.
This led to the decision to use infrared light. In this setup, rats per-
formed worse than under red light, but they still appeared to orient to
some unidentified light source. It turned out that the light leaking un-
derneath and around the door was sufficient for the rats to orient. A hint
that it was the door came out of experiments conducted by a graduate student,
Nicole Herring. She found that rats treated with methamphetamine had
CWM deficits (Herring et al. 2008). However, when she attempted to
replicate the results, the effect was smaller. Something had changed. Af-
ter inspection, it was determined that in the second experiment the door
was not closing fully, providing more light than in the first experiment.
Therefore, trim was installed around the door and the closer was fixed
so no light could get in the room. The principal investigators stood in
the room after dark adaptation to make sure that nothing could be seen.
When experimenters entered the room to test a rat, they carried a red-
light flashlight to see enough to place rats in and out of the maze without
interfering with the dark adaptation of rats waiting to be tested. The dif-
ficulty rats had in solving the task under complete darkness meant that
they were no longer using visual cues for navigation. Performance was
monitored using an infrared-sensitive camera mounted above the maze
connected to a monitor in an adjacent room where the experimenter
scored performance. The tool was now calibrated to detect the effects
which the investigators studied. In this process, there was little theoret-
ical speculation about what the effects of the experimental procedures
would imply for the nature of egocentric learning and memory, although
tacit knowledge about the behaviors of rats under experimental condi-
tions informed procedural changes. Some of the findings were accidental;
others were predicted.
Another inadvertent discovery was the importance of training. This
came about when Vorhees and Williams helped another lab to set up
a CWM by providing them with drawings for the maze. Later, the lab
called Vorhees stating that the CWM was not working and that all rats
failed to learn even when tested under normal light. The other lab had
only asked for drawings, not test procedures. Vorhees had not provided
information on the straight channel trials under the assumption they
would later ask for procedures, but they never did. Hence, their rats
did not learn that escape was possible and, because the CWM is complex
even in the light, they gave up searching without ever finding
the escape. The tool was not properly calibrated, and not all the procedures for
its proper use were followed. This is when Vorhees appreciated that the
straight channel was essential to teach rats about escape. Without this
knowledge, the rats became frustrated and would give up and tread wa-
ter until removed. Hence, it became clear that minimal training was
required for rats to solve a maze as difficult as the CWM. Until then,
running the rats through the straight channel, just like Biel did, was
considered an assessment of the rats’ swimming capacities before they
went into the maze, not a learning experience with positive transfer of
training to the maze. It was meant to establish that rats were proficient
swimmers and motivated to escape the water, and ensure they had no
motoric impairment if they were in the treated group. This procedure
accounted for the possibility that a treatment affecting swim speed could
confound the interpretation of group differences obtained in the maze.
Because the straight channel was used in the lab to obtain performance
(swim speed) information, it was not initially appreciated that it was
an essential part of the procedure. Employing the test for this purpose
is part of the calibration strategy, but it became clear that it was a nec-
essary component of the experimental setup. This shows that tacit pro-
cedural knowledge can make a difference in the way research develops
without the employment of a theory about the process which underlies
the studied behaviors. The causal connection between this element of
the procedure and the observed behavior was articulated after the fact.
It was not a hypothesis that was tested purposefully.
An even more recent change in the maze design was the introduc-
tion of a tenth T cul-de-sac. Williams noticed something that Vorhees
overlooked. In Vorhees’ drawing, one wall was slightly out of alignment
limiting the maze to nine cul-de-sacs. However, once the alignment was
corrected, there was room for an additional T. Recent data from the new
design show that the change made a difference in maze difficulty and it
now requires more trials for rats to become proficient. The rats’ optimal
performance in the ten-T version for the equivalent number of testing
days is not as good as in the nine-T version.
More changes were introduced in response to calibration problems.
For example, initially, only the errors in the arms of a T at the end of a
cul-de-sac were counted. Now stem and T-errors are counted in recog-
nition of the fact that all turns not leading to the goal are errors when-
ever a rat deviates from a direct path to the end regardless of whether it
goes the full distance to the end of a right or left arm of a T. Moreover,
counting only the end of cul-de-sac errors was not capturing valuable
information about mistakes rats made. As rats learn, one of the interme-
diate steps they go through is recognizing when they make a wrong turn
and swim down the stem of a T. Once the rats have visited that dead-
end T several times, thereafter when they turn down that stem, they
recognize that they have been there before. The rat then stops short and
reverses course. Counting only T-arm errors missed these intermediate
errors. Adding stem errors further increased the sensitivity of the test.
The change in error counting occurred at a lab meeting when it became
clear that group differences were larger on some occasions depending on
how different experimenters counted errors. They needed to calibrate
experimenters as well as definitions of errors more precisely. This is not
trivial because when rats begin to learn the maze, they progressively in-
hibit errors. At first, rats go all the way to the end of a T and turn right
and left, and sometimes go back and forth before exiting. Later, rats
hesitate partway down the stem before turning around. These ‘inter-
mediate’ mistakes were being missed. The lab considered whether arm
errors were in a sense more severe than stem errors. To find out, the next
experiment counted both types of errors. Analysis of the data provided
no support for the idea that arm errors represent a more severe deficit
than stem or arm + stem errors in revealing group differences. Therefore,
all errors are now treated the same.
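A minimal sketch of this revised scoring rule, using hypothetical event names rather than the lab's actual scoring procedure, might look as follows.

```python
# A minimal sketch of the revised CWM scoring rule described above: a turn
# down the stem of a dead-end T counts as an error, and so does continuing
# into either arm, with all error types weighted equally. Event names are
# hypothetical; this is not the lab's actual scoring software.

def score_trial(events):
    """Count errors from a list of (location, kind) events for one trial.

    kind is "stem" when the rat turns down the stem of a cul-de-sac and
    "arm" when it continues to the end of the left or right arm.
    """
    stem_errors = sum(1 for _, kind in events if kind == "stem")
    arm_errors = sum(1 for _, kind in events if kind == "arm")
    return {"stem": stem_errors, "arm": arm_errors, "total": stem_errors + arm_errors}

# Example: the rat enters cul-de-sac T3 and goes all the way down one arm,
# later turns into T5's stem but reverses course before reaching an arm.
trial = [("T3", "stem"), ("T3", "arm"), ("T5", "stem")]
print(score_trial(trial))   # {'stem': 2, 'arm': 1, 'total': 3}
```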
In addition to errors, experimenters track the time it takes for a rat
to find the platform (latency). However, there is a per-trial time limit
of five minutes designed to prevent fatigue. This aspect of the test pro-
cedure also evolved. Initially, the time limit was 12 minutes. Rats were
given two trials per day as before. However, rats that did not find the
goal after 12 minutes would get exhausted. This caused rats to become
tired faster on trial-2. The first approach to this problem was to give rats
reaching the 12-minute limit a one-hour rest. Later, the trial limit was
reduced to ten minutes and the rest after a trial-1 failure was shortened
to 30 minutes. Over a period of several years, the limit was gradually
reduced from ten to six, and finally to five minutes.
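In the same spirit, the trial-timing rules as they eventually settled can be sketched as follows; the rest interval used here is an assumption for illustration, not a prescription from the chapter.

```python
# Hypothetical encoding of the per-trial rules described above: two trials per
# day, a five-minute limit per trial, and a rest interval before trial 2 when
# the rat times out on trial 1. The rest duration is an assumed value for
# illustration only.

TRIAL_LIMIT_S = 5 * 60          # the empirically settled per-trial limit
REST_AFTER_FAILURE_S = 30 * 60  # assumed rest after a trial-1 timeout

def run_day(trial_durations_s):
    """Summarize one day's two trials given how long the rat actually swam."""
    results = []
    for i, duration in enumerate(trial_durations_s[:2], start=1):
        timed_out = duration >= TRIAL_LIMIT_S
        latency = min(duration, TRIAL_LIMIT_S)
        rest = REST_AFTER_FAILURE_S if (i == 1 and timed_out) else 0
        results.append({"trial": i, "latency_s": latency,
                        "timed_out": timed_out, "rest_before_next_s": rest})
    return results

print(run_day([320, 140]))  # trial 1 times out at 300 s, rest, then trial 2
```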
Some of the adjustments were also made in response to peer criticism.
Critics pointed out that the resting time between trials might not be
enough and, if insufficient, would compromise performance on trial-2
of each day. In response, Vorhees and Williams tested performance with
rest intervals up to 30 minutes after a trial-1 failure to reach the goal
and found it made little difference unless the trial time limit was shorter
than five minutes. Therefore, five minutes was empirically determined
and became standard practice. The data from these refinement exper-
iments were not published separately because they were not sufficient
by themselves to be worthy of a paper even though they were essential
in the evolution of the method and are found within papers published
across many years.
4 Conclusion
We presented a case study of the invention and calibration of a tool in
experimental neuroscience, the Cincinnati water maze. Our results show
that making choices regarding what experiments to pursue more often
than not relies on pragmatic considerations of funding, equipment, and
collaborations. Our analysis is consistent with the notion of scientific
repertoire and challenges approaches to philosophy of science which fo-
cus on the study of scientific theories rather than scientific practice. The
case we presented is exemplary of the tendency in contemporary neuro-
science for theories to be articulated for the purposes of interpretation
of disparate experimental data that need to be integrated into a cohesive
framework. Theory in this case functions as an interpretative device.
References
Ankeny, R. A. and S. Leonelli (2016). “Repertoires: A post-Kuhnian perspective
on scientific change and collaborative research”. Studies in History and Phi-
losophy of Science, 60: 18–28.
Atanasova, N. (2015). "Validating animal models". THEORIA: An Inter-
national Journal for Theory, History and Foundations of Science, 30(2):
163–181.
Bickle, J. (2006). “Reducing mind to molecular pathways: Explicating the re-
ductionism implicit in current cellular and molecular neuroscience”. Syn-
these, 151(3): 411–434.
Bickle, J. (2016). “From microscopes to optogenetics: Ian Hacking vindicated.”
Philosophy of Science, 85(5): 1065–1077.
Biel, W. (1940). “Early age differences in maze performance in the albino rat.”
The Journal of Genetic Psychology, 56: 439–453.
Boyd, R. (1980). “Scientific realism and naturalistic epistemology.” PSA: Pro-
ceedings of the Biennial Meeting of the Philosophy of Science Association,
2: 613–662.
Braun, A. A., D. L. Graham, T. L. Schaefer, C. V. Vorhees and M. T. Wil-
liams (2012). “Dorsal striatal dopamine depletion impairs both allocentric
and egocentric navigation in rats.” Neurobiology of Learning and Memory,
97: 402–408.
Braun, A. A., R. M. Amos-Kroohs, A. Gutierrez, K. H. Lundgren, K. B. Se-
roogy, M. R. Skelton, C. V. Vorhees and M. T. Williams (2015). “Dopamine
depletion in either the dorsomedial or dorsolateral striatum impairs egocen-
tric Cincinnati water maze performance while sparing allocentric Morris wa-
ter maze learning." Neurobiology of Learning and Memory, 118: 55–63.
Butcher, R., C. Vorhees and H. Berry (1970). “A learning impairment associated
with induced phenylketonuria.” Life Sciences, 9, Part I: 1261–1268.
Chakravartty, A. (2007). A metaphysics for scientific realism. Knowing the un-
observable. Cambridge: Cambridge University Press.
Dringenberg, H. C. (2020). The history of long-term potentiation as a mem-
ory mechanism: Controversies, confirmation, and some lessons to remember.
Hippocampus, 30: 987–1012.
Feest, U. (2010). “Concepts as tools in the experimental generation of knowl-
edge in cognitive neuropsychology.” Spontaneous Generations: A Journal for
the History and Philosophy of Science, 4(1): 173–190.
Feyerabend, P. (1988/1975). Against method. London and New York: Verso.
Gross, P. R. and N. Levitt (1994). Higher superstition: The academic left and
its quarrels with science. Baltimore, MD: Johns Hopkins University Press.
Gross, P. R., N. Levitt, and M. W. Lewis (eds.) (1996). The flight from science
and reason. New York: New York Academy of Sciences.
Hacking, I. (1983). Representing and intervening: Introductory topics in the
philosophy of natural science. Cambridge and New York: Cambridge Uni-
versity Press.
Hanson, N. R. (1965). Patterns of discovery. Cambridge: Cambridge University
Press.
Herring, N. R., T. L. Schaefer, G. A. Gudelsky, C. V. Vorhees and M. T. Wil-
liams (2008). “Effects of (+)-methamphetamine on path integration learning,
novel object recognition, and neurotoxicity in rats.” Psychopharmacology,
199(4): 637–650.
Hochstein, E. (2016). Giving up on convergence and autonomy: Why the theo-
ries of psychology and neuroscience are codependent as well as irreconcilable.
Studies in History and Philosophy of Science, 56, 135–144.
Kuhn, T. (1970). The structure of scientific revolutions. Chicago, IL: University
of Chicago Press. First published 1962.
Leonelli, S. and Ankeny, R. A. (2015). “Repertoires: How to transform a project
into a research community”. Bioscience, 65(7): 701–708.
Longino, H. E. (2013). Studying human behavior: How scientists investigate
aggression and sexuality. Chicago, IL: University of Chicago Press.
Morford, L. L., S. L. Wood, G. A. Gudelsky, M. T. Williams and C. V. Vorhees
(2002). “Impaired spatial and sequential learning in rats treated neonatally
with d-fenfluramine.” European Journal of Neuroscience, 16 (3), 491–500.
Morris, R. G. M. (1981). “Spatial localization does not require the presence of
local cues”. Learning and Motivation, 12: 239–260.
Polidora, V. J., D. E. Boggs and H. A. Waisman (1963). “A behavioral deficit as-
sociated with phenylketonuria in rats.” Proceedings of the Society for Biology
and Medicine, 113: 817–820.
Putnam, H. (1976). “X* – What is ‘realism’?” Proceedings of the Aristotelian
Society, 76(1): 177–194.
Regan, S. L., J. R. Hufgard, E. M. Pitzer, C. Sugimoto, Y. Hu, M. T. Williams
and C. V. Vorhees (2019). “Knockout of latrophilin-3 in Sprague-Dawley rats
causes hyperactivity, hyper-reactivity, under-response to amphetamine, and
disrupted dopamine markers.” Neurobiology of Disease, 130: 104494.
Robins, S. K. (2016). “Memory and optogenetic intervention: Separating the
engram from the ecphory”. Philosophy of Science, 85(5): 1078–1089.
Ross, A. (ed.) (1996). Science wars. Durham, NC: Duke University Press.
Silva, A. J. and J. Bickle (2009). The science of research and the search for
molecular mechanisms of cognitive functions. In J. Bickle (Ed.), The Oxford
handbook of philosophy and neuroscience (pp. 91–126). Oxford: Oxford
University Press.
Silva, A., A. Landreth and J. Bickle (2014). Engineering the next revolution in
neuroscience. New York: Oxford University Press.
Sullivan, J. A. (2009). “The multiplicity of experimental protocols: A challenge
to reductionist and non-reductionist models of the unity of neuroscience.”
Synthese, 167: 511–539.
Sullivan, J. A. (2010). “Reconsidering ‘spatial memory’ and the Morris water
maze.” Synthese, 177: 261–283.
Sullivan, J. A. (2016). “Optogenetics, pluralism, and progress.” Philosophy of
Science, 85(5): 1090–1101.
Vorhees, C. V. (1987). “Maze learning in rats: A comparison of performance
in two mazes in progeny prenatally exposed to different doses of phenytoin.”
Neurotoxicology and Teratology, 9: 235–241.
Vorhees, C. V. and M. T. Williams. (2006). “Morris water maze: Procedures for
assessing spatial and related forms of learning and memory.” Nature Proto-
cols, 1(2): 848–858.
Vorhees, C. V. and M. T. Williams (2014). “Assessing spatial learning and mem-
ory in rodents.” ILAR Journal, 55(2): 310–332.
Vorhees, C. V. and M. T. Williams (2015). “Reprint of ‘Value of water mazes for
assessing spatial and egocentric learning and memory in rodent basic research
and regulatory studies.’” Neurotoxicology and Teratology, 52: 93–108.
Vorhees, C. V. and M. T. Williams (2016). “Cincinnati water maze: A review of
the development, methods, and evidence as a test of egocentric learning and
memory.” Neurotoxicology and Teratology, 57: 1–19.
Whishaw, I. Q. and H. Maaswinkel (1998). “Rats with fimbria-fornix lesions
are impaired in path integration: A role for the hippocampus in ‘sense of di-
rection’”. The Journal of Neuroscience, 18(8): 3050–3058.
Williams, M. T., L. L. Morford, A. E. McCrea, S. L. Wood and C. V. Vor-
hees (2002). “Administration of D, L-fenfluramine to rats produces learning
deficits in the Cincinnati water maze but not in the Morris water maze: re-
lationship to adrenal cortical output.” Neurotoxicology and Teratology, 24:
783–796.
Williams, M. T., L. L. Morford, S. L. Wood, S. L. Rock, A. E. McCrea, M.
Fukumura, T. L. Wallace, H. W. Broening, M. S. Moran and C. V. Vorhees
(2003). “Developmental 3,4-methylenedioxymethamphetamine (MDMA)-in-
duced learning deficits are not related to undernutrition or litter effects: Novel
use of litter size to control for MDMA-induced growth decrements.” Brain
Research, 968(1), 89–101.
4 Where Molecular Science
Meets Perfumery
A Behind-the-Scenes Look
at SCAPE Microscopy and
Its Theoretical Impact on
Current Olfaction
Ann-Sophie Barwich and Lu Xu
Previous cognitive theories of science, meaning cognitive approaches
that target the mental mechanisms and environmental affordances mak-
ing scientific reasoning possible, have been principally applied to scien-
tific models and theories (Giere 1988; Churchland 1996; Thagard 2012),
not the specific nature of tools. In recent years, Bickle (2016, 2019, 2020,
this volume) has shown with historical examples that engineering and
tinkering with tools has been a central driver of theories throughout the
history of neuroscience. Such an account pointing toward the cognitive
role of tools in neuroscience further invites a contemporary perspective
to complement and expand upon our understanding of scientific tool use
in action.
This chapter offers the rare opportunity of going behind the scenes at
the frontiers of science to get a better look at the pragmatic and cognitive
dimensions of tool use and development. Specifically, it provides a first-
hand account of how SCAPE arguably revolutionizes current research
on the brain by revealing how this tool was made to fit its experimental
potential, notably a potential with tremendous theoretical implications.
The authors are well positioned to undertake this challenge. Between
2015 and 2018, Barwich was the resident philosopher in the Firestein lab
at Columbia University (Barwich 2020a), where Xu was the lead scien-
tist in the process of adopting the new tool of SCAPE to tackle mixture
coding in the nose (Xu et al. 2020). The results of these experiments
would turn out to be remarkable: Xu et al.’s study, published in Science,
uncovered a previously unknown molecular mechanism in the sensory
periphery of olfaction.
In what follows, we present a combined narrative of tool development
and its broader philosophical implications to better understand scientific
reasoning at the laboratory bench. This chapter integrates philosophical
analysis with contemporary scientific history to engage with the funda-
mental question of how scientists use new experimental tools to access
yet unknown features of research materials. Specifically, we want to un-
derstand how tools can perform a cognitive function that opens new the-
oretical perspectives for scientists in their experimental investigations.
What characterizes the cognitive function of scientific tool use in action?
Specifically, how do tools structure scientific reasoning by providing the
observational and conceptual scaffolds that create or stimulate the abil-
ity to conceive new perspectives on research objects or processes? How
does tool use afford the conceptual reconfiguration of a research topic?
In short, we want to understand how tools unlock conceptual possibili-
ties that were previously difficult to imagine.
SCAPE is an excellent example to show how tool innovations can pro-
vide scientists with new cognitive scaffolds in the advancement of scien-
tific theorizing. Traditionally, the cognitive structure of tools in research
practice has been considered mostly in terms of their theory-ladenness,
meaning their construction and application embody existing theoretical
assumptions that shape what we see and learn to understand with their
use (e.g., van Fraassen 1980, 2008; Barwich 2017). According to this
view, tools primarily assist and supplement scientific thinking. Here, we
want to highlight that research tools also embody a crucially constitu-
tive part of that thinking process, so much so that tools occupy an essen-
tial causal role in the mental mechanisms of scientific reasoning. In other
words, tools do not merely embody existing theoretical knowledge. In-
stead, tools are active drivers of new models and theories by accommo-
dating and, moreover, extending the cognitive process of scientists.
We present our analysis in three steps. First, we introduce the scientific
challenges and developments in recent research on odor coding, involv-
ing the receptors and the stimulus (Section 2). This will help to situate
the significance of the SCAPE study for general readers and highlight
the theoretical impact of its results. Then the SCAPE study takes center
stage (Section 3). Here, we blend the scientific study and its published
results with Xu's own experience of getting the experiments to work
in the first place. In other words, we get a glimpse of how the sausage
was made! These details present the backdrop against which we clarify
the theoretical impact of the SCAPE study for theories of odor coding
(Section 4). We conclude with a (brief) philosophical reflection on the
cognitive dimensions of tool use (Section 5). Specifically, we draw on
theories of distributed cognition in recent cognitive science to suggest
that tools are a constitutive and active part of the reasoning process by
which scientists generate new exploratory and explanatory concepts.
Figure 1 The SCAPE Microscopy System. (a) Schematic of the SCAPE layout. (b) Close-up of the O1 objective. An oblique light sheet images through the O1 stationary objective lens and scans the sample laterally in the x-direction to form a 3D image (Adapted from Voleti et al. 2019).
It was 2015 when I first learned about SCAPE microscopy (the year
that the Hillman lab published their first SCAPE paper). Wenze and
I were walking our dog Muffin when he decided to show off his re-
cent progress in the lab: a video showing a 3D Zebrafish heart beat-
ing. The contour of the heart was clearly outlined by the heart cells
(which were labeled with some sort of fluorescence); as the heart
contracts and relaxes, I could also see individual blood cells travel
through the ventricle and atriums. Wenze told me that this was im-
aged with a 3D imaging technique called SCAPE.
At that time, I was frustrated by a difficulty I had encountered
in my own research: the olfactory sensory neuron dissociation
protocol (which I had been using for the past three years with
no problem) had stopped working, and it was the main method of my
study at the time. Briefly, what the protocol does is to first strip
off the olfactory epithelium from the nasal turbinates of the mouse,
digest with enzymes, then collect olfactory sensory neurons from the
supernatant and place them on a piece of coverslip pre-coated with
concanavalin A – this coverslip will later be used for calcium imaging.
Normally I would get several hundred neurons on each coverslip.
But during that time, I was only getting ~50 cells per coverslip – a pa-
thetic cell density, and I couldn’t figure out why because my colleagues
(Erwan Poivet and Narmin Tahirova) were using the same protocol
and same reagents and they were getting a normal number of cells.
This was the original impetus for me to seek an alternative
solution for imaging the olfactory sensory neurons. The cell disso-
ciation protocol was not perfect anyway: first, during the dissocia-
tion process, the axons are severed from the olfactory bulb and the
cilia are also damaged by the enzymes. Second, the efficacy is low.
Even with a good cell density, we could only get ~200–400 neu-
rons per field of view, among which only 5%–10% would respond
to a particular monomolecular odor stimulus. The population will
be further narrowed down if we want to study a receptor/neuron
responding to multiple odors. Finally, the dissociation process is ex-
tremely time-consuming. If we start at 9:30 am, the cells would be
ready for imaging at ~4 pm, during which we only have a 40min
break for lunch, and there is no way to know if the dissociation is
successful until the last step. Also, the cells can only be imaged the
same day – if you leave them in a cell incubator, many of them will
still be alive the following day but no longer functional.
As a lazy person,7 I always wanted to take a shortcut. The easiest
thing to come up with is to simply peel off the thin layer of the olfac-
tory epithelium and image it directly. It seems to be straightforward,
but in practice, it’s difficult to mount the epithelium on a coverslip
securely. And even though the epithelium is only ~100 um thick, its
surface is curved, making it difficult to visualize many individual
neurons in a single focal plane (imagine the difference of a flat piece
of paper and a piece of crumpled paper).
The invention of SCAPE has brought new hope to this (almost)
dead end. Its capability of imaging non-transparent tissue in 3D
was exactly what I needed. How to harness this novel technique is
yet another story.
(ps. Several months later, I finally figured out why the dissociation
protocol was not working: we made a new batch of concanavalin A
from a different manufacturer, and somehow it takes longer to dry
out on the coverslip completely. If it’s not dry, it will be washed off
by the culture media along with all the neurons attached to it. I
used to prepare the coverslip in the morning of the experiment day,
which was fine for the old stock but not enough time for the new
stock. Meanwhile, Erwan and Narmin had prepared the coverslip
one night before, and that’s why they never had the problem!)
These were the roots of the SCAPE study. But back to its experimental
details: Xu used genetically engineered mice to track active cells with
a fluorescent glow. In consultation with the master perfumer Christo-
phe Laudamiel, she decided on two mixtures, each mixture consisting
of three different compounds: (a) acetophenone, benzyl acetate, and ci-
tral, and (b) dorisyl, dartanol, and isoraldeine. The central criterion for
choosing these components was that they were chemically and perceptu-
ally dissimilar. Additionally, the hypothesis was that the combination of
these odorants would yield a configural odor image—configural mean-
ing that the mixtures produced a quality different from their elemental
components. Furthermore, odor set 1 was tested with its components at
equal concentrations; odor set 2 at unequal concentrations.
3.2 Challenges: Choosing the Chemical Stimulus and Testing Cell Responses
Scientific papers tend to present their findings without hiccups. Behind
the scenes, the story reads differently. Indeed, Xu dealt with several chal-
lenges already at the stage of choosing the chemical stimulus, specifically
the stimulus sets:
With the stimulus selection in place, and over the course of several trying
years, Xu repeatedly tested how the cells responded to each compound
when administered individually, in pairs, and then in the tripartite mix
(Figure 2):
Figure 2 Lu Xu working with the SCAPE 1.0 system during the pilot experiments.
Rather than creating the SCAPE protocol from scratch, what I did
was to use the old cell dissociation protocol as a starting point and
gradually shaped it to the way I wanted (which ended up being quite
different). In our very first experiment, I prepared the OE [olfactory
epithelium] sample by cutting off the whole olfactory turbinates us-
ing a pair of fine scissors – this was actually an intermediate step
during cell dissociation. The next steps were to peel off epithelium
from the turbinates and cut it into pieces – I then placed it into a
perfusion chamber with the epithelium side facing up. Perfusion is a
standard thing to do in physiology in order to maintain the homeo-
stasis of the extracellular environment, and I simply adopted the rec-
ipe of Ringer’s solution we used to perfuse the dissociated neurons.
Next, I tried to deliver odor stimulus by injecting odor solutions us-
ing an automatic syringe injector, which didn’t work out because the
sudden change of flow rate would cause the sample to float around.
Given our field of view was only ~600 um × 400 um × 300 um at
the time, even a displacement of less than 1 mm would make the sample
disappear from the screen. Therefore, in the following experiments,
we had to transport the Agilent isocratic pump from our lab to the
Hillman lab using a small cart and used that for stable perfusion and
odor delivery. This lasted until the Hillman lab moved to the ZI, and
we purchased a new pump.
Nevertheless, the sample was still not stable enough in the chamber.
At first, I thought of using a smaller chamber so that the sample
would be more confined in place, but then I realized it's actually easier if I
don’t cut the turbinates at all and leave them attached to the mouse
head. This way all I have to do is to cut the mouse head sagittally and
remove the septum to expose the OE. What’s even better about this
prep is that if I cover the mouse hemi-head with a coverslip, a narrow
space will be naturally formed between the coverslip and the turbi-
nates, which allows a very small volume of liquid to flow in through
the nostril and out through the throat thanks to capillary action. Es-
sentially, this is a re-construction of the nasal cavity by replacing the
septum with a transparent coverslip, which allows us to observe the
OE from the medial side. To implement this idea, we have designed
our own perfusion chamber which resembles the shape of a mouse
head. In the final prep, the mouse hemi-head is placed facing down
to the coverslip on the bottom of the chamber, with an inverted ob-
jective acquiring the image from underneath (as shown in figure 1 of
our published paper, and figure 3 in this chapter). During subsequent
experiments, we further optimized sample mounting by applying a
blue light-cured dental gel to fix the sample to the chamber. As a
result, the displacement of the sample is now less than 10 um in all
three dimensions during a one-hour imaging session.
Figure 4 Evolution of the SCAPE technique. A comparison is made between data (3D images of mouse olfactory epithelium) acquired in 2015 and 2019, respectively. Note the improvement in both field of view and resolution of the image.
Obtaining stable measures and solid experimental results was not yet
the end point. A final challenge involved data analysis, to which we come
next.
3.3.1 Preprocessing
The raw SCAPE data come as spool files and require a specific script to be
loaded into the Matlab workspace. Since the specimen was scanned with an
oblique light sheet, each 3D volume needs to be de-skewed back to its original
scale. This part was handled primarily by Wenze and his lab-mates.
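To give a concrete sense of what de-skewing amounts to, here is a minimal Python sketch of a shear correction of this kind. The actual preprocessing used the Hillman lab's Matlab scripts; the function name, axis order, oblique angle, and step sizes below are illustrative assumptions rather than the published parameters.

```python
import numpy as np
from scipy.ndimage import affine_transform

def deskew_volume(vol, angle_deg=45.0, scan_step_um=1.0, z_step_um=1.0):
    """Shear a raw oblique light-sheet stack back toward sample coordinates.

    Assumes (hypothetically) that `vol` is indexed (scan, z, x) and that each
    scan step displaces the sheet laterally by scan_step_um * cos(angle).
    """
    # Shear, in z-voxels, accumulated per scan step.
    shear = scan_step_um * np.cos(np.deg2rad(angle_deg)) / z_step_um
    # affine_transform maps output coordinates to input coordinates:
    # input_z = output_z - shear * output_scan (scan and x axes unchanged).
    matrix = np.array([[1.0,    0.0, 0.0],
                       [-shear, 1.0, 0.0],
                       [0.0,    0.0, 1.0]])
    # Pad the z axis so the sheared volume is not clipped.
    pad = int(np.ceil(shear * (vol.shape[0] - 1)))
    out_shape = (vol.shape[0], vol.shape[1] + pad, vol.shape[2])
    return affine_transform(vol, matrix, output_shape=out_shape, order=1)

# Example on a synthetic stack: 50 scan positions, 64 z-pixels, 128 x-pixels.
raw = np.random.rand(50, 64, 128)
deskewed = deskew_volume(raw, angle_deg=45.0, scan_step_um=2.0, z_step_um=1.0)
print(raw.shape, "->", deskewed.shape)
```

The essential point is that each scan position is offset by a constant amount relative to the previous one, so the raw stack is a sheared version of the sample and can be restored with a single affine transform.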
4 The Findings
The patterns of cell populations showed remarkable and widespread
differences in their responses to the mixtures and to their components in
isolation. Specifically, up to 38% of the cells responding to a mixture differed
from the collection of cells responding to the individual odorants in that
mixture. More importantly, Xu found two different and striking effects
across receptor cell populations.
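To give a rough sense of the kind of comparison involved (this is not the statistical analysis of the published study), the following sketch contrasts the set of cells a naive combinatorial view would predict to respond to a mixture with the set that actually responds; the boolean data format, the function name, and the toy numbers are assumptions made for illustration.

```python
import numpy as np

def fraction_divergent(single_responses, mixture_response):
    """Fraction of involved cells whose mixture behavior differs from the
    naive combinatorial prediction (responds to the mix iff it responds to
    at least one component). Data format is a hypothetical simplification."""
    predicted = single_responses.any(axis=1)      # combinatorial prediction
    involved = predicted | mixture_response        # active in either condition
    divergent = predicted != mixture_response      # suppressed or enhanced cells
    return divergent[involved].mean() if involved.any() else 0.0

# Made-up example: 5 cells, 3 component odorants.
single = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [0, 0, 0],
                   [1, 1, 0],
                   [0, 0, 0]], dtype=bool)       # responses to components alone
mix = np.array([1, 0, 1, 1, 0], dtype=bool)      # cell 2 suppressed, cell 3 enhanced
print(fraction_divergent(single, mix))           # -> 0.5
```

The point is only to show the set comparison being described; the reported figure of up to 38% was, of course, computed from the actual imaging data.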
Figure 5 Averaged heat maps of olfactory sensory neurons responding to two sets of odor stimuli (in monomolecular solutions, binary and tripartite combinations). Cells show significant suppression (left) and enhancement (right) effects (Image modified from Xu et al. 2019).
Figure 6 Receptor modulation model in olfaction. Xu’s model of sparse coding in mixture detection at the sensory periphery in olfaction. Comparison between the received combinatorial model (left) and the new model of receptor modulation (right) (Image 7 in Xu et al. 2019).
This is why the significance of Xu et al. (2020) goes beyond mere technological
finesse using a new tool to advance understanding of an old problem.
The SCAPE study did not just provide some new details and more
depth of observation. It facilitated the development of an entirely new
theoretical approach to the study of odor coding, one that supplanted
the analytical approach and assured a more biologically realistic concep-
tual orientation to the study of the olfactory system.
Xu’s model, summarized in Figure 6 (and highlighted in Firestein’s
earlier email remarks), explains how olfactory signals at the periphery
are sufficiently differentiated for the brain to make sense of complex
blends: either (i) a reduction of activity via inhibitory modulation results
in a sparser and thus differentiated signal, or (ii) an enhancement of cell
activity allows the brain to further distinguish complex mixture signals.
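For readers who want a concrete picture, the following toy sketch (our own construction, not a model from Xu et al. 2020) contrasts the naive combinatorial expectation, in which component responses simply add, with a hypothetical modulated mixture response in which some receptors are suppressed below their single-odorant levels and one is enhanced above the sum. All response values are invented for illustration.

```python
import numpy as np

# Hypothetical responses of 10 receptors (arbitrary units) to two odorants.
resp_A = np.array([5, 4, 0, 3, 0, 2, 0, 0, 1, 0])   # odorant A alone
resp_B = np.array([0, 3, 4, 2, 0, 0, 2, 0, 0, 1])   # odorant B alone

naive_mix = resp_A + resp_B     # combinatorial expectation: components simply add
# Hypothetical measured response to the A+B mixture: several receptors are
# suppressed below their single-odorant responses, one is enhanced above the sum.
measured_mix = np.array([5, 0, 4, 0, 0, 2, 0, 4, 1, 0])

suppressed = measured_mix < np.maximum(resp_A, resp_B)
enhanced = measured_mix > naive_mix
print("suppressed receptors:", np.where(suppressed)[0])   # -> [1 3 6 9]
print("enhanced receptors:  ", np.where(enhanced)[0])     # -> [7]
print("active under naive prediction:", int((naive_mix > 0).sum()),
      "| active in measured mixture:", int((measured_mix > 0).sum()))
```

The suppressed receptors yield a sparser overall pattern, while the enhanced receptor adds information that the components alone would not predict; this is the qualitative shape of the two effects described above.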
Crucially, the use of SCAPE in Xu et al. (2020) changed the chief tar-
get of explanation in theories of olfaction: explanations of odor coding
are less about the chemistry of the stimulus and more about what signals
actually reach the brain via the receptors.12
The SCAPE study is a demonstration [of] how new tools allow for a
targeted reconsideration of causal elements as defined by the limits of
previous technologies. Older technologies may have posited different
elements as central and as partaking in different levels of causal mech-
anisms – e.g., the behavior of individual cells versus cell populations.
The response of cells in the context of other cells as a population can
now be modeled on one causal plane and integrated into a causal mech-
anistic explanation with fewer levels of hypothetical mechanisms.
(Barwich 2020c, including a full analysis of the ontological
shift from single cells to cell populations in odor
coding. Also, see Figure 8)
Figure 8 Comparison of the ontologies between the different models of coding at the periphery. SCAPE changed the ontological structure of odor coding and its fundamental components. Components (including their relevant features) are not defined in isolation but as causal units via their causal role within a mechanism. This means that the fundamental level in odor coding is not {code = single odorant binding combinatorically to receptors} but {code = receptor populations responding to multiple odorants in combination}.
Tools in experimental practice in such contexts, therefore, present an
extension of the thought process that allows for the creation of new patterns,
new pattern conceptualizations and combinations, and, in consequence,
new inferences about the theoretical nature of these data.
The visual display of more comprehensive cell reactions to various
combinations of chemical stimuli carried a powerful suggestive force,
introducing new significances, i.e., new causal relations and even new
causal constituents (i.e., population behavior as representing the overall
olfactory code, instead of single-cell calculations from individual stimuli).
This display restructured the intellectual framework by introducing
new concepts, relations, and avenues of further research (i.e., the shift
from stimulus chemistry to receptor biology as a key determinant of
odor coding). Moreover, SCAPE produced new questions: what kind
of mechanism could cause enhancement effects, and to what potential
purpose from a systems-theoretical point of view?
The skeptic of distributed cognition may ask whether such a concep-
tual shift in thinking about odor coding could have also happened with-
out SCAPE. Perhaps someone could have considered this mechanism,
in principle. The point is, however, that it did not happen that way. To
understand the realities of scientific reasoning, we need to look at what
happens at the bench, instead of what is hypothetically conceivable in
the armchair. Besides, a distributed account of cognition does not posit
that those ideas are inconceivable without the appropriate technology.
It asserts that technologies facilitate a broader and different perspective
by being an integral part of the thinking process itself. In other words,
technologies do more than produce data to align with hypotheses—their
use co-creates mental structures in scientific reasoning. As the exam-
ple of SCAPE illustrated, these mental structures include iterative pro-
cedures of conceptual refinement, the direction of attention to specific
empirical observations (which may not always be transparent in their
interpretation), and the combination of otherwise separately occurring
phenomena on one observational plane. (In a way, this can be compared
with the use of language, where syntax and semantics afford meaningful
but language-dependent creations, so much so that some works of po-
etry and especially humor are difficult to translate into another language
without sufficiently similar syntactic structure.)
In conclusion, our brief discussion of the cognitive function of tools in
neuroscientific experiments primarily serves as an invitation for philoso-
phers and researchers to further consider the ways in which tools struc-
ture and actively contribute to scientific theorizing. For the analytically
inclined philosopher, this discussion must remain wanting in conceptual
detail and argumentative explication. Yet, given the constraints of this
chapter—with its primary purpose (of providing a first-hand account of
a new tool and its theoretical impact)—the concluding remarks are
best understood as a pointer toward the promise that targeted cognitive
studies of tools in experimental practice yield for the study of scientific
reasoning.
Acknowledgments
We are grateful to our colleagues in the Firestein and Hillman labs, es-
pecially Stuart Firestein and Wenze Li. Additional thanks belong to Carl
Craver, John Bickle, and Marco Nathan for their constructive sugges-
tions in the revision of this manuscript.
Notes
1 Review of the issues concerning the heterologous expression of olfactory
receptors in Peterlin, Firestein, and Rogers (2014).
2 Notably, it took almost ten years to find the insect receptors after the mam-
malian receptors had been discovered because they are substantially differ-
ent genetically (Clyne et al. 1999; Vosshall et al. 1999).
3 The “one neuron—one receptor gene” doctrine has been called into question
recently (Mombaerts 2004).
4 This laborious type of work is currently under way, though, for example in
the lab of Hiroaki Matsunami at Duke University.
5 See Passy (1895).
6 Personal communication.
7 The authors disagree on that verdict.
8 Peterlin was a postdoctoral researcher in Firestein’s lab before she moved to
work at Firmenich, Princeton.
9 Detailed analysis of the theoretical implications emerging from recent in-
sights into receptor coding in olfaction in Barwich (2020a).
10 Analysis of the mechanisms of odor coding at the periphery, including the
SCAPE study, and how it links to pattern recognition further downstream
in the central nervous system in Barwich (2020a).
11 Revolution not necessarily in the Kuhnian sense.
12 Further elaboration and details about the implications of this study in the
broader context of olfaction, and why smell needs to be modeled via the
biological processes encoding chemical features (rather than stimulus chem-
istry), in Barwich (2020a, 2020b).
13 The notion of “tinkering” is adopted from Bickle’s (this volume) analysis of
historical examples that look at tool engineering as drivers of theorizing in
neuroscience.
References
Avants, Brian B., Tustison, Nick and Song, Gang. 2009. “Advanced normaliza-
tion tools (ANTS).” Insight Journal 2(365):1–35.
Axel, Richard. 2005. “Scents and sensibility: A molecular logic of olfactory
perception (Nobel Lecture).” Angewandte Chemie International Edition
44:6110–27.
Barwich, Ann-Sophie. 2015a. “What is so special about smell? Olfaction as a
model system in neurobiology.” Postgraduate Medical Journal 92(1083):27–33.
Barwich, Ann-Sophie. 2015b. “Bending molecules or bending the rules? The
application of theoretical models in fragrance chemistry.” Perspectives on
Science 23(4):443–65.
Barwich, Ann-Sophie. 2017. “Is Captain Kirk a natural blonde? Do X-ray crys-
tallographers dream of electron clouds? Comparing model-based inferences
in science with fiction.” In: Thinking about Science, Reflecting on Art, ed.
by Otávio Bueno, George Darby, Steven French, Dean Rickle. London: Rout-
ledge, pp. 62–79.
Barwich, Ann-Sophie. 2018. “How to be rational about empirical success in
ongoing science: The case of the quantum nose and its critics.” Studies in
History and Philosophy of Science 69:40–51.
Barwich, Ann-Sophie. 2020a. Smellosophy: What the Nose Tells the Brain.
Cambridge, MA: Harvard University Press.
Barwich, Ann-Sophie. 2020b. “What Makes a Discovery Successful? The Story
of Linda Buck and the Olfactory Receptors.” Cell 181(4):749–53.
Barwich, Ann-Sophie. 2020c. “Imaging the living brain: An argument for ruth-
less reductionism from olfactory neurobiology.” Journal of Theoretical Biol-
ogy 512:110560.
Barwich, Ann-Sophie and Bschir, Karum. 2017. “The manipulability of
what? The history of G-protein coupled receptors.” Biology & Philosophy
32(6):1317–39.
Bickle, John. 2016. “Revolutions in neuroscience: Tool development.” Frontiers
in Systems Neuroscience 10:24.
Bickle, John. 2019. “Linking mind to molecular pathways: The role of experi-
ment tools.” Axiomathes 29(6):577–97.
Bickle, John. 2020. “Laser lights and designer drugs: New techniques for de-
scending levels of mechanisms ‘in a single bound’?.” Topics in Cognitive Sci-
ence 12(4):1241–56.
Bickle, John. Forthcoming. “ Research Tools in Relation to Theories.” In: The
Tools of Neuroscience Experiment: Philosophical and Scientific Perspectives,
ed. by John Bickle, Carl Craver, Ann-Sophie Barwich. London: Routledge.
Bouchard, Matthew B., Voleti, Venkatakaushik, Mendes, César S., Lacefield,
Clay, Grueber, Wesley B., Mann, Richard S., Bruno, Randy M. and Hillman,
Elizabeth M. C. 2015. “Swept confocally-aligned planar excitation (SCAPE)
microscopy for high-speed volumetric imaging of behaving organisms.” Na-
ture Photonics 9(2):113–9.
Buck, Linda B. 2005. “Unraveling the sense of smell (Nobel lecture).” Ange-
wandte Chemie International Edition 44:6128–40.
Buck, Linda B. and Axel, Richard. 1991. “A novel multigene family may encode
odorant receptors: A molecular basis for odor recognition.” Cell 65(1):175–87.
Bushdid, Caroline, Magnasco, Marcelo O., Vosshall, Leslie B. and Keller, An-
dreas. 2014. “Humans can discriminate more than 1 trillion olfactory stim-
uli.” Science 343(6177):1370–72.
Butterwick, Joel A., Del Mármol, Josefina, Kim, Kelly H., Kahlson, Martha
A., Rogow, Jackson A., Walz, Thomas and Ruta, Vanessa. 2018. “Cryo-EM
structure of the insect olfactory receptor Orco.” Nature 560(7719):447–52.
Cain, William S. 1974. “Odor intensity—Mixtures and masking.” Bulletin of
the Psychonomic Society 4:244.
Churchland, Paul M. 1996. The Engine of Reason, the Seat of the Soul: A Phil-
osophical Journey into the Brain. Cambridge: MIT Press.
Clark, Andy 2008. Supersizing the Mind: Embodiment, Action, and Cognitive
Extension. New York: Oxford University Press.
Clark, Andy and Chalmers, David. 1998. “The extended mind.” Analysis
58(1):7–19.
Clyne Peter J., Warr, Coral G., Freeman, Marc R., Lessing, Derek, Kim, Junhyong
and Carlson, John R. 1999. “A novel family of divergent seven-transmembrane
proteins: Candidate odorant receptors in Drosophila.” Neuron 22(2):327–38.
Dey, Sandeepa, Zhan, Senmiao and Matsunami, Hiroaki. 2011. “Assaying sur-
face expression of chemosensory receptors in heterologous cells.” Journal of
Visualized Experiments 48:e2405.
Firestein, Stuart. 2005. “A nobel nose: The 2004 Nobel Prize in physiology and
medicine.” Neuron 45:333–8.
Firestein, Stuart, Greer, Charles and Mombaerts, Peter. 2014. The molecular
basis for odor recognition. Cell Annotated Classic (accessed 09/29/2021).
https://els-jbs-prod-cdn.jbs.elsevierhealth.com/pb/assets/raw/journals/re-
search/cell/libraries/annotated-classics/ACBuck.pdf
Gerkin, Richard C. and Castro, Jason B., 2015. “The number of olfactory stim-
uli that humans can discriminate is still unknown.” eLife 4:e08127.
Giere, Ronald. 1988. Explaining Science: A Cognitive Approach. Chicago, IL:
Chicago University Press.
Giere, Ronald. 2002. “Scientific cognition as distributed cognition.” In: The
Cognitive Basis of Science, ed. by Peter Carruthers, Stephen Stich, Michael
Siegal. Cambridge: Cambridge University Press, p. 285.
Giere, Ronald N. and Moffatt, Barton. 2003. “Distributed cognition: Where
the cognitive and the social merge.” Social Studies of Science 33(2):301–10.
Hillman, Elizabeth M. C., Voleti, Venkatakaushik, Li, Wenze and Yu, Hang.
2019. “Light-sheet microscopy in neuroscience.” Annual Review of Neuro-
science 42:295–313.
Hutchins, Edwin. 1995. Cognition in the Wild. Cambridge: MIT Press.
Inagaki, Shigenori, Iwata, Ryo, Iwamoto, Masakazu and Imai, Takeshi 2020.
“Widespread inhibition, antagonism, and synergy in mouse olfactory sensory
neurons in vivo.” Cell Reports 31(13):107814.
Kay, Leslie M., Crk, Tanja and Thorngate, Jennifer. 2005. “A redefinition of
odor mixture quality.” Behavioral Neuroscience 119:726–33.
Keller, Andreas and Vosshall, Leslie B. 2016. “Olfactory perception of chemi-
cally diverse molecules.” BMC Neuroscience 17(1):55.
Klein, Ursula. 2003. Experiments, Models, Paper Tools: Cultures of Organic
Chemistry in the Nineteenth Century. Stanford, CA: Stanford University Press.
Kurian, Smija M., Naressi, Rafaella G., Manoel, Diogo, Barwich, Ann-Sophie,
Malnic, Bettina and Saraiva, Luis R. 2021. “Odor coding in the mammalian
olfactory epithelium.” Cell and Tissue Research 338: 445–456.
Laing, David G., Panhuber, H., Willcox, M.E. and Pittman, E.A., 1984.
“Quality and intensity of binary odor mixtures.” Physiology & Behavior
33(2):309–319.
Liu, Zhicheng, Nersessian, Nancy and Stasko, John. 2008. “Distributed cogni-
tion as a theoretical framework for information visualization.” IEEE Trans-
actions on Visualization and Computer Graphics 14(6):1173–80.
Maarse, H., 1991. Volatile compounds in foods and beverages (Vol. 44). New
York: CRC Press.
Mainland, Joel D., Li, Yun R., Zhou, Ting, Liu, Wen Ling L. and Matsunami,
Hiroaki. 2015. “Human olfactory receptor responses to odorants.” Scientific
Data 2:150002.
Malnic, Bettina, Hirono, Junzo, Sato, Takaaki and Buck, Linda B. 1999. “Com-
binatorial receptor codes for odors.” Cell 96(5):713–23.
Matsunami, Hiroaki. 2016. “Mammalian odorant receptors: Heterologous ex-
pression and deorphanization.” Chemical Senses 41(9):E123.
McClelland, James L., Rumelhart, David E. and PDP Research Group. 1986.
Parallel Distributed Processing. Vol. 2. Cambridge: MIT Press.
Meister, Markus. 2015. “On the dimensionality of odor space.” eLife 4:e07865.
Mombaerts, Peter. 2004. “Odorant Receptor Gene Choice in Olfactory Sensory
Neurons: The One Receptor–One Neuron Hypothesis Revisited.” Current
Opinion in Neurobiology 14(1):31–36.
Mombaerts, Peter, Wang, Fan, Dulac, Catherine, Chao, Steve K., Nemes, Adri-
ana, Mendelsohn, Monica, Edmondson, James and Axel, Richard. 1996.
“Visualizing an olfactory sensory map.” Cell 87(4):675–86.
Ohloff, Günther, Pickenhagen, Wilhelm and Kraft, Philip. 2012. Scent and
Chemistry: The Molecular World of Odors. Zürich: Wiley-VCH.
Passy, Frédéric. 1895. L’Année Psychologique, second year, p. 380.
Peterlin, Zita, Li, Yadi, Sun, Guangxing, Shah, Rohan, Firestein, Stuart and
Ryan, Kevin. 2008. “The importance of odorant conformation to the binding
and activation of a representative olfactory receptor.” Chemistry & Biology
15(12):1317–27.
Peterlin, Zita, Firestein, Stuart and Rogers, Matthew. 2014. “The state of the
art of odorant receptor deorphanization: A report from the orphanage.” Jour-
nal of General Physiology 143(5):527–42.
Pfister, Patrick, Smith, Benjamin C., Evans, Barry J., Brann, Jessica H., Trim-
mer, Casey, Sheikh, Mushhood, Arroyave, Randy, Reddy, Gautam, Jeong,
Hyo-Young, Raps, Daniel A., Peterlin, Zita, Vergassola, Massimo and Rog-
ers, Matthew E. 2020. “Odorant receptor inhibition is fundamental to odor
encoding.” Current Biology 30:2574–87.
Pnevmatikakis, Eftychios A., Soudry, Daniel, Gao, Yuanjun, Machado, Tim-
othy A., Merel, Josh, Pfau, David, Reardon, Thomas, Mu, Yu, Lacefield,
Clay, Yang, Weijian, Ahrens, Misha, Bruno, Randy, Jessel, Thomas M.,
Peterka, Darcy S., Yuste, Rafael and Paninski, Liam. 2016. “Simultaneous
denoising, deconvolution, and demixing of calcium imaging data.” Neuron
89(2):285–99.
Pnevmatikakis, Eftychios A. and Giovannucci, Andrea. 2017 “NoRMCorre:
An online algorithm for piecewise rigid motion correction of calcium imaging
data.” Journal of Neuroscience Methods 291:83–94.
Poivet, Erwan, Peterlin, Zita, Tahirova, Narmin, Xu, Lu, Altomare, Clara,
Paria, Anne, Zou, Dong-Jing and Firestein, Stuart. 2016. “Applying me-
dicinal chemistry strategies to understand odorant discrimination.” Nature
Communications 7:11157.
Poivet, Erwan, Tahirova, Narmin, Peterlin, Zita, Xu, Lu, Zou, Dong-Jing,
Acree, Terry and Firestein, Stuart. 2018. “Functional odor classification
through a medicinal chemistry approach.” Science Advances 4(2):eaao6086.
Rheinberger, Hans-Jörg. 1997. Toward a History of Epistemic Things: Syn-
thesizing Proteins in the Test Tube. Stanford, CA: Stanford University Press.
Rossiter, Karen J. 1996. “Structure-odor relationships.” Chemical Reviews
96(8):3201–40.
Sell, Charles S. 2006. “On the unpredictability of odor.” Angewandte Chemie
International Edition 45(38): 6254–61.
Snogerup-Linse, Sara. 2012. “Studies of G-protein coupled receptors. The No-
bel Prize in chemistry 2012. Award ceremony speech.” The Royal Swedish
Academy of Sciences (accessed 09/25/2014). http://www.nobelprize.org/no-
bel_prizes/chemistry/laureates/2012/advanced-chemistryprize2012.pdf
Solomon, Miriam. 2007. “Situated cognition.” In: Philosophy of Psychology
and Cognitive Science , ed. by Paul Thagard. Amsterdam: North-Holland,
pp. 413–28.
Thagard, Paul. 2012. The Cognitive Science of Science Explanation, Discov-
ery, and Conceptual Change. Cambridge: MIT Press.
Vaadia, Rebecca D., Li, Wenze, Voleti, Venkatakaushik, Singhania, Aditi, Hill-
man, Elizabeth M. C. and Grueber, Wesley B. 2019. “Characterization of
proprioceptive system dynamics in behaving Drosophila larvae using high-
speed volumetric microscopy.” Current Biology 29(6):935–44.
van Fraassen, Bas. 1980. The Scientific Image. Oxford: Oxford University Press.
van Fraassen, Bas. 2008. Scientific Representation: Paradoxes of Perspective.
Oxford: Oxford University Press.
Voleti, Venkatakaushik, Patel, Kripa B., Li, Wenze, Campos, Citlali Perez,
Bharadwaj, Srinidhi, Yu, Hang, Ford, Caitlin, Casper, Malte J., Yan, Rich-
ard Wenwei, Liang, Wenxuan, Wen, Chentao, Kimura, Koutarou D., Tar-
goff, Kimara L. and Hillman, Elizabeth M. C. 2019. “Real-time volumetric
microscopy of in vivo dynamics and large-scale samples with SCAPE 2.0.”
Nature Methods 16(10):1054–62.
Vosshall, Leslie B., Amrein, Hubert, Morozov, Pavel S., Rzhetsky, Andrey and
Axel, Richard. 1999. “A spatial map of olfactory receptor expression in the
Drosophila antenna.” Cell 96(5):725–36.
Xu, Lu, Li, Wenze, Voleti, Venkatakaushik, Hillman, Elizabeth M. C. and
Firestein, Stuart. 2019. “Widespread receptor driven modulation in periph-
eral olfactory coding.” bioRxiv 760330.
Xu, Lu, Li, Wenze, Voleti, Venkatakaushik, Zou, Dong-Jing, Hillman, Eliza-
beth M. C. and Firestein, Stuart. 2020. “Widespread receptor driven modu-
lation in peripheral olfactory coding.” Science 368(6487):eaaz5390.
Zhao, Haiqing, Ivic, Lidija, Otaki, Joji M., Hashimoto, Mitsuhiro, Mikoshiba,
Katsuhiro and Stuart Firestein. 1998. “Functional expression of a mamma-
lian odorant receptor.” Science 279:237–42.
5 A Different Role for Tinkering
Brain Fog, COVID-19, and the Accidental Nature of Neurobiological Theory Development
Valerie Gray Hardcastle and C. Matthew Stewart
John Bickle and his colleagues are current drivers of the science-in-
practice movement and its focus on the extra-theoretical aspects of sci-
ence for philosophers (see, e.g., Ankeny et al. 2011). This movement grew
out of an intellectual frustration with the near divorce of the philosophy
of science from what scientists actually do in their day-to-day profes-
sional lives. Its adherents promote the detailed and systematic study of
the activities of science, while still maintaining the traditional philoso-
phy of science foci on rationality, justification, observation, evidence,
and theory (and in contrast to prevailing tendencies in the social studies
of science and technology, which focus almost exclusively on science as
a human construct). Philosophers of science-in-practice explore the his-
torical context and current practices that lead to a model or theory in an
effort to better understand the scientific process.
Bickle holds that revolutions in neurobiology-in-practice are driven
exclusively by new tool development, which he believes drives experi-
mental design (2016) – as opposed to, say, Thomas Kuhn’s (1962) model
of scientific theory development as a paradigm shift. He argues that
tool development is an analog of Ian Hacking’s (1983) famous “micro-
scopes” argument for the “relative independence” of “the life of exper-
iment” from theory, further downgrading theory’s purported centrality
in science (Bickle 2018). And following Hacking, Bickle holds that en-
gineering counts more than theory in the development of molecular and
cellular neurobiology research tools. Indeed, according to Bickle,
The larger, and perhaps more controversial, point that Bickle wants to
make is that tool development drives everything: not just theory develop-
ment, but resource allocation, experimental design, and researchers’ time
and attention. Bickle notes that Hubel and Wiesel’s advancement in our
understanding of the receptive field properties in striate and extra-striate
cortex, which led to their theory of information processing in the visual
cortex, and which garnered them the 1981 Nobel Prize in Physiology
or Medicine, “rode on the back of Hubel’s catch-as-catch-can, make-
it-work, trial-and-error-tinkered invention of a new experiment tool,
the metal microelectrode and hydraulic microdriver, and the new kinds
of experiments it enabled” (2022, p. 18). If theories weren’t diminished
enough in their place in scientific practice, Bickle is further claiming that
tool development in turn relies on what he calls “laboratory tinkering,”
and what Hacking calls “fooling around” (1983, p. 199). So, actually,
“theory progress in neurobiology is thus doubly dependent, and hence
tertiary in both epistemic and temporal priority” (Bickle 2022, p. 14).
Tinkering in the lab leads to creating new tools for new experimental ap-
proaches, which, in turn, leads to theoretical progress (see also Stix 2012).
Bickle goes on to surmise that perhaps this “double dependence” of
theory on tinkering and new tool development might be true across all
the bench scientific fields in the life sciences. These lessons taken from his
examination of neuroscience-in-practice stand in direct contrast to the
lament of many of the early authors in the philosophy of neuroscience:
that there was no theoretical framework for understanding the brain (e.g.,
Crick 1979; Churchland 1986; Churchland and Sejnowski 1992). Still
today, the lack of theory in neuroscience remains a concern among many
(Gold and Roskies 2008; Churchland and Sejnowski 2016). And still to-
day, there is no overarching theoretical framework in which to organize
the brain sciences (Hardcastle and Hardcastle 2015; Hardcastle 2017).
Bickle suggests that the lack of theory in neuroscience should not be
of great concern; instead, he follows Hubel’s explanation that the activi-
ties of neuroscientists “seemed not to conform to the science that we are
taught in high school, with its laws, hypotheses, experimental verifica-
tion, generalizations, and so on” (1996, 312). The actual impact of theory
on the day-to-day practice of neuroscience appears small. Instead, neuro-
scientists tinker around in their laboratories, intervening in the world as
they can with the tools they have (or create) and then seeing what comes
of it. As Hubel explains, they are akin to “15th century explorers, like
Columbus sailing West to see what he might find” (1996, p. 312).
But is this how bench neuroscience works across the board? This
chapter argues that the answer is yes and no. On the one hand, tool de-
velopment and tinkering are fundamental aspects of the neuroscientific
trade. But on the other hand, theories are just another tool that neuro-
scientists can utilize when poking about in the brain. Theories are not
tertiary outcomes of neuroscientific practice; rather, they are a utensil,
just like a microelectrode, with which we probe the brain. We
illustrate this perspective by recounting two recent discoveries on the
impact of COVID-19 in the brain, both of which were supported by
the tinkering and practices of one scientist. In one case, theory clearly
drove tool innovation; in the other, as with Columbus, he was just sail-
ing West, hoping against hope to find something useful. But each case
resulted in significant theoretical advancement. Sometimes novel tools
drive theory, but other times, theory promotes novel tools. And both are
ways that neuroscience advances.
1 COVID-19
On 31 December 2019, China notified the World Health Organization
(WHO) about a cluster of cases of pneumonia in Wuhan City, home to
11 million. Less than two weeks later, there were 282 confirmed cases,
with six deaths. Four of these cases were outside of China in neighboring
countries. During this same timeframe, the virus responsible had been
isolated and its genome sequenced. The cause of the novel pneumonia
became known as COVID-19 (short for COronaVIrus Disease 2019),
and it was a new coronavirus,1 dubbed SARS-CoV-2 (Chaplin 2020;
Lago 2020). Less than eight weeks after that, there were over 118,000
cases and almost 5,000 deaths across 114 countries, and the WHO declared
it a pandemic. And now, at the time of this writing, a short 22 months
later, the WHO reports over 230 million cases worldwide, with almost
5 million dead (https://covid19.who.int). Forty percent of those cases and
half of the deaths have been in the Americas.
While it is not definitively known how or why the new coronavirus
appeared in humans, genomic analysis suggests that it originated in bats
and was transmitted to perhaps a pangolin, which was illegally sold
at a wet market in Wuhan, where it was then transmitted to humans
(Anderson et al 2020). SARS-CoV-2 is not the first coronavirus to have
been passed from other animals to humans – six have been identified
so far, four of which cause the common cold. A fifth is now known as
SARS (Severe Acute Respiratory Syndrome), which also originated in
China, likely also via bats (then perhaps through civets), in 2002. In the
two years in which it was active, just over 8,000 cases were identified
with roughly 10% of those infected dying. And the sixth, abbreviated
as MERS (Middle Eastern Respiratory Syndrome), originated in Saudi
Arabia in 2012, again likely via bats and then through camels. Unlike
SARS, MERS continues to infect humans to this day, with roughly
2,500 known infections and 860 deaths. Both SARS and MERS cause a
flu-like illness with symptoms ranging from asymptomatic infection to
the sniffles to severe pneumonia to acute respiratory distress syndrome,
septic shock, and multiorgan failure (Kahn and MacIntosh 2005; Chap-
lin 2020; Liu et al. 2020).
MERS now seems to appear most often as a result of animal-to-human
transmission (perhaps as a function of tending to camels birthing their
young) and only secondarily through human-to-human spread. This coronavirus is
not terribly contagious and requires very close contact between humans
to be transmitted this way, such as occurs when one is caring for someone
who is ill (Durst et al. 2018). MERS is transmitted via respiratory aerosol
droplets and airborne particles, as is COVID-19. However, relative to
MERS, COVID-19 appears more contagious. This is probably largely
due to asymptomatic carriers and genetic mutations (Lago 2020). In the
four weeks after 55 index cases had been identified of SARS in East Asia,
an additional 3,000 cases had appeared (WHO 2003). But in the four
weeks after the first 59 cases of COVID-19 were identified, 25,000 cases
had been confirmed – an eight-fold increase in new cases (WHO 2020).
The COVID-19 pandemic is often compared to global flu outbreaks
as a way of putting the disease in context. However, seasonal flus kill
250,000–650,000 annually, depending on the outbreak. And the H1N1
swine flu epidemic in 2009–2010 caused 280,000 deaths worldwide (Kelly
et al. 2011). Obviously, these numbers do not compare with what we have
seen with COVID-19. The closest is perhaps the H1N1 flu pandemic in
1918–1919, in which 50 million worldwide died, with 650,000 of those
in the United States (Centers for Disease Control and Prevention 2019).
As with SARS and MERS, the clinical presentation of COVID-19 var-
ies from asymptomatic infection to mild illness to severe disease and
death. Sudden deterioration is common, most often in the second week of
the disease, and attributed to cytokine “storms”2 or hyperinflammation.
Risk factors for severe disease include older age, immunosuppression,
hypertension, diabetes, cardiovascular disease, chronic respiratory
disease, and kidney disease. Some of the risk factors could be explained
by the virus’s high affinity for binding to ACE2 (angiotensin-converting
enzyme 2), which is expressed by the epithelial cells in lungs, intestines,
kidneys, and blood vessels (Gouvea dos Santos, 2020).
Figure 1 Mallet, rasp, chisels, curette, and gouge. From Fagan and Jackler (2017), p. 2.
Stewart partnered with the rapid autopsy program for inpatient
COVID-19 deaths at Johns Hopkins School of Medicine, pioneering the
use of hand tools as a safer method for autopsying the brains potentially
infected with SARS-CoV-2. To investigate whether the virus could be
found in the middle ear, he started with the usual method for detecting
SARS-CoV-2: the PCR (polymerase chain reaction) swab test to search
for SARS-CoV-2 genomic sequences. He accessed the inner ear by trephin-
ing and curetting through the mastoid bone using a small hammer and
a chisel and then swizzled in the middle ear with a tiny scoop or gouge.
He and his team discovered that decedents who had died from compli-
cations of COVID-19 exhibited a very strong positive result to the PCR
test. Furthermore, this positive result continued to be evident for quite
long postmortem intervals – up to 96 hours after death (Frazier et al.
2020). SARS-CoV-2 could be found in human ears. (As an aside, this
discovery had immediate implications for medical care, because it was
now clear that providers should not peer into ruptured ears as part of
routine medical examinations because COVID-19 could potentially be
transmitted that way.)
But there is more. Stewart and others had good reason to suspect that
SARS-CoV-2 was infiltrating epithelial tissue in the inner ear. But to
confirm this suspicion, they would need to determine whether ACE2
receptors are indeed present in the human tympanic cavity. The natural
approach would be to use electron microscopy to examine which cells, if
any, in the inner ear have ACE2 receptors, or other markers like Furin or
TMPRSS2, which would promote binding with SARS-CoV-2.
This again is straightforward scientific hypothesis-testing. The absence of
ACE2 receptors, Furin, TMPRSS2, or similar markers would contraindicate the
ability of SARS-CoV-2 to infect the inner ear, which would further suggest
that the central effects that some patients with COVID-19 experienced
would have to be secondary implications of COVID-19. If SARS-CoV-2
cannot directly infect cells in the inner ear, then COVID-19-related tin-
nitus could not be due to SARS-CoV-2 itself but perhaps instead due to
inflammation or other responses by the immune system to fight off the
infection.
But using electron microscopy to examine which cells from the tym-
panic cavity express ACE2 is not an easy protocol to adopt because our
ear bones are among the hardest bones in our bodies. And, unfortu-
nately, the decalcification process required to soften them enough to be
able to slice them thin enough for microscopy examination normally
takes ten months – much too long in our race to understand the impacts
of SARS-CoV-2 on the human body. But once again, Stewart innovated
by modifying techniques from other eras used for other purposes. In
this instance, he reached back to his undergraduate studies and adapted
the procedures he had learned in the chemistry lab to thinly section and
decalcify rocks. Amazingly, this novel method for decalcifying
bone took only 13 days to soften the bones enough for microscopy
examination.
At the time of this writing, Stewart is now harvesting the entire ves-
tibular/cochlear part of the temporal bone for examination. While this
work is still ongoing, enough is known to be able to conclude definitively
that SARS-CoV-2 can infect the cochlear region. It is not just that viral
proteins were present in the mucosa of the inner ear of COVID-19 cadavers;
the bone tissue itself in the tympanic cavity can be infected.
Just as SARS-CoV-2 is ubiquitous in the lungs, nasal passages, and vas-
cular systems in patients with severe COVID-19, it is also present in the
ears.
Once again, we see theory driving tool innovation and not the other
way around. In this instance at least, instead of “the genesis …of the-
ory [in neurobiology] …[being] tied directly to the development of new
research tools” such that “theoretical progress …is secondary to and
entirely dependent upon new tool development, both temporally and
epistemically” (Bickle 2022, p. 14), we are seeing that the confirmation
of a neurobiological theory was tied directly to the development of new
research techniques such that while theoretical progress was indeed tem-
porally dependent upon new tool development, it was not epistemically so.
However, we do not wish to push too hard on this conclusion or to try
to generalize it very far. In our next case study, we see that the same re-
searcher, still investigating the neurological implications of COVID-19,
this time does not have a hypothesis to drive his innovation. In this next
case, we see that theoretical progress was both temporally and epistem-
ically dependent on tool innovation.
Notes
1 The name coronavirus is derived from the Latin word corona, which means
crown or wreath. The name is due to the fact that the projections on the
surface of the virion, as visualized by electron microscopy, create an image
reminiscent of a crown (Chauhan 2020).
2 Cytokine literally means “cell mover,” which is exactly what these storms
do – they move cells from one part of the body to another.
3 We also know it can infect the epithelial cells in our blood vessels, but this
fact is not part of our current story.
4 “Long haulers” refers to patients who previously tested positive for COVID-19
but continue to experience adverse symptoms weeks or months after the vi-
rus has been cleared from the body.
References
Almufarrij, I., and Munro, K.J. (2021). One year on: An updated systematic
review of SARS-CoV-2, COVID-19, and audio-vestibular symptoms. Inter-
national Journal of Audiology. doi: 10.1080/14992027.2021.1896793.
Anderson, K.G., Rambaut, A., Lipkin, W.I., Holmes, E.C., and Garry, R.F.
(2020). The proximal origins of SARS-CoV-2. Nature Medicine 26: 450–452.
Ankeny, R., Chang, H., Boumans, M., and Boon, M. (2011). Introduction: Phi-
losophy of science in practice. European Journal of the Philosophy of Science
1: 303–307.
Assaf, G., David, H., McCorkell, L., Wei, H., Brooke, O, Akrami, A., Low, R.,
Mercier, J., and other members of the COVID-19 Body Politic Slack Group.
(2020). Report: What does COVID-19 recovery actually look like? An anal-
ysis of the prolonged COVID-19 symptoms survey by patient-led research
team. Patient Led Research Collaborative. https://patientresearchcovid19.
com/research/report-1/.
Bickle, J. (2016). Revolutions in neuroscience: Tool development. Frontiers in
Systems Neuroscience, 10: 24 doi: 10.3389/fnsys.2016.00024.
Bickle, J. (2018). From microscopes to optogenetics: Ian Hacking vindicated.
Philosophy of Science 85: 1065–1077.
Bickle, J. (2022). Tinkering in the lab. This volume, pp. 13–36.
Brann, D.H, Tsukahara, T., Weinreb, C. et al. (2020). Non-neuronal expres-
sion of SARS-CoV-2 entry genes in the olfactory system suggests mechanisms
underlying COVID-19-anosmia. Science Advances 6: eabc5801. doi: 10.1126/
sciadv.abc580.
Centers for Disease Control and Prevention (2019). 1918 Pandemic (H1N1 vi-
rus). https://www.cdc.gov/flu/pandemic-resources/1918-pandemic-h1n1.html
Chaplin, S. (2020). COVID-19: A brief history and treatments in development.
Prescriber May: 23–28.
Chauhan, S. (2020). Comprehensive review of coronavirus 2019 (COVID-19).
Science Direct Biomedical Journal 43: 334–340.
Churchland, P.S. (1986). Neurophilosophy. Cambridge, MA: The MIT Press.
Churchland, P.S., and Sejnowski, T.J. (1992). The Computational Brain. Cam-
bridge: MIT Press.
Churchland, P.S., and Sejnowski, T.J. (2016). Blending computational and ex-
perimental neuroscience. Nature Reviews Neuroscience 17: 567–568.
Couzin-Frankel, J. (2020). From “brain fog” to heart damage, COVID-19’s
lingering problems alarm scientists. Science. Published July 31, 2020. doi:
10.1126/science.abe1147.
Crick, F.H. (1979). Thinking about the brain. Scientific American 241: 219–232.
Douaud, G., Lee, S., Alfaro-Almagro, F., Arthofer, C., Wang, C. Lange, F.,
Andersson, J.L.R., Griffanti, L., Duff, E., Jbabdi, S., Taschler, B., Windkler,
A., Nichols, T.E., Collins, R., Matthews, P.M., Allen, N., Miller, K.L., and
Smith, S.M. (2021). Brain imaging before and after COVID-19 in UK bio-
bank. Preprint posted at https://www.medrxiv.org/content/10.1101/2021.06
.11.21258690v1.
Duarte-Neto, A.N., Monteiro, R.A.A., da Silva, L.F.F., Malheiros, D.M.A.C.,
de Oliveira, E.P., Theordoro-Filho, T., Pinho, J.R.R., Gomes-Gouvea, M.S.,
Salles, A.P.M., de Oliveira, I.R.S., Mauad, T., Saldia, P.H.N., and Dolhnikoff,
M. (2020). Pulmonary and systemic involvement in COVID-19 patients as-
sessed with ultrasound-guided minimally invasive autopsy. Histopathology
77: 186–197. doi: 10.1111/his.14160.
Durst, G. Carvalho, L.M., Rambaut, A., and Bedford, T. (2018). MERS-
CoV spillover at the camel-human interface. eLife 7: e31257. doi: 10.7554/
eLife.31257.
Fagan, J., and Jackler, R. (2017). Hammer & gouge cortical mastoidectomy for
acute mastoiditis. In J. Fagan (Ed.) The Open Access Atlas of Otolaryngol-
ogy, Head & Neck Operative Surgery, pp. 1–14. https://vula.uct.ac.za/access/
content/group/ba5fb1bd-be95-48e5-81be-586fbaeba29d/Hammer%20%20
Gouge%20Mastoidectomy%20for%20acute%20mastoiditis-1.pdf.
Frazier, K.M., Hooper, J.E., Mostafa, H.H., and Stewart, C.M. (2020). SARS-
CoV-2 isolated from the mastoid and middle ear: Implications for COVID-19
precautions during ear surgery. JAMA Otolaryngology-Head & Neck Sur-
gery 146: 964–969.
Garg, R.K., Mahadevan, A., Malhotra, H.S., Rizvi, I. Kumar, N., and Uniyal,
R. (2019). Subacute sclerosing panencephalitis. Reviews of Medical Virology
29: e2058. doi: 10.1002/rmv.2058.
Gold, I., and Roskies, A.L. (2008). Philosophy of Neuroscience. In M. Ruse
(Ed.) The Oxford Handbook of Philosophy of Biology. New York: Oxford
University Press, pp. 349–380.
Gouvea dos Santos, W. (2020). Natural history of COVID-19 and current
knowledge on treatment and therapeutics. Biomedicine and Pharmacother-
apy 129: 110493. doi: 10.1016/j.biopha.2020.110493.
Hacking, I. (1983). Representing and Intervening. Cambridge: Cambridge Uni-
versity Press.
Hardcastle, V.G. (2017). Thinking about the brain: What is the meaning of neu-
roscience knowledge and technologies for capital mitigation? In E.C. Monahan
and J.J. Clark (Eds.) Mitigation in Capital Cases: Understanding and Commu-
nicating the Life-Story. Washington, DC: American Bar Association, Chapter 5.
Hardcastle, V.G., and Hardcastle, K. (2015). A new appreciation of Marr’s lev-
els: Understanding how brains break. Topics in Cognitive Science 7: 259–273.
Heneka, M.T., Golenback, D., Latz, E., Morgan, D., and Brown, R. (2020).
Immediate and long-term consequences of COVID-19 infections for the de-
velopment of neurological disease. Alzheimer’s Research & Therapy 12: 69
doi: 10.1186/s13195-020-00640-3.
Helms, J., Kremer, S., Merdji, H., Clere-Jehl, R., Schenck, M., Kummerlen, C.,
Collange, O., Boulay, C., Fafi-Kremer, S., Ohana, M., Anheim, M., and Me-
ziani, F. (2020). Neurologic features in severe SARS-CoV-2 infection. New
England Journal of Medicine 382: 2268–2270. doi: 10.1056/NEJMc2008597.
Hoffman, L.A., and Vilensky, J.A. (2017). Encephalitis lethargica: 100 Years
after the epidemic. Brain 140: 2246–2251.
Hubel, D.H. (1996). David H. Hubel. In L. Squire (Ed.) The History of Neuro-
science in Autobiography, Volume 1. Washington, DC: Society for Neurosci-
ence, pp. 294–317.
Jensen, M.P., Le Quesne, L., Officer-Jones, J., Teodiósio, A., Traventhiran, J.
Ficken, C., Goddard, M., Smith, C., Menon, D., and Allinson, K.S.J. (2020).
Neuropathological findings in two patients with fatal COVID-19. Neuropa-
thology and Applied Neurobiology 47: 17–25 doi: 10.1111/nan.12662.
Jiao, L., Yang, Y., Yu, W., Zhou, Y., Long, H. Gao, J., Ding, K. Ma, C., Zhao,
S., Wang, J., Li, H., Yang, M., Xu, J., Want, J., Yang, J., Kuang, ., Luo, F.,
Qian, X., Xu, L., Yin, B., Liu, W., Lu, S., and Peng, X. (2021). The olfactory
route is a potential way for SARS-CoV-2 to invade the central nervous system
of rhesus monkeys. Signal Transduction and Targeted Therapy 6: 169. doi: 10.
1038/s41392-021-00591-7.
Kahn, J.S., and MacIntosh, K. (2005). History and recent advances in coronavi-
rus discovery. The Pediatric Infectious Disease Journal 24: S223–S227.
Kelly, H., Peck, H.A., Laurie, K.L., Wu, P., Nishiura, H., and Cowling, B.J.
(2011). The age-specific cumulative incidence of infection with pandemic
influenza H1N1 2009 was similar in various countries prior to vaccination.
PLoS One 6(8): e21828. doi: 10.1371/journal.pone.0021828.
Kuhn, T. (1962). The Structure of Scientific Revolutions. Chicago, IL: Univer-
sity of Chicago Press.
Kulcsar, M.A., Montenego, F.L., Arap, S.S., Tavares, M. R., and Kowalski, L.P.
(2020). High risk of COVID-19 for head and neck surgeons. International
Archives of Otorhinolaryngology 24: e129–e130.
Lago, M.N. (2020). How did we get here? Short history of COVID-19 and other
coronavirus epidemics. Head & Neck 42:1535–1538.
Lechien, J.R., Chiesa-Estomba, C.M., Beckers, E., Mustin, V., M. Ducarme, M.
Journe, J., Marchant, A., Jouffe, L., Barillari, M.R., Cammaroto, G., Circiu,
M.P., Hans, S., and Saussez, S. (2021). Prevalence and 6-month recovery of
olfactory dysfunction: A multicentre study of 1363 COVID-19 patients. Jour-
nal of Internal Medicine. doi: 10.1111/joim.13209.
Lee, M.H., Perl, D.P., Nair, G. Li, W., Maric, D., Murray, H., Dodd, S.J.,
Koretsky, A.P., Watts, J.A., Cheung, V., Masliah, E., Horkayne-Szakaly, I.,
Jones, R., Stram, M.N., Moncur, J., Hefti, M., Folkerth, R.D., and Nath, A.
(2021). Microvascular injury in the brains of patients with Covid-19. New
England Journal of Medicine 384: 481–483.
Liu, Y.-C., Kuo, R.-L., and Shih, S.-R. (2020). COVID-19: The first documented
coronavirus in history. Science Direct Biomedical Journal 43: 328–333.
Meinhardt, J., Radke, J., Dittmayer, C. et al. (2021). Olfactory transmucosal
SARS-CoV-2 invasion as a port of central nervous system entry in individuals
with COVID-19. Nature Neuroscience 24: 168–175.
Mukerji, S.S., and Solomon, I.H. (2021). What can we learn from brain autop-
sies in COVID-19? Neuroscience Letters 742:135528.
Nauen, D.W., Hooper, J.E., Stewart, C.M., and Solomon, I.H. (2021). Assessing
brain capillaries in coronavirus 2019. JAMA Neurology 78: 760–762. doi:
10.1001/jamaneurol.2021.0225.
Portmann, G., and Colleagues (1939). Translated by P. Voilé. A Treatise on the
Surgical Technique of Otorhinolaryngology. Baltimore, MD: William Wood
& Company, Medical Division, The Williams & Wilkens Co.
Pratt, J., Lester, E., and Parker, R. (2021). Could SARS-CoV-2 cause tauopathy?
The Lancet 20: P506. doi: 10.1016/S1474-4422(21)00168-X.
Puelles, V.G., Lütgehetmann, M., Lindenmeyer, M.T. et al. (2020). Multiorgan
and renal tropism of SARS-CoV-2. New England Journal of Medicine 38:
590–592.
Rapkiewicz, A.T., Mait, X., Carsons, S.E., Pittaluga, S., Kleiner, D. Berger, J.,
Thomas, S., Adler, N.M., Charytan, D.M., Gasmi, B., Hochman, J.S., and
Reynolds, H.R. (2020). Megakaryocytes and platelet-fibrin thrombi charac-
terize multi-organ thrombosis at autopsy in COVID-19: A case series. EClin-
icalMedicine. 24: 100434. doi: 10.1016/j.eclinm.2020.100434.
Reichard, R.R., Kashani, K.B., Boire, N.A., Constantopoulos, E., Guo, Y.,
and Lucchinetti, C.F. (2020). Neuropathology of COVID-19: A spectrum of
vascular and acute disseminated encephalomyelitis (ADEM)-like pathology.
Acta Neuropathologica 140: 1–6.
Remmelink, M., De Mendonca, R., D’Haene, N. De Clercq, S., Verocq, C.,
Lebrun, L., Lavis, P., Racu, M-L., Trepant, A-L, Maris, C., Rovive, S., Gof-
fard, J-C., De Witte, O., Peluso, L., Vincent, J-L., Decaestecker, C., Taccone,
F.S., and Salmon, I. (2020). Unspecific post-mortem findings despite multior-
gan viral spread in COVID-19 patients. Critical Care 24: 495.
Solomon, I.H., Normandin, E., Bhattacharyya, S. Mukerji, S.S., Ali, A.S., Ad-
ams, G., Hormick, J.L. Padera, R.F., and Sabeti, P. (2020). Neuropatholog-
ical features of Covid-19. New England Journal of Medicine 383: 989–992.
Stix, G. (2012). A Q&A with Ian Hacking on Thomas Kuhn’s legacy as “the
paradigm shift” turns 50. Scientific American (April). https://www.scientificamerican.com/article/kuhn/.
Thakur, K.T., Miller, E.H., Glendinning, M.D., Al-Dalahman, A., Banu, M.A.,
Boehm, A.K. et al. (2021). COVID-19 neuropathology at Columbia Uni-
versity Irving Medical Center/New York Presbyterian Hospital. Brain. doi:
10.1093/brain/awab148.
Uranaka, T., Kashio, A., Ueha, R., Sato, T., Bing, H., Ying, G., Kinoshita, M.,
Kondo, K., and Yamasoba, T. (2020). Expression of ACE2, TMPRSS2, and
furin in mouse ear tissue and the implications for SARS-CoV-2 infection. The
Laryngoscope 131: E2013–E2017. doi: 10.1002/lary.29324.
Van Waegeningh, H.F., Ebbens, F.A., van Spronsen, E., and Oostra, R.-J.
(2019). Single origin of the epithelium of the human middle ear. Mechanisms
of Development 158: 103556. doi: 10.1016/j.mod.2019.103556.
Wagner, T., Schweta, F.N.U., Murugadoss, K. et al. (2020). Augmented cura-
tion of clinical notes from a massive EHR system reveals symptoms of im-
pending COVID-19 diagnosis. eLife 9: e58227. doi: 10.7554/eLife.58227.
Wichmann, D., Sperhake, J.P., Lutgehetmann, M. et al. (2020). Autopsy findings
and venous thromboembolism in patients with COVID-19: A prospective co-
hort study. Annals of Internal Medicine 173: 268–277.
World Health Organization (2003). Emergency preparedness, response. Update
83 – One hundred days into the [SARS] outbreak. https://www.who.int/csr/
don/20030618/en.
World Health Organization (2020). Coronavirus disease 2019 (COVID-19).
Situation report – 16. https://www.who.int/emergencies/diseases/novel-
coronavirus-2019/situationreports.
World Health Organization (2021). Coronavirus. https://www.who.int/
health-topics/coronavirus#tab=tab1.
Yang, A.C., Kern, F., Losada, P.M. et al. (2021). Dysregulation of brain
and choroid plexus cell types in severe COVID-19. Nature. doi: 10.1038/
s41586-021-03710-0.
Zubair, A.S., McAlpine, L.S., Gardin, T., Farhadian, S., Kuruvilla, D.E., and
Spudich, S. (2020). Neuropathogenesis and neurologic manifestations of the
coronaviruses in the age of coronavirus disease 2019: A review. JAMA Neu-
rology 77: 1018–1027. doi: 10.1001/jamaneurol.2020.2065.
Section 2
1 Introduction
New tools have often been catalysts of scientific revolutions in neurosci-
ence, from the role of the Golgi stain in the discovery of neurons to the
recent transformation of systems neuroscience by both optogenetics and
ground-breaking neuronal imaging methods. Sophisticated tools open
doors to novel observations and ideas, but not all new tools are either
feasibly adaptable to multiple related uses or equally accessible to neu-
roscientists. In this chapter, I will discuss the idea that the power of a
new tool is not only dependent on its ability to reveal new potentially
transformative phenomena but also on its adaptiveness, ease of use and
transferability. The impact of prohibitively complex or inaccessible tools
is often limited by the relatively small number of laboratories that even-
tually use them, while the power of accessible and nimble tools is am-
plified by their wide dissemination. The ability to carry out replication
and convergent follow up experiments is at the very heart of scientific
revolutions in biology, and therefore most of the tools that fuel innova-
tive leaps in neuroscience, and in other biological disciplines, are almost
without exception very flexible and easily accessible to the majority of
practitioners in their respective fields. I will illustrate these ideas by us-
ing examples from the recent history of neuroscience, and I will finish
the chapter by predicting that a new set of tools that has just become
available to neuroscientists, and with the properties mentioned above,
will have a significant impact on the field, including in bridging the cur-
rent gap between molecular and systems studies in neuroscience.
DOI: 10.4324/9781003251392-9
the early days of the molecular biology and human genetics revolution
(McConkey 1993). Naively, I assumed that the accessibility and adap-
tiveness inherent in most tools that fueled the molecular biology and
genetics revolutions were common to all disciplines in biology. I quickly
discovered that matters were very different in neuroscience, the field I
decided to join. In the late eighties, there were deep divisions amongst
neuroscience fields that could be easily traced back to the tools that in-
dividual labs had access to. Strange as it may seem to neuroscientists
now, at that time neuroscience areas were mostly defined by the tools
used. For example, memory was studied by behavioral neuroscientists,
electrophysiologists and molecular biologists, but there was hardly any
overlap between the approaches they used and consequently the results
they published. For example, behavioral assays and brain lesion tools
were used in behavioral neuroscience to define the brain structures in-
volved in specific forms of memory (e.g., spatial memory), while electro-
physiologists used electrodes to probe the synaptic mechanisms thought
to underlie memory (e.g., long term potentiation or LTP), and molecu-
lar biologists were busy cloning the molecules (e.g., receptors, kinases)
that they thought may mediate mechanisms of memory. The barriers
between fields were so deep that there was an entrenched reluctance to cross them. I will never forget being reminded mockingly by
a leading neuroscientist that my “amateurish efforts” to use molecular
genetics, electrophysiology and behavioral neuroscience approaches in
my new laboratory were naïve since I would be a “jack of all trades and
master of none” (Nelson 2015)!
But all of this would eventually change with the wide use of transgenic
tools in neuroscience research (Grant and Silva 1994; Silva and Giese
1994; Tonegawa, et al. 1995; Silva 1996; Silva, et al. 1997). These tools
would break barriers between neuroscience fields and open the doors to
the integrative studies that now dominate the field (Silva, et al. 1997).
I still remember during my post-doctoral studies my first day in Jeanne
Wehner’s laboratory, and the excitement of starting behavioral studies
of the alpha calmodulin kinase II knockout mice that I had generated in
Susumu Tonegawa’s laboratory. With her post-doctoral fellow Richard
Paylor, we showed that these mutant mice had deficits in hippocampal
learning (Silva, et al. 1992)! Then, I took the same knockout mice to
Charles Stevens's laboratory, where along with Yanyan Wang we discovered deficits in LTP induction (Silva, et al. 1992), thus tentatively connecting
the loss of that synaptic kinase (Bennett, et al. 1983) with hippocam-
pal LTP and hippocampal-dependent learning (Silva, et al. 1997). In my
own laboratory in 1992, we managed to have behavioral neuroscien-
tists, electrophysiologists and molecular biologists working side by side,
and the same was starting to happen in other neuroscience laboratories
(Silva, et al. 1997). Mutant mice served as bridges between previously
separate fields in neuroscience, thus fueling a revolution that trans-
formed the field (Mayford, et al. 1995; Silva, et al. 1997).
Transgenic mice, flies and worms are very flexible and easily dissem-
inated tools, since mutants can be generated for almost every gene of
interest, and then shared amongst laboratories. In the years that followed
those early heady days, countless mutants were used as integrative tools
by hundreds of laboratories to test or unravel molecular and cellular
mechanisms of nearly every behavior that had ever been studied (Bickle
2016). Within a few years, viral vectors were introduced to mutate spe-
cific regions of the mammalian brain, so as to confer molecular, cellular
and brain region specificity to studies of behavior (Chen, et al. 2019;
Nectow and Nestler 2020). At the same time, mutant mice were also used
in systems neuroscience studies (Rotenberg, et al. 1996; Cho, et al. 1998),
and they were instrumental in establishing connections between this field
and others in neuroscience, including molecular, cellular and behavioral
neuroscience. It is now difficult to find neuroscience papers in high-profile
journals that do not share the integrative character of these early mutant
mouse studies. A flexible and easy-to-share tool (i.e., mutant mice) has essentially transformed neuroscience in a way that was difficult
to predict at the time (Morris and Kennedy 1992). I will never forget the
attacks that the field suffered in the early days of this integrative effort.
Many of our colleagues thought implicitly or explicitly that nothing of
consequence could be gained from studies that dared to bridge the gaps
between fields as far afield as molecular biology and behavioral neurosci-
ence (Nelson 2015). Their compelling arguments invoked the necessary
superficiality of such studies, the “inevitable” lack of deep expertise in-
volved, and the “unavoidable errors” that would come from such am-
ateurism. Their arguments were not without merit, but everything had
been changed by the irresistible prospect of making causal connections
across neuroscience fields, from molecular mechanisms, to the cellular
properties they regulated, to systems processes mediated by these cellular
mechanisms, all the way to behavior. The ruthless reductionism (Bickle
2007) inherent in work with molecular genetic tools had opened new
broad horizons in neuroscience and there was no going back. The power
of these new tools was inextricably linked to their easy dissemination,
adaptability and wide range of experimental applications (Bickle 2016).
7 Conclusion
In this chapter, I discussed how novel tools can be the catalysts of inno-
vation, and reviewed key properties, including ease of dissemination and
adaptability, that allow tools to open doors to ground-breaking obser-
vations and ideas. I argued that the potential usefulness of a new tool is
not only dependent on its ability to reveal new potentially transformative
phenomena, but also on its ease of use, adaptability and transferability.
The ability to readily carry out replication and convergent follow-up ex-
periments in many laboratories is at the very heart of scientific revolu-
tions in biology, and therefore most of the tools that fuel innovative leaps
in neuroscience, and in other biological disciplines, are frequently very
flexible and easily accessible to practitioners in the field. I propose that
molecular optogenetic tools, capable of optically tracking and manip-
ulating a wide range of molecular phenomena in diverse cell types and
systems in the brain of behaving animals, will be key catalysts for an
upcoming wave of innovation in neuroscience. These nimble and easily
disseminated tools will not only catalyze a much-needed synthesis between molecular and systems neuroscience but will also move systems neuroscience away from its neurocentric roots toward a more biologically realistic inclusion of other cell types, including, for example, astrocytes, glia and oligodendrocytes.
References
Aharoni, D., B. S. Khakh, A. J. Silva and P. Golshani (2019). “All the light
that we can see: a new era in miniaturized microscopy.” Nat Methods 16(1):
11–13.
Airan, R. D., K. R. Thompson, L. E. Fenno, H. Bernstein and K. Deisseroth
(2009). “Temporally precise in vivo control of intracellular signalling.” Na-
ture 458(7241): 1025–1029.
Barnes, C. A. (1979). “Memory deficits associated with senescence: a neuro-
physiological and behavioral study in the rat.” J Comp Physiol Psychol 93(1):
74–104.
Bennett, M. K., N. E. Erondu and M. B. Kennedy (1983). “Purification and
characterization of a calmodulin-dependent protein kinase that is highly con-
centrated in brain.” J Biol Chem 258(20): 12735–12744.
Bickle, J. (2007). “Ruthless reductionism and social cognition.” J Physiol Paris
101(4–6): 230–235.
Bickle, J. (2016). “Revolutions in neuroscience: tool development.” Front Syst
Neurosci 10: 24.
Bickle, J. and A. Kostko (2018). “Connection experiments in neurobiology.”
Synthese 195: 5271–5295.
Boyden, E. S. (2011). “A history of optogenetics: the development of tools for
controlling brain circuits with light.” F1000 Biol Rep 3: 11.
Buzsáki, G. (2019). The Brain from Inside Out. New York, Oxford University
Press.
Cai, D. J., D. Aharoni, T. Shuman, J. Shobe, J. Biane, W. Song, B. Wei, M.
Veshkini, M. La-Vu, J. Lou, S. E. Flores, I. Kim, Y. Sano, M. Zhou, K. Baum-
gaertel, A. Lavi, M. Kamata, M. Tuszynski, M. Mayford, P. Golshani and A.
J. Silva (2016). “A shared neural ensemble links distinct contextual memories
encoded close in time.” Nature 534(7605): 115–118.
Callaway, E. M. (2005). “A molecular and genetic arsenal for systems neurosci-
ence.” Trends Neurosci 28(4): 196–201.
Capecchi, M. R. (1989). “The new mouse genetics: altering the genome by gene
targeting.” Trends Genet. 5(3): 70–76.
Chen, S., A. Z. Weitemier, X. Zeng, L. He, X. Wang, Y. Tao, A. J. Y. Huang,
Y. Hashimotodani, M. Kano, H. Iwasaki, L. K. Parajuli, S. Okabe, D. B. L.
Teh, A. H. All, I. Tsutsui-Kimura, K. F. Tanaka, X. Liu and T. J. McHugh
(2018). “Near-infrared deep brain stimulation via upconversion nanoparticle-
mediated optogenetics.” Science 359(6376): 679–684.
Chen, S. H., J. Haam, M. Walker, E. Scappini, J. Naughton and N. P. Martin
(2019). “Recombinant Viral Vectors as Neuroscience Tools.” Curr Protoc
Neurosci 87(1): e67.
Chen, T. W., T. J. Wardill, Y. Sun, S. R. Pulver, S. L. Renninger, A. Baohan, E.
R. Schreiter, R. A. Kerr, M. B. Orger, V. Jayaraman, L. L. Looger, K. Svoboda
and D. S. Kim (2013). “Ultrasensitive fluorescent proteins for imaging neuro-
nal activity.” Nature 499(7458): 295–300.
Chicharro, D. and A. Ledberg (2012). “When two become one: the limits of
causality analysis of brain dynamics.” PLoS One 7(3): e32466.
Cho, Y. H., K. P. Giese, H. Tanila, A. J. Silva and H. Eichenbaum (1998). “Ab-
normal hippocampal spatial representations in alphaCaMKIIT286A and
CREBalphaDelta-mice.” Science 279(5352): 867–869.
Dombeck, D. A., C. D. Harvey, L. Tian, L. L. Looger and D. W. Tank (2010).
“Functional imaging of hippocampal place cells at cellular resolution during
virtual navigation.” Nat Neurosci 13(11): 1433–1440.
Ghosh, K. K., L. D. Burns, E. D. Cocker, A. Nimmerjahn, Y. Ziv, A. E. Gamal
and M. J. Schnitzer (2011). “Miniaturized integration of a fluorescence mi-
croscope.” Nat Methods 8(10): 871–878.
Gong, X., D. Mendoza-Halliday, J. T. Ting, T. Kaiser, X. Sun, A. M. Bastos, R. D.
Wimmer, B. Guo, Q. Chen, Y. Zhou, M. Pruner, C. W. Wu, D. Park, K. Deisse-
roth, B. Barak, E. S. Boyden, E. K. Miller, M. M. Halassa, Z. Fu, G. Bi, R. Des-
imone and G. Feng (2020). “An ultra-sensitive step-function opsin for minimally
invasive optogenetic stimulation in mice and macaques.” Neuron 107: 38–51.
Goshen, I. (2014). “The optogenetic revolution in memory research.” Trends
Neurosci 37(9): 511–522.
Granseth, B., B. Odermatt, S. J. Royle and L. Lagnado (2006). “Clathrin-
mediated endocytosis is the dominant mechanism of vesicle retrieval at hip-
pocampal synapses.” Neuron 51(6): 773–786.
Grant, S. G. and A. J. Silva (1994). “Targeting learning.” Trends in Neurosci-
ences 17(2): 71–75.
Jacob, A. D., A. I. Ramsaran, A. J. Mocle, L. M. Tran, C. Yan, P. W. Frankland
and S. A. Josselyn (2018). “A compact head-mounted endoscope for in vivo
calcium imaging in freely behaving mice.” Curr Protoc Neurosci 84(1): e51.
Jung, H., S. W. Kim, M. Kim, J. Hong, D. Yu, J. H. Kim, Y. Lee, S. Kim, D.
Woo, H. S. Shin, B. O. Park and W. D. Heo (2019). “Noninvasive optical
activation of Flp recombinase for genetic manipulation in deep mouse brain
regions.” Nat Commun 10(1): 314.
Kakumoto, T. and T. Nakata (2013). “Optogenetic control of PIP3: PIP3 is suf-
ficient to induce the actin-based active part of growth cones and is regulated
via endocytosis.” PLoS One 8(8): e70861.
Kawano, F., R. Okazaki, M. Yazawa and M. Sato (2016). “A photoactivatable
Cre-loxP recombination system for optogenetic genome engineering.” Nat
Chem Biol 12(12): 1059–1064.
Kennedy, M. B. (1983). “Experimental approaches to understanding the role of
protein phosphorylation in the regulation of neuronal function.” Annu Rev
Neurosci 6(493): 493–525.
Kim, J. M., J. Hwa, P. Garriga, P. J. Reeves, U. L. RajBhandary and H. G.
Khorana (2005). “Light-driven activation of beta 2-adrenergic receptor sig-
naling by a chimeric rhodopsin containing the beta 2-adrenergic receptor cy-
toplasmic loops.” Biochemistry 44(7): 2284–2292.
Klewer, L. and Y. W. Wu (2019). “Light-induced dimerization approaches to
control cellular processes.” Chemistry 25(54): 12452–12463.
Kwon, E. and W. D. Heo (2020). “Optogenetic tools for dissecting complex
intracellular signaling pathways.” Biochem Biophys Res Commun 527(2):
331–336.
Liberti, W. A., L. N. Perkins, D. P. Leman and T. J. Gardner (2017). “An open
source, wireless capable miniature microscope system.” J Neural Eng 14(4):
045001.
Maniatis, T., E. F. Fritsch and J. Sambrook (1982). Molecular Cloning: A Labo-
ratory Manual. Cold Spring Harbor, NY, Cold Spring Harbor Press.
Matiasz, N. J., J. Wood, P. Doshi, W. Speier, B. Beckemeyer, W. Wang, W. Hsu
and A. J. Silva (2018). “ResearchMaps.org for integrating and planning re-
search.” PLoS One 13(5): e0195271.
Matiasz, N. J., J. Wood, W. Wang, A. J. Silva and W. Hsu (2017). “Computer-aided
experiment planning toward causal discovery in neuroscience.” Front Neu-
roinform 11: 12.
Mayford, M., T. Abel and E. R. Kandel (1995). “Transgenic approaches to cog-
nition.” Curr Opin Neurobiol 5(2): 141–148.
McConkey, E. H. (1993). Human Genetics: The Molecular Revolution. Boston,
MA, Jones and Bartlett Publishers.
McCormick, J. W., D. Pincus, O. Resnekov and K. A. Reynolds (2020). “Strat-
egies for engineering and rewiring kinase regulation.” Trends Biochem Sci
45(3): 259–271.
Morris, R. G. M. (1981). “Spatial localization does not require the presence of
local cues.” Learning and Motivation 12: 239–260.
Morris, R. G. M. and M. B. Kennedy (1992). “The Pierian spring.” Current
Biology 2(10): 511–514.
Nectow, A. R. and E. J. Nestler (2020). “Viral tools for neuroscience.” Nat Rev
Neurosci 21(12): 669–681.
Nelson, N. C. (2015). “A knockout experiment: disciplinary divides and experi-
mental skill in animal behaviour genetics.” Med Hist 59(3): 465–485.
Nguyen, M. K., C. Y. Kim, J. M. Kim, B. O. Park, S. Lee, H. Park and W. D.
Heo (2016). “Optogenetic oligomerization of Rab GTPases regulates intracel-
lular membrane trafficking.” Nat Chem Biol 12(6): 431–436.
O’Keefe, J. and L. Nadel (1978). The Hippocampus as a Cognitive Map. Lon-
don, Oxford University Press.
Potter, S. M., A. El Hady and E. E. Fetz (2014). “Closed-loop neuroscience and
neuroengineering.” Front Neural Circuits 8: 115.
Rose, T., P. Schoenenberger, K. Jezek and T. G. Oertner (2013). “Developmental
refinement of vesicle cycling at Schaffer collateral synapses.” Neuron 77(6):
1109–1121.
Rost, B. R., F. Schneider-Warme, D. Schmitz and P. Hegemann (2017). “Op-
togenetic tools for subcellular applications in neuroscience.” Neuron 96(3):
572–603.
Rotenberg, A., M. Mayford, R. D. Hawkins, E. R. Kandel and R. U. Muller
(1996). “Mice expressing activated CaMKII lack low frequency LTP and do
not form stable place cells in the CA1 region of the hippocampus.” Cell 87(7):
1351–1361.
Schindler, S. E., J. G. McCall, P. Yan, K. L. Hyrc, M. Li, C. L. Tucker, J. M. Lee,
M. R. Bruchas and M. I. Diamond (2015). “Photo-activatable Cre recombi-
nase regulates gene expression in vivo.” Sci Rep 5: 13627.
Silva, A. J. (1996). Genetics and learning: misconceptions and criticisms. In
Gene Targeting and New Developments in Neurobiology, ed. S. Nakanishi,
A. J. Silva, S. Aizawa and M. Katsuki. Tokyo, Japan Scientific Societies Press,
3–15.
Silva, A. J. and K. P. Giese (1994). “Plastic genes are in!” Curr Opin Neurobiol
4(3): 413–420.
Silva, A. J., A. Landreth and J. Bickle (2014). Engineering the Next Revolution
in Neuroscience: The New Science of Experiment Planning. New York, Oxford University Press.
Silva, A. J., R. Paylor, J. M. Wehner and S. Tonegawa (1992). “Impaired spa-
tial learning in alpha-calcium-calmodulin kinase II mutant mice.” Science
257(5067): 206–211.
Silva, A. J., A. M. Smith and K. P. Giese (1997). Gene targeting and the biology
of learning and memory. Annu Rev Genet 31: 527–546.
Silva, A. J., C. F. Stevens, S. Tonegawa and Y. Wang (1992). “Deficient hippo-
campal long-term potentiation in alpha-calcium-calmodulin kinase II mutant
mice.” Science 257(5067): 201–206.
Silva, A. J., Y. Wang, R. Paylor, J. M. Wehner, C. F. Stevens and S. Tonegawa
(1992). “Alpha calcium/calmodulin kinase II mutant mice: deficient long-
term potentiation and impaired spatial learning.” Cold Spring Harb Symp
Quant Biol 57: 527–539.
Sofroniew, N. J., D. Flickinger, J. King and K. Svoboda (2016). “A large field of
view two-photon mesoscope with subcellular resolution for in vivo imaging.”
Elife 5: e14472.
Stamatakis, A. M., M. J. Schachter, S. Gulati, K. T. Zitelli, S. Malanowski, A.
Tajik, C. Fritz, M. Trulson and S. L. Otte (2018). “Simultaneous optogenetics
and cellular resolution calcium imaging during active behavior using a minia-
turized microscope.” Front Neurosci 12: 496.
Stevens, B. (2003). “Glia: much more than the neuron’s side-kick.” Curr Biol
13(12): R469–R472.
Sudhof, T. C. (2017). “Molecular neuroscience in the 21(st) century: a personal
perspective.” Neuron 96(3): 536–541.
Tonegawa, S., Y. Li, R. S. Erzurumlu, S. Jhaveri, C. Chen, Y. Goda, R. Pay-
lor, A. J. Silva, J. J. Kim, J. M. Wehner and C. F. Stevens (1995). “The gene
knockout technology for the analysis of learning and memory, and neural
development.” Prog Brain Res 105: 3–14.
Wang, H., M. Jing and Y. Li (2018). “Lighting up the brain: genetically encoded
fluorescent sensors for imaging neurotransmitters and neuromodulators.”
Curr Opin Neurobiol 50: 171–178.
Wang, Y., Y. Y. Yau, D. Perkins-Balding and J. G. Thomson (2011). “Recom-
binase technology: applications and possibilities.” Plant Cell Rep 30(3):
267–285.
Yang, W., L. Carrillo-Reid, Y. Bando, D. S. Peterka and R. Yuste (2018). “Si-
multaneous two-photon imaging and two-photon optogenetics of cortical cir-
cuits in three dimensions.” Elife 7: e32671.
Yao, S., P. Yuan, B. Ouellette, T. Zhou, M. Mortrud, P. Balaram, S. Chatter-
jee, Y. Wang, T. L. Daigle, B. Tasic, X. Kuang, H. Gong, Q. Luo, S. Zeng,
A. Curtright, A. Dhaka, A. Kahan, V. Gradinaru, R. Chrapkiewicz, M.
Schnitzer, H. Zeng and A. Cetin (2020). “RecV recombinase system for in
vivo targeted optogenomic modifications of single cells or cell populations.”
Nat Methods 17(4): 422–429.
Yizhar, O., L. E. Fenno, T. J. Davidson, M. Mogri and K. Deisseroth (2011).
“Optogenetics in neural systems.” Neuron 71(1): 9–34.
Zhang, K., L. Duan, Q. Ong, Z. Lin, P. M. Varman, K. Sung and B. Cui (2014).
“Light-mediated kinetic control reveals the temporal effect of the Raf/MEK/
ERK pathway in PC12 cell neurite outgrowth.” PLoS One 9(3): e92917.
Zhang, Y., S. A. Sloan, L. E. Clarke, C. Caneda, C. A. Plaza, P. D. Blumen-
thal, H. Vogel, G. K. Steinberg, M. S. Edwards, G. Li, J. A. Duncan, 3rd, S.
H. Cheshier, L. M. Shuer, E. F. Chang, G. A. Grant, M. G. Gephart and B.
A. Barres (2016). “Purification and characterization of progenitor and ma-
ture human astrocytes reveals transcriptional and functional differences with
mouse.” Neuron 89(1): 37–53.
Ziv, Y., L. D. Burns, E. D. Cocker, E. O. Hamel, K. K. Ghosh, L. J. Kitch, A. El
Gamal and M. J. Schnitzer (2013). “Long-term dynamics of CA1 hippocam-
pal place codes.” Nat Neurosci 16(3): 264–266.
7 Toward an Epistemology of Intervention: Optogenetics and Maker’s Knowledge1, 2
Carl F. Craver
1 Introduction
The biological sciences, like other mechanistic sciences, comprise both a
modeler’s and a maker’s tradition. The aim of the modeler, in my narrow
sense, is to describe correctly the causal structures—the mechanisms—
that produce, underlie, maintain, or modulate a given phenomenon or
effect.3 These models are expected to save the phenomena tolerably well
(that is, to make accurate predictions about them) and, in many cases
at least, to represent the components and causal relationships compos-
ing their mechanisms. The aim of the maker, in contrast, is to build
machines that produce, underlie, maintain, or modulate the effects we
desire.4
The works of both maker and modeler depend fundamentally on the
ability to intervene into a system and make it work differently than it
would work on its own. My goal is to identify some dimensions of prog-
ress (or difference) in the ability to intervene in biological systems.
This project complements Allan Franklin’s (1986, 1990, 2012) pio-
neering work on the epistemology of experiment. Franklin focuses on
detection instruments and, specifically, on distinguishing “between a
valid observation or measurement and an artifact created by the ex-
perimental apparatus” (see 1986, 165, 192; 1990, 104). He argues that
scientists defend new detection techniques by showing that they detect
known magnitudes reliably, that their results conform to the expecta-
tions of a well-confirmed theory, and that their findings agree tolerably
well with those of other, more or less causally independent detection
techniques.5 Scientists sometimes defend their instruments directly by
appeal to their well-supported design principles and by showing their
results cannot be explained by known sources of error.6 Yet by focus-
ing on detection specifically, Franklin neglects a crucial aspect of causal
experiments: interventions. My goal is to take some preliminary steps
toward an epistemology of intervention by characterizing the norms by
which improvements in intervention methods are measured.7
In pursuit of this goal, I examine aspects of the early history of optoge-
netics. This technique matured and made its way into the neurosciences
DOI: 10.4324/9781003251392-10
around the turn of the twenty-first century and subsequently has been
adopted widely and rapidly as an improved intervention method. Nu-
merous biologists have won awards for inventing and describing the key
components of this triumph of biological maker’s knowledge (including
Ernst Bamberg, Ed Boyden, Karl Deisseroth, Peter Hegemann, Georg
Nagel, and Gero Miesenböck, according to the latest Wiki update), and
the technique is currently featured in over 800 research articles per year
(see Kolar et al. 2018). By exploring why this technique was adopted so
readily and widely, we get a glimpse of the epistemic norms that make
one intervention technique better than another and into the arguments
by which such claims are defended.8
Optogenetics allows researchers to control the electrophysiological
properties of neurons with light.9 Researchers insert bacterial genes for
light-sensitive ion channels into target cells in a given brain region. The
virus that inserts this construct into cells commandeers the cell’s protein
synthesis and delivery mechanisms to assemble the light-sensitive chan-
nels and install them in the membrane. The light delivered through a
fiber-optic cable then activates or inactivates the channels, changing the
ionic current across the membrane and thereby modulating, producing,
or blocking neural signals.10
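The causal logic of this intervention can be sketched in a few lines of code. The following toy simulation (in Python; the leaky integrate-and-fire model and every parameter value are illustrative assumptions, not a description of any experiment cited here) treats the opsin simply as an extra membrane conductance that opens while the light is on:

# A toy sketch, not any published model: a leaky integrate-and-fire neuron with an
# added light-gated conductance standing in for a channelrhodopsin. All parameter
# values are illustrative assumptions.

def simulate(light_on, t_max=0.5, dt=1e-4):
    """Return spike times (in seconds) of a leaky integrate-and-fire cell."""
    v_rest, v_thresh, v_reset = -70e-3, -50e-3, -70e-3  # volts
    tau_m, r_m = 20e-3, 100e6                           # membrane time constant (s), resistance (ohm)
    g_light, e_light = 10e-9, 0.0                       # light-gated conductance (S) and reversal (V)
    v, spikes = v_rest, []
    for step in range(int(t_max / dt)):
        t = step * dt
        i_light = g_light * (e_light - v) if light_on(t) else 0.0  # photocurrent when illuminated
        v += dt * (-(v - v_rest) + r_m * i_light) / tau_m          # leaky integration
        if v >= v_thresh:                                          # spike and reset
            spikes.append(t)
            v = v_reset
    return spikes

# "Intervention": 10 ms light pulses delivered at 20 Hz through the (simulated) fiber.
light_pulses = lambda t: (t % 0.05) < 0.010
print("spikes with light:   ", len(simulate(light_pulses)))
print("spikes without light:", len(simulate(lambda t: False)))

With these assumed parameters, the model cell fires roughly once per light pulse and is silent otherwise, which is the minimal sense in which light can be said to produce or block neural signals.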
In what follows, I first present a schema for thinking about causal
experiments and complicate it to accommodate the complexity of op-
togenetics. I then discuss eleven dimensions of progress or difference
in the ability to intervene into brain function. These give us a sense of
the epistemic norms guiding the assessment of progress in intervention.
I close by reflecting on how and why makers and modelers differ in their
assessment of the norms of intervention.
2 Causal Experiments
Figure 1 represents a simplified, standard causal experiment. A given
causal hypothesis or mechanism schema is instantiated in a target sys-
tem. The target system is the subject, organism, or system in which one
performs the experiment.11 One intervenes in the system to change one or
more target variables (T) and detects the resulting value of putative effect
variables (E).
The intervention technique is a means of changing one or more target
variables in the mechanism. In the simple case, one sets T to a value.
Sometimes, as in Kettlewell’s famed experiments on moth populations,
the researcher identifies natural circumstances that set the variable to a
value, but here I focus on interventions under researchers’ control.
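A toy example makes the schema concrete. In the sketch below (Python; the linear causal structure, coefficients, and noise terms are invented for illustration), the experimenter either leaves the target variable T to its own causes or intervenes to set it to a chosen value, and in both conditions detects the resulting effect variable E:

import random

def target_system(t_value=None):
    """One run of a toy target system; t_value is not None models an intervention on T."""
    u = random.gauss(0.0, 1.0)                    # unmeasured background cause of T
    t = 2.0 * u if t_value is None else t_value   # the intervention overrides T's usual causes
    e = 3.0 * t + random.gauss(0.0, 0.5)          # putative effect variable, read out by a detector
    return t, e

random.seed(0)
observed = [target_system()[1] for _ in range(1000)]                 # no intervention
manipulated = [target_system(t_value=1.0)[1] for _ in range(1000)]   # set T = 1 and detect E
print("mean E without intervention:", round(sum(observed) / len(observed), 2))
print("mean E with T set to 1:     ", round(sum(manipulated) / len(manipulated), 2))

Comparing the detected values of E across the two conditions is the schematic core of the causal experiment represented in Figure 1.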
Intervention checks detect the value of the target variable to see if the
intervention succeeded in setting T to the desired value. In lesion experi-
ments, for example, one confirms the location of the lesion using CT scans
[Figures not reproduced: Figure 1 schematizes the standard causal experiment (intervention I on target variable T in the target system, detection D of effect variable E); Figure 2 schematizes the idealized constraints on such an intervention; a further diagram depicts the structure of a complex, multi-part intervention.]
allow the experimenter to control neurons with light. Others, such as the
light, are triggering causes.
Many experiments involve more than one complex structural cause. The
target system, for example, may be surgically prepared or pre-treated to
make the experiment possible or convenient. The animal might have been
trained on a task or acclimated to an experimental setting. The target sys-
tem might be set into a baseline state against which the experimental effect
is evaluated. In an experiment discussed below, researchers first intervene
to stimulate the basolateral amygdala (BlA) with direct current; they do so
in order to test the effects of a second intervention, optogenetic inhibition
of BlA terminals in the central amygdala (CeA), on indicators of anxiety.
This brief example illustrates that to represent experimental interven-
tions as a single arrow into a single target T is often an extreme idealiza-
tion. Different ways of organizing multiple interventions give different
experiments their power to answer different kinds of questions about
the causal structure of the target system. Precisely because such causal
organization of experimental systems is key to their epistemic power,
an adequate philosophy of experiment must move beyond this idealized
simplification (See Craver and Darden 2013).
An adequate philosophy of experiment also must avoid assuming that
the idealized constraints represented in Figure 2 must be satisfied for an
experiment to be epistemically valuable. Progress in intervention some-
times involves discovering previously unrecognized violations of such
principles and correcting them, but not always. In some experimental
setups, it is impossible or undesirable to change T in a way that makes
its value independent of T’s other causes (as required in 1). One might
wish, for example, to increase the amount of dopamine in a system while
allowing endogenous dopamine to vary under the influence of its typi-
cal, physiological causes. Interventions are also frequently ham-fisted,
changing many potentially relevant variables at once, if only because
technology does not permit a more surgical intervention. I discuss below
some situations in which ham-fisted interventions might be preferred.
And for the purposes of maker’s knowledge, a useful intervention might
be non-ideal and nonetheless uniquely useful.
These preliminaries in place, let us consider some dimensions15 along
which one evaluates the quality of an intervention method. Table 1 lists
11 epistemically relevant dimensions along which intervention tech-
niques might vary from one another. I discuss each in turn, using early
work on optogenetics as illustrative examples.
Table 1 Epistemically relevant dimensions along which intervention techniques may vary
• Which variables?
    • Number of (relevant) variables controlled
    • Selectivity
    • Physiological relevance
• Within variables
    • Range
    • Grain
    • Nature of change
        • Valence
        • Reversibility
        • Physiological relevance
• Level of control
    • Efficacy
    • Dominance
    • Determinism
put talk of “codes” and “representations” on firmer epistemic footing.16
So, beginning with the obvious: we make progress in intervention as we
expand our ability to manipulate more and more of the variables that
potentially make a difference to how things we care about work.
[Figures not reproduced: schematics contrasting an intervention on a single variable with interventions on several variables (V1, V2, V3), and contrasting fine-grain with coarse-grain interventions over a large range of target values.]
value it can take. To the right is an intervention technique that allows the
researcher to set the putative cause variable to only part of the range of
possible values. The grain is the same, but its range is diminished relative
to the case just mentioned. On the bottom is a technique that covers the
same range as the first but at a coarser grain. The technique can set the
variable only to a value somewhere within a block of values.
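Schematically (and purely for illustration; the interval, grain size, and example values below are assumptions), range and grain can be pictured as two filters through which any requested setting of the target variable must pass:

def achievable_setting(requested, lo, hi, grain):
    """Map a requested value of T to the nearest value the technique can actually produce."""
    clipped = min(max(requested, lo), hi)   # limited range: values outside [lo, hi] are unreachable
    steps = round((clipped - lo) / grain)   # limited grain: only multiples of `grain` above lo
    return lo + steps * grain

print(achievable_setting(37.3, lo=0.0, hi=100.0, grain=0.1))    # fine grain: ~37.3
print(achievable_setting(37.3, lo=0.0, hi=100.0, grain=10.0))   # coarse grain: 40.0
print(achievable_setting(137.3, lo=0.0, hi=100.0, grain=0.1))   # outside the range: 100.0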
3.6 Bivalence
A related, but distinct, dimension of progress in intervention is to move
from interventions that can manipulate the value of a variable only in
one direction to interventions that can manipulate the value of the vari-
able in both directions. Lesion studies, for example, remove but cannot restore.
Figure 6: Bivalent interventions can increase and decrease the value of a variable in the same target system. (Diagram not reproduced.)
3.7 Efficacy
As noted in Section 2, interventions are typically complex causal se-
quences, starting with an action and ending with the change in the target
variable. Just as an experimenter’s efforts to detect a given variable can
be complicated by, for example, failures anywhere in the detection appa-
ratus or in the process by which the results are stored and manipulated,
the effort to intervene into a target system might be complicated by fail-
ures anywhere in the complex causal chain in the intervention method.
An experimenter might fail to take the appropriate action. Or she
could take the appropriate action, but her device might glitch and so fail
to initiate the causal sequence. Or the target system might somehow foil
the would-be intervention. In a drug trial, for example, the subject might
forget to take the pill, or the patient might have a stomach enzyme that
degrades the drug before it enters the blood. Bracketing experimenter
error, the efficacy of an intervention technique is the reliability with
which the intervention technique produces the desired change to the tar-
get variable.
Efficacy is in part a matter of objective frequency: given that an ex-
perimenter decides to set T to a target value (or target range), and given
that she has activated her instruments to initiate the chosen intervention,
what is the probability that T actually takes the target value (or falls
within the target range)? Or for non-binary values, what is the variance
in the value of T given the intervention to set T to a particular value?
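A small simulation (Python; the failure rate, noise level, and tolerance are invented for illustration) shows how both readings of efficacy can be estimated from repeated attempts to set T to a target value:

import random, statistics

random.seed(1)
target, tolerance = 1.0, 0.2
outcomes = []
for _ in range(10000):
    if random.random() < 0.15:                      # assumed 15% outright failure rate
        outcomes.append(0.0)                        # the intervention never reaches T
    else:
        outcomes.append(random.gauss(target, 0.1))  # noisy but roughly on-target setting of T

hits = [t for t in outcomes if abs(t - target) <= tolerance]
print("efficacy, P(T within tolerance):", round(len(hits) / len(outcomes), 3))
print("variance of T across attempts:  ", round(statistics.variance(outcomes), 3))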
In the case of optogenetics, two measures of efficacy have been partic-
ularly important. One is the extent to which the technique succeeds in
causing the expression of rhodopsin channels in the membranes of all and
only the target cells. Tsai et al. (2009) describe one such result as follows:
3.8 Dominance
The dominance of an intervention is the extent to which it satisfies con-
dition (4) in Woodward’s account of ideal interventions (see U in Figure
2). In an ideal intervention, the intervention technique sets T to a value
independent of T’s other causes. Such an intervention screens the value
of T off from all T’s other causes.
Woodward includes this requirement because T’s other causes might
change T’s value in unintended ways, foiling the causal inference. Sup-
pose, for example, one intervenes to reduce the value of T and fails to
notice a change in E. Perhaps this is because other causes of T compen-
sate for the intervention or coincidentally raise the value of T, erasing the
effect of the intervention. In that case, one should not conclude that the
induced change is causally irrelevant to the effect.21
But interventions need not screen T off from its other causes to be
useful, as Eberhardt (2009) demonstrates in his consideration of hard
and soft interventions. Hard interventions remove all causal arrows into
T besides the intervention. T’s value is influenced, if at all, only by the
intervention. Soft interventions do not fix T’s value; rather, they change
the conditional probability distribution over T’s values. In other words,
hard interventions are arrow-breaking, eliminating the influence of T’s
other causes; soft interventions are arrow-preserving, leaving at least
some of the parental influence on T intact.
Optogenetic interventions have been used in both hard and soft in-
terventions. A single neuron in a dish, for example, can be removed
from its cellular context leaving the light stimulus as the only exogenous
determinant of cellular activity.22 That would be a hard intervention. In
contrast, researchers using step function opsins simply raise the mem-
brane potential to make it more probable that an incoming signal will in
fact lead to the generation of an action potential in the post-synaptic cell.
This is a soft intervention.
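The contrast can be illustrated with a toy structural model (Python; the causal structure and all numerical values are assumptions): a hard intervention fixes T outright and severs its dependence on its parent C, whereas a soft intervention merely shifts T's distribution and leaves that dependence intact:

import random

def run(mode=None, shift=2.0):
    c = random.gauss(0.0, 1.0)                  # T's endogenous parental cause
    t = 1.5 * c + random.gauss(0.0, 0.3)        # T normally listens to C
    if mode == "hard":
        t = 2.0                                 # arrow-breaking: T's value is fixed outright
    elif mode == "soft":
        t = t + shift                           # arrow-preserving: T's distribution is shifted
    e = 0.8 * t + random.gauss(0.0, 0.2)
    return c, t, e

random.seed(2)
for mode in (None, "hard", "soft"):
    samples = [run(mode) for _ in range(5000)]
    cs = [s[0] for s in samples]
    ts = [s[1] for s in samples]
    mc, mt = sum(cs) / len(cs), sum(ts) / len(ts)
    cov = sum((c - mc) * (t - mt) for c, t in zip(cs, ts)) / len(cs)
    # C and T remain correlated under the soft intervention but not under the hard one.
    print(mode, "cov(C, T) =", round(cov, 2))

The covariance between C and T survives the soft intervention but vanishes under the hard one, which is what arrow-preserving and arrow-breaking amount to in this miniature setting.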
Eberhardt and Scheines (2007) discuss some of the advantages of hard
interventions for learning about a system’s causal structure. First, if one
uses a hard intervention, one can easily infer that the observed correla-
tion between T and the putative effect variable is not due to the action
of a common cause. By definition, a hard intervention severs the effect
of any such common cause on T. Second, hard interventions allow one
to assess the direction of causal influence between two correlated vari-
ables, i.e., to distinguish cases in which A causes B from cases in which
B causes A. Finally, because hard interventions allow the experimenter
to set the value of the cause variable, they give the researcher informa-
tion that might be used, for example, to characterize the strength of the
causal relationship.
In other cases, hard interventions are either impossible or undesirable.
They are practically impossible when we don’t know how to break all
the causal arrows into T. They are undesirable in cases in which one
wants to leave the causal structure of the system intact, for experimen-
tal or practical reasons. The value of non-dominating interventions is
perhaps most apparent in the domain of maker’s knowledge. Many stan-
dard medical interventions are non-dominating. Insulin injections for
diabetes, for example, augment rather than replace endogenous insulin
production. L-Dopa treatments for Parkinson’s disease augment rather
than replace endogenous dopamine levels. In such cases, one intervenes
to boost residual capacities of the system and/or to encourage it to com-
pensate for its weaknesses, not to replace it with an artificial control
system (as a heart and lung machine replaces hearts and lungs while they
are off-line).
Modelers can also usefully deploy soft interventions. Eberhardt and
Scheines (2007) demonstrate, for example, that there are conditions in
which multiple simultaneous soft interventions on a system suffice to
determine the system’s causal structure in a single experiment (unlike
hard interventions). They show further that if only a single interven-
tion is allowed per experiment, then the choice between hard and soft
interventions makes no difference to the rate at which the data from
the series of experiments will converge on the correct causal structure
(Eberhardt and Scheines 2007). These results, however, are bracketed
by the assumption that one knows all the variables sufficient to deter-
mine the value of the effect variable. If there are possibly unmeasured
common-cause confounds, as is likely the case in most biological exper-
iments, then hard, dominating interventions have the clear advantage in
trying to learn the system’s causal structure.
The move from soft interventions to hard interventions (or vice versa)
is not necessarily a form of progress in the ability to intervene in a system.
Hard and soft interventions should rather be seen as distinct interven-
tion strategies that can be used to solve different kinds of experimental
and practical problems. But in many discovery contexts, there is a clear
reason to prefer dominating interventions.
3.9 Determinism
The last dimension I consider is determinism. Some interventions are de-
terministic. Others are probabilistic. In a deterministic intervention, the
intervention technique is used to set the value of the target variable to one
and only one value. In indeterministic interventions, one allows a range
of possible interventions around a mean, for example. In different con-
texts, deterministic or indeterministic interventions might be warranted.
In several of the early optogenetic experiments, researchers intervened
by stimulating cells with pulses of light spaced around a mean interspike
interval (e.g., 100 ms). They allowed the precise timing of the spikes to
vary in a normal distribution around that mean.
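The two stimulation schedules can be written down directly. The sketch below (Python; the 100 ms mean interval, the jitter, and the pulse count are assumptions for illustration) generates a deterministic train with a fixed interspike interval and an indeterministic train whose intervals are drawn from a normal distribution around the same mean:

import random

def deterministic_train(mean_isi=0.100, n=20):
    """Pulse times with a fixed interspike interval."""
    return [i * mean_isi for i in range(1, n + 1)]

def jittered_train(mean_isi=0.100, sd=0.020, n=20, seed=3):
    """Pulse times with intervals drawn from a normal distribution around the mean."""
    rng = random.Random(seed)
    times, t = [], 0.0
    for _ in range(n):
        t += max(0.001, rng.gauss(mean_isi, sd))  # keep intervals positive
        times.append(t)
    return times

det, jit = deterministic_train(), jittered_train()
print("deterministic mean interval (s):", round(det[-1] / len(det), 3))
print("jittered mean interval (s):     ", round(jit[-1] / len(jit), 3))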
Which sort of intervention is most appropriate depends on the hy-
pothesis under test. In these experiments, researchers wanted merely to
demonstrate that the method could generate action potentials at rates in
the approximately physiological range and, crucially, that they could do so
without presuming any particular pattern of stimulation. One might, how-
ever, hypothesize that mean interspike interval is the difference-making
variable for the effect of interest (i.e., the system functions with a rate
code). In that case, the experimenter might choose an indeterministic in-
tervention to rule out the confounding possibility that observed effects are
due to a particular spike sequence (e.g., regularly spaced spikes at 20 Hz).
Interventions, it should be acknowledged, are typically de facto inde-
terministic because complex intervention mechanisms might always fail.
Most real-world interventions are probabilistic to some extent, if only
due to differences in efficacy and dominance. People make mistakes.
No intervention instrument is perfect. Experimental subjects might have
individual differences in how they respond to the intervention. Any for-
mal treatment of the epistemology of interventions must accommodate
the fact that interventions are often chancy affairs, and that chance can
enter into a complex intervention mechanism at many different stages.
3.11 Summary
Table 1 lists the epistemically relevant ways one intervention might differ
from another, as discussed above. In some cases, movement from one end
of that dimension to another is meaningfully interpreted as progress. In
other cases, any preference along that dimension depends fundamentally
on the empirical or practical problem the researcher faces. Such
context-relativity, however, is underwritten by a deeper sense that there
are distinct kinds of causal question and that distinct forms of interven-
tion can be used most appropriately or efficiently to address them.
Notes
1 Thanks to the Minnesota Causality and Biology workshop for comments on
early drafts, especially John Beatty, David Danks, Christopher Hitchcock,
Sarah Roe, Roberta Millstein, Ken Waters, and Marcel Weber. Thanks also
to Andreas Hütteman’s Research Group on Causality and Explanation at
the Universität zu Köln for helpful discussion in Spring 2011. Paul Stein and
Mark Alford offered comments on a draft. Pamela Speh assisted with figure
production. Marshall M. Weinberg and the Molecular, Cellular, & Develop-
mental Biology Department at the University of Michigan provided funds and
a workshop that called my attention to this interesting topic in the first place.
2 This paper was initially produced in August 2012, in the early days of opto-
genetics. The technique has since become a standard tool in neuroscience. I
do not review all subsequent developments; but perhaps a focus on the early
days is justified as especially important for illustrating the epistemic norms
that determine whether a technique is accepted.
3 Not all models are mechanistic models (see Bogen 2005; Craver 2007;
Kaplan and Craver 2011). Here I focus on mechanistic modelers, reverse
engineers. Makers, in contrast, are engineers.
4 Martin Carrier (2004) discusses the relationship between applied and basic
science, which is related to the narrower distinction between modelers and
makers I have in mind. I take no stand on whether one kind of knowledge is
more fundamental than the other and am content to note only that they are
distinct when it comes to an epistemology of intervention. Sometimes, the
modeler and the maker combine, as in the use of hybrid models, to deliver
theoretically motivated interventions based on modeler’s knowledge (e.g.,
Prinz et al. 2004).
5 Chang (2004) discusses aspects of detection validity in the history of the
thermometer.
6 Franklin (1990) grounds his account of these strategies in Bayesian statis-
tics. Mayo (1996) provides an alternative analysis grounded in error statis-
tics and severity of testing. Weber (2012) reviews some of the advantages
and disadvantages of these and other approaches to the epistemology of
detection.
7 Franklin (2009) describes two biological experiments: Kettlewell’s ex-
periments on moths and the Meselsohn-Stahl experiment demonstrating the
semi-conservative nature of DNA replication. In Kettlewell’s experiments,
the intervention (industrial pollution) is out of the experimenter’s hands.
The primary interventions in the Meselson-Stahl experiment are (a) the ra-
diolabeling of Nitrogen, and (b) the replication of bacteria. The first of these
interventions is entirely in the service of detecting the difference between
parental DNA and offspring DNA (it is an eliciting condition). The second
is left to the bacteria themselves; the experimenter decides only when to stop
the process. His exemplars thus do not afford much opportunity to consider
interventions.
8 My focus on epistemic norms contrasts with pragmatic and other concerns, such as cost, ease of use, learning curves, moral norms, technological demands, and public acceptance, which obviously also influence which tech-
niques thrive and which founder.
9 Some use the term optogenetics to describe both intervention and detection
techniques (e.g., Wiens and Campbell 2018). Optogenetic detection tech-
niques, for example, use gene constructs to make cells fluoresce when they express a protein. Others are used to activate intracellular sig-
naling molecules or to uncage biologically active molecules. Here I focus on
interventions involving channel-rhodopsins inserted into neural membranes.
10 For accessible reviews, see Deisseroth (2010, 2011), and Fenno et al. (2011).
An early paper presenting the technique is Zhang et al. (2006). For a discus-
sion of application in nonhuman primates, see Diester et al. (2011).
11 Rheinberger uses the term “experimental system” to refer to, for example, the experimental subject, the intervention and detection techniques,
lab protocols, preparatory procedures, storage devices, data analysis tech-
niques, and the like (see Weber 2005; Rheinberger 1997). Rheinberger de-
scribes experimental systems as the least unit of experimental analysis. I
focus exclusively on intervention techniques, leaving the rest of Rheinberg-
er’s experimental system as background.
12 They are not interventions on T with respect to E (see Woodward 2003).
Hacking (1983) uses staining as an example of the role of intervention in the
growth of scientific knowledge. But eliciting conditions do not play the same
epistemic role as interventions into putative causes. Kästner (2017) captures
this useful point by distinguishing interventions, in Woodward’s technical
sense, from mere interactions.
13 Woodward’s account of an ideal intervention does not include requirement
(5), as it is irrelevant to the semantics of causal claims. It is, however,
relevant to how causal claims are tested and how artifacts are avoided (see
Craver and Dan-Cohen 2021). Likewise, in experiments crucially involving
an intervention check, one would want to know if the intervention influences
the intervention check independently of the change induced in the target
variable T.
14 Not all useful interventions are ideal in Woodward’s proprietary sense:
Woodward’s goal is to provide a semantics of causal claims (that is, an an-
swer to the question, “What do we mean when we assert that X causes Y?”).
He does not provide, nor does he intend to provide, an account of the con-
straints that an experiment must satisfy to reveal useful information about
the causal structure of a system. Nor does he intend his account to serve as a
model of all causal experiments.
15 The term dimension is intended informally. Still, it conveys the key idea
that the usefulness of an intervention might depend on a number of inde-
pendently varying features of the intervention.
16 This idea underlies Hacking’s (1983) use of interventions as arguments for
realism.
17 One problem with current-generation optogenetics is that different cells ex-
press the opsin molecules to a different extent, yielding a differential re-
sponse to the same light stimulus and perhaps altering system-level behavior.
See Heitmann et al. (2017). This difference might explain the early failures
of optogenetic stimulation of primate motor cortex to initiate the required
movements (see Lu et al. 2015).
18 Optogenetic stimulation does differ from physiological mechanisms in ways
that might prove important. The temporal properties of the action potentials
produced by bacterial rhodopsins are not precisely the same as in standard
action potentials. As noted above, optogenetic stimulation drives the pop-
ulation of neurons all at once, more or less synchronously. Because popula-
tions of neurons interact in virtue of patterns of activation and inactivation
across populations, the inability to produce such distributed patterns is a
potential physiological limitation of the present-day method.
19 Judgments of typicality presuppose a choice of a reference class (think, for
example, of the normal cancer cell or the typical mammalian cell express-
ing bacterial rhodopsin). Judgments of normality are inherently tinged by a
preference for health, life, the good of the species, or some such valued end
(see Craver 2014).
20 Jacques Loeb revealed himself as a paradigmatic “maker” when he wrote to
Mach:
The idea is now hovering before me that man himself can act as a creator
even in living nature, forming it eventually according to his will. Man
can at least succeed in a technology of living substance. Biologists label
that the production of monstrosities; railroads, telegraphs, and the rest of
the achievements of the technology of inanimate nature are accordingly
monstrosities. In any case, they are not produced by nature; man has
never encountered them.
(in Pauly 1987, p. 51)
21 One can use intervention checks to confirm the intended value of the target
variable. Tye et al. (2011) use this approach. They record from the CeA
during two major interventions. The first stimulates the BlA. The second in-
hibits the cells of the CeA onto which the cells of the BlA project. The detec-
tion technique located in the CeA is designed to check that the interventions
(electrophysiological stimulation in the BlA and light inhibition of the CeA)
changed the activity of CeA cells as desired.
22 Even under such circumstances, one might still expect occasional “noise”
in the system. Ion channels are chancy machines; one might inhibit elec-
trical activity in a cell significantly and still see occasional spikes. In other
experiments, however, the light pulses are delivered in addition to the inputs at
dendrites that might generate spikes on their own.
References
Allen, G. (forthcoming). A History of Genetics, 1880–1980: Its Economic,
Social and Intellectual Context.
Bacon, F. (1620). The New Organon. In G. Rees (ed.). The Oxford Francis
Bacon (Vol. 11). Oxford: Clarendon Press. Revised 2004.
Bacon, F. (1626). The New Atlantis. In Ideal Commonwealths. New York: P.F. Collier & Son, 1901. Public-domain e-text: http://oregonstate.edu/instruct/phl302/texts/bacon/atlantis.html.
Bogen, J. (2005). Regularities and Causality; Generalizations and Causal Ex-
planations. In C. F. Craver and L. Darden (eds.), Mechanisms in Biology,
Studies in History and Philosophy of Biological and Biomedical Sciences,
36: 397–420.
Boyden, E.S., Zhang, F., Bamberg, E., Nagel, G., Deisseroth, K. (2005).
Millisecond-timescale, genetically targeted optical control of neural activity.
Nature Neuroscience, 8: 1263–1268.
Carrier, M. (2004). Knowledge and Control: On the Bearing of Epistemic Val-
ues in Applied Science. In P. Machamer and G. Wolters (eds.), Science, Values
and Objectivity. Pittsburgh, PA: University of Pittsburgh Press; Konstanz:
Universitätsverlag, 275–293.
Chang, H. (2004). Inventing Temperature: Measurement and Scientific Prog-
ress. New York: Oxford University Press.
Craver, C.F. (2007). Explaining the Brain: Mechanisms and the Mosaic Unity
of Neuroscience. Oxford: Clarendon Press.
Craver, C.F. (2010). Prosthetic Models. Philosophy of Science, 77: 840–851.
Craver, C.F. (2014). Functions and Mechanisms: A Perspectivalist Account. In
P. Hunneman (ed.), Functions: Selection and Mechanisms. Springer, Synthese
Press, 133–158.
Craver, C.F. and Dan-Cohen, T. (2021). Experimental Artifacts. British Journal
for the Philosophy of Science. doi: 10.1086/715202.
Craver, C.F. and Darden, L. (2013). The Search for Mechanisms: Discoveries
across the Life Sciences. Chicago, IL: University of Chicago Press.
Deisseroth K. (2010). Controlling the Brain with Light. Scientific American,
303: 48–55.
Deisseroth, K. (2011). Optogenetics. Nature Methods, 8: 26–29.
Diester, I., Kaufman, M.T., Mogri, M., Pashaie, R., Goo, W., Yizhar, O., Ra-
makrishnan, C., Deisseroth, K., Shenoy, K.V. (2011). An Optogenetic Tool-
box Designed for Primates. Nature Neuroscience, 14: 387–397. Epub Jan 30.
Dretske, F. (1988). Explaining Behavior: Reasons in a World of Causes. Cam-
bridge: MIT Press. A Bradford Book.
Eberhardt, F. (2009). Introduction to the Epistemology of Causation. The Phi-
losophy Compass, 4(6): 913–925.
Eberhardt, F., Scheines, R. (2007). Interventions and Causal Inference. Philoso-
phy of Science, 74: 981–995.
Fenno, L.E., Yizhar, O., Deisseroth, K. (2011). The development and applica-
tion of optogenetics. Annual Review of Neuroscience, 34: 389–412.
Franklin, A. (1986). The Neglect of Experiment. Cambridge: Cambridge Uni-
versity Press.
Franklin, A. (1990). Experiment, Right or Wrong. Cambridge: Cambridge Uni-
versity Press.
Franklin, A. (2012). Experimentation in Physics. Stanford Encyclope-
dia of Philosophy. Available online at http://plato.stanford.edu/entries/
physics-experiment/.
Gunaydin, L.A., Yizhar, O., Berndt, A., Sohal, V.S., Deisseroth, K., Hegemann,
P. (2010). Ultrafast Optogenetic Control. Nature Neuroscience, 13: 387–392.
Hacking, I. (1983). Representing and Intervening. Cambridge: Cambridge Uni-
versity Press.
Heitmann, S., Rule, M., Truccolo, W., Ermentrout, B. (2017). Optogenetic Stimulation Shifts the Excitability of Cerebral Cortex from Type I to Type II: Oscillation Onset and Wave Propagation. PLOS Computational Biology, 13(1): e1005349.
Kaplan, D.M., Craver, C.F. (2011). The Explanatory Force of Dynamical Mod-
els. Philosophy of Science 78: 601–627.
Lobo, M.K., Covington, H.E., Chaudhury, D., Friedman, A.K., Sun, H.,
Damez-Werno, D., Dietz, D.M., Zaman, S., Koo, J.W., Kennedy, P.J., Mou-
zon, E. (2010). Cell Type–Specific Loss of BDNF Signaling Mimics Optoge-
netic Control of Cocaine Reward. Science 330(6002): 385–390.
Lu, Y., Truccolo, W., Wagner, F.B., Vargas-Irwin, C.E., Ozden, I., Zimmermann, J.B., May, T., Agha, N.S., Wang, J., Nurmikko, A.V. (2015). Optogenetically Induced Spatiotemporal Gamma Oscillations and Neuronal Spiking Activity in Primate Motor Cortex. Journal of Neurophysiology, 113(10): 3574–3587.
Kästner, L. (2017). Philosophy of Cognitive Neuroscience: Causal Explana-
tions, Mechanisms & Empirical Manipulations. Berlin: Ontos/DeGruyter.
Kolar, K., Knobloch, C., Stork, H., Žnidarič, M., Weber, W. (2018). Opto-
Base: A Web Platform for Molecular Optogenetics. ACS Synthetic Biology, 7:
1825–1828. doi: 10.1021/acssynbio.8b00120.
Mayo, D.G. (1996). Error and the Growth of Experimental Knowledge. Chi-
cago, IL: University of Chicago Press.
Pauly, P.J. (1987). Controlling Life: Jacques Loeb and the Engineering Ideal in
Biology. New York: Oxford University Press.
Prinz, A., Abbott, L.F., Marder, E. (2004). The Dynamic Clamp Comes of Age.
Trends in Neurosciences 27: 218–24. doi: 10.1016/j.tins.2004.02.004.
Rheinberger, H.-J. (1997). Toward a History of Epistemic Things. Stanford,
CA: Stanford University Press.
Sargent, R.-M. (2001). Baconian Experimentalism: Comments on McMullin’s
History of the Philosophy of Science. Philosophy of Science 68: 311–317.
Sargent, R.-M. (2012). Bacon to Banks: The Vision and the Realities of Pur-
suing Science for the Common Good. Studies in History and Philosophy of
Science 43: 82–90.
Tsai, H.C., Zhang, F., Adamantidis, A., Stuber, G.D., Bonci, A., de Lecea, L.,
Deisseroth, K. (2009). Phasic Firing in Dopaminergic Neurons Is Sufficient
for Behavioral Conditioning. Science, 324: 1080–1084.
Tye, K.M., Prakash, R., Kim, S.Y., Fenno, L.E., Grosenick, L., Zarabi, H.,
Thompson, K.R., Gradinaru, V., Ramakrishnan, C., Deisseroth, K. (2011).
Amygdala Circuitry Mediating Reversible and Bidirectional Control of Anx-
iety. Nature, 471: 358–362.
Waters, K. (2008). How Practical Know-How About Experimentation Contex-
tualizes Theoretical Knowledge. Philosophy of Science, 75: 707–719.
Waters, K. (2014). Shifting Attention from Theory to Practice in Philosophy
of Biology. In M.C. Galavotti, D. Dieks, W.J. Gonzalez, S. Hartmann, T.
Uebel, M. Weber (eds.) New Directions in the Philosophy of Science. Berlin:
Springer International Publishing, 121–139.
Weber, M. (2005). Philosophy of Experimental Biology. Cambridge: Cam-
bridge University Press.
Weber, M. (2012). Experimentation in Biology. Stanford Encyclope-
dia of Philosophy. Available online at http://plato.stanford.edu/entries/
biology-experiment/.
Wiens, M.D., R. Campbell. (2018). Surveying the Landscape of Optogenetic
Methods for Detection of Protein–Protein Interactions. Wiley Interdisciplin-
ary Reviews. Systems Biology and Medicine. doi: 10.1002/wsbm.1415.
Woodward, J. (2003). Making Things Happen. New York: Oxford University
Press.
Zhang, F., Wang, L.P., Boyden, E.S., Deisseroth, K. (2006). Channelrhodopsin-
2 and Optical Control of Excitable Cells. Nature Methods, 3: 785–792.
8 Triangulating Tools in the
Messiness of Cognitive
Neuroscience
Antonella Tramacere
DOI: 10.4324/9781003251392-11
experimental situations (Culp, 1994; Wimsatt, 2012). Triangulation has
received considerable attention in the literature (Soler et al., 2012). Vari-
ous case studies have shown that scientists (at least sometimes) appeal to
triangulation to determine the degree of robustness about objects, and
thereby validate their hypotheses.
I will discuss triangulation in cognitive neuroscience. Identifying the
role of brain processes and mechanisms underlying cognition is the goal
of cognitive neuroscience. To make a parallel with Gina’s case, neurosci-
entists aim to identify some yet undefined object. But in the case of cog-
nitive neuroscience, the undefined stuff is constituted by brain processes,
operations, and functions in cognition. Additionally, as in Gina’s case, pursuing this goal is made difficult by the limitations of each of the methods that can be used to investigate the object of interest.
In fact, in cognitive neuroscience, the role of brain mechanisms un-
derlying cognition can only be indirectly investigated, because of the
intrinsic limitations of experimental interventions and of the techniques used to measure the variables of interest (Weichwald & Peters, 2021). Neuroscien-
tists study brain activity and connectivity by using various tools (widely
understood as methods, techniques, or procedures), providing data that
are correlative in nature, and dependent on confounding factors that
may produce partial or contradictory results. Not only may the different
tools yield different outputs at the level of raw data, but they may not
even focus on the same aspects of the phenomenon, thus making it un-
clear whether they provide evidence for the same hypothesis.
Using different methods or tools for investigating the role of the brain
in psychological phenomena inevitably produces a considerable amount
of diverse data that await integration and validation. Integration
and validation are needed because we want to know how data and find-
ings produced by different tools relate to each other and to what extent
they provide support for inferences bridging neural data to psychological
phenomena. Given the limitations of tools leading to diverse (types of)
data and potentially discordant results, could triangulation help with in-
tegrating findings and validating hypotheses about the role of the brain
in cognition?
This is not a trivial question. To date, much philosophical ink has
been spilled on the question of whether and to what extent triangulation
warrants belief in the triangulated-upon object, but less attention has
been paid to the ways in which discordance is itself epistemically valuable. Examining that neglected question is the goal of this chapter. I will first present a case study where
triangulating different tools (call this methodological triangulation) has
successfully increased the confidence in our understanding of a causal
relation. Specifically, I will analyze research on the causal role of high
values of systolic blood pressure in coronary heart disease. I will then
present a comparatively messy case from cognitive neuroscience to
analyze the epistemic force of triangulation when data and findings are
inconsistent and divergent. Specifically, I will focus on research about
the role of the mirror neuron system in autism spectrum disorder. By
comparing this case study to the previous one, I will show that triangu-
lating tools has comparable value in both cases.
Triangulation is useful when tools have limitations, independently of whether results are concordant, because incoherence and divergence are not insurmountable problems for the validation of robustness. As
in Gina’s story, the utility of triangulation is not necessarily conditioned
on reaching the same conclusion with all information sources, but rather
on acquiring multidimensional information on objects and a modest
unification of phenomena.
2 Triangulation to Robustness
Triangulation refers to the use of multiple independent lines of deter-
mination to ascertain whether an object is robust, namely, to rule out that it is an artifact of a particular method. The more a phenomenon
is confirmed under multiple independent lines of determination, the
more it is likely to be robust (Wimsatt, 2012). Triangulation as multiple
determination connects to an epistemic conception of robustness, as it
is used to deliver reliable connections between a claim and some fact
about the world. This conception of robustness is particularly relevant to
methodological triangulation because multiple tools for inquiring into a
phenomenon may or may not produce the same results or consistent in-
formation. Consequently, it is important to ascertain whether and how
various lines of evidence point to the same conclusion despite different
modalities or experimental procedures.
Robustness is not an all-or-nothing construct, because it comes in de-
grees and depends on the context of validation. This means that objects
or conclusions are evaluated against a set of contextual factors, depending
on the question asked by scientists, the formulated hypothesis, and the
specific methods used in the experimental setting. I will say more about
the context-dependent features of robustness throughout the chapter.
Many philosophers apply the term “robustness” strictly in the context
of debates about models and model outcomes (see Odenbaugh & Alex-
androva, 2011; Orzack & Sober, 1993; Weisberg, 2006). According to
these philosophers, the term ‘robustness’ indicates that model outcomes
do not change with a fixed body of data when assumptions are varied. I
do not use robustness in this sense. I am instead concerned with robust-
ness understood as a form of validation of phenomena through triangu-
lation of different tools (see Cartwright, 1991, and Woodward, 2006, for analyses of varieties of robustness).
Because independence is a condition of possibility for robustness (but
see Stegenga & Menon 2017 for a critical discussion), scholars have
largely discussed what makes one tool independent from another. Ac-
cording to a widely used notion, tools are independent when they have
unrelated sources of potential biases or confounding factors (Kuorikoski
& Marchionni, 2016; Munafò & Smith, 2018; Woodward, 2006).
When the possibility of incurring an error with one tool is unrelated to the possibility of error with another tool, these tools can be regarded as independent. This implies that, to counterbalance the flaws or weaknesses of one tool with the strengths of another, we need multiple tools that rely on independent assumptions.
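By way of illustration, here is a minimal numerical sketch, in Python, of why independence in this sense matters. The sketch is mine, not the chapter's; the error rates are invented placeholders, and the function posterior_real simply applies Bayes' theorem under the assumption that the tools err independently of one another, conditional on whether the detected object is real or an artifact.

```python
# A minimal sketch (not from the chapter) of why concordance across
# independent tools raises confidence that a finding is not an artifact.
# All numbers are illustrative assumptions, not empirical values.

def posterior_real(prior, true_pos_rates, false_pos_rates):
    """Posterior probability that the detected object is real, given that
    every tool reports it, assuming the tools' errors are independent
    conditional on whether the object is real or an artifact."""
    p_reports_if_real = 1.0
    p_reports_if_artifact = 1.0
    for tp, fp in zip(true_pos_rates, false_pos_rates):
        p_reports_if_real *= tp
        p_reports_if_artifact *= fp
    numerator = prior * p_reports_if_real
    return numerator / (numerator + (1 - prior) * p_reports_if_artifact)

# Each tool alone is mediocre: it reports the object 30% of the time
# even when the object is an artifact.
one_tool = posterior_real(0.5, [0.8], [0.3])
three_tools = posterior_real(0.5, [0.8, 0.8, 0.8], [0.3, 0.3, 0.3])
print(f"one tool:    {one_tool:.2f}")     # ~0.73
print(f"three tools: {three_tools:.2f}")  # ~0.95
```

If the tools instead shared a bias, say a common confound producing the same spurious reading in all of them, multiplying their false-positive rates in this way would overstate the evidential gain; this is precisely why unrelated sources of error are required.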
Remember Gina’s case. She used different sensory modalities to tri-
angulate on the undefined object, because one single modality, such as
vision, could not provide conclusive evidence. On the contrary, the com-
bination of different modalities helped to counterbalance the weakness
of each one. In one sense, the sensory modalities are independent because
they are subject to different sensory biases. Nevertheless, the sensory modalities pertain to the same subject, that is, Gina, and are therefore
dependent on potential cognitive biases that she may have. This sug-
gests that the independence of different tools is always contextual and
depends on the type of tool. Triangulation for different questions, with different tools, or with different subjects, would require different criteria
of independence.
Triangulation also requires that evidence is commensurable, namely
tractable within a consistent set of background assumptions. Since
different methods often use “different languages” to generate data, it
is possible that scientists are not able to amalgamate them to validate
empirical claims about objects. Correctly individuating independence,
while avoiding incommensurability, has been considered a hard problem
for triangulation (Stegenga, 2009). Using different methods to inquire
into an object could increase the chance of errors, and since different
methods often rely on different sets of assumptions, any attempt to com-
bine them may lead to confusion.
Consequently, it seems that triangulation relies on integrating com-
mensurable and reliable data. We need to scrutinize data that depend
on limitations of tools or study design, and we need to know whether
and how different data pertain to the question under investigation. It has been noted, however, that if practitioners knew that the methods
used produce reliable and commensurable data about an investigated
phenomenon, they would not need triangulation (Hudson, 2013; Ste-
genga, 2009). They could simply use a single method for investigating the phenomenon, chosen for its precision and reliability.
These observations have produced skepticism about the value of trian-
gulation. If triangulation requires that evidence is reliable and commen-
surable, the use of multiple independent methods would be restricted
to specific situations. More specifically, triangulation would be valu-
able only when we know that the different methods produce reliable
evidence that can be amalgamated. But if scientists knew it in advance,
they would not need to triangulate methods. This is a serious criticism,
but I think that skepticism about triangulation can be mitigated if we
consider it as neither a necessary nor a sufficient strategy for robustness.1
Triangulation is only one possible strategy to minimize confounders and
bias when we observe, measure, and thereby interpret and understand
objects, reaching a modest and local level of scientific unification.
In what follows, I will try to mitigate the criticisms leveled against triangulation by describing it as a way of minimizing the limitations of tools and of contributing to the integration of data and findings. When we do not fully trust our methods, evidence concordance works as a regulative ideal for conducting research. Yet irrespective of whether the evidence is concordant, triangulation can be valuable for merging disparate sources of data obtained through the available methods, and it can be instrumental to validating empirical claims by assessing the robustness of data-to-phenomena inferences.
I will discuss a case from etiological epidemiology to analyze how
triangulation can be used to increase the confidence in the causal role
of specific variables in the emergence of a disease. I will show how the
robustness of a causal relation can be assessed in ideal cases of evi-
dence concordance, and how multiple lines of determination can help
in determining the reliability of the methods, through analysis of the as-
sumptions and possible limitations of tools used in the experimental in-
terventions and manipulation of variables. Later, I will use this example
to analyze the case of evidence discordance in cognitive neuroscience.
I will leave aside the issue of what makes a causal claim causal, and
when causality can be inferred with reasonable certainty from data, as
this has already been the object of numerous investigations. Note that I rely on interventionism (Cartwright, 2006; Hausman & Woodward, 1999), an account of causation that illustrates (and problematizes)
the ways scientists uncover the causal structure of the world through
focused interventions (Eberhardt & Scheines, 2007; Spirtes & Scheines,
2004). Questions asked by scientists are often causal and aimed at event
prediction, even when single experiments can only highlight correlations
among variables (Nathan, this volume). Because experiments and interventions are important for validating causal inferences about phenomena, analyzing the value of methodological triangulation is crucial for correctly identifying the causal story behind the data.
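To make the interventionist picture concrete, the following toy simulation, written in Python, contrasts an observational regime, in which a hidden confounder Z drives both X and Y, with an interventional regime in which the experimenter sets X directly. It is my own illustration under invented parameters, not a reconstruction of the epidemiological research discussed in this chapter.

```python
# A toy sketch (my illustration, not the chapter's) of the interventionist
# point: observational association can misreport causal structure, while a
# focused intervention on X reveals X's actual effect on Y.
import random

random.seed(0)
N = 100_000
TRUE_EFFECT_OF_X = 0.0   # assumption: X has no causal effect on Y

def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Observational regime: a confounder Z drives both X and Y.
xs_obs, ys_obs = [], []
for _ in range(N):
    z = random.gauss(0, 1)
    x = z + random.gauss(0, 1)
    y = TRUE_EFFECT_OF_X * x + 2 * z + random.gauss(0, 1)
    xs_obs.append(x)
    ys_obs.append(y)

# Interventional regime: X is set by the experimenter, independently of Z,
# which cuts the arrow from Z into X.
xs_int, ys_int = [], []
for _ in range(N):
    z = random.gauss(0, 1)
    x = random.gauss(0, 1)
    y = TRUE_EFFECT_OF_X * x + 2 * z + random.gauss(0, 1)
    xs_int.append(x)
    ys_int.append(y)

print(f"observed slope of Y on X:      {slope(xs_obs, ys_obs):.2f}")  # ~1.0
print(f"slope under intervention on X: {slope(xs_int, ys_int):.2f}")  # ~0.0
```

The observational slope reports a sizeable association even though, by construction, X does nothing to Y; intervening on X exposes the difference. Methodological triangulation matters precisely when, unlike in this toy case, no single intervention or measurement can be trusted on its own.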
Acknowledgment
I would like to thank Francesco Bianchini, Marco Nathan, Adrian
Spohr and Alfredo Vernazzani for critical comments and discussions on
an early version of the chapter.
Notes
1 Triangulation is neither necessary nor sufficient for robustness. Triangula-
tion is only one among the many possible epistemic strategies for assessing
robustness about objects, based on the assumption that it is highly unlikely that multiple seemingly independent methods would converge on the same conclusion if that conclusion were an artifact. This is a version of the no-miracles argument (Hacking, 1983). Triangulation
is also not necessarily connected to integration of data and findings, be-
cause integration can be reached through different strategies, and does not
even depend on the presence of a hypothesis to validate (O’Malley & Soyer,
2012).
2 This use of unification does not refer to classical ideals of intertheoretic and
hierarchical disciplinary reductions. Classic philosophical works on unifi-
cation have focused on interfield or disciplinary integration, aiming at a
reduction between neuroscience and psychology (Bickle, 1996; Churchland,
1986; Schaffner, 1993). I here adopt a conception of modest unification (Grantham, 2015), understood as a local interconnection between experimental results (rather than between concepts and theories) within and across sub-fields of science.
References
Bechtel, W. (2002). Aligning multiple research techniques in cognitive neurosci-
ence: Why is it important? Philosophy of Science, 69(S3), S48–58. https://doi.
org/10.1086/341767
Bickle, J. (1996). New wave psychophysical reductionism and the method-
ological caveats. Philosophy and Phenomenological Research, 56(1), 57–78.
https://doi.org/ppr199656116
Bogen, J., & Woodward, J. (1988). Saving the phenomena. The Philosophical
Review, 97(3), 303–52. https://doi.org/10.2307/2185445
Cartwright, N. (1991). Replicability, reproducibility, and robustness: Comments
on Harry Collins. History of Political Economy, 23(1), 143–55. https://doi.
org/10.1215/00182702-23-1-143
Cartwright, N. (2006). From metaphysics to method: Comments on manipula-
bility and the causal Markov condition. The British Journal for the Philoso-
phy of Science, 57(1), 197–218.
Churchland, P. S. (1986). Neurophilosophy: Toward A Unified Science of the
Mind-Brain. Cambridge: MIT Press.
Cichy, R. M., & Oliva, A. (2020). A M/EEG-fMRI fusion primer: Resolving
human brain responses in space and time. Neuron, 107(5), 772–81. https://
doi.org/10.1016/j.neuron.2020.07.001
Coko, K. (2020). The multiple dimensions of multiple determination. Perspec-
tives on Science, 28(4), 505–541. https://doi.org/10.1162/posc_a_00349
Cole, E. J., Barraclough, N. E., & Enticott, P. G. (2018). Investigating mirror
system (MS) activity in adults with ASD when inferring others’ intentions
using both TMS and EEG. Journal of Autism and Developmental Disorders,
48(7), 2350–67. https://doi.org/10.1007/s10803-018-3492-2
Culp, S. (1994). Defending robustness: The bacterial mesosome as a test case. PSA:
Proceedings of the Biennial Meeting of the Philosophy of Science Association,
1994(1), 46–57. https://doi.org/10.1086/psaprocbienmeetp.1994.1.193010
Dapretto, M., Davies, M. S., Pfeifer, J. H., Scott, A. A., Sigman, M., Bookheimer,
S. Y., & Iacoboni, M. (2006). Understanding emotions in others: Mirror neu-
ron dysfunction in children with autism spectrum disorders. Nature Neuro-
science, 9(1), 28–30. https://doi.org/10.1038/nn1611
Dinstein, I., Thomas, C., Humphreys, K., Minshew, N., Behrmann, M., &
Heeger, D. J. (2010). Normal movement selectivity in autism. Neuron, 66(3),
461–9. https://doi.org/10.1016/j.neuron.2010.03.034
Eberhardt, F., & Scheines, R. (2007). Interventions and causal inference. Philos-
ophy of Science, 74(5), 981–995. https://doi.org/10.1086/525638
Enticott, P. G., Kennedy, H. A., Rinehart, N. J., Bradshaw, J. L., Tonge, B. J.,
Daskalakis, Z. J., & Fitzgerald, P. B. (2013). Interpersonal motor resonance
in autism spectrum disorder: Evidence against a global “mirror system”
deficit. Frontiers in Human Neuroscience, 7, 218. https://doi.org/10.3389/
fnhum.2013.00218
Enticott, P. G., Kennedy, H. A., Rinehart, N. J., Tonge, B. J., Bradshaw, J.
L., Taffe, J. R., Daskalakis, Z. J., & Fitzgerald, P. B. (2012). Mirror neuron
activity associated with social impairments but not age in autism spectrum
disorder. Biological Psychiatry, 71(5), 427–33. https://doi.org/10.1016/j.
biopsych.2011.09.001
Fan, Y.-T., Decety, J., Yang, C.-Y., Liu, J.-L., & Cheng, Y. (2010). Unbroken mir-
ror neurons in autism spectrum disorders. Journal of Child Psychology and
Psychiatry, 51(9), 981–8. https://doi.org/10.1111/j.1469-7610.2010.02269.x
Ference, B. A., Julius, S., Mahajan, N., Levy, P. D., Williams, K. A., & Flack, J. M.
(2014). Clinical effect of naturally random allocation to lower systolic blood
pressure beginning before the development of hypertension. Hypertension,
63(6), 1182–8. https://doi.org/10.1161/HYPERTENSIONAHA.113.02734
Ferrari, P. F., & Rizzolatti, G. (2015). New Frontiers in Mirror Neurons Research.
New York: Oxford University Press.
Fishman, I., Keown, C. L., Lincoln, A. J., Pineda, J. A., & Müller, R.-A.
(2014). Atypical cross talk between mentalizing and mirror neuron networks
in autism spectrum disorder. JAMA Psychiatry, 71(7), 751–60. https://doi.
org/10.1001/jamapsychiatry.2014.83
Fuelscher, I., Caeyenberghs, K., Enticott, P. G., Kirkovski, M., Farquharson, S.,
Lum, J., & Hyde, C. (2019). Does fMRI repetition suppression reveal mirror
neuron activity in the human brain? Insights from univariate and multivariate
analysis. The European Journal of Neuroscience, 50(5), 2877–92. https://doi.
org/10.1111/ejn.14370
Gallese, V., Fadiga, L., Fogassi, L., & Rizzolatti, G. (1996). Action recognition
in the premotor cortex. Brain: A Journal of Neurology, 119(Pt 2), 593–609.
Grantham, T. (2015). Conceptualizing the (dis)unity of science. Philosophy of
Science 71(2): 133–55. https://doi.org/10.1086/383008.
Hacking, I. (1983). Representing and Intervening: Introductory Topics in the
Philosophy of Natural Science. Cambridge: Cambridge University Press.
Hausman, D. M., & Woodward, J. (1999). Independence, invariance and the
causal Markov condition. British Journal for the Philosophy of Science,
50(4), 521–83. https://doi.org/10.1093/bjps/50.4.521
Hobson, H. M., & Bishop, D. V. M. (2016). Mu suppression—A good measure
of the human mirror neuron system? Cortex: A Journal Devoted to the Study
of the Nervous System and Behavior, 82, 290–310. https://doi.org/10.1016/j.
cortex.2016.03.019
Hudson, R. (2013). Seeing Things: The Philosophy of Reliable Observation.
New York: Oxford University Press.
Khalil, R., Tindle, R., Boraud, T., Moustafa, A. A., & Karim, A. A. (2018).
Social decision making in autism: On the impact of mirror neurons, motor
control, and imitative behaviors. CNS Neuroscience & Therapeutics, 24(8),
669–76. https://doi.org/10.1111/cns.13001
Kuorikoski, J., & Marchionni, C. (2016). Evidential diversity and the trian-
gulation of phenomena. Philosophy of Science, 83(2), 227–247. https://doi.
org/10.1086/684960
Law, M. R., Morris, J. K., & Wald, N. J. (2009). Use of blood pressure lowering
drugs in the prevention of cardiovascular disease: Meta-analysis of 147 ran-
domised trials in the context of expectations from prospective epidemiologi-
cal studies. BMJ, 338, b1665. https://doi.org/10.1136/bmj.b1665
Lawlor, D. A., Tilling, K., & Davey Smith, G. (2016). Triangulation in aetiolog-
ical epidemiology. International Journal of Epidemiology, 45(6), 1866–86.
https://doi.org/10.1093/ije/dyw314
Likowski, K. U., Mühlberger, A., Gerdes, A. B. M., Wieser, M. J., Pauli, P., &
Weyers, P. (2012). Facial mimicry and the mirror neuron system: Simultane-
ous acquisition of facial electromyography and functional magnetic resonance
imaging. Frontiers in Human Neuroscience, 6, 214. https://doi.org/10.3389/
fnhum.2012.00214
Martineau, J., Andersson, F., Barthélémy, C., Cottier, J.-P., & Destrieux,
C. (2010). Atypical activation of the mirror neuron system during percep-
tion of hand motion in autism. Brain Research, 1320, 168–75. https://doi.
org/10.1016/j.brainres.2010.01.035
Munafò, M. R., & Smith, G. D. (2018). Robust research needs many lines of evidence.
Nature, 553(7689), 399–401. https://doi.org/10.1038/d41586-018-01023-3
Myers, S. M., Johnson, C. P., & American Academy of Pediatrics Council on
Children with Disabilities. (2007). Management of children with autism
spectrum disorders. Pediatrics, 120(5), 1162–82. https://doi.org/10.1542/
peds.2007-2362
O’Malley, M. A., & Soyer, O. S. (2012). The roles of integration in molecular
systems biology. Studies in History and Philosophy of Science Part C: Studies
in History and Philosophy of Biological and Biomedical Sciences, 43(1),
58–68. https://doi.org/10.1016/j.shpsc.2011.10.006
Oberman, L. M., Hubbard, E. M., McCleery, J. P., Altschuler, E. L., Ram-
achandran, V. S., & Pineda, J. A. (2005). EEG evidence for mirror neuron
dysfunction in autism spectrum disorders. Cognitive Brain Research, 24(2),
190–8. https://doi.org/10.1016/j.cogbrainres.2005.01.014
Odenbaugh, J., & Alexandrova, A. (2011). Buyer beware: Robustness analyses
in economics and biology. Biology and Philosophy, 26(5), 757–71. https://doi.
org/10.1007/s10539-011-9278-y
Orzack, S. H., & Sober, E. (1993). A critical assessment of Levins’s the strategy
of model building in population biology (1966). The Quarterly Review of
Biology, 68(4), 533–546. https://doi.org/10.1086/418301
Palau-Baduell, M., Valls-Santasusana, A., & Salvadó-Salvadó, B. (2011). Au-
tism spectrum disorders and mu rhythm. A new neurophysiological view. Re-
vista De Neurologia, 52(Suppl 1), S141–6.
Ramachandran, V. S., & Oberman, L. M. (2006). Broken mirrors. Scientific
American, 295(5), 62–9.
Rasmussen, N. (2001). Evolving scientific epistemologies and the artifacts of
empirical philosophy of science: A reply concerning mesosomes. Biology and
Philosophy, 16(5), 627–52. https://doi.org/10.1023/A:1012038815107
Raymaekers, R., Wiersema, J. R., & Roeyers, H. (2009). EEG study of the mir-
ror neuron system in children with high functioning autism. Brain Research,
1304, 113–21. https://doi.org/10.1016/j.brainres.2009.09.068
Schaffner, K. F. (1993). Theory structure, reduction, and disciplinary integration
in biology. Biology and Philosophy, 8(3), 319–47. https://doi.org/10.1007/
BF00860432
Soler, L., Trizio, E., Nickles, T., & Wimsatt, W. (Eds.). (2012). Characterizing
the Robustness of Science: After the Practice Turn in Philosophy of Science
(Vol. 292). Dordrecht, Netherlands: Springer Science & Business Media.
Spirtes, P., & Scheines, R. (2004). Causal inference of ambiguous manipula-
tions. Philosophy of Science, 71(5), 833–45. https://doi.org/10.1086/425058
Stegenga, J. (2009). Robustness, discordance, and relevance. Philosophy of Sci-
ence, 76(5), 650–61. https://doi.org/10.1086/605819
Stegenga, J. (2012). Rerum concordia discors: Robustness and discordant multimodal evidence. In L. Soler, E. Trizio, T. Nickles, & W. Wimsatt (Eds.), Characterizing the Robustness of Science (Vol. 292, pp. 207–26). Dordrecht, Netherlands: Springer.
Stegenga, J., & Menon, T. (2017). Robustness and independent evidence. Phi-
losophy of Science, 84(3), 414–35. https://doi.org/10.1086/692141
Trizio, E. (2012). Achieving robustness to confirm controversial hypotheses: A case study in cell biology. In L. Soler, E. Trizio, T. Nickles, & W. Wimsatt (Eds.), Characterizing the Robustness of Science (Vol. 292, pp. 105–20). Dordrecht, Netherlands: Springer.
Weichwald, S., & Peters, J. (2021). Causality in cognitive neuroscience: Con-
cepts, challenges, and distributional robustness. Journal of Cognitive Neuro-
science, 33(2), 226–47. https://doi.org/10.1162/jocn_a_01623
Weisberg, M. (2006). Robustness analysis. Philosophy of Science, 73(5), 730–
42. https://doi.org/10.1086/518628
Whewell, W., & Butts, R. E. (1968). William Whewell’s Theory of Scientific
Method. Pittsburgh, PA: University of Pittsburgh Press.
Williams, J. H. G., Whiten, A., Suddendorf, T., & Perrett, D. I. (2001). Imita-
tion, mirror neurons and autism. Neuroscience & Biobehavioral Reviews,
25(4), 287–95. https://doi.org/10.1016/S0149-7634(01)00014-8
Wimsatt, W. C. (2012). Robustness, reliability, and overdetermination (1981).
In L. Soler (ed.) Characterizing the Robustness of Science (pp. 61–78). Am-
sterdam: Springer.
Woodward, J. (2006). Some varieties of robustness. Journal of Economic Meth-
odology, 13(2), 219–40. https://doi.org/10.1080/13501780600733376
Yates, L., & Hobson, H. (2020). Continuing to look in the mirror: A review
of neuroscientific evidence for the broken mirror hypothesis, EP-M model
and STORM model of autism spectrum conditions. Autism, 24(8), 1945–59.
https://doi.org/10.1177/1362361320936945
9 Prediction, Explanation,
and the “Toolbox” Problem
Marco J. Nathan
DOI: 10.4324/9781003251392-12
Years lazily went by. People eventually realized that not all that glit-
ters is gold. Incidentally, one also needs a baptismal act and the appro-
priate atomic essence. The new causal-mechanistic ideal, embodied in
the CMM, was far less ecumenical than its predecessor. With the demise
of CLM, explanation and prediction were no longer symmetric. War
soon broke out.
The playing field was hardly leveled. Causal explanation, led by De-
Salmon’s troops, swiftly vanquished the enemy. Prediction, outnumbered
and deemed dispensable, was banished from the kingdom. It searched
for a new home and eventually found one in a neighboring country, the
Republic of Natural Science. Planted in fertile soil, prediction quickly
grew into a strong, independent research field, with a host of new theo-
retical and experimental tools.
Meanwhile, in Philosophy of Science, the first cracks began to appear.
Galvanized by its initial success, van der Quine, DeSalmon, and their fol-
lowers were led to believe that the CMM could take care of all business
at once. And, at first, this appeared to be so. Explanation became the
once and future inference, relegating everything else to the background.
But, as it turns out, the mainstream causal-mechanistic approach, with
its focus on how things are brought about, is poorly equipped to account
for prediction. When forecasts finally started holding their own, this
became painfully clear.
The eventual passing of McKuhn and van der Quine left the kingdom
in a state of disarray. Civil war ran amok. Some citizens of Philosophy
of Science sought refuge in the Realm of Metaphysics, governed by Duke
Lewis Kellogg, an old foe of King Hempel. But the old duchy, once thriv-
ing, was also in a ruinous state and could barely take care of its own.
Others migrated to the larger neighbor, Natural Science, which, in the
meantime, had grown bigger and more powerful. Prediction and expla-
nation made amends. Most of them settled in Natural Science. Others
returned to Philosophy of Science. But there was finally peace and pros-
perity. And everyone lived happily ever after—isn’t this how any fairy
tale worth its salt is supposed to end?
Let’s snap out of fiction and back to reality. Our legend is inspired by
real events with momentous implications, the subject of this essay. Pre-
diction and explanation, once perceived as symmetrical, have been torn
apart and treated independently. The CMM, now dominant in philoso-
phy, captures forecasts indirectly, via causal-mechanistic relations. But
prediction, in many areas of the sciences, is rapidly gaining autonomy.
Purely predictive inferences, not backed up by causal explanation, are
increasingly becoming commonplace, fueled by conceptual and techno-
logical advancements. This raises the question of how predictive and
explanatory tools can work in unison while remaining distinct. This
puzzle, which I dub the “toolbox problem,” has a “black-box solu-
tion.” Prediction and explanation are ultimately grounded in the same
causal network. Yet, predictive and explanatory tools require different
amounts of mechanistic detail, as well as varying degrees of abstraction
and idealization.
Here is the master plan. Section 2 outlines the status quo in contempo-
rary philosophy of science: the methodological hegemony of explanation.
Section 3 illustrates the ongoing predictive turn—the emancipation of
prediction from explanation—with examples from genomics, molecular
medicine, and cognitive neuropsychology. Section 4 raises a philosoph-
ical puzzle, namely, how to characterize the interdependence of predic-
tion and explanation while keeping these inferences distinct. Section 5
explores a “black box” solution to our toolbox problem. Section 6 wraps
up the discussion with general implications and concluding remarks.
The aims of the pure (empirical) sciences are then essentially the
same throughout the whole field. What the scientists are seeking are
descriptions, explanations, and predictions which are as adequate
and accurate as possible in the given context of research.
Elaborating on this insight, Feigl notes how “the first aim [description],
is basic and indispensable, the second and third [explanation and predic-
tion] (closely related to each other) arise as the most desirable fruits of
scientific labors whenever inquiry rises beyond the mere fact-gathering
stage” (pp. 10–11). These remarks raise several related questions. Why
should science privilege these aims as paramount? What makes descrip-
tion “basic” and “indispensable”? In what sense are prediction and ex-
planation “closely related to each other”? To address these issues, we
must focus on how these three goals can be met.
Feigl could safely assert that prediction and explanation are closely
related because the covering-law model (CLM), the “official” theory
of explanation of logical positivism and logical empiricism—I will not
separate these movements here—took care of both endeavors simulta-
neously. As gestured in Section 1, Hempel and Oppenheim (1948), the
main proponents of the CLM, viewed prediction and explanation as two
sides of the same coin. Predicting an event is explaining something in
the future, yet to be observed. Conversely, explanation is retrodiction:
prediction of a happening that occurred in the past. Both prediction and
explanation are a matter of logically deriving an event from an expla-
nans consisting of background assumptions and laws of nature. This is
where description comes into the picture. A necessary presupposition
for predicting or explaining an event is an accurate characterization of
initial conditions and underlying nomological regularities. In short, all
three basic epistemic aims of science are brought together by the CLM.
This unified picture of scientific inquiry was bound to change with
the demise of the CLM, due to the discovery of asymmetries of explana-
tion, and the subsequent emergence of the causal-mechanistic paradigm.
Briefly rehashing this familiar story will be helpful to unravel the histor-
ical narrative.
Setting complications aside, the hiccup is straightforward. From the
standpoint of the CLM, prediction and explanation are structurally
analogous. This means that any explanatory inference is thereby pre-
dictive and, conversely, any prediction can be treated as an explana-
tion. Whether this symmetry holds water is questionable. Paraphrasing
the insights of Scriven (1958) and Bromberger (1966), among others,
the length of a shadow cast by a flagpole on a clear day, together with the position of the sun and the basic laws governing the propagation of light, can
be used to predict the height of the flagpole. But it is doubtful, to say
the least, whether a similar derivation constitutes an explanation of the
height of the pole. Along the same lines, reliable symptoms can be used
to predict a disease, but they don’t explain it.
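The formal symmetry, and its explanatory oddness, can be made vivid with a small computational sketch of the flagpole case. This is my own illustration with made-up values: both functions below instantiate the same derivation schema, a law of elementary optics plus initial conditions, yet only the first direction reads as an explanation of its output.

```python
# A small sketch of the flagpole case (my illustration, invented numbers):
# the same trigonometric law licenses a derivation in either direction,
# but only deriving the shadow from the pole reads as explanatory.
import math

def shadow_from_height(height_m, sun_elevation_deg):
    """Derive shadow length from pole height (the 'explanatory' direction)."""
    return height_m / math.tan(math.radians(sun_elevation_deg))

def height_from_shadow(shadow_m, sun_elevation_deg):
    """Derive pole height from shadow length (predictive, not explanatory)."""
    return shadow_m * math.tan(math.radians(sun_elevation_deg))

elevation = 40.0   # assumed sun elevation in degrees
pole = 10.0        # assumed flagpole height in metres
shadow = shadow_from_height(pole, elevation)
print(f"shadow length:    {shadow:.2f} m")                                 # ~11.92 m
print(f"recovered height: {height_from_shadow(shadow, elevation):.2f} m")  # 10.00 m
```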
Unsurprisingly, accounts which supplanted the CLM dropped the
alleged symmetry between explanation and prediction. Unification-
ism, pioneered by Friedman (1974) and Kitcher (1981, 1989), treated
explanation as a unifying endeavor, paying little to no attention to fore-
casting. As well-known as they are, unificationist analyses did not gain
much momentum. Hence, in what follows, I shall focus on the frame-
work that has received the lion’s share of attention and consensus. This
is the causal-mechanistic model, or CMM, for short.
What does it mean to approach explanation from a causal-mechanistic
perspective? Several influential alternatives have been proposed (Salmon
1984, 1998; Woodward 2003; Strevens 2008). The overarching guid-
ing thought is that explaining an event involves specifying relevant de-
tails about what triggers it, that is, how the event in question is brought
about and how it would vary under different circumstances. In the
words of David Lewis (1986, p. 217, italics omitted), “to explain an
event is to provide some information about its causal history.” With this
core assumption firmly in place, it became clear that grasping complex
causal networks requires looking “under the hood” for mechanistic
underpinnings. A broad analysis of mechanisms and cognate notions
was undertaken, with significant differences in nuance, by Bechtel and
Richardson (1993), Machamer et al. (2000), Glennan (2017), and fellow
advocates of the so-called “new wave of mechanistic philosophy,” which
gained full traction at the turn of the millennium.
As explanation became a matter of causes and mechanisms, what hap-
pened to prediction? It faded into the background. Allow me to elabo-
rate. Prediction was a hallowed topic in classic philosophy of science,
connected to induction and confirmation, long before the CLM. The no-
torious problem of induction—first posed by Hume (2000 [1739], 1999 [1748]), rephrased in modern terms by Russell (1912), and revamped by
Goodman (1955)—challenges us to justify our belief that the future will
resemble the past. Expectations about the future are, at heart, a sort of
prediction. The issue of confirmation, as developed by logical empiri-
cists, is, in turn, a de-psychologized form of induction, which focuses
on purely logical, probabilistic relations between theory and evidence.
Two remarks. First, the class of inductive inferences, which encom-
passes all non-deductive forms of reasoning, is broader than the set of
predictions per se. Second, the problem of induction, conceived as a
yet-unresolved philosophical issue, is of marginal significance for prac-
ticing scientists who, pace Popper, routinely take for granted some prin-
ciple of induction. As such, the study of confirmation, traditionally part
of philosophy of science, was relocated and repackaged in the field of
formal epistemology.
Admittedly, a few philosophers of science have devoted attention
to prediction. Cartwright (1983, 2007, 2019) has emphasized the gap
between causal and predictive knowledge. Spirtes et al. (2000) have
sketched a mathematical framework for predictive inferences. In the
philosophy of medicine, Broadbent (2013) discusses prediction in epi-
demiology. And I, for one, have considered the role of predictions in di-
agnosis and prognosis (Nathan 2016). Nevertheless, to date, no one has
undertaken a systematic philosophical analysis of prediction comparable
to what has been provided for causal explanation.
Why did philosophers of science trade in prediction for explanation?
Could it be that forecasting has little to offer of philosophical value?
This seems unlikely. While a clear-cut answer is yet to be found, predic-
tion continues to live in the shadow of its counterpart, in theories of ex-
planation. To be sure, predictive accuracy remains a cardinal theoretical
virtue of many scientific models. Across various fields—from climatol-
ogy to economics, from precision medicine to epidemiology—a simu-
lation able to accurately anticipate the behavior of a complex system is
worth its weight in gold. Yet, I surmise that predictive accuracy is often
considered a more or less direct consequence of a deep understanding of
the system’s causal-mechanistic underpinnings. One predicts by dint of
explanatory power. Description and explanation remain the true gold
standard. This is my best attempt to rationalize the neglect of prediction
in philosophy of science. Otherwise, I would find the current situation
quite inexplicable.
In sum, with the demise of the CLM, the perceived symmetry of pre-
diction and explanation was lost. Despite a handful of valiant, if rela-
tively isolated attempts, philosophers of science swept prediction under
the rug, concentrating quasi-exclusively on description and explanation,
construed in causal-mechanistic fashion. Pointing the spotlight on expla-
nation left prediction without a template of its own. To be sure, formal
epistemologists, Bayesians in particular, focused on the revision of belief
based on new evidence. But this emphasis on confirmation falls short of
a full-blown model of prediction. Thus conceived, prediction could no
longer be viewed as a self-standing goal of science, but as a by-product of
explanatory power, a welcome free lunch.
Was neglecting prediction the right move for philosophy of science?
For a while, this stance seemed justified by the attitude of the scientific
community, which also privileged explanation. Yet, things were bound
to change fast. Over the last few decades, many working scientists have
begun couching their epistemic aims in terms of predictive accuracy.
This is the “predictive turn.” It is a trend that has pushed researchers
away from explanation and toward purely predictive models. To drive
the point home, rather than providing an abstract description, the fol-
lowing section briefly introduces three case studies that illustrate this
predictive turn.
a set of new technologies [that] has given us the ability to study how
the human brain works in greater detail than ever before. These
tools are known as neuroimaging methods, because they allow us
to create images of the human brain that show us what it is made of
(which we refer to as its structure) and what it is doing (which we
refer to as its function).
(Poldrack 2018, p. 1)
Arguably, the most revolutionary tool of all has been magnetic reso-
nance imaging (MRI). Now, MRI comes in various forms, and different
kinds of MRI scans measure specific aspects of the brain. Structural
MRI captures aspects of the makeup of neural tissue, such as how much
water or fat is present. To determine what the brain is doing, we need
functional MRI, or “fMRI,” which, roughly speaking, detects the
shadow of brain activity—engagement in a cognitive task—through its
effects on the amount of oxygen in the blood.
Let’s focus on so-called reverse inference, which basically allows one
to infer the engagement of a particular mental process from specific ac-
tivation patterns or locations in the brain, as opposed to a “forward
inference,” from mental states to brain processes. Here I provide an
admittedly simplified reconstruction of the underlying argument—for
a more detailed overview of reverse inference and its limitations, see
Nathan and Del Pinal (2017).
Consider a psychological generalization describing how humans typi-
cally behave while undertaking a certain cognitive task. For instance, sup-
pose that we perform better on a test when we listen to soothing music.
Presumably, there will be various cognitive processes that could explain
this tendency. For instance, music may increase our concentration or, al-
ternatively, listening to music may block negative emotions that interfere
with cognitive performance. How do we decide which option, if either, is
correct? Reverse inference shows how to employ neuroscientific evidence
to answer this question. The trick is to find some association between the
competing cognitive processes and some underlying areas or patterns of
neural activation. For instance, being focused may be associated with a
certain kind of brain activity x, whereas blocking emotions could be as-
sociated with a different neural pattern y. This being so, we can conduct
a neuroimaging scan while subjects are performing the test. If the scan
detects type-x activity and not type-y activity, this provides evidence that
what explains enhanced performance is focus, as opposed to lack of emo-
tions. To be sure, this constitutes an oversimplification, because most brain
regions and patterns of brain activity will be associated with several cogni-
tive tasks—the “lack of selectivity objection.” Various strategies have been
offered to rule out alternative explanations (Del Pinal and Nathan 2013;
Hutzler 2014; Machery 2014). We need not worry about details here.
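A back-of-the-envelope sketch may nevertheless help fix ideas. The probabilities below are invented placeholders standing in for base rates that one would have to estimate from prior imaging studies; the point is only that reverse inference works as a likelihood comparison, and that its force collapses when the activation pattern is not selective.

```python
# A hedged sketch (not from the chapter, and not Poldrack's own formalism)
# of reverse inference as a comparison of two hypotheses given an observed
# activation pattern. All probabilities are invented for illustration.

def posterior_focus(prior_focus, p_x_given_focus, p_x_given_blocking):
    """Posterior probability of the 'focus' hypothesis after observing
    type-x activation, with 'blocking emotions' as the only rival."""
    num = prior_focus * p_x_given_focus
    den = num + (1 - prior_focus) * p_x_given_blocking
    return num / den

# Selective pattern: x is much more common when subjects are focusing.
print(posterior_focus(0.5, p_x_given_focus=0.7, p_x_given_blocking=0.1))  # ~0.88

# Lack of selectivity: x shows up almost as often under the rival process,
# so the same observation barely moves us off the 50/50 prior.
print(posterior_focus(0.5, p_x_given_focus=0.7, p_x_given_blocking=0.6))  # ~0.54
```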
Once again, the crucial point, for our present purposes, is that the as-
sociation of a cognitive subprocess with a neural region or pattern need
not be a causal one. Indeed, while it is possible that the patterns detected
via fMRI are what produce the mental state, this is the exception rather
than the rule. All that is required for the argument to go through is an
associative link, not a bona fide reduction. These associative bridge laws
may be conceived as probabilistic and context-sensitive relations that
do not identify their relata, either at the type-level or at the token-level
(Nathan and Del Pinal 2016). In short, the association between cog-
nitive state and neural pattern can be used to predict (“reverse-infer”)
the former from the latter or predict (“forward infer”) the latter from
the former. But neither inference is typically explanatory. To emphasize,
such inferences may well be explanatory, if the neural processes in ques-
tion are causally responsible for the mental state. Nevertheless—and this
is the take-home message—they need not be.
In conclusion, reverse and forward inference, alongside the required
fMRI technology, are, at heart, instruments of prediction, not explana-
tion. In this respect, they are like GWAS and biomarkers. The moral, of
course, isn’t that description and explanation are irrelevant in science.
What the predictive turn shows is that forecasting may become a goal in
and of itself, in the absence of explanation. What philosophical conse-
quences might this have?
4 The Toolbox Problem
The previous section outlined three scientific tools, whose development
required a host of technological and conceptual advancements: GWAS,
biomarkers, and neuroimaging. All three can be conceived primarily as
instruments of prediction, as opposed to explanation. GWAS need not
uncover any causal link between genotype and phenotype. As such, they frequently do not explain the presence of the trait under study, yet this does not taint their forecasting value. Biomarkers can increase our confidence
in the occurrence of a specific physiological process or reaction, while
not offering any insight as to what produces the state at hand. fMRI is
grounded in a series of associative bridge laws that reverse-infer the en-
gagement of a cognitive process based on neural activation or, vice versa,
forward-infer neural activity based on cognitive engagement. Again, all
of this can be effectively done without shedding any light on their causal interplay and, thereby, without warranting any causal-mechanistic explana-
tion. These are clear illustrations of the predictive turn.
Some readers may find none of this strange, concerning, or remotely
controversial. As mentioned at the outset, prediction is a traditional goal
of science. Why should one be surprised by the emergence of predictive
tools?
The development of predictive tools per se is hardly shocking. The vex-
ing issue, from a philosophical perspective, is squaring the presence of
these instruments—characterized from a purely predictive, as opposed
to an explanatory standpoint—with the mainstream causal-mechanistic
outlook. I dub this the “toolbox problem” because it challenges us to
spell out a framework that captures how prediction and explanation can
mutually inform each other without reducing to one and the same. As
I will argue in due course, prediction is, indeed, a form of “settling for
less,” a cheaper version of explanation. But that does not undermine its
pivotal role.
In the heyday of logical positivism, the toolbox problem did not oc-
cur. The ruling theory of explanation, the CLM, treated prediction and
explanation as two sides of the same coin. “[The same logical] schema
underlies both explanation and prediction; only the knowledge situation
is different. In explanation, the fact Qa [the explanandum] is already
known. (…) In prediction, Qa is a fact not yet known” (Carnap 1966,
p. 17). Hence, a predictive inference is always explanatory and, vice
versa, an explanation also has predictive power. The statistical tech-
niques involved in a well-conducted GWAS provide an informed guess
as to whether an individual with genotype g will display phenotypic trait
t. If this prediction is borne out, we thereby have an explanation of t.
The same can be said about biomarkers and neuroimaging. If explana-
tion and prediction are logically equivalent, predictive and explanatory
oomph goes hand in hand.
When the CLM was displaced by the CMM, a wedge was drawn be-
tween prediction and explanation, which were no longer perceived as
symmetrical. So, how are these inferences related? Clearly, they support
each other. But how? Explanation is now a matter of causal-mechanistic
detail. If prediction is not logically equivalent to explanation, what is
the logical structure of a predictive inference? How does predictive value
enhance explanatory power and, vice versa, how does explanation con-
tribute to a prediction? This is the toolbox problem.
It is important not to conflate what I call the “toolbox problem” with
what has been dubbed the “causal fallacy.” The causal fallacy, as Broad-
bent (2013, p. 83) presents it, is the mistake of believing that all expla-
nations have predictive power.
Is the causal fallacy really a fallacy? Does all explanation enable pre-
diction? This remains an interesting open question. Still, this issue is
tangential to my toolbox problem, which is distinct. It involves explain-
ing how we can have prediction without explanation. How do we get
correlational structures underwritten by causal arrows that point in the
wrong direction?
As Pearl and Mackenzie (2018, p. 30) quip, “Good predictions need
not have good explanations. The owl can be a good hunter without
understanding why the rat always goes from point A to point B.” While
this seems unassailable, by focusing exclusively on causal-mechanistic
relations, one misses the point of the predictive turn. Attentive read-
ers will surely note that there are various ways of answering the tool-
box problem. The remainder of this section explores some intuitive
options and argues that none of them is ultimately successful. Then,
the following section will offer a different “black box” solution to our
conundrum.
A first reaction is the timeworn, ostrich-inspired tendency to stick
one’s head into the sand. From this standpoint, admittedly not especially
popular, the toolbox problem is not something we should lose sleep
over. Sure, the objection runs, prediction has a role to play in science.
And there may well be concepts that predict without explaining. Yet,
these are the exception rather than the rule. The issue is not widespread
enough to make it worth our while. Better to focus our attention on
what really matters for science, namely, mechanistic description and
causal explanation.
This initial riposte strikes me as myopic. Our three case studies are
but the tip of the iceberg. Predictive inferences of this ilk are ubiquitous.
From computer science to climatology, from evolutionary theory to psychology, prediction sans explanation is the norm rather than the exception.
Furthermore, my examples are hardly marginal. GWAS are a central
tool in genomics. Biomarkers are the holy grail of precision medicine.
And neuroimaging, for better or for worse, has come to dominate cogni-
tive psychology. Hence, dismissing the issue is no more effective than the
cognate strategy of sticking your head in the sand in the hope that a predator will chase itself away. Better to look for a different answer.
A second, more promising response is to insist that the alleged struc-
tural discrepancy between explanation and prediction is merely illusory.
The CMM, properly understood and contextualized, is perfectly capa-
ble of taking care of both explanation and prediction. Both inferences
rely on causal-mechanistic knowledge.
Consider, first, biomarkers and associated conditions, such as the link
between high PSA levels and prostate cancer. To the best of my knowl-
edge, high PSA is not among the causes of cancer. Still, there must be
something that underlies and explains the reliable correlation between
biomarker and pathology. Obviously, it cannot be a cosmic coincidence
that high PSA is consistently associated with cancer. There must be some
causal connection.
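A toy simulation can make the structure of this reply explicit. In the sketch below, which is my own illustration with made-up probabilities, a hidden state of the tissue raises the chance of both a high marker reading and the condition; the marker then predicts the condition quite well even though no causal arrow runs from marker to condition, so nothing in the prediction explains the pathology.

```python
# A toy simulation (my illustration, made-up numbers) of a marker that
# predicts a condition via a shared upstream cause, without causing or
# explaining it: the prediction rides on the correlation alone.
import random

random.seed(1)
N = 200_000

def sample():
    pathology = random.random() < 0.10   # hidden common cause
    # The pathology raises both the marker and the condition; the marker
    # itself has no arrow into the condition.
    marker_high = random.random() < (0.8 if pathology else 0.1)
    condition = random.random() < (0.7 if pathology else 0.02)
    return marker_high, condition

draws = [sample() for _ in range(N)]
high = [c for m, c in draws if m]
low = [c for m, c in draws if not m]
print(f"P(condition | marker high): {sum(high) / len(high):.2f}")  # ~0.34
print(f"P(condition | marker low):  {sum(low) / len(low):.2f}")    # ~0.04
```

Read alongside this rejoinder, the sketch cuts both ways: the correlation is indeed underwritten by a causal structure, but that structure runs through the common cause, so citing the marker still explains nothing about how the condition comes about.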
A similar insight applies to neuroimaging. Take, for instance, a com-
puter classifier able to reliably reverse-infer the engagement of cogni-
tive state P from some location or pattern of neural activation N. Now,
presumably, P is not completely causally independent of the underly-
ing neural activity N. Clearly, it would be fallacious to infer from this
correlation, without further corroborating evidence, that P and N are
type-identical, or that N directly causes P. Still, for the association be-
tween P and N to be robust, and therefore useful in the context of a
reverse inference, there must be some causal connection between the
neural substrate and the mental superstrate.
In short, the argument runs, both examples point to the same con-
clusion. Prediction, just like explanation, is a thoroughly causal-
mechanistic endeavor. The CMM can thus take care of both. Toolbox
problem solved.
There is something about this second rejoinder that strikes me as cor-
rect. Prediction, lest it becomes hocus pocus, must exploit mechanis-
tic links. My own solution, outlined in Section 5, will indeed posit a
causal connection between predictor and predicted state. Having said
this, as just presented, this reply is too crude. No one should dispute
the existence of a mechanistic pathway linking high PSA with cancer
or mental states with neural activity. Yet, such a connection does not
explain the pathology or mental state. In other words, the causal con-
nection between predictor and prediction is what ensures the robustness
of the inference (for a discussion of robustness in neuroscience, see
Tramacere, this volume). Still—and this is the crucial point—such
causal-mechanistic connection need not explain the relevant outcome,
that is, the explanandum.
Moving on, a third reaction to the toolbox problem insists that pre-
diction and explanation are fundamentally distinct. Explanation follows
the CMM. Prediction does not. Thus, perhaps, the solution consists in
retaining the CMM for explanation, while developing an altogether dif-
ferent approach to prediction. From this perspective, two mistakes need
mending. First, the CLM wrongly assumed that prediction and explana-
tion are two sides of the same coin. The CMM rectified this by drawing
the two inferences apart. Second, the CMM fails to develop an indepen-
dent account of prediction. That’s the missing ingredient.
The predictable follow-up becomes: which model of prediction will
do the trick? Many alternatives are on the table, from statistical Bayesian-network approaches, to a purely logical “covering-law model of
prediction” that drops any pretense of explaining, to hybrid approaches
that purport to combine both insights. All three avenues seem worth
pursuing, together with potentially others. Still, I’m skeptical about the
prospects of rendering prediction completely independent of explanation. Why? For one thing, it offends the aesthetic taste of those of us who
long for some sort of unified scientific methodology. Second, and more
to the point, it leaves something out. Prediction and explanation do have
something in common, at least intuitively. Making a correct prediction
may open the door, even if only by a crack, to providing a corresponding explanation. Conversely, explaining something is an effective strategy for drawing an accurate prediction. To be sure, we are now back to square one.
We still need to address the toolbox problem by characterizing the na-
ture of this connection. True. But throwing in the towel and developing
an account of prediction completely severed from explanation throws
the baby out with the bath water. I’ll advance a better proposal in the
following section. First, however, we’ve got one final option to explore.
A fourth, and final rejoinder would be to return to the CLM. The
underlying idea is that we already have a theory that provides a uni-
fied treatment of prediction and explanation, namely, the covering-law
model. Perhaps the mistake was wandering away from Hempel and Op-
penheim’s hallowed approach.
This route seems to me to get us out of the frying pan and into the fire.
As noted all along, the CMM comes with costs and difficulties of its own.
Nevertheless, walking away from the deductive-nomological approach
was the right move. The notorious asymmetries are there to remind us
that predicting and explaining are not one and the same logical infer-
ence. This point was reinforced by our illustrations of the predictive turn.
Much explanation requires causation; prediction often does not, settling
for correlation. Rehashing the CLM, in other words, is a nostalgic way of
reliving the glory days, a past that was abandoned for good reason. We’re
better off looking to the future, whatever that may be.
Let’s take a quick breather. This section introduced the toolbox prob-
lem, the issue of showing how prediction and explanation can, intui-
tively, mutually inform and support each other, without reducing one
to the other. There are many ways of addressing this challenge. We ex-
plored four possible rejoinders, none of which, I argued, hits the bull-
seye. Section 5 develops a different, more effective line of argument, a
“black-box” solution to our toolbox problem.
5 A Black-Box Solution
The previous section introduced the toolbox problem: the task of char-
acterizing the interplay between prediction and explanation. I explored
four rejoinders. First, one can deny that prediction is all that central to
science. Second, prediction, like explanation, could be accounted for at a
causal-mechanistic level. Third, one may develop a self-standing model
that treats prediction independently of explanation. Fourth, the two infer-
ences might be reunited by returning to the old CLM. None of these strat-
egies, I maintained, is ultimately successful. At the same time, there is no
need to throw in the towel. This section sketches a “black-box” solution
to the toolbox problem and argues that it provides a viable path forward.
Let’s begin by emphasizing, once again, that prediction and expla-
nation are not structurally identical. Not all successful forecasts are
explanatory. Symptoms diagnose diseases without explaining them. Ex-
planation typically requires causation. Prediction, in contrast, only pre-
supposes correlation, that is, robust, reliable association.
Having noted this, the kind of association underlying predictive in-
ference calls for further elucidation. Even if marker B does not explain
condition C, there must be something responsible for the link between
B and C. Symptoms may not explain diseases, but what makes the cor-
relation between symptoms and diseases reliable, robust? In the case of
fMRI, specific patterns of neural activation N need not be type- or even
token-identical to the associated cognitive state M. (To be clear, M will be
token-identical to some neural state. But such neural state need not and,
typically, will not be the same neural state N that we use in our forward
or reverse inferences.) Still, there must be a causal mechanism which explains why engagement in M is reliably accompanied by activation in N.
In short, predicting C from B does rely on a causal-mechanistic story. The
point is that, as discussed at length, the causal-mechanistic relation in
question may well not correspond to a plausible explanation of C from B.
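To make the structure of such a reverse inference concrete, here is a minimal numerical sketch in Python. The probabilities are invented for illustration and merely stand in for whatever the robust association between N and M empirically delivers; nothing in the calculation explains M, it simply exploits the stability of the association.

p_M = 0.30             # prior probability that the task engages cognitive state M (invented)
p_N_given_M = 0.80     # robust association: N usually activates when M is engaged (invented)
p_N_given_notM = 0.25  # N also activates for other reasons, i.e. limited selectivity (invented)

# Bayes' rule: P(M | N) = P(N | M) * P(M) / P(N)
p_N = p_N_given_M * p_M + p_N_given_notM * (1 - p_M)
p_M_given_N = p_N_given_M * p_M / p_N

print(f"P(M | N) = {p_M_given_N:.2f}")  # roughly 0.58 with these invented values

The reverse inference is only as trustworthy as the stability of P(N | M) and P(N | not-M); it is precisely this stability that the underlying causal mechanism secures, without thereby explaining M.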
These considerations reveal that, while predictive and explanatory
inferences do not have the same structure, they are closely related and
inform each other. The question becomes what exactly makes them dif-
ferent. And the answer cannot simply lie in causation vs. correlation
since, as said, robust correlations are grounded in causal connections
of sorts. My proposal is to draw the relevant distinction in the amount
and kind of detail provided. To appreciate the point, it will be useful to
separate cases where predictions and explanations coincide from cases
where they do not.
Consider, first, a scenario where the same trait, property, or condition
is both predictive and explanatory of a certain state. Here, marker B is a cause of condition C. For instance, the modified huntingtin gene is both a sure-fire predictor that Huntington’s Disease (HD) will occur when the patient comes of age and one of the driving factors in neural degeneration.
In these last two situations, the marker is predictive but not explanatory.
Note, once again, that there is an underlying causal-mechanistic connec-
tion that ensures the robustness of the association, enabling the prediction.
But, unlike our first scenario, adding etiological details will not turn the
prediction into an explanation—at least, not an explanation of our expla-
nandum. The shadow predicts the flagpole without explaining it. High
PSA levels predict prostate cancer but don’t explain it. Both effects are
explained by a common cause: genetic predisposition perhaps, although,
unfortunately, not much is currently known about this pathology.
Three brief remarks. First, why not use the common cause, as opposed to the correlated marker, to predict a condition such as cancer? More generally, could we not use causal explanations to predict effects? Of course, we could. The problem is that causal-mechanistic stories are often quite complex. To wit, not much is currently known about the network of causes of, say, prostate cancer and Huntington’s Disease. Thus, having a simple and easy-to-test marker comes in quite handy for diagnostic purposes.
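The common-cause structure behind such markers can be illustrated with a minimal simulation, sketched below in Python. The probabilities are invented and carry no clinical meaning; the point is only the structure: a latent predisposition G raises the chances of both a high marker B and condition C, while B itself does no causal work.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent common cause (an unspecified predisposition); all values are invented.
g = rng.random(n) < 0.10

# Marker B and condition C both depend on G, but B does not cause C.
p_b = np.where(g, 0.70, 0.10)   # P(high marker | G) vs. P(high marker | not G)
p_c = np.where(g, 0.40, 0.02)   # P(condition | G) vs. P(condition | not G)
b = rng.random(n) < p_b
c = rng.random(n) < p_c

print("P(C | B high) =", round(c[b].mean(), 3))   # markedly higher
print("P(C | B low)  =", round(c[~b].mean(), 3))

With these numbers the condition is several times more frequent when the marker is high, so B predicts C; yet intervening on B would leave C untouched, which is the sense in which the marker predicts without explaining.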
Second, and relatedly, can a predictive link be explained? Could the
causal-mechanistic underpinnings of the relation between PSA levels
and cancer not be unveiled, transforming the forecast into a full-blown
explanation? Yes, they could. But should they? What exactly is there
to be gained? Allow me to elaborate. In some cases, we may want to
explain an association to make sure that it is robust enough. When the
link between PSA levels and cancer was first observed, researchers could
legitimately question the reliability of the marker in detecting a danger-
ous condition such as prostate cancer. And shedding light on why these
conditions tend to correlate may help mitigate legitimate doubts. At
the same time, once the robustness of the marker has been established
beyond reasonable doubt, an explanation of the marker may be superflu-
ous. What we want to explain is cancer, not PSA levels. When we have
substantive evidence that PSA predicts cancer, the explanation of PSA is
not all that important—a black box will suffice. It seems better to focus
on diagnosing cancer itself and to learn how to cure it.
Third, and finally, what is the relation between prediction and expla-
nation? The CLM treated predictive and explanatory inferences as two
sides of the same coin. The CMM drew them apart. Explanation requires
specifying the causes underlying a specific phenomenon; prediction does
not. These two perspectives can be reconciled by stressing that predic-
tion and explanation pertain to different levels of description, depending
on how much mechanistic information is provided. These considerations
shed light on ongoing debates concerning the nature of so-called “mathe-
matical” and “causal-dynamical” explanatory models, which include lim-
ited causal-mechanistic information (Ross 2015; Green and Jones 2016).
Now, whether these explanatory models should be understood as mecha-
nism sketches or as a different type of explanation, or even if they are best
conceived from a causal perspective (Woodward 2018; Ross 2021) are
questions that I shall not address here. However, our discussion stresses
that there is a spectrum of causal models, which vary depending on the
amount of abstraction and idealization. At one end of the spectrum, we have “glass box” explanations, where mechanistic components are spelled out in detail and the causal link is as direct as possible. At the opposite end, we have black-boxed predictions, where causal information can be omitted and the causal link can be much more indirect. In between lies a continuum of causal models of various sorts. To be sure, spelling out the epistemic norms governing these predictive and explanatory inferences, including the appropriate amount of mechanistic detail and degree of abstraction, is a worthwhile project. Yet,
this ambitious endeavor lies beyond the scope of the present work.
Acknowledgments
The author is grateful to Bill Anderson, Janella Baxter, John Bickle,
Carl Craver, Guie Del Pinal, Mallory Hrehor, and Antonella Tramacere
for constructive comments on various versions of this essay. An early
draft was presented on September 28, 2019, at the Tool Development in
Experimental Neuroscience workshop in Pensacola Beach, Florida. The
audience provided valuable feedback.
References
Alonso-Gonzalez, A. Calaza, C. Rodriguez-Fontenla, and A. Carracedo (2019).
“Novel Gene-Based Analysis of ASD GWAS: Insight into the Biological Role
of Associated Genes.” Frontiers in Genetics 10, 733.
Anney, R. J. et al. (2017). “Meta-Analysis of GWAS of Over 16,000 In-
dividuals with Autism Spectrum Disorder Highlights a Novel Locus at
10q24.32 and a Significant Overlap with Schizophrenia.” Molecular
Autism 8(21). https://molecularautism.biomedcentral.com/articles/10.1186/
s13229-017-0137-9#author-information
Bechtel, W. and R. Richardson (1993). Discovering Complexity: Decompo-
sition and Localization as Strategies in Scientific Research. Princeton, NJ:
Princeton University Press.
Bourrat, P. (2020). “Causation and Single Nucleotide Polymorphism Heritabil-
ity.” Philosophy of Science 87(5), 1075–83.
Broadbent, A. (2013). Philosophy of Epidemiology. New York: Palgrave
MacMillan.
Bromberger, S. (1966). “Why-Questions.” In R. Colodny (Ed.), Mind and Cos-
mos: Essays in Contemporary Science and Philosophy, pp. 86–111. Pitts-
burgh, PA: University of Pittsburgh Press.
Carnap, R. (1966). An Introduction to the Philosophy of Science. Mineola, NY:
Dover.
Cartwright, N. (1983). How the Laws of Physics Lie. Oxford: Clarendon.
Cartwright, N. (2007). Hunting Causes and Using Them: Approaches in Phi-
losophy and Economics. Cambridge: Cambridge University Press.
Cartwright, N. (2019). Nature, the Artful Modeler. Chicago, IL: Open Court.
Craver, C. F. and L. Darden (2013). In Search of Mechanisms. Discoveries
Across the Life Sciences. Chicago, IL: University of Chicago Press.
Craver, C. F. and D. M. Kaplan (2020). “Are More Details Better? On the
Norms of Completeness for Mechanistic Explanation.” British Journal for
the Philosophy of Science 71(1), 287–319.
Del Pinal, G. and M. J. Nathan (2013). “There and Up Again: On the Uses and
Misuses of Neuroimaging in Psychology.” Cognitive Neuropsychology 30(4),
233–52.
Feigl, H. (1949). “The Scientific Outlook: Naturalism and Humanism.” Amer-
ican Quarterly 1(2), 135–48.
Friedman, M. (1974). “Explanation and Scientific Understanding.” The Journal
of Philosophy 71(1), 5–19.
Ghiara, V. and F. Russo (2019). “Reconstructing the Mixed Mechanisms of
Health: The Role of Bio- and Socio-Markers.” Longitudinal and Life Course
Studies 10, 7–25.
Glennan, S. (2017). The New Mechanical Philosophy. Oxford: Oxford Univer-
sity Press.
Goodman, N. (1955). Fact, Fiction, and Forecast. Cambridge, MA: Harvard
University Press.
Harden, K. P. (2021). “‘Reports of My Death Were Greatly Exaggerated’: Behavior Genetics in the Postgenomic Era.” Annual Review of Psychology 72, 37–60.
Hempel, C. G. and P. Oppenheim (1948). “Studies in the Logic of Explanation.”
Philosophy of Science 15, 135–75.
Hume, D. (1999 [1748]). An Enquiry Concerning Human Understanding. New
York: Oxford University Press.
Hume, D. (2000 [1738]). A Treatise of Human Nature. New York: Oxford Uni-
versity Press.
Hutzler, F. (2014). “Reverse Inference Is Not a Fallacy Per Se: Cognitive Pro-
cesses Can Be Inferred from Functional Imaging Data.” Neuroimage 84,
1061–69.
Illari, P. and F. Russo (2016). “Information Channels and Biomarkers of
Disease.” Topoi 35, 175–90.
Kitcher, P. (1981). “Explanatory Unification.” Philosophy of Science 48 (4),
507–31.
Kitcher, P. (1989). “Explanatory Unification and the Causal Structure of the
World.” In P. Kitcher and W. C. Salmon (Eds.), Scientific Explanation, pp.
410–505. Minneapolis: University of Minnesota Press.
Lewis, D. K. (1986). “Postscript E to ‘Causation’.” In Philosophical Papers,
Volume 2, pp. 193–212. New York: Oxford University Press.
Lewontin, R. (1974). The Genetic Basis of Evolutionary Change. New York:
Columbia University Press.
Lynch, K. E. and P. Bourrat (2017). “Interpreting Heritability Causally.” Philos-
ophy of Science 84(1), 14–34.
Machamer, P. K., L. Darden, and C. F. Craver (2000). “Thinking about Mech-
anisms.” Philosophy of Science 67, 1–15.
Machery, E. (2014). “In Defense of Reverse Inference.” British Journal for the
Philosophy of Science 65(2), 251–67.
Nathan, M. J. (2016). “Counterfactual Reasoning in Molecular Medicine.”
In G. Boniolo and M. J. Nathan (Eds.), Philosophy of Molecular Medicine:
Foundational Issues in Research and Practice, pp. 192–214, New York:
Routledge.
Nathan, M. J. (2021). Black Boxes: How Science Turns Ignorance into Knowl-
edge. New York: Oxford University Press.
Nathan, M. J. and G. Del Pinal (2016). “Mapping the Mind: Bridge Laws and
the Psycho-Neural Interface.” Synthese 193(2), 637–57.
Nathan, M. J. and G. Del Pinal (2017). “The Future of Cognitive Neuroscience?
Reverse Inference in Focus.” Philosophy Compass 12(7), e12427.
NIH Biomarkers Definitions Working Group. (2001). “Biomarkers and Surro-
gate Endpoints: Preferred Definitions and Conceptual Framework.” Clinical
Pharmacology and Therapeutics 69, 89–95.
Pearl, J. and D. Mackenzie (2018). The Book of Why: The New Science of
Cause and Effect. New York: Basic Books.
Poldrack, R. A. (2018). The New Mind Readers: What Neuroimaging Can and
Cannot Reveal about Our Thoughts. Princeton, NJ: Princeton University
Press.
Richardson, K. and M. C. Jones (2019). “Why Genome-Wide Associations with
Cognitive Ability Measures are Probably Spurious.” New Ideas in Psychology
55, 35–41.
Ross, L. N. (2015). “Dynamical Models and Explanation in Neuroscience.”
Philosophy of Science 82(1), 32–54.
Ross, L. N. (2021). “Distinguishing Topological and Causal Explanation.” Syn-
these 198, 9803–20.
Russell, B. (1912). The Problems of Philosophy. Indianapolis, IN: Hackett.
Russo, F. and J. Williamson (2012). “Envirogenomarkers. The Interplay be-
tween Difference-Making and Mechanisms.” Medical Studies 3, 249–262.
Salmon, W. C. (1984). Scientific Explanation and the Causal Structure of the
World. Princeton, NJ: Princeton University Press.
Salmon, W. C. (1998). Causality and Explanation. New York: Oxford Univer-
sity Press.
Scriven, M. (1958). “Definitions, Explanations, and Theories.” In H. Feigl, M.
Scriven, and G. Maxwell (Eds.), Minnesota Studies in the Philosophy of Sci-
ence, Volume 2, pp. 99–195. Minneapolis: University of Minnesota Press.
Spirtes, P., C. Glymour, and R. Scheines (2000). Causation, Prediction, and
Search (2nd ed.). Cambridge and London: MIT Press.
Strevens, M. (2008). Depth. An Account of Scientific Explanation. Cambridge,
MA: Harvard University Press.
Strevens, M. (2016). “Special-Science Autonomy and the Division of Labor.” In
M. Couch and J. Pfeifer (Eds.), The Philosophy of Philip Kitcher, pp. 153–81.
New York: Oxford University Press.
Sul, J. H., L. S. Martin, and E. Eskin (2018). “Population Structure in Genetic Studies: Confounding Factors and Mixed Models.” PLoS Genetics 14(12), e1007309.
Torrico, B., A. G. Chiocchetti, E. Bacchelli, E. Trabetti, A. Hervás, B. Franke, J.
K. Buitelaar, N. Rommelse, A. Yousaf, E. Duketis, and C. M. Freitag. (2017).
“Lack of Replication of Previous Autism Spectrum Disorder GWAS Hits in
European Populations.” Autism Research 10(2), 202–11.
van der Sijde, M., A. Ng, and J. Fu (2014). “Systems Genetics: From GWAS to
Disease Pathways.” Biochimica et Biophysica Acta 1842(10), 1903–9.
Vineis, P., P. Illari, and F. Russo (2017). “Causality in Cancer Research: A Jour-
ney Through Models in Molecular Epidemiology and Their Philosophical In-
terpretation.” Emerging Themes in Epidemiology 14, 7.
Visscher, P. M., N. R. Wray, Q. Zhang, P. Sklar, M. I. McCarthy, M. A. Brown, and J. Yang (2017). “10 Years of GWAS Discovery: Biology, Function,
and Translation.” The American Journal of Human Genetics 101(1), 5–22.
Vrieze, S. I., W. G. Iacono, and M. McGue (2012). “Confluence of Genes, Envi-
ronment, Development, and Behavior in a Post-GWAS World.” Development
and Psychopathology 24(4), 1195–214.
Weiskopf, D. A. (2021). “Data Mining the Brain to Decode the Mind.” In F.
Calzavarini and M. Viola (Eds.), Neural Mechanisms: New Challenges in the
Philosophy of Neuroscience, pp. 85–110. Cham: Springer.
Woodward, J. (2003). Making Things Happen. A Theory of Causal Explana-
tion. New York: Oxford University Press.
Woodward, J. (2018). “Some Varieties of Non-Causal Explanation.” In A. Reut-
linger and J. Saatsi, Explanation Beyond Causation: Philosophical Perspec-
tives on Non-Causal Explanations, pp. 117–37. Oxford: Oxford University
Press.
Section 3
Research Tools, Integration, Circuits, and Ontology
10 How Do Tools Obstruct (and Facilitate) Integration in Neuroscience?
David J. Colaço
DOI: 10.4324/9781003251392-14
1 Introduction
For the past decade, philosophers have investigated how the methods,
data, and especially the explanatory frameworks of the subfields of neu-
roscience might be integrated (Craver 2007; Sullivan 2017). This interest
is unsurprising, as neuroscientists investigate diverse entities and phe-
nomena at different scales of the brain, which they aim to link via sci-
entific integration. Neuroscientists have developed integrative projects
(Shepherd et al. 1998; Markram 2012; Jorgenson et al. 2015; Amunts
et al. 2016) to both better understand the brain at different scales and
develop treatments for diseases (Markram 2012; Jorgenson et al. 2015).
Given that these integrative projects have been active for a decade,
it seems prudent to ask how successful they have been. Systematic in-
tegrative brain modeling projects, such as the Human Brain Project or
Blue Brain, have not fulfilled their ambitious goals (Yong 2019), and the
promised therapeutic interventions have yet to be delivered. At the same
time, neuroscientists have integrated locally, with piecemeal connections
made between subfields (Sullivan 2017). This disconnect between desires
and reality raises two questions. First, why has integrative neuroscience
not matched its expectations? Second, why does it succeed when it does?
In this chapter, I answer these questions by arguing that tools can both
obstruct and facilitate scientific integration, where tools are the mate-
rials and technologies that researchers use to study brain structure and
activity. My argument is built on two premises about popular tools in
neuroscience: (1) these tools have different constraints, (2) despite these
constraints, these tools productively generate knowledge. Together, these
premises entail that established tools contribute to knowledge production
often at the expense of the integration of methods, data, or explanatory
frameworks. This fact explains why we can find cases of local integration
across neuroscience, but systematic integration remains elusive.
In Section 2, I detail integration in neuroscience. I discuss how differ-
ent components – methods, data, and explanatory frameworks – might
be integrated, and I address the disconnect between desires for integra-
tion and its reality. In Section 3, I defend my premises for why tools
obstruct scientific integration. I illustrate them with examples from cog-
nitive neuroscience and molecular and cellular cognition. In Section 4,
I explain why some new projects, such as the BRAIN Initiative, prior-
itize the development of new tools to facilitate integration. I support
my claims with a case study of the tool known as CLARITY. This case
shows that tools can facilitate integration, explaining why local integra-
tion occurs despite as-of-yet unfulfilled desires of systematic integration,
given the constraints and productivity of tools.
3.3 Tools Obstructing Scientific Integration: MCC and Cognitive Neuroscience
An example of how tools obstruct scientific integration between neu-
roscience subfields can be found when we compare different subfields
that study cognition. Take molecular and cellular cognition (MCC), a
subfield where molecular and cellular approaches are used to study be-
havioral and cognitive phenomena. MCC uses tools like “molecular ma-
nipulations (e.g., gene targeting, viral vectors, pharmacology), cellular
measures and manipulations (neuroanatomy, electrophysiology, optoge-
netics, cellular and circuit imaging), and a plethora of behavioral assays”
in the study of cognition (Silva et al. 2013, p. 8). Researchers use these
tools to “intervene into molecular pathways in neurons and attempt to
develop explanations that bridge molecules, cells, circuits, and behav-
ior” (Silva et al. 2013, p. 12).
MCC does not readily integrate with other subfields that involve the
study of cognition, such as cognitive neuroscience. This lack of integra-
tion with cognitive neuroscience is a feature of MCC according to the
Molecular and Cellular Cognition Society:
Thus, even though these two subfields ostensibly study the same cogni-
tive capacities, they study these capacities with distinct tools. In addi-
tion, this quote suggests that MCC researchers perceive their work to be
an alternative to cognitive neuroscience, rather than something that is
intended to integrate with it.
The differences between MCC and cognitive neuroscience can be il-
lustrated by how memory might be studied in each subfield. To study
memory deficits, MCC researchers use tools like electrophysiology and
transgenics to manipulate synaptic and molecular activities in a mouse
model organism (Silva 2003). To use these tools on this model organ-
ism, the researchers must operationalize memory deficits in a way that
can be measured in mice. At this time, tools like electrophysiology and
transgenics have not been used on human subjects due to ethical con-
straints, leaving open how the mechanistic schemata developed in MCC
relate to schemata that represent or explain memory phenomena in other
animals (let alone humans).
Cognitive neuroscientists use different tools than those that are used in
MCC. When studying human memory phenomena, they often use MRI
to correlate brain activity differences between healthy and impaired hu-
man subjects (Gabrieli 1998). This tool can be methodologically inte-
grated with other tools like EEG. However, methods like fMRI cannot
be concurrently used with electrophysiology in mouse model organisms,
due to electromagnetic constraints. This incompatibility obstructs the
integration of methods that employ tools with distinct constraints. Fur-
ther, how memory deficits are operationalized in the two cases is not
equivalent, making it difficult to compare the data from humans and
mice. This mismatch obstructs data integration from studies whose tar-
gets may not be equivalent, even if these targets share the same nomen-
clature. This fact ultimately obstructs the integration of explanatory
frameworks from these subfields. Because the relation between targets is
unclear, it also is unclear how the mechanistic schemata in mouse model
organisms relate to memory in humans.
This case does not exhaust the tools that are used in MCC or cogni-
tive neuroscience. For instance, cognitive neuroscience can employ more
invasive tools in model organisms like the rhesus macaque. Further, cog-
nitive neuroscience has integrated explanatory frameworks from neuro-
biology (though not MCC) in its history. For instance, neurobiological
work on long-term potentiation in the rabbit hippocampus (Colaço
2020) has greatly informed neuroimaging studies on the role of the hip-
pocampus in memory formation and consolidation. Thus, integration
between cognitive neuroscience and MCC can occur, but we must be
clear about exactly what of these subfields is integrated. This interaction
is just one example of a case where the reality fails to meet the desires of
the integration of neuroscience, but it highlights what integration is up
against. While local methodological, data, and explanatory integration
might occur – MCC and cognitive neuroscience have cases of all three –
each can be obstructed. Additionally, systematic integration remains elu-
sive and might not be desired by researchers in these respective subfields.
Nonetheless, we must ask how the methods, data, and explanatory
frameworks that are used in these subfields might be integrated. Each
mode of integration is obstructed. However, MCC and cognitive neu-
roscience each are individually successful, indebted to the fact that each
subfield has productive tools that, when used within their respective con-
straints, produce useful data about the systems the respective groups of
researchers target. These data, in turn, feed into the explanatory frame-
works of each subfield, including how the targets are operationalized and
how brain activity and structure are understood. Hence, the integration of the methods, data, and explanatory frameworks of these subfields is counterbalanced by the productivity of the tools that do not facilitate integration.1 Further, it is worth speculating whether researchers in either
camp want to integrate their methods, data, or explanatory frameworks,
even if there were no other obstacles to achieving more systematic inte-
gration of these two subfields.
4.1 CLARITY
A tool that drove the formation of the BRAIN Initiative is CLARITY,
which renders intact tissue transparent and amenable to optical and flu-
orescent microscopy paradigms (Chung and Deisseroth 2013). This tool
was developed in the Deisseroth laboratory at Stanford. Prior to the development of CLARITY, optical microscopy could only be performed at the microscale, due to the need to slice tissue into thin segments so that light can penetrate them. Indirect macroscale imaging tools – such as
MRI – cannot be used by researchers to investigate entities or phenom-
ena at the microscale or mesoscale, due to constraints of the coarseness
of indirect imaging. CLARITY solves the problem that some lipids in
cellular tissue scatter light. This technique removes these lipids and de-
velops a transparent support structure in the tissue.
The process begins when the tissue is placed in a monomer and linking
chemical solution. These chemicals diffuse into the tissue, and they bind
to proteins and nucleic acids in this tissue. However, these chemicals do
not bind to the lipids that scatter light. Following the binding, the im-
pregnated tissue is heat shocked, which causes the monomer and linking
chemicals to bind to one another, forming a “mesh.” The biomolecules
that bind to the chemicals embed in this mesh as it sets. Following the
development of the mesh, detergents are used by researchers to remove
the non-embedded molecules like the light-scattering lipids. Because the
mesh is in place, the tissue, minus the light-scattering lipids, remains in
the configuration it was in at the start of the process.
CLARITY has different constraints when compared to those of mi-
croscale slice microscopy or the indirect means of measuring or imag-
ing at the macroscale. Instead of researchers needing to slice tissue into
pieces, CLARITY allows researchers to prepare larger dimensions of tis-
sue. Because of its distinct constraints when compared to previous tools,
the use of CLARITY produces microscale data about the properties of
individual neurons and other cellular material as well as mesoscale data
about the structural relations of the population of neurons and other
material within the tissue.
Just like any other tool, CLARITY does not lack constraints. The
biological system in question is killed by CLARITY, and the hydrogel
and detergents used in the process can damage cellular structures in
predictable ways. What matters for my discussion of integration in neu-
roscience is that its constraints are novel when compared to other tools,
including the existing optical and indirect imaging methods mentioned
above, allowing researchers to produce data that are relevant to the data
produced using other tools. Thus, CLARITY overlaps other tools’ do-
mains: researchers can use it with existing light and fluorescent micros-
copy methods to investigate microscopic, mesoscopic, and macroscopic
structures.
Unlike many imaging tools of the past, CLARITY is a “multi-scale”
tool, with constraints covering the microscale, mesoscale, and macro-
scale. In this sense, CLARITY crosscuts previous research that used tools
with incompatible constraints. This is a direct consequence of CLAR-
ITY’s comparatively novel set of constraints: the change in constraints
affects how the tool informs existing frameworks, helping with method-
ological, data, and (perhaps most importantly) explanatory integration.
The creators of CLARITY have sought to make this tool productive
and easy to incorporate into existing paradigms that deploy other mea-
surement, imaging, and manipulation tools. Beyond publishing works
that communicate the theoretical basis and validity of the tool (Chung
et al. 2013), these creators developed a “wiki” to communicate the use
of the tool, the materials, and the kinds of projects for which CLAR-
ITY is useful (CLARITY Resource Center). This “CLARITY resource
center” also includes a forum for users of the tool to discuss their uses
of CLARITY as well as the issues that they face when they use it. This
forum is paired with “boot camps” that train researchers to use CLAR-
ITY. These resources provide researchers the opportunity to obtain the
first-hand experience of using the tool and second-hand experience from
other users. These strategies show that its creators aim to make the use
of CLARITY easy, efficient, and reliable.
The point of this section is not to suggest that CLARITY is some kind
of “silver bullet” tool that will resolve integration challenges in neuro-
science. Rather, it is one tool amongst several that can, and as I will
show does, contribute to integration. To put this tool into the broader
context of neuroscience inquiry, we must consider some of the limita-
tions of CLARITY. For instance, it takes time and effort for researchers
to make a tool productive, and CLARITY is no exception. There remain
aspects of its use that make it less productive when compared to other
tools. It can take months for researchers to successfully perform CLAR-
ITY on neural systems, diminishing its efficiency compared to other
imaging tools.
There already have been attempts to reduce the time needed to prepare
the tissue and image it (Gradinaru et al. 2018), but the fact that the tool
is slow limits its productivity at this time. Likewise, the wiki dedicates
a section to troubleshooting the application of the tool, suggesting that
it can be difficult to master CLARITY. These troubleshooting tips will
likely help ease the use of CLARITY in the future. Nonetheless, their
existence reflects the fact that there is a degree of skill required for re-
searchers to use the tool correctly.
5 Conclusion
In this chapter, I have shown that resolving the challenges of integration
in neuroscience requires us to reflect upon both the constraints and the
productivity of the tools that are popular in the discipline. I have ex-
plained when tools are obstructions and facilitators of three modes of in-
tegration in neuroscience. The fact that productive but constrained tools
lead researchers to continue doing research at the expense of engaging
in integrative practices is a topic that has not been sufficiently addressed
in the philosophy of science, and it is worthwhile to appreciate that this
relation is challenging but not impossible for us to overcome.
As I have shown, local integration can be facilitated by new produc-
tive tools with novel constraints, even if this does not promote any sort
of systematic integration, especially in the short run. This is not a mere
possibility; it is actively achieved by projects like the BRAIN Initiative
and research with the tool CLARITY. Nonetheless, my premises of con-
straints and productivity are important to understanding the promise of
integration in neuroscience, as they explain why tools both obstruct and
facilitate scientific integration.
Note
1 This challenge reflects Bechtel’s claims that integration that creates a new
subfield such as MCC often results in a great deal of specialization. This
specialization, in turn, can result in disintegration between the new subfield
and previously established subfields (1993, p. 278).
References
Amunts, K., Ebell, C., Muller, J., Telefont, M., Knoll, A., & Lippert, T. (2016).
The human brain project: Creating a European research infrastructure to de-
code the human brain. Neuron, 92(3), 574–581.
Bassett, D. S., & Sporns, O. (2017). Network neuroscience. Nature Neurosci-
ence, 20(3), 353–364.
Bechtel, W. (1993). Integrating sciences by creating new disciplines: The case of
cell biology. Biology and Philosophy, 8(3), 277–299.
Bickle, J. (2016). Revolutions in neuroscience: Tool development. Frontiers in
Systems Neuroscience, 10, 24.
Chang, E. F. (2015). Towards large-scale, human-based, mesoscopic neurotech-
nologies. Neuron, 86(1), 68–78.
Chang, E. H., Argyelan, M., Aggarwal, M., Chandon, T. S. S., Karlsgodt, K.
H., Mori, S., & Malhotra, A. K. (2017). The role of myelination in measures
of white matter integrity: Combination of diffusion tensor imaging and two-
photon microscopy of CLARITY intact brains. Neuroimage, 147, 253–261.
Chung, K., & Deisseroth, K. (2013). CLARITY for mapping the nervous sys-
tem. Nature Methods, 10(6), 508.
Chung, K., Wallace, J., Kim, S. Y., Kalyanasundaram, S., Andalman, A. S.,
Davidson, T. J.,… & Pak, S. (2013). Structural and molecular interrogation
of intact biological systems. Nature, 497(7449), 332–337.
CLARITY Resource Center. (2020). http://clarityresourcecenter.org (Accessed
June 20th, 2021).
Colaço, D. (2018). Rethinking the role of theory in exploratory experimenta-
tion. Biology and Philosophy, 33(5–6), 38.
Colaço, D. (2020). Recharacterizing scientific phenomena. European Journal
for Philosophy of Science, 10(2), 1–19.
Craver, C. F. (2007). Explaining the brain: Mechanisms and the mosaic unity of
neuroscience. Oxford: Oxford University Press.
Feest, U. (2005). Operationism in psychology: What the debate is about, what
the debate should be about. Journal of the History of the Behavioral Sciences,
41(2), 131–149.
Gabrieli, J. D. (1998). Cognitive neuroscience of human memory. Annual Re-
view of Psychology, 49(1), 87–115.
Gradinaru, V., Treweek, J., Overton, K., & Deisseroth, K. (2018). Hydrogel-tis-
sue chemistry: principles and applications. Annual Review of Biophysics, 47,
355–376.
Grillner, S., Kozlov, A., & Kotaleski, J. H. (2005). Integrative neuroscience:
Linking levels of analyses. Current Opinion in Neurobiology, 15(5), 614–621.
Harp, R., & Khalifa, K. (2015). Why pursue unification?: A social-
epistemological puzzle. THEORIA. Revista de Teoría, Historia y Funda-
mentos de la Ciencia, 30(3), 431–447.
Huang, L., Kebschull, J. M., Furth, D., Musall, S., Kaufman, M. T., Churchland,
A. K., & Zador, A. M. (2018). High-throughput mapping of mesoscale con-
nectomes in individual mice. bioRxiv, 422477. doi: 10.1101/422477
Jorgenson, L. A., Newsome, W. T., Anderson, D. J., Bargmann, C. I., Brown,
E. N., Deisseroth, K.,… & Marder, E. (2015). The BRAIN Initiative:
Developing technology to catalyse neuroscience discovery. Philosophi-
cal Transactions of the Royal Society B: Biological Sciences, 370(1668),
20140164.
Koroshetz, W., Gordon, J., Adams, A., Beckel-Mitchener, A., Churchill, J.,
Farber, G.,… & White, S. (2018). The state of the NIH BRAIN initiative.
Journal of Neuroscience, 38(29), 6427–6438.
Kotchoubey, B., Tretter, F., Braun, H. A., Buchheim, T., Draguhn, A., Fuchs,
T.,… & Rentschler, I. (2016). Methodological problems on the way to inte-
grative human neuroscience. Frontiers in Integrative Neuroscience, 10, 41.
Krohs, U. (2012). Convenience experimentation. Studies in History and Philos-
ophy of Science Part C: Studies in History and Philosophy of Biological and
Biomedical Sciences, 43(1), 52–57.
Leonelli, S. (2016). Data-centric biology: A philosophical study. Chicago, IL:
University of Chicago Press.
Leuze, C., Aswendt, M., Ferenczi, E., Liu, C. W., Hsueh, B., Goubran, M.,…
& McNab, J. A. (2017). The separate effects of lipids and proteins on brain
MRI contrast revealed through tissue clearing. Neuroimage, 156, 412–422.
Logothetis, N. K., Pauls, J., Augath, M., Trinath, T., & Oeltermann, A. (2001).
Neurophysiological investigation of the basis of the fMRI signal. Nature,
412(6843), 150–157.
Markram, H. (2012). The human brain project. Scientific American, 306(6),
50–55.
Mitchell, S. D., & Gronenborn, A. M. (2017). After fifty years, why are protein
X-ray crystallographers still in business? The British Journal for the Philoso-
phy of Science, 68(3), 703–723.
Molecular and Cellular Cognition Society. (2021) https://molcellcog.org/about-
mccs (Accessed June 20th, 2021).
O’Malley, M. A. (2013). When integration fails: Prokaryote phylogeny and the
tree of life. Studies in History and Philosophy of Science Part C: Studies
in History and Philosophy of Biological and Biomedical Sciences, 44(4),
551–562.
O’Malley, M. A., & Soyer, O. S. (2012). The roles of integration in molecular
systems biology. Studies in History and Philosophy of Science Part C: Stud-
ies in History and Philosophy of Biological and Biomedical Sciences, 43(1),
58–68.
Roy, D. S., Park, Y. G., Ogawa, S. K., Cho, J. H., Choi, H., Kamensky, L.,…
& Tonegawa, S. (2019). Brain-wide mapping of contextual fear memory en-
gram ensembles supports the dispersed engram complex hypothesis. bioRxiv,
668483. doi: 10.1101/668483
Shepherd, G. M., Mirsky, J. S., Healy, M. D., Singer, M. S., Skoufos, E., Hines,
M. S.,… & Miller, P. L. (1998). The human brain project: Neuroinformatics
tools for integrating, searching and modeling multidisciplinary neuroscience
data. Trends in Neurosciences, 21(11), 460–468.
Silva, A. J. (2003). Molecular and cellular cognitive studies of the role of synap-
tic plasticity in memory. Journal of Neurobiology, 54(1), 224–237.
Silva, A. J., Landreth, A., & Bickle, J. (2013). Engineering the next revolution
in neuroscience: The new science of experiment planning. Oxford: Oxford
University Press.
Sporns, O. (2014). Contributions and challenges for network models in cogni-
tive neuroscience. Nature Neuroscience, 17(5), 652–660.
Sullivan, J. A. (2016). Construct stabilization and the unity of the mind-brain
sciences. Philosophy of Science, 83(5), 662–673.
Sullivan, J. A. (2017). Coordinated pluralism as a means to facilitate integrative
taxonomies of cognition. Philosophical Explorations, 20(2), 129–145.
Ye, L., Allen, W. E., Thompson, K. R., Tian, Q., Hsueh, B., Ramakrishnan,
C.,… & Deisseroth, K. (2016). Wiring and molecular features of prefrontal
ensembles representing distinct experiences. Cell, 165(7), 1776–1788.
Yong, E. (2019). The human brain project hasn’t lived up to its prom-
ise. The Atlantic. https://www.theatlantic.com/science/archive/2019/07/
ten-years-human-brain-project-simulation-markram-ted-talk/594493/
11 Understanding Brain Circuits
Do New Experimental Tools Need to Address New Concepts?
David Parker
DOI: 10.4324/9781003251392-15
at a very early stage of brain science” (cited in Horgan 1999). Gunther
Stent (1969) suggested that analysing the physiological processes of be-
haviour was pointless as they will “degenerate into seemingly ordinary
reactions no more and no less fascinating than those occurring in the
liver”, echoing Charles Sherrington’s claim of a remoteness “between the
field of neurology and that of mental health …physiology has not enough
to offer about the brain in relation to mind to lend the psychiatrist much
help” (Sherrington 1951). There is thus a lack of connection between
the components of the nervous system and the outputs they produce.
Examining components or behaviours alone is not sufficient: molecular
and cellular analyses require knowledge of the behaviour or goal of the
system, while top-down behavioural approaches need lower-level insight
to constrain potential explanations.
While it seems implicit in neuroscience claims, understanding lacks an
accepted definition (de Regt 2013). For example, Bassett and Gazzaniga
(2011, p. 6) wrote, “Understanding the brain depends significantly on
understanding its emergent properties”, while Gregory claimed under-
standing will remove any appeal to emergence (in Blackmore 2006, p.
105). Bassett and Gazzaniga (p. 8) also say “To understand mind-brain
mechanisms it is necessary to characterize relations between multiple
levels of the multiscale human brain system, including interactions be-
tween temporal scales”, while Dennett (1971), Newell (1982), Marr (1982), and Glass et al. (1979), all following Gilbert Ryle (1949), claim that understanding can be obtained at different levels: the computational or behavioural/cognitive, the programme or algorithm, or the physical or implementational level.
Understanding and explanation in biology are claimed to appeal to mech-
anisms rather than laws (Railton 1981; Machamer et al. 2000). Functional
concepts and mechanisms explain how a system and its parts do what they
do, while understanding requires that the functional concepts and expla-
nations are both intelligible and correct (Grimm 2006). The phlogiston
theory made certain phenomena intelligible and had practical uses, but
obviously couldn’t be claimed as an understanding of combustion.
In neuroscience, it is easier to say what is not sufficient for understand-
ing. Description is not enough. Claims that more detail will explain are illustrated by connectomic approaches (Schroter et al. 2017; Morgan and Lichtman 2013) and by the suggestion that understanding will follow from recording “from ever more cells over larger brain regions” (Mott et al. 2018). These claims reflect an “illusion of depth” (Ylikoski 2009), assuming that the more we know the more we will understand. Second, while prediction
or the ability to make targeted interventions is important (Woodward
2017), we can predict effects and reliably manipulate systems without
explaining how they happen. A classic neurophysiological example is
the Jendrassik manoeuvre used clinically for over a century to potentiate
reflexes (e.g. the knee-jerk reflex) but which still lacks explanation (Nar-
done and Schieppati 2008). Lesions provide another example. Spinal
cord lesions evoke predictable sensory and motor disturbances that are
not explained by the lesion alone, but also reflect diverse functional
changes above and below the lesion site. Emphasis on the lesion has fo-
cused remedial approaches on regeneration to repair the lesion, but this
has failed to translate into a treatment (Steward et al. 2012), presumably
because it doesn’t properly explain the changes.
Even if we can explain and understand how an effect occurs, this still
leaves the question of “why”. Sherrington (1899) claimed that neuro-
physiology can only answer “how” questions: analysing the activity of
all the components involved we could understand how we run, but not
why (to exercise, compete, escape?). Why questions are teleological and
represent the goal or purpose, a final causality replaced in reduction-
ist accounts by the efficient cause, the mechanical account of how an
effect occurred. But explaining behaviour requires knowing how and
why it occurred. For example, a complete description of “how” may
not determine why a person exhibits certain psychopathology (a faulty
gene or faulty environment?), and thus won’t identify the most effective
treatment.
Neural circuit analyses, while far from a panacea (Parker 2010), offer a middle ground between bottom-up and top-down analyses by considering how components interact to generate outputs. Minimal criteria for circuit understanding must include (Selverston 1980):
1 Experimental Reductionism
Mechanistic explanations of behaviour in neuroscience typically reduce
behaviours to their constituent molecules and cells (Selverston 1980; Get-
ting 1989; Ito 2006; Yuste 2015). Appeals to reductionism include its track
record; that by opening “black boxes” it allows greater potential for con-
trol and practical use; and that it offers greater generality and simplicity.
Reductive explanations relate to a “machine model” (Monod 1972), where
interlocking parts combine to move the system from an initial to a goal
state, an explanation showing how the component parts and their organ-
isation achieve the system goal (Darden and Craver 2009). Hanahan and
Weinberg (2000) claimed, “Two decades from now…it will be possible to
lay out the complete integrated circuit of the cell… we will then be able
to apply the tools of mathematical analysis to explain”. The microproces-
sor analogy is unfortunate as Jonas and Kording (2017) have shown that
current neuroscience approaches are insufficient to explain actual micro-
processors. Hanahan and Weinberg’s 20 years have passed, and features
have been identified that negate the circuit-board analogy for even a single
cell: a fluid cytoskeleton, “intrinsically disordered proteins”, enzymes that
affect numerous substrates or perform non-enzymatic functions, pleomor-
phic molecular assemblies with “probability clouds” of interactions, and
stochastic and probabilistic gene expression (see Nicholson 2019).
Apart from clinical case studies, most biological discoveries reflect the
use of model organisms with features that make them useful. This relates
to Krogh’s principle (originally outlined by Claude Bernard in the 19th
century; Jørgensen 2001), that for any problem there is an animal on which
it can be most conveniently studied. Between the 1930s and 1960s a wide
range of invertebrate and lower-vertebrate model systems were introduced
to determine the general principles of nervous system function (strongly
associated with the field of neuroethology). These have relatively simple be-
haviours and accessible nervous systems containing relatively small num-
bers of often large cells that allowed circuit characterisations related to
behaviours. These natural advantages can be engineered to some extent in
more complex (mammalian) systems using early developmental stages or
reduced preparations (tissue slices or cultured cells). While we cannot sim-
ply extrapolate, the hope is that the conservation of function in simpler or
reduced preparations will help us explain more complex or intact systems.
The reductionist approach is illustrated by the field of molecular and
cellular cognition, a ‘ruthless’ reductionism that claims to explain the
molecular basis of cognition from genetic manipulations (Bickle 2003).
To some extent this link is obvious: if a change in behaviour reliably fol-
lows a manipulation, then the manipulation caused the change. But this
is a correlation, and useful for that, not an explanation. Molecules don’t
connect directly like switches to behaviours but work through cells in
circuits that form systems that link to behaviour. While we could detail
the molecular intervention and the resulting behaviour, explaining the
links between them requires knowing how the molecular manipulation
worked through different levels.
Two aspects removed by the reductionist neural approach are glial cells
and non-wired effects. We now know that glia serve more than the tradi-
tional developmental or “housekeeping” role (clearing away released trans-
mitters and extracellular potassium caused by action potential signalling):
the tripartite synapse concept sees glia as functional components that re-
ceive and send signals to neurons (Araque and Navarrete 2010). Non-wired
signals are not passed along axons and synaptic contacts but through the
extracellular space and include volume transmission (the diffusion of trans-
mitters up to mm from their point of release; Svensson et al. 2019) and
extracellular electrical fields (“ephaptic” signals; Weiss and Faber 2010).
To consider the issue, assume that field effects are important (Figure 11.1). We know they occur as they are measured in EEGs, but their func-
tional role has received relatively little attention. Field effects are not a
component in the conventional sense but reflect the summed activity in
cell populations, an example of an emergent biological effect. This gen-
erates an extracellular signal that can affect the activity of other cells
depending on its magnitude and spread, which are not easy to predict and
depend on the specific features of the activity and the extracellular space
(Weiss and Faber 2010), which in turn depends on the arrangement of
other neurons and the geometry of the extracellular space. Extracellular
fields are thus influenced by neuronal activity and neuronal activity by
extracellular fields, a circular interaction. Neuronal activity also alters the
diameter, and thus resistivity of the extracellular space (Østby et al. 2009)
to alter the magnitude and spread of field effects, neuronal activity, and
the extracellular space… generating a circular interaction that influences
another circular interaction. We could, in principle, understand how this
influences an output if we knew the contribution of each cell to the extra-
cellular signal, and how it spreads through the extracellular space to alter
neuronal activity and the extracellular space, but this would at best pro-
vide a snapshot of constantly evolving dynamic activity. This argument
could be negated by begging the question and saying that field effects are
an unimportant epiphenomenon. But similar considerations apply to vol-
ume transmission and to circular interactions generated by conventional
“wired” feedback connections between neurons (Parker 2019; Svensson
et al. 2019), which cannot be so easily dismissed in order to appeal to simple accounts.
It isn’t that we want or need knowledge of the minutiae of wired and
non-wired components and connections evolving in real time to explain
(Greenberg and Manor 2005, provide a sobering example of the failure to
explain when detail exceeds a small limit in a very simple system), but we
need to appreciate the role of these effects in any explanation.
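The shape of these circular interactions can be sketched schematically. The toy loop below, written in Python, is not a biophysical model: its update rules and numbers are arbitrary assumptions, chosen only to show that activity, extracellular field, and extracellular space are each defined partly in terms of the others, so any single measurement is a snapshot of a loop rather than a property of one component.

# Schematic toy loop; update rules and units are arbitrary assumptions.
activity = 1.0   # summed population activity
space = 1.0      # relative width of the extracellular space
field = 0.0      # extracellular field generated by that activity

for t in range(5):
    field = 0.5 * activity / space          # activity generates a field; a tighter space concentrates it
    space = max(0.5, 1.0 - 0.2 * activity)  # activity shrinks the extracellular space
    activity = 1.0 + 0.3 * field            # the field feeds back on excitability
    print(f"t={t}  activity={activity:.3f}  field={field:.3f}  space={space:.3f}")

Even when such a loop settles, knowing any one variable in isolation underdetermines what the coupled system will do next; with plasticity added, as in Figure 11.1d, the loop itself changes over time.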
Figure 11.1 Circular interactions along wired and non-wired pathways. (a) Neu-
ronal activity generates extracellular signals (shaded circle), that
can alter neuronal excitability. At the synaptic terminal, inputs are
evoked in postsynaptic cells and in glial cells (G), and glia can signal
back to neurons. Glial cells form a syncytium through gap junction
connections (dashed lines), which allows local activity to spread. (b)
Neuronal activity can cause swelling of glial cells to ‘shrink’ the
extracellular space and change the extracellular field, neuronal and
glial activity, transmitter release and glial cell activity, which alters
the extracellular space…, generating multiple circular interactions.
(c) These are known features of nervous systems, nothing is exag-
gerated, and if anything is too simplified. (d) Adding plasticity (both
activity-dependent (ADP) and neuromodulator-evoked (NMod),
and the interactions between these effects) evokes additional circular
interactions. Even if complete (‘Laplacian’) detail was possible, we
would only have snapshots of constantly evolving dynamic activity.
Once generated from neural events, the higher order mental patterns
and programs have their own subjective qualities and progress, op-
erate and interact by their own causal laws and principles which are
different from and cannot be reduced to those of neurophysiology.
We can stay far below the level of mental events to see this. A sodium
channel has atoms and molecules arranged to allow movement of so-
dium ions underlying an action potential; but an action potential is not
reducible to the sodium channel as it reflects multiple ion channels,
membrane pumps, and the membrane itself, that together generate the
voltage and electrochemical gradients needed for the sodium channel
to work.
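The same point can be run in a standard Hodgkin-Huxley-style simulation, sketched below with the textbook squid-axon parameters (the stimulus value and the simple forward-Euler integration are assumptions of the sketch, not of the chapter). The overshooting spike only appears when sodium and potassium conductances, the leak, and the membrane capacitance act together; remove the sodium conductance and the remaining components no longer produce an action potential.

import numpy as np

# Minimal Hodgkin-Huxley-style sketch (textbook squid-axon parameters).
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3   # uF/cm^2 and mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4           # reversal potentials, mV

def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)
def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))

def peak_voltage(gNa=g_Na, gK=g_K, I_stim=10.0, dt=0.01, t_max=20.0):
    """Peak membrane potential (mV) under a constant current step."""
    V, n, m, h = -65.0, 0.32, 0.05, 0.60
    peak = V
    for _ in range(int(t_max / dt)):
        I_Na = gNa * m**3 * h * (V - E_Na)   # sodium current
        I_K = gK * n**4 * (V - E_K)          # potassium current
        I_L = g_L * (V - E_L)                # leak across the membrane
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        V += dt * (I_stim - I_Na - I_K - I_L) / C_m
        peak = max(peak, V)
    return peak

print("peak V, full model:             ", round(peak_voltage(), 1))        # overshooting spike, well above 0 mV
print("peak V, sodium conductance zero:", round(peak_voltage(gNa=0.0), 1)) # no spike

The code is a sketch of the textbook model, not of any specific preparation; its only role here is to show that the spike is a property of the assembled membrane rather than of the sodium channel alone.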
The reductionist focus on manipulating single system components
in isolation is really only possible in decomposable or “nearly decom-
posable” systems where interactions between parts are minimal, not in
systems where the interactions between parts are many and strong (“non-
decomposable systems”; Simon 1962). Bassett et al. (2010) refer to Simon
(1962) in saying that hierarchical systems are nearly decomposable to claim
parts can be examined relatively independently. But Simon didn’t say that
hierarchical systems are nearly decomposable, just that “At least some
kinds of hierarchic systems can be approximated successfully as nearly
decomposable systems” (Simon 1969, p. 474). Near-decomposability is
an assumption of reductionist approaches, a “fallible” heuristic (Wimsatt
2006), and assumptions need to be considered. Non-decomposability
could reflect a temporary failure of experimental or analytical techniques
or concepts (e.g. Bechtel 2002) that disappears when we have the correct
concepts and details. But is this begging the question to maintain the
assumption of near-decomposability, a “promise of jam tomorrow”? It
is not enough to inductively claim that many major advances in the life
sciences have stemmed from the discovery of ways of decomposing a phe-
nomenon: that there are nearly-decomposable systems in biology does not
mean that this applies to all biological systems; we may need to appreciate
this to find ways of approaching these systems.
Non-decomposable systems have interactions between components
that are many and strong. This seems to define nervous systems where
feedforward, feedback and circular interactions are endemic, a heter-
archical rather than a hierarchical organisation of component parts
performing specific functions in fixed sequences. Heterarchical systems
offer challenges to reductionist approaches. Properties examined in re-
duced preparations or under quiescent conditions can change when the
system is active due to the recruitment of wired and non-wired feedback
loops or activity-dependent changes in component or system properties.
Heterarchical systems also show equifinality (many mechanisms gen-
erate the same output) and multifinality (single components influence
multiple outputs). Functional imaging has shown that instead of a one-
to-one mapping between a region and a function there is a high degree of
overlap among regions that are activated by tasks that share no cognitive
components (Schroeder and Foxe 2005). A given region can thus influ-
ence multiple functions. Price and Friston (2005) claim this will be de-
termined by its connectivity, but this ignores non-wired signals. The lack
of obligatory mechanistic sequences means the law of transitivity does
not hold in heterarchical systems, complicating the spatial and temporal
relations between levels, and meaning that explanations cannot simply
build up from lower-level properties. Noble (2012) provides an example
from the failure to explain the heart rhythm from lower-level properties.
In non-decomposable systems any manipulation, no matter how
“surgical”, will also necessarily affect other components through
diaschisis (Carrera and Tononi 2014), an acknowledged neurological
term that seems less appreciated experimentally. Diaschisis literally
means “shocked throughout” to represent the widespread changes in the
brain caused by a focal lesion. Any manipulation, no matter how elegant
and refined, will necessarily affect downstream (and with feedback, up-
stream) components. Terms like “specific” or “targeted” imply precision
that may be justified from intention (a specific single component was
successfully targeted), but not from application (many components will
be affected). To offer more than a correlation, and more is often implied,
requires knowing how the manipulation alters the system. A manipula-
tion may cause an effect without explaining it.
Reductionist explanations also aim to define individual parameters.
For invertebrates, these can be single, uniquely identifiable cells, but in vertebrates they are cell populations, with characterisation reflecting a population average. Variability is endemic in these populations (Soltesz 2006).
Claude Bernard claimed that averages “confuse while aiming to unify,
and distort when aiming to simplify” (Bernard 1865, pp. 134–135). An
average value may not describe a parameter, especially if the sample size
is low, as the average value may not have been seen in any measure-
ment: would this be considered characterisation of a component? Averaging also assumes that the variability is noise, when it may be a signal or reflect the presence of functional sub-groups within a population (Soltesz 2006). Variability does not rule out conventional reductionist
approaches but requires larger sample sizes to ensure that we properly
characterise components, which for small or inaccessible cells and syn-
aptic connections adds a significant burden.
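As a toy numerical illustration of this point (my own, not from the studies cited): take a hypothetical population whose synaptic strengths fall into two functional sub-groups. The mean lies between the groups and is not close to any recorded value, so "characterising" the component by its average describes no actual cell.

```python
import numpy as np

# Hypothetical synaptic strengths (arbitrary units) drawn from two
# functional sub-groups within one nominal "population".
rng = np.random.default_rng(0)
weak = rng.normal(0.2, 0.02, size=10)     # weak sub-group
strong = rng.normal(1.0, 0.05, size=10)   # strong sub-group
population = np.concatenate([weak, strong])

mean = population.mean()
nearest = np.min(np.abs(population - mean))
print(f"population mean = {mean:.2f}")                 # ~0.6
print(f"closest measurement is {nearest:.2f} away")    # no cell is near the mean
```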
Finally, reductionism assumes substantivalism, that components are
defined by their intrinsic characteristics (e.g. structure, location, or
transmitter content), rather than by their relationship to other compo-
nents. But a transmitter like GABA (or a neuron containing it) is not intrinsically inhibitory; it is defined as such by the receptor it binds to and that receptor's inhibitory effect on the postsynaptic cell. Even then
the functional circuit effect could be excitation through disinhibition.
Highlighting these limits of reductionism does not invite the oppo-
site view prevalent with dichotomous thinking, that lower-level details
are irrelevant. Regions of the brain show specificity in the types of cells they contain (from the 47 areas in Brodmann's map of a century ago to 98 now; Glasser et al. 2016), and their specific anatomy, wiring patterns, and functional properties suggest that these details are important. If
everything could be reduced to Hopfield network representations then
the brain would be a collection of identical units that simply process dif-
ferent types of information and relay it to different areas (an argument
could be made for this arrangement in the cerebellar cortex). This par-
adoxically seems to be the view of the Human Brain Project, despite its
explicit focus on reductionist detail (Markram et al. 2015), that chaining
together multiple simulated cortical columns will produce a fully con-
scious brain emulation. If we want to understand mechanisms, and for
effective and safe translations we should, we need these details.
2 New Tools
Neuroscience tools offer greater control and give more direct results at lower levels or scales, supporting a reductionist focus (compare the analysis of a protein on a gel with an fMRI scan). In practice, the development of theories and instruments is mutually dependent: we develop tools
and techniques to perform specific analyses, and analyses are directed
by the available tools. The instruments and their use should be theory-
neutral: a measurement obtained with a tool will be the same irrespective
of the theory under which it was used. However, how a tool is applied
and a measurement interpreted can be affected by a theory. The power
of newer technologies to focus on biological details can leave behaviour
as an “afterthought” (Krakauer et al. 2017), leading to explanations of
behaviour that fail to separate causation from correlation and thus generate erroneous assumptions about links between lower and higher levels.
New techniques are claimed to have caused “revolutions” in neuro-
science (“In short: understanding tool development is the key to under-
standing real revolutions in actual neuroscience”; Bickle 2016). Peter
Galison’s “Image and Logic” (1997) gives a view of scientific history
dominated by tools: steam-engine technology came before thermody-
namics and telegraphy and telephony came before information theory.
While Einstein, Heisenberg, Schrödinger and Dirac believed that prog-
ress in physics would continue through conceptual insight, experimen-
tal physics in the mid-20th century focused on new tools. Neuroscience
seems to have a similar dichotomy between the need for concepts and the need for techniques and tools.
It is easy to see new tools as revolutionary if a revolution is defined as a
fundamental change in the way things are done. This wouldn’t fit Thomas
Kuhn’s (1962) definition of a scientific revolution, “a noncumulative de-
velopmental episode in which an older paradigm is replaced in whole
or in part by an incompatible new one” (SSR, p. 92). A new technique
may revolutionise how or what experiments are done but would only
cause a scientific revolution if it led to a paradigm shift, not by increasing
precision or allowing novel analyses within a paradigm (this is “normal
science”; Kuhn 1962; Parker 2018). The terms scientific revolution and
paradigm shift are often used erroneously (e.g. Knafo and Wyart 2015),
presumably to emphasise something above “normal science” (seeing this
as a pejorative term also misunderstands its meaning; Parker 2018).
The technological aspect of neuroscience is expressed in the aims of
the US BRAIN (Brain Research Through Advancing Innovative Neu-
rotechnologies®) project, which promises technology “to produce a
revolutionary new dynamic picture of the brain that, for the first time,
shows how individual cells and complex neural circuits interact in both
time and space”, and to find “new ways to treat, cure, and even prevent
brain disorders”. It doesn’t say how it will do this, seemingly beyond
examining more components in greater detail. Understanding the ner-
vous system comes from observation, both of brain and behaviour, and
the manipulation of its component parts either by accident (injuries or
neurological disorders) or experimental design.
Neuroscience research has traditionally been slow, but high-throughput techniques are accelerating analyses (from automated "behavioural" ethoscopes to high-throughput anatomical, molecular and electrophysiological analyses). But while these data give knowledge, knowledge does
not necessarily explain.
In summarising neural circuit analyses, Selverston (1980) said that un-
derstanding would be limited to the simplest systems with the techniques
then available: these were intracellular and extracellular recording and
stimulation, various anatomical approaches combined with cell stains,
electron microscopy, the use of physical lesions, pharmacological agents,
the ability to kill labelled cells with UV light, and initial attempts at im-
aging. The range of tools has now increased markedly (see https://www.
biomedcentral.com/collections/ntfn). For example:
These are all valuable additions to circuit research. But no tool, new
or traditional, is without caveats. The neural imaging field shows the
importance of highlighting caveats. Imaging was developed in the 1970s
(Selverston 1980), but highlighting dissatisfaction with its poor spatial
and temporal resolution led to successive improvements in reporters, mi-
croscopes, and analyses to the point where signals can now be imaged
with millisecond precision in multiple cells (Zhang et al. 2020; although
temporal precision falls as the number of cells imaged increases).
Many new tools offer significant control at the molecular and single-cell
level, making genetic tractability an advantage of a model system. Ge-
netically tractable models (Drosophila, C. elegans, zebrafish, mouse)
now dominate classical model systems (paradoxically, genetically tracta-
ble systems are less amenable to neurophysiological approaches). Instead
of the neuroethological approach of using diverse systems suited to ad-
dressing specific questions, we may be choosing questions to address in
genetically tractable systems, an example of the “law of the instrument”.
Before molecular genetic approaches, single-cell recording techniques also promoted a reductionist explanatory and analytical focus on invertebrate and vertebrate neural circuits (Selverston 1980; Getting 1989; Ito 2006) and on the mechanisms of perception, learning and memory (Kandel 2001; Ito 2006; Bliss and Collingridge 2013).
New techniques often claim to overcome the limitations of previous
techniques, and they often do, but this doesn’t mean that they don’t
have caveats. Take molecular genetic approaches: positive or negative
effects of manipulations using these tools do not unequivocally identify
or eliminate a neuron as part of a circuit given the issues of heterarchical
systems discussed above (Parker 2019). Interpretation of effects, of both
molecular genetic and optogenetic approaches, is complicated by effects
on multiple cell types (“leaky” expression; Allen et al. 2015): ideally, a
single component at a time is affected so any effect can be related to that
component. Claims that molecular genetic approaches could “dissect”
neural circuits (Kiehn and Kullander 2004), language that suggests sur-
gical precision, are negated by the promiscuity of supposedly selective
targeting: Gosgnach et al. (2006) claimed ‘selective ablation’ of one class
of spinal cord neurons but in the same paragraph say that two other
classes were increased and another reduced in number, a poor dissec-
tion. If multiple cells are affected then any explanation has to appeal to
all effects, not beg the question by dismissing unintended effects using
terms like “small increase” or “slight decrease” (Gosgnach et al. 2006).
Specific targeting of neuron classes is possible using combinatorial meth-
ods, for example using the regulatory elements of two neurally expressed
genes (Luan et al. 2020): INTERSECT (intronic recombinase sites enabling combinatorial targeting) utilises multiple recombinases expressed in different cell types to target cell populations more specifically. Op-
togenetic probes can also be activated by activity-dependent immediate
early genes, increasing specificity by limiting targeting to recently active
cells (Guru et al. 2015).
There are issues even if a single population of cells is targeted. One issue is that such targeting does not account for population variability. A knock-out or
optogenetic manipulation will affect all members of the population,
essentially averaging the effect. Whether this is acceptable or not de-
pends on the view of variability: if it is considered irrelevant or noise
then it doesn’t matter, but if variability is functionally relevant then it
does. In addition, genetic manipulations can result in adaptive changes
within minutes (i.e. not prevented by conditional knock-outs; Frank
et al. 2006). Homeostatic plasticity may lead to compensation that re-
sults in no apparent phenotype despite a key component being affected,
while diaschisis will add to the effect of the intended manipulation and
complicate interpretations of what caused a change (Parker 2019). Op-
togenetic stimulation can also evoke effects in the absence of optogenetic
proteins, possibly reflecting light-induced temperature changes (Allen
et al. 2015). These artefacts can presumably be controlled for (if they
are admitted as caveats; issues are not always highlighted as clearly as in
Allen et al. 2015); but diaschisis cannot be removed as it reflects normal
features of nervous systems, which means in this case that the potential
contribution of these factors to any explanation needs to be considered.
A connectomic map is a very desirable feature, and knowledge of
structure is important (ephaptic communication in the escape circuit of
goldfish is an example of a structural feature that explained a functional
phenomenon; Weiss and Faber 2010). But function must be determined; it does not simply drop out of structure, despite claims that structure "may
enable predictions of circuit behaviour” (Lichtman and Sanes 2008,
p. 349) and that structure “will signify a physiological process without
the requirement of repeating the physiological analysis” (Morgan and
Lichtman 2013, p. 496). Evidence from many systems over many years
suggests a very limited ability to infer function from structure. This is
evidenced by C. elegans, where a complete connectome has been avail-
able for over 30 years but we still lack insight into how the circuit works,
and the cerebellar cortex where the organisation has been known for de-
cades without insight into how the cerebellum does what it does, or even
what it is doing. This was very elegantly shown by the marked change
in output of a circuit of just two neurons depending on the functional
properties of the connections between them (Elson et al. 2002).
Although connectomic analyses can be automated, they still require significant effort. A recent connectome of a portion of the Drosophila brain (around 25,000 neurons) took two years and hundreds of thousands of person-hours, on a paper with around 100 authors (Scheffer et al. 2020). Scaling this up to larger systems may require disproportionately more time. Connectomic detail is desirable, but given the limited functional insight, is the time investment in detailed maps useful? Obtaining the data for 1 mm³ of visual cortex took six months (Landhuis 2020); given that the visual cortex occupies around 6 cm³, the time needed for a complete connectomic map is biblical. A less detailed structure combined with functional concepts may be a better approach.
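A back-of-the-envelope check of that scaling, using only the figures quoted above (my own arithmetic, not from Landhuis 2020), makes the point concrete:

```python
# Rough scaling of connectomic data acquisition from the figures in the text:
# ~6 months of data acquisition per 1 mm^3, and ~6 cm^3 of visual cortex.
months_per_mm3 = 6
visual_cortex_mm3 = 6 * 1000            # 6 cm^3 = 6,000 mm^3
total_months = months_per_mm3 * visual_cortex_mm3
print(total_months / 12, "years")       # ~3,000 years at the quoted rate
```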
3 Conclusion
Technological advances are needed to address the astronomical com-
plexity of even modestly sized networks. But conceptual advances are
needed to direct analyses and tool development; we need to know what
we need to know, and how to achieve it. Neuroscience has an enor-
mous amount of data on molecular, cellular, synaptic, and developmen-
tal mechanisms, and cognition and behaviour, but we lack a coherent
framework to link between these aspects. It isn’t that we need, or should
expect, a unified theory of brain function, just some idea of how to in-
tegrate between levels.
Complete reductionist detail of even modest circuits is not possible
with current techniques, it may never be, and may not be needed. Vari-
ous people have highlighted different explanatory levels (Ryle, Newell, Marr, Dennett; see above): the semantic or computational, the programme or algorithmic, and the reductionist physical or implementational. Non-reductionist accounts of nervous systems include Lashley's equipotentiality hypothesis, Pribram's holonomic brain theory, and, arguably most successfully in linking across levels, cybernetic control principles (Pickering 2010). For explanation, appeal to a computational or algorithmic level is reasonable, but is it enough for interventions? You may know the 'computations' that a car engine performs, but you would still want a mechanic who knows about the details of engines to fix one. Neuroscience may need to introduce something new, just as electromagnetic charge and electromagnetic forces were introduced as new fundamental physical properties to explain electromagnetism. While reductionism can
appeal to successes, we shouldn't assume its eventual success. A key issue is that although biological components operate according to physical laws, they reflect organisation over various spatial and temporal scales rather than behaving as independent parts (field effects may be the obvious example), not forgetting that the nervous system is in a body embedded in an environment.
Critiques of reductionist approaches in neuroscience point to their failure to explain, but current failure doesn't necessarily mean that the approach is misguided. The Rosetta Stone was useful before it was decoded
(and it wasn’t decoded by having more of it or a reductionist analysis
of the stone). The same could apply to the relevance of lower-level (or
other) properties. For the nervous system, we are still in the early days
of understanding a complex system. We can continue to use statistical
approaches to explain higher-level phenomena from the mass action of
identical parts or abstractions that focus on representations or computa-
tions, while also considering the details in the diversity of brain regions,
neurons and synapses.
Even techniques that promise to examine multiple components (e.g.
the US BRAIN project; Mott et al. 2018) still examine components, and
there seems to be a drive to record from ever more cells. A recent review
states that we should now shift from considering individual neurons to
considering ensembles of neurons as the functional units of the nervous
system, and that new technologies will provide a greater understand-
ing of the link between the brain and behaviour (Yuste 2015), without
saying what links are needed and how to develop them. We are already
data rich but theory light, and assuming more components will give un-
derstanding is an illusion (Ylikoski 2009). Analyses at higher levels can
generate plausible inferences, but plausible is not necessarily correct.
Neuroscience papers increasingly use multiple methods, a 'methodological decathlon', seemingly to emphasise the importance of the work (Krakauer et al. 2017). Using different methods can produce more robust findings if the results of each are in agreement, and it avoids the danger of being biased towards certain factors and blind to others. But the techniques should genuinely address questions rather than simply be "thrown" at a system. The prestige journals seem especially prone to multiple
techniques. There is a danger of multiple pieces of neuroscience informa-
tion seeming to make explanations more satisfying (Weisberg et al. 2008;
McCabe and Castel 2008), and thus interfering with proper critique.
Unless halted we may regret the move away from the traditional neu-
rophysiological/neuroethological approaches that explicitly address the
neurophysiology of natural behaviours in various model systems (ad-
dressing some of the concerns raised in Krakauer et al. 2017): we will lose not only the insight these approaches can still give, but also the insight they have already brought.
This has always partly reflected a split between invertebrate and verte-
brate/mammalian communities (after moving to a vertebrate lab for a
post-doc after a PhD on insects I heard that neuroethology was “people
doing weird things with insects”), and now reflects the appeal to geneti-
cally tractable systems.
Instead of philosophical and scientific competition over the merits of lower-level reductionist or higher-level computational or representational accounts, these should be seen as different approaches to the same question, not only as an epistemological diversity but also as an ontological unity. As Simon (1962) said, "In the face of complexity, an in-principle
reductionist may be at the same time a pragmatic holist”. We can say the
same about tools and ideas. The history of neuroscience shows we need
both. We need to discuss concepts to identify what we actually need
to do and how to use and develop tools to do this. An important, and seemingly relatively simple and non-technological, start is to stop allowing trivial claims to characterisation and understanding by prominent figures (Glanzman 2010; Parker 2006, 2019).
References
Allen, B. D., Singer, A. C., & Boyden, E. S. (2015). Principles of designing
interpretable optogenetic behavior experiments. Learning & Memory, 22,
232–238.
Araque, A., & Navarrete, M. (2010). Glial cells in neuronal network function.
Philosophical Transactions of the Royal Society London B: Biological Sci-
ences, 365, 2375–2381.
Bassett, D. S., & Gazzaniga, M. (2011). Understanding complexity in the hu-
man brain. Trends in Cognitive Science, 15, 200–209.
Bassett, D. S., Greenfield, D. L., Meyer-Lindenberg, A., Weinberger, D. R.,
Moore, S. W., & Bullmore, E. T. (2010). Efficient physical embedding of to-
pologically complex information processing networks in brains and computer
circuits. PLoS Computational Biology, 6(4), e1000748.
Bechtel, W. (2002). Decomposing the brain: a long-term pursuit. Brain and
Mind, 3, 229–242.
Bernard, C. (1865). An Introduction to the Study of Experimental Medicine
(H. C. Greene (1949), Trans.). New York: Henry Schuman Inc.
Bickle, J. (2003). Philosophy and Neuroscience: A Ruthlessly Reductive Ac-
count. Dordrecht: Springer.
Bickle, J. (2016). Revolutions in neuroscience: tool development. Frontiers in
Systems Neuroscience, 10, 24.
Blackmore, S. (2006). Conversations on Consciousness: What the Best Minds
Think about the Brain, Free Will, and What It Means to Be Human. New
York: Oxford University Press.
Bliss, T. V., & Collingridge, G. L. (2013). Expression of NMDA receptor-
dependent LTP in the hippocampus: bridging the divide. Molecular Brain, 6, 5.
Carrera, E., & Tononi, G. (2014). Diaschisis: past, present, future. Brain, 137,
2408–2422.
Darden, L., & Craver, C. (2009). Reductionism in biology, Encyclopedia of Life
Sciences. Chichester: J. Wiley and Sons Ltd.
Dennett, D. C. (1971). Intentional systems. The Journal of Philosophy, 68,
87–106.
de Regt, H. W. (2013). Understanding and explanation: living apart together?
Studies in History and Philosophy of Science Part A, 44(3), 505–509.
Elson, R. C., Selverston, A. I., Abarbanel, H. D. I., & Rabinovich, M. I. (2002).
Inhibitory synchronization of bursting in biological neurons: dependence on
synaptic time constant. The Journal of Neurophysiology 88(3), 1166–1176.
Frank, C., Kennedy, M., Goold, C., Marek, K., & Davis, G. (2006). Mecha-
nisms underlying rapid induction and sustained expression of synaptic ho-
meostasis. Neuron, 52, 663–677.
Galison, P. (1997). Image and Logic: A Material Culture of Microphysics.
Chicago, IL: University of Chicago Press.
Getting, P. (1989). Emerging principles governing the operation of neural net-
works. Annual Review of Neuroscience, 12, 185–204.
Glanzman, D. L. (2010). Common mechanisms of synaptic plasticity in verte-
brates and invertebrates. Current Biology, 20(1), R31–R36.
Glass, A. L., Holyoak, K. J., & Santa, J. L. (1979). Cognition. Reading, MA:
Addison-Wesley Publishing Company.
Glasser, M. F., Coalson, T. S., Robinson, E. C., Hacker, C. D., Harwell, J.,
Yacoub, E., Ugurbil, K., Andersson, J., Beckmann, C. F., Jenkinson, M.,
Smith, S. M., Van Essen, D. C. (2016). A multi-modal parcellation of human
cerebral cortex. Nature, 536, 171–178.
Gosgnach, S., Lanuza, G. M., Butt, S. J. B., Saueressig, H., Zhang, Y., Vel-
asquez, T., Riethmacher, D., Callaway, E. M., Kiehn, O., & Goulding, M.
(2006). V1 spinal neurons regulate the speed of vertebrate locomotor outputs.
Nature, 440, 215–219.
Greenberg, I., & Manor, Y. (2005). Synaptic depression in conjunction with
A-current channels promote phase constancy in a rhythmic network. The
Journal of Neurophysiology, 93, 656–677.
Grimm, S. R. (2006). Is understanding a species of knowledge? The British
Journal for the Philosophy of Science, 57, 515–535.
Guru, A., Post, R. J., Ho, Y.-Y., & Warden, M. R. (2015). Making sense of
optogenetics. The International Journal of Neuropsychopharmacology, 18,
pyv079.
Hanahan, D., & Weinberg R. A. (2000). The hallmarks of cancer. Cell, 100,
57–70.
Hartline, H. K., Wagner, H. G., & Ratliff, F. (1956). Inhibition in the eye of
limulus. The Journal of General Physiology, 39, 651–673.
Horgan, J. (1999). The Undiscovered Mind. London: Widenfeld and Nicholson.
Howard-Jones, P. A. (2007) Neuroscience and Education: Issues and Oppor-
tunities, TLRP Commentary. London: Teaching and Learning Research
Programme.
Ito, M. (2006). Cerebellar circuitry as a neuronal machine. Progress in Neuro-
biology, 78(3), 272–303.
Jonas, E., & Kording, K. P. (2017). Could a neuroscientist understand a micro-
processor? PLoS Computational Biology, 13(1), e1005268.
Jørgensen, C. B. (2001). August Krogh and Claude Bernard on basic principles
in experimental physiology. BioScience, 51(1), 59–61.
Kandel, E. (2001). The molecular biology of memory storage: a dialogue be-
tween genes and synapses. Science, 294, 1030–1038.
Kiehn, O., & Kullander, K. (2004). Central pattern generators deciphered by
molecular genetics. Neuron, 41, 317–321.
Koch, C. (2012). Modular biological complexity. Science, 337, 531–532.
Knafo, S., & Wyart, C. (2015). Optogenetic neuromodulation: new tools for
monitoring and breaking neural circuits. Annals of Physical and Rehabilita-
tion Medicine, 58, 259–264.
Krakauer, J. W., Ghazanfar, A. A., Gomez-Marin, A., MacIver, M. A., & Poep-
pel, D. (2017). Neuroscience needs behavior: correcting a reductionist bias.
Neuron, 93(3), 480–490.
Kuhn, T. (1962). The Structure of Scientific Revolutions, 1st edn. Chicago, IL:
University of Chicago Press.
Landhuis, E. (2020). Probing fine-scale connections in the brain. Nature, 586,
631–633.
Lichtman, J., & Sanes, J. (2008). Ome sweet ome: what can the genome tell us
about the connectome? Current Opinion in Neurobiology, 18, 346–353.
Luan, H., Kuzin, A., Odenwald, W. F., & White, B. H. (2020). Cre-assisted
fine-mapping of neural circuits using orthogonal split inteins. eLife, 9, e53041.
Machamer, P., Darden, L., & Craver, C. F. (2000). Thinking about mecha-
nisms. Philosophy of Science, 67, 1–25.
Markram, H., Muller, E., Ramaswamy, S., Reimann, M. W., Abdellah, M.,
Sanchez, C. A., … & Schurmann, F. (2015). Reconstruction and simulation
of neocortical microcircuitry. Cell, 163, 456–492.
Marr, D. (1982). Vision: A Computational Investigation into the Human Rep-
resentation of Visual Information. New York: W.H. Freeman & Company.
McCabe, D. P., & Castel, A. D. (2008). Seeing is believing: the effect of brain
images on judgments of scientific reasoning. Cognition, 107(1), 343–352.
Middleton, H., & Moncrieff, J. (2019). Critical psychiatry: a brief overview. BJ
Psychiatry Advances, 25, 47–54.
Miller, G. (2011). Blue brain founder responds to critics, clarifies his goals. Sci-
ence, 334, 748–749.
Monod, J. (1972). Chance and Necessity: An Essay on the Natural Philosophy
of Modern Biology. New York: Vintage.
Morgan, J. L., & Lichtman, J. W. (2013). Why not connectomics? Nature Meth-
ods, 10, 494–500.
Morris, R. G. M. (2003). Long-term potentiation and memory. Philosophical
Transactions of the Royal Society of London. Series B: Biological Sciences,
358(1432), 643–647.
Mott, M. C., Gordon, J. A., & Koroshetz, W. J. (2018). The NIH BRAIN ini-
tiative: advancing neurotechnologies, integrating disciplines. PLoS Biology,
16(11), e3000066.
Nardone, A., & Schieppati, M. (2008). Inhibitory effect of the Jendrassik ma-
neuver on the stretch reflex. Neuroscience, 156(3), 607–617.
Newell, A. (1982). The knowledge level. Artificial Intelligence, 18, 87–127.
Nicholson, D. J. (2019). Is the cell really a machine? The Journal of Theoretical
Biology, 477, 108–126.
Noble, D. (2012). A theory of biological relativity: no privileged level of
causation. Interface Focus, 2, 55–64.
Østby, I., Øyehaug, L., Einevoll, G., Nagelhus, E., Plahte, E., Zeuthen, T., Lloyd,
C., Ottersen, O., & Omholt, S. (2009). Astrocytic mechanisms explaining
neural-activity-induced shrinkage of extraneuronal space. PLoS Computa-
tional Biology, 5, e1000272.
Parker, D. (2006). Complexities and uncertainties of neuronal network func-
tion. Philosophical Transactions of the Royal Society B: Biological Sciences,
361, 81–99.
Parker, D. (2010). Neuronal network analyses: premises, promises and uncer-
tainties. Philosophical Transactions of the Royal Society London B: Biologi-
cal Sciences, 365, 2315–2328.
Parker, D. (2018). Kuhnian revolutions in neuroscience: the role of tool develop-
ment. Biology and Philosophy, 33, 17.
Parker, D. (2019). Psychoneural reduction: a perspective from neural circuits.
Biology & Philosophy, 34(4), 44. doi:10.1007/s10539-019-9697-8.
Parker, D., & Srivastava, V. (2013). Dynamic systems approaches and levels of
analysis in the nervous system. Frontiers in Physiology, 4, 15.
Pickering, A. (2010). The Cybernetic Brain: Sketches of Another Future. Chi-
cago, IL: University of Chicago.
Price, C. J., & Friston, K. J. (2005). Functional ontologies for cognition: the
systematic definition of structure and function. Cognitive Neuropsychology,
22, 262–275.
Railton, P. (1981). Probability, explanation, and information. Synthese, 48,
233–256.
Ryle, G. (1949). The Concept of Mind. Chicago, IL: University of Chicago Press.
Sagan, C. (1977). Dragons of Eden: Speculations on the Evolution of Human
Intelligence. New York: Random House.
Scheffer, L. K., Xu, C. S., Januszewski, M., Lu, Z., Takemura, S.-Y., Hayworth,
K. J., Huang, G. B., … & Plaza, S. M. (2020). A connectome and analysis of
the adult Drosophila central brain. eLife, 9, e57443.
Schroeder, C., & Foxe, J. (2005). Multisensory contributions to low-level, 'uni-
sensory' processing. Current Opinion in Neurobiology, 15(4), 454–458.
Schroter, M., Paulsen, O., & Bullmore, E. T. (2017). Micro-connectomics:
probing the organization of neuronal networks at the cellular scale. Nature
Reviews Neuroscience, 18, 131.
Selverston, A. (1980). Are central pattern generators understandable? Behav-
ioral Brain Sciences, 3, 535–571.
Sherrington, C. S. (1899). On the relation between structure and function as ex-
amined in the arm. Transactions of the Liverpool Biological Society, 13, 1–20.
Sherrington, C. S. (1951). Man on His Nature: The Gifford Lectures, Edin-
burgh, 1937–8. Cambridge: Cambridge University Press.
Simon, H. (1962). The architecture of complexity. Proceedings of the American
Philosophical Society, 106, 467–482.
Simon, H. (1969). The Sciences of the Artificial (3rd ed.). Cambridge, MA: The MIT Press.
Soltesz, I. (2006). Diversity in the Neuronal Machine. New York: Oxford Uni-
versity Press.
Sperry, R. W. (1980). Mind-brain interaction: mentalism, yes; dualism, no.
Neuroscience. 5, 195–206.
Steinmetz, N. A., Aydin, C., Lebedeva, A., Okun, M., Pachitariu, M., Bauza,
M., Beau, M., … & Harris, T. D. (2021). Neuropixels 2.0: a miniaturized
high-density probe for stable, long-term brain recordings. Science, 372,
eabf4588.
Stent, G. (1969). The Coming of the Golden Age. Garden City, NY: Natural
History Press.
Steward, O., Popovich, P. G., Dietrich, W. D., & Kleitman, N. (2012). Replica-
tion and reproducibility in spinal cord injury research. Experimental Neurol-
ogy, 233(2), 597–605.
Svensson, E., Aspergis-Schoute, J., Burnstock, G., Nusbaum, M., Parker, D., &
Schioth, H. (2019). General principles of neuronal co-transmission: insights
from multiple model systems. Frontiers in Neural Circuits, 12, 117.
Thomson, H. (2010). Mental muscle: six ways to boost your brain. https://
www.newscientist.com/article/mg20827801-300-mental-muscle-six-ways-
to-boost-your-brain/#ixzz67nP7rZgk.
Warner, R. (2001). Microelectronics: its unusual origin and personality. IEEE
Transactions on Electron Devices, 48, 2457–2467.
Weisberg, D., Keil, F., Goodstein, J., Rawson, E., & Gray, J. (2008). The seduc-
tive allure of neuroscience explanations. The Journal of Cognitive Neurosci-
ence, 20, 470–477.
Weiss, S., & Faber, D. (2010). Field effects in the CNS play a functional role.
Frontiers in Neural Circuits, 4, 1–10.
Wimsatt, W. C. (2006). Reductionism and its heuristics: making methodologi-
cal reductionism honest. Synthese, 151, 445–475.
Woodward, J. (2017). Explanation in Neurobiology: An Interventionist Per-
spective. In D. Kaplan (Ed.), Explanation and Integration in Mind and Brain
Science (pp. 70–100). Oxford: Oxford University Press.
Ylikoski, P. K. (2009). The Illusion of Depth of Understanding in Science. In H. W. de Regt, S. Leonelli, & K. Eigner (Eds.), Scientific Understanding: Philosophical Perspectives (pp. 100–119). Pittsburgh, PA: University of Pittsburgh Press.
Yuste, R. (2015). From the neuron doctrine to neural networks. Nature Reviews Neuroscience, 16, 487–497.
Zhang, Y., Rózsa, M., Bushey, D., Zheng, J., Reep, D., Liang, Y., Broussard, G. J., … & Looger, L. L. (2020). jGCaMP8 fast genetically encoded calcium indicators. Janelia Research Campus. Online resource. doi:10.25378/janelia.13148243.v4
12 Cognitive Ontologies,
Task Ontologies, and
Explanation in Cognitive
Neuroscience
Daniel C. Burnston
1 Introduction
The development of new scientific tools provides opportunities for prog-
ress, but also gives scientists reason to reinvestigate, reconsider, and
maybe revise their assumptions about the domain under investigation. In
cognitive neuroscience, this has manifested in the debate over “cognitive
ontology” – that is, the set of mental functions or faculties investigated
by the neurosciences. Psychology comes equipped with a series of intu-
itive mental categories – perception, cognition, memory, imagination,
verbal reasoning, emotion, etc. Cognitive neuroscience has traditionally
proceeded under the assumption that these, or some suitably explicated
set of these, will be realized in the processes that neuroscientists investi-
gate. For better or worse, however, this assumption sits poorly with the
current evidence, which indicates the massive multifunctionality of indi-
vidual parts of the brain, the wide distribution of activity corresponding
to intuitive mental categories, and the importance of global network and
ecological context in determining what an individual part of the brain
does. These data stress, and perhaps break, the “new phrenological”
(Uttal, 2001) approach to cognitive and systems neuroscience, invali-
dating cherished means of analysis such as subtractive methodology and
reverse inference.
One powerful thought in the field is that part of the problem is our
intuitive conception of psychology. Indeed, Poldrack once said that “the
fundamental problem is our stone age psychological ontology” (Bunzl,
Hanson, & Poldrack, 2010, p. 54). Perhaps the standard mentalistic cat-
egories used in the psychological sciences are just too simple, too general,
and too crude to capture how the brain implements behavior. Perhaps
those categories need to be revised, refined, or even abandoned to under-
stand brain function. A host of questions immediately arises, however,
surrounding how committed we should remain to our standard list of
cognitive kinds. Do they successfully describe brain function, only at a
network level? Could we discover discrete implementations of kinds if
they are suitably amended, for instance by subdividing them into more
specific kinds? Or should they just be gotten rid of, resulting in a view
of the brain on which it is “unanalyzable” (Uttal, 2001) into distinct
functions, where its function is “protean” and lacking generalizability
(Hutto, Peeters, & Segundo-Ortin, 2017)?
Theorists interested in cognitive ontology are thus facing a conun-
drum that is both methodological and ontological. What cognitive cat-
egories are realized in the brain cannot be determined independently of
our methods of investigation. But in cognitive neuroscience, those meth-
ods traditionally employ those categories as basic assumptions – i.e.,
they are what is being investigated, and thereby constrain the interpre-
tation of otherwise inscrutable brain data. Theorists have begun to use
formal tools from databasing, machine learning, and meta-analysis as a
way of addressing this problem. The hope is that the use of these tools
can turn the issue of cognitive ontology into a problem for data science,
rather than metaphysics. By analyzing large numbers of studies using an
agreed-upon, publicly shareable taxonomy of cognitive function, neuro-
scientists hope to be able to discover the ways in which cognitive catego-
ries relate to brain activation, and thereby provide a groundwork for the
substantiation, revision, or abandonment of those categories.
I refer to these projects collectively as “databasing and brain map-
ping” projects, and in this paper, I assess their status and prospects. Ul-
timately, I will argue that the problem is not so much with our intuitive
mental ontology per se, but with the standard explanatory framework
assumed by the cognitive neurosciences. The standard framework as-
sumes that categories of mental function are explanatory kinds, and
that cognitive neuroscience proceeds by showing how these explanatory
categories are instantiated in brain activity. As such, the standard frame-
work is committed to there being an ultimate taxonomy of distinct and
discretely realized cognitive kinds, whose instantiation in the brain caus-
ally explains behavior. I will argue that databasing and brain-mapping
projects, rather than substantiating this standard framework, should
inspire us to abandon it. Instead, I advocate an alternative view of neu-
roscientific explanation on which what explains are ways in which brain
systems organize to implement the informational demands of a particu-
lar task or context (Burnston, 2016, 2021). On this alternative, the best
reading of the role of psychological constructs is as heuristics for inves-
tigation, rather than as explanatory kinds (cf. Feest, 2010).
My aims are both descriptive and normative. I both believe that this
is what successful neuroscientific explanation does look like, and that
it is how we should think about it. This comes along with a variety of
methodological prescriptions, including a plea for increased focus on task, rather than cognitive, ontologies. I hope to clarify the potential
advantages and pitfalls of using formal analytical tools in the cognitive
ontology debate along the way.
I proceed as follows. In Section 2, I introduce the standard explana-
tory framework of cognitive neuroscience, articulate its commitments,
and discuss methodological and empirical problems for the framework
present in the literature. Then, in Section 3, I outline some of the formal
tools that have been applied to the problem, and in Section 4 show that
no clear consensus has emerged on how results employing these methods
are supposed to relate to the standard framework. In Section 5 I out-
line my preferred approach to understanding the role of psychological
constructs in neuroscientific explanation, and in Section 6 show how
this approach offers normative prescriptions distinct from those of the standard framework. Section 7 concludes.
These results are perfectly interesting in their own right, in that they
quantify the “specificity” with which our intuitive cognitive concepts
interact with brain activation. It is just that, on their face, they are in
conflict with the standard model because they show multifunctionality
and distribution rather than univocal relationships between psychologi-
cal constructs and activation. The question is what to do in response to
these results with regard to the standard framework.
Let me stress that I am reconstructing positions here – I think each of
the options I am about to articulate is present, to some degree, across
papers and theorists within the field. As far as I can tell, there are three
options with regard to the standard framework. First, one could attempt
to substantiate the framework by pursuing more fine-grained analyses
in an attempt to discover more and more specific activation patterns for
particular cognitive concepts, perhaps further leading to decomposition
and causal explanation. Second, one could attempt to use the analyses to
revise our cognitive ontology. On this view, it might be the case that the
standard framework can be maintained, but only after the appropriate
revisions to the ontology. Third, one might use these results as motiva-
tion to abandon the standard framework altogether and opt for some
other kind of project. In the next section, I outline each of these perspec-
tives, along with examples from the literature which might suggest them,
and give reasons to question them. This will motivate my own proposal
about psychological constructs in Section 5.
4.1 Substantiate?
One view one could take towards the standard framework is that it is
basically right, and our ontology is basically right, but that the results of
multifunctionality and distribution are due to insufficiently fine-grained
measurement. The solution, if this is one’s perspective, would be to re-
fine analyses so that the “true” and univocal associations between psy-
chological constructs and brain activation can be uncovered.
I think that this is the position that is least strongly considered in the
literature, but there are a few trends that suggest it. Indeed, over the last few years the literature has moved towards more fine-grained analysis and the search for increased specificity.
Poldrack and Yarkoni (2016) thus describe the project as one of “quan-
tifying the true specificity of hypothesized structure-function associa-
tions” (p. 589). This, one assumes, means that they indeed take there
to be relations there to be discovered, further indicating realism about
psychological kinds. Moreover, recent projects take the goal of brain
mapping projects to be enabling both forward and reverse inference –
that is, the predictive ability of brain mapping models between con-
structs and activation patterns should be bidirectional. This, to me at
least, further suggests a belief in the importance of the instantiation
relation. Finally, there is how these projects are qualitatively described.
For instance, Varoquaux et al. (2018) suggest that one of the goals of
mapping projects is “precisely describing the function of any given brain
region” (2018, p. 1).
Varoquaux et al. performed a decoding analysis using a hierarchical
general linear model (GLM) framework. In particular, their reverse in-
ference required multiple layers of linear regressions on activity in the
brain. The first layer was tuned to individual oppositions between task
conditions. Then, a second layer used another regression that compared
each cognitive term to all others, predicting which term was overall most
relevant. They compared the results of this decoder to other approaches,
showing that it resulted in sharper divisions between distinct functions.
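To make the two-layer idea concrete, here is a deliberately minimal sketch (my own toy construction, not Varoquaux et al.'s pipeline; the random data, array shapes, and scikit-learn estimators are all assumptions). A first linear layer is fit to oppositions between task conditions, and a second one-vs-rest regression over its outputs predicts the most relevant cognitive term:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, RidgeClassifier
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
n_maps, n_voxels, n_contrasts, n_terms = 200, 500, 12, 6

# Toy stand-ins: activation maps, their task-contrast labels, and the
# cognitive term annotating each map (all hypothetical).
X = rng.normal(size=(n_maps, n_voxels))
contrast_labels = rng.integers(0, n_contrasts, size=n_maps)
term_labels = rng.integers(0, n_terms, size=n_maps)

# Layer 1: a linear model tuned to oppositions between task conditions.
layer1 = RidgeClassifier().fit(X, contrast_labels)
contrast_scores = layer1.decision_function(X)   # one score per contrast

# Layer 2: a second regression compares each cognitive term against all
# others on the layer-1 scores, predicting the most relevant term.
layer2 = OneVsRestClassifier(LogisticRegression(max_iter=1000))
layer2.fit(contrast_scores, term_labels)
print(layer2.predict(contrast_scores[:5]))
```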
There are a few things to be said here, however. First, this study
measured terms in the Cognitive Paradigm Ontology rather than the
Brain Map or the Cognitive Atlas, and these terms more directly de-
scribe task conditions (e.g. “response with left hand”) than psycholog-
ical constructs (e.g., “motor control”). Second, they focused primarily
on perceptual and motor areas for which there are already more-or-less
well-understood general function ascriptions. Finally, even these results
showed distributed and interdigitated functional populations, with, for
instance, “face” and “place” areas being more or less separated, but
each involving multiple subpopulations distinct from each other.
Another recent approach to bidirectional decoding is from Rubin et al. (2017), which employs latent Dirichlet allocation (LDA) on over 11,000 articles from Neurosynth. They start by noting that previous studies show mainly wide patterns of activation for particular constructs, and thus are no help in finding "relatively simple, well-defined functional-anatomical atoms." To overcome this, they performed an LDA analysis constrained both by the semantics of the terms and by groupings in spatial coordinates. They report that, not only were they able to uncover topics with relatively clear functional upshot (e.g., topics related to "emotion"), but that each topic "is as-
sociated with a single brain region.” At first, this sounds a lot like the
explanatory aim of the standard framework – i.e., to find a constrained
localization corresponding to each psychological function.
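For readers unfamiliar with topic modeling, the sketch below shows only the generic text-side step, on made-up "abstracts" (plain LDA via scikit-learn; the joint spatial constraint that Rubin et al. add is not represented, and all data here are hypothetical):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical stand-ins for article abstracts.
abstracts = [
    "fear amygdala emotion arousal response",
    "emotion amygdala negative affect fear",
    "memory hippocampus encoding recall retrieval",
    "working memory maintenance prefrontal delay",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(abstracts)          # document-term matrix

# Each topic is a distribution over words; each document a mixture of topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

vocab = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_words = [vocab[i] for i in topic.argsort()[-3:]]
    print(f"topic {k}: {top_words}")
```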
A closer reading questions this analysis, however. As the researchers
note, the probabilistic nature of the model suggests that the decoding
analysis uncovers the construct most likely associated with a given area,
but not the only one. This is further illustrated by the fact that individual
topics were allowed to spatially overlap in the model, and many mul-
tifunctional areas did indeed show significant overlap between related
topics. Isolating individual topics in many cases required further conditioning on spatial coordinates, again suggesting distribution of function. So, while the results in this model are
predictive at a very specific construct-spatial level, it is not clear that
this reflects the reality of the system. And this is noted explicitly by the
researchers. It is worth quoting them in full:
So, while one trend in the literature is to look for increasingly specific
relationships between extant psychological constructs and patterns of
activation, it is not clear that even successful results in this endeavor sub-
stantiate, or should be read as attempting to substantiate, the standard
framework.
4.2 Revise?
The idea that the databasing and brain mapping project can help us re-
vise our cognitive ontology is extremely common. For example, Poldrack
and Yarkoni (2016) suggest that “formal cognitive ontologies [are use-
ful] in helping to clarify, refine, and test theories of brain and cognitive
function” (p. 587), and that “biological discoveries can and should in-
form the continual revision of psychological theories” (p. 599).
These quotes suggest that, ultimately, the role of the databasing and
brain mapping project will be in helping us to explicate our mental on-
tology. Sometimes, this is pitched in terms of a discovery science – we
should let the brain tell us what its functional categories are, and revise
our ontology accordingly (Poldrack et al., 2012). In this section, I sug-
gest two related problems for this view. The first is the interpretability
problem, and the second is the seeding problem. In general, however, the
issue is this: without a rubric for how and when to revise our mental cat-
egories in light of brain mapping data, we lack the ability to use results
from brain mapping to revise the ontology in any specific way. This sug-
gests that metaphysical commitments about the nature of mental states
precede, rather than being compelled by, brain mapping data.
The interpretability problem is akin to a problem discussed by Carl-
son et al. (2018; cf. Ritchie, Kaplan, & Klein, 2016; Weiskopf, 2021)
for uncovering neural representations via machine learning techniques.
They argue that, given a particular ability to decode some stimulus from
neural activity, it is unclear how to interpret that result in terms of rep-
resentational content. The worry, I take it, is that the ability to decode
a stimulus from an activation does not mean that the activity represents
the stimulus under anything like the way we would describe it. The an-
alog problem here is that simply showing that a pattern of activity in
the brain is specific to, say, decision-making (or shows a high loading on a topic that happens to comprise words we associate with decision-
making), doesn’t give us any indication of whether the pattern of activity
is in fact performing something we would call “decision-making.” The
more distribution and overlap uncovered in the analysis, the more exac-
erbated this problem becomes, because of the inferential freedom dis-
cussed in Section 2.
So, given the association of a pattern of activity with a mental con-
struct, should we take that construct as substantiated, as in need of ex-
plication, or what? What degree of correlation/predictability, or what
degree of specificity, is required to count the kind as substantiated, and at
what point should we consider it in need of revision? The brain mapping
results themselves provide no rubric for how to make these decisions.
The seeding problem is related to the interpretability problem and is
based on the fact that even constructing the analyses requires adher-
ing, to an unspecified degree, to our extant cognitive constructs. In an
analysis based on Brain Map or the Cognitive Atlas, one only considers
concepts that are a current part of our mental ontology. This presumes
that the basic structure of the brain corresponds closely enough to those
categories in order for them to be useful in understanding the brain. But
what justifies this assumption? In principle, a specific-enough correla-
tion between mental constructs and brain activity might justify the as-
sumption, but it is precisely a lack of specificity of this type that prompts
the idea of ontology revision.
In topic-modeling analyses, the topics that are often focused upon are
the ones that one can intuitively or statistically pair with an already-known
mental construct. Varoquaux et al., for example, advertise that 100 of
the 200 topics in the model correspond to well-understood mental con-
structs. What about the other ones, however? Even given a substantia-
tion of some of our mental categories, what the analysis would suggest
is that our ontology is at least impoverished, and it does not come with
any prescriptions for what to say in these other cases.
Again, the point of this is not to discount the analysis. The point is just to deny that the analysis on its own offers us any principled
way of revising our cognitive ontology. Put differently, the principles for
ontology revision cannot be uncovered bottom-up from these analyses.
Metaphysical commitments must be undertaken in constructing and in-
terpreting the analyses themselves. Again, theorists in the field recognize
this problem. Poldrack and Yarkoni (2016), for instance, note that there
is “no algorithmic way” to approach ontology revision in light of specific
mapping results. They seem to suggest, however, that more analysis and
case-by-case thinking will allow for sufficient explication. The interpretability and seeding problems should raise concerns for that approach.
4.3 Abandonment?
One also finds more-or-less explicit discussion of the explanatory ide-
als of the standard framework in the literature. The clearest cases of
these are Yarkoni and Westfall (2017) and Anderson (2014). Yarkoni
suggests explicitly that results from databasing and brain-mapping proj-
ects suggest abandoning explanation altogether, in favor of a purely pre-
dictive neuroscience. Anderson’s view does not cite prediction per se,
but does suggest that we need to change to a dispositional approach to
brain organization, wherein we do not understand a part of the brain as
contributing a specific causal influence at a specific time, but instead as
exhibiting dispositions to contribute to a range of functions.
I lack the space to assess these proposals in detail, but for my pur-
poses, it suffices to note that they both, more-or-less-explicitly, move
away from the mechanistic kind of explanation inherent to the standard
framework. Much has been said about the relative merits of mechanistic explanation versus prediction (Craver, 2006), and now
is not the time to re-adjudicate these issues. What I want to argue for in
the remainder of the paper is that abandoning the standard framework
is not itself equivalent to abandoning mechanistic explanation. Instead,
we can abandon the standard framework by abandoning the central ex-
planatory role it affords to mentalistic constructs.
5 An Alternative View
5.2 An Exemplar
The heuristic view makes a number of claims about successful ex-
planations in neuroscience. First, overlap between mental constructs
across tasks and contexts should be just as important as separation be-
tween them. Second, understanding the differences in structure between
tasks is paramount for understanding neural function. Third, assuming
spatial decomposition between distinct purported mental faculties would
limit, rather than enable, understanding of how the system works.
I will discuss one example in detail. Murray, Jaramillo, and Wang
(2017) pursued a modeling study of the interaction between the pre-
frontal cortex (PFC) and the posterior parietal cortex (PPC). The initi-
ating motivation for their study is that both the PFC and the PPC have
been shown physiologically to be involved across a wide range of both
working memory (WM) and decision-making (DM) tasks. The question,
then, is what their distinct contributions are.
Murray et al.’s approach was as follows. They modeled each area
as a fully recurrent neural network, and each area had distinct sub-
populations selective for distinct perceptual stimuli. The PFC and PPC
networks were bi-directionally connected via long-range projections.
The main difference between the two populations was a difference in
local structure. In particular, the PFC population was modeled as having a higher degree of internal influence than the PPC – both in the self-excitation of each subpopulation and in the inhibitory connections between them. Given this network structure, the investigators could model the
dynamics of the system in a range of task types, and think about how
the network responded in each.
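To fix ideas, here is a deliberately crude rate-model sketch in the spirit of the circuit just described (my own invented equations and parameters, not Murray et al.'s published model): two reciprocally coupled modules, each with two stimulus-selective subpopulations, with stronger self-excitation and cross-inhibition in the "PFC" module. Run as written, the strongly recurrent "PFC" module should hold the target across a later distractor, and both modules should end up preferring the target, in line with the working memory behaviour described next.

```python
import numpy as np

def simulate(w_self_pfc=1.6, w_inh_pfc=1.2, w_self_ppc=0.9, w_inh_ppc=0.6,
             w_long=0.4, T=600):
    """Toy two-module rate model: each module has two stimulus-selective
    subpopulations with self-excitation and mutual inhibition; modules are
    reciprocally coupled. All parameters are invented for illustration."""
    rate = lambda x: np.tanh(np.clip(x, 0, None))   # simple positive nonlinearity
    pfc, ppc = np.zeros(2), np.zeros(2)
    for t in range(T):
        stim = np.zeros(2)
        if 50 <= t < 150:
            stim[0] = 0.5          # target stimulus "A"
        if 300 <= t < 350:
            stim[1] = 0.5          # distractor "B" during the delay
        # Within-module recurrence (self-excitation minus cross-inhibition)
        # plus long-range input from the other module; PPC receives the stimulus.
        pfc_in = w_self_pfc * pfc - w_inh_pfc * pfc[::-1] + w_long * ppc
        ppc_in = w_self_ppc * ppc - w_inh_ppc * ppc[::-1] + w_long * pfc + stim
        pfc += 0.05 * (-pfc + rate(pfc_in))
        ppc += 0.05 * (-ppc + rate(ppc_in))
    return pfc, ppc

pfc, ppc = simulate()
print("PFC still prefers A over B:", pfc[0] > pfc[1])
print("PPC ends up preferring A:", ppc[0] > ppc[1])
```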
Murray et al. posited that one key factor involved in working memory
tasks is multi-stability. That is, the network can represent a range of
possible stimuli, but given that it has already represented one, it must
maintain that information across a delay, perhaps in the presence of
distractors. So, they modeled the presentation of a stimulus and whether
its representation could be maintained in the network even as other
modeled stimuli were presented. What they showed is that a particu-
lar dynamics occurred during “successful” working memory trials, in
which both PFC and PPC populations represented the stimulus during
presentation. During delay, presentation of a distractor would “switch”
the PPC representation to representing the distractor, but PFC would not
switch. After the distractor was removed, the PFC → PPC long-range
connections would drive the PPC populations to "switch back" to
representing the remembered stimulus. This model predicted a range of
physiological results found in PFC and PPC during these kinds of tasks,
as well as predicting types and durations of distractor-presentation that
would cause errors. Importantly, this explanation relied on the degree of
internal connection in each area, and the feedback connections from the
PFC to the PPC. For instance, if the internal connections within the PFC
were not sufficiently strong, then it would not maintain the representa-
tion during distractor presentation.
For decision tasks, the investigators asked whether the network could
produce an evidence-accumulation-to-threshold kind of process. These
processes have been shown to be important for a range of decision-
making processes including perceptual decisions and multi-attribute
choices (Teodorescu & Usher, 2013). Murray et al. modeled a perceptual
decision-making task, in which one out of a range of possible perceptual
outcomes must be decided on in the presence of a noisy signal. They
showed that the network could implement an evidence-accumulation
process, in which buildup of evidence occurred primarily in the PPC
population and selection of outcome in the PFC population. Intriguingly,
these dynamics also were dependent on the degree of internal structure
in the populations. Specifically, if the PFC population had a lesser de-
gree of internal recurrent influence, the network would evolve towards
a decision more slowly, whereas if it was strongly internally connected
it would evolve very quickly. As predicted, at a higher degree of inter-
nal influence the network “decided” faster, which in turn contributed to
more errors when the stimulus was noisier, and more time to integrate
evidence would have been helpful.
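The speed–accuracy consequence of stronger recurrence can be illustrated with an even simpler toy accumulator race (again my own construction, with invented parameters, not the published model): a higher self-excitation gain should reach threshold sooner while integrating less of the noisy evidence, and so tend to err more often.

```python
import numpy as np

def decide(gain, n_trials=1000, drift=0.02, noise=0.3, threshold=1.0, seed=1):
    """Toy race between two self-exciting accumulators; the 'correct'
    option has a small positive drift. Returns accuracy and mean time."""
    rng = np.random.default_rng(seed)
    n_correct, times = 0, []
    for _ in range(n_trials):
        x = np.zeros(2)
        t = 0
        while x.max() < threshold and t < 3000:
            evidence = np.array([drift, 0.0]) + noise * rng.normal(size=2)
            x += gain * 0.01 * x + 0.05 * evidence   # recurrence + noisy input
            x = np.clip(x, 0, None)
            t += 1
        n_correct += int(x.argmax() == 0)
        times.append(t)
    return n_correct / n_trials, float(np.mean(times))

for gain in (0.5, 3.0):
    acc, rt = decide(gain)
    print(f"gain={gain}: accuracy={acc:.2f}, mean decision time={rt:.0f} steps")
```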
So, the very same network could implement both the kind of infor-
mation processing required in a WM task, and the kind required in a
perceptual DM task. One of the most intriguing results, however, is
that these two kinds of information processing trade off in a network.
Greater resistance to distraction in the PFC network required a high
degree of internal influence in that module. But a high degree of inter-
nal influence also shortened the timeline over which perceptual evidence
could be accumulated. They posited that the particular structure of the
PFC-PPC circuit helps ameliorate this tradeoff. In particular, if one re-
moved the recurrent connections from the PFC to the PPC, then high
performance in the decision-making task would result in lower perfor-
mance in the working memory task.
I suggest that this kind of modeling project results in a mechanistic
understanding of the network, but only by exhibiting the three proper-
ties I discussed above. First, the understanding of the circuit developed
in the study starts out from the data point that both working memory
and decision employ overlapping circuits. Second, understanding the in-
formational requirements that are in common and differ across tasks is
central to the explanation. In particular, there is something in common
between working memory and decision-making tasks, namely that it is
useful to have both a population that is multiple in its responses paired
with a more categorically responding population. The difference in in-
ternal structure between the PFC and the PPC leads to the former hav-
ing more univocal responses, which lead to both its robustness in WM
contexts and its thresholding behavior in DM contexts. However, the
differences between the tasks are also vitally important, because they il-
lustrate the tradeoff in the network. WM contexts benefit from stronger
interconnection, since it increases resistance to distractors. But DM con-
texts benefit from weaker interconnection, since it allows for an increase
in evidence-gathering. This in turn leads to a mechanistic hypothesis,
namely that the distinction between the PFC and PPC circuits in their
degree of self-influence, and the feedback connection from the former to
the latter, help ameliorate this tradeoff.
Importantly, because the explanation takes this form, it would be a
mistake to attempt to spatially map WM and DM to distinct brain sys-
tems. There is not one part “doing” working memory and another part
“doing” decision-making, and therefore there is not a causal relationship
between so-individuated parts. There is one distributed circuit underlying
those intuitively distinct functions. Given this, I submit, there is no meta-
physically important distinction between working memory and percep-
tual decision-making. What there are are distinct task demands, and the
ways in which those demands are implemented by a distributed system.
The heuristic view, unlike the standard view, simply doesn’t assume
that there is a fact of the matter about (i) whether WM is really distinct
from DM, (ii) which tasks measure one versus the other, or (iii) whether
a brain part really performs one rather than the other. What it suggests,
as seems to be the case, is that there are deep commonalities in the brain
systems performing these functions. The explanation also does not re-
quire that there be any firm division between the ultimate set of tasks
that are WM, versus those that are DM, tasks. There are simply tasks
with different informational requirements that are implemented differ-
ently in the network.
This is compatible with working memory and decision-making hav-
ing played important heuristic roles in the understanding of this sys-
tem. It was not, perhaps, initially obvious what the relationship between
WM and DM might be. The concepts were operationalized differently.
However, the persistent discovery of overlapping involvement in each
of these tasks by the distributed PFC/PPC circuit led to the question
of exactly how these functions are implemented. This led to a model-
ing project which uncovered both the commonalities and the differences
between the informational requirements of, and the neural processing
instantiating, tasks that correspond more-or-less closely to each of these
categories.
I have only discussed one example, but I take this to be an exemplar of
how multifunctional distributed circuits might be decomposed. I discuss
a variety of other examples in other venues (Burnston, 2020, 2021). If
this case is exemplary, however, then it stresses the normative bit of the
heuristic approach, as opposed to that of the standard framework.
6 Normative Upshot
7 Conclusions
A number of years ago, it was common for textbooks in the philosophy
of mind to teach the following: either our intuitive conception of the
mind, with its commitments to intentional attitudes, etc., is true, or be-
haviorism is. I hope this strikes the modern reader as almost charmingly
anachronistic. One can find newer versions of the dichotomy, however.
Uttal (2001), in his famous criticism of fMRI research, argues that the
alternative to discovering discrete localizations for distinct cognitive fac-
ulties is to view the mind as “unanalyzable,” by which he means indivis-
ible into distinct parts. More recently, Hutto et al. (2017) have suggested
that the way to react to the “protean” – by which they mean dynamically
reconfigurable – functionality of the brain is to embrace enactivist views
of cognition, with their attendant rejection of mental representation and
computation (Anderson, 2014; Silberstein & Chemero, 2013).
I have tried to argue that one can abandon the standard explanatory
framework of cognitive neuroscience, and its attendant commitments about
psychological constructs, without abandoning mechanistic explanation in
the brain. And, while I haven’t argued for it here, I claim elsewhere (Burn-
ston, 2020) that this general approach extends to representational explana-
tion as well. The heuristic approach to cognitive ontology is a very different
stance on explanation than is currently assumed in the literature, and I
believe it deserves to be taken as a realistic option in this emerging field.
References
Anderson, M. L. (2014). After phrenology: Neural reuse and the interactive
brain. Cambridge: MIT Press.
Anderson, M. L., Kinnison, J., & Pessoa, L. (2013). Describing functional di-
versity of brain regions and brain networks. Neuroimage, 73, 50–58.
Bechtel, W. (2008). Mental mechanisms: Philosophical perspectives on cogni-
tive neuroscience. New York: Routledge.
Bechtel, W. (2017). Using the hierarchy of biological ontologies to iden-
tify mechanisms in flat networks. Biology & Philosophy. doi:10.1007/
s10539-017-9579-x
Bickle, J. (2003). Philosophy and neuroscience: A ruthlessly reductive account
(Vol. 2). Dordrecht: Springer Science & Business Media.
Boone, W., & Piccinini, G. (2016). The cognitive neuroscience revolution. Syn-
these, 193(5), 1509–1534.
Bunzl, M., Hanson, S. J., & Poldrack, R. A. (2010). An exchange about local-
ism. In Hanson and Bunzl (Eds.), Foundational issues in human brain map-
ping (pp. 49–54). Cambridge: MIT Press.
Burnston, D. C. (2016). A contextualist approach to functional localization in
the brain. Biology & Philosophy, 31(4), 527–550.
Burnston, D. C. (2020). Contents, vehicles, and complex data analysis in neuro-
science. Synthese. doi:10.1007/s11229-020-02831-9
Burnston, D. C. (2021). Getting over Atomism: Functional Decomposition in
Complex Neural Systems. The British Journal for the Philosophy of Science,
72(3), 743–772. doi:10.1093/bjps/axz039
Carlson, T., Goddard, E., Kaplan, D. M., Klein, C., & Ritchie, J. B. (2018).
Ghosts in machine learning for cognitive neuroscience: Moving from data to
theory. Neuroimage, 180, 88–100.
Craver, C. F. (2006). When mechanistic models explain. Synthese, 153(3),
355–376. Retrieved from http://rd.springer.com/article/10.1007/s11229-006-9097-x
Cummins, R. C. (1983). The nature of psychological explanation. Cambridge:
Bradford/MIT Press.
Darden, L., Pal, L. R., Kundu, K., & Moult, J. (2018). The product guides the
process: Discovering disease mechanisms. In E. Ippoliti & D. Danks (Eds.),
Building theories (pp. 101–117). Dordrecht: Springer.
De Brigard, F. (2014). Is memory for remembering? Recollection as a form of
episodic hypothetical thinking. Synthese, 191(2), 155–185.
Feest, U. (2010). Concepts as tools in the experimental generation of knowledge
in cognitive neuropsychology. Spontaneous Generations: A Journal for the
History and Philosophy of Science, 4(1), 173–190.
Figdor, C. (2011). Semantics and metaphysics in informatics: Toward an ontol-
ogy of tasks. Topics in Cognitive Science, 3(2), 222–226.
Fox, P. T., & Lancaster, J. L. (2002). Mapping context and content: the Brain-
Map model. Nature Reviews Neuroscience, 3, 319. doi:10.1038/nrn789
Gomez-Lavin, J. (2020). Working memory is not a natural kind and cannot ex-
plain central cognition. Review of Philosophy and Psychology. doi:10.1007/
s13164-020-00507-4
Griffiths, P. (2002). Is emotion a natural kind? In R. C. Solomon (Eds.), Think-
ing about feeling (pp. 233–249). Oxford & New York: Oxford University
Press.
Hastings, J., Frishkoff, G. A., Smith, B., Jensen, M., Poldrack, R. A., Lomax, J., . . .
Martone, M. E. (2014). Interdisciplinary perspectives on the development,
integration, and application of cognitive ontologies. Frontiers in Neuroinfor-
matics, 8, 62.
Hutto, D. D., Peeters, A., & Segundo-Ortin, M. (2017). Cognitive ontology in
flux: The possibility of protean brains. Philosophical Explorations, 20(2),
209–223.
Kanwisher, N. (2010). Functional specificity in the human brain: a window into
the functional architecture of the mind. Proceedings of the National Acad-
emy of Sciences of the United States of America, 107(25), 11163–11170.
doi:10.1073/pnas.1005062107
Lenartowicz, A., Kalar, D. J., Congdon, E., & Poldrack, R. A. (2010). Towards
an ontology of cognitive control. Topics in Cognitive Science, 2(4), 678–692.
Leonelli, S. (2012). Classificatory theory in data-intensive science: The case of
open biomedical ontologies. International Studies in the Philosophy of Sci-
ence, 26(1), 47–65.
Machery, E. (2009). Doing without concepts. Oxford University Press.
Murray, J. D., Jaramillo, J., & Wang, X. J. (2017). Working Memory and
Decision-Making in a Frontoparietal Circuit Model. Journal of Neurosci-
ence, 37(50), 12167–12186. doi:10.1523/JNEUROSCI.0343-17.2017
Piccinini, G., & Craver, C. (2011). Integrating psychology and neuroscience:
Functional analyses as mechanism sketches. Synthese, 183(3), 283–311.
doi:10.1007/s11229-011-9898-4
Poldrack, R. A., Kittur, A., Kalar, D., Miller, E., Seppa, C., Gil, Y., . . . Bilder,
R. M. (2011). The cognitive atlas: Toward a knowledge foundation for
cognitive neuroscience. Frontiers in Neuroinformatics, 5, 17. doi:10.3389/
fninf.2011.00017
Poldrack, R. A., Mumford, J. A., Schonberg, T., Kalar, D., Barman, B., & Yar-
koni, T. (2012). Discovering relations between mind, brain, and mental dis-
orders using topic mapping. PLoS Computational Biology, 8(10), e1002707.
doi:10.1371/journal.pcbi.1002707
Poldrack, R. A., & Yarkoni, T. (2016). From brain maps to cognitive ontologies:
Informatics and the search for mental structure. Annual Review of Psychol-
ogy, 67, 587–612.
Ritchie, J. B., Kaplan, D. M., & Klein, C. (2016). Decoding the brain: Neural
representation and the limits of multivariate pattern analysis in cognitive neu-
roscience. The British Journal for the Philosophy of Science, 70(2), 581–607.
Robins, S. K. (2016). Optogenetics and the mechanism of false memory. Syn-
these, 193(5), 1561–1583.
Rubin, T. N., Koyejo, O., Gorgolewski, K. J., Jones, M. N., Poldrack, R. A., &
Yarkoni, T. (2017). Decoding brain activity using a large-scale probabilistic
functional-anatomical atlas of human cognition. PLoS Computational Biol-
ogy, 13(10), e1005649. doi:10.1371/journal.pcbi.1005649
Schacter, D. L., Benoit, R. G., De Brigard, F., & Szpunar, K. K. (2015). Episodic
future thinking and episodic counterfactual thinking: Intersections between
memory and decisions. Neurobiology of learning and memory, 117, 14–21.
doi:10.1016/j.nlm.2013.12.008
Shine, J. M., Bissett, P. G., Bell, P. T., Koyejo, O., Balsters, J. H., Gorgolewski,
K. J., . . . Poldrack, R. A. (2016). The dynamics of functional brain networks:
Integrated network states during cognitive task performance. Neuron, 92(2),
544–554. doi:10.1016/j.neuron.2016.09.018
Shine, J. M., Breakspear, M., Bell, P. T., Martens, K. E., Shine, R., Koyejo,
O., . . . Poldrack, R. A. (2018). The low dimensional dynamic and integrative
core of cognition in the human brain. bioRxiv, 266635. doi:10.1101/266635
Shine, J. M., & Poldrack, R. A. (2017). Principles of dynamic network re-
configuration across diverse brain states. Neuroimage. doi:10.1016/j.
neuroimage.2017.08.010
Silberstein, M., & Chemero, T. (2013). Constraints on localization and decom-
position as explanatory strategies in the biological sciences. Philosophy of
Science, 80(5), 958–970.
Sochat, V. V., Eisenberg, I. W., Enkavi, A. Z., Li, J., Bissett, P. G., & Poldrack,
R. A. (2016). The experiment factory: Standardizing behavioral experiments.
Frontiers in Psychology, 7, 610. doi:10.3389/fpsyg.2016.00610
Sullivan, J. A. (2010). Reconsidering ‘spatial memory’ and the Morris water
maze. Synthese, 177(2), 261–283. Retrieved from http://rd.springer.com/
article/10.1007/s11229-010-9849-5
Sullivan, J. A. (2014). Stabilizing mental disorders: prospects and problems.
In H. Kincaid & J. A. Sullivan (Eds.), Classifying Psychopathology: Mental
Kinds and Natural Kinds (pp. 257–281). Cambridge, MA: MIT Press.
Sullivan, J. A., Dumont, J. R., Memar, S., Skirzewski, M., Wan, J., Mofrad, M.
H., . . . Prado, V. F. (2021). New frontiers in translational research: Touch-
screens, open science, and the mouse translational research accelerator plat-
form. Genes, Brain and Behavior, 20(1), e12705.
Tekin, Ş. (2016). Are mental disorders natural kinds?: A plea for a new approach
to intervention in psychiatry. Philosophy, Psychiatry, & Psychology, 23(2),
147–163.
Teodorescu, A. R., & Usher, M. (2013). Disentangling decision models: From
independence to competition. Psychological review, 120(1), 1.
Turner, J. A., & Laird, A. R. (2012). The cognitive paradigm ontology: De-
sign and application. Neuroinformatics, 10(1), 57–66. doi:10.1007/
s12021-011-9126-x
Uttal, W. R. (2001). The new phrenology: The limits of localizing cognitive
processes in the brain. Cambridge: The MIT Press.
Varoquaux, G., Schwartz, Y., Poldrack, R. A., Gauthier, B., Bzdok, D., Poline,
J. B., & Thirion, B. (2018). Atlases of cognition with large-scale human brain
mapping. PLoS Computational Biology, 14(11), e1006565. doi:10.1371/
journal.pcbi.1006565
Yarkoni, T., Poldrack, R. A., Nichols, T. E., Van Essen, D. C., & Wager, T. D.
(2011). Large-scale automated synthesis of human functional neuroimaging
data. Nature methods, 8(8), 665.
Yarkoni, T., & Westfall, J. (2017). Choosing prediction over explanation in
psychology: Lessons from machine learning. Perspectives on Psychological
Science, 12(6), 1100–1122. doi:10.1177/1745691617693393
Section 4
“It Takes Two to Make a Thing Go Right”
Luis H. Favela
DOI: 10.4324/9781003251392-18
1 Introduction
There should be no doubt that technological developments have played
significant roles throughout the history of scientific discoveries and prog-
ress. This is as true in the physical sciences (e.g., particle accelerators in
physics) as in the life sciences (e.g., microscopes in biology). What is less
apparent is the role mathematical developments have played in facilitat-
ing and supporting many of those discoveries. Mathematical tools for
analyzing data may not be at the forefront of discoveries centering on the
physical structure of investigative targets of interest (e.g., cells); but they
certainly are crucial in research focused on the dynamics of phenomena
(e.g., planetary motion). Consequently, for science to progress, research
on the movement and temporal aspects of phenomena often requires the
coevolution of technological and mathematical tools.
Recently, it has been increasingly argued by some philosophers of neu-
roscience that experimental tools are not just important but are funda-
mental to neuroscience research (e.g., Barwich, 2020; Bickle, 2016; Silva,
Landreth, & Bickle, 2014). Put in its sharpest terms, the line of thought
goes like this: From Golgi’s staining technique to functional magnetic
resonance imaging, and from deep brain stimulation to optogenetics,
the history of neuroscience is principally a history of tool development.
Moreover, it has been argued that this history is best characterized as
one that exhibits reductionist (Bickle, 2006, 2016) and mechanistic
explanations (Craver, 2002, 2005). Across these claims, little to no mention is made of data analysis methods or of the underlying assumptions of those techniques. Here, I argue that the mathematical
assumptions of applied data analyses have played crucial—though often
underappreciated—roles in the history of neuroscience. First, I present
the Hodgkin and Huxley model of action potentials as an example of re-
search constrained by technological and mathematical limitations of its
time. Second, I draw attention to a feature of neurons that is overlooked
by the Hodgkin-Huxley model: scale-invariant dynamics. After describ-
ing scale-invariant dynamics, I then point out consequences scale-invari-
ant neuronal dynamics have for explanatory approaches in neuroscience
that rely on—what can be broadly described as—decomposition strat-
egies of neuronal activity. I conclude by emphasizing the necessity of
mathematical developments in providing more appropriate and encom-
passing explanations of neural phenomena in toto.
Figure 1 Hodgkin-Huxley model. (a) The canonical Hodgkin and Huxley (1952) model of action potentials in the squid giant axon. Definitions of key model variables (box). (b) The basic shape of an action potential as produced by the Hodgkin-Huxley model (created with MATLAB [MathWorks®, Natick, MA] script based on Kothawala, 2015). The x-axis captures the entire range of time in which an action potential occurs. According to the model, the lower temporal boundary of an action potential is 10 ms. This means that the entire event, from start to finish, occurs within that time frame.
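For readers who want to see the kind of computation behind panel (b), here is a minimal Python sketch of the standard Hodgkin-Huxley equations, using common textbook parameter values and forward-Euler integration; it is an illustrative stand-in, not the MATLAB script cited in the caption.

# Minimal forward-Euler integration of the standard Hodgkin-Huxley equations
# (textbook parameter values; an illustrative sketch only).
import numpy as np

C = 1.0                                  # membrane capacitance (uF/cm^2)
g_na, g_k, g_l = 120.0, 36.0, 0.3        # maximal conductances (mS/cm^2)
e_na, e_k, e_l = 50.0, -77.0, -54.4      # reversal potentials (mV)

def a_n(v): return 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
def b_n(v): return 0.125 * np.exp(-(v + 65) / 80)
def a_m(v): return 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
def b_m(v): return 4.0 * np.exp(-(v + 65) / 18)
def a_h(v): return 0.07 * np.exp(-(v + 65) / 20)
def b_h(v): return 1.0 / (1 + np.exp(-(v + 35) / 10))

dt, t_max = 0.01, 20.0                   # time step and duration (ms)
steps = int(t_max / dt)
v, n, m, h = -65.0, 0.32, 0.05, 0.6      # approximate resting-state values
trace = []
for i in range(steps):
    t = i * dt
    i_ext = 10.0 if t >= 2.0 else 0.0    # sustained suprathreshold current (uA/cm^2)
    i_ion = (g_na * m**3 * h * (v - e_na)
             + g_k * n**4 * (v - e_k)
             + g_l * (v - e_l))
    v += dt * (i_ext - i_ion) / C
    n += dt * (a_n(v) * (1 - n) - b_n(v) * n)
    m += dt * (a_m(v) * (1 - m) - b_m(v) * m)
    h += dt * (a_h(v) * (1 - h) - b_h(v) * h)
    trace.append(v)
print("peak membrane potential (mV):", round(max(trace), 1))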
Figure 2 The Brunsviga 20, “one of the most popular mechanical calculators. It was produced up to the early 1970s and marketed with the slogan ‘Brains of Steel’” (Schwiening, 2012). (Reprinted with permission from Wikipedia. CC BY-SA 2.0 DE.)
Figure 3 Models of nonlinear single-neuron activity. (a) FitzHugh-Nagumo model and phase space portrait. (Modified and reprinted with permission from Scholarpedia. CC BY-NC-SA 3.0.) (b) Izhikevich model and phase space portrait. Note that the phase space portrait depicts a strange attractor as the Izhikevich model parameters are set to depict chaotic time evolution. The FitzHugh-Nagumo model is not able to depict chaotic neuronal behavior no matter how the parameters are tuned. (Modified and reprinted with permission from Nobukawa, Nishimura, Yamanishi, & Liu, 2015.)
Figure 4 The Koch curve is an example of an abstract spatial fractal. Here, three iterations of self-similarity are depicted (a, b, and c). (Modified and reprinted with permission from Wikipedia. CC BY-SA 3.0.)
Figure 5 Random networks and scale-invariant networks. (a) Abstract random network illustrating that most nodes have a comparable number of connections. (b) Abstract scale-invariant network illustrating many nodes with a small number of connections and few nodes with a high number of connections. Dissociated neurons developing in vitro and coupled to multielectrode array (MEA), exhibiting (c) random network connectivity and (d) scale-invariant connectivity, with a highly connected unit (center), or hub. (Modified and reprinted with permission from Josserand et al., 2021 and Poli et al., 2015. CC BY.)
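As a rough illustration of the contrast in panels (a) and (b), the following sketch uses the networkx library to compare degree statistics of a random (Erdős–Rényi) graph and a scale-invariant (Barabási–Albert) graph; the graph sizes and parameters are arbitrary choices, not the networks shown in the figure.

# Random vs. scale-free network degree statistics (illustrative parameters only).
import networkx as nx

n = 1000
g_random = nx.erdos_renyi_graph(n, p=0.01, seed=1)        # most nodes: similar degree
g_scale_free = nx.barabasi_albert_graph(n, m=5, seed=1)   # few hubs, many low-degree nodes

for name, g in [("random", g_random), ("scale-free", g_scale_free)]:
    degrees = sorted((d for _, d in g.degree()), reverse=True)
    print(f"{name:12s} mean degree = {sum(degrees) / n:5.1f}, "
          f"five largest degrees = {degrees[:5]}")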
Indeed, the lesson from our journey across levels of organization, from
behavior through neural assemblies to single neurons and proteins,
suggests that dreams on all-encompassing microscopic timescale-based
descriptions, aimed at explaining the temporal richness of macroscopic
levels, should be abandoned. Other approaches are called for.
(2010, p. 23)
n = 1/S^d  (1)
It is helpful to explain this equation via the Koch curve. For demonstra-
tion purposes, we will look at a four-lined Koch curve (Figure 4a). Here
n is the number of line segments at a particular scale of observation;
in this case, it is 4. Next, S is the scale factor, or the size reduction
at each iteration; here it is 1/3. Our equation is now: 4 = 1/(1/3)^d, or 4 = 3^d. We want to figure out d, or the fractal dimension. To do so, we
take the log of both sides: d = log 4/log 3, which gives us a fractal dimen-
sion d = 1.26. In English, this means that the fractal dimension of the
Koch curve is 1.26, which means it is not a straight line (d = 1) or a square
(d = 2), but closer to being a straight line than a square (d = 1.26). There
are various other methods for mathematically assessing fractals and
multifractals (Lopes & Betrouni, 2009).
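The arithmetic just described can be written in a few lines; the helper function below is hypothetical and simply applies d = log(n)/log(1/S) to exactly self-similar fractals.

# Similarity dimension d from n = 1/S^d, i.e., d = log(n) / log(1/S).
import math

def similarity_dimension(n_pieces, scale_factor):
    """n_pieces self-similar copies, each scaled down by scale_factor."""
    return math.log(n_pieces) / math.log(1.0 / scale_factor)

print(similarity_dimension(4, 1/3))   # Koch curve: ~1.26
print(similarity_dimension(3, 1/2))   # Sierpinski triangle: ~1.585
print(similarity_dimension(2, 1/2))   # straight line: 1.0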
The point of this example is to demonstrate that before Mandelbrot’s
invention—or, perhaps, discovery—of fractal geometry, it was not pos-
sible to appropriately account for scale-invariant phenomena, for exam-
ple, via more standard statistical tools such as calculating the arithmetic
mean. The consequence for neuronal activity is that it was not until the
1990s (e.g., Liebovitch & Toth, 1990) that scale-invariant dynamics
could be properly identified. Before then, such properties were misidenti-
fied via other statistical methods. Since scale-invariant structures have no
primary scale or average scale, they have no specific window to identify
as the start and finish boundary. Such a view of neuronal activity is further evidenced by other work based on nonlinear dynamical systems theory (NDST), such as the Izhikevich model
(2007; Figure 3b), which treats action potentials as continuous cycles and
not binary, “all-or-none” events (cf. Figure 1b). If true, that is, if action
potentials are not bounded within discrete windows of time, then action
potentials cannot be accounted for via decomposition strategies, such as
those common to mechanistic approaches in actual scientific practice.4
In concluding this section, an important clarification needs to be made
in order to address a significant critique of the current line of argument.
The critique centers on the notion of “bounded” in regard to natural
phenomena. As discussed above, the currently relevant aspect of the
Bechtel/Marom debate centers on the idea that mechanistic explana-
tions treat targets of investigation as bounded, namely, as having delin-
eated borders, which can be spatial or temporal. The Hodgkin-Huxley
model of action potentials and its 10 ms event window were presented
as an example of such a bounded mechanism. Scale-invariant neuronal
dynamics was presented as an unbounded natural phenomenon, which
means it is not accessible to mechanistic explanation (i.e., if “mechanis-
tic explanations” include the stipulation of boundedness; see Bechtel,
2015; Marom, 2010). The critique of this line of argument centers on
the point that even scale-invariant neuronal dynamics are “bounded” in
a number of ways, for example, there is a window of time in which they
occur (e.g., they do not last for months, years, or centuries) and they
are spatially confined (e.g., they occur in an area of the brain, and not
across the whole brain, let alone body and world). This is an understand-
ably compelling critique. However, it does not concern the way in which
scale-invariant dynamics are “unbounded.”
The way in which scale-invariant dynamics are unbounded involves
the inability of single, bounded values to characterize the phenomenon.
A time series (Figure 6) need not be infinite nor recorded from an event
that has no measurable spatial location in order to be scale invariant.
A scale-invariant time series exhibits the same pattern among windows
of various lengths of time. For example, if a heartbeat shows a pattern
of activity over 60 minutes, then, to be considered scale invariant, that
(statistically) same pattern should be shown in each of two 30-minute
windows of time, at each of four 15-minute windows, and so on. In that
way, the time series is not properly understood as “bounded” in that
there is no single length of time that characterizes the entire signal. That
is to say, it is not correct to treat the event as a bounded 60-minute event,
or a 30-minute event, and so on; rather, it should be characterized in terms of the structure of the
patterns across various scales. It is in that sense that Marom argues that
neuronal dynamics do not have timescales, and it is in that sense that
they are unbounded, and, thus, not properly explained mechanistically.
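One standard way to quantify this window-spanning structure is detrended fluctuation analysis (Peng et al., 1994). The following is a compact, illustrative sketch, with an arbitrary white-noise test signal and arbitrary window sizes; it is not the analysis pipeline used in the studies cited in this chapter.

# Compact sketch of detrended fluctuation analysis (DFA; cf. Peng et al., 1994).
import numpy as np

def dfa_exponent(x, window_sizes):
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())           # integrate the demeaned signal
    fluctuations = []
    for s in window_sizes:
        n_windows = len(profile) // s
        rms = []
        for k in range(n_windows):
            segment = profile[k * s:(k + 1) * s]
            t = np.arange(s)
            coeffs = np.polyfit(t, segment, 1)  # local linear trend
            detrended = segment - np.polyval(coeffs, t)
            rms.append(np.sqrt(np.mean(detrended ** 2)))
        fluctuations.append(np.mean(rms))
    # Scaling exponent alpha: slope of log F(s) versus log s.
    alpha, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return alpha

rng = np.random.default_rng(0)
white_noise = rng.standard_normal(2 ** 13)      # 8,192 samples, echoing Figure 6
sizes = [16, 32, 64, 128, 256, 512]
print("alpha (white noise) ~", round(dfa_exponent(white_noise, sizes), 2))  # ~0.5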
5 Conclusion
One is highly unlikely to find disagreement among the scientific research
community that technological advancements have paved the way
for some of the greatest advances and discoveries. What is less often
acknowledged—especially in neuroscience—is the necessity of coevolv-
ing our mathematical tools with technological advances, and vice versa.
Consequently, technological advancements that produce more detailed
and accurate data recording will not alone necessarily provide proper ex-
planations of biological phenomena. Mathematical tools like those pro-
vided by NDST are needed as well in order to properly characterize data.
The Hodgkin-Huxley model was informed and constrained by the avail-
able technological (i.e., voltage clamp) and mathematical (i.e., Brunsviga
20 calculator) tools of the time. Since then, more advanced technology
(e.g., multielectrode arrays) and mathematics (e.g., fractal analysis)
have highlighted some of the limitations of the Hodgkin-Huxley model
as a comprehensive model of action potentials across temporal scales.
Scale-invariant neuronal activity provides a rich example of this. In or-
der to identify scale-invariant activity, researchers needed more accurate
measurements (e.g., MEA), data analyses (e.g., detrended fluctuation
analysis), and—in this case—new concepts altogether. In order to prop-
erly account for scale-invariant activity, new concepts—namely, fractals and the fractal dimension—were needed, as were accompanying innovative
mathematical analyses. One consequence of the existence of scale-invari-
ant neuronal activity discussed here involves the limitations of research
approaches centered on decomposition strategies—i.e., “mechanistic”
approaches understood in terms of actual scientific practice—to account
for phenomena that are without discrete (i.e., “bounded”) temporal
boundaries. In conclusion, an attempt has been made here to motivate the claim that progress in neuroscience takes two things to make it go right, namely, technological and mathematical advancements.
Figure 6 Time series exhibiting statistical scale invariance at multiple windows of time. Data obtained from the spontaneous activity of single neurons (i.e., mitral cells) in the rat main olfactory bulb are shown (Stakic, Suchanek, Ziegler, & Griff, 2011). Synthetic time series reproduced from results of detrended fluctuation analysis (Favela, Coey, Griff, & Richardson, 2016). Statistical scale invariance demonstrated in windows based on power of two: (a) 8,192 seconds, (b) 2,048 seconds, and (c) 1,024 seconds. The overall structure of the time series is repeated within each window of time. As a result, the time series is not properly characterized via a single value (e.g., mean) or window of time.
Acknowledgments
I would like to thank Ann-Sophie Barwich, John Bickle, and Carl Craver
for their interest in this project. Thanks to John Beggs for many conversa-
tions about several aspects of this material; I’ve learned a great deal from
him. Thanks to Mary Jean Amon for assistance with creating Figure 6.
Various stages and versions of this paper have benefited from constructive
feedback by audiences at the Tool Development in Experimental Neuro-
science: A Science-in-Practice Workshop & the 57th Annual Meeting
of the Alabama Philosophical Society (2019, September), the Center for
Philosophy of Science at the University of Pittsburgh (2019, October),
NeuroTech: An Interdisciplinary Early Career Workshop on Tools and
Technology in Neuroscience (2020, January), and the Neuroscience Alli-
ance at the University of Central Florida (2020, February).
Notes
1 “Linearity” can be understood as a mathematical relationship among data
(e.g., time series), in that each successive data point is additively related to its
previous point. Such phenomena exhibit outputs that are proportional to in-
puts (Lam, 1998). “Nonlinearity” can also be understood as a mathematical
relationship among data. But here, each successive data point can exhibit a
variety of relationships to previous points, such as exponential or multipli-
cative (Lam, 1998; May, 1976; Stam, 2005). Other, more interesting, forms
of nonlinearity found in biological systems include patterned dynamics
(e.g., fractals; Di Ieva, 2016) and phase transitions (e.g., catastrophe theory;
Isnard & Zeeman, 1976/2013). A brief illustrative sketch of this linear/nonlinear contrast follows these notes.
2 Hysteresis is a nonlinear phenomenon. There are multiple forms of hystere-
sis that can be exhibited by both biological and nonbiological systems. All
forms are characterized by their being constrained by historical variation,
that is, when a system’s current state is strongly dependent on its history
(Haken, 1983). Ferromagnetic materials (e.g., common magnets that hold
up pictures on your refrigerator) provide clear illustrations of strong histori-
cal dependence characteristic of hysteresis effects (Dris, 2016). If a material
such as iron is magnetized, once it is demagnetized, then it will require dif-
ferent magnetic fields to re-magnetize it. The reason the same magnetic field
will not magnetize the piece of iron again is because the “system” (i.e., its
atomic composition) “remembers” its previous state, that is, is constrained
by states it has been in before. Thus, the current state of the system depends
on its previous state such that magnetization does not occur at absolute,
context-free values.
3 Bechtel’s (2015) use of “scale free” and the current use of “scale invariant” refer
to the same phenomenon. The terms are commonly used interchangeably
in the relevant literatures (e.g., Barabási, 2016; Broido & Clauset, 2019;
Serafino et al., 2021). For that reason, when discussing Bechtel’s argument,
I will substitute “scale invariance” for “scale free” for consistency of termi-
nology in the current work.
4 It is extremely challenging to define “mechanism” or “mechanistic explana-
tion” in a way satisfactory to most philosophers of science. In the current
work, I limit my application of the concept of “mechanism” to ways consis-
tent with actual scientific practice, that is, in the sense discussed by Bechtel
(2015). For that reason, I emphasize decomposition as common to the ways
scientists investigate and define “mechanisms” (e.g., Andersen, 2014; Illari
& Williamson, 2012; Khambhati, Mattar, Wymbs, Grafton, & Bassett,
2018). That is to say, scientists commonly aim to decompose—or “break
apart”—a target of investigation in order to understand it in terms of its
parts, namely, the constitution, contribution, and interaction of those parts.
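As a small illustration of the contrast drawn in note 1, the following compares an additive (linear) update rule with the multiplicative logistic map analyzed by May (1976); the parameter values are arbitrary.

# Linear (additive) vs. nonlinear (multiplicative) update rules; cf. May (1976).
def linear_step(x, a=0.9, b=0.1):
    return a * x + b                  # output proportional to input

def logistic_step(x, r=3.9):
    return r * x * (1.0 - x)          # multiplicative; chaotic for r near 4

x_lin, x_log = 0.2, 0.2
for _ in range(50):
    x_lin, x_log = linear_step(x_lin), logistic_step(x_log)
# The linear rule settles near its fixed point b/(1-a) = 1.0; the logistic map does not settle.
print(round(x_lin, 3), round(x_log, 3))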
References
Adrian, E. D., & Zotterman, Y. (1926). The impulses produced by sensory
nerve endings. Part 3. Impulses set up by touch and pressure. The Journal of
Physiology, 61(4), 465–483.
Andersen, H. (2014). A field guide to mechanisms: Part I. Philosophy Compass,
9(4), 274–283.
Bak, P. (1996). How nature works: The science of self-organized criticality.
New York, NY: Springer-Verlag.
Barabási, A.-L. (2016). Network science. Cambridge University Press. Retrieved
July 7, 2021 from http://networksciencebook.com
Barabási, A.-L., & Bonabeau, E. (2003). Scale-free networks. Scientific Amer-
ican, 288(5), 60–69.
Barwich, A. S. (2020). What makes a discovery successful? The story of Linda
Buck and the olfactory receptors. Cell, 181(4), 749–753. doi:10.1016/j.
cell.2020.04.040
Bear, M. F., Connors, B. W., & Paradiso, M. A. (2016). Neuroscience: Explor-
ing the brain (4th ed.). New York, NY: Wolters Kluwer.
Bechtel, W. (2015). Can mechanistic explanation be reconciled with scale-free
constitution and dynamics? Studies in History and Philosophy of Biological
and Biomedical Sciences, 53, 84–93.
Bickle, J. (2006). Reducing mind to molecular pathways: Explicating the re-
ductionism implicit in current cellular and molecular neuroscience. Synthese,
151, 411–434.
Bickle, J. (2016). Revolutions in neuroscience: Tool development. Frontiers in
Systems Neuroscience, 10(24). doi:10.3389/fnsys.2016.00024
Boonstra, T. W., He, B. J., & Daffertshofer, A. (2013). Scale-free dynamics and
critical phenomena in cortical activity. Frontiers in Physiology: Fractal and
Network Physiology, 4(79). doi:10.3389/fphys.2013.00079
Broido, A. D., & Clauset, A. (2019). Scale-free networks are rare. Nature Com-
munications, 10(1017), 1–10. doi:10.1038/s41467-019-08746-5
Bryce, R. M., & Sprague, K. B. (2012). Revisiting detrended fluctuation analy-
sis. Scientific Reports, 2(315), 1–6. doi:10.1038/srep00315
Churchland, P. S., & Sejnowski, T. J. (1992/2017). The computational brain
(25th anniversary ed.). Cambridge: The MIT Press.
Craver, C. F. (2002). Interlevel experiments and multilevel mechanisms in the
neuroscience of memory. Philosophy of Science, 69(S3), S83-S97.
Craver, C. F. (2005). Beyond reduction: Mechanisms, multifield integration and
the unity of neuroscience. Studies in History and Philosophy of Science Part
C: Studies in History and Philosophy of Biological and Biomedical Sciences,
36(2), 373–395.
Di Ieva, A. (Ed.). (2016). The fractal geometry of the brain. New York: Springer.
Dris, S. (2016). Magnetic hysteresis. LibreTexts. Retrieved July 7, 2021, from
https://eng.libretexts.org/@go/page/333
Eagleman, D., & Downar, J. (2016). Brain and behavior: A cognitive neurosci-
ence perspective. New York, NY: Oxford University Press.
Favela, L. H. (2020a). Cognitive science as complexity science. Wiley Interdisci-
plinary Reviews: Cognitive Science, 11(4), e1525, 1–24. doi:10.1002/WCS.1525
Favela, L. H. (2020b). Dynamical systems theory in cognitive science and neuro-
science. Philosophy Compass, 15(8), e12695, 1–16. doi:10.1111/phc3.12695
Favela, L. H. (2020c). The dynamical renaissance in neuroscience. Synthese.
doi:10.1007/s11229-020-02874-y
Favela, L. H., Coey, C. A., Griff, E. R., & Richardson, M. J. (2016). Fractal analysis
reveals subclasses of neurons and suggests an explanation of their spontaneous
activity. Neuroscience Letters, 626, 54–58. doi:10.1016/j.neulet.2016.05.017
FitzHugh, R. (1961). Impulses and physiological states in theoretical models of
nerve membrane. Biophysical Journal, 1(6), 445–466.
Gerstner, W., Kistler, W. M., Naud, R., & Paninski, L. (2014). Neuronal dy-
namics: From single neurons to networks and models of cognition. Cam-
bridge: Cambridge University Press.
Ginyard, R. (1988). It takes two [Recorded by R. Ginyard (Rob Base) and R.
Bryce (DJ E-Z Rock)]. On It takes two [Vinyl]. Englewood, NJ: Profile Records.
Gisiger, T. (2001). Scale invariance in biology: Coincidence or footprint of a
universal mechanism? Biological Reviews, 76, 161–209.
Gross, G. W. (2011). Multielectrode arrays. Scholarpedia, 6(3), 5749.
doi:10.4249/scholarpedia.5749
Haken, H. (1983). Synergetics: An introduction (3rd ed.). Berlin: Springer-Verlag.
Häusser, M. (2000). The Hodgkin-Huxley theory of the action potential.
Nature Neuroscience, 3(11), 1165. doi:10.1038/81426
He, B. J. (2014). Scale-free brain activity: Past, present, and future. Trends in
Cognitive Sciences, 18(9), 480–487.
Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of mem-
brane current and its application to conduction and excitation in nerve. The
Journal of Physiology, 117(4), 500–544.
Illari, P. M., & Williamson, J. (2012). What is a mechanism? Thinking about
mechanisms across the sciences. European Journal for Philosophy of Science,
2(1), 119–135.
Isnard, C. A., & Zeeman, E. C. (1976/2013). Some models from catastrophe
theory in the social sciences. In L. Collins (Ed.), The use of models in the
social sciences (pp. 44–100). Chicago, IL: Routledge.
Izhikevich, E. (2007). Dynamical systems in neuroscience: The geometry of
excitability and bursting. Cambridge: MIT Press.
Josserand, M., Allassonnière-Tang, M., Pellegrino, F., & Dediu, D. (2021). In-
terindividual variation refuses to go away: A Bayesian computer model of
language change in communicative networks. Frontiers in Psychology: Lan-
guage Science, 12(626118). doi:10.3389/fpsyg.2021.626118
Khambhati, A. N., Mattar, M. G., Wymbs, N. F., Grafton, S. T., & Bassett,
D. S. (2018). Beyond modularity: Fine-scale mechanisms and rules for brain
network reconfiguration. NeuroImage, 166, 385–399.
Koch, C. (1999). Biophysics of computation: Information processing in single
neurons. New York, NY: Oxford University Press.
Kothawala, A. (2015). Simulation of nerve action potential using Hod-
gkin Huxley model [MATLAB script]. MATLAB Central File
Exchange. Retrieved July 7, 2021 from https://www.mathworks.com/mat-
labcentral/fileexchange/53659-simulation-of-nerve-action-potential-using-
hodgkin-huxley-model
Lam, L. (1998). Nonlinear physics for beginners: Fractals, chaos, solitons, pat-
tern formation, cellular automata and complex systems. Singapore: World
Scientific.
Liebovitch, L. S., & Toth, T. I. (1990). Using fractals to understand the opening
and closing of ion channels. Annals of Biomedical Engineering, 18, 177–194.
Lopes, R., & Betrouni, N. (2009). Fractal and multifractal analysis: A review.
Medical Image Analysis, 13(4), 634–649.
Mandelbrot, B. B. (1983). The fractal geometry of nature. New York, NY: W.
H. Freeman and Company.
Marom, S. (2010). Neural timescales or lack thereof. Progress in Neurobiology,
90, 16–28.
May, R. M. (1976). Simple mathematical models with very complicated dynam-
ics. Nature, 261, 459–467.
Nagumo, J., Arimoto, S., & Yoshizawa, S. (1962). An active pulse transmission
line simulating nerve axon. Proceedings of the IRE, 50(10), 2061–2070.
Nobukawa, S., Nishimura, H., Yamanishi, T., & Liu, J. Q. (2015). Analysis of
chaotic resonance in Izhikevich neuron model. PloS One, 10(9), e0138919.
doi:10.1371/journal.pone.0138919
Peng, C. K., Buldyrev, S. V., Havlin, S., Simons, M., Stanley, H. E., & Gold-
berger, A. L. (1994). Mosaic organization of DNA nucleotides. Physical Re-
view E, 49(2), 1685–1689.
Poli, D., Pastore, V. P., & Massobrio, P. (2015). Functional connectivity in in
vitro neuronal assemblies. Frontiers in Neural Circuits, 9(57). doi:10.3389/
fncir.2015.00057
Schwiening, C. J. (2012). A brief historical perspective: Hodgkin and Huxley.
The Journal of Physiology, 590(11), 2571–2575.
Serafino, M., Cimini, G., Maritan, A., Rinaldo, A., Suweis, S., Banavar, J. R.,
& Caldarelli, G. (2021). True scale-free networks hidden by finite size effects.
Proceedings of the National Academy of Sciences, 118(2), 1–11. doi:10.1073/
pnas.2013825118
Silva, A. J., Landreth, A., & Bickle, J. (2014). Engineering the next revolution
in neuroscience: The new science of experiment planning. New York, NY:
Oxford University Press.
Stakic, J., Suchanek, J. M., Ziegler, G. P., & Griff, E. R. (2011). The source of
spontaneous activity in the main olfactory bulb of the rat. PLoS One, 6(8),
e23990. doi:10.1371/journal.pone.0023990
Stam, C. J. (2005). Nonlinear dynamical analysis of EEG and MEG: Review of
an emerging field. Clinical Neurophysiology, 116(10), 2266–2301.
14 Hybrid Brains
Interfacing Living Neurons and Circuits with Computational Models
Astrid A. Prinz
DOI: 10.4324/9781003251392-19
different levels of nervous system organization, from the sub-cellular
level of molecular interactions to the level of entire brain regions com-
municating with each other, and even into the realm of social interac-
tions between brains (Prinz and Hooper 2017).
At all levels, computational modeling complements experimental
studies most beneficially if there is a feedback loop between the two:
models are constructed based on experimental data, models are used to
explore potential mechanisms of brain function and make testable pre-
dictions, those predictions are tested in experiments, the experimental
results inform refinement of the model – repeat the cycle. The benefits
of this iterative process are best captured by the German saying ‘Der
Weg ist das Ziel’ (loosely, ‘The journey is the reward’), indicating that
much can be learned from models that initially fail to match the ground
truth of experimental data, from how they fail, and from how they can
be refined.
For the purposes of discussing the dynamic clamp, I will focus now
on computational modeling at the scale of individual neurons and small
circuits of neurons. At this scale, temporal dynamics of neuronal and
synaptic activity can be represented computationally in the form of sys-
tems of coupled differential equations that are solved numerically on a
computer. Conductance-based models are a particular type of compu-
tational model at the cellular and small circuit scale and include dif-
ferential equations for each neuron’s membrane potential, activation of
each of the relevant ionic membrane conductances (for example, what
fraction of a neuron’s sodium channels are open at any given time), and
other dynamic variables, such as intracellular calcium concentration and
synaptic receptor binding and channel gating.
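As a concrete, if simplified, illustration of what such coupled equations look like, here is a minimal sketch of a single conductance-based synapse: a gating variable driven by presynaptic voltage and the current it contributes to the postsynaptic membrane equation. The kinetics and parameter values are illustrative assumptions, not taken from any particular published model.

# Minimal sketch of conductance-based synaptic coupling: a gating variable s
# driven by presynaptic voltage, and the synaptic current it contributes to the
# postsynaptic membrane equation. Parameter values are illustrative only.
import math

G_SYN = 5.0        # maximal synaptic conductance (nS)
E_SYN = -75.0      # synaptic reversal potential (mV); inhibitory here
TAU_S = 20.0       # decay time constant of the gating variable (ms)

def s_inf(v_pre):
    # Fraction of open synaptic channels as a sigmoidal function of presynaptic V.
    return 1.0 / (1.0 + math.exp(-(v_pre + 45.0) / 2.0))

def step_synapse(s, v_pre, v_post, dt):
    """Advance the gating variable one Euler step and return (new_s, I_syn)."""
    s = s + dt * (s_inf(v_pre) - s) / TAU_S
    i_syn = G_SYN * s * (v_post - E_SYN)     # outward-positive convention
    return s, i_syn

# Usage: a brief presynaptic depolarization transiently opens the synapse.
s, dt = 0.0, 0.1
for t in range(200):                          # 20 ms of simulated time
    v_pre = 0.0 if 50 <= t < 60 else -65.0    # 1-ms presynaptic depolarization
    s, i_syn = step_synapse(s, v_pre, v_post=-50.0, dt=dt)
print("gating variable after the presynaptic pulse:", round(s, 3))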
Conductance-based models can be powerful tools to investigate how
the temporal dynamics of a neuronal circuit’s activity depend on the
circuit connectivity and the intrinsic properties of the neurons and syn-
apses that constitute the circuit. Figure 1 provides an example of such a
small circuit model – the neurons and synapses that generate the motor
commands that allow the sea slug Tritonia diomedea to swim and es-
cape predation through a succession of dorsal and ventral body flexions.
Figure 1c shows the output of an early (1980s) computational model of
the circuit based on cellular and synaptic information available at the
time, which successfully reproduced activation and production of the
swim pattern (Getting 1983, 1989) and could be used to investigate how
circuit activity depends on cellular and synaptic parameters not accessi-
ble to experimental manipulation.
But Figure 1 also illustrates potential pitfalls of computational model-
ing. The original Getting model of the circuit shown in Figure 1c repro-
duced the experimentally observed motor pattern so well that researchers
were lured into thinking that this seemingly simple circuit was now ‘un-
derstood’. However, two decades later, newer information on synaptic
connectivity and neuromodulation in the circuit was incorporated into
the model and resulted in the failure of the updated model to produce functional motor patterns (Calin-Jageman et al. 2007). This illustrates the importance of constraining a computational model with experimental data as much as possible, and the vagaries of trusting results from models with many cellular and synaptic parameters that are not well constrained by experimental data.
Figure 14.1 Models of the Tritonia escape swim circuit. (a) Schematic of the core swim escape circuit, including neurons DRI – dorsal ramp interneuron; DSI – dorsal swim interneuron; C2 – cerebral neuron 2; VSI-B – ventral swim interneuron B. Triangles and circles indicate excitatory and inhibitory synapses, respectively. 5HT indicates circuit-wide serotonin modulation. (b) Escape swim motor pattern recorded from living circuit. Bursts of action potentials in the DSI and VSI-B interneurons correspond to dorsal and ventral body flexions, which alternate to produce swimming. (c) Simulated motor pattern from original Getting model. (d) The updated model based on newer cellular and synaptic information fails to reproduce the motor pattern. Scale bars in (b–d): 10 seconds, 20 mV. Adapted from Calin-Jageman et al. (2007), with permission.
Figure 14.2 Basic dynamic clamp applications: simulation of voltage-independent conductances. (a) Schematic of dynamic clamp configuration. The membrane potential (V) of a living neuron is recorded, digitized, and fed into a computer in real-time. Equations describing the membrane or synaptic currents (I) to be modeled are solved by the computer for every time step to calculate the current I that would flow through these virtual conductances (g), given the momentary value of V and the current’s reversal potential (E). The current I is injected into the living neuron, also in real-time. (b) Membrane potential V of a living neuron in response to application of the neurotransmitter GABA (Gamma-aminobutyric acid, top) and in the same neuron in response to application of a virtual synaptic conductance via dynamic clamp (bottom). In addition to the dynamic clamp current I, brief current test pulses are injected throughout both traces; reduced amplitude of the resulting voltage deflections during GABA application and during the virtual synaptic conductance clamp indicates that both manipulations temporarily increase the membrane conductance. Asterisks indicate times of application of GABA and the beginning of dynamic clamp. From Sharp et al. (1993b), with permission. (c) Dynamic clamp subtraction of the leak conductance introduced by impalement of a living neuron with a sharp electrode. Leak subtraction virtually “repairs” the impalement damage to the membrane, restoring the neuron’s intrinsic bursting activity pattern. From Cymbalyuk et al. (2002), with permission.
computational modeling to further our understanding of behaviorally
relevant cellular and circuit mechanisms. In the case of dynamic clamp
of voltage-dependent conductances as in Figure 3, the equations describ-
ing the conductances are systems of coupled differential equations for
the gating variables of the corresponding ion channels. In this – more
general and versatile – type of dynamic clamp application, the dynamic
clamp software and computer therefore must solve differential equations
numerically in real-time.
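A schematic version of that real-time loop might look as follows. The functions read_membrane_potential and inject_current are hypothetical placeholders for the data-acquisition calls of an actual dynamic clamp system, and the conductance parameters and sign convention are illustrative assumptions.

# Schematic of a dynamic clamp loop for one voltage-dependent conductance.
# read_membrane_potential() and inject_current() are hypothetical placeholders
# for the real-time data-acquisition calls of an actual dynamic clamp setup;
# parameter values are illustrative only.
import math
import time

G_MAX = 10.0        # maximal virtual conductance (nS)
E_REV = -80.0       # reversal potential (mV), e.g., a potassium-like current
TAU_M = 5.0         # gating time constant (ms)
DT = 0.05           # loop time step (ms); real systems run at 10-50 kHz

def m_inf(v):
    # Steady-state activation: sigmoidal function of membrane potential.
    return 1.0 / (1.0 + math.exp(-(v + 40.0) / 5.0))

def dynamic_clamp_loop(read_membrane_potential, inject_current, n_steps=20000):
    m = 0.0                                   # gating variable of the virtual channel
    for _ in range(n_steps):
        v = read_membrane_potential()         # 1. sample V from the living neuron
        m += DT / TAU_M * (m_inf(v) - m)      # 2. advance the gating ODE one step
        g = G_MAX * m                         # 3. momentary virtual conductance
        i_inj = g * (E_REV - v)               # 4. current to inject (sign convention:
                                              #    positive injected current depolarizes)
        inject_current(i_inj)                 # 5. write the current back to the neuron
        time.sleep(DT / 1000.0)               # placeholder for hard real-time pacing

# Offline usage example with a fake, constant membrane potential:
if __name__ == "__main__":
    dynamic_clamp_loop(lambda: -50.0, lambda i: None, n_steps=5)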
A further example of dynamic clamp versatility is provided in Figure
4, where dynamic clamp is applied to a pyramidal neuron in a slice of rat
somatosensory cortex (Chance et al. 2002). Electrical activity in brain
slices is often altered compared to activity in the intact brain, and is
difficult to control at a millisecond time scale and network-wide level
in slice experiments, making studies of the impact of realistic network
activity on individual neurons technically difficult. The example appli-
cation in Figure 4 shows how dynamic clamp can be used to expose
neurons in a slice to in vivo-like but fully controllable synaptic inputs
and conditions. How being embedded in a large network of other neu-
rons affects a neuron’s activity and information processing can thus be
systematically studied at a physiologically relevant time scale.
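One way such in vivo-like input is often generated is as fluctuating excitatory and inhibitory conductance traces; the sketch below uses a simple Ornstein-Uhlenbeck process for this purpose. The parameter values are illustrative assumptions, not those of Chance et al. (2002).

# Sketch: fluctuating excitatory/inhibitory background conductances that a
# dynamic clamp could play into a neuron, loosely in the spirit of Figure 4.
# Ornstein-Uhlenbeck parameters are illustrative, not the published values.
import numpy as np

def ou_conductance(g_mean, g_std, tau, dt, n, rng):
    """Ornstein-Uhlenbeck process clipped at zero (conductances cannot be negative)."""
    g = np.empty(n)
    g[0] = g_mean
    noise_scale = g_std * np.sqrt(2.0 * dt / tau)
    for i in range(1, n):
        g[i] = g[i-1] + dt / tau * (g_mean - g[i-1]) + noise_scale * rng.standard_normal()
    return np.clip(g, 0.0, None)

dt, duration = 0.1, 1000.0                  # ms
n = int(duration / dt)
rng = np.random.default_rng(42)
ge = ou_conductance(g_mean=12.0, g_std=3.0, tau=2.7, dt=dt, n=n, rng=rng)   # nS
gi = ou_conductance(g_mean=57.0, g_std=6.0, tau=10.5, dt=dt, n=n, rng=rng)  # nS
e_e, e_i, v = 0.0, -75.0, -60.0             # reversal potentials and an example V (mV)

# Total synaptic current the clamp would inject at membrane potential v:
i_syn = ge * (e_e - v) + gi * (e_i - v)     # nS * mV = pA
print("mean background current at V = -60 mV:", round(i_syn.mean(), 1), "pA")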
While Figure 4 shows how realistic synaptic input from a network
can be simulated with the dynamic clamp, the technique can also be
used to provide a living neuron with one or multiple synaptic partners in
the form of complete model neurons or small circuits simulated by the
dynamic clamp software and computer in real-time (Pinto et al. 2001),
allowing for the construction of truly hybrid networks.
As these example applications show, in contrast to the less physio-
logical manipulations provided by traditional current clamp or voltage
clamp, the dynamic clamp actually mimics in living neurons the presence
of ionic membrane channels, synaptic receptors and their associated dy-
namic conductances, or of an entire surrounding network and the inputs
it provides (Figures 2–4). In this sense, the dynamic clamp is akin to phar-
macological manipulation or overexpression or mutation of cellular and
synaptic conductances, or of embedding of a neuron in a network, but at
the flip of a switch, and with full experimental control over the virtual
conductance(s) by changing parameters in the dynamic clamp equations.
How, then, does the dynamic clamp approach combine ‘the best of both
worlds’? The computational component of dynamic-clamp constructed
hybrid circuits retains the full control that the experimenter has over
a purely computational model, because all parameters of the dynamic
clamp equations can be changed at will, easily, and over wide ranges,
often including parameters that are not – and in many cases will never
be – experimentally manipulatable. On the biological side of the hybrid
circuit, the living neurons in a sense serve as their own model, allowing
the experimenter to probe the effect of various manipulations introduced
via dynamic clamp in the context of the living system, without the need for an accurate computational model of the entire neuron or circuit. This eliminates some potential modeling pitfalls, such as the one described in Figure 1. In other words, the dynamic clamp allows the researcher to use computation selectively to control only the modeled aspects of a system,
Figure 14.3 Dynamic clamp addition and subtraction of voltage-dependent conductances to study spike broadening. (a) Schematic of dynamic clamp configuration. In contrast to Figure 2a, the equations describing the added/subtracted voltage-dependent conductances here are numerically solved differential equations. The impaled neuron is neuron R20 in the sea hare Aplysia, which can initiate and modulate respiratory pumping behavior depending on spike duration. (b) Overlaid action potentials recorded from a repetitively stimulated R20 neuron show spike broadening, with narrower spikes for early stimuli and broader spikes for late stimuli (left). Pharmacologically blocking two voltage-dependent potassium currents reveals that they contribute to the initially narrow, then broadening spike shape (middle). Spike broadening is restored by dynamic clamp addition of the same K+ currents (right). (c) Subtraction of the same two currents using dynamic clamp with negative conductance values g mimics their blocking with pharmacology. Adapted from Ma and Koester (1996), with permission.
Figure 14.4 Dynamic clamp simulation of in vivo conditions in a brain slice. (a) Schematic of dynamic clamp configuration. Simulated excitatory (ge) and inhibitory (gi) synaptic conductance traces corresponding to barrages of synaptic inputs from a surrounding network are supplied to the dynamic clamp computer, which calculates the combined excitatory and inhibitory synaptic current and injects it into a pyramidal neuron in a slice of rat cortex in real-time depending on the momentary membrane potential V. (b) Spike pattern of the clamped neuron during simulated in vivo-like input barrage (bottom) shows richer and more realistic temporal dynamics compared to the same neuron’s spike pattern in response to constant current injection (top). (c) Spike firing rate of the neuron as a function of driving current when dynamic clamp is used to inject zero-fold (diamonds), one-fold (circles), two-fold (squares), or three-fold (triangles) ge and gi. Simulated network synaptic input of increasing strength affects the excitability of the neuron, with stronger network input resulting in a smaller gain of action potential firing. Adapted from Chance et al. (2002), with permission.
Figure 14.5 Dynamic clamp reveals cellular and synaptic plasticity mechanisms in learning-induced compulsive behavior. (a) Operant conditioning protocol to induce reward learning in Aplysia feeding behavior. Images of the sea hare feeding apparatus viewed from the ventral side of the animal, with the radula (tongue-like rasp) protracted (middle) or retracted. Tick marks underneath each image show an example time series of radula protractions during a 40 minute training period in which a seaweed strip (top of frame) was touched to the animal’s lip to incite feeding behavior, but not ingested. Control animals received no food reward (left), contingently trained animals received a food reward in the form of a calibrated injection of seaweed juice into the mouth (symbolized by pipette), strictly timed to each radula bite cycle (middle, reward timing indicated by arrows), and non-contingently trained animals received an equal average number of food rewards, but not timed in relation to the bites performed by the animal (right). (b) One hour after the training, contingently trained animals show both increased frequency and increased temporal regularity of radula bite cycles compared to control and non-contingently trained animals, as indicated by the bite cycle interval histogram and fitted line (middle). (a) and (b), adapted from (Nargeot et al. 2007), copyright Society for Neuroscience. (c) In buccal neuronal circuits isolated from contingently trained and non-contingently trained animals, dynamic clamp was used to add/subtract a leak conductance Gleak to/from three bite initiation neurons (B30, B63, B65) that control radula biting. Dynamic clamp was also used to increase/decrease the electrical coupling strength Gcoupling between B30 and B63, and between B63 and B65. The time of dynamic clamp operation is indicated by the shaded area in voltage traces. Subtracting Gleak and increasing coupling strength increased bite command frequency and regularity in buccal circuits from non-contingently trained animals (thus mimicking reward learning, top traces). Conversely, adding Gleak and decreasing coupling strength decreased frequency and regularity in buccal circuits from contingently trained animals, effectively erasing their behavioral learning (bottom traces). Scale bar, 30 seconds.
animal has implicitly learned that radula bites can lead to a food reward,
and has adjusted the frequency and regularity (and thereby efficiency)
of its feeding motor program (Figure 5a). Animals that receive food re-
wards not timed to their bite cycles (non-contingent training) do not
increase the frequency and regularity of their radula cycles (Nargeot and
Simmers, 2011).
The small neuronal circuit controlling Aplysia’s buccal area, which
includes muscles that move the radula, can be isolated from the animal
and placed in a dish, where it continues to produce the neuronal bursts
of action potentials that in the intact animal would initiate and gov-
ern feeding behavior, called a ‘fictive biting’ motor pattern. Intriguingly,
buccal circuits isolated from contingently trained animals continue to
produce bite cycle motor patterns with higher frequency and regularity
than buccal circuits isolated from non-contingently trained animals or
control animals (Nargeot et al. 2007). The neuronal circuit in the dish
(in vitro) thus retains a circuit-level memory of what the animal learned
during training in vivo. At the same time, the in vitro circuit allows full
experimental access to the neurons that form the circuit and can be ma-
nipulated with the dynamic clamp.
Important components of the buccal circuit are three bite initiation neu-
rons, B30, B63, and B65, and the electrical synapses between them. In the
study summarized in Figure 5c, dynamic clamp was used to tease apart
how cellular excitability of B30, B63, and B65 and the strength of the
electrical synaptic coupling between them determines the frequency and
regularity of fictive biting (Sieling et al. 2014). The major findings are summarized in Figure 5c; notably, (3) and (4) suggest that learning in the intact animal likely oc-
curs through cellular and synaptic plasticity in the B30, B63, and B65
neurons and can be both mimicked and reversed by appropriately chosen
Hybrid Brains 315
and simple dynamic clamp manipulations in the isolated circuit. This
approach and results demonstrate the potential explanatory power of
the dynamic clamp technique through bridging between levels of brain
organization and function.
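To make the two manipulations of Figure 5c concrete, here is a schematic sketch of the corresponding dynamic clamp currents: an added or (with negative conductance) subtracted leak current in a single neuron, and an artificial electrical coupling current shared between two simultaneously recorded neurons. The function names, conductance values, and sign conventions are hypothetical illustrations, not those of Sieling et al. (2014).

# Schematic of the two dynamic clamp manipulations described in Figure 5c:
# adding/subtracting a leak conductance in a single neuron, and strengthening/
# weakening electrical coupling between two simultaneously recorded neurons.
# Conductance values and sign conventions are illustrative assumptions.

E_LEAK = -60.0     # leak reversal potential (mV)

def leak_current(v, g_leak):
    """Virtual leak current injected into one neuron.
    g_leak > 0 adds leak (lowers excitability); g_leak < 0 subtracts leak,
    mimicking a learning-induced increase in excitability."""
    return g_leak * (E_LEAK - v)

def coupling_currents(v1, v2, g_coupling):
    """Artificial electrical synapse between two neurons: each cell receives a
    current proportional to the voltage difference with its partner."""
    i1 = g_coupling * (v2 - v1)
    i2 = g_coupling * (v1 - v2)
    return i1, i2

# Example: one neuron at -45 mV, another at -55 mV, with added coupling and subtracted leak.
i_cell1, i_cell2 = coupling_currents(-45.0, -55.0, g_coupling=20.0)   # nS
print("coupling currents (pA):", i_cell1, i_cell2)                    # -200, +200
print("leak-subtraction current at -45 mV (pA):", leak_current(-45.0, g_leak=-5.0))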
Acknowledgments
I thank Fred H. Sieling and Romuald Nargeot for providing unpublished
data for Figure 5c.
References
Calin-Jageman RJ, Tunstall MJ, Mensh BD, Katz PS, Frost WN (2007). Param-
eter space analysis suggests multi-site plasticity contributes to motor pattern
initiation in Tritonia. Journal of Neurophysiology 98(4):2382–2398.
Chance FS, Abbott LF, Reyes AD (2002). Gain modulation from background
synaptic input. Neuron 35:773–782.
Cook SJ, Jarrell TA, Brittin CA, Wang Y, Bloniarz AE, Yakovlev MA, Nguyen
KCQ, Tang LTH, Bayer EA, Duerr JS, Buelow HE, Hobert O, Hall DH,
Emmons SW (2019). Whole-animal connectomes of both Caenorhabditis
elegans sexes. Nature 571:63–71.
Cymbalyuk GS, Gaudry Q, Masino MA, Calabrese RL (2002). Bursting in
leech heart interneurons: cell-autonomous and network-based mechanisms.
Journal of Neuroscience 22:10580–10592.
Getting PA (1983). Mechanisms of pattern generation underlying swim-
ming in Tritonia. II. Network reconstruction. Journal of Neurophysiology
49(4):1017–1035.
Getting PA (1989). Reconstruction of small neural networks. In: Koch C, Segev
I, eds. Methods in Neural Modeling. MIT Press, Cambridge, pp. 135–169.
Goaillard JM, Marder E (2006). Dynamic clamp analyses of cardiac, endocrine,
and neural function. Physiology 21:197–207.
Kopell NJ, Gritton HJ, Whittington MA, Kramer MA (2014). Beyond the con-
nectome: The dynome. Neuron 83(6):1319–1328.
Ma M, Koester J (1996). The role of potassium currents in frequency-dependent
spike broadening in Aplysia R20 neurons: a dynamic clamp analysis. Journal
of Neuroscience 16:4089–4101.
Nargeot R, Petrissans C, Simmers J (2007). Behavioral and in vitro correlates
of compulsive-like food seeking induced by operant conditioning in Aplysia.
Journal of Neuroscience 27(30):8059–8070.
Nargeot R, Simmers J (2011). Neural mechanisms of operant conditioning and
learning-induced behavioral plasticity in Aplysia. Cellular and Molecular
Life Sciences 68(5):803–816.
Newman JP, Fong MF, Millard DC, Whitmire CJ, Stanley GB, Potter SM (2015). Optogenetic feedback control of neural activity. eLife 4: e07192.
Pinto RD, Elson RC, Szucs A, Rabinovich MI, Selverston AI, Abarbanel HDI
(2001). Extended dynamic clamp: controlling up to four neurons using a
single desktop computer and interface. Journal of Neuroscience Methods
108:39–48.
Preyer AJ, Butera RJ (2009). Causes of transient instabilities in the dynamic
clamp. IEEE Transactions on Neural Systems and Rehabilitation Engineer-
ing 17(2):190–198.
Prinz AA, Abbott LF, Marder E (2004). The dynamic clamp comes of age.
Trends in Neurosciences 27:218–224.
Prinz AA, Cudmore RH (2011). Dynamic clamp. Scholarpedia 6(5):1470.
Prinz AA, Hooper SL (2017). Computer simulation – power and peril. In:
Hooper SL, Buschges A, eds. The Neurobiology of Motor Control: Funda-
mental Concepts and New Directions. Wiley, Hoboken, NJ, 107–133.
Quach B, Krogh-Madsen T, Entcheva E, Christini DJ (2018). Light-activated
dynamic clamp using iPSC-derived cardiomyocytes. Biophysical Journal
115(11): 2206–2217.
Robinson HP, Kawai N (1993). Injection of digitally synthesized synaptic con-
ductance transients to measure the integrative properties of neurons. Journal
of Neuroscience Methods 49:157–165.
Sharp AA, O’Neil MB, Abbott LF, Marder E (1993a). The dynamic clamp: artifi-
cial conductances in biological neurons. Trends in Neurosciences 16:389–394.
Sharp AA, O’Neil MB, Abbott LF, Marder E (1993b). Dynamic clamp:
computer-generated conductances in real neurons. Journal of Neurophysiol-
ogy 69:992–995.
Sieling F, Bedecarrats A, Simmers J, Prinz AA, Nargeot R (2014). Differen-
tial roles of nonsynaptic and synaptic plasticity in operant reward learning-
induced compulsive behavior. Current Biology 24(9):941–950.
Sporns O (2010). Connectome. Scholarpedia 5(2):5584.
Wilders R (2006). Dynamic clamp: a powerful tool in cardiac electrophysiology.
Journal of Physiology 576(2):349–359.
Section 5

Beyond Actual Difference Making
Janella Baxter*

* I owe special thanks to Alan Love, John Bickle, Antonella Tramacere, and Carl Craver for their invaluable comments on this paper and to Gabriela Huelga-Morales for welcoming me into her lab at the University of Minnesota, Twin Cities.

DOI: 10.4324/9781003251392-21

1 Introduction
Loss of function studies in genetics are a central experimental approach
from which a substantial proportion of gene-centered explanations are
derived. Moreover, the gene-centered explanations that are derived from
the loss of function studies are crucial data for further research pro-
grams such as causal modeling of gene regulatory networks. While pre-
vious philosophical discussions have focused on what properties make
the gene uniquely significant in biological explanations, this paper has to
do with what makes the gene a useful tool for manipulation and control
of biological processes in loss of function studies. For, as we’ll see, the
causal properties that make genes central to loss of function studies are
not unique to genes alone but are likely to be shared by other related
biomolecules. Nevertheless, the centrality of loss of function studies to
contemporary biology and gene-centered explanations requires that our
philosophical analyses properly account for the causal and explanatory
status of genes in this area. A central aim in the history and philosophy
of biology literature has been to reconstruct the underlying justifications
for what makes genes explanatorily and experimentally significant. Sev-
eral mutually compatible proposals have been made. The history of bi-
ology, it turns out, has had more than one operative gene concept. Thus,
a common view in the philosophy of biology has been that genes serve
a plurality of explanatory and experimental roles in different research
programs. Two proposals for the explanatory and experimental signif-
icance of genes have recently been articulated – the sequence specificity
view and the actual difference making approach (Waters 2007; Weber
2013, 2017).
In this paper, I argue that both the sequence specificity and actual
difference making views are inadequate for making sense of the subtle
reasoning at play in the loss of function studies. The argument against
the sequence specificity view is easy. It simply involves observing that
the gene-centric explanations coming from loss of function studies do
not cite the sequence specificity of a gene as the explanatorily significant
property. Instead, the explanatorily significant property of genes in such
explanations is the pattern of gene expression in a biological system.
When it comes to patterns of gene expression, scientists conducting loss
of function studies distinguish between switch- and dial-like types of
gene expression. Switch- and dial-like gene expression represents two
types of causal control that are achievable thanks to recent technological
and experimental innovations.
Both switch- and dial-like causal control can be interpreted as refinements of Waters’ actual difference making view, and I embrace this interpretation. In doing so, however, I must address a reasonable concern: why should philosophers of science understand the logic of loss of function studies through the lens of switch- and dial-like causation when we already have Waters’ actual difference making view? Here I draw on the philosophy of technology to justify preferring the switch- and dial-like account over the actual difference making view.
I argue that the switch- and dial-like causal control represents a “spiral
of self-improvement” that is characteristic of scientific progress (Chang
2004, 44). On this view, technological and experimental progress isn’t
merely an advancement in practical know-how. I argue that this type of
progress can also usher forth conceptual change as novel technological
and experimental advancements help integrate novel concepts into con-
crete, empirical practices. In the case of loss of function studies, I argue that the novel techniques that make both knockout and knockdown available as experimental options from which biologists may choose help switch- and dial-like causal concepts make contact with the empirical world. Yet,
this recent development is not a replacement for the actual difference
making concept that characterized earlier stages of inquiry in genetics.
Rather, switch- and dial-like causal concepts build upon and refine the
older concept.
This account leaves philosophers of science with both the actual dif-
ference making and the switch- and dial-like accounts as legitimate ways
of understanding the logic of loss of function studies. Nevertheless, I
maintain that if our purpose is to understand the subtle reasoning char-
acteristic of loss of function studies, we should prefer the switch- and
dial-like account. For one thing, this account helps us track scientific
progress in genetics experimentation and theorization. For another, it
helps us understand the rationale behind a scientist’s choice between
switch- or dial-like control in a given experiment. For biologists are often principled in their choice of which type of control to employ in an experiment. They will judge that one type of control is more likely
to be illuminating. The switch- and dial-like framework helps us under-
stand their reasoning for doing so.
The paper proceeds as follows. Section 2 lays out the sequence specificity and actual difference making accounts that have been proposed in the philosophy of biology literature. Section 3 introduces the logic of loss of function studies and argues that dial- and switch-like causal control is what justifies the causal selection of genes in this experimental paradigm. Finally, Section 4 argues that, for the purpose of understanding loss of function studies, we should prefer the switch- and dial-like account over the actual difference making view.
2 Gene-Centered Explanations
It is widely acknowledged that contemporary and classical explanations
in the biological sciences often accord a special explanatory status to
genes. This is true despite the fact that many causal conditions are rel-
evant to any given effect. A central program in the philosophy of biol-
ogy literature has been to articulate the underlying rationale of singling
out genes in biological explanations that attempt to capture the sort
of causal control genes have over biological processes. One proposal,
extensively defended by C. Kenneth Waters (1994, 2004, 2006, 2007;
Rheinberger and Müller-Wille 2017), is that genes serve as actual differ-
ence makers in both contemporary and classical experimental areas of
biology. In what follows I distinguish between a narrow and broad sense
of actual difference making that is at play in Waters’ work. Another
recent proposal is that genes have sequence specificity with respect to
the linear sequences of molecular products (Weber 2006, 2013, 2017;
Waters 2007; Woodward 2010; Griffiths and Stotz 2013). I lay out these
views in the present section.
The causal selection debate in the philosophy of biology literature
has primarily concerned the underlying rationale behind why biologists
highlight some causal variables in explanation and background others.
In particular, the debate has taken molecular genes and DNA as its cen-
tral case of causal variables that are frequently privileged in contempo-
rary biological explanations. The debate begins with the observation
that any given effect – biological or not – has a large set of relevant
causal conditions. For example, green fluorescence in jellyfish requires
the presence of the gene for the green fluorescent protein encoded in
the jellyfish’s genome, the transcription of the gene into messenger RNA
(mRNA), and the translation of mRNA into a protein. It also requires
the abundance of amino acids and biochemical energy in the form of
ATP, as well as viable temperature and pH levels, and so on. Yet, despite
this immense causal complexity, biologists often single out the gene for
green fluorescent protein when formulating their explanations of why
jellyfish fluoresce in the deep sea. Several proposals have been offered as
analyses of the underlying rationale for gene selection in many biological
explanations – sequence specificity (Weber 2006, 2013, 2017; Waters
2007, 2018; Woodward 2010; Griffiths and Stotz 2013; Griffiths et al.
2015) and actual difference making (Waters 1994, 2006, 2007).
Sequence specificity has to do with the causal control structural genes
have over the linear sequences of gene products, like RNA and proteins.
Structural genes are sequences of nucleic acid bases – adenine (A), thy-
mine (T)/uracil (U) (for RNA), guanine (G), and cytosine (C) – in DNA
that encode other molecular products. Structural genes are conceptually
distinct from regulatory genes (Gerstein et al. 2007). Regulatory genes are
sequences of nucleic acid bases in DNA that don’t have sequence specific-
ity over the linear sequences of other biomolecules. Instead, they enable
or inhibit the transcription of structural genes when regulatory factors –
proteins and RNA molecules – bind to regulatory modules. Struc-
tural genes control the nucleic acid sequences of RNA transcripts by
Watson-Crick base pairing rules whereby adenine always specifies uracil,
thymine specifies adenine, guanine specifies cytosine, and cytosine spec-
ifies guanine. In this way, differences in the nucleic acid sequence of a
structural gene produce differences in the nucleic acid sequence of RNA
transcripts. In simple cases where there is no alternative splicing, differ-
ences in the nucleic acid sequences of structural genes also produce differ-
ences in the amino acid sequences of proteins. The nucleic acid sequence
of a structural gene determines the amino acid sequence of a protein in units of three nucleic acid bases at a time, called codons. For the most part (with the exception of stop codons, which instruct the protein synthesis machinery to “halt” production), each nucleic acid triplet specifies one amino acid type. For example, the codon UGG “codes” only for the amino acid tryptophan. There is some redundancy in the genetic code, meaning that more than one codon specifies the same amino acid, as in the case of ACU, ACC, ACA, and ACG, all of which specify threonine. This means that some alternative nucleic acid sequences of a protein coding gene will make no difference to the amino acid sequence of a protein.
However, many differences in the nucleic acid sequence of a protein cod-
ing gene will make a difference to the amino acid sequence of a protein.
For some authors, sequence specificity is the primary rationale for the
causal selection of genes in contemporary biological explanations. For
example, the sequence specificity of structural genes is at the heart of
C. Kenneth Waters’ account of what makes genetics successful (1994,
2006, 2007, 2018). In Waters’ view, the gene concept is flexible and can
refer to different entities depending on the purposes of a researcher. For
example, when a biologist wishes to explain the linear sequence of a
protein, they will appeal to the messenger RNA (mRNA) biomolecule
that is read and translated into a sequence of amino acids as the relevant
gene. By contrast, when a biologist wishes to explain the linear sequence
of an mRNA transcript, they will appeal to the relevant nucleic acid
sequence in DNA. In this way, there is no single entity that is a gene for
all research purposes. Yet, even on this flexible understanding of the
gene concept in contemporary biology, Waters’ analysis of contemporary
practice focuses entirely on structural genes. At times, Waters treats structural genes as the only genes in the game, as when he writes: “Genes
are for linear sequences in products of gene expression” (Waters 1994,
177). This is further illustrated by Waters’ account of what has made
contemporary genetics successful.
Waters may feel no need to expand his account of the contemporary
gene concept because he has another analysis on which he can fall back.
This is his actual difference making view (Waters 2007). Like the struc-
tural gene concept, the actual difference making concept is also flexible.
It can be understood in broad or narrow terms. Broadly, a cause that
(1) actually varies relative to an otherwise uniform population and (2)
whose varying accounts for an actual difference in an effect variable is
an actual difference maker. The broad interpretation of actual difference
making makes no reference to any particular scientific paradigm – this is
in contrast to the narrow interpretation. All that’s required is that con-
ditions (1) and (2) be met. Although, as we’ll see, this broad interpretation can characterize the loss of function studies in contemporary molecular biology, I question its usefulness below. The narrow interpretation
of actual difference making is specific to classical genetics of the early
20th century.1 Waters also argues that genes in classical genetics serve
as actual difference makers. On this construal, classical genes are ac-
tual difference makers with respect to actual differences in a phenotypic
trait when they are allowed to actually vary relative to an experimental
population (Waters 1994, 2004, 2007). In classical genetics, genes make
actual differences to genotype – the genetic composition of an organism –
which in turn (can) correlate with actual differences in a phenotypic
trait. Classical geneticists exploited both investigative and theoretical
principles of inheritance – e.g., independent assortment, segregation,
recombination, etc. – to generate lab-raised populations that were (rel-
atively) genetically identical with the exception of a few genes that were
permitted to actually vary. So, in a population of fruit flies, some indi-
viduals are homozygous – they carry two copies of the same gene – for
the wild type eye color (+/+), others are heterozygous for a wild type and
a mutant gene, say purple, (+/pr), and still others are homozygous for
purple (pr/pr). When phenotypic differences in eye color are obtained –
some individuals have red eyes, others purple – classical geneticists inferred that such differences must be due to the genetic differences that they allowed to actually obtain (Bridges and Morgan 1919).2 Even
though classical geneticists knew that other genes – say, the vermillion
gene – could make a difference to eye color, Waters insists that it is the
wild type or purple genotypes that are explanatorily significant relative
to this experimental set-up. The vermillion gene, by contrast, is merely a
potential difference maker in this case and, thus, is not privileged in the
explanations of classical geneticists.
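To make this reasoning concrete, here is a small Python sketch, with an invented fly population and an invented dominance rule, that checks Waters’ two conditions for a toy version of the experimental set-up just described; it is not a reconstruction of Bridges and Morgan’s data.

```python
# Toy sketch: the eye-color locus actually varies across the experimental
# population, while the vermillion locus is held fixed, so only the former
# can be an actual difference maker here. All data are invented.

population = [
    {"eye_locus": ("+", "+"),   "vermillion_locus": ("+", "+")},
    {"eye_locus": ("+", "pr"),  "vermillion_locus": ("+", "+")},
    {"eye_locus": ("pr", "pr"), "vermillion_locus": ("+", "+")},
]

def eye_color(fly: dict) -> str:
    """Wild type (+) is dominant: one + allele suffices for red eyes;
    purple appears only in pr/pr homozygotes."""
    return "red" if "+" in fly["eye_locus"] else "purple"

def is_actual_difference_maker(population: list, locus: str) -> bool:
    """Broad reading of Waters' two conditions, checked crudely."""
    genotypes = {fly[locus] for fly in population}
    # (1) the causal variable actually varies in this population
    varies = len(genotypes) > 1
    # (2) its variation accounts for a difference in the effect: different
    # genotypes at this locus go with different phenotypes
    outcomes = {g: {eye_color(f) for f in population if f[locus] == g}
                for g in genotypes}
    makes_difference = len({frozenset(v) for v in outcomes.values()}) > 1
    return varies and makes_difference

print(is_actual_difference_maker(population, "eye_locus"))        # True
print(is_actual_difference_maker(population, "vermillion_locus")) # False
```

Because the vermillion locus is held uniform across this population, it fails condition (1) and so counts only as a potential difference maker, even though varying it could also have changed eye color.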
Actual difference making – in both the broad and narrow senses –
as well as sequence specificity are different types of causal control that
genes have with respect to biological processes. These accounts have
been proposed by various philosophers (notably, Waters 1994, 2004,
2006, 2007; Weber 2006, 2013, 2017) as descriptions of the reasoning
behind why biologists systematically privilege genes when formulating
their explanations. Waters (1994, 2004, 2006, 2007) and Weber (2004,
2018) both attempt to inform their philosophies by attending to the ex-
perimental practices of biologists. However, as I’ll argue below, their
views on the causal selection of genes have not been attentive to con-
temporary experimental approaches. For the causal selection of genes at
play in the loss of function study fits neither the sequence specificity nor
the actual difference making view very well. Experimental practices get
better over time as new technologies and experimental techniques are
developed. Moreover, explanatory practices often change alongside in-
novation in experimental approaches. I’ll argue below that technological
and experimental innovation can justify novel ways of conceptualizing
phenomena. As researchers are prompted to justify the inferences they
make from experiments, the explanations they formulate often need to
invoke novel concepts ushered forth by technological and experimental
innovations. Thus, it is likely that sequence specificity and actual differ-
ence making may not be up to the task of capturing the causal reasoning
at play in novel areas of biology. At least for the purpose of making
sense of the subtle logic implicit in loss of function studies – one of the
most central experimental paradigms used in contemporary biology –
our philosophical analysis should track the conceptual changes at work
in these sets of practices.
The study from which this claim comes involves RNAi knockdowns of
several mRNA products involved in mammalian circadian rhythm cy-
cles. And yet, these authors speak of “the depletion of…genes…” This is
an illustration of how flexible the molecular gene concept is in contem-
porary biology. Waters (1994) has argued that a common gene concept
underlies the immensely diverse range of biochemicals to which biol-
ogists apply the term. Since nucleic acid sequences in DNA and RNA
often determine the linear sequences of other downstream products, the
word “gene” may properly apply to either type of biomolecule. Thus, the
molecular gene concept can be ambiguous when it is not specified what
product the gene is for and at what stage of gene expression. In this case,
the relevant gene refers to the mRNA transcripts that are the target of
RNAi. The nucleic acid sequences of the mRNA transcripts are genes for
the amino acid sequence of a protein during the stage of gene expression
called translation. Of course, there are genes for the mRNA transcripts
that are the target of the knockdown intervention. These are the Clock,
Bmal1, and Per1 nucleic acid sequences encoded in the organism’s DNA,
which determine the nucleic acid sequences of mRNA products during
the stage of gene expression called transcription. What this shows is that
sequence specificity plays a crucial role in the identification and individ-
uation of genes in biology; however, this should not be mistaken for the
causal significance of a gene. Loss of function studies intervene at var-
ious stages of gene expression – knockdown experiments intervene on
mRNA, and (as we’ll soon see) knockout experiments intervene directly
on DNA – and it is the various ways gene expression can be manipulated
that is conceptually relevant to the explanations biologists formulate.
Dial-like causal control differs from other types of loss of func-
tion interventions in that it leaves some amount of gene product in
the organism’s cellular environment. Gene knockout studies are an
increasingly pervasive experimental approach to the loss of function
studies. This sort of intervention involves switch-like causal control.
Instead of there being a range of many values a causal variable can take,
each of which associates with a different value in the effect variable,
switch-like causal control is all or nothing. On this sort of experimen-
tal approach, the focal contrast is between the presence or absence
of a gene product in a biological population. This sort of control is
best achieved by gene-editing techniques like CRISPR-Cas9, which
break and rejoin DNA strands at a precise site where a protein cod-
ing gene is encoded. When the broken strands are rejoined, cellular
repair mechanisms can replace a codon that specifies an amino acid
with a stop codon. Ideally, the insertion of premature stop codons
in all redundant copies of a gene throughout an organism’s genome
will ensure that no protein is synthesized. When a stark phenotypic
difference between a knockout and control population is achieved
by means of a knockout intervention, the target gene often becomes
a standard test case by which to determine whether other knockout
interventions are successful. For example, knockout studies of the
Dpy gene in Caenorhabditis elegans (or C. elegans) result in dumpy –
short and fat – body types and are often used to test whether
CRISPR-Cas9 reagents target a gene of interest (Shen et al. 2014). In
knockout studies, biologists privilege the absence of a gene whose DNA
sequence would determine the amino acid sequence of a protein in their
explanations of any phenotypic differences that might obtain. In the
case of Dpy knockouts, biologists highlight the absence of the Dpy nu-
cleic acid sequence in DNA to explain the phenotypic difference be-
tween the knockout and control population – in this case, short and fat
versus normal body types respectively.
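The contrast between the two kinds of control can be made vivid with a deliberately crude toy model, sketched below in Python; the linear dose-response rule and all numbers are invented for illustration and stand in for whatever biology actually links intact gene copies and transcript abundance to protein levels.

```python
# Toy contrast between switch-like and dial-like causal control. The linear
# dose-response rule and the numbers are invented; they do not model any
# particular gene, RNAi reagent, or CRISPR edit.

def protein_output(intact_gene_copies: int, mrna_fraction_remaining: float,
                   per_copy_output: float = 100.0) -> float:
    """Crude model: protein output scales with the number of gene copies that
    lack a premature stop codon and with the fraction of transcripts that
    escape degradation."""
    return intact_gene_copies * mrna_fraction_remaining * per_copy_output

# Switch-like control (knockout): the cause variable is effectively binary --
# either every copy of the gene carries a premature stop codon (0 intact
# copies) or none does -- so the focal contrast is presence versus absence
# of the gene product.
for intact_copies in (2, 0):
    print("knockout contrast:", intact_copies, protein_output(intact_copies, 1.0))

# Dial-like control (knockdown): RNAi leaves some fraction of the mRNA in the
# cell, so the cause variable ranges over many values, each associated with a
# different value of the effect variable.
for fraction_remaining in (1.0, 0.5, 0.1):
    print("knockdown dial:", fraction_remaining, protein_output(2, fraction_remaining))
```

Under switch-like control the cause variable, and with it the effect, takes one of two values; under dial-like control it can in principle be tuned anywhere along a range.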
One might object that either the narrow or broad construal of Waters’
actual difference making approach adequately captures the causal rea-
soning of loss of function experiments. On the narrow construal, loss
of function studies reason about the actual differences molecular genes
produce in ways that are similar to the reasoning strategies employed
by classical geneticists. For example, both strategies appeal to genes to
explain phenotypic differences when the genes are allowed to actually
vary relative to an otherwise genetically and environmentally uniform
population. Although it’s true that important conceptual and experimen-
tal parallels can be found between contemporary and classical genetics
(Vance 1996; Waters 2004, 2009), the causal reasoning strategies are
not the same. An important difference is the role dominant and recessive
traits played in the reasoning of classical geneticists (Bridges and Morgan
1919). In diploid organisms – or organisms carrying two copies of each
gene – dominant traits are produced by either homo- or heterozygosity
for a dominant allele. That is, either a dominant trait, such as the red eye
color, is caused by either a genotype with two copies of the red eye gene
(+/+) or a genotype with one copy of the red eye gene and one copy of a
mutant gene, like purple (+/pr). By contrast, recessive genotypes require
homozygosity of a mutant gene (pr/pr). For classical geneticists to explain
the difference between some phenotypic traits that emerged in their ex-
perimental populations, say red and purple eyes, they had to appeal to
the actual differences in genotypes with different dominant and reces-
sive alleles. Dominant and recessive traits are not a feature of the causal
reasoning in loss of function studies. What matters is that phenotypic
differences correlate with an experimenter’s intervention on a gene prod-
uct. Contemporary biologists do without the theory of dominant and
recessive traits to explain differences in their populations. An adequate
account of the experimental and explanatory strategies of classical and
contemporary geneticists ought to be sensitive to this subtle difference.
Perhaps a further objection is that Waters’ broad construal of actual
difference making adequately captures the causal reasoning of loss of
function studies. On this proposal, scientists attribute any actual dif-
ferences that might obtain in a (relatively) uniform population to causal
variables that they allow to actually vary. This reasoning appears to
capture the causal reasoning of contemporary biologists conducting loss
of function studies. Indeed, biologists attribute actual phenotypic differ-
ences that obtain in their experimental populations to the actual differ-
ences they induce in molecular genes. Yet I caution against embracing
this characterization. For it is too crude an analysis to differentiate
between actual difference making practices that biologists recognize as
being illuminating (or, at least, have the promise of being illuminating)
and ones they don’t. As I’ll argue below, contemporary biologists can make many types of differences actual when studying the functions of a
gene. Yet, biologists are systematic about the sorts of experiments they
conduct and the sorts they don’t. The broad construal of actual differ-
ence making cannot account for this. I’ll argue instead that we should characterize the causal reasoning at play in the loss of function studies in terms
of dial- and switch-like causal control.
Technical and experimental innovation can stimulate explanatory in-
novation. Loss of function techniques like RNAi and CRISPR-Cas9 with
their respective types of causal control exerted over genes give scientists a
way to make specific causal concepts operational (Chang 2014). That is,
these tools make the integration of dial- and switch-like causal concepts
with concrete empirical practices possible. While for some authors, tech-
nological innovation is itself “conceptually impoverished” as it is merely
the application of already developed conceptual frameworks from pure
science (Bunge 1966), technological innovation for loss of function stud-
ies (and perhaps many other experimental paradigms) suggests otherwise.
Rather, knockdown and knockout approaches provide scientists with jus-
tification for conceptualizing phenomena in one way rather than another.
As novel concepts are introduced by means of novel technological and
experimental innovations, the logic of the explanations scientists formulate
changes as well. Scientists are often confronted with the need to explain
the reasoning behind the inferences they draw from the experiments they
conduct. In the cases I am discussing, the inferences drawn are often
rather modest – concerning things like what is causally responsible for
some observable effect in a particular population in a particular environ-
ment. Nonetheless, the techniques employed in a particular experiment
shape significantly the conceptual framework implicit to the explanations
biologists formulate. Whether a scientist uses a knockdown or a knockout approach determines whether they conceptualize the cause and effect variables in a dial- or switch-like way.
5 Conclusion
The gene is a central explanatory variable in much of classical and con-
temporary biology. I have argued that the rationale behind the wide-
spread causal selection of molecular genes varies depending on the type
of tools scientists use to manipulate and control genetic processes. This
is significant not just because it represents progress in practical know-
how. It’s significant because novel technologies and techniques can also
play a role in conceptual change as they make novel concepts opera-
tional. The story I am telling about the loss of function studies is a story
about how gene concepts have become increasingly sophisticated as biologists’ experimental tools have increased in number and power. This
means that the switch- and dial-like account defended in this paper is a
recent refinement of the broad interpretation of Waters’ actual difference
making view. Yet this doesn’t mean that the actual difference making
view is adequate for our purposes. If our purpose is to understand the
subtle reasoning at play in the loss of function studies – and I believe
philosophers of genetics should have this purpose – we should prefer the
switch- and dial-like account. This account better accommodates the
nature of technological and conceptual progress in the history of genet-
ics and it helps us appreciate the judgments biologists make about which
type of causal control is likely to be more illuminating.
Notes
1 By classical genetics, Waters really has in mind the system of practices and
theories of the American geneticist Thomas Hunt Morgan and his lab. If by
“classical genetics,” one simply means the system of practices and theories
aimed at understanding genetics in the early 20th century, then Waters’ ac-
count only captures a subset of the science going on at the time. Different
laboratories around the world studying genetics operated by distinct styles
of thought at the time (see Harwood 1993).
2 Difference making alone cannot explain which genotype aligns with which
phenotypic difference. This required theoretical knowledge about dominant
and recessive traits as well as extensive empirical knowledge about which act
as dominant/recessive. Waters acknowledges this; however, it is noteworthy that his actual difference making approach cannot fully account for this aspect
of the reasoning at play in the experiments of classical genetics.
3 See Millikan (1989), Neander (1991a, 1991b), Wouters (2003), Walsh and
Ariew (2013) for a brief survey of the numerous function concepts in the life
sciences.
4 Some molecular tools do indeed produce phenotypic differences, however.
Genetic markers such as fluorescent proteins are an illustrative example. Yet this sort of tool differs importantly from the loss of func-
tion tools presently discussed. Genetic markers are often used as an as-
say technique to help researchers detect the presence of a gene product;
whereas, loss of function techniques are methods for interrupting causal
relationships.
5 Dial-like control over the concentration of a gene product can be achieved
with a variety of methods including small molecules. Even gene-editing tech-
niques such as CRISPR-Cas9 can achieve this sort of control by targeting
only some copies of the same gene in a genome or by targeting all copies of a
gene at a particular stage of development. RNAi is by far the most popular
method.
References
Baggs J., T. S. Price, L. DiTacchio, S. Panda, G. FitzGerald, J. Hogenesch. 2009.
“Network Features of the Mammalian Circadian Clock.” PLoS Biology,
7(3), pp. 0563–75.
Barbaric I., G. Miller, T. N. Dear. 2007. “Appearances Can be Deceiving: Phe-
notypes of Knockout Mice.” Briefings in Functional Genomics and Proteom-
ics, 6(2), pp. 91–103.
Bridges C. B., Morgan T. H. 1919. The Second-Chromosome Group of Mutant
Characters. Carnegie Institution of Washington Publication 278. Washing-
ton, DC: Carnegie Institution, pp. 123–304.
Bunge M. 1966. “Technology as Applied Science.” Technology and Culture,
7(3), pp. 329–47.
Castagnoli L., A. Costantini, C. Dall’Armi, S. Gonfloni, L. Montecchi-Palazzi,
S. Panni, S. Paoluzi, E. Santonico, G. Cesareni. 2004. “Selective Promiscu-
ity in the Interaction Network Mediated by Protein Recognition Modules.”
FEBS Letters, 567, pp. 74–9.
Chang H. 2004. Inventing Temperature: Measurement and Scientific Progress.
New York: Oxford University Press.
Chang H. 2014. Is Water H2O? Evidence, Realism and Pluralism. Boston Stud-
ies in the Philosophy of Science, Volume 293. New York: Springer Science &
Business Media.
Craver C.F. 2007. Explaining the Brain: Mechanisms and the Mosaic Unity of
Neuroscience. New York: Oxford University Press.
Gerstein M. B., C. Bruce, J. S. Rozowsky, D. Zheng, J. Du, J. O. Korbel, O.
Emanuelsson, Z. D. Zhang, S. Weissman, M. Snyder. 2007. What Is a Gene,
Post-ENCODE? History and Updated Definition. Cold Spring Harbor, NY:
Cold Spring Harbor Laboratory Press.
Griffiths P., K. Stotz. 2013. Genetics and Philosophy: An Introduction. New
York: Cambridge University Press.
Griffiths P., A. Pocheville, B. Calcott, K. Stotz, H. Kim, R. Knight. 2015. “Mea-
suring Causal Specificity.” Philosophy of Science, 82(4), pp. 529–55.
Harwood J. 1993. Styles of Scientific Thought: The German Genetics Commu-
nity, 1900–1933. Chicago, IL: The University of Chicago Press.
Housden, B., M. Muhar, M. Gemberling, C. Gersbach, D. Stainier, G. Seydoux, S.
Mohr, J. Zuber, N. Perrimon. 2016. “Loss-of-Function Genetic Tools for Ani-
mal Models: Cross-Species and Cross-Platform Differences.” Nature Reviews Genetics, 18, pp. 24–40.
Kitano H. 2004. “Biological Robustness.” Nature Reviews Genetics, 5, pp. 826–37.
Mitchell S. 2009. Unsimple Truths: Science, Complexity, and Policy. Chicago,
IL: University of Chicago Press.
Millikan R. G. 1989. “In Defense of Proper Functions.” Philosophy of Science,
56(2), pp. 288–302.
Neander K. 1991a. “Functions as Selected Effects: The Conceptual Analyst’s
Defense.” Philosophy of Science, 58(2), pp. 168–84.
Neander K. 1991b. “The Teleological Notion of Function.” Australasian Jour-
nal of Philosophy, 69(4), pp. 454–68.
O’Malley M. A., K. C. Elliot, R. M. Burian. 2010. “From Genetic to Genomic
Regulation: Iterativity in microRNA Research.” Studies in History and Phi-
losophy of Biological and Biomedical Sciences, 41, pp. 407–17.
Rheinberger H.-J., S. Müller-Wille. 2017. The Gene: From Genetics to Postge-
nomics. Chicago, IL: University of Chicago Press.
Shalem O., N. E. Sanjana, E. Hartenian, X. Shi, D. Scott, T. Mikkelsen, D.
Heckl, B. L. Ebert, D. E. Root, J. G. Doench, F. Zhang. 2014. “Genome-Scale
CRISPR-Cas9 Knockout Screening in Human Cells.” Science, 343, pp. 84–87.
Shen Z., X. Zhang, Y. Chai, Z. Zhu, P. Yi, G. Feng, W. Li, G. Ou. 2014. “Con-
ditional Knockouts Generated by Engineered CRISPR-Cas9 Endonuclease
Reveal the Roles of Coronin in C. elegans Neural Development.” Develop-
mental Cell, 30, pp. 615–36.
Vance R. 1996. “Heroic Antireductionism and Genetics: A Tale of One Sci-
ence.” Philosophy of Science, 63, pp. 36–45.
Walsh D. M., A. Ariew. 2013. “A Taxonomy of Functions.” Canadian Journal
of Philosophy, 26(4), pp. 493–514.
Waters C. K. 1994. “Genes Made Molecular.” Philosophy of Science, 61, pp.
163–185.
Waters C. K. 2004. “What was Classical Genetics?” Studies in History and
Philosophy of Science, 35, pp. 783–809.
Waters C. K. 2006. “A Pluralist Interpretation of Gene-Centered Biology.” In
Scientific Pluralism, Minnesota Studies in the Philosophy of Science, Volume
XIX, edited by S. H. Kellert, H. E. Longino, C. K. Waters. Minneapolis: Uni-
versity of Minnesota Press, pp. 190–214.
Waters C. K. 2007. “Causes That Make a Difference.” Journal of Philosophy,
104(11), pp. 551–579.
Waters C. K. 2009. “Beyond Theoretical Reduction and Layer-Cake Antireduc-
tion: How DNA Retooled Genetics and Transformed Biological Practice.” In
The Oxford Handbook of Biology, edited by M. Ruse. New York: Oxford
University Press, pp. 238–62.
Weber M. 2004. Philosophy of Experimental Biology. Cambridge: Cambridge
University Press.
Weber M. 2006. “The Central Dogma as a Thesis of Causal Specificity.” His-
tory and Philosophy of the Life Sciences, 28, pp. 595–610.
Weber M. 2013. “Causal Selection vs Causal Parity in Biology: Relevant Coun-
terfactuals and Biologically Normal Interventions.” In Causation in Biology
and Philosophy, edited by C. K. Waters, M. Travisano, J. Woodward. Minne-
apolis: University of Minnesota Press.
Weber M. 2017. “Discussion Note: Which Kind of Causal Specificity Matters
Biologically?” Philosophy of Science, 84(3), pp. 574–85.
Weber M. 2018. “Experiment in Biology.” The Stanford Encyclopedia of Phi-
losophy. https://plato.stanford.edu/entries/biology-experiment/.
Whittaker S., J.-P. Theurillat, E. Van Allen, N. Wagle, J. Hsaio, G. S. Cowley,
D. Schadendorf, D. Root, L. Garraway. 2013. “A Genome-Scale RNA Inter-
ference Screen Implicates NF1 Loss in Resistance to RAF Inhibition.” Cancer
Discovery, 3, pp. 351–62.
Woodward J. 2010. “Causation in Biology: Stability, Specificity, and the Choice
of Levels of Explanation.” Biology and Philosophy 25, pp. 287–318.
Wouters A. 2003. “Four Notions of Biological Function.” Studies in History
and Philosophy of Biological and Biomedical Sciences, 34, pp. 633–68.
Contributors
Index