THE ACCIDENTAL MIND
DAVID J. LINDEN
    The Belknap Press of Harvard University Press
   Cambridge, Massachusetts   •   London, England
Copyright © 2007 by the President and Fellows of Harvard College
                        All rights reserved
             Printed in the United States of America
     First Harvard University Press paperback edition, 2008
        Library of Congress Cataloging-in-Publication Data
                    Linden, David J., 1961–
             The accidental mind / David J. Linden.
                            p. cm.
          Includes bibliographical references and index.
        ISBN-13: 978-0-674-02478-6 (cloth : alk. paper)
              ISBN-13: 978-0-674-03058-9 (pbk.)
        1. Brain—Popular works. 2. Brain—Evolution.
                 3. Neuropsychology. I. Title.
                     QP376.L577 2007
               612.8′2—dc22    2006047905
For Herbert Linden, M.D.
Contents
Prologue   Brain, Explained
    One    The Inelegant Design of the Brain
    Two    Building a Brain with Yesterday’s Parts
  Three    Some Assembly Required
    Four   Sensation and Emotion
    Five   Learning, Memory, and Human Individuality
     Six   Love and Sex
  Seven    Sleeping and Dreaming
   Eight   The Religious Impulse
    Nine   The Unintelligent Design of the Brain
Epilogue   That Middle Thing
           Further Reading and Resources
           Acknowledgments
           Index
The large brain, like large government, may not be able to do
simple things in a simple way.
                                 —Donald O. Hebb
Now, the president says that the jury is out on evolution . . . Here in New
Jersey, we’re countin’ on it.
                                —Bruce Springsteen
Prologue
Brain, Explained
THE BEST THING    about being a brain researcher is that, in a very small number
of situations, you can appear to have the power of mind reading. Take cock-
tail parties. Chardonnay in hand, your host makes one of those introductions
where he feels compelled to state your occupation: “This is David. He’s a brain
researcher.” Many people are wise enough to simply turn around at this point
and go looking for the bourbon and ice. Of those who stay behind about half
can be counted on to pause, look heavenward, and raise their eyebrows in prep-
aration for speech. “You’re about to ask if it’s true that we only use 10 percent
of our brain, aren’t you?” Wide-eyed nodding. An amazing episode of “mind
reading.”
  Once we get past the 10-percent-of-the-brain thing (which, I should men-
tion, has no basis in reality), it becomes clear that many people have a deep curi-
    osity about brain function. Really fundamental and difficult questions come up
    right away:
      “Will playing classical music to my newborn really help his brain grow?”
      “Is there a biological reason why the events in my dreams are so bizarre?”
      “Are the brains of gay people physically different from the brains of straight
    people?”
      “Why can’t I tickle myself?”
      These are all great questions. For some of them, the best scientific answer is
fairly clear, and for others it is somewhat evasive (me, in my best Bill Clinton
voice: “What exactly do you mean by ‘brain’?”). It’s fun to talk to non–brain re-
    searchers about these kinds of things because they are not afraid to ask the hard
    questions and to put you on the spot.
      Often, when the conversation is over, people will ask, “Is there a good book
    on brain and behavior for a nonspecialist audience that you can recommend?”
Here, it gets tricky. There are some books, such as Joseph LeDoux’s Synaptic Self,
    that do a great job on the science, but that are rough sledding unless you’ve al-
    ready got a college degree in biology or psychology. There are others, such as
Oliver Sacks’s The Man Who Mistook His Wife for a Hat and V. S. Ramachandran
and Sandra Blakeslee’s Phantoms in the Brain, that tell fascinating and illuminat-
    ing stories based on case histories in neurology, but that really don’t convey a
    broad understanding of brain function and that largely ignore molecules and
    cells. There are books that talk about molecules and cells in the brain, but many
    of them are so deadly dull that you can start to feel your soul depart your body
    before you finish the very first page.
      What’s more, many books about the brain, and even more shows on edu-
    cational television, perpetuate a fundamental misunderstanding about neural
    function. They present the brain as a beautifully engineered, optimized device,
    the absolute pinnacle of design. You’ve probably seen it before: a human brain
lit dramatically from the side, with the camera circling it as if taking a helicop-
ter shot of Stonehenge and a modulated baritone voice exalting the brain’s ele-
gant design in reverent tones.
  This is pure nonsense. The brain is not elegantly designed by any means: it is
a cobbled-together mess, which, amazingly, and in spite of its shortcomings,
manages to perform a number of very impressive functions. But while its over-
all function is impressive, its design is not. More important, the quirky, inef-
ficient, and bizarre plan of the brain and its constituent parts is fundamental to
our human experience. The particular texture of our feelings, perceptions, and
actions is derived, in large part, from the fact that the brain is not an optimized,
generic problem-solving machine, but rather a weird agglomeration of ad hoc
solutions that have accumulated throughout millions of years of evolutionary
history.
  So, here’s what I’ll try to do. I will be your guide to this strange and often il-
logical world of neural function, with the particular charge of pointing out the
most unusual and counterintuitive aspects of brain and neural design and ex-
plaining how they mold our lives. In particular, I will try to convince you that
the constraints of quirky, evolved brain design have ultimately led to many
transcendent and unique human characteristics: our long childhoods, our ex-
tensive memory capacity (which is the substrate upon which our individuality
is created by experience), our search for long-term love relationships, our need
to create compelling narrative and, ultimately, the universal cultural impulse to
create religious explanations.
  Along the way, I will briefly review the biology background you will need to
understand the things I am guessing you most want to know about the brain
and behavior. You know, the good stuff: emotion, illusion, memory, dreams,
love and sex, and, of course, freaky twin stories. Then, I’ll try my best to answer
the big questions and to be honest when answers are not at hand or are incom-
    plete. If I don’t answer all of your questions, try visiting the book’s website,
    accidentalmind.org. I’ll strive to make it fun, but I’m not going to “take all the
    science out.” It will not be, as you might find on a label at Whole Foods, “100
    percent molecule free.”
      Max Delbrück, a pioneer of molecular genetics, said, “Imagine that your au-
    dience has zero knowledge but infinite intelligence.” That sounds just about
    right to me, so that’s what I’ll do. Let’s roll.
Chapter One
The Inelegant Design of the Brain
WHEN I WAS IN    middle school, in California in the 1970s, a popular joke in-
volved asking someone, “Want to lose 6 pounds of ugly fat?” If the reply was
positive it would be met with “Then chop off your head! Hahahaha!” Clearly,
the brain did not hold a revered place in the collective imagination of my class-
mates. Like many, I was relieved when middle school drew to a close. Many
years later, however, I have been similarly distressed by the opposite view. Par-
ticularly when reading books or magazines or watching educational television,
I have been taken aback by a form of brain worship. Discussion of the brain is
most often delivered in a breathless, awestruck voice. In these works the brain
is “an amazingly efficient 3 pounds of tissue, more powerful than the largest
supercomputer,” or “the seat of the mind, the pinnacle of biological design.”
What I find problematic about these statements is not the deep appreciation
    that mental function resides in the brain, which is indeed amazing. Rather, it is
    the assumption that since the mind is in the brain, and the mind is a great
    achievement, the design and function of the brain must then be elegant and ef-
    ficient. In short, it is imagined by many that the brain is well engineered.
       Nothing could be further from the truth. The brain is, to use one of my fa-
    vorite words, a kludge (pronounced “klooj”), a design that is inefficient, inele-
    gant, and unfathomable, but that nevertheless works. More evocatively, in the
    words of the military historian Jackson Granholm, a kludge is “an ill-assorted
    collection of poorly matching parts, forming a distressing whole.”
       What I hope to show here is that at every level of brain organization, from re-
    gions and circuits to cells and molecules, the brain is an inelegant and inef-
    ficient agglomeration of stuff, which nonetheless works surprisingly well. The
    brain is not the ultimate general-purpose supercomputer. It was not designed at
    once, by a genius, on a blank piece of paper. Rather, it is a very peculiar edifice
    that reflects millions of years of evolutionary history. In many cases, the brain
    has adopted solutions to particular problems in the distant past that have per-
    sisted over time and have been recycled for other uses or have severely con-
    strained the possibilities for further change. In the words of the pioneering mo-
    lecular biologist François Jacob, “Evolution is a tinkerer, not an engineer.”
       What’s important about this point as applied to the brain is not merely that
    it challenges the notion of optimized design. Rather, appreciation of the quirky
    engineering of the brain can provide insights into some of the deepest and most
    particularly human aspects of experience, both in day-to-day behavior and in
    cases of injury and disease.
    SO, WITH THESE        issues in mind, let’s have a look at the brain and see what
    we can discern about its design. What are the organizational principles that
    emerge? For this purpose, imagine that we have a freshly dissected adult human
brain before us now (Figure 1.1). What you would see is a slightly oblong, gray-
ish-pink object weighing about 3 pounds. Its outer surface, which is called the
cortex, is covered with thick wrinkles that form deep grooves. The pattern of
these grooves and wrinkles looks like it might be variable, like a fingerprint, but
it is actually very similar in all human brains. Hanging off the back of the brain
is a structure the size of a squashed baseball with small crosswise grooves. This is
called the cerebellum, which means “little brain.” Sticking out of the bottom of
the brain, somewhat toward the back end is a thick stalk called the brainstem.
We’ve lopped off the very bottom of the brainstem where it would otherwise
taper to form the top of the spinal cord. Careful observation would reveal the
nerves, called the cranial nerves, which carry information from the eyes, ears,
nose, tongue, and face into the brainstem.
  One obvious characteristic of the brain is its symmetry: the view from the
top shows a long groove from front to back that divides the cortex (which
means “rind”), the thick outer covering of the brain, into two equal halves. If
we slice completely through the brain, using this front-to-back groove as a
guide, and then turn the cut side of the right half toward us, we see the view
shown in the bottom of Figure 1.1.
  Looking at this image makes it clear that the brain is not just a homogeneous
blob of stuff. There are variations in shape, color, and texture of the brain tissue
across brain regions, but these do not tell us about the functions of these various
regions. One of the most useful ways to investigate the function of these loca-
tions is to look at people who have sustained damage to various parts of the
brain. Such investigations have been complemented by animal experiments in
which small regions of the brain are precisely damaged through surgery or the
administration of drugs, after which the animal’s body functions and behavior
are carefully observed.
  The brainstem contains centers that control extremely basic regulation of
[Figure 1.1 labels: front of head, back of head, cortex, thalamus, hypothalamus, midbrain, brainstem, cerebellum.]
    figure 1.1. The human brain. The top shows the intact brain viewed from the left
                side. The bottom shows the brain sliced down the middle and then
                opened to allow the right side to face us. Joan M. K. Tycko, illustrator.
the body that are not under your conscious control, including vital functions
such as regulation of heart rate, blood pressure, breathing rhythm, body tem-
perature, and digestion. It also contains the control centers for some important
reflexes, such as sneezing, coughing, and vomiting. The brainstem houses re-
lays for sensations coming up the spinal cord from your skin and muscles as
well as for command signals coming from your brain and destined for muscles
in your body. It also contains locations involved in producing feelings of wake-
fulness versus sleepiness. Drugs that modify your state of wakefulness, such as
sleeping pills or general anesthetics on the one hand and caffeine or amphet-
amines on the other, act on these brainstem regions. If you get a small area of
damage in your brainstem (from an injury, tumor, or stroke), you could be ren-
dered comatose, unable to be aroused by any sensation, but extensive damage
in the brainstem is almost always fatal.
  The cerebellum, which is richly interconnected with the brainstem, is in-
volved with coordination of movements. In particular, it uses feedback from
your senses about how your body is moving through space in order to issue
fine corrections to the muscles to render your movements smooth, fluid, and
well coordinated. This cerebellar fine-tuning operates not only in the most de-
manding forms of coordination such as hitting a baseball or playing the violin,
but also in everyday activities. Damage to the cerebellum is subtle. It will not
paralyze you, but rather will typically result in clumsiness in performing simple
tasks that we take for granted, such as reaching smoothly to grasp a coffee cup
or walking with a normal gait; this phenomenon is called ataxia.
  The cerebellum is also important in distinguishing sensations that are “ex-
pected” from those that are not. In general, when you initiate a movement and
you have sensations which result from that movement, you tend to pay less at-
tention to those sensations. For example, when you walk down the street and
your clothes rub against your body, these are sensations that you mostly ignore.
     By contrast, if you were standing still and you started to feel similar rubbing
     sensations on your body, you would probably pay a lot of attention. You would
     probably whirl around to see who was groping you. In many situations, it is
     useful to ignore sensations produced by your own motion and pay close atten-
     tion to other sensations that originate from the outside world. The cerebellum
     receives signals from those brain regions that create the commands that trigger
     body motion. The cerebellum uses these signals to predict the sensations that
     are likely to result from this motion. Then the cerebellum sends inhibitory sig-
     nals to other brain regions to subtract the “expected” sensations from the “to-
     tal” sensations and thereby change the way they feel to you.
        This may all sound a bit abstract, so let’s consider an example. It is well
     known that you can’t tickle yourself. This is not just true in certain cultures; it
     is worldwide. What’s different about having someone else tickle you, which
     can result in a very strong sensation, and self-tickling, which is ineffective?
     When researchers in Daniel Wolpert’s group at University College, London,
     placed people’s heads in a machine that can make images showing the location
     and strength of brain activity (called functional magnetic-resonance images, or
     fMRI) and then tickled them, they found strong activation in a brain region in-
     volved in touch sensation called the somatosensory cortex and no significant
     activation in the cerebellum. When people were then asked to tickle themselves
on that same part of the body, there was a spot of activation in
     the cerebellum and reduced activity in the somatosensory cortex. The interpre-
     tation of this experiment is that commands to activate the hand motions in self-
     tickling stimulated the cerebellum, which then formed a prediction of the ex-
     pected sensation and sent signals encoding this prediction to inhibit the so-
     matosensory cortex. The reduced activation of the somatosensory cortex was
     then below the threshold necessary to have the sensation feel like tickling. In-
     terestingly, there are now reports that some humans who sustain damage to the
[Figure 1.2 axes: force (vertical axis) versus turn number (horizontal axis).]
figure 1.2. Force escalation in a tit-for-tat finger-tapping task. The white circles show
            the force of finger taps delivered by one subject, the black circles the force
            from the other subject. In 9 tit-for-tat exchanges, the force increased al-
            most 20-fold. Adapted from S. S. Shergill, P. M. Bays, C. D. Frith, and
            D. M. Wolpert, Two eyes for an eye: the neuroscience of force escalation,
            Science 301:187 (2003); copyright 2003 AAAS. Joan M. K. Tycko, illustrator.
     cerebellum cannot generate predictions of expected sensations and therefore
     can actually tickle themselves!
        Daniel Wolpert and his colleagues at University College, London, have also
     devised a simple and elegant experiment to explain the cerebellum’s involve-
     ment in the escalation of a shoving match (Figure 1.2). When a shoving match
     starts between two people the force of the shoving tends to escalate, often to the
     point of a full-blown brawl. Typically, we have thought of this solely in terms of
     social dynamics: neither participant wants to show weakness by backing down.
     That may explain why the conflict continues, but it does not necessarily shed
     light on why the force of each shove increases in a tit-for-tat exchange.
        What Wolpert and his colleagues did was have two adult subjects face each
     other, each resting the left index finger, palm up, in a molded depression. A
     small metal bar on a hinge was then rested lightly on top of each subject’s finger.
     The hinge was fitted with a sensor to measure the force delivered when the
bar was pressed down. Both subjects were given the same instructions: each was to
match the force of the tap he received on his finger, as exactly as possible, with an
equivalent tap when his own turn came. Neither subject knew the instructions given
to the other.
        Despite explicit instructions to the contrary, when the subjects took turns
     pressing on each other’s fingers, the force applied always escalated dramatically,
     just as it does in schoolyard or bar-room confrontations. Each person swore
     that he matched the force of the other’s tap. When asked to guess the instruc-
     tions given to the other person, each said, “You told the other guy to press back
     twice as hard.”
        Why does this happen? Several clues point to the answer. First, it is not spe-
     cific to social situations. When a person is asked to match the force of a finger
tap that comes from a machine, he or she will also respond with greater
     force. The second line of evidence comes from modifying the tit-for-tat experi-
     ment so that the tap is produced not by pressing on a bar but rather by moving a
joystick that controls the pressure by activating a motor. The important dif-
ference between these two situations is that when the force is generated by
bar pressing, making a stronger tap requires generating more force with the
fingertip. When the joystick is used, however, the motor does the work and
there is only a weak correlation between the force generated by the tapping
finger and the force produced on the upturned finger of the other subject.
When the tit-for-tat experiment is then repeated with joysticks there is very lit-
tle force escalation. The interpretation here is similar to that offered for self-
tickling: The cerebellum receives a copy of the commands to produce the finger
tap (using the bar) that are proportional to the force applied. It then creates a
prediction of the expected sensation that is sent to the somatosensory cortex to
inhibit feedback sensations from the fingertip during tapping. To overcome
this inhibition, the subject presses harder to match the force perceived from the
last tap he or she received, thus escalating the force applied.
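For readers who like to see the logic spelled out, here is a minimal numerical sketch of the escalation mechanism just described. It is only an illustration, not the model from the Shergill et al. study, and it rests on one loud assumption: that a person feels only a fixed fraction of the force of their own tap (because the cerebellar prediction suppresses the expected sensation) while feeling the partner’s tap at full strength. The attenuation value of 0.72 is a hypothetical number chosen simply so that the output roughly reproduces the near-20-fold escalation over 9 turns shown in Figure 1.2.

```python
# Minimal sketch (an illustration, not the published model) of tit-for-tat
# force escalation. Assumption: a subject feels only a fraction ATTENUATION
# of their own tap, because the cerebellum's prediction suppresses the
# expected sensation, but feels the partner's tap at full strength. Each
# subject presses until the *felt* strength of their own tap matches the
# felt strength of the tap just received, so the actual force grows by a
# factor of 1 / ATTENUATION on every turn.

ATTENUATION = 0.72   # hypothetical fraction of self-generated force that is felt
TURNS = 9            # number of tit-for-tat exchanges, as in Figure 1.2
force = 0.25         # arbitrary starting force, in newtons

print(f"turn  0: {force:.2f} N")
for turn in range(1, TURNS + 1):
    force /= ATTENUATION  # matching the felt force overshoots the actual force
    print(f"turn {turn:2d}: {force:.2f} N")

print(f"escalation after {TURNS} turns: {(1 / ATTENUATION) ** TURNS:.0f}-fold")
```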
  So, in most situations, the cerebellar circuit that allows us to pay less atten-
tion to sensations that result from self-generated movement and more atten-
tion to the outside world is a useful mechanism. But as any 8-year-old coming
home with a black eye and a tale of “But Mom, he hit me harder!” will tell you,
there is a price to pay for this feature. This is a common brain design flaw. Most
systems, like the cerebellar inhibition of sensations from self-generated move-
ment, are always on. They cannot be switched off even when their action is
counterproductive.
  Moving up and forward from the cerebellum, the next region we encounter
is called the midbrain. It contains primitive centers for vision and hearing.
These locations are the main sensory centers for some animals, such as frogs or
lizards. For example, the midbrain visual center is key for guiding the tongue-
thrust frogs use to capture insects in flight. But in mammals, including hu-
mans, the midbrain visual centers are supplemented and to some degree sup-
     planted by more elaborate visual regions higher up in the brain (in the cortex).
     Even though we make only limited use of a frog-like visual region in our brains
     (mostly in orienting our eyes to certain stimuli), this evolutionarily ancient
     structure has been retained in human brain design and this gives rise to the fas-
     cinating phenomenon called blindsight.
        Patients who are effectively blind owing to damage to the higher visual parts
     of the brain will report that they have no visual sense whatsoever. When asked
     to reach for an object in their visual field, such as a penlight, they will say,
“What can you possibly mean? I can’t see a thing!” If, however, they are told to
     just take a guess and try anyway, they can usually succeed at this task at a rate
     much higher than would be due to pure chance. In fact, some patients can
     grasp the penlight 99 percent of the time, yet will report each time that they
     have no idea where the target is and they are guessing randomly. The explana-
     tion seems to be that the ancient visual system in the midbrain is intact in these
     patients and guides their reaching, yet because this region is not interconnected
     with the higher areas of the brain, these people have no conscious awareness of
     the penlight’s location. This underscores a general theme that is emerging here.
     The functions of the lower portions of the brain such as the brainstem and the
     midbrain are generally performed automatically, without our conscious aware-
     ness. As we continue our tour to those parts of the brain that are both literally
     and metaphorically higher, then we will begin to make the transition from sub-
     conscious to conscious brain function.
        Furthermore, the midbrain visual system is a lovely example of brain kludge:
     it is an archaic system that has been retained in our brains for a highly delimited
     function, yet its action can be revealed in brain injury. As an analogy, imagine if
     your present-day audio electronics, let’s say that sleek handheld MP3 player,
     still contained a functional, rudimentary 8-track tape player from the 1960s.
Not too many of those would get sold, even with a really urban-hip, edgy ad
campaign.
  Moving a bit upward and forward, we reach two structures called the thal-
amus and the hypothalamus (which just means “below the thalamus”). The
thalamus is a large relay station for sending sensory signals on to higher brain
areas and also relaying command signals from these areas out along pathways
that ultimately activate muscles. The hypothalamus has many smaller parts,
each of which has a separate function, but one general theme of this region is
that it helps to maintain the status quo for a number of body functions, a pro-
cess called homeostasis. For example, when you get too cold, your body begins
to shiver reflexively in an attempt to generate heat through muscular activity.
The shivering reflex originates within the hypothalamus.
  Perhaps the most well-known homeostatic drives are those that control hun-
ger and thirst. Although the urge to eat and drink can be modulated by many
factors, including social circumstances, emotional state, and psychoactive drugs
(consider the well-known phenomenon of “the munchies” from smoking mari-
juana and the appetite-suppressing action of amphetamines), the basic drives
for hunger and thirst are triggered within the hypothalamus. When tiny holes
are made surgically in one part of the hypothalamus of a rat (called the lateral
nucleus; a “nucleus” in the brain is just a name for a group of brain cells), it will
fail to eat and drink, even after many days. Conversely, destroying a different
part of the hypothalamus (the ventromedial nucleus) results in massive overeat-
ing. Not surprisingly, a huge effort is under way to identify the chemical signals
that trigger feelings of hunger and fullness, with the hope of making a safe and
effective weight-loss drug. So far, this has proven to be much more difficult
than anticipated because multiple, parallel signals for both beginning and end-
ing feeding appear to play a role.
        In addition to its involvement in homeostasis and biological rhythms, the
     hypothalamus is also a key controller of some basic social drives, such as sex and
     aggression. I will talk about these functions in detail later. A point that must be
     made here, though, is that the hypothalamus exerts some of its effects on these
     drives by secreting hormones, powerful messenger molecules that are carried in
     the bloodstream throughout the body to cause many varied responses. The hy-
     pothalamus secretes two types of hormones. One type has direct actions on the
     body (such as the hormone called vasopressin, which acts on the kidney to limit
     the formation of urine and thereby increase blood pressure), and the second
     type, the so-called master hormones, directs other glands to secrete their own
     hormones. A good example of the latter is growth hormone, secreted by the pi-
     tuitary gland in growing children and adolescents but stimulated by a master
     hormone released by the hypothalamus. After much careful scientific thought,
     this master hormone was given the compelling name “growth hormone releas-
     ing hormone” (endocrinologists, like many scientists, are not known for their
     literary flair).
        Up to this point, we have been looking at the brain sliced exactly down the
     middle. Many areas inside the brain are revealed with this view, but others are
     buried deep within the tissue and are not visible either from the outside surface
     or from the cut surface at the midline. Particularly important are two deeply
     buried structures called the amygdala (“almond”) and the hippocampus (“sea-
     horse”) that constitute part of a larger circuit in the center of the brain called
     the limbic system (which also contains portions of the thalamus, cortex, and
     other regions). The limbic system is important for emotion and certain kinds
     of memory. It is also the first place in our bottom-to-the-top tour where auto-
     matic and reflexive functions begin to blend with conscious awareness.
        The amygdala is a brain center for emotional processing that plays a particu-
     lar role in fear and aggression. It links sensory information that has already been
highly processed by the cortex (that guy in the ski mask jumping out of that
dark alley at me can’t be up to any good) to automatic fight-or-flight responses
mediated by the hypothalamus and brainstem structures (sweating, increased
heart rate, dry mouth). Humans rarely sustain damage to the amygdala alone,
but those who do often have disorders of mood and appear to be unable to rec-
ognize fearful expressions in others. Electrical stimulation of the amygdala (as
sometimes occurs during neurosurgery) can evoke feelings of fear, and the amyg-
dala also appears to be involved in storing memories of fearful events.
  The hippocampus (which, when dissected out of the brain, actually looks
more like a ram’s horn than the seahorse for which it is named) is a mem-
ory center. Like the amygdala, it receives highly processed sensory information
from the cortex lying above it. Rather than mediating fear, however, the hippo-
campus appears to have a special role in laying down the memory traces for
facts and events, which are stored in the hippocampus for a year or so but are
then moved to other structures. The most compelling evidence for this model
comes from a small number of people who have sustained damage to their hip-
pocampus and some surrounding tissue on both sides of the brain. The most
famous of these cases is called H.M. (initials used to protect privacy), a man
who in 1953 underwent surgical removal of the hippocampus and some sur-
rounding tissue on both sides of his brain in order to control massive seizures
that had not responded to other treatments. The surgery was successful in con-
trolling his epilepsy and did not impair his motor functions, language, or gen-
eral cognitive abilities, but there were two disastrous side effects. First, H.M.
lost his memory of everything that occurred in the 2–4 years before the surgery. He
had extensive, detailed, and accurate recall of earlier events, but his memory of
his life in the years just before the surgery is lost forever. Even more devastating
is that since the surgery H.M. has been unable to store new memories for facts
and events. If you were to meet him on Monday, he would not remember you
     on Tuesday. He can read the same book every day and it will be new to him. Al-
     though he has short-term memory that can span tens of minutes, his ability to
     store new permanent memories for facts and events is gone.
        The seminal insights about memory and the hippocampus that came from
     H.M.’s case have since been reinforced many times, both by other patients
     who, for a variety of reasons, have sustained similar damage, and by animal
     studies in which the hippocampus has been surgically destroyed or had its func-
     tion disrupted by drugs. A consistent and simple conclusion comes from this
     work: without a hippocampus, the ability to store new memories for facts and
     events is severely impaired.
        Finally, moving to the outer surface of the brain, we reach the cortex. The
     cortex of the human brain is massive. The functions of some areas in the cortex
     are well understood, but others are terra incognita. A portion of the cortex ana-
     lyzes the information coming from your senses. The very back of your cortex is
     where visual information first arrives, and another strip of tissue just behind the
     main sideways groove in your brain (called the central sulcus) is where touch
     and muscle sensation first arrives. Similar maps can be drawn for other senses.
     If we stimulate these areas with an electrode we can mimic activation of the sen-
     sory system involved: stimulating the primary visual cortex will cause a flash of
     light, or something similar, to be seen. Likewise, there is a strip of cortex just in
     front of the central sulcus that sends out command signals that ultimately cause
     contraction of muscles and consequent body movement. Electrically stimulat-
     ing this motor cortex results in muscular contraction. This is a standard tech-
     nique for making a functional map of the brain when surgery must be per-
     formed in this area. What’s most interesting about the cortex are those regions
     for which the functions are not obviously either sensory or motor. Brain re-
     searchers have sometimes called these regions association cortex. Association
areas are most plentiful in the front of the brain (the frontal cortex), a region
that is highly developed in humans.
  I have offered a number of examples where people (and experimental ani-
mals) sustain damage to various brain regions and suffer various losses of func-
tion ranging from amnesia to overeating. Yet, to this point, though many of
these brain insults have had devastating effects, none of them has changed the
personality, the essential core identity of the sufferer. H.M., for example, has
the same unique personality that he had before his epilepsy surgery. A far differ-
ent picture emerges when we consider damage to the frontal cortex.
  Here, the most well known example is Phineas Gage, a foreman on a Ver-
mont railway gang in 1848. Railway construction, then and now, uses blasting
to remove obstacles and level the roadbed. Phineas, aged 25, had the unenvi-
able task of jamming the explosive charge into place using a long metal rod
known as a tamping iron. You can imagine what happened. As he stood over a
borehole, tamping the charge, there was a spark that ignited a horrible explo-
sion. The explosion drove the tamping iron through Phineas’s left cheek and
eye at a steep upward angle, piercing his skull through the eye socket, tearing a
huge hole in his left frontal cortex, and exiting his skull through the top. Figure
1.3 shows a drawing based on a scan of his skull made long after his death, with
the tamping iron in place. Amazingly, after a few weeks in bed, Phineas made a
full recovery. The infection of his wound abated. He could walk, talk, and do
arithmetic in his head. His long-term memory was fine. What had changed was
his personality and his judgment. By all reports, before the accident he was
kind, level-headed, friendly, and charismatic. After his recovery he became ar-
rogant, opinionated, impulsive, rude, and selfish. Not to put too fine a point
on it, damage to his frontal cortex changed him from a nice guy into a jerk.
His former coworkers couldn’t stand him. “He’s just not Gage anymore,” one
     figure 1.3. The skull of Phineas Gage, with the famous tamping iron, reconstructed
                 by computer from scans made long after his death. Derived with permis-
                 sion from P. Ratiu and I.-F. Talos, Images in clinical medicine: the tale of
                 Phineas Gage, digitally remastered, The New England Journal of Medicine
                 351:e21 (2004). Joan M. K. Tycko, illustrator.
friend reportedly said. Tragically, he ended up in a carnival freak show, reinsert-
ing the tamping iron through the healed but still present hole in his head to
the morbid fascination of onlookers. He died 12 years after the tamping iron
accident.
   As shown by the case of Phineas Gage, and documented many times since,
the frontal cortex is the substrate of our individuality, determining our social
interactions, outlook, and perhaps even our moral sense. Not just our cogni-
tive capacities but our character—our personhood, so to speak—resides in this
most recently evolved region of our brains.
HAVING COMPLETED OUR   whirlwind tour from the bottom to the top of the
brain (leaving out a few areas), what can we conclude about the overall prin-
ciples of brain design? Guiding Principle One: The highest functions of our
brain, involving conscious awareness and decision making, are located at the
very top and front, in the cortex, and the lowest functions, supporting ba-
sic subconscious control of our body functions such as breathing rhythm and
body temperature, are located in the very bottom and rear, in the brainstem. In
between are centers that are engaged in higher subconscious functions such as
rudimentary sensation (midbrain), homeostasis and biological rhythms (hy-
pothalamus), and motor coordination and sensory modulation (cerebellum).
The limbic system, including the amygdala and hippocampus, is the crossroads
where the conscious and unconscious parts of the brain meet and initiate the
storage of certain types of memories.
   Guiding Principle Two: The brain is built like an ice cream cone (and you are
the top scoop): Through evolutionary time, as higher functions were added, a
new scoop was placed on top, but the lower scoops were left largely unchanged.
In this way, our human brainstem, cerebellum, and midbrain are not very dif-
ferent in overall plan from that of a frog. It’s just that a frog has only rudimen-
     tary higher areas in addition (barely more than one scoop). All those structures
     plus the hypothalamus, thalamus, and limbic system are not that different be-
     tween humans and rats (two scoops), which have a small and simple cortex,
     while we humans have all that plus a hugely elaborated cortex (three scoops).
     When new, higher functions were added, this did not result in a redesign of the
     whole brain from the ground up; a new scoop was just added on top. Hence, in
     true kludge fashion, our brains contain regions, like the midbrain visual center,
     that are functional remnants of our evolutionary past.
        You probably have seen those quaint charts from the nineteenth century
     (Figure 1.4), in which the surface of the brain is divided into neat regions, each
labeled with a cognitive function (such as calculation) or a personality trait (say,
     combativeness). The phrenologists who used these charts believed not only
     that those functions could be mapped to those particular brain regions but also
     that bumps on the skull resulted from the overgrowth of a particular brain re-
     gion. Indeed, there was a cottage industry in the nineteenth and early twentieth
     centuries of professional head-bump feelers, who, armed with charts, plaster
     models, and even a mechanical bump-measuring helmet, would analyze the
     skull-and-mind of anyone willing to pay.
        The phrenologists were wrong on two counts. First, bumps on the skull
     don’t indicate anything about the underlying brain tissue. Second, their dia-
     grams equating particular regions with cognitive functions and personality traits
     were pure fantasy. But on a more general issue, the phrenologists were right: the
     brain is not an undifferentiated mass of tissue where each region contributes
     equally to all functions. Rather, particular brain functions often are localized to
     distinct brain regions.
        This brings us to Guiding Principle Three: Localization of function in the
     brain is straightforward for basic subconscious reflexes such as vomiting and is
     fairly straightforward for the initial stages of sensation (we know where signals
figure 1.4. A phrenologist’s chart from the nineteenth century, equating head bumps
            with particular mental traits. In this case, for example, XIV = veneration,
            XVII = hope, XIII = benevolence, XXI = imitation, XIX = ideality,
            VIII = acquisitiveness, XVIII = marvelousness, and XX = wit. From W.
            Mattieu Williams, A Vindication of Phrenology (Chatto & Windus, London, 1894).
first arrive in the cortex for vision, hearing, smell, and so forth). But localization
     of function is much more difficult for more complex phenomena such as mem-
     ory of facts and events and is really hard for the highest functions such as deci-
     sion making. In some cases it becomes complicated because the location of
     a function in the brain is not fixed over time: memories for facts and events
     seem to be stored in the hippocampus and some immediately adjacent regions
     for 1–2 years but are then exported to other locations in the cortex. Decision
     making generally is such a broad function, and generally requires such a con-
     vergence of information, that it may be broken into smaller tasks and distrib-
     uted to a number of places in the cortex. We may have to define functions more
     precisely in order to achieve a greater understanding of functional localization.
        So, given these Guiding Principles, what is it about this organ that makes us
     so clever? What is it about our brain that enables language and the ability to un-
     derstand the motivations of others (the so-called theory of mind) and other ca-
     pacities that humans have developed far beyond the abilities of other animals?
     We don’t have the biggest brains (an elephant’s is bigger) and we don’t even
     have the biggest brain-to-body-weight ratio (small birds beat us on that mea-
     sure). We don’t have the most wrinkled brain surface (whales and dolphins’ are
     more wrinkled). In fact, we don’t even have the largest brains among our homi-
     nid kin: estimates derived from skull volumes indicate that Neanderthals had
     brains that were, on average, somewhat larger than ours today. And, although I
     haven’t talked about it yet, we can assume that, overall, the shape and chemical
     composition of the cells that make up our brains are not fundamentally differ-
     ent from those of a rat (more on this to come). What we do have is the largest
     association cortex, that which is not strictly sensory or motor, most of it packed
     into the front half of our brain. Somehow, this is the elaboration that appears to
     have given humans their cognitive advantages.
        Can we take this one step farther? Humans have varying cognitive abilities.
Can human cognitive capacity be predicted by the overall size of the brain or by
the size of particular brain regions? Diseases (both inherited and acquired) and
trauma, both of which produce gross anatomical disruptions to the brain, can
clearly impair cognition. But what about normal variation, excluding obvious
mishaps such as trauma or disease? Recent studies relating normal human vari-
ation in cognitive ability to brain size or shape have used brain-scanning tech-
niques that provide more accurate measures than older studies that relied upon
skull measurements. In general, these newer studies have found statistically sig-
nificant correlations between brain size (adjusted for body weight) and cogni-
tive ability. But this correlation, while real, accounts for only about 40 per-
cent of the variation in cognitive ability of normal humans. Thus one can find
people at the small end of the range of normal brain sizes (say, 1,000 cubic cen-
timeters) who will score highly on a so-called test of general intelligence. Con-
versely, one can find individuals with unusually large brains (1,800 cubic centi-
meters) who score well below average.
  The large variation in the relationship between human brain size or shape
and cognitive capacity has not stopped the continual trickle of publications in
which the preserved brains of famous historical figures have been analyzed ana-
tomically. Lenin’s brain was studied in Germany in the late 1920s and, while it
was of average weight, in some regions a particular subset of cells in the brain
(called layer 3 cortical pyramidal cells) were purported to be unusually large
compared to other postmortem samples. Einstein’s brain actually was smaller
than average (but well within the normal range). Recently, there has been a
claim that a region of his brain called the inferior parietal cortex was slightly (15
percent) enlarged relative to a sample of men’s brains of a similar age. That
caused some interest because this region has been associated with spatial and
mathematical cognition, areas in which Einstein clearly excelled. But one must
be cautious in interpreting this sort of finding. First, it’s very hard to make a
     claim based on a single sample (Einstein). A more convincing study would
     need a whole group of mathematical/spatial geniuses compared with controls
     carefully matched for age, lifestyle, and other factors. Second, and more impor-
     tant, there’s a problem of causality at work. If, indeed, a part of Einstein’s brain
     involved in mathematical/spatial thinking was significantly larger than appro-
     priate control brains, does that mean that this variation endowed him with
     mathematical ability that he was then able to exploit? Or did his lifelong en-
     gagement in mathematical and spatial pursuits cause this part of his brain to
     grow slightly?
        Failure up to now to strongly associate gross anatomical features of the brain
     with normal variation in human cognition should not be taken to mean that
     variation in human cognition has no measurable physical correlate in brain
     structure. It’s very likely that such a relationship does exist. But this correlation
     will be only weakly reflected in crude measures such as brain size. Most of hu-
     man cognitive variation is more likely to be manifest as changes in the micro-
     scopic anatomy, the connectivity of brain cells, and the patterns of brain electri-
     cal activity.
WE’VE UNCOVERED THREE   Guiding Principles of Brain Design, and these
     highlight a few of the ways in which the human brain is poorly organized. The
     brain has primitive systems that developed in our distant evolutionary past (be-
     fore mammals) and that have been supplemented by newer, more powerful
     structures. These primitive structures persist in the lower parts of our brain,
     giving rise to interesting phenomena such as blindsight. Also, the brain has re-
     gions that perform functions that are often useful, such as cerebellar inhibition
     of the sensations from self-originated movements, but that cannot be turned
     off in the appropriate circumstances, a fact that contributes to problems such as
     force escalation in tit-for-tat conflicts.
  To put this in perspective, imagine that you are an engineer in charge of
building the latest and most efficient car. Only after you agree to take the job do
you learn that there are two weird stipulations. First, you are given a 1925
Model T Ford and told that your new car must take the form of adding parts to
the existing structure while taking almost nothing of the original design away.
Second, most of the new complex control systems you will build, such as the
device that rapidly pumps the antilock brakes, must remain on all of the time
(not just when a skid is detected). These are some of the types of constraints
that have influenced the design of the human brain as it has evolved. Together
with the engineering flaws of the component parts (the cells of the brain, which
I will consider in Chapter 2) and the assembly process (brain development,
covered in Chapter 3), these aspects of suboptimal design are central to brain
function. By the end of this book I hope to have convinced you that almost ev-
ery aspect of transcendent human experience, including love, memory, dreams,
and even our predisposition for religious thought, ultimately derives from the
inefficient and bizarre brain engineered by evolutionary history.
     Chapter Two
     Building a Brain with Yesterday’s Parts
     IT IS A CLICHÉ   to be awed by the microscopic complexity of the human brain.
     Any scientist who talks about this topic inevitably hears the kindly, avuncular
     ghost of Carl Sagan whispering: “Bill-yuns and bill-yuns of tiny brain cells!”
     Well, it is rather impressive. There are a hell of a lot of cells in there. The two
main cell types in the brain are neurons, responsible for rapid electrical sig-
     naling (the brain’s main business), and glial cells, important for housekeeping
     functions that create an optimal environment for neurons (and that directly
     participate in some forms of electrical signaling as well). The famous num-
     bers: approximately 100 billion (100,000,000,000) neurons in the adult hu-
     man brain and approximately one trillion (1,000,000,000,000) glial cells. To
     put this in perspective, if you wanted to give your neurons away to all human-
     ity, everyone on earth would receive about 16 of them.
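As a quick check on the arithmetic in the paragraph above, here is the division spelled out. The only assumption beyond the cell counts already given is a world population of roughly 6.5 billion, about right for the mid-2000s when this book appeared.

```python
# Back-of-the-envelope arithmetic behind "about 16 neurons each."
neurons = 100_000_000_000          # ~10^11 neurons in an adult human brain
glial_cells = 1_000_000_000_000    # ~10^12 glial cells
world_population = 6_500_000_000   # assumed mid-2000s world population

print(f"neurons per person:     {neurons / world_population:.0f}")      # ~15
print(f"glial cells per person: {glial_cells / world_population:.0f}")  # ~154
```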
  Neurons are not a recent development in evolution. They are soft and there-
fore not well preserved in fossils, so we don’t know exactly when the first neu-
rons appeared. But we do know that modern jellyfish, worms, and snails all
have neurons. Some other modern animals, such as sea sponges, don’t. There-
fore, our best guess is that neurons appeared at about the time when jellyfish
and their relatives, a group of animals called Cnidaria, first appeared in the fos-
sil record, in the Pre-Cambrian era, about 600 million years ago. Incredibly,
with few exceptions, the neurons and glial cells in a worm are not substantially
different from those in our own brains. In this chapter, I hope to show you that
our brain cells have an ancient design that makes them unreliable and slow, and
limits signaling capacity.
  Neurons come in a variety of shapes and sizes (see Figure 2.1), but have cer-
tain structures in common. Like all cells, neurons are bounded externally by a
sort of skin, the outer membrane (also called the plasma membrane). All neu-
rons have a cell body, which contains the cell nucleus, the storehouse of genetic
instructions encoded in DNA. The cell body can be round, triangular, or spin-
dle shaped and can range from 4 to 100 microns across (20 microns is typi-
cal). Perhaps a more useful way to think about this is that five average-sized
neuronal cell bodies could be placed side by side in the width of a typical hu-
man hair. Thus the outer membranes of neurons and glial cells are incredibly
tightly packed with very little space in between.
  Sprouting from the cell body are dendrites (from the Greek word for “tree”),
large, tapering branches of the neuron that receive chemical signals from neigh-
boring neurons. I’ll discuss how this happens soon. Dendrites can be short or
long, spindly or bushy or, rarely, even completely absent. High magnification
shows that some are smooth while others are covered with tiny nubbins called
dendritic spines. Typical neurons have several branching dendrites, but they
also have a single long thin protrusion growing from the cell body. This is the
[Figure 2.1 labels: direction of information flow; dendrites; cell body; axon hillock; axon; axon terminals; dendrite of next neuron in the chain; expanded view showing dendritic spines and the width of a human hair.]
     figure 2.1. Two different neurons with their parts labeled. Joan M. K. Tycko, illustrator.
     axon and is the information-sending side of the neuron. The axon, usually
     thinner than the dendrites, does not taper as it extends from the cell body. A
     single axon grows from the cell body, but it often subsequently branches, some-
     times going to very different destinations. Axons can be remarkably long: some
     run all the way from the base of the spine to the toes (which makes the longest
     axons around 3 feet for average humans, and up to 12 feet long for a giraffe).
        At specialized junctions called synapses, information passes from the axon of
     one neuron to the dendrite (or sometimes the cell body) of the next (Figure 2.2).
[Figure 2.2 labels: axon, axon terminal, synaptic vesicles, synaptic cleft, dendritic spine, dendrite.]
figure 2.2. Parts of the synapse in a drawing (top) and in an actual electron micro-
            scope photo (bottom). Joan M. K. Tycko illustrated the top panel. The bottom
             panel was kindly provided by Professor Kristen Harris of the Medical College of
             Georgia. Her website, synapses.mcg.edu, provides an excellent overview of the fine
             structure of synapses.
     At synapses, the ends of axons (called axon terminals) nearly, but not actually,
     touch the next neuron. Axon terminals contain many synaptic vesicles, tiny
     balls with a skin made of membrane. The most common type of synaptic vesi-
     cle in the brain is loaded with about 2,000 molecules of a specialized com-
     pound called a neurotransmitter. Between the axon terminal of one neuron and
     the dendrite of the next is a tiny saltwater-filled gap called the synaptic cleft. By
     tiny, I mean extremely tiny: about 5,000 synaptic clefts would fit in the width
     of a single human hair. The synaptic cleft is the location where synaptic vesicles
     release neurotransmitters to signal the next neuron in the chain.
        Synapses are crucial to our story. They will come up repeatedly as I discuss
     everything from memory to emotion to sleep. We should therefore spend some
     time on them now. First, the number of synapses in the brain is staggering. On
     average, each neuron receives 5,000 synapses, locations where the axon termi-
     nals of other neurons make contact (the range is from 0 to 200,000 synapses).
     Most synapses contact the dendrites, some the cell body, and a few the axon.
Multiplying 5,000 synapses per neuron by 100 billion neurons per brain gives
you an estimate of the astonishing number of synapses in the brain: 500 tril-
lion (500,000,000,000,000).
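If you like to check arithmetic like this for yourself, the multiplication is easy to verify with a few lines of Python (the figures are simply the averages quoted above):

```python
# Rough synapse count for a human brain, using the averages quoted in the text.
neurons_per_brain = 100e9        # about 100 billion neurons
synapses_per_neuron = 5_000      # average number of synapses received per neuron

total_synapses = neurons_per_brain * synapses_per_neuron
print(f"{total_synapses:.0e} synapses")   # 5e+14, that is, 500 trillion
```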
        Synapses are the key switching points between the two forms of rapid signal-
     ing in the brain: chemical and electrical impulses. Electrical signaling uses a
     rapid blip, called a spike, as its fundamental unit of information. Spikes are
     brief electrical signals that originate at the axon hillock, the place where the cell
     body and the axon join. When spikes, having traveled down the axon, arrive at
     the axon terminals they trigger a series of chemical reactions that cause a dra-
     matic structural change (see Figure 2.3). Synaptic vesicles fuse with the outer
     membrane of the axon terminal, dumping their contents, special neurotrans-
     mitter molecules, into the synaptic cleft. These neurotransmitter molecules
     then move across the synaptic cleft, where they contact specialized proteins
figure 2.3. Synapses, the key sites in the brain for converting electrical signals to
            chemical signals and then back into electrical signals. Reading from left to
            right tells the story of synaptic signaling: a spike travels along the axon to the
            axon terminal, synaptic vesicles release neurotransmitter molecules into the
            synaptic cleft, and neurotransmitter receptors on the dendrite of the next
            neuron convert the chemical signal back into an electrical signal that spreads
            toward that neuron's cell body and axon. Joan M. K. Tycko, illustrator.
called neurotransmitter receptors, embedded in the membrane of a neighbor-
ing neuron’s dendrite. Receptors convert the neurotransmitter’s chemical signal
back into an electrical signal. Electrical signals from activated receptors all over
the dendrite are funneled toward the cell body. If enough electrical signals ar-
rive together, a new spike is triggered and the signal is passed farther along the
chain of neurons.
  That’s the Reader’s Digest version. Now, let’s flesh that out with some real bi-
ology. At about 3 pounds, the brain constitutes about 2 percent of total body
weight, and yet it uses about 20 percent of the body’s energy. Clearly, the brain
is an inefficient energy hog (the Hummer H2 of the body, if you will), but why
is this so? The brain is naturally bathed in a special saltwater solution called
cerebrospinal fluid that has a high concentration of sodium and a much lower
     figure 2.4. The sodium-potassium pump. Located in the outer membrane of neu-
                 rons, it pumps sodium ions out and potassium in, thereby establishing the
                 electrical gradient used by neurons to send information. Joan M. K. Tycko,
                   illustrator.
     concentration of potassium. These sodium and potassium atoms are in their
     charged state, called ions, in which they each have one unit of positive charge
     (+1). The brain’s main energy expense involves continuously running a molec-
     ular machine that pumps sodium ions out of the cell and potassium ions in
     (see Figure 2.4). As a result of this pump’s action, the concentration of sodium
     ions outside a neuron is about 10-fold higher than it is inside. For potassium,
     the concentration gradient runs in the other direction: the concentration of po-
     tassium ions is about 40-fold greater inside than outside. So neurons have salt-
     water solutions on both sides of their outer membranes (the skin of the cell),
     but very different saltwater solutions: the outside solution is high in sodium
and low in potassium; the inside solution is the opposite, low in sodium and
high in potassium. That is the basis of electrical function in the brain. The dif-
ferences in concentrations of sodium and potassium create potential energy,
similar to that created by winding the spring on a child’s toy, that can then be
released in the appropriate circumstances to generate neural signals. Neurons
rest with an electrical potential across their outer membranes: there is more
negative charge inside the cell than outside.
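For readers who want a more quantitative feel for how these gradients translate into a membrane voltage, the standard Nernst equation from textbook biophysics (which I won't derive here) gives the equilibrium potential for each ion. Here is a small Python sketch using the approximate 10-fold sodium and 40-fold potassium gradients just described; the numbers are illustrative, not precise measurements:

```python
import math

# Nernst equilibrium potential: E = (RT/zF) * ln([outside]/[inside]).
# At body temperature (37 degrees C), RT/F is roughly 26.7 millivolts.
RT_OVER_F_MV = 26.7

def nernst_mv(out_over_in, charge=1):
    """Equilibrium potential in millivolts for an ion with the given outside/inside ratio."""
    return (RT_OVER_F_MV / charge) * math.log(out_over_in)

# Approximate gradients from the text: sodium about 10-fold higher outside,
# potassium about 40-fold higher inside (so outside/inside = 1/40).
e_na = nernst_mv(10)       # roughly +61 millivolts
e_k = nernst_mv(1 / 40)    # roughly -98 millivolts

print(f"sodium: {e_na:+.0f} mV, potassium: {e_k:+.0f} mV")
# The resting potential of about -70 millivolts lies between these two values,
# much closer to the potassium value, because the resting membrane leaks
# potassium far more readily than sodium.
```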
  Let’s conduct an imaginary experiment that will help us understand neu-
ronal electrical signaling. In our imagined lab, some neurons have been ex-
tracted from a rat’s brain, placed in petri dishes, and grown in special solutions
designed to mimic cerebrospinal fluid. This process is called neuronal cell cul-
ture and is a standard technique in brain research laboratories. In this experi-
ment, illustrated in Figure 2.5, we insert recording electrodes into a neuron to
measure the electrical signals across the outer membrane. Recording electrodes
are hollow glass needles with very fine points, filled with a special saltwater so-
lution that mimics the neuron’s internal milieu (high potassium, low sodium).
One electrode is in the dendrite, where a particular synapse is received, another
is at the axon hillock, the place where the axon just starts to grow from the cell
body, and a third electrode is way down in the axon terminal. Yet another elec-
trode is used, not for recording, but rather for electrical stimulation of an axon
terminal of another neuron that is contacting the dendrite of the first.
  Before anything happens, we record the previously mentioned negative rest-
ing potential across the outer membrane of the information-receiving neuron.
Measured in thousandths of a volt, or millivolts, our typical neuron’s resting
potential across its outer membrane is −70 millivolts, or about 1/20th the volt-
age of a single AA battery. Next, we electrically stimulate the adjacent axon ter-
minal, causing it to release neurotransmitter molecules into the synaptic cleft.
In our imaginary experiment, this neurotransmitter is the molecule glutamate.
I have chosen glutamate as our example because it is by far the most com-
mon neurotransmitter molecule in the brain. When glutamate molecules are
released at synapses, they diffuse across the narrow synaptic cleft separating two
neurons. Glutamate molecules are not squirted across the synapse with force;
they merely diffuse, like a single drop of red wine slowly mixing into a full glass
of water. Because the synaptic cleft is so small, in only about five one-millionths
of a second (5 microseconds) glutamate molecules released from the presynaptic
axon terminal of one neuron cross to the other side, the postsynaptic
membrane of the dendrite. Most of the glutamate molecules simply diffuse
away and have no effect, but some will bind specialized glutamate receptor pro-
teins that are embedded in the postsynaptic membrane. There are many differ-
ent neurotransmitters in the brain, and though glutamate is the most common
one, many others are important and will arise as I consider particular brain
functions.
   Glutamate receptor proteins are highly complex molecular machines. They
are built of four similar parts that join together to form a doughnut-shaped
structure around a central pore (Figure 2.6). In the resting state, this pore is
shut tight, but when glutamate binds this receptor a gate that normally closes
figure 2.5. An imaginary experiment to investigate electrical signaling in neurons.
            Weak stimulation (of a few terminals) gives rise to the release of glutamate
            molecules, which diffuse across the synaptic cleft and bind glutamate re-
            ceptors to evoke the responses indicated with gray lines in the chart at the
            bottom of the illustration. A small excitatory postsynaptic potential
            (EPSP) in the dendrites is even smaller in the axon hillock and fails to trig-
            ger a spike. Strong stimulation of terminals (responses indicated with
            black lines) causes a large EPSP in the dendrite. This EPSP is smaller in
            the axon hillock but is still big enough to cause a spike to be initiated here,
            and this spike then travels down the axon, where, after a delay, it is also re-
            corded in the axon terminals. Joan M. K. Tycko, illustrator.
figure 2.6. Schematic drawing of a glutamate receptor in the postsynaptic mem-
            brane, shown in top view and in cutaway side view, with a glutamate molecule
            bound to the receptor. Glutamate binding to its receptor opens the central
            pore, the ion channel, which spans the outer membrane from outside to
            inside. Joan M. K. Tycko, illustrator.
     off this central pore opens, thus allowing certain ions to flow in or out of the
     cell. The receptor’s central pore is small and its particular chemical properties
     ensure that only particular ions can get through. Hence, the central pore has
     been given a special name, ion channel. In the case of the glutamate receptor,
     the ion channel allows passage of both sodium ions and potassium ions. When
     the pore opens, sodium ions from the outside (where sodium concentration is
     high) rush to the inside (where sodium concentration is low), and potassium
     ions flow in the opposite direction from inside (where concentration is high) to
     outside (where it is low). In this process, more sodium ions rush in than potas-
sium ions flow out, so there is a net flow of positive charge into the cell, raising
the voltage difference across the dendrite’s outer membrane (the membrane po-
tential) from its resting state of −70 millivolts to some more positive level, let’s
say −65 millivolts. As the glutamate molecules diffuse away from their recep-
tors and the receptor-gated ion channel (central pore) closes again, the mem-
brane potential returns to the resting state. This whole event, about 10 mil-
liseconds in duration from start to finish, has been given a rather long and
ponderous name, the excitatory postsynaptic potential, abbreviated EPSP.
  In most neurons, a single EPSP produces a response like the one we have
seen, a brief change in voltage, then nothing. This is a fairly typical mechanism
that neurons have for ignoring very low levels of activity that are merely ongo-
ing noise in the brain. Something very different happens if we activate a group
of axon terminals to release glutamate all at the same time. We produce a larger
EPSP at both the dendrite and the axon hillock, but when the strength of the
signal at the axon hillock reaches a certain threshold level (say about −60 milli-
volts), an amazing thing happens. Rather than falling back down to rest, the
membrane potential at the axon hillock explosively deflects upward and then
rapidly returns. This explosive response is the spike, the fundamental unit of
information in the brain.
  Why is there a spike and why does it start at the axon hillock? The answer is
in the structure of the outer membrane at this location. The axon hillock, but
not the dendrite or cell body, has a high density of a different ion channel.
These ion channels are not opened by binding glutamate, but rather have a
built-in sensor of the local membrane voltage that allows them to be shut at rest
(−70 millivolts) but open when the membrane voltage becomes more positive
(to about −60 millivolts and beyond). When EPSPs from several different syn-
apses add up at the axon hillock and move the membrane potential to −60 mil-
livolts, then these voltage-sensitive ion channels begin to open. They are built
     to allow only sodium ions through their central pore, and as this sodium rushes
     in, it moves the membrane to an even more positive potential. This, in turn,
     causes more voltage-sensitive sodium channel opening in a rapid positive feed-
     back loop that underlies the explosive upstroke of the spike.
        The spike typically peaks at about +50 millivolts and rapidly falls back to
     rest. There are two factors that contribute to this rapid peak-and-return behav-
     ior. First, voltage-sensitive sodium ion channels open rapidly but stay open
     only for about a millisecond before snapping closed again, which limits the
     spike’s duration. Second, there is another type of voltage-sensitive ion channel
     involved. This one is also activated by positive-going changes in membrane po-
     tential, but it opens more slowly and when it opens, potassium ions rush out of
     the neuron. The loss of positively charged potassium ions from inside the cell
     makes the membrane potential more negative, causing the downstroke of the
     spike as the membrane potential returns to rest.
        The axon hillock, where the spike originates, is the first stretch of a long
     highway to the axon terminal. Fortunately, the voltage-sensitive sodium chan-
     nel’s positive feedback loop allows the spike to travel along the axon. Sodium
     ions rushing in make the outer membrane more positive not just at the axon
     hillock, but also at the next bit of axon, farther from the cell body. Because the
     membrane in this next bit of axon also has voltage-gated sodium channels, they
     will open, sodium ions will rush in at that location and produce more positive
     charge in yet a further bit of axon membrane, and so on. In this manner, the
     spike travels down the axon like a flame racing along a fuse, each bit of axonal
     membrane “igniting” the next until the spike reaches the axon terminals.
        The voltage-sensitive sodium channel that initiates neuronal spikes is a key
     target of neurotoxins generated by many plants and animals. Interfere with that
     channel and you block essentially all signaling in the brain (and the rest of the
     nervous system too). The most famous—or infamous—toxin is that of the
figure 2.7. The pufferfish. Joan M. K. Tycko, illustrator.
fugu, the Japanese pufferfish (Figure 2.7). This toxin (called tetrodotoxin) is a
tiny molecular plug that fits exactly into the outer portion of the sodium chan-
nel’s central pore, thereby stopping it up. Tetrodotoxin is more than 1,000
times as powerful as cyanide and a single pufferfish has enough to kill 30 peo-
ple. Considered a delicacy in Japan, pufferfish killed many people before prepa-
ration of fugu in restaurants was closely regulated by law to prevent people
from ingesting the parts of the fish that have the highest concentrations of the
toxin. Even today fugu is the one food the emperor and his family are prohib-
ited from eating.
        But let’s return to movement along the axon when it is not interrupted by
     neurotoxins or other means. It is tempting to say that the axon is like an insu-
     lated copper electrical wire. But this obscures one of the fundamental inef-
     ficiencies of neurons. Copper wire need not do anything to keep electrical sig-
     nals moving: it is totally passive, is a good conductor, and is well insulated
     against losing electrical charge to the outside. As a consequence, electrical sig-
     nals in copper wires move at nearly the speed of light, about 669 million miles
     per hour. In contrast, the axon uses molecular machines with moving parts
     (voltage-sensitive ion channels snapping open and closed) to maintain the spike
     as it travels down its pathway. Comparatively, the axon is a quite poor conduc-
     tor. The saltwater solution on the inside of the axon is not nearly as good a con-
     ductor as copper. Moreover, the outer membrane of the axon is a rather leaky
     insulator.
        Perhaps the conduction of electrical signals along the axon is best under-
     stood through a hydraulic analogy. Insulated copper wire is like a steel water
     pipe (does not leak) that is 10 feet in diameter (great flow through its core),
     while the axon is like a “soaker” garden hose, 1 inch in diameter (poor flow
     through its core), that has been riddled with tiny holes along its length (leaks
     like hell) to allow you to irrigate a flower bed. This combination of poor core
     flow and leakiness makes water flow through a soaker hose slowly. Similarly,
     electrical current flow through an axon is also restricted by poor core flow and
     leakiness. As a consequence, electrical signals in axons typically travel slowly, at
     about 100 miles per hour. There is, however, quite a range, with the thinnest,
     uninsulated axons poking along at about 1 mile per hour and the very fastest
     (thick axons or those well insulated by neighboring glial cells) going at about
     400 miles per hour. Nonetheless, even the very fastest axons, like those involved
     in reflexively withdrawing your finger from a hot stove, are conducting electri-
     cal signals at less than one-millionth the speed of copper wires.
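To get a feel for what these speeds mean in practice, here is a small Python sketch that converts them into travel times over a roughly 3-foot axon (about the base-of-the-spine-to-toe distance mentioned earlier); the distance is just an illustrative round number:

```python
# Time for a signal to travel about 3 feet (roughly 0.9 meters) at various speeds.
MPH_TO_METERS_PER_SECOND = 0.44704
distance_m = 0.9   # an illustrative 3-foot axon, as from the base of the spine to a toe

speeds_mph = {
    "thin, uninsulated axon": 1,
    "typical axon": 100,
    "fast, well-insulated axon": 400,
    "copper wire (nearly light speed)": 669_000_000,
}

for name, mph in speeds_mph.items():
    seconds = distance_m / (mph * MPH_TO_METERS_PER_SECOND)
    print(f"{name:32s} {seconds * 1000:14.6f} milliseconds")
# A typical axon needs about 20 milliseconds, the thinnest about 2 seconds,
# and a copper wire only a few billionths of a second.
```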
   Another way that our neurons differ from man-made devices, such as com-
puters, to which they are often compared, involves the temporal range of their
signals. The pattern of spike firing is the main way neurons encode and convey
information, so timing limits on spike firing are particularly important. A desk-
top computer’s central processing unit (circa 2006) may conduct 10 billion op-
erations per second, but a typical neuron in a human brain is limited to around
400 spikes per second (though some special neurons, such as those in the audi-
tory system that encode high-frequency sound, can fire up to 1,200 spikes per
second). Furthermore, most neurons cannot sustain these highest rates for long
(more than a few seconds) before they need a rest. With such constraints on
speed and timing, it seems amazing that the brain can do what it does.
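Put another way, the firing-rate ceiling sets a minimum spacing between spikes. A quick calculation, using the figures above, shows just how wide the gap to a desktop processor is:

```python
# Minimum interval between spikes at the firing rates quoted in the text,
# compared with a circa-2006 desktop CPU doing 10 billion operations per second.
typical_max_rate = 400          # spikes per second, typical neuron
auditory_max_rate = 1_200       # spikes per second, specialized auditory neurons
cpu_ops_per_second = 10e9

print(f"typical neuron:  {1000 / typical_max_rate:.2f} milliseconds between spikes")
print(f"auditory neuron: {1000 / auditory_max_rate:.2f} milliseconds between spikes")
print(f"desktop CPU:     {1e9 / cpu_ops_per_second:.1f} nanoseconds per operation")
# Roughly 2.5 milliseconds versus 0.1 nanoseconds: a gap of about 25 million-fold.
```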
TO RETURN TO our neuronal story, we last left the spike racing down the axon
highway to meet its fate. When the spike reaches the axon terminal it produces
its characteristic explosive positive deflection in membrane potential. But, in
the terminal, in addition to causing voltage-sensitive sodium channels to open,
this voltage change also opens another class of ion channels that selectively pass
calcium ions. Like sodium ions, calcium ions are positively charged (they have
a charge of +2) and have a much higher concentration outside the cell than in-
side. So, like sodium ions, they too rush inside when a calcium channel is
opened.
   When calcium ions rush into the terminal they not only produce positive
deflection in membrane potential, but also trigger unique biochemical events.
Special sensor proteins for calcium ions are built into the neurotransmitter-
containing synaptic vesicles. These sensors, upon binding calcium ions, set in
motion a complex biochemical cascade that results in the presynaptic vesicle
contacting a specialized patch of membrane called the release site and then fus-
ing with it. Fusion of a vesicle causes the formation of a structure that resembles
     the Greek capital letter omega (Ω), which allows the contents of the vesicle,
     the glutamate molecules, to diffuse into the synaptic cleft and ultimately bind
     postsynaptic receptors (see Figure 2.3). In this way, the cycle of neuronal signal-
     ing from EPSP to spike to glutamate release to EPSP is completed and informa-
     tion is conveyed from neuron to neuron.
ALBERT EINSTEIN, in an oft-quoted critique of Werner Heisenberg’s Uncer-
     tainty Principle, said, “God does not play dice with the Universe.” By the stan-
     dards of modern physics, Einstein turned out to be wrong. If I were to make the
     related statement “Our brains do not play dice with our synapses,” it would
     also be wrong. At most synapses in the brain, when a spike invades the pre-
     synaptic axon terminal and causes influx of calcium ions, this does not neces-
     sarily result in vesicle fusion and the release of neurotransmitter. It is, quite sim-
     ply, a matter of chance. The probability of neurotransmitter release for a single
     spike might be 30 percent at an average synapse in the brain. Some synapses
     have release probabilities as low as 10 percent and a few release neurotransmit-
     ter every single time (100 percent probability), but these are the exceptions, not
     the rule. Most synapses in our brains do not function reliably: rather, they are
     probabilistic devices.
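If you want to see what a 30 percent release probability looks like in practice, here is a minimal simulation; the synapse is modeled as nothing more than a weighted coin flip, which is of course a cartoon of the real biochemistry:

```python
import random

def releases_from(spike_count, release_probability=0.30):
    """Count how many arriving spikes actually trigger neurotransmitter release."""
    return sum(random.random() < release_probability for _ in range(spike_count))

random.seed(1)
arriving_spikes = 1_000
released = releases_from(arriving_spikes)
print(f"{released} of {arriving_spikes} spikes caused release")
# Typically close to 300: roughly 70 percent of spikes arrive at the terminal
# "in vain," with no neurotransmitter released at all.
```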
OUR IMAGINARY EXPERIMENT has now revealed the entire cycle of electri-
     cal signaling in neurons. This is a basic template that can be used to under-
     stand many brain phenomena. But the situation is a bit more complicated than
     shown by just this one example. Glutamate opens an ion channel that lets posi-
     tive charge into the cell. This tends to move the membrane potential in a posi-
     tive direction, close to the level where a spike will fire, referred to as excitation
     (as in excitatory postsynaptic potential, EPSP). There are other neurotransmit-
     ters that produce the opposite effect, inhibition, where the probability of the
postsynaptic cell’s firing a spike is reduced. For example, the major inhibitory
neurotransmitter in the brain is gamma-aminobutyric acid, abbreviated GABA.
GABA binds a receptor that opens a channel that lets chloride ions flow into
the postsynaptic neuron. Chloride ions have a negative charge (−1), and thus
make the membrane potential more negative. This, not surprisingly, is called
an inhibitory postsynaptic potential, or IPSP, and makes it even harder for the
postsynaptic neuron to fire a spike.
  In practice, whether or not a neuron fires a spike at any given moment is de-
termined by the simultaneous action of many synapses, with excitatory and in-
hibitory actions summed to produce the total effect. Recall that the average
neuron in the brain receives 5,000 synapses. Of these, about 4,500 will be ex-
citatory and 500 will be inhibitory. Although only a small number are likely to
be active at any one time, most neurons will not be driven to fire a spike from
the brief action of a single excitatory synapse, but will require the simultaneous
action of about 5 to 20 synapses (or even more in some neurons).
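A cartoon of that summation can be written in a few lines of Python. Each active synapse is treated as a fixed step in membrane potential, and the step size here is my own assumption, chosen so that roughly fifteen simultaneous excitatory inputs are needed to reach threshold, within the 5-to-20 range just mentioned; real EPSPs and IPSPs vary in size and timing:

```python
RESTING_MV = -70.0     # resting membrane potential
THRESHOLD_MV = -60.0   # spike threshold at the axon hillock
EPSP_MV = 0.7          # assumed size of one excitatory input, in millivolts
IPSP_MV = -0.7         # assumed size of one inhibitory input

def fires_spike(excitatory_inputs, inhibitory_inputs):
    """Crude summation: does the membrane potential at the hillock reach threshold?"""
    potential = RESTING_MV + excitatory_inputs * EPSP_MV + inhibitory_inputs * IPSP_MV
    return potential >= THRESHOLD_MV

print(fires_spike(5, 0))     # False: a handful of EPSPs is not enough
print(fires_spike(15, 0))    # True: about 15 simultaneous EPSPs reach -60 millivolts
print(fires_spike(15, 10))   # False: inhibition pulls the sum back below threshold
```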
  Glutamate and GABA are fast-acting neurotransmitters: when they bind
their receptors, the electrical changes they produce occur within a few millisec-
onds. They are the dominant fast neurotransmitters in brain, but there are
some other fast ones. Glycine is an inhibitory neurotransmitter that acts like
GABA: it opens a receptor-associated ion channel to let chloride ions rush
in and inhibit the postsynaptic neuron. The poison strychnine, which figures
prominently in mystery novels, blocks glycine receptors and prevents their acti-
vation. Another example is acetylcholine, an excitatory neurotransmitter that,
like glutamate, opens an ion channel that lets both sodium rush in and potas-
sium out. This occurs in some parts of the brain, as well as at the synapses be-
tween neurons and muscles. The South American hunting arrow poison called
curare blocks this receptor. Animals shot with a curare-tipped arrow become
totally limp as commands from the nerves fail to activate muscular contraction.
        In addition to the fast neurotransmitters, such as glutamate, GABA, glycine,
     and acetylcholine, there are also other neurotransmitters that act more slowly.
     These neurotransmitters bind a different class of receptors. Instead of opening
     ion channels, they activate biochemical processes inside the neurons. These
     biochemical events produce changes that are slow to start but that have a long
     duration: typically, from 200 milliseconds to 10 seconds. Many of these slow-
     acting neurotransmitters do not produce a direct electrical effect: the mem-
     brane potential does not change in either the positive or the negative direction
     after they bind their receptor. Rather, they change the electrical properties of
     the cell in ways that are only apparent when fast neurotransmitters also act. For
     example, the slow-acting neurotransmitter called noradrenaline can change the
     voltage at which a spike will be triggered from its normal level of −60 millivolts
     to −65 millivolts. In a neuron that is silent, there won’t be any difference after
     noradrenaline release, but when that neuron receives fast synaptic input, there
     will be. If glutamate is released onto this neuron from synapses and this changes
     its membrane potential from the resting state of −70 millivolts to −65 milli-
     volts, this will now result in a spike. This same action of glutamate in the ab-
     sence of noradrenaline would fail to trigger a spike. In biochemical terms, we
     would say that noradrenaline has a modulatory action on spike firing: it doesn’t
     directly cause spike firing but it changes the properties of spike firing produced
     by other neurotransmitters. The bottom line here is that fast neurotransmitters
     are suited to conveying a certain class of information that requires rapid signals,
     while slow neurotransmitters are better at setting the overall tone and range.
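The noradrenaline example can be captured in the same spirit: the slow neurotransmitter never moves the membrane potential itself, it only shifts the threshold at which a spike fires. A tiny sketch, using the voltages from the example above:

```python
def fires_spike(membrane_potential_mv, threshold_mv=-60.0):
    """Does a given membrane potential reach the spike threshold?"""
    return membrane_potential_mv >= threshold_mv

# Resting potential of -70 millivolts plus a 5-millivolt EPSP from glutamate.
with_glutamate_mv = -65.0

print(fires_spike(with_glutamate_mv))                      # False at the normal -60 mV threshold
print(fires_spike(with_glutamate_mv, threshold_mv=-65.0))  # True once noradrenaline lowers it to -65 mV
```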
WHEN NEUROTRANSMITTERS are released into the synaptic cleft, they even-
tually diffuse away, achieving a low concentration. A while back, I invoked
the image of a single drop of red wine released into a full water glass that, even-
tually, will turn the contents of the glass a very pale pink. This would be fine if
neurotransmitters were released only once. But, over time, if neurotransmitter
molecules are repeatedly released, there must be some mechanism to clear the
neurotransmitter from the cerebrospinal fluid surrounding brain cells before
it achieves dangerously high concentrations (continuous activation of neuro-
transmitter receptors can often kill neurons). In terms of our wine glass image,
with repeated drops the wine glass would eventually turn a uniform shade of
pink and then red.
   Essentially, when it comes to cleaning up after neurotransmitter release,
someone has to take out the trash. For some neurotransmitters, there is the
quintessentially American solution: burn that junk in the front yard. For exam-
ple, acetylcholine is destroyed in the synaptic cleft by an enzyme specifically
built for that purpose. Most other neurotransmitters get the European treat-
ment: they are recycled. Glutamate molecules, through the actions of special-
ized transporter proteins in the outer membrane, are taken up into glial cells,
where they undergo some biochemical processing before being sent to neurons
for re-use. Most of the slow-acting neurotransmitters, such as dopamine and
noradrenaline, are taken up right back into axon terminals, where they can
be repackaged into vesicles and used again. Interestingly, GABA seems to go
both ways: it is taken up by both axon terminals and glial cells. Some neuro-
transmitter transporters make excellent targets for psychoactive drugs (such as
the antidepressant Prozac and its relatives) because blocking them will cause
neurotransmitters in the synapse to linger and achieve higher concentrations.
ALL THE INFORMATION in your brain, from the sensation of smelling a rose, to
the commands moving your arm to shoot pool, to that dream about going to
school naked, is encoded by spike firing in a sea of neurons, densely intercon-
nected by synapses. Now that we have gained an overall understanding of elec-
trical signaling in the brain, let’s consider the challenges the brain must con-
     front as it tries to create mental function using a collection of less-than-optimal
     parts. The first challenge is the limitation on the rate of spike firing caused by
the time it takes for voltage-sensitive sodium and potassium channels to open and
     close. As a result, individual neurons are typically limited to a maximal firing
     rate of about 400 spikes/second (compared with 10 billion operations/second
     for a modern desktop computer). The second challenge is that axons are slow,
     leaky electrical conductors that typically propagate spikes at a relatively sedate
     100 miles per hour (compared with electrical signals in a man-made electronic
     device moving at around 669 million miles per hour). The third challenge is
     that once spikes have made it to the synaptic terminal, there is a high probabil-
     ity (about 70 percent on average) that the whole trip will have been in vain, and
     no neurotransmitters will be released. What a bum deal! These constraints may
     have been tolerable for the simple problems solved by the nervous system of a
     worm or a jellyfish, but for the human brain, the constraints imposed by (an-
     cient) neuronal electrical function are considerable.
        How does the brain manage to create human mental function with neurons
     that are such crummy parts? More to the point, given the comparisons above,
     how is it that our brains can easily accomplish certain tasks that typically baffle
     electronic computers—for example, recognizing instantly that an image of a
     Rottweiler taken from the front and another of a teacup poodle taken from
     the rear should both be classified as “dog”? This is a deep question, central to
     neurobiology, for which a detailed answer is not at hand. Yet a more general ex-
     planation appears to be as follows. Individual neurons are horribly slow, unreli-
     able, and inefficient processors. But the brain is an agglomeration of 100 bil-
     lion of these suboptimal processors, massively interconnected by 500 trillion
     synapses. As a result, the brain can solve difficult problems by using the simul-
     taneous processing and subsequent integration of large numbers of neurons.
     The brain is a kludge in which an enormous number of interconnected proces-
sors can function impressively even when each individual processor is severely
limited.
  In addition, while the overall wiring diagram of the brain is laid down in the
genetic code, the fine-scale wiring of the brain is guided by patterns of activity,
which allows the strength and pattern of synaptic connections to be molded by
experience, a process called synaptic plasticity (which I will consider in Chap-
ters 3 and 5). It is the massively interconnected parallel architecture of the brain
combined with the capacity for subtle rewiring that allows the brain to build
such an impressive device from such crummy parts.
     Chapter Three
     Some Assembly Required
IT’S A DAUNTING task to develop a brain. The nervous system must be precisely
     constructed as the fertilized ovum develops into the mature organism. The
     tiny roundworm called Caenorhabditis elegans generates, arranges, and wires to-
     gether a neural circuit of exactly 302 neurons and about 7,800 synapses. These
     302 neurons must be derived from rapidly dividing precursor cells, migrate to
the appropriate location in the body of the worm, and express the right proteins
to make neurotransmitters and to form ion channels, receptors, and the like.
     Finally, these neurons must grow their axons and dendrites in the correct way
     to wire the whole thing together properly. Faults in creating this neural cir-
     cuitry result in worms that can’t wriggle properly through the soil or have prob-
     lems finding food or avoiding dangerous conditions. It’s a complicated recipe
     to specify all of these neuronal properties and connections. Fortunately, the
figure 3.1. The roundworm Caenorhabditis elegans, about 1 millimeter long. It has a
            transparent body that allows researchers to see internal structures, includ-
            ing all of its 302 neurons. Joan M. K. Tycko, illustrator.
roundworm has, encoded in its DNA, about 19,000 genes that can potentially
help guide this process.
  The human brain obviously is a much bigger challenge. Its development
must correctly specify the location, properties, and connections of about 100
billion neurons and about 500 trillion synapses. If all of this process were en-
coded in our DNA, we might expect that we would need many more genes
than the roundworm. Actually, the best estimates to date from the Human Ge-
     nome Project are that we have about 23,000 genes, not many more than the
     worm. In the human about 70 percent of these genes are expressed in the brain
     (the brain is not only the “energy hog” of the body, it’s also a “gene hog”). Be-
     cause worm neurons are really tiny and sparse and are therefore hard to dissect
     and analyze, we don’t actually know what fraction of genes is expressed in the
     worm nervous system, but a reasonable guess would be 50 percent. So, as a ball-
     park estimate, the worm has about 9,000 genes expressed in 302 neurons while
     humans have about 16,000 genes expressed in 100 billion neurons. There is
     some evidence to suggest that human genes are more likely than worm genes to
     use a trick known as alternative splicing by which a single gene can give rise to
     multiple, related gene products. But even if we imagine that human neural
     genes are, on average, three times as likely to undergo splicing as their worm
     counterparts, we still wind up with a situation where the number of gene prod-
     ucts per neuron (a rough measure of the capacity of genetic information to in-
     struct brain development) is around 100 million-fold lower for humans than
     for worms.
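That "around 100 million-fold" figure comes from nothing fancier than division; here is the ballpark arithmetic spelled out (the three-fold splicing multiplier is the deliberately generous assumption made above):

```python
# Gene products per neuron: worm versus human, using the ballpark figures in the text.
worm_genes_expressed = 9_000     # a guessed 50 percent of the worm's ~19,000 genes
worm_neurons = 302

human_genes_expressed = 16_000   # about 70 percent of our ~23,000 genes
splicing_multiplier = 3          # generous assumption: three-fold more products via splicing
human_neurons = 100e9

worm_ratio = worm_genes_expressed / worm_neurons
human_ratio = (human_genes_expressed * splicing_multiplier) / human_neurons

print(f"worm:  about {worm_ratio:.0f} gene products per neuron")
print(f"human: about {human_ratio:.1e} gene products per neuron")
print(f"difference: about {worm_ratio / human_ratio:.0e}-fold")
# On the order of ten million- to 100 million-fold fewer gene products per neuron in humans.
```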
       So, how can our genes rise to the task at hand? How can they specify the
     complete development of such a large, complex structure as the human brain?
     The answer, simply, is that they can’t: although the overall size and shape of the
     brain and the large-scale pattern of connections between brain regions and cell
     types are instructed by genes, the cell-by-cell details are not. The precise speci-
     fication and wiring of the brain depends upon factors not encoded in the genes
     (called epigenetic factors), including the effects of the environment. In this
     case, as we will discover, the word “environment” is used broadly to encompass
     everything from the chemical environment of the womb to sensory experience
     starting in the womb and continuing through childhood as the brain matures.
       The central issue here, the relative contribution of genetic and epigenetic
     factors to brain development, may sound esoteric, but it is at the core of a de-
bate which has been raging since before Darwin’s time: the famous and often
bitter debate about whether “nature” or “nurture” is more important in the de-
termination of human mental functions and personality. Over the last 150
years or so, the pendulum of scientific thought has swung at various times to
both extremes. Some extreme nurturists, such as the founder of behaviorist psy-
chology, B. F. Skinner, have claimed that the human brain is a “blank slate”
with no genetic constraints and that human cognition and personality are en-
tirely formed by experience, particularly early experience. On the other side of
the debate have been the extreme naturists (not to be confused with people
who like to go bungee-jumping in their birthday suits), a group that has in-
cluded such historic figures as William James. Naturists have claimed that hu-
man mental traits and personality are largely determined by genes, and that
barring extreme environmental events, such as being locked in a dark room for
long periods, early experience does not significantly contribute.
  The debate continues today, but the range of views has trended toward the
middle. Now fewer scientists inhabit either extreme pole of the nature-nurture
spectrum. In part, this stance has come from accumulating evidence that, for
some mental and behavioral traits, there is a clear contribution of genes. A por-
tion of this evidence has come from studies of genetically identical twins (called
monozygotic twins by biologists) separated soon after birth and raised by dif-
ferent families. For example, psychological tests that pinpoint personality traits,
such as extroversion, conscientiousness, or openness, have shown that identical
twins tend to share many of these traits whether or not they were raised
together. These studies have been performed by
now in a number of different countries, mostly in the more affluent parts of the
world.
  Not surprisingly, tests of “general intelligence” in adopted twins have gener-
ated a lot of controversy. Early studies on this topic were sloppily designed and
     some even involved scientific fraud. More recently, however, large, carefully de-
     signed trials seem to converge on a similar conclusion: in children and young
     adults from middle-class or affluent families, in studies that have used a combi-
     nation of twins, identical and nonidentical, raised together and apart, about 50
     percent of “general intelligence” can be attributed to genes, with the remainder
     determined by environmental factors. In other words, genes influence general
     intelligence but to a lesser degree than they influence personality.
       Some telling details emerge from intelligence tests of particular twin sub-
     groups. For example, when identical twins are adopted into different families
     and one of those families is extremely poor, the poor twin is much more likely
     to score lower on intelligence tests. Twins raised in poverty perform worse on
     intelligence tests than twins raised in middle-class households. But twins raised
     in middle-class households do not perform worse than twins raised in wealthy
     households. In other words, for the case of “general intelligence,” both genes
     and environment contribute, but in the extreme case of environmental de-
     privation seen in the poorest households, the effects of environment become
     much greater and largely overcome the effects of genes.
       In contrast, other behavioral traits do not appear to be strongly influenced
     by genes: food preferences (in both rodents and humans) are largely deter-
     mined by early experience and are therefore not similar in identical twins raised
     apart. Sense of humor is another. Identical twins raised apart tend not to find
     the same things humorous, whereas they do share a sense of humor with their
     adoptive siblings. These examples show that blanket generalizations about the
     contribution of genes to mental traits are not warranted. We must consider dif-
     ferent aspects of mental function on their own terms.
       Separated identical twin studies have been useful for untangling contribu-
     tions of genes and environment. But they are not perfect. First, shared environ-
mental factors begin in the womb. If, for example, the mother has a high level
of stress-induced hormones in her bloodstream during pregnancy, this affects
the development of both twins. This is an example of a biological influence
that is not genetic, but epigenetic. Second, although the phrase “separated at
birth” has become engrained in our popular culture, in practice, such separa-
tion rarely occurs. Most twins are adopted after spending days to weeks (and
sometimes even months) together, sharing the same nursery environment.
Third, some separated twins who have been recruited for these studies have
been reunited for some time before their participation in the study. What this
means is that direct comparisons between identical twins raised apart and iden-
tical twins raised together may overestimate the contribution of genetic fac-
tors. However, comparisons between identical twins raised apart and noniden-
tical same-sex twins (dizygotic twins) raised apart should not be biased in this
way because these factors will apply equally to both groups. Indeed, the tests
mentioned earlier have shown that identical twins raised apart are significantly
more alike in measures of personality than nonidentical twins raised apart.
Thus, at present, it is clear that for some human behavioral traits, there is a sig-
nificant contribution of genes.
  Another major factor that has moved many scientists closer to common
ground in the nature-nurture wars has been a better understanding of how
genes and environment interact in brain cells. In the past, there has been a ten-
dency to imagine that genes and behavior interact in only one direction: genes
influence behavior. We now know that the environment, broadly considered,
can also influence gene function in brain cells. In other words, nurture can in-
fluence nature and vice versa. Causality, in the brain, is a two-way street.
  Let’s briefly review a little molecular genetics to help understand how the en-
vironment can influence genes. Each cell in the human body contains the com-
     plete human genome, all 23,000 or so genes, arranged on strands of DNA orga-
     nized into 23 chromosome pairs (one set from Mom and the other from Dad)
     in the cell nucleus. Each gene consists of a series of DNA bases that provide the
     information that ultimately directs the construction of a chain of amino ac-
     ids. These chains of amino acids are called proteins. Proteins form the impor-
     tant structural and functional units of the cell. For example, they make all
     of the important neuronal molecules discussed so far. These include ion chan-
     nels (such as the voltage-sensitive sodium channels that underlie the upstroke
     of the spike), enzymes that direct chemical reactions to produce or break down
     neurotransmitters (like the enzyme acetylcholinesterase, which breaks down
     the neurotransmitter acetylcholine), and neurotransmitter receptors (such as
     glutamate receptors), as well as the structural molecules, the cables, tubes, and
     rods of protein that give neurons their shape.
       Every cell in your body has, encoded in its DNA, the information to make all
     proteins encoded in the genome. But, at any given time, a particular cell in your
     body is only actively making proteins from a small subset of these genes. A
     small number of genes make products that are continually needed in all cells in
     the body. These “housekeeping genes” are always on, directing the production
     of their proteins. Other genes are activated only in certain cell types. For exam-
     ple, the cells that line your stomach are not producing the proteins needed to
     grow hair, and your hair follicles are not producing the proteins involved in the
     secretion of stomach acids. Still other genes may be switched on or off at certain
     points in development or in response to particular signals, and these are the
     ones in which we are most interested.
       Gene expression is the process by which genes are turned on and off. The
     molecular mechanisms that underlie it are complex and represent an entire
     subfield of biology. In brief, however, one or more sequences of DNA called
figure 3.2. The molecular basis of the link between nature and nurture in the brain.
            Experiences activate sensory systems, which cause neurons to fire excit-
            atory synapses. This causes a brief increase in the concentration of cal-
            cium ions that, through an intermediate biochemical process, activates
            certain transcription factors, causing them to bind to promoter regions of
            certain genes and activate them. When the gene is activated it then pro-
            duces messenger RNA that instructs the synthesis of proteins, the final
            step in the gene-expression cascade. Joan M. K. Tycko, illustrator.
promoters must come into play in a section of DNA adjacent to the actual
information-containing part of these genes. Promoters are activated
     by a set of molecules called transcription factors. Typically, a given promoter
     will have a specific transcription factor that binds to it. Sometimes, in order to
     activate a given gene and start the series of events that will ultimately result in
     making its encoded protein, a particular set of transcription factors must all
     bind and activate their respective promoters at the same time.
       Transcription factors can be activated in different ways. For example, when a
     rat has been kept in the same cage for weeks and is then placed in a new cage
     with different sights and smells, a set of neurons in the cortex and the hippo-
     campus will fire bursts of spikes in response to the novel environment. When
     these neurons fire spike bursts, they cause the opening of voltage-sensitive cal-
     cium channels and calcium rushes into the cell. The increase in internal cal-
     cium concentration can activate a set of biochemical signals that ultimately re-
     sult in the activation of transcription factors, one of which is called SRF. SRF
     binds to a promoter present in many different genes called an SRE. Activation
     of the SRE promoter is usually not sufficient to activate a gene by itself, but it
     can be one of several required events. Some other transcription factors are mol-
     ecules that come from outside the cell, penetrate the outer membrane, and get
     into the cell nucleus to directly bind promoters. Many hormones, such as the
     female sex hormone estrogen or thyroid hormone, work in this way.
       So, transcription factors acting on promoters provide a biochemical mecha-
     nism by which experience, in all of its forms, can affect genes, not by altering
     the structure of the genetic information, but by controlling the timing of gene
     expression. It should be mentioned that while transcription factors are one im-
     portant way of controlling gene expression, they are not the only way. There are
     several additional steps between switching a gene on and the production of pro-
     tein, and each of these steps is also subject to regulation. I won’t go into all of
the ways this can happen, but the larger point here is that many biochemical
pathways enable experience to influence gene expression.
H AV I N G S E T T H E   stage through a consideration of some points in the nature-
nurture debate, let’s now follow the brain through its development, first in the
womb and then during early life. The fertilized ovum begins dividing to form a
ball of cells that implant in the lining of the womb several days later and ulti-
mately flatten to form the embryonic disk, a thin pancake-like structure about
1 millimeter in diameter. The ectoderm is the surface layer of cells in the em-
bryonic disk. Over the next few days a portion of the ectoderm receives chemi-
cal signals from surrounding tissue that cause it to form the neural plate, a
structure in the center of the disk. As the whole embryo grows, the edges of the
neural plate curl up and fuse together to form a tube. This neural tube will ulti-
mately become the brain at one end and the spinal cord at the other. The hol-
low core of the neural tube eventually forms the ventricles, fluid-filled spaces in
the center of the brain and spinal cord. This is the state of affairs at about 1
month after conception.
   At this point, the neural tube is composed not of actual neurons, but rather
of about 125,000 so-called neuronal precursor cells. These cells divide repeat-
edly, at a furious pace, and give rise to yet more neuronal precursor cells. The
rate of cell division in the developing human nervous system is staggering, with
about 250,000 new cells being created per minute throughout the first half of
gestation. Most of this cell division is happening deep in the developing brain,
adjacent to the fluid-filled ventricles. A precursor cell may have several fates. It
may divide again to make more precursors, it may become a neuron, or it may
become a glial cell. The factors that determine precursor cell fate are critical to
determining the ultimate size of the brain and the relative size of its regions.
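The scale of that production run is easy to check with rough arithmetic; here the first half of gestation is taken to be roughly 20 weeks, which is an approximation for the sake of the estimate:

```python
# Rough check: 250,000 new cells per minute over roughly the first half of gestation.
cells_per_minute = 250_000
weeks = 20                               # approximately half of a 40-week gestation
minutes = weeks * 7 * 24 * 60

total_cells = cells_per_minute * minutes
print(f"about {total_cells:.0e} cells")  # about 5e+10, that is, tens of billions
# The right order of magnitude for the roughly 100 billion neurons of the adult
# brain, especially since precursor cells give rise to glial cells as well as neurons.
```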
   That brain size in humans is strongly influenced by genes has been known
     for many years. More recently, the use of sophisticated brain scanners has not
     only improved the accuracy of brain size measurements, but also allowed sci-
     entists to separately measure those parts of the brain composed mostly of bun-
     dles of axons (called white matter) versus those parts of the brain composed
     mostly of neuronal cell bodies and dendrites (gray matter). Impressively, identi-
     cal twins, whether raised together or apart, are 95 percent similar in gray matter
     volume. Nonidentical twins, who share the same degree of genetic similarity as
     any two siblings, are about 50 percent alike in this measure.
       This strong finding suggests an obvious question: Can we identify particular
     genes that control the number of divisions of precursor cells during brain devel-
     opment and thereby influence brain size? In recent years, a small number of
     candidate genes have emerged in this search. The function of these genes has
     come to light through investigations of human populations in which a rare and
     incurable disorder called microcephaly has been found to run in families. Mi-
     crocephaly is a severe genetic disease that results in a brain that is only about 30
     percent of normal size. It does not just represent the low end of the normal
     range of brain sizes. Rather, adult microcephalics typically have brains about
     the size of the brain of a chimpanzee or, suggestively, the size of the brain of our
     2.5-million-year-old hominid ancestor, Australopithecus africanus.
       Analysis of microcephalics has revealed a handful of genes that harbor muta-
     tions linked to this disease. Of these, we presently know the most about one.
     The ASPM gene produces a protein involved in cell division: in particular, it
     helps to form a structure called the mitotic spindle, essential in dividing cells so
     that each new cell gets its proper share of chromosomes. An important part of
     this protein is a segment that binds a messenger molecule called calmodulin.
     The calmodulin-binding region is present in two copies in the ASPM gene of
     the roundworm, 24 copies in the fruit fly, and 74 copies in humans. Further-
     more, careful base-by-base analysis of the ASPM gene in humans, chimpan-
zees, gorillas, orangutans, and macaque monkeys has suggested that evolution
of the ASPM gene, particularly its calmodulin-binding region, has been espe-
cially accelerated in the great ape family. The greatest degree of selective change
in the ASPM gene is found along those ape lineages leading to humans. Thus it
is likely that the ASPM gene and similar genes have played central roles in the
evolutionary expansion of human brain size. You can bet that in the near future
brain researchers will look carefully at variation in the ASPM gene and related
genes to see if this predicts variation in brain size over the normal range.
  As the brain develops during gestation, it is not merely accumulating a larger
and larger disorganized mass of cells, but is also initiating important changes in
brain shape and the emergence of particular regions (see Figure 3.3). By the end
of the second month of pregnancy the neural tube has developed three swell-
ings. The front one will ultimately expand to form the massive and infolded
cortex (and some other nearby structures). The lower portions of the neural
tube will develop two sharp right-angle bends resulting from different regions
adding cells at different rates, and these bends will help to pack the lower por-
tions of the brain into their appropriate orientations. Certain regions will sprout
outgrowths that can become quite large, such as the cerebellum, which extends
from the back of the developing brain. At the time of birth, many of the neu-
rons that constitute the adult brain have been created. But the brain at birth is
far from mature because much of the fine wiring is still to come.
  The swelling and bending of the neural tube to delineate regions such as the
cortex, midbrain, and cerebellum are under the control of a set of “homeotic”
genes that are master regulators of early development. Homeotic genes code for
proteins, and these proteins are, you guessed it, transcription factors. Because
these transcription factors can contribute to the activation of many other target
genes, including those that form the boundaries between regions and that cause
groups of cells to clump together, homeotic genes can have widespread effects.
     figure 3.3. The development of the brain from 4 weeks after conception, when the
                 neural tube has just formed, through birth. Intermediate stages show the
                 formation of swellings in the neural tube and the expansion and bending
                 of the tube that ultimately give rise to the brain of the newborn. The
                 drawings of the earliest stages are magnified relative to those of the latest
                 stages: the 4-week-old neural tube, for example, is only about 3 millime-
                 ters long. Adapted from W. M. Cowan, The development of the brain,
                 Scientific American 241:113–133 (1979). Joan M. K. Tycko, illustrator.
[Figure 3.4: the labeled elements are the brain surface, a migrating neuron, a radial glial cell, and the ventricle.]
figure 3.4. The migration of newly created neurons. They crawl along radial glia to
            reach their appropriate locations in the cortex. The radial glial cell func-
            tions as a scaffold that stretches from the fluid-filled ventricle, where
            neuronal precursor cells divide, all the way to the surface of the brain.
            There, a migrating neuron comes to reside in a layer of the cortex close to
            the brain surface. Interestingly, in the cerebellum, this process is reversed.
            Newly born neurons migrate along the outside surface of the cerebellum
            and then begin crawling along radial glia to migrate inward. Adapted with
            the permission of Elsevier from A. R. Kriegstein and S. C. Noctor, Pat-
            terns of neuronal migration in the embryonic cortex, Trends in Neurosci-
            ence 27:392–399 (2004). Joan M. K. Tycko, illustrator.
     Interfering with the action of homeotic genes through mutations or drugs will
     cause massive and often fatal flaws in brain development.
       Once neural precursor cells are done dividing they must migrate from a spe-
     cialized region for cell division (which is next to the ventricles) to their final lo-
     cation in the brain. The molecular cues that guide neuronal migration are not
     completely understood, but they include adhesive molecules that guide migrat-
     ing cells and other molecules that repel them. In those regions of the brain that
     are organized into distinct cellular layers, such as the cerebellum or the cortex,
     neurons literally crawl along scaffolds formed by a special class of glial cell, ra-
     dial glia that extend from the ventricles to the brain surface (see Figure 3.4).
     The layers are generated in the following way: those cells created first migrate a
short distance to the nearest part of the developing cortex, while those neurons
     born later will crawl through the earlier cells to wind up nearer the cortical sur-
     face. In this way, the cortex develops in an inside-out fashion with the first neu-
     rons created residing in the deepest cortical layers. This complex process can go
     awry. The effects of errors in migration are less severe than those of defective
     homeotic genes, but are still very serious: aberrant neuronal migration can re-
     sult in cerebral palsy, mental retardation, and epilepsy.
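The inside-out scheme is simple enough to mimic in a few lines of code. The toy Python sketch below is a caricature of my own devising, not a biological model: each newly born neuron crawls past all of the earlier arrivals and settles closer to the surface, so the first-born cells end up deepest.

```python
# Toy caricature of inside-out cortical layering (an illustration, not a model).
# Later-born neurons crawl past earlier ones, ending up nearer the surface.

def assign_depths(birth_order):
    """Map each neuron to its depth: 1 = nearest the surface, larger = deeper."""
    n = len(birth_order)
    return {neuron: n - born_rank for born_rank, neuron in enumerate(birth_order)}

print(assign_depths(["first-born", "second-born", "third-born", "fourth-born"]))
# {'first-born': 4, 'second-born': 3, 'third-born': 2, 'fourth-born': 1}
```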
       As the embryo develops, dividing precursor cells of the neural tube ulti-
     mately must give rise to all the diverse types of neurons in the brain. Neuronal
     diversity encompasses a wide range of characteristics including shape, location,
     electrical properties, and the neurotransmitter(s) to be used. Somewhat later, of
     course, the neurons with all of these characteristics have to be appropriately
     wired together by extending axons and dendrites. Right now, let’s focus on the
     determination of these earlier neuronal properties. One could imagine a plan in
     which newly created neurons are not restricted to any fate at all. In this view,
     neurons are generic and multipotent: the properties of individual neurons are
determined entirely by their ultimate location in the brain and the signals they
receive from surrounding cells. Alternatively, neuronal precursors could be-
come divided into lineages such that, after a certain number of divisions, all of
the daughter cells of a particular precursor (and their daughters too) will only
give rise to one type of neuron.
  To put this into a real context, let’s turn to the cortex. Deep in the cortex are a
class of neurons, called layer 5 pyramidal neurons, that look kind of like carrots
with the narrow end pointed up. Layer 5 pyramidal neurons have a long thick
main dendrite and smaller branching dendrites that tend to point up or down
but not to the side. These neurons use the neurotransmitter glutamate and
receive synapses from the thalamus. Closer to the surface are a different set
of neurons, layer 2 cells. When early neuronal precursor cells from a rat that
would normally have become layer 5 cells are labeled with a bright green dye to
track them and are then transplanted into layer 2 of another rat’s cortex, they
adopt the properties of layer 2 cells. This result supports the former model,
in which developing neurons are derived from multipotent progenitors. But
when the experiment is done in reverse and later precursor cells that would nor-
mally become layer 2 cells are transplanted into layer 5, those cells do not settle
into position and grow to become layer 5 cells. Rather, they migrate out of layer
5 to find layer 2 and grow there in the appropriate fashion. This finding sup-
ports the latter model, in which neuronal fate is determined by cell lineage. Al-
though these examples are taken from the cortex, this general theme also ap-
plies to other regions of the brain: a combination of local signals and cell lineage
factors controls the generation of neuronal diversity. It turns out that, in the
end, the story is quite complex: the relative contribution of these factors varies
by brain region, cell type, and stage of development.
  To this point in our discussion of brain development we have talked a lot
     about the influence of genes and not at all about the influence of environ-
     ment. There is a reason for this. Early in development genes direct most deci-
     sions about the formation of the brain. Opportunities for environmental in-
     fluence gradually increase as development progresses, both in the womb and
     postnatally. In contrasting the role of environment in early versus late brain de-
     velopment, it is useful to distinguish between permissive and instructive influ-
     ences. The early fetus has no sensory apparatus to carry messages from the
     world outside, and is entirely dependent upon the maternal blood supply for
     energy, oxygen, and the molecular building blocks for making new cells. These
     are permissive factors: interruption by, say, a poor diet, placental malfunction,
     or maternal disease, can be devastating to fetal brain development. But if these
     basic fetal needs are met, no information is imparted by these factors that can
     specifically guide or instruct brain development.
       Another form of environmental influence on brain development is through
     circulating hormones. If the mother is under stress for any reason, from social
     factors (job loss, a death in the family) to infection, stress-induced hormones
     will pass into the fetal circulation, where they can influence neurogenesis and
     migration. The immune system of the mother may also influence brain devel-
     opment, not only through the production of antibodies, but also through a set
of molecules, cytokines, which are produced by the mother’s immune system but can
     bind to cytokine receptors on fetal neurons. Things get even more complicated
     when we consider twin fetuses. Hormones produced by one twin can affect the
     brain development of the other.
       Early brain development can also be massively influenced by maternal use of
     some drugs (both therapeutic and recreational) and alcohol and can be more
     subtly affected by nicotine. Interestingly, not all of the drugs that can influence
     fetal brain development are drugs taken to affect the mother’s brain function.
For example, certain antibiotics and even acne treatments can have significant
effects on fetal brain development.
IN THE LATER stages of pregnancy, while creation of new brain cells continues
along with migration and specification of neuronal type, the really hard prob-
lem emerges: how to wire the neurons together properly. Here’s the difficulty:
Not only must neurons from, say, the eye, project to the appropriate visual part
of the brain (a particular region of the thalamus that in turn sends axons to the
visual part of the cortex in the far back portion of the brain), but also the spatial
relationship between adjacent points on the retina, where light is sensed, must
be preserved as axons from the eye go into the brain. Otherwise, the visual
world would be all scrambled and it would not be possible to construct a visual
image of the outside world. This is not just a problem for the visual system.
Other sensory systems also have orderly representations of sensory information
that must be preserved as brain regions wire together.
   Classic experiments published in the 1940s revealed some important aspects
of how the brain wiring proceeds. Roger Sperry of Caltech rotated one eye
in developing frogs 180 degrees in its socket, before the axons from the eye
began to grow into the brain (Figure 3.5). What he found was that even in
the case of a rotated eye, the axons from the eye solved the problem of finding
their normal targets in the brain’s visual center. For frogs this is the optic tec-
tum, which is the equivalent of our human visual midbrain, discussed in Chap-
ter 1. It seemed as if each neuron in the eye was able to find its appropriate
target in the optic tectum using chemical cues even when eye rotation dis-
rupted physical cues. Sperry concluded that there are synapse-specific chemical
nametags that match axons to their target dendrites or cell bodies during devel-
opment.
[Figure 3.5: panels compare a normal and a rotated eye. Labels include the left and right eyes; the left visual field with its dorsal (D), ventral (V), nasal (N), and temporal (T) axes; the fibers crossing to the left and right optic tectum; and the tectal axes (medial, lateral, rostral, caudal).]
  The general idea that chemical cues can guide appropriate synapse forma-
tion has held up well under scrutiny. But the evidence for precise, individual
synaptic nametags is poor. For example, if instead of rotating the eye, an experi-
menter destroys half of the optic tectum in the frog’s brain, then all of the in-
growing axons from the eye will crowd into the remaining part of the tectum.
This is inconsistent with a nametag model (which would predict that half of the
axons would not find their predetermined targets), and suggests an alternative
in which gradients of molecules expressed on the surface of neurons in the tar-
get region guide the ingrowing axons. Indeed, in recent years some of the mole-
cules establishing these gradients have been discovered, and it has been shown
that perturbation of these molecules can disrupt orderly synapse formation. It
turns out that a number of the molecules that guide axon outgrowth, by at-
tracting and repelling the tips of growing axons, are the same molecules that
guide migration of neurons slightly earlier in development.
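A toy sketch can make the gradient idea concrete. In the Python caricature below (my own simplification, not a model from the experimental literature), each ingrowing axon carries a graded amount of "receptor" and each target cell expresses a graded amount of "ligand"; axons are matched to targets by rank rather than by individual nametags, so destroying half of the target field compresses the map instead of leaving half of the axons stranded.

```python
# Toy gradient-matching sketch: axons are ranked by receptor level, target cells
# by ligand level, and connections are made rank-to-rank. No per-synapse nametags.

def wire_by_gradient(axon_receptor_levels, target_ligand_levels):
    axons = sorted(range(len(axon_receptor_levels)),
                   key=lambda i: axon_receptor_levels[i])
    targets = sorted(range(len(target_ligand_levels)),
                     key=lambda j: target_ligand_levels[j])
    # Spread the ranked axons evenly across whatever targets exist.
    return {axon: targets[rank * len(targets) // len(axons)]
            for rank, axon in enumerate(axons)}

retina = [i / 9 for i in range(10)]          # 10 axons with graded receptor levels
tectum = [j / 9 for j in range(10)]          # 10 target cells with a ligand gradient

print(wire_by_gradient(retina, tectum))      # an orderly one-to-one map
print(wire_by_gradient(retina, tectum[:5]))  # half the tectum destroyed: all the
                                             # axons crowd into what remains
```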
  But can gradients of guidance molecules determined by genes completely
solve the problem of brain wiring? The answer is no.
figure 3.5. The wiring of the visual system in a frog. The top left panel shows a top-
            down view of the visual system of a frog with the eyes at the top and the
            visual part of the brain, the optic tectum, at the bottom. The neurons
            from the retina cross when projecting to the brain so that the right eye
            goes to the left brain and vice versa. More important, as shown at the top
            right, the retina is mapped onto the optic tectum in a precise way that pre-
            serves the integrity of the image of the visual world formed on the retina
            (although it is flipped right to left). When Roger Sperry rotated the eye of
            a frog 180 degrees and then let the axons grow into the brain, the axons
            from the retina still found their appropriate targets in the tectum. As a
            consequence, the frog now has an inverted map of its visual world and it
            will strike in the wrong location when trying to catch a fly for lunch.
            Adapted from John E. Dowling, Neurons and Networks, 2nd ed. (Belknap
            Press, Cambridge, 2001). Joan M. K. Tycko, illustrator.
Although early in development the sensory organs are not yet functional, later, during the time of wir-
     ing, the sensory organs are starting to work and the brain itself is becoming in-
     creasingly electrically active. Some senses, such as hearing and touch, are quite
     functional in the later stages of human pregnancy. In the case of fetal vision,
     there may not be much to see in utero, but there is evidence that even in the ab-
     sence of light there are spontaneous patterns of activity that sweep across the
     retina in waves. Electrical activity that results from these spontaneous waves
     is then conveyed by the developing axons to cause transmitter release in the
     brain’s vision centers.
       So, what role does neuronal activity play in wiring up the brain? Let’s exam-
     ine two key observations that will help address this question. First, we’ll con-
     sider a mutant mouse created in the laboratory of Thomas Südhof at the Uni-
     versity of Texas Southwestern Medical Center. This mouse lacks a protein in
     presynaptic terminals that is essential for the fusion of synaptic vesicles with
     the presynaptic membrane. As a result, this mouse completely lacks neuro-
     transmitter release and so activity in neurons cannot be propagated to their
     neighbors. If neuronal activity were essential in the basic wiring of the brain,
     one might imagine that the brain of this mutant mouse would be a complete
     mess, with axons and dendrites running every which way. It turns out that, ulti-
     mately, this mouse is a disaster: it dies at birth because it cannot control the
     muscles used for breathing. But when its brain was examined at birth and shortly
     before, there was a big surprise. The wiring plan of this mouse’s brain develops
     in a basically normal fashion. Axons generally project to the right places, and in
     layered structures such as the cortex neurons are properly arranged and synap-
     ses are formed, although in somewhat fewer numbers than normal. Although
     the brain looks basically normal up to the point of synapse formation, in the
     days following synapse formation, there is massive neuronal cell death. It is as
     if, in the absence of receiving synaptic transmission, many neurons could not
continue to live. This finding strongly suggests that for much of the brain, ini-
tial wiring can occur without neuronal activity.
  The second observation concerns the wiring of the brain in adult humans
who have been deaf from birth owing to a genetic defect in the cells of the inner
ear. In these people, both brain imaging and postmortem anatomical studies re-
veal that axons from neurons in the visual part of the thalamus that would nor-
mally be confined to the visual part of the cortex (located in the far back of the
brain) are also found in the auditory cortex (on the sides of the brain). In nor-
mal development, a few axons from the visual thalamus stray into the auditory
cortex early on, but they are eliminated over time. In congenitally deaf people,
visual axons are not only retained, but they sprout new branches. It is as if
lack of auditory activity in the auditory cortex allows axons from the visual
thalamus to invade new territory and make synaptic connections. This proba-
bly happens in a competitive fashion because the auditory axons gradu-
ally wither from disuse.
  These two examples are representative of a large number of similar findings
leading to the general conclusion that in most brain regions, large-scale wiring
(getting the right axons to the right brain region) and gross maps (getting the
axons to the right sub-area of the brain region) are genetically specified. Ge-
netic specification does not involve individual molecular nametags for synapses
that would, for example, instruct retinal neuron #345,721 to make a synapse
with visual thalamus cell #98,313. Rather, gradients of axonal guidance cues
are present that convey more general information to the ingrowing axons. In
contrast, the fine details of wiring (getting the axon to make particular synapses
with particular individual neurons) are where experience, as encoded
by neuronal activity, plays a role. Genetically determined large-scale aspects of
neuronal wiring generally occur earlier in development, while environmentally
determined fine details of brain wiring occur later. In the case of humans, the
     period when brain wiring affects fine-scale brain development starts in the later
     stages of pregnancy and continues through the first few years of life.
UP TO NOW, I have largely ignored the admittedly important event of birth,
     straying on both sides of the natal line in discussing brain wiring. In large part
     this is warranted, because there is no evidence to date for a dramatic or qualita-
     tive difference in human brain development that accompanies birth. Rather,
     the maturational processes of late pregnancy continue on a similar trajectory in
     newborns. The most important thing about birth from the point of view of
     brain development is a straightforward consideration: the baby’s head has to get
     through the birth canal and this limits the size of the brain at birth.
       Here the inefficiency of brain design becomes painfully apparent to the birth-
     ing mother. The reason that Mom has to struggle to squeeze out that big head is
     directly attributable to suboptimal brain design: the human brain has never
     been redesigned from the ground up and is therefore spatially inefficient (for
     example, it has two visual systems, one ancient and one modern, as discussed in
     Chapter 1), and because it is built out of neurons that are slow, inefficient pro-
     cessors (Chapter 2), a human needs to employ massive interconnected network
     processing using about 100 billion neurons and 500 trillion synapses. Hence, a
     big head.
       At birth, the volume of the human brain is about 400 cubic centimeters, or
     about the size of an adult chimpanzee’s. It will continue to grow quite rapidly
     until about the age of 5, at which point the brain reaches about 90 percent of its
     maximal size. After the age of 5, the brain continues to grow at a slower rate un-
     til stabilizing at about the age of 20. The period from birth to the age of 20, in
which the brain grows to more than three times its size at birth, is accompanied
     by a host of changes in brain structure. A subset of glial cells in the brain are se-
creting myelin, an insulating substance that wraps around axons.
figure 3.6. Maturation of the human cortex in early life. Although the number of
            neurons changes only slightly, the axons and dendrites of these neurons
            become much more elaborate. This drawing shows a representative subset
            of neurons. It also omits glial cells, which, if shown, would fill in most of
            the space between the neurons. Adapted from J. L. Conel, The Post-natal
            Development of the Human Cerebral Cortex, vol. 1 (Harvard University
            Press, Cambridge, 1939). Joan M. K. Tycko, illustrator.
This myelin wrapping accelerates spike propagation and reduces energy usage. Myelin secretion causes an increase
in the volume of the white matter. In addition, this is a period of extensive
branching and elaboration of dendrites and axons (Figure 3.6) accompanied by
the formation of many, many new synapses.
   In general, the increase in brain volume after birth is not accompanied by an
increase in the number of neurons. A small fraction of the brain’s complete
population of neurons is newly created in the first year of life, but some neurons
die off in this period, leaving the total number basically unchanged. If we count
     the total number of neurons created during brain development, both before
     and after birth, we find that about twice as many neurons are created as ulti-
     mately reside in the mature brain.
       What happens to these extra 100 billion neurons, most of which die before
     birth? The answer reveals a lot about how electrical activity contributes to the
     fine structure of brain wiring. Basically, the developing brain is a battleground.
     There is a competition for survival among neurons that is well encapsulated in
     the popular phrase “Use it or lose it.” What this means is that during develop-
     ment more neurons are created than can actually be used, and, in general, the
     ones that survive are the ones that are electrically active. The way a neuron be-
     comes electrically active is by receiving synapses that release neurotransmitters
     and thereby cause it to fire spikes. So, if we look in a bit more detail, the battle is
     being fought not at the level of whole neurons, but on a smaller scale at the level
     of synapses. Recall that synapses that are not used tend to wither away (like the
     synapses conveying auditory information in deaf people), while synapses that
     remain active are maintained. This encompasses a portion of the idea of synap-
     tic competition, but it is not everything. A synapse can “lose” and be eliminated
     even if it is active to some degree, if its neighbor is much more active. Strong ac-
     tivation of a synapse not only preserves and strengthens it, but also makes its
     neighbors weaker and ultimately can cause them to be eliminated. I’ll talk a lot
     about the molecular basis of how this happens in Chapter 5, when I consider
     memory storage that reuses these same mechanisms.
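The flavor of this competition can be captured in a small sketch. The update rule below is an illustrative cartoon of my own choosing, not a rule taken from the text or the literature: each synapse gains strength in proportion to its own activity, loses strength in proportion to its neighbors' activity, and counts as eliminated once its strength hits zero.

```python
# Cartoon of activity-dependent synaptic competition ("use it or lose it").
# The rule and the numbers are illustrative assumptions, not measured values.

def compete(weights, activity, gain=0.2, penalty=0.1, rounds=30):
    for _ in range(rounds):
        total = sum(activity)
        # Reward each synapse for its own activity; penalize it in proportion
        # to the combined activity of its neighbors. Floor at zero = eliminated.
        weights = [max(0.0, w + gain * a - penalty * (total - a))
                   for w, a in zip(weights, activity)]
    return weights

# Three synapses start out equal: one nearly silent, one moderately active,
# one strongly active.
print(compete(weights=[1.0, 1.0, 1.0], activity=[0.1, 0.4, 1.0]))
# The silent synapse is eliminated, the moderately active one withers even
# though it fires, and the strongly active one grows at its neighbors' expense.
```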
       Can we then envision environmental molding of brain development as a
     process by which experience selects from a preexisting set of synapses and neu-
     rons, keeping some active ones and killing off some (relatively or absolutely) in-
     active ones? Does the sculptor of experience chisel away at the block of stone
     that is the developing brain to create the mature form? This is an idea that has
     been very attractive to certain brain researchers, computer scientists, and even
some philosophers. It has been given names such as “selectionist theory” or
“neural Darwinism.” Although at some level these ideas are correct, they are far
from complete. There is now excellent evidence from different animals, brain
regions, and conditions that experience-driven electrical activation can cause
axons to sprout new branches that will develop new presynaptic terminals. This
can also occur on the postsynaptic side: electrical activity can cause the forma-
tion of new dendritic spines and small dendritic branches. So, if the brain is a
block of clay, then experience sculpts it not just by carving away inactive or in-
effective parts but also by sticking on new bits in the form of newly created wir-
ing (axons, dendrites, and synapses) in active regions.
  The ability of the brain to be modulated by experience is called neural plas-
ticity. The degree of neuronal plasticity will vary depending upon the brain
region and the stage of development. This gives rise to the idea that there are
critical periods during which experience is necessary to properly sculpt neural
circuits for certain brain functions. One of the best examples comes from vi-
sion. If a baby has an eye closed with a bandage (to treat an infection, for exam-
ple) and the bandage stays on for a long time, then that baby can be blinded in
that eye for life. The same bandage applied to the eye of an adult will cause no
lasting problem. The reason for the blindness is not that the eye has ceased to
function (this can be confirmed by recording light-evoked electrical activity
from the eyeball) but rather that the information from that eye was not present
to help retain the appropriate connections in the brain during the critical pe-
riod for vision.
  There are other forms of neural plasticity that are not subject to a clearly de-
limited critical period. In the early 1960s neural plasticity was not a widely con-
sidered topic. Most scientists thought that the brain had a set of connections
which were wired like a circuit board in a radio and not subject to change. So it
was quite a shock to the scientific community when Marion Diamond and her
coworkers at the University of California at Berkeley analyzed the brains of
adult rats that had been removed from their boring, prison-like, individual lab
cages and housed in an "enriched environment" with toys, places to explore,
and other rats. After several weeks in the enriched environment, these rats were
killed and their brains prepared for microscopic examination. In several corti-
cal regions, neuronal dendrites were larger and much more highly branched
and there were more dendritic spines and more synapses than in rats kept in
Spartan conditions. This suggested that even the adult brain was much more
plastic than anyone had imagined at that time.
figure 3.7. The effect of deprived and enriched environments. Deprived environ-
            ments can reduce the dendritic complexity of neurons in the cortex and
            hippocampus. Adapted from C. J. Faherty, D. Kerley, and R. J. Smeyne,
            A Golgi-Cox morphological analysis of neuronal changes induced by
            environmental enrichment, Developmental Brain Research 141:55–61
            (2003). Joan M. K. Tycko, illustrator.
  Crucially, this process was reversible. Rats placed in the enriched environ-
ment for several weeks and then returned to standard lab cages for several more
weeks had cortical neurons that looked like those of rats that had never left the
standard cages. It is tempting to leap to the conclusion that a similar strategy
of “environmental enrichment” would be beneficial to children. The thing to
keep in mind here is that the so-called enriched environment for rats is ac-
tually just a simulation of what rats encounter in the wild. The standard lab
cage is deadly boring: for the rat it’s like being in solitary confinement. Rather
than showing that extra enrichment beyond normal experience can boost
brain growth, what this experiment shows is that severe environmental depriva-
tion can, at least temporarily, cause a reduction in the complexity of cortical
circuits.
Are there correlates of the brain's critical periods that are reflected in higher
cognitive processes? There is evidence to suggest that a critical period exists for
language acquisition. Babies under the age of 6 months appear to be able to dis-
tinguish all forms of speech sounds from any language. But after 6–12 months
of exposure solely to Japanese, infants have begun to tune into the sound differ-
ences important for Japanese speech and to ignore other distinctions that are
not present in Japanese (such as the “r” versus “l” sounds in English). The flip
     side of this phenomenon is that babies exposed to two languages can develop
     perfectly accented speech in both languages.
       Children learning a second language after they are 5 years old or so can do
     very well, but are likely not to have perfect accents. School-age children gener-
     ally will acquire second or third languages more easily than adults. A few studies
     suggest that the window for development of a first language closes at around 12
     years. But these claims depend upon a handful of attempts to teach language to
     abused children who were essentially locked away for large parts of their child-
     hood. A 12-year-old who had suffered in this way was able to acquire only the
     most basic rudiments of language, while a 6-year-old subjected to similar depri-
     vation and abuse eventually learned language quite well. The problem is that
     these children obviously were deeply traumatized in ways that could strongly
     influence subsequent social learning. Therefore, it is hard to make a straightfor-
     ward conclusion about language development from their tragic cases.
       Is it possible to identify critical periods for other forms of learning, and, if
     so, can measurements of various brain structures during early life inform this
     process? At present there is an explosion of interest in so-called brain-based
     education. References to developmental neurobiology have been used to justify
     everything from a whole-language approach to reading (as opposed to a phonics-
     based curriculum) to assessment of a student’s work portfolio as a teaching tool.
     A report from the 1997 White House Conference on Early Brain Development
     states, “By the age of three, the brains of children are two and a half times more
     active than the brains of adults—and they stay that way throughout the first de-
     cade of life . . . This suggests that young children—particularly infants and tod-
     dlers—are biologically primed for learning and that these early years provide a
     unique window of opportunity or prime time for learning.”
       Unfortunately, brain research is being invoked here to justify policies that
     may or may not be valid, but current neurobiological knowledge can add little
to the debate. For example, if we wanted to predict a critical period for learning
arithmetic, it’s not clear where we should look in the brain and what we should
look for once we determine a location. Even the general statement from the
White House conference is problematic. First, the actual evidence for a 2.5-
fold increase in brain activity in normal 3–10-year-olds is almost nil, and, sec-
ond, even if it were true there is no particular reason to believe this indicates a
unique opportunity for learning. One could just as easily imagine that this in-
creased activity represents “background noise” in the brain that might interfere
with learning and that such a finding could justify shifting more educational re-
sources to teaching older children. This is what can happen when a tiny bit of
science finds its way into a policy debate.
  It is clear that experience early in life is important for developing and fine-
tuning the circuitry in certain parts of the brain, but this cannot be used to jus-
tify the contention that there is a critical early window for many forms of learn-
ing. One problem with analyzing learning in early life is that it’s hard to distin-
guish a super-plastic state in early development in which learning might be
particularly effective from the “founder effect” of early information. Learning is
a process by which new experiences are integrated with previous experiences.
Therefore, early experience may be important, not because it is written into
neural circuitry more effectively, but rather because it is the basis for subse-
quent learning.
  Similarly, there is little evidence that efforts of some parents to “enrich” the
environments of newborns or toddlers with multicolored mobiles or Mozart
CDs will result in any measurable consequences for brain wiring, or for general
measures of cognitive function. Essentially, in terms of brain wiring, the evi-
dence to date is that a child’s early environment is like your need for vitamins:
you need a minimum dose, but beyond that, taking extra won’t help. That is,
exposure to varied spoken language, narrative, music, and the ability to explore,
     play, and interact socially are all important for youngsters. But beyond these ba-
     sic experiences that are present in most middle-class homes there is no reason to
     believe that further “enrichment” confers any benefit to the structure or func-
     tion of the developing brain.
       So, let’s break it on down. We’ve seen that the initial stages of brain develop-
     ment, proliferation of neuronal precursor cells and migration of these cells to
     their correct positions, are mostly genetically determined. There are environ-
     mental influences at these early stages but they are mostly permissive rather
     than instructive. Environmental influences during early gestation tend to be re-
     vealed with problems such as maternal malnutrition or stress. As development
     progresses to the stage of wiring the brain, there is a mixture of genetic and en-
     vironmental influences with genes guiding large-scale wiring and neural activ-
     ity (deriving from both internal and external sources) guiding fine-scale wiring.
     Patterns of experience-driven neural activity can influence fine-scale wiring both
     by eliminating some relatively disused synapses and neurons and by promoting
the growth of new axons, dendrites, and synapses. In certain brain regions
     (such as the visual cortex) there are critical periods in early life where experience
     must be present or the fine-scale wiring will degenerate and never regrow in
     later life. In other brain regions, experience-driven neuronal plasticity allows
     for the fine-scale wiring of the brain to be subtly changed throughout life. In
     Chapter 5, I will explore how these mechanisms, which underlie the influences
     of nurture on brain development, have been retained in the mature brain and
     modified to store memories.
       How did brain development come to be a two-way interaction between na-
     ture and nurture? This situation has been imposed by three main factors. First,
     our neurons are slow and unreliable processors. Second, our brains have never
     been redesigned from the ground up and are therefore filled with multiple sys-
     tems and anachronistic junk. These two factors work together to require our
brains to employ a huge number of neurons to achieve sophisticated compu-
tation. Third, this number of neurons is so large that it is not possible to ge-
netically specify each and every synaptic connection with a unique chemical
nametag. Therefore, because of informational constraints imposed by brain
size, fine-scale brain wiring must be driven by experience rather than genes. Al-
though this means that we must spend an unusually long childhood wiring up
our brains with experience (much longer than any other animal), the mecha-
nisms of neural plasticity that have emerged to allow this have also given us our
memories and ultimately, our individuality. Not a bad deal, really.
     Chapter Four
     Sensation and Emotion
EVERY DAY WE go through our lives trusting our senses to provide us with
     the lowdown: a direct, unadorned view of the external world. In particular,
     we are inclined to believe vision over our other senses. To illustrate this, we
     need to look no further than the usage of sensory terms in our casual speech:
        “I see that the President is a liar.”
        [This means “the truth about the President is revealed to me.”]
        “I hear that the President is a liar.”
        [This may or may not be true. It warrants further attention.]
        “Something doesn’t smell right about this President.”
        [I’m suspicious, but it’s hard to say exactly why. It warrants further atten-
     tion.]
  Whichever President we’re talking about here, the larger point is that we
trust our senses and, of our senses, we trust vision the most—think of “eyewit-
ness” testimony in court. What’s more, in everyday life we behave with the im-
plicit assumption that our sensory information is “raw data,” and, if necessary,
we can evaluate this data dispassionately and, only then, make decisions and
plan actions based upon it.
  What I hope to convey in this chapter is that this feeling that we have about
our senses, that they are trustworthy and independent reporters, while over-
whelming and pervasive, is simply not true. Our senses are not built to give us
an “accurate” picture of the external world at all. Rather, through millions of
years of evolutionary tinkering, they have been designed to detect and even ex-
aggerate certain features and aspects of the sensory world and to ignore others.
Our brains then blend this whole sensory stew together with emotion to create
a seamless ongoing story of experience that makes sense. Our senses are cherry-
picking and processing certain aspects of the external world for us to consider.
Furthermore, we cannot experience the world in a purely sensory fashion be-
cause, in many cases, by the time we are aware of sensory information, it’s al-
ready been deeply intertwined with emotions and plans for action. Simply put:
In the sensory world, our brains are messing with the data.
SO HOW DOES     this sensory manipulation come about? To start with, let’s con-
sider some general themes in the organization of sensory systems. These sys-
tems are typically organized into maps of the external world. In Chapter 3, I
talked about how the rough map of the visual world is created in the brain by
gradients of axon guidance molecules during early development and is then re-
fined by experience at a later stage. So, if you look at the place in the cortex
where visual information first arrives (called the primary visual cortex) you will
find a map of the visual world (in this case, the map happens to be upside-down
     and backward). What this means is that the far right portion of this area will be
     activated by light coming from the far left of the field of view, and, conversely,
     the far left of this area will be activated by light coming from the far right of the
     field of view, with the intermediate areas of the visual cortex filling in the mid-
     dle. Other senses also have maps. For hearing, the map is for pitch: if you look
     at the primary auditory cortex you will find that one end is activated by very
     high tones and the other by very low tones, with intermediate pitches arranged
     gradually in between.
       These maps, though organized, often reflect the particular anatomy of sen-
     sory systems. For example, your retina devotes an unusually large number of
     light-sensing neurons to evaluating the very center of your field of view (this is
     why your visual acuity and color vision are better in the center than at the pe-
     riphery). As a consequence, the map of the visual world in your cortex is dis-
     torted so that those neurons responding to light in the center of your visual
     field take up an inordinate amount of cortical space. An even more dramatic ex-
     ample of this is found in your primary somatosensory cortex, where informa-
     tion about touch and body position first arrives in higher brain centers. We
     have very good tactile discrimination in our fingers and face, particularly the
     lips and tongue (hence the popularity of kissing), and relatively poor tactile dis-
     crimination in some other locations such as the lower back. This is reflected in
     the size and arrangement of body parts in the cortical map of the body surface,
     which is called the sensory homunculus (Figure 4.1, left). (“Homunculus” just
     means little man.) When the size of the representation in the cortex is used to
     scale the body parts in a drawing of an assembled homunculus, as seen in Figure
     4.1, right, the exaggeration of certain body parts in the sensory map which have
     fine tactile discrimination becomes even more apparent.
       Most people who look at the sensory homunculus long enough will eventu-
ally stammer out something like, "Given how sensitive the genitals are, shouldn't they be larger?"
[Figure 4.1: the labeled body parts on the cortical map include the hip, trunk, elbow, knee, hand, foot, genitals, lips, and tongue.]
figure 4.1. The representation of tactile sense in the brain. Those body parts that have
            fine tactile sensation, such as the hand, lips, and tongue, occupy a dispro-
            portionate amount of space in the primary somatosensory cortex. Left: A
            view of the right half of the brain, cut in the coronal plane (from ear to ear)
            and opened to face us. The body parts that are represented in the cortical
            map are shown adjacent to or overlying the corresponding regions of the
            cortex. Note that the map is fractured: some adjacent body parts, such as
            the forehead and hand, are not adjacent on the body. Right: A view of the
            human male in which the sizes of the body parts have been scaled to the
            size of their representation in the primary somatosensory cortex. To me,
            this little guy looks a bit like Mick Jagger. Joan M. K. Tycko, illustrator.
We know that the genitals are sensitive to touch and that there
are particular nerves which carry sensory information from the genitals into
the spinal cord and up to the brain. One potential explanation for the size is-
sue hinges on the need to be more precise when we say “sensitive to touch.”
The parts of the homunculus that have huge representations (such as hands,
lips, and tongue) are not merely able to detect faint sensations but can also
discriminate the location of these sensations very precisely. You might imag-
     ine that these two abilities always go together, but they do not. The ability to do
     the finest discrimination, necessary for tactile form perception (as in reading
     Braille), requires a special type of nerve ending in the skin that is abundant
     in the fingers, lips, and tongue but almost completely absent in either the penis
     or the clitoris. The genitals, while they can easily detect faint sensations, cannot
     accomplish tactile form perception. In the spirit of old-fashioned natural phi-
     losophy, you can experiment with this at home. In this way, they are some-
     what like the cornea of the eye: quite sensitive to faint sensations such as a grain
     of sand, but without an ability to precisely locate those sensations. This dif-
     ference in the exact type of touch sensation is likely to explain why neither the
     cornea nor the genitals (male or female) are particularly large in the sensory
     homunculus.
SENSORY SYSTEMS IN the brain typically do not have a single map of their
     world, but rather many, spread over adjoining regions of the cortex. In many
     cases, sensory information is divided or duplicated and sent to different subre-
     gions of the cortex that are specialized to extract particular forms of informa-
     tion. A good example of this is found in the visual system. The cells that send
     visual information from the retina into the brain can be divided into two types,
     P cells (P is for “parvi,” which means small) and M cells (M = “magni” =
     large). Each P cell responds to only a small part of the visual scene, and all are
     sensitive to color. The M cells, which are important for detecting moving stim-
     uli, are insensitive to color, and integrate information from a larger area.
       Although P-cell and M-cell signals travel side by side in axons going from the
     retina to the thalamus and then in other axons from the thalamus to the pri-
     mary visual cortex, little of this information mixes in either of these locations.
     After the primary visual cortex, this information becomes clearly divided, as
different sets of axons carry P-cell and M-cell information along different routes (Figure 4.2).
figure 4.2. The processing of visual signals in two different pathways to extract
            “what” versus “where” information about objects in the world. The figure
            shows the left surface of a human brain. After a relay in the thalamus, sig-
            nals from the retina arrive at the primary visual cortex, at the far back end
            of the brain. The primary visual cortex sends fibers carrying visual infor-
            mation into two different processing streams, each with many individual
            areas. The higher pathway, into the parietal lobe, is the “where” pathway,
            specialized to determine the position, depth, and motion of objects. The
            lower pathway into the temporal lobe constitutes the “what” pathway,
            which uses visual detail and color to evaluate and identify objects.
            Adapted with the permission of Elsevier from A. C. Guyton, Textbook of
            Medical Physiology, 8th ed. (W. B. Saunders Company, Philadelphia,
            1991). Joan M. K. Tycko, illustrator.
Signals from the M cells are conveyed into a set of processing sta-
tions in the parietal lobe that are specialized to use this broadly tuned visual in-
formation to plot the location, depth, and trajectory of visual objects, both
animate and inanimate. This has been called the “where” pathway. P-cell in-
formation has another fate. It is sent to a set of regions in the temporal lobe
that use finely tuned visual information, including color, to recognize objects,
     thereby constituting the “what” pathway. In later stages the “what” and “where”
     streams converge, presumably to allow this information to be integrated in our
     visual experience.
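The division of labor between the two streams can be sketched as a toy data flow. The names and fields in the Python snippet below are invented for illustration; the only point is that the same retinal signal is processed twice in parallel, once for location and motion and once for identity, and the two results are merged only afterward.

```python
# Toy data-flow sketch of parallel "where" and "what" processing.
# All class, function, and field names here are invented for illustration.

from dataclasses import dataclass

@dataclass
class RetinalSignal:
    position: tuple    # where the object falls in the visual field
    motion: float      # coarse, color-blind information (M-cell-like)
    color: str         # fine, color-sensitive information (P-cell-like)
    shape: str

def where_pathway(sig):    # parietal stream: location, depth, motion
    return {"position": sig.position, "moving": sig.motion > 0}

def what_pathway(sig):     # temporal stream: identity from detail and color
    return {"object": f"{sig.color} {sig.shape}"}

def perceive(sig):         # later convergence into a single description
    return {**where_pathway(sig), **what_pathway(sig)}

print(perceive(RetinalSignal(position=(3, 7), motion=2.5, color="red", shape="ball")))
# {'position': (3, 7), 'moving': True, 'object': 'red ball'}
```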
       If we travel the long and winding road of the “what” pathway and examine
     the responses to visual stimuli at each station, an interesting theme begins to
     emerge. Neurons in the earliest stations, such as the retina, respond well to sim-
     ple stimuli, such as dots of light. Somewhat farther along, in the primary visual
     cortex, the optimal stimuli are more geometrically complex, like a bar with a
     particular orientation. Further still, along the “what” pathway optimal stimuli
     are actual real-world objects such as a hand or a rock. It seems as if the visual sys-
     tem gradually builds the ability to detect more complex features and objects
     through successive processing in the “what” pathway. At the very end of the
     “what” pathway, information is fed to memory and emotion centers such as the
     hippocampus and amygdala.
       The later stations of the “what” pathway can be quite specialized. Damage to
     these regions (either from trauma or developmental/genetic problems) can lead
     to very specific impairments, such as an inability to recognize particular human
     faces, a syndrome called face-blindness, or prosopagnosia. People who sustain
     damage to nearby regions can have a failure to recognize visual objects (called
     visual object agnosia). Milder cases of this can involve inability to recognize a
     particular object within a class—the inability to pick out one’s own car in a full
     parking lot is typical. More severe cases include the profound confusion of ani-
     mate and inanimate objects, as in the unfortunate individual who inspired the
     title of Oliver Sacks’s famous book The Man Who Mistook His Wife for a Hat.
     Similar phenomena have been observed in laboratory monkeys that have sus-
     tained bilateral damage to the temporal lobe: they may try to eat grossly inap-
     propriate nonfood items (such as a lit cigarette).
       So, to put it all together, when you’re at a U2 concert and your mother bursts
from the wings and runs across the stage in a desperate attempt to smooch
Bono, the M-cell driven “where” pathway in your parietal lobes will register
that there’s something moving on a trajectory that will intercept him, and the
P-cell driven “what” pathway in your temporal lobes will recognize this some-
thing as Mom. Because the “where” pathway is a bit faster (and has an easier
computational job) you will register the former slightly before the latter. Any
feelings of embarrassment or elation you may feel in this situation are probably
mediated by fibers carrying this information to your amygdala and a handful of
other regions involved in emotional responses. Of course, you don’t experience
this visual scene as separate “what” and “where” information—it’s blended into
a coherent unitary perception that feels natural and true.
WHAT HAPPENS WHEN information from multiple sensory areas, which is nor-
mally kept separate, is blended in the brain? Consider the case of E.S., a 27-
year-old professional musician who lives in Switzerland. She is a synesthete,
meaning that she has involuntary physical experiences across sensory modali-
ties. In particular, whenever she hears a certain tone interval she experiences a
taste on her tongue. This sensation is totally consistent: a major third will al-
ways produce a sweet taste, a minor seventh evokes a bitter taste, and a minor
sixth, the taste of cream. She also experiences colors in response to tones: C is
red, F-sharp is violet, and so on. Careful study by Lutz Jäncke and his co-
workers at the University of Zurich has shown that E.S. uses her remarkable
synesthetic sense as a memory aid in musical performances.
   E.S. is only one of many types of synesthete. Others can hear odors, smell
textures, or even feel heat from certain forms of visual stimulation. About half
of all synesthetes have more than one cross-modal drive, but these are never bi-
directional: someone who sees colors in response to particular odors will not
also perceive odors in response to colors. By far the most common forms of
     synesthesia are the perception of color in response to graphemes (written num-
     bers, letters, or symbols) or sounds, particularly musical sounds. Interestingly,
     though most synesthetes have their cross-modal experience triggered by an ex-
     plicit sensory experience (say, the number “5” as written in Arabic numerals but
not other representations of five such as the Roman "V" or five hash
marks), some others are triggered by concepts. For example, there is a group of
     synesthetes who have colors triggered by temporal categories: December is blue
     while May is red; Saturday is pink and Wednesday is light green.
       Synesthetes have normal-to-above-normal general intelligence and they ap-
     pear typical in personality tests and general neurological exams. They do not
     hallucinate or show an unusual incidence of mental illness. Determining the
     number of synesthetes in the general population is difficult, but recent esti-
     mates have been as high as 1 in 200 people. Synesthesia is much more common
     in women and in left-handed people. Although it is hard to exclude sampling
     bias, it appears, not surprisingly, that synesthetes tend to be drawn to the cre-
     ative professions, such as writing, visual art, music, and architecture.
       Synesthesia has been known for over 200 years, and was even found by Dar-
     win’s cousin, the nineteenth-century scientist Francis Galton, to run in fami-
     lies. But until recently, it was a phenomenon that never quite achieved scientific
respectability. Many neurologists thought that synesthetes were simply flaky
     and poetic: they didn’t really experience cross-modal associations; they just had a
     flair for metaphoric language. In their view, E.S.’s reporting that the note F-
     sharp evoked the color violet was not fundamentally different from what a wag
     said of the poet W. H. Auden, “He’s got a face like an unmade bed.”
       There are several reasons to believe that synesthetes are having genuine cross-
     modal experience rather than merely making poetic associations. First, the ex-
     periences they report do not change over time: they are consistent over many
     years, even when subjects are tested without warning. Second, some clever per-
ceptual tests have supported the idea of true synesthetic experience. For exam-
ple, imagine an array of 5’s on a piece of paper with a few 2’s thrown in, all
printed in black type on a white background. If you were asked to count the
number of 2’s this would require a systematic search and your response time
would be slow. If, however, all of the 5’s were in red type while the 2’s were
green, then the 2’s would “pop out” in your perception and you could count
them much faster. When Edward Hubbard and V. S. Ramachandran of the
University of California at San Diego gave the black-type task to number-to-
color synesthetes, they solved it rapidly, like normals facing the colored-type
task, which supports the notion that synesthetes truly see the numbers as col-
ored. Third, brain imaging studies have shown that synesthetes have activation
of brain regions corresponding to their cross-modal sense. Work by Jeffrey Gray
and his coworkers at the Institute of Psychiatry in London showed that spoken-
word-to-color synesthetes showed activation of both auditory/language regions
and centers that process color vision (called V4/V8) in a word-listening task,
while normals showed only auditory/language activation.
  The suggestion from this and other brain imaging studies is that synesthesia
results from the spread of signals from their typical sensory regions in the brain
to the regions subserving other senses. The most popular hypothesis to explain
how this comes about is that aberrant synaptic connections (say, from the audi-
tory information stream to visual color areas) somehow fail to be eliminated in
early postnatal development, and their retention and elaboration in later life
drives particular synesthetic experience. This notion is supported by the obser-
vation that in the most common forms of synesthesia, such as tone-to-color
and grapheme-to-color, we see coactivation of adjacent regions of cortex, while
rare forms of synesthesia, such as odor-to-hearing, involve coactivation of more
distant regions.
  Synesthesia is not a disease state. It is likely to represent one end of a spec-
     trum of multi-modal sensory experience: we all integrate sensory information
     to some degree. Indeed, it is possible that as infants, before the first wave of
     activity-driven refinement of synaptic connections was complete, we were all
     once highly synesthetic.
     AS WE GO   through life, whether attending a concert or walking down the street,
     we are not generally aware of the convoluted neural architecture of our sensory
     systems. We just experience the external world and it feels like the truth. In fact,
     our sensory systems are messing with nearly every aspect of our sensations from
     their quality to their timing. Millions of years of evolution have biased our sen-
     sory systems in very unusual ways. First, we must consider the rather simple
     fact that the range of stimuli we can detect is merely a subset of possible sensory
     information. We can see certain wavelengths of light from deep red to deep vio-
     let, but not light that is beyond either end of this spectrum. By contrast, many
     birds can see in the ultraviolet. This allows certain birds of prey (such as hawks)
     to detect the urine trails left by their prey (field mice or rabbits). Likewise, hu-
     mans can hear over a certain frequency range (from about 20–20,000 cycles per
     second), but this range is just one slice of auditory information: bats, whales,
     and mice can hear much higher tones (up to about 100,000 cycles per sec-
     ond). We can discriminate about 10,000 different odors, but dogs can do much
     better (250,000 odors is one estimate). The list goes on and on in every sensory
     modality. Presumably, it has been evolutionarily advantageous for dogs to have
     this large olfactory range while humans can make do with less information.
     Our senses are merely “peering through a keyhole” into sensory space.
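  To make the “keyhole” idea concrete, here is a toy sketch that treats each species’ hearing
as a simple frequency window. The human and bat limits are the rough figures quoted above;
the mouse’s lower limit is an assumption added purely for illustration.

```python
# Toy illustration of the "keyhole" idea: each species hears only a slice of
# the possible frequency space. Human and bat limits are the rough figures from
# the text; the mouse's lower limit is assumed for illustration.

HEARING_RANGE_HZ = {
    "human": (20, 20_000),
    "bat": (20, 100_000),
    "mouse": (1_000, 100_000),  # lower bound assumed
}

def who_can_hear(frequency_hz: float) -> list[str]:
    """List the species in our small table whose range includes this frequency."""
    return [species for species, (low, high) in HEARING_RANGE_HZ.items()
            if low <= frequency_hz <= high]

print(who_can_hear(440))      # concert A: ['human', 'bat']
print(who_can_hear(50_000))   # ultrasound: ['bat', 'mouse']
```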
       Evolutionary pressures have influenced not only the boundaries of our senses
     but also how that sensory information is subsequently processed in the brain.
     Our sensory systems have adapted in ways that are important for key behaviors
     such as feeding, avoiding danger, mating, and child care. Although there are
many quirks of sensory processing that are unique to particular senses, there are
also some general themes. For example, our sensory systems are generally built
to give a stronger response to novel stimuli than to those that are ongoing, a pro-
cess called adaptation. You know this from your own experience: If you walk
into your kitchen the morning after cooking fish for dinner you may detect a
lingering odor initially, but after a minute or so, you barely notice it. If you walk
out of the kitchen and then reenter it later, the odor will once again become
briefly apparent. Similarly, if you use a computer you might notice the faint
high-pitched whine of the hard drive when you first sit down to work, but this
is likely to fade from your perceptual world rather quickly. This adaptation is
likely to be evolutionarily useful because it allows you to focus on novel, poten-
tially dangerous (or tasty) stimuli out in the world.
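  If you like to think of adaptation algorithmically, the essence is that the response tracks
changes in the stimulus and melts away while the stimulus stays constant. The little sketch
below is only a cartoon of that idea, not a model of any particular receptor; the decay
constant is arbitrary.

```python
# Cartoon of sensory adaptation: the response decays while the stimulus is
# constant and rebounds whenever the stimulus changes (for example, leaving the
# fishy kitchen and walking back in). Not a model of any real receptor.

def adapting_response(stimulus: list[float], decay: float = 0.5) -> list[float]:
    """Return a response that is strong for novel stimulation and fades for ongoing stimulation."""
    responses, adapted_level = [], 0.0
    for s in stimulus:
        responses.append(max(s - adapted_level, 0.0))  # respond to what hasn't been adapted away
        adapted_level += decay * (s - adapted_level)    # adaptation catches up with the stimulus
    return responses

# Fish odor present for 6 time steps, absent for 3, then you reenter the kitchen:
odor = [1.0] * 6 + [0.0] * 3 + [1.0] * 3
print([round(r, 2) for r in adapting_response(odor)])
# The response is large at first, fades, and partially recovers on reentry.
```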
  Our sensory systems are also specifically designed to detect changes in di-
mensions other than time. One of the best examples of this is edge enhance-
ment in the visual system, which is useful for distinguishing objects from their
background, an understandably adaptive function for finding food or avoiding
predation. Edge enhancement is produced by circuitry in both the retina and
the subsequent processing stations of the brain that, when assigning a percep-
tion of luminance to a given spot in the visual field, makes it appear darker if
the surrounding area is brighter. This is produced by a process called lateral in-
hibition, in which neurons in a visual map inhibit their near neighbors when
activated. They do this using axons that form synapses releasing the inhibitory
neurotransmitter GABA. Although we are normally unaware of edge enhance-
ment in our visual world, it can be revealed in a number of optical illusions, one
of which is shown in Figure 4.3.
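  Lateral inhibition is simple enough to simulate in a few lines: let each point’s output be
its own input minus a fraction of its neighbors’ inputs, and the step at a light-dark border
gets exaggerated. The one-dimensional sketch below is a cartoon of the principle, not a model
of the retina; the inhibition strength is arbitrary.

```python
# One-dimensional cartoon of lateral inhibition: each unit is excited by its own
# input and inhibited by its immediate neighbors, which exaggerates the step at
# a light-dark boundary, the basis of edge enhancement.

def lateral_inhibition(luminance: list[float], inhibition: float = 0.2) -> list[float]:
    """Each output = own input minus a fraction of the two neighbors' inputs."""
    output = []
    for i, center in enumerate(luminance):
        left = luminance[i - 1] if i > 0 else center
        right = luminance[i + 1] if i < len(luminance) - 1 else center
        output.append(center - inhibition * (left + right))
    return output

# A dark region (0.2) next to a bright region (0.8):
scene = [0.2] * 5 + [0.8] * 5
print([round(v, 2) for v in lateral_inhibition(scene)])
# The output dips just before the border and peaks just after it, so the edge "pops".
```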
  Edge enhancement is an example of how our brain distorts our perception of
the world to render sensory information more useful. This is all very well, but
our sensory systems actually have a much bigger problem: they must make sen-
     figure 4.3. An optical illusion produced by circuits in the visual system that are de-
                 signed to enhance edges. The gray horizontal stripes in the right and left
            panels are the same uniform shade of gray. The stripe in the left panel,
                 however, appears to alternately become brighter and darker as the visual
                 system integrates information outside of the stripe to enhance edges. To
                 convince yourself that this is really true, cover everything but the horizon-
                 tal stripe in the left panel and watch it become uniformly gray. Joan M. K.
                  Tycko, illustrator.
     sory time seem continuous and flowing. At this point, you’re probably thinking
     that time is, by its nature, continuous and flowing, so why does the brain have
     to do anything to make it seem so? Let me explain what I mean. When you sur-
     vey a visual scene, your eyes do not stay still. They tend to jump rapidly from
     point to point. These jumps are called saccades and they function to place dif-
     ferent parts of the visual scene in the center of your field of view, where fine
     form or color discrimination is possible (Figure 4.4). It takes time to complete a
     saccade: your brain has to issue commands and these commands have to be re-
     layed through several locations to ultimately reach axons that make synapses on
     your eye muscles and release the neurotransmitter acetylcholine to excite them.
     Then, the eye muscles have to contract to produce the movement. The longest
     possible saccade you can do, from the far right of your visual field to the far left,
figure 4.4. Saccades involved in scanning a visual scene. An eye-tracking device was
            used to record eye position as a human subject scanned this photograph of
            a girl from the Volga region of Russia for about 3 minutes. Most of the line
            segments shown here are saccades (the others are slow tracking movements).
            From A. L. Yarbus, Eye Movements and Vision (Plenum Press, New York,
            1967); reprinted with permission from Springer Science & Business Media.
will take about 200 milliseconds (one fifth of a second). During these saccades
you do not see the visual world sliding around as your eyes shift, nor does
your vision black out during this period. Your retina does not stop sending in-
formation to your brain during a saccade. Rather, the signals showing the visual
world sliding around during a saccade are sent from your eye to your brain, but
they do not make their way into your perception. As you know, your visual per-
ception seems to make sense: it is continuous and flowing and you are generally
unaware of your eyes jumping about.
       How does your brain fill in the gaps created by saccades to achieve this
     smooth visual effect? To explain this, it is first necessary to mention that there is
     a brief delay between the point when events in the world impinge upon the sen-
     sory organs (light falls on the retina, sound waves reach the eardrum, odorant
     molecules bump into sensory cells in the nose) and the point when we become
     aware of these sensations. There is some variation in this delay, depending on
     the sense involved and the exact type and intensity of stimulation, but the range
     is generally between 50 and 300 milliseconds. This delay, like the ones that
     television networks impose on “live” broadcasts to allow them to bleep out pro-
     hibited words, allows the brain to engage in some funny business. It’s impor-
     tant to understand that the delay does not simply correspond to the time it
     takes for the first electrical signals to reach the primary cortex (usually 20–50
     milliseconds). In most cases the awareness of sensations requires further corti-
     cal processing, and, as a consequence, a bit more time.
       So, in the case of a saccade, your brain ignores the visual information con-
     veyed during the eye movement. Then, when the saccade is complete, your
     brain takes the visual image of the new location and uses it, retroactively, to fill
     in the preceding time gap. Most of the time you do not notice this at all. But in
     particular circumstances, this brain trickery can be revealed, as in a phenome-
     non called the stopped-clock illusion. When you make a large saccade that re-
     sults in your eyes coming to rest on a clock, it will sometimes appear as if the
     second hand of the clock then takes slightly longer than normal to move to its
     next position. For this illusion to work, the movement must be a true saccade
     (slowly sweeping your eyes across the visual field to ultimately rest on the clock
     engages a different mechanism in the brain) and the clock must be silent (regu-
     lar ticking will destroy the illusion). It will work on either a traditional analog
     clock or a digital clock that shows seconds, but will be most apparent on those
     trials where the saccade is completed immediately after a clock movement. For
a short period, the clock appears to have stopped (a phenomenon called chron-
ostasis) because the brain extends the percept of the new location back in time
to just before the start of the saccade.
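  The bookkeeping behind the stopped-clock illusion is easy to reconstruct. Suppose the
saccade takes the 200 milliseconds mentioned above and lands just after the second hand has
moved; back-dating the new image to the start of the saccade stretches that first tick. The
sketch below is nothing more than that arithmetic.

```python
# Bookkeeping sketch of chronostasis. Assume a large saccade of ~200 ms that
# lands on a silent clock just after the second hand has moved. The brain
# discards what arrived during the saccade and back-dates the new image to the
# moment the saccade began, so the first tick interval is perceived as longer.

SACCADE_MS = 200          # approximate duration of a large saccade (from the text)
TICK_INTERVAL_MS = 1000   # a second hand moves once per second

def perceived_first_interval(saccade_ms: int = SACCADE_MS,
                             tick_ms: int = TICK_INTERVAL_MS) -> int:
    """Perceived duration of the first tick after the saccade, in milliseconds.

    The post-saccade image is extended backward to the start of the saccade,
    so its apparent duration is the real interval plus the saccade time.
    """
    return tick_ms + saccade_ms

print(perceived_first_interval())  # 1200 ms: the second hand seems to hang for ~1.2 s
```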
   For many years, it was thought that chronostasis was a strictly visual phe-
nomenon. In the last few years, however, it has been demonstrated for other
senses as well. One study that required subjects wearing headphones to shift
their attention from their right ear to their left before judging an interval be-
tween two tones found a similar phenomenon. This may underlie the so-called
dead phone illusion, in which, when a person rapidly shifts attention and si-
multaneously activates a phone handset, she judges the silent interval before
the dial tone to be unusually long. In another experiment it was shown that,
following a rapid reaching movement, people overestimate the time their hands
have been in contact with a newly touched object. Much as in the visual stopped
clock illusion, it appears as if tactile perception were extended backward in time to
a moment at the onset of the reach. These findings suggest that chronostasis,
resulting from extending perception backward in time to mask a period where
perception would otherwise be confusing, is a widespread feature of sensory sys-
tems. This is one trick our brains employ to create a useful, coherent sensory
narrative.
M A N Y P H I LO S O P H E R S A N D   cognitive scientists approach perception as if it
were a completely objective and logical process. In their view, perceptions can
sometimes trigger emotions but it is possible to divorce emotion from percep-
tion and act on perceptions in a purely unemotional fashion. This perception/
emotion distinction resonates throughout our Western cultural tradition. We
see this most strongly in medicine, where we have two different fields for treat-
ing brain disease. Neurology mostly deals with perceptual, motor, and cogni-
tive problems, while psychiatry mostly deals with emotional and social prob-
     lems. The fact that these disciplines are separate is an accident of recent history.
     If things had gone a little differently in Vienna, St. Petersburg, and Baltimore
     around the turn of the twentieth century, there might have been a single medi-
     cal specialty devoted to all brain disease that would integrate both biological
     and talking-cure therapies. As it is now, there is no biological basis for assigning
     particular brain diseases to these specialties. It’s not as if there is a dividing line
     in the structure of the brain such that problems of the occipital and parietal
     lobes are sent to neurologists and those of the temporal and frontal lobes go to
     psychiatrists. Nor is there a biochemical dividing line: diseases of glutamate-
     using synapses are not the territory of neurologists, while diseases of dopamine-
     using synapses go to psychiatrists. In truth, it’s just that, like brains themselves,
     these fields have “evolved,” not according to any master plan, but merely in re-
     sponse to the vagaries and constraints of history. Clearly, the perception/emo-
     tion distinction cuts deep into the way we think about the brain and the way we
     deal with its dysfunctions.
       What I hope to show here is that perception and emotion are often inextrica-
     bly linked. There is little, if any, “pure perception” in the brain. By the time we
     are aware of sensations, emotions are already engaged. Fascinating examples of
     this can be seen in two complementary types of brain damage. In 1923, the
     French physician Jean Marie Joseph Capgras described a patient who, follow-
     ing temporal lobe damage, could still visually identify objects and human faces,
     but these objects and faces did not evoke any emotional feelings. As a conse-
     quence, this patient, suffering from what is now called Capgras syndrome, be-
     came convinced that his parents had been replaced by exact human replicas.
     One explanation is that he was led to this conclusion because the emotional re-
     sponses he expected to feel when seeing his parents weren’t there and, conse-
     quently, the only reasonable explanation was that these people looked like his
parents but were not actually they. The problem was exclusive to vision: the
voices of his parents still sounded genuine.
  Since the original description, quite a few more cases of Capgras syndrome
have come to light and some of these have been observed quite carefully. Capgras
syndrome is most often manifest as a feeling of parental imposters, but it can
occur for anyone or anything for which there is an expected strong emotional
response—pets, for example. Many Capgras patients find mirrors extremely
disturbing: they recognize that the reflected image resembles themselves, but
they are also convinced that the reflection is of an imposter. Often, this is terri-
fying because the reflected image is thought of as a malevolent stalker, deter-
mined to ruin the life of the patient.
  Capgras patients do not have a simple problem with either visual discrimina-
tion or emotional responses. In the laboratory, they can easily make distinc-
tions between similar faces and objects. They do not hallucinate and can have
appropriate emotional responses to auditory stimuli. These observations, to-
gether with anatomical evidence, support the view that Capgras syndrome is
specifically a defect in information transfer between the later parts of the visual
“what” pathway and the emotion centers, including the amygdala.
  The second part of this story about vision and emotion comes from patients
who have been blinded by damage to the primary visual cortex. In Chapter 1, I
discussed how some patients with this type of lesion can still accurately locate
an object in their visual field, even though they have no conscious awareness of
seeing anything (blindsight). Recently, a patient who sustained this form of
damage from repeated strokes that affected the primary visual cortex was asked
to guess the emotions expressed in photos of human faces. These faces, which
were both male and female, showed typical expressions of fear, sadness, happi-
ness, and anger. He was able to guess the correct emotion about 60 percent of
      the time. This was not a perfect score but was significantly better than the out-
      come obtained by chance. When this task was repeated with the subject in an
      fMRI machine to scan brain activity, significant activation was seen in the right
      amygdala for emotional faces, with the strongest activation produced by fearful
      expressions.
         Taken together, these clinical examples show that for both the ancient mid-
      brain visual system and the modern cortical visual system, the amygdala is acti-
      vated to engage emotional responses. It is likely that, in the case of the cortical
      “what” pathway, the amygdala is not the only region engaged in the triggering
      of emotional responses by visual information. The important point here is that
      visual information is rapidly fed into emotional centers in the brain, which
      makes it impossible to separate emotion from perception in experience. When
      an object is rapidly and symmetrically expanding in your visual field, indicating
      a collision, you cannot help taking evasive action. It’s a hard-wired subcon-
      scious response. Likewise, when you see a snake in the grass or an angry face,
      your brain will begin to prepare for “fight or flight” by triggering increased
      heart rate and other anticipatory physiological responses. This occurs before
      you are able to consciously make a plan of action. Though the examples I have
      used are from vision, this principle applies broadly to all of our senses: emotion
      is integral to sensation and the two are not easily separated.
      O N E S E N S AT I O N T H AT   we think of as intrinsically emotion-laden is pain. Pain
      is not caused merely by the overactivation of sensory pathways in the body, but
      rather by a dedicated system of sensory cells and their axons that project into
      the spinal cord, and ultimately to the brain. Counterintuitively, some of the ax-
      ons that send pain information tend to be of small diameter, and hence they are
      among the slowest in the nervous system for conducting spikes, operating at a
      speed of about 1–2 meters per second. This is why, when you stub your toe, you
can feel some sensation rapidly (through the fast, nonpain fibers) but you can
count a complete “One-Mississippi-Two” before you feel the wave of pain.
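  You can reconstruct the “One-Mississippi-Two” delay from the conduction speeds alone. The
sketch below uses the 1 to 2 meters per second quoted above for the slow pain fibers; the
toe-to-brain distance and the speed of the fast touch fibers are round numbers I have assumed
for illustration.

```python
# Rough arithmetic for the stubbed-toe delay. Slow pain-carrying axons conduct
# at about 1-2 meters per second (from the text); the ~1.8 m toe-to-brain path
# and the 50 m/s speed of the fast touch fibers are assumed round numbers.

PATH_LENGTH_M = 1.8               # assumed toe-to-brain distance for an adult
SLOW_FIBER_SPEED_M_PER_S = 1.0    # slow pain fibers, lower bound from the text
FAST_FIBER_SPEED_M_PER_S = 50.0   # assumed speed for fast, non-pain touch fibers

slow_delay = PATH_LENGTH_M / SLOW_FIBER_SPEED_M_PER_S   # ~1.8 s
fast_delay = PATH_LENGTH_M / FAST_FIBER_SPEED_M_PER_S   # ~0.04 s

print(f"touch arrives after ~{fast_delay:.2f} s, pain after ~{slow_delay:.1f} s")
# Plenty of time to count "One-Mississippi-Two" between the bump and the ache.
```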
  Pain is crucial in two ways: it helps protect us from the tissue-damaging
effects of dangerous stimuli, and it acts as a warning to learn to avoid these
sorts of situations in the future. People who have lost pain sensation owing
to nerve damage from trauma or an inherited disease are at constant risk of in-
jury. Pain is not, however, a unitary sensation. It can be divided into multi-
ple components: we now have good evidence that there are separate sensory/
discriminative and affective/motivational pathways for pain. The axons of the
sensory pathway form synapses in the lateral portion of the thalamus (far from
the midline), which in turn send axons to the body representation in the pri-
mary somatosensory cortex. Selective damage to this pathway results in a con-
dition in which the ability to discriminate the qualities of pain (sharp versus
dull, cold versus hot) is lost. Individuals with this type of lesion may be able to
describe an unpleasant emotional reaction to a particular stimulus but are un-
able to describe its qualities or even specify its location on their body.
  The affective (emotional) dimension of pain appears to be mediated by a
pathway that runs more or less parallel to the lateral sensory pain pathway: it in-
volves activation of a medial portion of the thalamus (near the midline) that
then sends axons to two cortical regions implicated in emotional responses,
called the insula and the anterior cingulate cortex. Damage restricted to the
medial affective pain pathway results in a condition called pain asymbolia. In
this condition people are able to accurately report the quality, location, and rel-
ative strength of a painful stimulus, and have intact withdrawal and grimacing
reflexes and normal-looking biopsies of their peripheral nerves. What is amaz-
ing about people with pain asymbolia, however, is that they seem to lack the
negative emotional response to pain that the rest of us take for granted: they can
report pain accurately, but it just doesn’t seem to bother them. This syndrome
      can result from a genetic defect (a family in France has been described in which
      pain asymbolia is inherited) or from traumatic damage to the insula or anterior
      cingulate cortex.
        The affective component of pain can be modulated by cognitive and emo-
      tional factors. Anxiety and specific attention to a painful stimulus can increase
      the affective component of pain while relaxation techniques and distraction
      can reduce it. A potent behavioral form of pain modulation is hypnotic sugges-
      tion that (depending upon the suggestion) can either increase or reduce the af-
      fective component of pain. When Catherine Bushnell and her colleagues at
      McGill University used hypnotic suggestion to increase or reduce the emo-
      tional component of pain felt by subjects in a brain scanner, they found corre-
      sponding increases and decreases in the activity of the anterior cingulate cortex,
      which further implicates this structure in a distinct affective pain pathway.
The anterior cingulate may do more than play a role in pain; it may also have a
      more general role in producing emotional responses to tactile stimuli. We have
      evidence that this structure is also activated by pleasant light touching (caress-
      ing, if you will) and may contribute to emotional bonding and hormonal re-
      sponses evoked by skin-to-skin contact between individuals (I’m not just talk-
      ing about sex here—the most important example of this may be in parent-child
      bonding). It may be that different subregions or biochemical pathways in the
      anterior cingulate are engaged to produce positive or negative emotional re-
      sponses to tactile stimulation.
        Recent experiments on rats have indicated that learning to avoid painful
      stimuli depends upon the affective/motivational rather than the sensory/dis-
      criminative pain pathway. In these experiments, using a simple learning task
      called conditioned place aversion, a rat is placed in a box with two chambers
      that are easy to tell apart (typically, one chamber is painted black and the other
white). When the rat enters one chamber it receives a moderately painful foot-
shock through metal bars in the cage floor. Very quickly, rats will learn to avoid
the chamber where the shock was received. But if drugs that block receptors for
the neurotransmitter glutamate are injected into the anterior cingulate cortex
before training, then conditioned place aversion learning will be blocked. Con-
versely, if the animals are placed into one chamber and, instead of receiving a
footshock, glutamate is injected into the anterior cingulate cortex, they will be-
have as if footshock had been received and will learn to avoid that chamber. But
if these injections, of either glutamate or the receptor blocker, are made into
locations in the sensory/discriminative pathway, learning proceeds normally.
Thus it is the emotional rather than the sensory response evoked by pain that
appears to provide the teaching signal for aversive learning.
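  The logic of these experiments can be restated as a toy learning rule: aversion to the
chamber grows only when the affective teaching signal is available, whether that signal comes
from a real footshock or from injected glutamate, and not at all when the anterior cingulate's
glutamate receptors are blocked. The sketch below is a deliberately simplified illustration of
that logic, not the experimenters' model; the learning rate and trial count are arbitrary.

```python
# Toy restatement of the conditioned-place-aversion result: aversion to a
# chamber grows only when the affective teaching signal is present. A footshock
# normally supplies that signal; injected glutamate can substitute for it; a
# glutamate-receptor blocker in the anterior cingulate removes it.

def train_place_aversion(trials: int, footshock: bool,
                         acc_blocked: bool, acc_glutamate: bool,
                         learning_rate: float = 0.3) -> float:
    """Learned aversion (0..1) to the shock chamber after a number of trials."""
    aversion = 0.0
    for _ in range(trials):
        # The teaching signal is the anterior cingulate's affective response:
        # driven by a footshock or by injected glutamate, abolished by a blocker.
        teaching_signal = (footshock or acc_glutamate) and not acc_blocked
        if teaching_signal:
            aversion += learning_rate * (1.0 - aversion)
    return round(aversion, 2)

print(train_place_aversion(5, footshock=True,  acc_blocked=False, acc_glutamate=False))  # learns (~0.83)
print(train_place_aversion(5, footshock=True,  acc_blocked=True,  acc_glutamate=False))  # no learning (0.0)
print(train_place_aversion(5, footshock=False, acc_blocked=False, acc_glutamate=True))   # learns anyway (~0.83)
```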
  Both humans and our hominid and prehominid ancestors live(d) in social
groups, so it is not surprising that our sensory systems appear to have some par-
ticular specializations for social interaction. A recent study has demonstrated
that showing people still photographs of hands and feet in painful situations
causes activation of brain regions that comprise the affective pain pathway. The
anterior cingulate cortex was activated by this experience and its activity was
strongly correlated with the participants’ ranking of the others’ pain. This re-
markable finding, that affective pain centers can be activated by both your own
painful experiences and those of others, may shed light on the neural substrate
of empathy. It will be interesting to repeat these experiments in populations
with disorders that impair empathy.
WE OFTEN SPEAK     of certain social interactions as being painful or hurtful. Is
this only a linguistic metaphor, or do physical and social pain really have a com-
mon substrate in the brain? A clever study by Naomi Eisenberger and her co-
      workers at UCLA has shown that subjects who were made to feel social ex-
      clusion in a three-way ball-throwing game showed strong activation of, you
      guessed it, the anterior cingulate cortex. In order to do this experiment, the
      ball-throwing game was actually a virtual one using a computer screen visible to
      the subject in a brain scanner. The exclusion in a virtual ball-throwing game is
      not a very potent form of social pain, and yet it produced a strong activation of
      a key affective pain center in the brain. One can only imagine what it would
look like if someone in an fMRI machine received a “let’s just be friends”
      phone call from a serious love interest. Considering this study together with the
      previous one on physical pain empathy, it is reasonable for us to speculate that
      the anterior cingulate and related structures in the affective pain pathway may
      be important in empathy for social as well as physical pain.
        I have talked at length about how sensory systems in the brain are inter-
      twined with emotion. Now I would like to present the idea that sensation and
      motor function in the brain are also comingled. Even today, students are shown
      brain diagrams and told that certain regions of the brain are sensory while oth-
      ers are motor. In truth, in many cases this distinction is not so clear. There are
      many places in the brain where sensory and motor function are blended, in-
      cluding the cerebellum and the basal ganglia, but now I will focus on one par-
      ticular example in the cortex with relevance for human social behavior. Several
      years ago, Giacomo Rizzolatti and his colleagues at the University of Parma
      made recordings from single neurons in a cortical region of the monkey brain
      called the ventral premotor area. This region had previously been shown to be
      involved in the planning of movements. So it was not entirely surprising that
      certain neurons in this region fired spikes when the monkey performed certain
      motions, in particular, goal-directed motions such as pushing a button or pick-
      ing up a peanut to eat. The big surprise came when it was found that a particu-
      lar neuron that might be activated by the monkey’s grasping a cup, for example,
would also be activated by watching another monkey performing this same ac-
tion. These neurons with a dual sensory and motor function were called mirror
neurons. It soon became clear that they also could be activated by watching
movements of the human experimenter (but not by watching a video of an-
other monkey or human, even when a stereoscopic video was used with 3-D
glasses for the monkey). Further investigation showed that mirror neurons can
be activated by a wide range of purposeful movements and may be found in
other parts of the frontal cortex outside the ventral premotor area.
  The discovery of mirror neurons has a lot of brain researchers very excited
because this seemingly simple finding holds promise for explaining what have
been some very enigmatic issues in human behavior. Humans and, to a lesser
degree, the great apes have developed a capacity for understanding the experi-
ences and motivations of others (such as “I know that he knows”) that is not
present in lower animals. This understanding can be used for good (empathy,
cooperation) or evil (manipulation, combativeness) social purposes and has
been called a “theory of mind.” Mirror neurons, by allowing us to understand
the actions of others in terms of our own actions, might be a biological basis of
theory of mind. It has even been suggested that mirror neurons may sow the
seeds of language in that having a theory of mind is a prerequisite for purpose-
ful linguistic communication: to want to speak you have to have the idea that
someone else is listening. We assume, rightly I believe, that mirror neurons are
also present in humans. But at the time of this writing that has not been con-
firmed, because recording from single neurons in humans is a rare procedure
and is only done briefly, in conjunction with certain forms of brain surgery.
I N S U M, OU R S E N SO RY   world is anything but pure and truthful. Built and
transformed by evolutionary history into a very peculiar edifice, it responds to
only one particular slice of possible sensory space. Our brains then process this
      sensory stream to extract certain kinds of information, ignore other kinds of in-
      formation, and bind the whole thing together into an ongoing story that is un-
      derstandable and useful. Furthermore, by the time we are aware of sensations,
      they have evoked emotional responses that are largely beyond our control and
      that have been used to plan actions and understand the actions of others.
Chapter Five
Learning, Memory,
and Human Individuality
W H AT ’ S A B R A I N   good for? We’ve seen that the lower portions of our brains
have essential control circuits that govern basic body functions: key reflexes, an
automatic thermostat, a regulated appetite for food and drink, and wakeful-
ness/sleepiness. The lower brain also has regions for coordinating our move-
ments and modifying our perception to direct our attention to the outside
world. This is the basic stuff that we share with frogs and fish, the “bottom
scoop of the ice cream cone.” The top two scoops, the limbic system and the
neocortex, are where things get really interesting. Many complex functions such
as language and social reasoning emerge in the cortex, but I contend that there
are two key brain functions that are the basis upon which these higher capaci-
ties are built. These are memory and emotion—and the interaction between
the two.
        Consider this analogy: the brain does for the individual what the genome
      does for the species. The genome, the sequence of information encoded in the
      DNA, undergoes random mutation and sometimes a mutation (or a collection
      of mutations) confers an advantage on an individual that allows him or her to
      have more and/or healthier offspring. The genome, through the Darwinian
      process of natural selection, is the book in which the story of evolution is writ-
      ten: the experience of the species ultimately modifies the genome and thereby
      the genetic traits of the species, sometimes rendering it better adapted to the
      environment. The limitation of evolution through natural selection is that it is
      not a rapid process. Species adapt to their experiences (environments) slowly,
      over many generations.
        The brain, by storing memories, performs a related function for the individ-
      ual. It is the book in which individual experience is written. Because memory
      storage is rapid, it allows an individual to adapt to new experiences and situa-
      tions. This is a much more flexible and powerful solution than relying solely
      upon mutation and selection acting upon the genome.
        But how does emotion come into it? In our lives, we have a lot of experiences
      and many of these we will remember until we die. We have many mechanisms
      for determining which experiences are stored (where were you on 9/11?) and
      which are discarded (what did you have for dinner exactly 1 month ago?). Some
      memories will fade with time and some will be distorted by generalization (can
you distinctly remember your seventeenth haircut?). We need a signal to say,
      “This is an important memory. Write this down and underline it.” That signal
      is emotion. When you have feelings of fear or joy or love or anger or sadness,
      these mark your experiences as being particularly meaningful. These are the
      memories you most need to store and keep safe. These are the ones that are
      most likely to be relevant in future situations. These are the building blocks
      that form logic, reasoning, social cognition, and decision making. These are the
memories that confer your individuality. And that function, memory indexed
by emotion, more than anything else, is what a brain is good for.
I N C A S U A L C O N V E R S AT I O N   we may say that a certain person has a good mem-
ory or another person has a bad memory. In truth, however, we know from our
everyday experience that things are not so simple. Memory is not a unitary phe-
nomenon. You may have a great ability for matching names to faces while you
struggle to memorize music for a piano recital. Your brother might remember
everything he ever reads but progresses slowly with motor memory tasks such as
learning to improve his golf swing.
   Brain researchers have worked for many years to develop a taxonomy of
memory, a means of classifying types of memory that has its roots in clinical ob-
servation (see Figure 5.1). Much of this work relies on the analysis of human
amnesiacs who have sustained damage to various parts of their brains through
infections, stroke, trauma, chronic abuse of drugs or alcohol, or, as in the case
of the patient called H.M. (Chapter 1), surgery to treat otherwise incurable sei-
zures. Other insights have come from studying more temporary forms of dis-
ruption, such as transiently acting drugs and electroconvulsive shock (used to
relieve depression that fails to respond to other therapies).
   In the 1950s it was generally thought that patients like H.M. and others who
sustained damage to the hippocampus and surrounding cortical tissue were un-
able to form any new memories at all. But detailed study of these patients
revealed that although they could no longer form new memories of facts or
events, so-called declarative memories, they could lay down memory traces for
a number of other tasks. One of these is mirror reading: learning to read words
in English that have been printed with left-right reversal (Figure 5.2). This is a
task that both normals and hippocampal amnesiacs such as H.M. can learn
with daily practice. It’s also a nice task for illustrating different types of mem-
       [Figure 5.1 diagram: HUMAN MEMORY divides into Declarative (explicit)
       memory, comprising facts and events, and Nondeclarative (implicit) memory,
       comprising procedural skills and habits, priming, simple classical
       conditioning, and nonassociative learning.]
      Figure 5.1. A taxonomy of human memory. Adapted with permission from Elsevier
                  from B. Milner, L. R. Squire, and E. R. Kandel, Cognitive neuroscience
                  and the study of memory, Neuron 20:445–468 (1998). Joan M. K. Tycko,
                   illustrator.
      ory: although both the amnesiacs and the normals showed daily improvement
      in the mirror-reading task (as indicated by progressively faster reading times),
      only the normals could recall some of the words used in the test the previous
      day—the amnesiacs had no memory of these words whatsoever (indeed, they
      also had no memory that the previous day’s training session had even occurred).
        Further experiments with hippocampal amnesiacs have revealed a large group
      of memory tasks that are retained. Amnesiacs still have memory for motor co-
      ordination—they can improve at sports with practice. Both mirror reading and
      motor coordination learning fall into the larger category of “skills and habits”
      shown in Figure 5.1. Amnesiacs also retain the ability to learn simple, subcon-
      scious associations, through a process called classical conditioning. For exam-
      ple, your heart rate will reflexively accelerate if you receive a mild shock to your
      arm, but it will not do so in response to a more neutral stimulus such as the
      sight of a dim red light briefly appearing in your field of view. But if the light is
      paired with the shock repeatedly, after a while your brain will begin to learn that
      the light predicts the shock and your heart rate will accelerate in response to the
figure 5.2. Mirror reading, a skill that both hippocampal amnesiacs and normals can
            acquire and retain with practice. The memories of the particular words
            read, however, will be retained only by the normals.
light alone. Hippocampal amnesiacs trained in this task for several days will
have no memory of the previous day’s training, but their heart rate will acceler-
ate in response to the light nonetheless.
  Perhaps the most interesting form of memory retained in amnesiacs is
achieved by a method called priming. In this task, initially devised by Elizabeth
Warrington and Larry Weiskrantz of Oxford University, amnesiacs are asked to
recall a list of words they have seen the previous day. Not surprisingly, if you
simply ask them to list the words from the earlier session they have no memory
of them at all. But if you give them the first few letters of a word, they will often
be able to correctly produce the complete word even if it feels to them as if they
      are guessing randomly. For example, if a word on the list was “crust,” the stem
      “cru____” would probably evoke the correct answer rather than other possibili-
      ties such as “crumb” or “crud” or “cruller.” What’s interesting about priming is
      that, unlike many of the other forms of memory retained in hippocampal am-
      nesia, it is a cognitive rather than a motor task.
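  Priming is easy to mimic in code: completion of a word stem is just a lookup that is biased
toward recently seen words, even though nothing about the earlier list is consciously
recalled. The tiny lexicon below is invented for illustration.

```python
# Sketch of word-stem priming: the subject cannot recall the studied list, but
# completion of a stem is biased toward words seen earlier. The lexicon and its
# ordering are invented for illustration.

LEXICON = ["crumb", "crud", "cruller", "crust", "cruise"]

def complete_stem(stem: str, recently_seen: set[str]) -> str:
    """Complete a stem, preferring candidates that were studied earlier (priming)."""
    candidates = [w for w in LEXICON if w.startswith(stem)]
    primed = [w for w in candidates if w in recently_seen]
    return (primed or candidates)[0]   # a primed word wins if there is one

studied_yesterday = {"crust"}          # the amnesiac has no conscious memory of this list
print(complete_stem("cru", studied_yesterday))  # 'crust': the studied word pops out
print(complete_stem("cru", set()))              # 'crumb': without priming, the first candidate wins
```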
         All of these memory tasks that are retained in amnesiacs (priming, skill and
      habit learning, classical conditioning, as well as some others I didn’t discuss) fall
      into a category called nondeclarative, or implicit, memory: they are forms of
      memory that do not involve conscious retrieval. These memories are not re-
      called, but rather are manifest as a specific change in behavior. Nondeclarative
      memory is not what we usually think of when we talk about memory in casual
      conversation—it is not memory for facts and events—such as what you had for
      breakfast yesterday morning or the name of the British prime minister. None-
      theless, nondeclarative memory is central to our experience.
      O B S E R VAT I O N S O F H U M A N   amnesiacs clearly suggest that storage of new de-
      clarative memories requires an intact hippocampal system (the hippocampus
      proper and some adjacent cortical structures). This brings up a central ques-
      tion: are particular memories stored in specific locations in the brain or are they
      stored in a distributed fashion, spread over many brain regions? One early indi-
      cation of the answer came from the work of the Montreal Neurological Insti-
      tute neurosurgeon Wilder Penfield, who, starting in the 1930s, stimulated the
      brains of patients undergoing surgery for epilepsy.
         This was not just an academic exercise. It allowed him to map more care-
      fully than previously the location of the area that triggered the seizure and
      thereby minimize damage to nearby parts of the brain. Because brain tissue has
      no pain-sensing system itself, neurosurgery can be performed on conscious
      people while they are under a local anesthetic to block pain from their scalp and
[Figure 5.3 diagram: cortical stimulation sites labeled Auditory and Visual,
marking the two types of memory-like experience evoked.]
figure 5.3. Reminiscence evoked by brain stimulation during neurosurgery. The Ca-
            nadian neurosurgeon Wilder Penfield inserted electrodes to stimulate the
            cortical surface of awake patients in the course of neurosurgery. This fig-
            ure shows two types of memory-like experiences evoked by stimulation in
            various regions. Joan M. K. Tycko, illustrator.
skull. Penfield’s stimulation was restricted to the cortical surface and was per-
formed on over 1,000 surgical patients (Figure 5.3). In a small fraction of cases,
stimulation of the cortical surface would evoke a coherent perception: a snatch
of music, a human voice, a vision of a pet or loved one. Were these electrical
      stimuli evoking memories? Well, yes and no. In some cases it does seem that
      particular, real past events were recalled, at least in fragmentary form. More of-
      ten, however, the stimulation evoked sensations that were dreamlike, with typi-
      cal elements of fantasy and violations of physical laws. Often, the area that
      evoked a “memory” was itself the epileptic focus. In these cases, destruction of
      that cortical tissue did not obliterate the memory of that particular stored expe-
      rience. So, the Penfield experiments, while titillating, did not directly address
      the question of memory localization.
        If formation of new nondeclarative memories can proceed when the hippo-
      campal system is destroyed, then where are the critical locations for these forms
      of memory? Some information about this can be derived from studies of hu-
      mans with damage to other brain regions. Damage to the amygdala, for exam-
      ple, seems to be associated with memory storage for classical conditioning of
      emotional responses, particularly fear conditioning. Damage to the cerebel-
      lum has similar effects for classical conditioning of emotionally neutral stimuli
      (see Figure 5.4).
        So, is memory storage localized or not? The answer is not so simple. It’s also
      a bit different for nondeclarative versus declarative memory. Nondeclarative
      memory is not consciously recollected. Rather, it is evoked by a specific stimu-
      lus or set of stimuli and is manifested as a change in behavior. As a result,
      nondeclarative memories can often be localized, not just to a brain region,
      but to a certain subregion or even class of neuron. Declarative memory is a dif-
      ferent story. Such memories are consciously recollected. They are useful in large
      part because we can access them using stimuli that are very different from the
      ones that created them initially. For example, when you read “Imagine your
      mother’s face,” the sensation of reading that line is nothing like the stimuli
      that laid down the memory of your mother’s face, yet you can probably re-
[Figure 5.4 diagram: HUMAN MEMORY divides into Declarative (explicit) memory,
comprising facts and events and supported by the medial temporal lobe, and
Nondeclarative (implicit) memory, comprising procedural skills and habits
(striatum), priming (neocortex), simple classical conditioning of emotional
responses (amygdala) and of the skeletal musculature (cerebellum), and
nonassociative learning (reflex pathways).]
figure 5.4. A taxonomy of human memory, now elaborated to show some crucial
            brain regions involved in different tasks. Here, the medial temporal lobe
            means the hippocampus and some associated regions of the cerebral cor-
            tex. It should be cautioned that, in real-world behavior, most experiences
            will simultaneously lay down memory traces of several types. For example,
            if you take lessons to improve your tennis game, you will probably recall
            particular things that may have happened during the lesson (declarative
            memory for events), but what you are really trying to achieve is an uncon-
            scious improvement of your motor performance as you play (non-
            declarative skill memory). Adapted with the permission of Elsevier from
            B. Milner, L. R. Squire, and E. R. Kandel, Cognitive neuroscience and the
            study of memory, Neuron 20:445–468 (1998). Joan M. K. Tycko, illustrator.
call her face with ease after reading that line. This imposes an important con-
straint on declarative memories: nondeclarative memories can simply be ac-
cessed in a subconscious fashion through specific stimuli, but declarative
memories must be embedded in a much richer informational system, which
      makes it less likely that they will be localized to the same degree as
      nondeclarative memories.
      A LT H O U G H D A M A G E TO   the hippocampal system produces anterograde amne-
      sia, an impairment in the storage of new memories for facts and events, it does
      not erase a lifetime’s worth of declarative memories. Rather, there is typically a
      “hole in declarative memory,” or retrograde amnesia, which stretches back 1 or
      2 years before the infliction of hippocampal damage. Thus H.M. and others
      like him have lost a part of their past forever, but older memories have been
      spared. The explanation for this seems to be that declarative memories are ini-
      tially stored in the hippocampus and some adjacent regions, but, gradually,
      over months to years, the storage site changes to other locations in the cerebral
      cortex. The dominant theory today is that the final locations for declarative
      memories are distributed in the cortex, not in a random fashion, but rather in
      those parts of the cortex initially involved in their perception. In this fashion,
      memories for sounds are stored in the auditory cortex (indeed, memories for
      words appear to be stored in a particular subregion of the auditory cortex),
      memories for scenes in the visual cortex, and so on. What this means for any
      real experience, involving multiple senses, is that your memory for, say, your
      first trip to the beach is stored in a number of cortical locations, each corre-
      sponding to a particular sensory modality or submodality. There does not ap-
      pear to be a single dedicated site for the permanent storage of declarative mem-
      ories. This underlies, at least in part, the observation that memory is not a
      unitary system. This may be why your Aunt Matilda can remember every word
      to every song Elvis Presley recorded, but can’t keep track of your birthday.
      M E M O R I E S M AY B E   classified not only by type but also by duration. There is
      evidence for separate neural processes underlying at least three stages of mem-
ory. The first and most transient of these is known as working memory. Anyone
who grew up with an annoying sibling knows certain aspects of working mem-
ory well: You’ve just read a phone number from your address book and you’re
repeating it to yourself, trying to keep it in your memory long enough to dial
the phone while your sib is trying equally hard to interfere by shouting random
numbers in your ear. Working memory is a temporary “scratchpad” for holding
information just long enough to complete a task (to dial a phone number or to
remember the first part of a heard sentence long enough to match it with the
ending). You can hold information in this scratchpad for a somewhat longer
time through rehearsal or by employing mental imagery, but otherwise it will
quickly fade away. Working memory is a form of declarative memory that is
crucial to understanding lengthy experiences as they unfold over time. It is the
glue that holds our perceptual and cognitive lives together.
  Working memory is preserved in hippocampal amnesiacs. Although we don’t
have a complete understanding of its neural basis, there is now a generally ac-
cepted model that holds that working memory requires the ongoing firing of
particular sets of neurons. This has been tested in monkeys by using a working-
memory task called delayed matching to sample (Figure 5.5). In this task, a col-
ored light briefly flashes, and then after a delay of a few seconds the monkey
must correctly choose the previous color from a display of two or more to get a
food reward. Investigators found that some neurons in the higher regions of
the visual “what” pathway (in an area called TE) fired continually during the
working-memory interval. These higher visual areas send a lot of axons to the
prefrontal cortex, and neurons in this area also fire in this fashion. Similar ac-
tivity can also be recorded with scalp electrodes in the prefrontal cortex of hu-
man subjects performing working-memory tasks. So, a current model is that
there are separate working-memory systems for different areas in the brain,
each located at some point in the appropriate region of cortex (auditory, visual,
       [Figure 5.5 diagram: trial timeline showing the Sample, a working-memory
       (delay) period, and the Match, above a plot of spikes against time in
       seconds.]
      figure 5.5. Persistent neuronal activity as a substrate of working memory. In this de-
                  layed matching to sample task, a monkey sees a given color and then, after
                  a delay of a few seconds, must pick this color from two choices. The lower
                  panel shows a recording made from a neuron in the higher visual region
                  called area TE, illustrating that the firing rate of this neuron was elevated
                  when the sample was presented and remained so through the 15-second-
                  long working-memory interval. Adapted from L. R. Squire and E. R.
                  Kandel, Memory: From Mind to Molecules (Scientific American Library,
                  New York, 1999); © 1999 by Scientific American Library; used by per-
                  mission of Henry Holt and Company, LLC. Joan M. K. Tycko, illustrator.
and so on). These regions all seem to project to the prefrontal cortex, which, at
least to some degree, integrates working memory across sensory modalities.
This model is further supported by the findings that damage to the prefrontal
cortex in both humans and monkeys results in impairment on working-mem-
ory tasks.
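  The persistent-firing model can be cartooned as a unit that is driven briefly by the sample
and then keeps itself active through recurrent self-excitation until the match is made. The
sketch below is an illustration of that idea only; the gain and time steps are arbitrary, not
fitted to any recording.

```python
# Cartoon of the persistent-activity model of working memory: a unit is driven
# briefly by the sample, then keeps itself active through recurrent
# self-excitation during the delay. Parameter values are arbitrary.

def working_memory_trace(sample_steps: int = 2, delay_steps: int = 10,
                         recurrent_gain: float = 0.95) -> list[float]:
    """Firing rate (arbitrary units) during sample presentation and the delay."""
    rate, trace = 0.0, []
    for step in range(sample_steps + delay_steps):
        sensory_input = 1.0 if step < sample_steps else 0.0   # sample on, then off
        rate = sensory_input + recurrent_gain * rate           # self-excitation sustains firing
        trace.append(round(rate, 2))
    return trace

print(working_memory_trace())
# The rate jumps while the sample is on and then decays only slowly through the
# delay: elevated firing "holds" the information until the match.
```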
  More subtly, if the prefrontal cortex of monkeys is artificially electrically
stimulated during the working-memory interval, performance can be disrupted.
A similar effect can be produced by injecting drugs into the prefrontal cortex
that either block or overactivate receptors for the modulatory neurotransmit-
ter dopamine. Dopamine functions to tune the amount of spike firing in the
prefrontal cortex that is triggered by information flowing from other cortical
regions such as the auditory and visual systems. This may explain why schizo-
phrenics and patients with Parkinson’s disease, people whose ailments are asso-
ciated with defects in dopamine signaling, perform poorly on tests of working
memory.
I F YOU QUE RY   middle-aged people on general knowledge (news, popular cul-
ture), you typically find that they have better recollection of recent events than
more distant past ones. This predictable result is called the forgetting curve.
Yet distant memories that do survive normal forgetting are unusually resistant
to disruption. As the record of a particular experience moves from working
memory through short-term memory and into long-term memory, the mem-
ory trace, or engram (those changes in the brain that encode memory), gradu-
ally changes from being fragile and easily disrupted to being more stable. This
transformation process takes time and has been given the name consolidation.
The evidence for this comes from both human and animal studies. If you repeat
the experiments mentioned above in which you query general cultural knowl-
edge among people who have received bilateral electroconvulsive shock treat-
      [Figure 5.6 graphic: score on memory test. Left panel (Humans): depressed controls versus depressed with ECT, plotted against months before the memory test. Right panel (Rats): controls versus ECT, plotted against days before the memory test.]
      figure 5.6. The persistence of old memories and the fragility of new memories. Both
                  humans and rats were tested for their recollection of past events. Controls
                  showed some degree of forgetting of information in the more distant past.
                  Both humans and rats receiving ECT had severely impaired memory for
                  events occurring immediately before ECT, but had normal rates of forget-
                  ting for information in the more distant past. Note the different time
            scales for the human and rat data. Joan M. K. Tycko, illustrator.
      ment (ECT) to relieve drug-resistant depression, you will find that superim-
      posed upon the forgetting curve is an additional disruption of memory that is
      strongest for events that occurred immediately before the treatments and that
      gradually trails off as you query events further in the past (Figure 5.6). Of
      course, in this type of experiment, it is important that the control group be oth-
      ers with severe depression and not just the general population.
                         Similar studies can also be done using laboratory animals such as rats. Obvi-
      ously, in this case, you can’t quiz them about general knowledge, so instead you
      train them in a particular task (such as navigating a maze to get food) and then
      wait for various intervals before giving ECT. When you test them for their
      memory of the maze task the next day, you will find a result that parallels that
seen in humans: recent memory is easily disrupted by ECT, but memory in
the more distant past is sufficiently consolidated to withstand this treatment
(Figure 5.6). This basic strategy can be applied to many types of experimen-
tal amnesia, including those caused by the administration of various drugs.
One class of drug that has been particularly well studied in terms of disrupting
memory consolidation is protein synthesis inhibitors. These are compounds
that interfere with any one of several biochemical steps by which genes ulti-
mately direct the synthesis of new proteins. Thus a popular hypothesis is that
synthesis of new proteins is one important step in the consolidation of short-
term into long-term memory, thereby rendering the memory less vulnerable to
erasure. Although these examples use tests of declarative memory, several lines
of evidence indicate that nondeclarative memories also undergo consolidation
and that this consolidation requires new protein synthesis.
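The logic of Figure 5.6 can be captured in two lines of arithmetic: overall retention falls gradually with the age of a memory (the forgetting curve), while vulnerability to a disruption such as ECT depends on how far consolidation has gotten. The sketch below is only meant to make that logic concrete; the rates and time constants are invented.

```python
# An illustrative sketch of the two curves in Figure 5.6, with made-up
# parameters: normal forgetting declines gradually with the age of a memory,
# while vulnerability to disruption (e.g., by ECT) is high only for memories
# that have not yet consolidated.
import math

def retention(age_days, forgetting_rate=0.01):
    """Fraction of a memory retained under normal forgetting."""
    return math.exp(-forgetting_rate * age_days)

def retention_after_ect(age_days, consolidation_days=10.0):
    """The same memory if ECT is given now: recent, unconsolidated memories
    are largely erased; older, consolidated ones are mostly spared."""
    consolidated = 1.0 - math.exp(-age_days / consolidation_days)
    return retention(age_days) * consolidated

for age in (1, 5, 20, 60):
    print(f"{age:>3} days old: normal {retention(age):.2f}, "
          f"after ECT {retention_after_ect(age):.2f}")
```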
ON THE EVENING     of October 6, 1996, I was watching TV. I know this, because
so were about 46 million others in the United States: It was the night of the first
Presidential debate between incumbent Bill Clinton and challenger Bob Dole.
When questions were posed to Clinton, he had a habit of pausing for about 3
seconds with his eyes rolled back in his head and then launching into a carefully
constructed and detailed answer. However you might have felt about his poli-
cies, you had to admire his command of information. After several questions
like this, with the characteristic 3-second pause, my wife said, “Look, he’s re-
winding the tape!” We laughed because it really did seem as if something me-
chanical was happening in the President’s brain that night.
  Although we might imagine that our memories for facts and events are stored
on a tape we can rewind or a set of photographs we can browse in an album, this
does not seem to be true. As discussed previously, one of the biggest challenges
of declarative memory (memory for facts and events) is to store information so
      that it can be retrieved by diverse stimuli. The key point here is that retrieval of
      memory is an active process. It is not like browsing through an album of photo-
      graphs, even an album of fading photographs. Rather, it is a bit like searching
      the Internet with Google. A question such as “Who was with us on that day trip
      to the beach last summer?” provides a few search terms that will yield a large
      number of memory fragments associated with key terms such as “beach” and
      “last summer.” But the question “Who was with us on that day trip to the beach
      last summer—you know—when we got caught in the thunderstorm and then
      you threw up in the car on the way home?”—with its greater number of search
      terms—not only makes it more likely that you will recall the memory of those
      events, but also makes it more likely that you will recall more aspects of the
      events. Of course, unlike a Google search, retrieval of declarative memory is not
      fundamentally text based.
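To make the search analogy concrete, here is a toy retrieval scheme in Python: each stored event carries a set of feature tags, and a query is scored by how many of its cues each event shares. The events, names, and tags are invented for the example; real memory retrieval is of course not literally a set intersection.

```python
# A sketch of cue-driven retrieval: more cues both raise the chance of finding
# the right memory and separate it from similar ones. All entries are invented.

memories = {
    "beach trip with Ana": {"beach", "last summer", "thunderstorm", "car sick"},
    "beach trip with Raj": {"beach", "last summer", "volleyball"},
    "lake weekend":        {"lake", "last summer", "thunderstorm"},
}

def retrieve(cues, store):
    """Return stored memories ranked by the number of overlapping cues."""
    scored = [(len(cues & tags), name) for name, tags in store.items()]
    return sorted((s for s in scored if s[0] > 0), reverse=True)

print(retrieve({"beach", "last summer"}, memories))           # two candidates tie at the top
print(retrieve({"beach", "last summer", "thunderstorm", "car sick"}, memories))  # one clear winner
```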
        Memory retrieval is an active and dynamic process. But this dynamic recol-
      lection and rewriting of memory is a two-edged sword. In some ways it’s very
      useful for subsequent experience and recollection to modify memory traces of
      events in the past, but this can also lead to errors. Memories of recurring com-
      monplace events are often rendered generic. This is something we all know
      from our own personal experience. As a child growing up in Santa Monica,
      California, I probably ate dinner with my father at Zucky’s Delicatessen hun-
      dreds of times. Although I have many memory fragments associated with those
       times—the smell of matzo-ball soup, my father’s secret-agent-like insistence
      upon sitting with a view of the door, the weird mechanical sound of the ciga-
      rette machine, the unnatural colors of the glossy marzipan fruits in the bakery
      case—these are mostly not related to specific incidents. I can’t really say that I
      remember a particular meal that I ate any night when I was 12 years old. I can,
      however, remember almost everything about the particular night at Zucky’s in
      1974 when my father told me that he would be having triple bypass heart sur-
gery. It scared the hell out of me and, as a consequence, that particular meal is
etched into my memory forever.
  Everyone knows how emotion-laden events can be written into long-term
memory with unusual strength. One would be tempted to think that this could
be entirely explained by activation of emotional systems at the time of the
event. Indeed, that’s part of the story, but it’s not the whole story. It is now
clear that consolidation of long-term memory is also reinforced by subsequent
conversation—when you repeatedly tell the story of where you were on 9/11,
this repetitive narration reinforces consolidation. Furthermore, the emotions
in both you and your listeners that are evoked by the retelling will subtly influ-
ence the memory trace itself—the event and the retelling will begin to blend in
your mind.
  This dynamic reconsolidation of memory is all well and good in some ways:
memory of commonplace events is probably of more use to us when rendered
generic by the passage of time and subsequent experience. This also has the
useful result that emotionally important events stand out more clearly in our
memories. But this dynamic process renders our memories particularly subject
to certain forms of error above and beyond the slow, gradual fading of long-
term memory over time. In his splendid book The Seven Sins of Memory, the
Harvard University psychologist Daniel Schacter speaks of three of these “sins
of commission” in declarative memory retrieval: misattribution, suggestibility,
and bias.
  Misattribution is a very common form of error in which some aspects of a
memory are correct but others are not. It can happen in many domains. Take,
for example, source misattribution: I may correctly recall a joke I heard which
begins “Ted Kennedy walks into a bar . . .” but I will swear I heard it from my
sister-in-law when I really heard Jay Leno tell it on TV. Sometimes misattrib-
ution can cause you to think that you’ve created something original, whereas re-
      ally you had heard it from another source and attributed it to your own internal
      processes. I went around for over 30 years humming a snippet of tune I thought
      I had composed myself, only to hear it years later when I bought my children a
      DVD containing Bugs Bunny cartoons from the 1940s.
        Misattribution is at the heart of one of the most famous cases in music copy-
      right law, which involved the 1970 number-one pop hit by George Harrison
      called “My Sweet Lord.” Although the lyrics and instrumentation differ, the
      tune of “My Sweet Lord” very strongly resembles that of a previous number-
      one hit recorded by the Chiffons in 1963 called “He’s So Fine.” The judge in
      the case ruled that though Harrison had no intent to plagiarize, he had almost
      certainly misattributed his memory of the tune of “He’s So Fine” (which Harri-
      son admitted he had heard before), thereby imagining that he had composed it
      de novo. The company which held the copyright to “He’s So Fine” was ulti-
      mately awarded millions of dollars in damages from Harrison.
        These examples are forms of source misattribution. A variant of this is mis-
      attribution of time or place. A common experimental design is to give subjects
      a list of words to study. When they return the next day they are given a new list
      of words and are asked to indicate which ones they had seen the day before.
      People in these experiments will often misattribute new words to the previous
      list. Their propensity for doing this can be manipulated by experimental con-
      text. For example, if a word appearing on the new list for the first time is more
      familiar to the subject or is thematically related to several words on the first list,
      it is more likely to be misattributed. If the first list contained “needle,” “sew-
      ing,” “pins,” and “stitch,” then the chance of misattributing the word “thread”
      to the first list will be high. It may be that we have an evaluative system in our
      brains that says “If I recognize this word rapidly then it’s likely that I’ve seen it
      before,” and this is the basis of some forms of misattribution.
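The "rapid recognition" idea at the end of that paragraph can be sketched as a simple familiarity rule: call a test word "old" if it is strongly related to enough of the studied words. The word lists, relatedness map, and threshold below are all invented for illustration.

```python
# A sketch of familiarity-based misattribution in a word-list experiment:
# a related lure ("thread") overlaps the studied list enough to be judged old.

studied = {"needle", "sewing", "pins", "stitch", "thimble"}

related = {            # crude, hand-made relatedness map for the example
    "thread":  {"needle", "sewing", "pins", "stitch"},
    "button":  {"sewing", "thimble"},
    "granite": set(),
}

def judged_old(test_word, threshold=3):
    """Call a word 'old' if it overlaps the studied list strongly enough."""
    if test_word in studied:
        return True                       # genuinely old
    return len(related.get(test_word, set()) & studied) >= threshold

for word in ("needle", "thread", "button", "granite"):
    print(word, "->", "old" if judged_old(word) else "new")
```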
        Suggestibility and bias are additional forms of memory error in which the act
of recollection involves the incorporation of misleading information. Suggest-
ibility is the term used when this information comes from external sources
(other people, films, books, media), while bias is warping one’s recollection to
fit present circumstances: “I always knew that the Red Sox could win the World
Series.” It turns out to be surprisingly easy to alter people’s recollections. For ex-
ample, a number of studies have sought to simulate police line-ups: a group of
experimental subjects will watch a video of a (simulated) convenience store
robbery and will then see a line-up of six suspects, none of whom was the rob-
ber in the video. When subjects are presented with the suspects one by one and
asked to make a yes-or-no decision, almost all will correctly respond “no” to all
six. But if the six are presented all at once and the subjects are asked, “Is any of
these the robber?” then about 40 percent of people will pick a suspect (usually
the one who resembles the perpetrator most closely). If the subject is told by the
experimenters in advance that several others have already identified suspect X
and they need them to confirm or deny, then about 70 percent of people can be
manipulated into false recollection. These results not only highlight the sug-
gestibility of memory recall but also have obvious implications for police proce-
dures and our legal system.
  The problem of suggestibility is even greater in children, particularly pre-
school-age children. In a typical study, a group of preschoolers had a bald man
visit their room, read a story, play briefly, and leave. The next day, these chil-
dren were asked nonleading questions such as “What happened when the visi-
tor came?” and they related a series of memories that, though not complete,
were quite accurate. But when leading questions were used, such as “What
color hair did the man have?” then a large number of children made up a color.
Even those children who initially responded that the man had no hair would
typically, after having the question repeated several times in different sessions,
begin to confabulate and even extend the false recollection—“He had red hair.
      And a mustache too!” Initially, such studies were done using rather innocuous
      questions like the one above. The consensus at that time was that though chil-
      dren were suggestible about trivial details, they could not easily be made to con-
      fabulate entire events, particularly events that would be emotionally traumatic.
         A series of high-profile accusations of child abuse in the 1980s prompted
      several teams of researchers to reexamine this point. What they found was star-
      tling. Both preschoolers and, to a lesser extent, elementary school children could
      easily be made to completely manufacture allegations of abusive behavior (such
      as yelling, hitting, or taking off their clothes) against an adult in a laboratory
      setting. All it took was some social incentives: leading questions, reinforcement
      of particular answers, and a lot of repetition. These are exactly the techniques
      that were used by many therapists and police officers in developing evidence to
      accuse preschool teachers in the 1980s. Most (but not all) of these cases were ul-
      timately dropped or overturned on appeal. Let’s be clear about what this means:
      Abuse of children happens and spontaneous reports of abuse volunteered by
      children are often true and warrant careful examination. But extreme care must
      be taken in questioning children in cases where abuse is suspected. It is extraor-
      dinarily easy for caring professionals with the best intentions to distort a child’s
      recollection or even implant memories that are completely false. The neural ba-
      sis for the increased suggestibility of small children is unknown but is likely to
      reflect the fact that brain regions required for retaining memory of events and
      evaluating confidence in the accuracy of one’s own recollections, particularly
      the frontal lobes, are still undergoing rapid growth and reorganization in the
      preschool years and slower growth from age 5 to age 20.
       WHAT CHANGES OCCUR   in brain tissue to store long-term memory? Let’s begin
      our consideration of this key question by stepping back a bit and playing engi-
      neer. In building neural memory storage there are a lot of difficult design goals
we’ll have to meet. First, the capacity for memory storage must be large. Even
though we forget things, we still have to store a huge amount of information
over many years and do so with reasonable fidelity. Second, memory must be
durable. Some memories will last for an entire lifetime. Third, memories must
be stored in a way such that they are retrieved readily, but not too readily.
For declarative memories, this means that they must be recollected by using
fragmentary cues that can be very different from those which laid them down
(“Imagine your mother’s face”). Nondeclarative memories are optimally trig-
gered by an appropriate range of stimuli—if you’ve been trained to blink to a
400 hertz tone, then you probably also would want to blink to a 410 hertz tone
but not a 10,000 hertz tone. Fourth, memories must be malleable, based upon
subsequent experience in order to place them in a useful context and absorb
them into the totality of the conscious self. All in all, this is a rather tall order.
Memory must be accurate, but it must also be useful in supporting generaliza-
tion. It must be permanent, but also subject to modification by subsequent ex-
perience. Given these competing requirements, it is not surprising that our
memories for facts and events are often subject to misattribution, suggestibility,
and bias.
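The eyeblink example above amounts to a generalization gradient: a learned response that is strong at the trained stimulus and falls off with distance from it. A toy version, with an assumed Gaussian shape and an assumed tuning width, looks like this:

```python
# A sketch of stimulus generalization for the eyeblink example. The Gaussian
# shape and its width are assumptions made for illustration, not data.
import math

def blink_strength(tone_hz, trained_hz=400.0, tuning_width_hz=60.0):
    """Relative strength of the conditioned blink (1.0 at the trained tone)."""
    return math.exp(-((tone_hz - trained_hz) ** 2) / (2 * tuning_width_hz ** 2))

for tone in (400, 410, 600, 10_000):
    print(f"{tone:>6} Hz tone -> blink strength {blink_strength(tone):.3f}")
```

The result is a response that is nearly full strength at 410 hertz and essentially absent at 10,000 hertz, which is the behavior the paragraph describes.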
  On a smaller scale, what we need to build are systems through which particu-
lar patterns of experience-driven neuronal activity will create enduring changes
in the brain. What are the general classes of change that could be used to store
memories? We know that the fundamental unit of neuronal information is the
spike. The probability of spike firing is driven by the integrated activity of
many of the excitatory and inhibitory synapses, which add together to produce
changes in the voltage across the membrane at the axon hillock, where the spike
originates. So if a particular pattern of neuronal activity results in a lasting
modification of, say, voltage-sensitive sodium channels located at the axon hill-
ock, such that the threshold for firing a spike was moved closer to the resting
      [Figure 5.7 graphic: left panel, synaptic memory storage (a local change in synaptic strength at one set of synapses on the dendrites); right panel, intrinsic memory storage (a local change in voltage-sensitive ion channels at the axon hillock).]
      figure 5.7. Synaptic versus intrinsic modulation in memory storage. Long-term
                  modulation of synaptic strength (left) results in changes in throughput
                  that are confined to the activated synapses (shaded area). Changes in in-
                  trinsic excitability through modification of voltage-sensitive channels in
                  the axon hillock (right) will change throughput from synapses received
                  throughout the dendritic arbor (shaded area). As a consequence, intrinsic
                  changes have the advantage of producing useful generalization but the
                  disadvantage of having a much smaller capacity to store memory. Adapted
                  from W. Zhang and D. J. Linden, The other side of the engram: experi-
                  ence-driven changes in neuronal intrinsic excitability, Nature Reviews
                  Neuroscience 4:885–900 (2003). Joan M. K. Tycko, illustrator.
potential, then this could produce a lasting change in the firing properties of
that neuron, thereby contributing to an engram. This is only one of many pos-
sible changes that would affect neuronal spiking. For example, modifying the
voltage-sensitive potassium channels that underlie the downstroke of the spike
could change their average time to open. This would result in alterations to the
rate and number of spikes fired in response to synaptic drive. Indeed, changes
in voltage-sensitive ion channels can persistently alter the intrinsic excitabil-
ity of neurons and, in animal experiments, these changes can be triggered by
learning.
  Although changes in intrinsic excitability are likely to contribute to some as-
pects of memory storage, it’s unlikely that they are the whole story. Compu-
tationally, this mode of memory storage doesn’t make the most efficient use of
the brain’s resources. Recall that there are about 5,000 synapses received by the
average neuron. When you change ion channels underlying spike firing, you
are changing the probability of firing a spike in response to synaptic input for
all 5,000 of those synapses at the same time. One can imagine that this general-
izing property might be useful for certain aspects of memory storage, but an
engram solely built upon modifying neuronal intrinsic excitability would, by
its nature, have a much smaller capacity than one that allowed individual syn-
apses to change.
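The capacity argument is simple arithmetic. Using the chapter's figure of about 5,000 synapses per neuron, and the commonly cited (and here assumed) figure of roughly 100 billion neurons, per-synapse storage offers thousands of times more adjustable values than per-neuron intrinsic storage:

```python
# Back-of-the-envelope comparison of the two storage schemes in Figure 5.7.
# The ~100 billion neuron count is an assumption; the ~5,000 synapses per
# neuron figure comes from the chapter.

neurons = 100e9                  # assumed neuron count
synapses_per_neuron = 5_000      # from the text

intrinsic_parameters = neurons                        # one excitability setting per cell
synaptic_parameters = neurons * synapses_per_neuron   # one strength per synapse

print(f"intrinsic storage: ~{intrinsic_parameters:.0e} adjustable values")
print(f"synaptic storage:  ~{synaptic_parameters:.0e} adjustable values "
      f"({synaptic_parameters / intrinsic_parameters:.0f}x more)")
```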
  Experience-dependent modification of synaptic function is a general mecha-
nism that is thought by most brain researchers to underlie a large part of mem-
ory storage. There are many steps in synaptic transmission and several of these
are subject to long-term modulation. As a sort of shorthand, people speak of
“synaptic strength” as a parameter that can be changed. If, as a test, you stimu-
late 10 excitatory axons to fire spikes and they all converge on the same post-
synaptic neuron and you then measure the resultant deflection in membrane
voltage (the EPSP), you might find that this produces a depolarization of 5 mil-
      livolts. If, after a certain period of conditioning stimulation (a particular pat-
      tern of activation designed to mimic the results of sensory experience), this
      same test stimulation produced a depolarization of only 3 millivolts, this
      would be called synaptic depression. An increase in the response to 10 milli-
      volts would be called synaptic potentiation. If these changes were long-lasting
      in nature, they could contribute to the storage of memory. Because there are
      about 500 trillion synapses in your brain, this mechanism, experience-driven
      persistent changes in synaptic strength, has a very high capacity for information
      storage.
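The bookkeeping in that test can be written as a tiny function: measure the EPSP evoked by the same test stimulus before and after conditioning, and label the change. The millivolt values are the ones from the example above; the 5 percent tolerance is an arbitrary choice of mine.

```python
# Classify a lasting change in synaptic strength from two EPSP measurements
# made with the identical test stimulus, before and after conditioning.

def classify_change(baseline_mv, after_mv, tolerance=0.05):
    """Label the change in the test response."""
    ratio = after_mv / baseline_mv
    if ratio > 1 + tolerance:
        return "potentiation"
    if ratio < 1 - tolerance:
        return "depression"
    return "no lasting change"

print(classify_change(5.0, 3.0))     # depression   (5 mV -> 3 mV)
print(classify_change(5.0, 10.0))    # potentiation (5 mV -> 10 mV)
```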
        There are two general ways to modify the strength of existing synapses. On
      the presynaptic side, you could potentiate or depress the amount (or proba-
      bility) of neurotransmitter release following arrival of an action potential. Or,
      on the postsynaptic side, you could potentiate or depress the electrical effect
      produced by a constant amount of released neurotransmitter. In molecular
      terms, each of these forms of modification can come about in several different
      ways. For example, if one modifies voltage-sensitive calcium channels in the
      presynaptic terminal so that they pass fewer calcium ions into the cell when an
      action potential invades, this will depress neurotransmitter release. A similar ef-
      fect may be produced by modifying the proteins that control the fusion of neu-
      rotransmitter-laden synaptic vesicles with the presynaptic membrane. In this
      case, for a constant spike-evoked presynaptic calcium signal the probability of a
      vesicle being released would become lower. On the postsynaptic side, you can
      depress the effect of released transmitter by reducing the number of neuro-
      transmitter receptors in the postsynaptic membrane. Alternatively, a similar re-
      sult could be achieved by keeping the number of receptors constant but modi-
      fying them so that they pass fewer positively charged ions when they open. The
      point here is that almost every function on both sides of the synapse is subject
      to modulation and is therefore a candidate for a memory mechanism. In prac-
[Figure 5.8 graphic: four images of the same dendritic segment, labeled by postnatal day (PND), with scale bar.]
figure 5.8. Changes and stability in the fine structure of dendrites in adult cerebral
            cortex. These images of a segment of neuronal dendrite from living
            mouse visual cortex were taken every day from day 115 to day 118
            (PND = postnatal day). The filled arrowhead shows one of several stable
            dendritic spines, while the open arrowhead shows a transient one. This
            mouse was genetically engineered to express a fluorescent protein in some
            of its cortical neurons. Reproduced with the permission of Elsevier from
            A. J. Holtmaat, J. T. Trachtenberg, L. Wilbrecht, G. M. Shepherd, X.
            Zhang, G. W. Knott, and K. Svoboda, Transient and persistent dendritic
            spines in the neocortex in vivo, Neuron 45:275–291 (2005).
tice, these different molecular mechanisms are not mutually exclusive, and in
most synapses several can be working at the same time.
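A standard shorthand for these pre- and postsynaptic knobs, not specific to this book, is to treat average synaptic strength as the number of release sites times the release probability times the postsynaptic response to a single vesicle. The numbers below are invented, but they show how a presynaptic change and a postsynaptic change can produce exactly the same measured depression:

```python
# A standard simplification (my framing, not the chapter's): average synaptic
# response ~ release sites x release probability x quantal size. Pre- and
# postsynaptic modifications change different factors. Numbers are invented.

def mean_epsp_mv(release_sites, release_prob, quantal_size_mv):
    """Average response to a test stimulus, in millivolts."""
    return release_sites * release_prob * quantal_size_mv

baseline = mean_epsp_mv(10, 0.5, 1.0)               # 5.0 mV
presynaptic_change = mean_epsp_mv(10, 0.3, 1.0)     # lower release probability -> 3.0 mV
postsynaptic_change = mean_epsp_mv(10, 0.5, 0.6)    # weaker receptor response  -> 3.0 mV

# Both routes produce the same measured depression of the test response.
print(baseline, presynaptic_change, postsynaptic_change)
```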
  Modifying synaptic function is not the only way to create long-term memo-
ries. Such memories may also be encoded through changes in synaptic struc-
ture. Although the overall wiring plan of the brain is largely fixed in adult
brains, the same cannot be said of individual axons, dendrites, and synapses.
Short-term memory is likely to involve changes in the function and structure
      of existing synapses, but long-term memory can involve the creation of new
      branches of dendrites and axons. The tiny spines that cover dendrites are struc-
      tures that are particularly subject to experience-dependent rearrangement. One
      recent study by Karel Svoboda and his coworkers at Cold Spring Harbor Labo-
      ratory used a novel form of microscopy to repeatedly examine dendritic struc-
      ture in the cerebral cortex of living adult mice (Figure 5.8). They found that
      over a period of 30 days, approximately 25 percent of dendritic spines disap-
      peared or were newly formed. At a microscopic level, the synapses of the brain
      are not static. They grow, shrink, morph, die off, and are newly born, and this
      structural dynamism is likely to be central to memory storage.
      I’VE NOW PRESENTED        a theoretical overview of some cellular mechanisms by
      which memory could be stored in the brain. How do we then go about testing
      whether any of these mechanisms really operate in behaving animals? There are
      two general approaches. One is to alter the brain function (with drugs, lesions,
      genetic manipulation, electrical stimulation, and so on) and observe the resul-
      tant effects on behavior. This is an interventional strategy (in animals at least; in
      humans we usually let nature make the lesions). The other is a correlational
      strategy, where we measure physiological properties of the brain (electrical ac-
      tivity, microscopic structure, biochemistry, gene expression, and so on) to try to
      determine how they change as a result of experience.
        To get a sense of the current state of the struggle, let’s examine a form of de-
      clarative memory for which scientists have made substantial progress in illumi-
      nating the cellular and molecular substrates of the engram. After reports of the
      amnesiac patient H.M. became known in the 1950s, there was a determined ef-
      fort to reproduce his deficit, complete anterograde amnesia for facts and events,
      in an animal model (preferably an inexpensive animal like a rat). It wasn’t until
      the 1970s that this really started to pay dividends. It’s not hard to use surgical
techniques to damage the hippocampus of a rat. The challenge was to find ap-
propriate declarative memory tasks for this animal. The best ones turned out to
be tests of spatial learning.
  There are several ways to test spatial learning but the most widely adopted
have had animals learn to navigate a maze in order to escape from a situation
they find stressful. One clever maze was developed by Richard Morris and his
colleagues at the University of Edinburgh. It’s not what we normally think of
when we hear the word “maze.” This maze has no passageways. Rather it con-
sists of a circular swimming pool 1.2 meters in diameter with a wall at the edge
to prevent escape and dry milk powder added to make the water opaque. The
pool is housed in a room with prominent and unique visual landmarks on the
walls to aid in navigation. A rat (or mouse) is placed at a random location at
the edge of the water and is then allowed to explore by swimming. Eventually, it
will find that there is an escape platform, the top surface of which is just a centi-
meter or so below the opaque surface of the water. When the rat reaches the
platform, it is allowed to stand there for a moment before being gently returned
to its cage. The task is to remember where this escape platform is located so that
on subsequent trials the rat can swim to it directly and make a quick exit. Not
surprisingly, rats that have had their hippocampus surgically destroyed on both
sides of the brain cannot learn the Morris water-maze task. Even after many tri-
als, they behave as if they are experiencing the maze for the first time. This ap-
pears to be a specific deficit in spatial memory because they can easily learn to
swim rapidly to a platform marked with a flag, which indicates that they do not
merely have a problem with swimming or vision but rather have a genuine and
specific memory deficit.
  Also in the 1970s, a report on hippocampal physiology fired the imagination
of brain researchers around the world. Terje Lomo of the University of Oslo
and Tim Bliss of the National Institute of Medical Research in the United
      Kingdom reported that if they briefly stimulated a population of glutamate-us-
      ing excitatory synapses in the hippocampus of anesthetized rabbits at high fre-
      quency (100 to 400 stimuli per second for 1 or 2 seconds), this produced an
      increase in synaptic strength that could last for days. This phenomenon was
      named long-term synaptic potentiation (commonly abbreviated LTP). You can
      see why people got so excited. LTP was an experience-dependent, long-lasting
      change in neuronal function that occurred in a location in the brain already
      known to be crucial for memory. Furthermore, high-frequency bursts of the
      kind known to trigger LTP occur naturally in rats (and rabbits and monkeys).
      The hypothesis that LTP might underlie memory storage for facts and events in
      the hippocampus rapidly became one of the most exciting and controversial
      ideas in brain research.
        In the years that followed, thousands of papers were published about LTP.
      One of the most interesting things scientists learned is that, although LTP was
      initially found in the hippocampus, it is actually a phenomenon that occurs
      throughout the brain. It is found in the spinal cord and in the cerebral cortex
      and almost everywhere in between. Although it is most commonly studied at
      excitatory synapses that use glutamate as a neurotransmitter, it is present in
      other types of synapse as well. Another important finding is that there is a com-
      plementary process: a persistent use-dependent weakening of synapses called
      long-term synaptic depression, or LTD. The exact parameters for evoking LTP
      and LTD vary from synapse to synapse, but at most locations LTP is produced
      by brief, high-frequency activation (100 stimuli per second for 1 second is typi-
      cal) while LTD is produced by more sustained activation at moderate frequen-
      cies (say, 2 stimuli per second for 5 minutes). So far, all synapses that have LTP
      also have a form of LTD and vice versa.
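As a rough rule of thumb, the induction recipes described above can be written down directly, with the caveat the text itself gives: the exact parameters vary from synapse to synapse, so the cutoffs below are illustrative rather than universal.

```python
# A sketch of the induction rule described in the text: brief high-frequency
# stimulation tends to give LTP; sustained moderate-frequency stimulation
# tends to give LTD. The cutoffs are rough illustrations; real synapses vary.

def predicted_plasticity(frequency_hz, duration_s):
    if frequency_hz >= 50 and duration_s <= 2:
        return "LTP"
    if 0.5 <= frequency_hz <= 10 and duration_s >= 60:
        return "LTD"
    return "little or no lasting change (in this toy rule)"

print(predicted_plasticity(100, 1))    # typical LTP protocol: 100 Hz for 1 second
print(predicted_plasticity(2, 300))    # typical LTD protocol: 2 Hz for 5 minutes
```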
        Given that random, low-frequency spiking of neurons goes on all of the
      time, how does the synapse undergo LTP when there’s a burst of high-frequency
134   Learning, Memory, and Human Individuality
stimulation, but not in the presence of ongoing background activity? This is a
problem the brain has solved using several different molecular strategies. Here,
I’ll consider the most commonly used solution, which involves a special recep-
tor for the neurotransmitter glutamate.
  Previously, I’ve talked about glutamate receptors that rest with a closed ion
channel and then open this channel when glutamate binds, allowing the so-
dium ions to flow in and potassium ions to flow out. This type of glutamate re-
ceptor is called an AMPA-type glutamate receptor (named after a synthetic
drug that activates it strongly). These receptors cannot differentiate low-level
background activity from high-frequency bursts. They are activated by both
stimuli. The receptor that can make this differentiation (also named after a po-
tent synthetic drug) is the NMDA-type glutamate receptor (Figure 5.9). The
reason that the NMDA receptor can perform this trick is that, at the resting
potential of −70 millivolts, its ion channel is blocked by a magnesium ion
from the outside (magnesium ions float freely in the saltwater solution that
surrounds neurons). This blockade remains until the membrane potential be-
comes more positive than about −50 millivolts.
  So, neither glutamate binding alone nor depolarization of the membrane
alone will open the NMDA-type receptor’s ion channel. Background activity
will produce the former but not the latter, but bursts of high-frequency spikes
will produce both glutamate binding and depolarization and the ion chan-
nel will open. This ion channel is also unique in that it allows the influx of cal-
cium ions together with sodium ions, while most AMPA-type receptors allow
only sodium influx. This means that strong calcium influx through NMDA-
type glutamate receptors is a unique consequence of high-frequency bursts. Or,
stated another way, the NMDA receptor is a coincidence detector: it opens and
fluxes calcium ions when both glutamate is released and the postsynaptic mem-
brane is depolarized, but neither of these events alone is sufficient.
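Stripped to its logic, the NMDA receptor is an AND gate, and that is worth writing out explicitly. The −50 millivolt unblocking level comes from the text; everything else in the sketch is schematic.

```python
# The NMDA receptor as a coincidence detector, reduced to a truth table:
# the channel passes calcium only when glutamate is bound AND the membrane
# is depolarized enough to expel the magnesium block.

def nmda_calcium_flux(glutamate_bound, membrane_mv, unblock_mv=-50.0):
    """Return True if the channel opens and passes calcium ions."""
    magnesium_expelled = membrane_mv > unblock_mv
    return glutamate_bound and magnesium_expelled

print(nmda_calcium_flux(True,  -70))   # background activity: glutamate only -> False
print(nmda_calcium_flux(False, -40))   # depolarization only -> False
print(nmda_calcium_flux(True,  -40))   # high-frequency burst: both -> True
```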
      [Figure 5.9 graphic: left panel, background activity (glutamate and sodium shown; magnesium blocks the NMDA receptor); right panel, high-frequency bursts (sodium and calcium enter; magnesium expelled from the channel). AMPA and NMDA receptors are drawn on a dendritic spine of the postsynaptic neuron.]
      figure 5.9. NMDA-type glutamate receptors and AMPA-type glutamate receptors.
                  The NMDA receptors are activated by high-frequency bursts but not by
                  background activity, because the voltage-dependent blockade of the
                  NMDA receptor’s ion channel by magnesium ions (Mg2+) is only relieved
                  when the postsynaptic membrane is depolarized to a level positive to −50
                  millivolts. The AMPA receptors are activated by both background activity
            and high-frequency bursts. Adapted from L. R. Squire and E. R. Kandel,
                  Memory: From Mind to Molecules (Scientific American Library, New York,
                  1999). Joan M. K. Tycko, illustrator.
        If this process is the trigger for LTP, then drugs that block the NMDA recep-
      tor should also block LTP. This is indeed what happens in most hippocampal
      synapses. Furthermore, if one injects neurons with drugs that rapidly bind cal-
      cium ions as soon as they enter the cell, thereby preventing them from interact-
      ing with other molecules, this will also prevent LTP. Calcium ions entering
through NMDA receptors can activate lots of different calcium-sensitive
enzymes in the neuronal dendrite. Rapid, large calcium transients can acti-
vate an enzyme called calcium/calmodulin-dependent protein kinase II alpha, typically
abbreviated CaMKII. This enzyme transfers chemical phosphate groups onto
proteins to change their function. Although the substrates of CaMKII action
relevant for LTP are not known, one popular hypothesis is that this process
ultimately results in the insertion of new AMPA-type receptors into the
postsynaptic membrane, thereby strengthening the synapse. It should be men-
tioned that though this NMDA receptor → CaMKII → AMPA receptor inser-
tion cascade is the most common form of LTP, it is not the only one. There are
others that can use different biochemical steps and produce LTP through dif-
ferent means (such as increased glutamate release or increased conductance of
existing AMPA receptors).
  What about LTD? How does sustained synaptic activation at moderate fre-
quencies result in persistent synaptic weakening? Interestingly, in its most com-
mon form, LTD also uses the NMDA receptor. In this case, moderate fre-
quency stimulation results in partial relief of the magnesium ion blockade of
the NMDA receptor. This produces a calcium flux, but one that is small and
sustained rather than large and brief. Small, sustained calcium signals are insuf-
ficient to activate CaMKII and therefore don’t produce LTP. Instead, they acti-
vate an enzyme that does the opposite job: protein phosphatase 1 (PP1) re-
moves phosphate groups. Activation of PP1, not surprisingly, ultimately results
in the removal of AMPA receptors from the postsynaptic membrane, thereby
depressing synaptic strength in a way that is the functional opposite of LTP.
This LTD cascade involving NMDA receptor → PP1 → AMPA receptor inter-
nalization is a dominant form of LTD in the hippocampus, but it is only one of
several mechanisms for producing persistent depression of synaptic strength.
      Thus both LTP and LTD can be produced in several different ways. In reality,
      some individual synapses are able to express multiple forms of both LTP and
      LTD.
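The calcium "readout" rule that distinguishes LTP from LTD can be summarized in a few lines of code. The direction of the rule (large, brief calcium favors CaMKII and LTP; small, sustained calcium favors PP1 and LTD) comes from the text; the numerical thresholds are placeholders I have made up.

```python
# A sketch of the calcium readout described above. The numeric thresholds are
# placeholders, not measured values.

def plasticity_from_calcium(peak_calcium_um, duration_s):
    if peak_calcium_um >= 5.0 and duration_s <= 2.0:
        return "CaMKII activated -> AMPA receptors inserted -> LTP"
    if 0.2 <= peak_calcium_um < 5.0 and duration_s >= 60.0:
        return "PP1 activated -> AMPA receptors removed -> LTD"
    return "below threshold for lasting change (in this sketch)"

print(plasticity_from_calcium(10.0, 1.0))    # large and brief calcium transient
print(plasticity_from_calcium(0.5, 300.0))   # small and sustained calcium signal
```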
        So, the model being developed here is that somehow, memory for facts and
      events, including memory for the location of the escape platform in the Mor-
      ris water maze, is encoded by producing LTP and LTD in an array of hippo-
      campal synapses, and these forms of LTP and LTD are critically dependent
      upon triggering by NMDA receptors. One central test of this hypothesis was
      to inject rats with NMDA receptor–blocking drugs to see if they could learn
      the Morris water-maze task in conditions where LTP and LTD were mostly
      blocked. This experiment, which has now been repeated several times in differ-
      ent labs, showed that spatial memory was indeed severely impaired under these
      conditions. Later, a similar result was obtained by using mutant mice that failed
      to express functional NMDA receptors in a crucial subregion of the hippocam-
      pus (called area CA1; see Figure 5.10). In all of these cases, the general sensory
      and motor functions of these mice were largely intact—the failure in the maze
      task appeared to be a genuine memory deficit and not a trivial defect in seeing
      or swimming or stress level.
        Would it be possible to train rats in the Morris water maze task and then ana-
      lyze their hippocampal tissue? Many attempts have been made over the years to
      look for the electrical, biochemical, or structural correlates of learning in the
      hippocampus. There have been intermittent claims, but in truth not much has
      come of these efforts. Here’s the problem. Spatial learning is likely to produce
      changes in a very small fraction of spatially distributed hippocampal synapses,
      and we don’t have a good way to know where these synapses are. So, whether
      you’re recording synaptic strength electrically or making biochemical or struc-
      tural measurements, there’s a big “needle-in-a-haystack” problem: it’s almost
[Figure 5.10 graphic: top, the signaling cascades NMDAR → large, brief calcium influx → CaMKII → ? → AMPAR insertion → LTP, and NMDAR → small, sustained calcium influx → PP1 → ? → AMPAR internalization → LTD; lower left, an LTP experiment plotting synaptic strength (percent of baseline) against time in minutes for normal versus NMDA receptor mutant mice; lower right, Morris water-maze probe-trial paths for normal and NMDA receptor mutant mice.]
figure 5.10. An experiment showing that mutant mice lacking functional NMDA-
             type glutamate receptors in a crucial region of the hippocampus have
             impaired LTP, LTD, and spatial learning. The top panel shows the sig-
             naling cascades triggered by the NMDA receptor to induce both LTP
             and LTD. The question marks indicate that there are multiple steps
             leading to AMPA receptor insertion and internalization that we still do
             not understand. The lower left panel is a plot of synaptic strength as a
             function of time in an LTP experiment. LTP was induced by applying
             high-frequency bursts to the presynaptic axons at the point indicated by
             the upward arrow. The lower right panel shows the path of well-trained
             mice in a Morris water maze. These are the results of a probe trial in
             which the platform is removed to see where the mouse will hunt for it.
             The normal mouse has a well-established memory for the correct plat-
             form location in the upper left quadrant while the LTP/LTD-lacking
             mutant mouse has little memory for the location and therefore searches
             widely in the water maze. Adapted from J. Z. Tsien, P. T. Huerta, and S.
             Tonegawa, The essential role of hippocampal CA1 NMDA receptor-de-
             pendent synaptic plasticity in spatial memory, Cell 87:1327–1338
             (1996). Joan M. K. Tycko, illustrator.
      impossible to measure the relevant changes when they are diluted in a sea of
      other synapses that are not a part of the memory trace.
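The dilution problem is easy to quantify with made-up but plausible-looking numbers. If, say, one synapse in a thousand carries the engram and each of those gets 50 percent stronger, the change in the population average is only one part in two thousand:

```python
# The "needle in a haystack" problem as arithmetic. Both numbers below are
# assumptions chosen purely for illustration.

changed_fraction = 0.001   # assumed: 0.1% of hippocampal synapses carry the engram
potentiation = 0.50        # assumed: each of those synapses gets 50% stronger

average_change = changed_fraction * potentiation
print(f"expected change in the population average: {average_change:.4%}")
# prints 0.0500%, i.e., about one part in two thousand
```

A change that small is easily swamped by ordinary biological variability and measurement noise, which is why blind population measurements have yielded so little.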
        The results showing that treatments that interfere with hippocampal NMDA
      receptor function block spatial learning in rats and mice suggest that our work-
      ing model is correct: the engram for declarative memory in the hippocampus
      requires LTP and LTD. Do these findings prove this hypothesis? Unfortunately,
      no. Although hippocampal NMDA receptor manipulations interfere with spa-
      tial learning, attempts to interfere with LTP and LTD by targeting biochemical
      signals that follow NMDA receptor activation have met with mixed success.
      One can block most forms of LTP or LTD by interfering with CaMKII or PP1
      or certain types of AMPA receptors, but this will not always produce a deficit in
      spatial learning tasks. In addition, it’s very likely that these manipulations affect
      many processes in addition to LTP and LTD. The calcium flux through the
      NMDA receptor activates many enzymes, not just PP1 and CaMKII. There is
      further divergence of the signaling cascade as we move along: CaMKII, for ex-
      ample, transfers phosphate groups to hundreds of proteins in hippocampal
      neurons, not just those involved in LTP. As a consequence, one cannot be com-
      pletely confident that the blockade of spatial learning produced by these drugs
      or molecular genetic tricks is really due to an LTP/LTD deficit as opposed to
      some side effect.
        To summarize, we know that destroying the hippocampus will prevent spa-
      tial learning in rats and mice, and there is suggestive, but not conclusive, evi-
      dence that memory for locations in space is stored in the hippocampus by
      changing the strength of synapses through LTP and LTD. How does making
      certain synapses weaker or stronger in the hippocampus give rise to the be-
      havioral memory that allows an animal to learn the Morris water maze or an-
      other spatial task? The short answer is that we don’t know. The hippocampus is
not anatomically or functionally organized in a way that makes this obvious.
The slightly longer answer is that even though we don’t know, there is an inter-
esting hint that may be relevant to this difficult problem.
  John O’Keefe, Lynn Nadel, and their coworkers at University College, Lon-
don, made recordings from neurons in the hippocampus of rats as they ex-
plored an artificial environment in the lab. What they found was that about 30
percent of one class of cells in the hippocampus (called pyramidal cells) seemed
to encode the animals’ position in space. When a rat is placed in a new environ-
ment and has a chance to explore, recordings will reveal that, after a few min-
utes, one cell fires only when the animal is in a particular location (say, the
upper left edge of a large circular cage; see Figure 5.11). This particular cell, re-
ferred to as a “place cell,” will once again fire in this fashion even if the rat is re-
moved from this environment and returned days to weeks later. Place cells have
been found in mice as well as rats. Recording from additional cells reveals that
there are place cells that fire specifically for all different parts of the explored en-
vironment. Some are fairly sharply tuned for place (Figure 5.11) while oth-
ers fire over a broader area. When place cells are recorded in mutant mice in
which the hippocampal pyramidal cells have been engineered to have a form
of CaMKII that is always on (they can’t have more LTP because it is already
turned up to maximum levels), interesting properties emerge. Place cells do
form characteristic firing patterns when the mouse explores an environment,
but then, when the animal returns to the environment, the tuning of these cells
tends to change (Figure 5.11). Because these mutant mice are also impaired in
spatial learning tasks, it has been suggested that LTP is required to maintain the
tuning of place cells and that these place cells form a cognitive map of space that
allows the animal to store spatial memory.
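A place field can be sketched as a firing rate that peaks at one spot in the arena and falls off with distance. The Gaussian shape, the field's center, and the peak rate below are illustrative assumptions, not recordings:

```python
# A sketch of a "place field": a cell that fires rapidly when the animal is
# near one particular spot in a unit-square arena and falls silent elsewhere.
import math

def place_cell_rate(x, y, center=(0.2, 0.8), width=0.15, peak_hz=20.0):
    """Firing rate (spikes/s) as a function of position in the arena."""
    dx, dy = x - center[0], y - center[1]
    return peak_hz * math.exp(-(dx * dx + dy * dy) / (2 * width * width))

print(f"{place_cell_rate(0.2, 0.8):.1f} spikes/s at the field center")
print(f"{place_cell_rate(0.8, 0.2):.1f} spikes/s at the far corner")
```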
  The problem is that the physical details of this cognitive map in the hippo-
      [Figure 5.11 graphic: firing-rate maps across four sessions for a normal mouse (top row) and an LTP-lacking mutant (bottom row).]
      figure 5.11. Place cells in the hippocampus of mice. These figures show the firing
                   rate of individual pyramidal cells in the hippocampus of a mouse explor-
                   ing a circular environment. Black pixels indicate high firing rates, and
                   light gray low firing rates, for that particular location. Place cells from
                   normal mice may be either sharply tuned (as shown here in the example
                   from the normal mouse) or broadly tuned, but the representation of
                   space is stable over many training sessions. Recordings made from a mu-
                   tant mouse that lacks LTP (it has been engineered to have a form of
                   CaMKII that is always turned on in these neurons) show that place
                   fields are not stable over repeated training sessions. This correlates with a
                   failure of these mutant mice to learn certain spatial tasks. Adapted from
                   A. Rotenberg, M. Mayford, R. D. Hawkins, E. R. Kandel, and R. U.
                   Muller, Mice expressing activated CaMKII lack low frequency LTP and
                   do not form stable place cells in the CA1 region of the hippocampus,
                   Cell 87:1351–1361 (1996). Joan M. K. Tycko, illustrator.
      campus are not obvious. Sensory systems have maps that represent the external
      world anatomically in the brain: adjacent cells in the primary visual cortex will
      be activated by light coming from adjacent points in the visual field. Likewise,
      adjacent cells in the primary somatosensory cortex will be stimulated by touch
      on adjacent points on the body surface. But although different hippocampal
      neurons that code for the same location in space tend to fire together, they are
not physically organized in any coherent fashion. One cell that codes for the
upper left quadrant of the environment may be located at the opposite end of
the hippocampus from another cell that codes for the same region, and cells in
the intervening tissue are not organized in any fashion to represent the spatial
world. So, while we are beginning to gain an understanding of the molecular
processes that represent experience as changes in neuronal function (LTP, LTD,
and changes in intrinsic excitability) and structure, and there is some evidence
emerging to link these processes with specific forms of learning, we are still far,
far away from a complete “molecules-to-behavior” explanation of declarative
memory.
WE HAVE SEEN   that the brain does not use a single cellular process or a single
brain region to store memory. Rather, memory storage involves multiple brain
locations and several broad classes of mechanism (synaptic plasticity, intrinsic
plasticity), each of which can be produced by a number of different molecular
strategies. Crucially, the cellular and molecular mechanisms of memory storage
are not unique. In true, kludgy evolutionary fashion, the mechanisms for stor-
ing memory have been largely adapted from those designed to wire up the brain
in response to experience during the later stages of development (during late
pregnancy and early childhood).
   Let’s put this back in a historical context. The design of the brain has been
limited as it has evolved by three main considerations.
   1. During the course of evolution, the brain has never been redesigned from
       the ground up. It can only add new systems onto existing ones.
   2. The brain has a very limited capacity for turning off control systems,
       even when these systems are counterproductive in a given
       situation.
         3. Neurons, the basic processors of the brain, are slow and unreliable, and
            they have a rather limited signaling range.
      These considerations have driven the brain’s solution to the problem of build-
      ing computational complexity: a brain that has a huge number of neurons and
      in which these neurons are highly interconnected. This big complex brain cre-
      ates two problems. How do you get a large head through the birth canal? And
      how do you specify the wiring diagram for 500 trillion synapses genetically?
      The solutions, as previously discussed, have been to only roughly specify the
      wiring diagram of the brain genetically and to reserve significant brain growth
      and synapse formation until after birth. This design allows a head that can pass
      through the birth canal. It also allows sensory experience to guide the fine-
      scale wiring of the brain. In order to do that, there had to be mechanisms by
      which particular patterns of sensory experience could drive changes in synaptic
      strength (LTP and LTD), intrinsic excitability, and the growth and retraction
      of axonal and dendritic branches as well as synapses. These, of course, are the
      same cellular and molecular mechanisms that, with slight elaboration, are re-
      tained in the mature brain to store memory.
        This is the ultimate example of “when life gives you lemons, make lemon-
      ade.” Our memory, which is the substrate of our consciousness and individual-
      ity, is nothing more than the accidental product of a work-around solution to a
      set of early evolutionary constraints. Put another way, our very humanness is
      the product of accidental design, constrained by evolution.
Chapter Six
Love and Sex
HUMANS ARE TRULY   the all-time twisted sex deviants of the mammalian
world. I’m not saying this because some of us get turned on by the sight of auto-
mobile exhaust systems, the smell of unwashed feet, or the idea of traffic cops in
bondage. After all, other species are at a disadvantage in expressing their kinks
by not having reliable access to the Internet. Rather, I mean that the more pro-
saic aspects of sexual activity in humans are far outside the mainstream of be-
havior for most of our closest animal relatives.
   The spectrum of human amorous and sexual behavior is wide and deeply in-
fluenced by culture (and I will consider these issues shortly), but let’s first talk
about the generic presumed norm: regular, old-fashioned monogamous het-
erosexual practice. Then we can see how it compares with the practices of most
other mammals. The simplified human story, stripped of all the romance, is
      something like this. Once upon a time, a man and woman met and felt mu-
      tual attraction that they codified in a ceremony (marriage). They liked pri-
      vacy for their sexual acts and they declined opportunities for sex with oth-
      ers. They had sex, including intercourse, many times, in most phases of the
      woman’s ovulatory cycle, until she became pregnant. Once it was known that
      the woman was pregnant, they continued to have sexual intercourse for some
      time thereafter. After the baby was born, the man helped the woman to provide
      resources and sometimes care for the child (and for the other children that
      followed). The woman and man continued their monogamous relationship
      and remained sexually active well beyond the woman’s childbearing years, as
      marked by her menopause.
        Now let’s hear another perspective. The comedian Margaret Cho uses the
      line “Monogamy is sooo weird . . . like . . . when you know their name and
      stuff?” This brings down the house in a comedy club, but the idea is actually the
      dominant one in the nonhuman world: more than 95 percent of mammalian
      species do not form lasting pair bonds, or even pair bonds of any kind. In fact,
      rampant sexual promiscuity is the norm for both males and females, and this
      promiscuous sex is typically conducted in the open, for everyone in the so-
      cial group to see. One-night stands and public sex are the rule, not the excep-
      tion. One consequence of all this public promiscuity is that in most nonhuman
      mammals the father makes little or no contribution to rearing the young. In
      some cases, the male does not stay in a social group following mating, but
      rather drifts away. In others, the male stays in the social group but does not ap-
      pear to recognize his own offspring.
        This arrangement may give the impression that most nonhuman animals are
      libertines, but in another sense they are deeply conservative. Humans often
      have sex when it is either unlikely or impossible for conception to occur (during
      the wrong part of the ovulatory cycle, during pregnancy or after menopause),
but most nonhuman mammals have sex that is very accurately timed to match
ovulation. Human females have concealed ovulation: it is almost impossible for
a male to detect directly the female’s most fertile days. Although women are
able to train themselves to detect ovulation, there is no evidence of an instinc-
tive knowledge of ovulation like that possessed by other female primates. In
fact, while many studies have been done on this topic, it is not clear that women
are most interested in sexual intercourse during the preovulatory (fertile) phase
of their cycle.
  In contrast, most nonhuman females in the mammalian world advertise
their impending ovulation with sexual swellings, specific odors, or stereotyped
sounds and gestures (such as a posture that presents the genitals) indicating sex-
ual interest. Typically, neither males nor females will approach each other for
sex during nonfertile times. Sex after menopause is not an issue because al-
though nonhuman females do show gradually declining fertility after a certain
age, there is no point where they become absolutely infertile. Indeed, meno-
pause may be a uniquely human phenomenon.
  Of course, these human sexual distinctions are based on a broad generaliza-
tion. There are some nonhuman species such as gibbons and prairie voles that
form long-term pair bonds in which the father helps rear the young. There are
also a few animals, such as dolphins and bonobos, that seem to share the human
proclivity for recreational sex, and some others, such as vervet monkeys and
orangutans, where the females have concealed ovulation. On the human side, it
is not all Ozzie-and-Harriet either: clearly, humans are not all monogamous
(or even serially monogamous), and in some cultures or subgroups polygyny
(multiple wives) or polyandry (multiple husbands) is an established practice.
Nonetheless, it is clear that the dominant human practice, across cultures, is
monogamy, or at least serial monogamy. The critical point here is that in hu-
mans, most females have a single sexual partner in a given ovulatory cycle. In
      studies where paternity has been evaluated with genetic tests across large num-
      bers of children, the vast majority (over 90 percent) of children are indeed
      found to be the offspring of the mother’s husband or long-term partner, and
      most fathers provide some form of care and support for their children (although
      this may take the form of providing food, protection from others, shelter, and
      money rather than direct child care).
        We share a number of common sexual practices with other animals. Oral-
      genital stimulation (of both sexes) is one of these. Masturbation is another.
      Both male and female animals have been observed to masturbate, some even
      using objects to do so. To date, however, humans are the only species reported
      to masturbate while watching Richard Simmons’ Sweatin’ to the Oldies, Disc 2.
      Originally it was thought that masturbation might solely be a phenomenon of
      animals in captivity, but there are now reliable field reports of both male and fe-
      male masturbation in wild bonobos and red colobus monkeys. There is also ev-
      idence for nonhuman masturbation independent of direct genital stimulation.
Sir Frank Fraser Darling, in his classic 1937 book on animal behavior A Herd of Red
      Deer, reported that during the rutting season male Scottish red deer masturbate
      “by lowering the head and gently drawing the tips of the antlers to and fro
      through the herbage.” This typically results in penile erection and ejaculation
      within a few minutes. Finally, it should be mentioned that both male and fe-
      male homosexual acts have been observed in a large number of mammalian
      species, although, to my knowledge, there are no reports of lasting homosexual
      pair bonds in nonhumans.
        So, why have humans evolved such a distinct cluster of sexual behaviors with
      concealed ovulation, recreational sex, long-term pair bonding, and prolonged
      paternal involvement? Though a few of our close simian cousins share some of
      these traits—the bonobos with their penchant for recreational sex and gibbons
      with their long-term pair bonding—none of these species has the complete
cluster of behaviors. Thus these aspects of human sexual behavior are likely to
be recent evolutionary developments in our primate lineage.
  What I will argue here is that our normative human sexual practices follow
directly from inelegant brain design. Let’s work backward to try to explore this
question. Why do humans have concealed ovulation and recreational sex? One
persuasive evolutionary hypothesis, from Katherine Noonan and Richard Al-
exander of the University of Michigan, is that concealed ovulation functions to
keep the male around. Let’s first consider the counter-example: When ovula-
tion is clearly advertised, the male can maximize his reproductive success by
mating with a given female in her fertile time and then, when her fertile time is
over, leaving to try to find another fertile female to impregnate. In this system,
the male does not have to worry that some other male will come along and
impregnate the first female while he is away because he knows that she is no
longer fertile. This is the mating system found in many species, including ba-
boons and geese. With concealed ovulation, however, the couple has to mate all
through the woman’s cycle to have a reasonable chance of conceiving. Not only
that, but if the male decides to stray and try his luck with another female, he
cannot be sure that another male will not sneak in the back door and mate with
the first female on her fertile days. Furthermore, his chance of finding another
ovulating female is low. Hence, with concealed ovulation, the best male strat-
egy is to stick with one female and mate with her all the time.
  Enough about the male. What does the female get out of this arrangement?
Isn’t her best reproductive strategy to play the field in the hope of getting the
best-quality male genetic contribution to her offspring? Indeed, the females of
many species, including many mammalian species, do exactly that. The crucial
difference is that although a female orangutan, for example, easily rears her off-
spring alone, human females don’t have it so easy. Most other animals are able
to find their own food immediately after weaning, but human children do not
      achieve this level of independence for many more years. As a consequence, the
      reproductive success of a female human is much greater if she can establish a
      long-term pair bond with a male and he contributes in some form to child-
      rearing. Males tend to buy into this arrangement for two reasons. One is that if
      the male plays along he can be confident of paternity: he won’t be wasting his
      resources supporting the offspring of another male. Another is that he, and the
      female, will enjoy the bonding that comes from frequent sex. This bonding and
      reward is enough to keep humans having sex even when conception is impossi-
      ble (during pregnancy or after menopause).
        In this story, the key point is that human females need male help in certain
      aspects of childrearing much more than females of other species because hu-
      man infants are totally helpless and even toddlers and small children are incapa-
      ble of fending for themselves. Why is that? Recall that the human brain at birth
      has only about one third of its mature volume and that early life is crucial for
      the experience-dependent wiring and growth of the brain. The human brain
      grows at an explosive rate until age 5 and it is not completely mature until
      about age 20. Unlike the 5-year-olds of most other species, human 5-year-olds
      simply do not have sufficiently mature brains to find their own food and pro-
      tect themselves from predators.
        Let’s summarize by telling the story back in the other direction. Human
      brains are never designed from the ground up. Rather, as we have seen, new sys-
      tems are just added on top of the evolutionarily older ones below. This means
      that the brain must grow in size as it evolves new features. Even more impor-
      tant, the brain is made of neurons that haven’t changed substantially in their
      design since the days of prehistoric jellyfish: as a consequence, neurons are slow,
      leaky, unreliable, and have a severely limited signaling range. So, the way to
      build sophisticated computation in a brain with these suboptimal parts has
been to create an enormous, massively interconnected network of 100 billion
neurons and 500 trillion synapses. This network is too big to have its point-to-
point wiring diagram explicitly encoded in the genome, so experience-driven
“use it or lose it” rules for wiring must come into play to actively construct this
huge network. This necessitates extensive sensory activity, which mostly pro-
ceeds after birth, and this requires an unusually protracted childhood during
which the brain matures. In addition, the physical constraints of the birth canal
make it impossible for a human baby to be born with a more mature brain—it
just wouldn’t fit. As it is, death during childbirth is a significant human phe-
nomenon, particularly in traditional societies, whereas it is almost unknown
among our closest primate relatives.
  As a consequence of all this, human females are uniquely dependent on male
support to raise their offspring. They secure their reproductive success by hav-
ing concealed ovulation, which compels males to adopt a strategy of mating
with one female repeatedly throughout her cycle. This monogamous, mostly
recreational sex has two effects: it gives a high probability of accurately knowing
paternity of the resultant offspring and it helps to reinforce a lasting pair bond,
both of which promote continued care of the offspring by both parents. Or, to
reduce it to an extreme level of speculation: if human neurons were much more
efficient processors, then heterosexual marriage might not exist as a dominant
cross-cultural human institution.
  “But . . . but . . . but,” I can hear you say, “does this explanation really bear on
how we live now? After all, in my city there are plenty of single moms who are
raising their children just fine, thank you. And there are couples lining up to
adopt children with whom they share no genetic material and others who are
happy not to have children at all. There are plenty of gay people, and a few are
raising children, but most are not. Also, there are lots of people who are hav-
ing sex outside of their long-term relationship.” Though this is all true, several
      points, central to how human sexual behavior has developed, must be made in
      thinking about these observations. First, evolution is a slow process and our
      genomes are never totally adapted to rapidly changing conditions. In our mod-
      ern world, some very recent changes relevant to sexual behavior, such as the
      availability of contraception and assisted fertility, and changes in social conven-
      tions, political systems, and technologies, have allowed women to live indepen-
      dently. Most of these changes have only appeared in the last generation. So,
      the genes that help to instruct the parts of our brains involved in sexual behav-
      ior have not yet undergone selection by many of the forces operating in modern
      society. Indeed, this is a general issue in considering human evolution that ap-
      plies to many aspects of biological function, not just the biological basis of sex-
      ual behavior. Second, certain drives related to sexual function will persist even
      in situations or stages of life where they are no longer relevant to getting one’s
      genes into the next generation. Hence, people routinely feel sexual attraction
      and even form long-term pair bonds (read as “fall in love”) in situations where
      there is no chance of producing offspring (because of contraception, infertility,
      menopause, same-sex partner, and so on). Likewise, many couples feel a strong
      urge to raise children, even if those children do not share their genes. Third,
      even in our modern society with sex outside of long-term relationships, high
divorce rates, and so forth, it is amazing that the net effect of these factors on
      paternity is minor. As I mentioned previously, widespread genetic tests of pater-
      nity across several cultures have shown that over 90 percent of children were in-
      deed fathered by the mother’s husband or long-term partner. Furthermore, de-
      spite divorce and remarriage, a similar fraction of fathers contribute in some
      fashion to raising their children. In this way, though cultural attitudes and
      practices may differ, the ultimate outcome of the sexual lives of today’s New
Yorkers or Londoners is not very different from that of people living in more
traditional societies.
I’VE SPENT A   good bit of time describing a rather limited spectrum of human
sexual behavior and speculating about how inefficient brain design might have
helped to create it. Let’s now turn our attention to the other side of the coin:
how brain function influences our sexual and amorous drives. In doing so, we
must first consider the prerequisite of all sexual behavior, which is the develop-
ment of gender identity. How do we come to see ourselves as male or female?
  Gender identity is a complex process in which biological and sociocultural
factors come together. It’s not sufficient to say that if your sex chromosomes are
XX and you have ovaries and a vagina you will then think of yourself as female,
whereas XY chromosomes, testicles, and a penis will cause you to think of your-
self as male. It’s more complex in at least two ways. As you undoubtedly know,
there is a small fraction of people who feel gender dysphoria. These people be-
lieve deeply that their chromosomal sex does not match their sense of self. This,
despite the outward characteristics of their bodies and overwhelming social
pressure. In some more affluent cultures, these transgendered people will often
elect to cross-dress, take hormonal treatments, or undergo various forms of sur-
gery to partially or completely reassign their sex. Gender dysphoria is more
common in chromosomally male people, but it is not solely a male-to-female
phenomenon. It’s worth noting that while gender dysphorics, if allowed by
social convention, will almost always cross-dress, the reverse isn’t true: most
cross-dressers identify with their chromosomal sex and do not experience gen-
der dysphoria. Rather, they cross-dress as a more subtle expression of sexual
identity.
  Once you have self-identified as male or female, what this means in terms of
      your ideas and expectations is hugely influenced by culture and personal expe-
      rience. The idea of what it means to be a man or a woman varies widely across
      cultures, families, and even individuals, in ways we know all too well: Japa-
      nese female gender identity, for example, is not the same as Italian female gen-
      der identity. In recent years, our cultural ideas about male and female identity
      have undergone rapid change. Perhaps the clearest examples of culturally con-
      structed gender identity may be found in those traditions that have institution-
      alized transgendered status. In many Native North American cultural groups, a
      practice called two-spirit flourished. In these traditions, chromosomal males
      who identified as females and, to a lesser extent, chromosomal females who
      identified as males were encouraged to cross-dress and were accorded a special
      shamanistic status for their abilities to bridge the male and female worlds. In
      Polynesia, there is a tradition in which the first-born child is designated as a sort
      of mother’s helper and assigned a female-typical social role. In some cases this is
      done irrespective of the chromosomal sex of the child, and the male-to-female
      transgendered people who result are given the name mahu (in Tahiti or Hawaii)
or fa’afafine (in Samoa). An early European encounter with this practice was
      recorded by a Lieutenant Morrison, a member of Captain William Bligh’s 1789
      expedition to Tahiti: “They have a set of men called mahu. These men are in
      some respects like the eunuchs of India but they are not castrated. They never
      cohabit with women but live as they do. They pick their beards out and dress as
      women, dance and sing with them and are as effeminate in their voice. They are
      generally excellent hands at making and painting of cloth, making mats and ev-
      ery other woman’s employment.” Like Native American two-spirit practitio-
      ners, mahu are assigned a high social status and are considered both lucky and
      powerful. King Kamehameha I of Hawaii made sure to have mahu dwelling
      within his compound for exactly this reason. The larger issue here, as illustrated
by mahu, two-spirit, or merely your hyper-macho Uncle Fergus, is that al-
though sex is simply determined by sex chromosomes and the resultant action
of sex hormones, gender identity is a more complex process in which there is an
interplay of biological and sociocultural factors.
  Can we identify differences in male and female brains that might underlie
the biological component of gender identity? Male brains, on the average, are
slightly bigger than female brains, even when a correction is made for body
size. This is most apparent in measures of the thickness of the right cerebral
cortex. More interestingly, a particular cluster of cells in the hypothalamus
called INAH3 (an acronym for interstitial nucleus of the anterior hypothala-
mus number 3) is two to three times larger in men than in women. This is very
suggestive because the cells of INAH3 have an unusually high density of recep-
tors for testosterone and also because neural activity in this region is correlated
with certain phases of male-typical behavior during sex (more on this later).
Lest we start to think that everything is bigger in males, there are two key re-
gions that are proportionally larger in the female brain. These are the corpus
callosum and the anterior commissure. These structures are bundles of axons
(white matter) that carry information from one side of the brain to the other.
They are particularly important in linking the two sides of the highest and most
recently evolved brain region, the cerebral cortex. It is almost certain that this
list is incomplete in several respects. There are likely to be more regional size
differences in male versus female brains that will emerge with further research.
In addition, there are likely to be even more differences that will be manifest
not as size differences but rather as differences in cellular structure (such as de-
gree of dendritic branching), biochemical constituents (perhaps the density of
neurotransmitter receptors or voltage-gated ion channels), or electrical func-
tion (such as spiking rate and timing in particular neurons).
figure 6.1. ¿Qué es más macho? Even though there is very little information in these stick figures, we can easily assign them to typically male and female categories. This illustrates that our brains’ visual systems have become highly specialized for gender recognition. This image was kindly provided by Professor Nikolaus Troje of Queen’s University, Ontario, Canada. The male/female distinction is even more apparent when the figures are animated with gender-typical walking motions. You can see this at Professor Troje’s website: www.biomotionlab.ca/Demos/BMLgender.html
  In addition to these neuroanatomical differences between men and women
there are some consistent behavioral differences. This has been a contentious
and politically charged area of research, but a large number of studies con-
ducted by different groups around the world now seem to be pointing to a con-
sistent set of conclusions. On average, women score better than men on some
language tasks, such as rapidly generating words in a particular category. This is
called “verbal fluency” and has been found cross-culturally. They outscore men
in tests of social intelligence, empathy, and cooperation. On average, women
are better at tasks that involve generating novel ideas, and they excel at match-
ing items (spotting when two items are alike) and arithmetic calculation. But
men generally outperform women on tests of mathematical reasoning, particu-
larly those using word problems or geometry. They are better at some spatial
tasks such as mental rotation of three-dimensional objects and distinguish-
ing figures from the background. The general conclusion is that, on average,
women and men do tend to have different cognitive styles. Of course, these
differences are seen in averages of large populations and individual men and
women can have abilities throughout the performance range for all of these
traits. Tests that seek to measure general intelligence have not found significant
differences between large male and female populations.
  So, we have some evidence for differences in male and female brain structure
and some differences in male and female mental function. One key issue is to
what degree these anatomical and behavioral differences are genetically versus
socioculturally determined, the old nature-versus-nurture question. The fact
that we can see anatomical differences between adult male and female brains
does not, in itself, prove that these differences have a genetic basis. Recall from
Chapter 3 that experience can mold neuronal connections and fine structure as
particular patterns of electrical activity give rise to the expression of certain genes.
Perhaps the way a typical girl is raised causes her to grow a somewhat larger set
      of axonal connections between the left and right sides of the brain (the anterior
      commissure and corpus callosum) and the way an average boy is raised causes
      expansion of INAH3.
        At present, although it seems reasonable to imagine that sociocultural factors
      might affect sex differences in brain structure, there is no evidence leading to ei-
      ther acceptance or rejection of this idea. But several lines of evidence argue for a
      genetically based explanation. For example, accumulating evidence indicates
      that gender differences in behavior can be seen very early in life and across spe-
      cies. On average, newborn girls spend more time attending to social stimuli
      such as voices and faces while newborn boys show greater fascination with spa-
      tial stimuli such as mobiles. Young male monkeys and rats tend to engage in
      more rough-and-tumble play than their female counterparts. Young male rats
      perform better in spatial maze tasks than females.
        Correlational studies on both girls and boys have shown that the levels of
prenatal testosterone can predict performance on some spatial tasks
      measured later in life. Although testosterone is thought of as a “male hormone”
      deriving from the testes, it is also produced in the adrenal glands and is there-
      fore present in females in smaller amounts. In one recent report, Simon Baron-
      Cohen and his colleagues at the University of Cambridge found that children
      exposed to a high level of testosterone in utero were less likely to make eye con-
      tact at the age of 12 months and had less developed language skills at the age of
      18 months. In sum, testosterone exposure seems to drive a more male-typical
      cognitive/behavioral style even when this is measured quite early in life.
        Extreme examples of this idea can be found in cases where sex hormones are
      subject to unusual manipulations. Girls suffering from a swelling of the adrenal
      glands called congenital adrenal hyperplasia, or whose mothers were treated
during pregnancy with the synthetic hormone diethylstilbestrol (DES), are exposed
to much higher than usual levels of testosterone, starting in utero. On average,
these girls tend to perform more like boys in some cognitive tests (better at
mathematical reasoning and spatial tasks). Their behavior as small children was
also more boylike: these girls displayed more aggressive play and showed more
interest in object toys (trucks) than social toys (dolls). An analogous result is
seen in animal experiments: the performance of a group of female rats in a spa-
tial maze task was increased to average male levels when they were treated with
testosterone shortly after birth.
  The converse finding is revealed in a disease called androgen-insensitivity
syndrome, in which males develop normal testes that secrete typical levels of
testosterone, but a mutation in the receptor for testosterone (a “male hormone,”
or androgen) renders cells unable to respond to these compounds and so the
body (and the brain) develop on a female path. When visual-spatial ability was
studied in such a population by Julianne Imperato-McGinley and her coworkers
at Cornell Medical School, it was found that not only did they perform more
poorly than average males, but they did significantly worse than average fe-
males as well. Presumably this reflects the fact that males with androgen insen-
sitivity syndrome receive no effects of androgens at all during development and
early life, while normal females do have a low level of androgen exposure from
adrenal gland–derived testosterone. These results are similar to those seen in
male rats castrated at birth, which also perform more poorly than females in a
spatial maze task.
  Some sex-based differences in cognitive style may be attributed to differ-
ences in brain structure. Melissa Hines and her coworkers, then at UCLA,
found that in a population of normal women, those with the largest corpus callosum (in particular, a subregion of the corpus callosum called the splenium) performed best on tests of verbal fluency. They hypothesized that a larger sple-
      nium allows a greater flow of information between language centers in the left
      and right brains.
      THE ISSUE OF    sex-based differences in brain function and cognition entered
      the public consciousness in a big way when Larry Summers, president of Har-
      vard University, addressed the National Bureau of Economic Research Con-
      ference on Diversifying the Science and Engineering Workforce on January 14,
      2005. He proposed that the extreme underrepresentation of women at the
      highest levels of science and engineering could be partially explained by genetic
      differences underlying the function of male and female brains, what he called
      “different availability of aptitude at the high end.” He proposed that if one were
      to evaluate the top 2 percent or so of scorers on standardized math or science
      tests, one would find four times more men than women; and he suggested
      that this difference in the elite pool underlies, in part, the underrepresentation
      of women in science and engineering, particularly at top-ranked universities.
      These comments evoked a firestorm of criticism and counterattack that contin-
      ued even after his resignation some months later.
        Let’s evaluate the Summers hypothesis in light of the work I’ve just reviewed
      on sex-based differences in brain structure and cognitive style. There is reason
      to believe his foundational premise: one can devise cognitive tests that reveal
      differences between men and women in both the average score and in the varia-
      tion of scores (greater variation will influence the top 2 percent). But, crucially,
      are these tests predictive of success in science and engineering and the very top
      ranks? To my knowledge, data are not available to assess this point. But from
      my own experience in science, I would guess not. I have had the pleasure of in-
      teracting with many of the world’s most successful scientists (albeit, mostly bi-
      ologists) over the years and one thing is clear: there is not a single cognitive
strategy that underlies success at the very top of science. Some of the world’s top
scientists think in terms of equations, others verbally, others spatially. Some rely
on step-by-step deduction and logic to reach their conclusions while others
have a flash of insight that they must then go back and test post hoc to see if it is
valid. Einstein, by all reports, was a rather middling mathematician, but this
did not stop him from making paradigm-shifting contributions to physics that
were expressed mathematically.
  For the Summers hypothesis to be true, the cognitive differences that one
measures on standardized tests must truly be predictive of scientific success at
the high end. In addition, for the Summers hypothesis to be true, a diminished
pool of elite women must be a limiting factor in the development of elite scien-
tists. Here, my personal experience also leads me to be skeptical. I served as Ad-
missions Chair and then Director of the Graduate Program in Neuroscience at
The Johns Hopkins University School of Medicine from 1995 to 2006. That is
one of the top programs in the world and it draws a very talented pool of stu-
dents. During this period, comparable numbers of men and women enrolled.
Similar numbers of men and women completed the program and those that did
had similar productivity (measured, for example, as numbers of papers in the
most prestigious journals). But as these students moved on in their careers, the
women began to drop out. Fewer entered postdoctoral fellowships. Of those
who finished their fellowships, fewer applied for faculty positions at elite uni-
versities, and of those that applied, fewer were successful in obtaining the top
positions. This is reflected in the composition of my own department, where
only 3 of 24 faculty are women. At least in the field of neuroscience, I highly
doubt the validity of the Summers hypothesis: there are plenty of women with
the very highest scientific aptitude in the pool, but the pipeline is leaking prodi-
giously, for a variety of reasons. These reasons include many social factors in-
      cluding a hostile environment, inflexible tenure and promotion policies (that
      do not account for a woman’s childbearing years), and, in some cases, blatant
      discrimination.
        The Summers episode has been damaging to science for at least two reasons.
      First, undoubtedly, some women who might otherwise have considered a ca-
      reer in science and engineering have reconsidered in light of his remarks, ei-
      ther because they accepted his hypothesis or because they took his remarks as
      evidence of a hostile environment to women scientists in academia. Second,
      the backlash against Summers has included statements indicating that even
      considering the issue of sex-based differences in brain function or cognitive
      style should not be allowed. It’s easy to see where these “politically correct” ideas
      come from. It’s natural to be suspicious of work that could be used to rational-
      ize the status quo, in this case male-dominated science. But this position is fun-
      damentally intellectually dishonest. At some level, genes and epigenetic factors,
      including those linked to sex, influence the cognitive style of male and female
      populations. Pretending that this isn’t true does not advance the cause of the
      rights of women (or any other historically oppressed group, for that matter).
      Scientists should be able to advocate for a woman-friendly, merit-based, inclu-
      sive, and diverse scientific enterprise without denying the mounting evidence
      for sex-based differences in brain function and cognitive style.
ENOUGH ABOUT MALE and female brains. Let’s talk about love and sex. The
      1970s art-rock band Roxy Music summed it up rather succinctly when they
      sang, “Love is the drug got a hook in me.” What is the neurobiological basis of
      this? Is love, or at least sex, really like a drug? Not surprisingly, we know more
      about the brain’s involvement in sexual acts than we do about its role in love
      and attraction.
        Andreas Bartels and Semir Zeki of University College, London, have taken
[Figure 6.2 image: two brain-scan slices with regions labeled anterior cingulate, insula, caudate, putamen, and cerebellum.]
figure 6.2. Specific brain activation produced by viewing photos of a lover’s face. Left: A slice view down the midline of the brain with the nose oriented to the left, created by a brain scanner. Right: A different slice view, crosswise, just in front of the ears. The black patches in both panels show the regions of activation. Adapted from A. Bartels and S. Zeki, The neural basis of romantic love, Neuroreport 11:3829–3834 (2000). Joan M. K. Tycko, illustrator.
an interesting approach to finding neural correlates of romantic love. They re-
cruited male and female subjects in their 20s who claimed to be “truly, deeply,
and madly in love” and imaged their brains while they looked at photographs of
their lovers’ faces. Then, they performed a similar experiment using photo-
graphs of friends for whom the subjects had no strong amorous or sexual feel-
ings, matched for age, sex, and duration of friendship, with the idea that the for-
mer minus the latter would reveal sites of brain activation specific to romantic
love, as opposed to merely vision or face recognition. This calculation found in-
creased activity in several discrete locations including the insula and anterior
cingulate cortex (areas known to be important in processing emotional stimuli)
      when the subjects were viewing the lover’s face and, surprisingly, in two regions
      that are mostly known for their involvement in coordination of sensation and
      movement: the caudate/putamen and the cerebellum (Figure 6.2). There was
      also a group of regions where activation was decreased with viewing the lover’s
      face, and these included several regions of the cerebral cortex as well as the
      amygdala (an emotion, aggression, and fear center).
        The Bartels and Zeki study used people who had been involved in a relation-
      ship for over 2 years. When this study was repeated by another group, headed
      by Lucy Brown at Albert Einstein College of Medicine, they recruited a subject
      pool of people who were in an earlier phase of their love relationships: ranging
      from 2 to 17 months. This population generally had the same pattern of activa-
      tion as the longer-relationship subjects, with one consistent difference: the new
      relationship subjects also showed strong activation in the ventral tegmental
      area. This is particularly interesting because the ventral tegmental area is a re-
      ward center of the brain that is responsible for intensely pleasurable sensations.
      It is one of the key regions activated by heroin or cocaine. Like users of heroin
      or cocaine, new lovers frequently show very poor judgment, particularly about
      the object of their affections. So the boys in Roxy Music may have had it at least
      partially right. Love is a very powerful drug, but it only works for a while, a few
      months to a year—kind of like crack. Then the bloom falls off the rose, so to
      speak. Hence the old joke:
         Q: Is it true you married your wife for her looks?
         A: Yeah, but not the looks she’s been giving me lately.
        What can we take from all this? First, the caveats. This type of study is lim-
      ited in several ways. It’s correlational so we really don’t know if any of these
      changes in brain activity are actually involved in the feeling of romantic love.
Also, it’s a difficult study to perform. We don’t really know the mental state of
each subject as he or she gazed at his or her lover’s photo, and it’s difficult to com-
pletely exclude other factors in the results. For example, can the experimenters
be confident that the lovers’ faces were not simply more familiar to the subjects
than those of their friends? But if we assume for a moment that the pattern of
activation seen in this study does indeed reflect the activity of the brain during
the feeling of romantic love, then one thing that’s clear, and not surprising, is
that not just a single, discrete region is involved. The fact that emotional and
reward centers are activated is interesting. But what’s surprising is the activation
of centers for sensory-motor integration (caudate/putamen and cerebellum),
which might shed new light on the issue.
  Clearly, if you imagine a group of twenty-somethings looking at photos of
their true loves you can guess that they might be feeling sexually aroused. How
do the patterns of activation from the lover’s face experiment compare to brain
activation produced by seeing images of sexual activity? There is a collection of
several studies of men and women who had their brain activity scanned while
watching videos of strangers engaged in (hetero)sexual activity. In some of these
studies the subjects were also asked to rate their level of sexual arousal in a ques-
tionnaire, and in one, performed by Bruce Arnow and his coworkers at Stan-
ford, males had their arousal measured by means of a “custom-built pneumatic
pressure cuff ” attached to the penis with a condom. To try to isolate brain acti-
vation that was specific to sexual arousal, scans of these males were compared to
scans of the same subjects watching presumably sexually neutral material such
as landscapes or sports. The patterns of activation with the sex videos (Fig-
ure 6.3) were somewhat different in studies from various labs, but, in general,
the studies showed partial overlap with the brain regions activated by photos
of lovers’ faces (Figure 6.2). Both activated the anterior cingulate, insula, and
caudate/putamen. In addition, the sex videos produced activation of visual as-
[Figure 6.3 image: traces of turgidity response and brain activation plotted against time in seconds, with alternating segments of sex and sports video marked.]
figure 6.3. Simultaneous measurement of penile erection and brain activity while the subject watched alternating video snippets of sex and sports. Brain activity was recorded in a region called the insula. You can see that these two measures correlate rather well. Adapted from B. A. Arnow, J. E. Desmond, L. L. Banner, G. H. Glover, A. Solomon, M. L. Polan, T. F. Lue, and S. W. Atlas, Brain activation and sexual arousal in healthy, heterosexual males, Brain 125:1014–1023 (2002). Joan M. K. Tycko, illustrator.
      sociation areas in the occipital and temporal cortex as well as some areas impli-
      cated in executive function and judgment in the frontal cortex. The sex videos
      did not activate the ventral tegmental reward area. Interestingly, Sherif Ka-
      rama and his colleagues at the University of Montréal, who used both men and
      women in their study, found that only the men had significant activation in the
      hypothalamus. This result should be interpreted with caution, however, be-
      cause it may reflect a difference in acculturated male and female responses to
      sex videos rather than an underlying difference in brain function.
  Although it is difficult to derive a good understanding of either romantic
love or the initial stages of sexual arousal from these human imaging studies,
things become somewhat clearer as we begin to consider aspects of the sexual
acts themselves, because this allows for animal experimentation: We can’t easily
ask animals how they are feeling, but we certainly can watch them have inter-
course. Recall that unlike humans, most animals, including the rats and mon-
keys that are the mainstay of laboratory research, will only mate during the fe-
male’s ovulatory phase. Therefore, the initiation of sexual behavior is typically
under the control of the hormonal processes of the female’s ovulatory cycle. A fe-
male monkey comes into heat by a two-step process in which surges of the ovar-
ian hormones estrogen and then progesterone occur. This process causes sev-
eral different effects that prime sexual behavior. The estrogen acts, over a day or
so, to stimulate the growth of synaptic connections in a region of the hypothal-
amus called the ventromedial nucleus. You may recall from Chapter 1 that this
nucleus is also involved in feeding behavior—probably the ventromedial nu-
cleus has different subdivisions devoted to eating and sexual behavior. Estrogen
also causes the neurons in this region to express receptors for progesterone (es-
trogen binds to the promoters of the progesterone receptor gene to turn on
their transcription). Then, when progesterone surges slightly later, it binds to
progesterone receptors, and this causes the female to seek out males, present her
genitals, and engage in other come-hither behaviors (such as ear wiggling in
rats). The female’s ventromedial nucleus is integrating two types of informa-
tion. One is electrical, triggered by sensory stimulation from seeing/hearing/
smelling the male (hey, he’s kinda cute . . .). The other is hormonal informa-
tion indicating her ovarian status (. . . and I’m fertile right now!). Only when
both signals are on will things move ahead. Recordings of neuronal activity
from the ventromedial nucleus show that neurons are firing spikes at high rates
      both during this “courtship phase” and during subsequent copulation. Females
      whose ventromedial nucleus has been damaged will not show these behaviors
      even if their ovarian hormones are functioning normally. Conversely, artificial
      electrical stimulation of this nucleus can induce or strengthen female-typical
      mating behavior.
        Estrogen also functions to trigger the cells lining the vagina to produce sub-
      stances that ultimately result in an odor that is attractive to males. These odor-
      ant molecules do not appear to be directly secreted by the cells of the vagina,
      but rather by bacteria that thrive in an estrogen-primed environment within
      the vaginal mucus. These odors are key to triggering sexual interest in males.
      The vaginal odor of a monkey in the postovulatory phase not only is unattrac-
tive to males; it even seems to repel them. If vaginal secretions from a female
      monkey in heat (immediate preovulatory phase) are smeared around the vagina
      of a monkey not in heat, the male will be fooled by the alluring smell and will
      attempt to mate.
        Here, it is worth pausing to mention that although some general themes of
      this sexual circuit are likely to operate in human females, there are important
      differences as well. As noted previously, olfaction does not appear to be as cen-
      tral to sexual behavior in humans as it is for many other mammals. Likewise,
      ovarian hormones do not exert such a powerful control over female sexual drive
      in women. In fact, women who have had their ovaries removed for medical rea-
      sons typically experience a normal sex drive.
        Males also have a center in the hypothalamus for triggering sexual behavior,
      but this is in a different area, the medial preoptic region. This is a group of nu-
      clei within the hypothalamus that includes the aforementioned INAH3, the
      area that has a high density of testosterone receptors and that is larger in males.
      Like the ventromedial nucleus in females, the medial preoptic region integrates
      both sensory-driven synaptic stimulation from higher centers, including emo-
tional centers, and hormonal information. The difference is that in this case the
hormone is testosterone. If the action of testosterone is removed (by castration
or drugs that block testosterone receptors), then this will block the increase of
spike firing of medial preoptic area neurons evoked by sexual stimuli, such as a
female in heat. These treatments will also cause a reduction in male-typical sex-
ual behavior, such as the mounting of females. A complete abolition of the
mounting behavior can come from selective destruction of the medial preoptic
area. Surprisingly, this does not seem to produce a complete abolition of sex
drive, but only stops that triggered by females: male monkeys with medial pre-
optic lesions will still masturbate with gusto.
  Artificial electrical stimulation of a male monkey’s medial preoptic area will
cause it to mount a nearby female and commence thrusting, but copulation
will only be sustained if the female is in heat. Otherwise, the male will give a few
half-hearted thrusts and scamper off. It is important to realize that the medial
preoptic area, which is quite small, triggers penile erection, mounting, and
thrusting, but it is not the actual command center for any of those actions.
Rather, it activates brainstem centers to produce erection and motor cortex and
motor coordination centers to initiate mounting and thrusting.
  Likewise, the medial preoptic area does not appear to be important in trig-
gering ejaculation. Artificial stimulation of this region will not result in ejacula-
tion, and recordings of electrical activity do not show a burst of activity corre-
lated with ejaculation, as would be expected if the medial preoptic were the
center for triggering this function. In fact, the medial preoptic falls almost com-
pletely silent at the point of ejaculation and remains so for minutes afterward. It
has been suggested that this may underlie the male post-ejaculation refractory
period during which further sexual activity is difficult or impossible.
  This brings us to the topic of orgasm. Orgasm, as a physiological phenome-
non, is remarkably similar in women and men. In both sexes, orgasm involves a
      rising heart rate, an increase in blood pressure, involuntary muscle contrac-
      tions, and an intensely pleasurable sensation. Orgasm is accompanied by con-
traction of two pelvic muscles, the bulbocavernosus and ischiocavernosus, as
      well as the muscles in the wall of the urethra, leading to ejaculation of semen in
      men, and in some cases, glandular fluids in women as well (one recent sur-
      vey indicated that 40 percent of women have experienced ejaculation at some
      point).
        In recent years, brain imaging studies have been performed on men during
      orgasm. Let’s think for a moment about what a profoundly unsexy arrange-
ment that is. The subject’s head is immobilized with a tightly fitting strap, and he
is then slid into the ringing, claustrophobic metal tube that is the positron
      emission tomography (PET) scanner, with his nether regions still outside the
      device. An intravenous line is attached to deliver a pulse of the radioactive water
      required for PET imaging. The subject is instructed to close his eyes and lie as
      still as possible (to avoid activation of visual or motor parts of the brain) while
      his female companion attempts to bring him to orgasm with manual stimula-
      tion. It’s amazing that anyone could achieve orgasm under these conditions.
      Yet, in a recent study by Gert Holstege and his coworkers at University Hospi-
      tal Groningen in The Netherlands, 8 of 11 subjects were able to ejaculate dur-
      ing the experiment (and 3 of these even managed to do it twice).
        During male orgasm, a large number of brain regions were activated. Pre-
      dictably, the reward centers of the midbrain, including the ventral tegmental
      area, were strongly engaged. In this sense, both new love and orgasm are like
      heroin and cocaine in their pleasurable effects. A large number of discrete areas
      in the cortex were activated including sites in the frontal, parietal, and temporal
      lobes. Surprisingly, these sites of cortical activation were found only on the
      right side of the brain. Finally, the cerebellum was also strongly activated dur-
      ing orgasm. This is not entirely unexpected because part of the job of the cere-
bellum is to detect mismatches between plans for motor action and feedback
about how that is progressing. In this sense, involuntary motions produced
during orgasm might be expected to produce strong cerebellar activation. Al-
though imaging studies of female orgasm have not yet been published at
the time of this writing, preliminary results presented at scientific meetings
have indicated that the patterns of brain activation in female and male orgasm
are remarkably similar. The main difference between women’s and men’s brains
during orgasm appears to be that women have additional strong activation in a
midbrain area called the periaqueductal gray region. This is an area rich in en-
dorphin-containing neurons and might possibly contribute an additional as-
pect to the sexual pleasure or satiety felt by women.
  Orgasm is a complex phenomenon with different aspects mediated by dif-
ferent brain regions. This is revealed in part by brain stimulation studies in
which electrical activation of the septum (a part of the limbic system subserv-
ing, among other things, emotion and memory) produced orgasms in men that
did not have any pleasurable component to them. Similarly, there are a number
of cases of patients suffering from seizures involving the right temporal lobes
(in which parts of the limbic system reside) who experience uncontrollable sei-
zure-evoked orgasms with no pleasure. But it’s important to note that not all
seizure-evoked orgasms fail to evoke sexual pleasure: Yao-Chung Chuang and
his coworkers from Chang Gung Memorial Hospital in Taiwan report the case
of a 41-year-old woman who experienced temporal lobe seizures and accompa-
nying pleasurable orgasms after a few seconds of toothbrushing (Figure 6.4).
Gives the expression “oral sex” a whole new meaning, doesn’t it? It’s tempting to
speculate that this woman’s seizures activated the midbrain reward circuitry, in-
cluding the ventral tegmental area, while those who experienced seizure-evoked
orgasms without pleasure did not undergo activation of that circuitry.
  The observation that it is possible to have orgasms without pleasure is remi-
[Figure 6.4 image: EEG trace (scale bar in microvolts, 1 second) with markers for “Toothbrushing starts” and “Seizure and orgasm start.”]
figure 6.4. Toothbrushing-evoked temporal lobe seizures in a 41-year-old woman from Taiwan. These seizures, shown here in an electroencephalogram (EEG) record, were accompanied by pleasurable orgasms. Reproduced with the permission of Elsevier from Y.-C. Chuang, T.-K. Lin, C.-C. Lui, S.-D. Chen, and C.-S. Chang, Tooth-brushing epilepsy with ictal orgasms, Seizure 13:179–182 (2004).
niscent of a theme we have previously encountered in sensory systems: separate
brain regions are involved in the pure sensory aspects of an experience and the
emotional (rewarding/aversive) components. Normally, these strands of sensa-
tion are tightly woven together and it is only when something unusual hap-
pens, such as the onset of Capgras syndrome or pain asymbolia (Chapter 4)
or orgasmic seizures without pleasure, that we can see the underlying compo-
nent parts.
   Aside from the intense and immediate pleasure of orgasm, there is also the
warm, lingering post-orgasmic afterglow. This state, which is thought to be
crucial for sexual pair bond formation, may be mediated, in both men and
women, by release of the hormone oxytocin from the pituitary gland, under
control of the hypothalamus. Treatments that block oxytocin release do not
prevent orgasm or the immediate pleasurable sensations, but do seem to inter-
fere with the crucial afterglow. It is worth noting that the oxytocin-releasing
system appears to be involved in more general aspects of pair bond formation,
not just that which occurs in a sexual context. Oxytocin surges occur in moth-
ers’ brains at birth and during breastfeeding and are likely to be one important
factor in developing a mother’s bond with her child.
WE TEND TO use a shorthand to describe the sexual feelings and motivations of
individuals. Typically we will say that someone is homosexual, heterosexual, or
bisexual as a descriptor of what we have come to call sexual orientation. In
truth, this is a very crude sort of measure. In humans, sexuality has been embel-
lished considerably beyond instinctive behavior. Each of us carries within his or
her mind a sort of template of an ideal romantic or sexual encounter that in-
corporates many unique elements and details. Within each of these categories
there is a lot of variation. In the gay community, for example, one can find gay
men and lesbians with all sorts of gender identities. Lesbians run the gamut
      from “butch” to “femme,” with many who have sexual and gender-identity
      feelings that are not easily classifiable along this dimension. The playwright and
actor Harvey Fierstein has described himself as “gay as a pink leather piñata.”
      This is a hoot, but what does it mean? Macho-gay? Effeminate-gay? Neither?
      Straight people are similarly diverse, adopting myriad sexual roles and per-
      sonas.
        The subtlety of human sexual identity makes it hard to analyze. But within
      these rough designations it seems that, at least in the United States and Europe,
      about 4 percent of men and 2 percent of women are consistently homosexual,
      about 1 percent of men and 2 percent of women are consistently bisexual, with
      the remainder heterosexual. These numbers are based on surveys and it is not
      always easy to get truthful answers without sampling bias, but these are rea-
      sonable general estimates that have been confirmed in multiple carefully con-
      trolled studies. They reflect consistent behavior, not “experimentation”—the
number of people who have ever had at least one homosexual experience lead-
      ing to orgasm is much higher (approximately 25 percent for men and 15 per-
      cent for women).
        The biological determinants of sexual orientation have been a topic of ran-
      corous, politically charged debate that has intensified in recent years as more
      and more science bearing on this issue has come to light. Many religious con-
      servatives and some others on the political right are invested in the interpreta-
tion that homosexuality is a sinful choice made of one's own free will. As a consequence
they have been motivated to attack any research suggesting that sexual orien-
tation has a biological component, whether genetic or epigenetic (that is, driven by
nongenetically determined biological signals such as fetal hormone levels). Gay
      activists and many on the political left wish to promote societal acceptance of
      and civil rights for gays. Accordingly, many in this camp would like to believe
that sexual orientation is like eye color: it’s a trait you’re born with rather than a
choice. There is also a flip side of fear to this position, however. If sexual orien-
tation is completely genetically determined, then one must be concerned that
in the future people might use genetic tests to discriminate against gays or even
abort potentially gay fetuses.
  Let’s try to be as objective as possible in examining the evidence to date. As in
any nature-versus-nurture type of debate, one can take extreme positions, but
these are not inevitable. Recall the discussion (in Chapter 3) of the biological
basis of general intelligence, another contentious and complex human trait
that’s hard to measure. In that case it appears likely that about 50 percent of
general intelligence is heritable. It’s possible that, in the fullness of time, a simi-
lar answer will emerge for sexual orientation.
  So, does sexual orientation have a heritable component? Statistically, having
a gay sibling dramatically increases your own probability of being gay. It ap-
pears that about 15 percent of the sisters of lesbians are themselves lesbian
(compared to about 2 percent of the general population), and 25 percent of all
brothers of gay men are gay (compared to about 4 percent of the general popu-
lation). Interestingly, having a gay brother does not increase the chance of a
woman’s being a lesbian and vice versa. Now, of course, the studies that pro-
duced these results do not speak directly to the heritability of sexual orientation
since siblings also share similar upbringing and environment. More compelling
evidence comes from studies of monozygotic (identical) and dizygotic (frater-
nal) twins. Here, it seems that, for men, having a gay male monozygotic twin
makes your own likelihood of being gay about 50 percent while having a gay
male dizygotic twin makes your likelihood of being gay about 30 percent (simi-
lar to the likelihood when you have a gay nontwin brother). A similar study
conducted with women showed that having a lesbian monozygotic twin con-
      ferred a 48 percent chance of being lesbian, while a lesbian dizygotic twin was
      associated with a 16 percent rate (again, similar to that from having a nontwin
      lesbian sister).
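To see how strongly these rates depart from the baseline, here is a small Python sketch that computes the relative risk implied by the rounded figures quoted above. The numbers are the approximate estimates from the text, used purely for illustration, not raw data from any particular study.

```python
# Relative risk implied by the approximate concordance rates quoted above.
# These are the rounded figures from the text, used only for illustration.
base_rate = {"men": 0.04, "women": 0.02}   # general-population estimates

concordance = {
    "nontwin brother of a gay man":   ("men", 0.25),
    "dizygotic twin of a gay man":    ("men", 0.30),
    "monozygotic twin of a gay man":  ("men", 0.50),
    "nontwin sister of a lesbian":    ("women", 0.15),
    "dizygotic twin of a lesbian":    ("women", 0.16),
    "monozygotic twin of a lesbian":  ("women", 0.48),
}

for relation, (sex, rate) in concordance.items():
    ratio = rate / base_rate[sex]
    print(f"{relation}: {rate:.0%} vs. a {base_rate[sex]:.0%} base rate "
          f"(roughly {ratio:.0f}x)")
```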
        There is one clear conclusion from these studies: In significant numbers of
      cases, monozygotic twin pairs are discordant (one gay, one straight), which in-
dicates that, unlike eye color, sexual orientation is not 100 percent heritable. That
      said, the studies suggest that a portion of sexual orientation is genetically deter-
      mined. But we have to be concerned about the limitations on studies of twins
      raised together: if monozygotic twins are raised more similarly than dizygotic
      twins, this could contribute to the greater incidence of homosexuality in the
      former. A better study, of course, would analyze twins raised apart. These are
      ongoing at the time of this writing.
        Two lines of evidence suggest that male homosexuality is partially linked to
      function of the X sex chromosome, which males inherit from their mothers. A
      small number of men have an extra X chromosome, giving them the XXY ge-
      notype instead of the more typical XY. This is called Klinefelter’s syndrome and
      has a number of associated traits, including reduced levels of testosterone and
      reduced sperm viability. In one study, such men had a much higher incidence of
      homosexuality than the general population (about 60 percent). A complemen-
      tary line of work has shown that among chromosomally normal gay men, sig-
      nificantly increased rates of same-sex orientation were found in the maternal
      uncles and male cousins of these subjects, but not in their fathers or pater-
      nal relatives. This is also consistent with maternal transmission through the X
      chromosome.
        Taken together, these studies indicate a strong but not total genetic influence
      on sexual orientation in both men and women, with the effect in women being
      somewhat smaller. What gene or genes are likely to be involved? Here it is
      worthwhile to briefly discuss some issues of genetics in relation to human be-
havior. Complex human behavioral traits such as general intelligence, shyness,
and sexual orientation can have a significant degree of heritability, yet these
traits cannot typically be attributed to variation in a single gene. Rather, they
are polygenic: the heritable component of the trait is caused by variation at
multiple genes. As a fictional illustrative example, we might imagine that gen-
eral intelligence is promoted by having a cerebral cortex composed of a large
number of highly interconnected neurons that fire spikes readily. Thus higher
general intelligence would be promoted by particular forms of genes that in-
crease the overall number of neurons in the cerebral cortex, others that promote
the growth and branching of dendrites or axons, and yet others that express ion
channels that underlie particular modes of spike firing. Given the broad activa-
tion of brain regions involved in amorous and sexual behavior that we have seen
in imaging studies, we can imagine that sexual orientation is also likely to have
polygenic heritable components.
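To make the idea of a polygenic trait concrete, here is a toy simulation in Python. The number of loci, their effect sizes, and the nongenetic term are all invented for illustration; no real gene is being modeled.

```python
import random

# Toy polygenic model: a trait score built from many loci of small effect plus
# a nongenetic term. Gene count and effect sizes are invented for illustration.
N_LOCI = 20        # hypothetical number of contributing loci
EFFECT = 1.0       # contribution of each "trait-promoting" variant

def trait_score(rng):
    # Each locus independently carries the trait-promoting variant with
    # probability 0.5; the Gaussian term stands in for everything nongenetic.
    genetic = sum(EFFECT for _ in range(N_LOCI) if rng.random() < 0.5)
    nongenetic = rng.gauss(0.0, 3.0)
    return genetic + nongenetic

rng = random.Random(1)
scores = [trait_score(rng) for _ in range(10_000)]
print(f"mean score: {sum(scores) / len(scores):.1f}  "
      f"min: {min(scores):.1f}  max: {max(scores):.1f}")
```

In a model like this no single locus determines the trait; the score emerges from the sum of many small genetic effects plus everything nongenetic.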
  The influence of maternal inheritance on male homosexuality makes the X
chromosome a reasonable place to look for one or more genes that could influ-
ence male sexual orientation. Dean Hamer and his colleagues at the National
Institutes of Health examined the DNA of a group of gay men and lesbians
who had at least one same-sex homosexual sibling, as well as a control group
of straight men and women. They did this by analyzing stretches of DNA at
roughly evenly spaced locations throughout the X chromosome. They found
that a particular region of this chromosome, called Xq28, showed a significant
tendency to differ between gay men and straight men, but no such difference was seen in lesbians. This
finding does not pinpoint a particular gene. Rather, it points to the likelihood
that one or more genes in this chromosomal region may vary in a way that con-
tributes to male homosexuality. More recently, this type of genetic scan was
performed on gay male DNA but instead of just looking at the X chromosome,
the experimenters analyzed markers spread across the whole genome (all 23
      chromosomes). Several additional “linkage sites” were found on chromosomes
      7, 8, and 10. It is important to note that as of this writing, no group has pub-
      lished a scientific paper replicating these linkage studies of the Hamer lab, an
      event that would go a long way toward validating this notion of particular ge-
      netic loci influencing homosexuality.
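The general logic of such linkage scans can be sketched with a toy example. With no linkage, a pair of brothers is expected to inherit the same maternal X-chromosome marker about half the time, so an excess of sharing at a marker is evidence that a nearby gene influences the trait. The Python sketch below illustrates that reasoning with made-up counts; it is an assumption about the style of analysis, not a reproduction of the Hamer group's actual statistics.

```python
from math import comb

# Toy version of the allele-sharing logic behind a linkage scan. With no
# linkage, brothers share a given maternal X-chromosome marker ~50% of the
# time; an excess of sharing points to a nearby influential gene. The counts
# below are hypothetical, not data from the Hamer study.

def binomial_tail(k, n, p=0.5):
    """Probability of seeing k or more successes out of n trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_pairs = 40       # hypothetical number of gay brother pairs genotyped
n_shared = 30      # hypothetical pairs sharing the marker

print(f"observed sharing: {n_shared / n_pairs:.0%}")
print(f"one-sided p-value against the 50% expectation: "
      f"{binomial_tail(n_shared, n_pairs):.4f}")
```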
        Genetic variation may not account for all of the biological component of
      sexual orientation. It’s possible that epigenetic developmental factors may also
      contribute. These might include the effects of maternal stress or immune sys-
      tem status during pregnancy and hormonal effects derived from siblings in
      utero. The latter seems to be an important factor in rats: female pups that are
      adjacent to male siblings in the womb sometimes are partly masculinized, both
      physically and behaviorally, as a result of testosterone circulating from their
      brothers in utero. Although this may also be an issue in human multiple preg-
      nancies, it is likely to be less dramatic because the maternal blood supply to the
      fetuses is more separated in humans, whereas in rats it is organized serially such
      that a particular pup is “downstream” from another.
        How might genetic or epigenetic developmental factors work to influence
sexual orientation? The basic hypothesis has been this: Gay men have brains
whose structure and function are, in some respects, like those of straight
      women. Conversely, lesbians have brains that are, in some respects, like those of
      straight men. So, one obvious test of this hypothesis is to look at those regions
      that were previously known to be structurally different in the brains of straight
      men and women. This is exactly what Simon LeVay of the Salk Institute did.
      He measured the volume of the hypothalamic nucleus INAH3 in postmortem
      tissue samples of straight and gay men as well as of straight women. The gay
      sample was entirely composed of men who had died of AIDS, and the straight
      sample was partly men who had died of AIDS (intravenous drug users) and
      partly men who had died of other causes, as well as women who had died of
other causes. Replicating previous work from another lab, LeVay found that
the volume of INAH3 was two to three times larger in the straight men than in
the straight women. The really interesting finding was that the average volume
of INAH3 in the gay men was similar to that of straight women: two to three
times smaller than that of straight men. These differences were not seen in adja-
cent hypothalamic nuclei that are not sexually dimorphic in straight people,
such as INAH 1, 2, and 4.
  Is it possible that the INAH3 was smaller in the gay male sample because
of AIDS, which is known to affect brain cells? This is unlikely because the
mean INAH3 volume of the AIDS-infected straight male sample was also sig-
nificantly larger than the INAH3 volume of the gay sample. In addition, after
his initial publication in 1991, LeVay was able to get brains from some gay men
who had died of causes other than AIDS, and he found the same differences as
in the previous group.
  Another study looked at the anterior commissure, which, you will recall, is
larger in straight women than in straight men. The cross-sectional area of this
bundle of axons connecting the right and left sides of the brain was measured
in postmortem brains from gay men, straight men, and straight women by
Laura Allen and Roger Gorski at UCLA. They found that gay men had anterior
commissures that were larger, on the average, than those of straight men and
even slightly larger than those of straight women.
  These anatomical results with INAH3 and the anterior commissure have re-
ceived a lot of attention, much of it overblown. Newspapers and magazines
around the world have rushed to declare that these data prove that “homosexu-
ality is genetic” or that “gay people are born that way.” Clearly, a correlational
study of adults cannot prove this sort of statement. Although these studies are
consistent with the notion that sexual orientation is, at least in part, biologi-
cally determined, we still don’t know the answer to the crucial question: What
      are the brains of gay people like at birth or shortly thereafter, before socio-
      cultural factors have a chance to make a major impact?
        If gay people are indeed “born that way,” then one might expect that the
      masculinization of the female brain and the feminization of the male brain will
      be evident in nonsexual behavior and physiology, starting early in life. One
      approach for assessing this idea is to interview people and their relatives and
      friends about their recollections of childhood to see if particular themes emerge
      in the homosexual population. This strategy is fraught with difficulty, however,
      because it relies on people’s memories and it is very difficult to exclude sam-
      pling bias. Nonetheless, it is interesting that gay men as a population tend to
      have strong recollections of effeminate behavior in early childhood. In fact, a
      study by James Weinrich and his coworkers at the University of California at
      San Diego showed that the strongest recollections of effeminate childhood be-
      havior in gay adults were found in that part of the gay male population that
      adopted the most female-typical role in their adult lives (for example, strongly
      preferring the receptive role in anal intercourse). This amplifies the point made
      earlier: Straight, gay, and bisexual are crude categories and their use in genetic,
      anatomical, and behavioral studies may mask some of the most interesting find-
      ings. One could imagine, for example, that “macho” gay men have a larger
      INAH3 and smaller anterior commissure than effeminate gay men or that
      “femme” lesbians have a smaller INAH3 and larger anterior commissure than
      “butch” lesbians. These subcategories of homosexual behavior are themselves
      quite crude, but I think they make the point: graded differences in certain brain
      structures may be manifest in part as subtle differences in sexual orientation.
        A much more powerful method than the retrospective interview is the pro-
      spective study, like that carried out by Richard Green of Imperial College School
      of Medicine, who identified boys with effeminate behaviors in the preschool
      years. When these boys were then tracked as they grew up, it was found that
greater than 60 percent of them became gay or bisexual adults. This is a remark-
able statistic when you consider that homosexual plus bisexual men constitute
only about 5 percent of the adult male population.
  Another prediction of a biological hypothesis for sexual orientation is that
manipulations that feminize male brains or masculinize female brains increase
the incidence of homosexual behavior. This seems to be the case in both hu-
man and animal studies. Male rats with lesions in the medial preoptic area (in-
cluding INAH3) often engage in female-typical sexual behaviors toward other
males, such as ear wiggling and lordosis, a posture that presents the genitals.
This effect can be further enhanced if the lesioned males are treated with estro-
gen. Similar effects can be produced by treatments that impede the action of
testosterone (castration at birth, drugs that interfere with testosterone recep-
tors). Most interestingly, subjecting a mother rat during pregnancy to moder-
ate stress (confinement in a clear plastic tube under bright lights) can reduce the
levels of testosterone in the developing fetus. When the male pups grow up,
their sexual behavior is feminized: they are reluctant to mount females and
themselves display female-typical sexual behavior. In other words, interfering
with testosterone in fetal or early postnatal life can make male rats “gay.”
  Not surprisingly, a complementary result is found in females exposed to
higher than usual levels of testosterone. The female offspring of rats or sheep
given testosterone-boosting treatments in utero tend to adopt more male-typi-
cal sexual behaviors (mounting, aggression). More important, women who suf-
fer from congenital adrenal hyperplasia, in which testosterone levels are ele-
vated starting in utero, have a much higher incidence of lesbianism than is
found in the general population.
  One of the bitterest issues in the ongoing debate about the biological basis of
sexual orientation has been whether gay men and lesbians can change their sex-
ual feelings and behavior to become straight. Some researchers, such as Rob-
      ert Spitzer of the New York State Psychiatric Institute, have published papers
      claiming that with certain forms of treatment this is possible in at least a frac-
      tion of the population (17 percent of the males in his sample reported achiev-
      ing “exclusively opposite sex attraction” after treatment). Others, including the
      major professional associations of clinical psychologists and psychiatrists, have
      derided these claims as politically motivated junk science, in part based on a
      critique of Spitzer’s sampling methods, in which subjects were recruited from
      ex-gay ministries. Although the advisability of offering treatment to change
      sexual orientation is itself an important moral and social question, whether or
      not some people can change their sexual behavior from gay to straight does not
      bear on the question of whether sexual orientation is, to some degree, biologi-
      cally determined. Left-handedness is almost certainly biologically determined
      and yet almost any leftie can be trained to be right-handed. Some Catholic
      priests and others, who have normal sexual drives, can subsume these feelings
      to completely refrain from sexual acts, in keeping with religious teachings. So,
      even if a small fraction of homosexuals can undertake a form of treatment that
      results in their adopting exclusively heterosexual behavior, this does not speak
      to the question of whether sexual orientation is, either fully or in part, deter-
      mined by biological factors at birth or shortly thereafter.
AT THIS POINT, the evidence from families, twin studies, gene linkage, neuro-
      anatomical analysis of postmortem tissue, and manipulations of sex hormones
      points to the conclusion that some portion of sexual orientation is biologically
determined. Whether this will turn out to be 30 percent or 90 percent of the
      variation in sexual orientation remains to be seen. Likewise, the relative contri-
      bution of genetic and epigenetic factors to this biological predisposition re-
      mains unclear. It is likely that, many years from now, when the smoke clears,
sexual orientation will have a story not very different than that emerging for
many other complex human behaviors: It will have some degree of sociocul-
tural and some degree of biological determination. The biological part will
have both genetic and epigenetic components and the genetic component will
reflect the action of multiple genes.
      Chapter Seven
      Sleeping and Dreaming
IT WAS 1952 AND the American military was getting panicky. Over 60 per-
      cent of the airmen captured by the Chinese army in the ongoing Korean War
      were confessing to bogus war crimes (such as the use of biological weapons) or
      were signing statements or recording messages renouncing the United States
      and embracing communism. These events created an enormous propaganda
      coup for the Chinese. The CIA and military intelligence specialists entertained
      a number of theories about the success of this effort, including the develop-
      ment of exotic “brainwashing drugs,” hypnosis, and exposure to mind-altering
      electric fields. The truth, revealed some years later, was much more prosaic:
      the Chinese were able to coerce these statements from their prisoners mainly
      through the use of beatings combined with prolonged sleep deprivation.
        This shouldn’t have been news. Throughout history it has been known that
sleep deprivation is an ideal form of torture. The ancient Romans employed
sleep deprivation extensively to interrogate and punish prisoners. It leaves no
physical trace and it does not result in permanent alteration of the victim’s men-
tal function: he or she is mostly back to normal after a good night’s sleep or two.
Indeed, of the thousands of American and United Nations prisoners of war in
Korea, almost none maintained their bogus confessions or denunciations after
their release. They weren’t “brainwashed” at all. Their fundamental belief sys-
tems and personality had not been permanently compromised. Rather, they
were temporarily rendered delusional, suggestible, and even psychotic through
sleep deprivation.
  In his autobiographical book White Nights, the Russian dissident Menachem
Begin, later to become the prime minister of Israel, describes his sleep-depriva-
tion treatment at the hands of the KGB.
    In the head of the interrogated prisoner, a haze begins to form. His spirit is
    wearied to death, his legs are unsteady, and he has one sole desire: to sleep . . .
    Anyone who has experienced this desire knows that not even hunger and
    thirst are comparable with it.
       I came across prisoners who signed what they were ordered to sign, only to
    get what the interrogator promised them.
       He did not promise them their liberty; he did not promise them food to
    sate themselves. He promised them—if they signed—uninterrupted sleep!
    And, having signed, there was nothing in the world that could move them to
    risk again such nights and such days.
  Begin’s description highlights the effectiveness, but also the limitations, of
sleep deprivation as a torture method. It is very effective as a coercive device,
but torturers cannot rely upon information gleaned while a prisoner is in a se-
      verely sleep-deprived state: people in this condition are often experiencing au-
      ditory and visual hallucinations as well as paranoia. They are likely to say any-
      thing if they believe they finally will be allowed to sleep. I should note that
      torture by sleep deprivation is a practice that is still in common use. Andrew
      Hogg of the Medical Foundation for the Care of Victims of Torture in the
      United Kingdom says, “It is such a standard form of torture that basically ev-
      erybody has used it at one time or another.” Here, “everybody” includes demo-
      cratic states such as the United States, the United Kingdom, India, and Israel,
      all of which have published recent guidelines for interrogation by the military
      and security services that allow for extreme sleep deprivation.
        How long can a human go without sleep? The world record is presently held
      by Randy Gardner, who, as a 17-year-old high school student, stayed awake for
11 straight days in 1964 just for the hell of it. He did this without the use of
      stimulant drugs. During this period, Gardner initially became moody, clumsy,
      and irritable. As time progressed he showed delusions (he said that he was a fa-
      mous professional football player), then visual hallucinations (he saw a path
      through a forest extending from his bedroom), paranoia, and a complete lack of
      mental focus. Remarkably, after a 15-hour sleep, almost all of these symptoms
      abated. Gardner appears to have suffered no lasting physical, cognitive, or emo-
      tional harm from the incident.
        A grisly set of experiments with rats showed that total sleep deprivation will
      cause death in 3–4 weeks. Although the exact cause of death was unknown,
      these animals suffered from skin lesions and a gradually failing immune system.
      This condition ultimately allowed for the colonization of the body by other-
      wise benign bacteria that are usually restricted to the digestive tract. Through-
      out this period there is a gradual buildup of the steroid hormone cortisol, a nat-
      ural immunosuppressant, and a gradual reduction in core body temperature.
      Human death from total sleep deprivation has not been reported in the scien-
tific literature. But there are indications of this from records of Nazi death camp
experiments during World War II as well as reports of executions by sleep de-
privation in China in the nineteenth century. These suggest that 3–4 weeks of
sleep deprivation will kill humans as well. Or, to put it another way, 4 weeks
without food may or may not kill you (depending upon your health, age, and
access to medical care) but 4 weeks without sleep will.
CLEARLY, BOTH RATS and humans need sleep to live. This raises the ques-
tion: what are the physiological functions of sleep that make it so important?
Amazingly, we don’t have a definitive answer to this simple question. One obvi-
ous idea is that sleep serves a restorative function for the entire body. Cellular
growth and repair functions involving gene expression and protein synthesis
seem to accelerate during sleep in both the brain and other tissues. But it is not
well established that people who are physically active sleep significantly more
than those who are confined to bed. Nor is it clear that a brief period of intense
exercise promotes longer total sleep time (although there are some small effects
on the time spent in various stages of sleep).
   It has been proposed that sleep functions to conserve energy. This may be
particularly relevant for warm-blooded animals (mammals and birds) that must
expend a lot of energy to maintain a body temperature higher than that of their
surroundings. Indeed, many small mammals living in cold climates, who lose
heat easily by having an unfavorable surface area to body weight ratio, tend
to sleep a lot, often in insulating burrows. Yet sleep does not appear to have
evolved only in warm-blooded animals. EEG recordings from reptiles and am-
phibians indicate that they also sleep, and there are now strong indications of a
sleep-like state in some invertebrates, such as crayfish, fruit flies, and honey
bees. Also, though it is true that the overall use of energy is reduced during
sleep, as compared with the active waking state, there is almost as much reduc-
      tion in energy use from just resting quietly. The additional energy conservation
      in going from the resting state to sleep is minimal. So, an explanation for sleep
      based on restoration and energy conservation is unlikely to be complete.
         One simple role of sleep might be to restrict an animal’s activity to those
      times when activity is productive—when the chance of finding food is high but
      the chance of becoming someone else’s food is low. For many species, including
      ours, this means sleeping at night. Others, such as many foraging rodents, bats,
and owls, do the opposite, but the principle is the same: they are trying to hunt
      for food but avoid predators. There is some evidence to support this model:
      mammals at the top of the food chain such as lions and jaguars tend to sleep a
      lot (as much as 12 hours a day) while those that graze in the open such as deer
      and antelope sleep much less. Some herbivorous animals such as ground squir-
      rels and sloths also sleep a lot (two-toed sloths sleep for 20 hours every day!),
      but these tend to be species that are mostly safe from predation during sleep be-
      cause of their sleeping location (in underground burrows or high in trees).
      Nonetheless, this explanation for sleep doesn’t seem entirely satisfying either.
      Perhaps if we look at the process of sleep in greater detail, more compelling
      ideas will emerge.
THE SCIENTIFIC STUDY of human sleep has a very strange beginning. In the
      nineteenth century, several investigators in France were very interested in the
      processes of sleep, but they never did the most simple, observational experi-
      ment: just stay up all night and make notes of how people’s bodies move over
      the course of a normal night’s sleep. Instead, these scientists spent their time
      trying to influence the dreams of their subjects. They would open a bottle of
      perfume under the sleeper’s nose or tickle him with a feather and then wake
      him up a few minutes later to see if they had influenced his dreams. Not much
      useful information came from this line of work, and up until the 1950s the
standard model of sleep was simple and wrong. It was held that sleep is a con-
stant, unchanging period of little body movement and low brain activity that
changes only upon waking.
  In 1952 Eugene Aserinsky was a graduate student in the laboratory of
Nathaniel Kleitman at the University of Chicago, where EEG recordings were
being made from adults as they fell asleep. These revealed that after falling
asleep, the EEG gradually changed from a desynchronized, low-voltage trace to
a high-voltage trace with slow, synchronized oscillations. At this point, it was
assumed that deep sleep had been achieved and this status would be maintained
until waking. The standard operating procedure was to record for 30–45 min-
utes to capture this transition and then turn the EEG recorder off to save chart
paper. One night Aserinsky brought his son Armond, 8 years old, into the lab
to be the subject. About 45 minutes after Armond had fallen asleep, his father
was watching the pens on the EEG chart recorder register the slow oscillations
of deep sleep. Then, amazingly, the EEG shifted to another rhythm that looked
more like waking even though Armond was still clearly sleeping and was totally
immobile. We now know that this stage of sleep is associated with rapid eye
movements (REMs) and that while it usually does not occur in adults un-
til about 90 minutes after falling asleep, in children, like Armond, it occurs
sooner.
  The report of these findings by Aserinsky and Kleitman in 1953 began the
modern era of sleep research, and in the following years a much more detailed
picture of sleep emerged. When scientists left their EEG machines on all night
(piling up enormous stacks of chart paper in the process), they found an adult
sleep cycle of about 90 minutes duration (Figure 7.1). This consisted of the
aforementioned gradual descent into deeper and deeper sleep accompanied by
gradual synchronization of the EEG. These stages of sleep are collectively called
non-REM sleep and they are further subdivided into four stages ranging from
         "XBLF
          3&.
        4UBHF*
        4UBHF**
       4UBHF***
       4UBHF*7
                                                                             
                                                  5JNF IPVST
           "XBLF
                         4UBHF*                                                     3&.
                                       4UBHF**
                                                       4UBHF***
                                                                    4UBHF*7
      Figure 7.1. The stages of adult human sleep. The top panel depicts a complete night’s
                  sleep with sleep stage on the vertical axis. This graph was made by analyz-
                  ing the EEG record to determine the sleep stage. It shows the main fea-
                  tures of a normal night’s sleep. There is a sleep cycle of approximately 90
                  minutes duration during which the sleeper gradually progresses from
                  drowsiness (stage I) into deep sleep (stage IV), followed by a period of
                  REM sleep. A typical night’s sleep might involve 4 or 5 of these cycles. As
                  the night progresses, a higher proportion of the sleep cycle is devoted to
                  REM sleep with a concomitant decrease in non-REM sleep (stages I–IV).
                  The bottom panel shows representative EEG records from each sleep
                  stage. Note that the EEG record for REM sleep is similar to that of the
                  waking or drowsy state. Adapted with the permission of Macmillan Pub-
                  lishers, Ltd., from E. F. Pace-Schott and J. A. Hobson, The neurobiology
                  of sleep: genetics, cellular physiology, and subcortical networks, Nature
                  Reviews Neuroscience 3:591–605 (2002). Joan M. K. Tycko, illustrator.
drowsy/nodding-off (stage I) to deep sleep (stage IV). A typical uninterrupted
night’s sleep will consist of 4 or 5 complete 90-minute-long cycles. What’s in-
teresting is that as the night wears on, the character of each sleep cycle changes
so that there is proportionally more REM and less non-REM sleep per cycle. In
the last period before waking, as much as 50 percent of the cycle may be de-
voted to REM sleep.
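The arithmetic of such a night can be sketched as a simple model. The 90-minute cycle length and the four-to-five-cycle night are as described above, but the exact REM percentages below are invented for illustration (the text says only that REM can grow to as much as half of the final cycle).

```python
# A simple model of the adult night described above: five 90-minute cycles,
# with the REM share of each cycle growing as the night goes on. The exact
# percentages are invented for illustration.
CYCLE_MIN = 90
N_CYCLES = 5

def rem_fraction(cycle):
    # Assume REM rises linearly from 10% of the first cycle to 50% of the last.
    return 0.10 + 0.40 * cycle / (N_CYCLES - 1)

total_rem = 0.0
for c in range(N_CYCLES):
    rem = CYCLE_MIN * rem_fraction(c)
    total_rem += rem
    print(f"cycle {c + 1}: {CYCLE_MIN - rem:4.0f} min non-REM, {rem:4.0f} min REM")

night = CYCLE_MIN * N_CYCLES
print(f"total: {total_rem:.0f} min REM out of {night} min asleep "
      f"({total_rem / night:.0%})")
```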
  It’s a testament to the occasional bone-headedness of scientists that sleep cy-
cles were not discovered before the 1950s. You don’t need an EEG recording to
detect them. Simple observation of a sleeper throughout the night will show
you the main features. The most obvious of these are the rapid side-to-side eye
movements that are easily seen even when the eyelids are closed (owing to the
bulge of the cornea indenting the eyelid). Careful observation would reveal a
host of other changes during REM sleep. These include an increase in breath-
ing rate (as well as heart rate and blood pressure) and a sexual response (penile
erection in men, erection of the nipples and clitoris together with vaginal lubri-
cation in women). Even more striking are changes in muscle tone. The typical
adult sleeper will change his or her position about 40 times a night without be-
ing conscious of this action. None of these motions, however, will occur during
REM sleep. In REM sleep, there is no movement at all. In fact, there is not even
any muscle tone: the body goes totally limp. It is almost impossible to have
REM sleep in anything other than a horizontal position. Remember this the
next time you are wrapped in an airline blanket and stuffed into your coach
class seat like a scrofulous burrito for a trans-Atlantic flight: even if you manage
to catch some sleep in your seat, you won’t be able to enter REM sleep.
  REM sleep is sometimes called “paradoxical sleep” because the EEG resem-
bles that of the waking state, yet the subject is essentially paralyzed. The story
here is that the motor centers of the brain are actively sending signals to the
muscles but these signals are blocked at the level of the brainstem by inhibitory
      synaptic drive. This blockade affects only the outflow of motor commands
      down the spinal cord, not those of the cranial nerves that exit the brainstem di-
      rectly to control eye and facial movements (as well as heart rate). Michel Jouvet
      of the University of Lyon showed that severing the inhibitory fibers that block
      motor outflow in cats resulted in a bizarre condition: during REM sleep the
      cats engaged in complex motor behaviors while keeping their eyes closed. They
      ran, pounced, and even seemed to eat their imagined prey. Although we can’t
      know this for certain, they appeared to be acting out their dreams (more on
      this soon). A similar phenomenon is seen in a human condition called REM
      sleep behavior disorder, which mostly affects men over 50. This disease causes
      dream-enacting behaviors during the REM period of sleep, including kicking,
      punching, jumping, or even running. Not surprisingly, these violent behaviors
      can often result in injury to the patient or to his or her bedmate. In most cases,
this disorder is successfully treated by a bedtime dose of the drug clonazepam
      (sold under the trade name Klonopin), which works by boosting the strength
      of synapses that use the inhibitory neurotransmitter GABA. REM sleep behav-
      ior disorder is different from conventional sleepwalking, which occurs only
      during non-REM sleep.
        Humans show changes in sleep over the life cycle, with the proportion of the
      time spent in REM sleep decreasing from about 50 percent at birth to 25 per-
      cent in mid-life and 15 percent among the elderly (a decrease in REM is also
      seen over the lifespans of cats, dogs, and rats). If we compare our sleep with that
      of other mammals, we find that we are more or less in the center of the range
      bounded by the duck-billed platypus, which spends about 60 percent of its
      sleeping life in REM, and the bottlenose dolphin, which has a REM propor-
      tion of only 2 percent. There is no obvious relationship between degree of
      REM sleep and brain size or structure across mammalian species (Figure 7.2).
      Non-REM sleep appears to have evolved as early as the fly (about 500 million
[Figure 7.2 artwork: low-REM-sleep species (bottlenose dolphin, Tursiops truncatus; horse, Equus caballus; Guinea baboon, Papio papio) on one side and high-REM-sleep species (platypus, Ornithorhynchus anatinus; ferret, Mustela nigripes; European hedgehog, Erinaceus europaeus) on the other, with the human, Homo sapiens, in the middle; hours of REM and total sleep per day are given for each species.]
figure 7.2. REM and total sleep in a gallery of representative mammals. Humans fall
            into the middle of the range when REM sleep is considered either as a raw
            value or as a proportion of total sleep. Adapted from J. M. Siegel, The
            REM sleep-memory consolidation hypothesis, Science 294:1058–1063
            (2001); copyright 2001 AAAS. Joan M. K. Tycko, illustrator.
      years ago), but true REM sleep is found only in warm-blooded species. It is
      present in the most primitive surviving mammals (such as the platypus and the
      echidna) as well as in birds, but appears to be absent in reptiles and amphibians.
        So, with knowledge of sleep cycles, we can return to our main question “Why
      is sleep necessary?” with a bit more sophistication. Really, two separate ques-
      tions are warranted: What are the key functions of sleep composed of only a
      non-REM period, as is found in reptiles and amphibians (and possibly some
      invertebrates as well)? And, what are the key functions of cycling sleep in which
      REM and non-REM periods alternate, as is found in mammals and birds? It
      may be that the previously mentioned ideas that sleep is required for restorative
      functions, energy conservation, and maximizing feeding efficiency while mini-
      mizing danger from predation are appropriate for non-REM sleep alone. Cy-
      cling sleep is serving some function that only emerges in mammals and birds
      and that is most important early in life. Let’s consider some hypotheses about
      what that function might be. One proposal is that cycling sleep serves a rather
      mundane function. It’s known that non-REM sleep tends to cool the brain, re-
      ducing its thermoregulatory set point, while REM sleep heats the brain up. Per-
      haps alternating bouts of REM and non-REM sleep prevent the brain from
      becoming too cool or too hot. This hypothesis is consistent with the first ap-
      pearance of cycling sleep in warm-blooded animals but it doesn’t explain either
      the variation in REM across mammalian species or the decrease in REM over
      the lifespan.
        Another idea is that cycling sleep somehow promotes the development of the
      brain in early life. In particular, cycling sleep may play a special role in the later,
      mostly postnatal, stages of development that require experience-driven plastic-
      ity. The experimental evidence in support of this idea comes from experiments
      in which kittens have one eye artificially closed for a brief period. This results,
      within a few hours, in a reduced excitation of neurons in the visual cortex by
stimuli (light pulses) delivered to the deprived eye and enhanced responses to
stimulation of the open eye. When kittens are allowed to sleep following a pe-
riod of monocular deprivation, this change in the responsiveness of cortical
neurons is retained and even enhanced. But when kittens were either totally
sleep-deprived or selectively deprived of non-REM sleep, the effects on cortical
neurons of the monocular-deprivation experience were lost. Conversely, in a
separate set of experiments, selective deprivation of REM sleep seemed to exag-
gerate the effects of monocular deprivation, producing even greater changes in
the responses of visual cortex neurons.
  If cycling sleep were only involved in the experience-dependent phase of
brain development, then there would be no need for it to continue into adult-
hood. One possibility is that it is retained in adulthood but no longer has a
function. But this is unlikely. Recall that the cellular mechanisms involved in
the experience-dependent phases of later brain development (plasticity ex-
pressed as growth of axons and dendrites, and changes in intrinsic excitability
and synaptic strength) are retained in the adult brain to store memories. Could
the same be true of the sleep cycle? Perhaps alternating periods of REM and
non-REM sleep initially serve to consolidate experience-driven changes in late
brain development and then remain in a slightly different form to integrate and
consolidate memory.
  A basic hypothesis of cycling sleep and memory has been nicely articulated
by Robert Stickgold of the Harvard Medical School, who writes “the unique
physiology of sleep and perhaps even more so, of REM sleep, shifts the brain/
mind into an altered state in which it pulls together disparate, often emotion-
ally charged and weakly associated memories into a narrative structure and . . .
this process of memory reactivation and association is, in fact, also a process of
memory consolidation and integration that enhances our ability to function in
the world.”
        A large number of studies in both humans and rats have shown that a normal
      night’s sleep following certain simple learning tasks results in improved perfor-
      mance when subjects are tested the next day. In most of these studies there is
      not an absolute requirement for sleep in order to consolidate memory. Some
      memory for the training experience is still present after 8 hours of wakefulness,
      and this effect is found whether the wakefulness occurs during the day or at
      night. But normal cycling sleep produces a noticeable improvement. In a way,
      these experiments prove something that is widely appreciated in folk traditions
      around the world: many cultures have a saying to the effect of “sleep on it and
      you’ll have a better understanding of the problem in the morning.”
        Anecdotal reports of sleep-inspired insight abound. Paul McCartney of the
      Beatles relates that the tune for the hit song “Yesterday” came to him when
      he awoke from a dream. The nineteenth-century German chemist Friedrich
      Kekulé claimed that he solved the ring structure of benzene after being inspired
      by a dream in which a snake was biting its tail. The American inventor Elias
      Howe reported that the main innovation allowing for the first sewing machine
      (placing the thread hole near the tip of the needle) came to him during sleep.
But do insight and revelation regularly result from sleep, or are these just coin-
cidences that have resulted in a few good stories?
        One interesting study of human learning and sleep deprivation comes from
      the laboratory of Jan Born at the University of Lübeck in Germany, where in-
      vestigators sought to test the notion that a night’s sleep can help yield insight
      into a previously intractable problem. To do this, a numerical problem was de-
      vised that could be solved by sequential application of simple rules. The experi-
      menters embedded within the problem a shortcut that, if appreciated, could al-
      low the subject to respond much more quickly than through the sequential-
      application method (see Figure 7.3 for the details of this task). None of the par-
      ticipants recognized the shortcut in the first block of trials. After a night’s sleep,
though, 13 of 22 subjects had the insight to recognize the shortcut, while, in a
different group of subjects, who were not allowed to sleep over a similar inter-
val, only 5 of 22 found the shortcut. The experimenters’ conclusion: sleep in-
spires insight.
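For readers who want to see exactly what the subjects were doing, here is a minimal Python version of the number-reduction task, using the sample trial from Figure 7.3. The two rules and the hidden mirror structure are as described in the figure caption; the code is an illustrative reimplementation, not the software used in the study.

```python
# A minimal reimplementation of the number-reduction task (see Figure 7.3).
# The example string is the sample trial shown in the figure.
DIGITS = {1, 4, 9}

def reduce_pair(a, b):
    """Same rule: two identical digits give that digit.
       Different rule: two different digits give the remaining third digit."""
    return a if a == b else (DIGITS - {a, b}).pop()

def responses(string):
    """Process the eight digits pairwise, left to right; the seventh
       response is the trial's final solution."""
    out = [reduce_pair(string[0], string[1])]
    for digit in string[2:]:
        out.append(reduce_pair(out[-1], digit))
    return out

trial = [1, 1, 4, 4, 9, 4, 9, 4]
r = responses(trial)
print("responses:", r)                       # [1, 9, 1, 4, 4, 1, 9]
# The hidden rule: the last three responses mirror the previous three, so the
# second response already equals the final solution.
print("second response:", r[1], "| final solution:", r[-1])
```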
  A large number of studies have sought to interfere with REM sleep by wak-
ing humans or lab animals when an EEG recording indicates that they have en-
tered a REM stage. Selective REM deprivation has been reported to interfere
with memory consolidation for a number of learning tasks. In some cases the
results have been dramatic: in one report, when humans were trained in a vi-
sual texture discrimination task, in which reaction time is taken as a measure
of learning, they showed no evidence of learning whatsoever after a REM-
deprived sleep but significant learning after either a normal sleep or sleep in
which non-REM periods were selectively disturbed. It’s important to note that
REM deprivation seems to interfere specifically with the consolidation of memo-
ries for rules, skills, procedures, and subconscious associations (nondeclarative
memory) but not memories of facts and events (declarative memory). Thus the
people who spent a REM-deprived night following visual texture discrimina-
tion training still had clear memories of the training session (an event) but did
not retain their quick reaction times in the task (a nondeclarative skill).
  The timing of REM sleep also appears to be important. REM sleep must oc-
cur within 24 hours of the training experience in order for it to improve mem-
ory consolidation. People who learn a new skill or procedure during the day
and then miss that night’s sleep will not show any improvement following sleep
on the second night. A similar effect is seen in rats, but the interval is reduced:
REM sleep must occur within 4–8 hours of training to have a beneficial effect.
  REM sleep also appears to be associated with “playback” of the previous day’s
memories. Kendall Louie and Matt Wilson of MIT used arrays of electrodes to
simultaneously record from large numbers of “place cells” (Figure 5.11) in the
[Figure 7.3 artwork: top panel, a sample trial of the number-reduction task using the string 1 1 4 4 9 4 9 4, with successive responses 1, 9, 1, 4, 4, 1, 9 (the seventh response is the final solution); bottom panel, a bar graph of the percentage of subjects gaining insight in the Wake-Day, Wake-Night, and Sleep conditions (p = 0.014).]
Figure 7.3. Sleep as a source of insight. Subjects were trained in a number-reduction
            task with a hidden rule and then had either intervening sleep, wakefulness
            during the day, or wakefulness during the night before being retested. The
            top panel illustrates a sample trial of the task. On each trial, a different
            string of eight digits was presented. Each string was composed of the dig-
            its 1, 4, and 9. For each string, subjects had to determine a digit defined as
            the “final solution” of the task trial (Fin). This could be achieved by se-
            quentially processing the digits pairwise from left to right according to
            two simple rules. One, the “same rule,” is that the result of two identical
            digits is just that digit (for example, 1 and 1 results in 1, as in response 1).
            The other, the “different rule,” states that the result of two nonidentical
            digits is the remaining third digit of this three-digit system (for example, 1
            and 4 results in 9, as in response 2). After the first response, comparisons
            are made between the preceding result and the next digit. The seventh re-
            sponse indicates the final solution, to be confirmed by pressing a separate
            key. Instructions to the subjects stated that only this final solution was to
            be communicated and this could be done at any time. It was not men-
            tioned to the subjects that the strings were generated in such a way that
            the last three responses always mirrored the previous three responses. This
            implies that in each trial the second response coincided with the final so-
            lution (arrow). Subjects who gained insight into this hidden rule abruptly
            cut short sequential responding by pressing the solution key immediately
            after the second response. The bottom panel shows the percentage of sub-
            jects who gained insight into the hidden rule following sleep versus two
            conditions of wakefulness. Reproduced with permission of Macmillan
            Publishers, Ltd., from U. Wagner, S. Gais, H. Haider, R. Verleger, and J.
            Born, Sleep inspires insight, Nature 427:352–355 (2004).
      hippocampus of rats as they repeatedly ran a unidirectional path in a circular
      track to obtain a food reward. The experimenters were able to see sequential ac-
      tivation of place cells coding for various locations on the circular track as the
      animal ran. Then recordings were continued as the animal slept after training.
      Amazingly, these same patterns of hippocampal place cell activation were re-
      played during REM sleep. The replay wasn’t a perfect spike-for-spike repro-
      duction of the waking activity. Sometimes the pattern was a bit degraded and
      sometimes the pattern was recognizable from the waking experience, but the
      overall speed of the activity had changed. Nonetheless, this study, and several
      others like it from different laboratories, have found statistically significant re-
      activation of neuronal ensemble activity during REM sleep following training.
      Was the replay of activity in Louie and Wilson’s rats important for consolidat-
      ing memory of the circular track? If so, what aspects of the experience? Were the
      rats dreaming of the circular track when the replay activity was recorded during
      REM sleep? We don’t yet know the answer to these questions.
        One might be tempted to conclude from this line of evidence that the rela-
      tionship between REM sleep and memory consolidation is fairly solid. But a
      bit more investigation will reveal some cracks in the façade. For example, subse-
      quent experiments on both rats and humans have shown that selective depriva-
      tion of non-REM sleep can also have deleterious effects on consolidation of
      some nondeclarative memory tasks, although these tend to be smaller than
      those achieved by selective REM sleep deprivation. In addition, a recent report
      indicates that the “playback” of neuronal firing patterns following novel experi-
      ence in the rat is actually stronger in deep non-REM sleep (stages III and IV)
      than it is in REM sleep. Most important, it is almost impossible to produce
      REM sleep deprivation without also causing stress and the accompanying rise
      in circulating stress hormones. We know that stress can impair learning in both
humans and rats and that both stress and artificial administration of stress hor-
mones can interfere with synaptic and morphological plasticity in rat brains.
  Finally, there is a strong prediction of the REM sleep and memory consolida-
tion hypothesis that has not been borne out. Modern antidepressant drugs, in-
cluding the serotonin-specific reuptake inhibitors (SSRIs, such as Prozac and
its kin) and tricyclic antidepressants (such as Elavil), produce a partial reduc-
tion of REM sleep. But an earlier class of antidepressants, the monoamine oxi-
dase inhibitors, such as phenelzine (Nardil), produce a complete blockade of
REM sleep. A similar effect is seen with certain forms of traumatic brainstem
damage, yet neither of these conditions, both of which completely block REM sleep
(and do so without stress hormone surges), seems to produce significant
impairment of memory. Conversely, the benzodiazepine class of anti-anxiety
drugs (including Valium, Xanax, and Versed) have strong memory-blocking ef-
fects, yet leave sleep cycles unperturbed.
  So, what are we to conclude? The evidence that cycling sleep has some role
in the consolidation and integration of memory is fairly good. The notion
that REM sleep has a privileged part in this process is somewhat weaker. My
own guess is that a holistic explanation is more accurate: it’s likely that some-
thing about the cycling between REM and non-REM stages throughout the
night is particularly beneficial in memory consolidation and integration. Some
theoretical models, involving alternating unidirectional flow of information
between the hippocampus and the cerebral cortex, suggest why this might be,
but I won’t go into those details (interested readers are encouraged to check the
Further Reading and Resources section).
  So what’s special about sleep? Perhaps the type of integration and cross-
referencing that sleep allows is somehow different from that of the waking
state. One might imagine that the reduction of external sensation during sleep
      [Figure 7.4 plots successive days against time of day (midnight through the
      following noon); the switch to constant light or constant darkness is marked
      partway through, and the panel labels read "Awake" and "Asleep."]
      figure 7.4. Changes in the human daily sleep-wake cycle in the absence of external
                  cues. The cycle persists in the absence of cues from alternating light and
                  darkness, but becomes gradually desynchronized to the external world. In
                  this diagram hollow bars represent waking and filled bars indicate sleep.
                   Joan M. K. Tycko, illustrator.
      allows for associations between more distant and fluid aspects of memory
      that would be impossible during waking sensory bombardment. Let’s keep this
      thought in the back of our minds and return to it shortly when we consider
      dreams.
      TO THIS POINT,   I have discussed the sleep-wake cycle and the stages of sleep
      without reference to the brain circuitry and molecular events underlying them.
Let’s move in that direction by asking a very fundamental question: do daily cy-
cles of activity such as the sleep-wake cycle require a sort of clock within the
brain, or is this behavioral rhythm solely driven by external cues, such as those
from sunlight? Figure 7.4 shows what happens when someone who has been
living in normal conditions for 10 days, with light and dark cues, is placed into
conditions where these cues are no longer present (either constant light or con-
stant darkness). The basic daily rhythm of sleeping and waking persists with a
near 24-hour-long cycle (about 24.2 hours on average), but this cycle becomes
gradually desynchronized from the clock of the external world, and the time of
sleep onset slowly shifts later and later. This indicates that there is indeed a
clock within the brain but that it requires information to remain synchronized
to the outside world.
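  For readers who like to see the arithmetic spelled out, here is a minimal sketch
(in Python) of how that drift accumulates; the 24.2-hour intrinsic period and the
11 p.m. starting bedtime are illustrative assumptions, not data taken from the figure.

      # Minimal sketch: drift of a free-running circadian clock relative to local time.
      # The intrinsic period (24.2 h) and initial sleep onset (11 p.m.) are assumptions
      # chosen only to illustrate the arithmetic described in the text.
      INTRINSIC_PERIOD_H = 24.2        # free-running period of the internal clock
      EXTERNAL_DAY_H = 24.0            # length of the external (solar) day
      DRIFT_PER_DAY_H = INTRINSIC_PERIOD_H - EXTERNAL_DAY_H   # onset slips ~0.2 h/day

      def sleep_onset(day, start_hour=23.0):
          """Clock time (hours, 0-24) of sleep onset on a given day in constant conditions."""
          return (start_hour + day * DRIFT_PER_DAY_H) % 24.0

      for day in (0, 7, 14, 60):
          onset = sleep_onset(day)
          print(f"day {day:2d}: sleep onset ~ {int(onset):02d}:{int((onset % 1) * 60):02d}")
      # A 0.2-hour slip is invisible on any single day, but with no light cue to reset
      # the clock it compounds: after about two months the internal night falls in the
      # middle of the external day.

The point of the sketch is only that a small daily mismatch, left uncorrected, steadily
pushes sleep onset around the clock.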
  It turns out that a tiny structure within the hypothalamus called the supra-
chiasmatic nucleus (that means “above the place where the optic nerves cross”
and is abbreviated SCN) is the body’s master timekeeper. This cluster of about
20,000 neurons has a natural rhythm of activity that continues even if you re-
move it surgically (from a hamster, for example) and grow it in a lab dish filled
with nutrient fluids. The period of this rhythm is approximately, but not exactly,
24 hours, hence its name, the circadian clock (from the Latin circa = approximately
and dies = day). Animals that sustain damage to the SCN no longer have normal
sleep-wake cycles. Rather, they have brief periods of sleep and waking distrib-
uted randomly throughout the day and night.
  Light synchronizes the timing of the internal circadian clock with the external
world mostly through a special set of neurons in the retina. These
are not the rods and cones that form the visual image, but rather a group of
large, spindly cells called melanopsin-positive ganglion cells. These cells send
their axons to the SCN to give information about the ambient light level. Sig-
nificantly, not only are melanopsin-positive ganglion cells stimulated by strong
      figure 7.5. A rendition of Carl von Linné’s flower clock, which uses the opening and
                  closing times of European flowers to estimate time of day. Joan M. K. Tycko,
                   illustrator.
      sunlight, but they can also be activated by relatively weak artificial lighting.
      Therefore, when you stay up late under artificial light you are trying to force
      your internal circadian clock into a 25- or 26-hour period. The result: morning
      grogginess. The degree to which light can shift the internal circadian clock is
      limited to about a 1-hour shift per day. So, if you make a flight across 5 time
zones you are likely to need about 5 days for your internal clock to reset to the
new local time. The result, as you well know: jet lag.
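  The same back-of-the-envelope reasoning covers jet lag. A minimal sketch, assuming
the roughly 1-hour-per-day limit on resetting described above (actual rates vary with
the direction of travel, light exposure, and the individual):

      import math

      # Rough estimate of re-entrainment time after crossing time zones, assuming the
      # internal clock can shift by at most about 1 hour per day (an approximation).
      MAX_SHIFT_PER_DAY_H = 1.0

      def days_to_resynchronize(time_zones_crossed):
          """Approximate days for the circadian clock to catch up with new local time."""
          return math.ceil(abs(time_zones_crossed) / MAX_SHIFT_PER_DAY_H)

      print(days_to_resynchronize(5))   # a 5-time-zone flight: about 5 days of jet lag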
  Is the circadian clock solely a device to drive the sleep-wake cycle? After all,
many organisms have functions that are coordinated to the time of day but are
independent of sleeping. Even many plants open or close their flowers at partic-
ular times of the day (Figure 7.5). This was noted by the Roman philosopher
Pliny the Elder, writing in the first century a.d., and was elaborated by the
eighteenth-century Swedish naturalist Carl von Linné, who proposed that it
would be possible to create an accurate clock by planting a flower garden with
carefully calibrated opening and closing times. It turns out that the basic bio-
chemical scheme of the circadian clock found in the human SCN can also be
found in lower animals, plants, and even fungi. Clearly, the ability to coordi-
nate biological processes with the light-dark cycle is an important function that
is likely to have predated sleeping animals by a billion years. It’s most likely
that circadian clocks evolved independently, at least twice: fungi have circadian
clock genes that are related to ours, but cyanobacteria (as well as archaea and
proteobacteria) have a set of unrelated molecules that nonetheless perform sim-
ilar functions. Interestingly, these ancient bacteria are likely to have developed
their circadian clock about 3.5 billion years ago when the Earth’s rotation pe-
riod was only about 15 hours (this is an estimate).
  What is it that originally drove the evolution of the circadian clock? We don’t
know the answer to this question and several hypotheses have been put for-
ward. One appealing idea, formulated by Colin Pittendrigh in the 1960s, is
called the “escape from light” hypothesis. Pittendrigh and others noticed that
several species of unicellular algae underwent replication of their DNA and
subsequent cell division only during the night. It was known that dividing
cells can be killed by the ultraviolet radiation present in daylight. Hence, Pit-
tendrigh suggested that circadian rhythms evolved as an escape from light: to
      allow sensitive cellular processes to occur in darkness. Recently, Selene Nikaido
      and Carl Johnson of Vanderbilt University put this to the test: They showed
      that the unicellular alga Chlamydomonas reinhardtii survives exposure to a pulse
      of ultraviolet light best during the day, when cell division ceases. When lab
      dishes of Chlamydomonas were placed in constant light conditions, they had a
      persistent circadian cycle of cell division that gradually became desynchronized
      with the outside world, just like the sleep-wake cycle of humans kept in con-
      stant light.
      ALTHOUGH RECENT YEARS   have seen an explosion of knowledge about the mo-
      lecular basis of the circadian clock in the SCN and about the brain circuitry in-
      volved in the onset and various stages of sleep, the way in which the SCN affects
      sleep-control circuits is still poorly understood. Axons from the neurons of
      the SCN make synapses in several adjacent regions of the hypothalamus that
      in turn project to brainstem and thalamic structures. In addition, the SCN,
      through a complex circuit of at least three synaptic relays, stimulates the pineal
      gland to secrete the hormone melatonin. The levels of melatonin, widely sold
      in health food stores as a sort of “natural sleeping pill,” increase with nightfall
      and peak at about 3:00 a.m. Melatonin diffuses throughout the body, but has
      its major effect on sleep-control circuits in the brainstem.
         One of the main circuits in the brain that affects the control of sleep is called
      the brainstem reticular activating system. These neurons, which use the trans-
      mitter acetylcholine (and are hence called cholinergic neurons), send their ax-
      ons to sites in the thalamus, where they modulate the transmission of informa-
      tion between the thalamus and the cortex. Reticular cholinergic neurons are
      active during waking but gradually become less and less active as non-REM
      sleep progresses to deeper stages. Indeed, artificial electrical stimulation of the
      reticular activating system will wake an animal from sleep while stimulation
of its targets in the thalamus will have the opposite effect: it will induce deep
non-REM sleep in a previously awake animal. When the transition from non-
REM to REM sleep begins, the brainstem cholinergic neurons begin firing
rapidly again, and this causes the EEG record to shift from the large-amplitude,
synchronized state to the small-amplitude, desynchronized state that’s typi-
cal of both REM sleep and waking. Why doesn’t the animal just wake up at
this point, instead of staying in REM sleep? The answer is that other brain-
stem systems, the serotonin-containing neurons of the dorsal raphe and the
noradrenaline-containing neurons of the locus coeruleus, are also involved in
sleep-cycle control, and these neurons are inactive in both REM and non-REM
sleep. The interaction of these three brain regions (together with some oth-
ers that play a smaller role) determines how sleep stages progress through the
night. The large number of neurotransmitter systems involved in sleep-cycle
control means that a variety of drugs can affect sleep, producing either a desired
effect (such as sleep through the use of drugs that interfere with acetylcholine
receptors) or an unwanted side effect (such as the REM sleep–inhibiting prop-
erties of many antidepressants that boost serotonin).
EVERYONE LIKES TO   talk about dreams. The thing about dreams is that they
feel inherently meaningful. In every culture studied to date, people have elabo-
rate ideas about the meaning and causes of dreams. In many cases, dreams are
thought to be messages from divine beings or ancestors that can provide guid-
ance or foretell the future—the Judeo-Christian Bible, the Islamic Koran, and
the sacred texts of Buddhism and Hinduism all contain stories of prophetic
dreams. Dreams can also be thought to represent “soul-travel” to distant loca-
tions. If you believe that dreams are meaningful, you can hold that their mean-
ings are either rather straightforward, reflecting prior events and concerns, or
are occluded and symbolic, requiring interpretation. The ancient Egyptians, by
      about 1500 b.c., had elaborate temples that were specifically built for dream
      interpretation by trained priests. Manuscripts survive from this time that cata-
      logue the meanings of various dream elements. Most of these are couched in
      terms of prophecy (“if you dream of crows, then a death will soon come to a
      loved one”).
        Many years later, Sigmund Freud, the father of psychoanalysis, would elabo-
      rate a related theory in his famous 1900 volume entitled The Interpretation of
      Dreams. In Freud’s view, dreams arise from subconscious wishes, mostly of a
      sexual or aggressive nature, that the conscious mind suppresses during the day.
      But if these subconscious wishes were manifest in dreams in a straightforward
      fashion, then the dreamer would be awakened by these forbidden desires. So,
      instead, dreams are symbolic reflections of the dreamer’s suppressed subcon-
      scious wishes. Thus, in Freud’s view, a dream of flying represents displaced sex-
      ual desire, and a man’s dream of teeth falling out represents a fear of castra-
      tion (it’s unclear what such a dream would mean for a woman). In many ways
      the practices of ancient Egyptian dream priests and those of present-day post-
      Freudian psychoanalysts are not dissimilar. They have different goals in that the
      former are concerned with predicting the future while the latter seek to illumi-
      nate past and present events and motivations. But both rely, more or less, upon
      a dictionary of symbols to guide dream interpretation.
        There is no question that dreams feel meaningful and symbolic. Indeed, vari-
      ous symbolic dictionaries for dream interpretation (in the basic form of “if you
      dream of X then it means Y”) are sold by the tens of thousands every year. Al-
      though dream interpretation is a phenomenon that is broadly cross-cultural, it
      is not accepted by all. There are those, mostly a subset of neurobiologists, who
      hold that the content of dreams has no meaning whatsoever. In their view
      dreams are merely the byproduct of some other important process, such as
      memory consolidation. Dreams are the smoke and not the fire, so to speak.
Let’s try our best to address this contentious issue systematically. First, we’ll
consider some ideas about how patterns of activity in the brain might give rise
to dreams. Then, we’ll talk about the possible function or purpose of dreaming,
and finally we’ll attempt to ask whether the content of dreams is meaningful.
  You know from your own experience that some mornings you may awaken
with no recollection of any dreams at all, while at other times the night seems to
be crowded with them. In general, unless you awaken during or within a few
minutes of the end of a dream, you are unlikely to recall it. For many years it
was thought that dreaming only occurred during REM sleep. Now we know
that dreams can be reported following awakening from any stage of sleep but
that their character, duration, and frequency vary with different sleep stages.
Let’s illustrate this with some examples from my own dream journal.
       Dream 1: Shortly after falling asleep, I had the sensation of swimming un-
    derwater, as I did with my kids at the neighborhood pool yesterday.
       Dream 2: I couldn’t get anything done on my grant application today, and
    was plagued through the night with worry that I couldn’t finish it before the
    deadline.
       Dream 3: I am waltzing with a beautiful woman in a vast space. The woman
    is not someone I recognize but she seems to know me well. In some respects
    the room where we’re dancing is like a large ballroom, but it’s also like a shop
    in my home town that I visited frequently as a teenager. This shop sold musi-
    cal instruments, including many unusual ones from foreign countries. My
    dancing partner is beaming at me, but I’m distracted by the instruments in
    the cases, which are complex and inviting. I long to go tinker with them, but
    I’m aware that my dancing partner is getting annoyed that I’m not paying
    enough attention to her. She grows more and more upset as she senses my dis-
    traction. Soon, she’s furious and I’m running from her and the scene has
          changed to a long, hot road. I jump on a bicycle and pedal quickly, which al-
          lows me to pull away from her pursuit. I can no longer see her in the road be-
          hind me. However, after a minute or so, the road grows bumpy and I realize
          that I’m riding over live snakes. As I pedal, the snakes snap at my feet each
          time they reach the lowest point in the pedal’s revolution, so I put my feet up
          on the crossbar of the bike to avoid being bitten. Of course, I gradually lose
          speed and I realize that very soon, without forward momentum, I will lose my
          balance and fall into the snakes that now cover the road like a carpet.
        Heaven knows what a psychoanalyst (such as my father!) would make of all
      this (is a snake, sometimes, just a snake?). These dreams are very different, but
      they do share two common features: I am the main character and they occur in
      the present. This is a general feature: the vast majority of dreams are “present-
      tense, first-person” experiences. Dream 1 is a typical dream from the period
      shortly after sleep onset. It is brief, and while it has a strong sensory component,
      this does not progress to form a continuing narrative. It is a scene fragment
      without much detail and without any particular emotional tone. It is logical,
      congruent with waking experience, and does not have hallucinatory properties.
      Significantly, sleep-onset dreams are very likely to incorporate experiences from
      the previous day’s events. In one study, Robert Stickgold and his coworkers
      from Harvard Medical School had subjects play the video game “Downhill
      Racer II” for several hours. In the following night’s sleep, more than 90 percent
      of the subjects reported scenes from this game, but only when they were awak-
      ened shortly after sleep onset, not in the middle or late parts of the night when
      deep non-REM (stages III-IV) and REM sleep predominate.
        Dream 2 is a typical dream from deeper, non-REM sleep, particularly as
      would be found in the first half of the night. Like Dream 1, it lacks an unfold-
      ing story, but in this case, it almost completely lacks sensory experience. Ba-
sically, it’s just an obsessive, emotion-laden anxious thought. The thought is
logical and grounded in waking experience, but it does not trigger any form of
narrative.
  Dream 3 is typical of REM sleep, particularly REM episodes that occur shortly
before waking. It is a narrative dream that unfolds in a story-like fashion and is
rich in detail. The dream fuses together disparate locations, some specific (the
music store of my youth) and others generic (a fancy ballroom I don’t recog-
nize). It incorporates elements of fantasy: in real life, I can’t waltz for beans, but
in the dream I do it flawlessly and without effort. There is a sense of continuous
motion throughout the dream (waltzing, running, cycling). The dream nar-
rative incorporates scene changes (from the ballroom to the road) and other
events and locations that don’t make sense, and yet, in the dream, I accept these
phenomena as the natural course of things. There is a suspension of disbelief
about otherwise illogical or bizarre experiences. There are many hallucinatory
aspects to this dream but they are almost exclusively visual (as opposed to audi-
tory or tactile). Finally, there is a growing sense of anxiety and fear that builds
throughout the dream, starting with the mild social anxiety of offending my
dancing partner and culminating with the acute fear of a horrible death by
snakebite.
  Narrative, emotion-laden dreams with illogical and bizarre scenes are the
kinds of dreams we are most likely to remember and discuss, partly because
they make for good stories, but also because of the structure of the sleep cycle:
you are most likely to awaken, and therefore remember your dream, toward the
end of the night’s sleep when REM predominates. This type of dream is most
frequent during REM, but we have recent evidence that people awakened from
non-REM sleep during the last third of the night can sometimes recall similar
narrative dreams.
  There have now been many large studies in which people have kept dream
      journals (either written or audio), and a much smaller number of studies in
      which people in a sleep lab or wearing a home EEG recording unit are awakened
      during various sleep stages to provide dream reports. What becomes clear from
      these studies is that, in general, dream content is strongly biased toward neg-
      ative emotional states. Fear, anxiety, and aggression are the dominant emotions
      in about 70 percent of dreams recorded in dream journals. Only about 15 per-
      cent of these dreams are clearly emotionally positive. These results seem gener-
      ally to hold cross-culturally: dreams of being chased are the most common sin-
      gle theme found around the world, from Amazonian hunter-gatherers to urban
      dwellers in Europe. Interestingly, the proportion of dreams with prominent
      anxiety, fear, and aggression is greater in dream journals that rely upon sponta-
      neous waking than it is in situations where people are awakened artificially in
      the last third of the night (reduced from 70 percent to about 50 percent). One
      interpretation of this disparity is that dreams with negative emotions are more
      likely to awaken the sleeper, who will then remember and record them.
        Given the preponderance of sexual dream interpretations by Freud, it is in-
      teresting that less than 10 percent of dreams appear to have overtly sexual con-
      tent. This is similar in men and women. The previously mentioned male and
      female genital responses that occur during REM sleep do not seem to be corre-
      lated with sexual dreaming.
        Elements of the previous day’s activity, particularly those with a strong sen-
      sorimotor component, are often incorporated into brief sleep-onset dreams,
      but seldom in narrative dreams. In one study, less than 2 percent of narrative
      dreams contained autobiographical memory replay of an event from the previ-
      ous day (although more incorporated a single aspect of the day’s experience
      such as a person or a location). Some researchers have claimed that there is a
      time-lag effect in which experiences are most likely to appear in dreams 3 to 7
nights later. Counterintuitively, highly emotional experiences during the day
seem to require a slightly longer time before showing up in dreams.
  So, let’s summarize the differences between the waking state and narrative
dreaming. Compared to the waking state, narrative dreaming
    incorporates bizarre aspects, including fusions and abrupt changes of loca-
    tions and individuals, violation of physical laws, and so on;
    is characterized by a lack of internal reflection and an acceptance of illogical
    events;
    often involves a heightened sense of motion, predominantly conveyed visu-
    ally;
    has a higher incidence of negative emotion than waking life, particularly
    anxiety and fear;
    incorporates older memories to a greater degree than new ones;
    is rapidly forgotten unless interrupted by waking.
  In recent years, a number of studies have used scanners to measure brain ac-
tivity in people during various stages of sleep. Let’s examine these findings with
an eye to whether they can help explain some of the characteristics of narrative
dreaming listed above. Although narrative dreams can occur during either deep
non-REM sleep or REM sleep, they seem to predominate in the latter, so we’ll
use the REM sleep stage brain as our template for a physiological analysis of
narrative dreaming. Figure 7.6 shows a simplified summary of changes in brain
activity during REM sleep as compared to restful waking.
      [Figure 7.6: labeled regions include the pontine tegmentum, amygdala, anterior
      cingulate, and parahippocampal cortex (activated in REM sleep), and the dorso-
      lateral prefrontal cortex and occipital/primary visual cortex (deactivated in
      REM sleep).]
      figure 7.6. Some brain regions that show altered electrical function in REM sleep, as
                  determined by PET scans. This figure is not meant to be complete. For
                  example, in addition to the amygdala and anterior cingulate, adjacent
                  parts of emotional circuitry are also activated during REM sleep, includ-
                  ing the septal area and infralimbic cortex. Adapted from J. A. Hobson and
                  E. F. Pace-Schott, The cognitive neuroscience of sleep: neuronal systems,
                  consciousness, and learning, Nature Reviews Neuroscience 3:679–693
                  (2002). Joan M. K. Tycko, illustrator.
        We knew from previous work in animal models that the brainstem reticular
      activating system is strongly active during REM sleep, and the activity of these
      cholinergic neurons (in a place called the pontine tegmentum) can be seen in
      PET scan images. One of the most striking features of the brain scans is that
      while narrative dreams are intensely visual, the primary visual cortex is almost
      completely silent during REM sleep. But areas involved in the higher-level
      analysis of visual scenes and the storage of visual and cross-modal memories
(such as the parahippocampal cortex) are strongly activated. This may help ex-
plain why dreams are often constructed from fragments of disparate memories,
mostly long-term visual memories stored in these visual association areas.
  Another striking feature of the brain in REM sleep is strong activation of re-
gions subserving emotion. In particular, the amygdala and anterior cingulate
are strongly activated; these regions appear to play a central role in fear,
anxiety, and the emotional aspects of pain, as well as in responses to fearful and
painful stimuli. This may underlie the prevalence of fear, anxiety, and aggres-
sion in the emotional tone of narrative dreams. Finally, portions of the pre-
frontal cortex, in particular the dorsolateral prefrontal cortex, are deactivated in
REM sleep. This is a crucial part of the brain for executive functions (judg-
ment, logic, planning) and working memory. Its deactivation may help ex-
plain the illogical character of dreams and the dreamer’s acceptance of bizarre
and improbable circumstances and plotlines. Essentially, reduced dorsolateral
prefrontal activation could contribute strongly to the hallucinatory properties
of dreams. In this sense, it is worth mentioning that deactivation of this region
is a hallmark of hallucinating schizophrenics (who, in a limited sense, have
dreamlike experiences while awake).
  Brain scanning with PET is a technique that gives information about the av-
erage activation of brain regions. It is very useful, but it does not convey de-
tailed information about either the exact location of individual firing neurons
or the fine temporal structure of that activity. These parameters are both critical
to understanding the way information is being processed in the brain during
narrative dreaming. Animal experiments using implanted recording electrodes
have shown that during REM sleep the noradrenaline-containing neurons of
the locus coeruleus and the serotonin-containing neurons of the dorsal raphe
fall silent, while the acetylcholine neurons of the brainstem reticular activating
system fire strongly. The neurons of these three modulatory systems have axons
      that project widely throughout the brain, including the thalamus, limbic sys-
      tem, and cortex. Thus some of the regional activity changes during REM sleep,
      as reflected in brain scanning studies, result from turning up synaptic drive
      that uses acetylcholine together with turning down that which uses noradren-
      aline and serotonin.
         The increased cholinergic drive also ultimately results in the limp muscle pa-
      ralysis that characterizes REM sleep. During narrative dreams, the motor cor-
      tex and other movement control structures such as the basal ganglia and cere-
      bellum are issuing commands to cause movements, but these commands are
      blocked from entering the spinal cord by an inhibitory circuit triggered by
      strong cholinergic drive in the brainstem. This may underlie the continual and
      effortless sense of movement (including flight) that is so prevalent in the experi-
      ence of narrative dreams: the commands for movement are being issued but the
      feedback from the muscles and other sensory organs about how those move-
      ments are progressing is no longer present to ground the perception of move-
      ment in reality.
      ALTHOUGH THE STORY   is far from complete, we can certainly say that the pat-
      tern of brain activity during narrative dreaming can explain many of the un-
      usual features of dream content. This level of explanation does not, however,
      address either the purpose of dreams or the question of whether dream con-
      tent is meaningful. So, why do we dream? The short answer, sadly, is that we
      don’t really know. The long answer, however, suggests some avenues for investi-
      gation.
         If you ask a cross section of sleep researchers why we dream, you tend to
      get answers that reflect each researcher's area of interest. In this fashion, scien-
      tists whose primary interest is emotion will tell you that the main function of
      dreams is to regulate mood. For example, Rosalind Cartwright, of Rush Presby-
terian–St. Luke’s Medical Center in Chicago, theorizes that dreams function as
mood regulators, to allow us to process negative emotions, so we wake up feel-
ing better than we did when we went to sleep. Some psychiatrists say that
dreams are like a kind of psychotherapy. Ernest Hartmann of Tufts University
has proposed that both dreams and psychotherapy largely function by allowing
connections between life events to be made in a safe, insulated environment,
away from the outside world.
  Biologists with an interest in evolution have proposed that dreaming has de-
veloped as a time to rehearse and perfect behaviors that are crucial to survival
during waking hours. In this view, dreams function as a kind of virtual-reality environment to
simulate life-threatening scenarios in a safe place. In a way, this explanation is
not too different from that offered by Hartmann. Both seek to explain the cen-
tral role of fear and anxiety in dream reports, and both imagine dreaming as a
protected environment in which to accomplish important mental tasks.
  And, of course, I’ve already discussed the idea that cycling sleep is impor-
tant for the consolidation, integration, and cross-referencing of memory, so it is
a small leap to imagine that dreaming is somehow related to these memory
processes. One interesting twist on this comes from Jonathan Winson of Rock-
efeller University, who thinks of dreams as “off-line memory processing.” In his
view, the computational resources needed to integrate experience into memory,
if operative only during waking, would require an even larger and ultimately
untenable volume of cortex than we already have. So, in order to make the best
of the brain volume we have, we run the night shift, so to speak, continuing the
process of memory consolidation and integration around the clock, like a war-
time munitions factory.
  In considering the merits of these models for dream function we should keep
several things in mind. First, these models are not necessarily mutually exclu-
sive: for example, dreams could function both as regulators of mood and as a
      part of memory consolidation. Second, we need to be careful to make some im-
      portant distinctions between levels of analysis in dreams. On one level are the
      underlying processes that occur in the brain during the dreaming state. Then
      there is the experience of the dreaming state while it is happening, and, finally,
      the report of the dream that will only occur for those dreams that are inter-
      rupted or that are followed very quickly by waking.
        In my view, each of the models for dream function has some strengths and
      weaknesses. The psychiatric explanations of dreams as mood regulators or as
      night therapy provide a plausible rationale for the prevalence of negative emo-
      tion in dream reports. But this model has to contend with two important ob-
      servations. First, there are some people who report no dreams at all unless ar-
      tificially awakened, and yet, on the average, these people have no unusual in-
      cidence of emotional or cognitive problems. Here, one might retreat a bit and
      posit that the therapeutic value of dreams occurs as a result of their experience
      during sleep, even if they are not consciously recalled. Second, many of the
      most emotionally salient events in life never make their way into dreams at
      all, even in those individuals who report dreams regularly. Some psychiatrists
      might respond that these events would be manifest symbolically rather than lit-
      erally and would therefore not always be easy to spot.
        The memory consolidation/integration model for dreams is compelling in
      many ways. Among other things, it provides an explanation of why items in re-
      mote memory are often dredged up in dreams: presumably these are being inte-
      grated with newer memories. Within memory consolidation/integration mod-
      els there are some important distinctions. In some, the experience and/or later
      report of the dream are central to the process. These models, of course, must
      deal with the same critique leveled at the emotional models above: on average,
      people who fail to report dreams perform normally in a battery of memory
      tests. A reductionist variant of the memory model, most forcefully proposed by
J. Allen Hobson of Harvard University, states that the main purpose of cycling
sleep is memory consolidation and integration and that the experiences of nar-
rative dreams are basically what the logically impaired (inhibited dorsolateral
prefrontal cortex) and hyperemotional (overactive amygdala, septum, and anterior
cingulate) brain can stitch together into a narrative from scraps of mostly vi-
sual memory (overactive parahippocampal gyrus). In this view, the content of
dreams is merely a funhouse-mirror reflection of memory consolidation and
there is no need for symbolic dream interpretation in the Freudian (or ancient
Egyptian) tradition.
  To me, there has always been a big hole in existing memory consolidation/
integration models of dreams. They fail to address why the emotional con-
tent of dreams is so negative. My own suspicion about this has been as follows:
It is well known that the activation of the negative emotion circuits (fear/anx-
iety/aggression) in the brain will reinforce memory consolidation in the wak-
ing state. Essentially, strong activation of the brain regions subserving negative
emotions is a signal that says “write this down in memory and underline it.”
During memory consolidation and integration in sleep we need some mecha-
nism to say, “OK. You’ve made this connection with something in long-term
memory. Write it down now.” I suggest that that mechanism is activation of the
negative emotion centers. In essence, the fear/anxiety/aggression circuitry is co-
opted for use in reinforcing memories and connections between memories in
the absence of relevant emotional stimuli. Your dreaming brain doesn’t know
that the negative emotion circuits have been hijacked, and it integrates the ac-
tivity in these centers to produce narrative dreams with negative emotional
themes.
  So, where do these various models leave the question of whether dream con-
tent is meaningful? To me, this has always seemed like a nonissue. Certainly,
the content of dreams is of some interest under any model of dreaming. Even
      diehard proponents of the memory consolidation/integration model of dream-
      ing agree that the content of what is being written into memory and what it’s
      being integrated with are of some value in understanding an individual’s men-
      tal state. The question is how far to take it. Although there is a place for the
      analysis of dream content in both psychotherapy and personal growth, I have
      no confidence (and there is no biological basis to believe) that insight into one’s
      mental state can be gained through analysis of dream content with arbitrary
      symbolic dictionaries.
        The obsession with specific dream content tends to obscure what’s really
      important about dreaming. The most useful thing about the experience of
      dreaming (as opposed to the underlying processes) is not the detailed content
      of dreams. It’s not so crucial that you dream of a cigar rather than a shoe, or of
      your father rather than your mother. What’s most important about dreaming is
      that it allows you to experience a world where the normal waking rules don’t ap-
      ply, where causality and rational thought and our core cognitive schemas (peo-
      ple don’t transform or merge, places should be constant, gravity always oper-
      ates, and so forth) melt away in the face of bizarre and illogical stories. And,
      while you dream, you accept these stories as they unfold. Essentially, the experi-
      ence of narrative dreams allows you to imagine explanations and structures that
      exist outside of your waking perception of the natural world. In your waking
      life you may embrace the distorted structures of the dream world or you may be
      a hard-headed rationalist, or you may blend the two (as most of us do), but in
      all cases the experience of dreaming has thrown back the curtain and allowed
      you to imagine a world where fundamentally different rules apply.
Chapter Eight
The Religious Impulse
IN MY NIGHTMARE,   I'm in New Orleans for the Society for Neuroscience an-
nual meeting, a gathering of 30,000 or so of the world’s brain researchers. It’s
nighttime and I’m at a restaurant table with a group of colleagues. The wine is
flowing, everyone is happy and chatting, so I begin to explain my theory of reli-
gion and neural function as the waiter delivers the huge plates of steaming
boiled crawfish. As I go on for a minute or two I slowly realize that the table has
become oddly silent. Behind me, a tall robed figure with a black hood is waiting
expectantly with a pepper grinder about the size of a Stinger missile. I turn
slowly, my spiel gradually winding down.
  “Would you care for some freshly ground speculation with that, sir?”
  All heads turn toward me. The sounds in the restaurant gradually build from
      a low rumble to howling to shrieking cacophonous laughter. Then the hooting
      begins and all the diners slowly point their fingers in my direction at once. The
      gathering din soon animates the boiled crawfish on my plate and they jerk and
      snap and finally swarm over me, ripping at my flesh and singing “In-a-gadda-
      da-vida baby, don’t you know I’m in love with you” in tinny little voices as I
      crumple to the floor.
         I’m not alone. Neurobiologists are hesitant to talk about brain function and
      religion in the same breath. Every human culture has language and music, and
      we are happy to study the neurobiological bases of these phenomena. Every
      human culture has a form of marriage, and we study the neurobiological ba-
      sis of pair bonding as well. Every human culture has religion. The forms of
      religion vary enormously (as do languages and marriage customs), but the
      presence of religion is a cross-cultural universal. To date, a culture has yet to
      be found that lacks religious ideas and practice. Yet scientists who study the
      brain rarely contribute to discussions of this widespread form of human behav-
      ior (perhaps out of a widespread fear of singing zombie crawfish from hell).
      FRESHLY GROUND SPECULATION   in hand, let's have at it and consider
      some religious ideas from around the world. Some of these come from a
      provocative book by the cognitive anthropologist Pascal Boyer, Religion Ex-
      plained.
           Invisible souls of dead people lurk everywhere. They must be pacified with
           offerings of food and drink or they will make you sick.
           There has only been one woman in history who has given birth without hav-
           ing sex and we worship her for that reason.
    After you die, you will come back to earth in either a higher or lower form,
    depending upon how you have followed a set of rules in this life.
    There is one god who is all-powerful and all-knowing and who can hear your
    thoughts. You can pray to our god in a temple or anywhere else.
    Some ebony trees can recall conversations people hold in their shade. These
    can be revealed by burning a stick from the tree and interpreting the pattern
    of fallen ash.
    There is a shaman in our village who will dance until his soul leaves his body
    and goes to the land of the dead. When he returns he will bring messages
    from our ancestors who have become all-seeing gods.
  It is likely that some of these ideas are from traditions with which you are fa-
miliar and others are not. This small collection illustrates a bit of the variety of
cross-cultural religious thought. Some groups have religions with one god, oth-
ers have many, and some have none at all. In some cases, unusual powers are at-
tributed to historical figures or natural objects, which then become the focus of
particular attention. In others, special rituals can be used to speak with divine
beings or the dead.
  There is a lot of variety here, but not infinite variety. You don’t, for example,
find a religion where there is an all-powerful, all-seeing god but he never inter-
acts with the human world, or one in which the spirits of the ancestors will
punish you for doing what they want, or one in which priests can see the fu-
ture but then forget what they saw before they can tell anyone. Religions, like
dreams, have variety, but they are still constrained within a particular set of cog-
nitive and narrative boundaries.
         So, why is it that religion of some form is found in every culture (although
      not in every individual)? Why is it that, in Boyer’s words, “human beings can
      easily acquire a certain range of religious notions and pass them on to others"?
      Can our present knowledge of brain function provide any form of explanation
      for the prevalence and practices of religion cross-culturally?
         If, after a few pints, I start to ask people at my local bar about the origin of re-
      ligious thought around the world, I get the sort of answers that can be summa-
      rized as follows:
           Religion provides comfort, particularly in allowing people to face their own
           mortality.
           Religion allows for the upholding of a particular social order: it lays down
           moral rules for interactions with others.
            Religion gives answers to difficult questions, such as the origins of the
            natural world.
         These ideas all hold true to some degree for most of the religions we encoun-
      ter in the more affluent parts of the world. But they do not always apply in the
      broader cross-cultural sense. Many religions are not comforting at all: they are
      mostly concerned with malevolent spirits that, if not continually appeased, will
      kill you, make you sick, make you crazy, destroy your crops, cause you to fail at
      hunting. Most religions have a world origin story and an afterlife story, but
      these are not universal. Religions do not always promise salvation. In many of
      the world’s cultures, the dead are condemned to wander eternally no matter
      how scrupulously they lived their lives on Earth. Many societies have common
      rules for social order, but in many cases these are entirely independent of reli-
      gious practice. In short, the explanations offered at the bar have some utility,
but they all fail the broader cross-cultural test. They do not answer our basic
question: “Why does every human culture have religion?” A different approach
is needed.
IS IT REASONABLE     to imagine that brain function, something that is generally
shared by human beings around the world, can be invoked to explain religious
thought and practice, which takes such a wide variety of forms (including athe-
ism)? Let’s be clear what we’re after here. We’re not looking for a brain region or
neurotransmitter or gene that somehow confers religion. That is unlikely to be
a fruitful level of analysis. Nor are we seeking to explain specific religious ideas
in biological terms. Rather let’s ask: are there some aspects of brain function
that, on the average, make it easy for humans to acquire and transmit religious
thought?
  I will try to convince you that our brains have become particularly adapted
to creating coherent, gap-free stories and that this propensity for narrative cre-
ation is part of what predisposes humans to religious thought. Creating a co-
herent percept from sensory fragments is a theme touched on in Chapter 4. Re-
call that as you scan a visual scene with tiny jumps of your eyes, called saccades,
your brain plays some tricks. You do not see a jerky image with the visual scene
jumping around, nor do you see a scene that briefly fades to black every time
your eyes jump. Rather, your brain takes the “jerky movie” that is the raw visual
feed from your eyeballs, edits out the saccades, and retroactively fills in the gaps
in the ongoing visual scene with images from the time the saccade ended. What
you perceive feels continuous and flowing, but it is actually a narrative actively
constructed by your brain to create a coherent sensory story.
  The creation of coherent narratives in the brain is not limited to manip-
ulation of low-level perception, as occurs with visual saccades, but extends to
higher perceptual and cognitive levels. This function is ongoing, but difficult
      to study in the normal brain. It is often more clearly revealed in cases of brain
      damage. Take, for example, people suffering from anterograde amnesia. Recall
      that these people are unable to form new memories of facts and events but
      have intact memories for things in the more distant past. When a hospital-
      bound man with severe anterograde amnesia is asked “What did you do yester-
      day?” he does not have any memories from the previous day to call to mind.
      In many cases, the patient will construct a narrative from scraps of older mem-
      ory and weave them together to make a coherent and detailed story. “I stopped
      in to visit my old pal Ned at his store and then we went out for lunch at the
      deli. I had a corned beef sandwich and a dill pickle. Afterward we took a walk
      in the park and watched the skaters.” This process, which is called confabula-
      tion, is not merely a face-saving attempt. In almost all cases, amnesiacs be-
      lieve their own confabulations and will act upon them as if they were true. Con-
      fabulation in anterograde amnesia is not a process under voluntary control.
      Rather, it’s what the brain does when confronted with a problem it cannot
      begin to solve: it makes a story from whatever bits of experience it can dredge
      up, in much the same way that narrative dreams are created from scraps of
      memory.
         The drive to create coherent narrative is also revealed in a fascinating group
      of “split-brain” patients. These are people whose severe and otherwise intracta-
      ble epilepsy has been controlled by cutting the corpus callosum and the ante-
      rior commissure, which are the bundles of axons that normally connect the
      right and left cerebral hemispheres. The split-brain operation, though used as a
      last resort, is a remarkably effective way of controlling some types of seizures.
      This procedure disconnects the direct communication between the right and
      left cerebral cortex, but each side of the cortex retains generally normal func-
      tion and the lower (subcortical) parts of the brain remain connected. Remark-
      ably, if you meet someone who has had split-brain surgery you are unlikely to
notice anything amiss in casual conversation. It takes careful examination, usu-
ally employing special instruments, to reveal anything unusual.
  The analysis of perception and cognition in split-brain patients was pio-
neered in the 1960s by Roger Sperry of the California Institute of Technol-
ogy (the same remarkable neurobiologist whose work on the development of
the visual system of the frog I discussed in Chapter 3) and has been carried on
more recently by a number of others, including Michael Gazzaniga of the Uni-
versity of California at Santa Barbara. In most people (almost all right-handers
and about half of lefties), the left cortex is specialized for abstract thought, lan-
guage (particularly involving the meanings of words), and sequential mathe-
matical calculation. The right cortex excels at spatial relationships, geometry,
face recognition, and detecting the emotional tone of language, music, and
facial expressions. These insights have mostly come from patients with vari-
ous localized brain lesions as well as from brain-scanning studies of normal
humans.
  Split-brain patients provide a unique opportunity to see how the left and
right cortices process information independently. In one famous experiment, a
split-brain patient was placed before a specially constructed screen, designed so
that the left cortex received only an image of a chicken’s claw (projected in the
right visual field: the representation of the visual field is reversed right to left in
the brain) while the right cortex saw a winter landscape with snow (see Fig-
ure 8.1). When asked to pick a card with an image to match, the right cor-
tex, which controls the left hand, picked a shovel to go with the theme of
snow, while the left cortex, which controls the right hand, picked an image of
a chicken to go with the claw. This shows that each side of the cortex could rec-
ognize its image and make an appropriate association. When the patient was
asked why he chose those two images, the response came from the left side (that
is the only one that can speak—the right is mute); the response was, “Oh, that’s
      figure 8.1. A split-brain patient receiving separate visual stimulation to each cortical
                  hemisphere. The chicken claw in the right visual field activates the left
                  hemisphere while the shed in the left visual field activates the right hemi-
                  sphere. When the patient is asked to explain his choices of thematically re-
                  lated images, confabulation results. Joan M. K. Tycko, illustrator.
simple. The chicken claw goes with the chicken and you need a shovel to clean
out the chicken shed.”
  Let’s think carefully about what’s happening here. The left brain saw the
chicken claw but not the snow scene. When faced with the shovel and the
chicken it retroactively constructed a story to make these disparate choices ap-
pear to make sense. Michael Gazzaniga, in his book entitled The Mind’s Past,
from which this example is taken, notes, “What is amazing here is that the left
hemisphere is perfectly capable of saying something like, 'Look, I have no idea
why I picked the shovel . . . Quit asking me this stupid question.' Yet it doesn't."
  Here’s another brief example, also from Gazzaniga. If, in a split-brain pa-
tient, the (mute) right brain receives the instruction “Go take a walk,” the sub-
ject will push the chair back and prepare to leave. If, at that point, the (speak-
ing) left brain, which had no access to the instruction, is asked “What are you
doing?” it will manufacture a seemingly coherent response to make sense of the
body’s action, such as “I was feeling thirsty and decided to go get a drink” or “I
had a cramp in my leg and needed to work it out.” This is not just a fluke of one
or two split-brain patients. The narrative-constructing capacity of the left cor-
tex has now been clearly observed in more than 100 split-brain patients in
many different situations.
  In all humans, not just those who have had split-brain operations, this action
of the left cortex is revealed in narrative dreaming. Why do we have narrative
dreams at all? If the underlying purpose of these dreams is indeed memory con-
solidation and integration, then why don’t we just experience isolated vignettes
or flashes of memory instead of an unfolding bizarre and illogical story? The
answer is that the narrative-constructing function of the left cortex cannot be
switched off, even during sleep. Like the cerebellar system designed to reduce
perception of self-generated movements discussed in Chapter 1, it is always on
whether or not its function is relevant for the particular task at hand. The
      dream researcher David Foulkes tells Andrea Rock (in her book The Mind at
      Night, p. 127), “The interpreter [the narrative-constructing function of the left
      cortex] is doing an even more spectacular job of story-making than it does in
      waking, because the brain in sleep is activated but the raw material it has to
      work with is much different. You lose yourself, you lose the world and thought
      is no longer directed.”
      I SUGGEST THAT   the left cortex's always-on narrative-constructing function
      promotes the acquisition of religious thought through both subconscious and
      conscious means. Religious ideas largely involve nonnaturalistic explanations.
      Whether religious ideas are regarded by their practitioners as “faith” or merely
      “given knowledge,” they share the property that they violate everyday percep-
      tual and cognitive structures and categories. The left cortex predisposes us to
      create narratives from fragments of perception and memory. Religious ideas are
      similarly formed by transforming everyday perceptions, by building coherent
      narratives that bridge otherwise disparate concepts and entities. Pascal Boyer
      proposes that the most effective religious concepts preserve all the relevant in-
      ferences of a particular cognitive category except those that are specifically pro-
      hibited by a special nonnaturalistic aspect.
           There has only been one woman in history who has given birth without hav-
           ing sex and we worship her for that reason.
           Category: person. Special aspect: virgin birth.
           There is one god who is all-powerful and all-knowing and who can hear your
           thoughts. You can pray to our god in a temple or anywhere else.
           Category: person. Special aspect: all-powerful, all-knowing.
    Some ebony trees can recall conversations people hold in their shade. These
    can be revealed by burning a stick from the tree and interpreting the pattern
    of fallen ash.
    Category: plant. Special aspect: recalls conversations.
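  For readers who like to see such a structure made explicit, here is a minimal sketch in Python of the scheme as I have described it. The categories, their default inferences, and the function are invented purely for illustration; they are not drawn from Boyer’s work.

    # Toy illustration: a religious concept modeled as an everyday cognitive
    # category whose default inferences all survive, except the few that are
    # explicitly overridden by a nonnaturalistic "special aspect."
    CATEGORY_INFERENCES = {
        # hypothetical default inferences for two everyday categories
        "person": {"is born of a mother": True, "needs food and sleep": True,
                   "gives birth only after having sex": True},
        "plant": {"grows in soil": True, "needs water and light": True,
                  "remembers conversations": False},
    }

    def religious_concept(category, special_aspect):
        """Copy every default inference of the category, then apply the
        violations that make the concept attention-grabbing."""
        inferences = dict(CATEGORY_INFERENCES[category])
        inferences.update(special_aspect)
        return inferences

    # Category: person. Special aspect: virgin birth.
    mary = religious_concept("person", {"gives birth only after having sex": False})
    # Category: plant. Special aspect: recalls conversations.
    ebony = religious_concept("plant", {"remembers conversations": True})
    print(mary)   # every ordinary inference about a person survives but one
    print(ebony)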
  The binding together of disparate percepts and ideas to create coherent nar-
rative that violates our everyday waking experience and cognitive categories is a
left cortical function that underlies both dreaming and the creation and social
propagation of religious thought. This function operates subconsciously. We
are not aware of the stories spun by our left cortex in our waking lives. We pay
no attention to the man behind the curtain.
  But in our narrative dreams, we have the experience of extended violations of
conventional logic, connection, and causation. Our dreams give us nonnat-
uralistic experiences. They allow us to conceptualize systems and stories that
are not constrained by our conventional waking categories and causal expecta-
tions. In this way, the subconscious process of narrative creation is made evi-
dent to our conscious mind.
  So, I hypothesize that left cortex narrative creation works in two ways to pro-
mote religious thought: subconsciously, to make the cognitive leaps that under-
lie nonnaturalistic thought (that which violates categories, expectations, and
causality) and consciously, through recollected dreams, to create a template for
nonnaturalistic thought. In this vein, it is not an accident that cross-cultural rit-
ual practice often incorporates dreaming, hallucinogenic drugs, trance, dance,
meditation, and music. All of these aspects of ritual practice, by moving us
away from waking consciousness, provide nonnaturalistic, dreamlike experi-
ence guided by the left cortex, and thereby reinforce the religious impulse.
  Let’s be clear about what I’m proposing here. Although all cultures have re-
      ligious thought, ultimately, religious thought is an individual phenomenon.
      Within a given culture, individuals vary considerably in their religious thought
      with some claiming to have none whatsoever. I am not proposing that some in-
      dividuals (or some cultural groups) have biological variation, either genetic or
      epigenetic, that predicts their predisposition for religious thought. Rather, I am
      claiming that, though on the individual level religion is a matter of personal
      faith as influenced by sociocultural factors, our shared human evolutionary
      heritage, as reflected in the structure and function of our brains, predisposes us
      as a species for religious thought in much the same way that it predisposes us
      for other human cultural universals such as long-term pair bonding, language,
      and music.
FAITH IS NOT only the province of religion. When John Brockman posed the
      question, “What do you believe is true even though you cannot prove it?” to a
      group of scientists and academics through his edge.org website, the answers
      were voluminous and wide-ranging, although mostly not explicitly religious.
      Even the most hard-headed rational atheists had an answer. On some level, all
      of us are hard-wired, or are at least strongly predisposed, to believe things we
      cannot prove. That essential act of faith is central to human mental function. It
      is an important initial step in making sense of our world.
         So why, particularly in the United States, do scientific and religious ideas of-
      ten lead to cultural war? One reason is that many scientists have not been very
      humble about this matter. Scientific investigation has strongly challenged the
      factual basis of some particular religious ideas (such as a Judeo-Christian bibli-
      cal flood or a 6,000-year-old Earth or the creation of Eve from Adam’s rib). For
      some scientists, such findings are enough to warrant a wholesale repudiation of
      religious faith and the faithful. But, scientifically speaking, is that warranted?
      Although the details of particular religious texts are falsifiable, the core tenets
of many religions (a belief in a God, the existence of an immortal soul) are
not. Science cannot prove or disprove the central ideas underlying most religious
thought. When scientists claim to invalidate these core tenets of religious faith
without the evidence to do so, they do a disservice to both science and religion.
  On the religious side there has often been a similar intolerance of scientific
thought. Fundamentalists of many religions (Christians, Muslims, Jews, and
Hindus to name but a few) have insisted upon a rigid, literal reading of sacred
texts. For these people, the rejection of science is a given, and the firmer the re-
jection the better, because strong rejection is seen as an opportunity to demon-
strate the strength of their religious feeling: “I have faith, I believe this in my
heart and nothing you can say or do can shake my faith.” Attempts to reconcile
literal readings of sacred texts with observations of the world often lead to im-
probable proposals. For example, an exhibit at the Creation Museum in Pe-
tersburg, Kentucky, shows pairs of dinosaurs marching up the ramp to board
Noah’s Ark!
  Of course, most fundamentalists don’t get really worked up about most areas
of science. Chemistry and mathematics remain largely unmolested. Physics
does not inspire impassioned debates at school board meetings (although this
may change). Evolutionary biology takes most of the heat. It’s not just that evo-
lutionary biology contradicts traditional creation stories such as that in the
Book of Genesis. There has also been an assumption that if one accepts the idea
that life developed without divine intervention, it necessarily follows that all as-
pects of religious thought must be rejected. Those who take this line of argu-
ment to extremes argue that when religious thought is rejected moral and social
codes will degenerate, and “the law of the jungle” will be all that is left. It is
imagined by religious fundamentalists that those who do not share their partic-
ular religious faith are incapable of leading moral lives.
  The tragedy is that these suppositions are simply not true. One can be a per-
      son of faith and still accept a scientific worldview, including evolutionary biol-
      ogy (one can also be a scrupulously moral agnostic or atheist). It’s only funda-
      mentalist religion that is incompatible with science. Fortunately, many of the
      world’s religious leaders have accepted the idea that scientific and religious ideas
      need not be mutually exclusive. His Holiness the Dalai Lama has said, “If sci-
      ence proves Buddhism is wrong, then Buddhism must change.” In stark con-
      trast to fundamentalist Christian teachings, the Catholic bishops of England,
      Scotland, and Wales have stated, “We should not expect to find in Scripture full
      scientific accuracy or complete historical precision.” They hold that the Bible
      is true in passages relating to human salvation and the divine origin of the
      soul but, “we should not expect total accuracy from the Bible in other, secular
      matters” (from The Gift of Scripture, published by the Catholic Truth Society,
      United Kingdom). The Vatican has essentially stated that the scientific consen-
      sus model of evolution is valid but that it explains only the biological part of
      humanity, not the spiritual mystery. How utterly reasonable!
      WE ALL BELIEVE      some things we cannot prove. Those unproven ideas that
      are ultimately subject to falsifying experiment or observation constitute “sci-
      entific faith.” Those that are not constitute religious faith. These two modes
      of thought are not mutually exclusive, as fundamentalist religious leaders and
      some scientists would have you believe. Rather, they are two branches of the
      same cognitive stream. Our brains have evolved to make us believers.
Chapter Nine
The Unintelligent Design of the Brain
HOSTILITY TO EVOLUTIONARY biology has been a feature of certain parts of the
American political and religious landscape for many years, although it has been
much less of an issue in most other countries. Most religious denominations
and indeed most Christian leaders have made their peace with the basic tenets
of evolution: that all present life on Earth derives from a common 3.5-billion-
year-old ancestor, and that living things change slowly through a random pro-
cess of genetic mutation coupled with natural selection. Indeed, Pope John
Paul II made this point in a 1996 address to the Pontifical Academy of Sciences
entitled “Truth Cannot Contradict the Truth.” He said, “Today, almost half a
century after the publication of the encyclical [a previous statement from Pope
Pius XII in 1950 that said there was no opposition between evolution and the
      doctrine of the faith], new knowledge has led to the recognition of the theory of
      evolution as more than a hypothesis.”
         But fundamentalist Christians adhere to a literal reading of the Book of
      Genesis and have for many years sought to have this biblical view taught in
      American public schools. When these attempts were repeatedly banned by the
      courts on the basis of the Constitutional separation of church and state, a new
      strategy was born called “scientific creationism.” A group of fundamentalist
      American Christians attempted to claim that careful examination of the geo-
      logical and biological record supports the story of Genesis—that the Earth is
      6,000 years old, that all species were created simultaneously, and that mass ex-
tinctions seen in the fossil record were caused by Noah’s flood. But this
      argument also failed. Not only was it impossible to marshal the evidence to
      support these claims scientifically, but, in the words of the evolutionary biolo-
      gist Jerry Coyne, “American courts clearly spied clerical collars beneath the lab
      coats” and struck down teaching of so-called scientific creationism in schools.
         In the 1990s yet another strategy was developed. Recognizing that explicit
      references to religion would always be rejected by the courts, a group of funda-
      mentalist Christian academics took a step back and sought to devise a theory
      that would challenge evolutionary biology but would appear to be scientifically
      reasonable. This movement, dubbed “intelligent design,” does not try to pro-
      vide support for such obviously scientifically untenable points as a 6,000-year-
      old Earth, Noah’s flood, or other aspects of the Genesis story. In fact, when talk-
      ing to the world at large, the supporters of intelligent design are careful not
      to mention God or religion at all. Rather, they claim that living creatures are
      just too intricate and clever to have arisen by random mutation and selection.
      These forms, they say, are too elegant and too complex to attribute to anything
      other than a very clever designer. Therefore, an unspecified intelligent designer
      must be at work. In this way of thinking, gradual change of living things is ad-
mitted and the fossil record and the genetic relationships between living organ-
isms can be accounted for, but the engine driving this change is challenged.
  The crux of the matter is this: intelligent design purports to be a scientific
theory, but it isn’t. Pope John Paul II hit one out of the ballpark when he offered
the following definition. “A theory is a metascientific elaboration distinct from
the results of observation but consistent with them. By means of it, a series of
independent data and facts can be related and interpreted in a unified explana-
tion. A theory’s validity depends on whether or not it can be falsified. It is con-
tinually tested against the facts; wherever it can no longer explain the latter, it
shows its limitations and unsuitability. It must then be rethought” (address to
the Pontifical Academy of Sciences, October 23, 1996).
  Evolution is a theory. It can be falsified by particular findings, such as a
hominid skeleton dated to the Jurassic Era. Intelligent design is not. It rests on
a subjective inference of design that is not subject to a falsifying experiment
or observation. It is not surprising that despite lavish funding from certain reli-
gious and political groups, the intelligent design movement has provided no
fieldwork or laboratory experimentation to bolster its claims. Yes, books are
written, papers are presented and published, and even mathematical models
are constructed. All the trappings of science are there, but there is no science
at the core.
IS THE GOAL    of the intelligent design movement really to do legitimate science
to challenge the theory of evolution, or is its goal merely to craft a sufficiently
watered-down view of creationism to appear scientific and thereby gain a place
at the debating table and fly under the radar of the courts? Although intelligent
design proponents are careful not to mention religion in public hearings or de-
bates, quite a different picture emerges when they are addressing fundamental-
ist Christian audiences. Phillip E. Johnson of the University of California at
      Berkeley, one of the founders of the intelligent design movement, said, “Our
      strategy has been to change the subject a bit so that we can get the issue of intel-
      ligent design, which really means the reality of God, before the academic world
      and into the schools” (American Family Radio, January 10, 2003). William
      Dembski of the Southern Baptist Theological Seminary, another well-known
      intelligent design proponent, has stated, “Intelligent design readily embraces
      the sacramental nature of physical reality. Indeed, intelligent design is just the
      Logos theology of John’s Gospel restated in the idiom of information theory”
      (Touchstone: A Journal of Mere Christianity, July 1999).
         In its public face, intelligent design has been cleverly crafted to appear as a le-
      gitimate scientific theory with no ties to a specific religious agenda. This gives
      political cover to politicians and school board members who can adopt a tone
      of fairness in saying, “Let’s present our students with both sides of this fas-
      cinating scientific debate.” For example, in March 2002, U.S. Senator Rick
      Santorum (Republican of Pennsylvania) said, “Proponents of intelligent design
      are not trying to teach religion via science, but are trying to establish the valid-
      ity of their theory as a scientific alternative to Darwinism.” In August 2005,
      President George W. Bush weighed in: “Both sides ought to be properly taught
      . . . so people can understand what the debate is about.”
IF YOU BELIEVE that life was designed by an intelligent force (whether that be
      the Judeo-Christian God, angels, the Allah of Islam, or even extraterrestrials),
      then the human brain, the presumed seat of reason, morality, and faith, is the
      obvious test case to reveal this design. After all, this 2.5-pound lump of tissue
      can solve problems in recognition, categorization, social interaction, and many
      other areas that routinely baffle the world’s most sophisticated supercomputers.
      These supercomputers are designed and programmed by teams of extremely
talented hardware and software engineers. Doesn’t this imply that the brain was
designed by an even more skillful engineer?
  The proponents of intelligent design make two main arguments. First, as we
have seen, they contend that living things could not have arisen through Dar-
winian evolution because they contain structures that are “irreducibly complex.”
This means that if you remove any one component part from one of these
structures (such as an ion channel or a bacterial flagellum), it will not be par-
tially crippled, but rather will fail to function entirely. Therefore, how can we
imagine that these structures arose from random change and gradual selection
when the intermediate forms would fail? Second, they claim that random mu-
tation and selection cannot generate new information and therefore cannot
produce the “specified complexity” necessary to adapt to the environment. In
their view, only an intelligent agent can get around these problems.
  Specialists in molecular evolution and information theory (I am neither)
have refuted these core claims in exquisite detail (see the Further Reading and
Resources section of this book). To me the most compelling evidence against
the claim of irreducible complexity is that careful observation reveals that com-
plexity is not irreducible at all. For example, the flagellum (a whip-like struc-
ture that bacteria spin to move through liquids) of more recent bacteria is more
complex than the flagellum of more ancient bacteria. In many cases, complex
structures such as the flagellum arise when genes mediating other functions
(such as ion pumps) are randomly duplicated in the genome, and
then one copy of the duplicated gene accumulates mutations that allow it to
adopt a new function (as a component of the flagellum).
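  A toy sketch in Python may make this duplication-and-divergence route concrete. The sequence, the mutation count, and the commentary are invented for illustration only; this is not a model of any real ion pump or flagellar gene.

    import random

    random.seed(1)
    BASES = "ACGT"

    def mutate(seq, n_mutations):
        """Return a copy of seq carrying n random point mutations."""
        seq = list(seq)
        for _ in range(n_mutations):
            i = random.randrange(len(seq))
            seq[i] = random.choice(BASES.replace(seq[i], ""))  # change to a different base
        return "".join(seq)

    # A hypothetical ancestral gene (imagine it encodes part of an ion pump).
    ancestral = "".join(random.choice(BASES) for _ in range(60))

    # Step 1: an accidental duplication leaves the genome with two copies.
    copy_a, copy_b = ancestral, ancestral

    # Step 2: one copy keeps doing the old job (and is conserved by selection),
    # while the spare copy is free to accumulate mutations.
    copy_b = mutate(copy_b, n_mutations=15)

    divergence = sum(a != b for a, b in zip(copy_a, copy_b)) / len(copy_a)
    print(f"copy A unchanged; copy B differs at {divergence:.0%} of its positions")
    # If some of those changes happen to improve reproductive success in a new
    # role (say, as a flagellar component), selection can retain and refine them.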
  The claim that “specified complexity” cannot arise through random muta-
tion and selection is a specious argument. The critique from information the-
ory, that new information cannot be generated by the evolutionary process,
      would hold only if the evolutionary process were charged with matching an in-
      dependently given pattern. This is not the case. The evolutionary process does
      not strive to build prespecified complex structures such as eyes, kidneys, or
      brains. It has no goal. The only driving force of evolution is reproductive suc-
      cess and the related issues of kin selection and the reproductive success of one’s
      offspring. If building complex structures increases reproductive fitness, then
      they may arise, but if destroying complex structures also increases reproductive
      fitness, then complex structures can just as easily be destroyed or altered (as
      happened when the eyes of cave-dwelling fish became nonfunctional).
         So, where does this leave the intelligent design movement? Essentially, it
      leaves its proponents saying: “Look at that thing. It’s just too cool not to have
      been actively designed.” Michael Behe, a biochemist, took this line when de-
      fending the intelligent design movement on the New York Times op-ed page
      (February 7, 2005): “The strong appearance of design allows a disarmingly
      simple argument: If it looks, walks and quacks like a duck, then, absent com-
      pelling evidence to the contrary, we have warrant to conclude it’s a duck. De-
      sign should not be overlooked simply because it’s so obvious.” Behe would like
      intelligent design to be the default explanation for biological structure, with the
      burden placed on any competing explanation to prove otherwise (Figure 9.1).
         Is the evidence for design in biological systems so obvious? I hold that the
      brain, the ultimate test case, is, in many respects, a true design nightmare. Let’s
      review a bit. When we compare the human brain to that of other vertebrates,
      it becomes clear that the human brain has mostly developed through agglom-
      eration. The difference between the lizard brain and the mouse brain does not
      involve a wholesale redesign. Rather, the mouse brain is basically the lizard
      brain with some extra stuff thrown on top. Likewise, the human brain is basi-
      cally the mouse brain with still more stuff piled on top. That’s how we wind up
with two visual systems and two auditory systems (one ancient and one modern) jammed into
our heads. The brain is built like an ice cream cone with new scoops piled on at each stage
of our lineage.

figure 9.1. Do these all reveal the work of an intelligent designer? Proponents of
            intelligent design are fond of using the carvings of presidents on Mount
            Rushmore (top left) as an example of something that we can tell was intel-
            ligently designed even without any specific experiments. They would say
            that evidence of intelligent design is equally evident in biological struc-
            tures such as the shell of the chambered nautilus (shown in cross section in
            the top right), neurons in the cerebral cortex (bottom left), and the brain
            as a whole (bottom right). Joan M. K. Tycko, illustrator.
         Accidental design is even more obvious at the cellular level in the brain. The
      job of neurons is to integrate and propagate electrical signals. Yet, in almost all
      respects, neurons do a bad job. They propagate their signals slowly (a million
      times more slowly than copper wires), their signaling range is tiny (0 to 1,200
      spikes/second), they leak signals to their neighbors, and, on average, they suc-
      cessfully propagate their signals to their targets only about 30 percent of the
      time. As electrical devices, the neurons of the brain are extremely inefficient.
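  Some rough arithmetic makes the point vivid. In the sketch below, the 30 percent figure comes from the text; the conduction velocity and path length are assumed values chosen only for illustration.

    # Back-of-the-envelope comparison; the velocity and distance are assumptions.
    axon_velocity_m_per_s = 100.0     # assumed: a fast, myelinated axon
    copper_velocity_m_per_s = 2e8     # signals in copper travel near light speed
    path_length_m = 0.15              # assumed: roughly across a human brain

    axon_delay_ms = path_length_m / axon_velocity_m_per_s * 1000
    copper_delay_ms = path_length_m / copper_velocity_m_per_s * 1000
    print(f"axon: {axon_delay_ms:.2f} ms, copper: {copper_delay_ms:.7f} ms, "
          f"ratio about {axon_delay_ms / copper_delay_ms:,.0f} to 1")

    # Unreliable synapses compound quickly: if each one relays a spike only
    # about 30 percent of the time, a chain of them almost never gets through.
    p = 0.30
    for n in (1, 2, 3, 4):
        print(f"{n} unreliable synapse(s) in series: {p**n:.1%}")

  With these assumed numbers the speed ratio comes out on the order of the millionfold difference mentioned above, and the reliability of a short chain of synapses drops below one percent.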
         So, at either the systems or the cellular level, the human brain, which the in-
      telligent design crowd would imagine to be the most highly designed bit of tis-
      sue on the planet, is essentially a Rube Goldberg contraption. Not surprisingly,
      some proponents of intelligent design have left themselves a way to retreat on
      this point. Michael Behe writes, “Features that strike us as odd in a design
      might have been placed there by the designer for a reason—for artistic reasons,
      for variety, to show off, for some as-yet-undetectable practical purpose or for
      some unguessable reason—or they might not.” Or, stated another way, if on
      first glimpse biological systems look cool, that must be the result of intelligent
      design. If, on closer inspection, biological systems look like a cobbled-together
      contraption, that’s still got to be from intelligent design, just intelligent design
      with an offbeat sense of humor. Clearly, this position is not a true, falsifiable sci-
      entific hypothesis, as is the theory of evolution. The idea of intelligent design is
      merely an assertion.
WHAT HAPPENS WHEN we lift the hood, so to speak? After all, the complete se-
      quences of the human, mouse, worm, and fly genomes are all in hand. What
      can they tell us? They make the case for evolution much stronger. Would you
      like to see genes duplicated to underlie development of novel complex traits?
They’re there. How about disabled genes which have mutated to the point
where they no longer function to make protein (called pseudogenes)? Check.
Genes where mutations have accrued across species to give rise to new func-
tions? No problem.
  The complete sequences of these genomes have been available only for a few
years and there’s a lot we don’t know about how genes direct the structure and
function of tissues and how their expression responds to environmental cues.
Our knowledge of gene-environment interactions in forming the structure and
function of the brain is at a very early stage. Nonetheless, there are some strik-
ing examples of how variation in gene structure underlies brain structure. One
of the best is the ASPM gene (previously mentioned in Chapter 3). Recall that
this gene, which codes for a protein in the mitotic spindle (a structure used to
organize the chromosomes during cell division), seems to determine how many
times cortical progenitor cells divide before they become committed to becom-
ing cortical neurons. As a result, this gene is crucial for determining cortical
size. Humans who harbor certain mutations in ASPM will be microcephalic.
You may also recall that an important part of this protein is a segment that
binds the messenger molecule calmodulin and that the calmodulin-binding re-
gion is present in two copies in the ASPM gene of the roundworm, 24 in the
fruit fly, and 74 in humans. Analysis of the ASPM gene in chimpanzees, goril-
las, orangutans, and macaque monkeys has indicated that change in this gene,
particularly in its calmodulin-binding region, has been particularly accelerated
in the great ape family. These findings strongly indicate that the ASPM gene is
one major determinant in the evolution of cortical size. In a few more years we
will not have to speculate about the genes underlying the evolution of brain
structure. We will have them in hand.
  So, with genomic information supporting evolution, including brain evolu-
tion, so strongly, where are the intelligent design advocates to retreat on this point?

[Figure 9.2, chart text redrawn:
The Main Evolutionary Constraints on Brain Design:
1. The brain is never redesigned from the ground up: it mostly adds new systems onto existing ones.
2. The brain has a very limited capacity for turning off control systems, even when these systems are counterproductive in a given situation.
3. Neurons, the basic processors of the brain, are slow and unreliable, and they have a severely limited signaling range.
Computational power can only be achieved by a very large, highly interconnected brain, which can’t fit through the birth canal in a state that’s close to maturity.
One branch: the wiring diagram of the 500 trillion synapses in the brain is too complex to be completely specified in the genome → the fine structure of brain wiring is often experience-dependent → these systems of experience-dependent neuronal plasticity are retained and modified to store information [MEMORY] → to be useful, memory must be integrated with past events and indexed by emotion → aspects of memory integration/consolidation are best done at night in the absence of competing sensory stimulation → bizarre, illogical narrative, shaped by the always-on left cortex narrative creation system [DREAMS].
The other branch: humans are born with very immature brains → humans have long childhoods, requiring extensive, sustained parental cooperation → the human mating system: sex throughout the ovulatory cycle and long-term pair bonding [LOVE].
Bottom of the chart: the always-on narrative creation system in the left cortex together with the nonnaturalistic experience of dreaming predisposes humans to acquire religious ideas, among them: [GOD].]

Figure 9.2. Love and memory, dreams and God: a chart encapsulating the main argument of the book.

Behe points to a way out: Imagine that an intelligent designer assembled
some simple organisms long ago, and then washed her hands of the whole thing
and let evolution take over. In this way you may still have evolved from a com-
mon ancestor with chimpanzees and mice and flies and worms, but this was
only allowed by an intelligent design that ended more than 600 million years
ago. By positing that both creation by intelligent design and evolution have oc-
curred, and that the designer’s intent is unfathomable owing to an offbeat sense
of humor, Behe has carved out a tiny mountaintop niche. This is a scrap of rhe-
torical ground that is impossible to attack, but from which nothing can be
launched, either. Needless to say, it’s not falsifiable and therefore it’s not a genu-
ine scientific theory. Not surprisingly, many others in the intelligent design
movement (William Dembski and Phillip Johnson among them) are not will-
ing to give up so much ground. They hold onto the notion that Darwinian evo-
lution is incapable of building anything useful.
  Perhaps the problem is the gee-whiz factor. It is indeed deeply and pro-
foundly amazing that there is a tissue such as the human brain that confers our
very humanness. It’s not surprising that, for some, pondering the awe-inspiring
concept of the mind-in-the-brain leads to a religious, faith-based (untestable,
unfalsifiable) explanation rather than a scientific, faith-based (testable, falsi-
fiable) hypothesis. What’s interesting here is that though there are many differ-
ent ways to get the story wrong, the intelligent design group has gotten it ex-
actly, explicitly 180 degrees wrong. The transcendent aspects of our human
experience, the things that touch our emotional and cognitive core, were not
given to us by a Great Engineer. These are not the latest design features of an
impeccably crafted brain. Rather, at every turn, brain design has been a kludge,
a workaround, a jumble, a pastiche. The things we hold highest in our human
experience (love, memory, dreams, and a predisposition for religious thought;
Figure 9.2) result from a particular agglomeration of ad hoc solutions that have
      been piled on through millions of years of evolutionary history. It’s not that we
      have fundamentally human thoughts and feelings despite the kludgy design of
      the brain as molded by the twists and turns of evolutionary history. Rather, we
      have them precisely because of that history.
Epilogue
That Middle Thing
THERE ARE A    lot of fascinating topics in brain function that I didn’t cover here,
including language, brain aging and disease, psychoactive drugs, hypnosis, and
the placebo effect. This is all great stuff and it took a lot of topical self-control to
prevent this book from becoming a huge tome. More important, I think that
you have discerned by now that many of the explanations present-day biology
can offer about higher brain function are rather incomplete. But there are a few
lovely examples where an explanation in terms of molecules and cells provides a
nearly complete understanding of something we experience. One of my favor-
ites is the finding that people equate the sensation of chili peppers (either in the
mouth or on the skin) with the sensation of heat. At first, you might imagine
that this is merely an example of metaphorical speech that has arisen in a few
cultures. Not so: in every culture where people are exposed to capsaicin, the ac-
      tive ingredient in chili peppers, they characterize the sensation as “hot,” sug-
      gesting a biological basis. OK then, you might say, perhaps the answer is that in
      your palate there are temperature-sensing neurons and also some capsaicin-
      sensing neurons and these two types of neuron project to the same place in the
      brain, which, when activated, gives a sensation of “heat.” It turns out that this
      explanation’s not quite right either. The real story is that there is a family of re-
      ceptors for capsaicin and related compounds that are located on nerve endings
      in the mouth (and other places such as the skin). These are called vanilloid re-
ceptors (vanilloids are the class of chemicals that includes capsaicin and related
      compounds), and they are activated by both capsaicin and warming, giving rise
      to similar sensations of “heat” for both stimuli. In this case, the experience at
      the level of behavior is almost entirely explained at the level of the single recep-
tor molecule. This even reveals why, if you drink hot tea after a spicy meal, the
      tea seems extra-hot: the receptor is super-activated by warming and capsaicin
      together! And yes, it’s not just chili pepper heat that can be explained in this
      way: an analogous story concerns a receptor family called the cold/menthol re-
      ceptors, which give rise to the cross-cultural association between coolness and
      the active ingredient of mint.
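  A toy activation function in Python captures the logic, though not the real channel biophysics; the threshold and dose-response numbers below are invented.

    def vanilloid_activation(temp_c, capsaicin_uM):
        """Return a 0-to-1 'perceived heat' signal from one receptor type that
        responds to both warming and capsaicin (toy model, invented numbers)."""
        heat_drive = max(0.0, (temp_c - 42.0) / 10.0)          # assumed warmth threshold
        capsaicin_drive = capsaicin_uM / (capsaicin_uM + 1.0)  # assumed saturating dose response
        return min(1.0, heat_drive + capsaicin_drive)

    print(vanilloid_activation(temp_c=50, capsaicin_uM=0))   # hot tea alone
    print(vanilloid_activation(temp_c=37, capsaicin_uM=5))   # chili at body temperature
    print(vanilloid_activation(temp_c=50, capsaicin_uM=5))   # hot tea after a spicy meal: extra hot

  Because a single receptor type carries both signals, warming and capsaicin are reported the same way, and presenting them together drives the signal toward its maximum.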
        Unfortunately, most of the biological explanations for experience and be-
      havior are not nearly this neat and compact. For example, in Chapter 5, I talked
      about learning and memory. We know that if you get a hole in your hippocam-
      pus you can’t store new memories for facts and events. We also know this seems
      to require a chemical process at synapses in your hippocampus whereby certain
      patterns of neural activity result in activation of NMDA-type glutamate recep-
      tors. This receptor activation, in turn, sets in motion a series of chemical steps
that render these activated synapses weaker or stronger and keep them that way for a long
time, a phenomenon called long-term synaptic depression and potentiation (LTD and LTP).
And we know this molecular phenomenon seems
to underlie declarative memory, because injection into the hippocampus of
drugs that prevent the activation of NMDA-type glutamate receptors makes it
impossible for animals to store new memories for facts and events.
  At first glance, this would appear to be a fairly complete explanation, but it’s
not. What’s missing is that middle thing. How is it that changing the strength
of some synapses in the hippocampal circuit actually gives rise to memories
for facts and events, as recalled during behavior? We have a molecular expla-
nation for how synapses get weaker and stronger. At a behavioral level, we
can show that interfering with this molecular process (and perhaps some other
things as well) disrupts memory. But our understanding of the intervening
steps is almost nonexistent: that middle thing is a big, nasty, embarrassing gap
for brain scientists. Unfortunately, the middle-thing problem is not confined
just to learning and memory. Similar gaps between molecules and behavior
exist for our understanding of many other complex cognitive and perceptual
phenomena.
I DON’T WANT    to end on a downer. Brain science has made huge progress in
identifying the molecular and cellular underpinnings of behavior and experi-
ence. In most cases, a complete gap-free explanation that flows from molecules
to behavior with an intermediate-level understanding of systems and circuits is
not yet in hand. But let’s talk about an example where it seems possible to find
the brain scientist’s holy grail—a gap-free explanation of a high-level process in
the brain. In this case, the process is a particular form of learning that involves
control of your eye muscles.
  Here I’m talking about a learning task that’s similar to the one given to Pav-
lov’s famous dog. You’ll recall that Pavlov’s dog had no particular response to
the sound of a bell and would reflexively salivate when presented with meat. Af-
ter many experiences when the bell was rung immediately before giving meat,
      the dog learned to associate these two stimuli such that the sound of the bell
      alone would cause the dog to salivate. Psychologists have named this simple
      form of learning classical conditioning. It is a type of nondeclarative memory.
      Now if you (or a rat, rabbit, or a mouse) are brought into the lab and you hear a
      bell (or some other innocuous sound) at moderate volume, you will have no
      particular behavioral response. If a brief puff of air is delivered to your eye, you
      will blink reflexively. You don’t think about it; it will just happen, as when the
      doctor taps your knee with the hammer to make your leg jump during a physi-
      cal exam. If, however, the bell tone is presented for a half-second or so and at the
      end of this tone a puff of air is directed to your eye, you will begin to learn to as-
      sociate the bell tone with the air puff just as Pavlov’s dog did for the meat and
      bell. What this means is that after many pairings of air puff and bell tone, you
      will blink your eye in response to the tone alone so that your eyelid is closed
      when the air puff would be expected to arrive. This form of learning, which is
      called associative eyelid conditioning, absolutely requires that the tone predicts
      the arrival of the air puff. If you experience tones alone or air puffs alone or even
      both air puffs and tones but delivered out of synch, you will not learn. After you
      learn this response, it is completely subconscious and beyond your control—
      you can’t help blinking when you hear the tone.
        Many labs all over the world have been working for a long time to under-
      stand how this simple form of learning happens, and a lot of progress has been
      made. We know, for example, that the air puff activates a small group of neu-
      rons in a portion of the brain called the inferior olive (yes, that’s really what
      it’s called—the early anatomists who named this stuff were given to flights of
      fancy). If, in a rabbit, you artificially activate this location in the brain with an
      electrode, it can replace the air puff during training. The bell tone, on the other
      hand, activates a group of cells in the brainstem that give rise to a set of axons
      called mossy fibers. In a way similar to what was found for the air puff, you can
replace the bell tone during training with artificial electrical activation of these
mossy fibers. So in order to store the memory for associative eyelid condition-
ing, the bell tone signals and the air puff signals must meet somewhere in the
brain, and when they arrive together (but not separately) they must produce a
change in the neural circuit that ultimately causes a blink in response to the bell
tone.
  Figure E.1 shows how this might occur. The tone and air puff signals are
both received in the cerebellum (that baseball-sized blob hanging off the back
of your brain that is important for motor coordination). In particular, these sig-
nals both excite a fan-shaped class of neuron called the cerebellar Purkinje cell.
The air puff signal comes directly through climbing fibers, but the bell tone sig-
nal comes indirectly: the mossy fibers excite cerebellar granule cells, and the
axons of the granule cells, called parallel fibers, in turn, excite Purkinje cells.
When the climbing fiber and parallel fibers are activated together, as occurs when
the bell tone and air puff are paired, and this is repeated many times, the re-
sult is a long-lasting decrease in the strength of those excitatory parallel fiber–
Purkinje cell synapses activated by bell tones; this is called cerebellar long-term
synaptic depression, or cerebellar LTD.
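  The induction rule can be boiled down to a few lines of toy Python. The weight value and step size are invented; the essential point is that the synapse weakens only on trials where the parallel fiber (tone) signal and the climbing fiber (air puff) signal arrive together.

    def run_trials(trials, weight=1.0, ltd_step=0.005):
        """Toy parallel fiber-Purkinje cell synapse: conjunctive activity weakens it."""
        for parallel_fiber_active, climbing_fiber_active in trials:
            if parallel_fiber_active and climbing_fiber_active:
                weight -= ltd_step * weight   # cerebellar LTD on paired trials only
        return weight

    paired = [(True, True)] * 300                     # tone followed by air puff, hundreds of times
    unpaired = [(True, False), (False, True)] * 150   # same stimuli, never together

    print(f"after paired training:   {run_trials(paired):.2f}")    # synapse strongly depressed
    print(f"after unpaired training: {run_trials(unpaired):.2f}")  # synapse unchanged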
  It turns out that we now know a lot about the molecular alterations that
underlie cerebellar LTD. The synapses are made weaker by changes on the
postsynaptic side that trigger the internalization of neurotransmit-
ter receptors, thereby rendering them unavailable to bind the neurotransmitter
(which in this case is glutamate) at the cell surface. We understand some details
of this process in excruciating molecular detail. For example, the main form of
the glutamate receptor at this synapse is composed of a chain of 883 amino ac-
ids, and the crucial molecular step in triggering internalization of the receptor is the
transfer of a phosphate group, by an enzyme called protein kinase C, onto amino acid
number 880, which happens to be a serine.
[Figure E.1 diagram labels: tone → pons (mossy fibers) → granule cells → parallel fibers → Purkinje cell; air puff → inferior olive → climbing fiber → Purkinje cell; LTD of parallel fiber–Purkinje cell synapses; Purkinje cell → deep nuclei, which drive the learned blink; the inferior olive pathway also drives the reflexive blink. Lower panels plot the tone, the air puff, the eyelid response, and activity in the deep nuclei before training (reflexive blink only), during training (hundreds of pairings), and after training (learned blink).]
figure E.1. A proposed circuit-level explanation for a simple form of learning called
            associative eyelid conditioning. See the text for more detail. After repeated
            pairing of a tone and an air puff, the animal learns that the tone is predic-
            tive of the air puff, and it will reflexively blink in response to the tone
            alone. Tone–air puff pairing is thought to produce long-term depression
            (LTD) of the excitatory parallel fiber–Purkinje cell synapse. This ulti-
            mately results in an increase in tone-driven activity in the deep nuclei, and
            it is this activity that drives the learned blink. Adapted from D. J. Linden,
            From molecules to memory in the cerebellum, Science 301:1682–1685
            (2003). Joan M. K. Tycko, illustrator.
  So how do we get from cerebellar LTD to a learned blink in response to tone?
When the parallel fiber synapse is depressed as a result of tone–air puff pairing,
it produces less excitation of the Purkinje cell. The Purkinje cell, in turn, fires
less. Because the Purkinje cell is inhibitory, the cells that receive contacts from
its axon are less inhibited and therefore fire more in response to tones. This oc-
curs in a particular place called the cerebellar interposed nucleus, where record-
ings of neural activity have shown that as rabbits learn the tone–air puff associa-
tion, firing rates in the interval between the start of the tone and the start of the
air puff gradually increase. What’s more, artificial stimulation of the appropri-
ate portion of the interposed nucleus can itself give rise to eye blinks.
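  Here, as toy Python with invented numbers, is that read-out chain: a weaker parallel fiber–Purkinje cell synapse means less Purkinje cell firing, less inhibition of the interposed nucleus, more tone-driven activity there, and, above some threshold, a learned blink.

    def tone_response(pf_weight, blink_threshold=0.5):
        """Toy read-out: Purkinje cells are inhibitory, so weakening their
        excitatory input disinhibits the interposed nucleus (invented numbers)."""
        purkinje_rate = pf_weight               # tone-driven Purkinje firing scales with synaptic weight
        interposed_rate = 1.0 - purkinje_rate   # less Purkinje firing -> less inhibition -> more activity
        blink = interposed_rate > blink_threshold
        return interposed_rate, blink

    print(tone_response(pf_weight=1.0))   # before training: little tone-driven activity, no blink
    print(tone_response(pf_weight=0.2))   # after tone-air puff pairing (LTD): learned blink to tone alone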
  Now, this is a model and, with further experiments, portions of it will be
proved incomplete or even wrong. But the exciting thing about this explana-
tion is that there’s no missing middle thing. Here is one of the very few exam-
ples in the brain where it is possible to go from a detailed molecular description
of a change at a synapse, through an anatomically well-defined wiring diagram,
to a high-level behavior, in this case, a form of nondeclarative memory. The
payback comes from being willing to study a behavior that’s simple (memory
for rules and procedures) as opposed to one where it is still too difficult to grasp
the middle thing (such as the problem of memory for facts and events).
  The holy grail of complete biological explanations for behavior is not in
hand, but it is emerging for some simple phenomena. We neurobiologists are
an optimistic lot by nature, but there is every reason to believe that our level of
understanding will continue to increase rapidly. Furthermore, it is very likely
that working out complete molecules-to-circuits-to-behavior explanations for
some simple forms of learning such as associative eyelid conditioning will yield
some general principles and insights that can then be applied to more complex
phenomena.
  So, the next time you hear some misguided congressman give a spittle-
      strewn rant about “how those ivory-tower pointy-headed scientists are spend-
      ing our tax dollars to figure out how a rabbit learns to blink,” you can fire off an
      e-mail explaining exactly why this line of work is crucial for understanding the
      molecular basis of cognition and diseases of memory: it’s a step in conquering
      the next great scientific frontier.
Further Reading and Resources
1. The Inelegant Design of the Brain
MATERIAL FOR A GENERAL AUDIENCE
Carter, R. 1998. Mapping the Mind. University of California Press, Berkeley. For my money,
  this is the best coffee-table book written about brain function to date. It’s clear, scien-
  tifically accurate, and has lovely illustrations.
Ramachandran, V. S., and Blakeslee, S. 1998. Phantoms in the Brain. William Morrow,
  New York. This is my favorite of the “illuminate higher brain functions through interest-
  ing neurological case studies” genre. It does a good job of blending the case studies with
  human laboratory experiments and a dash of philosophy and intellectual history.
SCIENTIFIC REPORTS AND REVIEWS
Blakemore, S. J., Wolpert, D., and Frith, C. 2000. Why can’t you tickle yourself? Neuro-
  Report 11:11–16.
Corkin, S. 2002. What’s new with the amnesic patient H.M.? Nature Reviews Neuroscience
  3:153–160.
Shergill, S. S., Bays, P. M., Frith, C. D., and Wolpert, D. M. 2003. Two eyes for an eye: the
  neuroscience of force escalation. Science 301:187.
Weiskrantz, L. 2004. Roots of blindsight. Progress in Brain Research 144:229–241.
      2. Building a Brain with Yesterday’s Parts
MATERIAL FOR A GENERAL AUDIENCE
      Nicholls, J. G., Wallace, B. G., Fuchs, P. A., and Martin, A. R. 2001. From Neuron to Brain,
        4th ed. Sinauer, Sunderland, MA. There isn’t much on molecular and cellular neuro-
        biology for a general audience. This, in my view, is the best college textbook on the sub-
        ject.
      3. Some Assembly Required
MATERIAL FOR A GENERAL AUDIENCE
      Ridley, M. 2003. Nature via Nurture. Harper Perennial, New York. A splendid, very well
        written book on the nature-versus-nurture debate in human brain development. The au-
        thor takes the reasonable middle path. A page turner, actually.
SCIENTIFIC REPORTS AND REVIEWS
      Bouchard, T. J., Jr., and Loehlin, J. C. 2001. Genes, evolution, and personality. Behavioral
        Genetics 31:243–273.
      Bradbury, J. 2005. Molecular insights into human brain evolution. PLoS Biology 3:E5. PLoS
        Biology is an open-access journal whose contents are available free to all at www.plos.org.
      Kouprina, N., Pavlicek, A., Mochida, G. H., Solomon, G., Gersch, W., Yoon, Y. H.,
        Collura, R., Ruvolo, M., Barrett, J. C., Woods, C. G., Walsh, C. A., Jurka, J., and
        Larionov, V. 2004. Accelerated evolution of the ASPM gene controlling brain size begins
        prior to human brain expansion. PLoS Biology 2:E126.
      Meyer, R. L. 1988. Roger Sperry and his chemoaffinity hypothesis. Neuropsychologia
        36:957–980.
      Verhage, M., Maia, A. S., Plomp, J. J., Brussaard, A. B., Heeroma, J. H., Vermeer, H.,
        Toonen, R. F., Hammer, R. E., van den Berg, T. K., Missler, M., Geuze, H. J., and Südhof,
        T. C. 2000. Synaptic assembly of the brain in the absence of neurotransmitter secretion.
        Science 287:864–869.
      4. Sensation and Emotion
MATERIAL FOR A GENERAL AUDIENCE
      Ramachandran, V. S., and Hubbard, E. M. 2003. Hearing colors, tasting shapes. Scientific
        American 288:52–59.
Stafford, T., and Webb, M. 2004. Mind Hacks. O’Reilly, Sebastopol, CA. This improbable,
   wonderful book from the computer book publisher O’Reilly is in a series of books with ti-
   tles such as Google Hacks and Linux Hacks. Although it is a bit strange to shoehorn a brain
   book into this “computer tips and tricks” format, the end result is a fascinating collection
   of exercises you can do at home that reveal aspects of brain organization. It’s particularly
   strong on sensory systems and is richly endowed with links to websites that support the
   “hacks” with Java applets, flash animation, and so on.
www.michaelbach.de/ot/index.html. This website shows 53 optical illusions, many of them
   animated. There is good commentary about the neural phenomena thought to underlie
   these illusions and references to the original scientific papers written about them.
www.prosopagnosia.com. This website about face-blindness is written by a woman named
   Cecilia Burman, who has this condition. It is particularly interesting for her descriptions
   of living with prosopagnosia and the strategies she uses to adapt in social situations.
SCIENTIFIC REPORTS AND REVIEWS
Beeli, G., Esslen, M., and Jancke, L. 2005. Synaesthesia: when coloured sounds taste sweet.
   Nature 434:38.
Eisenberger, N. I., and Lieberman, M. D. 2004. Why rejection hurts: a common neural
   alarm system for physical and social pain. Trends in Cognitive Science 8:294–300.
Nunn, J. A., Gregory, L. J., Brammer, M., Williams, S. C., Parslow, D. M., Morgan, M. J.,
   Morris, R. G., Bullmore, E. T., Baron-Cohen, S., and Gray, J. A. 2002. Functional mag-
   netic resonance imaging of synesthesia: activation of V4/V8 by spoken words. Nature
   Neuroscience 5:371–375.
Ramachandran, V. S. 1996. What neurological syndromes can tell us about human nature:
   some lessons from phantom limbs, Capgras syndrome, and anosognosia. Cold Spring
   Harbor Symposium in Quantitative Biology 61:115–134.
Ramachandran, V. S., and Hubbard, E. M. 2001. Psychophysical investigations into the
   neural basis of synaesthesia. Proceedings of the Royal Society: Biological Sciences 268:979–
   983.
Rizzolatti, G., and Craighero, L. 2004. The mirror-neuron system. Annual Review of Neuro-
   science 27:169–192.
Thilo, K. V., and Walsh, V. 2002. Chronostasis. Current Biology 12:R580–581.
Villemure, C., and Bushnell, M. C. 2002. Cognitive modulation of pain: how do attention
   and emotion influence pain processing? Pain 95:195–199.
Yarrow, K., and Rothwell, J. C. 2003. Manual chronostasis: tactile perception precedes
   physical contact. Current Biology 13:1134–1139.
      5. Learning, Memory, and Human Individuality
MATERIAL FOR A GENERAL AUDIENCE
      Le Doux, J. 2002. Synaptic Self. Penguin, New York. A well-argued book about the current
        state of research on the cellular basis of memory. It is particularly strong in considering the
        role of the amygdala in fear memory, which is the author’s specialty.
      Schacter, D. L. 2001. The Seven Sins of Memory. Houghton Mifflin, Boston. A wonderful,
        lucid book that describes the ways in which memory fails in healthy humans. These are
        explanations at the level of behavior and brain imaging, not the level of molecules and
        cells.
      Squire, L. R., and Kandel, E. R. 1999. Memory: From Mind to Molecules. Scientific American
        Library, New York. Although I may argue with some of the details in the cellular/molecu-
lar part of this book, there is no denying that it does a nice job of encompassing the sweep
        of modern memory research. Nicely illustrated in the Scientific American style.
SCIENTIFIC REPORTS AND REVIEWS
      Holtmaat, A. J., Trachtenberg, J. T., Wilbrecht, L., Shepherd, G. M., Zhang, X., Knott,
        G. W., and Svoboda, K. 2005. Transient and persistent dendritic spines in the neocortex
        in vivo. Neuron 45:279–291.
      Malenka, R. C., and Bear, M. F. 2004. LTP and LTD: an embarrassment of riches. Neuron
        44:5–21.
      Morris, R. G., Moser, E. I., Riedel, G., Martin, S. J., Sandin, J., Day, M., and O’Carroll, C.
        2003. Elements of a neurobiological theory of the hippocampus: the role of activity-de-
pendent synaptic plasticity in memory. Philosophical Transactions of the Royal Society of Lon-
        don, Series B, Biological Science 358:773–786.
      Nakazawa, K., McHugh, T. J., Wilson, M. A. and Tonegawa, S. 2004. NMDA receptors,
        place cells, and hippocampal spatial memory. Nature Reviews Neuroscience 5:361–372.
      O’Keefe, J., and Nadel, L. 1978. The Hippocampus as a Cognitive Map. Oxford University
        Press, Oxford.
      Zhang, W., and Linden, D. J. 2003. The other side of the engram: experience-dependent
        changes in neuronal intrinsic excitability. Nature Reviews Neuroscience 4:885–900.
      6. Love and Sex
MATERIAL FOR A GENERAL AUDIENCE
      Diamond, J. 1998. Why Is Sex Fun? Basic Books, New York. An excellent overview of human
        sexual physiology and behavior in the context of evolutionary biology.
Judson, O. 2003. Dr. Tatiana’s Sex Advice to All Creation. Owl Books, New York. This is the
  rarest of creatures: a science book that’s at once erudite, informative, and a hoot. Judson’s
  shtick is that she’s a sex-advice columnist for animals. And she uses this device to get
  at some rather sophisticated and subtle issues in the evolutionary biology of sex. Re-
  cently, this book has spawned a series of three television episodes featuring Dr. Tatiana,
  produced by Discovery Channel Canada. These feature elaborate costumed musical
  numbers, such as “Pocket Rocket” about the evolution of penis shape, that have to be seen
  to be believed.
Le Vay, S. 1993. The Sexual Brain. MIT Press, Cambridge. This is a clear account of the state
  of brain sex research by a prominent neuroanatomist. The problem is that it’s getting
  somewhat outdated because quite a bit has happened in the field since the time of its writ-
  ing, and an update is overdue.
SCIENTIFIC REPORTS AND REVIEWS
Allen, L. S., and Gorski, R. A. 1992. Sexual orientation and the size of the anterior com-
   missure in the human brain. Proceedings of the National Academy of Sciences of the USA
   89:7199–7202.
Arnow, B. A., Desmond, J. E., Banner, L. L., Glover, G. H., Solomon, A., Polan, M. L., Lue,
   T. F., and Atlas, S. W. 2002. Brain activation and sexual arousal in healthy, heterosexual
   males. Brain 125:1014–1023.
Bailey, J. M., Dunne, M. P., and Martin, N. G. 2000. Genetic and environmental influences
   on sexual orientation and its correlates in an Australian twin sample. Journal of Personality
   and Social Psychology 78:524–536.
Bartels, A., and Zeki, S. 2000. The neural basis of romantic love. NeuroReport 11:3829–
   3834.
Chuang, Y. C., Lin, T. K., Lui, C. C., Chen, S. D., and Chang, C. S. 2004. Tooth-brushing
   epilepsy with ictal orgasms. Seizure 13:179–182.
Holstege, G., Georgiadis, J. R., Paans, A. M., Meiners, L. C., van der Graaf, F. H., and
   Reinders, A. A. 2003. Brain activation during human male ejaculation. Journal of Neuro-
   science 23:9185–9193.
Hu, S., Pattatucci, A. M., Patterson, C., Li, L., Fulker, D. W., Cherny, S. S., Kruglyak, L.,
   and Hamer, D. H. 1995. Linkage between sexual orientation and chromosome Xq28 in
   males but not in females. Nature Genetics 11:248–256.
Karama, S., Lecours, A. R., Leroux, J. M., Bourgouin, P., Beaudoin, G., Joubert, S., and
   Beauregard, M. 2002. Areas of brain activation in males and females during viewing of
   erotic film excerpts. Human Brain Mapping 16:1–13.
Mustanski, B. S., Dupree, M. G., Nievergelt, C. M., Bocklandt, S., Schork, N. J., and
         Hamer, D. H. 2005. A genomewide scan of male sexual orientation. Human Genetics
         116:272–278.
      Pillard, R. C., and Weinrich, J. D. 1986. Evidence of familial nature of male homosexuality.
         Archives of General Psychiatry 43:808–812.
      Young, L. J., and Wang, Z. 2004. The neurobiology of pair bonding. Nature Neuroscience
         7:1048–1054.
      7. Sleeping and Dreaming
MATERIAL FOR A GENERAL AUDIENCE
      Martin, P. 2003. Counting Sheep: The Science and Pleasures of Sleep and Dreams. Flamingo,
        London. A longish, detailed read but worth the effort. Clearly written, comprehensive,
        and accurate.
Rock, A. 2004. The Mind at Night. Basic Books, New York. This book focuses more on
  dreams than on sleep as a whole. It relies heavily on interviews with a group of prominent
        sleep researchers. Rock is at her best when she is telling some of the personal stories behind
        the science.
SCIENTIFIC REPORTS AND REVIEWS
      Frank, M. G., Issa, N. P., and Stryker, M. P. 2001. Sleep enhances plasticity in the developing
         visual cortex. Neuron 30:275–287.
      King, D. P., and Takahashi, J. S. 2000. Molecular genetics of circadian rhythms in mam-
         mals. Annual Review of Neuroscience 23:713–742.
      Louie, K., and Wilson, M. A. 2001. Temporally structured replay of awake hippocampal en-
         semble activity during rapid eye movement sleep. Neuron 29:145–156.
      Nikaido, S. S., and Johnson, C. H. 2000. Daily and circadian variation in survival from
         ultraviolet radiation in Chlamydomonas reinhardtii. Photochemistry and Photobiology
         71:758–765.
      Pace-Schott, E. F., and Hobson, J. A. 2002. The neurobiology of sleep: genetics, cellular
         physiology, and subcortical networks. Nature Reviews Neuroscience 3:591–605.
Ribeiro, S., Gervasoni, D., Soares, E. S., Zhou, Y., Lin, S.-C., Pantoja, J., Levine, M., and
         Nicolelis, M. A. L. 2004. Long-lasting novelty-induced neuronal reverberation across
         slow-wave sleep in multiple forebrain areas. PLoS Biology 2:126–137. PLoS Biology is an
         open-access journal whose contents are available free to all at www.plos.org.
Siegel, J. M. 2005. Clues to the function of mammalian sleep. Nature 437:1264–1271. This
         review is highly critical of the REM sleep/memory consolidation hypothesis. Reading it
         together with Stickgold 2005 will give you both sides of the argument.
Stickgold, R. 2005. Sleep-dependent memory consolidation. Nature 437:1272–1278.
Wagner, U., Gais, S., Haider, H., Verleger, R., and Born, J. 2004. Sleep inspires insight. Na-
   ture 427:304–305.
8. The Religious Impulse
MATERIAL FOR A GENERAL AUDIENCE
Boyer, P. 2001. Religion Explained. Basic Books, New York. A cognitive anthropologist ex-
  amines the question “Why do we have religion at all?” from a cross-cultural and evolu-
  tionary perspective.
Brockman, J., ed. 2006. What We Believe but Cannot Prove: Today’s Leading Thinkers on Sci-
  ence in the Age of Certainty. Harper Perennial, New York.
Gazzaniga, M. S. 1998. The Mind’s Past. University of California Press, Berkeley. A fun, brief
  book laying out the case for a specific module in the left brain for interpreting disparate
  data and constructing narratives. Written in a conversational style, Gazzaniga’s book lets
  his quirkiness and wit shine through. Don’t believe the short shrift he gives to experience-
  dependent plasticity, though. In arguing (appropriately) against the blank-slate behavior-
  ist tradition, he got a bit carried away.
9. The Unintelligent Design of the Brain
MATERIAL FOR A GENERAL AUDIENCE
Brockman, J., ed. 2006. Intelligent Thought: Science versus the Intelligent Design Movement.
  Vintage, New York. This is a collection of essays by prominent scientists refuting the intel-
  ligent design model. The essay by Jerry Coyne is the best and most succinct argument
  based on the fossil record that I know.
Pennock, R. T., ed. 2001. Intelligent Design, Creationism, and Its Critics. MIT Press, Cam-
  bridge. This large volume is a good way to get started if you are highly motivated to hear
  the arguments on both sides of this contentious issue.
      Acknowledgments
I HAVE BEEN BLESSED to work in a stimulating and dynamic environment.
      This intellectual milieu has been central to shaping my thoughts as expressed in
      this book. First and foremost, I would like to thank my wife, Professor Eliza-
      beth Tolbert, who is, quite simply, the smartest and most interesting person on
      the planet. A true scholar and a fearless thinker, she has spent years pushing and
      prodding me to take a few baby steps outside of the scientific mainstream.
      Most of the ideas in this book have been stimulated by our ongoing discussions
(characterized by our friends with a Zen-like bent as “the sound of two rocks
crashing together”).
         A group of brilliant and friendly colleagues at The Johns Hopkins Univer-
      sity School of Medicine continue to make my work life a joy. In particular, I
      owe a debt of gratitude to the Neuroscience Lunch Crew: David Ginty, Shan
      Sockanathan, Alex Kolodkin, Rick Huganir, Dwight Bergles, Paul Worley, and
      Lunch Crew Emeritus Fabio Rupp, who have provided both intellectual and
      social support over many years. The people who have worked in my lab con-
      tinue to humble me with their insights, hard work, and friendship. Thanks
      to Kalyani Narsimhan, Kanji Takahashi, Carlos Aizenman, Christian Hansel,
Angèle Parent, Dorit Gurfel, Shanida Morris Nataraja, Jung Hoon Shin, Ying
Shen, Andrei Sdrulla, Yu Shin Kim, Wei Zhang, Roland Bock, Hiroshi
Nishiyama, Sang Jeong Kim, Sangmok Kim, and Joo Min Park.
  Many thanks to Sol Snyder, Department Director Extraordinaire, who has
been supportive in all things. This book was written during a lovely sabbatical
year. My academic home away from home was Wolfson College, University of
Cambridge. In particular, I would like to thank Ian Cross and Jane Woods for
going above and beyond the call of duty in welcoming me and my family to
Cambridge. Thanks also to the scientists of the Department of Physiology at
University College, London, and Paola Pedarzani in particular, for encouraging
and tolerating my frequent visits on seminar days.
  A number of good people provided advice and critiques of various parts
of the manuscript. I am indebted to Elaine Levin (Mom!), Keith Goldfarb,
Sascha du Lac, Eric Enderton, Steven Hsiao, Nely Keinanen, Herb Linden
(Dad!), Sue Reed, Julia Kim Smith, and The Prince of Dark Moods himself,
Adam Sapirstein. Many scientists took time from their busy schedules to pre-
pare figures or track down obscure information. My thanks to Niko Troje,
Kristen Harris, Anthony Holtmaat, Yao-Chung Chuang, Ullrich Wagner, Frank
Schieber, and that amazing web detective, Roland Bock.
  Many professionals in the publishing world have lent their talents to this
effort. Joan M. K. Tycko took my horrid sketches and half-baked graphic
ideas and transformed them into superb illustrations. Michael Fisher, Editor-in-
Chief at Harvard University Press, has been insightful and supportive through-
out the publishing process. Nancy Clemente labored hard to clean up my clunky
prose.
  Thanks to Cal Fussman for generously allowing me to use a quotation from
his interview with Bruce Springsteen, which originally appeared in Esquire mag-
      azine on August 1, 2005. And thanks to the University of Wisconsin Press for
      allowing me to reprint a quotation by Donald O. Hebb.
        Finally, I couldn’t go on without the love and inspiration of Jacob Linden
      and Natalie Linden.
Index
accidental design, 240–242                        ASPM gene, 60–61, 243
acetylcholine (neurotransmitter), 45, 47, 56,     association cortex, 18–19, 24
   206, 215–216                                   associative eyelid conditioning, 250–254
acetylcholinesterase (enzyme), 56                 ataxia, 9
adaptation, 93                                    Australopithecus africanus, 60
aggression, 16–17, 192, 219                       axon hillock, 32, 39–40
AIDS (acquired immune deficiency syndrome),       axons, 30–31, 42, 48, 73, 100–101
   178–179                                        axon terminals, 32, 40, 43
Alexander, Richard, 149
alga Chlamydomonas reinhardtii, 206
Allen, Laura, 179                                 Baron-Cohen, Simon, 158
alternative splicing, 52                          Bartels, Andreas, 162–164
amnesia: anterograde, 116, 132–133, 226; exper-   Begin, Menachem, 185
   imental, 121; hippocampal, 109–112, 117;       behavior, molecular/cellular underpinnings of,
   retrograde, 116                                   248–254
AMPA-type glutamate receptor, 135, 137            behavioral differences, between men and women,
amygdala, 16–17, 100, 164, 215                       157–162
androgen-insensitivity syndrome, 159              Behe, Michael, 240, 242, 245
animal experiments, 7, 18, 67–71, 77, 134, 167–   benzodiazepine class of anti-anxiety drugs, 201
   169, 192, 195, 215–216. See also mice; mon-    bias: in memory retrieval, 123–126; in sensory
   keys; rats                                        systems, 92–97
anterior cingulate cortex, 101–104, 163, 165,     birth, and brain development, 72–74
   215                                            bisexuality, 173–182
anterior commissure, 155, 179, 226–227            blending, of sensory information, 89–92
antidepressant drugs, 201                         blindness, 75
anxiety, in dreams, 211–213, 215, 217, 219        blindsight, 14, 99
Arnow, Bruce, 165                                 Bliss, Tim, 133–134
Aserinsky, Eugene, 189                            body movement, 18
      body position, 84–86                                brain research, applied to education, 78–80
      Born, Jan, 196                                      brains, preserved, of famous historical figures,
      Boyer, Pascal, 222, 230                               25–26
      brain, female: and gender identity, 155;            brain shape, and cognitive ability, 25
        masculinization of, 180–181; and orgasm,          brain size, 24, 72–73, 155; and cognitive ability,
        169–173                                             24–25; genetic factors, 59–61
      brain, human: energy use, 33–35; as kludge, 6,      brainstem, 7–9
        22, 48–49, 240–242, 245–246; as test of intel-    brain stimulation studies, of orgasm, 171
        ligent design, 238–242                            brain structure: archaic features, 14–15; and
      brain, male: feminization of, 180–181; and gen-       brain growth, 72–73; development of, 61; and
        der identity, 155; and orgasm, 169–173              gender differences, 159–160; and gene struc-
      brain activity during sleep, 214–216                  ture, 243; and sexual orientation, 178–180
      brain-based education, 78–80                        brain systems, “always on,” 13, 229–230
      brain complexity, 28                                brain temperature, and sleep cycle, 194
      brain-critical periods, in higher cognitive pro-    “brainwashing,” 184–187
        cesses, 77–80                                     Brockman, John, 232
      brain design: and evolution, 6, 21–22, 26–27 (see   Brown, Lucy, 164
        also evolution); flaws in, 13; inefficiency of,   Bushnell, Catherine, 102
        80–81; inelegance of, 5–6 (see also kludge,
        brain as); and memory storage, 126–132,
        143–144 (see also memory storage); principles     calcium ions: and NMDA receptor, 135–141;
        of, 21–24; and sexual behaviors, 149–153 (see        role in electrical signaling, 43
        also brain, female; brain, male; sexual behav-    calmodulin (messenger molecule), 60, 243
        iors)                                             CaMKII (calcium/calmodulin protein kinase II
      brain development, 50–52, 150; at birth, 61, 72–       alpha), 137, 140
        74; postnatal, 73–80; prenatal, 59–72; and        Capgras syndrome, 98–99, 173
        sleep cycle, 194–195; and “wiring,” 67–72,        capsaicin, 247–248
        80, 144. See also nature-nurture debate; neural   Cartwright, Rosalind, 216–217
        plasticity                                        case studies: H.M., 17–19; Phineas Gage, 19–21
      brain evolution, 6, 21–22, 26–27. See also evolu-   Catholic church, 234
        tion                                              cats, and sleep research, 192, 195
      brain function: automatic (subconscious), 14;       caudate/putamen, 164–165
        and gender differences, 160–162; localized to     causality, in brain, 55
        brain region, 21–24; and religious thought,       cell body (neuron), 29
        225–232                                           cell division, in brain development, 59
      brain imaging studies: of men during orgasm,        cell lineage, in neuronal diversity, 64–65
        170; on synesthesia, 91. See also fMRI; PET       cell nucleus (neuron), 29
        scanning                                          cerebellar interposed nucleus, 253
      brain injury, 7, 64, 109–112; amygdala, 17, 114;    cerebellum, 7, 9–13, 61, 164–165, 170–171,
        brainstem, 9, 201; cerebellum, 9, 12, 114; cor-      251
        tex, 19–21; hippocampal system, 116, 248;         cerebral cortex, 155, 164
        hippocampus, 17–18; midbrain, 14–15; pain         cerebral palsy, 64
        pathway, 101; primary visual cortex, 99–100;      cerebrospinal fluid, 33–34
        temporal lobe, 98–99, 171; visual system, 88.     chance, in function of synapses, 44
        See also amnesia                                  chemical signaling in brain, 32–33
child abuse, accusations of, 126                     curare, 45
childhood, long, 81                                  cyanobacteria, 205
child rearing, 149–150, 152; paternal involve-       cytokines, 66
   ment in, 146–148
children, suggestibility of, 126
chimpanzee, 60–61                                    Dalai Lama, 234
chloride ions, role in electrical signaling, 45      Darling, Sir Frank, 148
Cho, Margaret, 146                                   dead phone illusion, 97
cholinergic neurons, 206–207, 214                    deaf humans, and brain wiring, 71
chronostasis, 97                                     death: during childbirth, 151; from sleep depri-
Chuang, Yao-Chung, 171                                 vation, 186–187
circadian clock, 203–206                             declarative memory, 109, 114–117, 121, 249
classical conditioning, 110–111, 249–250             delay, in awareness of sensory information, 96
climbing fibers, 251                                 delayed matching to sample task, 117–119
clonazepam (Klonopin), 192                          Delbrück, Max, 4
Cnidaria, 29                                         Dembski, William, 238, 245
cocaine, 164                                         dendrites, 29, 65, 73
cognitive ability, variation in, 24–26               dendritic spines, 29
cognitive style, gender differences in, 157–162      DES (diethylstilbestrol), 158–159
cold/menthol receptors, 248                          Diamond, Marion, 75–77
“cold,” perception of, 248                           DNA, 56–58, 176–178. See also genes, human;
complexity, irreducible, and intelligent design,       genetic factors; human genome
   238–242                                           dopamine (neurotransmitter), 47, 119
conditioned place aversion, 102–103                  dorsal raphe (brain region), 207, 215–216
confabulation, 226                                   dreaming, 213–219. See also dreams
congenital adrenal hyperplasia, 158–159, 181         dream interpretation, 207–208, 220
continuity, in sensory processing, 94–97             dream journals, 209–210, 212
coordination of movement, 9                          dream research, 188
corpus callosum, 155, 159–160, 226–227               dreams: function of, 216–219; meaningful/sym-
cortex, 7, 18–21, 61, 64–65, 80, 116; anterior         bolic, 207–208, 219–220; narrative, 211–
   cingulate, 101–104, 163, 165, 215; associa-         213, 229–232 (see also religious thought); sex-
   tion, 18–19, 24; cerebral, 155, 164; frontal,       ual, 212; sleep-onset, 210
   19, 21, 166, 170; left, 227–229; motor, 18;       drugs. See names of drugs
   occipital, 166; parietal, 170; prefrontal, 119,   drug use, maternal, 66–67
   215; primary auditory, 84; primary
   somatosensory, 84–86, 101; primary visual,
   83–84, 214; right, 227–229; somatosensory,        ECT (electroconvulsive shock treatment), 119–
   10; temporal, 166, 170                               121
cortisol (hormone), 186                              ectoderm, 59
Coyne, Jerry, 236                                    edge enhancement, in visual system, 93
cranial nerves, 7                                    EEG recordings of sleep, 189
cross-dressing, 153                                  Einstein, Albert, 44; brain of, 25–26
cultural war, between science and religion, 232–     Eisenberger, Naomi, 103
   234                                               ejaculation, male, 169–170
culture, and gender identity, 154                    Elavil (antidepressant), 201
electrical activity, and brain wiring, 74–75          Fierstein, Harvey, 174
      electrical signaling in brain, 32–44                  flagellum, 239
      embryonic disk, 59                                    fMRI (functional magnetic-resonance imaging),
      emotion: and amygdala, 16–17; and dreams,                10, 104
         212, 215, 219; and limbic system, 16; and          food preferences, and nature-nurture debate,
         memory, 107–109, 122–123; and pain, 100–               54
         104; and perception, 97–104                        forgetting curve, 119
      empathy, 103–104                                      Foulkes, David, 230
      energy conservation, and sleep, 187–188               founder effect, 79
      engineering of brain. See brain design                Freud, Sigmund, 208
      environmental deprivation, 77                         frog brain, 21–22
      environmental enrichment, 77, 79–80                   frogs, 67–70
      environmental factors, 52, 80–81; in brain devel-     frontal cortex, 19, 21, 166, 170
         opment, 52, 66, 74–80; in brain wiring, 71–        fruit fly, 60
         72. See also nature-nurture debate                 fugu (pufferfish), 41
      enzymes, 56; acetylcholinesterase, 56; protein        fundamentalism, religious, 233–238
         kinase C, 251                                      fungi, 205
      epigenetic factors: in brain development, 52 (see
         also nature-nurture debate); and sexual orien-
         tation, 178                                        GABA (gamma-aminobutyric acid), 45, 47, 93,
      epilepsy, 17, 64, 112–114, 226–227                       192
      EPSP (excitatory postsynaptic potential), 39–40,      Galton, Francis, 90
         44                                                 Gardner, Randy, 186
      “escape from light” hypothesis, 205–206               gays, 173–182
      estrogen (hormone), 58, 167                           Gazzaniga, Michael, 227, 229
      evolution: of brain design, 6, 21–22, 26–27; of       gender dysphoria, 153
         brain size, 24, 61; of circadian clock, 205–206;   gender identity: development of, 153–160; di-
         of dreams/dreaming, 217; and intelligent de-          versity of, 173–174. See also sexual orientation
         sign, 235–246; and memory storage, 143–            gene expression, 56–59
         144; of neurons, 29; and REM sleep, 192–           general intelligence: genetic factors, 175, 177;
         194; and sensory processing, 92–97; and sex-          lack of gender difference, 157; tests of, 53–54
         ual behavior, 152                                  genes, human, 51–52, 56; ASPM gene, 60–61,
      evolutionary biology, 233–238                            243; homeotic, 61–64
      excitation (in neural signaling), 44                  genetic factors: in brain development, 52, 59–61,
      experience: and gender identity, 154; molecular/         80–81 (see also nature-nurture debate); in gen-
         cellular underpinnings of, 248–254                    der differences, 157–162; in general intelli-
                                                               gence, 175, 177; in sexual orientation, 175–
                                                               182
      face-blindness (prosopagnosia), 88                    genetic specification, in brain wiring, 71–72
      false memories, 125–126                               genitals, and tactile sensitivity, 84–86
      fear: amygdala and, 16–17; in dreams, 211–213,        genome, and evolution, 108, 242–243. See also
         215, 217, 219                                         human genome
      females, and sexual behavior, 167–168                 glial cells, 28; radial glia, 64
      fetal vision, 70                                      glutamate (neurotransmitter molecule), 35–37,
      fight-or-flight responses, 17, 100                       44–47, 65, 103, 251
glutamate receptor proteins, 37–39, 56                hypnotic suggestion, and pain modulation,
glycine (neurotransmitter), 45                          102
gorilla, 61                                           hypothalamus, 15–16, 166, 173; lateral nucleus,
Gorski, Roger, 179                                      15; medial preoptic region, 168–169; SCN
Granholm, Jackson, 6                                    (suprachiasmatic nucleus), 203, 206–207;
granule cells, 251                                      ventromedial nucleus, 15, 167
gray matter, 60
Green, Richard, 180
growth hormone releasing hormone, 16                  immune system, mother’s, 66
guidance molecules, in brain development, 67–         Imperato-McGinley, Julianne, 159
   70                                                 implicit memory. See nondeclarative memory
                                                      INAH3 (interstitial nucleus of the anterior hypo-
                                                         thalamus number 3), 155, 168, 178–179
hallucinations, 186, 215                              individuality, human, and brain development, 81
Hamer, Dean, 177                                      inferior olive, 250
Hartmann, Ernest, 217                                 inhibition (in neural signaling), 44–45
hearing, 13–14, 71, 84                                inhibitory synaptic drive, and REM sleep, 191–
“heat,” perception of, 247–248                           192
Heisenberg, Werner, 44                                insight, sleep-inspired, 196–197
heroin, 164                                           insula (brain region), 101, 163, 165
higher cognitive processes, brain-critical periods,   intelligence testing, 25, 53–54
   77–80                                              intelligent design, and evolution, 235–246
Hines, Melissa, 159                                   interview studies, of sexual orientation, 180
hippocampal system, 112                               intrinsic plasticity, 143
hippocampus, 16–18, 248–249; and tests of             ion channel, 38–40, 56
   memory storage, 132–143                            IPSP (inhibitory postsynaptic potential), 45
Hobson, J. Allen, 219
Hogg, Andrew, 186
Holstege, Gert, 170                                   Jacob, François, 6
homeostasis, 15                                       James, William, 53
homeotic genes, 61–64                                 Jäncke, Lutz, 89
homosexuality, 148, 173–182                           jellyfish, 29
hormones: circulating, 66; cortisol, 186; estro-      jet lag, 204–205
   gen, 58, 167; growth hormone releasing hor-        John Paul II, Pope, 235–237
   mone, 16; master, 16; oxytocin, 173; proges-       Johnson, Carl, 206
   terone, 167; secreted by hypothalamus, 16;         Johnson, Philip E., 237–238, 245
   testosterone, 158, 168–169; thyroid, 58;           Jouvet, Michel, 192
   vasopressin, 16
“housekeeping genes,” 56
Howe, Elias, 196                                      Karama, Sherif, 166
Hubbard, Edward, 91                                   Kekulé, Friedrich, 196
human genome, 56                                      Kleitman, Nathaniel, 189
Human Genome Project, 51–52                           Klinefelter’s syndrome, 176
hunger, 15                                            kludge, brain as, 6, 22, 48–49, 240–242, 245–
hydraulic analogy, for electrical signaling, 42          246
      language: acquisition of, 77–78; and mirror neu-      nondeclarative, 112, 114–116, 121, 250;
         rons, 105                                          short-term, 119–121; taxonomy of, 109;
      lateral inhibition, 93                                working, 117–119
      lateral nucleus (of hypothalamus), 15               memory consolidation, 119–121, 123, 197, 208,
      learning: and memory, 133, 138–143, 248–249;          217–219
         and sleep deprivation, 196–197; tests of, 250–   memory duration, 116–121
         254                                              memory integration, 218–219
      left cortex, and split-brain operation, 227–229     memory localization, 112–116
      Lenin, V. I., brain of, 25                          memory retrieval, 121–127
      lesbians, 173–182                                   memory storage, 112–116, 126–132, 195–202;
      LeVay, Simon, 178–179                                 tests of, 132–143
      limbic system, 16                                   menopause, 147
      Linné, Carl von, 205                                mental function, brain’s creation of, 48–49
      localization, of brain functions, 21–24             mental retardation, 64
      local signals, in neuronal diversity, 64–65         mice, 70–71, 132, 138, 141
      locus coeruleus, 207, 215–216                       microcephaly, 60–61, 243
      Lomo, Terje, 133–134                                midbrain, 13–15, 61, 67
      Louie, Kendall, 197–200                             mirror neurons, 105
      love, neurobiological basis for, 162–166            mirror reading, 109–110
      LTD (long-term synaptic depression), 134–143,       misattribution, and memory retrieval, 123–124
         248–249, 251                                     mitotic spindle, 60, 243
      LTP (long-term synaptic potentiation), 134–         molecular genetics, 55–59
         143, 248–249                                     monkeys: and female sexual circuit, 167–168;
                                                            and memory tasks, 117–119
                                                          monoamine oxidase inhibitors, 201
      macaque monkey, 61                                  monogamy, 146–147
      mahu, 154                                           mood regulation, and dreaming, 216–218
      males, and sexual behavior, 168–169                 mossy fibers (of the cerebellum), 250–251
      malnutrition, maternal, 80                          mother-infant bond, 173
      master hormones, 16                                 motor coordination learning, 110
      masturbation, 148                                   motor cortex, 18
      mathematical reasoning, and gender, 157             motor function, and sensation, 104–105
      McCartney, Paul, 196                                mutation, random, and “specified complexity,”
      M cells (of the visual system), 86–89                 239–240
      medial preoptic region (of the hypothalamus),       myelin secretion, 72–73
        168–169
      melanopsin-positive ganglion cells, 203–205
      melatonin, 206                                      Nadel, Lynn, 141
      membrane potential, 39                              narrative creation, propensity for, and religious
      memory: and brain development, 81; declarative,       thought, 225–232. See also under dreams
        109, 114–117, 121, 249; and emotion, 107–         Native American culture, 154
        109, 122–123; for facts and events, 17–18;        nature-nurture debate, 53–59, 80–81; and gen-
        false, 125–126; and hippocampus, 17; and            der differences, 157–162; and sexual orienta-
        learning, 133, 138–143, 248–249; and limbic         tion, 173–182
        system, 16; long-term, 119–121;                   Neanderthal man, 24
neural Darwinism, 74–75                                ovulation, 147; concealed, 149, 151
neural plasticity, 75–80                               oxytocin (hormone), 173
neural plate, 59
neural tube, 59, 61, 64
neurology, 97–98                                       pain, and emotion, 100–104
neuronal activity, and brain wiring, 70–71             pain asymbolia, 101–102, 173
neuronal cell culture, 35                              pair bonds, 146–147, 149–151, 173
neuronal diversity, 64–65                              parallel fibers, 251, 253
neuronal migration, 64                                 paralysis, limp muscle, in REM sleep, 191, 216
neuronal plasticity, 80–81                             paranoia, caused by sleep deprivation, 186
neurons, 28–29; and accidental design, 242;            parietal cortex, 170
   cholinergic, 206–207, 214; mirror, 105; num-        parietal lobe, 87
   ber of, 73–74, 81; requiring multiple simulta-      paternal involvement in child rearing, 146–150
   neous synapses, 45; role in electrical signaling,   paternity, of offspring, 147–148, 151–152
   34–44; temporal limits of spike firing, 43, 48      Pavlov’s dog, 249–250
neurosurgery, 112–114                                  P cells (of visual system), 86–89
neurotoxins, 40–41                                     Penfield, Wilder, 112–114
neurotransmitter receptors, 33, 56; glutamate re-      perception/emotion distinction, 97–104
   ceptor proteins, 37–39, 56; NMDA-type glu-          personality change, from damage to cortex, 19–
   tamate receptor, 135–141, 248                          21
neurotransmitters, 32, 37, 47; acetylcholine, 45,      PET scanning (positron emission tomography),
   47, 56, 206, 215–216; dopamine, 47, 119;               170, 214–216
   fast, 45–46; glutamate, 35–37, 44–47, 65,           phenelzine (Nardil), 201
   103, 251; glycine, 45; noradrenaline, 46–47,        phrenology, 22
   215–216; slow-acting, 46                            pineal gland, 206
nicotine use, maternal, 66–67                          Pittendrigh, Colin, 205
Nikaido, Selene, 206                                   pituitary gland, 173
NMDA-type glutamate receptor, 135–141, 248             place cells, 141, 197–200
nondeclarative memory, 112, 114–116, 121,              plagiarism, 124
   250                                                 plasticity: neural, 75–80; neuronal, 80–81; syn-
non-REM sleep, 189–194, 200, 207, 210–211                 aptic, 49, 143
Noonan, Katherine, 149                                 playback of memories, 197–200, 212–213
noradrenaline (neurotransmitter), 46–47, 215–          pleasure, sensations of, 164. See also orgasm
   216                                                 Pliny the Elder, 205
                                                       police line-ups, 125
                                                       polyandry, 147
occipital cortex, 166                                  polygenic traits, 177
O’Keefe, John, 141                                     polygyny, 147
olfaction, and sexual behavior, 168                    Polynesian culture, 154
optical illusions, 93, 96–97                           pontine tegmentum, 214
oral-genital stimulation, 148                          postmortem studies, 178–179
orangutan, 61                                          post-orgasmic afterglow, 173
orgasm, 169–173                                        potassium, in cerebrospinal fluid, 34–35
outer membrane (plasma membrane) (neuron),             potassium ions, role in electrical signaling, 38–40
   29                                                  PP1 (protein phosphatase 1), 137
      precursor cells (neuronal precursor cells), 59, 64–   reward centers, 164, 170
         65                                                 right cortex, and split-brain operation, 227–
      prefrontal cortex, 119, 215                              229
      primary auditory cortex, 84                           ritual practice, cross-cultural, 231
      primary somatosensory cortex, 84–86, 101              Rizzolatti, Giacomo, 104
      primary visual cortex, 83–84, 214                     Rock, Andrea, 230
      priming, 111–112                                      roundworm, 50–51, 60
      progesterone (hormone), 167
      promoters (control regions of genes), 58
      prospective study, of sexual orientation, 180–181     saccades, 94–97, 225
      protein kinase C (enzyme), 251                        Sacks, Oliver, 88
      proteins, 56                                          Sagan, Carl, 28
      protein synthesis inhibitors, 121                     Santorum, Senator Rick, 238
      Prozac (antidepressant), 47, 201                      Schacter, Daniel, 123
      pseudogenes, 243                                      science, and religious thought, 232–234
      psychiatry, 97–98                                     science/engineering, under-representation of
      psychotherapy, 208, 217–218                              women in, 160–162
      Purkinje cells (of the cerebellum), 251, 253          “scientific creationism,” 236
      pyramidal cells, 65, 141                              scientific theory, and intelligent design, 235–238
                                                            SCN (suprachiasmatic nucleus) (of hypothala-
                                                               mus), 203, 206–207
      rabbits, 134                                          sea sponges, 29
      Ramachandran, V. S., 91                               selection, and “specified complexity,” 239–240
      range, of sensory information, 92                     selectionist theory, 74–75
      rat brain, 22                                         self-tickling, 10–12
      rats, 77; and gender differences, 159; and mem-       sensation, and motor function, 104–105
         ory testing, 120–121, 133, 138, 141; and pain      sensations, expected, cerebellum and, 9–13
         processing, 102–103; and sexual orientation,       sense of humor, and nature-nurture debate, 54
         178; and sleep research, 186, 197, 200             sensory homunculus, 84–86
      receptors: AMPA-type glutamate receptor, 135,         sensory information: blending of, 89–92; cortex
         137                                                   and, 18; range of, 92; selectivity of, 82–83
      recording electrodes, 35                              sensory systems: and brain development, 67–70;
      recreational sex, 146, 149, 151                          mapping of external world, 83–86, 142–143;
      recycling, of neurotransmitters, 47                      and social interaction, 103–104
      reflexes, controlled by brainstem, 9                  septum, 171
      release site, 43                                      serial monogamy, 147
      religion, fundamentalist, 233–238                     serotonin, 215–216
      religious thought: cross-cultural, 222–223; as in-    sex drive, hypothalamus and, 16
         dividual phenomenon, 232; origin of, 224–          sex hormones, 158–159
         225; science and, 232–234                          sexual arousal, 165–166
      REM (rapid eye movement) sleep, 189–194,              sexual behaviors, 148; animal studies, 167–169
         197–202, 207; and dreaming, 209, 211–213           sexual orientation, 173–182
      REM sleep behavior disorder, 192                      sexual promiscuity, 146
      reticular activating system (brainstem), 206–207,     shivering reflex, 15
         214–216                                            sibling studies, 175. See also twin studies
signal relay, thalamus and, 15                      Summers, Larry, 160–162
Skinner, B. F., 53                                  Svoboda, Karel, 132
sleep: non-REM, 189–194, 200, 207, 210–211;         symmetry of brain, 7
   physiological functions of, 187–188; REM         synapses, 30–33; number of, 32; probabilistic
   (rapid eye movement), 189–194, 197–202,            function of, 44, 48
   207, 209, 211–213                                synaptic cleft, 32
sleep cycle, 189–191, 206–207, 211; changes in,     synaptic competition, 74
   192–194; function of, 194–196                    synaptic connections, and synesthesia, 91
sleep deprivation, 184–187, 195; and learning,      synaptic depression, 130
   196–197; non-REM sleep, 200; REM sleep,          synaptic function, experience-dependent modi-
   197                                                fication of, 129–131
sleep research, 188–191                             synaptic nametags, 67–69, 81
sleepwalking, 192                                   synaptic plasticity, 49, 143
snails, 29                                          synaptic potentiation, 130
social drives, hypothalamus and, 16                 synaptic strength, and memory storage, 129–
social interaction, and sensory systems, 103–         131
   104                                              synaptic structure, changes in, and memory stor-
sodium, in cerebrospinal fluid, 33–35                 age, 131–132
sodium ions, role in electrical signaling, 38–40    synaptic vesicles, 32, 43
somatosensory cortex, 10                            synesthesia, 89–92
source misattribution, and memory retrieval,
   123–124
spatial learning, as test of memory storage, 133,   tactile form perception, 86
   138–143                                          temporal cortex, 166, 170
spatial skills, and gender, 157                     testosterone (hormone): and male sexual behav-
Sperry, Roger, 67, 227                                 ior, 168–169; prenatal, 158
spike (electrical signal), 32, 39–40, 127           tetrodotoxin, 41
spike firing: and memory storage, 127–129; pat-     thalamus, 15, 101, 206
   tern of, 43, 48                                  theory of mind, 24, 105
Spitzer, Robert, 181–182                            thirst, 15
splenium (subregion of corpus callosum), 159–       thyroid hormone, 58
   160                                              tickling, 10–12
split-brain patients, 226–229                       timing: of REM sleep, 197; of sexual intercourse,
SRE (gene promoter), 58                                146–147; of sleep-wake cycle, 203–206
SRF (a transcription factor), 58                    tit-for-tat experiment, 12–13
SSRIs (serotonin-specific reuptake inhibitors),     torture by sleep deprivation, 184–187
   201                                              touch, 84–86, 102
Stickgold, Robert, 195, 210                         transcription factors, 58, 61
stopped-clock illusion, 96–97                       tricyclic antidepressants, 201
stress: maternal, 80; from REM sleep depriva-       twins, and brain development, 66
   tion, 200–201                                    twin studies, 53–55; identical twins, 53–55, 60;
strychnine, 45                                         nonidentical twins, 55, 60; of sexual orienta-
subconscious, and dreams, 208                          tion, 175–176
Südhof, Thomas, 70                                  two-spirit (Native American cultural practice),
suggestibility, and memory retrieval, 123–126          154
      Uncertainty Principle (Heisenberg), 44              Warrington, Elizabeth, 111
                                                          weight loss, 15
                                                          Weinrich, James, 180
      Valium, 201                                         Weiskrantz, Larry, 111
      vanilloid receptors, 248                            “what” pathway, in visual system, 87–89
      vasopressin (hormone), 16                           “where” pathway, in visual system, 87–89
      ventral tegmental area, 164, 166, 170               White House Conference on Early Brain Devel-
      ventricles, 59                                        opment (1997), 78–79
      ventromedial nucleus (of hypothalamus), 15,         white matter, 60, 73
         167                                              Wilson, Matt, 197–200
      verbal fluency, and gender, 157                     Winson, Jonathan, 217
      Versed (a sedative drug), 201                       Wolpert, Daniel, 10, 12
      vision, midbrain and, 13–14                         working memory, 117–119
      visual object agnosia, 88                           worms, 29
      visual system, 80; and circadian clock, 203–205;
         and coherent image, 225; development of, 67–
   70, 75; and edge enhancement, 93; and map        Xanax (anti-anxiety drug), 201
         of visual world, 83–84; organization of, 86–89   X chromosome, and male sexual orientation,
      vital functions, controlled by brainstem, 9           176–178
                                                          XXY genotype, male, 176
      wakefulness and sleepiness, brainstem and, 9. See
        also sleep cycle                                  Zeki, Semir, 162–164