Intelligent Buildings International
iFirst article 2012, 1–22

COMMENTARY
Brains, machines and buildings: towards a neuromorphic architecture
Michael A. Arbib
USC Brain Project, University of Southern California, 3641 Watt Way, Los Angeles 90089-2520, USA

We introduce neuromorphic architecture, exploring ways to incorporate lessons from studying
real, biological brains to devise computational systems based on the findings of neuroscience
that can be used in intelligent buildings, adding a new biologically grounded perspective to the
more general view that future buildings are to be constructed as perceiving, acting and adapting
entities. For clarity, the term ‘brain’ is reserved for the brains of animals and humans in this
article, whereas the term ‘interactive infrastructure’ refers to the analogous system within a
building. Key concepts of neuroscience are presented at sufficient length to support
preliminary analysis of the possible influence of neurobiological data on the design and
properties of interactive infrastructures for future buildings. Ada – the intelligent space, a
pavilion visited by over 550,000 guests at the Swiss National Exhibition of 2002, had an
interactive infrastructure based (in part) on artificial neural networks (ANNs), had
‘emotions’ and ‘wanted’ to play with her visitors. We assess the extent to which her design
was indeed grounded in neuroscience. Several sketches for rooms that exemplify
neuromorphic architecture are used to demonstrate the way in which research on how its
brain supports an animal’s interactions with its physical and social world may yield brain
operating principles that lead to new algorithms for a neuromorphic architecture that
supports the ‘social interaction’ of rooms with people and other rooms to constantly adapt
buildings to the needs of their inhabitants and enhance interactions between the people who
use them and their environment.
Keywords: action; Ada – the intelligent space; ANFA; brains; cooperative computation;
emotion; interactive infrastructure; neural networks; neuromorphic architecture;
neuroscience; perception; smart building; social interaction

Introduction
The use of the term ‘brains’ in this article is literal – I seek to understand how the findings of
neuroscience concerning the brains of animals, including humans, may impact on the future
design of intelligent buildings. In promoting such understanding, the Academy of Neuroscience
for Architecture (ANFA, www.anfarch.org) has emphasized the neuroscience of architectural
experience that studies architecture in terms of the impact of the built environment on the
human brain: what is it about a designed space that affects the human brain and how might under-
standing the response of the brain lead us to improvements in architecture in the future? However,
the emphasis of this article is on what I call neuromorphic architecture,1 a complementary
approach incorporating brain functions into buildings. For clarity, I reserve the term ‘brain’ for
the brains of animals and humans in this article, and use the term ‘interactive infrastructure’ for
the analogous systems within a building. There is now a great deal of work on interactive infra-
structure reported both in this journal and elsewhere, but it appeals very little to the findings of
neuroscience. What happens if our knowledge of the structure and function of brains informs
our design of perception, control and communication systems for buildings, so that these
systems are based on brain operating principles rather than ad hoc computational designs? The
present article points the way forward, reviewing key concepts in neuroscience that are less
studied within the intelligent buildings community, and offers several sketches – by no means
completed projects – of how these concepts may be incorporated in the interactive
infrastructures of future buildings. I am not an architect – but I am keenly interested in architec-
ture. The core of my own work has been the use of computers and mathematics to explore how the
brain supports vision, action, cognition and language and how it has evolved to do so (e.g.
Arbib et al. 1998, Arbib 2012). This informs the view of neuromorphic architecture, which I
offer here.
However, first, let us consider the neuroscience of architectural experience. In his book, Brain
Landscape, John Paul Eberhard’s (2008) thesis was that we use neuroscience to establish a frame-
work for decision making in the design process of architecture, focusing on such factors of
people’s reactions as: How could we reduce stress? How could we improve cognition, whether
in an educational environment or at a home for people with Alzheimer’s disease? How could
we increase productivity, whether in a factory or a research environment? And, if designing a
church, how could we increase the sense of awe and inspiration that could be offered? As a
recent contribution, Eduardo Macagno and Eve Edelstein and their colleagues use wireless moni-
toring of human responses in precisely controlled virtual reality representations of health care
environments. Lelin et al. (2010) present a head-mounted system for measuring electrooculogram
and electroencephalogram activity in human subjects interacting with and navigating in the Calit2
StarCAVE, a five-sided immersive 3-D visualization virtual reality environment. The system can
be easily expanded to include other measurements, such as cardiac activity and galvanic skin
response. They demonstrate the use of the system for estimating eye movements in response to
the presentation of a set of dynamic visual cues in the StarCAVE and the monitoring of neurologi-
cal and ocular dysfunction in vision/attention disorders.
Eberhard’s book also contains some brief discussion on ‘Smart Architecture’. This is not part
of the Neuroscience of the Architectural Experience, but I will consider it here for the background
it offers for neuromorphic architecture (a brief review is provided below in the section ‘Architec-
ture with Embedded Intelligence’). Intelligent Buildings International has published varied
studies on how to endow buildings with a measure of intelligence. They can adjust to changing
circumstances: turning lights down or up automatically depending upon the time of day, control-
ling temperature, perhaps even adjusting the acoustics depending on how crowded a room may
be. In the future, we can expect a transition from such simple types of ‘intelligence’ – let us
make it warmer if it is colder outside; let us turn up the lights if it is getting dark – to the
design of buildings in which, in some sense, the room engages in some communication with
the users to serve them better. A large motivation for smart architecture has come recently
from thinking about how to go beyond LEED (Leadership in Energy and Environmental
Design) standards for ‘green’ – e.g. energy-efficient – buildings. A problem with LEED is
that it provides a score based on the number of items conformed to on a checklist of fixed
codes, rather than providing broad goals and encouraging architects to find innovative ways to
achieve them. To support communication between building and users, Stein et al. (2010) used
an iPhone App that lets people report to the building’s computer on their comfort level while
at the same time allowing the computer to send messages on steps people might take to
become more comfortable in an energy-efficient way (e.g. open a window instead of turning
up the air conditioning to cool down; suggesting wearing a sweater if cold weather is expected
to continue). The issue here is how to design the knowledge in a building’s interactive infrastruc-
ture – or employ pattern recognition and machine learning (with learning algorithms often
abstracted from models of learning in neural networks) to extract the knowledge (e.g. Bishop
2006) – so that the building will provide interactions that are pleasing to us, would increase
our comfort, and yet, at the same time, reduce energy costs and improve livability. My concern
will be to assess how such an interactive infrastructure might be improved by having its structure
and function informed by the findings of neuroscience.
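To make the flavour of such a building-to-user exchange concrete, here is a minimal sketch in Python of a comfort-report loop of the general kind Stein et al. describe; the function name, thresholds and messages below are invented for illustration and are not taken from their App.

```python
# Minimal sketch of a comfort-report / suggestion loop in the spirit of the
# Stein et al. (2010) iPhone App described above. All names, thresholds and
# messages are hypothetical illustrations, not the published system.

def suggest_response(report, outdoor_temp_c, indoor_temp_c):
    """Return (message_to_user, hvac_adjustment_c) for one comfort report."""
    if report == "too warm":
        if outdoor_temp_c < indoor_temp_c:
            return ("Consider opening a window; it is cooler outside.", 0.0)
        return ("Lowering the setpoint slightly.", -1.0)
    if report == "too cold":
        if outdoor_temp_c < 10.0:
            return ("Cold weather is expected to continue; a sweater may be "
                    "more energy-efficient than raising the heat.", 0.0)
        return ("Raising the setpoint slightly.", +1.0)
    return ("Thanks, no change needed.", 0.0)

# Example: a user reports discomfort on a mild day.
print(suggest_response("too warm", outdoor_temp_c=18.0, indoor_temp_c=24.0))
```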
Neuromorphic architecture approaches the design of rooms, buildings and buildings inte-
grated into a landscape by exploiting lessons from modelling brain computation. Much of our
nervous system is involved in homeostasis, making sure that there is enough glucose and
oxygen in the blood stream and keeping other key physiological variables in the range required
to keep the body functioning well, e.g. by increasing or decreasing the rate of respiration or heart
rate, or evoking feelings of hunger or thirst to induce us to secure necessary nutrients or fluids.
Clearly, there are parallels to be explored here between keeping bodily variables in the appropriate
range and the concern for green buildings, as in the above brief discussion on LEED. However, I
will have more to say in this article about the processes that link perception, action and social
interaction. These come closer to my own professional interest in looking at the brain not only
in terms of what it does within a single body, whether it is visual perception or control of muscular
activity, but also in terms of social neuroscience, how one brain represents the behaviours of
others to support social interaction. Few people who live in houses (as distinct from the architects
and builders) know about the electrical wiring behind the walls in their houses, but most know the
fixed functionality of each room in ways that condition their behaviour in those rooms. However,
in future we will have the ability to interact with rooms in terms of a dynamic functionality and we
will come in some sense to perceive the room as having a personality. There is increasing interest
in the idea of developing robots that are not autonomous humanoids, as in classic science fiction,
but rather ‘active furniture’. A bed may actively assist an invalid to get in and get out, or to help
them take their medicine. We can use today’s smart phones as a metaphor for smart architecture.
People have access to different smart apps that allow them to use their phone in diverse
ways. Through understanding neuroscience, we will come up with new apps for buildings
that will allow us to control, at a level that is comfortable for our cognition, the way in
which the room and the building behave to make the environment better adapt to our changing
needs.
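The homeostasis parallel suggested above can be illustrated with a small sketch: keep a building variable inside an acceptable band by nudging an effector, much as respiration is adjusted to keep blood gases in range. The variable (indoor CO2 concentration), the band and the step size below are hypothetical values chosen only for illustration.

```python
# Minimal homeostasis-style sketch: keep an invented indoor CO2 level inside an
# acceptable band by nudging the ventilation rate, analogous to adjusting
# respiration to keep blood oxygen in range. Values are illustrative only.

LOW, HIGH = 600.0, 900.0        # acceptable CO2 band, ppm (hypothetical)

def regulate(co2_ppm, vent_rate):
    """Return an updated ventilation rate (0..1) for the current reading."""
    if co2_ppm > HIGH:
        return min(1.0, vent_rate + 0.1)    # too high: ventilate more
    if co2_ppm < LOW:
        return max(0.0, vent_rate - 0.1)    # comfortably low: save energy
    return vent_rate                        # in range: leave it alone

rate = 0.3
for reading in [950, 920, 880, 700, 590]:
    rate = regulate(reading, rate)
    print(reading, round(rate, 1))
```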
Our aim, then, is to extract from the findings of neuroscience brain operating principles that
will allow us to develop a neuromorphic architecture that supports the ‘social interaction’ of
rooms with people and with other rooms, to better adapt the behaviour, as it were, of the
rooms to the needs of their inhabitants. But let me reiterate that the point of this article is to
offer pointers on how to exploit the lessons of real brain research. First, however, let me present a
couple of issues related to smart architecture.
One is the design of dynamic architectural settings that use new materials to learn and adapt
over time. For example, we can now have glass that can be controlled as to its transparency. A wall
dividing two rooms can be transparent to unite the rooms, or opaque to give privacy in each room,
or it can be turned into a screen on which pleasing patterns are distributed to add to the ambiance
of each room. An important issue here relates to the field of computational neuroethology (Beer
and Chiel 2008), the computational modelling of the brain mechanisms underlying animal behav-
iour (ethology). The crucial point here is embodiment – the brains and bodies of animals evolve
together so that receptors and effectors and body architecture crucially constrain the brain and vice
versa. Similarly, as the materials and physical layout of the built environment change, so will neu-
romorphic architecture. I thus stress the use in neuromorphic architecture of neuroscience to
extract brain operating principles that have proved their worth in the diverse brains of
different species, whereas for neuroscience of the architectural experience the human brain is
paramount.
Another issue for smart architecture is that design processes embedded in computer-based
systems will allow the architect real-time access to client information systems. As Eberhard
puts it, ‘A designer/architect would then become a stage manager for the activities being
housed and would serve his or her clients on a continuing basis’. As we increasingly embed
intelligence in rooms, each room will have a memory system, a database for each room and
for integration across rooms over time, so that the architect’s work need not finish with
signing off on the building. Instead, he or she can serve as a consultant as the years go by, to
the extent mandated by the client, in terms of finding new ways of exploiting the building’s intel-
ligence to adjust the room to the changing needs of the inhabitants. There is a precedent in the
aircraft industry at the moment. Rolls Royce, which is one of the world’s leading makers of jet
engines, is switching its paradigm from selling jet engines to leasing them with the crucial aspect
of the lease being that they continually monitor the performance of each jet engine. This has the
double benefit that not only can the company address the maintenance needs of the particular
engine but they can also, by monitoring many jet engines around the world under changing cir-
cumstances, come up with improved jet engine designs. Note, then, an important difference
between human brains and the interactive infrastructures of buildings of the future – the latter
will support data mining at a far more intensive level than we can project for human brain
states which are currently accessed either through language or brain imaging/monitoring tech-
niques restricted in either space or time.

Distinguishing neuroscience from artificial intelligence


There already exists a large literature on intelligent buildings and smart environments. The litera-
ture involves computations that range from basic control systems (think of the role of the thermo-
stat in heating, ventilation and air conditioning as the baseline example) to sophisticated
applications of artificial intelligence. They share with neuromorphic architecture the concern of
finding sophisticated ways to link sensors and actuators so as to improve a building’s perform-
ance, whether in response to the physical environment (e.g. adapting to the weather, reducing
the damage from earthquakes) or the social environment (responding to, or even anticipating,
the needs of the humans who share that environment). However, as the brief review in the next
section shows, almost none of the work was based on analysis of data or models from neuro-
science. The closest approach, seen in only a few cases, is to use artificial neural networks
(ANNs). These provide a form of adaptive distributed computing, using a network of very
simple computational units which, although called neurons, are not based on anything from
neuroscience beyond the fact that each unit can respond to a linear combination of its inputs,
and the weights of different inputs can change over time according to ‘learning rules’. The
crucial feature is that these learning rules can be set so that the performance of the network in
some well-constrained task can be improved over time without intervention or explicit program-
ming by a human operator. Haykin (2008) provides an excellent introduction – but this is a
branch of mathematical engineering or artificial intelligence with no link to neuroscience
beyond the original concern for learning rules in distributed networks. After a brief review of
examples of architectures with embedded intelligence but no connection to neuroscience, I will
look at a classic system that is ‘a start in the right direction’: Ada – an interactive space that
learns to influence human behaviour (Eng et al. 2005). Only after that will I offer a brief introduc-
tion to ‘real’ neuroscience, followed by sketches (not implemented projects) of how these may
contribute to the future development of neuromorphic architecture.
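As an illustration of what such an ANN unit amounts to, the following sketch implements a single unit that responds to a (squashed) linear combination of its inputs, together with a simple weight-update rule (the classic delta rule, chosen here purely as an example of a ‘learning rule’); it is not drawn from any of the systems discussed in this article.

```python
import math

def unit_output(weights, bias, inputs):
    """An ANN 'neuron': a squashed linear combination of its inputs."""
    s = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-s))        # logistic activation

def delta_rule_step(weights, bias, inputs, target, rate=0.1):
    """One weight update: move the output toward the target (delta rule)."""
    y = unit_output(weights, bias, inputs)
    error = target - y
    new_weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    new_bias = bias + rate * error
    return new_weights, new_bias

# Example: learn to turn the output 'on' for the input pattern (1, 0).
w, b = [0.0, 0.0], 0.0
for _ in range(1000):
    w, b = delta_rule_step(w, b, inputs=[1.0, 0.0], target=1.0)
print(round(unit_output(w, b, [1.0, 0.0]), 2))   # approaches 1.0 without explicit programming
```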
Architecture with embedded intelligence


The present subsection offers a quick tour of a number of systems, mostly those suggested by an
anonymous reviewer. None contribute to neuromorphic architecture, but this brief review may
suggest challenges for the development of this new field.
In a series of articles (Mozer et al. 1995, Mozer 1998, 2005), Michael Mozer reports on what
he calls the adaptive house or the neural network house, but the only neural networks in this study
are ANNs. An actual residence was equipped with sensors to provide information about environ-
mental conditions (e.g. temperatures, ambient lighting level, sound and motion in each room) and
actuators to control the gas furnace, space and water heaters, lighting, motorized blinds, ceiling
fans and dampers in the heating ducts. The adaptive control system of the house has two objec-
tives: anticipation of inhabitants’ needs and energy conservation. Various predictors attempt to
take the current state and forecast future states. The predictors are implemented as feedforward
ANNs trained with back propagation, as lookup tables or as a combination of a neural net and
a lookup table. For example, the output of one anticipator is interpreted as the probability, for
each of the house’s eight zones, that the zone will become occupied in the next 2 s, given that
it is currently unoccupied. The anticipator runs every 250 ms and is a standard single-hidden-
layer ANN with 107 inputs, 50 hidden units, 8 output units, direct input–output connections
and a symmetric sigmoidal activation function. This structure is unrelated to neural circuitry in
any animal or human brain, and the word ‘brain’ occurs in none of Mozer’s three articles.
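For readers who want to see the shape of such an anticipator in code, here is a sketch with the dimensions quoted above (107 inputs, 50 hidden units, 8 outputs, direct input-output connections and a symmetric sigmoid). The weights are random placeholders; the real network was trained with back propagation on data from the house, which is not reproduced here.

```python
import math, random

N_IN, N_HID, N_OUT = 107, 50, 8      # dimensions quoted for Mozer's anticipator
random.seed(0)

def sym_sigmoid(x):
    """Symmetric sigmoid in (-1, 1), i.e. tanh."""
    return math.tanh(x)

# Random placeholder weights; the real network was trained by back propagation.
w_ih = [[random.gauss(0, 0.05) for _ in range(N_IN)] for _ in range(N_HID)]
w_ho = [[random.gauss(0, 0.05) for _ in range(N_HID)] for _ in range(N_OUT)]
w_io = [[random.gauss(0, 0.05) for _ in range(N_IN)] for _ in range(N_OUT)]  # direct input-output

def anticipate(x):
    """Map 107 sensor inputs to 8 per-zone occupancy scores."""
    hidden = [sym_sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_ih]
    out = []
    for o in range(N_OUT):
        s = sum(w * h for w, h in zip(w_ho[o], hidden))       # hidden -> output
        s += sum(w * xi for w, xi in zip(w_io[o], x))         # direct input -> output
        out.append((sym_sigmoid(s) + 1.0) / 2.0)              # rescaled to [0, 1] for this sketch
    return out

print(anticipate([0.0] * N_IN))   # eight per-zone scores; the house ran such a sweep every 250 ms
```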
The CASAS Smart Home project (http://ailab.wsu.edu/casas/) at Washington State University
views the smart home as an intelligent agent that interacts with its environment through the use of
sensors and actuators. It has certain overall goals, such as minimizing the cost of maintaining the
home and maximizing the comfort of its inhabitants. Here the methodology appears to be based
on machine learning techniques such as support vector machines, rather than ANNs, and the term
‘brain’ is not cited in the project papers I have studied. Diane Cook (2012) provides a current per-
spective on the CASAS project in an article titled ‘How smart is your home?’ (and see the same
issue of Science for a companion article titled ‘How smart is your city?’ – O’Grady and O’Hare
2012). As for Mozer, but with a change of terminology, Cook argues that in an ambient intelligent
home, sensors collect information about the environment and the residents so that an ‘intelligent
agent’ may use this information to decide whether actions need to be taken to adjust, e.g., temp-
erature or lighting. In her concluding paragraph, Cook offers a brief comment on social inter-
action, a theme of central importance in the current article: ‘Social signals have long been
recognized as important for establishing relationships, but only with the introduction of sensed
environments have researchers become able to monitor and measure these signals. Building
upon ambient intelligence technologies, we can look at socialization within the home (such as
entertaining guests, interacting with residents, or making phone calls) and examine the correlation
between socialization parameters and productivity, behavioral patterns and health’.
Victor Callaghan, Hani Hagras and their colleagues have worked extensively on the notion
of ‘intelligent space’ – iSpace, for short – as the common denominator of pervasive computing,
ambient intelligence, intelligent environments and smart homes. For example, Callaghan et al.
(2006) focus on the issue of how non-experts would be able to direct their own personal set of
networked appliances to produce some desired functionality. They discuss two approaches.
One uses embedded autonomous agents that sense the user’s actions in the environment and,
in a life-long learning mode, autonomously ‘program’ the iSpace to match the user’s habitual
behaviour (implicit programming). The other approach applies programming-by-example tech-
niques in which, during a teaching phase, the user demonstrates the required behaviour to the
system. This is called explicit programming but, in fact, the challenge is to make this occur
without the lay user needing to know anything about programming. They discuss the relative
advantages of the specific techniques they have developed, and suggest that the user may only be
able to demonstrate high-level aspects of system performance, so that the underlying subsystems
will have to be formed by embedded autonomous agents. The work makes some use of ANNs,
although the authors express a preference for fuzzy logic. Again, no appeal is made to neuro-
science. However, I would note that the ‘division of labour’ in the brain between declarative
learning (what we can talk about or explicitly demonstrate) and procedural learning (gaining a
skill through extended experience without conscious access to the learning process) is a dichot-
omy of continuing interest in neuroscience (Krakauer and Mazzoni 2011, Squire and Wixted
2011), and thus might be an excellent target for interaction between work on intelligent environ-
ments and neuroscience research.
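To illustrate the flavour of ‘explicit programming’ by demonstration (and only the flavour: the Callaghan et al. work uses embedded agents and fuzzy logic, not the toy method below), one can record (sensed-context, action) pairs during a teaching phase and later replay the action whose recorded context is closest to the current one.

```python
# Illustrative only: a nearest-context rule store for programming by
# demonstration. Contexts are simple numeric sensor vectors.

demonstrations = []   # list of (context_vector, action_name) pairs

def teach(context, action):
    """Teaching phase: remember what the user did in this sensed context."""
    demonstrations.append((list(context), action))

def act(context):
    """Runtime: pick the demonstrated action whose context is closest."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, context))
    _, best_action = min(demonstrations, key=lambda d: dist(d[0]))
    return best_action

# Example context = (hour_of_day / 24, ambient_light 0..1); names are invented.
teach((0.9, 0.1), "lights_on_dim")
teach((0.5, 0.9), "blinds_down")
print(act((0.85, 0.15)))   # -> "lights_on_dim"
```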
Complementary work from the same group focuses on ‘adjustable autonomy’. Ball et al.
(2010) report on a project aimed at enabling human users and (computational) agents to collab-
orate in managing intelligent environments. The goal is to develop an adjustable-autonomy agent
in an effort to explore user acceptance of pervasive computing and the use of autonomous agents
therein. They present a prototype of their Adjustable-autonomy Behaviour-Based Agent architec-
ture, built on a smart home emulator, to demonstrate the plausibility of employing adjustable
autonomy in full-scale intelligent environments. While this research is related neither to ANNs
nor to brains, its concern with enabling human users and agents to collaborate may, in due
course, contribute to attempts to apply lessons from social neuroscience to the design of intelli-
gent buildings that interact with their users.
In summary, there is a great deal of work being done under such banners as intelligent build-
ings, smart architecture, intelligent environments and more. In most cases, sensors, actuators and
learning algorithms play an important role. In some cases, but probably not the majority, learning
is implemented in ANNs but in none of the above cases – and they are typical – are brains or
neuroscience part of the discussion. The aim of this article is to provide a sufficient introduction
to what neuroscience might have to offer by way of neuromorphic architecture to allow this com-
munity to better evaluate the possible utility of neuroscience for their work.
Here is one last example, from the Journal of Ambient Intelligence and Smart Environments.
Jorge Alves Lino et al. (2010) argue for responsive environments as a combination of the scien-
tific developments that resulted in ambient intelligence systems and the aesthetic motivation
behind interactive art installations. Responsive environments may be city squares, public halls
or other spaces enhanced with the use of technology and media, combining a system-centred
approach with a user-centred approach, to the success of which both context awareness and
user experience make an important contribution. For Lino et al., the aesthetics of interaction,
user engagement, access, embodiment and intimacy all need to be taken into account in the
design and specification of responsive environments. All these considerations were much
involved in the design of the interactive space ‘Ada,’ to which we now turn.

The interactive space ‘Ada’


The interactive space Ada was built in 2002 (Eng et al. 2005) as a temporary exhibit at the Swiss
Expo in Lausanne, designed by a team led by Paul Verschure and Rodney Douglas in Zurich.
Both Verschure and Douglas are active in brain modelling, and so Ada can be seen (to an
extent assessed below) as an early example of neuromorphic architecture. It was named in
honour of Ada, Countess Lovelace, daughter of Lord Byron, who served as programmer for
Charles Babbage’s Analytical Engine back in the mid-1800s and is thus regarded as the world’s
first computer programmer. The space named in her honour was visited by 550,000 people
from May to October of 2002. It was thus an entertainment space rather than a home or work
environment, but understanding what the people in Zurich put into their design will give us some
ideas for the future of neuromorphic architecture.
Ada was not designed with a static functionality but rather as a perceiving, acting, adapting
entity. ‘She’ had an interactive infrastructure that was to a great extent built of ANNs and she
had ‘emotions’. She wanted to play with her visitors. Ada’s sensors mimic some of the capabilities
of organisms for collecting information about the world. For vision, Ada deployed a fixed grid of
cameras to monitor all visitors and a set of cameras called ‘gazers’ that could be directed to look at
and follow people. For hearing, Ada used fixed microphones to identify and locate sound sources,
while directional microphones were mounted on the gazers to further support attentional, focused
interactions with specific visitors. Some forms of sound and (very simple) word recognition were
available. Ada’s sense of touch was provided by the floor tiles that were pressure sensitive. This
could be used to sense where a person was and, via a simple neural network based on where foot-
steps fell in succession, to figure out how people were moving. By being able to change colour
(Figure 1), the tiles provided possibilities for a natural progression in visitor interaction (a sketch of this progression as a simple state machine follows the list):

Sleep: One tile colour for all visitors.
Wake: Visitors given different coloured floor tiles.
Explore: Probe for ‘interesting’ visitors; deploy gazers (cameras which could be directed to attend to particular visitors).
Group: Try to direct visitors to a certain location in space, e.g. by floor patterns or by deploying light fingers (beams of light for pointing at individual visitors or indicating different locations in the space).
Play: Play a game selected on the basis of the number of visitors grouped together.
Leave: Show a path for visitors to exit the space.
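Read computationally, this progression is a small behavioural state machine. The sketch below is a hypothetical rendering of it; the trigger variables and thresholds are invented for illustration and are not taken from the Ada implementation, which derived such transitions from its sensors and ‘emotional’ variables.

```python
# Hypothetical sketch of Ada's visitor-interaction progression as a state
# machine. Transition conditions are invented; the exhibit itself used its
# sensors and internal 'emotional' variables to drive these changes.

def next_mode(mode, n_visitors, n_engaged, game_over):
    if mode == "sleep" and n_visitors > 0:
        return "wake"                      # someone stepped onto the floor
    if mode == "wake":
        return "explore"                   # assign tile colours, then go looking
    if mode == "explore" and n_engaged > 0:
        return "group"                     # gazers found 'interesting' visitors
    if mode == "group" and n_engaged >= 3:
        return "play"                      # enough visitors herded together
    if mode == "play" and game_over:
        return "leave"                     # show a path out of the space
    if mode == "leave" and n_visitors == 0:
        return "sleep"
    return mode                            # otherwise stay in the current mode

mode = "sleep"
for step in [(2, 0, False), (2, 0, False), (3, 1, False), (4, 3, False), (4, 3, True)]:
    mode = next_mode(mode, *step)
print(mode)   # -> "leave"
```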

Not only did Ada’s ‘skin’ (the floor) facilitate visual communication via its patterning of
lights, but Ada also used sound and music composed in real time on the basis of her internal
states and sensory input – using a computer system called Roboser to compose music that
could then be directed at different groups of people to change their level of interaction and excite-
ment (Manzolli and Verschure 2005). Ada could also perform ‘baby talk’ which imitated some-
thing of the sound patterns from her visitors, but had no capacity for speech understanding.
In the book Who Needs Emotions? The Brain Meets the Robot, Fellous and Arbib (2005) gath-
ered experts to report on both the neuroscience of emotion, and the current state of providing

Figure 1. The lighting system of a single floor tile (it can display various colours), the honeycomb in which the
tiles were placed, and Ada’s floor providing a way to interact with people (photos courtesy of Rodney Douglas).
robots with at least the appearance of emotions. Some of these ideas were anticipated in the design
of Ada. Ada continually evaluated the results of her actions and expressed ‘emotional states’
accordingly, as part of the effort of regulating the distribution and flow of visitors. Ada’s level
of overall ‘happiness’ was translated into the soundscape and the visual environment in which
the visitor was immersed, thus establishing a closed loop between the environment and the
visitor. Ada ‘wants’ to interact with people. When people participate, she is ‘happy’. When
they do not, she is ‘frustrated’. The overall schematics for Ada’s emotion system are shown in
Figure 2; however, I will not go into the details.
Just as we would do (but unconsciously, and with far more complex circuitry), Ada is con-
stantly looking at details of sensory data to perceive what is going on in the environment,
while emotions – in Ada’s case these were joy, sadness, anger and surprise, based on (but far,
far simpler than) the corresponding human emotions – condition the way in which the data are
interpreted. An overall measure of ‘happiness’ (H) increased with three quantities, called survival,
recognition and interaction. Ada’s goal was to maximize the value of H. The survival of Ada was
not linked to any notion that ‘the exposition is coming to an end, they’re going to demolish me’,
but was monitored in terms of the flow of visitors: the more visitors that flow through, the higher
the rating of survival, but there was the issue of how well she kept track of what people were doing
and the level of interaction. One way for Ada to be happy would have been just to push people
through as quickly as possible to maximize this survival variable and ignore everything else. In
fact, Ada did behave in that way for a while, and her designers had to adjust the balance between
the variables to get her to spend enough time interacting to make the people happy with their
experience as well as making herself happy by her criteria. The conclusion is that ‘happiness’
is in general a social variable, which biases behaviour in ways that balance the needs of the indi-
vidual (whether person or room) with the needs of the others with which she interacts.
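A hedged sketch of what an overall ‘happiness’ variable of this kind could look like: H as a weighted combination of survival, recognition and interaction, with the designers’ re-balancing described above corresponding to adjusting the weights. The weights, scaling and numbers below are invented; the published Ada system’s actual formulation may well differ.

```python
# Illustrative only: a happiness-like variable combining throughput ('survival'),
# tracking quality ('recognition') and engagement ('interaction'). The weights
# are hypothetical; tuning them is the analogue of what the Ada team did when
# the space maximized throughput at the expense of interaction.

def happiness(survival, recognition, interaction,
              w_survival=0.2, w_recognition=0.3, w_interaction=0.5):
    """All inputs are assumed normalized to [0, 1]."""
    return (w_survival * survival
            + w_recognition * recognition
            + w_interaction * interaction)

# Pushing visitors through quickly but ignoring them scores worse than a
# balanced policy once interaction is weighted heavily enough.
print(happiness(survival=1.0, recognition=0.4, interaction=0.1))   # 0.37
print(happiness(survival=0.7, recognition=0.8, interaction=0.8))   # 0.78
```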
Is this an example of neuromorphic architecture? Some of the subsystems are based on ANNs,
rather than trying to emulate details of the actual neural circuitry that Douglas and Verschure had
studied. The vision and auditory systems were based, in part, on prior analysis of brain mechan-
isms for these sensory systems. The use of the floor both for the sense of touch and for visual
communication was strikingly innovative. However, the touchstone for neuromorphic architec-
ture is not the (absurd) requirement that the structure of the building mimic the form of the

Figure 2. Ada’s moods and emotions drive behaviour and are driven by its outcomes (adapted from Was-
sermann et al. 2003).
animal (and is thus distinct from the zoomorphic architecture of Aldersey-Williams 2003), but
rather that the interactive infrastructure incorporate brain operating principles, although not
necessarily mimicking neural circuitry of specific brains. The detection of the direction of
walking of Ada’s visitors was based on neural principles, although other aspects reflected more
abstract principles of artificial intelligence. In addition, the emotion system was heavily influ-
enced by analysis of emotional systems but not really grounded in the neuroscience of
emotion. Thus, it might be more appropriate to say that Ada is a seminal precursor, rather than
an example, of neuromorphic architecture. In any case, we shall see that in future, a building
exemplifying neuromorphic architecture will employ diverse subsystems in its interactive infra-
structure – and we should only expect a subset of these to incorporate brain-based operating prin-
ciples or more directly emulate the neural circuitry of animal brains.

A whirlwind introduction to neuroscience


Leen and Heffernan (2002) describe the distributed interactive infrastructure for an automobile,
circa 2002. Today, cars already employ many specialized computers distributed all the way
from the braking system to the ignition, and we now begin to see cars that can
park themselves, and that can link vision sensors to motor control to avoid collisions by
slowing the car down when it approaches too close to the car in front. Here, the interactive infra-
structure is not a single centralized computer, but rather a system of systems that can work in
coalition or sometimes in competition to yield improved performance, whether for the driver
of the car as in this example or for neuromorphic architecture, for the inhabitant of a room or
building. However, a car’s computer network is an example of distributed computing rather
than neuroscience, so let me provide some background that would help us to assess ways in
which data on real brains may influence the design of interactive infrastructures in the intelligent
buildings of the future.

Multiple levels of neural structure


The human brain contains of the order of a hundred billion neurons and each brain cell has a
varied number of connections, with perhaps 10,000 being the average number of connection
points, synapses. So we are talking about a million billion synapses in the human brain, and
each of these is itself a complex, dynamic chemical system.
To study it, we may proceed at many levels, and use studies of the nervous systems of other
species to deepen our understanding of the human brain.2
At the top level, neurologists and cognitive neuroscientists correlate changes in brain activity
in different regions with changes in tasks. In one such study, Brown et al. (2006) had amateur
dancers perform small scale, cyclically repeated tango steps on an inclined surface, with or
without hearing tango music, or just listen to the music without overt movement. The technique
used was positron emission tomography imaging which builds a 3D map of blood flow to assess
what parts of the brain were more active in one condition than in another (see
http://www.scientificamerican.com/article.cfm?id=fancy-footwork for an image of the dancer in the
scanner, and some sample brain images). As a result, one could see which parts of the brain
were more active when the subject was ‘dancing’ to a regular beat of music rather than just
dancing freely with no musical beat to constrain the movement; or one could see contrasts in
brain activity when she danced to the beat (Metric) versus just listening to the music. Such
studies do not tell us about the details of the brain’s computations, but do give us a sense of
how the overall activity of different brain regions varies as we look at different tasks. Saying
that region A is more active in one task than region B does not mean that region B is uninvolved
in this task, and we have developed a method called synthetic brain imaging to use such coarse-
grain activity to constrain detailed computational models of the underlying neural activity (Arbib
et al. 2000).
Figure 3 shows five levels of analysis of neural structure. Korbinian Brodmann (1909) studied
slices of cerebral cortex under the light microscope and saw that different regions had somewhat
different thicknesses of the six layers of the cortex with somewhat different neuronal distributions.
On that basis he divided the brain into numbered regions (Figure 3, top left) that challenge neu-
roscientists to assess the functions of each region, how they are connected and how they compete
and cooperate in different combinations as the brain carries out various tasks. For example, Area
17 has been shown to be primary visual cortex, the area where the input from the eyes, after
various stages of processing in retina and thalamus, first enters the cerebral cortex, whereas
Areas 44 and 45 constitute Broca’s area, key areas for language processing. Brodmann’s areas
thus provide a framework within which people doing brain imaging try to make sense of the
high-level architecture of the brain, in relation to the differential activity of the brain during differ-
ent tasks.
At finer detail, the neuroanatomist may seek to discover the form, distribution and connec-
tivity of the neurons in (different regions of) the brain. The shape and connections of regions
and of neurons constitute neural architecture – the ‘architecture’ of real brains, as distinct
from the brain-inspired architecture of actual buildings that is the goal of neuromorphic architec-
ture. Figure 3 (top right) schematizes what can be seen after slicing vertically through the layers of
the suitably stained cerebral cortex – highlighting just a few of the billions of cortical neurons.
This drawing (Arbib et al. 1998) was made by the Hungarian neuroanatomist John Szentágothai
and poses diverse challenges to the neuroscientist: Why are the cells so differently shaped? Why

Figure 3. Top left: Brodmann’s division of the regions of the human brain based on cytoarchitectonics. Top
right: Szentágothai’s schematic rendering of a sampling of neurons in a vertical slice through cortex; lower
row: synapses: function, structure and the underlying neurochemistry.
are they arranged in these different layers? What can that mean for understanding the computation
of the brain?
But neuroscience can focus on finer and finer details, as we see in the lower half of Figure 3.
The figure on the left exemplifies classic work on inserting electrodes in different places in the
spinal cord to chart the electrical activity at synapses (Sherrington 1906). The citation for the
Nobel Prize that John Eccles, a former student of Sherrington, received in 1963 stated that his
‘discoveries concern the electrical changes that the nerve impulses elicit when they reach
another nerve cell. . . . There are two kinds of synapses, one excitatory, the other inhibitory. If
the arriving impulse is connected to excitatory synapses the response of the cell is yes, i.e. excit-
ability increases, vice versa the inhibitory synapses make the cell respond with a no, a diminution
of excitability. Eccles has shown how excitation and inhibition are expressed by changes of mem-
brane potential’. Much subsequent work has advanced our understanding of synapses in many
different regions of the brain, in particular showing how the electrical interaction at synapses is
usually mediated chemically, as the presynaptic neuron squirts packets of ‘neurotransmitters’
that modify the electrical activity of the postsynaptic neuron (synapses usually have a direction
in which signals are conveyed, from pre- to post-). The electron microscope gives those who
know how to read records such as that shown at bottom middle (Palay 1956) a sense of the
packages of neurotransmitter inside the presynaptic terminal and the various receptors on the post-
synaptic membrane. But even this view can be further refined. A crucial finding of neuroscience is
that synapses in their billions and billions are not fixed connections, but are complex chemical
machines that change over time. At the bottom right, we see a diagram by Masao Ito (2002) –
for many years Japan’s leading neurophysiologist, and a former post-doctoral student of John
Eccles – summarizing what was then known about the chemical pathways at a typical synapse
in the cerebellum. In even finer detail, we may investigate the human genome to see how it
lays the basis for such pathways, as well as the overall patterns of development of brain and
body, and the way each of us can learn. The neural architecture that is genetically specified is
just a starting point. The brain is not a fixed system whose structure is laid down in the genes,
but it is in some sense a family of ‘virtual machines’ shaped by the experience of the individual
interacting with the social and physical environment. Faced with the complexities of Figure 3, a
neuroscientist must make a career decision:

• Focus on intricate details (bottom right), perhaps at the price of losing track of what the
brain actually does?
• Focus on data from brain imaging (top left) to probe how the brain operates in terms of large
regions of tissue interacting with each other?
• Try to navigate up and down between the levels to increase understanding of the brain as a
cognitive system, but informed by the circuitry (top right) and synaptic interactions (bottom
left) with only limited attention to synaptic neurochemistry (bottom middle and right).

In this article, I will focus on the third approach in suggesting implications of neuroscience for
neuromorphic architecture while noting that seeking the optimal technological implementation
for use in buildings will reduce the relevance of synaptic neurochemistry. In contrast, work in
the neuroscience of architectural experience may focus more on using brain imaging to probe
differential activity of brain regions in people experiencing different architectural forms – with
the emphasis not so much on understanding ‘how the brain does it’ as on seeking objective
measures to address the questions raised by John Eberhard.
Returning to Szentágothai’s view of the ‘typical’ circuitry of cerebral cortex, I want to empha-
size that different regions of the brain may exhibit very different neural architectures, i.e., very
different ways in which neurons are shaped and organized. Figure 4 (left) shows Ramon y
Figure 4. Different parts of the brain have dazzlingly different architectures. Left: hippocampus; centre:
spinal cord; right: Ken Yeang’s IBM Tower, Kuala Lumpur (1992).

Cajal’s diagram (Ramón y Cajal 1911) of circuitry in a slice of the hippocampus, a region
involved in spatial navigation and episodic memory (see below); whereas Figure 4 (centre)
shows John Szentágothai’s diagram of spinal cord circuitry (Arbib et al. 1998). It is beyond
the scope of this article to present the details of hippocampus and spinal cord and their compu-
tational implications, but I want to stress the very different types of cells, the very different geo-
metry or architecture of their arrangement and thus the enduring challenge for neuroscience of
trying to make sense of such details. How does the cellular architecture of the hippocampus
support episodic memory and spatial navigation? How does the architecture of the spinal cord
equip it for the control of the movements of our limbs, among many other functions?
The building in Figure 4 (right) has almost nothing to do with the theme of this article, but the
visual similarity between the structure of the spinal cord and the structure of this building is so
remarkable (as pointed out to me by the architect Ben Arbib) that I could not resist including
it. Ken Yeang, who designed this building, is widely regarded as the ‘father’ of the sustainable
bioclimatic skyscraper. The Menara Mesiniaga/IBM Tower (1992) in Kuala Lumpur was the
first ‘bioclimatic’ skyscraper, using natural ventilation strategies to make the building feel as if
it were breathing. Unlike the technology of later green skyscrapers, most of its strategies are
passive – the spiraling atrium’s vertical landscaping improves indoor air quality and aids
natural ventilation, while the external louvers reduce solar heat gain (Yeang 2006). This contrast
does allow me to make an important point relevant to our theme, returning to the mention of neu-
roethology. Brains and bodies have co-evolved – the brains of humans and snakes are very differ-
ent, controlling very different body plans to support very different behaviours. In the same way, as
we make progress in neuromorphic architecture, we will have to develop skilled ways to trade off
between interactive infrastructure and physical structure, adapting these to each other in optimiz-
ing the usability of the building they define.

Two case studies: preliminary designs


Each Fall I give a course on ‘Brain Theory and Artificial Intelligence’ at USC. In 2003 and 2004, I
asked students to conduct projects that developed ‘Brains’ for Intelligent Rooms (I-Rooms). (I had
not then seen that it was better not to use the word ‘brain’ when talking about the interactive
infrastructure of a building.) The students were computer science students, not architecture stu-
dents, and their projects had to be put together in a few weeks, so these do not set any sort of
standard for neuromorphic architecture. But I think there is interest in sampling the functions
they dreamed up, and the way in which the underlying control structures reflect some of the
things they learned about the brain. As Warren McCulloch, a pioneer of computational neuro-
science, put it: ‘Don’t bite my finger; look where I’m pointing’. In this spirit, I now report on
just three of these (very preliminary) projects.
Case study 1: from mirror neurons to social cognitive neuroscience


But now I want to bring in the social dimension. How do persons interact with each other and as a
society? Our big question for neuromorphic architecture is this: what happens when we replace
some of the persons by rooms or buildings whose interactive infrastructures are implemented
electronically? Let us start with an important example. The relevant background is provided by
insights from the study of mirror neurons (Gallese et al. 1996). By taking a very fine electrode
and positioning it appropriately, the neurophysiologist can observe the spikes, the pattern of
changes in membrane potential, whereby one neuron communicates with other neurons.
Gallese et al. found that, as they probed different neurons in the region labelled F5 in the frontal
cortex of the monkey brain, some neurons were very active when the monkey performed a specific
hand action. More significantly, many such neurons fired vigorously
both for the monkey doing an action and for the monkey seeing someone else performing a
similar action. They called such neurons mirror neurons. (There are many other neurons in
this brain region that do not share the mirror property, however.) Subsequently, we conducted
brain imaging (Grafton et al. 1996) to show that humans have something like a mirror system
(a brain region, rather than individual neurons, that is active both when a human performs a
class of actions and when she observes others performing such actions). This started a growth
industry with thousands of articles in which people do brain imaging experiments and claim to
see a mirror system. Moreover, my group has been concerned with computational modelling
of the mirror system (Oztop and Arbib 2002, Bonaiuto et al. 2007, Bonaiuto and Arbib 2010).
I will not explain Figure 5 (diagramming our first model, from 2002) here. The only point I
wish to make is that in the model (and there are lots of details of neural circuitry and synaptic
plasticity that have been simulated on a computer but are not shown here), the mirror neurons
are represented in just one submodule and that many different regions of the brain must work
together for the mirror neurons to achieve their functionality.
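The following toy sketch conveys only the input-output flavour of recognizing an action from a hand-object trajectory; it is emphatically not the Oztop-Arbib model, which learns distributed representations in simulated neural circuitry. Here a trajectory of hand-to-object distances is simply matched against stored templates.

```python
# Toy illustration (not the Oztop-Arbib model): classify a hand-object
# trajectory by comparing it with stored templates for known actions.
# Trajectories are sequences of hand-to-object distances over time.

templates = {
    "grasp":      [1.0, 0.7, 0.4, 0.1, 0.0],   # hand closes on the object
    "reach_past": [1.0, 0.8, 0.7, 0.8, 1.0],   # hand approaches, then moves away
}

def recognize(trajectory):
    """Return the action whose template is closest (sum of squared errors)."""
    def sse(template):
        return sum((a - b) ** 2 for a, b in zip(template, trajectory))
    return min(templates, key=lambda name: sse(templates[name]))

print(recognize([0.9, 0.6, 0.3, 0.1, 0.05]))   # -> "grasp"
```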
Inspired by this modelling, one group of my students developed an Intelligent Kitchen. The
idea was that the cook would come into the kitchen and decide to start cooking according to some
recipe. Once the room knew which recipe was chosen, the robot arms inside the refrigerator would
start picking out the ingredients from different parts of the fridge and passing them out. Cameras
would then check whether the cook was following the recipe and, if not, the room would gently
remind him what to do to get back on the recipe. (If you were a creative cook like my wife,
you would turn the system off and just go ahead and come up with your own improvised improve-
ment on the recipe.) The important points from the project are made in Figure 6. Without going
into details, I emphasize that one of the important modules, indicated by the large arrow, is action
recognition – based on the computational processes of the mirror system. The system would
monitor the behaviour of the person to recognize their hand movements and assess their relation
to the different stages involved in the recipe. It could also monitor speech to estimate the cook’s
emotional valence. Perhaps it would ‘back off’ when the cook shouts, ‘God Damn it! I’ll do what-
ever recipe I please!’. This ties in again with neuroscience where much attention has been given to
mirror neurons that recognize emotions rather than manual actions, with a crucial role having
Figure 5. The Oztop-Arbib model of the mirror neuron system as a learning system: learning to recognize
actions associated with hand–object trajectories.

been posited for mirror neurons in empathy, the power of projecting one’s personality into (and so
fully comprehending) the object of contemplation (Decety and Ickes 2009).
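A minimal sketch of the kitchen's recipe-following monitor, assuming an upstream action-recognition module (of the mirror-system-inspired kind just discussed) has already labelled the cook's current action; the step names, messages and matching logic below are invented for this illustration and were not part of the student project as built.

```python
# Hypothetical sketch of the kitchen's recipe monitor. It assumes an upstream
# action-recognition module that labels the cook's current action; the monitor
# only checks that label against the chosen recipe and issues gentle reminders.

recipe = ["chop_onion", "heat_pan", "add_oil", "add_onion", "stir"]

def monitor(step_index, recognized_action):
    """Return (next_step_index, reminder_or_None)."""
    expected = recipe[step_index]
    if recognized_action == expected:
        return step_index + 1, None
    return step_index, f"Next step is '{expected}' (you did '{recognized_action}')."

i = 0
for action in ["chop_onion", "add_oil", "heat_pan", "add_oil"]:
    i, reminder = monitor(i, action)
    if reminder:
        print(reminder)
print("recipe position:", i)   # -> 3 (waiting for 'add_onion')
```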
The second student project involved a secure room where the emphasis shifts from looking at
people in terms of their specific actions to determining that person’s intention. Do they look
shifty? What is their emotional state? Is this the sort of person for whom one should call a security
guard, or should one let them go about their business? Identifying and tracking people is linked to
a neurally inspired subsystem that tries to extrapolate what the intentions of individuals might be. Figure
7 highlights some of the subsystems whose design was influenced by progress in computational
neuroscience.

Case study 2: hippocampus and cognitive maps


An important challenge for architecture in general is to assist wayfinding, designing the built
environment to make it easy for people to find their way around. Consider the symmetry prin-
ciples that allow us, having found the toilet for people of the opposite sex, to infer the position
of our own – we can get very annoyed with buildings that do not support wayfinding to a des-
perately needed part of the building. How might we go beyond such spatial conventions and
signage? What tools can the building provide? Wayfinding Apps for smart phones might be
one answer to addressing the needs of individuals, especially if they can take account of the be-
haviour of those with whom the space is shared. But rather than using a cell phone one might
imagine smart spectacles or even contact lenses that provide heads-up information, though
these create problems for data entry.
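As a purely illustrative sketch of what a wayfinding App backed by the building's interactive infrastructure might do, the building can be represented as a graph of connected spaces and a route computed by breadth-first search; the floor plan below is invented, and this is standard graph search rather than a claim about any existing system.

```python
from collections import deque

# Hypothetical floor plan: which spaces are directly connected.
floor_plan = {
    "lobby":        ["corridor_a", "cafe"],
    "corridor_a":   ["lobby", "toilets", "meeting_room"],
    "cafe":         ["lobby"],
    "toilets":      ["corridor_a"],
    "meeting_room": ["corridor_a"],
}

def route(start, goal):
    """Breadth-first search: shortest route in number of doorways crossed."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in floor_plan[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(route("cafe", "toilets"))   # -> ['cafe', 'lobby', 'corridor_a', 'toilets']
```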
But what about relevant brain mechanisms? How does our brain code the locations of things
and places, so that we may navigate around the world, and represent and mentally manipulate
Figure 6. Preliminary Project 1: a reactive and adaptive intelligent kitchen (Jacob Everist, Guang Shi,
Jarugool Tretriluxana, Gurveen Chopra and Frank Lewis).

spatial information? The part of the brain in humans (and other mammals) that is most associated
with wayfinding is the hippocampus – though, as usual, it functions only as part of a larger neural
network of networks.
O’Keefe and Dostrovsky (1971) discovered place cells in the hippocampus of rats. As a rat
moved around an arena, recording the firing of single cells revealed a population of cells that
were not concerned with sensory input or motor output, but rather with the rat’s location –
which is why these neurons are called place cells. Brain imaging on London taxi drivers who
have ‘the Knowledge’, that is, knowledge of all the London streets and how best to navigate
them, showed that they had a more active hippocampus than other people (Maguire 1997).
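
A common computational caricature of these findings, offered here as an illustration rather than as a model from this article, treats each place cell's firing rate as a Gaussian 'place field' centred on a preferred location, so that the animal's position can be decoded from the population as a whole.

```python
import numpy as np

# A toy place-cell population (illustrative caricature, not the article's model):
# each cell fires maximally when the animal is at its preferred location, with a
# Gaussian fall-off, and position is decoded as the firing-rate-weighted centroid.

rng = np.random.default_rng(0)
centres = rng.uniform(0.0, 1.0, size=(50, 2))   # preferred locations in a 1 m x 1 m arena
sigma = 0.1                                     # place-field width (metres)

def firing_rates(position):
    """Firing rate of every cell for a given position (peak rate normalized to 1)."""
    d2 = np.sum((centres - position) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

def decode_position(rates):
    """Estimate position as the centroid of preferred locations weighted by firing."""
    return (rates[:, None] * centres).sum(axis=0) / rates.sum()

rat_at = np.array([0.3, 0.7])
print(decode_position(firing_rates(rat_at)))    # approximately [0.3, 0.7]
```

Running the last two lines prints an estimate close to the rat's true position, which is the sense in which the population, rather than any single cell, encodes 'place'.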
At the human level, the most famous ‘experiment’, one of those natural disasters that is very
unfortunate for the person but very helpful for neuroscience, was the neurosurgery performed on
H.M. who, as a young man back in the early 1950s, suffered such intractable epilepsy that his neu-
rosurgeon decided to remove a large region from the temporal lobe of his brain. This included not
only the amygdala which is associated with emotion (especially fear), but also the hippocampus.
The surprising result was that H.M. lost episodic memory.

Figure 7. Preliminary Project 2: secure room. The aim is to protect the room and its contents from unauthorized intruders, identify and track people that enter the room, determine whether the person is at risk or not and react to a situation in which an individual is considered as a threat (Rattapoom Tuchinda, Donovan Artz, Usha Guduri, and Sagar Gokhale). Doubled rectangles indicate modules whose details were linked to neuroscience.

He could conduct a normal conversation, yet, a short while later, would have no memory of what was said, or even of the person with whom
he was talking (Scoville and Milner 1957). He thus had the working memory to keep track of details
that were of immediate relevance, but after the surgery could not form lasting memories of new episodes in his life. Later studies showed that H.M. could learn new skills although he could not remember the episodes whereby he acquired them. All this has provided the basis for thousands of studies that seek to
understand the diverse memory systems of the brain, and the detailed mechanisms which support
them. But let us focus now on the brain’s representation of space.
Movement is the measure of space. Despite the unity of consciousness, the spatial behaviour of humans and animals exploits a variety of different representations in the brain. For example, different brain regions represent the oculomotor space that guides eye movements, the peripersonal space that guides reaching, the more distant space that guides locomotion, and many more. The brain's multiple maps gain their coherence not through their subservience to some overarching representation of space in a localized brain region but rather through the competition and cooperation of multiple brain regions in serving a varied repertoire of actions that
must be coordinated in achieving our aims. Turning from maps in the brain to maps in the
more usual sense of pictorial representations of space that aid navigation, the map of the
London Underground is a great aid to planning train rides, but grossly distorts the metrics of
the city surface that can aid the pedestrian. For the latter we return to the world of locomotion –
measuring the world in terms of the actions (walking, swimming, flying) whereby we traverse
it. We coin the term locometric (locomotion + metric) for this way of measuring space. The
animal measures the world in terms of actions (e.g. how many steps taken) or perceived measures
of such actions (e.g. the visual effect of an action such as the achievement of a goal).
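
To illustrate the distinction, and only as a toy example of my own, compare the straight-line distance on a floor plan with the locometric cost, the number of steps a walker actually needs when a wall blocks the direct route.

```python
from collections import deque
import math

# A toy contrast between a map metric and a 'locometric' measure (illustration only):
# Euclidean distance on the floor plan versus the number of grid steps a walker needs
# when a wall blocks the direct route.

grid = [
    "..........",
    "....#.....",
    "....#.....",
    "....#.....",
    "..........",
]  # '#' marks an impassable wall

def locometric_steps(start, goal):
    """Breadth-first search: fewest walking steps from start to goal."""
    rows, cols = len(grid), len(grid[0])
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        (r, c), steps = frontier.popleft()
        if (r, c) == goal:
            return steps
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#" and (nr, nc) not in seen:
                seen.add((nr, nc))
                frontier.append(((nr, nc), steps + 1))
    return None

start, goal = (2, 2), (2, 7)
print(math.dist(start, goal))          # map metric: 5.0 units as the crow flies
print(locometric_steps(start, goal))   # locometric: 9 steps, since the wall must be skirted
```

Here the map metric reports 5 units while the locometric measure reports 9 steps; it is the latter that matters to the walker.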
Figure 8 gives another glimpse of a complex model of brain function (Guazzelli et al. 1998).
In the model, the ‘hippocampal chart’ provided by place cells differs radically when a rat is placed
in different environments. Thus, a higher-level organization is needed to link these charts into an
overall cognitive map of the rat's world.

Figure 8. A model of brain mechanisms of adaptive, motivated spatial navigation (adapted from Guazzelli et al. 1998). The World Graph supports planning of routes, while premotor cortex controls the step-by-step actions needed to follow them while avoiding obstacles in the path.

Even without a hippocampus a rat can exploit much of the spatial structure of its world. Modelling helps us understand the underlying processes. Our
World Graph model (Arbib and Lieblich 1977) showed how prefrontal cortex can take the hippocampal representation of current location and encode it within the neural representation of something like a subway map, linking this to a representation of a desired endpoint and finding a path at the relatively high level of landmark-by-landmark navigation. The Taxon Affordance Model then links the navigation of the current segment to the locometric details of the currently sensed environment, mapping from the parietal cortex (affordances – opportunities for motion) to the premotor cortex (the choice of specific actions to move closer to the goal). The circuitry diagrammed at the right shows the brain's machinery for evaluating the success of
behaviours in meeting current needs and adjusting synapses accordingly to increase the prospects
for successful behaviour on future occasions. To understand the dichotomy (world graph versus
locometric map of current affordances) consider one’s own behaviour. If you want to get out of a
room, you do not have to have a map of the room or know anything about the building or neigh-
bourhood. You just look for a door, orient towards it and then walk to it in a way that avoids
bumping into obstacles. But suppose there are two doors, and you cannot see beyond them.
Then if you have a world graph of the neighbourhood you may be able to choose one door
over the other depending on whether you want to get a coffee or walk to the bus stop. At this
level, you would not be planning every step you need to take to get from here to there.
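
A minimal software caricature of this two-level scheme, loosely inspired by the World Graph idea rather than a reimplementation of the model in Figure 8, keeps a graph of known places for landmark-by-landmark route planning and delegates each leg to a local controller (stubbed here) that would steer toward the next landmark while avoiding obstacles.

```python
import heapq

# A toy two-level navigator (illustrative sketch, not the Guazzelli et al. model):
# a 'world graph' of places supports landmark-by-landmark planning, while a local
# controller (stubbed below) would handle the step-by-step, obstacle-avoiding motion.

WORLD_GRAPH = {                      # place -> {neighbouring place: cost}
    "office":   {"corridor": 1},
    "corridor": {"office": 1, "cafe": 3, "bus_stop": 5},
    "cafe":     {"corridor": 3},
    "bus_stop": {"corridor": 5},
}

def plan_route(start, goal):
    """Dijkstra over the world graph: returns the landmark-by-landmark route."""
    queue, best = [(0, start, [start])], {start: 0}
    while queue:
        cost, place, route = heapq.heappop(queue)
        if place == goal:
            return route
        for nxt, step_cost in WORLD_GRAPH[place].items():
            new_cost = cost + step_cost
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(queue, (new_cost, nxt, route + [nxt]))
    return None

def go_to_next_landmark(landmark):
    """Placeholder for the affordance-level controller that avoids obstacles."""
    print(f"steering towards {landmark} while avoiding obstacles")

for landmark in plan_route("office", "cafe")[1:]:
    go_to_next_landmark(landmark)
```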
But now let us move back from cognitive neuroscience to neuromorphic architecture for
buildings. An animal lives within the space in which its behaviour is played out, but for a neu-
romorphic room, the ‘brain’ – more rigorously, the interactive infrastructure – is to control inter-
actions with people moving within the space defined by that room – let us call this the inhabitant
space of the room (Figure 9). The issue then is what the interactive infrastructure needs to make
the room more useful, comfortable or attractive to the inhabitants, its internal environment. In
some sense we are closing the loop back to the neuroscience of architectural experience, where we
sought to monitor people's brain activity as they explored different built environments as a way to improve our design criteria so that architecture would better meet human needs and desires.

Figure 9. A room with a view – and an interactive infrastructure inspired by thinking about brains (thus our one lapse from the convention of reserving the term 'brain' for animals and humans).

But in neuromorphic architecture, we are turning from the brain of the inhabitant to
the interactive infrastructure of the room or building so that it can adaptively change in ways
that better match the current needs of its inhabitants. The room thus needs sensors to monitor
its inhabitants, but monitoring brain activity directly seems impractical in most situations, so one must design an interactive infrastructure that can base its interaction on monitoring people's actions, or on making explicit use of smart phone Apps, etc. We may recall here Ada charting and
influencing the movements of people in her space.

Figure 10. Preliminary Project 3: a self-cleaning room (Steve Blackmon, Emma Bowring, Elizabeth Halter,
Piriya Prathuangwong). The arrows indicate modules whose design was influenced by the brain model of
Figure 8.

Today, there are many areas of the built environment with security cameras but their output is
usually fed to a room in which a human security guard monitors many different screens in search
of security breaches. Instead, we seek brain-like computations that can automate such monitoring and do so with unwavering attention and without fatigue (a task already taken on, to some extent, by researchers in computer vision). As with Ada, the system would have many different cameras and other sensors, and different networks to control their interactions. Once
again, we move from one brain, one body, to interactions between embodied brains, deployed in
different coalitions depending on the current task. With this we turn to a third student project – a
room that cleans itself. The room is interesting because it is not like a Japanese automated public
toilet room that flushes the whole room out when people leave, or like an ordinary room equipped
with a Roomba robotic vacuum cleaner that randomly explores the floor of the room, vacuuming
as it goes. This room (Figure 10) actually monitors the people within it, seeing how they move and following their paths, and then turns that information around: from monitoring people's movements to controlling the movement of the cleaning system, which tracks down any damage, dirt or trash left by those people. The key point for neuromorphic architecture is that the interactive infrastruc-
ture is not just a modular object-oriented computer program. The designs of many of its com-
ponents are based on models of specific brain regions or guided by one or more brain
operating principles. The arrows in the block diagram, for example, link back to our analysis
of hippocampus and navigation, and relate to the model of Figure 8.
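
As a hypothetical illustration of how the monitoring-to-cleaning loop might be closed (my sketch, not the students' design), the room could accumulate a traffic map from tracked occupant positions and send the cleaning unit first to the most heavily used floor cells, on the assumption that these are the cells most likely to need attention.

```python
import numpy as np

# Hypothetical sketch: turn tracked occupant positions into a traffic map that
# prioritizes where the cleaning unit should go (not the students' actual design).

GRID = (20, 30)                      # room discretized into 20 x 30 floor cells
traffic = np.zeros(GRID)

def record_position(row, col):
    """Called for every tracked occupant position; builds up the traffic map."""
    traffic[row, col] += 1.0

def cleaning_targets(top_n=5):
    """Return the top-n most-trafficked cells, to be cleaned first."""
    flat = np.argsort(traffic, axis=None)[::-1][:top_n]
    return [tuple(np.unravel_index(i, GRID)) for i in flat]

# Example: a person walks along one row of the room, so those cells rank highest.
for col in range(10, 20):
    record_position(5, col)
print(cleaning_targets(3))
```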

Conclusions
While there is a great deal of work well underway in the design of intelligent buildings and
ambient intelligence, this work has almost entirely ignored the findings of neuroscience – empiri-
cal research on the biological brains of living creatures and computational models that address
these empirical findings. We took pains to distinguish neuroscience from artificial intelligence
in general and ANNs (a form of machine learning) in particular.
We noted two ways in which the findings of neuroscience may impact on the future design of
intelligent buildings: the neuroscience of architectural experience which studies architecture in
terms of the impact of the built environment on the human brain, and neuromorphic architecture,
a complementary approach incorporating brain functions into buildings. It is the latter that was the
focus of this article and, for clarity in exposition, I reserved the term ‘brain’ for the brains of
animals and humans, and the term ‘interactive infrastructure’ for the analogous system within
a building. After a brief review of non-brain-oriented approaches to architecture with embedded
intelligence, we looked at the interactive space ‘Ada’ as an early (2002) project that provides a
significant stepping stone towards neuromorphic architecture.
This review makes clear that the main obstacle to neuromorphic architecture may be that most
people working on intelligent buildings know little of brain research beyond the simple abstrac-
tions captured in ANNs. I thus offered a whirlwind introduction to neuroscience to emphasize the
multiple levels of neural structure, stressing that we can learn much from the way the brain is
divided into distinctive regions whose competition and cooperation support a wide range of func-
tionalities. My claim is not that an intelligent building should be equipped with an interactive
infrastructure that mimics the overall structure of the human brain, but rather that – in addition
to the necessary sensors and effectors – the interactive infrastructure should be a system of sub-
systems, with some of the subsystems and their interactions based in some detail on computational
models of human or animal brains. In many cases, the brains of non-human animals may provide
useful insights when their behaviour is of greater relevance to the building under design than that
of humans.

To underscore this general claim, I briefly reviewed neuroscience data and computational
models for two crucial systems: the role of mirror neurons (and larger networks of which they
are a part) in social cognitive neuroscience, and the role of the hippocampus (again in concert
with other brain regions) in episodic memory and in navigation. Noting that in some sense a build-
ing is an ‘inside-out animal’ – animals move within a larger environment whereas buildings offer
an environment in which humans move – I then offered preliminary sketches (rather than com-
pleted implementations) of three projects whose interactive infrastructures contain subsystems
based on extant models from computational neuroscience.

The overall aim, then, is not to make claims for the success of neuromorphic architecture to
date, but rather to invite readers ‘to look where I am pointing’ and inform themselves about con-
tributions to systems and cognitive neuroscience that help us learn from large-scale patterns of
brain structure and function, a step far beyond the simple learning rules of ANNs (Arbib 2003).

Notes
1. The term ‘neuromorphic architecture’ is already in use by computer scientists when – in contrast to the
design methodology for serial computers – they design circuits for highly parallel computation which
are inspired by neural circuitry. However, we will use it only in the context of the design of the built
environment – while noting that the ‘brains’ of buildings may employ circuitry using ‘neuromorphic
architecture’ in the computer scientists’ sense.
2. In this article, we consider neuroscience at the level of circuits of neurons, going upwards to see how
these circuits are linked within and across various brain regions and downward only far enough to note
that learning can depend on plasticity of synapses, the connections where signals from one neuron affect
the activity in another. However, it is important to stress that much progress in neuroscience now occurs
at the level of molecular biology and genetics. For example, in the case of energy homeostasis, much research now focuses on the way in which specific 'neurotransmitters' are used to carry very specific
signals which can then be ‘interpreted’ by specialized receptors on synapses of cells which receive them.
For example (Williams et al. 2000): ‘Neuropeptide Y (NPY) is expressed by neurones of the hypothala-
mic arcuate nucleus that project to important appetite-regulating nuclei, including the paraventricular
nucleus (PVN). NPY injected into the PVN is the most potent central appetite stimulant known, and
also inhibits thermogenesis; repeated administration rapidly induces obesity. The ARC NPY neurones
are stimulated by starvation, probably mediated by falls in circulating leptin and insulin (which both
inhibit these neurones), and contribute to the increased hunger in this and other conditions of energy
deficit. They therefore act homeostatically to correct negative energy balance’.

References
Aldersey-Williams, H., 2003. Zoomorphic: new animal architecture. London: Laurence King Publishers.
Arbib, M.A., 2003. Towards a neurally-inspired computer architecture. Natural Computing, 2, 1–46.
Arbib, M.A., 2012. How the brain got language: the mirror system hypothesis. New York & Oxford: Oxford
University Press.
Arbib, M.A. and Lieblich, I., 1977. Motivational learning of spatial behavior. In: J. Metzler, ed. Systems
neuroscience. New York: Academic Press, 221–239.
Arbib, M.A., Érdi, P., and Szentágothai, J., 1998. Neural organization: structure, function, and dynamics.
Cambridge, MA: The MIT Press.
Arbib, M.A., et al., 2000. Synthetic brain imaging: grasping, mirror neurons and imitation. Neural Networks,
13, 975–997.
Ball, M., Callaghan, V., and Gardner, M., 2010. An adjustable-autonomy agent for intelligent environments.
Presented at 6th International conference on intelligent environments, Kuala Lumpur, Malaysia.
Beer, R.D. and Chiel, H.J., 2008. Computational neuroethology. Scholarpedia [online], 3, 5307. Available
from www.scholarpedia.org [Accessed 29 June 2012].
Bishop, C.M., 2006. Pattern recognition and machine learning. New York: Springer.
Bonaiuto, J.B. and Arbib, M.A., 2010. Extending the mirror neuron system model, II: What did I just do? A
new role for mirror neurons. Biological Cybernetics, 102, 341–359.

Bonaiuto, J.B., Rosta, E., and Arbib, M.A., 2007. Extending the mirror neuron system model, I: audible
actions and invisible grasps. Biological Cybernetics, 96, 9–38.
Brodmann, K., 1909. Vergleichende Lokalisationslehre der Großhirnrinde in ihren Prinzipien dargestellt auf
Grund des Zellenbaues. Leipzig: J.A. Barth.
Brown, S., Martinez, M.J., and Parsons, L.M., 2006. The neural basis of human dance. Cerebral Cortex, 16,
1157–1167.
Callaghan, V., et al., 2006. Programming iSpaces: a tale of two paradigms. In: A. Steventon and S. Wright,
eds. The application of pervasive ICT. Berlin, New York: Springer, 389–421.
Cook, D.J., 2012. How smart is your home? Science, 335, 1579–1580.
Decety, J. and Ickes, W.J., eds., 2009. The social neuroscience of empathy. Cambridge, MA: The MIT Press.

Eberhard, J.P., 2008. Brain landscape: the coexistence of neuroscience and architecture. Oxford, New York:
Oxford University Press.
Eng, K., Douglas, R.J., and Verschure, P.F.M.J., 2005. An interactive space that learns to influence human be-
havior. IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, 35, 66–77.
Fellous, J.-M. and Arbib, M.A., 2005. Who needs emotions: the brain meets the robot. Oxford, New York:
Oxford University Press.
Gallese, V., et al., 1996. Action recognition in the premotor cortex. Brain: A Journal of Neurology, 119,
593–609.
Grafton, S.T., et al., 1996. Localization of grasp representations in humans by positron emission
tomography. 2. Observation compared with imagination. Experimental Brain Research, 112, 103–111.
Guazzelli, A., et al., 1998. Affordances, motivation, and the world graph theory. Adaptive Behavior, 6, 435–471.
Haykin, S.O., 2008. Neural networks and learning machines. Englewood Cliffs, NJ: Prentice-Hall.
Ito, M., 2002. The molecular organization of cerebellar long-term depression. Nature Reviews Neuroscience,
3, 896–902.
Krakauer, J.W. and Mazzoni, P., 2011. Human sensorimotor learning: adaptation, skill, and beyond. Current
Opinion in Neurobiology, 21, 636–644.
Leen, G. and Heffernan, D., 2002. Expanding automotive electronic systems. Computer, 35, 88–93.
Lelin, Z., et al., 2010. Wireless physiological monitoring and ocular tracking: 3D calibration in a fully-
immersive virtual health care environment. Engineering in Medicine and Biology Society (EMBC),
2010 Annual international conference of the IEEE, 4464–4467.
Lino, J.A., Salem, B., and Rauterberg, M., 2010. Responsive environments: user experiences for ambient intelligence. Journal of Ambient Intelligence and Smart Environments, 2, 347–367.
Maguire, E.A., 1997. Hippocampal involvement in human topographical memory: evidence from functional
imaging. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 352,
1475–1480.
Manzolli, J. and Verschure, P.F.M.J., 2005. Roboser: a real-world composition system. Computer Music
Journal, 29, 55–74.
Mozer, M.C., 1998. The neural network house: an environment that adapts to its inhabitants. In: M. Coen, ed.
Proceedings of the American Association for Artificial Intelligence Spring symposium on intelligent
environments. Menlo Park, CA: AAAI Press, 110–114.
Mozer, M.C., 2005. Lessons from an adaptive house. In: D. Cook and R. Das, eds. Smart environments: tech-
nologies, protocols, and applications. Hoboken, NJ: John Wiley & Sons, 273–294.
Mozer, M.C., et al., 1995. The neural network house: an overview. In: L. Niklasson and M. Boden, eds.
Current trends in connectionism. Hillsdale, NJ: Erlbaum, 371–380.
O’Grady, M. and O’Hare, G., 2012. How smart is your city? Science, 335, 1580–1581.
O’Keefe, J. and Dostrovsky, J.O., 1971. The hippocampus as a spatial map: preliminary evidence from unit
activity in the freely moving rat. Brain Research, 34, 171–175.
Oztop, E. and Arbib, M.A., 2002. Schema design and implementation of the grasp-related mirror neuron
system. Biological Cybernetics, 87, 116–140.
Palay, S.L., 1956. Synapses in the central nervous system. The Journal of Biophysical and Biochemical
Cytology, 2, 193–202.
Ramón y Cajal, S., 1911. Histologie du systeme nerveux de l’homme et des vertebres. Paris: A. Maloine
(English Translation by N. and L. Swanson, Oxford University Press, 1995).
Scoville, W.B. and Milner, B., 1957. Loss of recent memory after bilateral hippocampal lesions (reprinted in
J Neuropsychiatry Clin Neurosci 2000, 12, pp.103–113). Journal of Neurology, Neurosurgery, and
Psychiatry, 20, 11–21.
Sherrington, C.S., 1906. The integrative action of the nervous system. New Haven and London: Yale
University Press.

Squire, L.R. and Wixted, J.T., 2011. The cognitive neuroscience of human memory since H.M. Annual
Review of Neuroscience, 34, 259–288.
Stein, J., Fisher, S.S., and Otto, G., 2010. Interactive architecture: connecting and animating the built
environment with the internet of things. Proceedings of the Internet of Things workshop, 29
November–1 December, 2010, Tokyo, Japan.
Wassermann, K.C., et al., 2003. Live soundscape composition based on synthetic emotions. IEEE
Multimedia, 10, 82–90.
Williams, G., Harrold, J.A., and Cutler, D.J., 2000. The hypothalamus and the regulation of energy homeo-
stasis: lifting the lid on a black box. Proceedings of the Nutrition Society, 59, 385–396.
Yeang, K., 2006. Ecodesign, a manual for ecological design. London: Wiley Academy.
