
Pervasive and Mobile Computing 3 (2007) 53–73

www.elsevier.com/locate/pmc

How smart are our environments? An updated look at the state of the art
Diane J. Cook a,∗ , Sajal K. Das b
a School of Electrical Engineering and Computer Science, Washington State University, Pullman,
WA 99164, United States
b Department of Computer Science and Engineering, The University of Texas at Arlington,
TX 76019, United States

Received 21 December 2006; received in revised form 21 December 2006


Available online 28 December 2006

Abstract

In this paper we take a look at the state of the art in smart environments research. The survey is
motivated by the recent dramatic increase of activity in the field, and summarizes work in a variety
of supporting disciplines. We also discuss the application of smart environments research to health
monitoring and assistance, followed by ongoing challenges for continued research.
"c 2007 Elsevier B.V. All rights reserved.

Keywords: Smart environments; Artificial intelligence; Sensor networks; Health monitoring

1. Introduction

Designing smart environments is a goal that appeals to researchers in a variety of disciplines, including pervasive and mobile computing, sensor networks, artificial
intelligence, robotics, multimedia computing, middleware and agent-based software.
Advances in these supporting fields have prompted a tremendous increase in the number
of smart environment projects. Because of the rising popularity of the topic and a growing

∗ Corresponding author. Tel.: +1 509 335 4985.


E-mail addresses: [email protected] (D.J. Cook), [email protected] (S.K. Das).

1574-1192/$ - see front matter © 2007 Elsevier B.V. All rights reserved.
doi:10.1016/j.pmcj.2006.12.001

Fig. 1. The components of a smart environment.

desire for successful projects in the marketplace, we offer an updated look at the state of
the art in smart environments.
We define a smart environment as one that is able to acquire and apply knowledge
about the environment and its inhabitants in order to improve their experience in that
environment [98]. Typical components of a smart environment are shown in Fig. 1.
Automation in a smart environment can be viewed as a cycle of perceiving the state
of the environment, reasoning about the state together with task goals and outcomes
of possible actions, and acting upon the environment to change the state. Perception
of the environment is a bottom-up process. Sensors monitor the environment using
physical components and make information available through the communication layer.
The database stores this information while other information components process the raw
information into more useful knowledge (e.g., action models, patterns). New information
is presented to the decision-making algorithms (top layer) upon request or by prior
arrangement. Action execution flows top-down. The decision action is communicated

Fig. 2. Smart environment as an intelligent agent.

to the services layers (information and communication) which record the action and
communicate it to the physical components. The physical layer performs the action with the
help of actuators or device controllers, thus changing the state of the world and triggering
a new perception.
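As a concrete illustration of this cycle, the following minimal Python sketch maps the layered architecture of Fig. 1 onto a perceive-reason-act control loop. The sensor, database, decision-maker and actuator objects are hypothetical placeholders, not components of any particular project described in this survey.

    # Minimal sketch of the perceive-reason-act cycle of Fig. 1. The sensor,
    # database, decision-maker and actuator objects are hypothetical stand-ins
    # for the physical, information and decision layers.
    class ThermostatDecisionMaker:
        """Toy top-layer policy: keep the room near a target temperature."""
        def __init__(self, target_c=21.0, band_c=1.0):
            self.target_c, self.band_c = target_c, band_c

        def select_action(self, state):
            temp = state["temperature_c"]
            if temp < self.target_c - self.band_c:
                return "heater_on"
            if temp > self.target_c + self.band_c:
                return "heater_off"
            return None                              # no action needed

    def control_cycle(sensors, database, decider, actuators):
        # Perception (bottom-up): sensors -> communication -> information layer.
        state = {name: read() for name, read in sensors.items()}
        database.append(state)                       # raw data kept for later mining
        # Reasoning: the top layer selects an action for the current state.
        action = decider.select_action(state)
        # Action execution (top-down): decision -> services -> physical layer.
        if action is not None:
            actuators[action]()
        return action

    sensors = {"temperature_c": lambda: 19.2}        # stand-in for a real sensor read
    actuators = {"heater_on": lambda: print("heater on"),
                 "heater_off": lambda: print("heater off")}
    control_cycle(sensors, [], ThermostatDecisionMaker(), actuators)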
In the remainder of this paper we take a closer look at the state of the art in smart
environments by providing a summary of current research in these component areas. We
also summarize the fundamental challenges and solutions in modeling an inhabitant’s
mobility and activity in smart environments. This is followed by a discussion on the
application of smart environment research to health monitoring and assistance. Finally,
we introduce challenges for continued research.

2. Role of physical components in smart environments

Because smart environment research is being conducted in real-world, physical environments, the design and effective use of physical components such as sensors,
controllers, and smart devices is vital. This is because sensors enable us to observe, monitor
and interact with the physical world in real time, and also allow us to take appropriate
actions. The design and modeling of a smart environment can be abstracted to an intelligent
agent paradigm as shown in Fig. 2, wherein the physical components are what allow the
agent to sense and act upon the environment. Without these physical components, we end
up with theoretical algorithms that have limited or no practical use.
Like all intelligent agents, a smart environment relies on sensory data from the real
world. As Fig. 2 shows, the agent perceives the environment using these sensors. Using this information, the agent reasons about the environment and selects an action to change the state of the environment; the action is carried out through
actuators. Table 1 lists some of the properties of the environment that need to be captured
and how they can be measured.
The information required by smart environments is measured by sensors and collected
and shared with the help of (wireless) sensor networks consisting of a large number of
distributed sensor nodes that collaborate and coordinate to accomplish a task. Different
from conventional networks, whose ultimate goal is point-to-point (or point-to-multipoint) data forwarding, wireless sensor networks are often deployed to sense, collect,

Table 1
Sensors for smart environments (adapted from [49])

Properties            Measurand
Physical properties   Pressure, temperature, humidity, flow
Motion properties     Position, velocity, angular velocity, acceleration
Contact properties    Strain, force, torque, slip, vibration
Presence              Tactile/contact, proximity, distance/range, motion
Biochemical           Biochemical agents
Identification        Personal features, RFID or personal ID

process, and disseminate information about the targeted physical environment, such as temperature, humidity, motion, sound, and the like.
The importance of sensor networks as a research area unto itself is indicated by the
increasing number of related workshops [38] and recent efforts that have been initiated by
funding agencies such as DARPA [20] and NSF [64]. Indeed, wireless sensor networks
have attracted a plethora of research efforts due to their vast potential applications, such
as smart buildings, environment or habitat monitoring, utility plants, industry process
control, homes, ships, telemedicine, crisis management, transportation systems, and so on
[4,15,51].
Among desirable features, sensor/actuator networks need to be fast, easy to install and
maintain, robust and self-organizing to create a ubiquitous/pervasive computing platform.
However, such networks are characterized by a high degree of uncertainty in every aspect
of the system, including extremely limited resources on sensor nodes such as energy,
communication, computation and storage. This leads to uncertainty in the sensed data,
the sensing range, localization and synchronization results, wireless channel fluctuation
and transmission, topology control and routing behavior, security, and mobility [51]. Thus
the success of a sensor network is determined by how effectively it can overcome these uncertainties and provide the desired confidence in the performance of various
system components. Additionally, due to the deployment of large numbers of sensor
nodes and hence a potentially immense amount of data, it is often impractical to gather
all the sensory data from each individual sensor, in particular from the perspective of
energy conservation. Therefore, in-network processing (e.g., data fusion or aggregation)
is often employed as a key strategy to curtail the network load and hence reduce energy
consumption [5,56]. Aggregation itself may amplify the uncertainty in sensed data coupled
with resource limitations.
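As a rough illustration of in-network processing, the sketch below aggregates a round of readings at a cluster head so that one summary, rather than every raw sample, is forwarded toward the sink. The node identifiers, values and packet structure are invented for illustration and are not tied to any particular sensor platform.

    # Sketch of in-network aggregation at a cluster head: instead of forwarding
    # every raw reading, the head sends one summary packet per round, reducing
    # the number of radio transmissions (typically the dominant energy cost).
    from statistics import mean

    def aggregate_round(readings):
        """readings: list of (node_id, temperature) tuples collected this round."""
        values = [temp for _, temp in readings]
        return {"count": len(values), "mean": mean(values),
                "min": min(values), "max": max(values)}

    raw = [(1, 20.9), (2, 21.3), (3, 20.7), (4, 35.0)]   # node 4 may be faulty
    print(aggregate_round(raw))    # one packet toward the sink instead of four

Note that the summary hides which node produced the outlying value, which is one way aggregation can amplify the uncertainty already present in the sensed data.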
To assist manufacturers in creating sensors that can be interfaced to such networks, the
IEEE and NIST (National Institute of Standards and Technologies) created the IEEE 1451
standard for Smart Sensor Networks [37]. The IEEE 1451 studies formalized the notion
of a smart sensor as one that provides additional functions beyond the sensed quantity,
such as signal conditioning or processing, decision-making functions, or alarm functions
[28]. The result is a device that takes on some of the burden of intelligent reasoning,
reducing the amount of reasoning needed at the agent level. A number of companies have
commercialized sensors that are suitable for such applications [49]. Berkeley motes

and the TinyOS operating system [19] are popular platforms for working with embedded
networked sensors.
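The notion of a smart sensor described above can be illustrated with a short sketch: a hypothetical wrapper that performs simple signal conditioning (smoothing) and a local alarm function, so the reasoning agent receives conditioned values and events rather than raw samples. It illustrates the idea only; it is not code for the IEEE 1451 interfaces themselves.

    # Illustration of the smart sensor idea: the sensor conditions its own
    # signal (moving average) and performs a local alarm check, off-loading
    # this reasoning from the agent. Hypothetical sketch, not IEEE 1451 code.
    from collections import deque

    class SmartTemperatureSensor:
        def __init__(self, read_raw, window=5, alarm_above=45.0):
            self.read_raw = read_raw                 # callable returning a raw sample
            self.samples = deque(maxlen=window)
            self.alarm_above = alarm_above

        def read(self):
            self.samples.append(self.read_raw())
            conditioned = sum(self.samples) / len(self.samples)   # smoothing
            return {"value": conditioned,
                    "alarm": conditioned > self.alarm_above}      # local decision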
After the intelligent agent builds a representation of the current state of the environment
from perceived information, it can reason about the environment and use this information
to select an action. The agent executes the action using a controller, which causes a change
in the state of the environment.
Although customized controllers can be designed, an effective mechanism for
controlling many devices is using power line communication (PLC). PLC provides
networking and controller services using electrical wiring already deployed in most
environments. X-10 technology is one of the oldest PLC protocols and is typically used to
control lamps and appliances. X-10 controllers send signals over the power line to receiver modules, facilitating automated control from a computer as well as logging of inhabitants' manual interactions with these devices. X-10 interfaces have the advantage of inexpensive pricing
and ready availability, but they are often hampered by noisy signals and long delays.
The Smart House Applications Language (SHAL) [87] provides a more comprehensive
set of message types for specific sensing and control functions, but requires dedicated
multiconductor wiring.
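To make the control path concrete, the following sketch frames device control in X-10 terms (house code, unit code, command) and logs each command so the environment can later mine inhabitant interactions. The transmit function is a hypothetical stand-in for whatever PLC interface is installed; this is not code for a specific X-10 controller product.

    # Hypothetical sketch of issuing X-10 style commands (house code, unit
    # code, command) through a caller-supplied power-line interface, while
    # logging interactions for later behaviour mining.
    import time

    class X10Controller:
        def __init__(self, transmit, log):
            self.transmit = transmit       # e.g. a write to a PLC serial interface
            self.log = log                 # list used as a simple event log

        def send(self, house, unit, command):
            frame = f"{house}{unit} {command}"       # e.g. "A3 ON"
            self.transmit(frame)
            self.log.append((time.time(), house, unit, command))

    log = []
    ctrl = X10Controller(transmit=print, log=log)    # print stands in for the PLC
    ctrl.send("A", 3, "ON")                          # turn on lamp module A3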
Reliable data transmission over electrical wiring is difficult to achieve. The HomePlug
protocol specifications address this problem in the American market using error correction
coding and decoding techniques together with automatic request techniques. A peer-to-
peer communication protocol is available in the LonWorks protocol developed by Echelon
[25]. LonWorks networks can be implemented over a wide range of media, including
power lines, twisted pair, radio frequency (RF), infrared (IR), coaxial cable and fiber optics.
The ZigBee Alliance [101] is also developing wireless monitoring and control products
with low power requirements.
Much of the research in the area of physical component design is performed
independently of smart environment applications. However, some efforts have focused
primarily upon the design or use of these technologies to support smart environment tasks.
For example, Lins et al. [50] propose a tool called BeanWatcher to manage wireless sensor
network applications for mobile devices. This tool is designed primarily for monitoring
and managing multimedia data streams in intelligent environments, and is being
investigated as a management technique for intrusion detection applications in closed
environments. The use of radio frequency identification (RFID) tags to collect sensor-
derived data has been described by Want [93]; Philipose et al. [72] adopt a similar
approach by tagging objects in the environment and using sensed interactions to build
representations of inhabitant activities as sequences of such interactions. Profiles of
environment inhabitants, based solely upon temperature control behavior, have been built
by Vastamake [92].
In the same way that smart sensors move some of the reasoning work down to the
physical level, so researchers have also developed a number of intelligent devices. These
devices are not intended to solve the entire intelligent environment design problem, but
they do provide intelligent functionality within the confines of a single object and task.
For example, the smart sofa at Trinity College [47] contains programmable sensors on
the couch legs that identify the individual sitting on the couch based on their weight
distribution. The couch can thus greet the individual and could foreseeably customize

Fig. 3. The MIT intelligent spoon and interactive tea kettle [59].

the immediate surroundings for that person. A number of intelligent and networked
kitchen appliances have been designed by companies such as GE and Whirlpool that
add multimedia interfaces and status reporting capabilities to the kitchen [89]. The
ConnectIo device [31] refrigerates food until commanded to cook it by phone,
computer, or personal digital assistant (PDA).
The MIT Things That Think [59] group has developed intelligent devices such as smart
hotpads that determine whether a pan is too hot to touch, a spoon that provides feedback
about the temperature and viscosity of food, and a kettle that says how much longer you
have to wait for tea (see Fig. 3). The Philips interactive tablecloth [73] weaves a power
circuit into a washable linen tablecloth, so that devices can be charged when they are placed
anywhere on the tablecloth. While these devices are novel and useful for limited tasks, they
typically do not consider the bigger picture of interacting with the rest of the environment.
As pointed out by Rode [78], they also rarely consider difficulties encountered in cultures
and markets other than the one for which they are designed. Indeed, these devices would
be much more useful if they could adapt themselves to new environments and uses.
Other intelligent devices have been designed for the purpose of remotely controlling an
inhabitant’s environment. Examples of these physical components include smart phones
[66,76], wearable computers and head-mounted displays [42,65], and a unique gesture
pendant [86] which uses wearable jewelry to recognize gestures for executing control tasks.
The smart jewelry created by Kikin-Gil [43] differs from other intelligent devices in this
class because it allows teenagers to communicate with each other using predefined codes
emitted from their wearable jewelry, the Buddy Beads.

3. Pervasive computing and middleware issues

Rapid advances in smart technologies (e.g., sensors, devices and appliances, wireless
networking), software agents, and middleware technologies have led to the emergence of
pervasive or ubiquitous computing as perhaps the most exciting area of computing in recent
times. Empowered by wireless mobile communications and computing as well as context-
or situation-aware computing, pervasive computing aims at providing a "where you want, when you want, what you want and how you want" approach to the services layers shown in
Fig. 1 that connect users to applications and devices.
In fact, models of 21st century ubiquitous computing scenarios [94] depend not just
on the development of capability-rich mobile devices (such as web-phones or wearable

computers), but also on the development of automated machine-to-machine computing technologies, whereby devices interact with their peers and the networking infrastructure,
often without explicit operator control. To emphasize the fact that devices must be imbued
with an inherent consciousness about their current location and surrounding environment,
this computing paradigm is also called sentient [35] or context-aware computing.
Major challenges in pervasive computing include invisibility or (user/device)
unawareness, service discovery, interoperability and heterogeneity, proactivity, mobility,
privacy, security and trust [83]. In such environments, hardware and software entities
are expected to function autonomously, continually and correctly. Thus, pervasive
communications and computing offer a suitable platform for realization of smart
environments that link computers to everyday settings and commonplace tasks, and also
acquire and apply knowledge effectively in our surroundings. For an overview of enabling
technologies and challenges in pervasive computing, refer to the survey collected by Kumar
and Das [44].
Traditionally, agents have been employed to work on behalf of users, devices and
applications [9]. In addition, agents can be effectively used to provide transparent interfaces
between disparate entities in the environment, thus enhancing invisibility. Agent interaction
and collaboration is an integral part of pervasive (intelligent) environments, as agents can
overcome the limitations of hundreds and thousands of resource limited devices [45].
Pervasive or smart computing systems need to efficiently support resource and service
discovery — the process of discovering software processes/agents, hardware devices and
services. Service discovery provides situation-awareness to devices and device-awareness
to the environment. Although resource/service provisioning and discovery in mobile
environments has been well addressed in the literature, not much has been reported in
the context of pervasive computing. Among existing service discovery mechanisms, JINI
and Salutation as well as the Intentional Naming System (INS) [2] are used. For a
comprehensive treatment of different mobile middleware architectures and systems and
associated issues, refer to the work of Bellavista and Corradi [8].
3.1. Location-aware services
As mentioned above, a smart environment comprises numerous invisible devices, users,
and ubiquitous services. The development of effective middleware tools to mask the effects of heterogeneous wireless devices and networks, as well as of mobility, is a major challenge. Provisioning uniform services regardless of location is also vital. This leads to adaptive location-aware services that are most appropriate to the location as well as to the situation
under consideration.
Clearly, “Context (e.g., location and activity) awareness” is a key to building a smart
environment and associated applications. If devices can exploit emerging technologies
to infer the current activity state of the user (e.g., whether the user is walking or
driving, whether he/she is at the office, at home or in a public environment) and the
characteristics of their environment (e.g., the nearest Spanish-speaking ATM), they can
then intelligently manage both the information content and the means of information
distribution. For example, the embedded pressure sensors in the Aware Home [70] capture
inhabitants’ footfalls, and the smart home uses these data for position tracking and
pedestrian recognition.

The Neural Network House [61], the Intelligent Home [48], the House n [36] and
the MavHome [21,100] projects focus on the development of adaptive control of home
environments by also anticipating the location, routes and activities of the inhabitants.
This section summarizes a novel, information-theoretic paradigm for context learning and prediction that can be used for predicting with a high degree of accuracy the inhabitant's
future locations and activities, for automating activities, for optimizing control of devices
and tasks within the environment, and for identifying anomalies. The benefits of the
approach are a reduction in the cost of maintaining the environment, a reduction in
resource consumption, and provision of special health benefits for the elderly and people with
disabilities [22,32,34].
From an information theoretic viewpoint, an inhabitant’s mobility and activity create
an uncertainty of their locations and hence subsequent activities. In order to be cognizant
of their contexts, the smart environment needs to minimize this uncertainty as captured by
Shannon’s entropy measure [18]. An analysis of the inhabitant’s daily routine and life style
reveals that there exist some well-defined patterns. Although these patterns may change
over time, such changes are neither too frequent nor random, and the patterns can thus be learned. This simple
observation may lead us to assume that the inhabitant’s mobility or activity follows a piece-
wise stationary, stochastic, ergodic process with an associated uncertainty (entropy), as
originally proposed by Bhattacharya and Das [11] for optimally tracking (estimating and
predicting) the location of mobile users in wireless cellular networks.
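For intuition, the snippet below computes the Shannon entropy of an illustrative location distribution; in this framing, the smart environment's goal is to drive this uncertainty down by learning the inhabitant's routine. The zones and probabilities are made up.

    # Shannon entropy H(X) = -sum p(x) log2 p(x) of a location distribution.
    from math import log2

    def entropy(dist):
        return -sum(p * log2(p) for p in dist.values() if p > 0)

    uniform = {"kitchen": 0.25, "bedroom": 0.25, "office": 0.25, "bath": 0.25}
    learned = {"kitchen": 0.70, "bedroom": 0.20, "office": 0.05, "bath": 0.05}
    print(entropy(uniform))   # 2.0 bits: nothing is known about the inhabitant
    print(entropy(learned))   # about 1.26 bits: a learned routine reduces uncertainty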
This compression-based framework [11] was later adopted to design an optimal algorithm
for location (activity) tracking in a smart environment [79]. This novel scheme is based
on compressed dictionary management and on-line learning of the inhabitant’s mobility
profile, followed by a predictive resource management (energy consumption) scheme for
a single inhabitant smart space. However, the presence of multiple inhabitants with dynamically varying profiles and preferences makes such tracking much more challenging. This is due mainly to the fact that the relevant contexts of multiple inhabitants in the same environment are often inherently correlated and thus inter-dependent on each other. Therefore, the learning and prediction (decision-making) paradigm needs to consider the joint
(simultaneous) entropy for location tracking of multiple inhabitants [81]. In the following,
we consider single inhabitant and multiple inhabitant mobility tracking cases separately.

3.1.1. Single inhabitant mobility tracking


The learning and prediction based paradigm, based on information theory and text
compression, manages the inhabitant’s uncertainty in mobility and activity profiles in daily
life. The underlying idea is to build a compressed (intelligent) dictionary of such profiles
collected from sensor data, learn from this information, and predict future mobility and
actions. This prediction helps device automation and efficient resource management, thus
optimizing the goals of the smart environment. At a conceptual level, prediction involves
some form of statistical inference, where some sample of the inhabitant’s movement profile
(history) is used to provide intelligent estimates of future location, thereby reducing the
location uncertainty associated with the prediction [22,80].
Hypothesizing that the inhabitant’s mobility has repetitive patterns that can be learned,
and assuming the mobility as a stochastic random process, the following lower bound
result was proven [11]: It is impossible to optimally track mobility with less information

exchange between the smart environment and the device (detecting the inhabitant’s
mobility) than the entropy rate of the stochastic mobility process. Specifically, given the
past observations of the inhabitant’s position and the best possible predictors of future
position, some uncertainty in the position will always exist unless the device and the system
exchange location information. The actual method by which this exchange takes place is
irrelevant to this bound. All that matters is that the exchange exceeds the entropy rate of
the mobility process. Therefore, a key issue in establishing bounds is to characterize the
mobility process (and hence the entropy rate) in an adaptive manner. To this end, based
on the information-theoretic framework, an optimal on-line adaptive location management
algorithm, called LeZi-update, was proposed [11]. Rather than assuming a finite mobility
model, LeZi-update learns an inhabitant’s movement history stored in a Lempel-Ziv type
of compressed dictionary [52], builds a universal model by minimizing the entropy, and
predicts future locations with high accuracy. In other words, LeZi-update offers a model-
independent solution to manage mobility related uncertainty.
The LeZi-update framework uses a symbolic space to represent each sensing zone of the
smart environment as an alphabetic symbol and thus captures the inhabitant’s movement
history as a string of symbols. That is, while the geographic location data are often useful
in obtaining precise location coordinates, the symbolic information removes the burden
of frequent coordinate translation and is capable of achieving universality across different
smart spaces [61,80]. The symbolic representation also facilitates hierarchical
abstraction of the smart environment infrastructure into different levels of granularity. This
approach assumes that the inhabitants’ itineraries are inherently compressible and allows
application of universal data compression algorithms [16,52], which make very basic and
broad assumptions, and yet minimize the source entropy for stationary ergodic stochastic
processes [77]. The LeZi-update scheme endows the prediction process, by which the
system finds nodes whose position is uncertain, with sufficient information regarding
the node mobility profile. So overall, the application of information-theoretic methods to
location prediction allowed quantification of minimum information exchanges to maintain
accurate location information, provided an on-line method by which to characterize
mobility, and in addition, endowed an optimal prediction sequence [16,22]. Through
learning, this approach allows us to build a higher order mobility model rather than
assuming a finite model, and thus minimizes the entropy and leads to optimal performance.
Not only does the LeZi-update scheme optimally predict the inhabitant's current location
from past movement patterns, this framework can also be extended to effectively predict
other contexts such as activity, the most likely future routes (or trajectories) [79], resource
provisioning [22,80], and anomaly detection. The route prediction exploits the asymptotic
equi-partition property in information theory [18], which implies that the algorithm
predicts a relatively small set (called the typical set) of routes that the user is likely to take.
A smart environment can then act on this information by efficiently activating resources
(e.g., turning on the lights lying only on these routes).
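A rough sketch of the dictionary-building idea is given below: an LZ78-style parse of the symbolic movement string accumulates phrase statistics, which are then used to guess the most likely next zone for a given context. This is a simplified illustration of the compression-based principle, not the published LeZi-update algorithm; the zone symbols and history are invented.

    # Simplified illustration of the compression-based idea behind LeZi-update:
    # an LZ78-style parse of the symbolic movement history builds a phrase
    # dictionary, and phrase counts are used to guess the next zone.
    from collections import defaultdict

    def lz78_phrases(history):
        """Incrementally parse a string of zone symbols into counted phrases."""
        phrases, current = defaultdict(int), ""
        for symbol in history:
            current += symbol
            phrases[current] += 1
            if phrases[current] == 1:      # new phrase: store it, restart parsing
                current = ""
        return phrases

    def predict_next(phrases, context):
        """Return the zone seen most often after the longest matching context."""
        while context:
            votes = defaultdict(int)
            for phrase, count in phrases.items():
                if phrase.startswith(context) and len(phrase) > len(context):
                    votes[phrase[len(context)]] += count
            if votes:
                return max(votes, key=votes.get)
            context = context[1:]          # back off to a shorter context
        return max(phrases, key=phrases.get)[0]

    history = "kboklbokbokoklbokbok"       # k=kitchen, b=bedroom, o=office, l=living room
    print(predict_next(lz78_phrases(history), "bo"))   # likely zone after b then o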

3.1.2. Multiple inhabitant mobility tracking

As mentioned earlier, the multiple inhabitant case is more challenging. The mobility
tracking strategy described above is optimal for single inhabitant environments only.

It treats each inhabitant independently and fails to exploit the correlation between
the activities and hence the mobility patterns of multiple inhabitants within the same
environment. Intuitively, independent application of the above scheme for each individual
actually increases the overall joint location uncertainty. Mathematically, this can be
observed from the fact that conditioning reduces the entropy [18]. Recently, it was proven
[81] that optimal (i.e., attaining a lower bound on the joint entropy) location tracking of
multiple inhabitants is an NP-hard problem.
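The benefit of joint modeling can also be seen numerically: for two inhabitants' locations X and Y, H(X, Y) = H(X) + H(Y | X) <= H(X) + H(Y), with equality only when the inhabitants move independently. The sketch below illustrates this with a made-up trace in which the inhabitants tend to occupy the same room.

    # Joint entropy of two inhabitants' locations versus the sum of their
    # individual entropies; conditioning reduces entropy, so the joint value
    # is lower whenever the inhabitants' movements are correlated.
    from collections import Counter
    from math import log2

    def entropy(counts):
        total = sum(counts.values())
        return -sum((c / total) * log2(c / total) for c in counts.values())

    trace = [("kitchen", "kitchen"), ("kitchen", "kitchen"), ("office", "office"),
             ("office", "kitchen"), ("bedroom", "bedroom"), ("kitchen", "kitchen")]
    h_x = entropy(Counter(x for x, _ in trace))
    h_y = entropy(Counter(y for _, y in trace))
    h_xy = entropy(Counter(trace))
    print(h_x + h_y, h_xy)    # the joint entropy is lower than the independent sum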
Assuming a cooperative environment, a cooperative game theory based learning policy
was proposed [82] for location-aware resource management in multi-inhabitant smart
homes. This approach adapts to the uncertainty of multiple inhabitants’ locations and most
likely routes, by varying the learning rate parameters and minimizing the Mahalanobis
distance. However, the complexity of the multi-inhabitant location tracking problem was
not characterized in that work.
Hypothesizing that each inhabitant in a smart environment behaves selfishly to
fulfill his own preferences or objectives and to maximize his utility, the residence of
multiple inhabitants with varying preferences might lead to conflicting goals. Under this
circumstance, a smart environment must be intelligent enough to strike a balance between
multiple preferences, eventually attaining an equilibrium state. If each inhabitant is aware
of the situation facing all others, Nash equilibrium is a combination of deterministic or
randomized choices, one for each inhabitant, from which no inhabitant has an incentive
to unilaterally move away. This motivated the authors to investigate the multi-inhabitant
location tracking problem from the perspective of stochastic (non-cooperative) game
theory [81], where the inhabitants are the players and their activities are the strategies of
the game. The goal is to achieve a Nash equilibrium so that the smart environment is able
to probabilistically predict the inhabitants’ locations and activities with sufficient accuracy
in spite of possible correlations or conflicts. The proposed model and entropy learning
scheme were also validated through a simulation study and real data.

4. Natural interfaces for smart environments

Although designers of smart environments are encouraged by the progress that has
been made in the field over the last few years, much of this progress will go unused if
the technologies are difficult or unnatural for inhabitants. The desktop metaphor that is
generally employed for computer applications is inappropriate for a smart environment. As
pointed out in [1], explicit input must now be replaced with more human-like communication
capabilities and with implicit actions. Designers of interfaces for smart environments need
to consider issues such as the usability of the interface, the extent to which the interface is
end-user friendly, and the adaptiveness of the interface.
Instead of requiring a device that is foreign to many elderly adults and other groups
who can benefit from smart environments, the focus of research in this area is on natural
interfaces. The maturing of technologies including motion tracking, gesture recognition
(such as that demonstrated in Fig. 4), and speech processing facilitates natural interactions with
smart environments. The Classroom 2000 project [1] provides human–computer interfaces
through devices such as an interactive whiteboard that stores content in a database. The

Fig. 4. Real-time recognition of forty-word American Sign Language vocabulary [71].

smart classroom [84] also uses an interactive whiteboard, and allows lecturers to write
notes directly on the board with a digital pen. This classroom experience is further
enhanced by video and microphones that recognize a set of gestures, motions, and speech
that can be used to bring up information or focus attention in the room on appropriate
displays and material. The intelligent classroom at Northwestern University [29] employs
many of these same devices, and also uses the captured information to infer speaker intent.
From the inferred intent the room can control light settings, play videos, and display slides.
In none of these cases is explicit programming of the smart environment necessary —
natural actions of the inhabitants elicit appropriate responses from the environment.
Such ease of interaction is particularly important in an office environment, where
workers want to focus on the project at hand without being tripped up by technology. The
AIRE project [3], for example, has designed intelligent workspaces, conference rooms, and
kiosks that use a variety of mechanisms such as gaze-aware interfaces and multi-modal
sketching to ensure that the full meaning of a discussion between co-workers is obtained
through the integration of captured speech and captured writing on a whiteboard. The
Monica project [46] identifies gestures and activities in order to retrieve and project needed
information in a workplace environment. Xie et al. [95] also process images of human
hands and use this information as a virtual mouse. Similarly, the Interactive Room (iRoom)
project at Stanford [27] enables easy retrieval and display of useful information. Users can
display URLs on a selected surface by simply dragging the URL onto the appropriate PDA
icon.
Targeting early childhood education, a Smart Table was designed as part of the Smart
Kindergarten project at UCLA [88]. By automatically monitoring kids’ interaction with
blocks on a table surface, the Smart Table enables teachers to observe learning progress for
children in the class. Children respond particularly well to such natural interfaces, as in the
case of the KidsRoom at MIT [12]. The room immerses children in a fantasy adventure in
which the kids must work together to explore the story. Only through teamwork actions such as rowing a virtual
boat and yelling a magic word will the story advance, and these activities are captured
through cameras and microphones placed around the room.
Work on natural interfaces for smart environments extends well beyond simple rooms.
UCLA’s HyperMedia Studio project [57] adapts light and sound on a performance stage
automatically in response to performers’ positions and movements. The driver’s intent

Fig. 5. Facial expression recognition [71].

project at MIT [71] recognizes a driver's upcoming actions such as passing, turning,
stopping, car following, and lane changing by monitoring hand and leg motions. The
accuracy of classified actions reaches 97% within 0.5 s of the beginning of the driver’s
action. Facial expression recognition systems, such as the one shown in Fig. 5, can enhance
smart cars by recognizing when the driver is sleepy, or change the classroom interaction
when detecting that the students are bored or confused.

5. Inhabitant modeling

One feature that separates smart environments from environments that are user
controllable is the ability to model inhabitant behavior. Inhabitant modeling is a key
software component found in the information layer of a smart environment architecture
(see Fig. 1). If such a model can be built, the model can be used to customize the
environment to achieve goals such as automation, security, or energy efficiency. If the
model results in an accurate enough baseline, the baseline can provide a basis for detecting
anomalies and changes in inhabitant patterns. If the model has the ability to refine itself,
the environment can then potentially adapt itself to these changing patterns.
In this overview we characterize inhabitant modeling approaches based on three
characteristics: (i) The data that are used to build the model; (ii) The type of model that is
built; and (iii) The nature of the model-building algorithm (supervised, unsupervised).
The most common data source for model building is low-level sensor information.
These data are easy to collect and process. However, one challenge in using such low-level
data is the voluminous nature of the data collection. In the MavHome project [96], for
example, collected motion and lighting information alone results in an average of 10,310
events each day. In this project, a data mining pre-processor identifies common sequential
patterns in these data, then uses the patterns to build a hierarchical model of inhabitant
behavior. The approach by Loke [53] also relies upon these sensor data to determine the
inhabitant action and device state, then pulls information from similar situations to provide
a context-aware environment. Like the MavHome project, the iDorm research [24] focuses
on automating a living environment. However, instead of a Markov model, they model
inhabitant behavior by learning fuzzy rules that map the sensor state to actuator readings
representing inhabitant actions.
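As a much simplified illustration of this kind of pre-processing, the sketch below counts frequent fixed-length event sequences in a sensor event stream. The MavHome miner discovers variable-length, hierarchical patterns, so this only conveys the flavor of the step; the event names are invented.

    # Toy sequential-pattern pre-processor: count length-n windows of sensor
    # events and keep those occurring at least min_support times.
    from collections import Counter

    def frequent_sequences(events, n=3, min_support=2):
        windows = Counter(tuple(events[i:i + n]) for i in range(len(events) - n + 1))
        return {seq: c for seq, c in windows.items() if c >= min_support}

    events = ["motion_hall", "light_hall_on", "motion_kitchen", "light_kitchen_on",
              "motion_hall", "light_hall_on", "motion_kitchen", "light_kitchen_on"]
    print(frequent_sequences(events))    # recurring hall-to-kitchen lighting pattern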

The amount of data created by sensors can create a computational challenge for
modeling algorithms. However, the challenge is even greater for researchers who
incorporate audio and visual data into the inhabitant model. Luhr [55] uses video data
to find intertransaction (sequential) association rules in inhabitant actions. These rules then
form the basis for identifying emerging and abnormal behaviors in a smart environment.
The approach in [13] relies on speech detection to automatically model interacting groups
in a smart environment. Moncrieff [60] also employs audio data for generating inhabitant models; these data are combined with sensor data and recorded time offsets, then used to sense dangerous situations in a smart environment by maintaining an environment anxiety level.
The modeling techniques described so far can be characterized as unsupervised learning
approaches. However, if prelabeled inhabitant activity data are available, then supervised
learning approaches can be used to build a model of inhabitant activity. Muehlenbrock et al. [62] combine this approach with a naive Bayes learner to identify an individual's activity and current availability based on data such as PC/PDA usage. A naive Bayes
learner is also employed by Tapia et al. [90] to identify inhabitant activity from among
a set of 35 possible classes, based on collected sensor data.
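To illustrate the supervised route, the sketch below trains a tiny naive Bayes classifier over binary sensor features and labeled activities. The sensors, labels and data are invented, and published systems such as [90] operate over much larger feature and class sets.

    # Minimal naive Bayes over binary sensor features with Laplace smoothing.
    from collections import Counter, defaultdict
    from math import log

    def train(examples):
        """examples: list of (feature_dict, activity_label) pairs."""
        label_counts = Counter(label for _, label in examples)
        on_counts = defaultdict(Counter)        # label -> feature -> count of "on"
        for feats, label in examples:
            for f, v in feats.items():
                on_counts[label][f] += v
        return label_counts, on_counts

    def classify(feats, label_counts, on_counts):
        total = sum(label_counts.values())
        best, best_score = None, float("-inf")
        for label, n in label_counts.items():
            score = log(n / total)
            for f, v in feats.items():
                p_on = (on_counts[label][f] + 1) / (n + 2)    # Laplace smoothing
                score += log(p_on if v else 1 - p_on)
            if score > best_score:
                best, best_score = label, score
        return best

    data = [({"stove": 1, "tv": 0}, "cooking"), ({"stove": 1, "tv": 0}, "cooking"),
            ({"stove": 0, "tv": 1}, "relaxing"), ({"stove": 0, "tv": 1}, "relaxing")]
    model = train(data)
    print(classify({"stove": 1, "tv": 0}, *model))   # expected: cooking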

6. Decision making

Over the last few years, supporting technologies for smart environments, as described
in the earlier sections of this paper, have emerged, matured, and flourished. These
technologies complete the bottom three layers of our smart environment architecture,
shown in Fig. 1. However, building a fully automated environment on top of these
foundations requires the decision-making component in the top layer of the architecture,
and this is still a rarity. Automated decision-making and control techniques are available
for this task. In the work of Simpson et al. [85], the authors discuss how AI planning
systems could be employed not only to remind inhabitants of their next activity but also
to complete a task if needed. Temporal reasoning combined with a rule-based system is
used [23] to identify hazardous situations and return the environment to a safe state while
contacting the inhabitant.
Few fully-implemented applications of decision-making technologies have been
reported. One of the first is the Adaptive Home [61], which uses a neural network and
a reinforcement learner to determine ideal settings for lights and fans in the home. This
is implemented in a home setting and has been evaluated based on an individual living in
the Adaptive Home. Youngblood et al. [99] also use a reinforcement learner to automate
actual physical environments, the MavPad apartment and the MavLab workplace (shown
in Fig. 6).
The policy is learned based on a hierarchical hidden Markov model constructed through
mining of observed inhabitant actions. Like the Adaptive Home, this approach has been
implemented and tested on volunteers in a living environment [97]. The iDorm project [30]
is another of these notable projects that has realized a fully-implemented automated living
environment. In this case, the setting is a campus dorm environment. The environment
is automated using fuzzy rules learned through observation of inhabitant behavior. These

Fig. 6. MavPad (left) and MavLab (right) automated environments.

Fig. 7. Annual rate of change by age range.

rules can be added to, modified, and deleted as necessary, which allows the environment
to adapt to changing behavior. However, unlike the reinforcement learner approaches,
automation is based on imitating inhabitant behavior and therefore is more difficult to
employ for alternative goals such as energy efficiency.
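To give a feel for the reinforcement-learning route, the sketch below runs tabular Q-learning on a toy lighting task in which the state is room occupancy, the actions switch the light, and the reward trades comfort against energy use. The states, rewards and parameters are invented and far simpler than those used in the Adaptive Home or MavHome.

    # Tabular Q-learning on a toy light-automation task (invented environment).
    import random
    from collections import defaultdict

    STATES = ["occupied", "empty"]
    ACTIONS = ["light_on", "light_off"]

    def reward(state, action):
        if state == "occupied":
            return 1.0 if action == "light_on" else -1.0    # comfort
        return 1.0 if action == "light_off" else -0.5       # energy saving

    def train(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.1):
        q = defaultdict(float)                               # (state, action) -> value
        state = random.choice(STATES)
        for _ in range(episodes):
            if random.random() < epsilon:                    # explore
                action = random.choice(ACTIONS)
            else:                                            # exploit current policy
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            r = reward(state, action)
            next_state = random.choice(STATES)               # occupancy changes on its own
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])
            state = next_state
        return q

    q = train()
    print(max(ACTIONS, key=lambda a: q[("occupied", a)]))    # expected: light_on

Unlike imitation-based fuzzy rules, the reward function here can be changed (for example, to penalize energy use more heavily) without collecting new observations of inhabitant behavior.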

7. Health monitoring and assistance

There are many potential uses for a smart environment. Indeed, we anticipate that
features of smart environments will pervade our entire lives. They will automate our living environments, increase the productivity of our work environments, and customize our shopping experiences, and in accomplishing all of these tasks they will also improve the use of
resources such as water and electricity. In this section we focus on one class of applications
for smart environments: health monitoring and assistance.
One reason for singling out this topic is the amount of research activity found here, as
well as the emergence of companies with initiatives to bring smart elder care technologies
into the home [32,68]. Another reason is the tremendous need for smart environment
research to support the quality of life for individuals with disabilities and to promote aging
in place. The need for technology in this area is obvious from looking at our current and
projected future demographics. Fertility decline combined with increases in life expectancy

Fig. 8. Goals of environmental assistive technology.

is resulting in population aging [91]. The resulting impact on age distribution is shown in
Fig. 7. Not only is the number of individuals aged 60 and over expected to triple by 2050,
but the United Nations reports that, in most countries, more of these elderly people are
living alone. To many people, home is a sanctuary. Individuals would rather stay at home,
even at increased risk to their health and safety.
With the maturing of smart environment technologies, at-home automated assistance
can allow people with mental and physical challenges to lead independent lives in their own
homes. Pollack [75] categorizes such assistive technologies based on meeting the goals of
assurance (making sure the individual is safe and performing routine activities), support
(helping the individual compensate for impairment), and assessment (determining physical or
cognitive status) (Fig. 8). We summarize the technologies in each of these areas.
In the same fashion as researchers have developed technology for building models
of inhabitant behavior, so similar approaches can be taken to monitor individuals to
determine health status. In one such project [69], sensors are used to detect movement,
use of appliances, and presence in a room, and from this information researchers were
able to analyze the behavior patterns of two elderly ladies living alone. Nambu et al. [63]
found that analyzing TV watching patterns alone was effective at identifying and analyzing
behavior patterns, without the need for additional customized sensors. At the University of
Virginia’s MARC project [7], these sensors were able to actually categorize an individual’s
days into vacation (at home) and work days.
The next step in analyzing behavioral patterns is to detect changes in patterns and
anomalies. For example, MavHome activity data were collected [17] from an apartment
dweller and used to determine increasing, decreasing, and cyclic trends in patterns. Once
a baseline is established, this can be used to identify sudden changes. The approach of
learning intertransaction association rules [55] can also be helpful in identifying emerging
and abnormal activities, and the emotive computing work [60] actually records the anxiety
of the environment based upon deviation from normal behavior. When tied with health-
critical data and events, the environment may decide that information from these algorithms
is important enough to alert the inhabitant and/or caregiver.
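A very simple version of this idea is sketched below: daily activity counts are compared against a learned baseline, and days deviating by more than a few standard deviations are flagged. The counts are made up, and deployed systems model activities far more richly; this only illustrates the baseline-and-deviation principle.

    # Flag days whose activity count deviates strongly from the learned baseline.
    from statistics import mean, stdev

    def flag_anomalies(baseline_counts, new_counts, threshold=3.0):
        mu, sigma = mean(baseline_counts), stdev(baseline_counts)
        return [(day, count) for day, count in enumerate(new_counts)
                if sigma > 0 and abs(count - mu) > threshold * sigma]

    baseline = [112, 108, 120, 115, 110, 118, 113]   # daily kitchen-activity counts
    new_week = [114, 109, 42, 117, 111, 116, 115]    # day 2 shows a sharp drop
    print(flag_anomalies(baseline, new_week))        # [(2, 42)]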
Support for individuals living at home with special challenges is found in many varied
forms. If a model has been constructed of normal behavior, then the model can be used to
provide reminders of normal tasks [54]. Mihailidis et al. [58] provide this type of prompting

for the specific task of handwashing, one of the more stressful tasks for caregivers. By
recognizing where the individual is in the process and reminding them of the next step, the
tested subjects completed the task 25% more times than without the device. Customized
devices can prove useful for these individuals, as well. The benefits of robotic assistants
in nursing homes are demonstrated in [74], while the Gator Tech Smart Home [33] provides a visitor-identifying front door, an inhabitant-tracking floor, and a smart mailbox to the volunteer seniors living there. Kautz et al. [41] show that assistance is not
limited to a single environment. Using an activity compass, the location of an individual
can be tracked, and a person who may have wandered off can be assisted back to their goal
(or a safe) location.
Finally, smart environments can be used to actually determine the cognitive impairment
of the inhabitants. Such an assessment based on the ability of individuals to efficiently
complete kitchen tasks is demonstrated by Carter and Rosen [14]. A similar type of
assessment is provided [40] by monitoring individuals while they are playing computer
games. Assessment in this case is based on factors such as game difficulty, player
performance, and time to complete the game.

8. Conclusions and ongoing challenges

How smart are our environments? Research in the last few years has certainly matured
smart environment technology to the point of deployment in experimental situations.
This overview article also highlights the fact that there is active research not only in the
supporting technology areas such as physical components and middleware, but also in
the modeling and decision-making capabilities of entire automated environments. These
highlights indicate that environments are increasing in intelligence.
However, there are many ongoing challenges that researchers in this area continue to
face. The first is the ability to handle multiple inhabitants in a single environment. While
this problem is addressed from a limited perspective [81], modeling not only multiple
independent inhabitants but also accounting for inhabitant interactions and conflicting
goals is a very difficult task, and one that must be addressed in order to make smart
environment technologies viable for the general population.
Similarly, we would like to see the notion of “environment” extend from a single setting
to encompass all of an inhabitant’s spheres of influence. Many projects target a single
environment such as a home, an office, a car, or more recently, a hotel [10]. However, by
merging evidence and features from multiple settings, these environments should be able
to work together in order to customize all of an individual’s interactions with the outside
world to that particular individual. As an example, how can we generalize intelligent
automation and decision-making capabilities to encompass heterogeneous smart spaces
such as smart homes, vehicles, roads, offices, airports, shopping malls, or hospitals, through
which an inhabitant may pass in daily life?
An interesting direction that researchers in the future may consider is not only the ability
to adjust an environment to fit an individual’s preferences, but to use the environment as
a mechanism for influencing change in the individual. Eng et al. [26] have discovered that
visitors may actually visit, through carefully selected cues given by a robot, areas of a museum they would normally avoid. Similarly, environmental influences can affect an individual's activity patterns, an individual's mood, and ultimately the individual's state of health and
mind.
While all of these issues are interesting from a research perspective, they also raise
concerns about the security and privacy of individuals utilizing smart environment
technologies. Reported work [6,67,39] has identified some of these issues and introduced
possible mechanisms for ensuring privacy and security of collected data. However, much
more work remains to ensure that collected data and automated environments do not
jeopardize the privacy or well-being of their inhabitants.
Finally, a useful goal for the smart environment research community is to define
evaluation mechanisms. While performance measures can be defined for each technology
within the architecture hierarchy shown in Fig. 1, performance measures for entire
smart environments still need to be established. This can form the basis of comparative
assessments and identify areas that need further investigation. The technology in this
field is advancing rapidly. By addressing these issues we can ensure that the result is an
environment with reliable functionality that improves the quality of life for its inhabitants
and for our communities.

Acknowledgements

We would like to thank Marco Conti for valuable comments that helped us improve
the quality of the paper. This work is partially supported by NSF grants IIS-0121297 and
IIS-0326505.

References

[1] G.D. Abowd, E.D. Mynatt, Designing for the human experience in smart environments, in: D.J. Cook,
S.K. Das (Eds.), Smart Environments: Technology, Protocols, and Applications, Wiley, 2004, pp. 153–174.
[2] W. Adjie-Winoto, E. Schwartz, H. Balakrishnan, J. Lilley, The design and implementation of an intentional
naming system, in: Proceedings of the Seventeenth ACM Symposium on Operating Systems Principles,
1999, pp. 186–201.
[3] A. Adler, R. Davis, Speech and sketching for multimodal design, in: Proceedings of the 9th International
Conference on Intelligent User Interfaces, 2004, pp. 214–216.
[4] I. Akyildiz, W. Su, Y. Sankarasubramaniam, E. Cayirci, A survey on sensor networks, IEEE
Communications Magazine 40 (8) (2002) 102–114.
[5] B. Krishnamachari, D. Estrin, S. Wicker, Impact of data aggregation in wireless sensor networks, in:
Proceedings of the IEEE International Conference on Distributed Computing Systems, 2002, pp. 575–578.
[6] P.G. Argyroudis, D. O’Mahony, Securing communications in the smart home, in: L. Jang, M. Guo, G. Gao,
N. Jha (Eds.), Proceedings of 2004 International Conference on Embedded and Ubiquitous Computing,
EUC’04, Aizu-Wakamatsu, Japan, in: Lecture Notes in Computer Science, vol. 3207, Springer-Verlag,
August 2004, pp. 891–902.
[7] T.S. Barger, D.E. Brown, M. Alwan, Health status monitoring through analysis of behavioral patterns,
IEEE Transactions on Systems, Man, and Cybernetics, Part A 35 (1) (2005) 22–27.
[8] P. Bellavista, A. Corradi, Mobile Middleware, CRC Press, 2006.
[9] P. Bellavista, A. Corradi, C. Stefanelli, A mobile agent infrastructure for the mobility support, in:
Proceedings of the 2000 ACM symposium on Applied computing, 2000, pp. 239–245.
[10] K. Belson, Your hotel room knows just what you like, November 16, 2005.

[11] A. Bhattacharya, S.K. Das, Lezi-update: An information-theoretic approach for personal mobility tracking
in pcs networks, Wireless Networks 8 (2002) 121–135.
[12] A.F. Bobick, S.S. Intille, J.W. Davis, F. Baird, C.S. Pinhanez, L.W. Campbell, Y.A. Ivanov, A. Schuette,
A. Wilson, The kidsroom: A perceptually-based interactive and immersive story environment, Presence 8
(4) (1999) 369–393.
[13] O. Brdiczka, J. Maisonnasse, P. Reignier, Automatic detection of interaction groups, in: Proceedings of the
International Conference on Multimodal Interfaces, 2005.
[14] J. Carter, M. Rosen, Unobtrusive sensing of activities of daily living: A preliminary report, in: Proceedings
of the 1st Joint BMES/EMBS Conference, 1999, p. 678.
[15] C. Chong, S. Kumar, Sensor networks: Evolution, opportunities, and challenges, Proceedings of the IEEE
91 (8) (2003) 1247–1256.
[16] J.G. Cleary, I.H. Witten, Data compression using adaptive coding and partial string matching, IEEE
Transactions on Communications 32 (4) (1984) 396–402.
[17] D.J. Cook, G.M. Youngblood, G. Jain, Algorithms for smart spaces, in: Technology for Aging, Disability
and Independence: Computer and Engineering for Design and Applications, Wiley, 2006.
[18] T.M. Cover, J.A. Thomas, Elements of Information Theory, Wiley, 1991.
[19] D. Culler, TinyOS: Operating system design for wireless sensor networks, Sensors (2006).
[20] Darpa Sensit, http://www.sainc.com/sensit/, 2006.
[21] S.K. Das, D.J. Cook, A. Bhattacharya, E.O. Heierman, T.-Y. Lin, The role of prediction algorithms in the
mavhome smart home architecture, IEEE Wireless Communications 9 (6) (2002) 77–84.
[22] S.K. Das, C. Rose, Coping with uncertainty in wireless mobile networks, in: Proceedings of the IEEE
Personal, Indoor and Mobile Radio Communications, 2004.
[23] R.L. de Mántaras, L. Saitta (Eds.), The Use of Temporal Reasoning and Management of Complex Events
in Smart Homes, IOS Press, 2004.
[24] F. Doctor, H. Hagras, V. Callaghan, A fuzzy embedded agent-based approach for realizing ambient
intelligence in intelligent inhabited environments, IEEE Transactions on Systems, Man, and Cybernetics,
Part A 35 (1) (2005) 55–65.
[25] Echelon, http://www.echelon.com/, 2006.
[26] K. Eng, R.J. Douglas, P.F.M.J. Verschure, An interactive space that learns to influence human behavior,
IEEE Transactions on Systems, Man, and Cybernetics, Part A 35 (1) (2005) 66–77.
[27] A. Fox, B. Johanson, P. Hanrahan, T. Winograd, Integrating information appliances into an interactive
space, IEEE Computer Graphics and Applications 20 (3) (2000) 54–65.
[28] R. Frank, Understanding Smart Sensors, Artech House, 2000.
[29] D. Franklin, Cooperating with people: The intelligent classroom, in: Proceedings of the National
Conference on Artificial Intelligence, 1998, pp. 555–560.
[30] H. Hagras, V. Callaghan, M. Colley, G. Clarke, A. Pounds-Cornish, H. Duman, Creating an ambient-
intelligence environment using embedded agents, IEEE Intelligent Systems 19 (6) (2004) 12–20.
[31] L. Hales, Intelligent appliances are wave of the future, January 22, 2006.
[32] I.P. Health, http://www.intel.com/research/prohealth/cs-aging in place.htm, 2006.
[33] A. Helal, W. Mann, H. El-Zabadani, J. King, Y. Kaddoura, E. Jansen, The gator tech smart house: A
programmable pervasive space, IEEE Computer 38 (3) (2005) 50–60.
[34] S. Helal, B. Winkler, C. Lee, Y. Kaddourah, L. Ran, C. Giraldo, W. Mann, Enabling location-aware
pervasive computing applications for the elderly, in: Proceedings of the First IEEE Pervasive Computing
Conference, 2003.
[35] A. Hopper, Sentient Computing, 1999.
[36] House n. House n Living Laboratory Introduction, 2006.
[37] IEEE 1451. A standard smart transducer interface, 2001.
[38] Information Processing in Sensor Networks (IPSN), http://www.ece.wisc.edu/ipsn05/, 2005.
[39] Amigo, Ambient intelligence for the networked home environment, http://www.hitech-projects.com/euprojects/amigo/, 2006.
[40] H.B. Jimison, M. Pavel, J. Pavel, Adaptive interfaces for home health, in: Proceedings of the International
Workshop on Ubiquitous Computing for Pervasive Healthcare, 2003.
[41] H. Kautz, L. Arnstein, G. Borriello, O. Etzioni, D. Fox, An overview of the assisted cognition project, in:
Proceedings of the AAAI Workshop on Automation as Caregiver: The Role of Intelligent Technology in
Elder Care, 2002, pp. 60–65.

[42] T. Keaton, S.M. Dominguez, A.H. Sayed, Browsing the environment with the SNAP & TELL wearable
computer system, Personal and Ubiquitous Computing 9 (6) (2005) 343–355.
[43] R. Kikin-Gil, Buddybeads: Techno-jewelry for non-verbal communication within teenager girls groups,
Personal and Ubiquitous Computing 10 (2–3) (2005) 106–109.
[44] M. Kumar, S.K. Das, Pervasive computing: Enabling technologies and challenges, in: A. Zomaya (Ed.),
Handbook of Nature-Inspired and Innovative Computing: Integrating Classical Models with Emerging
Technologies, Springer, 2006.
[45] M. Kumar, B. Shirazi, S.K. Das, M. Singhal, B. Sung, D. Levine, Pervasive information communities
organization (PICO): A middleware framework for pervasive computing, IEEE Pervasive Computing 2 (3)
(2003) 72–79.
[46] C. Le Gal, Smart offices, in: D.J. Cook, S.K. Das (Eds.), Smart Environments: Technology, Protocols, and
Applications, Wiley, 2004.
[47] J. Legon, ‘Smart sofa’ aimed at couch potatoes, 2003.
[48] V. Lesser, M. Atighetchi, B. Benyo, B. Horling, A. Raja, R. Vincent, T. Wagner, X. Ping, S.X. Zhang, The
intelligent home testbed, in: Proceedings of the Autonomy Control Software Workshop, 1999.
[49] F.L. Lewis, Wireless sensor networks, in: D.J. Cook, S.K. Das (Eds.), Smart Environments: Technology,
Protocols, and Applications, Wiley, 2004.
[50] A. Lins, E.F. Nakamura, A.A.F. Loureiro, C.J.N. Coelho Jr., BeanWatcher: A tool to generate multimedia monitoring applications for wireless sensor networks, in: A. Marshall, N. Agoulmine (Eds.), Management of Multimedia Networks and Services, Springer-Verlag, 2003, pp. 128–141.
[51] Y. Liu, S.K. Das, Information intensive wireless sensor networks: Potentials and challenges, IEEE
Communications Magazine 44 (11) (2006) 142–147.
[52] J. Ziv, A. Lempel, Compression of individual sequences via variable-rate coding, IEEE Transactions on Information Theory 24 (5) (1978) 530–536.
[53] S.W. Loke, Representing and reasoning with situations for context-aware pervasive computing: A logic
programming perspective, The Knowledge Engineering Review 19 (3) (2005) 213–233.
[54] E.F. LoPresti, A. Mihailidis, N. Kirsch, Assistive technology for cognitive rehabilitation: State of the art,
Neuropsychological Rehabilitation 14 (1–2) (2004) 5–39.
[55] S. Luhr, Recognition of emergent human behaviour in a smart home: A data mining approach, in: Design
and Use of Smart Environments, Journal of Pervasive and Mobile Computing (2007) (special issue).
[56] H. Luo, Y. Liu, S.K. Das, Routing correlated data with fusion cost in wireless sensor networks, IEEE
Transactions on Mobile Computing 5 (11) (2006) 1620–1632.
[57] E. Mendelowitz, J. Burke, Kolo and Nebesko: A distributed media control framework for the arts, in:
Proceedings of the International Conference on Distributed Frameworks for Multimedia Applications,
2005.
[58] A. Mihailidis, J.C. Barbenel, G. Fernie, The efficacy of an intelligent cognitive orthosis to facilitate
handwashing by persons with moderate-to-severe dementia, Neuropsychological Rehabilitation 14 (1–2)
(2004) 135–171.
[59] MIT, Things That Think, 2006.
[60] S. Moncrieff, Multi-modal emotive computing in a smart house environment, in: Design and Use of Smart
Environments, Journal of Pervasive and Mobile Computing (2007) (special issue).
[61] M.C. Mozer, Lessons from an adaptive home, in: D.J. Cook, S.K. Das (Eds.), Smart Environments:
Technology, Protocols, and Applications, Wiley, 2004, pp. 273–298.
[62] M. Muehlenbrock, O. Brdiczka, D. Snowdon, J. Meunier, Learning to detect user activity and availability
from a variety of sensor data, in: Proceedings of the IEEE International Conference on Pervasive
Computing and Communications, 2004.
[63] M. Nambu, K. Nakajima, M. Noshira, T. Tamura, An algorithm for the automatic detection of health conditions, IEEE Engineering in Medicine and Biology Magazine 24 (4) (2005) 38–42.
[64] National Science Foundation Sensors and Sensor Networks, http://www.nsf.gov/pubs/2005/nsf05526/
nsf05526.htm, 2005.
[65] M. Nilsson, M. Drugge, U. Liljedahl, K. Synnes, P. Parnes, A study on users’ preference on interruption
when using wearable computers and head mounted displays, in: Proceedings of the IEEE International
Conference on Pervasive Computing and Communications, 2005.
[66] A. Nischelwitzer, A. Holzinger, M. Meisenberger, Usability and user-centered development (UCD) for smart phones — the mobile learning engine (MLE), a user-centered development approach for a rich content application, in: Proceedings of Human Computer Interaction International, 2005.
[67] P. Nixon, W. Wagealla, C. English, S. Terzis, Security, privacy and trust issues in smart environments,
in: D.J. Cook, S.K. Das (Eds.), Smart Environments: Technology, Protocols, and Applications, Wiley,
2004.
[68] Oatfield Estates, 2006.
[69] M. Ogawa, R. Suzuki, S. Otake, T. Izutsu, T. Iwaya, T. Togawa, Long term remote behavioral monitoring of elderly by using sensors installed in ordinary houses, in: Proceedings of the IEEE-EMBS Special Topic Conference on Microtechnologies in Medicine and Biology, 2002, pp. 322–335.
[70] R.J. Orr, G.D. Abowd, The smart floor: A mechanism for natural user identification and tracking, in:
Proceedings of the ACM Conference on Human Factors in Computing Systems, 2000.
[71] A. Pentland, Perceptual environments, in: D.J. Cook, S.K. Das (Eds.), Smart Environments: Technology,
Protocols, and Applications, Wiley, 2004.
[72] M. Philipose, K.P. Fishkin, M. Perkowitz, D.J. Patterson, D. Hahnel, D. Fox, H. Kautz, Inferring ADLs
from interactions with objects, IEEE Pervasive Computing (2005).
[73] Philips, Interactive tablecloth, 2006.
[74] J. Pineau, M. Montemerlo, M. Pollack, N. Roy, S. Thrun, Towards robotic assistants in nursing homes:
Challenges and results, Robotics and Autonomous Systems 42 (3–4) (2003).
[75] M.E. Pollack, Intelligent technology for an aging population: The use of AI to assist elders with cognitive
impairment, AI Magazine 26 (2) (2005) 9–24.
[76] N. Ravi, P. Stern, N. Desai, L. Iftode, Accessing ubiquitous services using smart phones, in: Proceedings
of the IEEE International Conference on Pervasive Computing and Communications, 2005.
[77] J. Rissanen, Stochastic Complexity in Statistical Inquiry, World Scientific Publishing Company, 1989.
[78] J.A. Rode, Appliances for whom? Considering place, Personal and Ubiquitous Computing 10 (2–3) (2005) 90–94.
[79] A. Roy, S. Bhaumik, A. Bhattacharya, K. Basu, D.J. Cook, S.K. Das, Location aware resource management
in smart homes, in: Proceedings of the Conference on Pervasive Computing, 2003, pp. 521–524.
[80] A. Roy, S.K. Das, A. Misra, Exploiting information theory for adaptive mobility and resource management
in future wireless cellular networks, IEEE Wireless Communications 11 (8) (2004) 59–64.
[81] N. Roy, A. Roy, S.K. Das, Context-aware resource management in multi-inhabitant smart homes: A Nash H-learning based approach, in: The IEEE Conference on Pervasive Computing and Communications, Journal of Pervasive and Mobile Computing (2006) (special issue).
[82] N. Roy, A. Roy, K. Basu, S.K. Das, A cooperative learning framework for mobility-aware resource
management in multi-inhabitant smart homes, in: Proceedings of the IEEE Conference on Mobile and
Ubiquitous Systems: Networking and Services, MobiQuitous, 2005, pp. 393–403.
[83] M. Satyanarayanan, Pervasive computing: Vision and challenges, IEEE Personal Communications 8 (4) (2001) 10–17.
[84] Y. Shi, W. Xie, G. Xu, R. Shi, E. Chen, Y. Mao, F. Liu, The smart classroom: Merging technologies for
seamless tele-education, IEEE Pervasive Computing 2 (2003).
[85] R. Simpson, D. Schreckenghost, E.F. LoPresti, N. Kirsch, Plans and planning in smart homes,
in: J. Augusto, C. Nugent (Eds.), AI and Smart Homes, Springer Verlag, 2006.
[86] T. Starner, J. Auxier, D. Ashbrook, M. Gandy, The gesture pendant: A self-illuminating, wearable, infrared
computer vision system for home automation control and medical monitoring, in: Proceedings of the IEEE
International Symposium on Wearable Computing, 2000, pp. 87–94.
[87] H.B. Stauffer, The smart house system: A technical overview, The Computer Applications Journal 31
(1993) 14–23.
[88] P. Steurer, M.B. Srivastava, System design of smart table, in: Proceedings of the IEEE International
Conference on Pervasive Computing and Communications, 2003.
[89] K. Swisher, “Intelligent” appliances will soon invade homes, 2006.
[90] E.M. Tapia, S.S. Intille, K. Larson, Activity recognition in the home using simple and ubiquitous sensors,
in: Proceedings of Pervasive, 2004, pp. 158–175.
[91] United Nations Department of Economic and Social Affairs, http://www.un.org/esa/population/unpop.htm,
2006.
[92] R. Vastamaki, I. Sinkkonen, C. Leinonen, A behavioural model of temperature controller usage and energy
saving, Personal and Ubiquitous Computing 9 (4) (2005) 250–259.
[93] R. Want, Enabling ubiquitous sensing with RFID, Computer 37 (4) (2004) 84–86.
[94] M. Weiser, The computer for the 21st century, Scientific American 265 (3) (1991) 94–104.
[95] W. Xie, E.K. Teoh, R. Venkateswarlu, X. Chen, Hand as natural man–machine interface in smart
environments, in: Proceedings of the IASTED International Conference on Signal Processing, Pattern
Recognition, and Applications, 2006, pp. 117–122.
[96] G.M. Youngblood, Automating inhabitant interactions in home and workplace environments through
data-driven generation of hierarchical partially-observable Markov decision processes, Ph.D. Thesis, The
University of Texas at Arlington, 2005.
[97] G.M. Youngblood, D.J. Cook, Data mining for hierarchical model creation, IEEE Transactions on Systems,
Man, and Cybernetics, Part C (2007).
[98] G.M. Youngblood, D.J. Cook, L.B. Holder, E.O. Heierman, Automation intelligence for the smart
environment, in: Proceedings of the International Joint Conference on Artificial Intelligence, 2005.
[99] G.M. Youngblood, L.B. Holder, D.J. Cook, Managing adaptive versatile environments, Journal of
Pervasive and Mobile Computing 1 (4) (2005) 373–403.
[100] G.M. Youngblood, D.J. Cook, L.B. Holder, Managing adaptive versatile environments, in: Proceedings of the IEEE International Conference on Pervasive Computing and Communications, 2005, pp. 351–360.
[101] ZigBee Alliance, http://www.zigbee.org/, 2006.

Dr. Diane J. Cook is currently a Huie-Rogers Chair Professor in the School of Electrical
Engineering and Computer Science at Washington State University. She received a
B.S. degree from Wheaton College in 1985, an M.S. degree from the University of Illinois in 1987, and a Ph.D. degree from the University of Illinois in 1990. Dr. Cook currently serves as the Editor-in-Chief of the IEEE Transactions on Systems, Man, and
Cybernetics, Part B: Cybernetics. Her research interests include artificial intelligence,
machine learning, graph-based relational data mining, smart environments, and robotics.

Dr. Sajal K. Das is a Distinguished Scholar Professor of Computer Science and Engineering and the Founding Director of the Center for Research in Wireless Mobility and Networking (CReWMaN) at the University of Texas at Arlington (UTA).
His current research interests include sensor networks, smart environments, resource and
mobility management in wireless networks, mobile and pervasive computing, wireless
multimedia and QoS provisioning, mobile internet architectures and protocols, grid
computing, biological networking, applied graph theory and game theory. Dr. Das
coauthored the book “Smart Environments: Technology, Protocols, and Applications”
(John Wiley, 2005). He has published over 400 research papers in international
conferences and journals, and holds five US patents. He received Best Paper Awards at IEEE PerCom’06, ACM MobiCom’99, ICOIN’02, ACM MSWiM’00, and ACM/IEEE PADS’97. He is also a recipient of UTA’s Outstanding Faculty Research Award in Computer Science (2001 and 2003), the College of Engineering Research Excellence Award (2003), the University Award for Distinguished Record of Research (2005), and the UTA Academy of
Distinguished Scholars Award (2006). Dr. Das serves as the Editor-in-Chief of the Pervasive and Mobile Computing journal, and as an Associate Editor of IEEE Transactions on Mobile Computing, ACM/Springer Wireless Networks, and IEEE Transactions on Parallel and Distributed Systems. He has served as General or Technical Program Chair
and TPC member of numerous IEEE and ACM conferences. He is a member of IEEE TCCC and TCPP Executive
Committees.
