Table of Contents

CHAPTER 1: AGENT TECHNOLOGY CHARACTERISTICS
1.1 Social Ability
1.2 Data Ingestion
1.3 Reactivity
1.4 Autonomy
1.5 Proactiveness

CHAPTER 2: AC SYSTEMS
2.1 Current State of AC System
2.2 The Future of AC System
2.3 Requirements
2.3.1 Advanced Technology of Wearable Devices
2.3.2 Dataset Collection
2.3.3 Ethical Considerations
2.4 Research Paper Sources

CHAPTER 3: APPLICATION OF AGENTS TO AC SYSTEMS
3.1 Education - Facial Recognition to Measure Understanding
3.1.1 Agent Architecture
3.1.2 Learning Method
3.1.3 Inference Techniques
3.1.4 Design of Knowledge Base
3.2 Healthcare - Pain Levels Detection
3.2.1 Agent Architecture
3.2.2 Learning Method
3.2.3 Inference Techniques
3.2.4 Design of Knowledge Base
3.3 Video Games - Detecting Friendliness of the Players
3.3.1 Agent Architecture
3.3.2 Learning Method
3.3.3 Inference Techniques
3.3.4 Design of Knowledge Base

CHAPTER 4: CASE STUDY
4.1 Automotive Industry - In-Car Facial Recognition
4.2 Agent Architecture
4.3 Learning Method
4.4 Inference Techniques
4.5 Design of Knowledge Base
4.6 Conclusion

References
CHAPTER 1: AGENT TECHNOLOGY CHARACTERISTICS
Affective computing is the development of systems that are capable of recognising,
interpreting, processing, and simulating human feelings, moods, or emotions using facial
or verbal cues. Several techniques are used to capture affective data, including
emotional speech, facial affect detection, body gestures, physiological monitoring,
and visual aesthetics.
1.1 Social Ability
Social ability refers to the capability of an agent to communicate or interact with
other agents or humans in order to satisfy its design objectives. It is crucial in
affective computing because the agent makes decisions or gives appropriate responses
based on its understanding of a human's emotional state. Passive sensors or wearable
devices such as video cameras, microphones, or polygraphs are used to capture the
data. For instance, many people with autism have difficulty understanding the
emotions of others; a wearable device may help them comprehend another person's
emotions so they can respond and interact in social situations.
1.2 Data Ingestion
Data ingestion is the process of obtaining, importing, transferring, and loading data for
immediate use or storage in a database. It involves fetching data from a variety of
sources and detecting changed data. Data ingestion matters because cleaning,
parsing, assembling, and gut-checking data is an extremely time-consuming task.
We have all heard the phrase: bad data yields bad results. Data is essential in
training an AC system; a poor-quality video of a human facial expression yields
blurred, unclear facial geometry and may therefore produce a false outcome. For
example, a system that detects human affect through speech or video requires facial
geometries to identify emotion. The system improves as more data is collected from
humans, providing better and more precise results. On top of that, good ingestion
increases productivity and efficiency, and ensures an improved user experience.
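To make the gut-checking step concrete, the following is a minimal sketch in Python of an ingestion pass that rejects unusable samples before they reach training. The record fields (blur_score, face_detected) and the threshold are purely illustrative and are not taken from any particular AC pipeline.

```python
# Minimal data-ingestion sketch: load records from several sources,
# reject low-quality samples before they reach the training set.
# The record fields (blur_score, face_detected) are hypothetical.

def ingest(sources, min_blur_score=0.5):
    """Collect records from all sources, keeping only usable ones."""
    clean, rejected = [], []
    for source in sources:
        for record in source:
            usable = record.get("face_detected") and record.get("blur_score", 0.0) >= min_blur_score
            (clean if usable else rejected).append(record)
    return clean, rejected

if __name__ == "__main__":
    webcam = [{"id": 1, "face_detected": True, "blur_score": 0.9},
              {"id": 2, "face_detected": True, "blur_score": 0.2}]   # too blurry
    archive = [{"id": 3, "face_detected": False, "blur_score": 0.8}]  # no face found
    clean, rejected = ingest([webcam, archive])
    print(f"kept {len(clean)} record(s), rejected {len(rejected)}")
```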
1.3 Reactivity
Reactivity refers to the ability of an agent to perceive its environment and respond
rapidly to changes in order to fulfil its design objectives. It is one of the key
features of affective computing because the environment is dynamic, and the agent
must cope with possible failures through a logical reasoning process. For example,
an in-car technology developed by a Japanese car company can sense when the driver
is drowsy or distracted, alert the driver to be cautious, suggest the next rest area,
or contact an emergency service in an emergency situation. It works by sensing the
surroundings and the driver with sensors, then running a data mining process to
produce an outcome or suggest an action to be taken.
1.4 Autonomy
Agents in affective computing must be capable of autonomous action, taken without
the intervention of humans or other agents. Machine learning is a branch of artificial
intelligence that gives computer systems the ability to learn automatically, react to
situations accordingly, and improve themselves through observation without a direct
request. To illustrate, an online learning class could automatically detect when a
student is having difficulty and alert the lecturer, so additional explanations or
information can be offered to the confused student. The agent recognises the
student's facial expression through a webcam; confusion is indicated when the
student frowns or raises one eyebrow higher than the other.
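The frown or raised-eyebrow cue can be written as a simple autonomous rule over facial measurements. The sketch below assumes hypothetical, pre-extracted and normalised eyebrow heights and brow-gap values; the thresholds are invented for illustration and would need tuning against real face-tracking output.

```python
# Hypothetical rule for flagging a confused student from facial measurements.
# Eyebrow heights and brow gap are assumed to be normalised (0..1) values
# produced by some upstream face-tracking step.

def looks_confused(left_brow_h, right_brow_h, brow_gap,
                   asymmetry_thresh=0.15, furrow_thresh=0.25):
    raised_one_eyebrow = abs(left_brow_h - right_brow_h) > asymmetry_thresh
    frowning = brow_gap < furrow_thresh          # brows pulled together
    return raised_one_eyebrow or frowning

def maybe_alert_lecturer(student_id, frame_features):
    """Autonomous check: no human asks for this, it runs on every frame."""
    if looks_confused(**frame_features):
        print(f"Alert: student {student_id} may be confused, notify lecturer.")

maybe_alert_lecturer("S01", {"left_brow_h": 0.62, "right_brow_h": 0.40, "brow_gap": 0.30})
```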
1.5 Proactiveness
With this quality, an agent performs goal-directed behaviour by taking the initiative to
satisfy its design objectives. The agent recognises opportunities in the environment
and pursues its goals rather than being driven solely by events. This can be
illustrated by a wearable computer that not only monitors a patient's health data and
identifies early warning signs, but also pays attention to objects or situations that
raise the wearer's stress or adrenaline level, so that it can propose better strategies
to boost the immune system when needed, practising preventive medicine.
CHAPTER 2: AC SYSTEMS
2.1 Current State of AC System
In today’s era of technology, affective computing systems are widely used in
several fields such as healthcare, security, and education. Affective computing works
extremely well in healthcare; there are several successful applications and systems
on the market, especially agents that assist people with disabilities, such as
individuals with autism or Asperger syndrome. Moreover, veterans with
post-traumatic stress disorder (PTSD) have difficulty carrying out daily activities.
StartleMart provides a virtual reality treatment scenario with real-time stress
detection to treat patients with PTSD. In the education field, Subtle Stone is a
wireless squeezable ball that displays seven different colours; students can
customise the stone so that specific colours represent their own emotional
language. By using the tool, this feedback reaches the teacher.

There are three types of unimodal features for an affective system: visual
modality, audio modality, and text modality. Visual modality refers to human facial
expressions, which allow the machine to understand emotions and sentiments. The
data are captured through the facial action coding system, facial expression
recognition techniques, extraction of temporal features from videos, and body
gestures. Data in the audio modality are gathered through audio feature extraction
using deep networks, speaker-independent applications, and local and global
features. Text modality can be analysed through linguistic patterns, contextual
subjectivity, single- or cross-domain analysis, and bag-of-words modelling.
2.2 The Future of AC System
The main purpose of affective computing is to integrate human emotions with
agents to produce suitable outcomes. Hence, future wearable technologies might be
able to detect more subtle indicators that current tools and sensors would miss.
Such powerful sensors could produce more accurate results and a better experience.
In the gaming industry, affective computing can improve the gaming experience by
introducing a whole new kind of virtual reality kit that allows the player not only to
see seemingly real 3D images and environments, but also to feel physical sensations.
For instance, if a player gets hit by a ball in a simulation game, he will experience
the contact and feel the pain.

Affective computing systems can also be utilised in e-commerce. E-commerce
has become popular and widely used by people from all walks of life, and businesses
can use affective agents to detect a consumer's emotions while the consumer
communicates with customer service chatbots. The chatbot may then alter its
responses based on the customer's behaviour. Additionally, this technology can be
applied in advertising to understand what catches a viewer's attention through
eye-gaze tracking.
2.3 Requirements
2.3.1 Advanced Technology of Wearable Devices
Sensors are important in affective computing, as the data and inputs generated from
bodily responses are gathered through these tools. Wearable sensors have become
mainstream; detecting stress levels, recognising the wearer's speech and tone, and
monitoring heart rate and body movement are now common. Therefore, more
advanced technology that can capture eye movements and pupillary responses that
normal equipment would miss could further improve affective computing.
2.3.2 Dataset Collection
Natural videos and scripted video recordings of actors are the two main methodologies
for dataset collection. For recordings of actors, subjects are asked to act according
to pre-decided scripts. Such acted datasets may be inaccurate, causing the system to
train on corrupted samples or misleading information. Therefore, natural videos
captured in real time play a large part in training an agent and increasing the
accuracy and reliability of the system.
2.3.3 Ethical Considerations
The ethical issues of affective computing could be harmful if not handled carefully.
Should affective computing be applied to machines with destructive capabilities,
such as fighter jets? Will robots with emotional reasoning act appropriately in
sensitive situations? For instance, if a robot is trained to capture criminals, can it
carry out its job without being affected or showing empathy toward the criminal?
Therefore, human safety is key when deciding where to apply this technology
without sacrificing human lives.
2.4 Research Paper Sources
i. Chapter 9 - Affective Computing: Historical Foundations, Current Applications,
and Future Trends
ii. A review of affective computing: From unimodal analysis to multimodal fusion
iii. Emotion recognition and affective computing on vocal social media
iv. Affective Computing and the Impact of Gender and Age
v. A survey on mobile affective computing
CHAPTER 3: APPLICATION OF AGENTS TO AC SYSTEMS
3.1 Education - Facial Recognition to Measure Understanding
In a classroom, it is difficult to understand each pupil's learning state. Are they
confused, excited, or bored? An affective computing agent may be more useful than a
human in this situation because it can use intelligent technology to detect hidden
responses. One issue in the classroom is that students may struggle with lessons yet
be too shy to raise their hands or seek help from the teacher, causing them to be
neglected and unable to cope with their studies. Applying an affective computing
system in the classroom to capture and analyse students' facial expressions means
teachers can easily understand each student's situation and condition, as well as
provide additional support or further explanation to a confused pupil. Furthermore,
teachers can vary their teaching methods according to students' capabilities and
learning styles.
3.1.1 Agent Architecture
A deliberative architecture is implemented in the classroom AC system. Decisions in
this model are made through logical reasoning, based on pattern matching and
symbolic manipulation. The agent reasons about a student's learning state and
suggests new teaching methods to teachers to improve future effectiveness.
3.1.2 Learning Method
The learning method used in this AC system is supervised learning: labelled inputs
and outputs are given, and the agent learns the rules that map one to the other in
order to provide the best solution. For example, the system can suggest that
teachers change their teaching method according to the excitement or boredom
students showed toward the initial method. On top of that, the agent is able to
detect confused students and notify teachers to give those particular students extra
attention.
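As a rough illustration of this supervised set-up, the sketch below trains a nearest-neighbour classifier on a handful of hand-labelled facial-feature vectors and uses it to flag a possibly confused student. The features (brow furrow, gaze-on-screen ratio, smile intensity), their values, and the choice of scikit-learn are assumptions made for the example only.

```python
# Supervised learning sketch: labelled facial-feature vectors -> learning state.
# Features here are hypothetical (brow furrow, gaze-on-screen ratio, smile).
from sklearn.neighbors import KNeighborsClassifier

X_train = [
    [0.8, 0.3, 0.1],   # furrowed brow, low gaze, no smile
    [0.7, 0.4, 0.0],
    [0.1, 0.9, 0.6],   # relaxed brow, high gaze, smiling
    [0.2, 0.8, 0.7],
    [0.1, 0.2, 0.0],   # relaxed but not looking at the screen
    [0.2, 0.1, 0.1],
]
y_train = ["confused", "confused", "excited", "excited", "bored", "bored"]

model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

new_student = [[0.75, 0.35, 0.05]]
state = model.predict(new_student)[0]
if state == "confused":
    print("Notify teacher: this student may need extra attention.")
```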
3.1.3 Inference Techniques
The forward chaining inference technique is used in the classroom affective computing
system. As shown in the diagram below, the learning materials, the facial expression
database, student behaviour, and the emotion classification form the knowledge base.
The agent detects the student's learning state using the facial expression database
and the student's behaviour, and determines the student's emotion from that
behaviour and the emotion classification. If a student appears confused and
unfocused, the system notifies the teacher of the student's learning and emotional
state, along with the learning material that caused the confusion. As a result,
teachers can attend to the student, prevent neglect, and take supportive action.
Figure 1: Inference technique in classroom AC system
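The notification logic of Figure 1 can be approximated with a toy forward-chaining pass: rules fire whenever their conditions are present among the known facts, and newly derived facts can trigger further rules. The fact and rule names below are illustrative only.

```python
# Toy forward chaining: start from observed facts and fire rules until
# no new facts can be derived. Fact names are illustrative.
rules = [
    ({"frowning", "not_focused"}, "state_confused"),
    ({"state_confused", "topic_recursion"}, "notify_teacher_about_recursion"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                       # keep applying rules until a fixed point
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"frowning", "not_focused", "topic_recursion"}, rules)
print(derived)   # includes 'notify_teacher_about_recursion'
```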
3.1.4 Design of Knowledge Base
i. Learning materials
Teaching materials provided by teachers for use in teaching and learning
situations
ii. Student data
It includes information about the students and their facial expressions, emotions,
and behaviour
iii. Facial expression database
The database contains different basic emotional expressions from actors
iv. Emotion classification
It determines emotions such as anger, disgust, fear, happiness, sadness, and
surprise
v. Emotion detecting rules
A set of rules to measure a student's learning state, such as confusion,
excitement, and boredom
vi. Supportive solution rules
A set of rules to determine the types of approaches teachers can use to assist
students, such as teaching methods and supportive actions to be taken according
to events
3.2 Healthcare - Pain Levels Detection
Researchers from MIT have developed an affective computing system that
determines a patient's pain level by analysing brain activity from a portable
neuroimaging device. The system could effectively assist doctors in diagnosing and
treating pain in unconscious and noncommunicative patients, such as children,
patients with dementia, or those undergoing surgery under anaesthesia. A few
functional near-infrared spectroscopy (fNIRS) sensors are placed on the patient's
forehead to detect activity in the prefrontal cortex. The device uses the measured
brain signals and a personalised machine-learning technique developed by the
researchers to determine whether the patient is experiencing pain. The system could
give surgeons real-time data about the pain level of an unconscious patient, so they
can adjust medication dosages to stop the pain signals or prevent long-term chronic
pain.
3.2.1 Agent Architecture
The agent architecture used in this AC system is a reactive architecture, as each
behaviour continually maps perceptual input to action output. This is a
behaviour-based agent: it takes sensory information and uses it to select actions. In
this case, the agent obtains the patient's brain signals through sensors (input) and
determines the appropriate medication dosage to lower the unconscious patient's
pain level (output). It is a simple form of intelligence, since the outputs are simple
functions of the inputs. For instance, if the patient's pain level increases, then the
medication dosage is increased.
Figure 2: Reactive agent
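A minimal sketch of this percept-to-action mapping is shown below; the stream of pain readings and the dosage rule are invented for illustration and are not MIT's actual control logic.

```python
# Reactive sketch: the behaviour continuously maps the latest percept
# (estimated pain level) to an action, with no internal planning.
# The stream of readings and the rule itself are invented for illustration.
def react(previous, current):
    if current > previous:
        return "increase medication dosage"
    if current < previous:
        return "reduce medication dosage"
    return "hold dosage steady"

readings = [3, 3, 5, 8, 6]          # simulated pain-level estimates over time
for prev, curr in zip(readings, readings[1:]):
    print(f"pain {prev} -> {curr}: {react(prev, curr)}")
```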
3.2.2 Learning Method
An unsupervised learning approach is implemented in the MIT pain level detection
system; it looks for previously undetected patterns in a dataset with no pre-existing
labels. A personalised pain model was developed by MIT, trained and tested on 43
male participants using the collected pain-processing dataset. Pain intensity is
measured on a scale of 1 to 10, where a low pain level is about 3/10 and a high pain
level about 7/10. The model learns to classify "pain" or "no pain" based on the
average responses of the entire patient population, across younger and older
patients. As MIT collects more data from participants of different ages and genders,
the accuracy of the pain-level estimate increases.
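As a loose illustration of the unsupervised idea, the sketch below clusters synthetic two-dimensional fNIRS-like features into two groups and treats the higher-activation cluster as "pain". The feature values, the use of k-means, and scikit-learn are assumptions; the real personalised model is considerably more sophisticated.

```python
# Unsupervised sketch: cluster hypothetical fNIRS features (oxy/deoxy-Hb
# changes) into two groups and label the higher-activation cluster "pain".
# Values are synthetic; the real MIT model is personalised and far richer.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
no_pain = rng.normal([0.2, 0.1], 0.05, size=(20, 2))
pain    = rng.normal([0.8, 0.6], 0.05, size=(20, 2))
X = np.vstack([no_pain, pain])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
# Treat the cluster with the larger mean activation as the "pain" cluster.
pain_cluster = int(km.cluster_centers_.sum(axis=1).argmax())

new_reading = np.array([[0.75, 0.55]])
label = "pain" if km.predict(new_reading)[0] == pain_cluster else "no pain"
print(label)
```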
3.2.3 Inference Techniques
Figure 3: Inference technique in MIT pain level detection
The inference engine used in this healthcare affective computing system is forward
chaining. The system detects the patient's brain signals through sensors and analyses
them with the pain-intensity model trained and tested by the researchers. The result
allows surgeons to adjust and apply the correct anaesthesia and medication dosages.
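The forward chain of Figure 3 can be sketched as two rule steps, from a raw signal to a pain intensity and then to the information surfaced to the surgeon; the signal value and the thresholds below are invented.

```python
# Forward-chaining sketch following Figure 3: raw signal -> pain intensity
# -> therapy suggestion. Signal values and thresholds are invented.
def classify_intensity(signal_strength):
    # Rule 1: sensor reading -> pain intensity on the 1-10 scale.
    return min(10, max(1, round(signal_strength * 10)))

def therapy_rule(intensity):
    # Rule 2: pain intensity -> information surfaced to the surgeon.
    if intensity >= 7:
        return "high pain: review anaesthesia and medication dosage"
    if intensity >= 4:
        return "moderate pain: consider a small dosage adjustment"
    return "low pain: no change suggested"

signal = 0.72                      # hypothetical normalised fNIRS reading
intensity = classify_intensity(signal)
print(intensity, "->", therapy_rule(intensity))
```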
3.2.4 Design of Knowledge Base
i. Brain signal
It contains patients' oxygenated and deoxygenated haemoglobin levels collected
through the sensors.
ii. Pain intensity rules
Pain intensity is measured on a scale of 1 to 10
iii. Therapy rules
A set of rules that provide surgeons with information to adjust anaesthesia and
medication dosages according to the patient's pain level
3.3 Video Games - Detecting Friendliness of the Players
Integrating affective computing with games takes gaming to a higher level and
provides players with a better experience. The player's emotional state is one of the
crucial parts of any game; the game can adjust its difficulty according to the player's
behaviour. Inputs are collected through sensors such as a microphone, a strap worn
around the player's chest, and a device that measures skin conductivity. To give an
example, Blizzard Entertainment developed a new feature for a merchant NPC
(non-player character) in World of Warcraft that gives players an option to purchase
an item at a discounted price. Players click on the option and are given 10 seconds
to talk to the NPC; the amount of discount depends on how friendly the player's
voice is. The system receives voice input through the microphone, breaks it into
small, equal pieces, and interprets the waves in each of them. There are rules for
determining the player's emotion: players tend to have higher arousal if they are
excited, and low arousal when they are sad or depressed.
Figure 4: World of Warcraft, Innkeeper Interaction
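A rough sketch of "breaking the voice into small equal pieces" is given below: the samples are split into fixed-size frames, and per-frame energies are averaged into a crude arousal score. The synthetic waveforms and the frame size are assumptions, and a real system would use far richer acoustic features.

```python
# Rough sketch of splitting a voice clip into equal frames and scoring
# arousal from per-frame energy. The waveforms and frame size are synthetic.
import math

def frame_energies(samples, frame_size=160):
    """Split samples into equal frames and return the RMS energy of each."""
    energies = []
    for start in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[start:start + frame_size]
        energies.append(math.sqrt(sum(s * s for s in frame) / frame_size))
    return energies

def arousal_score(samples):
    energies = frame_energies(samples)
    return sum(energies) / len(energies) if energies else 0.0

# Synthetic "voice": a louder clip should score as higher arousal.
quiet = [0.05 * math.sin(2 * math.pi * 220 * t / 8000) for t in range(8000)]
loud  = [0.60 * math.sin(2 * math.pi * 220 * t / 8000) for t in range(8000)]
print(f"quiet arousal {arousal_score(quiet):.3f}, loud arousal {arousal_score(loud):.3f}")
```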
3.3.1 Agent Architecture
World of Warcraft uses a reactive architecture in this affective computing system.
There is no complex reasoning or planning, only inputs and outputs. It works by
receiving voice input from the player; the input is separated into pieces, matched
against the data available in the database, and a result is produced. The process
starts at the merchant NPC: the player chooses the option to buy an item at a
discounted price, is given 10 seconds to speak to the merchant, and the agent
determines whether the player is friendly or unfriendly based on the speech. A
friendly player gets the discount, while an unfriendly player pays the full price of
the item.
3.3.2 Learning Method
Blizzard uses a supervised learning method in its AC system. Data are obtained from
actors representing standard emotions and stored in the database. This acted
database is based on the Basic Emotions theory, which assumes the existence of six
basic emotions. Blizzard collects voices, listens to them, and tells the agent how the
human felt. The agent then learns to recognise these patterns much as humans do.
Every time the game detects voice input from the player, it interprets and analyses
the speech against the available data and returns a result to the player.
Figure 5: Process of emotion recognition using speech
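The acted-database idea can be illustrated with a small supervised sketch: voice clips reduced to hypothetical features (mean pitch, energy, speech rate) are labelled with basic emotions and used to classify a new utterance. The feature values and the choice of a scikit-learn support vector classifier are assumptions made for this example.

```python
# Supervised sketch: acted-voice clips reduced to hypothetical features
# (mean pitch in Hz, energy, speech rate), labelled with basic emotions,
# then used to classify a new player's utterance. All numbers are invented.
from sklearn.svm import SVC

acted_features = [
    [220.0, 0.70, 5.5],   # joy: bright pitch, energetic, fast
    [210.0, 0.65, 5.2],
    [120.0, 0.20, 2.5],   # sadness: low pitch, quiet, slow
    [130.0, 0.25, 2.8],
    [180.0, 0.90, 6.0],   # anger: loud and fast
    [175.0, 0.85, 6.2],
]
acted_labels = ["joy", "joy", "sadness", "sadness", "anger", "anger"]

model = SVC(kernel="linear").fit(acted_features, acted_labels)

player_utterance = [[215.0, 0.68, 5.4]]
print(model.predict(player_utterance)[0])   # expected to resemble "joy"
```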
3.3.3 Inference Techniques
The technique used in this affective computing system is forward chaining. The
knowledge bases are the player's voice input (tone and speech rate) and the voice
database containing the actors' voices. The player's friendliness is measured by
comparing the player's tone and speech rate against the voices in the acted
database. A friendliness percentage is computed and, by combining all the results
obtained, the agent determines the discounted price offered to the player.
Figure 6: Inference technique used in World of Warcraft
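A minimal sketch of the rule chain in Figure 6 is shown below; the weighting of tone against speech rate and the discount bands are invented for illustration.

```python
# Forward-chaining sketch following Figure 6: tone and speech-rate scores
# are combined into a friendliness percentage, which then selects a
# discount. All weights and discount bands are invented.
def friendliness(tone_score, rate_score):
    return round(100 * (0.6 * tone_score + 0.4 * rate_score))

def discount_rule(percent):
    if percent >= 80:
        return 0.20          # very friendly: 20% off
    if percent >= 50:
        return 0.10
    return 0.0               # unfriendly: full price

score = friendliness(tone_score=0.9, rate_score=0.7)
print(f"friendliness {score}% -> discount {discount_rule(score):.0%}")
```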
3.3.4 Design of Knowledge Base
i. Voice database
The data in the database is collected from actors and based on the Basic
Emotions theory, which assumes the existence of six basic emotions (anger, fear,
disgust, surprise, joy, sadness)
ii. Player's data
It stores the player's information and voice input
iii. Friendliness rules
A set of rules to determine the friendliness of a player through speech
iv. Discount rules
A set of rules to determine the amount of discount given to players depending on
their friendliness
CHAPTER 4: CASE STUDY
4.1 Automotive Industry - In-Car Facial Recognition
An affective computing system can be used in cars to detect potential safety threats
in time. Face recognition can track the driver's face, gaze, and emotions, so the
agent can sense the driver's inattentiveness and tiredness and help prevent road
rage and other safety issues. For example, Nissan has a facial monitoring system in
which a camera is installed in the vehicle facing the driver. The aim of the system is
to supervise the driver's state of consciousness through eye blinking. Alert messages
are triggered to warn the driver when the system detects signs of drowsiness.
Moreover, if the driver has been driving for a considerable length of time and seems
tired or distracted, the system can suggest that the driver stop the vehicle, or
propose a route to the nearest rest area. On top of that, the AC system may collect
relevant driver data, which allows safety issues to be predicted. For instance, if a
specific route tends to trigger anger in a specific driver, the system can recommend
an alternative route.
Figure 7: Nissan facial monitoring system
4.2 Agent Architecture
A hybrid architecture is applied in the in-car facial recognition technology. This
architecture allows the agent to develop plans and make logical decisions, while also
reacting to events without complex reasoning so that results do not arrive too late.
Planning a route takes time, as the agent reasons about the fastest route given the
environment (for example, traffic jams) and the driver's preferences. The process
analyses map data, GPS data, and driver data, and reasons over them to reach the
best solution. Meanwhile, the agent can react to a situation or emergency without
complex reasoning or re-planning the whole process, because a late result is useless
in a dynamic environment. For instance, if the system detects that the driver has
dozed off while driving, it alerts the driver with a loud noise and tightens the
seatbelt.
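The division of labour between the two layers can be sketched as follows: a reactive layer handles emergencies immediately, and only when it stays silent does the slower deliberative layer choose a route. The percept fields and candidate routes are hypothetical.

```python
# Hybrid-architecture sketch: a reactive layer handles emergencies
# immediately; otherwise the deliberative layer plans a route.
# Percept fields and routes are invented for illustration.
def reactive_layer(percept):
    if percept.get("driver_dozing"):
        return "sound loud alert and tighten seatbelt"
    return None                                  # nothing urgent

def deliberative_layer(percept):
    # Stand-in for route planning over map, GPS and driver data.
    candidates = {"highway": 35, "scenic": 50}   # minutes, hypothetical
    if percept.get("driver_dislikes_highway"):
        candidates.pop("highway", None)
    return "take the " + min(candidates, key=candidates.get) + " route"

def hybrid_agent(percept):
    return reactive_layer(percept) or deliberative_layer(percept)

print(hybrid_agent({"driver_dozing": True}))
print(hybrid_agent({"driver_dozing": False, "driver_dislikes_highway": True}))
```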
4.3 Learning Method
Reinforcement learning is used by the agent to find the fastest and most effective
route. This approach does not require input-output pairs to be presented; it balances
exploration of untried options with exploitation of current knowledge. For instance,
the system works out the best route that suits the driver's preferences. Apart from
that, another learning method implemented is supervised learning: inputs and desired
outputs are provided, and the algorithm improves the accuracy of its predictions over
time. As an example, the agent determines what kind of warning message to trigger
according to the driver's state of consciousness. If the driver shows tiredness after a
long ride, the agent suggests the nearest rest area.
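A heavily simplified stand-in for the reinforcement-learning idea is sketched below: the agent repeatedly tries routes, receives a reward based on travel time, and an epsilon-greedy rule gradually favours the best one. The routes, travel times, and learning parameters are all invented.

```python
# Heavily simplified reinforcement-learning sketch: the agent tries routes,
# receives a reward (negative travel time), and gradually prefers the best
# one via an epsilon-greedy rule. Travel times are invented.
import random

random.seed(0)
true_minutes = {"highway": 35, "ring_road": 42, "scenic": 50}
value = {route: 0.0 for route in true_minutes}     # estimated reward per route
counts = {route: 0 for route in true_minutes}

for episode in range(200):
    if random.random() < 0.1:                      # explore occasionally
        route = random.choice(list(true_minutes))
    else:                                          # otherwise exploit best estimate
        route = max(value, key=value.get)
    reward = -random.gauss(true_minutes[route], 3) # shorter trip = higher reward
    counts[route] += 1
    value[route] += (reward - value[route]) / counts[route]   # running average

print(max(value, key=value.get))                   # expected: "highway"
```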
4.4 Inference Techniques
Both forward and backward chaining inference engines are used in this AC system,
depending on the feature.
Figure 8: Inference technique used to determine fastest and shortest route
To find the fastest and shortest route, the agent weighs the map data against the
driver's preferences. Both are further informed by the driver's emotions, for
example whether a particular route has triggered anger in the past. The driver's
behaviour and facial expressions, the facial expression database, and the emotion
classification are integrated to determine the driver's emotional state.
Figure 9: Inference technique to monitor state of consciousness
Another feature is monitoring the driver's state of consciousness. The data used to
determine the driver's consciousness are the driver's facial expressions and example
expressions from the acted database, for instance the driver's eye-blinking state or a
distracted gaze. If the agent detects that the driver appears tired or drowsy, the
system prompts an alert message.
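The two inference directions can be sketched over the same small rule base: forward chaining derives everything that follows from the observed driver data, while backward chaining checks whether a specific conclusion, such as prompting an alert, is supported. The rule and fact names are illustrative only.

```python
# Sketch of both inference directions used in this system. Rules and fact
# names are illustrative only.
RULES = {
    "driver_drowsy": {"eyes_closing_often", "long_driving_time"},
    "prompt_alert":  {"driver_drowsy"},
}

def forward_chain(facts):
    """Data-driven: derive everything that follows from observed facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in RULES.items():
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

def backward_chain(goal, facts):
    """Goal-driven: is there support for the goal, given the facts?"""
    if goal in facts:
        return True
    body = RULES.get(goal)
    return body is not None and all(backward_chain(g, facts) for g in body)

observed = {"eyes_closing_often", "long_driving_time"}
print("prompt_alert" in forward_chain(observed))       # True
print(backward_chain("prompt_alert", observed))        # True
```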
4.5 Design of Knowledge Base
i. Map data
Map data generally consists of information about buildings and streets
ii. Driver data
It includes the driver's facial expressions, emotions, and behaviour
iii. Facial expression database
The database contains different basic emotional expressions from actors
iv. Emotion classification
It determines emotions such as anger, disgust, fear, happiness, sadness, and
surprise
v. Alert messages
Warning information used to notify the driver to take cautionary action
vi. Emotion detecting rules
A set of rules to classify the driver's state of consciousness or emotion
vii. Emotion managing rules
It comprises rules to interpret the driver's situation and provide the best solution
viii. Route planning rules
It is used to find the fastest or shortest travelling route between two or more
given locations
4.6 Conclusion
I chose the automotive industry (in-car facial recognition) as my case study because I
feel this system has the potential to save millions of lives. Every day, people die in
car crashes caused by factors such as drunk driving and dozing off at the wheel. With
this affective computing system, drivers can be alerted and warned in time to prevent
such mishaps.
References
Anon., 2020. Affective Computing: In-Depth Guide to Emotion AI [2020]. [Online] Available at: https://research.aimultiple.com/affective-computing/ [Accessed 27 May 2020].

Anon., n.d. Affective computing. [Online] Available at: https://en.wikipedia.org/wiki/Affective_computing [Accessed 27 May 2020].

Anon., n.d. Drunk-driving Prevention Concept Car. [Online] Available at: https://www.nissan-global.com/EN/TECHNOLOGY/OVERVIEW/dpcc.html [Accessed 27 May 2020].

Anon., n.d. Forward Chaining and backward chaining in AI. [Online] Available at: https://www.javatpoint.com/forward-chaining-and-backward-chaining-in-ai [Accessed 27 May 2020].

Eugenia Politou, Efthimios Alepis, Constantinos Patsakis, 2017. A survey on mobile affective computing. [Online] Available at: https://www.sciencedirect.com/science/article/pii/S1574013717300382#sec4.4 [Accessed 27 May 2020].

Mark J Grover, Ray Lopez, n.d. Introduction to Data Ingestion. [Online] Available at: https://www.coursera.org/lecture/ibm-ai-workflow-business-priorities-data-ingestion/introduction-to-data-ingestion-xkWoX [Accessed 27 May 2020].

Marr, B., 2016. What is Affective Computing And How Could Emotional Machines Change Our Lives?. [Online] Available at: https://www.forbes.com/sites/bernardmarr/2016/05/13/what-is-affective-computing-and-how-could-emotional-machines-change-our-lives/#351cae9de580 [Accessed 27 May 2020].

Mashal, S., n.d. Affective Computing in Gaming Industry: From Machine Learning to Game User Interface. [Online] Available at: https://www.audeering.com/2018/11/22/affective-computing-in-the-game-industry-from-machine-learning-to-game-user-interface/ [Accessed 27 May 2020].

Matheson, R., 2019. Detecting patients’ pain levels via their brain signals. [Online] Available at: http://news.mit.edu/2019/detecting-pain-levels-brain-signals-0912 [Accessed 27 May 2020].

O'Farrell, J. T., 2016. Three Types of Learning That Artificial Intelligence (AI) Does. [Online] Available at: http://www.allthingsinteractive.com/new-blog/2016/12/3/three-types-of-learning-that-artificial-intelligence-ai-does [Accessed 27 May 2020].

Shaundra B. Daily, Melva T. James, David Cherry, John J. Porter III, Shelby S. Darnell, Joseph Isaac, Tania Roy, 2017. Chapter 9 - Affective Computing: Historical Foundations, Current Applications, and Future Trends. [Online] Available at: https://www.sciencedirect.com/science/article/pii/B9780128018514000094#s0055 [Accessed 27 May 2020].

Soujanya Poria, Erik Cambria, Rajiv Bajpai, Amir Hussain, 2017. A review of affective computing: From unimodal analysis to multimodal fusion. [Online] Available at: https://www.sciencedirect.com/science/article/pii/S1566253517300738 [Accessed 27 May 2020].

Spreeuwenberg, R., 2017. Does Emotive Computing Belong in the Classroom?. [Online] Available at: https://www.edsurge.com/news/2017-01-04-does-emotive-computing-belong-in-the-classroom? [Accessed 27 May 2020].

Stefanie Rukavina, Sascha Gruss, Holger Hoffmann, Jun-Wen Tan, Steffen Walter, Harald C. Traue, 2016. Affective Computing and the Impact of Gender and Age. [Online] Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4777379/ [Accessed 27 May 2020].

Weihui Dai, Dongmei Han, Yonghui Dai, Dongrong Xu, 2015. Emotion recognition and affective computing on vocal social media. [Online] Available at: https://www.sciencedirect.com/science/article/pii/S037872061500018X#sec0010 [Accessed 27 May 2020].

Zezelj, V., 2019. Face recognition in cars improves safety and convenience. [Online] Available at: https://visagetechnologies.com/face-recognition-in-cars/ [Accessed 27 May 2020].