Agent Architecture: An Overview
Introduction
The advancement of Internet technology has increased the need for distributed, concurrent, heterogeneous and dynamic application systems. Agent technology is a new paradigm suitable for developing such systems, as it is designed to situate and operate in dynamic and heterogeneous environments. What exactly is an agent? To date, there is no widely accepted definition of what an agent is. In this study, an agent is an autonomous software entity that is situated in some environment, where it can monitor and respond to changes proactively or reactively, by itself or through communication with other agents, to persistently achieve certain goals or tasks on behalf of users or other agents (Wooldridge, 2009). An agent possesses certain distinct characteristics (Wooldridge & Jennings, 1995):
* Autonomous: the ability to operate without direct human intervention and to control its own internal state.
* Social: the ability to interact with humans and other agents.
* Reactive: the ability to perceive changes in the environment and respond to them in a timely fashion.
* Proactive: the ability to exhibit goal-directed behavior.
Other characteristics that an agent might have include mobility, benevolence, trustworthiness, rationality, and learning capability. Mobility is the ability to travel between different hosts in a computer network. Benevolence means that the agent will always perform what it is asked to do. Trustworthiness means that the agent will not deliberately communicate false information. Rationality means that the agent will never act so as to prevent its own goals from being achieved. Learning capability is the ability to adapt to the environment and to the desires of its users. In a complex system, an agent may not exist alone: multiple agents may be situated in the same environment. Multi-agent systems is the study of systems made up of multiple heterogeneous software entities (agents) that interact with each other (Weiss, 1999; Shoham & Leyton-Brown, 2008). In a multi-agent system, agents may have common or conflicting goals (Yu et al., 2010; Durfee & Rosenschein, 1994). The interaction between agents can happen directly or indirectly: direct communication is achieved through channels such as message passing, whilst indirect communication is achieved by affecting the environment in ways that are sensed by other agents (Genesereth & Ketchpel, 1994; Maes, 1997). Normally, agents that share a common goal in a multi-agent system will cooperate to achieve it (Doran et al., 1997; Pozna et al., 2011), whereas agents with conflicting goals will compete against each other for the resources needed to attain their individual goals (Leyton-Brown, 2003). In order to cooperate and coordinate in achieving their goals, agents need to reason about when and what to do under given circumstances.
The foundation of an agent's reasoning mechanism lies in a component called the agent architecture. An agent architecture is the blueprint for building an agent, much as a class is the blueprint for objects in object-oriented programming. Wooldridge referred to agent architecture as a software architecture intended to support the decision-making process (Wooldridge, 2001). Maes described it as an architecture encompassing the techniques and algorithms that support decomposing an agent into a set of components and defining how these components interact (Maes, 1991). The agent architecture is the brain of the agent: it determines how knowledge and information are represented in the agent, and it determines the actions the agent should take based on its underlying reasoning and interpretation mechanism. Thus, different architectures use different representation approaches in their reasoning mechanisms to solve a variety of problems. These architectures can be broadly categorized into three groups: the classical architectures, the cognitive architectures and the semantic agent architectures. The classical architectures include the logic-based architecture, the reactive architecture, the BDI architecture, and the hybrid (layered) architecture. The logic-based architecture uses symbolic representation for reasoning. The reactive architecture is a direct stimulus-response architecture. The BDI architecture, on the other hand, is a deliberative architecture based on mental states such as belief, desire, and intention. The layered architecture is a hybrid of the reactive and deliberative architectures. The cognitive architecture is based on the cognitive sciences, and the semantic agent architecture utilizes semantic technology.
The remainder of this paper describes the various agent architectures that can be used to build agents and multi-agent systems. Section 2 discusses the logic-based architecture. The reactive and BDI architectures are discussed in Sections 3 and 4 respectively. Section 5 describes the layered architectures, whilst Sections 6 and 7 present an overview of the cognitive and semantic agent architectures. Finally, we conclude our discussion of agent architectures in Section 8.
Logic-Based Architecture
The logic-based architecture, also known as the symbolic or deliberative architecture, is one of the earliest agent architectures and rests on the physical-symbol systems hypothesis (Newell & Simon, 1976). This classical architecture follows the traditional symbolic AI approach of representing and modeling the environment and the agent's behavior symbolically; the agent's behavior is then based on the manipulation of these symbolic representations.
The agent's role in this classical architecture may also be considered as that of a theorem prover (Shardlow, 1990). The syntactical manipulation of the symbolic representation is a process of logical deduction or theorem proving. Viewed as theorem proving, the agent specification outlines how the agent behaves, how its goals are generated and what actions the agent can take to satisfy these goals. An example of a logic-based architecture formalism is as follows:
* Assume that the environment is described by sentences in a logical language L, and that the knowledge base containing all the information regarding the environment satisfies KB = P(L), where P(L) is the set of all sets of sentences of L.
* At each moment in time t, the agent's internal state is represented by a knowledge base KB_t, where KB_t ∈ KB.
* The possible environment states are represented by S = {s_1, s_2, ...}.
* The agent's reasoning mechanism is modeled by a set of deduction rules ρ, the rules of inference.
* The agent's perception is a function see: S -> P.
* The agent's internal state is updated by a function next: KB x P -> KB.
* The agent chooses an action from a set A = {a_1, a_2, ...} via a function action: KB -> A, which is defined in terms of the deduction rules. The outcome of an agent's action is given by the function do: A x S -> S.
* The decision-making process is modeled through the rules of inference ρ: if do(a) can be derived for some a ∈ A, then a is returned as the action to perform; if no such action can be derived, a special null action is returned.
The vacuum-cleaning example in (Russell & Norvig, 1995) illustrates the idea of the logic-based architecture based on the specification above. The programmer has to encode the inference rules ρ in a way that enables the agent to decide what to do. Examples of this kind of classical architecture include classical planning agents such as STRIPS (Fikes & Nilsson, 1971), IPEM (Ambros-Ingerson & Steel, 1988), Autodrive (Wood, 1993), Softbots (Etzioni et al., 1994), the Phoenix system (Cohen et al., 1989), IRMA (Bratman et al., 1988), HOMER (Vere & Bickmore, 1990), and GRATE (Jennings, 1993). The BDI architecture is also considered a subset of the logic-based architecture; however, due to its popularity and wide adoption, it is discussed in detail in Section 4.
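The see/next/action cycle of the formalism above can be made concrete with a small sketch. The following Python fragment is an illustrative assumption, not the formalism itself: derivability is reduced to set membership, and the predicate strings and rules are invented for a one-square vacuum world in the spirit of the Russell and Norvig example.

```python
# A minimal sketch of the logic-based agent loop: perceive, update the
# knowledge base, then select an action by "deduction" (here, naively,
# checking that all premises of a rule are present in the KB).

def see(state):
    """Perception function see: S -> P (here the percept is the state itself)."""
    return state

def next_kb(kb, percept):
    """Belief update next: KB x P -> KB (naive: add percept sentences as facts)."""
    return kb | set(percept)

def action(kb, rules):
    """Deduction-based action selection: return the first action whose
    premises can all be derived from KB, else the special null action."""
    for premises, act in rules:
        if premises <= kb:           # all premises "provable" (present) in KB
            return act
    return None                      # null action

# Inference rules: if there is dirt at the agent's location, suck; else move.
rules = [
    ({"In(0)", "Dirt(0)"}, "suck"),
    ({"In(0)"}, "right"),
]

kb = next_kb(set(), see({"In(0)", "Dirt(0)"}))
print(action(kb, rules))  # -> suck
```

The key point of the sketch is that action selection is a deduction over the agent's internal state, not a direct reaction to the raw percept.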
Although the simplicity and elegance of the logical semantics of the logic-based architecture are attractive, there are several problems associated with this approach. First, the transduction problem: it is difficult to translate and model the environment's information into an accurate symbolic representation for computation, especially in a complex environment. Second, it is also difficult to represent information in a symbolic form that is suitable for agents to reason with under time constraints. Finally, the transformation of perceptual input may not be accurate enough to describe the environment itself, due to faults such as sensor errors and reasoning errors. Since the deduction process is based on a set of inference rules, it is very difficult, and sometimes impossible, to pin down rules for every situation the agent will encounter in a complex environment. The assumption of calculative rationality, namely that the world does not change in a significant way while the agent is deliberating, is not realistic. Assume that at time t0 the agent begins reasoning about an optimal action for that particular time. The result of this reasoning may only be available at time t1, by which point the environment may have changed so much that the optimal action for time t0 is no longer optimal at time t1. Thus, due to the computational complexity of theorem proving, this approach is not appropriate for time-constrained domains.
Building an agent in the logic-based approach is thus viewed as a deduction process: the agent is encoded as a logical theory via its specification, and selecting an action reduces to a deduction problem, as in theorem proving. Improved versions of the logic-based approach have been presented in (Amir & Maynard-Reid, 2004; Amir & Maynard-Reid, 2000). In (Amir & Maynard-Reid, 2004), a logic-based AI architecture is implemented on top of Brooks' subsumption architecture. In this implementation, the different layers of control are axiomatized in First-Order Logic (FOL), and independent theorem provers are used to derive each layer's output given its input. This architecture demonstrated the versatility of theorem provers, which allows them to realize complex tasks while keeping the individual theories simple (Amir & Maynard-Reid, 2000).
Reactive Architecture
The reactive agent architecture is based on a direct mapping of situations to actions. It differs from the logic-based architecture in that no central symbolic world model and no complex symbolic reasoning are used; the agent responds to changes in the environment in a stimulus-response manner. The reactive architecture is realized through a set of sensors and effectors, where perceptual input is mapped to effector actions that change the environment. Brooks' subsumption architecture is known as the first pure reactive architecture (Brooks, 1986). It was developed by Brooks, who critiqued many of the drawbacks of the logic-based architecture. Figure 1 illustrates an example of a reactive architecture: each perceived situation is mapped to an action that specifically responds to it.
Figure 1: Reactive Architecture
The key idea of the subsumption architecture is that intelligent behaviour can be generated without the explicit representations and abstract reasoning of symbolic AI techniques (Brooks, 1991a; Brooks, 1991b); intelligence is an emergent property of certain complex systems. The subsumption architecture is implemented as finite state machines organized in layers, connected to sensors that perceive changes in the environment and map them to the actions to be performed (Brooks, 1986). A set of task-accomplishing behaviours is used in the decision-making process; each behaviour can be thought of as an individual function mapping changes in the environment to an action. Another characteristic of the subsumption architecture is that multiple behaviours can fire simultaneously. The architecture's hierarchical structure represents the different behaviours: the lowest layer in the hierarchy has the highest priority, and higher layers represent more abstract behaviour than lower ones. Complex behaviour is achieved through the combination of these behaviours. Figure 2 shows action selection in this layered structure: the lower the layer, the higher its priority, with lower layers holding primitive behaviours and higher layers more abstract ones.
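The priority scheme just described can be sketched as follows. This is an assumption-level illustration in Python, not Brooks' finite-state-machine implementation; the behaviour names are loosely inspired by the rock-collecting mission discussed below, and the dictionary-based percept is invented for the example.

```python
# Subsumption-style action selection: behaviours are ordered from the
# lowest layer (highest priority) upward, and the first behaviour whose
# condition fires suppresses all layers above it.

def select_action(percept, behaviours):
    for condition, act in behaviours:      # lowest layer checked first
        if condition(percept):
            return act
    return "wander"                        # default behaviour if nothing fires

behaviours = [
    (lambda p: p.get("obstacle"), "avoid"),    # layer 0: highest priority
    (lambda p: p.get("sample"),   "pick_up"),  # layer 1
    (lambda p: p.get("at_base"),  "drop"),     # layer 2: most abstract
]

print(select_action({"obstacle": True, "sample": True}, behaviours))  # -> avoid
```

Note how the obstacle-avoidance layer wins even though the sample-collecting layer's condition also holds; that suppression is the essence of the hierarchy.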
Figure 2: Action Selection in the Layered Architecture
The subsumption architecture was implemented in Steels' study (Steels, 1990) of a mission to a distant planet to collect samples of rocks and minerals. Near-optimal performance can be obtained through simple adjustments, and the solution is cheap in computing power as well as robust. Chapman and Agre developed an approach similar to Brooks' work (Chapman & Agre, 1986), referred to as the new abstract reasoning. This approach was used in the celebrated PENGI system, which plays a computer game by controlling a central character that can accomplish routine work with little variation (Agre & Chapman, 1987). PENGI is a program written to play an arcade game called Pengo, set in a 2-D maze of unit-sized ice blocks; PENGI is programmed to move the penguin in the game so as to avoid bee attacks and slide blocks in order to survive. Figure 3 shows a Pengo game in progress (Agre & Chapman, 1987).
Figure 3: Pengo Game in Progress
Another similar approach is the situated automata paradigm of Rosenschein and Kaelbling (Kaelbling, 1991; Kaelbling & Rosenschein, 1990; Rosenschein, 1985; Rosenschein & Kaelbling, 1986). In this approach, the agent is specified in declarative terms such as beliefs and goals, which are then compiled into a digital machine that satisfies the declarative specification. The digital machine can operate in a provably time-bounded fashion. Although the specification is declarative, in terms of beliefs and goals, the digital machine does not use any symbolic representation, and hence no symbolic reasoning actually occurs. The approach is therefore also considered a reactive architecture: the declarative terms have been compiled into a digital machine that reacts on a stimulus-response basis.
Another reactive architecture, introduced by Maes, is the Agent Network Architecture. In this architecture, the agent is built as a set of competence modules (Maes, 1991; Maes, 1989; Maes, 1990). These modules loosely resemble the behaviours of Brooks' subsumption architecture. Pre- and post-conditions determine the activation of each module based on an activation level, a real value indicating the relevance of the module to the situation at hand. The agent network architecture is somewhat similar to a neural network architecture, though the nodes carry a different meaning.
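The activation-level mechanism can be sketched roughly as follows. The module names, preconditions and fixed activation values here are illustrative assumptions; in the real architecture, activation is spread dynamically through the network of modules rather than being fixed per module.

```python
# A toy competence-module selector: among the modules whose preconditions
# hold in the current situation, pick the one with the highest real-valued
# activation level.

def choose_module(situation, modules):
    executable = [m for m in modules if m["pre"] <= situation]
    if not executable:
        return None
    return max(executable, key=lambda m: m["activation"])["name"]

modules = [
    {"name": "grasp",  "pre": {"near_object"}, "activation": 0.7},
    {"name": "search", "pre": set(),           "activation": 0.2},
]

print(choose_module({"near_object"}, modules))  # -> grasp
```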
One of the advantages of the reactive architecture is that it is less complicated to design and implement than the logic-based architecture, and an agent's behaviour is computationally tractable. Its robustness against failure is another advantage, and complex behaviours can be achieved from the interaction of simple ones. The disadvantages of the reactive architecture include: (1) since no model of the environment is maintained, the agent may have insufficient information about its current state to determine an appropriate action; (2) because only local information is processed, long-term planning and the bigger picture are out of reach, and learning is therefore difficult to achieve; and (3) the emergent behavior is not yet fully understood, which makes such systems intricate to engineer. It is therefore difficult to build task-specific reactive agents, and one solution is to evolve agents to perform certain tasks (Togelius, 2003); work in this domain is referred to as artificial life.
Belief-Desire-Intention (BDI) Architecture
The BDI architecture is based on practical reasoning and on Bratman's philosophical account of intention (Bratman, 1987). Practical reasoning is reasoning directed toward actions, the process of figuring out what to do. It differs from theoretical reasoning, which derives knowledge or reaches conclusions from one's beliefs and knowledge. Human practical reasoning involves two activities, deliberation and means-end reasoning: deliberation decides what states of affairs need to be achieved, while means-end reasoning decides how to achieve them. In the BDI architecture, an agent consists of three logical components, referred to as mental states or mental attitudes: beliefs, desires and intentions. Beliefs are the set of information the agent has about the world. Desires are the agent's motivations, the options it may pursue. Intentions are the agent's commitments toward its desires and beliefs. Intentions are the key component in practical reasoning: they describe the states of affairs the agent has committed to bringing about, and as a result they are action-inducing, so forming the right intentions is critical to an agent's success. The BDI architecture is probably the most popular agent architecture (Rao & Georgeff, 1991), and the Procedural Reasoning System (PRS) is one of the best-known BDI architectures (Georgeff & Lansky, 1986). PRS is a framework for building real-time reasoning systems that can perform complex tasks in dynamic environments. It uses a procedural knowledge representation describing how to perform a set of actions in order to achieve a goal. The architecture is based on four key data structures, beliefs, desires, intentions and plans, together with an interpreter (see Figure 4).
In the PRS, beliefs represent the information the agent has about its environment. Desires represent the tasks allocated to the agent, corresponding to the goals it should accomplish. Intentions represent the agent's commitments toward those goals. Finally, plans specify courses of action the agent can follow to achieve its intentions; the plans in the plan library are pre-compiled rather than generated on the fly. The interpreter is responsible for updating beliefs from observations of the environment, generating new desires (tasks) on the basis of the new beliefs, and selecting a subset of the currently active desires to act as intentions. Lastly, the interpreter must select an action to perform based on the agent's current intentions and procedural knowledge.
Figure 4: The Procedural Reasoning System (PRS)
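One pass of the interpreter cycle described above can be sketched as follows. The helper names, the plan format, and the naive deliberation step (simply taking the alphabetically first desire) are illustrative assumptions, not the actual PRS implementation.

```python
# One BDI interpreter step: revise beliefs from an observation, generate
# desires from the plans whose context holds, deliberate to pick an
# intention, then retrieve the pre-compiled plan body for that intention.

def bdi_step(beliefs, plan_library, observe):
    beliefs = beliefs | observe()                       # belief revision
    desires = {goal for goal, plan in plan_library
               if plan["context"] <= beliefs}           # option generation
    intention = min(desires) if desires else None       # deliberation (naive)
    for goal, plan in plan_library:                     # means-end reasoning
        if goal == intention:
            return beliefs, intention, plan["body"]
    return beliefs, None, []

plan_library = [
    ("clean_room", {"context": {"dirty"}, "body": ["get_vacuum", "vacuum"]}),
]

beliefs, intention, actions = bdi_step(set(), plan_library, lambda: {"dirty"})
print(intention, actions)  # -> clean_room ['get_vacuum', 'vacuum']
```

A real PRS interpreter would run this cycle continuously and could suspend or drop intentions as beliefs change; the sketch only shows the data flow between the four structures.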
Many agent frameworks have been implemented on the BDI architecture, including:
* JAM, a hybrid intelligent agent architecture that draws upon the theories and ideas of PRS, Structured Circuit Semantics (SCS), and the Act Plan Interlingua (Huber, 1999);
* JACK, a commercial platform for developing industrial and research multi-agent applications in Java (Howden et al., 2001); and
* dMARS, a platform for intelligent software agents based on the belief-desire-intention (BDI) software model, for building complex, distributed, time-critical systems in C++ (d'Inverno et al., 1998).
The advantages of the BDI architecture are that its design is clear and intuitive, the functional decomposition of the agent's subsystems is explicit, and the BDI logic has formal properties that can be studied. However, the question of how to efficiently implement this functionality in the subsystems is not clear, and agents need to strike a balance between commitment (Rao & Georgeff, 1991) and reconsideration (Wooldridge & Parsons, 1998). An agent that never stops to reconsider may keep pursuing an intention that is unachievable or no longer valid, while an agent that reconsiders too often risks never achieving its intentions for lack of time spent working on them.
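The commitment-versus-reconsideration trade-off can be illustrated with a toy control loop for a "cautious" agent that stops to reconsider after every action. The goals and the one-step achievement model are illustrative assumptions, not part of any particular BDI system.

```python
# A cautious BDI-style control loop: after each action the agent re-filters
# its options, so achieved or invalid intentions are dropped immediately.
# A "bold" agent would instead keep its intention across many actions.

def cautious_loop(beliefs, options):
    trace = []
    while True:
        remaining = [o for o in options if o not in beliefs]
        if not remaining:                 # nothing left to intend
            break
        intention = remaining[0]          # deliberation: commit to an option
        trace.append(f"work_on:{intention}")
        beliefs = beliefs | {intention}   # assume one step achieves the goal
        # Reconsideration happens on every pass, when `remaining` is rebuilt.
    return trace

print(cautious_loop(set(), ["make_tea", "write_report"]))
# -> ['work_on:make_tea', 'work_on:write_report']
```

In a dynamic world, each reconsideration costs deliberation time, which is exactly the balance Rao and Georgeff, and Wooldridge and Parsons, analyse.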
Layered (Hybrid) Architecture
The layered (hybrid) architecture is an agent architecture that allows both reactive and deliberative agent behavior. It combines the advantages of the reactive and logic-based architectures while alleviating the problems of both. The agent's subsystems are decomposed into a hierarchy of layers that deal with different behaviours. There are two types of interaction flow between the layers, horizontal and vertical. In the horizontal layered architecture, each layer is directly connected to the sensory input and action output (see Figure 5); each layer acts like an agent in its own right, mapping input to the action to be performed.
The TouringMachine is an example of a horizontally layered agent architecture. It consists of three activity-producing layers: a reactive layer R, a planning layer P, and a modeling layer M (Ferguson, 1992). These three layers operate concurrently and independently in mapping perception to action (see Figure 6), and each layer has its own internal computation mechanism.
Figure 5: Horizontal Layered Architecture
Figure 6: The TouringMachine Agent Control Architecture
The advantage of the horizontal layered architecture is that only n layers are required to produce n different types of behaviour. However, a mediator function is needed to control inconsistent actions arising from the interaction of the layers. A further complexity is the large number of possible interactions between horizontal layers, m^n, where m is the number of actions per layer and n the number of layers.
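A horizontally layered design with a mediator can be sketched as follows. The layer names, the percept format and the fixed-priority mediation rule are illustrative assumptions; real mediators (such as the TouringMachine's control framework) use context-dependent rules rather than a static ordering.

```python
# Horizontal layering: every layer sees the full percept and may propose an
# action; a mediator resolves conflicting proposals, here by layer priority.

def horizontal_control(percept, layers, priority):
    proposals = {name: layer(percept) for name, layer in layers.items()}
    proposals = {n: a for n, a in proposals.items() if a is not None}
    winner = min(proposals, key=priority.index)   # mediator: highest priority
    return proposals[winner]

layers = {
    "reactive": lambda p: "swerve" if p.get("obstacle") else None,
    "planning": lambda p: "follow_route",
}
priority = ["reactive", "planning"]

print(horizontal_control({"obstacle": True}, layers, priority))  # -> swerve
```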
The vertical layered architecture eliminates some of these issues, as the sensory input and the action output are each dealt with by at most one layer (creating no inconsistent action suggestions). There are two types of vertical layered architectures, namely one-pass and two-pass control. In the one-pass architecture, control flows from the initial layer that gets data from the sensors to the final layer that generates the action output (see Figure 7). In the two-pass architecture, data flows up the sequence of layers and control then flows back down (see Figure 8).
Figure 7: Vertical layered architecture: one-pass
Figure 8: Vertical layered architecture: two-pass
InteRRaP is an example of a vertically layered two-pass agent architecture (Muller & Pischel, 1993). It comprises three control layers: the behaviour layer, the local planning layer and the cooperative planning layer (see Figure 9). The behaviour layer is the lowest layer in InteRRaP and responds reactively to the world model. The local planning layer is the middle layer, which uses planning knowledge for routine planning toward the agent's goals. Finally, the cooperative planning layer is the highest layer and deals with social interaction. The main difference between InteRRaP and the TouringMachine lies in the interaction between layers. There are two directions of control flow in InteRRaP: bottom-up activation, in which a lower layer passes control to the layer above when it cannot handle the current situation, and top-down execution, in which a higher layer uses the action-execution capabilities of the lower layers to achieve its goals and tasks. Each layer implements two general functions: the situation-recognition and goal-activation function, which maps the knowledge base and current goals to a new set of goals, and the planning and scheduling function, which selects which plans to execute based on the current plans, goals and knowledge base of that layer.
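The two-pass flow can be sketched as follows: a percept travels up the layers until some layer is competent for the situation, and control then flows back down as a concrete action. The competence conditions and action names are illustrative assumptions, not InteRRaP's actual situation-recognition rules.

```python
# InteRRaP-style two-pass control: bottom-up activation finds the lowest
# layer competent for the situation; top-down execution lets that layer
# commit an action through the layers below it.

def two_pass(percept, layers):
    for name, (competent, execute) in layers:   # bottom-up activation
        if competent(percept):
            return name, execute(percept)       # top-down execution
    return None, "no_op"

layers = [
    ("behaviour",   (lambda p: p.get("routine"),   lambda p: "react")),
    ("local_plan",  (lambda p: p.get("plannable"), lambda p: "plan")),
    ("cooperative", (lambda p: True,               lambda p: "negotiate")),
]

print(two_pass({"plannable": True}, layers))  # -> ('local_plan', 'plan')
```

A routine situation never rises above the behaviour layer, while a situation no single agent can plan for escalates all the way to the cooperative layer.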
Figure 9: The InteRRaP Architecture
The main advantage of the vertically layered architecture is that the number of interactions between layers is reduced significantly, to m^2(n-1). Its main disadvantage is reduced robustness: control must pass through every layer, so if one layer fails, the entire system fails.
Cognitive Architecture
As stated in (Langley, 2004), there are several ways of constructing intelligent agents: the software engineering approach, the multi-agent approach and the cognitive architecture approach. The cognitive architecture approach is based on the stream of cognitive science, which focuses on human cognition and psychology. The origin of cognitive architectures lies in a specific class of architectures known as production systems (Neches et al., 1987; Newell, 1973), which evolved over time. ACT is one of the earliest cognitive architectures built for modeling human behavior (Anderson, 1976). The construction of intelligent agents using cognitive architectures focuses on the cognition part, the simulation and modeling of human behaviour. The cognitive architecture approach is said to differ from the multi-agent approach in the following ways (Langley et al., 2009):
* a cognitive architecture comes with a programming formalism for encoding knowledge, along with an associated interpreter;
* it makes strong assumptions about the representation of knowledge and the processes that operate on it;
* it assumes a modular representation of knowledge;
* it offers intelligent behavior at the systems level, rather than at the level of component methods designed for specialized tasks; and
* it provides a unified approach in which a common set of representations and mechanisms reduces the need for careful task-specific crafting.
Cognitive architectures are used to construct intelligent systems and agents that model human performance (Newell, 1990; Meyer & Kieras, 1997). The underlying cognitive architecture should typically possess (Langley et al., 2009):
* short- and long-term memories for storing the agent's beliefs, goals, and knowledge;
* representations of the memory contents and their organization; and
* functional processes that operate on these structures.
Memory and learning are the two key components in developing a cognitive architecture, and their combination yields a taxonomy of three main categories of cognitive agent architecture: symbolic, emergent, and hybrid models (Duch et al., 2008) (see Figure 10). Symbolic architectures follow the classical top-down, analytical AI approach, focusing on information processing using high-level symbols or declarative knowledge. Examples of symbolic cognitive architectures are SOAR (Laird et al., 1987; Newell, 1990), EPIC (Meyer & Kieras, 1997) and ICARUS (Langley, 2005). SOAR is a platform created to demonstrate general intelligent behaviour. It is based on a symbolic representation that applies operators to problem spaces via production rules, an approach referred to as procedural long-term knowledge. The platform has been used to develop many applications, including expert systems, intelligent control, and simulations of human-behaviour interaction. SOAR can also be viewed as a theory of general intelligence or as a theory of human cognition: the architecture rests on human problem solving and a unified theory of cognition, the idea being that unifying different or overlapping theories without conflict can produce generally intelligent behaviour given an appropriate learning mechanism.
The emergent architectures adopt a bottom-up approach that differentiates them from the symbolic architectures: low-level activation signals flow through a network of numerous processing units. IBCA (O'Reilly et al., 1999), Cortronics (Hecht-Nielsen, 2007) and NuPIC (Hawkins & Blakeslee, 2004) are a few of the systems in the emergent category. IBCA is based on working memory as controlled processing: it models how components of the brain dynamically interact with each other to bring about cognitive function, and the emergent property of these dynamic interactions constitutes the working model of IBCA. IBCA has several important components, each with its own specific function: the posterior perceptual and motor cortex (PMC) is responsible for sensory and motor processing based on inference and generalization, the prefrontal cortex (PFC) is responsible for dynamic and active memory, and the hippocampus (HCMP) is in charge of the rapid learning of arbitrary information.
The hybrid architectures combine the symbolic and emergent paradigms. Examples in this category are ACT-R (Anderson & Lebiere, 2003), CLARION (Sun & Alexandre, 1997; Sun et al., 2001) and LIDA (Franklin, 2006). ACT-R is one of the earliest cognitive architectures developed for modeling human behaviour. It is organized into a set of modules, namely a sensory module, a motor module, an intentional module and a declarative module. Each module has a temporary store that holds short-term memory, and long-term memory is achieved through the combination of these modules. Action selection is based on a utility calculation, in which the highest-utility actions are selected. Figure 10 shows the taxonomy of cognitive architectures based on the two key design properties, memory and learning.
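The utility-based selection can be sketched as follows. The formula U = P*G - C (expected probability of success times goal value, minus cost) follows the classic ACT-R account of production utility; the production names and the numeric values here are illustrative assumptions.

```python
# ACT-R-style conflict resolution: each candidate production has an
# expected utility U = P*G - C, and the highest-utility production fires.

def select_production(productions, goal_value):
    utilities = {name: p_success * goal_value - cost
                 for name, (p_success, cost) in productions.items()}
    return max(utilities, key=utilities.get)

productions = {
    "retrieve_fact": (0.9, 1.0),   # (probability of success, cost)
    "compute":       (0.99, 5.0),
}

print(select_production(productions, goal_value=20))  # -> retrieve_fact
```

With a goal value of 20, retrieval wins (0.9*20 - 1 = 17) over the slower but more reliable computation (0.99*20 - 5 = 14.8), mirroring how humans prefer memory retrieval for well-practised facts.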
* Symbolic architectures. Memory: rule-based memory, graph-based memory. Learning: inductive learning, analytical learning.
* Emergent architectures. Memory: globalist memory. Learning: associative learning, competitive learning.
* Hybrid architectures. Memory: localist-distributed memory, symbolic-connectionist memory. Learning: bottom-up learning, top-down learning.
Figure 10: Taxonomy of Cognitive Architectures (Duch et al., 2008)
Conclusion
Agent architecture is the key component to constructing agent. It acts as the brain and heart of the
agent to reason and perform action based on its knowledge base. This paper provides the state of the
art of agent architecture. It outlined four main agent architectures: logic, reactive, BDI and layered
architecture. Apart from these four, the cognitive architecture which is based on the human behaviour
is another architecture that is widely adopted in building multi-agent system. Many of the foundations
of cognitive architecture rest on theories in the cognitive sciences domain. The adoption of semantic web
technology to enhance cognitive architectures in analyzing human behaviour will be a very
interesting area to explore. Finally, the adoption of semantic technology into agent architectures is also
another interesting topic that should be studied. For future work, we would like to draw on
these preliminary reviews to study the potential of a new agent architecture that uses semantic web
technology to enable agents to process information in a more meaningful way, which will in turn
improve decision making so that it more closely mimics the human reasoning process.
Abstract
Artificial intelligence, and in particular machine
learning, is a fast-emerging field. Research on
artificial intelligence focuses mainly on image-,
text- and voice-based applications, leading to
breakthrough developments in self-driving cars,
voice recognition algorithms and
recommendation systems. In this article, we
present research on an alternative graph-
based machine learning system that deals with
three-dimensional space, which is more
structured and combinatorial than images, text
or voice. Specifically, we present a function-
driven deep learning approach to generate
conceptual design. We trained and used deep
neural networks to evaluate existing designs
encoded as graphs, extract significant building
blocks as subgraphs and merge them into new
compositions. Finally, we explored the
application of generative adversarial networks to
generate entirely new and unique designs.
Figure 1. Abstract representation of deep neural networks (DNNs). Design data are fed into the input layer, processed in the hidden layers (there can be any number of hidden layers), and a result is produced in the output layer; the outputs are used for evaluating designs and discovering building blocks.
DNNs are classified as supervised or
unsupervised. Supervised DNNs are trained with
labelled data, that is, data that contain
information on type, attributes or scores. In
contrast, unsupervised BNNs are fed with raw
data without any added information about the
‘Scanned with CamScannercharacteristics of the data. In our research, we
used both systems; for one study, we used
labelled design data of various nouses
(represented as graphs with node and edge
attributes) with functional scores and hence
utilized a supervised DINN. In another study, we
used labelled design data, but without the
functional scores, which can be considered an
unsupervised DNN.
The training set for our DNN consisted of graph
representations of architectural designs, since
this mode of representation can capture
structured relationships between system
components much better than continuous media
formats, for example, images. Graphs are
mathematical structures comprising nodes
(denoting system components) and edges
(denoting relationships between nodes), and
they lend themselves to mathematical and
algorithmic analysis at the level of system
components. Graph theory has been used as a
powerful representational and analytical tool in
diverse fields such as social network analysis,
transportation network analysis, biological
network modelling and Internet engineering. In
our study, node and edge attributes capture
interesting physical characteristics of
architectural design components, for example,
area and volume. In contrast, graph attributes
characterize the overall design functions, for
example, liveability and sleepability. These are
exposed to graph-based DNNs, which attempt to
find patterns occurring in architectural design
samples. Such an analysis is hard for pixel-
based images, which do not carry any explicit
semantics of architectural designs.
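The graph encoding described above can be sketched with plain Python structures. The room names, attribute values and the liveability score below are hypothetical illustration values, not data from the study.

```python
# Minimal sketch of a design graph: nodes are architectural components
# carrying physical attributes (area, volume), edges denote adjacency
# between components, and a graph-level attribute carries an overall
# functional score.  All names and values are hypothetical.

nodes = {
    "living_room": {"area": 25.0, "volume": 75.0},
    "kitchen": {"area": 12.0, "volume": 36.0},
    "bedroom": {"area": 16.0, "volume": 48.0},
}
edges = [("living_room", "kitchen"), ("living_room", "bedroom")]

graph_attrs = {"liveability": 0.82}  # hypothetical graph-level functional score

def total_area(nodes):
    """Aggregate a node attribute over the whole design."""
    return sum(attrs["area"] for attrs in nodes.values())

print(total_area(nodes))  # → 53.0
```

Such node, edge and graph attributes are exactly the features a graph-based DNN can consume, whereas a pixel-based image of the same floor plan carries no explicit component semantics.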
2. The Contemporary Concrete Mix
Design and Machine Learning
Techniques
2.1. Concrete Mix Design in European
Corporate Practice
The primary goal of concrete mix design is
to estimate the proper quantitative
composition and proportion of concrete
mixture components. We should use a
composition which allows us to achieve
the best possible concrete performance.
Concrete performance is characterised by
several features, from which the most
significant are compressive strength and
durability. Both concrete strength and
durability should play an essential role in
the concrete mix design. The issue of
durability is essential in the case of an
aggressive environment
[17,18,19,20,21,22]. Based on industry
experience, we found that, in
European corporate engineering practice,
there are a few most used methods for
designing a concrete mix. These methods
are the Bukowski method, the Eyman and
Klaus method, and the Paszkowski
method. All the solutions mentioned
above are derived from the so-called
“Three Equations Method”, or Bolomey
Method, which is a mixed experimental-
analytical procedure [10,23,24]. It means
that collected laboratory data should
confirm that mathematical approach. We
calculate a volume of required
components by analytical measures and
validate the results by destructive
laboratory testing. In this method, we use
a fundamental equation of strength,
consistency, and tightness to determine
the three searched values, as follows, the
amount of aggregate, cement, and water,
expressed in kilograms per cubic meter.
The first equation is the compressive
strength equation or Bolomey formula
(Equation (1)), which expresses the
experimentally determined dependence of
the compressive strength of hardened
concrete on the grade of cement used, the
type of aggregate used, and the water-
cement ratio characterising the cement
paste [23,24]. In this method, the concrete
grade is assumed as input data.
f_cm = A_i · (C/W ± 0.5) [MPa], (1)
where f_cm is the mean compressive strength of concrete, expressed in MPa; A_i (i = 1, 2) are coefficients depending on the grade of cement and the type of aggregate; C is the amount of cement in 1 m³ of concrete, expressed in kilograms; and W corresponds to the amount of water in 1 m³ of concrete, expressed in kilograms. The second equation, the consistency equation (Equation (2)), is the water demand formula necessary to make a concrete mix with the required consistency.
W = C·w_c + K·w_k [dm³], (2)
where W is the amount of water in 1 m³ of concrete, expressed in kilograms; C corresponds to the amount of cement in 1 m³ of concrete, expressed in kilograms; K means the amount of aggregate in 1 m³ of concrete, expressed in kilograms; w_c is the cement water demand index in dm³ per kilogram; and w_k is the aggregate water demand index in dm³ per kilogram. The water-tightness equation of concrete (Equation (3)) is the simple volume formula, which indicates that a watertight concrete mix is obtained if the sum of the volumes of the individual components is equal to the volume of the concrete mix.
C/ρ_c + K/ρ_k + W = 1000 [dm³], (3)
where W is the amount of water in 1 m³ of concrete, expressed in kilograms; C corresponds to the amount of cement in 1 m³ of concrete, expressed in kilograms; K means the amount of aggregate in 1 m³ of concrete, expressed in kilograms; ρ_c is the cement density in kilograms per dm³; and ρ_k is the aggregate density in kilograms per dm³.
The system of equations presented above, with three unknown variables, allows for calculating the sought amounts of cement (C), aggregate (K), and water (W) in one cubic meter of concrete mix. The system is valid assuming that there are no air bubbles in the concrete. Another method used in the construction industry is “the double coating method” [25]. The methods above are the ones used to determine the quantitative composition of the concrete mix. However, the actual process of creating a concrete mix is much broader, including the following steps:
The first step is to determine the data
needed to design the mix, such as the
purpose of the concrete use, the
compressive strength of the concrete, and
the consistency of the concrete mix. Next,
the qualitative characteristics of the
components should be determined,
namely the type and class of cement and
the type and granularity of the aggregates.
Subsequent steps include an examination
of the properties of the adopted
ingredients; a check of their compliance
with the standard requirements;
determining the characteristics of the
components that will be needed to
determine the composition of the concrete
mix; and a projection of the aggregate pile.
The successive step is the actual adoption
of the design method and a calculation per
unit of volume. The final stage is to make a
trial sample and examine both the
concrete mix and the hardened concrete
with design assumptions [26].
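The Three Equations Method above can be sketched numerically by substitution: Equation (1) fixes the C/W ratio, Equation (2) expresses K in terms of W, and Equation (3) then yields W. The sketch below assumes the C/W < 2.5 branch of the Bolomey formula (the −0.5 term); all numeric inputs are hypothetical illustration values, not design recommendations.

```python
# Sketch of the "Three Equations Method": solve for cement (C),
# aggregate (K) and water (W) per cubic metre of concrete mix.
# All numeric inputs are hypothetical, not design values.

def three_equations(f_cm, A, w_c, w_k, rho_c, rho_k):
    """Solve Bolomey (1), consistency (2) and tightness (3) by substitution.

    f_cm  : target mean compressive strength [MPa]
    A     : Bolomey coefficient A_i (here the C/W < 2.5 branch, -0.5 term)
    w_c   : cement water-demand index [dm^3/kg]
    w_k   : aggregate water-demand index [dm^3/kg]
    rho_c : cement density [kg/dm^3]
    rho_k : aggregate density [kg/dm^3]
    """
    r = f_cm / A + 0.5  # C/W ratio from Eq. (1): f_cm = A * (C/W - 0.5)
    # Eq. (2): W = C*w_c + K*w_k  ->  K = W * (1 - r*w_c) / w_k
    # Eq. (3): C/rho_c + K/rho_k + W = 1000; substitute C and K, solve for W.
    W = 1000.0 / (r / rho_c + (1 - r * w_c) / (w_k * rho_k) + 1)
    C = r * W
    K = W * (1 - r * w_c) / w_k
    return C, K, W

# Hypothetical inputs: 25 MPa target strength, A = 18, illustrative indices.
C, K, W = three_equations(f_cm=25, A=18, w_c=0.28, w_k=0.02, rho_c=3.1, rho_k=2.65)
# Check Eq. (3): component volumes must sum to 1000 dm^3.
print(round(C / 3.1 + K / 2.65 + W, 6))  # → 1000.0
```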
2 KQML
The Knowledge Query and Manipulation Language (KQML) is a language and an associated protocol to support high-level communication among intelligent agents. It can be used as a language for an application program to interact with an intelligent system, or for two or more intelligent systems to interact cooperatively in problem solving. We argue that KQML should be defined as more than a language with a syntax and semantics; it must also include a protocol which governs the use of the language (e.g., a pragmatic component).
2.1 Design Issues and Assumptions
Architectural assumptions. Agents will typically be separate processes which may be running in the same address space or on separate machines. The machines will be accessible via the internet. We need a protocol that is simple and efficient to use to connect a few pre-defined agents on a single machine or on several machines on the same local area network. We also need the protocol to be an appropriate one to scale up to a scenario in which we have a large number (i.e., hundreds or even thousands) of communicating agents scattered across the global internet and which are dynamically coming on and off line.
Communication Modes. KQML will support several modes of communication among agents along several independent dimensions. Along one dimension, it supports interactions which differ in the number of agents involved, from a single agent to a single agent (i.e., point-to-point), as well as messages from one agent to a set of agents (i.e., multicasting). Along another dimension, it permits one to specify the recipient agents either explicitly (e.g., by internet address and port number), by symbolic address (e.g., to “the TRANSCOMMapServer”), or even by a declarative description form of broadcast (e.g., “to any KIF-speaking agents interested in airport locations”). A final dimension involves synchronicity: it must support synchronous (blocking) as well as asynchronous (non-blocking) communication.
Syntactic assumptions. Messages in the content, message and communication layers will be represented as Lisp s-expressions. They will be transmitted between processes in the form of ASCII streams. The forms at the content layer will depend on the content language being used and may be represented as strings, if necessary. The forms at the message and communication layers will be ASCII representations of lists with a symbol as the first element and whose remaining elements use the Common Lisp keyword argument convention.
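To illustrate these conventions, a KQML message might look like the following s-expression sketch. The performative, agent names, ontology and content shown are hypothetical illustration values, not taken from the text above.

```lisp
;; Hypothetical KQML message: a query from one agent to another.
;; The performative is the first element of the list; the remaining
;; elements follow the Common Lisp keyword-argument convention.
(ask-one
  :sender     mapbrowser
  :receiver   TRANSCOMMapServer
  :reply-with q1
  :language   KIF
  :ontology   geo-ontology
  :content    (geoloc lax (?long ?lat)))
```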
Security. Security is an issue in any distributed environment. We will need to develop conventions and procedures for authentication which will allow an agent to verify that another agent is who it purports to be. Comment: We should take advantage of the Kerberos system for secure authentication being developed as a part of Project Athena. It appears to have all the right hooks and will be widely available on IBM, DEC, SUN and other Unix platforms.
Transaction. Interactions among knowledge-based systems involve a different kind of transaction processing which will require something other than the now-standard two-phase commit. That is because interacting agents may use information and knowledge gained from one information source for longer periods of time than read/write locks support. In
Figure 4: Modern internet communication is governed by a “protocol stack” with distinct, well-defined layers. Communication between intelligent agents should also be governed by a protocol stack with distinct, well-defined layers.
one way, knowledge-based systems are similar to other advanced systems such as software engineering or CAD/CAM design environments (see Computing Surveys, 1991). Further, interactions among knowledge-based systems may better be cast in terms of belief spaces and/or logics of belief than in terms of low-level transactions. The development of a good model to support transactions among intelligent agents is a research topic for the KQML group to consider sometime in the future. Developing a workable solution which is incrementally implementable may prove key to the ultimate success of the KQML effort. Comment: Transactions for interacting knowledge-based systems are different from what is standardly thought of for conventional databases. But what is the relationship to versioned databases and more advanced applications like software engineering and CAD/CAM databases?
Protocol Approach. The Knowledge Query and Manipulation Language (KQML) is a language and a protocol to support high-level communication among intelligent agents. It can be used as a language for an application program to interact with an intelligent system or for two or more intelligent systems to interact cooperatively in problem solving. We argue that KQML should be defined as more than a language with a syntax and semantics, but must also include a protocol which governs the use of the language (e.g., a pragmatic component).
Using a protocol approach is standard in modern communication and distributed pro-
cessing. The first diagram in Figure 4 shows a simplified version of the standard protocol
stack for network communication over an internet. At the top of the stack is the application-
level protocol, in this case SMTP (Simple Mail Transfer Protocol) and at the bottom is the
low-level protocol in which data are actually exchanged. From a mailer's point of view, it is
communicating with another mailer using the SMTP protocol. It need not know any of the
details of the protocols which support its communication.
We are developing a similar approach to support communication among intelligent agents: defining a protocol stack for transferring knowledge across the internet. The second diagram in Figure 4 shows a simple protocol stack we are using for the model of KQML. We assume that the KQML protocol stack is an application protocol layer of the standard OSI model and thus assume reliable communication.
SKTP, a Simple Knowledge Transfer Protocol, supports KQML interactions and is defined as a protocol stack with at least three layers: content, message and communication.
Additional layers will appear below these three to supply reliable communication streams between the processes. The content layer contains an expression in some language which encodes the knowledge to be conveyed. The message layer adds attributes which describe the content layer, such as the language it is expressed in, the ontology it assumes and the kind of speech act it represents (e.g., an assertion or a query). The final communication layer adds still more attributes which describe the lower-level communication parameters, such as the identity of the sender and recipient and whether or not the communication is meant to be synchronous or asynchronous.
Knowledge Interchange Format (KIF)
Knowledge Interchange Format (KIF) is a computer-
oriented language for the interchange of knowledge
among disparate programs. It has declarative
semantics (i.e. the meaning of expressions in the
representation can be understood without appeal to
an interpreter for manipulating those expressions); it
is logically comprehensive (i.e. it provides for the
expression of arbitrary sentences in the first-order
predicate calculus); it provides for the representation
of knowledge about the representation of knowledge;
it provides for the representation of nonmonotonic
reasoning rules; and it provides for the definition of
objects, functions, and relations.
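As a sketch of what KIF content can look like, the following hypothetical sentences show a ground fact and a universally quantified first-order rule; the predicates and constants are illustrative, not drawn from the text above.

```lisp
;; Hypothetical KIF sentences.
(salary john 40000)                      ; a ground fact: a ternary relation
(forall (?x) (=> (apple ?x) (red ?x)))   ; a first-order rule: every apple is red
```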
Artificial Intelligence and Negotiation?
* What is artificial intelligence?
  - Thinking Humanly | Thinking Rationally
  - Acting Humanly | Acting Rationally
* Synergies between both fields:
  - Thinking Humanly | Thinking Rationally
  - Acting Humanly | Acting Rationally
[Slide figure: the four AI quadrants annotated with corresponding negotiation research areas, e.g. negotiation systems and automated negotiation.]
* E-marketplaces: Automated Negotiation
— Thinking Rationally & Acting Rationally
— Goal: Optimality according to the available
information
* Pareto Optimality
* Nash Bargaining Point
— Examples: Ebay, Amazon, etc...
— Approaches:
+ Algorithmic Game Theory
* Bounded Rationality Approaches & Heuristics
* Mechanism Design
* Social Simulation
— Thinking Humanly & Acting Humanly
— Goal: Mimic human behavior to provide
predictions
* Emotions, cultural factors, social identity
theory, etc.
— Examples: Supply chain simulation
— Uses:
* Pilot experiments
* Train real negotiators
* Predict the effect of new environmental
conditions
* Negotiation Support Systems
— All-rounder!
— Goal: Support one/all parties to reach an
efficient agreement
— Examples: AutoMed, Persuader
— Approaches:
* Enforcement/Recommendation
* Best Response Mechanism
* Reasoning about the opponent
3.1. Integrative Bargaining Model
The goal of integrative negotiation (also known as “non-zero-sum-game” or “win-win game”)
is to attain a mutually-beneficial agreement that maximizes settlement efficiency and fairness under
suitable circumstances [5]. Integrative approaches employ objective criteria to create circumstances of
mutual gain and emphasize the significance of exchanging information among the negotiators [26].
Walton and McKersie (1965) described the integrative bargaining model as a negotiation approach
in which negotiators employ problem-solving behavior that refers to a state of desire for finding
a solution to the problem to reach a definite goal. Problem-solving is generally recommended to
achieve an integrative settlement [27,28]. Negotiators attempt to redefine the problem, analyze the
cause of settlement difficulties and explore a wide range of mutually-acceptable alternative solutions
through maximum information sharing and disclosure of each party’s needs and interests [28-30]. The
effectiveness of the problem-solving approach depends upon the presence of some psychological and
information conditions: motivation, information and language, trust and a supportive climate [12].
Motivation describes how parties must have the motivation to solve the problem and thus anticipate
the problem as significant enough to address and discuss [12]. Information and language state that those participating in the problem-solving process must have access to relevant information and be authorized to use it [12]. Meanwhile, the trust and support climate is marked by
encouragement and freedom to behave spontaneously without fear of sanctions [12].
Figure 1. Research model 1.
Sustainability 2019, 11, 6832
[Figure: relationships among the negotiation situation, the negotiation process, the negotiation outcome, the counterpart's socio-psychological outcome and the desire for future negotiation.]
Figure 2. Research model 2.