
CHAPTER-4

LOGICAL AGENT
KNOWLEDGE-BASED AGENT
• The central component of a knowledge-based
agent is its knowledge base.
• A knowledge base is a set of sentences.
• Each sentence is expressed in a language
called a knowledge representation language
and represents some assertion about the
world.
• There should be a way to add new sentences
to KB, and a way to query what is known.
KNOWLEDGE-BASED AGENT
• The standard names for these tasks are TELL
and ASK.
• Both tasks may involve inference – that is,
deriving new sentences from old.
• Inference must obey the fundamental
requirement that when one ASKs a question of
the knowledge base, the answer should
follow from what has previously been TOLD to
the knowledge base.
KNOWLEDGE-BASED AGENT:
BACKGROUND KNOWLEDGE
The agent maintains a knowledge base, KB,
which contains some background knowledge.
Each time the agent program is called, it does
three things:

First - it TELLs the KB what it perceives.
Second - it ASKs the KB what action it should
perform.
Third - the agent records its choice with TELL,
and the action is executed.
KNOWLEDGE-BASED AGENT: BACKGROUND KNOWLEDGE

The details of the representation language are hidden
inside three functions:
1) MAKE-PERCEPT-SENTENCE: constructs a sentence
asserting that the agent perceived the given percept at
the given time.
2) MAKE-ACTION-QUERY: constructs a sentence that
asks what action should be done at the current time.
3) MAKE-ACTION-SENTENCE: constructs a sentence
asserting that the chosen action was executed.
The details of the inference mechanisms are hidden inside
TELL and ASK.
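The TELL-ASK-TELL cycle above can be sketched in Python. This is a minimal sketch: the KB class, its tell/ask methods, and the MAKE-* helpers are hypothetical stand-ins for a real representation language and inference engine.

```python
# Minimal sketch of the generic knowledge-based agent loop.
# KB and the three MAKE-* helpers are hypothetical placeholders.

def make_percept_sentence(percept, t):
    # Sentence asserting the agent perceived `percept` at time t
    return f"Percept({percept}, {t})"

def make_action_query(t):
    # Sentence asking what action should be done at time t
    return f"BestAction?({t})"

def make_action_sentence(action, t):
    # Sentence asserting the chosen action was executed at time t
    return f"Action({action}, {t})"

class KB:
    def __init__(self):
        self.sentences = []
    def tell(self, sentence):
        self.sentences.append(sentence)
    def ask(self, query):
        # A real KB would run inference here; we return a fixed action.
        return "Forward"

def kb_agent(kb, percept, t):
    kb.tell(make_percept_sentence(percept, t))   # 1) TELL what it perceives
    action = kb.ask(make_action_query(t))        # 2) ASK what action to perform
    kb.tell(make_action_sentence(action, t))     # 3) TELL which action was chosen
    return action
```

The inference details stay hidden behind tell and ask, exactly as the slide describes.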
Knowledge-based agent
DECLARATIVE VS PROCEDURAL APPROACH
• A knowledge-based agent can be built simply by
TELLing it what it needs to know. Starting with an
empty knowledge base, the agent designer can TELL
sentences one by one until the agent knows how to
operate in its environment. This is called the
declarative approach to system building.

• The procedural approach encodes desired
behaviors directly as program code; minimizing
the role of explicit representation and reasoning
can result in a much more efficient system.
THE WUMPUS WORLD
• The wumpus world is a cave consisting of rooms
connected by passageways. The wumpus is a beast
that eats anyone who enters its room.
• The wumpus can be shot by an agent, but the
agent has only one arrow. Some rooms contain
bottomless pits that will trap anyone who
wanders into them.
• The GOAL is to find a heap of gold.
WUMPUS WORLD PEAS DESCRIPTION
• Performance measure:
+1000 for picking up the
gold, -1000 for falling into a pit
or being eaten by the
wumpus, -1 for each action
taken, and -10 for using up the
arrow.
• Environment: A 4×4 grid of
rooms.
WUMPUS WORLD PEAS DESCRIPTION
• Sensors: The agent has five sensors each of which gives a
single bit of information.
 Stench- In the square containing the Wumpus and in the directly
(not diagonally) adjacent squares the agent will perceive a stench.
 Breeze- In the square directly adjacent to a pit, the agent will
perceive a breeze.
 Glitter- In the square where the gold is, the agent will perceive a
glitter.
 Bump- When an agent walks into a wall, it will perceive a bump.
 Scream- When the wumpus is killed, it emits a woeful scream that can be
perceived anywhere in the cave.
• Actuators: The agent can move forward, turn left by 90°, or
turn right by 90°.
The agent dies if it enters a square containing a pit or a live
wumpus.
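Because each sensor gives a single bit, a percept can be represented as a five-bit tuple in the order (Stench, Breeze, Glitter, Bump, Scream). The small helper below is purely illustrative:

```python
# A percept is a five-bit tuple: (Stench, Breeze, Glitter, Bump, Scream).
# `describe` is an illustrative helper that names the active bits.

PERCEPT_NAMES = ("Stench", "Breeze", "Glitter", "Bump", "Scream")

def describe(percept):
    return [name for name, bit in zip(PERCEPT_NAMES, percept) if bit]

# In a square adjacent to both the wumpus and a pit, with no gold:
print(describe((1, 1, 0, 0, 0)))   # ['Stench', 'Breeze']
```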
LOGIC
The syntax of the representation language specifies all the
sentences that are well formed.
x + y = 4 is a well-formed sentence
x2y+ = is not a well-formed sentence
A logic must also define the semantics of the language: the
meaning of sentences. In logic the meaning is precise; the
semantics defines the truth of each sentence with respect to each
possible world.
E.g.: x + y = 4 is true when x = 2 and y = 2
x + y = 4 is false when x = 1 and y = 1
LOGIC
• We use the term model in place of possible
world.
• If a sentence α is true in model m, we say that
m satisfies α or sometimes m is a model of α.
We use the notation M(α) to mean the set of
all models of α.
• m is a model of α – the sentence α is true in
model m
ENTAILMENT
• Logical entailment is the relation that holds when a
sentence follows logically from another sentence:
α ╞ β
means the sentence α entails the sentence β.
The formal definition: α ╞ β if and only if, in every
model in which α is true, β is also true.
Equivalently, α ╞ β if and only if M(α) ⊆ M(β).
• Understanding Entailment & Inference- think of
the set of all consequences of KB as a haystack and
of α as a needle.
• Entailment is like the needle being in the
haystack; inference is like finding it.
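The definition M(α) ⊆ M(β) suggests a direct model-checking test for entailment: enumerate every model and verify that β holds wherever α does. A minimal sketch, with sentences represented as hypothetical Python functions over a model dictionary:

```python
from itertools import product

# Model-checking entailment by enumeration: alpha |= beta iff beta
# is true in every model where alpha is true, i.e. M(alpha) ⊆ M(beta).

def entails(alpha, beta, symbols):
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if alpha(model) and not beta(model):
            return False    # found a model of alpha that is not a model of beta
    return True

# Illustrative example sentences: KB = (P or Q) and not P
kb    = lambda m: (m["P"] or m["Q"]) and not m["P"]
alpha = lambda m: m["Q"]
print(entails(kb, alpha, ["P", "Q"]))   # True: the KB entails Q
```

Enumerating all 2^n models is the haystack search made literal; it is sound and complete but exponential in the number of symbols.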
INFERENCE
• If an inference algorithm i can derive α from
KB, we write
• KB ├i α
• which is pronounced “α is derived from KB by i”
or “i derives α from KB”.
• An inference algorithm that derives only
entailed sentences is called sound or truth-
preserving.
• Here the sentence α is derived from KB by a
procedure i (an inference algorithm).
PROPOSITIONAL LOGIC
• Propositional logic is the simplest logic – illustrates basic ideas
• The relation between a sentence and another sentence is called
entailment.
Syntax
• The syntax of propositional logic defines the allowable
sentences.
• The atomic sentences- the indivisible syntactic elements-consist
of a single proposition symbol.
• Each such symbol stands for a proposition that can be true or
false.
• There are two proposition symbols with fixed meanings: True is
the always-true proposition and False is the always-false
proposition.
PROPOSITIONAL LOGIC
• Complex sentences are constructed from simpler sentences
using logical connectives.
• There are five connectives in common use:
• NOT (¬): ¬W1,3 is called a negation.
• AND (∧): W1,3 ∧ P3,1 is called a conjunction; its parts are the
conjuncts.
• OR (∨): A sentence using ∨, such as (W1,3 ∧ P3,1) ∨ W2,2, is called a
disjunction; its parts are the disjuncts.
• Implies (⇒): A sentence such as (W1,3 ∧ P3,1) ⇒ ¬W2,2 is called an
implication. Its premise or antecedent is (W1,3 ∧ P3,1) and its
conclusion or consequent is ¬W2,2. Implications are also
known as rules or if-then statements.
• If and only if (⇔): W1,3 ⇔ ¬W2,2 is a biconditional.
BNF FOR PROPOSITIONAL LOGIC
ORDER OF PRECEDENCE

The order of precedence in


propositional logic is from
highest to lowest is:
⌐,ᴧ,ᴠ, ⇒, ⇔
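The five connectives map directly onto Python expressions; implies and iff below are illustrative helpers defined from their truth tables.

```python
# The five connectives evaluated on one row of the truth table.
# `implies` and `iff` are illustrative helpers; the definitions
# follow the standard truth tables.

def implies(p, q):
    # p => q is false only when p is true and q is false
    return (not p) or q

def iff(p, q):
    # p <=> q is true exactly when both have the same truth value
    return p == q

p, q = True, False
print(not p)          # False  (negation)
print(p and q)        # False  (conjunction)
print(p or q)         # True   (disjunction)
print(implies(p, q))  # False  (implication)
print(iff(p, q))      # False  (biconditional)
```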
SEMANTICS
• Semantics defines the rules for determining
the truth of a sentence with respect to a
particular model. Each model specifies
true/false for each proposition symbol.
• All sentences are constructed from atomic
sentences and the five connectives. The rules
can also be expressed with truth tables.
• A knowledge base consists of a set of
sentences; a logical knowledge base is the
conjunction of those sentences.
• ⇒ (implies), e.g., P ⇒ Q reads “P implies Q” or “if P then Q”.
TRUTH TABLES FOR 5 LOGICAL CONNECTIVES
STANDARD LOGICAL EQUIVALENCES
INFERENCE and PROOFs

• Patterns of inference are called inference rules. Inference rules that


can be applied to derive a proof- A chain of conclusions that leads
to the desired goal.

• The best known rule is called Modus Ponens


(α ⇒ β), α
β
The notation means that whenever any sentences of the form α
⇒ β and α are given then the sentence β will be inferred.

And-Elimination, which says that ,from a conjunction any of the


conjucts can be inferred
α˄β
α
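Both rules can be applied mechanically to sentences stored as small tuples. The representation below, ("=>", a, b) for an implication and ("and", a, b) for a conjunction, is an illustrative assumption:

```python
# Modus Ponens and And-Elimination over a tiny tuple representation:
# ("=>", a, b) is an implication, ("and", a, b) a conjunction,
# and plain strings are proposition symbols.

def modus_ponens(implication, fact):
    # From (a => b) and a, infer b; otherwise nothing.
    op, antecedent, consequent = implication
    if op == "=>" and antecedent == fact:
        return consequent
    return None

def and_elimination(conjunction):
    # From (a and b), infer each conjunct separately.
    op, left, right = conjunction
    if op == "and":
        return [left, right]
    return []

rule = ("=>", "W13", "not_W22")
print(modus_ponens(rule, "W13"))               # not_W22
print(and_elimination(("and", "W13", "P31")))  # ['W13', 'P31']
```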
CONJUNCTIVE NORMAL FORM(CNF)
• A sentence expressed as a conjunction of disjunctions of
literals is said to be in conjunctive normal form or CNF.
Steps of CNF conversion:
1. Eliminate ⇔, replacing (α ⇔ β) with ((α ⇒ β) ∧ (β ⇒ α)).
2. Eliminate ⇒, replacing (α ⇒ β) with (¬α ∨ β).
3. Move ¬ inwards using:
¬(¬α) ≡ α (double-negation elimination)
¬(α ∧ β) ≡ (¬α ∨ ¬β) (De Morgan)
¬(α ∨ β) ≡ (¬α ∧ ¬β) (De Morgan)
4. Now we have a sentence containing nested ∧ and ∨
operators applied to literals. We apply the distributivity law
from Figure 7.11, distributing ∨ over ∧ wherever possible, e.g.:
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1)
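The four steps can be sketched as recursive passes over a tuple-based sentence AST. This is an illustrative sketch under an assumed representation, not a full converter:

```python
# CNF conversion in four passes over a tuple AST:
# ("iff", a, b), ("=>", a, b), ("and", a, b), ("or", a, b),
# ("not", a), or a string for a proposition symbol.

def eliminate_iff(s):                       # step 1
    if isinstance(s, str): return s
    if s[0] == "iff":
        a, b = eliminate_iff(s[1]), eliminate_iff(s[2])
        return ("and", ("=>", a, b), ("=>", b, a))
    return (s[0],) + tuple(eliminate_iff(x) for x in s[1:])

def eliminate_implies(s):                   # step 2
    if isinstance(s, str): return s
    if s[0] == "=>":
        return ("or", ("not", eliminate_implies(s[1])), eliminate_implies(s[2]))
    return (s[0],) + tuple(eliminate_implies(x) for x in s[1:])

def push_not(s):                            # step 3: double negation + De Morgan
    if isinstance(s, str): return s
    if s[0] == "not":
        a = s[1]
        if isinstance(a, str): return s
        if a[0] == "not": return push_not(a[1])
        if a[0] == "and":
            return ("or", push_not(("not", a[1])), push_not(("not", a[2])))
        if a[0] == "or":
            return ("and", push_not(("not", a[1])), push_not(("not", a[2])))
    return (s[0],) + tuple(push_not(x) for x in s[1:])

def distribute(s):                          # step 4: distribute OR over AND
    if isinstance(s, str) or s[0] == "not": return s
    a, b = distribute(s[1]), distribute(s[2])
    if s[0] == "or":
        if not isinstance(a, str) and a[0] == "and":
            return ("and", distribute(("or", a[1], b)), distribute(("or", a[2], b)))
        if not isinstance(b, str) and b[0] == "and":
            return ("and", distribute(("or", a, b[1])), distribute(("or", a, b[2])))
    return (s[0], a, b)

def to_cnf(s):
    return distribute(push_not(eliminate_implies(eliminate_iff(s))))

# B1,1 <=> (P1,2 v P2,1) yields the three clauses shown above:
print(to_cnf(("iff", "B11", ("or", "P12", "P21"))))
```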
LOCAL SEARCH ALGORITHMS AND
OPTIMIZATION PROBLEMS
• Local search algorithms operate using a single
current state (rather than multiple paths) and
move only to neighbors of that state.
They are not systematic.
• Two key advantages:
1) They use very little memory.
2) They can find solutions in large or infinite
(continuous) state spaces.
LOCAL SEARCH ALGORITHMS AND
OPTIMIZATION PROBLEMS
• Local search algorithms are useful for solving pure
optimization problems, in which the aim is to find the best
state according to an objective function.
• A landscape has both “location” (defined by the state) and
“elevation” (defined by the value of the heuristic cost
function or objective function).
• If elevation corresponds to cost, then the aim is to find the
lowest valley—a global minimum; if elevation corresponds
to an objective function, then the aim is to find the highest
peak—a global maximum.
• A complete local search algorithm always finds a goal if one
exists; an optimal algorithm always finds a global minimum/
maximum.
HILL-CLIMBING SEARCH
• It is simply a loop that continually moves in the direction
of increasing value, that is, uphill. It terminates when it
reaches a "peak" where no neighbor has a higher value.
• The algorithm does not maintain a search tree, so the
current node data structure need only record the state
and its objective function value.
• Hill-climbing does not look ahead beyond the immediate
neighbors of the current state.
• Hill climbing is sometimes called greedy local search
because it grabs a good neighbor state without thinking
ahead about where to go next.
• Local Maxima: a local maximum is a peak that is higher
than each of its neighboring states but lower than the
global maximum.
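The greedy loop above can be sketched in a few lines. The 1-D objective function below is a made-up example chosen so the search can get stuck on a local maximum:

```python
# Hill-climbing as a greedy loop: repeatedly move to the best
# neighbor; stop when no neighbor improves on the current state.

def hill_climb(state, value, neighbors):
    while True:
        best = max(neighbors(state), key=value)
        if value(best) <= value(state):
            return state          # a peak - possibly only a local maximum
        state = best

# Illustrative objective: local maximum at x=2, global maximum at x=8
def value(x):
    return -(x - 2) ** 2 if x < 5 else 10 - (x - 8) ** 2

def neighbors(x):
    return [x - 1, x + 1]

print(hill_climb(0, value, neighbors))   # 2 - stuck on the local maximum
```

Starting at 0 the search climbs to x=2 and stops, never seeing the higher peak at x=8, which is exactly the local-maximum failure mode described above.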
Hill-climbing search
GENETIC ALGORITHMS
• A genetic algorithm, or GA, is a variant of stochastic
beam search in which successor states are
generated by combining two parent states,
rather than by modifying a single state.
• GAs begin with a set of k randomly generated
states ,called the population. Each state, or
individual, is represented as a string over a finite
alphabet—most commonly, a string of 0s and 1s.
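A minimal GA over bit-strings can be sketched as below. The one-max fitness function (count of 1s), population size, and parameter values are illustrative assumptions:

```python
import random

# Minimal genetic algorithm over bit-strings: fitness-weighted
# selection of two parents, single-point crossover, rare mutation.

def fitness(ind):
    # Illustrative "one-max" fitness: number of 1 bits
    return sum(ind)

def reproduce(x, y):
    # Single-point crossover of two parent strings
    c = random.randrange(1, len(x))
    return x[:c] + y[c:]

def genetic_algorithm(pop, n_gens=100, p_mut=0.05):
    for _ in range(n_gens):
        weights = [fitness(ind) + 1 for ind in pop]   # +1 avoids zero weights
        new_pop = []
        for _ in pop:
            x, y = random.choices(pop, weights=weights, k=2)
            child = reproduce(x, y)
            if random.random() < p_mut:               # mutation
                i = random.randrange(len(child))
                child[i] = 1 - child[i]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

random.seed(0)
population = [[random.randint(0, 1) for _ in range(8)] for _ in range(10)]
print(genetic_algorithm(population))
```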
ONLINE SEARCH AGENTS
• An online search agent operates by interleaving
computation and action: first it takes an action, then it
observes the environment and computes the next
action.
• Online search is a necessary idea for an exploration
problem, where the states and actions are unknown to
the agent.
• An agent in this state of ignorance must use its actions
as experiments to determine what to do next, and
hence must interleave computation and action.
• Example: a robot that is placed in a new building and
must explore it to build a map that it can use for getting
from A to B.
ONLINE SEARCH AGENTS
• An online agent receives a
percept telling it what state it has
reached; from this information, it
can augment its map of the
environment.
• The current map is used to decide
where to go next.
ONLINE LOCAL SEARCH
• A random walk can be used to explore the
environment.
• A random walk simply selects at random one of the
available actions from the current state; preference
can be given to actions that have not yet been tried.
• A random walk will eventually find a goal or
complete its exploration, provided that the space is
finite.
• The process can be very slow.
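A random walk with the untried-action preference can be sketched as follows. The corridor world, state encoding, and helper names are illustrative assumptions:

```python
import random

# Random-walk exploration: from each state pick an untried action
# if any remain, otherwise a uniformly random one.

def random_walk(start, goal, actions, result, max_steps=10000):
    state, untried = start, {}
    for step in range(max_steps):
        if state == goal:
            return step                       # number of actions taken
        acts = untried.setdefault(state, list(actions(state)))
        a = acts.pop() if acts else random.choice(list(actions(state)))
        state = result(state, a)
    return None                               # gave up

# Illustrative world: a 1-D corridor of states 0..5 with the goal at 5
actions = lambda s: [d for d in (-1, +1) if 0 <= s + d <= 5]
result = lambda s, a: s + a
steps = random_walk(0, 5, actions, result)
print(steps)   # 5
```

In this tiny corridor the untried-action preference happens to walk straight to the goal; in the adversarial environment described below, a plain random walk instead needs exponentially many steps.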
• The figure shows an environment in which a random walk
will take exponentially many steps to find the goal
because at each step backward progress is twice as likely
as forward progress.
• An agent implementing this scheme is called a
learning real-time A* (LRTA*) agent.
• Optimism under uncertainty encourages the agent to
explore new, possibly promising paths.
• An LRTA* agent is guaranteed to find a goal in any
finite, safely explorable environment.
THANK
YOU!!!
