
Unit 3

Propositional logic is a knowledge-representation technique that uses symbolic variables to represent logical statements as either true or false. It uses logical connectives (negation, conjunction, disjunction, implication, and biconditional) to construct complex logical statements from simpler atomic propositions, and the truth of a complex statement is determined by the truth tables defined for each connective. Propositional logic provides a simple yet powerful way to represent logical relationships in artificial intelligence systems.


LOGICAL AGENTS

INTRODUCTION
“In which we design agents that can form representations of a complex world, use
a process of inference to derive new representations about the world, and use
these new representations to deduce what to do”

“Humans know things, and what they know helps them do things. The intelligence of humans is achieved not by purely reflex mechanisms but by processes of reasoning that operate on internal representations of knowledge. In AI, this approach to intelligence is embodied in knowledge-based agents.”

2
KNOWLEDGE-BASED AGENTS
The central component of a knowledge-based agent is its knowledge base (KB).
A knowledge base is a set of sentences.
Each sentence is expressed in a language called a knowledge representation language and represents some assertion about the world.
We dignify a sentence with the name axiom when the sentence is taken as given without being derived from other sentences.
There must be a way to add new sentences to the knowledge base and a way to query what is known. The standard names for these operations are TELL and ASK, respectively.
Both operations may involve inference, that is, deriving new sentences from old.
Inference must obey the requirement that when one ASKs a question of the knowledge base, the answer should follow from what has been told (TELLed) to the knowledge base previously.
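The TELL/ASK loop can be sketched as a small program. This is a minimal illustration, not the book's agent: `KnowledgeBase` and `choose_action` are placeholder names of our own, and `ask` here does no real inference.

```python
# Minimal sketch of a knowledge-based agent step (placeholder KB, no inference).

class KnowledgeBase:
    def __init__(self, background=()):
        self.sentences = list(background)   # may hold background knowledge

    def tell(self, sentence):
        self.sentences.append(sentence)

    def ask(self, query):
        # Placeholder: a real agent would run inference here and return
        # an answer that follows from what was TELLed.
        return query in self.sentences

def kb_agent_step(kb, percept, t, choose_action):
    kb.tell(("percept", percept, t))   # 1. TELL the KB what it perceives
    action = choose_action(kb, t)      # 2. ASK (via inference) what to do
    kb.tell(("action", action, t))     # 3. TELL the KB which action was chosen
    return action
```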

3
BACKGROUND KNOWLEDGE
Like all our agents, the knowledge-based agent takes a percept as input and
returns an action. The agent maintains a knowledge base, KB,
which may initially contain some background knowledge.
Each time the agent program is called, it does three
things.
First, it TELLs the knowledge base what it perceives.
Second, it ASKs the knowledge base what action it
should perform. In the process of answering this query,
extensive reasoning may be done about the current
state of the world, about the outcomes of possible
action sequences, and so on.
Third, the agent program TELLs the knowledge base
which action was chosen, and the agent executes the
action.

4
THE WUMPUS WORLD
The wumpus world is a cave consisting of rooms connected by passageways. Lurking
somewhere in the cave is the terrible wumpus, a beast that eats anyone who enters its
room.
The wumpus can be shot by an agent, but the agent has only one arrow. Some rooms
contain bottomless pits that will trap anyone who wanders into these rooms (except
for the wumpus, which is too big to fall in). The only mitigating feature of this bleak
environment is the possibility of finding a heap of gold. Although the wumpus world is
rather tame by modern computer game standards, it illustrates some important points
about intelligence.

5
Performance measure: +1000 for climbing out of the cave with
the gold, –1000 for falling into a pit or being eaten by the
wumpus, –1 for each action taken and –10 for using up the
arrow. The game ends either when the agent dies or when the
agent climbs out of the cave
Environment: A 4×4 grid of rooms. The agent always starts in
the square labeled [1,1], facing to the right. The locations of the
gold and the wumpus are chosen randomly, with a uniform
distribution, from the squares other than the start square. In
addition, each square other than the start can be a pit, with
probability 0.2

6
Actuators: The agent can move Forward, TurnLeft by 90◦, or TurnRight by 90◦. The
agent dies a miserable death if it enters a square containing a pit or a live wumpus.
(It is safe, even though smelly, to enter a square with a dead wumpus.)
If an agent tries to move forward and bumps into a wall, then the agent does not
move. The action Grab can be used to pick up the gold if it is in the same square as
the agent.
The action Shoot can be used to fire an arrow in a straight line in the direction the
agent is facing. The arrow continues until it either hits (and hence kills) the wumpus or
hits a wall. The agent has only one arrow, so only the first Shoot action has any effect.
Finally, the action Climb can be used to climb out of the cave, but only from square
[1,1]
7
Sensors: The agent has five sensors, each of which gives a single bit of information:
In the square containing the wumpus and in the directly (not diagonally) adjacent squares, the agent
will perceive a Stench.
In the squares directly adjacent to a pit, the agent will perceive a Breeze.
In the square where the gold is, the agent will perceive a Glitter.
When an agent walks into a wall, it will perceive a Bump.
When the wumpus is killed, it emits a woeful Scream that can be perceived anywhere in the cave.

The percepts will be given to the agent program in the form of a list of five symbols;
for example, if there is a stench and a breeze, but no glitter, bump, or scream, the
agent program will get [Stench, Breeze, None, None, None].

8
9
LOGIC
• Syntax
• Semantics
• Truth
• Possible world
• Model
• Satisfaction
• Entailment

10
PROPOSITIONAL LOGIC: A VERY SIMPLE LOGIC
The atomic sentences consist of a single proposition symbol. Each such symbol stands
for a proposition that can be true or false
Atomic sentences are easy:
• True is true in every model and False is false in every model
• The truth value of every other proposition symbol must be specified directly in the
model

11
Complex sentences are constructed from simpler sentences, using parentheses and
logical connectives.
There are five connectives in common use: ¬ (not), ∧ (and), ∨ (or), ⇒ (implication), and ⇔ (biconditional).

12
PROPOSITIONAL LOGIC IN ARTIFICIAL
INTELLIGENCE
Propositional logic (PL) is the simplest form of logic, in which all statements are made of propositions. A proposition
is a declarative statement that is either true or false. It is a technique for representing knowledge in logical and
mathematical form.
• Propositional logic is also called Boolean logic, as it works on 0 and 1.
• In propositional logic, we use symbolic variables to represent the logic, and any symbol can represent a proposition, such as A, B, C, P, Q, R, etc.
• A proposition can be either true or false, but not both.
• Propositional logic consists of objects, relations or functions, and logical connectives.
• These connectives are also called logical operators.
• Propositions and connectives are the basic elements of propositional logic.
• A connective is a logical operator that joins two sentences.
• A propositional formula that is always true is called a tautology; it is also called a valid sentence.
• A propositional formula that is always false is called a contradiction.
• A propositional formula that is true under some assignments and false under others is called a contingency.
• Statements that are questions, commands, or opinions, such as "Where is Rohini?", "How are you?", and "What is your name?", are not propositions.

13
SYNTAX OF PROPOSITIONAL LOGIC
There are two types of propositions:
Atomic propositions are simple propositions, each consisting of a single proposition
symbol. These are sentences that must be either true or false.
Example: a) "2 + 2 is 4" is an atomic proposition, as it is a true fact.
b) "The Sun is cold" is also a proposition, as it is a false fact.
Compound propositions are constructed by combining simpler (atomic) propositions,
using parentheses and logical connectives.
Example: a) "It is raining today, and the street is wet."
b) "Ankit is a doctor, and his clinic is in Mumbai."

14
For complex sentences, we have five rules, which hold for any subsentences P and Q
in any model m (here "iff" means "if and only if"):
• ¬P is true iff P is false in m.
• P ∧ Q is true iff both P and Q are true in m.
• P ∨ Q is true iff either P or Q is true in m.
• P ⇒ Q is true unless P is true and Q is false in m.
• P ⇔ Q is true iff P and Q are both true or both false in m.
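These five rules can be turned directly into a tiny evaluator. In the sketch below, sentences are encoded as nested tuples with the operator first; the operator spellings ("~", "&", "|", "=>", "<=>") are our own choice, not standard notation.

```python
# Evaluate a propositional sentence in a model (a dict symbol -> bool).
# Sentences: a symbol string, or ("~", s), ("&", s1, s2), ("|", s1, s2),
# ("=>", s1, s2), ("<=>", s1, s2).

def pl_true(sentence, model):
    if isinstance(sentence, str):
        return model[sentence]
    op, *args = sentence
    if op == "~":
        return not pl_true(args[0], model)
    p, q = (pl_true(a, model) for a in args)
    if op == "&":
        return p and q
    if op == "|":
        return p or q
    if op == "=>":
        return (not p) or q    # true unless P is true and Q is false
    if op == "<=>":
        return p == q          # both true or both false
    raise ValueError(op)
```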

15
LOGICAL CONNECTIVES:
• Negation: A sentence such as ¬P is called the negation of P. A literal is either a positive literal or a negative literal.
• Conjunction: A sentence with the ∧ connective, such as P ∧ Q, is called a conjunction.
Example: "Rohan is intelligent and hardworking" can be written as P ∧ Q, with P = Rohan is intelligent and Q = Rohan is hardworking.
• Disjunction: A sentence with the ∨ connective, such as P ∨ Q, is called a disjunction, where P and Q are propositions.
Example: "Ritika is a doctor or an engineer": with P = Ritika is a doctor and Q = Ritika is an engineer, we can write it as P ∨ Q.
• Implication: A sentence such as P → Q is called an implication. Implications are also known as if-then rules.
Example: "If it is raining, then the street is wet." With P = It is raining and Q = The street is wet, it is represented as P → Q.
• Biconditional: A sentence such as P ⇔ Q is a biconditional sentence.
Example: "I am breathing if and only if I am alive." With P = I am breathing and Q = I am alive, it is represented as P ⇔ Q.
16
17
18
PROPERTIES OF OPERATORS:

Commutativity:
• P ∧ Q ≡ Q ∧ P
• P ∨ Q ≡ Q ∨ P

Associativity:
• (P ∧ Q) ∧ R ≡ P ∧ (Q ∧ R)
• (P ∨ Q) ∨ R ≡ P ∨ (Q ∨ R)

Distributivity:
• P ∧ (Q ∨ R) ≡ (P ∧ Q) ∨ (P ∧ R)
• P ∨ (Q ∧ R) ≡ (P ∨ Q) ∧ (P ∨ R)

De Morgan's laws:
• ¬(P ∧ Q) ≡ ¬P ∨ ¬Q
• ¬(P ∨ Q) ≡ ¬P ∧ ¬Q

Identity element:
• P ∧ True ≡ P
• P ∨ True ≡ True

Double-negation elimination:
• ¬(¬P) ≡ P
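Each of these equivalences can be checked by brute force over all truth assignments. A minimal sketch (the lambda encodings of the two sides are our own):

```python
from itertools import product

# Check that two boolean functions agree on every truth assignment.
def equivalent(f, g, symbols):
    return all(f(*vals) == g(*vals)
               for vals in product([False, True], repeat=len(symbols)))

# De Morgan: ¬(P ∧ Q) ≡ ¬P ∨ ¬Q
demorgan = equivalent(lambda p, q: not (p and q),
                      lambda p, q: (not p) or (not q), "PQ")

# Distributivity: P ∨ (Q ∧ R) ≡ (P ∨ Q) ∧ (P ∨ R)
distrib = equivalent(lambda p, q, r: p or (q and r),
                     lambda p, q, r: (p or q) and (p or r), "PQR")
```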

19
A SIMPLE KNOWLEDGE BASE
• Px,y is true if there is a pit in [x, y].
• Wx,y is true if there is a wumpus in [x, y], dead or alive.
• Bx,y is true if the agent perceives a breeze in [x, y].
• Sx,y is true if the agent perceives a stench in [x, y].

The sentences we write will suffice to derive ¬P1,2 (there is no pit in [1,2]).
There is no pit in [1,1]:
R1 : ¬P1,1 .
A square is breezy if and only if there is a pit in a neighboring square:
R2 : B1,1 ⇔ (P1,2 ∨ P2,1) .
R3 : B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1) .
The preceding sentences are true in all wumpus worlds. Now we include the breeze percepts for the first
two squares visited in the specific world the agent is in, leading up to the situation in Figure.
R4 : ¬B1,1 .
R5 : B2,1 .
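We can check the claim by model checking: enumerate every assignment to the pit symbols, keep the assignments in which R1 through R3 hold (the breeze symbols are fixed by the percepts R4 and R5), and confirm that ¬P1,2 holds in all of them. A sketch, with our own symbol names:

```python
from itertools import product

symbols = ["P11", "P12", "P21", "P22", "P31"]
B11, B21 = False, True                     # R4: no breeze in [1,1]; R5: breeze in [2,1]

def kb_holds(m):
    return (not m["P11"]                                     # R1
            and B11 == (m["P12"] or m["P21"])                # R2
            and B21 == (m["P11"] or m["P22"] or m["P31"]))   # R3

# All models of the knowledge base.
models = [m for vals in product([False, True], repeat=len(symbols))
          for m in [dict(zip(symbols, vals))] if kb_holds(m)]

# KB entails ¬P1,2 iff P12 is false in every model of the KB.
entails_no_pit_12 = all(not m["P12"] for m in models)
```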
20
INFERENCE PROCEDURE
Inference:
In artificial intelligence, we need intelligent systems that can derive new conclusions
from old knowledge and from evidence; generating conclusions from evidence and facts is termed
inference.
Inference rules:
Inference rules are templates for generating valid arguments.
Inference rules are applied to derive proofs in artificial intelligence; a proof is a
sequence of conclusions that leads to the desired goal.
Among the connectives, implication plays the most important role in inference rules.

21
TERMINOLOGIES RELATED TO INFERENCE RULES
Implication: one of the logical connectives, represented as P → Q. It is a
Boolean expression.
Converse: the converse of an implication swaps its two sides; it can be
written as Q → P.
Contrapositive: negating both sides of the converse gives the contrapositive,
represented as ¬Q → ¬P.
Inverse: negating both sides of the implication gives the inverse. It can be represented as ¬P → ¬Q.

22
TYPES OF INFERENCE RULES : MODUS PONENS
The Modus Ponens rule is one of the most important rules of inference. It states
that if P and P → Q are true, then we can infer that Q is true. It can be
represented as: from P → Q and P, infer Q.

Example:
Statement-1: "If I am sleepy then I go to bed" ==> P→ Q
Statement-2: "I am sleepy" ==> P
Conclusion: "I go to bed." ==> Q.
Hence, we can say that, if P→ Q is true and P is true then Q will be true.
Proof by Truth table:
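The truth-table proof can be reproduced mechanically: build all four rows for P and Q and check that Q is true in every row where both premises (P and P → Q) hold. A sketch:

```python
from itertools import product

# One row per assignment: (P, Q, truth value of P => Q).
rows = []
for P, Q in product([False, True], repeat=2):
    implies = (not P) or Q
    rows.append((P, Q, implies))

# Modus Ponens is sound: wherever P and P => Q are both true, Q is true.
modus_ponens_sound = all(Q for P, Q, impl in rows if P and impl)
```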

23
Another useful inference rule is And-Elimination, which says that, from a conjunction,
any of the conjuncts can be inferred: α ∧ β / α
For example, from (WumpusAhead ∧ WumpusAlive), WumpusAlive can be inferred.
By considering the possible truth values of α and β, one can show easily that Modus
Ponens and And-Elimination are sound once and for all. These rules can then be used
in any particular instances where they apply, generating sound inferences without the
need for enumerating models.
All of the logical equivalences in Figure(Next slide) can be used as inference rules.
For example, the equivalence for biconditional elimination yields the two inference
rules

24
STANDARD LOGICAL EQUIVALENCES. THE SYMBOLS ALPHA, BETA, AND GAMMA STAND FOR ARBITRARY SENTENCES OF PROPOSITIONAL LOGIC

25
MODUS TOLLENS
From P → Q and ¬Q, infer ¬P.
26
HYPOTHETICAL SYLLOGISM
From P → Q and Q → R, infer P → R.
27
DISJUNCTIVE SYLLOGISM
From P ∨ Q and ¬P, infer Q.
28
ADDITION
From P, infer P ∨ Q.
29
SIMPLIFICATION
From P ∧ Q, infer P.
30
RESOLUTION
From P ∨ Q and ¬P ∨ R, infer Q ∨ R.
31
PROPOSITIONAL THEOREM PROVING
Theorem proving - applying rules of inference directly to the sentences in our
knowledge base to construct a proof of the desired sentence without consulting
models
The first concept is logical equivalence: two sentences α and β are logically equivalent if
they are true in the same set of models. We write this as α ≡ β.
Equivalence can also be characterized as follows: any two sentences α and β are equivalent if and only if each of
them entails the other: α ≡ β if and only if α |= β and β |= α.
The second concept we will need is validity. A sentence is valid if it is true in all
models. For example, the sentence P ∨ ¬P is valid
Valid sentences are also known as tautologies - they are necessarily true

32
INFERENCE AND PROOFS
Let us see how these inference rules and equivalences can be used in the wumpus world. We start with the
knowledge base containing R1 through R5 and show how to prove ¬P1,2, that is, there is no pit in [1,2]. We
apply biconditional elimination to R2 to obtain
R6: (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1) .
Then we apply And-Elimination to R6 to obtain
R7: ((P1,2 ∨ P2,1) ⇒ B1,1) .
Logical equivalence for contrapositives gives
R8: (¬B1,1 ⇒ ¬(P1,2 ∨ P2,1)) .
Now we can apply Modus Ponens with R8 and the percept R4 (i.e., ¬B1,1), to obtain
R9 : ¬(P1,2 ∨ P2,1) .
Finally, we apply De Morgan’s rule, giving the conclusion
R10 : ¬P1,2 ∧ ¬P2,1 .
That is, neither [1,2] nor [2,1] contains a pit.

33
PROOF BY RESOLUTION
The current section introduces a single inference rule, resolution, that yields a complete inference
algorithm when coupled with any complete search algorithm.
We begin by using a simple version of the resolution rule in the wumpus world. Let us consider the steps
leading up to Figure (a): the agent returns from [2,1] to [1,1] and then goes to [1,2], where it perceives
a stench, but no breeze. We add the following facts to the knowledge base:
R11 : ¬B1,2 .
R12 : B1,2 ⇔ (P1,1 ∨ P2,2 ∨ P1,3) .
By the same process that led to R10 earlier, we can now derive the absence of pits in [2,2] and [1,3]
(remember that [1,1] is already known to be pitless):
R13 : ¬P2,2 .
R14 : ¬P1,3 .

34
We can also apply biconditional elimination to R3, followed by Modus Ponens with
R5, to obtain the fact that there is a pit in [1,1], [2,2], or [3,1]:
R15 : P1,1 ∨ P2,2 ∨ P3,1 .
Now comes the first application of the resolution rule: the literal ¬P2,2 in R13
resolves with the literal P2,2 in R15 to give the resolvent
R16 : P1,1 ∨ P3,1 .
In other words, if there’s a pit in one of [1,1], [2,2], and [3,1], and it’s not in [2,2], then it’s in [1,1] or
[3,1]. Similarly, the literal ¬P1,1 in R1 resolves with the literal P1,1 in R16 to give
R17 : P3,1 .

35
Unit resolution takes a clause (a disjunction of literals) and a literal and produces a new clause.
A single literal can be viewed as a disjunction of one literal, also known as a unit
clause.
There is one more technical aspect of the resolution rule: the resulting clause should
contain only one copy of each literal.
The removal of multiple copies of literals is called factoring.
For example, if we resolve (A ∨ B) with (A ∨ ¬B), we obtain (A ∨ A), which is reduced
to just A.
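Resolution with factoring is easy to sketch if clauses are represented as sets of literals, since a set cannot hold two copies of the same literal. The "~" prefix for negation is our own encoding:

```python
# Binary resolution on clauses represented as frozensets of literal strings.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return every clause obtainable by resolving c1 with c2 on one literal."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            # Frozenset union collapses duplicates: factoring for free.
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return resolvents
```

Resolving (A ∨ B) with (A ∨ ¬B) then yields the single clause {A}, as in the example above.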

36
CONJUNCTIVE NORMAL FORM
The resolution rule applies only to clauses (that is, disjunctions of literals), so it would
seem to be relevant only to knowledge bases and queries consisting of clauses
How, then, can it lead to a complete inference procedure for all of propositional
logic?
The answer is that every sentence of propositional logic is logically equivalent to a conjunction of
clauses. A sentence expressed as a conjunction of clauses is said to be in conjunctive
normal form, or CNF.

37
WE ILLUSTRATE THE PROCEDURE BY CONVERTING THE
SENTENCE B1,1 ⇔ (P1,2 ∨ P2,1) INTO CNF.
THE STEPS ARE
1. Eliminate ⇔, replacing α ⇔ β with (α ⇒ β) ∧ (β ⇒ α).
(B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1) .
2. Eliminate ⇒, replacing α ⇒ β with ¬α ∨ β: ( Implication Elimination Slide 25)
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬(P1,2 ∨ P2,1) ∨ B1,1)
3. CNF requires ¬ to appear only in literals, so we “move ¬ inwards” by repeated application of the following
equivalences from Figure Slide 19
 ¬(¬α) ≡ α (double-negation elimination) ¬(α ∧ β) ≡ (¬α ∨ ¬β) (De Morgan) ¬(α ∨ β) ≡ (¬α ∧ ¬β) (De Morgan)

In the example, we require just one application of the last rule: (¬B1,1 ∨ P1,2 ∨ P2,1) ∧ ((¬P1,2 ∧ ¬P2,1) ∨ B1,1) .
4. Now we have a sentence containing nested ∧ and ∨ operators applied to literals. We apply the distributivity law
from Figure Slide 19, distributing ∨ over ∧ wherever possible.
(¬B1,1 ∨ P1,2 ∨ P2,1) ∧ (¬P1,2 ∨ B1,1) ∧ (¬P2,1 ∨ B1,1) .
The original sentence is now in CNF, as a conjunction of three clauses. It is much harder to read, but it can be used as
input to a resolution procedure.
38
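We can confirm that the conversion preserved meaning by comparing the truth tables of the original biconditional and the final three-clause CNF. A sketch:

```python
from itertools import product

# Original sentence: B1,1 <=> (P1,2 v P2,1).
def original(b11, p12, p21):
    return b11 == (p12 or p21)

# Result of the conversion: (~B11 v P12 v P21) ^ (~P12 v B11) ^ (~P21 v B11).
def cnf(b11, p12, p21):
    return (((not b11) or p12 or p21)
            and ((not p12) or b11)
            and ((not p21) or b11))

# Equivalent iff they agree on all eight truth assignments.
same = all(original(*v) == cnf(*v)
           for v in product([False, True], repeat=3))
```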
A grammar for conjunctive normal form, Horn clauses, and definite clauses.
A clause such as A ∧ B ⇒ C is still a definite clause when it is written as ¬A ∨ ¬B ∨ C, but only the former is
considered the canonical form for definite clauses.
One more class is the k-CNF sentence, which is a CNF sentence where each clause has at most k literals.
39
A RESOLUTION ALGORITHM
A resolution algorithm first converts (KB ∧ ¬α) into CNF. Then, the resolution rule is applied to
the resulting clauses.
Each pair that contains complementary literals is resolved to produce a new clause,
which is added to the set if it is not already present
The process continues until one of two things happens:
• there are no new clauses that can be added, in which case KB does not entail α; or,
• two clauses resolve to yield the empty clause, in which case KB entails α.

The empty clause (a disjunction of no disjuncts) is equivalent to False, because a
disjunction is true only if at least one of its disjuncts is true.
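The loop just described can be sketched as follows, again with clauses as frozensets of "~"-prefixed literal strings (our own encoding). `pl_resolution` takes the clauses of KB ∧ ¬α and reports whether the empty clause is derivable:

```python
from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All clauses obtainable by resolving c1 with c2 on one literal."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def pl_resolution(clauses):
    """clauses: set of frozensets encoding KB ^ ~alpha. True iff KB entails alpha."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for resolvent in resolve(c1, c2):
                if not resolvent:          # empty clause derived: entailed
                    return True
                new.add(resolvent)
        if new <= clauses:                 # no new clauses: not entailed
            return False
        clauses |= new
```

On the wumpus example (KB = R2 ∧ R4, α = ¬P1,2), the clause set {¬B1,1 ∨ P1,2 ∨ P2,1, ¬P1,2 ∨ B1,1, ¬P2,1 ∨ B1,1, ¬B1,1, P1,2} resolves to the empty clause, so the KB entails ¬P1,2.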

40
WE CAN APPLY THE RESOLUTION PROCEDURE TO A VERY SIMPLE INFERENCE IN THE WUMPUS WORLD. WHEN THE
AGENT IS IN [1,1], THERE IS NO BREEZE, SO THERE CAN BE NO PITS IN NEIGHBORING SQUARES. THE RELEVANT
KNOWLEDGE BASE IS
KB = R2 ∧ R4 = (B1,1 ⇔ (P1,2 ∨ P2,1)) ∧ ¬B1,1

41
HORN CLAUSES AND DEFINITE CLAUSES
In many practical situations, however, the full power of resolution is not needed. Some real-
world knowledge bases satisfy certain restrictions on the form of sentences they contain, which
enables them to use a more restricted and efficient inference algorithm
One such restricted form is the definite clause, which is a disjunction of literals of which exactly
one is positive
For example, the clause (¬L1,1 ∨¬Breeze ∨B1,1) is a definite clause, whereas (¬B1,1 ∨ P1,2
∨ P2,1) is not
Slightly more general is the Horn clause, which is a disjunction of literals of which at most one
is positive
All definite clauses are Horn clauses, as are clauses with no positive literals; these are called
goal clauses

42
Every definite clause can be written as an implication whose premise is a conjunction
of positive literals and whose conclusion is a single positive literal
For example, the definite clause (¬L1,1 ∨ ¬Breeze ∨ B1,1) can be written as the
implication
(L1,1 ∧ Breeze) ⇒ B1,1. In the implication form, the sentence is easier to understand:
it says that if the agent is in [1,1] and there is a breeze, then [1,1] is breezy
In Horn form, the premise is called the body and the conclusion is called the head
A sentence consisting of a single positive literal, such as L1,1, is called a fact
It too can be written in implication form as True ⇒ L1,1, but it is simpler to write just
L1,1.
43
FORWARD AND BACKWARD CHAINING
The forward-chaining algorithm determines whether a single proposition symbol q, the query, is entailed by a knowledge base of definite
clauses.
It begins from known facts (positive literals) in the knowledge base
If all the premises of an implication are known, then its conclusion is added to the set of known facts. For example, if L1,1 and
Breeze are known and (L1,1 ∧ Breeze) ⇒ B1,1 is in the knowledge base, then B1,1 can be added. This process continues until the
query q is added or until no further inferences can be made.

The backward-chaining algorithm, as its name suggests, works backward from the query. If the query q is known to be true, then no
work is needed
The algorithm finds those implications in the knowledge base whose conclusion is q. If all the premises of one of those implications
can be proved true (by backward chaining), then q is true

44
PROBLEM
Given the following definite clauses and facts, prove Q by forward chaining:
P ⇒ Q
L ∧ M ⇒ P
B ∧ L ⇒ M
A ∧ P ⇒ L
A ∧ B ⇒ L
A
B
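This problem can be solved with a simple forward-chaining sketch: represent each definite clause as (premises, conclusion), start from the facts A and B, and fire rules until Q appears or nothing new can be derived. With the rules in the listed order, the derived atoms appear in the order L, M, P, and finally Q.

```python
# Forward chaining for definite clauses over proposition symbols.

def pl_fc_entails(rules, facts, query):
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and set(premises) <= known:
                known.add(conclusion)      # fire the rule
                changed = True
                if conclusion == query:
                    return True
    return query in known

rules = [(["P"], "Q"), (["L", "M"], "P"), (["B", "L"], "M"),
         (["A", "P"], "L"), (["A", "B"], "L")]
derives_q = pl_fc_entails(rules, ["A", "B"], "Q")
```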

45
EFFECTIVE PROPOSITIONAL MODEL CHECKING
We describe two families of efficient algorithms for general propositional inference
based on model checking: one approach based on backtracking search, and one based
on local hill-climbing search.

46
A COMPLETE BACKTRACKING ALGORITHM
Early termination: The algorithm detects whether the sentence must be true or false,
even with a partially completed model. A clause is true if any literal is true, even if
the other literals do not yet have truth values; hence, the sentence as a whole could
be judged true even before the model is complete.
For example, the sentence (A ∨ B) ∧ (A ∨ C) is true if A is true, regardless of the
values of B and C.
Similarly, a sentence is false if any clause is false, which occurs when each of its
literals is false.
Again, this can occur long before the model is complete
Early termination avoids examination of entire subtrees in the search space

47
DPLL – DAVIS PUTNAM LOGEMANN & LOVELAND
Pure symbol heuristic: A pure symbol is a symbol that always appears with the same
“sign” in all clauses. For example, in the three clauses (A ∨ ¬B), (¬B ∨ ¬C), and (C
∨ A), the symbol A is pure because only the positive literal appears, B is pure
because only the negative literal appears, and C is impure
Unit clause heuristic: A unit clause was defined earlier as a clause with just one literal

48
function DPLL-SATISFIABLE?(s) returns true or false
  inputs: s, a sentence in propositional logic
  clauses ← the set of clauses in the CNF representation of s
  symbols ← a list of the proposition symbols in s
  return DPLL(clauses, symbols, { })

function DPLL(clauses, symbols, model) returns true or false
  if every clause in clauses is true in model then return true
  if some clause in clauses is false in model then return false
  P, value ← FIND-PURE-SYMBOL(symbols, clauses, model)
  if P is non-null then return DPLL(clauses, symbols - P, model ∪ {P = value})
  P, value ← FIND-UNIT-CLAUSE(clauses, model)
  if P is non-null then return DPLL(clauses, symbols - P, model ∪ {P = value})
  P ← FIRST(symbols); rest ← REST(symbols)
  return DPLL(clauses, rest, model ∪ {P = true}) or DPLL(clauses, rest, model ∪ {P = false})
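A runnable Python version of the pseudocode above, with the same three ingredients: early termination on the partial model, the pure symbol heuristic, and the unit clause heuristic. Clauses are frozensets of literal strings with "~" marking negation; this encoding is our own.

```python
def lit_value(lit, model):
    """Truth value of a literal in a partial model, or None if unassigned."""
    sym = lit.lstrip("~")
    if sym not in model:
        return None
    return (not model[sym]) if lit.startswith("~") else model[sym]

def dpll(clauses, symbols, model):
    # Early termination on the partial model.
    pending = []
    for clause in clauses:
        vals = [lit_value(l, model) for l in clause]
        if True in vals:
            continue                      # clause already satisfied
        if None not in vals:
            return False                  # every literal false: clause false
        pending.append(clause)
    if not pending:
        return True                       # every clause true

    # Pure symbol heuristic: always the same sign in the unsatisfied clauses.
    for sym in symbols:
        signs = {l.startswith("~") for c in pending for l in c
                 if l.lstrip("~") == sym}
        if len(signs) == 1:
            value = not signs.pop()       # positive occurrences -> assign true
            rest = [s for s in symbols if s != sym]
            return dpll(clauses, rest, {**model, sym: value})

    # Unit clause heuristic: exactly one unassigned literal remains.
    for clause in pending:
        unassigned = [l for l in clause if lit_value(l, model) is None]
        if len(unassigned) == 1:
            lit = unassigned[0]
            sym, value = lit.lstrip("~"), not lit.startswith("~")
            rest = [s for s in symbols if s != sym]
            return dpll(clauses, rest, {**model, sym: value})

    # Otherwise branch on the first remaining symbol.
    sym, rest = symbols[0], symbols[1:]
    return (dpll(clauses, rest, {**model, sym: True})
            or dpll(clauses, rest, {**model, sym: False}))

def dpll_satisfiable(clauses):
    symbols = sorted({l.lstrip("~") for c in clauses for l in c})
    return dpll(list(clauses), symbols, {})
```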

49
LOCAL SEARCH ALGORITHMS
function WALKSAT(clauses, p, max_flips) returns a satisfying model or failure
  inputs: clauses, a set of clauses in propositional logic
          p, the probability of choosing to do a "random walk" move, typically around 0.5
          max_flips, number of flips allowed before giving up
  model ← a random assignment of true/false to the symbols in clauses
  for i = 1 to max_flips do
    if model satisfies clauses then return model
    clause ← a randomly selected clause from clauses that is false in model
    with probability p flip the value in model of a randomly selected symbol from clause
    else flip whichever symbol in clause maximizes the number of satisfied clauses
  return failure
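A runnable sketch of WALKSAT, using the same "~"-prefixed string encoding for literals as in the earlier sketches. The `rng` parameter is our addition so the random walk can be seeded for reproducibility.

```python
import random

def satisfies(clause, model):
    """A clause is satisfied if at least one of its literals is true."""
    return any(model[l.lstrip("~")] != l.startswith("~") for l in clause)

def walksat(clauses, p=0.5, max_flips=10_000, rng=random):
    symbols = sorted({l.lstrip("~") for c in clauses for l in c})
    model = {s: rng.random() < 0.5 for s in symbols}   # random assignment
    for _ in range(max_flips):
        unsatisfied = [c for c in clauses if not satisfies(c, model)]
        if not unsatisfied:
            return model                               # satisfying model found
        clause = rng.choice(unsatisfied)
        if rng.random() < p:                           # random-walk move
            sym = rng.choice(sorted(l.lstrip("~") for l in clause))
        else:                                          # greedy move
            def score(s):
                flipped = {**model, s: not model[s]}
                return sum(satisfies(c, flipped) for c in clauses)
            sym = max((l.lstrip("~") for l in clause), key=score)
        model[sym] = not model[sym]
    return None                                        # failure
```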

50
AGENTS BASED ON PROPOSITIONAL LOGIC
The current state of the world:
- The knowledge base is composed of axioms: general knowledge about how the world works,
and percept sentences obtained from the agent's experience in a particular world.
• The agent knows that the starting square contains no pit (¬P1,1) and no wumpus (¬W1,1).
• For each square, it knows that the square is breezy if and only if a neighboring square has a pit:
B1,1 ⇔ (P1,2 ∨ P2,1)
• A square is smelly if and only if a neighboring square has a wumpus: S1,1 ⇔ (W1,2 ∨ W2,1)

51
AXIOM
Axioms allow the agent to keep track of fluents such as L^t_x,y.
A fluent refers to an aspect of the world that changes; L^t_x,y means the agent is at location [x, y] at time t.
If the agent is at location [1,1] facing east at time 0 and goes Forward, the result is
that the agent is in square [2,1] and no longer in [1,1]:
L^0_1,1 ∧ FacingEast^0 ∧ Forward^0 ⇒ (L^1_2,1 ∧ ¬L^1_1,1)

52
SUCCESSOR-STATE AXIOM
L^t+1_1,1 is true if either
(a) the agent moved Forward from [1,2] when facing south, or from [2,1]
when facing west; or
(b) L^t_1,1 was already true and the action did not cause movement (either
because the action was not Forward or because the action bumped into a wall):
L^t+1_1,1 ⇔ (L^t_1,1 ∧ (¬Forward^t ∨ Bump^t+1))
          ∨ (L^t_1,2 ∧ (South^t ∧ Forward^t))
          ∨ (L^t_2,1 ∧ (West^t ∧ Forward^t))
Another axiom:
OK^t_x,y ⇔ ¬P_x,y ∧ ¬(W_x,y ∧ WumpusAlive^t)

53
A HYBRID AGENT
The agent program maintains and updates a knowledge base as well as a current plan.
The initial knowledge base contains the atemporal axioms (those that don't depend on t), such as the
axiom relating the breeziness of squares to the presence of pits.
At each time step, the new percept sentence is added along with all the axioms that depend on t, such as
the successor-state axioms.
The agent uses logical inference, by ASKing questions of the knowledge base, to work out which squares
are safe and which have yet to be visited.
The main body of the agent program constructs a plan based on a decreasing priority of goals.
First, if there is a glitter, the program constructs a plan to grab the gold, follow a route back to the initial
location, and climb out of the cave.
Otherwise, if there is no current plan, the program plans a route to the closest safe square that it has not
visited yet, making sure the route goes through only safe squares.
54
Route planning is done with A∗ search, not with ASK.
If there are no safe squares to explore, the next step (if the agent still has an arrow)
is to try to make a safe square by shooting at one of the possible wumpus locations.
These are determined by asking where ASK(KB, ¬Wx,y) is false; that is, where it is
not known that there is not a wumpus.
The function PLAN-SHOT uses PLAN-ROUTE to plan a sequence of actions that will
line up this shot.
If this fails, the program looks for a square to explore that is not provably unsafe -
that is, a square for which ASK(KB,¬Okt x,y) returns false.
If there is no such square, then the mission is impossible and the agent retreats to [1,
1] and climbs out of the cave.
55
56
UNIT – IV …

57
