Mod-3.2-Knowledge Representation-FOL
First-Order Predicate Logic Representation
What is knowledge-engineering?
• The process of constructing a knowledge base
in first-order logic is called knowledge
engineering.
• In knowledge engineering, someone who
investigates a particular domain, learns the
important concepts of that domain, and
generates a formal representation of its
objects is known as a knowledge engineer.
What is knowledge acquisition?
• Often, the knowledge engineer is trained in
representation but is not an expert in the
domain at hand, be it circuit design, space
station mission scheduling, or whatever.
• The knowledge engineer will usually interview
the real experts to become educated about
the domain and to elicit the required
knowledge, in a process called knowledge
acquisition.
The knowledge-engineering process:
• 1. Identify the task
• 2. Assemble the relevant knowledge
• 3. Decide on vocabulary
• 4. Encode general knowledge about the domain
• 5. Encode a description of the problem instance
• 6. Pose queries to the inference procedure and get answers
• 7. Debug the knowledge base
KNOWLEDGE ENGINEERING vs.
PROGRAMMING
• A useful analogy can be made between
knowledge engineering and programming.
Both activities can be seen as consisting of
four steps:
Eliminate Implication:
∀x[¬Philo(x) ∨ ∃y[Book(y) ∧ Write(x, y)]]
Skolemize:
Substitute y by g(x):
∀x[¬Philo(x) ∨ [Book(g(x)) ∧ Write(x, g(x))]]
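The two rewriting steps above can be traced mechanically. Below is a minimal sketch in Python that treats the formula as a plain string; the helper names (`eliminate_implication`, `skolemize`) and the naive `str.replace` substitution are illustrative only, not a real term-rewriting engine.

```python
def eliminate_implication(antecedent, consequent):
    """Rewrite P -> Q as (not P) or Q."""
    return f"¬{antecedent} ∨ {consequent}"

def skolemize(formula, var, skolem_term):
    """Replace an existential variable with a Skolem term.
    Naive string replacement: assumes `var` occurs only as that variable."""
    return formula.replace(var, skolem_term)

# Start: ∀x [Philo(x) → ∃y [Book(y) ∧ Write(x, y)]]
step1 = eliminate_implication("Philo(x)", "[Book(y) ∧ Write(x, y)]")
print(f"∀x[{step1}]")   # ∀x[¬Philo(x) ∨ [Book(y) ∧ Write(x, y)]]

# ∃y lies in the scope of ∀x, so y is replaced by g(x)
step2 = skolemize(step1, "y", "g(x)")
print(f"∀x[{step2}]")   # ∀x[¬Philo(x) ∨ [Book(g(x)) ∧ Write(x, g(x))]]
```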
Example:
• Drop universal quantifiers.
In this step we drop all universal quantifiers, since all the
statements are implicitly universally quantified and the quantifiers are no longer needed.
– ¬food(x) ∨ likes(John, x)
– food(Apple)
– food(vegetables)
– ¬eats(y, z) ∨ killed(y) ∨ food(z)
– eats(Anil, Peanuts)
– alive(Anil)
– ¬eats(Anil, w) ∨ eats(Harry, w)
– killed(g) ∨ alive(g)
– ¬alive(k) ∨ ¬killed(k)
– likes(John, Peanuts)
Example:
– Distribute conjunction ∧ over disjunction ∨. This
step makes no change in this problem.
• Step-3: Negate the statement to be proved. Here
we apply negation to the conclusion statement,
which is written as ¬likes(John, Peanuts).
• Step-4: Draw the resolution graph. In this step,
we solve the problem with a resolution tree
using substitution. For the above problem,
it is given as follows:
Example:
• Explanation of the resolution graph:
• In the first step, ¬likes(John, Peanuts) and likes(John, x) are
resolved (canceled) with the substitution {Peanuts/x}, leaving ¬food(Peanuts).
• In the second step, ¬food(Peanuts) and food(z) are resolved
with the substitution {Peanuts/z}, leaving ¬eats(y, Peanuts) ∨ killed(y).
• In the third step, ¬eats(y, Peanuts) and eats(Anil, Peanuts) are
resolved with the substitution {Anil/y}, leaving killed(Anil).
• In the fourth step, killed(Anil) and ¬killed(k) are resolved with
the substitution {Anil/k}, leaving ¬alive(Anil).
• In the last step, ¬alive(Anil) and alive(Anil) are resolved,
yielding the empty clause.
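The five steps above can be replayed in code. Below is a minimal sketch, assuming literals are tuples `(negated?, predicate, args)` with lowercase names as variables; the tiny unifier handles only variable and constant terms, not nested functions, so this is an illustration of the proof trace rather than a general resolution prover.

```python
def is_var(t):
    return t[0].islower()

def unify(args1, args2):
    """Tiny unifier: terms are variables (lowercase) or constants."""
    subst = {}
    for a, b in zip(args1, args2):
        a, b = subst.get(a, a), subst.get(b, b)
        if a == b:
            continue
        if is_var(a):
            subst[a] = b
        elif is_var(b):
            subst[b] = a
        else:
            return None
    return subst

def substitute(clause, subst):
    return [(neg, pred, tuple(subst.get(t, t) for t in args))
            for neg, pred, args in clause]

def resolve(c1, c2):
    """Return the first resolvent of clauses c1 and c2, if any."""
    for i, (n1, p1, a1) in enumerate(c1):
        for j, (n2, p2, a2) in enumerate(c2):
            if p1 == p2 and n1 != n2 and len(a1) == len(a2):
                s = unify(a1, a2)
                if s is not None:
                    rest = [l for k, l in enumerate(c1) if k != i] \
                         + [l for k, l in enumerate(c2) if k != j]
                    return substitute(rest, s)
    return None

neg_goal = [(True, "likes", ("John", "Peanuts"))]               # ¬likes(John, Peanuts)
c1 = [(True, "food", ("x",)), (False, "likes", ("John", "x"))]  # ¬food(x) ∨ likes(John, x)
c2 = [(True, "eats", ("y", "z")), (False, "killed", ("y",)),
      (False, "food", ("z",))]                                  # ¬eats(y,z) ∨ killed(y) ∨ food(z)
c3 = [(False, "eats", ("Anil", "Peanuts"))]                     # eats(Anil, Peanuts)
c4 = [(True, "alive", ("k",)), (True, "killed", ("k",))]        # ¬alive(k) ∨ ¬killed(k)
c5 = [(False, "alive", ("Anil",))]                              # alive(Anil)

step1 = resolve(neg_goal, c1)   # [¬food(Peanuts)]
step2 = resolve(step1, c2)      # [¬eats(y, Peanuts), killed(y)]
step3 = resolve(step2, c3)      # [killed(Anil)]
step4 = resolve(step3, c4)      # [¬alive(Anil)]
step5 = resolve(step4, c5)      # []  — the empty clause
print("Proved!" if step5 == [] else "Not proved")
```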
Practice Questions on Resolution
Practice Question-1
• All hounds howl at night.
• Anyone who has any cats will not have any mice.
• Light sleepers do not have anything which howls at night.
• John has either a cat or a hound.
Prove by resolution that
• (Conclusion) If John is a light sleeper, then John does not have any mice.
Practice Question-2
• Everyone who feels warm either is drunk, or every costume they have is warm.
• Every costume that is warm is furry.
• Every AI student is a CS student.
• Every AI student has some robot costume.
• No robot costume is furry.
Prove by resolution that
• (Conclusion) If every CS student feels warm, then every AI student is drunk.
• Step-2
• At the second step, we add the facts that can be inferred from the
available facts and whose premises are satisfied.
• Rule-(1) does not have its premises satisfied, so it is not added in the first iteration.
• Rules (2) and (3) are already added.
• Hence it is proved that Robert is a criminal using the forward chaining approach.
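The forward chaining run above can be sketched as a small fixed-point loop: keep firing rules whose premises match known facts until nothing new is added. In the sketch below, facts are tuples whose first element is the predicate and lowercase names are variables; the naive premise matcher is illustrative, not an optimized production-rule engine.

```python
def is_var(t):
    return t[0].islower()

def match(pattern, fact, subst):
    """Match one premise pattern against a ground fact, extending subst."""
    if len(pattern) != len(fact):
        return None
    subst = dict(subst)
    for p, f in zip(pattern, fact):
        p = subst.get(p, p)          # apply an existing binding
        if is_var(p):
            subst[p] = f
        elif p != f:
            return None
    return subst

def forward_chain(facts, rules):
    """Fire every rule whose premises are satisfied, to a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            substs = [{}]
            for prem in premises:    # join the premises over all facts
                substs = [s2 for s in substs for fact in facts
                          for s2 in [match(prem, fact, s)] if s2 is not None]
            for s in substs:
                new = tuple(s.get(t, t) for t in conclusion)
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

rules = [
    ((("American", "p"), ("Weapon", "q"), ("Sells", "p", "q", "r"),
      ("Hostile", "r")), ("Criminal", "p")),                      # Rule-(1)
    ((("Missile", "p"), ("Owns", "A", "p")),
     ("Sells", "Robert", "p", "A")),                              # Rule-(4)
    ((("Missile", "p"),), ("Weapon", "p")),                       # Rule-(5)
    ((("Enemy", "p", "America"),), ("Hostile", "p")),             # Rule-(6)
]
facts = [("American", "Robert"), ("Owns", "A", "T1"),
         ("Missile", "T1"), ("Enemy", "A", "America")]

result = forward_chain(facts, rules)
print(("Criminal", "Robert") in result)   # True
```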
B. Backward Chaining:
• Backward chaining is also known as backward deduction or
backward reasoning when using an inference engine.
• A backward chaining algorithm is a form of reasoning that starts
with the goal and works backward, chaining through rules to find
known facts that support the goal.
• Properties of backward chaining:
• It is known as a top-down approach.
• Backward chaining is based on the modus ponens inference rule.
• In backward chaining, the goal is broken into sub-goals to prove
the facts true.
B. Backward Chaining:
• It is called a goal-driven approach, as a list of goals decides
which rules are selected and used.
• The backward-chaining algorithm is used in game theory,
automated theorem-proving tools, inference engines, proof
assistants, and various AI applications.
• The backward-chaining method mostly uses a depth-first
search strategy for proofs.
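A depth-first backward chainer matching this description can be sketched as follows: a goal is proved either by a matching fact or by recursively proving the premises of a rule whose conclusion unifies with it. Lowercase names are variables and rule variables are renamed apart on each use; this is an illustrative sketch using the criminal-Robert rule base, not a production inference engine.

```python
def is_var(t):
    return t[0].islower()

def walk(t, subst):
    """Follow variable bindings to their final value."""
    while t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    """Unify two atoms (tuples of predicate name + terms)."""
    if len(a) != len(b):
        return None
    subst = dict(subst)
    for x, y in zip(a, b):
        x, y = walk(x, subst), walk(y, subst)
        if x == y:
            continue
        if is_var(x):
            subst[x] = y
        elif is_var(y):
            subst[y] = x
        else:
            return None
    return subst

FACTS = [("American", "Robert"), ("Missile", "T1"),
         ("Owns", "A", "T1"), ("Enemy", "A", "America")]
RULES = [
    ([("American", "p"), ("Weapon", "q"), ("Sells", "p", "q", "r"),
      ("Hostile", "r")], ("Criminal", "p")),                      # Rule-(1)
    ([("Missile", "p"), ("Owns", "A", "p")],
     ("Sells", "Robert", "p", "A")),                              # Rule-(4)
    ([("Missile", "p")], ("Weapon", "p")),                        # Rule-(5)
    ([("Enemy", "p", "America")], ("Hostile", "p")),              # Rule-(6)
]
COUNTER = [0]

def rename(premises, conclusion):
    """Standardize rule variables apart so repeated uses don't clash."""
    COUNTER[0] += 1
    r = lambda t: t + str(COUNTER[0]) if is_var(t) else t
    return ([tuple(r(t) for t in p) for p in premises],
            tuple(r(t) for t in conclusion))

def prove(goals, subst):
    """Depth-first backward chaining over a list of goals."""
    if not goals:
        return subst
    goal = tuple(walk(t, subst) for t in goals[0])
    for fact in FACTS:                      # try known facts first
        s = unify(goal, fact, subst)
        if s is not None:
            r = prove(goals[1:], s)
            if r is not None:
                return r
    for rule in RULES:                      # then chain backward through rules
        premises, conclusion = rename(*rule)
        s = unify(goal, conclusion, subst)
        if s is not None:
            r = prove(premises + goals[1:], s)
            if r is not None:
                return r
    return None

result = prove([("Criminal", "p")], {})
print(walk("p", result))                    # Robert
```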
B. Backward Chaining:
Example:
• In backward chaining, we will use the same example as above
and rewrite all the rules.
• American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) →
Criminal(p) ........(1)
• Owns(A, T1) ........(2)
• Missile(T1) ........(3)
• Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ........(4)
• Missile(p) → Weapon(p) ........(5)
• Enemy(p, America) → Hostile(p) ........(6)
• Enemy(A, America) ........(7)
• American(Robert) ........(8)
B. Backward Chaining:
Backward-Chaining proof:
• In backward chaining, we start with our goal predicate,
which is Criminal(Robert), and then infer further rules.
Step-1:
• At the first step, we take the goal fact. From the goal
fact, we infer other facts, and at last we prove those
facts true. Our goal fact is "Robert is criminal," so
Criminal(Robert) is its predicate.
B. Backward Chaining:
Step-2:
• At the second step, we infer other facts from the goal fact
that satisfy the rules. As we can see in Rule-(1), the goal
predicate Criminal(Robert) is present with the substitution
{Robert/p}. So we add all the conjunctive facts below the
first level and replace p with Robert.
• Here we can see American(Robert) is a fact, so it is proved
here.
B. Backward Chaining:
Step-3:
• At step-3, we extract the further fact Missile(q), which is
inferred from Weapon(q), as it satisfies Rule-(5). Weapon(q) is
also true with the substitution of the constant T1 for q.
B. Backward Chaining:
Step-4:
• At step-4, we can infer the facts Missile(T1) and Owns(A, T1) from
Sells(Robert, T1, r), which satisfies Rule-(4) with the
substitution of A in place of r.
• So these two statements are proved here.
B. Backward Chaining:
Step-5:
• At step-5, we can infer the fact Enemy(A, America) from
Hostile(A), which satisfies Rule-(6).
• And hence all the statements are proved true using backward
chaining.