
ARTIFICIAL INTELLIGENCE
UNIT-5

Planning
Planning in AI is the process of finding a sequence of actions that will
achieve a specific goal given an initial state of the world. Unlike search,
which explores states directly, planning typically involves reasoning about
actions and their effects in a more abstract or symbolic way before
committing to execution.

The Planning Problem


A planning problem is formally defined by:
1. Initial State: A description of the world at the beginning, typically a
set of facts.
2. Goal State (or Goal Condition): A description of the desired state of
the world that the agent wants to achieve. This is often a set of logical
propositions that must be true.
3. Actions (or Operators): A set of possible actions the agent can
perform. Each action is typically defined by:
○ Preconditions: A set of conditions that must be true in the
current state for the action to be executable.
○ Effects (or Postconditions): A set of changes to the state that
result from performing the action. Effects can be additive (add
new facts) or deletive (remove existing facts).
The Task: Find a valid sequence of actions (a plan) that, when executed
from the initial state, transforms the world into a state where the goal
conditions are satisfied.
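These components map naturally onto code. The sketch below is illustrative only (the `Action` class and the door example are made up for this note, not a standard API): a state is a set of facts, and an action bundles preconditions with add/delete effects.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """A STRIPS-style operator: preconditions plus add/delete effects."""
    name: str
    preconditions: frozenset
    add_effects: frozenset
    del_effects: frozenset

    def applicable(self, state):
        # Executable only when every precondition holds in the state.
        return self.preconditions <= state

    def apply(self, state):
        # Deletive effects remove facts; additive effects add new ones.
        return (state - self.del_effects) | self.add_effects

# A one-action toy problem: opening a door.
open_door = Action("OPEN(Door)",
                   preconditions=frozenset({"closed(Door)"}),
                   add_effects=frozenset({"open(Door)"}),
                   del_effects=frozenset({"closed(Door)"}))

initial = frozenset({"closed(Door)"})
goal = frozenset({"open(Door)"})
assert open_door.applicable(initial)
print(goal <= open_door.apply(initial))  # → True
```

A plan is then simply a sequence of such actions in which each action's preconditions hold when its turn comes.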

Simple Planning Agent


A simple planning agent might follow a basic loop:
1. Formulate Goal: Determine what needs to be achieved.


2. Formulate Problem: Define the initial state, goal state, and available
actions.
3. Search for Solution: Use a planning algorithm to find a sequence of
actions (a plan).
4. Execute Plan: Perform the actions in the real world.
5. Monitor and Revise: Observe the environment. If the plan fails or the
environment changes unexpectedly, replan.
This highlights the iterative nature of planning in dynamic or uncertain
environments.
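Steps 1-5 above can be sketched as a loop. Everything here is hypothetical scaffolding: `world.state()`, `world.execute(action)`, and `make_plan(state, goal)` are assumed interfaces invented for this sketch.

```python
def simple_planning_agent(world, goal, make_plan, max_replans=3):
    """Plan, execute step by step, monitor, and replan on failure."""
    for _ in range(max_replans):
        plan = make_plan(world.state(), goal)   # step 3: search for a plan
        for action in plan:
            if not world.execute(action):       # step 5: execution failed,
                break                           # abandon this plan, replan
        else:
            if goal <= world.state():           # step 5: monitor the goal
                return True
    return False

class ToyWorld:
    """A trivial world: a set of facts; an action is an (add, delete) pair."""
    def __init__(self, facts):
        self._facts = set(facts)
    def state(self):
        return frozenset(self._facts)
    def execute(self, action):
        add, delete = action
        self._facts -= delete
        self._facts |= add
        return True                             # actions always succeed here

def make_plan(state, goal):
    # A stub planner: one action that adds every missing goal fact.
    missing = goal - state
    return [(missing, frozenset())] if missing else []

world = ToyWorld({"at(Home)"})
print(simple_planning_agent(world, frozenset({"at(Home)", "door_open"}), make_plan))  # → True
```

In a real agent, `make_plan` would be a genuine planning algorithm and `execute` could fail, which is exactly when the replanning branch matters.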
Blocks World Problem

The Blocks World is a classic AI planning domain, simple enough to illustrate fundamental planning concepts but rich enough to pose interesting challenges.
Scenario: Imagine a table with a set of blocks, each identified by a letter
(e.g., A, B, C). Blocks can be on the table or on top of other blocks. Only one
block can be on top of another. Only clear blocks (no block on top of them)
can be moved.
Typical Actions:

● STACK(x, y): Put the held block x on top of block y.
○ Preconditions: HOLDING(x), CLEAR(x), CLEAR(y), x != y.
○ Effects: ON(x, y), ARM_EMPTY, NOT HOLDING(x), NOT CLEAR(y).
● UNSTACK(x, y): Remove block x from block y.
○ Preconditions: ON(x, y), CLEAR(x), ARM_EMPTY.
○ Effects: HOLDING(x), CLEAR(y), NOT ON(x, y), NOT
ARM_EMPTY.
● PICKUP(x): Pick up block x from the table.
○ Preconditions: ON(x, Table), CLEAR(x), ARM_EMPTY.
○ Effects: HOLDING(x), NOT ON(x, Table), NOT ARM_EMPTY.
● PUTDOWN(x): Put block x onto the table.
○ Preconditions: HOLDING(x).
○ Effects: ON(x, Table), ARM_EMPTY, NOT HOLDING(x).
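The four operators can be encoded directly as (name, preconditions, add-effects, delete-effects) tuples following the effect lists above. This is a minimal sketch, keeping this document's convention that CLEAR(x) persists while x is held:

```python
def pickup(x):
    return (f"PICKUP({x})",
            {f"ON({x},Table)", f"CLEAR({x})", "ARM_EMPTY"},  # preconditions
            {f"HOLDING({x})"},                               # add effects
            {f"ON({x},Table)", "ARM_EMPTY"})                 # delete effects

def putdown(x):
    return (f"PUTDOWN({x})",
            {f"HOLDING({x})"},
            {f"ON({x},Table)", "ARM_EMPTY"},
            {f"HOLDING({x})"})

def stack(x, y):
    return (f"STACK({x},{y})",
            {f"HOLDING({x})", f"CLEAR({y})"},
            {f"ON({x},{y})", "ARM_EMPTY"},
            {f"HOLDING({x})", f"CLEAR({y})"})

def unstack(x, y):
    return (f"UNSTACK({x},{y})",
            {f"ON({x},{y})", f"CLEAR({x})", "ARM_EMPTY"},
            {f"HOLDING({x})", f"CLEAR({y})"},
            {f"ON({x},{y})", "ARM_EMPTY"})

def apply_op(op, state):
    name, pre, add, delete = op
    assert pre <= state, f"{name} is not applicable"
    return (state - delete) | add

state = {"ON(A,Table)", "ON(B,Table)", "CLEAR(A)", "CLEAR(B)", "ARM_EMPTY"}
state = apply_op(pickup("A"), state)
state = apply_op(stack("A", "B"), state)
print("ON(A,B)" in state)  # → True
```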

Example Problem:

Initial State (all three blocks on the table):

   A   B   C
  -----------
     Table

● (Means: ON(A, Table), ON(B, Table), ON(C, Table), CLEAR(A), CLEAR(B), CLEAR(C), ARM_EMPTY)

Goal State (A on top of B, B on C, C on the table):

     A
     B
     C
  -----------
     Table

● (Means: ON(A, B), ON(B, C), ON(C, Table))

A possible plan (found by naive trial and error):

1. PICKUP(A)
2. STACK(A, B) (oops: B must go on C first, and with A on top, B can no longer be moved) -> need to rethink the steps.
3. UNSTACK(A, B)
4. PUTDOWN(A)
5. PICKUP(B)
6. STACK(B, C)
7. PICKUP(A)
8. STACK(A, B)

This shows that simple trial-and-error often leads to non-optimal or invalid plans. Planning algorithms (like STRIPS, Graphplan, Partial-Order Planning) are designed to systematically find valid and often optimal plans.
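Even plain breadth-first search over states finds a shortest valid plan for a domain this small. The following self-contained sketch (not STRIPS itself, just uninformed search over the operator encoding used in this unit) solves the example problem:

```python
from collections import deque
from itertools import permutations

BLOCKS = ["A", "B", "C"]

def ground_operators():
    """All ground PICKUP/PUTDOWN/STACK/UNSTACK actions as
    (name, preconditions, add-effects, delete-effects) tuples."""
    ops = []
    for x in BLOCKS:
        ops.append((f"PICKUP({x})",
                    {f"ON({x},Table)", f"CLEAR({x})", "ARM_EMPTY"},
                    {f"HOLDING({x})"},
                    {f"ON({x},Table)", "ARM_EMPTY"}))
        ops.append((f"PUTDOWN({x})",
                    {f"HOLDING({x})"},
                    {f"ON({x},Table)", "ARM_EMPTY"},
                    {f"HOLDING({x})"}))
    for x, y in permutations(BLOCKS, 2):
        ops.append((f"STACK({x},{y})",
                    {f"HOLDING({x})", f"CLEAR({y})"},
                    {f"ON({x},{y})", "ARM_EMPTY"},
                    {f"HOLDING({x})", f"CLEAR({y})"}))
        ops.append((f"UNSTACK({x},{y})",
                    {f"ON({x},{y})", f"CLEAR({x})", "ARM_EMPTY"},
                    {f"HOLDING({x})", f"CLEAR({y})"},
                    {f"ON({x},{y})", "ARM_EMPTY"}))
    return ops

def bfs_plan(initial, goal):
    """Breadth-first search over states: returns a shortest plan."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:
            return plan
        for name, pre, add, delete in ground_operators():
            if pre <= state:
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None

initial = {"ON(A,Table)", "ON(B,Table)", "ON(C,Table)",
           "CLEAR(A)", "CLEAR(B)", "CLEAR(C)", "ARM_EMPTY"}
goal = frozenset({"ON(A,B)", "ON(B,C)", "ON(C,Table)"})
print(bfs_plan(initial, goal))
# → ['PICKUP(B)', 'STACK(B,C)', 'PICKUP(A)', 'STACK(A,B)']
```

Blind search like this explodes combinatorially as the number of blocks grows, which is why the dedicated planning algorithms named above exist.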

Means-Ends Analysis
Means-Ends Analysis (MEA) is a problem-solving technique used in AI and
cognitive science. It's a goal-driven approach that works by identifying the
differences between the current state and the goal state, and then selecting an
operator (action) that reduces these differences.
How it Works:

1. Compare: Compare the current state with the goal state to identify any differences.
2. Select Operator: Choose an operator (action) that is relevant to
reducing one of the most significant differences.
3. Establish Preconditions as Subgoals: If the selected operator's
preconditions are not met in the current state, establish those
preconditions as new subgoals.
4. Recursively Solve Subgoals: Recursively apply Means-Ends Analysis
to achieve these subgoals.
5. Apply Operator: Once the preconditions are met, apply the operator,
which transforms the current state.
6. Repeat: Continue until the original goal is achieved.
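The six steps can be condensed into a short recursive function. This is a toy sketch (no conflict handling, a depth bound instead of full backtracking control), using the same (name, preconditions, add, delete) operator encoding as the Blocks World section:

```python
def mea(state, goal, operators, depth=6):
    """Means-Ends Analysis sketch: returns (final_state, plan) or None."""
    if goal <= state:
        return state, []
    if depth == 0:
        return None
    diff = next(iter(goal - state))                    # step 1: a difference
    for name, pre, add, delete in operators:
        if diff not in add:                            # step 2: relevant op?
            continue
        sub = mea(state, pre, operators, depth - 1)    # steps 3-4: subgoals
        if sub is None:
            continue
        mid_state, subplan = sub
        new_state = (mid_state - delete) | add         # step 5: apply
        rest = mea(new_state, goal, operators, depth - 1)  # step 6: repeat
        if rest is None:
            continue
        final_state, restplan = rest
        return final_state, subplan + [name] + restplan
    return None

# Achieve ON(A,B) from the table, with just PICKUP(A) and STACK(A,B).
ops = [
    ("PICKUP(A)", frozenset({"ON(A,Table)", "CLEAR(A)", "ARM_EMPTY"}),
     frozenset({"HOLDING(A)"}), frozenset({"ON(A,Table)", "ARM_EMPTY"})),
    ("STACK(A,B)", frozenset({"HOLDING(A)", "CLEAR(B)"}),
     frozenset({"ON(A,B)", "ARM_EMPTY"}), frozenset({"HOLDING(A)", "CLEAR(B)"})),
]
initial = frozenset({"ON(A,Table)", "ON(B,Table)", "CLEAR(A)",
                     "CLEAR(B)", "ARM_EMPTY"})
final_state, plan = mea(initial, frozenset({"ON(A,B)"}), ops)
print(plan)  # → ['PICKUP(A)', 'STACK(A,B)']
```

The recursion on `pre` is exactly "establish preconditions as subgoals": STACK(A,B) needs HOLDING(A), which PICKUP(A) supplies.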
Example (Simplified Blocks World - Goal: ON(A,B))

● Initial State: ON(A, Table), ON(B, Table), CLEAR(A), CLEAR(B), ARM_EMPTY
● Goal State: ON(A, B)
1. Difference: ON(A,B) is not true.
2. Relevant Operator: STACK(A, B) (its effect is ON(A,B)).
3. Preconditions of STACK(A, B):
○ CLEAR(A) (True in initial state)
○ CLEAR(B) (True in initial state)
○ HOLDING(A) (False in initial state)
○ A != B (True)
4. Subgoal: Achieve HOLDING(A).
5. Relevant Operator for HOLDING(A): PICKUP(A) (its effect is
HOLDING(A)).
6. Preconditions of PICKUP(A):
○ ON(A, Table) (True in initial state)
○ CLEAR(A) (True in initial state)
○ ARM_EMPTY (True in initial state)
7. All preconditions met for PICKUP(A).
8. Apply PICKUP(A): State becomes HOLDING(A), NOT ON(A,
Table), NOT ARM_EMPTY.
9. Subgoal HOLDING(A) achieved. Now return to original goal.
10. Preconditions for STACK(A, B):
○ CLEAR(A) (True)
○ CLEAR(B) (True)
○ HOLDING(A) (True)
○ A != B (True)
11. All preconditions met for STACK(A, B).
12. Apply STACK(A, B): State becomes ON(A, B), NOT
CLEAR(B), ARM_EMPTY, NOT HOLDING(A).
13. Original Goal ON(A, B) achieved. Plan: [PICKUP(A),
STACK(A, B)].

MEA is a powerful heuristic search strategy, particularly useful for reducing complex problems into smaller, manageable subproblems.

Learning - Machine Learning


Learning in AI refers to the ability of an agent to improve its performance on a task over time, based on experience. Machine Learning (ML) is the field of AI that provides the methods and techniques for agents to learn.
Machine Learning

Machine Learning (ML) is a subset of AI that focuses on building systems that can learn from data. Instead of being explicitly programmed with rules for every possible scenario, ML systems discover patterns and relationships in data, allowing them to make predictions or decisions.
Core Idea: An ML algorithm takes data (examples) and builds a model. This
model can then be used to make predictions or decisions on new, unseen data.
Categories of Machine Learning:
1. Supervised Learning:
○ Data: Labeled data (input-output pairs). The algorithm learns a
mapping from inputs to desired outputs.
○ Task: Predict an output value for a given input.
○ Types:
■ Classification: Predicts a discrete category (e.g., spam/not
spam, disease/no disease).
■ Regression: Predicts a continuous value (e.g., house prices,
stock prices).
○ Examples: Linear Regression, Logistic Regression, Support
Vector Machines (SVMs), Decision Trees, Neural Networks.
2. Unsupervised Learning:
○ Data: Unlabeled data. The algorithm finds hidden patterns or
structures within the data.
○ Task: Discover clusters, associations, or reduced dimensions.
○ Types:
■ Clustering: Groups similar data points together (e.g.,
customer segmentation).
■ Association Rule Mining: Finds relationships between
items (e.g., "customers who buy X also tend to buy Y").
■ Dimensionality Reduction: Reduces the number of
features in the data (e.g., PCA).
○ Examples: K-Means Clustering, Hierarchical Clustering, Principal Component Analysis (PCA).
3. Reinforcement Learning (RL):
○ Data: No explicit labeled data. An agent learns by interacting
with an environment, receiving rewards or penalties for its
actions.
○ Task: Learn a policy (mapping from states to actions) that
maximizes cumulative reward over time.
○ Key Concepts: Agent, Environment, State, Action, Reward,
Policy, Value Function.
○ Examples: Q-Learning, SARSA, Deep Q-Networks (DQN),
AlphaGo.
○ Use Cases: Robotics, game playing, autonomous navigation.
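As a concrete taste of supervised learning, here is one-variable linear regression fitted by ordinary least squares, written from scratch (illustrative only; real work would use a library):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y ≈ w*x + b.
    The learned 'model' is just the two numbers w and b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    w = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - w * mean_x
    return w, b

# Labeled training data (inputs paired with desired outputs): here y = 2x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
w, b = fit_line(xs, ys)
print(w, b)            # → 2.0 0.0
print(w * 5.0 + b)     # prediction on unseen input: → 10.0
```

The last line is generalization in miniature: the model answers for an input it never saw during training.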

Learning Concepts, Methods, and Models


● Learning Concepts:
○ Experience: The observations and interactions an agent has with its environment.
○ Target Function: The unknown function that the learning
algorithm aims to approximate (e.g., the function that maps house
features to prices).
○ Hypothesis: The learned approximation of the target function.
○ Training Data: The specific examples used to train the model.
○ Generalization: The ability of a learned model to perform well
on unseen data.
○ Bias-Variance Tradeoff: A fundamental concept in ML,
balancing the error from overly simplistic models (high bias) and
overly complex models that fit noise (high variance).
● Learning Methods (Algorithms):
○ Decision Trees: Tree-like models where each internal node
represents a test on an attribute, and each leaf node represents a
class label or value.
○ Neural Networks: Inspired by the human brain, composed of
interconnected "neurons" that learn to recognize patterns. Deep
Learning is a subfield using deep neural networks.
○ Support Vector Machines (SVMs): Find the optimal hyperplane that separates data points into different classes with the largest margin.
○ K-Nearest Neighbors (KNN): A non-parametric, instance-based
learning algorithm that classifies new data points based on the
majority class of their 'k' nearest neighbors.
○ Ensemble Methods: Combine multiple learning algorithms to
obtain better predictive performance than could be obtained from
any of the constituent learning algorithms alone (e.g., Random
Forests, Boosting).
● Learning Models:
○ The model is the output of the learning algorithm. It's the
representation of the learned patterns and relationships.
○ Examples: a set of decision tree rules, the weights in a neural
network, the support vectors in an SVM.
○ Models are then used for prediction, classification, clustering,
etc., on new data.
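K-Nearest Neighbors is simple enough to write in a few lines, which makes the "instance-based" idea concrete: there is no training phase beyond storing the examples, and the model is the data itself. A minimal sketch (toy data invented for illustration):

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Majority vote among the k training points closest to `query`.
    `train` is a list of ((features...), label) pairs."""
    def dist2(a, b):
        # Squared Euclidean distance (the square root is unnecessary
        # because it does not change the ordering of neighbors).
        return sum((u - v) ** 2 for u, v in zip(a, b))
    nearest = sorted(train, key=lambda pt: dist2(pt[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Two well-separated 2-D clusters with labels.
train = [((1.0, 1.0), "cold"), ((1.2, 0.8), "cold"), ((0.9, 1.1), "cold"),
         ((5.0, 5.0), "hot"),  ((5.1, 4.9), "hot"),  ((4.8, 5.2), "hot")]
print(knn_classify(train, (4.5, 5.0)))  # → hot
```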

Introduction to Expert Systems


An Expert System (ES) is a computer program that attempts to mimic the
knowledge and reasoning ability of a human expert in a specific, narrow
domain. They were a prominent area of AI research in the 1970s and 1980s.
Core Idea: Extract the specialized knowledge of human experts and encode
it into a computer system in a way that allows the system to perform
reasoning and provide advice/solutions.
Key Characteristics:
● Domain Specificity: Designed for a very particular problem domain
(e.g., medical diagnosis, financial planning, configuration of computer
systems).
● Symbolic Reasoning: Primarily use symbolic AI techniques (rules,
frames, logic) to represent knowledge and perform inference.
● Explanation Capability: Can typically explain how they arrived at a
particular conclusion, making their reasoning transparent to the user.
● Heuristic Knowledge: Often contain heuristic (rule-of-thumb) knowledge in addition to factual knowledge.
● Separation of Knowledge and Control: The knowledge (rules, facts)
is distinct from the inference mechanism (how to use the rules), making
them easier to maintain and update.

Architecture of Expert Systems


The typical architecture of an expert system includes several key
components:
1. Knowledge Base (KB):
○ The heart of the expert system.
○ Contains the domain-specific knowledge acquired from human
experts.
○ Represented using:
■ Facts: Declarative statements about the domain (e.g.,
"Patient's temperature is 102F").
■ Rules: Production rules (IF-THEN statements) that capture
the expert's reasoning logic (e.g., "IF temperature > 100F
AND cough THEN suspect flu").
■ Frames/Semantic Nets: For representing structured
concepts and relationships.
2. Inference Engine:
○ The "brain" of the expert system.
○ It's the reasoning mechanism that manipulates the knowledge in
the KB to draw conclusions or provide advice.
○ Employs strategies like:
■ Forward Chaining: Data-driven, works from facts to
conclusions (e.g., diagnosing a disease from symptoms).
■ Backward Chaining: Goal-driven, works from a
hypothesis back to supporting facts (e.g., confirming a
diagnosis).
3. Working Memory (or Working Storage / Fact Base):
○ A temporary data store that holds the current facts, observations,
and intermediate conclusions relevant to the current consultation.
○ It's where the inference engine applies rules and updates
knowledge during a session.
4. User Interface:
○ Enables communication between the user and the expert system.
○ Allows the user to input data (symptoms, questions), receive
advice, and ask for explanations.
5. Explanation Facility (or Justifier):
○ A crucial component that allows the expert system to explain its
reasoning process.
○ Can answer questions like "Why did you ask that question?" (by
showing the rule it's trying to satisfy) or "How did you reach that
conclusion?" (by tracing the sequence of rules that fired).
○ Enhances trust and understanding.
6. Knowledge Acquisition Module:
○ A component (often a separate tool or process) used to help
human experts or knowledge engineers input and update
knowledge in the KB.
○ This is often the bottleneck in expert system development
("knowledge acquisition bottleneck").
How Expert Systems Work (Simplified):
1. User enters facts (e.g., patient symptoms) through the UI. These go into
Working Memory.
2. Inference Engine starts its reasoning process (e.g., backward chaining
for a diagnosis).
3. It finds rules in the KB that match the goal.
4. If preconditions are missing, it asks the user for more information via
the UI.
5. It applies rules, drawing new conclusions, which are added to Working
Memory.
6. This continues until a conclusion (e.g., diagnosis) is reached, which is
presented to the user.
7. The user can query the Explanation Facility to understand the
reasoning.
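The forward-chaining half of this loop fits in a few lines. Below is a toy inference engine over IF-THEN rules; the rule content is invented for illustration, in the spirit of the flu rule above:

```python
def forward_chain(facts, rules):
    """Data-driven inference: repeatedly fire any rule whose IF-part is
    satisfied by working memory, adding its THEN-part as a new fact,
    until no rule produces anything new."""
    memory = set(facts)                      # working memory
    fired = True
    while fired:
        fired = False
        for conditions, conclusion in rules:
            if set(conditions) <= memory and conclusion not in memory:
                memory.add(conclusion)       # new intermediate conclusion
                fired = True
    return memory

# A toy knowledge base: rules are (IF-conditions, THEN-conclusion) pairs.
rules = [
    ({"temperature>100F"}, "fever"),
    ({"fever", "cough"}, "suspect_flu"),
]
memory = forward_chain({"temperature>100F", "cough"}, rules)
print("suspect_flu" in memory)  # → True
```

Note how the chain passes through the intermediate fact "fever": the separation of the rule list (knowledge base) from the `forward_chain` function (inference engine) mirrors the architecture described above.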
While modern AI has largely moved beyond classic expert systems towards
machine learning, the concepts of knowledge representation, inference, and
the structured architecture of ES remain foundational and influential in many
AI applications.