Course Title : Artificial Intelligence

Course Code: BTAIML 502-20


UNIT 4
What Is Learning?
In Artificial Intelligence (AI), learning refers to the process by which machines or systems
improve their performance or behavior over time based on experience or data. Learning
enables AI systems to perform tasks without being explicitly programmed for every possible
scenario.
There are different types of learning in AI:
1. Supervised Learning:
o The AI system is trained on labeled data, where the input-output pairs are
known. The goal is to learn a mapping from inputs to outputs so that the
system can predict outputs for new inputs.
o Example: Image classification, where the AI learns to identify objects like cats
and dogs from labeled images.
2. Unsupervised Learning:
o The AI system is trained on unlabeled data, meaning it has to find patterns or
structures in the data on its own.
o Example: Clustering similar data points, such as customer segmentation in
marketing.
3. Reinforcement Learning:
o The AI system learns by interacting with an environment and receiving
feedback in the form of rewards or penalties. It aims to maximize cumulative
rewards through trial and error.
o Example: Teaching a robot to navigate a maze by rewarding it when it reaches
the exit.
4. Semi-supervised Learning:
o Combines both labeled and unlabeled data. The AI is trained on a small
amount of labeled data and a large amount of unlabeled data.
o Example: Using a few labeled images and many unlabeled images to train a
facial recognition system.
5. Deep Learning:
o A subset of machine learning that uses neural networks with many layers
(deep networks) to model complex patterns in data. It's often used in tasks like
speech recognition, image recognition, and natural language processing.
o Example: Detecting objects in real-time video streams.
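The supervised case above can be sketched in a few lines. The following is a minimal, illustrative 1-nearest-neighbour classifier in pure Python; the animal measurements and labels are invented for the example, not from any real dataset.

```python
# Minimal sketch of supervised learning: a 1-nearest-neighbour classifier
# learns a mapping from inputs to labels directly from labelled examples.
# The (weight_kg, ear_length_cm) data below is made up for illustration.

def predict(train, point):
    """Return the label of the training example closest to `point`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(train, key=lambda ex: dist(ex[0], point))
    return nearest[1]

# Labelled examples: (features, label) pairs
train = [((4.0, 7.0), "cat"), ((30.0, 12.0), "dog"),
         ((5.0, 8.0), "cat"), ((25.0, 10.0), "dog")]

print(predict(train, (4.5, 7.5)))    # near the cat examples
print(predict(train, (28.0, 11.0)))  # near the dog examples
```

The key property of supervised learning shows up in the last two lines: the system predicts labels for inputs it never saw during training.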
Rote Learning
Rote learning in AI refers to a basic type of learning where a system memorizes data or
information without understanding the underlying concepts. It is a form of learning where the
system recalls facts or patterns exactly as they were presented, rather than generalizing or
adapting to new situations.
In AI, rote learning can be seen in situations where a model or system:
 Stores and retrieves exact matches from past experiences or examples.
 Does not apply reasoning or abstraction to derive new knowledge from previous data.
 Is limited in its ability to handle new, unseen data because it relies on repeating stored
knowledge.
Key Characteristics:
 No Generalization: The system only repeats what it has been explicitly taught
without being able to apply it in different contexts.
 Limited Adaptability: Rote learning is not useful for tasks that require flexibility or
learning from new situations.
 Memory-Based: It heavily depends on remembering specific input-output pairs rather
than learning underlying patterns.
Example:
A chatbot that uses a fixed set of responses without being able to adapt or understand new
questions is an example of rote learning. If it can only respond to a specific set of questions it
was programmed with, it is not learning in the true sense but merely retrieving memorized
information.
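The chatbot example can be made concrete with a short sketch. The questions and responses below are invented for illustration; the point is that exact-match retrieval fails on any rephrasing.

```python
# Minimal sketch of rote learning: the "chatbot" stores exact
# question-response pairs and can only retrieve what it memorised.

memorised = {
    "what are your hours?": "We are open 9am-5pm.",
    "where are you located?": "123 Main Street.",
}

def respond(question):
    # Exact-match retrieval: no generalisation to rephrased questions.
    return memorised.get(question.lower(), "Sorry, I don't understand.")

print(respond("Where are you located?"))  # retrieves the stored answer
print(respond("What time do you open?")) # unseen phrasing, so retrieval fails
```

This is why rote learning has no generalization: a question that means the same thing but is worded differently produces no match at all.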
Rote learning contrasts with more sophisticated learning techniques like supervised learning
or reinforcement learning, where systems can generalize from examples and improve over
time.
Learning by Taking Advice
Learning by Taking Advice in AI refers to a learning method where the system improves its
performance by incorporating external guidance or instructions from a human or another
knowledgeable source. Rather than learning solely from raw data or its own experiences, the
system takes in specific advice or rules to guide its learning process.
Key Aspects:
1. Guided Learning:
o The AI system receives advice, often in the form of rules, examples, or
suggestions, which helps it make better decisions.
o This advice can come from experts, labeled data, or predefined strategies.
2. Combining Advice with Experience:
o The system doesn't rely only on advice but combines it with its learning from
experiences, improving its overall performance by balancing both sources of
knowledge.
3. Improving Efficiency:
o By taking advice, the system can learn more quickly and efficiently, as it
doesn't have to figure out everything on its own. The advice gives it a head
start in areas where learning might be slow or difficult.
4. Application:
o Useful in environments where some prior knowledge or expert advice is
available and can significantly boost learning, like in game-playing AI (e.g.,
chess or Go) or medical diagnosis systems.
Example:
In reinforcement learning, an AI agent might receive advice on which actions to avoid or
prioritize in certain situations, speeding up the learning process and helping it avoid costly
mistakes during early exploration.
Learning in Problem Solving
Learning in Problem Solving in AI refers to the process by which an AI system improves its
ability to solve problems over time by using previous experiences, data, or feedback. Instead
of solving each problem from scratch, the system learns patterns, strategies, or solutions that
can be applied to future problems, thereby enhancing efficiency and accuracy.
Types of Learning in Problem Solving:
1. Experience-Based Learning:
o The AI system learns from its past problem-solving experiences. By analyzing
past solutions, the system becomes better at identifying efficient methods or
shortcuts for similar problems in the future.
o Example: A pathfinding algorithm in AI (like A* or Dijkstra’s) may improve
by learning which paths are typically optimal in specific types of
environments.
2. Inductive Learning:
o In this approach, the system generalizes from specific instances or examples.
It uses solved examples to learn patterns or rules that can be applied to new,
unseen problems.
o Example: Learning rules for playing a game by observing winning strategies
from past games.
3. Heuristic Learning:
o AI systems often use heuristics, which are rules of thumb or strategies, to
make problem-solving more efficient. Through experience or feedback, the
system learns which heuristics work best for particular types of problems.
o Example: In chess, the AI might learn that controlling the center of the board
often leads to better outcomes and prioritize such moves in future games.
4. Reinforcement Learning in Problem Solving:
o The AI system learns through trial and error, receiving rewards or penalties for
actions taken during problem-solving. Over time, the system improves its
strategies based on feedback.
o Example: A robot learning to navigate a maze by being rewarded for finding
the correct path and penalized for hitting walls.
5. Case-Based Learning:
o The AI stores past problem-solving cases and retrieves relevant cases when
faced with new problems. By comparing the current problem with past cases,
the system can adapt previously successful solutions.
o Example: A legal AI system might solve new legal cases by referring to how
similar cases were resolved in the past.
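Case-based learning (item 5 above) reduces to a retrieve-and-reuse loop that can be sketched briefly. The case base and similarity measure below are invented for illustration.

```python
# Minimal sketch of case-based learning: store solved cases and, for a
# new problem, retrieve the most similar past case and reuse its solution.

case_base = [
    ({"wheels": 4, "engine": True},  "car"),
    ({"wheels": 2, "engine": True},  "motorcycle"),
    ({"wheels": 2, "engine": False}, "bicycle"),
]

def similarity(a, b):
    # Count attribute values the two cases share.
    return sum(1 for k in a if k in b and a[k] == b[k])

def solve(problem):
    features, solution = max(case_base, key=lambda c: similarity(problem, c[0]))
    return solution

print(solve({"wheels": 2, "engine": True}))  # reuses the closest past case
```

A real case-based reasoner would also adapt the retrieved solution and store the new case afterwards; this sketch shows only the retrieval step.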
Advantages of Learning in Problem Solving:
 Efficiency: AI systems become faster at solving problems by learning from past
attempts, reducing the need for trial and error.
 Adaptability: The system can handle a broader range of problems and adapt to new
situations by using learned knowledge.
 Optimization: AI can learn optimal strategies or approaches, resulting in better
performance over time.

Learning from Examples


Learning from Examples in AI refers to a process where the system is trained by being
provided with specific input-output pairs, or examples, from which it can learn to generalize
and make predictions on new, unseen data. This method is commonly used in supervised
learning, where the goal is for the AI system to learn the relationship between inputs and
their corresponding outputs.
Key Features of Learning from Examples:
1. Training Data:
o The AI model is provided with a dataset containing examples that consist of
both inputs and their correct outputs (labels).
o Example: In image recognition, the input is an image, and the output is the
label (e.g., "cat" or "dog").
2. Generalization:
o The AI system learns patterns from the examples, enabling it to generalize to
new data. This means that after training, the system can correctly predict
outputs for inputs it has never seen before.
o Example: After being trained on multiple labeled images of cats and dogs, the
system can classify a new image as either a cat or a dog.
3. Model Creation:
o A learning algorithm processes the examples to create a model, which captures
the underlying patterns or rules in the data.
o Example: In machine learning, algorithms like decision trees, neural networks,
or support vector machines (SVM) are used to build such models.
4. Error Correction:
o During training, if the system makes an incorrect prediction, it adjusts its
internal parameters to minimize future errors. This process continues until the
model achieves a satisfactory level of accuracy.
o Example: In neural networks, backpropagation adjusts the weights of the
network to reduce errors over time.
5. Testing on New Data:
o Once trained, the system is tested on new examples (test data) to evaluate how
well it has learned. The goal is for the system to accurately predict outputs for
these unseen examples.
o Example: After training a sentiment analysis model on labeled reviews, it is
tested on new reviews to see if it can correctly classify them as positive or
negative.
Example in AI:
 Image Classification: An AI system learns to identify animals by being provided with
labeled images. For example, it is shown several images of dogs labeled "dog" and
several images of cats labeled "cat." After seeing enough examples, the AI can
identify whether a new image contains a cat or a dog.
 Spam Detection: An email system learns to detect spam by being trained on
examples of emails that are labeled as "spam" or "not spam." After learning from
these examples, the system can identify new spam emails even if they are not
identical to the ones seen during training.
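The spam-detection example, together with the error-correction idea from point 4, can be sketched as a tiny perceptron. The feature encoding and the four training messages below are assumptions invented for illustration.

```python
# Hedged sketch of learning from examples: a tiny perceptron trained on
# hand-made (features, label) pairs for spam detection. Whenever it
# predicts wrongly, it adjusts its weights (error correction).

# Features: (contains "free", contains "winner", message is long)
examples = [
    ((1, 1, 0), 1),  # spam
    ((1, 0, 0), 1),  # spam
    ((0, 0, 1), 0),  # not spam
    ((0, 1, 1), 0),  # not spam: a long legitimate message mentioning "winner"
]

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

w, b = [0.0, 0.0, 0.0], 0.0
for _ in range(20):                       # training epochs
    for x, y in examples:
        err = y - predict(w, b, x)        # error-correction step
        if err:
            w = [wi + err * xi for wi, xi in zip(w, x)]
            b += err

print(predict(w, b, (1, 1, 1)))  # unseen message with both keywords -> 1 (spam)
```

The unseen test input is not identical to any training example, yet the learned weights classify it, which is the generalization property described above.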
Advantages of Learning from Examples:
 Efficiency: AI systems can quickly learn complex patterns and relationships from
large datasets of examples.
 Scalability: As more examples are provided, the system becomes better at handling
more diverse and complex tasks.
 Versatility: This method is widely applicable to many domains, such as image
recognition, language processing, and recommendation systems.

Winston’s Learning Program


Winston's Learning Program refers to an early AI system developed by Patrick Winston in
the 1970s that aimed to understand and model how humans learn from examples. Winston’s
program was designed to learn structural concepts, such as recognizing objects or patterns, by
analyzing examples and building symbolic representations. It is an important early work in AI
and cognitive science, illustrating how machines can use reasoning and abstraction to learn
from data.
Key Concepts of Winston’s Learning Program:
1. Learning by Example:
o The program learns from a set of positive and negative examples of a concept.
It uses these examples to generalize a definition or representation of the
concept.
o Example: If given examples of different types of animals and their
characteristics (legs, tails, ears), the program could learn the concept of a
"dog" by identifying features common to dogs.
2. Structural Descriptions:
o Winston's program represents objects and their relationships using structural
descriptions. These descriptions break down objects into parts and
relationships between parts.
o Example: A house might be represented as a structure with walls, a roof, and
doors. The relationships between these parts (e.g., "the roof is above the
walls") help the program learn what a house is.
3. Difference Identification:
o The program compares positive examples with negative examples to identify
critical differences that distinguish one from the other. These differences help
refine the concept's definition.
o Example: If an example of a "bird" has wings and flies, and a negative
example does not, the program can infer that wings and flying may be
important features of birds.
4. Generalization and Specialization:
o Winston’s learning algorithm allows the system to generalize concepts from
examples, forming broader definitions based on commonalities. Conversely, it
can specialize by incorporating constraints that fit the concept.
o Example: From various animals, the program might generalize that all animals
with wings and feathers are birds. If it encounters a penguin, it may specialize
this concept by adding that not all birds can fly.
5. Concept Hierarchies:
o The program creates a hierarchy of concepts, where more specific concepts
(like "dog" or "bird") are nested under broader categories (like "animal"). This
hierarchical structure mimics how humans categorize knowledge.
6. Learning from Feedback:
o The system improves over time by adjusting its representations based on
feedback, learning from mistakes when it misclassifies examples.
Example of Winston’s Learning:
Consider the task of learning the concept of an "arch." Winston’s program might be given
several examples of structures that are arches and non-arches. It would analyze features such
as "a top block resting on two side blocks" and generalize that this is the structure of an arch.
It might also refine this concept by noting that if the top block is missing, it is not an arch,
which helps in distinguishing between examples.
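The arch example can be sketched symbolically. Representing structures as sets of relations is a simplification of Winston's actual semantic-network representation, and the relation names below are invented for illustration, but it captures the generalize-from-positives, specialize-from-near-misses idea.

```python
# Hedged sketch of Winston-style concept learning on the "arch" example:
# structures are sets of relations; positive examples generalise the
# concept by intersection, and a near-miss reveals which relation is
# essential. Relation names are illustrative, not Winston's originals.

arch1 = {"top-rests-on-sides", "sides-upright", "top-is-block"}
arch2 = {"top-rests-on-sides", "sides-upright", "top-is-wedge"}
near_miss = {"sides-upright"}  # same side blocks, but the top is missing

concept = arch1 & arch2        # generalise: keep relations shared by positives
required = concept - near_miss # specialise: what the near-miss lacks is essential

print(sorted(concept))   # relations common to all arches
print(sorted(required))  # the relation that distinguishes arch from non-arch
```

Note how the two positive examples already drop "top-is-block" (a wedge works too), while the near-miss pins down "top-rests-on-sides" as a must-have relation.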
Significance of Winston’s Learning Program:
 Early AI Learning: Winston's program was one of the first to tackle learning from
examples and reasoning about structural concepts. It laid the foundation for later AI
systems, particularly those focused on symbolic reasoning and knowledge
representation.
 Cognitive Modeling: The program mirrored how humans learn by identifying
common patterns and differences from examples, providing insight into cognitive
processes.
 Influence on Machine Learning: While modern machine learning often uses
statistical methods (like neural networks), Winston’s program highlighted the
importance of structural and symbolic reasoning, which remains relevant in fields like
knowledge representation and reasoning (KRR) in AI.

Decision Trees
A Decision Tree is a popular machine learning algorithm used for both classification and
regression tasks. It represents decisions and their possible consequences, including chance
event outcomes, resource costs, and utility, in a tree-like structure. Each internal node
represents a decision based on a feature, each branch represents the outcome of the decision,
and each leaf node represents the final classification or output.
Key Components of a Decision Tree:
1. Root Node:
o The top node of the tree, where the first and most informative split of
the data is made. The feature that gives the highest information gain or
lowest impurity is chosen here.
2. Decision Nodes (Internal Nodes):
o These nodes represent tests on attributes or features. Each decision node splits
the data based on a feature and its value.
o Example: "Is the temperature > 30°C?" is a decision node in a tree predicting
whether it will rain.
3. Branches:
o Each branch represents the outcome of the decision at a node. Depending on
the test (e.g., yes/no), the data moves down to the next level of the tree.
4. Leaf Nodes (Terminal Nodes):
o These nodes represent the final decision or classification. Once the data
reaches a leaf node, the output is given.
o Example: A leaf node could represent the final class, like "Rain" or "No Rain,"
in a weather prediction model.
How Decision Trees Work:
1. Splitting:
o The decision tree algorithm recursively splits the dataset based on the most
significant features. It uses criteria like Gini impurity (for classification) or
Mean Squared Error (for regression) to determine how to split data. The
goal is to find splits that result in the most homogeneous subsets.
2. Stopping Criteria:
o The tree keeps splitting until one of the stopping conditions is met:
 All the data points in a node belong to the same class.
 There are no more features left to split on.
 A predefined maximum depth of the tree is reached.
3. Pruning (optional):
o To prevent overfitting, decision trees can be "pruned" by cutting off branches
that have little predictive power. This helps simplify the model and improves
generalization to new data.
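The splitting step can be made concrete with a worked Gini calculation. For a set of labels, Gini impurity is 1 minus the sum of squared class proportions; a candidate split is scored by the size-weighted impurity of the two sides, and the split with the lowest score wins. The temperature/rain values below are invented for illustration.

```python
# Sketch of how a split is scored: compute the Gini impurity of the class
# labels on each side of a candidate threshold, then take the weighted sum.

def gini(labels):
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def split_score(values, labels, threshold):
    left  = [y for x, y in zip(values, labels) if x <= threshold]
    right = [y for x, y in zip(values, labels) if x > threshold]
    n = len(labels)
    return (len(left) / n) * gini(left) + (len(right) / n) * gini(right)

temps  = [20, 25, 31, 35]
rained = ["no", "no", "yes", "yes"]
print(split_score(temps, rained, 30))  # 0.0: both sides are pure
```

A pure 50/50 mix scores 0.5 under this measure, and a perfectly separating threshold scores 0.0, which is why the temperature-30 split in this toy data would be chosen.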
Example of a Decision Tree:
Imagine you are building a decision tree to predict whether a person will buy a computer,
based on the following features: age, income, and student status.
1. The root node could be "Is the person's age < 30?"
o If yes, go to the next decision node.
o If no, check another feature like income.
2. If the person is a student, they might be more likely to buy a computer.
o The leaf node here might say "Yes, will buy a computer."
3. Based on the data, the tree will continue to branch out until it makes a decision at the
leaf nodes.
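The worked example above is, structurally, just nested if/else tests: each `if` is a decision node and each `return` is a leaf. The thresholds and outcomes below are illustrative assumptions matching the example, not learned from real data.

```python
# The buy-a-computer tree written as nested if/else tests. A trained
# decision tree has exactly this shape, with the splits learned from
# data rather than hand-coded.

def will_buy_computer(age, income, is_student):
    if age < 30:                 # root node: "Is the person's age < 30?"
        if is_student:           # decision node on student status
            return "yes"         # leaf: "Yes, will buy a computer"
        return "no"
    if income == "high":         # decision node on income for age >= 30
        return "yes"
    return "no"

print(will_buy_computer(25, "low", True))    # young student -> buys
print(will_buy_computer(40, "high", False))  # older, high income -> buys
```

Reading a prediction off the tree is just following one root-to-leaf path, which is why decision trees are so easy to interpret.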
Advantages of Decision Trees:
 Easy to Understand: Decision trees are intuitive and their visual representation is
simple to interpret, even for non-experts.
 Handles Both Numerical and Categorical Data: Decision trees can work with a mix
of both types of data without needing much preprocessing.
 No Need for Feature Scaling: Unlike many algorithms, decision trees do not require
scaling of features, as they operate based on thresholds.
 Non-Parametric: Decision trees don't assume any underlying probability distribution,
making them flexible for a wide variety of tasks.
Disadvantages of Decision Trees:
 Overfitting: Decision trees can overfit the training data, especially if the tree grows
too deep. This results in poor performance on new, unseen data.
 Unstable: Small changes in the data can result in a completely different tree being
generated, as the model is sensitive to the data it is trained on.
 Biased to Dominant Features: Decision trees may favor features with many distinct
values (like continuous numerical features), which could lead to biased splits.
Use Cases:
 Classification: Decision trees are often used for classification tasks, such as spam
detection, credit scoring, and medical diagnosis.
 Regression: Decision trees can also be used for predicting continuous values in
regression tasks, such as predicting housing prices or stock prices.
Extensions of Decision Trees:
1. Random Forest:
o An ensemble method that creates multiple decision trees and averages their
results to improve accuracy and reduce overfitting.
2. Gradient Boosting Trees:
o Another ensemble method where multiple decision trees are built sequentially,
and each tree tries to correct the errors of the previous ones.
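The random-forest idea of averaging many trees can be sketched with majority voting. The "trees" below are hand-written one-test stumps, purely for illustration; a real random forest trains each tree on a bootstrap sample with random feature subsets.

```python
# Minimal sketch of the ensemble idea behind random forests: several
# simple classifiers (here, hand-coded decision stumps) vote, and the
# majority label wins, reducing the variance of any single tree.
from collections import Counter

def stump_a(x): return "spam" if x["has_link"] else "ham"
def stump_b(x): return "spam" if x["all_caps"] else "ham"
def stump_c(x): return "spam" if x["has_link"] or x["all_caps"] else "ham"

def forest_predict(trees, x):
    votes = Counter(tree(x) for tree in trees)
    return votes.most_common(1)[0][0]      # majority vote

email = {"has_link": True, "all_caps": False}
print(forest_predict([stump_a, stump_b, stump_c], email))  # two of three vote "spam"
```

Even though stump_b votes "ham" here, the ensemble's majority overrides it: one tree's mistake is outvoted, which is how the ensemble improves accuracy and reduces overfitting.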
