WHAT IS AI?
Short Note on AI for Exams
● Definition:
Artificial Intelligence (AI) is the field of science and engineering that aims to create
intelligent systems capable of reasoning, learning, perceiving, and acting autonomously.
It attempts to replicate human intelligence in machines.
Approaches to AI:
AI can be categorized into four major approaches: thinking humanly (cognitive modelling),
thinking rationally (the laws-of-thought approach), acting humanly (the Turing Test approach),
and acting rationally (the rational-agent approach).
Applications of AI:
AI is a universal field with applications in various domains, including healthcare, finance,
robotics, natural language processing, gaming, and autonomous vehicles.
TURING TEST
● Definition:
The Turing Test was proposed by Alan Turing in 1950 to determine whether a machine
can exhibit human-like intelligence.
● Concept:
○ A human evaluator interacts with both a human and a machine through a
text-based interface.
○ If the evaluator cannot reliably distinguish the machine from the human, the
machine is considered intelligent.
● Requirements for Passing the Turing Test:
○ Natural Language Processing (NLP): To understand and generate human
language.
○ Knowledge Representation: To store and retrieve information.
○ Automated Reasoning: To answer questions logically.
○ Machine Learning: To adapt and improve responses over time.
● Total Turing Test:
○ Includes a visual and physical component where the machine must perceive
objects (computer vision) and manipulate them (robotics).
● Significance:
○ The Turing Test remains an important benchmark in AI research.
○ While no machine has completely passed the test, AI chatbots like ChatGPT and
others have come close.
This test helps in evaluating AI systems but has limitations, as passing it does not necessarily
mean true intelligence.
HISTORY OF AI
In 1956, the Dartmouth workshop organized by John McCarthy marked the birth of AI as a field. It
brought together key researchers to explore how machines could simulate human intelligence,
shaping AI's future development.
The early years of AI saw major breakthroughs despite limited computing power. John
McCarthy played a crucial role by developing LISP (1958), a programming language that
became the foundation for AI research. He also introduced the Advice Taker, an early AI
system designed to reason and learn from new information without reprogramming. Another key
advancement was Arthur Samuel's checkers-playing program, which demonstrated machine
learning by improving its gameplay over time. These innovations laid the groundwork for AI by
emphasizing reasoning, learning, and symbolic processing.
Early AI research relied on weak methods, which were general but ineffective for complex
problems. The alternative was to use domain-specific knowledge, leading to expert systems such
as DENDRAL (for inferring molecular structure) and MYCIN (for diagnosing blood infections).
These innovations highlighted the need for expert knowledge in AI, shaping modern expert
systems and AI applications.
From the 1980s onward, AI evolved from ad-hoc approaches to a rigorous, scientific
methodology, integrating established theories from fields such as probability, statistics, and
control theory. Key advancements include probabilistic methods such as hidden Markov models
and Bayesian networks, and mathematically grounded approaches to machine learning.
Recent advancements in AI emphasize the importance of data over algorithm choice. Key
examples include large-scale systems for machine translation, speech recognition, and image
recognition, whose performance improves markedly when trained on very large datasets.
This shift suggests that learning from vast datasets can overcome AI’s knowledge
bottleneck, reducing reliance on manual knowledge engineering and leading to widespread
real-world AI applications.
WHAT IS AN AGENT?
Definition of an Agent
● An agent is any system that perceives its environment through sensors and acts upon
it using actuators to achieve a goal.
● Agents can be classified into human agents, robotic agents, and software agents,
depending on their form and functionality.
● Percept refers to the input received by an agent at a particular moment from its
environment. It serves as the agent's awareness of the surroundings.
● Percept Sequence is the complete history of everything the agent has perceived up to
that point, determining its future actions.
● An intelligent agent is one that makes optimal decisions based on its percepts to
achieve a specific objective efficiently.
● A non-intelligent agent may follow predefined rules but lacks the ability to adapt or
learn from past experiences.
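The percept–action cycle described above can be illustrated with a tiny sketch. This is an illustrative Python outline, not a standard library API; the Agent and EchoAgent names are invented for the example.
```python
# Minimal sketch of the agent/percept cycle: an agent receives percepts
# through sensors, remembers the percept sequence, and returns an action
# for its actuators. All names here are illustrative.

class Agent:
    def __init__(self):
        self.percept_sequence = []            # complete history of percepts

    def program(self, percept):
        """Map the current percept (and stored history) to an action."""
        raise NotImplementedError

    def act(self, percept):
        self.percept_sequence.append(percept)  # record what was perceived
        return self.program(percept)           # choose an action


class EchoAgent(Agent):
    def program(self, percept):
        # trivial behaviour: just report what was sensed
        return f"noticed {percept}"


if __name__ == "__main__":
    agent = EchoAgent()
    for percept in ["light", "obstacle", "goal"]:
        print(agent.act(percept))
```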
TYPES OF AGENTS
Simple Reflex Agents in AI
1. Definition of Simple Reflex Agents
○ Simple reflex agents make decisions based only on the current percept,
ignoring past percepts.
○ Their actions follow predefined condition-action rules (also called if-then
rules).
○ These agents work well in fully observable environments where each percept
provides complete information for decision-making.
2. Working Mechanism
○ The agent matches its current percept against its set of condition-action rules and
executes the action of the first rule whose condition is satisfied.
○ Simple reflex agents are easy to implement but have limited intelligence.
○ They work best in fully observable, structured environments with well-defined
rules (a small sketch follows).
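A minimal sketch of such condition–action rules, using the textbook two-square vacuum-cleaner world as an assumed example; the rules and names below are illustrative.
```python
# Simple reflex agent for a two-square vacuum world ("A" and "B").
# It looks only at the current percept (location, status) and applies
# fixed condition-action rules; no memory of past percepts is kept.

def simple_reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":     # rule: current square is dirty -> clean it
        return "Suck"
    if location == "A":       # rule: at A and clean -> move right
        return "Right"
    return "Left"             # rule: at B and clean -> move left


print(simple_reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(simple_reflex_vacuum_agent(("A", "Clean")))   # Right
```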
Model-Based Reflex Agents in AI
1. Definition of Model-Based Reflex Agents
○ In partially observable environments, the agent may not always have complete
information from the current percept alone.
○ A model-based reflex agent therefore maintains an internal state and makes the best
possible guess about the current state of the world based on past percepts and its
internal model of how the world works.
2. Advantages Over Simple Reflex Agents
○ Model-based reflex agents are a step ahead of simple reflex agents by using
memory and a world model, as sketched below.
○ They are widely used in AI applications like self-driving cars, robotic
navigation, and automated decision-making.
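A rough sketch of the same vacuum world handled with memory and a simple world model; the class and rule choices are illustrative assumptions, not a fixed algorithm.
```python
# Model-based reflex agent sketch: it maintains an internal state that is
# updated from each percept, then applies condition-action rules to that
# state rather than to the raw percept alone.

class ModelBasedVacuumAgent:
    def __init__(self):
        # internal model: remembered status of each square (None = unknown)
        self.model = {"A": None, "B": None}

    def update_state(self, percept):
        location, status = percept
        self.model[location] = status      # record what was just observed

    def choose_action(self, percept):
        self.update_state(percept)
        location, status = percept
        if status == "Dirty":
            return "Suck"
        # use the model: if the other square is not known to be clean, go there
        other = "B" if location == "A" else "A"
        if self.model[other] != "Clean":
            return "Right" if other == "B" else "Left"
        return "NoOp"                      # everything believed clean


agent = ModelBasedVacuumAgent()
print(agent.choose_action(("A", "Dirty")))   # Suck
print(agent.choose_action(("A", "Clean")))   # Right (B's status still unknown)
```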
Goal-Based Agents in AI
1. Definition of Goal-Based Agents
○ A goal-based agent takes actions not just based on the current state but also
on a desired goal it wants to achieve.
○ It reasons about the future consequences of actions before making decisions.
○ The agent uses a model of how the world changes and evaluates possible
actions.
○ It selects actions that will lead to its goal, even if multiple steps are needed.
○ Example: A self-driving taxi at a junction chooses to turn left, right, or go
straight based on its destination goal.
2. Key Components
○ Internal State: Keeps track of past percepts and updates based on observations.
○ World Model: Describes how actions impact the environment.
○ Goal Information: Defines the desired end state.
○ Decision Process: Considers “What will happen if I take this action?” before
executing it.
3. Advantages Over Reflex Agents
○ More flexible: The same agent can achieve different goals by changing its goal
description.
○ Better adaptability: Can modify its behavior if new conditions arise (e.g., braking
efficiency in rain).
○ Handles complex decisions: Unlike simple reflex agents, it can plan long action
sequences to achieve goals.
4. Limitations
○ Computationally expensive: Requires searching or planning to find the best
action sequence.
○ Slower reaction time: Since it considers future consequences, it may take
longer to decide.
○ Requires a well-defined goal: If goals are unclear or conflicting,
decision-making can be difficult.
5. Conclusion
○ Goal-based agents provide better flexibility and decision-making than reflex
agents.
○ They are used in AI applications like navigation systems, automated planning,
and robotic pathfinding.
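A minimal sketch of the goal-based idea above: the agent uses a model of how actions change the state and searches for an action sequence that reaches the goal. The toy road map and names are invented for illustration, mirroring the taxi-at-a-junction example.
```python
from collections import deque

# Goal-based agent sketch: given a model of how actions change the state
# (a toy road map) and a goal, search for a sequence of states reaching it.

ROADS = {                       # invented toy map: state -> reachable states
    "Junction": ["Left", "Straight", "Right"],
    "Left": ["Destination"],
    "Straight": ["DeadEnd"],
    "Right": [],
    "DeadEnd": [],
    "Destination": [],
}

def plan_route(start, goal, model):
    """Breadth-first search for a path of states from start to goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in model[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(plan_route("Junction", "Destination", ROADS))
# ['Junction', 'Left', 'Destination'] -> the agent chooses to turn left
```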
Utility-Based Agents in AI
1. Definition of Utility-Based Agents
○ These agents go beyond simple goal achievement by considering the quality
of different outcomes.
○ They use a utility function to measure how desirable each state is, allowing
them to make better decisions.
2. How They Work
○ The agent has a model of the world that predicts how actions will affect future
states.
○ A utility function assigns a score to each possible state, indicating how
beneficial it is.
○ The agent selects the action that maximizes expected utility, considering
possible uncertainties.
3. Key Components
○ Internal State: Keeps track of the environment.
○ World Model: Understands how actions change the world.
○ Utility Function: Measures happiness or preference over different states.
○ Decision Process: Computes the expected utility of each action and selects
the best one.
4. Advantages
○ More flexible than goal-based agents.
○ Can adapt to changing conditions (e.g., adjusting speed in heavy traffic).
○ Works well in uncertain environments where outcomes are unpredictable.
5. Challenges
○ Computationally expensive: Calculating expected utility for many possible
outcomes can be slow.
○ Requires a well-defined utility function: Hard to design if preferences are
unclear or subjective.
○ Not always perfect: Due to limited time and resources, approximate solutions
are often used.
6. Conclusion
○ Utility-based agents maximize expected utility, making them smarter and
more adaptive than simple goal-based agents.
○ They are widely used in robotics, economics, AI decision-making, and
autonomous systems.
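A small sketch of expected-utility maximization under uncertainty; the actions, probabilities, and utility values below are invented purely for illustration.
```python
# Utility-based agent sketch: each action leads to several possible outcomes
# with given probabilities; the agent picks the action with the highest
# expected utility.

ACTIONS = {
    "highway":  [(0.7, 10), (0.3, -5)],   # (probability, utility) pairs:
    "backroad": [(0.9, 6),  (0.1, 2)],    # fast but risky vs. slow but safe
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

def choose_action(actions):
    return max(actions, key=lambda a: expected_utility(actions[a]))

for name, outcomes in ACTIONS.items():
    print(name, expected_utility(outcomes))   # highway 5.5, backroad 5.6
print("best:", choose_action(ACTIONS))        # best: backroad
```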
Learning Agents in AI
1. Definition of Learning Agents
○ A learning agent improves its own performance over time by learning from experience.
○ It combines a performance element (which selects actions), a learning element (which
improves the performance element using feedback from a critic), and a problem
generator (which suggests exploratory actions).
○ Any of the agent types above can be extended into a learning agent by adding these
components.
RATIONAL AGENTS
A rational agent is an entity that makes decisions based on available information to achieve the
best possible outcome. It selects actions that maximize its performance measure, given its
knowledge of the environment, possible actions, and past percepts. Rationality at any point
depends on four factors:
1. Performance Measure – Defines success based on the desirability of the agent’s
actions.
2. Knowledge of the Environment – Prior knowledge helps the agent make informed
decisions.
3. Available Actions – The agent can only choose from the actions it is capable of
performing.
4. Percept Sequence – The sequence of observations made by the agent guides its
decisions.
A rational agent adapts to changing environments, avoids unnecessary actions, and modifies its
behavior based on experience.
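As a rough illustration of how a performance measure judges behaviour, here is a toy run in the spirit of the vacuum-cleaner example; the scoring rule (one point per clean square per time step) and all values are assumptions made for the sketch.
```python
# Rational-agent sketch: rationality is judged by a performance measure
# applied to the agent's behaviour in its environment. Here the measure
# awards one point for every clean square at every time step.

def run(agent, steps=4):
    world = {"A": "Dirty", "B": "Dirty"}
    location, score = "A", 0
    for _ in range(steps):
        action = agent((location, world[location]))   # percept -> action
        if action == "Suck":
            world[location] = "Clean"
        elif action == "Right":
            location = "B"
        elif action == "Left":
            location = "A"
        score += sum(1 for s in world.values() if s == "Clean")
    return score

def reflex_vacuum(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

print(run(reflex_vacuum))   # higher score = more desirable behaviour
```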
PHYSICAL SYMBOL SYSTEM HYPOTHESIS
The Physical Symbol System Hypothesis (Newell and Simon) states that a physical symbol system
has the necessary and sufficient means for general intelligent action; that is, any system capable
of general intelligence must be a physical symbol system, and a suitably organized physical
symbol system can exhibit general intelligence.
Significance in AI:
● Serves as the basis for rule-based AI, expert systems, and logic-based reasoning.
● Supports problem-solving, planning, and natural language processing.
● Forms the foundation of symbolic AI, contrasting with connectionist AI (neural
networks).
SYMBOLIC AI
Symbolic AI (also called Good Old-Fashioned AI - GOFAI) is an approach to Artificial
Intelligence that uses symbols and rules to represent knowledge and solve problems. It is
based on the idea that human thinking can be represented using symbols (such as words,
numbers, or logic statements) and rules (if-then statements).
1. Symbols – Represent objects, actions, or concepts (e.g., "Apple" can represent a fruit).
2. Rules – Define relationships between symbols (e.g., "If it is red and round, it might be an
apple").
3. Inference Engine – Applies rules to symbols to draw conclusions and make decisions.
👉 In short, symbolic AI applies explicit rules to symbols to solve well-defined, rule-based
problems.
Example: MYCIN (an AI system in medicine) was used to diagnose bacterial infections
based on symptoms and lab results by applying predefined rules.
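A tiny sketch of the symbols/rules/inference-engine idea: a forward-chaining loop that keeps applying if-then rules to known facts until nothing new can be concluded. The facts and rules are toy examples, not MYCIN's actual rules.
```python
# Symbolic AI sketch: knowledge as symbols (facts) and if-then rules, with a
# small forward-chaining inference engine that applies rules to known facts
# until no new conclusions appear.

facts = {"red", "round"}
rules = [
    ({"red", "round"}, "might_be_apple"),      # if red and round -> might be an apple
    ({"might_be_apple", "edible"}, "fruit"),   # second rule never fires here
]

def forward_chain(facts, rules):
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)          # rule fires: add the conclusion
                changed = True
    return facts

print(forward_chain(set(facts), rules))
# {'red', 'round', 'might_be_apple'} (set order may vary)
```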
Limitations:
❌ Struggles with real-world uncertainty and complex data like images or speech.
❌ Requires manual rule creation, which can be time-consuming.
Symbolic AI was the dominant AI approach before machine learning and neural networks
became popular. However, it is still useful in fields like law, healthcare, and expert
decision-making systems.