Artificial Intelligence (AI) aims to create intelligent systems that can reason, learn, and act autonomously, with various approaches including cognitive modeling and rational agents. Key components of AI include natural language processing, machine learning, and robotics, with applications in healthcare, finance, and education. The document also discusses the evolution of AI, types of agents, and the concept of rationality in decision-making.

AI INTRODUCTION

WHAT IS AI?
Short Note on AI for Exams

●​ Definition:​
Artificial Intelligence (AI) is the field of science and engineering that aims to create
intelligent systems capable of reasoning, learning, perceiving, and acting autonomously.
It attempts to replicate human intelligence in machines.

Approaches to AI:​
AI can be categorized into four major approaches:

1.​ Thinking Humanly (Cognitive Modeling Approach):
○​ Understanding human thought processes and replicating them in machines.
○​ Uses methods like psychological experiments, brain imaging, and introspection.
2.​ Thinking Rationally (Laws of Thought Approach):
○​ Focuses on developing logical reasoning based on formal rules.
○​ Derived from Aristotle’s logic and syllogisms.
3.​ Acting Humanly (Turing Test Approach):
○​ Making machines exhibit human-like intelligence through natural language
communication and problem-solving.
○​ Requires capabilities like natural language processing, knowledge
representation, reasoning, and learning.
4.​ Acting Rationally (Rational Agent Approach):
○​ AI agents act in a way that maximizes success based on available data.
○​ More general than logical reasoning as it includes decision-making in uncertain
environments.

Key Components of AI:

●​ Natural Language Processing (NLP): Enables AI to understand and generate human
language.
●​ Knowledge Representation: Stores and organizes information for reasoning and
decision-making.
●​ Automated Reasoning: Uses logical inference to draw conclusions from stored
knowledge.
●​ Machine Learning (ML): Allows AI to learn from data and improve its performance over
time.
●​ Computer Vision: Enables machines to interpret and analyze images or videos.
●​ Robotics: Integrates AI with hardware to perform tasks like navigation and manipulation.

Applications of AI:​
AI is a universal field with applications in various domains, including:

●​ Healthcare: Medical diagnosis, robotic surgery, drug discovery.
●​ Autonomous Vehicles: Self-driving cars and traffic management.
●​ Finance: Fraud detection, algorithmic trading.
●​ Education: AI tutors, personalized learning systems.
●​ Entertainment: AI in gaming, music composition, and content recommendation.

Challenges and Future of AI:

●​ Ethical concerns, such as bias in AI and job displacement.
●​ Need for improved computational power and data privacy.
●​ Future AI research focuses on achieving General AI (machines with human-like
reasoning abilities).

TURING TEST
●​ Definition:​
The Turing Test was proposed by Alan Turing in 1950 to determine whether a machine
can exhibit human-like intelligence.
●​ Concept:
○​ A human evaluator interacts with both a human and a machine through a
text-based interface.
○​ If the evaluator cannot reliably distinguish the machine from the human, the
machine is considered intelligent.
●​ Requirements for Passing the Turing Test:
○​ Natural Language Processing (NLP): To understand and generate human
language.
○​ Knowledge Representation: To store and retrieve information.
○​ Automated Reasoning: To answer questions logically.
○​ Machine Learning: To adapt and improve responses over time.
●​ Total Turing Test:
○​ Includes a visual and physical component where the machine must perceive
objects (computer vision) and manipulate them (robotics).
●​ Significance:
○​ The Turing Test remains an important benchmark in AI research.
○​ While no machine has completely passed the test, AI chatbots like ChatGPT and
others have come close.

This test helps in evaluating AI systems but has limitations, as passing it does not necessarily
mean true intelligence.
HISTORY OF AI
The Dartmouth Conference of 1956, organized by John McCarthy, marked the birth of AI as a field. It brought together key researchers
to explore how machines could simulate human intelligence, shaping AI's future development.

The early years of AI saw major breakthroughs despite limited computing power. John
McCarthy played a crucial role by developing LISP (1958), a programming language that
became the foundation for AI research. He also introduced the Advice Taker, an early AI
system designed to reason and learn from new information without reprogramming. Another key
advancement was Arthur Samuel's checkers-playing program, which demonstrated machine
learning by improving its gameplay over time. These innovations laid the groundwork for AI by
emphasizing reasoning, learning, and symbolic processing.

Early AI research relied on weak methods, which were general but ineffective for complex
problems. The alternative was domain-specific knowledge, leading to expert systems like:

●​ DENDRAL (1969): Developed at Stanford, it used chemical knowledge to deduce
molecular structures from mass spectrometry data. Instead of brute-force searching, it
applied expert rules, making it the first successful knowledge-intensive AI system.
●​ MYCIN (1970s): A medical diagnosis system for blood infections. It used 450 rules and
outperformed junior doctors. Unlike DENDRAL, it handled uncertainty in medical
knowledge using certainty factors.

These innovations highlighted the need for expert knowledge in AI, shaping modern expert
systems and AI applications.

From the 1980s, AI evolved from ad-hoc approaches to a rigorous, scientific
methodology, integrating established theories from other fields. Key advancements include:

●​ Hidden Markov Models (HMMs) in Speech Recognition
○​ Dominated speech recognition by providing a mathematical foundation.
○​ Uses statistical training on large datasets, leading to continuous
improvements.
○​ Applied successfully to handwriting recognition and other pattern recognition
tasks.

Shift from Algorithm-Centric to Data-Centric AI from 2000s

Recent advancements in AI emphasize the importance of data over algorithm choice. Key
examples:

●​ Word-Sense Disambiguation (Yarowsky, 1995)
○​ Determining whether "plant" means factory or flora.
○​ Achieved 96% accuracy without labeled data, using large text corpora.
○​ Showed that more data improves performance more than better algorithms
(Banko & Brill, 2001).

This shift suggests that learning from vast datasets can overcome AI’s knowledge
bottleneck, reducing reliance on manual knowledge engineering and leading to widespread
real-world AI applications.

WHAT IS AN AGENT?
Definition of an Agent

●​ An agent is any system that perceives its environment through sensors and acts upon
it using actuators to achieve a goal.
●​ Agents can be classified into human agents, robotic agents, and software agents,
depending on their form and functionality.

Percept & Percept Sequence

●​ Percept refers to the input received by an agent at a particular moment from its
environment. It serves as the agent's awareness of the surroundings.
●​ Percept Sequence is the complete history of everything the agent has perceived up to
that point, determining its future actions.

Agent Function & Agent Program

●​ Agent Function is a mathematical mapping that defines the relationship between
percept sequences and the actions taken by an agent. It describes the agent's
behavior in a theoretical sense.
●​ Agent Program is the actual implementation of the agent function, executed within a
system. It determines how the agent perceives, processes, and selects actions based on
predefined logic or learning mechanisms.
●​ The efficiency of an agent program influences the agent’s overall performance and
adaptability in different scenarios.

Intelligent vs. Non-Intelligent Agents

●​ An intelligent agent is one that makes optimal decisions based on its percepts to
achieve a specific objective efficiently.
●​ A non-intelligent agent may follow predefined rules but lacks the ability to adapt or
learn from past experiences.

AI & Decision Making


●​ AI focuses on designing agents that can operate in complex environments requiring
non-trivial decision-making.
●​ These agents often involve machine learning, probabilistic reasoning, and
optimization techniques to improve performance over time.
●​ The goal is to create systems that can function autonomously, learn from experiences,
and adapt to new challenges effectively.

TYPES OF AGENTS
Simple Reflex Agents in AI
1.​ Definition of Simple Reflex Agents​

○​ Simple reflex agents make decisions based only on the current percept,
ignoring past percepts.
○​ Their actions follow predefined condition-action rules (also called if-then
rules).
○​ These agents work well in fully observable environments where each percept
provides complete information for decision-making.
2.​ Working Mechanism​

○​ The agent continuously senses the environment and matches percepts to
predefined rules.
○​ The agent function is straightforward and does not require memory of past
percepts.
○​ Simple reflex agents rely on hardcoded condition-action rules to decide what
action to take.
○​ These rules can be implemented using Boolean circuits or logic-based
programming.
3.​ Limitations of Simple Reflex Agents​

○​ Lack of Memory: They do not remember past percepts, which limits
decision-making in partially observable environments.
○​ Infinite Loops: If an agent lacks location awareness (e.g., vacuum cleaner
without position sensing), it may get stuck in repetitive actions.
4.​ Randomization in Simple Reflex Agents​

○​ Randomized behavior can help avoid infinite loops in partially observable
environments.
○​ While useful in some cases, randomization is not an ideal solution for designing
intelligent agents.
5.​ Conclusion​

○​ Simple reflex agents are easy to implement but have limited intelligence.
○​ They work best in fully observable, structured environments with well-defined
rules.
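A minimal sketch of the idea above, assuming the familiar two-square vacuum world (squares A and B); the rule set is a made-up illustration of condition-action rules:

```python
# A simple reflex agent: it ignores percept history and maps the
# current percept to an action through hardcoded if-then rules.

def simple_reflex_vacuum_agent(percept):
    location, status = percept   # only the current percept, no memory
    if status == "Dirty":        # condition-action rule 1
        return "Suck"
    elif location == "A":        # condition-action rule 2
        return "Right"
    else:                        # condition-action rule 3
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))   # Left
```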

Model-Based Reflex Agents in AI


1.​ Definition of Model-Based Reflex Agents​

○​ Unlike simple reflex agents, model-based reflex agents maintain an internal
state that helps them handle partial observability.
○​ This internal state stores past percepts and updates based on how the world
evolves.
○​ The agent perceives the environment and updates its internal model of the
world.
○​ It then applies condition-action rules to determine the appropriate action.
2.​ Components of a Model-Based Reflex Agent​

○​ Internal State: Stores relevant past percepts to infer missing information.
○​ World Model: Encodes how the world changes and how the agent’s actions
affect it.
○​ Condition-Action Rules: Determines the next action based on the updated
internal state.
3.​ Handling Uncertainty​

○​ In partially observable environments, the agent may not always have complete
information.
○​ It makes the best possible guess about the world based on past percepts and
its internal model.
4.​ Advantages Over Simple Reflex Agents​

○​ Handles partial observability by maintaining an internal state.
○​ More flexible and intelligent as it can adapt to dynamic environments.
○​ Reduces errors compared to simple reflex agents that act only on current
percepts.
5.​ Limitations​

○​ Computationally complex due to the need for memory and processing.
○​ Still reactive, meaning it does not plan for long-term goals like a goal-based
agent.
○​ May have uncertainty, as the agent’s internal state is just an approximation of
reality.
6.​ Conclusion​

○​ Model-based reflex agents are a step ahead of simple reflex agents by using
memory and a world model.
○​ They are widely used in AI applications like self-driving cars, robotic
navigation, and automated decision-making.
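As a rough illustration of the components above, the hypothetical vacuum agent below keeps an internal state (its belief about each square) and consults it when the current percept alone is not enough:

```python
# A model-based reflex agent: the internal model remembers the last
# known status of each square, so the agent can infer when it is done.

class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal state: believed status of each square (None = unknown).
        self.model = {"A": None, "B": None}

    def act(self, percept):
        location, status = percept
        self.model[location] = status        # update the world model
        if status == "Dirty":                # condition-action rule
            return "Suck"
        # Use the model: if the other square is known clean, stop working.
        other = "B" if location == "A" else "A"
        if self.model[other] == "Clean":
            return "NoOp"
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent.act(("A", "Clean")))   # Right (B's status still unknown)
print(agent.act(("B", "Dirty")))   # Suck
print(agent.act(("B", "Clean")))   # NoOp (model says A is clean too)
```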

Goal-Based Agents in AI
1.​ Definition of Goal-Based Agents
○​ A goal-based agent takes actions not just based on the current state but also
on a desired goal it wants to achieve.
○​ It reasons about the future consequences of actions before making decisions.
○​ The agent uses a model of how the world changes and evaluates possible
actions.
○​ It selects actions that will lead to its goal, even if multiple steps are needed.
○​ Example: A self-driving taxi at a junction chooses to turn left, right, or go
straight based on its destination goal.
2.​ Key Components
○​ Internal State: Keeps track of past percepts and updates based on observations.
○​ World Model: Describes how actions impact the environment.
○​ Goal Information: Defines the desired end state.
○​ Decision Process: Considers “What will happen if I take this action?” before
executing it.
3.​ Advantages Over Reflex Agents
○​ More flexible: The same agent can achieve different goals by changing its goal
description.
○​ Better adaptability: Can modify its behavior if new conditions arise (e.g., braking
efficiency in rain).
○​ Handles complex decisions: Unlike simple reflex agents, it can plan long action
sequences to achieve goals.
4.​ Limitations
○​ Computationally expensive: Requires searching or planning to find the best
action sequence.
○​ Slower reaction time: Since it considers future consequences, it may take
longer to decide.
○​ Requires a well-defined goal: If goals are unclear or conflicting,
decision-making can be difficult.
5.​ Conclusion
○​ Goal-based agents provide better flexibility and decision-making than reflex
agents.
○​ They are used in AI applications like navigation systems, automated planning,
and robotic pathfinding.
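The "What will happen if I take this action?" step can be sketched as a search over a world model. The road map and breadth-first planner below are illustrative assumptions, not any specific real system:

```python
# Goal-based behaviour as search: the agent uses a model of the world
# (a road map) to find an action sequence that reaches its goal.

from collections import deque

def plan_route(world_model, start, goal):
    """Breadth-first search over the model; returns a list of actions."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:                       # goal test
            return path
        for action, next_state in world_model.get(state, []):
            if next_state not in visited:       # model predicts the result
                visited.add(next_state)
                frontier.append((next_state, path + [action]))
    return None                                 # goal unreachable

roads = {                                       # hypothetical junction map
    "Junction": [("left", "Market"), ("right", "Highway"), ("straight", "Park")],
    "Market":   [("straight", "Airport")],
}
print(plan_route(roads, "Junction", "Airport"))   # ['left', 'straight']
```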
Utility-Based Agents in AI
1.​ Definition of Utility-Based Agents
○​ These agents go beyond simple goal achievement by considering the quality
of different outcomes.
○​ They use a utility function to measure how desirable each state is, allowing
them to make better decisions.
2.​ How They Work
○​ The agent has a model of the world that predicts how actions will affect future
states.
○​ A utility function assigns a score to each possible state, indicating how
beneficial it is.
○​ The agent selects the action that maximizes expected utility, considering
possible uncertainties.
3.​ Key Components
○​ Internal State: Keeps track of the environment.
○​ World Model: Understands how actions change the world.
○​ Utility Function: Measures happiness or preference over different states.
○​ Decision Process: Computes the expected utility of each action and selects
the best one.
4.​ Advantages
○​ More flexible than goal-based agents.
○​ Can adapt to changing conditions (e.g., adjusting speed in heavy traffic).
○​ Works well in uncertain environments where outcomes are unpredictable.
5.​ Challenges
○​ Computationally expensive: Calculating expected utility for many possible
outcomes can be slow.
○​ Requires a well-defined utility function: Hard to design if preferences are
unclear or subjective.
○​ Not always perfect: Due to limited time and resources, approximate solutions
are often used.
6.​ Conclusion
○​ Utility-based agents maximize expected utility, making them smarter and
more adaptive than simple goal-based agents.
○​ They are widely used in robotics, economics, AI decision-making, and
autonomous systems.
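A small sketch of expected-utility maximisation, with invented probabilities and utilities for a hypothetical route choice:

```python
# Utility-based choice: weight each action's possible outcomes by their
# probabilities and pick the action with the highest expected utility.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose_action(action_models):
    """Select the action that maximises expected utility."""
    return max(action_models, key=lambda a: expected_utility(action_models[a]))

actions = {
    "highway":    [(0.8, 10), (0.2, -5)],   # fast, but small risk of a jam
    "side_roads": [(1.0, 4)],               # slower, but certain
}
best = choose_action(actions)
print(best, expected_utility(actions[best]))   # highway 7.0
```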
Learning Agents in AI
1.​ Definition of Learning Agents​

○​ These agents improve their performance over time by learning from
experiences.
○​ Instead of being manually programmed with all rules, they adapt to new
environments and make better decisions.
2.​ Components of a Learning Agent​

○​ Performance Element: Decides actions based on current knowledge.
○​ Learning Element: Improves the agent’s knowledge based on feedback.
○​ Critic: Evaluates performance using a fixed standard and provides feedback.
○​ Problem Generator: Encourages exploration by suggesting new actions to gain
better knowledge.
3.​ How Learning Works​

○​ The agent performs actions based on its current knowledge.
○​ The critic observes the outcome and tells the agent whether the action was
good or bad.
○​ The learning element updates the agent’s decision-making rules to avoid
mistakes and improve performance.
○​ The problem generator suggests experiments to explore better actions.
4.​ Advantages​

○​ Adaptability: Can handle new environments without manual reprogramming.
○​ Continuous Improvement: Gets better over time with more data.
○​ Efficient Learning: Uses feedback to refine decision-making.
5.​ Challenges​

○​ Requires Feedback: Needs a way to measure success (critic).
○​ Exploration vs. Exploitation: Balancing between trying new things and using
what works.
6.​ Conclusion​

○​ Learning agents continuously improve by modifying their components based
on feedback.
○​ They are used in robotics, self-driving cars, AI assistants, and automated
decision systems.
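The loop above can be sketched as a toy Python example; the action names, rewards, and forced-exploration schedule are all invented for illustration:

```python
# A toy learning-agent loop: the performance element picks actions, the
# critic supplies feedback, the learning element updates the estimates,
# and a simple problem generator forces periodic exploration.

estimates = {"A": 0.0, "B": 0.0}      # performance element's knowledge
counts = {"A": 0, "B": 0}
true_reward = {"A": 0.2, "B": 0.8}    # hidden quality of each action

for step in range(200):
    if step % 10 == 0:                # problem generator: try both actions
        action = "A" if (step // 10) % 2 == 0 else "B"
    else:                             # performance element: exploit knowledge
        action = max(estimates, key=estimates.get)
    reward = true_reward[action]      # critic: feedback on the outcome
    # Learning element: update the running average for this action.
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))   # B (the better action is learned)
```

The exploration-vs-exploitation tension mentioned above shows up directly: without the forced-exploration lines, the agent could settle on "A" forever after its first lucky estimate.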
RATIONAL AGENTS AND GOOD BEHAVIOUR

Rationality and Rational Agents

A rational agent is an entity that makes decisions based on available information to achieve the
best possible outcome. It selects actions that maximize its performance measure, given its
knowledge of the environment, possible actions, and past percepts.

Key Aspects of Rationality:

1.​ Performance Measure – Defines success based on the desirability of the agent’s
actions.
2.​ Knowledge of the Environment – Prior knowledge helps the agent make informed
decisions.
3.​ Available Actions – The agent can only choose from the actions it is capable of
performing.
4.​ Percept Sequence – The sequence of observations made by the agent guides its
decisions.

Rationality vs. Omniscience:

●​ Rational agents act based on expected outcomes, not perfect foresight.
●​ They gather new information (exploration) and learn from experiences to improve
performance.

A rational agent adapts to changing environments, avoids unnecessary actions, and modifies its
behavior based on experience.
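One way to make the performance-measure idea concrete: score a simple vacuum agent by awarding one point per clean square per time step. The world dynamics below are a simplified assumption:

```python
# Rationality via a performance measure: the agent earns one point per
# clean square per time step over a short simulated run.

def reflex_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

def run(agent, steps=4):
    world = {"A": "Dirty", "B": "Dirty"}
    location, score = "A", 0
    for _ in range(steps):
        action = agent((location, world[location]))
        if action == "Suck":
            world[location] = "Clean"
        elif action == "Right":
            location = "B"
        elif action == "Left":
            location = "A"
        # Performance measure: +1 for every clean square at each step.
        score += sum(1 for s in world.values() if s == "Clean")
    return score

print(run(reflex_agent))   # 6
```

An agent that sucked when squares were already clean, or dithered in place, would score lower under the same measure, which is exactly how rationality is compared here.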

PHYSICAL SYMBOL SYSTEM


A Physical Symbol System (PSS) is a fundamental concept in Artificial Intelligence (AI) that
describes a system capable of creating, manipulating, and interpreting symbols to generate
intelligent behavior.

Key Characteristics of a PSS:

1.​ Symbols – Basic units that represent objects, concepts, or ideas.
2.​ Symbol Structures – Organized collections of symbols that form meaningful patterns.
3.​ Processes – Rules and mechanisms for creating, modifying, and interpreting symbols.
4.​ Physical Realization – Exists in a physical form, such as a computer or human brain.

Significance in AI:

●​ Serves as the basis for rule-based AI, expert systems, and logic-based reasoning.
●​ Supports problem-solving, planning, and natural language processing.
●​ Forms the foundation of symbolic AI, contrasting with connectionist AI (neural
networks).

The Physical Symbol System Hypothesis states that any system capable of general
intelligence must be a physical symbol system.

SYMBOLIC AI
Symbolic AI (also called Good Old-Fashioned AI - GOFAI) is an approach to Artificial
Intelligence that uses symbols and rules to represent knowledge and solve problems. It is
based on the idea that human thinking can be represented using symbols (such as words,
numbers, or logic statements) and rules (if-then statements).

How Symbolic AI Works:

1.​ Symbols – Represent objects, actions, or concepts (e.g., "Apple" can represent a fruit).
2.​ Rules – Define relationships between symbols (e.g., "If it is red and round, it might be an
apple").
3.​ Inference Engine – Applies rules to symbols to draw conclusions and make decisions.
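The three parts above can be sketched as a tiny forward-chaining inference engine; the facts and rules are made-up examples in the spirit of the apple illustration:

```python
# Symbols + rules + inference engine: forward chaining fires every
# if-then rule whose conditions hold, until no new symbol is derived.

facts = {"red", "round"}                          # symbols known to be true
rules = [
    ({"red", "round"}, "might_be_apple"),         # if red and round -> ...
    ({"might_be_apple", "has_stem"}, "is_apple"),
]

def forward_chain(facts, rules):
    """Inference engine: repeatedly apply rules to derive new symbols."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
# ['might_be_apple', 'red', 'round']
```

Note that "is_apple" is not derived, because "has_stem" is not among the facts; adding it would let the second rule fire on the next pass.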

Example of Symbolic AI:

Expert Systems – These are AI programs that use stored knowledge and logical rules to solve
problems.​
👉 Example: MYCIN (an AI system in medicine) was used to diagnose bacterial infections
based on symptoms and lab results by applying predefined rules.

Advantages of Symbolic AI:

✔ Can explain its decisions (transparent reasoning).​
✔ Works well for tasks requiring logical reasoning and structured knowledge.

Limitations:

❌ Struggles with real-world uncertainty and complex data like images or speech.​
❌ Requires manual rule creation, which can be time-consuming.
Symbolic AI was the dominant AI approach before machine learning and neural networks
became popular. However, it is still useful in fields like law, healthcare, and expert
decision-making systems.

Environment (with respect to an Agent)


In AI, the environment refers to the external system in which an agent operates and interacts.
It provides percepts (inputs) to the agent and responds to the agent's actions, leading to
changes in its state. The environment can be classified based on properties such as
fully/partially observable, deterministic/stochastic, static/dynamic, and
discrete/continuous.
