AI Unit-1, 6th Sem

Artificial Intelligence (AI) simulates human intelligence in machines, enabling them to think, learn, and perform tasks. AI is categorized by capabilities (Narrow, General, Super AI) and functionalities (Reactive, Limited Memory, Theory of Mind, Self-Aware). The document also discusses the foundations, history, types of intelligent agents, and the importance of task environments in AI design.

UNIT-1

1. What is Artificial Intelligence (AI)?

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are
designed to think, learn, and perform tasks that typically require human cognition. AI systems are
capable of reasoning, problem-solving, decision-making, understanding natural language,
recognizing patterns, and adapting to new situations.

Types of AI

AI can be classified into different types based on capabilities and functionalities:

A. Based on Capabilities:

1. Narrow AI (Weak AI):

o Designed for a specific task (e.g., Siri, Google Assistant, Chatbots).

o Cannot perform tasks outside its designated function.

2. General AI (Strong AI):

o Possesses human-like intelligence.

o Can perform any intellectual task that a human can do.

3. Super AI:

o Hypothetical AI that surpasses human intelligence in all aspects.

o Can perform creative tasks, make decisions, and even improve itself.

B. Based on Functionalities:

1. Reactive Machines:

o No memory; only react to current situations (e.g., IBM’s Deep Blue Chess program).

2. Limited Memory:

o Can store past data to make decisions (e.g., self-driving cars).

3. Theory of Mind AI:

o Still under research; capable of understanding emotions and human thoughts.

4. Self-Aware AI:

o Future AI with consciousness and self-awareness.

2. Foundations of AI

The foundation of AI is based on multiple disciplines, including:

A. Mathematics

 Probability, statistics, and logic play a crucial role in AI algorithms.


 Bayesian networks, fuzzy logic, and optimization techniques.

B. Computer Science

 Programming languages like Python, Java, and C++.

 Algorithms, data structures, and computation theory.

C. Neuroscience

 AI models inspired by the human brain (e.g., Neural Networks).

 Understanding cognition and decision-making processes.

D. Psychology and Cognitive Science

 Helps in designing AI that mimics human thought processes.

 Learning theories, decision-making models.

E. Linguistics

 Natural Language Processing (NLP) enables machines to understand human language.

 Speech recognition and machine translation.

F. Philosophy

 Ethics of AI, consciousness, and machine morality.

3. History of AI

The development of AI can be divided into several phases:

A. Ancient and Classical Period

 Ancient myths and legends about artificial beings with intelligence (e.g., Greek mythology).

 Philosophers like Aristotle developed logic-based reasoning.

B. Early 20th Century

 Mathematicians such as Alan Turing and Kurt Gödel laid the foundation.

 The Turing Test, proposed by Alan Turing in 1950, was introduced as a criterion for machine intelligence.

C. AI Boom (1950s - 1970s)

 First AI programs developed, such as the Logic Theorist and General Problem Solver.

 John McCarthy coined the term Artificial Intelligence in 1956.

 Expert systems and symbolic reasoning were developed.

D. AI Winters (1970s - 1990s)

 Funding cuts due to unmet expectations.

 AI research slowed down due to limitations in computing power.


E. AI Resurgence (1990s - Present)

 Advances in machine learning, deep learning, and data availability.

 AI-powered systems like IBM’s Watson, AlphaGo, and ChatGPT emerged.

4. AI – Past, Present, and Future

A. Past

 Early AI was based on rule-based systems and symbolic reasoning.

 Limited by hardware constraints.

B. Present

 AI is widely used in healthcare, finance, entertainment, and autonomous systems.

 Machine learning, deep learning, and big data are driving AI advancements.

C. Future

 AI is expected to become more autonomous and creative.

 Advancements in Artificial General Intelligence (AGI) and Quantum AI.

 Ethical challenges and the risks of AI surpassing human control.

5. Intelligent Agents

An intelligent agent is an autonomous entity that perceives its environment and takes actions to
achieve goals.

A. Components of an Intelligent Agent

1. Perception (Sensors) – Gathers information from the environment (e.g., cameras, microphones).

2. Processing Unit – Decides what action to take.

3. Action (Actuators) – Performs actions based on decisions (e.g., robot arms, motors).
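The perceive-decide-act cycle described above can be sketched as a short loop. The vacuum-world agent below is a toy illustration; the percept format (location, status) is an assumption made for this example, not a standard API.

```python
# Minimal sketch of the perceive -> decide -> act cycle.
# The percept format (location, status) is an illustrative assumption.

def run_agent(agent_function, percepts):
    """Feed each percept to the agent function and collect the actions
    that the actuators would then carry out."""
    actions = []
    for percept in percepts:
        actions.append(agent_function(percept))
    return actions

def vacuum_agent(percept):
    """Toy agent function: clean a dirty square, otherwise move on."""
    location, status = percept
    return "Suck" if status == "Dirty" else "Move"

print(run_agent(vacuum_agent, [("A", "Dirty"), ("B", "Clean")]))
# ['Suck', 'Move']
```

Here the agent function is the mapping from percepts to actions; swapping in a different function changes the agent's behaviour without changing the loop.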

B. Types of Intelligent Agents

1. Simple Reflex Agents – React to specific conditions without memory.

2. Model-Based Agents – Use an internal model of the world to make decisions.

3. Goal-Based Agents – Choose actions to achieve a goal.

4. Utility-Based Agents – Optimize decision-making by maximizing utility.

5. Learning Agents – Adapt and improve over time.

6. Environments of Intelligent Agents


The environment in which an AI agent operates plays a crucial role in decision-making.

A. Types of AI Environments

1. Fully Observable vs. Partially Observable

o Fully Observable: AI has complete knowledge (e.g., chess).

o Partially Observable: AI lacks full information (e.g., self-driving cars).

2. Deterministic vs. Stochastic

o Deterministic: Future states are predictable.

o Stochastic: Random events impact outcomes.

3. Episodic vs. Sequential

o Episodic: Each task is independent.

o Sequential: Current actions impact future decisions.

4. Static vs. Dynamic

o Static: Environment doesn’t change over time.

o Dynamic: Environment evolves (e.g., stock markets).

5. Discrete vs. Continuous

o Discrete: Limited actions (e.g., board games).

o Continuous: Infinite possibilities (e.g., real-world navigation).

6. Single-Agent vs. Multi-Agent

o Single-Agent: AI interacts with the environment alone.

o Multi-Agent: AI competes or cooperates with other agents.

7. Specifying the Task Environment

Defining the task environment is critical for designing AI systems.

A. PEAS Model

The PEAS framework helps specify the task environment of an AI system:

1. Performance Measure – Evaluates how well the AI performs.

2. Environment – Defines the world where AI operates.

3. Actuators – Components that perform actions.

4. Sensors – Gather data from the environment.

Example: Self-Driving Car

PEAS Component        Description
Performance Measure   Safety, fuel efficiency, passenger comfort
Environment           Roads, traffic, weather conditions
Actuators             Steering, braking, acceleration
Sensors               Cameras, radar, GPS, lidar
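As a rough sketch, a PEAS specification like the table above can be captured in a plain data structure. The class and field names below are assumptions made for illustration, not a standard API.

```python
from dataclasses import dataclass

# Illustrative sketch: a PEAS specification as a plain data structure.
# The class name and fields are assumptions made for this example.

@dataclass
class PEAS:
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

self_driving_car = PEAS(
    performance_measure=["safety", "fuel efficiency", "passenger comfort"],
    environment=["roads", "traffic", "weather conditions"],
    actuators=["steering", "braking", "acceleration"],
    sensors=["cameras", "radar", "GPS", "lidar"],
)
print(self_driving_car.sensors)
# ['cameras', 'radar', 'GPS', 'lidar']
```

Writing the specification down this way forces each of the four PEAS components to be stated explicitly before the agent is designed.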

Task Environment and Its Specification

Definition of Task Environment

A task environment refers to the external setting in which an intelligent agent operates. It
encompasses everything the agent can perceive, the actions it can take, and the goals it must
achieve. Understanding the task environment is essential in designing effective AI systems.

PEAS Framework

The PEAS framework (Performance Measure, Environment, Actuators, Sensors) helps in defining
and structuring task environments clearly. Let’s go into more detail about each component:

1. Performance Measure:

o Defines the success criteria for the agent.

o Examples:

 In a self-driving car, the performance measure could include safety, fuel efficiency, travel time, and passenger comfort.

 In a chess-playing AI, the performance measure could be winning the game while minimizing the number of moves.

2. Environment:

o Refers to the external world with which the agent interacts.

o The environment can be real-world (like a robotic vacuum cleaning a house) or virtual (like a chatbot answering queries).

3. Actuators:

o These are the components that allow the agent to take actions and affect the
environment.

o Examples:

 A robotic arm's actuators could be motors and joints.

 A self-driving car's actuators include steering, accelerator, and braking systems.

4. Sensors:

o These allow the agent to perceive the environment by collecting data.

o Examples:

 A self-driving car has cameras, LiDAR, GPS, and ultrasonic sensors.

 A chatbot’s sensors include text or voice input from the user.

Example of a Task Environment Using the PEAS Framework

Agent: Self-driving car
  Performance Measure: Safety, speed, efficiency
  Environment: Roads, traffic, weather
  Actuators: Steering, accelerator, brake
  Sensors: Cameras, LiDAR, GPS

Agent: Chess-playing AI
  Performance Measure: Winning, efficiency of moves
  Environment: Chessboard, opponent
  Actuators: Moving chess pieces
  Sensors: Camera (for real chess), board state memory

Agent: Robotic vacuum
  Performance Measure: Cleanliness, battery usage
  Environment: House, furniture, dust
  Actuators: Wheels, vacuum motor
  Sensors: Bump sensors, cameras

Properties of Task Environments

Understanding the characteristics of a task environment helps determine the appropriate design for
an agent. Here are some detailed classifications:

1. Observability

 Fully Observable: The agent has complete knowledge of the environment at any given time.

o Example: Chess, where the entire board is visible.

 Partially Observable: The agent has limited knowledge and must infer missing information.

o Example: Poker, where opponents' cards are hidden.

2. Determinism

 Deterministic: The outcome of an action is predictable and certain.

o Example: A chess move always results in a predictable board state.

 Stochastic: The outcome of an action has randomness involved.

o Example: A self-driving car's actions depend on unpredictable factors like pedestrian movement.

3. Episodic vs. Sequential


 Episodic: Each action is independent of previous actions.

o Example: Classifying images, where each image is analyzed separately.

 Sequential: The current action affects future outcomes.

o Example: Driving a car, where every turn and brake decision affects the journey.

4. Static vs. Dynamic

 Static: The environment remains unchanged while the agent decides.

o Example: Solving a crossword puzzle (the puzzle doesn't change).

 Dynamic: The environment changes over time, requiring real-time decision-making.

o Example: A stock-trading AI responding to fluctuating market conditions.

5. Discrete vs. Continuous

 Discrete: The number of possible states is finite.

o Example: A chess game (finite board positions).

 Continuous: The number of states is infinite.

o Example: A robotic arm moving freely in 3D space.

6. Single-agent vs. Multi-agent

 Single-agent: The agent operates alone without competition or cooperation.

o Example: A puzzle-solving AI.

 Multi-agent: The agent interacts with other agents, either competitively (chess) or
cooperatively (self-driving cars navigating traffic together).

Agent-Based Programs and Structure of Agents

An agent-based program is a system designed to perceive its environment, process the information,
and take appropriate actions.

Structure of an Agent

The general architecture of an agent consists of the following:

1. Perception:

o The agent gathers input from sensors (e.g., a camera, microphone, or GPS).

2. Agent Function:

o A mapping from percepts to actions. This function determines what the agent does
given its sensory inputs.

3. Decision-Making Unit:

o Processes sensor data and determines the best course of action.


o Can be based on rules, learning, or planning algorithms.

4. Actuators:

o Allow the agent to interact with and modify the environment (e.g., robotic arms,
chatbots sending responses).

Types of Agents

1. Simple Reflex Agents

 React to the current situation using condition-action rules (if-then statements).

 No memory or internal state tracking.

 Fast and efficient but limited in capability.

 Example: A thermostat turns on heating when the temperature is below 20°C.

Limitations: Cannot handle partial observability or complex decision-making.
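The thermostat example above is a single condition-action rule, which can be sketched directly. The 20°C threshold comes from the example; the function name and return values are illustrative assumptions.

```python
# Sketch of a simple reflex agent: a thermostat with one
# condition-action (if-then) rule. No memory: only the current
# percept (the temperature reading) matters.

def thermostat_agent(temperature_c):
    if temperature_c < 20:
        return "heating_on"
    return "heating_off"

print(thermostat_agent(18))  # heating_on
print(thermostat_agent(23))  # heating_off
```

Because the rule looks only at the current percept, the agent cannot react to anything it is not sensing right now, which is exactly the partial-observability limitation noted above.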

2. Model-Based Reflex Agents

 Maintain an internal model of the environment to handle partial observability.

 Can track previous states to infer missing information.

 Example: A self-driving car remembers where pedestrians were last seen, even if they're
temporarily hidden by another vehicle.

Limitations: Requires memory and additional processing power.
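The pedestrian example above can be sketched as an agent that keeps an internal model (the last known position of each pedestrian) and falls back on it when the sensor view is blocked. The percept format and class below are assumptions made for illustration.

```python
# Sketch of a model-based reflex agent. The internal model here is a
# dictionary of last-seen pedestrian positions, used when the current
# percept is missing (pedestrian temporarily hidden).

class PedestrianTracker:
    def __init__(self):
        self.last_seen = {}  # internal model of the world

    def act(self, percept):
        pedestrian_id, position = percept
        if position is not None:            # pedestrian currently visible
            self.last_seen[pedestrian_id] = position
        # fall back on the model when the sensor view is blocked
        believed = self.last_seen.get(pedestrian_id)
        return "slow_down" if believed is not None else "proceed"

tracker = PedestrianTracker()
print(tracker.act(("p1", (4, 2))))  # visible        -> slow_down
print(tracker.act(("p1", None)))    # hidden, but remembered -> slow_down
```

The extra memory (`last_seen`) is what the "requires memory and additional processing power" limitation refers to.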

3. Goal-Based Agents

 Make decisions based on long-term goals rather than just reacting.

 Require search and planning to determine the best course of action.

 Example: A chess-playing AI plans moves to checkmate an opponent instead of just capturing pieces.

Limitations: Requires more computation and planning algorithms.
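The search-and-planning step mentioned above can be sketched with the simplest planner of all: breadth-first search for a shortest sequence of states leading to the goal. The tiny state graph below is an illustrative assumption, not part of the original example.

```python
from collections import deque

# Sketch of goal-based behaviour: breadth-first search for a shortest
# path of states from start to goal. The graph is a made-up toy example.

def plan(start, goal, neighbors):
    """Return a shortest path of states from start to goal, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(plan("A", "D", graph))  # ['A', 'B', 'D']
```

This is the extra computation the limitation refers to: instead of reacting to the current state, the agent explores possible futures before acting.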

4. Utility-Based Agents

 Consider multiple possible goals and select the action that maximizes overall utility.

 Use a utility function to evaluate different options.

 Example: A self-driving car balances safety, speed, and fuel efficiency when deciding how to
drive.
Limitations: Requires a well-designed utility function and is computationally expensive.
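The self-driving car trade-off above can be sketched as a weighted utility function: each candidate action gets a score, and the agent picks the maximum. The weights, scores, and candidate actions below are illustrative assumptions.

```python
# Sketch of a utility-based agent: score each candidate action with a
# weighted utility function and choose the maximum. All numbers here
# are illustrative assumptions.

def utility(option, weights=(0.6, 0.25, 0.15)):
    w_safety, w_speed, w_fuel = weights
    return (w_safety * option["safety"]
            + w_speed * option["speed"]
            + w_fuel * option["fuel_efficiency"])

candidates = [
    {"name": "brake",      "safety": 0.9, "speed": 0.2, "fuel_efficiency": 0.8},
    {"name": "maintain",   "safety": 0.6, "speed": 0.7, "fuel_efficiency": 0.7},
    {"name": "accelerate", "safety": 0.3, "speed": 0.9, "fuel_efficiency": 0.4},
]
best = max(candidates, key=utility)
print(best["name"])  # brake
```

Note how the choice depends entirely on the weights: designing that utility function well is exactly the limitation stated above.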

Summary:

 The task environment defines how an agent interacts with the world.

 PEAS framework helps specify task environments.

 Different properties (observability, determinism, etc.) impact agent design.

 Four main types of agents exist:

o Simple Reflex Agents – React based on conditions.

o Model-Based Agents – Use memory and state tracking.

o Goal-Based Agents – Aim for long-term objectives.

o Utility-Based Agents – Optimize for multiple factors.
