
UNIT 1

Definition of Artificial Intelligence.

Artificial intelligence (AI) is a field of study that focuses on creating machines and computers
that can perform tasks that usually require human intelligence. AI uses algorithms, data, and
computational power to simulate human intelligence.

AI can perform a variety of tasks, including:

learning, reasoning, problem-solving, perception, language understanding, interacting with the environment, and exercising creativity.

AI is a broad field that includes many disciplines, such as computer science, data analytics, and
neuroscience. Some examples of AI in everyday life include voice assistants like Alexa and
Siri, and customer service chatbots.

AI can be applied to real-world problems, and companies can use it to make their businesses
more efficient and profitable. However, the value of AI comes from how companies use it to
assist humans, and how they explain what the systems do to build trust.

History of Artificial Intelligence

Artificial Intelligence is not a new term or a new technology for researchers; the idea is much
older than you might imagine. There are even myths of mechanical men in ancient Greek and
Egyptian mythology. The following are some milestones in the history of AI that trace the
journey from AI's origins to its development today.
Maturation of Artificial Intelligence (1943-1952)

Between 1943 and 1952, there was notable progress in the expansion of artificial intelligence
(AI). Throughout this period, AI transitioned from a mere concept to tangible experiments and
practical applications. Here are some key events that happened during this period:

o Year 1943: The first work which is now recognized as AI was done by Warren
McCulloch and Walter Pitts in 1943. They proposed a model of artificial neurons.
o Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection
strength between neurons. His rule is now called Hebbian learning.
o Year 1950: Alan Turing, an English mathematician and a pioneer of machine
learning, published "Computing Machinery and Intelligence" in 1950, in which he
proposed a test to check a machine's ability to exhibit intelligent behavior
equivalent to human intelligence, now called the Turing test.
o Year 1951: Marvin Minsky and Dean Edmonds created the initial artificial neural
network (ANN) named SNARC. They utilized 3,000 vacuum tubes to mimic a network
of 40 neurons.

The birth of Artificial Intelligence (1952-1956)


From 1952 to 1956, AI surfaced as a unique domain of investigation. During this period,
pioneers and forward-thinkers commenced the groundwork for what would ultimately
transform into a revolutionary technological domain. Here are notable occurrences from this
era:

o Year 1952: Arthur Samuel pioneered the creation of the Samuel Checkers-Playing
Program, which marked the world's first self-learning program for playing games.
o Year 1955: Allen Newell and Herbert A. Simon created the "first artificial
intelligence program", which was named the "Logic Theorist". This program proved
38 of 52 mathematics theorems and found new, more elegant proofs for some of
them.
o Year 1956: The term "Artificial Intelligence" was first adopted by the American computer
scientist John McCarthy at the Dartmouth Conference. For the first time, AI was coined as
an academic field.

Around that time, high-level computer languages such as FORTRAN, LISP, and COBOL were
invented, and enthusiasm for AI was very high.

The golden years-Early enthusiasm (1956-1974)

The period from 1956 to 1974 is commonly known as the "Golden Age" of artificial
intelligence (AI). In this timeframe, AI researchers and innovators were filled with enthusiasm
and achieved remarkable advancements in the field. Here are some notable events from this
era:

o Year 1958: During this period, Frank Rosenblatt introduced the perceptron, one of the
early artificial neural networks with the ability to learn from data. This invention laid
the foundation for modern neural networks. Simultaneously, John McCarthy developed
the Lisp programming language, which swiftly found favor within the AI community,
becoming highly popular among developers.
o Year 1959: Arthur Samuel is credited with introducing the phrase "machine learning"
in a pivotal paper in which he proposed that computers could be programmed to surpass
their creators in performance. Additionally, Oliver Selfridge made a notable
contribution to machine learning with his publication "Pandemonium: A Paradigm for
Learning." This work outlined a model capable of self-improvement, enabling it to
discover patterns in events more effectively.
o Year 1964: During his time as a doctoral candidate at MIT, Daniel Bobrow created
STUDENT, one of the early programs for natural language processing (NLP), with the
specific purpose of solving algebra word problems.
o Year 1965: The initial expert system, Dendral, was devised by Edward Feigenbaum,
Bruce G. Buchanan, Joshua Lederberg, and Carl Djerassi. It aided organic chemists in
identifying unfamiliar organic compounds.
o Year 1966: Researchers emphasized developing algorithms that could solve
mathematical problems. Joseph Weizenbaum created the first chatbot, named ELIZA,
in 1966. Furthermore, the Stanford Research Institute created Shakey, the
earliest mobile intelligent robot incorporating AI, computer vision, navigation, and
NLP. It can be considered a precursor to today's self-driving cars and drones.
o Year 1968: Terry Winograd developed SHRDLU, which was the pioneering
multimodal AI capable of following user instructions to manipulate and reason within
a world of blocks.
o Year 1969: Arthur Bryson and Yu-Chi Ho outlined a learning algorithm known as
backpropagation, which enabled the development of multilayer artificial neural
networks. This represented a significant advancement beyond the perceptron and laid
the groundwork for deep learning. Additionally, Marvin Minsky and Seymour Papert
authored the book "Perceptrons," which elucidated the constraints of basic neural
networks. This publication led to a decline in neural network research and a resurgence
in symbolic AI research.
o Year 1972: The first intelligent humanoid robot was built in Japan, which was named
WABOT-1.
o Year 1973: James Lighthill published the report titled "Artificial Intelligence: A
General Survey," resulting in a substantial reduction in the British government's
backing for AI research.

The first AI winter (1974-1980)

The initial AI winter, from 1974 to 1980, was a tough period for artificial
intelligence (AI). During this time, there was a substantial decrease in research funding, and
AI faced a widespread sense of disappointment.
o The period between 1974 and 1980 was the first AI winter. The term AI winter
refers to a period in which computer scientists dealt with a severe shortage of
government funding for AI research.
o During AI winters, public interest in artificial intelligence declined.

A boom of AI (1980-1987)

Between 1980 and 1987, AI underwent a renaissance and newfound vitality after the
challenging era of the First AI Winter. Here are notable occurrences from this timeframe:

o In 1980, the first national conference of the American Association of Artificial
Intelligence was held at Stanford University.
o Year 1980: After AI's winter duration, AI came back with an "Expert System". Expert
systems were programmed to emulate the decision-making ability of a human expert.
Additionally, Symbolics Lisp machines were brought into commercial use, marking the
onset of an AI resurgence. However, in subsequent years, the Lisp machine market
experienced a significant downturn.
o Year 1981: Danny Hillis created parallel computers tailored for AI and various
computational functions, featuring an architecture akin to contemporary GPUs.
o Year 1984: Marvin Minsky and Roger Schank introduced the phrase "AI winter"
during a gathering of the Association for the Advancement of Artificial Intelligence.
They cautioned the business world that exaggerated expectations about AI would result
in disillusionment and the eventual downfall of the industry, which indeed occurred
three years later.
o Year 1985: Judea Pearl introduced Bayesian network causal analysis, presenting
statistical methods for encoding uncertainty in computer systems.

The second AI winter (1987-1993)

o The period between 1987 and 1993 was the second AI winter.
o Investors and governments again stopped funding AI research because of the high costs
and limited results. Expert systems such as XCON, although initially very cost effective,
proved expensive to maintain and update.

The emergence of intelligent agents (1993-2011)


Between 1993 and 2011, there were significant leaps forward in artificial intelligence (AI),
particularly in the development of intelligent computer programs. During this era, AI
professionals shifted their emphasis from attempting to match human intelligence to crafting
pragmatic, ingenious software tailored to specific tasks. Here are some noteworthy occurrences
from this timeframe:

o Year 1997: In 1997, IBM's Deep Blue achieved a historic milestone by defeating world
chess champion Garry Kasparov, marking the first time a computer triumphed over a
reigning world chess champion. Moreover, Sepp Hochreiter and Jürgen Schmidhuber
introduced the Long Short-Term Memory recurrent neural network, revolutionizing the
capability to process entire sequences of data such as speech or video.
o Year 2002: For the first time, AI entered the home in the form of Roomba, a robotic
vacuum cleaner.
o Year 2006: By 2006, AI had entered the business world. Companies like
Facebook, Twitter, and Netflix started using AI.
o Year 2009: Rajat Raina, Anand Madhavan, and Andrew Ng released the paper
"Large-scale Deep Unsupervised Learning using Graphics Processors,"
introducing the concept of employing GPUs for the training of expansive neural
networks.
o Year 2011: Jürgen Schmidhuber, Dan Claudiu Cireșan, Ueli Meier, and Jonathan
Masci created the initial CNN that attained "superhuman" performance by emerging as
the victor in the German Traffic Sign Recognition competition. Furthermore, Apple
launched Siri, a voice-activated personal assistant capable of generating responses and
executing actions in response to voice commands.

Deep learning, big data and artificial general intelligence (2011-present)

From 2011 to the present moment, significant advancements have unfolded within the artificial
intelligence (AI) domain. These achievements can be attributed to the amalgamation of deep
learning, extensive data application, and the ongoing quest for artificial general intelligence
(AGI). Here are notable occurrences from this timeframe:

o Year 2011: In 2011, IBM's Watson won Jeopardy, a quiz show in which it had to solve
complex questions as well as riddles. Watson proved that it could understand
natural language and solve tricky questions quickly.
o Year 2012: Google launched an Android app feature, "Google Now", which was able
to provide information to the user as a prediction. Further, Geoffrey Hinton, Ilya
Sutskever, and Alex Krizhevsky presented a deep CNN structure that emerged
victorious in the ImageNet challenge, sparking the proliferation of research and
application in the field of deep learning.
o Year 2013: China's Tianhe-2 system achieved a remarkable feat by doubling the speed
of the world's leading supercomputers to reach 33.86 petaflops. It retained its status as
the world's fastest system for the third consecutive time. Furthermore, DeepMind
unveiled deep reinforcement learning, a CNN that acquired skills through repetitive
learning and rewards, ultimately surpassing human experts in playing games. Also,
Google researcher Tomas Mikolov and his team introduced Word2vec, a tool designed
to automatically discern the semantic connections among words.
o Year 2014: In 2014, the chatbot "Eugene Goostman" won a competition based on the
famous "Turing test." Ian Goodfellow and his team pioneered generative
adversarial networks (GANs), a type of machine learning framework employed for
producing images, altering pictures, and crafting deepfakes, while Diederik Kingma and
Max Welling introduced variational autoencoders (VAEs) for generating images,
videos, and text. Also, Facebook engineered the DeepFace deep learning facial
recognition system, capable of identifying human faces in digital images with accuracy
nearly comparable to human capabilities.
o Year 2016: DeepMind's AlphaGo secured victory over the esteemed Go player Lee
Sedol in Seoul, South Korea, prompting reminiscence of the Kasparov chess match
against Deep Blue nearly two decades earlier. In the same year, Uber initiated a pilot
program for self-driving cars in Pittsburgh, catering to a limited group of users.
o Year 2018: The "Project Debater" from IBM debated on complex topics with two
master debaters and also performed extremely well.
o Google has demonstrated an AI program, "Duplex," which was a virtual assistant that
had taken hairdresser appointments on call, and the lady on the other side didn't notice
that she was talking with the machine.
o Year 2021: OpenAI unveiled the Dall-E multimodal AI system, capable of producing
images based on textual prompts.
o Year 2022: In November, OpenAI launched ChatGPT, offering a chat-oriented
interface to its GPT-3.5 LLM.
AI has now developed to a remarkable level. Deep learning, big data, and data
science are booming fields, and companies such as Google, Facebook, IBM, and
Amazon are working with AI and creating impressive products. The future of Artificial
Intelligence is promising, with ever higher levels of intelligence expected.

GOOD BEHAVIOUR IN AI

Good behavior in AI refers to ensuring that artificial intelligence systems operate in a way that
aligns with human values, ethics, and societal norms. It encompasses various principles and
guidelines aimed at making AI systems safe, fair, transparent, and accountable. Some of the
key aspects of good behavior in AI are:

1. Ethical Decision-Making:

AI should be designed to make decisions that are ethical and considerate of human values. This
involves avoiding decisions that could cause harm to individuals or society. Ethical guidelines,
such as those related to fairness, justice, and respect for autonomy, should guide the AI’s
decision-making process.

2. Transparency and Explainability:

A well-behaved AI should be transparent in its operations. This means that users should be able
to understand how an AI system arrives at its decisions. Explainability is crucial for building
trust, especially in areas like healthcare or finance, where AI systems can have significant
impacts.

3. Fairness and Non-Discrimination:

Good behavior in AI means avoiding biases that could result in unfair treatment of individuals
based on race, gender, age, or other attributes. This requires careful training, testing, and
monitoring of AI models to prevent and correct any discriminatory patterns.

4. Accountability:
AI systems must have a clear accountability structure. If an AI makes an incorrect or harmful
decision, it should be possible to identify where the responsibility lies—whether with the
developer, the organization deploying the AI, or the AI itself in some cases. This ensures there
is a way to address and correct any negative impacts.

5. Privacy and Security:

A responsible AI respects user privacy and is designed with strong data protection measures.
It should not misuse personal data or make decisions that infringe on an individual’s right to
privacy. Additionally, it should be safeguarded against malicious attacks or misuse.

6. Safety and Reliability:

AI systems must be designed to operate safely and predictably, even in unforeseen
circumstances. This includes ensuring robustness against potential failures or adversarial
attacks and minimizing risks to human safety.

7. Human-Centric Design:

Good AI behavior involves putting human interests at the center. AI should augment human
capabilities and not replace or undermine human judgment. It should support human decision-
making rather than dictate it.

8. Beneficial Use:

AI should be used for the benefit of society. Good behavior implies that AI is used to solve
real-world problems and contribute positively to areas like healthcare, education, and
environmental sustainability, without being used maliciously or harmfully.

9. Avoiding Manipulation:

AI should not be used to manipulate, deceive, or exploit people’s vulnerabilities. This includes
avoiding the use of dark patterns, misinformation, or any tactics that could trick users into
harmful behaviors.

10. Continuous Monitoring and Evaluation:


AI systems should be continuously monitored and evaluated to ensure that they continue to
behave as intended over time. This is especially important as AI systems can evolve or learn in
unexpected ways that might introduce unwanted behaviors.

Example Scenario

If an AI is used for loan approvals, good behavior would involve:

 Ensuring transparency by explaining to applicants why they were approved or rejected.
 Fairness by making decisions without bias toward any demographic group.
 Accountability by allowing appeals and having mechanisms for human review.
 Privacy by safeguarding personal information.

By adhering to these principles, AI can become a powerful tool that works harmoniously with
human values and societal goals.

Rationality in Artificial Intelligence

In AI, rationality means designing agents that make decisions to maximize their chances of
achieving their goals. An AI is considered rational if it consistently takes actions that align with
its predefined objectives, given its knowledge and available resources.

Example of Rationality

Imagine you are a manager deciding whether to invest in a new project. A rational approach
would involve:

1. Analyzing the expected returns on investment.


2. Considering the potential risks and uncertainties.
3. Gathering as much relevant data as possible.
4. Weighing this information to decide if the benefits outweigh the costs.

If you decide to invest after careful evaluation, it would be seen as a rational decision, assuming
the analysis was conducted logically and objectively.
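To make the weighing in step 4 concrete, it can be expressed as a simple expected-value calculation. The sketch below is illustrative only; the probabilities and payoffs are made-up numbers, not figures from these notes.

# Illustrative expected-value comparison for the investment decision above.
# All probabilities and payoffs are hypothetical.
outcomes = [
    (0.6, 200_000),    # 60% chance the project earns a 200k profit
    (0.3, 50_000),     # 30% chance of a modest 50k profit
    (0.1, -150_000),   # 10% chance of a 150k loss
]
expected_return = sum(p * payoff for p, payoff in outcomes)
investment_cost = 80_000

print(expected_return)                                   # 120000.0
print("Invest" if expected_return > investment_cost else "Do not invest")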

Rationality vs. Irrationality


While rationality is guided by logic and evidence, irrationality involves decisions driven by
emotions, biases, or faulty reasoning. For example, a person who gambles impulsively despite
the low probability of winning is acting irrationally, as their decisions are influenced more by
emotion than by logical assessment.

By understanding and applying the principles of rationality, individuals and organizations can
make better, more informed decisions that lead to optimal outcomes.

Agents in Artificial Intelligence


In artificial intelligence, an agent is a computer program or system that is designed to perceive
its environment, make decisions and take actions to achieve a specific goal or set of goals. The
agent operates autonomously, meaning it is not directly controlled by a human operator.

Agents can be classified into different types based on their characteristics, such as whether they
are reactive or proactive, whether they have a fixed or dynamic environment, and whether they
are single or multi-agent systems.

 Reactive agents are those that respond to immediate stimuli from their environment and
take actions based on those stimuli. Proactive agents, on the other hand, take initiative and
plan ahead to achieve their goals. The environment in which an agent operates can also be
fixed or dynamic. Fixed environments have a static set of rules that do not change, while
dynamic environments are constantly changing and require agents to adapt to new
situations.
 Multi-agent systems involve multiple agents working together to achieve a common goal.
These agents may have to coordinate their actions and communicate with each other to
achieve their objectives. Agents are used in a variety of applications, including robotics,
gaming, and intelligent systems. They can be implemented using different programming
languages and techniques, including machine learning and natural language processing.
Artificial intelligence is defined as the study of rational agents. A rational agent could be
anything that makes decisions, such as a person, firm, machine, or software. It carries out an
action with the best outcome after considering past and current percepts (the agent's perceptual
inputs at a given instance). An AI system is composed of an agent and its environment. The
agents act in their environment. The environment may contain other agents.
An agent is anything that can be viewed as:
 Perceiving its environment through sensors and
 Acting upon that environment through actuators
Note: Every agent can perceive its own actions (but not always the effects).

Interaction of Agents with the Environment

Structure of an AI Agent
To understand the structure of Intelligent Agents, we should be familiar
with Architecture and Agent programs. Architecture is the machinery that the agent executes
on. It is a device with sensors and actuators, for example, a robotic car, a camera, and a PC. An
agent program is an implementation of an agent function. An agent function is a map from
the percept sequence(history of all that an agent has perceived to date) to an action.

Agent = Architecture + Agent Program
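As an illustration of this mapping, the sketch below implements a toy agent program as a lookup from the percept sequence to an action. It is a minimal, hypothetical example (a two-location vacuum world), not code from the syllabus.

# Minimal sketch of an agent program: a mapping from the percept sequence
# (the history of everything perceived so far) to an action.

class TableDrivenAgent:
    def __init__(self, table):
        self.table = table        # maps tuples of percepts to actions
        self.percepts = []        # percept sequence seen so far

    def program(self, percept):
        # Record the new percept and look up the action for the whole sequence.
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), "NoOp")

# Hypothetical table for a two-location vacuum world
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = TableDrivenAgent(table)
print(agent.program(("A", "Clean")))   # -> Right
print(agent.program(("B", "Dirty")))   # -> Suck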

There are many examples of agents in artificial intelligence. Here are a few:

 Intelligent personal assistants: These are agents that are designed to help users with
various tasks, such as scheduling appointments, sending messages, and setting reminders.
Examples of intelligent personal assistants include Siri, Alexa, and Google Assistant.
 Autonomous robots: These are agents that are designed to operate autonomously in the
physical world. They can perform tasks such as cleaning, sorting, and delivering goods.
Examples of autonomous robots include the Roomba vacuum cleaner and the Amazon
delivery robot.
 Gaming agents: These are agents that are designed to play games, either against human
opponents or other agents. Examples of gaming agents include chess-playing agents and
poker-playing agents.
 Fraud detection agents: These are agents that are designed to detect fraudulent behavior
in financial transactions. They can analyze patterns of behavior to identify suspicious
activity and alert authorities. Examples of fraud detection agents include those used by
banks and credit card companies.
 Traffic management agents: These are agents that are designed to manage traffic flow in
cities. They can monitor traffic patterns, adjust traffic lights, and reroute vehicles to
minimize congestion. Examples of traffic management agents include those used in smart
cities around the world.
 A software agent has keystrokes, file contents, and received network packets acting as
sensors, and displays on the screen, files, and sent network packets acting as actuators.
 A human agent has eyes, ears, and other organs which act as sensors, and hands, legs,
mouth, and other body parts which act as actuators.
 A robotic agent has cameras and infrared range finders which act as sensors, and various
motors which act as actuators.
Characteristics of an Agent

Types of Agents
Agents can be grouped into the following classes based on their degree of perceived intelligence
and capability:

 Simple Reflex Agents


 Model-Based Reflex Agents
 Goal-Based Agents
 Utility-Based Agents
 Learning Agent
 Multi-agent systems
 Hierarchical agents

Simple Reflex Agents

Simple reflex agents ignore the rest of the percept history and act only on the basis of
the current percept. Percept history is the history of all that an agent has perceived to date.
Problems with Simple reflex agents are :

 Very limited intelligence.


 No knowledge of non-perceptual parts of the state.
 Usually too big to generate and store.
 If there occurs any change in the environment, then the collection of rules needs to be
updated.
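A minimal sketch of a simple reflex agent, using a hypothetical two-location vacuum world and condition-action rules applied to the current percept only:

# Simple reflex agent for a two-location vacuum world: it looks only at
# the current percept and never at the percept history.

def simple_reflex_vacuum_agent(percept):
    location, status = percept       # e.g. ("A", "Dirty")
    if status == "Dirty":            # condition-action rules
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))   # -> Left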


Model-Based Reflex Agents

It works by finding a rule whose condition matches the current situation. A model-based agent
can handle partially observable environments by the use of a model about the world.
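A minimal sketch of a model-based reflex agent, again using a hypothetical two-location vacuum world; the internal model lets it act sensibly even when part of the environment has not been observed yet:

# Model-based reflex agent: it keeps an internal state that is updated
# from each percept, so it can handle a partially observable environment.

class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal model: believed status of each location (unknown at first)
        self.model = {"A": None, "B": None}

    def program(self, percept):
        location, status = percept
        self.model[location] = status            # update the internal state
        if status == "Dirty":
            return "Suck"
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"                        # model says everything is clean
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent.program(("A", "Dirty")))   # -> Suck
print(agent.program(("A", "Clean")))   # -> Right (B's status is still unknown)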

Goal-Based Agents

These kinds of agents take decisions based on how far they currently are from
their goal (a description of desirable situations). Their every action is intended to reduce the
distance from the goal. This gives the agent a way to choose among multiple possibilities,
selecting the one which reaches a goal state.
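A minimal sketch of a goal-based agent on a hypothetical grid world: each candidate action is simulated, and the one that most reduces the distance to the goal is chosen.

# Goal-based agent on a grid: simulate each possible move and pick the
# one whose resulting position is closest to the goal.

def goal_based_agent(position, goal):
    actions = {"Up": (0, 1), "Down": (0, -1), "Left": (-1, 0), "Right": (1, 0)}

    def distance(p):
        # Manhattan distance from position p to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    return min(actions, key=lambda a: distance((position[0] + actions[a][0],
                                                position[1] + actions[a][1])))

print(goal_based_agent((0, 0), goal=(3, 2)))   # -> "Up" (ties with "Right"; both reduce the distance)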

Utility-Based Agents

Agents that are developed with their end uses (utilities) as building blocks are called utility-
based agents. When there are multiple possible alternatives, utility-based agents are used to
decide which one is best. They choose actions based on a preference (utility) for each
state.
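A minimal sketch of a utility-based agent: among several possible actions, it picks the one whose resulting state has the highest utility. The route-choice example and its numbers are hypothetical.

# Utility-based agent: choose the action whose resulting state maximizes utility.

def utility_based_agent(state, actions, result, utility):
    # result(state, action) -> next state; utility(state) -> numeric preference
    return max(actions, key=lambda a: utility(result(state, a)))

# Hypothetical example: choosing a route by (negated) travel time
routes = ["highway", "city", "back_roads"]
travel_time = {"highway": 30, "city": 45, "back_roads": 50}   # minutes (made-up)

best = utility_based_agent(
    state=None,
    actions=routes,
    result=lambda s, a: a,               # here the resulting "state" is just the chosen route
    utility=lambda s: -travel_time[s],   # shorter travel time = higher utility
)
print(best)   # -> "highway"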

Learning Agent
A learning agent in AI is a type of agent that can learn from its past experiences; that is, it has
learning capabilities. It starts by acting with basic knowledge and then adapts
automatically through learning.
Multi-Agent Systems
These agents interact with other agents to achieve a common goal. They may have to coordinate
their actions and communicate with each other to achieve their objective.

Hierarchical Agents
These agents are organized into a hierarchy, with high-level agents overseeing the behavior of
lower-level agents. The high-level agents provide goals and constraints, while the low-level
agents carry out specific tasks. Hierarchical agents are useful in complex environments with
many tasks and sub-tasks.

Uses of Agents
Agents are used in a wide range of applications in artificial intelligence, including:

 Robotics: Agents can be used to control robots and automate tasks in manufacturing,
transportation, and other industries.
 Smart homes and buildings: Agents can be used to control heating, lighting, and other
systems in smart homes and buildings, optimizing energy use and improving comfort.
 Transportation systems: Agents can be used to manage traffic flow, optimize routes for
autonomous vehicles, and improve logistics and supply chain management.
 Healthcare: Agents can be used to monitor patients, provide personalized treatment plans,
and optimize healthcare resource allocation.
 Finance: Agents can be used for automated trading, fraud detection, and risk management
in the financial industry.
 Games: Agents can be used to create intelligent opponents in games and simulations,
providing a more challenging and realistic experience for players.
 Natural language processing: Agents can be used for language translation, question
answering, and chatbots that can communicate with users in natural language.
 Cybersecurity: Agents can be used for intrusion detection, malware analysis, and network
security.
 Environmental monitoring: Agents can be used to monitor and manage natural resources,
track climate change, and improve environmental sustainability.
 Social media: Agents can be used to analyze social media data, identify trends and patterns,
and provide personalized recommendations to users.

Agent Environment in AI

An environment is everything in the world which surrounds the agent, but it is not a part of the
agent itself. An environment can be described as the situation in which an agent is present.
The environment is where the agent lives and operates, and it provides the agent with something
to sense and act upon. An environment is often said to be non-deterministic.

Features of Environment

1. Fully observable vs Partially observable
2. Deterministic vs Stochastic
3. Episodic vs Sequential
4. Single-agent vs Multi-agent
5. Static vs Dynamic
6. Discrete vs Continuous

1. Fully observable vs Partially Observable:

o If an agent's sensors can sense or access the complete state of an environment at each
point in time, then it is a fully observable environment; otherwise, it is partially observable. For
reference, imagine a chess-playing agent. In this case, the agent can fully observe the
state of the chessboard at all times. Its sensors (in this case, vision or the ability to access
the board's state) provide complete information about the current position of all pieces.
This is a fully observable environment because the agent has perfect information about
the state of the world.

2. Deterministic vs Stochastic:

o If an agent's current state and selected action can completely determine the next state
of the environment, then such an environment is called a deterministic environment.
For reference, Chess is a classic example of a deterministic environment. In chess, the
rules are well-defined, and each move made by a player has a clear and predictable
outcome based on those rules. If you move a pawn from one square to another, the
resulting state of the chessboard is entirely determined by that action, as is your
opponent's response. There's no randomness or uncertainty in the outcomes of chess
moves because they follow strict rules. In a deterministic environment like chess,
knowing the current state and the actions taken allows you to completely determine the
next state.
o A stochastic environment is random and cannot be determined completely by an agent.
For reference, The stock market is an example of a stochastic environment. It's highly
influenced by a multitude of unpredictable factors, including economic events, investor
sentiment, and news. While there are patterns and trends, the exact behavior of stock
prices is inherently random and cannot be completely determined by any individual or
agent. Even with access to extensive data and analysis tools, stock market movements
can exhibit a high degree of unpredictability. Random events and market sentiment play
significant roles, introducing uncertainty.

3. Episodic vs Sequential:

o In an episodic environment, there is a series of one-shot actions, and only the current
percept is required for the action. For example, Tic-Tac-Toe is a classic example of an
episodic environment.

4. Single-agent vs Multi-agent

o If only one agent is involved in an environment, and operating by itself then such an
environment is called a single-agent environment. For example, Solitaire is a classic
example of a single-agent environment. When you play Solitaire, you're the only agent
involved.
o However, if multiple agents are operating in an environment, then such an environment
is called a multi-agent environment. For reference, A soccer match is an example of a
multi-agent environment. In a soccer game, there are two teams, each consisting of
multiple players (agents). These players work together to achieve common goals
(scoring goals and preventing the opposing team from scoring). Each player has their
own set of actions and decisions, and they interact with both their teammates and the
opposing team. The outcome of the game depends on the coordinated actions and
strategies of all the agents on the field. It's a multi-agent environment because there are
multiple autonomous entities (players) interacting in a shared environment.
Intelligent Agents:

o An intelligent agent is an autonomous entity which acts upon an environment using
sensors and actuators to achieve its goals. An intelligent agent may learn from the
environment to achieve those goals. A thermostat is an example of an intelligent agent.
o Following are the main four rules for an AI agent:
o Rule 1: An AI agent must have the ability to perceive the environment.
o Rule 2: The observation must be used to make decisions.
o Rule 3: Decision should result in an action.
o Rule 4: The action taken by an AI agent must be a rational action.

 Sensor: Sensor is a device which detects the change in the environment and sends the
information to other electronic devices. An agent observes its environment through
sensors.
 Actuators: Actuators are the components of a machine that convert energy into motion.
The actuators are responsible for moving and controlling a system. An actuator
can be an electric motor, gears, rails, etc.
 Effectors: Effectors are the devices which affect the environment. Effectors can be
legs, wheels, arms, fingers, wings, fins, and display screen.
Rational Agent:

 A rational agent is an agent which has clear preference, models uncertainty, and acts in
a way to maximize its performance measure with all possible actions.
 A rational agent is said to perform the right things. AI is about creating rational agents
to use for game theory and decision theory for various real-world scenarios.
 For an AI agent, rational action is most important because, for example, in
reinforcement learning the agent gets a positive reward for each best possible action and
a negative reward for each wrong action.

Rationality:

 The rationality of an agent is measured by its performance measure. Rationality can be
judged on the basis of the following points:

 Performance measure which defines the success criterion.

 The agent's prior knowledge of its environment.

 Best possible actions that an agent can perform.

 The sequence of percepts.

PEAS Representation

 PEAS is a model on which an AI agent works. When we define an AI
agent or rational agent, we can group its properties under the PEAS representation
model. It is made up of four words:

 P: Performance measure

 E: Environment

 A: Actuators

 S: Sensors
PEAS for self-driving cars:

 Performance: Safety, time, legal drive, comfort

 Environment: Roads, other vehicles, road signs, pedestrians

 Actuators: Steering, accelerator, brake, signal, horn

 Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar.
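As a small illustration (not part of the standard PEAS notation), a PEAS description can be recorded as a plain data structure; the field values below simply restate the self-driving-car example above.

# Sketch: recording a PEAS description as a simple data structure.
from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    performance: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

self_driving_car = PEAS(
    performance=["safety", "time", "legal drive", "comfort"],
    environment=["roads", "other vehicles", "road signs", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer", "accelerometer", "sonar"],
)
print(self_driving_car.sensors)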

Example of Agents with their PEAS representation.

1. Medical Diagnosis
   • Performance measure: Healthy patient, minimized cost
   • Environment: Patient, hospital, staff
   • Actuators: Tests, treatments
   • Sensors: Keyboard (entry of symptoms)

2. Vacuum Cleaner
   • Performance measure: Cleanness, efficiency, battery life, security
   • Environment: Room, table, wood floor, carpet, various obstacles
   • Actuators: Wheels, brushes, vacuum extractor
   • Sensors: Camera, dirt detection sensor, cliff sensor, bump sensor, infrared wall sensor

3. Part-picking Robot
   • Performance measure: Percentage of parts in correct bins
   • Environment: Conveyor belt with parts, bins
   • Actuators: Jointed arms, hand
   • Sensors: Camera, joint angle sensors

UNIT 2

Problem Solving in Artificial Intelligence


Problem solving is a core aspect of artificial intelligence (AI) that mimics human cognitive
processes. It involves identifying challenges, analyzing situations, and applying strategies to
find effective solutions.
This article explores the various dimensions of problem solving in AI, the types of problem-
solving agents, the steps involved, and the components that formulate associated problems.
Table of Content
 Understanding Problem-Solving Agents
 Types of Problems in AI
o 1. Ignorable Problems
o 2. Recoverable Problems
o 3. Irrecoverable Problems
 Steps in Problem Solving in Artificial Intelligence (AI)
 Components of Problem Formulation in AI
 Techniques for Problem Solving in AI
 Challenges in Problem Solving with AI
Understanding Problem-Solving Agents
In artificial intelligence (AI), agents are entities that perceive their environment and take
actions to achieve specific goals. Problem-solving agents stand out due to their focus on
identifying and resolving issues systematically. Unlike reflex agents, which react to stimuli
based on predefined mappings, problem-solving agents analyze situations and employ
various techniques to achieve desired outcomes.
Types of Problems in AI
1. Ignorable Problems
These are problems or errors that have minimal or no impact on the overall performance of
the AI system. They are minor and can be safely ignored without significantly affecting the
outcome.
Examples:
 Slight inaccuracies in predictions that do not affect the larger goal (e.g., small variance
in image pixel values during image classification).
 Minor data preprocessing errors that don’t alter the results significantly.
Handling: These problems often don’t require intervention and can be overlooked in real-
time systems without adverse effects.
2. Recoverable Problems
Recoverable problems are those where the AI system encounters an issue, but it can recover
from the error, either through manual intervention or built-in mechanisms, such as error-
handling functions.
Examples:
 Missing data that can be imputed or filled in by statistical methods.
 Incorrect or biased training data that can be retrained or corrected during the process.
 System crashes that can be recovered through checkpoints or retraining.
Handling: These problems require some action—either automated or manual recovery.
Systems can be designed with fault tolerance or error-correcting mechanisms to handle these.
3. Irrecoverable Problems
These are critical problems that lead to permanent failure or incorrect outcomes
in AI systems. Once encountered, the system cannot recover, and these problems can cause
significant damage or misperformance.
Examples:
 Complete corruption of the training dataset leading to irreversible bias or poor
performance.
 Security vulnerabilities in AI models that allow for adversarial attacks, rendering the
system untrustworthy.
 Overfitting to the extent that the model cannot generalize to new data.
Handling: These problems often require a complete overhaul or redesign of the system,
including retraining the model, rebuilding the dataset, or addressing fundamental issues in
the AI architecture.
Steps in Problem Solving in Artificial Intelligence (AI)
The process of problem solving in AI consists of several finite steps that parallel human
cognitive processes. These steps include:
1. Problem Definition: This initial step involves clearly specifying the inputs and
acceptable solutions for the system. A well-defined problem lays the groundwork for
effective analysis and resolution.

2. Problem Analysis: In this step, the problem is thoroughly examined to understand its
components, constraints, and implications. This analysis is crucial for identifying viable
solutions.

3. Knowledge Representation: This involves gathering detailed information about the
problem and defining all potential techniques that can be applied. Knowledge
representation is essential for understanding the problem's context and available
resources.

4. Problem Solving: The selection of the best techniques to address the problem is made in
this step. It often involves comparing various algorithms and approaches to determine the
most effective method.

Components of Problem Formulation in AI

Effective problem-solving in AI is dependent on several critical components (a short code sketch illustrating them follows this list):


 Initial State: This represents the starting point for the AI agent, establishing the context
in which the problem is addressed. The initial state may also involve initializing methods
for problem-solving.

 Action: This stage involves selecting functions associated with the initial state and
identifying all possible actions. Each action influences the progression toward the desired
goal.
 Transition: This component integrates the actions from the previous stage, leading to the
next state in the problem-solving process. Transition modeling helps visualize how
actions affect outcomes.

 Goal Test: This stage verifies whether the specified goal has been achieved through the
integrated transition model. If the goal is met, the action ceases, and the focus shifts to
evaluating the cost of achieving that goal.

 Path Costing: This component assigns a numerical value representing the cost of
achieving the goal. It considers all associated hardware, software, and human resource
expenses, helping to optimize the problem-solving strategy.
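A minimal code sketch of these five components, using a hypothetical two-location vacuum world as the problem; it is one possible formulation for illustration, not the only one.

# Problem formulation with initial state, actions, transition model,
# goal test, and path cost, for a toy two-location vacuum world.

class VacuumProblem:
    def __init__(self):
        # Initial state: (agent location, status of A, status of B)
        self.initial_state = ("A", "Dirty", "Dirty")

    def actions(self, state):
        return ["Left", "Right", "Suck"]

    def result(self, state, action):            # transition model
        loc, a, b = state
        if action == "Suck":
            return (loc, "Clean", b) if loc == "A" else (loc, a, "Clean")
        if action == "Left":
            return ("A", a, b)
        return ("B", a, b)

    def goal_test(self, state):
        return state[1] == "Clean" and state[2] == "Clean"

    def path_cost(self, cost_so_far, state, action, next_state):
        return cost_so_far + 1                   # each action costs 1

problem = VacuumProblem()
s = problem.result(problem.initial_state, "Suck")   # ("A", "Clean", "Dirty")
print(problem.goal_test(s))                          # False: B is still dirty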

Techniques for Problem Solving in AI

Several techniques are prevalent in AI for effective problem-solving:


1. Search Algorithms
Search algorithms are foundational in AI, used to explore possible solutions in a structured
manner. Common types include:

 Uninformed Search: Such as breadth-first and depth-first search, which do not use
problem-specific information.

 Informed Search: Algorithms like A* that use heuristics to find solutions more
efficiently.

2. Constraint Satisfaction Problems (CSP)


CSPs involve finding solutions that satisfy specific constraints. AI uses techniques like
backtracking, constraint propagation, and local search to solve these problems effectively.
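A minimal sketch of backtracking search for a tiny map-colouring CSP; the regions, colours, and adjacency are a made-up example.

# Backtracking search for a CSP: variables are regions, domains are colours,
# and the constraint is that neighbouring regions must differ in colour.

def backtrack(assignment, variables, domains, neighbours):
    if len(assignment) == len(variables):
        return assignment                       # every variable assigned
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # Check the constraint against already-assigned neighbours
        if all(assignment.get(n) != value for n in neighbours[var]):
            result = backtrack({**assignment, var: value},
                               variables, domains, neighbours)
            if result:
                return result
    return None                                 # trigger backtracking

variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbours = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
print(backtrack({}, variables, domains, neighbours))
# -> {'WA': 'red', 'NT': 'green', 'SA': 'blue'}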

3. Optimization Techniques
AI often tackles optimization problems, where the goal is to find the best solution from a set
of feasible solutions. Techniques such as linear programming, dynamic programming,
and evolutionary algorithms are commonly employed.
4. Machine Learning
Machine learning techniques allow AI systems to learn from data and improve their problem-
solving abilities over time. Supervised, unsupervised, and reinforcement learning paradigms
offer various approaches to adapt and enhance performance.

5. Natural Language Processing (NLP)


NLP enables AI to understand and process human language, making it invaluable for solving
problems related to text analysis, sentiment analysis, and language translation. Techniques
like tokenization, sentiment analysis, and named entity recognition play crucial roles in this
domain.
Challenges in Problem Solving with AI

Despite its advancements, AI problem-solving faces several challenges:


 Complexity: Some problems are inherently complex and require significant
computational resources and time to solve.

 Data Quality: AI systems are only as good as the data they are trained on. Poor quality
data can lead to inaccurate solutions.

 Interpretability: Many AI models, especially deep learning, act as black boxes, making
it challenging to understand their decision-making processes.

 Ethics and Bias: AI systems can inadvertently reinforce biases present in the training
data, leading to unfair or unethical outcomes.

Uninformed Search Algorithms

Introduction:
Uninformed (blind) search is search in which the system does not use any clues about which part
of the search space is promising; it relies only on the problem definition itself. These strategies
explore the search space (all possible states) systematically, starting from the initial state and
generating all possible next steps until the goal is reached. They are mostly the simplest search
strategies, but they may not be suitable for complex problems, because they can expand many
irrelevant states. These algorithms are useful for solving basic tasks or for providing a simple
baseline before passing the problem on to more advanced search algorithms that incorporate
additional information.

Following are the various types of uninformed search algorithms:

1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional Search
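As an illustration, breadth-first search (the first algorithm listed above) can be sketched with a FIFO queue of paths; the graph used here is hypothetical.

# Breadth-first search on an explicit graph. It is uninformed: it uses no
# problem-specific information, only the graph structure itself.
from collections import deque

def breadth_first_search(graph, start, goal):
    frontier = deque([[start]])          # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in explored:
                explored.add(neighbour)
                frontier.append(path + [neighbour])
    return None                          # no path found

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(breadth_first_search(graph, "A", "E"))   # -> ['A', 'B', 'D', 'E']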

Informed search Algorithm

A* Search Algorithm in Artificial Intelligence

An Introduction to A* Search Algorithm in AI

A* (pronounced "A-star") is a powerful graph traversal and pathfinding algorithm widely
used in artificial intelligence and computer science. It is mainly used to find the shortest
path between two nodes in a graph, given the estimated cost of getting from the current
node to the destination node. The main advantage of the algorithm is its ability to provide
an optimal path by exploring the graph in a more informed way compared to traditional
search algorithms such as Dijkstra's algorithm.

A* combines the advantages of two other search algorithms: Dijkstra's algorithm and Greedy
Best-First Search. Like Dijkstra's algorithm, A* ensures that the path found is as short as
possible, but does so more efficiently by directing its search through a heuristic, similar to
Greedy Best-First Search. A heuristic function, denoted h(n), estimates the cost of getting
from any given node n to the destination node.
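A minimal sketch of A* on a small hypothetical graph, using f(n) = g(n) + h(n), where g(n) is the path cost so far and h(n) is the heuristic estimate of the remaining cost.

# A* search: expand the node with the lowest f = g + h first.
import heapq

def a_star(graph, h, start, goal):
    # graph[node] = list of (neighbour, step_cost); h[node] = heuristic estimate
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbour, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                heapq.heappush(frontier,
                               (new_g + h[neighbour], new_g, neighbour, path + [neighbour]))
    return None, float("inf")

# Hypothetical graph and admissible heuristic values
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)], "B": [("G", 1)]}
h = {"S": 3, "A": 2, "B": 1, "G": 0}
print(a_star(graph, h, "S", "G"))   # -> (['S', 'A', 'B', 'G'], 4)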
