Unit-I AI Material

Unit 1

Chapter 1:

Foundations of AI: What is AI, History of AI, Strong and Weak AI, The State of the Art.
What is AI?

Artificial Intelligence (AI) refers to the simulation of human intelligence processes by machines, especially computer systems. It involves creating algorithms and systems that enable computers to perform tasks that would typically require human intelligence, such as learning from experience, reasoning based on acquired knowledge, problem-solving, understanding natural language, and adapting to new situations.

Artificial intelligence (AI) is a research field that studies how to realize intelligent human behaviour on a computer. The ultimate goal of AI is to build computers that can learn, plan, and solve problems autonomously.

AI aims to replicate cognitive functions associated with human minds, allowing machines
to analyse data, draw conclusions, make decisions, and even exhibit a level of creativity. It
encompasses a broad range of techniques and approaches that enable computers to
mimic human-like behaviour and intelligence in various ways.

AI techniques and technologies include machine learning, deep learning, natural language
processing, computer vision, robotics, expert systems, and more. These techniques enable
machines to learn from data, adapt to new information, and improve their performance
over time.

History of AI:

1950s: The Birth of AI:


- 1950: Alan Turing introduces the "Turing Test" as a way to assess a machine's ability to exhibit intelligent behavior.

- 1952: UNIVAC I, one of the earliest commercially produced computers, correctly predicts the outcome of the U.S. presidential election.

- 1955-56: Allen Newell and Herbert A. Simon develop the Logic Theorist, widely regarded as the first working AI program.

- 1956: The Dartmouth Workshop, organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, marks the birth of artificial intelligence as a formal field of study.

1960s: Early Developments:

- 1966: ELIZA, an early natural language processing program, is created by Joseph Weizenbaum.

- 1969: Shakey, a mobile robot developed at Stanford Research Institute, demonstrates basic reasoning and problem-solving capabilities.

1970s: Knowledge-Based Systems:

- Early 1970s: Work begins at Stanford on MYCIN, an expert system designed to diagnose bacterial blood infections and suggest antibiotic treatments, showcasing the potential of AI in medical applications.

- Late 1970s: The R1 (XCON) expert system is developed at Carnegie Mellon University for Digital Equipment Corporation to configure computer orders, becoming one of the first commercially successful expert systems.

1980s: AI Winter and Expert Systems:

- Late 1980s: AI research experiences a decline in funding and interest, referred to as the
"AI Winter."

- Despite the challenges, expert systems continue to be developed for various domains,
relying on rule-based approaches.
1990s: Rise of Machine Learning:

- 1997: IBM's Deep Blue defeats world chess champion Garry Kasparov, showcasing the power of large-scale search and specialized hardware in complex games.

- The field of machine learning gains prominence, focusing on algorithms that allow
computers to learn from data.

2000s: Machine Learning Advances:

- 2006: Geoffrey Hinton's paper on deep learning rejuvenates interest in neural networks
and paves the way for significant advancements in image and speech recognition.

- 2011: IBM's Watson wins the game show Jeopardy!, demonstrating AI's ability to
process and understand human language.

2010s: Deep Learning Dominance:

- 2012: Deep neural networks achieve a breakthrough in image classification in the ImageNet competition.

- 2013-15: DeepMind's deep Q-network (DQN) learns to play Atari video games directly from screen pixels, at a level comparable to human players.

- 2016: AlphaGo, a program developed by DeepMind, defeats world Go champion Lee Sedol, showcasing AI's progress in complex strategy games.

- 2018: OpenAI's GPT (Generative Pre-trained Transformer) introduces large-scale language models capable of generating human-like text.

2020s: Continued Advancements:

- AI continues to advance rapidly, with applications in healthcare, finance, autonomous vehicles, natural language processing, and more.
- Ethical concerns and discussions about AI's societal impact gain prominence.

Remember that this timeline provides a high-level overview, and there are many more
developments and nuances within the history of artificial intelligence. The field continues
to evolve, and new breakthroughs are made regularly.

Strong and weak AI:

Artificial Intelligence (AI) can be categorized into different types based on their
capabilities, functions, and levels of sophistication. Here are some common types of AI:

Narrow AI (Weak AI):

Narrow AI refers to AI systems that are designed and trained for a specific task or a limited
set of tasks. They excel in performing those tasks but lack the ability to generalize their
knowledge to other domains. Examples include virtual personal assistants like Siri and
Alexa, recommendation systems, and image recognition software.

General AI (Strong AI):

General AI represents a hypothetical level of AI that possesses human-like cognitive abilities. It would be capable of understanding, learning, and applying knowledge across a wide range of tasks, similar to the way humans do. This level of AI remains aspirational and has not yet been achieved.

Artificial Narrow Intelligence (ANI):

ANI refers to AI systems that are focused on a single task and are highly specialized in
performing that task. They do not possess any form of general intelligence or
consciousness.
Artificial General Intelligence (AGI):

AGI is the term used to describe AI systems that possess human-like general intelligence,
allowing them to understand, learn, and perform tasks across a wide range of domains.
Achieving AGI is a long-term goal of AI research.

Artificial Superintelligence (ASI):

ASI goes beyond human intelligence and represents AI systems that surpass the cognitive
abilities of the smartest human beings in every aspect. This is a concept that raises ethical
and existential questions and is often discussed in terms of its potential impact on society.

Reactive Machines:

Reactive AI systems are designed to perform specific tasks based on predefined rules and
patterns. They don't have the ability to learn from experience or adapt to new situations.
Chess-playing programs that rely on a set of predetermined moves are examples of
reactive machines.

Limited Memory AI:

These AI systems can make decisions based on past experiences to a limited extent. They
can learn from historical data and adjust their behavior accordingly. Self-driving cars, for
example, use historical data to improve their driving patterns over time.

Theory of Mind AI:

This type of AI would possess an understanding of human emotions, beliefs, intentions, and other mental states. It could predict and respond to human behavior in a more socially intelligent manner.
Self-aware AI:

Self-aware AI is a theoretical concept where AI systems would not only possess human-
like cognitive abilities but also self-awareness and consciousness. This level of AI is still
in the realm of science fiction and philosophy.

The State of the Art (AI):

1. Machine Learning and Deep Learning:

Deep learning, a subfield of machine learning, is making significant strides. Convolutional Neural Networks (CNNs) dominate image recognition tasks, while Recurrent Neural Networks (RNNs) and Transformer models like BERT excel in natural language processing (NLP) tasks. Generative Adversarial Networks (GANs) are also used for tasks such as image generation and style transfer.

2. Natural Language Processing:

NLP has seen remarkable progress with the development of models like OpenAI's GPT-3. These models demonstrate the ability to generate coherent, human-like text and perform various language-related tasks, including translation, summarization, and question answering.

3. Computer Vision:

Computer vision applications are becoming increasingly sophisticated. Object detection, image segmentation, and facial recognition systems are widely used in fields such as healthcare, autonomous vehicles, and security.
4. Reinforcement Learning:

Reinforcement learning is gaining traction for training AI agents to make sequential decisions based on rewards and penalties, with applications in robotics, gaming, finance, and more.

5. Autonomous Systems:

Autonomous vehicles are a focal point, with companies investing heavily in developing self-driving cars. These systems rely on a combination of AI techniques, including computer vision and sensor fusion.

6. Ethical and Responsible AI:

As AI is integrated into more aspects of society, discussions about ethics, bias, transparency, and accountability are gaining prominence. Researchers and policymakers are working to address potential biases in AI systems and to ensure that their deployment is fair and equitable.

7. Healthcare and Life Sciences:

AI is applied in healthcare for tasks such as medical image analysis, drug discovery, and personalized medicine. AI-powered diagnostic tools are being developed to assist medical professionals in making accurate diagnoses.

8. Climate and Sustainability:

AI is being harnessed to address environmental challenges, including climate modeling, energy optimization, and monitoring of natural resources.

9. AI Ethics and Regulation:


Governments and organizations are beginning to develop frameworks for regulating AI and ensuring its responsible deployment. Discussions around data privacy, bias mitigation, and algorithmic transparency are becoming more prevalent.

10. AI Research and Collaboration:

The AI community is highly collaborative, with research and breakthroughs shared across academia and industry. Open-source libraries and platforms are helping democratize access to AI tools and technologies.
Chapter 2:

Intelligent Agents: Agents and Environments, Good Behaviour: The Concept of Rationality, The Nature of Environments, The Structure of Agents.

Intelligent Agents:

In the context of artificial intelligence and reinforcement learning, the concepts of "agents" and "environments" are fundamental. They describe the relationship between an AI system and its external surroundings. Let's explore these concepts in more detail:

1. Agent:

An agent is an entity that perceives its environment and takes actions to achieve goals
or objectives. In the context of AI, an agent can be any computational system, such as a
robot, software program, or even a virtual character in a video game. The agent's primary
function is to make decisions about what actions to take based on the information it
receives from its environment.

Agents are designed to exhibit some level of intelligence or autonomy in their decision-
making process. They can use various strategies, algorithms, and heuristics to make
choices that lead to desired outcomes.

2. Environment:

The environment refers to the external context or the surroundings in which an agent
operates. It encompasses everything that the agent interacts with, perceives, and affects.
Environments can be real-world settings (like a room, a street, or a factory) or virtual
constructs (like a simulated world in a video game).
The environment provides feedback to the agent based on the actions it takes. This
feedback is often in the form of rewards or penalties, which the agent uses to learn and
improve its decision-making over time. The environment's state and the consequences of
the agent's actions are crucial components that determine the outcome of the agent's
interactions.

The interaction between agents and environments forms the basis of reinforcement
learning, a machine learning paradigm that involves training agents to learn optimal
strategies by interacting with their environments. In reinforcement learning, the agent's
goal is to learn a policy—a set of actions that maximizes its cumulative reward over time.

The process of an agent interacting with an environment in a reinforcement learning setting typically involves the following steps (a minimal code sketch follows the list):

1. Observation: The agent observes the current state of the environment, which may
include sensory input, data, or other relevant information.

2. Action: Based on its observation, the agent selects an action from its available set of
actions. The action may lead to changes in the environment.

3. Environment Response: The environment responds to the agent's action, leading to a


transition to a new state. The environment also provides feedback in the form of rewards
or penalties based on the consequences of the action.

4. Learning: The agent learns from the feedback received. It updates its decision-making
strategy (policy) to improve its future actions, aiming to maximize its cumulative reward
over time.

5. Iteration: The agent continues to interact with the environment, iteratively refining its
policy to achieve better performance.
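To make these five steps concrete, here is a minimal sketch in Python. The GridEnvironment and RandomAgent names, the toy corridor, and the reward values are illustrative assumptions of this sketch, not a standard API; the point is the observe-act-respond-learn cycle described above.

    import random

    class GridEnvironment:
        """A toy 1-D corridor: the agent starts at 0 and is rewarded at position 4."""
        def __init__(self):
            self.state = 0

        def step(self, action):
            # action is -1 (left) or +1 (right); the corridor spans positions 0..4
            self.state = max(0, min(4, self.state + action))
            reward = 1.0 if self.state == 4 else -0.1  # reward at goal, small penalty elsewhere
            done = self.state == 4
            return self.state, reward, done

    class RandomAgent:
        """Picks actions at random; a learning agent would update a policy from rewards."""
        def act(self, observation):
            return random.choice([-1, +1])

        def learn(self, observation, action, reward, next_observation):
            pass  # placeholder: a real agent would improve its policy here

    env, agent = GridEnvironment(), RandomAgent()
    obs, done = env.state, False           # 1. Observation of the initial state
    while not done:
        action = agent.act(obs)            # 2. Action
        next_obs, reward, done = env.step(action)   # 3. Environment response
        agent.learn(obs, action, reward, next_obs)  # 4. Learning
        obs = next_obs                     # 5. Iteration from the new state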
The concepts of agents and environments are foundational to understanding how AI
systems interact with and learn from the world around them. They are particularly
relevant in the fields of reinforcement learning, robotics, autonomous systems, and AI-
driven decision-making.

Good behaviour:

The concept of rationality

The concept of rationality is central to the study of decision-making and artificial intelligence. It refers to the idea of making choices that are logical, reasoned, and aimed at achieving specific goals or objectives. Rationality involves selecting actions or making decisions that are expected to lead to outcomes that align with an individual's or an agent's preferences, given the available information and constraints.

In the context of artificial intelligence, the concept of rationality is often used to guide the
behaviour of intelligent agents. An intelligent agent is considered rational when it
consistently selects actions that maximize its expected utility or value. Utility refers to the
satisfaction or desirability associated with different outcomes.

Here are a few key points related to the concept of rationality:

1. Expected Utility Maximization: A rational agent aims to maximize its expected utility. This involves evaluating the potential outcomes of different actions, assessing their associated utilities, and selecting the action that is expected to yield the highest overall utility (a small numeric sketch follows this list).

2. Reasoning Under Uncertainty: Rational decision-making often takes place in situations where there is uncertainty or incomplete information. Rational agents use techniques such as probability theory and decision theory to make informed choices in uncertain environments.

3. Consistency: Rationality implies consistency in decision-making. If an agent prefers option A over option B in one situation and then prefers option B over option A in another situation with similar characteristics, its choices would be considered inconsistent.
4. Trade-offs: Rational agents consider trade-offs between different objectives and
constraints. They weigh the benefits and costs of different options and choose actions that
strike a balance between competing interests.

5. Normative vs. Descriptive Rationality: Normative rationality refers to the idealized concept of decision-making where choices are made to maximize expected utility. Descriptive rationality, on the other hand, acknowledges that human decision-making might deviate from the normative ideal due to cognitive limitations, biases, and other factors.

6. Context and Preferences: Rationality is context-dependent and depends on an agent's preferences or goals. Different agents might have different preferences, leading to different rational choices in the same situation.

7. Bounded Rationality: The concept of bounded rationality recognizes that human and
AI decision-makers often have limitations in terms of cognitive resources, time, and
information. As a result, their decisions might be satisfactory rather than optimal.
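As a minimal numeric sketch of point 1, the snippet below computes the expected utility of each action as a probability-weighted sum over its outcomes and picks the maximizer. The actions, probabilities, and utility values are made-up illustrative numbers, not data from any source.

    # Each action maps to a list of (probability, utility) pairs over its outcomes.
    outcomes = {
        "take_umbrella":  [(0.3, 8), (0.7, 6)],   # dry if it rains, mild nuisance otherwise
        "leave_umbrella": [(0.3, 0), (0.7, 10)],  # soaked if it rains, unencumbered otherwise
    }

    def expected_utility(action):
        return sum(p * u for p, u in outcomes[action])

    best = max(outcomes, key=expected_utility)
    for a in outcomes:
        print(f"EU({a}) = {expected_utility(a):.2f}")
    print("Rational choice:", best)
    # EU(take_umbrella) = 6.60, EU(leave_umbrella) = 7.00 -> leave_umbrella

The rational choice here is whichever action has the higher expected utility under the agent's own probability estimates and preferences; change either and the choice can flip.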

In AI, designing agents that exhibit rational behavior involves understanding the decision-
making process, modeling uncertainties, and developing algorithms that can reason and
make choices effectively. Rationality is a guiding principle in the development of AI
systems, especially those involved in areas like reinforcement learning, game theory, and
decision support systems.

It's important to note that the concept of rationality can vary depending on the specific
framework, model, or context in which it is applied.

The Nature of Environments:

In the context of artificial intelligence and the interaction between agents and
environments, the nature of environments refers to the characteristics and properties
that define the external context in which agents operate and make decisions.
Environments play a crucial role in shaping the behaviour of agents, as they provide the
information, challenges, and feedback necessary for learning and decision-making
processes. Environments can vary widely in their complexity, dynamics, and the types of
tasks they present to agents.

Here are some key aspects that describe the nature of environments in the context of AI:

1. Observable vs. Partially Observable:

Environments can be observable, meaning that an agent has complete information about the current state, or partially observable, where an agent only has access to a limited subset of information. Partial observability can add complexity to decision-making, as agents need to account for uncertainty and potential hidden states.

2. Deterministic vs. Stochastic:

Environments can be deterministic, where the outcome of an action is entirely determined by the current state and the action taken, or stochastic, where there is an element of randomness or uncertainty in the outcomes. Stochastic environments require agents to consider probabilities when making decisions (a short sketch after this list contrasts the two).

3. Static vs. Dynamic:

Environments can be static, where the state does not change over time, or dynamic,
where the state evolves based on agent actions and external factors. Dynamic
environments introduce temporal aspects that agents must consider when planning and
decision-making.

4. Episodic vs. Sequential:

In episodic environments, an agent's actions do not affect future states or outcomes (e.g.,
playing individual games of chess). In sequential environments, actions have
consequences that affect subsequent states and decisions (e.g., navigating a maze).

5. Adversarial vs. Cooperative:

Environments can be adversarial, where multiple agents have conflicting objectives and
compete against each other, or cooperative, where agents work together to achieve shared
goals. Games and multi-agent systems often involve adversarial or cooperative dynamics.

6. Continuous vs. Discrete:


Environments can have continuous state and action spaces, where values can take any
real number, or discrete state and action spaces, where values are restricted to a set of
distinct options. The complexity of planning and decision-making can vary based on the
continuity of the environment.

7. Static vs. Interactive:

Some environments are static, meaning that they do not change based on the agent's
actions. In contrast, interactive environments change based on the agent's interactions,
requiring real-time adaptation and decision-making.

8. Simulated vs. Real-World:

Environments can be simulated, meaning they exist within a virtual space or computer
simulation, or real-world, where agents interact with the physical world. Simulated
environments are often used for training and testing AI algorithms before deploying them
in real-world scenarios.
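To illustrate the deterministic/stochastic distinction from point 2 above, here is a hedged sketch: two versions of a step function for the same toy one-dimensional world, one returning a single fully determined successor, the other sampling from a distribution. The 80/10/10 noise model is an arbitrary illustrative choice.

    import random

    def deterministic_step(state, action):
        # The successor state is fully determined by the current state and action.
        return state + action

    def stochastic_step(state, action):
        # With probability 0.8 the action succeeds; otherwise the agent
        # slips and stays put, or overshoots (illustrative noise model).
        r = random.random()
        if r < 0.8:
            return state + action
        elif r < 0.9:
            return state               # slip: no movement
        else:
            return state + 2 * action  # overshoot

    # A planner in the deterministic world can chain predictions exactly;
    # in the stochastic world it must reason over a distribution of successors.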

The nature of the environment heavily influences the strategies, algorithms, and
approaches that agents need to employ to achieve their goals. Different AI techniques and
algorithms are developed to handle various types of environments, each presenting
unique challenges and opportunities for learning and decision-making. The choice of
environment and how it's modeled is a critical aspect of AI research and application.

The Structure of Agents:

In the context of artificial intelligence, an agent is a system that perceives its environment
and takes actions to achieve its goals or objectives. The structure of an agent refers to its
components and organization that enable it to interact with the environment, make
decisions, and learn from its experiences. The structure of an agent typically consists of
several key components:

1. Sensors:

Sensors are the components that allow the agent to perceive or gather information from
its environment. Depending on the type of agent and the context, sensors can include
cameras, microphones, touch sensors, GPS receivers, and various other types of sensors
that capture relevant data from the surroundings.

2. Perception and Processing:

Once the sensors collect data, the agent's processing components analyze and interpret
the sensory information. This step involves extracting relevant features, recognizing
patterns, and understanding the current state of the environment. Machine learning
algorithms are often used to process sensory data and make sense of the information.

3. Knowledge Representation:

Agents often maintain some form of internal representation of the world, which is used
to store relevant information about the environment, past experiences, and the agent's
current state. This can include databases, knowledge graphs, semantic networks, or other
data structures that help the agent reason and make decisions.

4. Decision-Making and Reasoning:

The decision-making component of an agent processes the information obtained from perception and knowledge representation to make decisions. This can involve reasoning about the consequences of different actions, evaluating potential outcomes, and selecting the best course of action to achieve the agent's goals. Decision-making can be rule-based, heuristic-based, or learned through machine learning. (A minimal skeleton combining these components is sketched after this list.)

5. Actuators:

Actuators are the components that allow the agent to interact with its environment by
performing actions. These actions can range from physical movements (in the case of
robots) to sending commands or responses (in the case of software agents). Actuators
translate the agent's decisions into tangible actions in the environment.

6. Learning and Adaptation:

Many agents have the ability to learn from their experiences and improve their decision-
making over time. Learning mechanisms can include reinforcement learning, supervised
learning, unsupervised learning, and more. Learning allows the agent to adjust its
behavior based on feedback from the environment and achieve better performance.
7. Communication:

In multi-agent environments, agents may need to communicate with each other to coordinate actions or share information. Communication mechanisms can include message passing, signaling, or other methods to exchange data and collaborate.

8. Goal Specification:

Agents are designed to achieve specific goals or objectives. The goal specification
component defines what the agent aims to achieve and how it prioritizes different goals.
This component influences the agent's decision-making process.
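The skeleton below wires most of the components above into one class (communication is omitted for brevity). Everything here, the class name, method names, and the trivial decision rule, is a hypothetical sketch showing how the pieces relate, not a standard library interface.

    class Agent:
        def __init__(self, goal):
            self.goal = goal           # 8. goal specification
            self.model = {}            # 3. knowledge representation (internal state)

        def sense(self, raw_input):           # 1. sensors
            return raw_input

        def perceive(self, percept):          # 2. perception and processing
            self.model["last_percept"] = percept
            return percept

        def decide(self, percept):            # 4. decision-making and reasoning
            # Trivial rule for illustration: act toward the goal if not there yet.
            return "move_toward_goal" if percept != self.goal else "stop"

        def act(self, action):                # 5. actuators
            print("executing:", action)

        def learn(self, feedback):            # 6. learning and adaptation
            self.model["last_feedback"] = feedback

        def run_step(self, raw_input, feedback=None):
            percept = self.perceive(self.sense(raw_input))
            self.act(self.decide(percept))
            if feedback is not None:
                self.learn(feedback)

    Agent(goal="charging_dock").run_step("hallway")  # prints: executing: move_toward_goal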

The structure of an agent can vary significantly based on the application, the type of
environment, and the AI techniques used. For example, a robotic agent navigating through
a physical environment would have different components compared to a software agent
managing a virtual customer service chatbot. Designing the appropriate structure for an
agent involves considering its capabilities, the tasks it needs to perform, and the
interactions it will have with its environment.

Types of Agents

• Agents can be grouped into five classes based on their degree of perceived intelligence and capability:

• Simple Reflex Agents

• Model-Based Reflex Agents

• Goal-Based Agents

• Utility-Based Agents

• Learning Agents

Simple reflex agents

• Simple reflex agents ignore the rest of the percept history and act only on the
basis of the current percept.
• Percept history is the history of all that an agent has perceived to date. The agent
function is based on the condition-action rule.

• A condition-action rule is a rule that maps a state (i.e., a condition) to an action.

• If the condition is true, then the action is taken, else not.

• This agent function only succeeds when the environment is fully observable.

• For simple reflex agents operating in partially observable environments, infinite loops are often unavoidable.

• It may be possible to escape from infinite loops if the agent can randomize its actions.

Problems with simple reflex agents are:

• Very limited intelligence.

• No knowledge of non-perceptual parts of the state.

• The condition-action rule table is usually too big to generate and store.

• If any change occurs in the environment, the collection of rules needs to be updated.

• A simple reflex agent takes an action based only on the current environmental situation; it maps the current percept to an action, ignoring the history of percepts.
• →The mapping can be implemented as a simple lookup table or by a rule-based matching algorithm.
• →An example of this class is a robotic vacuum cleaner that operates in a continuous loop: each percept reports the state of the current location, [clean] or [dirty], and the agent decides accordingly whether to [suck] or [continue-moving]. A minimal sketch follows.
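The vacuum-cleaner example translates directly into a condition-action table. The two-location world (A and B) is the standard textbook toy; the function name and action labels below are our own illustrative choices.

    def reflex_vacuum_agent(percept):
        """Maps the current percept (location, status) straight to an action;
        no percept history is consulted."""
        location, status = percept
        if status == "dirty":
            return "suck"
        elif location == "A":
            return "move_right"
        else:  # location == "B"
            return "move_left"

    print(reflex_vacuum_agent(("A", "dirty")))  # suck
    print(reflex_vacuum_agent(("A", "clean")))  # move_right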
Model-based reflex agents

• It works by finding a rule whose condition matches the current situation.

• A model-based agent can handle partially observable environments by the use of a model of the world.

• The agent has to keep track of the internal state which is adjusted by each percept
and that depends on the percept history.

• The current state is stored inside the agent which maintains some kind of
structure describing the part of the world which cannot be seen.

• Updating the state requires information about :

→how the world evolves independently from the agent, and

→how the agent’s actions affect the world.

A model-based reflex agent needs memory for storing the percept history; it uses the percept history to help reveal the current unobservable aspects of the environment.

→An example of this agent class is a self-driving car, where it is necessary to use the percept history to understand how the world is evolving (see the sketch below).
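A minimal sketch of the internal-state bookkeeping just described, using the vacuum world again. The transition model here (tracking which of two rooms are believed clean) is a made-up toy; the point is that the decision consults remembered state, not just the current percept.

    class ModelBasedVacuum:
        def __init__(self):
            # Internal model of the unobservable part of the world:
            # cleanliness of the room the agent is not currently in.
            self.believed_status = {"A": "unknown", "B": "unknown"}

        def update_state(self, percept):
            location, status = percept
            self.believed_status[location] = status  # fold the percept into the model

        def decide(self, percept):
            self.update_state(percept)
            location, status = percept
            if status == "dirty":
                return "suck"
            other = "B" if location == "A" else "A"
            # Use remembered state: only travel if the other room might be dirty.
            if self.believed_status[other] != "clean":
                return "move_to_" + other
            return "stop"

    agent = ModelBasedVacuum()
    print(agent.decide(("A", "clean")))  # move_to_B (B's status is still unknown)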
Goal-based agents

• These kinds of agents take decisions based on how far they are currently from
their goal(description of desirable situations).

• Every action they take is intended to reduce the distance to the goal.

• This allows the agent a way to choose among multiple possibilities, selecting the
one which reaches a goal state.

• The knowledge that supports its decisions is represented explicitly and can be
modified, which makes these agents more flexible.

• They usually require search and planning. The goal-based agent’s behavior can
easily be changed.

• A goal-based reflex agent has a goal and a strategy to reach that goal.
• →All actions are taken to reach this goal.
• →More precisely, from a set of possible actions, it selects the one that improves progress towards the goal (not necessarily the best one).
• →An example of this agent class is a searching robot that has an initial location and wants to reach a destination, as sketched below.
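A goal-based choice can be sketched as picking, from the available actions, the one whose predicted successor state is closest to the goal. The grid world, the forward model, and the Manhattan-distance measure are illustrative assumptions of this sketch.

    def manhattan(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def goal_based_choice(state, goal, actions):
        """Pick the action whose predicted outcome most reduces distance to the goal."""
        def predicted(state, action):  # simple forward model on a grid
            dx, dy = {"up": (0, 1), "down": (0, -1),
                      "left": (-1, 0), "right": (1, 0)}[action]
            return (state[0] + dx, state[1] + dy)
        return min(actions, key=lambda a: manhattan(predicted(state, a), goal))

    print(goal_based_choice((0, 0), (3, 2), ["up", "down", "left", "right"]))
    # "up" ("up" and "right" tie at distance 4; min keeps the first in list order)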
Utility-based agents

• Agents that are designed around an explicit measure of the usefulness (utility) of each outcome are called utility-based agents.

• When there are multiple possible alternatives, then to decide which one is best,
utility-based agents are used.

• They choose actions based on a preference (utility) for each state.

• Sometimes achieving the desired goal is not enough.

• We may look for a quicker, safer, cheaper trip to reach a destination.

• Agent happiness should be taken into consideration. Utility describes how "happy" the agent is.

• Because of the uncertainty in the world, a utility agent chooses the action that
maximizes the expected utility.

• A utility function maps a state onto a real number which describes the associated
degree of happiness.
• →A utility-based reflex agent is like the goal-based agent but with a measure of "how happy" an action would make it, rather than the goal-based binary feedback ['happy', 'unhappy'].
• →This kind of agent provides the best solution.
• →An example is a route recommendation system, which finds the 'best' route to reach a destination.

Learning Agents:

• A learning agent in AI is a type of agent that can learn from its past experiences; that is, it has learning capabilities.

• It starts to act with basic knowledge and then is able to act and adapt automatically
through learning.
A learning agent has mainly four conceptual components, which are:

1. Learning element: It is responsible for making improvements by learning from the environment.

2. Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.

3. Performance element: It is responsible for selecting external actions.

4. Problem generator: This component is responsible for suggesting actions that will lead to new and informative experiences.

• A learning agent is an agent capable of learning from experience.
• →It has the capability of automatic information acquisition and integration into the system.
• →Any agent designed and expected to be successful in an uncertain environment is considered a learning agent.
• →A human being is an example of a learning agent.

A toy sketch mapping these four components onto code follows.
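The sketch below is our own illustrative mapping in a Q-learning style, not a canonical definition: the performance element picks actions from a learned value table, the problem generator occasionally proposes exploratory actions, the critic scores an outcome against the standard of maximizing discounted return, and the learning element updates the table. The names and parameter values are assumptions of this sketch.

    import random
    from collections import defaultdict

    Q = defaultdict(float)            # learned action values (knowledge the agent improves)
    alpha, gamma, epsilon = 0.5, 0.9, 0.1
    ACTIONS = [-1, +1]

    def performance_element(state):
        # Selects the external action believed best under current knowledge.
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def problem_generator(state):
        # Occasionally suggests an exploratory action to gain new experience.
        return random.choice(ACTIONS) if random.random() < epsilon else performance_element(state)

    def critic_and_learning_element(state, action, reward, next_state):
        # Critic: compares the outcome against the performance standard
        # (the discounted return the agent should be maximizing).
        target = reward + gamma * max(Q[(next_state, a)] for a in ACTIONS)
        # Learning element: nudges the value table toward the critic's target.
        Q[(state, action)] += alpha * (target - Q[(state, action)])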
How the components of agent programs work

In very high-level terms, we have described agent programs as being made up of different components that work together to answer questions such as "What is the world like now?", "What should I do now?", and "What do my actions do?" The natural next question for an AI student is, "How do these components work?" It would take nearly a thousand pages to begin answering that question properly, but it is worth drawing attention to some fundamental distinctions among the ways the components can represent the environment that the agent inhabits.

The representations can be roughly categorised along an axis of increasing complexity and expressive power: atomic, factored, and structured. Considering a specific agent component, such as the one that addresses "What my actions do," can help to clarify these concepts. This component describes the potential environmental changes that could follow from an action, and Figure 2.16 shows schematic representations of those potential transitions. A miniature illustration of the three representation styles follows.
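The three representation styles can be glimpsed in miniature below; the example domain (a simple traffic scene) and all the labels are our own illustration.

    # Atomic: the state is an indivisible label; two states can only be
    # compared for equality, with no internal structure visible.
    atomic_state = "state_42"

    # Factored: the state is a vector of variables, each with a value;
    # states can share some variables and differ in others.
    factored_state = {"position": (3, 7), "fuel": 0.6, "signal": "red"}

    # Structured: the state contains objects and explicit relations between
    # them, which can be described without enumerating every variable upfront.
    structured_state = {
        "objects": ["truck", "car", "intersection"],
        "relations": [("in_front_of", "truck", "car"),
                      ("approaching", "car", "intersection")],
    }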
