AIML Notes
Artificial intelligence (AI) is a field of study that focuses on creating machines and computers
that can perform tasks that usually require human intelligence. AI uses algorithms, data, and
computational power to simulate human intelligence.
AI is a broad field that includes many disciplines, such as computer science, data analytics, and
neuroscience. Some examples of AI in everyday life include voice assistants like Alexa and
Siri, and customer service chatbots.
AI can be applied to real-world problems, and companies can use it to make their businesses
more efficient and profitable. However, the value of AI comes from how companies use it to
assist humans, and how they explain what the systems do to build trust.
Artificial intelligence is not a new term or a new technology for researchers; the idea is much older than you might imagine. There are even myths of mechanical men in ancient Greek and Egyptian mythology. The following milestones in the history of AI trace the journey from its origins to its present-day development.
Maturation of Artificial Intelligence (1943-1952)
Between 1943 and 1952, there was notable progress in the expansion of artificial intelligence
(AI). Throughout this period, AI transitioned from a mere concept to tangible experiments and
practical applications. Here are some key events that happened during this period:
o Year 1943: The first work which is now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943, who proposed a model of artificial neurons.
o Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection
strength between neurons. His rule is now called Hebbian learning.
o Year 1950: Alan Turing, an English mathematician and a pioneer of machine learning, published "Computing Machinery and Intelligence" in 1950, in which he proposed a test to check a machine's ability to exhibit intelligent behaviour equivalent to human intelligence. This test is now called the Turing test.
o Year 1951: Marvin Minsky and Dean Edmonds created the initial artificial neural
network (ANN) named SNARC. They utilized 3,000 vacuum tubes to mimic a network
of 40 neurons.
o Year 1952: Arthur Samuel pioneered the creation of the Samuel Checkers-Playing
Program, which marked the world's first self-learning program for playing games.
o Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence program", named the Logic Theorist. The program proved 38 of 52 mathematical theorems and found new, more elegant proofs for some of them.
o Year 1956: The term "Artificial Intelligence" was first adopted by the American computer scientist John McCarthy at the Dartmouth Conference. For the first time, AI was coined as an academic field.
At that time, high-level programming languages such as FORTRAN, LISP, and COBOL were invented, and enthusiasm for AI was very high.
The golden years (1956-1974)
The period from 1956 to 1974 is commonly known as the "Golden Age" of artificial intelligence (AI). In this timeframe, AI researchers and innovators were filled with enthusiasm and achieved remarkable advancements in the field. Here are some notable events from this era:
o Year 1958: During this period, Frank Rosenblatt introduced the perceptron, one of the
early artificial neural networks with the ability to learn from data. This invention laid
the foundation for modern neural networks. Simultaneously, John McCarthy developed
the Lisp programming language, which swiftly found favor within the AI community,
becoming highly popular among developers.
o Year 1959: Arthur Samuel is credited with introducing the phrase "machine learning"
in a pivotal paper in which he proposed that computers could be programmed to surpass
their creators in performance. Additionally, Oliver Selfridge made a notable
contribution to machine learning with his publication "Pandemonium: A Paradigm for
Learning." This work outlined a model capable of self-improvement, enabling it to
discover patterns in events more effectively.
o Year 1964: During his time as a doctoral candidate at MIT, Daniel Bobrow created
STUDENT, one of the early programs for natural language processing (NLP), with the
specific purpose of solving algebra word problems.
o Year 1965: The initial expert system, Dendral, was devised by Edward Feigenbaum,
Bruce G. Buchanan, Joshua Lederberg, and Carl Djerassi. It aided organic chemists in
identifying unfamiliar organic compounds.
o Year 1966: Researchers at the time emphasized developing algorithms that could solve mathematical problems. Joseph Weizenbaum created the first chatbot, ELIZA, in 1966. Furthermore, Stanford Research Institute created Shakey, the
earliest mobile intelligent robot incorporating AI, computer vision, navigation, and
NLP. It can be considered a precursor to today's self-driving cars and drones.
o Year 1968: Terry Winograd developed SHRDLU, which was the pioneering
multimodal AI capable of following user instructions to manipulate and reason within
a world of blocks.
o Year 1969: Arthur Bryson and Yu-Chi Ho outlined a learning algorithm known as
backpropagation, which enabled the development of multilayer artificial neural
networks. This represented a significant advancement beyond the perceptron and laid
the groundwork for deep learning. Additionally, Marvin Minsky and Seymour Papert
authored the book "Perceptrons," which elucidated the constraints of basic neural
networks. This publication led to a decline in neural network research and a resurgence
in symbolic AI research.
o Year 1972: The first intelligent humanoid robot was built in Japan, which was named
WABOT-1.
o Year 1973: James Lighthill published the report titled "Artificial Intelligence: A
General Survey," resulting in a substantial reduction in the British government's
backing for AI research.
The first AI winter (1974-1980)
The first AI winter, from 1974 to 1980, was a tough period for artificial intelligence (AI). During this time there was a substantial decrease in research funding, and the field faced a sense of letdown.
o AI winter refers to a period in which computer scientists dealt with a severe shortage of government funding for AI research.
o During AI winters, public interest in and publicity around artificial intelligence decreased.
A boom of AI (1980-1987)
Between 1980 and 1987, AI underwent a renaissance and newfound vitality after the
challenging era of the First AI Winter. Here are notable occurrences from this timeframe:
The second AI winter (1987-1993)
o The duration between the years 1987 and 1993 was the second AI winter.
o Investors and governments again stopped funding AI research because of high costs and unsatisfactory results; even expert systems such as XCON, which had initially been very cost-effective, became too expensive to maintain.
o Year 1997: In 1997, IBM's Deep Blue achieved a historic milestone by defeating world chess champion Garry Kasparov, marking the first time a computer triumphed over a
reigning world chess champion. Moreover, Sepp Hochreiter and Jürgen Schmidhuber
introduced the Long Short-Term Memory recurrent neural network, revolutionizing the
capability to process entire sequences of data such as speech or video.
o Year 2002: For the first time, AI entered the home in the form of Roomba, a robot vacuum cleaner.
o Year 2006: By 2006, AI had entered the business world. Companies like Facebook, Twitter, and Netflix started using AI.
o Year 2009: Rajat Raina, Anand Madhavan, and Andrew Ng released the paper "Large-scale Deep Unsupervised Learning using Graphics Processors," introducing the concept of employing GPUs for the training of large neural networks.
o Year 2011: Jürgen Schmidhuber, Dan Claudiu Cireșan, Ueli Meier, and Jonathan
Masci created the initial CNN that attained "superhuman" performance by emerging as
the victor in the German Traffic Sign Recognition competition. Furthermore, Apple
launched Siri, a voice-activated personal assistant capable of generating responses and
executing actions in response to voice commands.
From 2011 to the present moment, significant advancements have unfolded within the artificial
intelligence (AI) domain. These achievements can be attributed to the amalgamation of deep
learning, extensive data application, and the ongoing quest for artificial general intelligence
(AGI). Here are notable occurrences from this timeframe:
o Year 2011: In 2011, IBM's Watson won Jeopardy, a quiz show where it had to solve
complex questions as well as riddles. Watson proved that it could understand natural language and solve tricky questions quickly.
o Year 2012: Google launched an Android app feature, "Google Now", which was able
to provide information to the user as a prediction. Further, Geoffrey Hinton, Ilya
Sutskever, and Alex Krizhevsky presented a deep CNN structure that emerged
victorious in the ImageNet challenge, sparking the proliferation of research and
application in the field of deep learning.
o Year 2013: China's Tianhe-2 system achieved a remarkable feat by doubling the speed
of the world's leading supercomputers to reach 33.86 petaflops. It retained its status as
the world's fastest system for the third consecutive time. Furthermore, DeepMind
unveiled deep reinforcement learning, a CNN that acquired skills through repetitive
learning and rewards, ultimately surpassing human experts in playing games. Also,
Google researcher Tomas Mikolov and his team introduced Word2vec, a tool designed
to automatically discern the semantic connections among words.
o Year 2014: In 2014, the chatbot "Eugene Goostman" won a competition based on the famous "Turing test," whereas Ian Goodfellow and his team pioneered generative
adversarial networks (GANs), a type of machine learning framework employed for
producing images, altering pictures, and crafting deepfakes, and Diederik Kingma and
Max Welling introduced variational autoencoders (VAEs) for generating images,
videos, and text. Also, Facebook engineered the DeepFace deep learning facial
recognition system, capable of identifying human faces in digital images with accuracy
nearly comparable to human capabilities.
o Year 2016: DeepMind's AlphaGo secured victory over the esteemed Go player Lee
Sedol in Seoul, South Korea, prompting reminiscence of the Kasparov chess match
against Deep Blue nearly two decades earlier. In the same year, Uber initiated a pilot program for self-driving cars in Pittsburgh, catering to a limited group of users.
o Year 2018: The "Project Debater" from IBM debated on complex topics with two
master debaters and also performed extremely well.
o Google also demonstrated an AI program, "Duplex," a virtual assistant that booked a hairdresser appointment over the phone; the person on the other end did not notice that she was talking to a machine.
o Year 2021: OpenAI unveiled the Dall-E multimodal AI system, capable of producing
images based on textual prompts.
o Year 2022: In November, OpenAI launched ChatGPT, offering a chat-oriented
interface to its GPT-3.5 LLM.
AI has now developed to a remarkable level. Concepts such as deep learning, big data, and data science are booming. Companies like Google, Facebook, IBM, and Amazon are working with AI and creating amazing devices. The future of artificial intelligence is inspiring and promises highly intelligent systems.
GOOD BEHAVIOUR IN AI
Good behavior in AI refers to ensuring that artificial intelligence systems operate in a way that
aligns with human values, ethics, and societal norms. It encompasses various principles and
guidelines aimed at making AI systems safe, fair, transparent, and accountable. Some of the
key aspects of good behavior in AI are:
1. Ethical Decision-Making:
AI should be designed to make decisions that are ethical and considerate of human values. This
involves avoiding decisions that could cause harm to individuals or society. Ethical guidelines,
such as those related to fairness, justice, and respect for autonomy, should guide the AI’s
decision-making process.
2. Transparency and Explainability:
A well-behaved AI should be transparent in its operations. This means that users should be able
to understand how an AI system arrives at its decisions. Explainability is crucial for building
trust, especially in areas like healthcare or finance, where AI systems can have significant
impacts.
3. Fairness and Avoiding Bias:
Good behavior in AI means avoiding biases that could result in unfair treatment of individuals
based on race, gender, age, or other attributes. This requires careful training, testing, and
monitoring of AI models to prevent and correct any discriminatory patterns.
4. Accountability:
AI systems must have a clear accountability structure. If an AI makes an incorrect or harmful
decision, it should be possible to identify where the responsibility lies—whether with the
developer, the organization deploying the AI, or the AI itself in some cases. This ensures there
is a way to address and correct any negative impacts.
5. Privacy and Security:
A responsible AI respects user privacy and is designed with strong data protection measures.
It should not misuse personal data or make decisions that infringe on an individual’s right to
privacy. Additionally, it should be safeguarded against malicious attacks or misuse.
7. Human-Centric Design:
Good AI behavior involves putting human interests at the center. AI should augment human
capabilities and not replace or undermine human judgment. It should support human decision-
making rather than dictate it.
8. Beneficial Use:
AI should be used for the benefit of society. Good behavior implies that AI is used to solve
real-world problems and contribute positively to areas like healthcare, education, and
environmental sustainability, without being used maliciously or harmfully.
9. Avoiding Manipulation:
AI should not be used to manipulate, deceive, or exploit people’s vulnerabilities. This includes
avoiding the use of dark patterns, misinformation, or any tactics that could trick users into
harmful behaviors.
Example Scenario
By adhering to these principles, AI can become a powerful tool that works harmoniously with
human values and societal goals.
In AI, rationality means designing agents that make decisions to maximize their chances of
achieving their goals. An AI is considered rational if it consistently takes actions that align with
its predefined objectives, given its knowledge and available resources.
Example of Rationality
Imagine you are a manager deciding whether to invest in a new project. A rational approach would involve gathering the relevant information, weighing the expected costs, benefits, and risks, and comparing the alternatives against your objectives.
If you decide to invest after such careful evaluation, it would be seen as a rational decision, assuming the analysis was conducted logically and objectively.
By understanding and applying the principles of rationality, individuals and organizations can
make better, more informed decisions that lead to optimal outcomes.
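To make this concrete, here is a minimal Python sketch (not from the notes; the actions, outcome probabilities, and utility values are made up for illustration) of rational action selection as choosing the action with the highest expected utility:

```python
# A minimal sketch of rational action selection: the agent picks the
# action with the highest expected utility under its (assumed)
# probabilistic model of outcomes. All numbers below are hypothetical.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def rational_choice(actions):
    """actions: dict mapping action name -> list of (probability, utility)."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

if __name__ == "__main__":
    # Hypothetical investment decision: invest vs. hold.
    actions = {
        "invest": [(0.6, +100), (0.4, -50)],   # 60% chance of gain, 40% chance of loss
        "hold":   [(1.0, 0)],                  # keep the current position
    }
    print(rational_choice(actions))  # -> "invest" (expected utility 40 vs 0)
```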
Agents can be classified into different types based on their characteristics, such as whether they
are reactive or proactive, whether they have a fixed or dynamic environment, and whether they
are single or multi-agent systems.
Reactive agents are those that respond to immediate stimuli from their environment and
take actions based on those stimuli. Proactive agents, on the other hand, take initiative and
plan ahead to achieve their goals. The environment in which an agent operates can also be
fixed or dynamic. Fixed environments have a static set of rules that do not change, while
dynamic environments are constantly changing and require agents to adapt to new
situations.
Multi-agent systems involve multiple agents working together to achieve a common goal.
These agents may have to coordinate their actions and communicate with each other to
achieve their objectives. Agents are used in a variety of applications, including robotics,
gaming, and intelligent systems. They can be implemented using different programming
languages and techniques, including machine learning and natural language processing.
Artificial intelligence is defined as the study of rational agents. A rational agent could be
anything that makes decisions, such as a person, firm, machine, or software. It carries out an
action with the best outcome after considering past and current percepts (the agent's perceptual
inputs at a given instance). An AI system is composed of an agent and its environment. The
agents act in their environment. The environment may contain other agents.
An agent is anything that can be viewed as:
Perceiving its environment through sensors and
Acting upon that environment through actuators
Note: Every agent can perceive its own actions (but not always the effects).
Structure of an AI Agent
To understand the structure of Intelligent Agents, we should be familiar
with Architecture and Agent programs. Architecture is the machinery that the agent executes
on. It is a device with sensors and actuators, for example, a robotic car, a camera, and a PC. An
agent program is an implementation of an agent function. An agent function is a map from the percept sequence (the history of all that an agent has perceived to date) to an action.
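A rough sketch of this structure, assuming a made-up percept and action vocabulary, is shown below: the agent program stores the percept sequence and delegates to an agent function that maps that sequence to an action.

```python
# Minimal sketch of an agent program (illustrative; the percepts and
# actions are hypothetical). The agent function maps the percept
# sequence seen so far to an action.

class Agent:
    def __init__(self, agent_function):
        self.percept_sequence = []        # history of everything perceived so far
        self.agent_function = agent_function

    def step(self, percept):
        """Receive one percept from the sensors and return one action for the actuators."""
        self.percept_sequence.append(percept)
        return self.agent_function(self.percept_sequence)

# Example agent function: act only on the latest percept (reflex-style).
def last_percept_policy(percepts):
    return "Suck" if percepts[-1] == "Dirty" else "Move"

agent = Agent(last_percept_policy)
print(agent.step("Dirty"))   # -> Suck
print(agent.step("Clean"))   # -> Move
```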
There are many examples of agents in artificial intelligence. Here are a few:
Intelligent personal assistants: These are agents that are designed to help users with
various tasks, such as scheduling appointments, sending messages, and setting reminders.
Examples of intelligent personal assistants include Siri, Alexa, and Google Assistant.
Autonomous robots: These are agents that are designed to operate autonomously in the
physical world. They can perform tasks such as cleaning, sorting, and delivering goods.
Examples of autonomous robots include the Roomba vacuum cleaner and the Amazon
delivery robot.
Gaming agents: These are agents that are designed to play games, either against human
opponents or other agents. Examples of gaming agents include chess-playing agents and
poker-playing agents.
Fraud detection agents: These are agents that are designed to detect fraudulent behavior
in financial transactions. They can analyze patterns of behavior to identify suspicious
activity and alert authorities. Examples of fraud detection agents include those used by
banks and credit card companies.
Traffic management agents: These are agents that are designed to manage traffic flow in
cities. They can monitor traffic patterns, adjust traffic lights, and reroute vehicles to
minimize congestion. Examples of traffic management agents include those used in smart
cities around the world.
A software agent has keystrokes, file contents, and received network packets acting as sensors, and displays on the screen, files, and sent network packets acting as actuators.
A human agent has eyes, ears, and other organs acting as sensors, and hands, legs, mouth, and other body parts acting as actuators.
A robotic agent has cameras and infrared range finders acting as sensors, and various motors acting as actuators.
Characteristics of an Agent
Types of Agents
Agents can be grouped into five classes based on their degree of perceived intelligence and capability:
Simple Reflex Agents
Simple reflex agents ignore the rest of the percept history and act only on the basis of the current percept. Percept history is the history of all that an agent has perceived to date. The agent works by finding a condition-action rule whose condition matches the current situation. Problems with simple reflex agents are that they have very limited intelligence, they only succeed when the correct action can be chosen from the current percept alone (i.e., the environment is fully observable), and they can get stuck in infinite loops in partially observable environments.
Model-Based Reflex Agents
A model-based reflex agent can handle a partially observable environment by maintaining an internal model of the world, which it updates using the percept history.
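To illustrate the reflex idea, here is a small sketch assuming a hypothetical two-location vacuum world (locations A and B); the agent applies condition-action rules to the current percept only.

```python
# Simple reflex agent for a hypothetical two-location vacuum world.
# The agent looks only at the current percept (location, dirt status)
# and applies a condition-action rule; it keeps no percept history.

def reflex_vacuum_agent(percept):
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "MoveRight"
    else:
        return "MoveLeft"

print(reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(reflex_vacuum_agent(("A", "Clean")))   # -> MoveRight
print(reflex_vacuum_agent(("B", "Clean")))   # -> MoveLeft
```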
Goal-Based Agents
These kinds of agents take decisions based on how far they are currently from their goal (a description of desirable situations). Every action they take is intended to reduce the distance from the goal. This gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state.
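A tiny sketch of goal-based action selection, assuming a hypothetical grid world and Manhattan distance as the measure of "how far from the goal":

```python
# Goal-based action selection sketch on a hypothetical 2-D grid.
# The agent picks the move whose resulting position is closest to the goal.

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def goal_based_choice(position, goal, moves):
    """moves: dict of action name -> (dx, dy)."""
    def result(action):
        dx, dy = moves[action]
        return (position[0] + dx, position[1] + dy)
    return min(moves, key=lambda a: manhattan(result(a), goal))

moves = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
print(goal_based_choice(position=(0, 0), goal=(3, 2), moves=moves))
# -> "up" (ties with "right"; both reduce the distance to the goal)
```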
Utility-Based Agents
Agents that are developed with their end uses as building blocks are called utility-based agents. When there are multiple possible alternatives, utility-based agents are used to decide which one is best: they choose actions based on a preference (utility) for each state.
Learning Agent
A learning agent in AI is an agent that can learn from its past experiences, i.e., it has learning capabilities. It starts acting with basic knowledge and then adapts automatically through learning.
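As a rough sketch (a hypothetical two-action bandit-style task, not from the notes), a learning agent can keep running estimates of how rewarding each action has been and gradually prefer the better one:

```python
import random

# Hypothetical learning agent: it estimates the average reward of each
# action from experience and mostly picks the best-looking action,
# while occasionally exploring (epsilon-greedy).

class LearningAgent:
    def __init__(self, actions, epsilon=0.1):
        self.epsilon = epsilon
        self.estimates = {a: 0.0 for a in actions}   # learned value of each action
        self.counts = {a: 0 for a in actions}

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.estimates))          # explore
        return max(self.estimates, key=self.estimates.get)      # exploit

    def learn(self, action, reward):
        self.counts[action] += 1
        n = self.counts[action]
        # incremental average: new_estimate = old + (reward - old) / n
        self.estimates[action] += (reward - self.estimates[action]) / n

agent = LearningAgent(["left", "right"])
for _ in range(100):
    a = agent.choose()
    reward = 1.0 if a == "right" else 0.0   # "right" is secretly the better action
    agent.learn(a, reward)
print(agent.estimates)   # the estimate for "right" approaches 1.0
```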
Multi-Agent Systems
These agents interact with other agents to achieve a common goal. They may have to coordinate
their actions and communicate with each other to achieve their objective.
Hierarchical Agents
These agents are organized into a hierarchy, with high-level agents overseeing the behavior of
lower-level agents. The high-level agents provide goals and constraints, while the low-level
agents carry out specific tasks. Hierarchical agents are useful in complex environments with
many tasks and sub-tasks.
Uses of Agents
Agents are used in a wide range of applications in artificial intelligence, including:
Robotics: Agents can be used to control robots and automate tasks in manufacturing,
transportation, and other industries.
Smart homes and buildings: Agents can be used to control heating, lighting, and other
systems in smart homes and buildings, optimizing energy use and improving comfort.
Transportation systems: Agents can be used to manage traffic flow, optimize routes for
autonomous vehicles, and improve logistics and supply chain management.
Healthcare: Agents can be used to monitor patients, provide personalized treatment plans,
and optimize healthcare resource allocation.
Finance: Agents can be used for automated trading, fraud detection, and risk management
in the financial industry.
Games: Agents can be used to create intelligent opponents in games and simulations,
providing a more challenging and realistic experience for players.
Natural language processing: Agents can be used for language translation, question
answering, and chatbots that can communicate with users in natural language.
Cybersecurity: Agents can be used for intrusion detection, malware analysis, and network
security.
Environmental monitoring: Agents can be used to monitor and manage natural resources,
track climate change, and improve environmental sustainability.
Social media: Agents can be used to analyze social media data, identify trends and patterns,
and provide personalized recommendations to users.
Agent Environment in AI
An environment is everything in the world which surrounds the agent, but it is not a part of an
agent itself. An environment can be described as a situation in which an agent is present.
The environment is where the agent lives and operates; it provides the agent with something to sense and act upon. An environment is often said to be non-deterministic.
Features of Environment
1. Fully observable vs Partially observable
2. Deterministic vs Stochastic
3. Episodic vs Sequential
4. Single-agent vs Multi-agent
5. Static vs Dynamic
6. Discrete vs Continuous
1. Fully observable vs Partially observable:
o If an agent's sensors can sense or access the complete state of the environment at each point in time, then it is a fully observable environment; otherwise, it is partially observable. For reference, imagine a chess-playing agent. In this case, the agent can fully observe the
reference, Imagine a chess-playing agent. In this case, the agent can fully observe the
state of the chessboard at all times. Its sensors (in this case, vision or the ability to access
the board's state) provide complete information about the current position of all pieces.
This is a fully observable environment because the agent has perfect information about
the state of the world.
2. Deterministic vs Stochastic:
o If an agent's current state and selected action can completely determine the next state
of the environment, then such an environment is called a deterministic environment.
For reference, Chess is a classic example of a deterministic environment. In chess, the
rules are well-defined, and each move made by a player has a clear and predictable
outcome based on those rules. If you move a pawn from one square to another, the
resulting state of the chessboard is entirely determined by that action, as is your
opponent's response. There's no randomness or uncertainty in the outcomes of chess
moves because they follow strict rules. In a deterministic environment like chess,
knowing the current state and the actions taken allows you to completely determine the
next state.
o A stochastic environment is random and cannot be determined completely by an agent.
For reference, The stock market is an example of a stochastic environment. It's highly
influenced by a multitude of unpredictable factors, including economic events, investor
sentiment, and news. While there are patterns and trends, the exact behavior of stock
prices is inherently random and cannot be completely determined by any individual or
agent. Even with access to extensive data and analysis tools, stock market movements
can exhibit a high degree of unpredictability. Random events and market sentiment play
significant roles, introducing uncertainty.
3. Episodic vs Sequential:
o In an episodic environment, the agent's experience is divided into a series of independent, one-shot episodes, and only the current percept is required to choose an action. A part-picking robot that inspects one part at a time is a classic example of an episodic environment.
o In a sequential environment, the current decision can affect all future decisions; games such as chess and Tic-Tac-Toe are sequential environments.
4. Single-agent vs Multi-agent
o If only one agent is involved in an environment, and operating by itself then such an
environment is called a single-agent environment. For example, Solitaire is a classic
example of a single-agent environment. When you play Solitaire, you're the only agent
involved.
o However, if multiple agents are operating in an environment, then such an environment
is called a multi-agent environment. For reference, A soccer match is an example of a
multi-agent environment. In a soccer game, there are two teams, each consisting of
multiple players (agents). These players work together to achieve common goals
(scoring goals and preventing the opposing team from scoring). Each player has their
own set of actions and decisions, and they interact with both their teammates and the
opposing team. The outcome of the game depends on the coordinated actions and
strategies of all the agents on the field. It's a multi-agent environment because there are
multiple autonomous entities (players) interacting in a shared environment.
Intelligent Agents:
Sensor: Sensor is a device which detects the change in the environment and sends the
information to other electronic devices. An agent observes its environment through
sensors.
Actuators: Actuators are the components of a machine that convert energy into motion. They are responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.
Effectors: Effectors are the devices which affect the environment. Effectors can be legs, wheels, arms, fingers, wings, fins, and a display screen.
Rational Agent:
A rational agent is an agent which has clear preference, models uncertainty, and acts in
a way to maximize its performance measure with all possible actions.
A rational agent is said to perform the right things. AI is about creating rational agents that use game theory and decision theory in various real-world scenarios.
For an AI agent, rational action is most important because in reinforcement learning algorithms, the agent gets a positive reward for each best possible action and a negative reward for each wrong action.
Rationality:
The rationality of an agent is measured by its performance measure. Rationality can be judged on the basis of the performance measure that defines the criterion of success, the agent's prior knowledge of the environment, the actions that the agent can perform, and the agent's percept sequence to date.
PEAS Representation
P: Performance measure
E: Environment
A: Actuators
S: Sensors
PEAS for self-driving cars:
• Performance measure: Safety, time, legal drive, comfort
• Environment: Roads, other vehicles, pedestrians, road signs
• Actuators: Steering, accelerator, brake, signal, horn
• Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar
Further examples of agents with their PEAS representation:

1. Medical Diagnose
• Performance measure: Healthy patient, Minimized cost
• Environment: Patient, Hospital, Staff
• Actuators: Tests, Treatments
• Sensors: Keyboard (entry of symptoms)

2. Vacuum Cleaner
• Performance measure: Cleanness, Efficiency, Battery life, Security
• Environment: Room, Table, Wood floor, Carpet, Various obstacles
• Actuators: Wheels, Brushes, Vacuum extractor
• Sensors: Camera, Dirt detection sensor, Cliff sensor, Bump sensor, Infrared wall sensor

3. Part-picking Robot
• Performance measure: Percentage of parts in correct bins
• Environment: Conveyor belt with parts, Bins
• Actuators: Jointed arms, Hand
• Sensors: Camera, Joint angle sensors
UNIT 2
2. Problem Analysis: In this step, the problem is thoroughly examined to understand its
components, constraints, and implications. This analysis is crucial for identifying viable
solutions.
4. Problem Solving: The selection of the best techniques to address the problem is made in
this step. It often involves comparing various algorithms and approaches to determine the
most effective method.
Action: This stage involves selecting functions associated with the initial state and
identifying all possible actions. Each action influences the progression toward the desired
goal.
Transition: This component integrates the actions from the previous stage, leading to the
next state in the problem-solving process. Transition modeling helps visualize how
actions affect outcomes.
Goal Test: This stage verifies whether the specified goal has been achieved through the
integrated transition model. If the goal is met, the action ceases, and the focus shifts to
evaluating the cost of achieving that goal.
Path Costing: This component assigns a numerical value representing the cost of
achieving the goal. It considers all associated hardware, software, and human resource
expenses, helping to optimize the problem-solving strategy.
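These components can be gathered into a small problem-formulation sketch. The example below is an assumed toy two-room vacuum world (not from the notes), with explicit actions, a transition model, a goal test, and a unit path cost:

```python
# Sketch of a search-problem formulation (illustrative toy example).
# State: (agent_location, dirt_at_A, dirt_at_B) for a two-room world.

class VacuumProblem:
    def __init__(self, initial_state):
        self.initial_state = initial_state

    def actions(self, state):
        """All actions available in a state."""
        return ["Left", "Right", "Suck"]

    def transition(self, state, action):
        """Transition model: the state that results from doing `action` in `state`."""
        loc, dirt_a, dirt_b = state
        if action == "Left":
            return ("A", dirt_a, dirt_b)
        if action == "Right":
            return ("B", dirt_a, dirt_b)
        if action == "Suck":
            return (loc,
                    False if loc == "A" else dirt_a,
                    False if loc == "B" else dirt_b)

    def goal_test(self, state):
        """Goal: no dirt anywhere."""
        _, dirt_a, dirt_b = state
        return not dirt_a and not dirt_b

    def path_cost(self, cost_so_far, state, action, next_state):
        """Each action costs 1."""
        return cost_so_far + 1

problem = VacuumProblem(("A", True, True))
s = problem.transition(problem.initial_state, "Suck")   # -> ("A", False, True)
print(problem.goal_test(s))                             # -> False (room B is still dirty)
```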
2. Search Algorithms
Uninformed Search: Such as breadth-first and depth-first search, which do not use problem-specific information.
Informed Search: Algorithms like A* that use heuristics to find solutions more efficiently.
3. Optimization Techniques
AI often tackles optimization problems, where the goal is to find the best solution from a set
of feasible solutions. Techniques such as linear programming, dynamic programming,
and evolutionary algorithms are commonly employed.
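As one hedged illustration, the following sketch applies dynamic programming to a made-up 0/1 knapsack instance (values, weights, and capacity are hypothetical), building the optimum from solutions to smaller subproblems:

```python
# Dynamic-programming sketch for a tiny 0/1 knapsack problem.
# best[c] = best total value achievable with capacity c using the
# items considered so far.

def knapsack(values, weights, capacity):
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacities downwards so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack(values=[60, 100, 120], weights=[1, 2, 3], capacity=5))  # -> 220
```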
4. Machine Learning
Machine learning techniques allow AI systems to learn from data and improve their problem-
solving abilities over time. Supervised, unsupervised, and reinforcement learning paradigms
offer various approaches to adapt and enhance performance.
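For example, a minimal supervised-learning sketch (with toy data invented for illustration) fits a line to labelled examples by ordinary least squares and then predicts on unseen input:

```python
# Minimal supervised learning sketch: fit y ≈ w*x + b to toy data by
# ordinary least squares, then predict on an unseen input.

def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

xs = [1, 2, 3, 4]          # inputs (features)
ys = [2.1, 3.9, 6.2, 8.1]  # labels (targets), roughly y = 2x
w, b = fit_line(xs, ys)
print(round(w * 5 + b, 2))  # prediction for x = 5, close to 10
```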
Data Quality: AI systems are only as good as the data they are trained on. Poor quality
data can lead to inaccurate solutions.
Interpretability: Many AI models, especially deep learning, act as black boxes, making
it challenging to understand their decision-making processes.
Ethics and Bias: AI systems can inadvertently reinforce biases present in the training
data, leading to unfair or unethical outcomes.
Introduction:
Uninformed search refers to search strategies that do not use any problem-specific clues about where the goal lies; they rely only on systematically exploring the search space (the set of all possible states). The search begins from the initial state and expands all possible successor states until the goal is reached. These are the simplest search strategies, but they may be unsuitable for complex problems because they expand many irrelevant states. They are nevertheless useful for solving basic tasks, or as simple preprocessing before more advanced search algorithms that incorporate heuristic information are applied. The main uninformed search strategies are listed below, followed by a small breadth-first search sketch.
1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional Search
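The breadth-first search sketch mentioned above, on an assumed toy graph (the state space below is purely illustrative):

```python
from collections import deque

# Breadth-first search sketch: explores states level by level from the
# start node and returns the first path found to the goal (fewest steps).

def breadth_first_search(graph, start, goal):
    frontier = deque([[start]])          # queue of paths to expand
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in explored:
                explored.add(neighbour)
                frontier.append(path + [neighbour])
    return None                          # goal not reachable

# Hypothetical state space used only for illustration.
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["F"],
    "E": ["F"],
}
print(breadth_first_search(graph, "A", "F"))   # -> ['A', 'B', 'D', 'F'] (one shortest path)
```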