
Search Part I
Created @October 14, 2024 8:42 PM

Key Concepts in Artificial Intelligence


Intelligent Agents
Definition: Entities that perceive their environment via sensors and act upon it
using actuators.

Examples:

Robots: Use sensors and actuators to manipulate objects.

Software Agents: Such as web crawlers that index pages.

Components:

Perception: Sensing the environment.

Decision-Making: Choosing actions based on perceptions.

Actuation: Executing actions to interact with the environment.

Agent Types:

Simple Reflex Agents: Respond to current conditions (e.g., thermostats).

Model-Based Agents: Keep track of past states (e.g., robotic vacuums).

Goal-Based Agents: Act to achieve specific goals (e.g., chess AIs).

Utility-Based Agents: Maximize performance based on a utility function (e.g., self-driving cars).

Problem Solving and Search


State: Current snapshot of the environment.

Initial State: Starting point for problem-solving.

Actions: Possible moves or choices from a given state.

Transition Model: Maps actions to resulting states.

Goal Test: Determines if the current state meets the goal.

Path Cost: Total cost of actions taken to reach a state.

Search Strategies

Uninformed Search (Blind Search)


1. Breadth-First Search (BFS): Expands the shallowest nodes; ideal for
unweighted graphs.

2. Depth-First Search (DFS): Expands the deepest nodes first; useful when solutions lie deep in the search tree.

3. Uniform-Cost Search: Expands nodes with the lowest cost; suitable for
weighted graphs.
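To make the difference concrete, here is a minimal sketch of uniform-cost search, using Python's `heapq` as the priority queue. The weighted graph and its costs are made up for illustration.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand the cheapest node first; graph maps node -> [(neighbor, cost), ...]."""
    frontier = [(0, start, [start])]   # priority queue ordered by path cost
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbor, step in graph.get(node, []):
            if neighbor not in explored:
                heapq.heappush(frontier, (cost + step, neighbor, path + [neighbor]))
    return None

# Hypothetical weighted graph: the direct edge A->C costs more than A->B->C
weighted = {'A': [('B', 1), ('C', 5)], 'B': [('C', 1)], 'C': []}
print(uniform_cost_search(weighted, 'A', 'C'))  # (2, ['A', 'B', 'C'])
```

Note how the cheaper two-step path beats the direct but expensive edge, which BFS (counting steps, not costs) would have preferred.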

Informed Search (Heuristic Search)


1. Greedy Best-First Search: Expands the node that appears closest to the goal according to a heuristic.

2. A* Search: Combines path cost and heuristic to find the most promising node.
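A sketch of the A* idea, ordering the frontier by f(n) = g(n) + h(n). The graph and the heuristic values are invented for illustration; a real heuristic must come from the problem (e.g., straight-line distance).

```python
import heapq

def a_star(graph, start, goal, h):
    """Order the frontier by f(n) = g(n) + h(n): cost so far plus estimated cost to go."""
    frontier = [(h[start], 0, start, [start])]
    explored = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in explored:
            continue
        explored.add(node)
        for neighbor, step in graph.get(node, []):
            if neighbor not in explored:
                heapq.heappush(frontier,
                               (g + step + h[neighbor], g + step,
                                neighbor, path + [neighbor]))
    return None

graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 2)], 'C': []}
h = {'A': 3, 'B': 2, 'C': 0}   # admissible (never overestimating) estimates
print(a_star(graph, 'A', 'C', h))  # (3, ['A', 'B', 'C'])
```

With an admissible heuristic, A* returns an optimal solution while typically expanding fewer nodes than uniform-cost search.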

Adversarial Search
Competitive Environments: Used in games like chess.

Minimax Algorithm: Maximizes one player's score while minimizing the opponent's.

Alpha-Beta Pruning: Optimizes minimax by eliminating branches that won't affect the final decision.
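The two ideas can be sketched together on a tiny game tree. Here leaves are numeric scores and interior nodes alternate between MAX and MIN; the tree values are made up for illustration.

```python
def minimax(state, maximizing, alpha=float('-inf'), beta=float('inf')):
    """Game tree as nested lists; leaves are scores. Alpha-beta prunes branches."""
    if isinstance(state, (int, float)):      # terminal node: return its score
        return state
    if maximizing:
        value = float('-inf')
        for child in state:
            value = max(value, minimax(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                # MIN would never let play reach here
                break
        return value
    else:
        value = float('inf')
        for child in state:
            value = min(value, minimax(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:                # MAX would never let play reach here
                break
        return value

tree = [[3, 5], [2, 9], [0, 7]]   # MAX chooses among three MIN nodes
print(minimax(tree, True))  # 3
```

In this tree the leaves 9 and 7 are never examined: once a MIN node's value drops below the best option MAX already has (3), the rest of that branch is pruned.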

Machine Learning (ML) and Neural Networks


Definition: Algorithms that enable learning from data.

Types:

Supervised Learning: Uses labeled data (e.g., spam detection).

Unsupervised Learning: Finds patterns in unlabeled data (e.g., customer segmentation).

Neural Networks: Models inspired by the brain; used in deep learning for
complex tasks.

Optimization and Genetic Algorithms


Optimization Problems: Finding the best solution from many options (e.g.,
cost minimization).

Genetic Algorithms: Evolve solutions using natural selection principles.

Components: Selection, Crossover, Mutation.
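A minimal sketch of the three components on a toy problem: evolving bitstrings toward all ones. The population size, mutation rate, and fitness function are arbitrary choices for illustration.

```python
import random

def genetic_algorithm(pop_size=20, length=12, generations=60, seed=0):
    """Evolve bitstrings toward all-ones via selection, crossover, and mutation."""
    rng = random.Random(seed)
    fitness = sum                     # fitness = number of 1 bits
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Selection: tournament of two, keep the fitter individual
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        next_pop = []
        for _ in range(pop_size):
            p1, p2 = select(), select()
            cut = rng.randrange(1, length)      # Crossover: one-point
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.05:             # Mutation: flip a random bit
                i = rng.randrange(length)
                child[i] ^= 1
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = genetic_algorithm()
print(sum(best), best)
```

Selection biases the next generation toward fitter parents, crossover recombines partial solutions, and mutation keeps diversity so the search does not stall.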

Handling Uncertainty and Decision Theory


Uncertainty in AI: Making decisions with incomplete information (e.g., self-
driving cars).

Bayesian Networks: Probabilistic models for reasoning under uncertainty.

Decision Theory: Optimal decision-making strategies under uncertainty.
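The simplest possible Bayesian-network flavor of this: one cause (rain) and one effect (wet grass), with made-up probabilities, inferring the cause from the observed effect by Bayes' rule.

```python
# Tiny two-node network: Rain -> WetGrass (probabilities are illustrative)
p_rain = 0.2
p_wet_given_rain = 0.9
p_wet_given_dry = 0.1

# Marginalize to get P(WetGrass), then apply Bayes' rule for P(Rain | WetGrass)
p_wet = p_wet_given_rain * p_rain + p_wet_given_dry * (1 - p_rain)
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet
print(round(p_rain_given_wet, 3))  # 0.692
```

Observing wet grass raises the belief in rain from the 0.2 prior to roughly 0.69, which is exactly the kind of update a full Bayesian network performs over many variables.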

Search Algorithms
Key Components of a Search Problem
1. Initial State:

The starting condition of the agent within its environment.

Example: In a maze-solving problem, the initial state would be the starting position of the agent in the maze.

2. Actions:

The possible moves or decisions the agent can make from a given state.

Function: ACTIONS(s): Returns the set of actions that can be executed from state (s).

Example: In a maze, possible actions might be "move up," "move down,"
"move left," and "move right."

3. Transition Model:

Describes the result of applying an action in a state.

Function: RESULT(s, a): Returns the state resulting from performing action (a) in state (s).

Example: If the current state is (2, 3) in a grid, performing the action "move up" would result in the new state (1, 3).

4. Goal Test:

A mechanism to determine if a given state is a goal state.

Example: In a pathfinding problem, the goal test checks if the current position matches the target destination.

5. Path Cost Function:

A numerical cost associated with the path from the initial state to the
current state.

Example: In a weighted graph, the cost could represent the distance, time,
or resources needed to traverse from one node to another.
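The five components above can be bundled into a single problem definition. The class below is a hypothetical maze formulation, not code from the lecture; grid layout, names, and unit step cost are assumptions.

```python
class MazeProblem:
    """A hypothetical grid maze expressed with the five components above."""
    MOVES = {'up': (-1, 0), 'down': (1, 0), 'left': (0, -1), 'right': (0, 1)}

    def __init__(self, grid, start, goal):
        self.grid, self.initial_state, self.goal = grid, start, goal

    def actions(self, state):
        # ACTIONS(s): moves that stay on the grid and avoid walls (1 = wall)
        r, c = state
        return [a for a, (dr, dc) in self.MOVES.items()
                if 0 <= r + dr < len(self.grid)
                and 0 <= c + dc < len(self.grid[0])
                and self.grid[r + dr][c + dc] == 0]

    def result(self, state, action):
        # RESULT(s, a): the transition model
        dr, dc = self.MOVES[action]
        return (state[0] + dr, state[1] + dc)

    def goal_test(self, state):
        return state == self.goal

    def path_cost(self, path):
        return len(path) - 1   # unit cost per step

maze = MazeProblem([[0, 0], [1, 0]], start=(0, 0), goal=(1, 1))
print(maze.actions((0, 0)))          # ['right']  (the cell below is a wall)
print(maze.result((0, 0), 'right'))  # (0, 1)
```

Any of the search strategies below can then be written once against this interface and reused across problems.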

State Space
State Space:

The set of all states reachable from the initial state through any sequence
of actions.

Example: In a chess game, the state space consists of all possible board
configurations.

Solution:

A sequence of actions that leads from the initial state to the goal state.

Example: The moves made by a player to win a game.

Optimal Solution:

The solution with the lowest path cost among all possible solutions.

Example: The shortest path in a navigation app that minimizes travel time.

Search Approaches

Breadth-First Search (BFS)


Approach:

Expands the shallowest (closest to the root) nodes first.

Useful when the solution is expected to be found at a shallow depth.

Data Structure:

Uses a queue (FIFO) to manage nodes to be explored.

Goal:

To find a solution with the fewest steps.

Example:

Finding the shortest path in an unweighted graph, such as navigating a maze.

Code:

graph = {
    '1': ['2', '3'],
    '2': ['4'],
    '3': ['5'],
    '4': ['5'],
    '5': ['3', '4']
}

def bfs(visited, graph, node):
    # FIFO queue: the shallowest nodes are dequeued first
    visited.append(node)
    queue = [node]

    while queue:
        currentNode = queue.pop(0)
        print(currentNode, end=" ")

        for neighbor in graph[currentNode]:
            if neighbor not in visited:
                visited.append(neighbor)
                queue.append(neighbor)

bfs([], graph, '1')

Depth-First Search (DFS)


Approach:

Expands the deepest (farthest from the root) nodes first.

Can go down one path until it reaches a dead end, then backtrack.

Data Structure:

Uses a stack (LIFO) to manage nodes.

Potential Issues:

Can get stuck in deep branches (infinite loops) unless modified with
checks to avoid revisiting nodes.

Example:

Navigating through a maze where the solution might be deep.

Code:

graph = {
    '1': ['2', '3'],
    '2': ['4'],
    '3': ['5'],
    '4': ['5'],
    '5': ['3', '4']
}

def dfs(visited, graph, node):
    # Recursion uses the call stack as the LIFO stack; the visited set
    # prevents infinite loops on the 3-4-5 cycle
    if node not in visited:
        print(node, end=" ")
        visited.add(node)
        for neighbor in graph[node]:
            dfs(visited, graph, neighbor)

dfs(set(), graph, '1')

Frontier and Explored Set


Frontier: The set of nodes available for exploration.

Start with the frontier containing the initial state.

Explored Set: A record of nodes that have already been expanded to avoid
revisiting them.

Steps in Search Algorithm:


1. Remove a node from the frontier.

2. If the node contains the goal state, return the solution.

3. Expand the node, adding resulting nodes to the frontier.

Revised Approach:
Include an explored set to track expanded nodes.

1. Add each expanded node to the explored set.

2. Only add new nodes to the frontier if they are not already in the explored set or the frontier.
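The revised loop can be sketched as follows; with a FIFO frontier it behaves like BFS, and the graph reuses the one from the earlier examples.

```python
def graph_search(graph, start, goal):
    """Generic search loop with a frontier and an explored set to avoid revisits."""
    frontier = [start]               # FIFO list, so this instance behaves like BFS
    explored = set()
    while frontier:
        node = frontier.pop(0)       # 1. Remove a node from the frontier
        if node == goal:             # 2. Goal test before expanding
            return True
        explored.add(node)           # record the expanded node
        for neighbor in graph.get(node, []):   # 3. Expand the node
            if neighbor not in explored and neighbor not in frontier:
                frontier.append(neighbor)
    return False

graph = {'1': ['2', '3'], '2': ['4'], '3': ['5'], '4': ['5'], '5': ['3', '4']}
print(graph_search(graph, '1', '5'))  # True
```

Swapping the frontier's data structure changes the algorithm: a LIFO stack gives DFS, a priority queue ordered by path cost gives uniform-cost search.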

Optimization Concepts

Weighted Graphs:
Edges have associated weights or costs.

Path Cost: The sum of weights along a path from the initial state to the goal.

Optimal Solution:
A solution with the lowest path cost among all possible solutions.

Example: In a navigation application, the optimal solution minimizes distance or travel time.

