Ai_notes

The document introduces Artificial Intelligence (AI), detailing its approaches, foundations, rational agents, and task environments. It covers key concepts such as human-like behavior, logical reasoning, and various agent structures, emphasizing AI's multidisciplinary nature. Additionally, it explores problem-solving techniques, search strategies, optimization methods, and adversarial search in AI.

Summary of "Introduction to AI" (DSE 3252)

The document provides an introduction to Artificial Intelligence (AI), structured into key
approaches, foundations, agent-based concepts, and task environments.

1. Approaches to AI

• Acting Humanly (Turing Test): AI must possess natural language processing, knowledge representation, automated reasoning, machine learning, computer vision, and robotics to mimic human intelligence.
• Thinking Humanly (Cognitive Modeling): AI can model human thought through introspection, psychological experiments, and brain imaging.
• Thinking Rationally (Laws of Thought): AI follows logical reasoning, based on Aristotle's syllogisms. However, formalizing uncertain knowledge is a challenge.
• Acting Rationally (Rational Agents): AI is an autonomous agent that perceives, adapts, persists, and pursues goals. This approach is more general than logic-based AI.

2. Foundations of AI

AI is grounded in:

• Philosophy: Questions about reasoning, knowledge, and induction.
• Mathematics: Formal logic (Boolean logic), probability, and computational complexity (NP-completeness).
• Economics: Decision theory (utility-based choices) and game theory.
• Neuroscience: Understanding how brains process information; EEG, optogenetics, and brain-machine interfaces.
• Psychology: AI as an information-processing system; models based on cognitive psychology.
• Computer Engineering: Moore's Law, quantum computing, and AI hardware advancements.
• Control Theory & Cybernetics: Designing self-regulating systems.
• Linguistics: Natural Language Processing (NLP) and knowledge representation.

3. Rational Agents

• Performance Measure: An agent's success is evaluated against a performance measure, given its percept sequence and prior knowledge.
• Rationality vs. Omniscience: A rational agent maximizes expected performance; it cannot predict all outcomes.
• Learning & Autonomy: An agent can be pre-programmed or can learn from experience.

4. Task Environments

• Fully Observable vs. Partially Observable: The agent may or may not have complete data about the environment.
• Single-Agent vs. Multi-Agent: The agent can operate independently or interact with other agents.
• Deterministic vs. Stochastic: Predictable vs. probabilistic environments.
• Episodic vs. Sequential: Decisions may or may not affect future actions.
• Static vs. Dynamic: The environment may change while the agent is deliberating.
• Discrete vs. Continuous: A finite vs. a continuous set of states, percepts, and actions.
• Known vs. Unknown: Whether the agent knows the environment's rules.

5. AI Agent Structures

• Simple Reflex Agents: React to the current percept only (e.g., a vacuum cleaner robot).
• Model-Based Reflex Agents: Maintain an internal model of the world.
• Goal-Based Agents: Take actions to achieve defined goals.
• Utility-Based Agents: Maximize performance using utility functions.
• Learning Agents: Improve over time using a learning element, a critic, and a problem generator.
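The simple reflex agent above can be sketched in a few lines. The following is an illustrative Python sketch for the two-location vacuum world; the percept format and action names are assumptions, not taken from the notes.

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent: maps the current percept directly to an
    action, with no internal state or percept history."""
    location, status = percept          # assumed format, e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"                   # clean the current square
    return "Right" if location == "A" else "Left"   # otherwise move on
```

Because the agent consults only the current percept, it cannot act on anything it is not currently sensing, which is precisely the limitation that motivates model-based agents.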

Conclusion

AI is a multidisciplinary field based on logic, learning, decision-making, and autonomous agent behavior. It is evolving through advances in computing, psychology, and neuroscience.

Summary of "Problem Solving in AI" (DSE 3252)

This document explores problem-solving agents, search strategies, optimization techniques, and adversarial search in Artificial Intelligence.

1. Problem-Solving Agents

• Goal-Based Agents: Use atomic state representations to decide actions.
• State Space: Represented as a directed graph, where nodes are states and edges are actions.
• Solution: A sequence of actions leading from the initial state to the goal with minimal cost.
• Problem Definition:
   o Initial state
   o Actions
   o Transition model
   o Goal test
   o Path cost function

Examples

• Vacuum Agent: Navigates a two-location environment, cleaning dirt.
• 8-Puzzle Problem: Tiles must be arranged into a specific goal configuration.
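The problem components listed above can be made concrete for the vacuum world. This is a minimal Python sketch, assuming a state of the form (location, set of dirty squares); all names here are illustrative, not part of the original notes.

```python
# Hypothetical encoding of the problem components for the two-location
# vacuum world: initial state, actions, goal test, and step cost.
VACUUM_PROBLEM = {
    "initial_state": ("A", frozenset({"A", "B"})),   # start at A, both dirty
    "actions": lambda s: ["Left", "Right", "Suck"],  # same actions everywhere
    "goal_test": lambda s: not s[1],                 # no dirty squares remain
    "step_cost": lambda s, a: 1,                     # uniform path cost
}

def transition(state, action):
    """Transition model: the state that results from doing `action` in `state`."""
    loc, dirt = state
    if action == "Suck":
        return (loc, dirt - {loc})      # remove dirt at the current square
    return ("A", dirt) if action == "Left" else ("B", dirt)
```

With this encoding, a solution from the initial state is the action sequence Suck, Right, Suck, with path cost 3.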

2. Search Strategies
Uninformed Search (Blind Search)

1. Breadth-First Search (BFS):
   o Expands the shallowest unvisited node.
   o Completeness: ✅ Guaranteed to find a solution (if the branching factor b is finite).
   o Optimality: ✅ If step costs are uniform.
   o Time Complexity: O(b^d) (exponential in the solution depth d).
   o Space Complexity: O(b^d) (stores all nodes at each level).
2. Depth-First Search (DFS):
   o Expands the deepest unvisited node.
   o Completeness: ❌ (for infinite-depth spaces).
   o Optimality: ❌ (does not guarantee the shortest path).
   o Time Complexity: O(b^m) (m = maximum depth).
   o Space Complexity: O(bm) (stores only the path to the current node).
3. Uniform Cost Search (UCS):
   o Expands the node with the lowest path cost g(n).
   o Optimality: ✅ Guarantees the lowest-cost solution.
   o Time Complexity: O(b^(1 + ⌊C*/ε⌋)), where C* is the optimal solution cost and ε is the minimum step cost.
   o Space Complexity: O(b^(1 + ⌊C*/ε⌋)).
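As a concrete instance of uninformed search, BFS over an explicit state-space graph can be sketched as follows. The toy graph and function names are assumptions for illustration only.

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search: expand the shallowest unvisited node first.
    Returns the shortest path (as a list of states), or None."""
    frontier = deque([[start]])         # FIFO queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in visited:      # avoid re-expanding states
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Toy state-space graph: nodes are states, edges are actions (assumed data).
GRAPH = {"S": ["A", "B"], "A": ["G"], "B": ["A", "G"], "G": []}
```

The FIFO frontier is what makes this breadth-first: all depth-k states are expanded before any depth-(k+1) state, which is why the path found is shallowest.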

Informed (Heuristic) Search

4. Greedy Best-First Search:
   o Expands the node that appears closest to the goal according to heuristic h(n).
   o Completeness: ❌ May get stuck in loops.
   o Optimality: ❌ Does not guarantee the shortest path.
   o Time Complexity: O(b^m).
   o Space Complexity: O(b^m).
5. A* Search (Best-First with Path Cost):
   o Uses f(n) = g(n) + h(n), where:
      - g(n) = cost from the start to node n.
      - h(n) = estimated cost from n to the goal.
   o Optimality: ✅ If the heuristic is admissible and consistent.
   o Time Complexity: O(b^d).
   o Space Complexity: O(b^d).
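The f(n) = g(n) + h(n) bookkeeping above can be sketched with a priority queue. This is an illustrative Python sketch; the weighted toy graph and heuristic table are assumptions, chosen so the heuristic is admissible.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: expand the node with the smallest f(n) = g(n) + h(n).
    `neighbors(s)` yields (successor, step_cost) pairs; `h` must be admissible."""
    frontier = [(h(start), 0, start, [start])]      # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nxt, cost in neighbors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):  # found a cheaper route
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Assumed example data: weighted edges and an admissible heuristic.
EDGES = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
H = {"S": 2, "A": 4, "B": 1, "G": 0}
```

Note how A* here prefers S-B-G (cost 5) over the greedy-looking S-A step, because g(n) keeps the accumulated path cost in the picture.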

3. Local Search & Optimization

• Used when the path is irrelevant and only the goal state matters.
• Examples: n-Queens, VLSI layout, scheduling problems.
• Techniques:
   1. Hill Climbing:
      - Moves toward the neighbor with the highest heuristic value.
      - Issues: local maxima, ridges, plateaus.
      - Variations:
         - Steepest-Ascent Hill Climbing (selects the best neighbor).
         - Stochastic Hill Climbing (selects randomly among uphill moves).
   2. Simulated Annealing:
      - Occasionally accepts worse moves to escape local optima.
   3. Genetic Algorithms:
      - Use selection, crossover, and mutation for optimization.
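Steepest-ascent hill climbing, the first technique above, can be sketched in a few lines of Python. The neighbor and value functions below are illustrative assumptions.

```python
def hill_climb(state, neighbors, value):
    """Steepest-ascent hill climbing: repeatedly move to the best neighbor;
    stop at a peak, which may only be a local maximum."""
    while True:
        best = max(neighbors(state), key=value, default=state)
        if value(best) <= value(state):     # no uphill move left: at a peak
            return state
        state = best
```

On a single-peaked objective like f(x) = -(x - 3)^2 over the integers, the climb reaches the global maximum at x = 3; on an objective with several peaks it would stop at whichever local maximum it reaches first, which is exactly the failure mode listed above.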

4. Adversarial Search (Game Playing)

• Used in competitive environments (e.g., Chess, Tic-Tac-Toe).
• Game Representation:
   o Initial state
   o Actions
   o Transition model
   o Terminal test
   o Utility function (win/loss/draw)

Search Strategies

1. Minimax Algorithm:
   o For two-player zero-sum games.
   o Computes the minimax value at each node.
   o Time Complexity: O(b^m) (DFS-based).
   o Space Complexity: O(bm).
2. Alpha-Beta Pruning:
   o Prunes nodes that cannot affect the final decision.
   o Time Complexity: O(b^(m/2)) with perfect move ordering (better than plain minimax).
   o Key Parameters:
      - α (Alpha): Best value found so far for MAX.
      - β (Beta): Best value found so far for MIN.
3. Heuristic Alpha-Beta Search:
   o Uses move ordering to improve pruning.
   o Transposition Tables: Avoid re-evaluating repeated states.
   o Killer Heuristic: Prioritizes historically strong moves.

Conclusion

This document covers AI problem-solving techniques, from uninformed and informed search to optimization and adversarial search. Techniques like A*, hill climbing, minimax, and alpha-beta pruning help AI solve complex tasks efficiently.
