AI Notes
These notes provide an introduction to Artificial Intelligence (AI), covering the main
approaches to AI, its foundations, agent-based concepts, and task environments.
1. Approaches to AI
2. Foundations of AI
AI is grounded in several disciplines, including mathematics, philosophy, psychology, neuroscience, economics, and computer science.
3. Rational Agents
4. Task Environments
Fully Observable vs. Partially Observable: The agent's sensors may or may not give it
access to the complete state of the environment.
Single-Agent vs. Multi-Agent: The agent operates alone or interacts with other
agents (cooperatively or competitively).
Deterministic vs. Stochastic: The next state is fully determined by the current state
and action, or involves randomness.
Episodic vs. Sequential: Each decision is independent, or current actions affect
future ones.
Static vs. Dynamic: The environment stays fixed or changes while the agent
deliberates.
Discrete vs. Continuous: States, time, percepts, and actions are countable or
continuous.
Known vs. Unknown: Whether the agent knows the environment's rules (the
outcomes of its actions).
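The dimensions above can be applied to concrete tasks. A minimal sketch, using classifications for a crossword puzzle and taxi driving that follow the standard textbook examples (the dictionary layout and helper function are assumptions for illustration):

```python
# Classifying two example tasks along the task-environment dimensions.
environments = {
    "crossword": {
        "observable": "fully", "agents": "single", "deterministic": True,
        "static": True, "discrete": True,
    },
    "taxi_driving": {
        "observable": "partially", "agents": "multi", "deterministic": False,
        "static": False, "discrete": False,
    },
}

def hard_dimensions(env):
    """Return the dimensions that make an environment hard to act in."""
    hard = []
    if env["observable"] == "partially": hard.append("partially observable")
    if env["agents"] == "multi":         hard.append("multi-agent")
    if not env["deterministic"]:         hard.append("stochastic")
    if not env["static"]:                hard.append("dynamic")
    if not env["discrete"]:              hard.append("continuous")
    return hard

print(hard_dimensions(environments["crossword"]))     # []
print(hard_dimensions(environments["taxi_driving"]))
```

Taxi driving lands on the hard end of every dimension, which is why it is the standard example of a challenging task environment.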
5. AI Agent Structures
Simple Reflex Agents: React to current percepts (e.g., vacuum cleaner robot).
Model-Based Reflex Agents: Maintain an internal model of the world.
Goal-Based Agents: Take actions to achieve defined goals.
Utility-Based Agents: Maximize performance using utility functions.
Learning Agents: Improve over time using a learning element, critic, and problem
generator.
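The simplest of these structures can be shown directly. A minimal sketch of a simple reflex agent for the two-square vacuum world mentioned above (the square names "A"/"B" and the percept format are assumptions):

```python
# Simple reflex agent: maps the current percept directly to an action,
# with no internal state or model of the world.
def reflex_vacuum_agent(percept):
    location, status = percept        # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"                 # clean the current square
    if location == "A":
        return "Right"                # move to the other square
    return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("B", "Clean")))  # Left
```

Because the agent ignores percept history, it works only when the correct action can always be chosen from the current percept alone, which is exactly the limitation that model-based agents address.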
Conclusion
1. Problem-Solving Agents
Examples
2. Search Strategies
Uninformed Search (Blind Search)
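Uninformed strategies expand nodes using only the problem definition, with no domain knowledge. A minimal sketch of one such strategy, breadth-first search, over a small made-up adjacency-list graph:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: returns a shallowest path from start to goal."""
    frontier = deque([[start]])       # FIFO queue of partial paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in explored:
                explored.add(neighbor)
                frontier.append(path + [neighbor])
    return None                       # no path exists

graph = {"S": ["A", "B"], "A": ["G"], "B": ["A"]}
print(bfs(graph, "S", "G"))  # ['S', 'A', 'G']
```

The FIFO frontier is what makes this "blind" search complete and optimal for unit step costs: it always finishes one depth level before starting the next.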
Adversarial Search Strategies
1. Minimax Algorithm:
   - Two-player zero-sum games.
   - Computes the minimax value at each node.
   - Time complexity: O(b^m) (DFS-based).
   - Space complexity: O(bm).
2. Alpha-Beta Pruning:
   - Prunes branches that cannot affect the final decision.
   - Time complexity: O(b^(m/2)) with perfect move ordering (better than plain minimax).
   - Key parameters:
     α (alpha): best value found so far for MAX.
     β (beta): best value found so far for MIN.
3. Heuristic Alpha-Beta Search:
   - Uses move ordering to improve pruning.
   - Transposition tables: avoid re-evaluating repeated states.
   - Killer heuristic: prioritizes moves that have historically caused cutoffs.
Conclusion