Search (uninformed + informed)
• Maze Pathfinding (Grid World): A tile-based maze game where an agent must reach a goal through a grid of obstacles. This demonstrates uninformed search (e.g. BFS/DFS) vs. informed search (A*, Dijkstra, Greedy Best-First) in action. The agent explores states (grid cells) until it finds the goal. AI Techniques: BFS/DFS for breadth-first/depth-first exploration; A* search with an admissible heuristic (e.g. Manhattan or Euclidean distance) for informed pathfinding; optionally Dijkstra's algorithm for weighted costs [1] [2]. Math/Proofs: Analyze time/space complexity (e.g. O(b^d) for breadth-first search on a grid) and the optimality of A*: if the heuristic is admissible and consistent, A* is guaranteed to find the shortest path [2]. One can sketch a proof by induction on f(n) = g(n) + h(n) or cite standard results. References: Research on pathfinding in games shows A* is "provably optimal" for shortest paths in game navigation [2]. Also cite algorithm textbooks or papers (e.g. the original A* paper by Hart, Nilsson & Raphael, 1968) for proofs of admissibility and optimality. Tutorials/Code: The Pathfinding Visualizer (Python/Pygame) is a good open-source example that implements BFS, DFS, Dijkstra, and A* in a 5×5 grid [1]; one can adapt it to larger grids or different games. Many blogs walk through implementing A* in Pygame or even in Unity. Teaching Tips: On the blog, illustrate the search frontier expanding in real time (e.g. screenshots from the visualizer) and compare the paths found by BFS vs. A*. Emphasize how the admissible heuristic "hurries" A* toward the goal, and show code for computing the Manhattan distance (see the sketch below). Include pseudocode of A* and a sketch of its optimality proof. Link to academic sources for the properties of A* and BFS (e.g. completeness/optimality statements) to strengthen the explanation.
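To ground the post, here is a minimal A* sketch for a 4-connected grid with the Manhattan heuristic. It is a hand-rolled illustration, not code from the visualizer repo; the 0/1 grid encoding and the names astar/manhattan are choices made here:

```python
import heapq

def manhattan(a, b):
    # Admissible heuristic for 4-connected grids: never overestimates the true cost
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def astar(grid, start, goal):
    """A* over a grid of 0 (free) / 1 (wall) cells; returns a shortest path or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(manhattan(start, goal), 0, start)]   # (f, g, cell)
    came_from = {start: None}
    best_g = {start: 0}
    while frontier:
        f, g, cell = heapq.heappop(frontier)
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    came_from[nxt] = cell
                    heapq.heappush(frontier, (ng + manhattan(nxt, goal), ng, nxt))
    return None

grid = [[0, 0, 0, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
print(astar(grid, (0, 0), (4, 4)))
```

Replacing the heuristic with a constant 0 turns the same loop into Dijkstra/uniform-cost search, which is a nice way to show the effect of an informed heuristic in the post.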
• Sliding Puzzle (8-Puzzle) Solver: The classic 8-puzzle (3×3 sliding tiles) treated as a game where the agent (solver) rearranges tiles to reach the goal state. This exemplifies search in a large state space. AI Technique: A* search using heuristics such as Manhattan distance (or pattern-database heuristics) is common; one can also implement BFS (uninformed) as a comparison, but A* with a good heuristic vastly outperforms brute force. Math/Proofs: Discuss the state-space size (9!/2 = 181,440 reachable states), the admissibility of the Manhattan heuristic, and the proof that A* with an admissible heuristic always finds an optimal solution [3]. Outline how f(n) = g(n) + h(n) guides the search, and mention that without a good h the search can be exponential. References: A good reference is an educational report or slides on 8-puzzle A* (e.g. the IIT Kanpur notes, which show that admissible heuristics guarantee an optimal path [3]). The GitHub 8-Puzzle A* Solver shows a Python implementation using Manhattan distance [4]. For theory, cite the heuristic-search sections of AI textbooks (Russell & Norvig) or articles on informed search. Tutorials/Code: The GitHub repo above provides code and explains two heuristics (Manhattan distance and one other). There are many Python tutorials (including videos) on implementing 8-puzzle A*. Use an interactive demo to show how A* explores far fewer nodes than BFS. Teaching Tips: On the blog, present the puzzle as a "game" the reader can play, then show the AI solving it step by step. Explain the heuristic by example: show the Manhattan distances of individual tiles (a small sketch follows below). Discuss the proof outline that if h is admissible, A* cannot miss the optimal path. Include links to research/slides for deeper reading on heuristic search in puzzles.
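The heuristic itself is only a few lines. Here is a minimal sketch (the tuple encoding of states and the function name are assumptions made for illustration, not taken from the cited solver):

```python
def manhattan_8puzzle(state, goal):
    """Sum of Manhattan distances of each tile (blank excluded) from its goal position.
    States are length-9 tuples read row by row, with 0 as the blank."""
    dist = 0
    for tile in range(1, 9):
        i, j = state.index(tile), goal.index(tile)
        dist += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return dist

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)
print(manhattan_8puzzle(start, goal))
```

The same function can be plugged straight into the A* loop from the pathfinding example as h(n), with puzzle states in place of grid cells.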
Constraint Satisfaction Problems (CSP)
• Sudoku Solver: Treat Sudoku as a CSP game: each cell is a variable with domain {1–9} and constraints forbidding repeats in rows, columns, and boxes. AI Technique: Backtracking search with constraint propagation (e.g. AC-3 arc consistency) and heuristics (minimum remaining values, least-constraining value). For example, use AC-3 to prune domains and then depth-first search over the remaining choices. Math/Principles: Explain the CSP definition (variables, domains, constraints) and how forward checking or arc consistency reduces branching. Mention that a blank 9×9 grid has 9^81 naive assignments, but the constraints shrink this massively. One can outline the correctness of AC-3 or argue why backtracking with MAC (Maintaining Arc Consistency) is complete. References: Stress that Sudoku is a standard CSP example; Peter Norvig's essay explicitly solves it with "constraint propagation and search" [5]. Cite that work or standard AI textbooks on constraint solving. For formal AC-3, one could cite a constraint-programming paper (though a general description from an AI text or Wikipedia suffices). Tutorials/Code: The GitHub Python-AC3-Backtracking CSP Sudoku Solver is open source (MIT licensed) and implements Sudoku solving with backtracking and AC-3 [6]. Many tutorials (articles, YouTube) show how to code Sudoku in Python using recursion and constraint checks. Teaching Tips: Write about this project by first explaining the Sudoku rules, then casting it as variables/domains/constraints. Show code snippets for backtracking and for the AC-3 algorithm, explaining the queue of arcs (see the sketch below). Use the Norvig essay [5] to motivate why propagation (peer elimination) plus search solves the puzzle easily. Emphasize the CSP viewpoint (use diagrams of peers) and cite any relevant academic survey of CSPs in games/puzzles.
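A compact sketch of AC-3, specialized to the pairwise "not-equal" constraints of Sudoku (the cited repo's version is more general; the data-structure choices here are illustrative only):

```python
from collections import deque

def ac3(domains, neighbors):
    """AC-3 for 'all-different' style binary constraints: X != Y for every arc (X, Y).
    domains: dict var -> set of values; neighbors: dict var -> set of constrained vars."""
    queue = deque((x, y) for x in domains for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        # Revise: drop values of x that have no consistent support left in y's domain
        removed = {vx for vx in domains[x] if all(vx == vy for vy in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False           # a domain emptied: inconsistency detected
            for z in neighbors[x] - {y}:
                queue.append((z, x))   # re-check arcs pointing at x
    return True

# Tiny demo: three mutually-unequal variables, one already fixed to 1
domains = {"A": {1}, "B": {1, 2}, "C": {1, 2, 3}}
neighbors = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}}
print(ac3(domains, neighbors), domains)   # True {'A': {1}, 'B': {2}, 'C': {3}}
```

For Sudoku, `domains` would hold the candidate digits of each of the 81 cells and `neighbors` would map each cell to its 20 peers (same row, column, and box); backtracking search then fills in whatever AC-3 leaves undecided.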
• N-Queens Puzzle (CSP/Backtracking): The N×N queens placement puzzle is a CSP: place N queens on an N×N chessboard so that none attack each other. AI Technique: Backtracking search with CSP heuristics (e.g. place one queen per row so row conflicts are impossible by construction, and use forward checking to avoid column/diagonal conflicts). Math/Proofs: You can analyze how symmetry breaking (fixing the first queen) reduces the search. No deep proofs are needed beyond reasoning about pruning. You can mention classic facts (solution counts, etc.) or the reduction to graph coloring. References: This is a classic example of CSP/backtracking. Cite a code example such as the N--Queen-Problem repository, which explicitly describes the "N-Queen Puzzle using Backtracking CSP" [7]. For theory, any AI textbook section on CSP backtracking covers it. Optionally reference Donald Knuth's dancing links as an advanced topic. Tutorials/Code: The GitHub repo above provides Python code; GeeksforGeeks and tutorial blogs have step-by-step N-Queens solvers (usually recursive). Teaching Tips: On the blog, present N-Queens visually (board diagrams) and show how backtracking places queens row by row (a minimal solver appears below). Explain the CSP terms: variables (rows), domains (columns), constraints (no two queens share a column or diagonal). Provide pseudocode for backtracking with pruning (checking a partial assignment). Suggest readers experiment by coding small N. Mention symmetry arguments (mirror images pair up solutions) to hint at deeper structure.
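A minimal backtracking solver, as a sketch (variable and function names are illustrative):

```python
def solve_n_queens(n):
    """Backtracking with one queen per row; cols[r] is the column of the queen in row r."""
    solutions, cols = [], []

    def safe(col):
        row = len(cols)
        return all(c != col and abs(c - col) != row - r for r, c in enumerate(cols))

    def place(row):
        if row == n:
            solutions.append(tuple(cols))
            return
        for col in range(n):
            if safe(col):           # prune: skip columns/diagonals already attacked
                cols.append(col)
                place(row + 1)
                cols.pop()

    place(0)
    return solutions

print(len(solve_n_queens(6)))   # 4 solutions for N = 6
```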
Logical Reasoning and Inference
• Wumpus World Game: A simple grid-based game from AI textbooks where an agent must find gold while avoiding pits and a "Wumpus". It exemplifies logical inference (propositional or first-order logic): the agent perceives cues (breeze, stench) and must infer which moves are safe. AI Technique: Knowledge representation plus rule-based inference. Formulate percepts as logical statements (e.g. ¬P_{1,2} ∧ S_{1,1}) and use resolution or propositional inference to deduce where the Wumpus or pits might be. Math/Principles: The key ideas are logical consistency and resolution. You can outline how clauses are derived from the rules (e.g. "Stench(1,1) → the Wumpus is in one of the adjacent squares") and then use propositional entailment. No heavy math beyond Boolean-logic proofs. References: The GitHub wumpus-world example solves Wumpus World using matrices and first-order logic [8]; its description reads "Wumpus World example solved in python using ... First Order Logic" [8]. Classic readings include Russell & Norvig (the original Wumpus example) or Germanenko's slides. Tutorials/Code: Besides the repo above, there are many tutorials (textbook chapters, CS188 projects) on Wumpus World, and a YouTube series walks through coding "Hunt the Wumpus" in Python. Teaching Tips: On the blog, walk through the percepts and rules of Wumpus World. Show how a knowledge base is built (clauses for every cell) and how inference is performed (e.g. resolution or forward chaining). Emphasize the idea of a belief state (possible worlds); a tiny model-checking sketch follows below. Use small grid examples to illustrate deducing the safe move. Tie it to rule-based AI: these if-then rules form the "AI" controlling the agent. Cite the repo [8] and any classic AI text for the value of knowledge-based agents.
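A tiny "possible worlds" sketch of the inference idea, done by brute-force model checking on a 2×2 grid with pits and breezes only (the cited repo uses a different, first-order formulation; everything here is an illustrative simplification):

```python
from itertools import product

# The agent starts at (0,0), which is known to be pit-free.
CELLS = [(0, 0), (0, 1), (1, 0), (1, 1)]

def neighbours(cell):
    r, c = cell
    return [n for n in CELLS if abs(n[0] - r) + abs(n[1] - c) == 1]

def consistent(pits, percepts):
    """A world (set of pit cells) is consistent if every observed breeze/no-breeze
    matches the rule: breeze at a cell iff some neighbour holds a pit."""
    if (0, 0) in pits:
        return False
    return all(any(n in pits for n in neighbours(cell)) == breeze
               for cell, breeze in percepts.items())

percepts = {(0, 0): False}   # the agent feels no breeze at the start square
worlds = [set(c for c, has_pit in zip(CELLS, combo) if has_pit)
          for combo in product([False, True], repeat=len(CELLS))]
worlds = [w for w in worlds if consistent(w, percepts)]

# A cell is provably safe if no consistent world puts a pit there (entailment by model checking)
safe = [c for c in CELLS if all(c not in w for w in worlds)]
print(safe)   # (0,0), (0,1), (1,0): both neighbours of the start are provably safe
```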
• Einstein's Zebra Puzzle (Logic Deduction Game): A logic puzzle (five houses with attributes) where the solver must deduce facts from clues, treated as a game in which the agent fills in a solution. AI Technique: Constraint logic or rule-based inference. You can solve it with a constraint solver (as in the cited example) or by manual logical deduction. Math/Principles: The puzzle is essentially a CSP or propositional-logic problem. Discuss how adding "all-different" constraints (each attribute value is unique) prunes the search. No heavy math proof is needed, but note that such puzzles often require backtracking when pure inference stalls. References: A great example is the blog post "Solving Einstein's Puzzle with Python-Constraint" [9], which explicitly frames the Zebra puzzle for a constraint library and introduces it as a logic puzzle built from a list of clues [9]. You could also mention that such puzzles are NP-complete in general (Latin-square-type problems). Tutorials/Code: The linked post shows Python code using the python-constraint library to solve it [10]. Other tutorials use backtracking or even Prolog. Teaching Tips: For the blog, present the puzzle narrative (list the clues). Explain how to translate clues into constraints (use examples from the code, e.g. "the Englishman lives in the red house" maps to a lambda constraint, as sketched below). Highlight the use of an AllDifferent constraint. Show the python-constraint code and its result as a demo. Use this to teach how logic puzzles can be solved by systematic search plus constraints. Encourage readers to try more logic puzzles the same way. Cite the project [9] and possibly point to a general CSP discussion for such puzzles.
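A minimal fragment in the spirit of the cited post, assuming the python-constraint package is installed; only two of the fifteen clues are encoded, just to show the translation pattern:

```python
from constraint import Problem, AllDifferentConstraint

# Variables are attribute values; each is assigned a house number 1-5.
problem = Problem()
nationalities = ["English", "Spaniard", "Ukrainian", "Norwegian", "Japanese"]
colours = ["red", "green", "ivory", "yellow", "blue"]
problem.addVariables(nationalities, list(range(1, 6)))
problem.addVariables(colours, list(range(1, 6)))
problem.addConstraint(AllDifferentConstraint(), nationalities)   # one nationality per house
problem.addConstraint(AllDifferentConstraint(), colours)         # one colour per house

# Clue "The Englishman lives in the red house" becomes an equality constraint
problem.addConstraint(lambda e, r: e == r, ("English", "red"))
# Clue "The Norwegian lives in the first house" becomes a unary constraint
problem.addConstraint(lambda n: n == 1, ("Norwegian",))

solutions = problem.getSolutions()
print(len(solutions))   # still many solutions; adding all 15 clues narrows it to one
```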
Planning
• Blocks World (Goal-Stack/STRIPS Planning): A classic AI planning domain with blocks A, B, C on a table. The "game" is to rearrange an initial stack configuration into a goal configuration. AI Technique: STRIPS-style planning or goal-stack planning. Represent actions such as Move(block, from, to) with preconditions and effects (add/delete lists), then plan by working backward from the goal state. Math/Principles: Outline how STRIPS uses logical preconditions and effects. You can show the goal-stack algorithm: push goal predicates onto a stack and resolve them one by one by finding an action that achieves each. Mention that the algorithm terminates, or backtracks if conflicts arise. References: A useful resource is the STRIPS planner implementation [11], which describes backward planning ("works backwards by starting at the goal state…stacking subgoals") [11]. The goal-stack Medium article [12] explains goal-stack planning conceptually; cite its explanation that "Goal Stack Planning…works backwards from the goal state" [12]. Tutorials/Code: The GitHub STRIPS implementation contains example domains (including simple blocks) [11]. The Medium post has Python code for goal-stack planning. You can also use PDDL tools or discuss the Stanford STRIPS history. Teaching Tips: On the blog, start by illustrating an example initial/goal pair (with diagrams of block stacks). Describe the operators (pick up, put down); a minimal action-representation sketch follows below. Walk through the goal-stack planner steps (with a stack diagram): how it picks a subgoal (e.g. On(C,A)), finds an action to achieve it, and then updates the subgoals. Compare this to STRIPS backward chaining (the repo's README [11] summarizes the idea). Emphasize the difference from plain state-space search: planning reasons about actions and their conditions. Provide pseudocode (as given in the README [11]). Point students to the repo to experiment (it includes monkey/banana and blocks examples).
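A sketch of how STRIPS-style operators can be represented in Python (a hand-rolled illustration, not the cited repo's format, which defines its domains in text files):

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A STRIPS-style operator: applicable when all preconditions hold;
    applying it deletes del_list facts and adds add_list facts."""
    name: str
    preconditions: frozenset
    add_list: frozenset
    del_list: frozenset

    def applicable(self, state):
        return self.preconditions <= state

    def apply(self, state):
        return (state - self.del_list) | self.add_list

# Move C from the table onto A (facts are plain strings for readability)
move_c_onto_a = Action(
    name="Move(C, Table, A)",
    preconditions=frozenset({"Clear(C)", "Clear(A)", "On(C,Table)"}),
    add_list=frozenset({"On(C,A)"}),
    del_list=frozenset({"On(C,Table)", "Clear(A)"}),
)

state = frozenset({"On(A,Table)", "On(B,Table)", "On(C,Table)",
                   "Clear(A)", "Clear(B)", "Clear(C)"})
if move_c_onto_a.applicable(state):
    state = move_c_onto_a.apply(state)
print(sorted(state))
```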
• Monkey-and-Banana Puzzle (STRIPS Planning): Another planning example: a monkey in a room must get bananas by moving around and using a box. Represent it as a planning problem: state predicates (MonkeyAt, BoxAt, BananaAt, Level) and actions (Move, ClimbUp, ClimbDown, MoveBox, TakeBananas) with preconditions and effects. AI Technique: Use a STRIPS-style planner to find a sequence of actions. As with Blocks World, backward search (goal stack) can be used. Math/Principles: Discuss the planning graph or explain the STRIPS solver's backward-chaining approach. No deep proofs are needed; note that planning is PSPACE-complete in general, but small domains are easily tractable. References: The STRIPS GitHub includes this domain as "monkeys.txt" [13] (initial state, goal, and actions), showing the classic monkey/banana formalization. For theory, citing the same STRIPS repo explanation [11] suffices. Tutorials/Code: Students can run the STRIPS solver on the monkeys domain, or write a simpler forward-search planner (a small sketch follows below). The blog can link to the STRIPS code and optionally a Python simulation. Teaching Tips: Present the puzzle story to engage readers, then show its STRIPS formulation (as in [13]). Explain each action's preconditions and effects. Show how the planner works step by step (move, push the box, climb, take). Stress the planning concepts: backward goal expansion and precondition satisfaction. Write the plan as a blog narrative ("the monkey goes to the box, pushes it under the bananas, climbs up, grabs them"). Use it to mention partial-order vs. linear planning (out-of-order actions don't arise in this small example, but the concept is worth noting).
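A sketch of the simpler forward-search alternative mentioned above, on a stripped-down monkey/bananas state space (the places, predicates, and action names are simplifications chosen here, not the repo's encoding):

```python
from collections import deque

# A state is (monkey_at, box_at, monkey_on_box, has_bananas); the bananas hang over place "c".
START = ("a", "b", False, False)
PLACES = ("a", "b", "c")

def successors(state):
    monkey, box, on_box, has = state
    if not on_box:
        for p in PLACES:                        # walk anywhere on the floor
            if p != monkey:
                yield f"Move({monkey},{p})", (p, box, False, has)
        if monkey == box:
            for p in PLACES:                    # push the box while standing next to it
                if p != box:
                    yield f"PushBox({box},{p})", (p, p, False, has)
            yield "ClimbUp", (monkey, box, True, has)
    else:
        yield "ClimbDown", (monkey, box, False, has)
        if box == "c" and not has:
            yield "TakeBananas", (monkey, box, True, True)

def plan(start):
    """Breadth-first forward search: returns a shortest action sequence reaching the goal."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, actions = frontier.popleft()
        if state[3]:                            # goal: the monkey has the bananas
            return actions
        for name, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [name]))

print(plan(START))   # ['Move(a,b)', 'PushBox(b,c)', 'ClimbUp', 'TakeBananas']
```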
Probabilistic Reasoning
• Monty Hall Game (Bayesian Inference): Model the Monty Hall game (three doors, one car) as a Bayes net to illustrate inference. AI Technique: Bayesian network inference. Create random variables for the contestant's pick, the prize door, and the door the host opens, and compute P(prize | host's action). Math/Principles: Present Bayes' theorem or the network factorization. Show that P(switch wins) = 2/3, and derive it via conditional probabilities or belief propagation. References: The pgmpy example "Monty Hall Problem" builds a Bayes net for this scenario [14]; you can cite that it is solved "using Bayes' Theorem" [14]. Also cite basic probability texts or Judea Pearl's work on causal inference for context. Tutorials/Code: The pgmpy documentation example shows full Python code setting up the network and querying for posteriors [14]. Numerous explanations exist (videos, blog posts) on coding Monty Hall in Python or even JAGS/BUGS. Teaching Tips: On the blog, tell the Monty Hall story to hook readers. Then show the Bayes-net diagram (contestant → host ← prize) and the CPDs from the example [15]. Walk through the probabilities: illustrate why staying wins with probability 1/3 and switching with 2/3 (a hand-rolled calculation is sketched below). Provide pseudocode for variable elimination (or use a library). Emphasize how the network encodes "the host knows where the car is" via its conditional probability table. Cite the example [14] and perhaps a scholarly article on Bayesian reasoning in games for depth.
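A hand-rolled version of the calculation, enumerating the Bayes-net factorization directly rather than using pgmpy (whose API the cited example demonstrates); the door numbering and variable names are choices made here:

```python
DOORS = (0, 1, 2)

def p_host(host, pick, prize):
    """CPD for the host: he never opens the contestant's pick or the prize door,
    and chooses uniformly among the doors that remain."""
    allowed = [d for d in DOORS if d not in (pick, prize)]
    return 1 / len(allowed) if host in allowed else 0.0

# Joint factorisation P(pick, prize, host) = P(pick) * P(prize) * P(host | pick, prize).
# Condition on the evidence: the contestant picked door 0 and the host opened door 2.
posterior = {prize: (1 / 3) * (1 / 3) * p_host(host=2, pick=0, prize=prize) for prize in DOORS}
total = sum(posterior.values())
posterior = {prize: p / total for prize, p in posterior.items()}
print(posterior)   # {0: 0.333..., 1: 0.666..., 2: 0.0} -> switching to door 1 wins 2/3 of the time
```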
• Pac-Man "Ghostbusters" (Hidden Markov Model): A Pac-Man variant where the ghosts are invisible and the agent receives noisy distance observations. This demonstrates hidden Markov models and filtering. AI Technique: HMM inference (the forward algorithm) and particle filters for tracking. Math/Principles: Outline the HMM: the hidden state is the ghost's location, and the observations are noisy sensor readings. Derive the forward (filtering) update equations and briefly discuss how the belief sharpens over time. Optionally note that exact joint inference scales exponentially in the number of ghosts (just mention the complexity, no need to prove it). References: The Berkeley CS188 Ghostbusters project explicitly uses HMMs: "Probabilistic inference in a hidden Markov model tracks hidden ghosts" [16]. Cite that description. Sutton & Barto (the RL textbook) or Murphy's textbook (which covers HMMs) are general references. Tutorials/Code: CS188 provides code for ghost tracking, and omardroubi/PacmanAI summarizes it: "Project 4: Ghostbusters – exact inference (forward) and particle filters" [16]. One can adapt it or use a Pygame Pac-Man. For a simpler demo, use a 1-D localization example (sketched below). Teaching Tips: Explain the HMM setup in Pac-Man: hidden positions and percepts. Walk through the forward update (multiply by the observation likelihood, sum over transitions). If the code isn't too advanced, embed pseudocode for the filter. Show visualizations of the belief distribution over time. Emphasize the Bayes-rule math behind filtering. Link to [16] to show this classic use, and mention particle filtering to illustrate approximate inference. Relate it to games by calling it "ghost-tracking logic" to engage readers.
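A minimal forward-filtering sketch in a 1-D corridor, as suggested above; the transition and sensor models are toy assumptions chosen purely for illustration:

```python
# One step of HMM filtering (the forward algorithm) on a 1-D corridor of 5 cells.
# Hidden state: the ghost's cell. Transition: stay or move one step, uniformly among legal moves.
# Observation: a noisy "distance" reading from a sensor fixed at cell 0.
N = 5

def transition(j, i):
    """P(next = j | current = i): uniform over {i-1, i, i+1} clipped to the corridor."""
    moves = [k for k in (i - 1, i, i + 1) if 0 <= k < N]
    return 1 / len(moves) if j in moves else 0.0

def likelihood(obs, i):
    """P(obs | ghost at i): peaked at the true distance, falling off one cell either side."""
    return {0: 0.6, 1: 0.2}.get(abs(obs - i), 0.0)

def forward_step(belief, obs):
    # Predict: push the belief through the transition model...
    predicted = [sum(transition(j, i) * belief[i] for i in range(N)) for j in range(N)]
    # ...then update: weight by the observation likelihood and renormalise (Bayes rule).
    updated = [likelihood(obs, j) * predicted[j] for j in range(N)]
    z = sum(updated)
    return [p / z for p in updated]

belief = [1 / N] * N                 # start with a uniform belief over cells
for obs in (3, 2, 2):                # a short sequence of noisy distance readings
    belief = forward_step(belief, obs)
    print([round(p, 3) for p in belief])
```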
Reinforcement Learning
• Q-Learning in a Grid Game: A simple game (e.g. a 5×5 grid with a goal/gem and hazards) where an agent learns a policy, for example "collect the gem while avoiding mines" [17]. AI Technique: Q-learning (model-free RL) with a Q-table. Optionally mention value iteration as the dynamic-programming counterpart when the rewards and dynamics are known. Math/Principles: Introduce the update rule Q(s,a) ← Q(s,a) + α [ r + γ max_{a'} Q(s',a') − Q(s,a) ], derived from the Bellman optimality equation. Heavy proofs aren't needed, but mention that tabular Q-learning converges to Q* given a suitably decaying learning rate and sufficient exploration. References: Use the Medium tutorial "Building a Q-Learning Grid Environment", which describes a 5×5 grid RL game [17], plus Sutton & Barto (the RL book) for formal foundations. Tutorials/Code: The grid_qlearning GitHub repo provides code to create grid environments and train an agent with Q-learning [18]. OpenAI Gym's FrozenLake or GridWorld examples also illustrate value iteration and Q-learning. Teaching Tips: For the blog, describe the environment and show an animation of the agent learning (as in [17], which visualizes it). Explain the state/action space, the reward signal, and the update rule. Provide Python code snippets (e.g. the main Q-update loop, sketched below). Discuss exploration vs. exploitation. Encourage readers to tweak parameters or grid layouts. Highlight the math: show how the Bellman optimality equation underpins Q-learning's updates. Link to [18] for code students can run. You might also suggest extending to larger games (e.g. Snake or simple 2-D platformers) after mastering the basics.
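A sketch of the tabular Q-learning loop on a small hand-made grid (the layout, rewards, and hyperparameters are illustrative choices, not the cited tutorial's):

```python
import random

# Tabular Q-learning on a 4x4 grid: start at (0,0), gem at (3,3), a mine at (1,2).
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
GOAL, MINE = (3, 3), (1, 2)

def step(state, action):
    r = min(max(state[0] + action[0], 0), 3)
    c = min(max(state[1] + action[1], 0), 3)
    nxt = (r, c)
    if nxt == GOAL:
        return nxt, 10.0, True
    if nxt == MINE:
        return nxt, -10.0, True
    return nxt, -1.0, False            # small step cost encourages short paths

Q = {((r, c), a): 0.0 for r in range(4) for c in range(4) for a in range(4)}

for episode in range(2000):
    state, done = (0, 0), False
    while not done:
        # epsilon-greedy exploration
        if random.random() < EPS:
            a = random.randrange(4)
        else:
            a = max(range(4), key=lambda x: Q[(state, x)])
        nxt, reward, done = step(state, ACTIONS[a])
        target = reward + (0 if done else GAMMA * max(Q[(nxt, x)] for x in range(4)))
        Q[(state, a)] += ALPHA * (target - Q[(state, a)])   # the Bellman-style Q update
        state = nxt

policy = {(r, c): max(range(4), key=lambda x: Q[((r, c), x)]) for r in range(4) for c in range(4)}
print(policy[(0, 0)])   # learned greedy action at the start cell
```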
• Value Iteration / Policy Iteration Example: Another RL-oriented project is solving a Markov decision process by dynamic programming, for example OpenAI Gym's FrozenLake or a custom grid. AI Technique: Value iteration or policy iteration to compute the optimal value function and policy. Math/Principles: Derive the Bellman optimality equation V*(s) = max_a [ R(s,a) + γ Σ_{s'} P(s'|s,a) V*(s') ] and the policy-improvement step. Sketch why the iterative updates converge to V* (the Bellman backup is a contraction when γ < 1). References: Cite Sutton & Barto or any RL textbook chapter on dynamic programming; Bertsekas's dynamic-programming texts are a more formal reference. Tutorials/Code: Many examples (e.g. the Udacity RL course) implement value iteration on grid worlds; link to a simple Python notebook, or adapt the sketch below. Teaching Tips: Write about how an agent can "plan" by computing values when the model is known. Show code for the iterative updates and plots of the value function over iterations. Compare to Q-learning (model-free vs. model-based). Emphasize the convergence argument (the contraction mapping and γ < 1).
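A value-iteration sketch on a toy deterministic MDP (five cells in a row, with a reward for reaching the last one); the model here is an assumption made purely for illustration:

```python
# Value iteration on a tiny deterministic 1-D MDP: cells 0..4, reward 1 for entering cell 4.
GAMMA, STATES, ACTIONS = 0.9, range(5), (-1, +1)

def model(s, a):
    """Returns (next_state, reward); cell 4 is absorbing."""
    if s == 4:
        return s, 0.0
    nxt = min(max(s + a, 0), 4)
    return nxt, (1.0 if nxt == 4 else 0.0)

V = [0.0] * 5
for sweep in range(100):
    delta = 0.0
    for s in STATES:
        # Bellman optimality backup: V(s) <- max_a [ R(s,a) + gamma * V(s') ]
        best = max(r + GAMMA * V[nxt] for nxt, r in (model(s, a) for a in ACTIONS))
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < 1e-6:                  # converged (gamma < 1 makes the backup a contraction)
        break

print([round(v, 3) for v in V])       # values grow toward the goal: ~[0.729, 0.81, 0.9, 1.0, 0.0]
```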
Game Tree Search (Minimax, Alpha-Beta)
• Tic-Tac-Toe Minimax: The standard 3×3 X/O game solved by minimax search. AI Technique: Game-tree search using minimax (zero-sum), optionally with alpha-beta pruning. Math/Principles: Explain the minimax recursion: terminal states score +1/−1/0, and the algorithm maximizes the minimal loss. One can prove by induction on the game tree that minimax selects an optimal move in a zero-sum game. For alpha-beta, explain the pruning conditions and give the correctness argument that pruning does not change the result. References: The GitHub tic-tac-toe-minimax repo explains minimax and even gives pseudocode [19]; it notes that these are zero-sum games [20], and you can cite that definition. For theory, cite classic AI textbooks (e.g. Russell & Norvig's chapter on adversarial search) for the minimax argument. Tutorials/Code: Many simple implementations exist in Python; the repo above provides a working game and explanation, and DataCamp's "Unbeatable Tic-Tac-Toe AI" guides readers through coding minimax step by step. Teaching Tips: On the blog, show the game-tree diagram for a few plies and illustrate how minimax backs values up the tree. Include the pseudocode from [19] or your own, and provide Python code for a recursive minimax function (a sketch follows below). Demonstrate how alpha-beta greatly reduces the number of nodes visited (perhaps count them). Suggest interactive examples, e.g. letting the reader play against the minimax AI. Always state the zero-sum assumption. Reference [19] for further detail on the algorithm structure.
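A sketch of the recursive minimax function for Tic-Tac-Toe (the board encoding and names are choices made here, not the cited repo's):

```python
def winner(board):
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6), (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Returns (score, move) from X's point of view: +1 win, -1 loss, 0 draw."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None                      # draw
    best = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "
        if best is None or (player == "X" and score > best[0]) or (player == "O" and score < best[0]):
            best = (score, m)
    return best

board = ["X", "O", "X",
         " ", "O", " ",
         " ", " ", " "]
print(minimax(board, "X"))   # (score, best square) for X to move in this position
```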
• Connect Four with Alpha-Beta: A more complex two-player game. AI Technique: Minimax with alpha-beta pruning to handle a larger search depth, typically with an evaluation function at the depth cutoff. Math/Principles: Discuss that without pruning, minimax on Connect Four's branching factor (~7) blows up quickly, but alpha-beta reduces the work to roughly b^(d/2), i.e. (√b)^d, in the best case. Outline why alpha-beta is correct: pruning when β ≤ α cannot change the value returned at the root. Mention that the best case requires (near-)perfect move ordering. References: The Connect-Four Python project implements exactly this: "Connect Four game… with minimax and alpha-beta pruning" [21]; cite that description. Knuth & Moore's classic analysis of alpha-beta, or AI textbooks, can be referenced for the complexity results. Tutorials/Code: The GitHub repo above has code for Connect Four with AI, and there are many blog posts on building a Connect Four AI. Teaching Tips: Explain the Connect Four rules and how to represent the state. On the blog, describe the evaluation function (if any). Show how alpha-beta prunes parts of the tree: walk through a small example tree and indicate the pruning points (diagram), and provide code fragments for the recursive alpha-beta function (a sketch follows below). Encourage readers to measure the performance difference (count nodes with and without pruning). End by noting how this generalizes to chess, checkers, etc. Cite [21] as a concrete example and link to a classic analysis of alpha-beta.
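A sketch of alpha-beta pruning on an abstract game tree (nested lists with leaf scores), which makes the pruning points easy to count; a real Connect Four AI would add a board representation, a depth cutoff, and an evaluation function on top of this skeleton:

```python
def alphabeta(node, alpha, beta, maximizing, visited):
    """Alpha-beta over an abstract game tree: internal nodes are lists, leaves are scores."""
    if not isinstance(node, list):
        visited.append(node)          # record the leaves we actually evaluate
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False, visited))
            alpha = max(alpha, value)
            if beta <= alpha:
                break                 # beta cutoff: MIN above will never let play reach here
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True, visited))
        beta = min(beta, value)
        if beta <= alpha:
            break                     # alpha cutoff
    return value

# Textbook-style example: a MAX root choosing among three MIN nodes.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
visited = []
print(alphabeta(tree, float("-inf"), float("inf"), True, visited))   # minimax value: 3
print(visited)   # leaves actually evaluated; 4 and 6 in the middle branch are pruned
```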
Sources: Each project above includes citations to code repositories or papers (e.g. A* pathfinding [2], the Sudoku CSP solver [6], Wumpus World logic [8], the STRIPS planner [11], the Pac-Man HMM [16], minimax Tic-Tac-Toe [19], and the Connect Four AI [21]). These sources provide algorithmic details and open-source code. In your blog posts, include these references when explaining the AI technique or offering further reading. Be sure to explain concepts thoroughly, use diagrams (game boards, search trees), and tie each game example back to the MIT 6.034 topic it illustrates.
1 GitHub - Tauseef-Hilal/Pathfinding-Visualizer: Pathfinding visualizations with Python and Pygame
https://github.com/Tauseef-Hilal/Pathfinding-Visualizer
2 (PDF) A*-based Pathfinding in Modern Computer Games
https://www.researchgate.net/publication/267809499_A-based_Pathfinding_in_Modern_Computer_Games
3 IIT Kanpur CS365 lecture slides on heuristic search for the 8-puzzle
https://cse.iitk.ac.in/users/cs365/2009/ppt/13jan_Aman.pdf
4 GitHub - harshkv/8_Puzzle_Problem: 8 Puzzle Problem using A* Algorithm in Python
https://github.com/harshkv/8_Puzzle_Problem
5 Solving Every Sudoku Puzzle
https://norvig.com/sudoku.html
6 GitHub - stressGC/Python-AC3-Backtracking-CSP-Sudoku-Solver: Python implementation of a sudoku
puzzle solver (CSP) using AC3 and backtracking algorithms
https://github.com/stressGC/Python-AC3-Backtracking-CSP-Sudoku-Solver
7 GitHub - Chandra-Suryadevara/N--Queen-Problem: CSP Using BackTracking Algorithm
https://github.com/Chandra-Suryadevara/N--Queen-Problem
8 GitHub - humbertobg/wumpus-world: Wumpus World example solved in python using Matrices and
First Order Logic
https://github.com/humbertobg/wumpus-world
9 10 Project: Solving Einstein's Puzzle with Python-Constraint
https://artificialcognition.github.io/who-owns-the-zebra
11 GitHub - tansey/strips: A python implementation of the STRIPS planning algorithm
https://github.com/tansey/strips
12 Goal Stack Planning for Blocks World Problem | by Apoorv Dixit | Medium
https://apoorvdixit619.medium.com/goal-stack-planning-for-blocks-world-problem-41779d090f29
13 strips/monkeys.txt at master · tansey/strips · GitHub
https://github.com/tansey/strips/blob/master/monkeys.txt
14 15 Monty Hall Problem — pgmpy 1.0.0 documentation
https://pgmpy.org/examples/Monty%20Hall%20Problem.html
16 GitHub - omardroubi/PacmanAI: An array of AI and Machine Learning techniques applied on the Pac-Man Game in Python
https://github.com/omardroubi/PacmanAI
17 Building a Q-Learning Grid Environment with Pygame A Complete Guide | by Harindra Dilshan |
Javarevisited | Jun, 2025 | Medium
https://medium.com/javarevisited/building-a-q-learning-grid-environment-with-pygame-a-complete-guide-8095c6593682
18 GitHub - vinits5/grid_qlearning: Create grid environments and train the agent using q-learning (q-tables)
https://github.com/vinits5/grid_qlearning
19 20 GitHub - Cledersonbc/tic-tac-toe-minimax: Minimax is an AI algorithm.
https://github.com/Cledersonbc/tic-tac-toe-minimax
21 GitHub - kupshah/Connect-Four: Connect Four game written in Python with miniMax and alpha-beta
pruning implementation for AI.
https://github.com/kupshah/Connect-Four