Project Overview
This article walks through a maze-navigation agent built on a Deep Q-Network (DQN). Through a reinforcement learning algorithm, the agent learns a navigation policy on its own in a custom maze environment and eventually finds the optimal path from the start to the goal. The project consists of four core files, responsible for experience replay, the Q-network model, the maze environment, and the main training logic, respectively.
Core Components
1. Experience Replay Buffer (ReplayBuffer.py)
Experience replay is one of the key techniques behind DQN; it addresses the problem of correlated samples in reinforcement learning.
```python
from collections import deque
import random
import numpy as np


class ReplayBuffer:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # store experiences in a fixed-size deque

    def push(self, state, action, reward, next_state, done):
        # Store a single transition (state, action, reward, next_state, done flag)
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Randomly sample a batch of transitions
        batch = random.sample(self.buffer, batch_size)
        # Group the batch by field and convert each group to a numpy array
        state, action, reward, next_state, done = map(np.stack, zip(*batch))
        return state, action, reward, next_state, done

    def __len__(self):
        return len(self.buffer)
```
The buffer is built on a deque with a fixed capacity: once it is full, the oldest experience is dropped automatically. The push method stores a transition from the agent's interaction with the environment, and the sample method draws a random batch for training, which effectively reduces the correlation between samples.
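To make the interface concrete, here is a minimal usage sketch (not part of the project files); the zero-filled 5x5 observations are just placeholders:

```python
import numpy as np
from ReplayBuffer import ReplayBuffer

buffer = ReplayBuffer(capacity=100)

# Push a few placeholder transitions: (state, action, reward, next_state, done)
for i in range(10):
    state = np.zeros((5, 5), dtype=np.float32)
    next_state = np.zeros((5, 5), dtype=np.float32)
    buffer.push(state, i % 4, -1.0, next_state, False)

print(len(buffer))  # 10

states, actions, rewards, next_states, dones = buffer.sample(4)
print(states.shape, actions.shape, dones.shape)  # (4, 5, 5) (4,) (4,)
```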
2. Q-Network Model (Q_net.py)
The Q-network is the core of DQN: it approximates the action-value function (the Q-function).
```python
import torch.nn as nn


class DQN(nn.Module):
    def __init__(self, state_dim, action_dim):
        super(DQN, self).__init__()
        self.fc = nn.Sequential(
            nn.Linear(state_dim, 64),   # input layer -> first hidden layer
            nn.ReLU(),
            nn.Linear(64, 64),          # second hidden layer
            nn.ReLU(),
            nn.Linear(64, action_dim)   # output layer (one Q-value per action)
        )

    def forward(self, x):
        # Flatten the 2D observation into a 1D feature vector
        x = x.view(x.size(0), -1)
        return self.fc(x)
```
The network uses a three-layer fully connected architecture:
- Input layer: receives the state features (the flattened maze observation)
- Two hidden layers: 64 neurons each, with ReLU activations
- Output layer: one Q-value per possible action (the action values)
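A quick shape check for this network, assuming a flattened 5x5 observation (state_dim = 25) and 4 actions (the batch of zeros is a placeholder):

```python
import torch
from Q_net import DQN

net = DQN(state_dim=25, action_dim=4)    # 5x5 maze flattened, 4 actions
obs_batch = torch.zeros(8, 5, 5)         # a batch of 8 raw 5x5 observations
q_values = net(obs_batch)                # forward() flattens each one to 25 features
print(q_values.shape)                    # torch.Size([8, 4]) -- one Q-value per action

greedy_actions = q_values.argmax(dim=1)  # greedy action for each sample
print(greedy_actions.shape)              # torch.Size([8])
```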
3. Maze Environment (env_create.py)
The custom maze environment defines the space the agent interacts with and the rules of that interaction.
```python
import math
import numpy as np
import gym
from gym import spaces


class MazeEnv(gym.Env):
    def __init__(self):
        # Action space: up, down, left, right
        self.action_space = spaces.Discrete(4)
        # Observation space: a 2D grid
        self.observation_space = spaces.Box(low=0, high=3, shape=(5, 5), dtype=np.int32)
        # Maze layout (0 = free cell, 1 = wall, 2 = goal, 3 = agent)
        self.maze = np.zeros((5, 5))
        self.maze[1:4, 3] = 1
        self.maze[1:3, 1] = 1
        self.maze[2, 0] = 1
        self.maze[2, 4] = 1
        self.maze[4, 4] = 2      # goal
        self.agent_pos = [0, 0]  # initial agent position

    def reset(self, **kwargs):
        self.agent_pos = [0, 0]
        return self._get_observation()

    def step(self, action):
        # Remember the position before moving
        old_pos = self.agent_pos.copy()
        # Try to execute the action
        if action == 0:    # up
            self.agent_pos[0] = max(0, self.agent_pos[0] - 1)
        elif action == 1:  # down
            self.agent_pos[0] = min(4, self.agent_pos[0] + 1)
        elif action == 2:  # left
            self.agent_pos[1] = max(0, self.agent_pos[1] - 1)
        elif action == 3:  # right
            self.agent_pos[1] = min(4, self.agent_pos[1] + 1)

        # Penalize attempts to move beyond the grid boundary
        boundary_penalty = 0
        if action == 0 and old_pos[0] == 0 and self.agent_pos[0] == 0:
            boundary_penalty = -5  # top boundary
        elif action == 1 and old_pos[0] == 4 and self.agent_pos[0] == 4:
            boundary_penalty = -5  # bottom boundary
        elif action == 2 and old_pos[1] == 0 and self.agent_pos[1] == 0:
            boundary_penalty = -5  # left boundary
        elif action == 3 and old_pos[1] == 4 and self.agent_pos[1] == 4:
            boundary_penalty = -5  # right boundary

        # Check for a wall collision or reaching the goal
        if self.maze[self.agent_pos[0], self.agent_pos[1]] == 1:
            reward = -10              # wall collision penalty
            self.agent_pos = old_pos  # revert to the previous position
            done = False
        elif self.maze[self.agent_pos[0], self.agent_pos[1]] == 2:
            reward = 100              # goal reward
            done = True
        else:
            reward = -1               # small per-step penalty
            done = False
        # self.render()

        # Distance shaping: Euclidean distance to the goal, rounded
        goal_pos = np.argwhere(self.maze == 2)[0]
        distance_reward = (self.agent_pos[0] - goal_pos[0]) ** 2 + (self.agent_pos[1] - goal_pos[1]) ** 2
        distance_reward = round(math.sqrt(distance_reward))

        # Combine the boundary and distance penalties
        reward += boundary_penalty
        reward -= distance_reward
        return self._get_observation(), reward, done, {}

    def _get_observation(self):
        obs = self.maze.copy()
        obs[self.agent_pos[0], self.agent_pos[1]] = 3  # mark the agent's position
        return obs

    def render(self, mode='human'):
        for i in range(5):
            for j in range(5):
                if i == self.agent_pos[0] and j == self.agent_pos[1]:
                    print('A', end=' ')  # A = agent
                elif self.maze[i, j] == 1:
                    print('#', end=' ')  # wall
                elif self.maze[i, j] == 2:
                    print('G', end=' ')  # G = goal
                elif self.maze[i, j] == 0:
                    print('.', end=' ')  # free cell
            print()
        print()
```
Key mechanics of the environment:
- Action space: 4 discrete actions (move up, down, left, or right)
- State space: a 5x5 grid containing walls, free cells, the goal, and the agent's position
- Reward design (the important part):
  - Reaching the goal: +100
  - Hitting a wall: -10
  - Each step: -1 base penalty (encourages reaching the goal quickly)
  - Bumping into the boundary: -5
  - Distance penalty: the farther the agent is from the goal, the larger the penalty (this speeds up convergence; adding it shortened training time considerably)
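The interface can be exercised with a short random-walk loop like the following sketch (illustration only, not part of the original files):

```python
from env_create import MazeEnv

env = MazeEnv()
obs = env.reset()
env.render()  # print the initial 5x5 grid

total_reward = 0
for _ in range(10):
    action = env.action_space.sample()   # 0=up, 1=down, 2=left, 3=right
    obs, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        break

print("Total reward of the random walk:", total_reward)
```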
4. Main Training Logic (main.py)
This file implements the core DQN training loop and the testing logic.
Training Flow Walkthrough
- Initialize the components:
  - Policy network (policy_net): used for the current decisions
  - Target network (target_net): used to compute target Q-values; its parameters are periodically copied from the policy network
  - Experience replay buffer: stores interaction experience
  - Optimizer: Adam
- ε-greedy policy (a small schedule check is sketched after this list):
  - Balances exploration and exploitation: starts with a high exploration rate (ε = 1.0) and gradually decays toward 0.2
  - If a uniform random number is greater than ε: take the action with the largest Q-value (exploitation)
  - Otherwise: take a random action (exploration)
- Training loop:
```python
import time
import torch.optim as optim
import numpy as np
import random
import torch
from env_create import MazeEnv
from ReplayBuffer import ReplayBuffer
from Q_net import DQN
import torch.nn as nn


def train_dqn(env, episodes=200):
    # State and action dimensions
    state_dim = np.prod(env.observation_space.shape)
    action_dim = env.action_space.n
    print(state_dim)
    print(action_dim)

    # Initialize networks and optimizer
    policy_net = DQN(state_dim, action_dim)
    target_net = DQN(state_dim, action_dim)
    target_net.load_state_dict(policy_net.state_dict())
    target_net.eval()
    optimizer = optim.Adam(policy_net.parameters(), lr=0.001)
    replay_buffer = ReplayBuffer(10000)

    # Hyperparameters
    GAMMA = 0.99        # discount factor
    EPS_START = 1.0     # initial epsilon
    EPS_END = 0.2       # final epsilon
    EPS_DECAY = 1000    # epsilon decay rate
    TARGET_UPDATE = 15  # target network update frequency (in episodes)
    BATCH_SIZE = 64     # batch size

    steps_done = 0
    rewards_history = []
    best_reward = -float('inf')  # best episode reward so far

    for episode in range(episodes):
        state = env.reset()
        episode_reward = 0
        done = False
        T_start = round(time.time())
        while not done:
            # Epsilon-greedy action selection
            epsilon = EPS_END + (EPS_START - EPS_END) * np.exp(-steps_done / EPS_DECAY)
            steps_done += 1
            if random.random() > epsilon:
                with torch.no_grad():
                    state_tensor = torch.FloatTensor(state).unsqueeze(0)
                    action = policy_net(state_tensor).argmax().item()
            else:
                action = env.action_space.sample()

            # Execute the action
            next_state, reward, done, _ = env.step(action)
            episode_reward += reward

            # Store the transition
            replay_buffer.push(state, action, reward, next_state, done)
            state = next_state

            # Train the network
            if len(replay_buffer) > BATCH_SIZE:
                states, actions, rewards, next_states, dones = replay_buffer.sample(BATCH_SIZE)
                states_tensor = torch.FloatTensor(states)
                actions_tensor = torch.LongTensor(actions).unsqueeze(1)
                rewards_tensor = torch.FloatTensor(rewards)
                next_states_tensor = torch.FloatTensor(next_states)
                dones_tensor = torch.FloatTensor(dones)

                # Current Q-values for the taken actions
                q_values = policy_net(states_tensor).gather(1, actions_tensor).squeeze()
                # Target Q-values (computed with the target network)
                next_q_values = target_net(next_states_tensor).max(1)[0].detach()
                targets = rewards_tensor + GAMMA * next_q_values * (1 - dones_tensor)

                # Compute the loss and take an optimization step
                loss = nn.MSELoss()(q_values, targets)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()

            # Render if an episode drags on for more than a minute
            T_finals = round(time.time())
            if T_finals - T_start > 60:
                env.render()

        # Periodically update the target network
        if episode % TARGET_UPDATE == 0:
            target_net.load_state_dict(policy_net.state_dict())

        rewards_history.append(episode_reward)
        print(f"Episode {episode}, Reward: {episode_reward}, Epsilon: {epsilon:.4f}")

        # Save the model whenever it achieves a new best reward
        if episode_reward > best_reward:
            best_reward = episode_reward
            torch.save(policy_net.state_dict(), 'best_dqn_model.pth')

    return policy_net, rewards_history


# Train the agent
env = MazeEnv()
policy, rewards = train_dqn(env)


def play_game(env, policy, episodes=5):
    for episode in range(episodes):
        state = env.reset()
        done = False
        total_reward = 0
        while not done:
            env.render()  # visualize
            with torch.no_grad():
                state_tensor = torch.FloatTensor(state).unsqueeze(0)
                action = policy(state_tensor).argmax().item()
            state, reward, done, _ = env.step(action)
            total_reward += reward
        print(f"Episode {episode} completed with reward: {total_reward}")
    env.close()


# Load the best model
best_policy = DQN(np.prod(env.observation_space.shape), env.action_space.n)
best_policy.load_state_dict(torch.load('best_dqn_model.pth'))
# Test the best model
play_game(env, best_policy)
```
- Q-value computation and optimization (a toy target calculation is sketched after this list):
  - Current Q-value: the policy network's estimate for the current state-action pair
  - Target Q-value: computed from the Bellman equation, combining the immediate reward with the maximum Q-value of the next state
  - Loss function: mean squared error (MSE), minimizing the gap between the current and target Q-values
- Model saving and testing:
  - The model that achieves the highest episode reward is saved
  - During testing, the best model is used and acts greedily (no exploration)
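As referenced in the list above, here is a small standalone check (not part of main.py) of the exploration schedule and the Bellman target, using the same constants as train_dqn; the toy transition numbers are made up for illustration:

```python
import numpy as np

# Constants copied from train_dqn
EPS_START, EPS_END, EPS_DECAY, GAMMA = 1.0, 0.2, 1000, 0.99

# Epsilon decays from 1.0 toward 0.2 as steps_done grows
for steps_done in (0, 1000, 5000):
    epsilon = EPS_END + (EPS_START - EPS_END) * np.exp(-steps_done / EPS_DECAY)
    print(steps_done, round(float(epsilon), 3))   # 1.0 -> ~0.494 -> ~0.205

# Toy transition: step reward -1, best next-state Q-value 50, episode not done
reward, next_q_max, done = -1.0, 50.0, 0.0
target = reward + GAMMA * next_q_max * (1 - done)
print(target)  # 48.5
```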
Algorithm Principles
The core ideas of the DQN algorithm implemented in this project are:
- Experience replay: breaks sample correlation and improves training stability
- Target network: keeps the target Q-value computation fixed for a while, reducing training oscillation
- ε-greedy exploration: balances exploring new states and exploiting known information
- Deep neural network: approximates the complex state-action value function
With these techniques, the agent learns a navigation policy on its own in the maze environment and gradually finds the optimal path from the start (0,0) to the goal (4,4) while avoiding walls and boundaries.
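For reference, the target and loss computed in the code correspond to the standard DQN objective, written here in LaTeX notation (θ⁻ denotes the target-network parameters and d the done flag):

$$
y = r + \gamma \,(1 - d)\,\max_{a'} Q(s', a'; \theta^{-}), \qquad
L(\theta) = \bigl(Q(s, a; \theta) - y\bigr)^{2}
$$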
Summary and Results
This code is only a simple but complete deep-reinforcement-learning system, and the agent is trained on a single fixed maze. Readers may consider adding random maze generation so the agent can adapt to different mazes; one possible sketch of that extension follows.
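The sketch below is not part of the original project; it assumes the original cell encoding (0 = free, 1 = wall, 2 = goal) is kept, and the helper name generate_random_maze is hypothetical:

```python
from collections import deque
import numpy as np

def _reachable(maze, start=(0, 0), goal=(4, 4)):
    # Breadth-first search over non-wall cells
    queue, seen = deque([start]), {start}
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return True
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < 5 and 0 <= nc < 5 and (nr, nc) not in seen and maze[nr, nc] != 1:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

def generate_random_maze(wall_prob=0.25):
    # Keep sampling layouts until the goal is reachable from the start
    while True:
        maze = (np.random.rand(5, 5) < wall_prob).astype(np.int32)  # 1 = wall
        maze[0, 0] = 0  # start stays free
        maze[4, 4] = 2  # goal marker, matching the original encoding
        if _reachable(maze):
            return maze

print(generate_random_maze())
```

MazeEnv.reset() could then assign a freshly generated layout to self.maze so that each episode uses a different maze.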