Reproducing YOLOv5 + DeepSORT
### Object Detection and Tracking with YOLOv5 and DeepSORT
#### Initializing the Environment and Installing Dependencies
To build a multi-object tracking system based on YOLOv5 and DeepSORT, first set up the development environment and install the required packages. This means preparing a Python and PyTorch environment that is compatible with a recent release of YOLOv5[^1].
```bash
# Install PyTorch (the CUDA 11.3 build is shown; pick the wheel matching your CUDA version)
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113
# Clone YOLOv5 and install its dependencies
git clone https://github.com/ultralytics/yolov5.git
cd yolov5
pip install -r requirements.txt
```
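Before going further, it is worth confirming that PyTorch was installed correctly and can see a CUDA device if one is available; a minimal sanity check looks like this:
```python
import torch

# Print the installed PyTorch version and whether a CUDA device is visible
print(torch.__version__)
print(torch.cuda.is_available())
```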
Next, download and set up the resources required by the DeepSORT part:
```bash
# Clone the reference DeepSORT implementation
git clone https://github.com/nwojke/deep_sort.git
cd deep_sort
# The repository is used directly from source (it does not ship a setup.py),
# so install the dependencies listed in its README manually
pip install numpy scipy scikit-learn opencv-python
```
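Because deep_sort is used straight from the cloned sources, its packages need to be importable. One simple approach (an assumption here, not part of the original write-up) is to append the clone location to `sys.path` before the imports in the next section; the path below is only a placeholder. The YOLOv5 imports below likewise assume the script is run from inside the cloned `yolov5` directory.
```python
import sys

# Placeholder path: point this at the directory where deep_sort was cloned
sys.path.append('/path/to/deep_sort')
```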
#### Loading the Pretrained Model and Initializing Components
Once the environment is ready, load a pretrained YOLOv5 weights file and create the objects used in later processing. The data structures and parameters required by DeepSORT are also prepared at this point.
```python
import torch
import numpy as np
import cv2

from models.experimental import attempt_load
from utils.general import non_max_suppression
from utils.torch_utils import select_device

from deep_sort import preprocessing
from deep_sort.nn_matching import NearestNeighborDistanceMetric
from deep_sort.detection import Detection
from deep_sort.tracker import Tracker

weights_path = 'yolov5s.pt'   # path to the YOLOv5 weights file
device = select_device('')    # automatically pick an available device (CPU/GPU)
model = attempt_load(weights_path, map_location=device)
model.eval()

max_cosine_distance = 0.4     # maximum cosine distance for appearance matching
nn_budget = None              # gallery size per track for NN matching (None = unlimited)
metric = NearestNeighborDistanceMetric("cosine", max_cosine_distance, nn_budget)
tracker = Tracker(metric)
```
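The `encoder` used in the frame-processing function below extracts an appearance feature for each detected box. In the nwojke/deep_sort repository this role is played by `tools/generate_detections.py`, which requires TensorFlow and the pretrained `mars-small128.pb` re-identification model (downloaded separately as described in that repo's README). A minimal sketch, assuming the repo root is on `sys.path` and the model file is in the working directory:
```python
from tools.generate_detections import create_box_encoder

# Path to the MARS re-ID model shipped with the deep_sort release (download separately)
reid_model_path = 'mars-small128.pb'
encoder = create_box_encoder(reid_model_path, batch_size=32)
```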
#### Processing Frames: Detection and Tracking
For each input image or video frame, the following function ties the pieces together: it runs the detector on the frame and then feeds the resulting detections to the tracker, which predicts and updates the object trajectories.
```python
def process_frame(frame):
    img_size = 640     # network input size
    conf_thres = 0.4   # confidence threshold
    iou_thres = 0.5    # IoU threshold for NMS

    # Simplified preprocessing: plain resize and normalization
    # (no letterbox padding, so the aspect ratio is not preserved)
    img = cv2.resize(frame, (img_size, img_size))
    img = img[:, :, ::-1].transpose(2, 0, 1).copy()   # BGR -> RGB, HWC -> CHW
    img = torch.from_numpy(img).to(device).float() / 255.0
    img = img.unsqueeze(0)

    with torch.no_grad():
        pred = model(img)[0]
    det = non_max_suppression(pred, conf_thres, iou_thres)[0]

    output_bboxes = []
    if det is not None and len(det):
        # Map boxes from network input size back to the original frame size
        h, w = frame.shape[:2]
        det[:, [0, 2]] *= w / img_size
        det[:, [1, 3]] *= h / img_size

        # Convert boxes to (top-left x, top-left y, width, height) as DeepSORT expects
        bboxes_tlwh = []
        confidences = []
        for *xyxy, conf, cls in det:
            x1, y1, x2, y2 = [int(v) for v in xyxy]
            bboxes_tlwh.append([x1, y1, x2 - x1, y2 - y1])
            confidences.append(conf.item())

        # Extract appearance features and build DeepSORT detections
        features = encoder(frame, bboxes_tlwh)
        detections = [Detection(bbox, score, feature)
                      for bbox, score, feature in zip(bboxes_tlwh, confidences, features)]

        # Kalman prediction step followed by the measurement update with new detections
        tracker.predict()
        tracker.update(detections)

        # Collect confirmed, recently updated tracks as (box, track id) pairs
        for track in tracker.tracks:
            if not track.is_confirmed() or track.time_since_update > 1:
                continue
            x1, y1, x2, y2 = track.to_tlbr()
            output_bboxes.append([[int(x1), int(y1), int(x2), int(y2)], int(track.track_id)])
    return frame, output_bboxes
```
The snippet above shows how YOLOv5 provides the initial object locations and how DeepSORT then handles identity association and continuous tracking across frames[^4].
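To put the pieces together, a driving loop along the following lines reads frames from a video, calls `process_frame`, and draws the tracked boxes with their IDs. This is a minimal sketch rather than a full application, and the video path is only a placeholder:
```python
cap = cv2.VideoCapture('input.mp4')  # placeholder path; use 0 for a webcam
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    frame, tracked = process_frame(frame)
    for (x1, y1, x2, y2), track_id in tracked:
        # Draw each tracked bounding box together with its persistent ID
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, f'ID {track_id}', (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow('YOLOv5 + DeepSORT', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```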