1 Downloading the code locally
Recommended OS: Ubuntu
Command-line instructions:
Download: git clone https://github.com/ultralytics/ultralytics.git
Open: code ultralytics (opens the cloned folder in VS Code)
Note: the rest of this workflow does not install dependencies from requirements.txt.
2 Creating a new virtual environment
conda create -n py39 python=3.9 anaconda
Note: appending "anaconda" at the end automatically installs the extra scientific-computing packages, which is convenient.
Note: dataset annotation formats (YOLO, COCO, VOC, JSON, etc.) can be converted into one another.
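After creating it, activate the environment (the name py39 comes from the command above) so that everything below is installed into it:
conda activate py39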
3 Installing torch
Check the CUDA version with nvidia-smi; its output shows the highest CUDA version the installed driver supports.
When choosing the PyTorch build, pick a CUDA version less than or equal to the one reported by nvidia-smi.
PyTorch previous-versions page: https://pytorch.org/get-started/previous-versions/
For example, installing PyTorch 2.4.0 with conda:
# CUDA 12.4
conda install pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 pytorch-cuda=12.4 -c pytorch -c nvidia
Note: if the conda download is slow, you can copy the download link into a browser, download the wheel there, and then install the downloaded file directly with pip (pick the wheel that matches your Python and CUDA versions), for example:
pip install torch-2.3.0+cu121-cp310-cp310-linux_x86_64.whl
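After installation, a quick sanity check (run inside the activated environment) confirms that the GPU build of torch can see the card:
import torch

print(torch.__version__)           # e.g. 2.4.0
print(torch.cuda.is_available())   # should print True when the build matches the driver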
Using a proxy with pip:
Search the web for "pip proxy" for details.
Command: pip install -r requirements.txt --proxy=<proxy server IP>:<port>
For example:
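The address below is only a placeholder; substitute your own proxy's IP and port:
pip install -r requirements.txt --proxy=http://127.0.0.1:7890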
4 Installing opencv
Command: pip install opencv-python seaborn --proxy=<proxy server IP>:<port>
data.yaml:
train, val, test: point these paths at the images directories; at runtime the program automatically replaces "images" with "labels" in each path to locate the label files.
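For example (hypothetical filenames), an image and its label are paired like this:
train/images/0001.jpg
train/labels/0001.txt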
Other packages:
pip install thop
5 Paths (absolute paths are recommended)
For example, in data.yaml:
Method 1:
train: E:/Datasets/Cats/train/images
val: E:/Datasets/Cats/val/images
test: E:/Datasets/Cats/test/images
Method 2:
path: E:/Datasets/Cats/
train: train/images
val: val/images
test: test/images
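Besides the paths, data.yaml also needs the class list; a minimal sketch assuming a single hypothetical class named cat:
nc: 1
names: ['cat']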
6 Training code from the Bilibili tutorial
6.1 Create train.py
from ultralytics import YOLO

if __name__ == '__main__':
    model = YOLO("ultralytics/cfg/models/11/yolo11s.yaml")  # model configuration
    # model.load('yolo11n.pt')  # load pretrained weights (optional)
    model.train(
        data='data.yaml',
        imgsz=640,
        epochs=50,
        batch=16,
        close_mosaic=0,
        optimizer='SGD',
        project='runs/train',  # root directory for saved results
        name='exp',            # run name
    )
Running this script starts training.
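For example, from the repository root (assuming train.py and data.yaml both live there):
python train.py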
Note: although the repository has no separate yolo11s.yaml file, you can pass the name directly and ignore the mismatch; the program resolves it automatically.
Note: if pretrained weights are loaded, a line like the following appears when training starts:
Transferred ???/499 items from pretrained weights.
6.2 Create val.py
import warnings
warnings.filterwarnings('ignore')
from ultralytics import YOLO

if __name__ == '__main__':
    model = YOLO("runs/train/exp/weights/best.pt")  # trained model weights
    model.val(
        data='/root/datasets/belt/data.yaml',
        split='val',
        imgsz=640,
        # iou=0.7,
        # rect=False,
        # save_json=True,  # enable if you need COCO metrics
        project='runs/train',  # root directory for saved results
        name='exp',            # run name
    )
Running this script starts validation.
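model.val() also returns a metrics object, so the headline numbers can be read programmatically; a minimal sketch using attribute names from the Ultralytics documentation (the data path is a placeholder):
from ultralytics import YOLO

model = YOLO("runs/train/exp/weights/best.pt")
metrics = model.val(data='data.yaml', imgsz=640)  # adjust the path to your dataset
print(metrics.box.map)    # mAP50-95
print(metrics.box.map50)  # mAP50
print(metrics.box.map75)  # mAP75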
6.3 Create detect.py
import warnings
warnings.filterwarnings('ignore')
from ultralytics import YOLO

if __name__ == '__main__':
    model = YOLO("runs/train/exp/weights/best.pt")  # trained model weights
    model.predict(
        source='datasets/images/test',  # path to the source images
        project='runs/detect',          # root directory for saved results
        name='exp',                     # run name
        save=True,
        # conf=0.2,
        # iou=0.7,
        # agnostic_nms=True,
        # visualize=True,     # visualize model feature maps
        # line_width=2,
        # show_conf=False,    # show prediction confidence
        # show_labels=True,   # show prediction labels
        # save_txt=True,      # save results as .txt files
        # save_crop=True,     # save cropped detections
    )
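model.predict() also returns a list of Results objects, so detections can be read programmatically; a minimal sketch using the documented Results/Boxes attributes:
from ultralytics import YOLO

model = YOLO("runs/train/exp/weights/best.pt")
results = model.predict(source='datasets/images/test')

for r in results:
    for box in r.boxes:                        # one entry per detection
        cls_id = int(box.cls[0])               # class index
        score = float(box.conf[0])             # confidence score
        x1, y1, x2, y2 = box.xyxy[0].tolist()  # box corners in pixels
        print(r.names[cls_id], score, (x1, y1, x2, y2))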
Detection model types:
Detection (COCO)
OBB (DOTAv1)
7 Ultralytics (GitHub source)
7.1 Train
Train a YOLO11 model on a custom dataset. In this mode, the model is trained with the specified dataset and hyperparameters. Training optimizes the model's parameters so it can accurately predict the classes and locations of objects in images.
Training from scratch:
from ultralytics import YOLO
model = YOLO("yolo11n.yaml")
results = model.train(
data="coco8.yaml", # path to dataset YAML
epochs=100, # number of training epochs
imgsz=640, # training image size
device="cpu", # device to run on, i.e. device=0 or device=0,1,2,3 or device=cpu
)
Starting from a pretrained model:
from ultralytics import YOLO
model = YOLO("yolo11n.pt") # pass any model type
results = model.train(
data="coco8.yaml", # path to dataset YAML
epochs=100, # number of training epochs
imgsz=640, # training image size
device="cpu", # device to run on, i.e. device=0 or device=0,1,2,3 or device=cpu
)
7.2 Val
Used to validate a YOLO11 model after training. In this mode, the model is evaluated on a validation set to measure its accuracy and generalization performance. It can also be used while tuning the model's hyperparameters to improve performance.
Val right after Train:
from ultralytics import YOLO
# Load a YOLO11 model
model = YOLO("yolo11n.yaml")
# Train the model
model.train(data="coco8.yaml", epochs=5)
# Validate on training data
model.val()
Val on a different dataset:
from ultralytics import YOLO
# Load a YOLO11 model
model = YOLO("yolo11n.yaml")
# Train the model
model.train(data="coco8.yaml", epochs=5)
# Validate on separate data
model.val(data="path/to/separate/data.yaml")
7.3 Predict
The model is loaded from a checkpoint file, and the user supplies images or videos to run inference on. The model predicts the classes and locations of objects in the input images or videos.
import cv2
from PIL import Image
from ultralytics import YOLO
model = YOLO("model.pt")
# accepts all formats - image/dir/Path/URL/video/PIL/ndarray. 0 for webcam
results = model.predict(source="0")
results = model.predict(source="folder", show=True) # Display preds. Accepts all YOLO predict arguments
# from PIL
im1 = Image.open("bus.jpg")
results = model.predict(source=im1, save=True) # save plotted images
# from ndarray
im2 = cv2.imread("bus.jpg")
results = model.predict(source=im2, save=True, save_txt=True) # save predictions as labels
# from list of PIL/ndarray
results = model.predict(source=[im1, im2])
# show results
results[0].show()
# Export the model to ONNX format
path = model.export(format="onnx")  # returns the path to the exported model
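According to the Ultralytics export docs, the exported ONNX file can be loaded back through the same YOLO interface for inference; a minimal sketch (the image path is just an example):
from ultralytics import YOLO

onnx_model = YOLO("model.onnx")                 # the file produced by model.export() above
results = onnx_model.predict(source="bus.jpg")  # runs exactly like a .pt model
results[0].show()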