YOLOv10: Real-Time End-to-End Object Detection
Performance
YOLOv10 achieves lower latency than the state-of-the-art YOLOv9 while matching it in accuracy in testing, which may make it the new go-to choice for deploying YOLO-family models.
Contents
1 Data Preparation
2 Configuration Files
3 Training
4 Validation
5 Prediction
6 Exporting the Model
7 Using the ONNX Model
Official paper: https://arxiv.org/pdf/2405.14458
Official code: https://github.com/THU-MIG/yolov10
Installation
A Conda virtual environment is recommended.
① Clone the YOLOv10 repository
git clone https://github.com/THU-MIG/yolov10.git
② Install dependencies
conda create -n yolov10 python=3.9
conda activate yolov10
cd path/to/yolov10
pip install -r requirements.txt
pip install -e . -i https://pypi.tuna.tsinghua.edu.cn/simple
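To confirm the environment is usable, a quick import check works:
python -c "import ultralytics; print(ultralytics.__version__)"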
1 Data Preparation
You can use an open-source dataset or prepare your own.
① Annotation Tools
⒈ labelme
Install: pip install labelme
Usage: run labelme in a terminal
The label files it produces are JSON files.
⒉ labelImg
Install: pip install labelimg
Usage:
cd to the labelImg directory;
python3 labelImg.py
The label files it produces are XML files.
② Organizing the Dataset
The original dataset is laid out as follows:
Annotations holds the XML label files.
JPEGImages holds the original images.
The labels folder holds the TXT label files, which are generated by the script XmlToTxt.py.
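The folder names used by the scripts below give the layout:

dataset_fire/
├── Annotations/   # XML label files from labelImg
├── JPEGImages/    # original images
└── labels/        # YOLO TXT label files produced by XmlToTxt.py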
The code for XmlToTxt.py:
import xml.etree.ElementTree as ET
import os

# TODO: edit to match your class names
classes = ['fire']
# TODO: edit to match the actual XML folder path
xml_filepath = 'dataset_fire/Annotations/'
# TODO: edit to match the folder where the result TXT files should be saved
labels_savepath = 'dataset_fire/labels/'


def convert(size, box):
    # Convert a VOC box (xmin, xmax, ymin, ymax) into normalized
    # YOLO format (x_center, y_center, width, height)
    dw = 1. / size[0]
    dh = 1. / size[1]
    x = (box[0] + box[1]) / 2.0 - 1
    y = (box[2] + box[3]) / 2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    return x * dw, y * dh, w * dw, h * dh


def convert_annotation(image_id):
    in_file = open(xml_filepath + '%s.xml' % image_id, encoding='UTF-8')
    out_file = open(labels_savepath + '%s.txt' % image_id, 'w')
    tree = ET.parse(in_file)
    root = tree.getroot()
    size = root.find('size')
    w = int(size.find('width').text)
    h = int(size.find('height').text)
    for obj in root.iter('object'):
        difficult = obj.find('difficult').text
        cls = obj.find('name').text
        if cls not in classes or int(difficult) == 1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text),
             float(xmlbox.find('ymin').text), float(xmlbox.find('ymax').text))
        b1, b2, b3, b4 = b
        # Clip boxes that extend past the image border
        if b2 > w:
            b2 = w
        if b4 > h:
            b4 = h
        bb = convert((w, h), (b1, b2, b3, b4))
        out_file.write(str(cls_id) + " " + " ".join(str(a) for a in bb) + '\n')
    in_file.close()
    out_file.close()


def run():
    # Make sure the output folder exists
    os.makedirs(labels_savepath, exist_ok=True)
    total_xml = os.listdir(xml_filepath)
    names = [xml[:-4] for xml in total_xml]
    for name in names:
        convert_annotation(name)


if __name__ == '__main__':
    run()
Then, based on the JPEGImages and labels folders, the script deal_dataset.py splits the dataset into the structure shown below (see the tree after the script).
The code for deal_dataset.py:
import os
import random
import shutil

# TODO: edit to match the original dataset directory
root_dir = 'dataset_fire/'

# Split ratios
train_ratio = 0.8
valid_ratio = 0.1
test_ratio = 0.1

# Fix the random seed so the split is reproducible
random.seed(42)

# TODO: edit to match the output directory for the split dataset
split_dir = 'dataset_fire_split/'
os.makedirs(os.path.join(split_dir, 'train/images'), exist_ok=True)
os.makedirs(os.path.join(split_dir, 'train/labels'), exist_ok=True)
os.makedirs(os.path.join(split_dir, 'val/images'), exist_ok=True)
os.makedirs(os.path.join(split_dir, 'val/labels'), exist_ok=True)
os.makedirs(os.path.join(split_dir, 'test/images'), exist_ok=True)
os.makedirs(os.path.join(split_dir, 'test/labels'), exist_ok=True)

# TODO: edit to match the actual image/label folder names
imgpath = "JPEGImages"
labelpath = "labels"
# Sort both lists so each image lines up with its label file
image_files = sorted(os.listdir(os.path.join(root_dir, imgpath)))
label_files = sorted(os.listdir(os.path.join(root_dir, labelpath)))

# Shuffle the image/label pairs together
combined_files = list(zip(image_files, label_files))
random.shuffle(combined_files)
image_files_shuffled, label_files_shuffled = zip(*combined_files)

# Compute the split boundaries from the ratios
train_bound = int(train_ratio * len(image_files_shuffled))
valid_bound = int((train_ratio + valid_ratio) * len(image_files_shuffled))

# Move each image and its label file into the matching split directory
for i, (image_file, label_file) in enumerate(zip(image_files_shuffled, label_files_shuffled)):
    if i < train_bound:
        subset = 'train'
    elif i < valid_bound:
        subset = 'val'
    else:
        subset = 'test'
    shutil.move(os.path.join(root_dir, imgpath, image_file),
                os.path.join(split_dir, subset, 'images', image_file))
    shutil.move(os.path.join(root_dir, labelpath, label_file),
                os.path.join(split_dir, subset, 'labels', label_file))
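After the script runs, the split dataset looks like this (the directories created by the makedirs calls above):

dataset_fire_split/
├── train/
│   ├── images/
│   └── labels/
├── val/
│   ├── images/
│   └── labels/
└── test/
    ├── images/
    └── labels/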
At this point, the dataset is ready!
2 Configuration Files
Create a file named fire.yaml in the YOLOv10 project directory with the following content:
train: dataset_fire_split/train
val: dataset_fire_split/val
test: dataset_fire_split/test
nc: 1
# classes
names:
  0: fire
Modify ultralytics/cfg/models/v10/yolov10s.yaml so that the class count matches your dataset:
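The key field is nc, the number of classes; for the one-class fire dataset used here it should read as follows (a minimal excerpt; the rest of the model definition can stay at its defaults):

# ultralytics/cfg/models/v10/yolov10s.yaml
nc: 1 # number of classes (the original file ships with the 80 COCO classes)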
3 Training
imgsz: the size images are resized to; the default is 640.
device: the device ID; it can be cpu. With a single GPU use device=0; with two GPUs use device=0,1, and so on.
Training examples:
- Method 1
Build a brand-new model from a YAML file:
yolo detect train data=fire.yaml model=yolov10s.yaml epochs=200 batch=8 imgsz=640 device=cpu project=yolov10
- Method 2
After configuring ultralytics/cfg/default.yaml, you can run training directly from that file, without passing any other command-line arguments:
yolo cfg=ultralytics/cfg/default.yaml
The official default.yaml reads as follows:
# Ultralytics YOLO 🚀, AGPL-3.0 license
# Default training settings and hyperparameters for medium-augmentation COCO training

task: detect # (str) YOLO task, i.e. detect, segment, classify, pose
mode: train # (str) YOLO mode, i.e. train, val, predict, export, track, benchmark

# Train settings -------------------------------------------------------------------------------------------------------
model: # (str, optional) path to model file, i.e. yolov8n.pt, yolov8n.yaml
data: # (str, optional) path to data file, i.e. coco128.yaml
epochs: 100 # (int) number of epochs to train for
time: # (float, optional) number of hours to train for, overrides epochs if supplied
patience: 100 # (int) epochs to wait for no observable improvement for early stopping of training
batch: 16 # (int) number of images per batch (-1 for AutoBatch)
imgsz: 640 # (int | list) input images size as int for train and val modes, or list[w,h] for predict and export modes
save: True # (bool) save train checkpoints and predict results
save_period: -1 # (int) Save checkpoint every x epochs (disabled if < 1)
val_period: 1 # (int) Validation every x epochs
cache: False # (bool) True/ram, disk or False. Use cache for data loading
device: # (int | str | list, optional) device to run on, i.e. cuda device=0 or device=0,1,2,3 or device=cpu
workers: 8 # (int) number of worker threads for data loading (per RANK if DDP)
project: # (str, optional) project name
name: # (str, optional) experiment name, results saved to 'project/name' directory
exist_ok: False # (bool) whether to overwrite existing experiment
pretrained: True # (bool | str) whether to use a pretrained model (bool) or a model to load weights from (str)
optimizer: auto # (str) optimizer to use, choices=[SGD, Adam, Adamax, AdamW, NAdam, RAdam, RMSProp, auto]
verbose: True # (bool) whether to print verbose output
seed: 0 # (int) random seed for reproducibility
deterministic: True # (bool) whether to enable deterministic mode
single_cls: False # (bool) train multi-class data as single-class
rect: False # (bool) rectangular training if mode='train' or rectangular validation if mode='val'
cos_lr: False # (bool) use cosine learning rate scheduler
close_mosaic: 10 # (int) disable mosaic augmentation for final epochs (0 to disable)
resume: False # (bool) resume training from last checkpoint
amp: True # (bool) Automatic Mixed Precision (AMP) training, choices=[True, False], True runs AMP check
fraction: 1.0 # (float) dataset fraction to train on (default is 1.0, all images in train set)
profile: False # (bool) profile ONNX and TensorRT speeds during training for loggers
freeze: None # (int | list, optional) freeze first n layers, or freeze list of layer indices during training
multi_scale: False # (bool) Whether to use multiscale during training
# Segmentation
overlap_mask: True # (bool) masks should overlap during training (segment train only)
mask_ratio: 4 # (int) mask downsample ratio (segment train only)
# Classification
dropout: 0.0 # (float) use dropout regularization (classify train only)

# Val/Test settings ----------------------------------------------------------------------------------------------------
val: True # (bool) validate/test during training
split: val # (str) dataset split to use for validation, i.e. 'val', 'test' or 'train'
save_json: False # (bool) save results to JSON file
save_hybrid: False # (bool) save hybrid version of labels (labels + additional predictions)
conf: # (float, optional) object confidence threshold for detection (default 0.25 predict, 0.001 val)
iou: 0.7 # (float) intersection over union (IoU) threshold for NMS
max_det: 300 # (int) maximum number of detections per image
half: False # (bool) use half precision (FP16)
dnn: False # (bool) use OpenCV DNN for ONNX inference
plots: True # (bool) save plots and images during train/val

# Predict settings -----------------------------------------------------------------------------------------------------
source: # (str, optional) source directory for images or videos
vid_stride: 1 # (int) video frame-rate stride
stream_buffer: False # (bool) buffer all streaming frames (True) or return the most recent frame (False)
visualize: False # (bool) visualize model features
augment: False # (bool) apply image augmentation to prediction sources
agnostic_nms: False # (bool) class-agnostic NMS
classes: # (int | list[int], optional) filter results by class, i.e. classes=0, or classes=[0,2,3]
retina_masks: False # (bool) use high-resolution segmentation masks
embed: # (list[int], optional) return feature vectors/embeddings from given layers

# Visualize settings ---------------------------------------------------------------------------------------------------
show: False # (bool) show predicted images and videos if environment allows
save_frames: False # (bool) save predicted individual video frames
save_txt: False # (bool) save results as .txt file
save_conf: False # (bool) save results with confidence scores
save_crop: False # (bool) save cropped images with results
show_labels: True # (bool) show prediction labels, i.e. 'person'
show_conf: True # (bool) show prediction confidence, i.e. '0.99'
show_boxes: True # (bool) show prediction boxes
line_width: # (int, optional) line width of the bounding boxes. Scaled to image size if None.

# Export settings ------------------------------------------------------------------------------------------------------
format: torchscript # (str) format to export to, choices at https://docs.ultralytics.com/modes/export/#export-formats
keras: False # (bool) use Keras
optimize: False # (bool) TorchScript: optimize for mobile
int8: False # (bool) CoreML/TF INT8 quantization
dynamic: False # (bool) ONNX/TF/TensorRT: dynamic axes
simplify: False # (bool) ONNX: simplify model
opset: # (int, optional) ONNX: opset version
workspace: 4 # (int) TensorRT: workspace size (GB)
nms: False # (bool) CoreML: add NMS

# Hyperparameters ------------------------------------------------------------------------------------------------------
lr0: 0.01 # (float) initial learning rate (i.e. SGD=1E-2, Adam=1E-3)
lrf: 0.01 # (float) final learning rate (lr0 * lrf)
momentum: 0.937 # (float) SGD momentum/Adam beta1
weight_decay: 0.0005 # (float) optimizer weight decay 5e-4
warmup_epochs: 3.0 # (float) warmup epochs (fractions ok)
warmup_momentum: 0.8 # (float) warmup initial momentum
warmup_bias_lr: 0.1 # (float) warmup initial bias lr
box: 7.5 # (float) box loss gain
cls: 0.5 # (float) cls loss gain (scale with pixels)
dfl: 1.5 # (float) dfl loss gain
pose: 12.0 # (float) pose loss gain
kobj: 1.0 # (float) keypoint obj loss gain
label_smoothing: 0.0 # (float) label smoothing (fraction)
nbs: 64 # (int) nominal batch size
hsv_h: 0.015 # (float) image HSV-Hue augmentation (fraction)
hsv_s: 0.7 # (float) image HSV-Saturation augmentation (fraction)
hsv_v: 0.4 # (float) image HSV-Value augmentation (fraction)
degrees: 0.0 # (float) image rotation (+/- deg)
translate: 0.1 # (float) image translation (+/- fraction)
scale: 0.5 # (float) image scale (+/- gain)
shear: 0.0 # (float) image shear (+/- deg)
perspective: 0.0 # (float) image perspective (+/- fraction), range 0-0.001
flipud: 0.0 # (float) image flip up-down (probability)
fliplr: 0.5 # (float) image flip left-right (probability)
bgr: 0.0 # (float) image channel BGR (probability)
mosaic: 1.0 # (float) image mosaic (probability)
mixup: 0.0 # (float) image mixup (probability)
copy_paste: 0.0 # (float) segment copy-paste (probability)
auto_augment: randaugment # (str) auto augmentation policy for classification (randaugment, autoaugment, augmix)
erasing: 0.4 # (float) probability of random erasing during classification training (0-1)
crop_fraction: 1.0 # (float) image crop fraction for classification evaluation/inference (0-1)

# Custom config.yaml ---------------------------------------------------------------------------------------------------
cfg: # (str, optional) for overriding defaults.yaml

# Tracker settings ------------------------------------------------------------------------------------------------------
tracker: botsort.yaml # (str) tracker type, choices=[botsort.yaml, bytetrack.yaml]
- Method 3 (recommended)
First, download the pretrained weights; the available models are:
yolov10n.pt yolov10s.pt yolov10m.pt yolov10b.pt yolov10l.pt yolov10x.pt
Place the downloaded weights in the YOLOv10 project directory.
Build a new model from a YAML file, transfer the pretrained weights into it, and start training:
# The data config file is best given as an absolute path
yolo detect train data=fire.yaml model=ultralytics/cfg/models/v10/yolov10s.yaml pretrained=yolov10s.pt epochs=50 batch=8 imgsz=640 device=cpu project=yolov10
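If you prefer scripting over the CLI, the same run can be launched from Python. A minimal sketch, assuming the fork's bundled ultralytics package exposes a YOLOv10 class as its README describes:

from ultralytics import YOLOv10

# Build the model from the YAML and transfer the pretrained weights into it
model = YOLOv10('ultralytics/cfg/models/v10/yolov10s.yaml').load('yolov10s.pt')

# Same hyperparameters as the CLI command above
model.train(data='fire.yaml', epochs=50, batch=8, imgsz=640, device='cpu', project='yolov10')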
Artifacts produced during training:
After training finishes, the model is saved under yolov10/train5/weights:
4 Validation
A validation example follows.
Note: prefer an absolute path for the data config file.
cd path/to/yolov10
yolo task=detect mode=val split=val model=yolov10/train5/weights/best.pt data=fire.yaml batch=2 device=cpu
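The Python-API equivalent, under the same YOLOv10-class assumption as above:

from ultralytics import YOLOv10

model = YOLOv10('yolov10/train5/weights/best.pt')
metrics = model.val(data='fire.yaml', split='val', batch=2, device='cpu')
print(metrics.box.map50)  # mAP@0.5 on the validation split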
Artifacts produced during validation:
5 Prediction
A prediction example follows.
cd path/to/yolov10
yolo task=detect mode=predict model=yolov10/train5/weights/best.pt source=test.jpg device=cpu
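And from Python, under the same assumption:

import cv2
from ultralytics import YOLOv10

model = YOLOv10('yolov10/train5/weights/best.pt')
results = model.predict(source='test.jpg', device='cpu')
annotated = results[0].plot()  # numpy array (BGR) with boxes drawn on it
cv2.imwrite('result_api.jpg', annotated)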
The prediction result is shown below.
Note: this run only demonstrates the workflow; the number of training epochs was small, so the detection confidence is mediocre.
6 Exporting the Model
Example of exporting an ONNX model:
# export custom trained model
yolo task=detect mode=export model=yolov10/train5/weights/best.pt format=onnx
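The Python-API version of the export, under the same YOLOv10-class assumption:

from ultralytics import YOLOv10

model = YOLOv10('yolov10/train5/weights/best.pt')
model.export(format='onnx')  # writes best.onnx next to the .pt weights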
7 Using the ONNX Model
Command-line usage:
yolo detect predict model=yolov10/train5/weights/best.onnx source='test.jpg'
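The exported model can also be run directly with onnxruntime. The sketch below makes two assumptions worth verifying on your own model: the export takes a 640×640 input, and the end-to-end YOLOv10 ONNX model returns a single output of shape (1, 300, 6), one row per detection as (x1, y1, x2, y2, score, class) in input-pixel coordinates, with no NMS step required. It also uses a plain resize instead of Ultralytics' letterbox preprocessing, which is simpler but can cost some accuracy:

import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession('yolov10/train5/weights/best.onnx',
                               providers=['CPUExecutionProvider'])
input_name = session.get_inputs()[0].name

# Preprocess: BGR -> RGB, resize to the export size, scale to [0, 1], NCHW
img = cv2.imread('test.jpg')
h0, w0 = img.shape[:2]
blob = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
blob = cv2.resize(blob, (640, 640)).astype(np.float32) / 255.0
blob = blob.transpose(2, 0, 1)[None]  # (1, 3, 640, 640)

# Run inference; YOLOv10 is NMS-free, so the raw output is already final
detections = session.run(None, {input_name: blob})[0][0]  # assumed (300, 6)

for x1, y1, x2, y2, score, cls_id in detections:
    if score < 0.25:  # confidence threshold
        continue
    # Map the box from the 640x640 input space back to the original image
    x1, x2 = x1 * w0 / 640, x2 * w0 / 640
    y1, y2 = y1 * h0 / 640, y2 * h0 / 640
    cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 2)
    cv2.putText(img, f'{int(cls_id)} {score:.2f}', (int(x1), int(y1) - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)

cv2.imwrite('onnx_result.jpg', img)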
The detection result is shown below.
That's all for this post. Meeting you here is fate, and I'm grateful for it. If it helped, give it a like and a follow!