💡💡💡 Original improvement developed in this article: MSAM (an upgraded CBAM): the channel attention gains multi-scale capability, multi-branch depth-wise convolutions extract multi-scale features more effectively, and the result is efficiently combined with spatial attention.
1) Use MSAM as an attention module;
Recommendation: five stars
MSCA | Personally verified to deliver accuracy gains on multiple datasets, benchmarked against CBAM.
Collected in the original self-developed YOLOv5 column:
https://blog.csdn.net/m0_63774211/category_12511931.html
💡💡💡 Exclusive, first-published original innovation, suitable for papers!!!
💡💡💡 Innovation points from top 2024 computer-vision conferences, applicable to Yolov5, Yolov7, Yolov8 and the rest of the Yolo family. The column articles walk through every step and provide the source code, making it easy to start modifying the network yourself!!!
💡💡💡 Key point: after reading this column, you will be able to design your own modified networks, changing different parts of the network (Backbone, head, detect, loss, etc.) to create your own innovations!!!
1. Attention Mechanisms in Computer Vision
Generally speaking, attention mechanisms are divided into the following four basic categories:
Channel Attention
Spatial Attention
Temporal Attention
Branch Attention
2. CBAM: Combining Channel Attention and Spatial Attention
CBAM is a lightweight convolutional attention module that combines channel and spatial attention mechanisms.
Paper title: "CBAM: Convolutional Block Attention Module"
Paper link: https://arxiv.org/pdf/1807.06521.pdf
As the figure above shows, CBAM contains two sub-modules, CAM (Channel Attention Module) and SAM (Spatial Attention Module), which apply attention along the channel and spatial dimensions respectively. This not only saves parameters and computation, but also makes CBAM a plug-and-play module that can be integrated into existing network architectures.
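To make the channel-then-spatial pipeline concrete, here is a minimal sketch of a CBAM-style block. It is an illustrative re-implementation following the paper's description, not the authors' reference code; the 16× channel-reduction ratio and the shared MLP over average- and max-pooled descriptors are the paper's defaults, assumed here.

import torch
import torch.nn as nn

class CBAMSketch(nn.Module):
    # Minimal CBAM-style block: channel attention first, then spatial attention.
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        # Channel attention: shared MLP applied to pooled (N, C, 1, 1) descriptors
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: k x k conv over the 2-channel [mean, max] map
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # Channel attention from average- and max-pooled descriptors
        ca = torch.sigmoid(self.mlp(x.mean((2, 3), keepdim=True)) +
                           self.mlp(x.amax((2, 3), keepdim=True)))
        x = x * ca
        # Spatial attention from channel-wise mean and max maps
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
        return x * sa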
3. The Self-Developed MSAM
3.1 Principle
Multi-Scale Convolutional Attention (MSCA)
Paper: https://arxiv.org/pdf/2209.08575.pdf
Abstract: We present SegNeXt, a simple convolutional network architecture for semantic segmentation. Recent Transformer-based models have dominated the field of semantic segmentation due to the efficiency of self-attention in encoding spatial information. In this paper, we show that convolutional attention is a more efficient and effective way to encode contextual information than the self-attention mechanism in Transformers. By re-examining the characteristics of successful segmentation models, we identify several key components that drive performance improvements, which in turn motivate the design of a novel convolutional attention architecture, SegNeXt. Without bells and whistles, SegNeXt significantly improves on previous state-of-the-art methods across popular benchmarks, including ADE20K, Cityscapes, COCO-Stuff, Pascal VOC, Pascal Context, and iSAID. Notably, SegNeXt outperforms EfficientNet-L2 w/ NAS-FPN and achieves 90.6% mIoU on the Pascal VOC 2012 test leaderboard using only 1/10 of its parameters. On ADE20K, SegNeXt achieves an average improvement of about 2.0% mIoU over state-of-the-art methods with the same or less computation.
SegNeXt designs a new Multi-Scale Convolutional Attention (MSCA) module. As shown in Fig. 2(a) of the paper, MSCA consists of three parts: a depth-wise convolution that aggregates local information, multi-branch depth-wise strip convolutions that capture multi-scale context, and a 1×1 convolution that models relationships between channels.
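In the notation of the SegNeXt paper, the MSCA attention map and output are

$$\mathrm{Att} = \mathrm{Conv}_{1\times 1}\left(\sum_{i=0}^{3} \mathrm{Scale}_i\big(\mathrm{DW\text{-}Conv}(F)\big)\right), \qquad \mathrm{Out} = \mathrm{Att} \otimes F,$$

where $F$ is the input feature, $\mathrm{Scale}_0$ is the identity connection, $\mathrm{Scale}_i$ for $i = 1, 2, 3$ are the depth-wise strip-convolution branches (kernel pairs 1×7/7×1, 1×11/11×1, 1×21/21×1), and $\otimes$ denotes element-wise multiplication.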
3.2 The Self-Developed MSAM Attention
The Multi-Scale Convolutional Attention (MSCA) module provides multi-scale capability.
Principle: CBAM's original channel attention is replaced with MSCA, so the channel-attention stage gains multi-scale capability, while CBAM's spatial attention is kept.
PS: if you need this for a paper or similar, you can rename the module and present your own variant as an independent innovation.
3.3 Introducing MSAM into YOLOv5
3.4 Add MSAM in models/attention/MSAM.py
import torch
import torch.nn as nn
from torch.nn import functional as F


class MSCAAttention(nn.Module):
    # SegNeXt NeurIPS 2022
    # https://github.com/Visual-Attention-Network/SegNeXt/tree/main
    def __init__(self, dim):
        super().__init__()
        self.conv0 = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)
        self.conv0_1 = nn.Conv2d(dim, dim, (1, 7), padding=(0, 3), groups=dim)
        self.conv0_2 = nn.Conv2d(dim, dim, (7, 1), padding=(3, 0), groups=dim)
        self.conv1_1 = nn.Conv2d(dim, dim, (1, 11), padding=(0, 5), groups=dim)
        self.conv1_2 = nn.Conv2d(dim, dim, (11, 1), padding=(5, 0), groups=dim)
        self.conv2_1 = nn.Conv2d(dim, dim, (1, 21), padding=(0, 10), groups=dim)
        self.conv2_2 = nn.Conv2d(dim, dim, (21, 1), padding=(10, 0), groups=dim)
        self.conv3 = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        u = x.clone()
        attn = self.conv0(x)

        attn_0 = self.conv0_1(attn)
        attn_0 = self.conv0_2(attn_0)

        attn_1 = self.conv1_1(attn)
        attn_1 = self.conv1_2(attn_1)

        attn_2 = self.conv2_1(attn)
        attn_2 = self.conv2_2(attn_2)

        attn = attn + attn_0 + attn_1 + attn_2
        attn = self.conv3(attn)
        return attn * u


class ChannelAttention(nn.Module):
    # Channel-attention module https://github.com/open-mmlab/mmdetection/tree/v3.0.0rc1/configs/rtmdet
    def __init__(self, channels: int) -> None:
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Conv2d(channels, channels, 1, 1, 0, bias=True)
        self.act = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.act(self.fc(self.pool(x)))


class SpatialAttention(nn.Module):
    # Spatial-attention module
    def __init__(self, kernel_size=7):
        super().__init__()
        assert kernel_size in (3, 7), 'kernel size must be 3 or 7'
        padding = 3 if kernel_size == 7 else 1
        self.cv1 = nn.Conv2d(2, 1, kernel_size, padding=padding, bias=False)
        self.act = nn.Sigmoid()

    def forward(self, x):
        return x * self.act(self.cv1(torch.cat([torch.mean(x, 1, keepdim=True), torch.max(x, 1, keepdim=True)[0]], 1)))


class MSAM(nn.Module):
    # by AI&CV https://blog.csdn.net/m0_63774211/category_12289773.html
    # Convolutional Block Attention Module
    def __init__(self, c1, kernel_size=7):  # ch_in, kernels
        super().__init__()
        self.channel_attention = MSCAAttention(c1)
        self.spatial_attention = SpatialAttention(kernel_size)

    def forward(self, x):
        return self.spatial_attention(self.channel_attention(x))
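As a quick sanity check (a minimal sketch, assuming the code above is saved as models/attention/MSAM.py and the script runs from the repository root), MSAM preserves the shape of its input, so it can be inserted after any backbone layer without changing downstream channel counts:

import torch
from models.attention.MSAM import MSAM

m = MSAM(c1=1024)                  # channel count of the layer it follows (SPPF output here)
x = torch.randn(1, 1024, 20, 20)   # dummy P5 feature map for a 640x640 input
y = m(x)
print(y.shape)                     # torch.Size([1, 1024, 20, 20]), same shape as the input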
3.5 Register in yolo.py
Register MSAM:
from models.attention.MSAM import MSAM
Then modify def parse_model(d, ch, verbose=True):  # model_dict, input_channels(3) by adding the following branch:

        ##### attention #####
        elif m is MSAM:
            c1, c2 = ch[f], args[0]
            if c2 != nc:
                c2 = make_divisible(min(c2, max_channels) * width, 8)
            args = [c1, *args[1:]]
3.6 yolov5s_MSAM.yaml
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license

# Parameters
nc: 1 # number of classes
depth_multiple: 0.33 # model depth multiple
width_multiple: 0.50 # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 6, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 3, C3, [1024]],
   [-1, 1, SPPF, [1024, 5]],  # 9
   [-1, 1, MSAM, [1024]],  # 10
  ]

# YOLOv5 v6.0 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 14

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, C3, [256, False]],  # 18 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 15], 1, Concat, [1]],  # cat head P4
   [-1, 3, C3, [512, False]],  # 21 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 11], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 24 (P5/32-large)

   [[18, 21, 24], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
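To confirm that the config parses and the attention layer is wired in correctly, a minimal sketch (assuming the config is saved as models/yolov5s_MSAM.yaml, the MSAM import from Section 3.5 is in place, and the script runs from the YOLOv5 repository root) is to build the model directly. Note that with width_multiple: 0.50 the preceding SPPF outputs 512 channels, which is what MSAM receives at layer 10:

import torch
from models.yolo import Model  # YOLOv5 detection model built from a yaml config

# Build the modified model from the new config (nc=1 to match the yaml)
model = Model('models/yolov5s_MSAM.yaml', ch=3, nc=1)

# Layer 10 should be the MSAM block, operating on 1024 * width_multiple = 512 channels
print(model.model[10])

# Dummy forward pass to verify the three detection scales are still produced
model.eval()
preds = model(torch.zeros(1, 3, 640, 640))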
4. Validation on a Dataset
Due to limited time and resources, feasibility was only verified on YOLOv8; the results below are for reference only.
Dataset: a road-defect detection dataset of 390 images, randomly split into training, test, and validation sets.
As the figure below shows, the defects appear at a variety of scales, so this dataset is well suited for validating a multi-scale innovation.
Baseline YOLOv8n performance:
YOLOv8 summary (fused): 168 layers, 3005843 parameters, 0 gradients, 8.1 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95)
                   all         71         63      0.731      0.732        0.8       0.47
CBAM performance:
YOLOv8_CBAM summary (fused): 176 layers, 3071733 parameters, 0 gradients, 8.1 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95)
                   all         71         63      0.834      0.683      0.822      0.442
MSAM performance:
YOLOv8_MSAM summary (fused): 181 layers, 3099893 parameters, 0 gradients, 8.2 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95)
                   all         71         63      0.788      0.794      0.855      0.507
Compared with the baseline and CBAM, MSAM achieves the best mAP50 (0.855 vs. 0.8 and 0.822) and the best mAP50-95 (0.507 vs. 0.47 and 0.442), with only a negligible increase in parameters and GFLOPs.