[Deep Learning in Practice (31)] Model Architecture: CSPDarknet

Table of Contents

  • 1. Overall structure of CSPDarknet
  • 2. The CSPNet structure
    • 2.1 The inner small residual block: Bottleneck
    • 2.2 The outer large residual block: CSP
  • 3. The SPP structure
  • 4. The CSPDarknet structure
    • 4.1 The stem block
    • 4.2 The dark2/3/4 blocks (dark3 as an example)
    • 4.3 The dark5 block
    • 4.4 The overall CSPDarknet structure
  • 5. Complete code


1. Overall structure of CSPDarknet

CSPDarknet largely follows the Darknet53 backbone (used in YOLOv3) and improves on it with the SPP and CSPNet structures.
(Figure: overall CSPDarknet architecture)
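To make the data flow concrete before diving into the components, here is a minimal sketch of the stage-by-stage feature-map shapes (taken from the shape annotations in Section 4, assuming a 640x640x3 input and wid_mul = 1): the spatial size halves and the channel count doubles at each stage.

size, ch = 640, 64
for stage in ["stem", "dark2", "dark3", "dark4", "dark5"]:
    size //= 2                            # every stage downsamples by 2
    print(f"{stage}: {size} x {size} x {ch}")
    ch *= 2                               # channels double at the next stage
# stem: 320 x 320 x 64 ... dark5: 20 x 20 x 1024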

2. The CSPNet structure

Paper: https://arxiv.org/pdf/1911.11929
CSPNet consists of an inner small residual block, the Bottleneck, and an outer large residual block, the CSP.

2.1 The inner small residual block: Bottleneck

The Bottleneck residual block is shown below. Its Conv2D can be either a plain convolution (BaseConv) or a depthwise separable convolution (DWConv), and the activation can be relu or leaky_relu.
(Figure: Bottleneck residual block)

Code:

import torch
from torch import nn

#--------------------------------------------------#
#   Activation function factory
#--------------------------------------------------#
def get_activation(name="lrelu", inplace=True):
    if name == "relu":
        module = nn.ReLU(inplace=inplace)
    elif name == "lrelu":
        module = nn.LeakyReLU(0.1, inplace=inplace)
    else:
        raise AttributeError("Unsupported act type: {}".format(name))
    return module

#--------------------------------------------------#
#   BaseConv: Conv2d + BatchNorm2d + activation
#--------------------------------------------------#
class BaseConv(nn.Module):
    def __init__(self, in_channels, out_channels, ksize, stride, groups=1, bias=False, act="lrelu"):
        super().__init__()
        pad       = (ksize - 1) // 2
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=ksize, stride=stride, padding=pad, groups=groups, bias=bias)
        self.bn   = nn.BatchNorm2d(out_channels, eps=0.001, momentum=0.03)
        self.act  = get_activation(act, inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

    def fuseforward(self, x):
        # used after the BN has been fused into the conv weights
        return self.act(self.conv(x))

#--------------------------------------------------#
#   DWConv: depthwise conv followed by a 1x1 pointwise conv
#--------------------------------------------------#
class DWConv(nn.Module):
    def __init__(self, in_channels, out_channels, ksize, stride=1, act="lrelu"):
        super().__init__()
        self.dconv = BaseConv(in_channels, in_channels, ksize=ksize, stride=stride, groups=in_channels, act=act)
        self.pconv = BaseConv(in_channels, out_channels, ksize=1, stride=1, groups=1, act=act)

    def forward(self, x):
        x = self.dconv(x)
        return self.pconv(x)

#--------------------------------------------------#
#   Bottleneck: the small inner residual block
#--------------------------------------------------#
class Bottleneck(nn.Module):
    # Standard bottleneck
    def __init__(self, in_channels, out_channels, shortcut=True, expansion=0.5, depthwise=False, act="lrelu"):
        super().__init__()
        hidden_channels = int(out_channels * expansion)
        Conv = DWConv if depthwise else BaseConv
        #--------------------------------------------------#
        #   1x1 conv to reduce the channel count, usually by 50%
        #--------------------------------------------------#
        self.conv1 = BaseConv(in_channels, hidden_channels, 1, stride=1, act=act)
        #--------------------------------------------------#
        #   3x3 conv to expand the channels back and extract features
        #--------------------------------------------------#
        self.conv2 = Conv(hidden_channels, out_channels, 3, stride=1, act=act)
        self.use_add = shortcut and in_channels == out_channels

    def forward(self, x):
        y = self.conv2(self.conv1(x))
        if self.use_add:
            y = y + x
        return y
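A quick sanity check of the shapes (a minimal sketch with arbitrary sizes, assuming the imports and classes above): the residual add is only active when shortcut=True and in_channels == out_channels.

x = torch.randn(1, 64, 80, 80)
print(Bottleneck(64, 64)(x).shape)    # torch.Size([1, 64, 80, 80]); shortcut is used
print(Bottleneck(64, 128)(x).shape)   # torch.Size([1, 128, 80, 80]); no shortcut (64 != 128)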

2.2 The outer large residual block: CSP

The CSPLayer outer residual block is shown below.
(Figure: CSPLayer structure)
Code:

#--------------------------------------------------#
#   CSPLayer: the large outer residual block
#--------------------------------------------------#
class CSPLayer(nn.Module):
    def __init__(self, in_channels, out_channels, n=1, shortcut=True, expansion=0.5, depthwise=False, act="lrelu"):
        # ch_in, ch_out, number, shortcut, groups, expansion
        super().__init__()
        hidden_channels = int(out_channels * expansion)
        #--------------------------------------------------#
        #   first conv of the main branch
        #--------------------------------------------------#
        self.conv1 = BaseConv(in_channels, hidden_channels, 1, stride=1, act=act)
        #--------------------------------------------------#
        #   first conv of the large residual branch
        #--------------------------------------------------#
        self.conv2 = BaseConv(in_channels, hidden_channels, 1, stride=1, act=act)
        #-----------------------------------------------#
        #   conv applied to the concatenated result
        #-----------------------------------------------#
        self.conv3 = BaseConv(2 * hidden_channels, out_channels, 1, stride=1, act=act)
        #--------------------------------------------------#
        #   stack n of the Bottleneck residual blocks above
        #--------------------------------------------------#
        module_list = [Bottleneck(hidden_channels, hidden_channels, shortcut, 1.0, depthwise, act=act) for _ in range(n)]
        self.m      = nn.Sequential(*module_list)

    def forward(self, x):
        #-------------------------------#
        #   x_1: the main branch
        #-------------------------------#
        x_1 = self.conv1(x)
        #-------------------------------#
        #   x_2: the large residual branch
        #-------------------------------#
        x_2 = self.conv2(x)
        #-----------------------------------------------#
        #   the main branch keeps extracting features
        #   through the stacked residual blocks
        #-----------------------------------------------#
        x_1 = self.m(x_1)
        #-----------------------------------------------#
        #   concatenate the two branches
        #-----------------------------------------------#
        x = torch.cat((x_1, x_2), 1)
        #-----------------------------------------------#
        #   conv over the concatenated result
        #-----------------------------------------------#
        return self.conv3(x)
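To verify the channel bookkeeping (a minimal sketch with arbitrary sizes): with expansion=0.5 both branches run at out_channels // 2 channels, so the concatenation yields 2 * hidden_channels, which conv3 maps back to out_channels.

x = torch.randn(1, 128, 40, 40)
layer = CSPLayer(128, 128, n=3)   # both branches run at 64 channels
print(layer(x).shape)             # torch.Size([1, 128, 40, 40])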

3. The SPP structure

In a typical CNN, the convolutional layers are followed by fully connected layers. Because a fully connected layer has a fixed number of input features, the network requires a fixed-size input, yet real-world images rarely arrive at the required size. The common workarounds are cropping (crop) and stretching (warp).
(Figure: crop and warp preprocessing)

Both workarounds are harmful: they change the aspect ratio or the scale of the original image and therefore distort it. Kaiming He et al. proposed the SPP (Spatial Pyramid Pooling) layer to solve exactly this problem; SPP is usually attached after the last convolutional layer.
Paper: https://arxiv.org/pdf/1406.4729
(Figure: SPP structure)
Code:

#--------------------------------------------------#
#   SPP
#--------------------------------------------------#
class SPPBottleneck(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_sizes=(5, 9, 13), activation="lrelu"):
        super().__init__()
        hidden_channels = in_channels // 2
        self.conv1      = BaseConv(in_channels, hidden_channels, 1, stride=1, act=activation)
        self.m          = nn.ModuleList([nn.MaxPool2d(kernel_size=ks, stride=1, padding=ks // 2) for ks in kernel_sizes])
        conv2_channels  = hidden_channels * (len(kernel_sizes) + 1)
        self.conv2      = BaseConv(conv2_channels, out_channels, 1, stride=1, act=activation)

    def forward(self, x):
        x = self.conv1(x)
        x = torch.cat([x] + [m(x) for m in self.m], dim=1)
        x = self.conv2(x)
        return x
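Note that this YOLO-style SPP differs from the original fixed-grid pooling: with stride 1 and padding ks // 2, each max-pool preserves the spatial size, so the layer fuses receptive fields of 5/9/13 rather than producing a fixed-length vector. A quick shape check (a minimal sketch, arbitrary sizes):

x = torch.randn(1, 512, 20, 20)
spp = SPPBottleneck(512, 512)   # hidden = 256; concat -> 256 * 4 = 1024; conv2 -> 512
print(spp(x).shape)             # torch.Size([1, 512, 20, 20])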

四、CSPDarknet结构

参考yolov3中的Darknet53结构,结合二,三节的CSPSPP结构,就可以进行CSPDarknet完整网络结构的搭建。CSPDarkenet主体由inputstemdark2dark3dark4dark5组成。其中dark2,3,4结构类似,stemdark5稍微有点区别。

4.1 The stem block

Structure:
(Figure: stem block)
Code:

#-----------------------------------------------#
#   input image: 640, 640, 3
#   base channel count: 64
#-----------------------------------------------#
base_channels = int(wid_mul * 64)            # 64
base_depth    = max(round(dep_mul * 3), 1)   # 3

#-----------------------------------------------#
#   feature extraction with a strided conv
#   640, 640, 3 -> 320, 320, 64
#-----------------------------------------------#
self.stem = Conv(3, base_channels, 6, 2)
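The 320 follows from the usual conv output-size formula: with ksize=6, stride=2, and pad=(6-1)//2=2, the output side length is floor((640 + 2*2 - 6) / 2) + 1 = 320. As a one-line check:

print((640 + 2 * 2 - 6) // 2 + 1)   # 320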

4.2 The dark2/3/4 blocks (dark3 as an example)

Structure:
(Figure: dark3 block)

Code:

#-----------------------------------------------#
#   after the conv,     160, 160, 128 -> 80, 80, 256
#   after the CSPLayer,  80,  80, 256 -> 80, 80, 256
#-----------------------------------------------#
self.dark3 = nn.Sequential(
    Conv(base_channels * 2, base_channels * 4, 3, 2, act=act),
    CSPLayer(base_channels * 4, base_channels * 4, n=base_depth * 3, depthwise=depthwise, act=act),
)
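With dep_mul = 1, base_depth is 3, so dark3 (and dark4) stack n = 3 * 3 = 9 Bottleneck blocks, while dark2 uses only n = base_depth = 3. A standalone shape check (a minimal sketch assuming wid_mul = 1, i.e. base_channels = 64, and Conv = BaseConv):

dark3 = nn.Sequential(
    BaseConv(128, 256, 3, 2),
    CSPLayer(256, 256, n=9),
)
print(dark3(torch.randn(1, 128, 160, 160)).shape)   # torch.Size([1, 256, 80, 80])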

4.3 The dark5 block

Structure:
(Figure: dark5 block)

Code:

#-----------------------------------------------#
#   after the conv,     40, 40, 512  -> 20, 20, 1024
#   after the SPP,      20, 20, 1024 -> 20, 20, 1024
#   after the CSPLayer, 20, 20, 1024 -> 20, 20, 1024
#-----------------------------------------------#
self.dark5 = nn.Sequential(
    Conv(base_channels * 8, base_channels * 16, 3, 2, act=act),
    SPPBottleneck(base_channels * 16, base_channels * 16, activation=act),
    # SPPF(base_channels * 16, base_channels * 16, activation=act),
    CSPLayer(base_channels * 16, base_channels * 16, n=base_depth, shortcut=False, depthwise=depthwise, act=act),
)
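The commented-out SPPF line points at the faster sequential variant popularized by later YOLO versions, where three chained 5x5 stride-1 poolings reproduce the 5/9/13 receptive fields of the parallel SPP above. It is not defined in this article's code; one possible sketch (an assumption, not part of the original):

# A possible SPPF implementation (sketch; not defined in the original code).
class SPPF(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=5, activation="lrelu"):
        super().__init__()
        hidden_channels = in_channels // 2
        self.conv1 = BaseConv(in_channels, hidden_channels, 1, stride=1, act=activation)
        # one pooling layer applied three times: 5x5, then effectively 9x9 and 13x13
        self.m     = nn.MaxPool2d(kernel_size=kernel_size, stride=1, padding=kernel_size // 2)
        self.conv2 = BaseConv(hidden_channels * 4, out_channels, 1, stride=1, act=activation)

    def forward(self, x):
        x  = self.conv1(x)
        y1 = self.m(x)
        y2 = self.m(y1)
        return self.conv2(torch.cat([x, y1, y2, self.m(y2)], dim=1))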

4.4 The overall CSPDarknet structure

Structure:
(Figure: overall CSPDarknet structure)

Code:

#--------------------------------------------------#
#   CSPDarknet
#--------------------------------------------------#
class CSPDarknet(nn.Module):
    def __init__(self, dep_mul, wid_mul, out_features=("dark3", "dark4", "dark5"), depthwise=False, act="lrelu"):
        super().__init__()
        assert out_features, "please provide output features of Darknet"
        self.out_features = out_features
        Conv = DWConv if depthwise else BaseConv

        #-----------------------------------------------#
        #   input image: 640, 640, 3
        #   base channel count: 64
        #-----------------------------------------------#
        base_channels = int(wid_mul * 64)            # 64
        base_depth    = max(round(dep_mul * 3), 1)   # 3

        #-----------------------------------------------#
        #   feature extraction with a strided conv
        #   640, 640, 3 -> 320, 320, 64
        #-----------------------------------------------#
        self.stem = Conv(3, base_channels, 6, 2)

        #-----------------------------------------------#
        #   after the conv,     320, 320, 64  -> 160, 160, 128
        #   after the CSPLayer, 160, 160, 128 -> 160, 160, 128
        #-----------------------------------------------#
        self.dark2 = nn.Sequential(
            Conv(base_channels, base_channels * 2, 3, 2, act=act),
            CSPLayer(base_channels * 2, base_channels * 2, n=base_depth, depthwise=depthwise, act=act),
        )
        #-----------------------------------------------#
        #   after the conv,     160, 160, 128 -> 80, 80, 256
        #   after the CSPLayer,  80,  80, 256 -> 80, 80, 256
        #-----------------------------------------------#
        self.dark3 = nn.Sequential(
            Conv(base_channels * 2, base_channels * 4, 3, 2, act=act),
            CSPLayer(base_channels * 4, base_channels * 4, n=base_depth * 3, depthwise=depthwise, act=act),
        )
        #-----------------------------------------------#
        #   after the conv,     80, 80, 256 -> 40, 40, 512
        #   after the CSPLayer, 40, 40, 512 -> 40, 40, 512
        #-----------------------------------------------#
        self.dark4 = nn.Sequential(
            Conv(base_channels * 4, base_channels * 8, 3, 2, act=act),
            CSPLayer(base_channels * 8, base_channels * 8, n=base_depth * 3, depthwise=depthwise, act=act),
        )
        #-----------------------------------------------#
        #   after the conv,     40, 40, 512  -> 20, 20, 1024
        #   after the SPP,      20, 20, 1024 -> 20, 20, 1024
        #   after the CSPLayer, 20, 20, 1024 -> 20, 20, 1024
        #-----------------------------------------------#
        self.dark5 = nn.Sequential(
            Conv(base_channels * 8, base_channels * 16, 3, 2, act=act),
            SPPBottleneck(base_channels * 16, base_channels * 16, activation=act),
            # SPPF(base_channels * 16, base_channels * 16, activation=act),
            CSPLayer(base_channels * 16, base_channels * 16, n=base_depth, shortcut=False, depthwise=depthwise, act=act),
        )

    def forward(self, x):
        outputs = {}
        x = self.stem(x)
        outputs["stem"] = x
        x = self.dark2(x)
        outputs["dark2"] = x
        #-----------------------------------------------#
        #   dark3 outputs an 80, 80, 256 effective feature layer
        #-----------------------------------------------#
        x = self.dark3(x)
        outputs["dark3"] = x
        #-----------------------------------------------#
        #   dark4 outputs a 40, 40, 512 effective feature layer
        #-----------------------------------------------#
        x = self.dark4(x)
        outputs["dark4"] = x
        #-----------------------------------------------#
        #   dark5 outputs a 20, 20, 1024 effective feature layer
        #-----------------------------------------------#
        x = self.dark5(x)
        outputs["dark5"] = x
        return [v for k, v in outputs.items() if k in self.out_features]
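Usage sketch: with dep_mul = wid_mul = 1 and a 640x640 input, the backbone returns the three effective feature layers with the shapes annotated above.

net = CSPDarknet(dep_mul=1, wid_mul=1)
feats = net(torch.randn(1, 3, 640, 640))
for f in feats:
    print(f.shape)
# torch.Size([1, 256, 80, 80])
# torch.Size([1, 512, 40, 40])
# torch.Size([1, 1024, 20, 20])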

5. Complete code

#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import torch
from torch import nn
from torchsummary import summary


#--------------------------------------------------#
#   Activation function factory
#--------------------------------------------------#
def get_activation(name="lrelu", inplace=True):
    if name == "relu":
        module = nn.ReLU(inplace=inplace)
    elif name == "lrelu":
        module = nn.LeakyReLU(0.1, inplace=inplace)
    else:
        raise AttributeError("Unsupported act type: {}".format(name))
    return module


#--------------------------------------------------#
#   BaseConv (CBL): Conv2d + BatchNorm2d + activation
#--------------------------------------------------#
class BaseConv(nn.Module):
    def __init__(self, in_channels, out_channels, ksize, stride, groups=1, bias=False, act="lrelu"):
        super().__init__()
        pad       = (ksize - 1) // 2
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=ksize, stride=stride, padding=pad, groups=groups, bias=bias)
        self.bn   = nn.BatchNorm2d(out_channels, eps=0.001, momentum=0.03)
        self.act  = get_activation(act, inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

    def fuseforward(self, x):
        # used after the BN has been fused into the conv weights
        return self.act(self.conv(x))


#--------------------------------------------------#
#   DWConv: depthwise conv followed by a 1x1 pointwise conv
#--------------------------------------------------#
class DWConv(nn.Module):
    def __init__(self, in_channels, out_channels, ksize, stride=1, act="lrelu"):
        super().__init__()
        self.dconv = BaseConv(in_channels, in_channels, ksize=ksize, stride=stride, groups=in_channels, act=act)
        self.pconv = BaseConv(in_channels, out_channels, ksize=1, stride=1, groups=1, act=act)

    def forward(self, x):
        x = self.dconv(x)
        return self.pconv(x)


#--------------------------------------------------#
#   SPP
#--------------------------------------------------#
class SPPBottleneck(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_sizes=(5, 9, 13), activation="lrelu"):
        super().__init__()
        hidden_channels = in_channels // 2
        self.conv1      = BaseConv(in_channels, hidden_channels, 1, stride=1, act=activation)
        self.m          = nn.ModuleList([nn.MaxPool2d(kernel_size=ks, stride=1, padding=ks // 2) for ks in kernel_sizes])
        conv2_channels  = hidden_channels * (len(kernel_sizes) + 1)
        self.conv2      = BaseConv(conv2_channels, out_channels, 1, stride=1, act=activation)

    def forward(self, x):
        x = self.conv1(x)
        x = torch.cat([x] + [m(x) for m in self.m], dim=1)
        x = self.conv2(x)
        return x


#--------------------------------------------------#
#   Bottleneck: the small inner residual block
#--------------------------------------------------#
class Bottleneck(nn.Module):
    # Standard bottleneck
    def __init__(self, in_channels, out_channels, shortcut=True, expansion=0.5, depthwise=False, act="lrelu"):
        super().__init__()
        hidden_channels = int(out_channels * expansion)
        Conv = DWConv if depthwise else BaseConv
        #--------------------------------------------------#
        #   1x1 conv to reduce the channel count, usually by 50%
        #--------------------------------------------------#
        self.conv1 = BaseConv(in_channels, hidden_channels, 1, stride=1, act=act)
        #--------------------------------------------------#
        #   3x3 conv to expand the channels back and extract features
        #--------------------------------------------------#
        self.conv2 = Conv(hidden_channels, out_channels, 3, stride=1, act=act)
        self.use_add = shortcut and in_channels == out_channels

    def forward(self, x):
        y = self.conv2(self.conv1(x))
        if self.use_add:
            y = y + x
        return y


#--------------------------------------------------#
#   CSPLayer: the large outer residual block
#--------------------------------------------------#
class CSPLayer(nn.Module):
    def __init__(self, in_channels, out_channels, n=1, shortcut=True, expansion=0.5, depthwise=False, act="lrelu"):
        # ch_in, ch_out, number, shortcut, groups, expansion
        super().__init__()
        hidden_channels = int(out_channels * expansion)
        #--------------------------------------------------#
        #   first conv of the main branch
        #--------------------------------------------------#
        self.conv1 = BaseConv(in_channels, hidden_channels, 1, stride=1, act=act)
        #--------------------------------------------------#
        #   first conv of the large residual branch
        #--------------------------------------------------#
        self.conv2 = BaseConv(in_channels, hidden_channels, 1, stride=1, act=act)
        #-----------------------------------------------#
        #   conv applied to the concatenated result
        #-----------------------------------------------#
        self.conv3 = BaseConv(2 * hidden_channels, out_channels, 1, stride=1, act=act)
        #--------------------------------------------------#
        #   stack n of the Bottleneck residual blocks above
        #--------------------------------------------------#
        module_list = [Bottleneck(hidden_channels, hidden_channels, shortcut, 1.0, depthwise, act=act) for _ in range(n)]
        self.m      = nn.Sequential(*module_list)

    def forward(self, x):
        x_1 = self.conv1(x)          # main branch
        x_2 = self.conv2(x)          # large residual branch
        x_1 = self.m(x_1)            # feature extraction through the stacked residual blocks
        x = torch.cat((x_1, x_2), 1)  # concatenate the two branches
        return self.conv3(x)          # conv over the concatenated result


#--------------------------------------------------#
#   CSPDarknet
#--------------------------------------------------#
class CSPDarknet(nn.Module):
    def __init__(self, dep_mul, wid_mul, out_features=("dark3", "dark4", "dark5"), depthwise=False, act="lrelu"):
        super().__init__()
        assert out_features, "please provide output features of Darknet"
        self.out_features = out_features
        Conv = DWConv if depthwise else BaseConv

        #-----------------------------------------------#
        #   input image: 640, 640, 3
        #   base channel count: 64
        #-----------------------------------------------#
        base_channels = int(wid_mul * 64)            # 64
        base_depth    = max(round(dep_mul * 3), 1)   # 3

        #-----------------------------------------------#
        #   feature extraction with a strided conv
        #   640, 640, 3 -> 320, 320, 64
        #-----------------------------------------------#
        self.stem = Conv(3, base_channels, 6, 2)

        #-----------------------------------------------#
        #   after the conv,     320, 320, 64  -> 160, 160, 128
        #   after the CSPLayer, 160, 160, 128 -> 160, 160, 128
        #-----------------------------------------------#
        self.dark2 = nn.Sequential(
            Conv(base_channels, base_channels * 2, 3, 2, act=act),
            CSPLayer(base_channels * 2, base_channels * 2, n=base_depth, depthwise=depthwise, act=act),
        )
        #-----------------------------------------------#
        #   after the conv,     160, 160, 128 -> 80, 80, 256
        #   after the CSPLayer,  80,  80, 256 -> 80, 80, 256
        #-----------------------------------------------#
        self.dark3 = nn.Sequential(
            Conv(base_channels * 2, base_channels * 4, 3, 2, act=act),
            CSPLayer(base_channels * 4, base_channels * 4, n=base_depth * 3, depthwise=depthwise, act=act),
        )
        #-----------------------------------------------#
        #   after the conv,     80, 80, 256 -> 40, 40, 512
        #   after the CSPLayer, 40, 40, 512 -> 40, 40, 512
        #-----------------------------------------------#
        self.dark4 = nn.Sequential(
            Conv(base_channels * 4, base_channels * 8, 3, 2, act=act),
            CSPLayer(base_channels * 8, base_channels * 8, n=base_depth * 3, depthwise=depthwise, act=act),
        )
        #-----------------------------------------------#
        #   after the conv,     40, 40, 512  -> 20, 20, 1024
        #   after the SPP,      20, 20, 1024 -> 20, 20, 1024
        #   after the CSPLayer, 20, 20, 1024 -> 20, 20, 1024
        #-----------------------------------------------#
        self.dark5 = nn.Sequential(
            Conv(base_channels * 8, base_channels * 16, 3, 2, act=act),
            SPPBottleneck(base_channels * 16, base_channels * 16, activation=act),
            # SPPF(base_channels * 16, base_channels * 16, activation=act),
            CSPLayer(base_channels * 16, base_channels * 16, n=base_depth, shortcut=False, depthwise=depthwise, act=act),
        )

    def forward(self, x):
        outputs = {}
        x = self.stem(x)
        outputs["stem"] = x
        x = self.dark2(x)
        outputs["dark2"] = x
        # dark3 outputs an 80, 80, 256 effective feature layer
        x = self.dark3(x)
        outputs["dark3"] = x
        # dark4 outputs a 40, 40, 512 effective feature layer
        x = self.dark4(x)
        outputs["dark4"] = x
        # dark5 outputs a 20, 20, 1024 effective feature layer
        x = self.dark5(x)
        outputs["dark5"] = x
        return [v for k, v in outputs.items() if k in self.out_features]


if __name__ == '__main__':
    dep_mul = 1
    wid_mul = 1
    net = CSPDarknet(dep_mul, wid_mul, out_features=("dark3", "dark4", "dark5"), depthwise=False, act="lrelu")
    summary(net, input_size=(3, 320, 320), batch_size=2, device="cpu")

Run it to print the complete CSPDarknet structure:


----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1          [2, 32, 160, 160]           3,456
       BatchNorm2d-2          [2, 32, 160, 160]              64
         LeakyReLU-3          [2, 32, 160, 160]               0
          BaseConv-4          [2, 32, 160, 160]               0
            Conv2d-5            [2, 64, 80, 80]          18,432
       BatchNorm2d-6            [2, 64, 80, 80]             128
         LeakyReLU-7            [2, 64, 80, 80]               0
          BaseConv-8            [2, 64, 80, 80]               0
            Conv2d-9            [2, 32, 80, 80]           2,048
      BatchNorm2d-10            [2, 32, 80, 80]              64
        LeakyReLU-11            [2, 32, 80, 80]               0
         BaseConv-12            [2, 32, 80, 80]               0
           Conv2d-13            [2, 32, 80, 80]           2,048
      BatchNorm2d-14            [2, 32, 80, 80]              64
        LeakyReLU-15            [2, 32, 80, 80]               0
         BaseConv-16            [2, 32, 80, 80]               0
           Conv2d-17            [2, 32, 80, 80]           1,024
      BatchNorm2d-18            [2, 32, 80, 80]              64
        LeakyReLU-19            [2, 32, 80, 80]               0
         BaseConv-20            [2, 32, 80, 80]               0
           Conv2d-21            [2, 32, 80, 80]           9,216
      BatchNorm2d-22            [2, 32, 80, 80]              64
        LeakyReLU-23            [2, 32, 80, 80]               0
         BaseConv-24            [2, 32, 80, 80]               0
       Bottleneck-25            [2, 32, 80, 80]               0
           Conv2d-26            [2, 64, 80, 80]           4,096
      BatchNorm2d-27            [2, 64, 80, 80]             128
        LeakyReLU-28            [2, 64, 80, 80]               0
         BaseConv-29            [2, 64, 80, 80]               0
         CSPLayer-30            [2, 64, 80, 80]               0
           Conv2d-31           [2, 128, 40, 40]          73,728
      BatchNorm2d-32           [2, 128, 40, 40]             256
        LeakyReLU-33           [2, 128, 40, 40]               0
         BaseConv-34           [2, 128, 40, 40]               0
           Conv2d-35            [2, 64, 40, 40]           8,192
      BatchNorm2d-36            [2, 64, 40, 40]             128
        LeakyReLU-37            [2, 64, 40, 40]               0
         BaseConv-38            [2, 64, 40, 40]               0
           Conv2d-39            [2, 64, 40, 40]           8,192
      BatchNorm2d-40            [2, 64, 40, 40]             128
        LeakyReLU-41            [2, 64, 40, 40]               0
         BaseConv-42            [2, 64, 40, 40]               0
           Conv2d-43            [2, 64, 40, 40]           4,096
      BatchNorm2d-44            [2, 64, 40, 40]             128
        LeakyReLU-45            [2, 64, 40, 40]               0
         BaseConv-46            [2, 64, 40, 40]               0
           Conv2d-47            [2, 64, 40, 40]          36,864
      BatchNorm2d-48            [2, 64, 40, 40]             128
        LeakyReLU-49            [2, 64, 40, 40]               0
         BaseConv-50            [2, 64, 40, 40]               0
       Bottleneck-51            [2, 64, 40, 40]               0
           Conv2d-52            [2, 64, 40, 40]           4,096
      BatchNorm2d-53            [2, 64, 40, 40]             128
        LeakyReLU-54            [2, 64, 40, 40]               0
         BaseConv-55            [2, 64, 40, 40]               0
           Conv2d-56            [2, 64, 40, 40]          36,864
      BatchNorm2d-57            [2, 64, 40, 40]             128
        LeakyReLU-58            [2, 64, 40, 40]               0
         BaseConv-59            [2, 64, 40, 40]               0
       Bottleneck-60            [2, 64, 40, 40]               0
           Conv2d-61            [2, 64, 40, 40]           4,096
      BatchNorm2d-62            [2, 64, 40, 40]             128
        LeakyReLU-63            [2, 64, 40, 40]               0
         BaseConv-64            [2, 64, 40, 40]               0
           Conv2d-65            [2, 64, 40, 40]          36,864
      BatchNorm2d-66            [2, 64, 40, 40]             128
        LeakyReLU-67            [2, 64, 40, 40]               0
         BaseConv-68            [2, 64, 40, 40]               0
       Bottleneck-69            [2, 64, 40, 40]               0
           Conv2d-70           [2, 128, 40, 40]          16,384
      BatchNorm2d-71           [2, 128, 40, 40]             256
        LeakyReLU-72           [2, 128, 40, 40]               0
         BaseConv-73           [2, 128, 40, 40]               0
         CSPLayer-74           [2, 128, 40, 40]               0
           Conv2d-75           [2, 256, 20, 20]         294,912
      BatchNorm2d-76           [2, 256, 20, 20]             512
        LeakyReLU-77           [2, 256, 20, 20]               0
         BaseConv-78           [2, 256, 20, 20]               0
           Conv2d-79           [2, 128, 20, 20]          32,768
      BatchNorm2d-80           [2, 128, 20, 20]             256
        LeakyReLU-81           [2, 128, 20, 20]               0
         BaseConv-82           [2, 128, 20, 20]               0
           Conv2d-83           [2, 128, 20, 20]          32,768
      BatchNorm2d-84           [2, 128, 20, 20]             256
        LeakyReLU-85           [2, 128, 20, 20]               0
         BaseConv-86           [2, 128, 20, 20]               0
           Conv2d-87           [2, 128, 20, 20]          16,384
      BatchNorm2d-88           [2, 128, 20, 20]             256
        LeakyReLU-89           [2, 128, 20, 20]               0
         BaseConv-90           [2, 128, 20, 20]               0
           Conv2d-91           [2, 128, 20, 20]         147,456
      BatchNorm2d-92           [2, 128, 20, 20]             256
        LeakyReLU-93           [2, 128, 20, 20]               0
         BaseConv-94           [2, 128, 20, 20]               0
       Bottleneck-95           [2, 128, 20, 20]               0
           Conv2d-96           [2, 128, 20, 20]          16,384
      BatchNorm2d-97           [2, 128, 20, 20]             256
        LeakyReLU-98           [2, 128, 20, 20]               0
         BaseConv-99           [2, 128, 20, 20]               0
          Conv2d-100           [2, 128, 20, 20]         147,456
     BatchNorm2d-101           [2, 128, 20, 20]             256
       LeakyReLU-102           [2, 128, 20, 20]               0
        BaseConv-103           [2, 128, 20, 20]               0
      Bottleneck-104           [2, 128, 20, 20]               0
          Conv2d-105           [2, 128, 20, 20]          16,384
     BatchNorm2d-106           [2, 128, 20, 20]             256
       LeakyReLU-107           [2, 128, 20, 20]               0
        BaseConv-108           [2, 128, 20, 20]               0
          Conv2d-109           [2, 128, 20, 20]         147,456
     BatchNorm2d-110           [2, 128, 20, 20]             256
       LeakyReLU-111           [2, 128, 20, 20]               0
        BaseConv-112           [2, 128, 20, 20]               0
      Bottleneck-113           [2, 128, 20, 20]               0
          Conv2d-114           [2, 256, 20, 20]          65,536
     BatchNorm2d-115           [2, 256, 20, 20]             512
       LeakyReLU-116           [2, 256, 20, 20]               0
        BaseConv-117           [2, 256, 20, 20]               0
        CSPLayer-118           [2, 256, 20, 20]               0
          Conv2d-119           [2, 512, 10, 10]       1,179,648
     BatchNorm2d-120           [2, 512, 10, 10]           1,024
       LeakyReLU-121           [2, 512, 10, 10]               0
        BaseConv-122           [2, 512, 10, 10]               0
          Conv2d-123           [2, 256, 10, 10]         131,072
     BatchNorm2d-124           [2, 256, 10, 10]             512
       LeakyReLU-125           [2, 256, 10, 10]               0
        BaseConv-126           [2, 256, 10, 10]               0
       MaxPool2d-127           [2, 256, 10, 10]               0
       MaxPool2d-128           [2, 256, 10, 10]               0
       MaxPool2d-129           [2, 256, 10, 10]               0
          Conv2d-130           [2, 512, 10, 10]         524,288
     BatchNorm2d-131           [2, 512, 10, 10]           1,024
       LeakyReLU-132           [2, 512, 10, 10]               0
        BaseConv-133           [2, 512, 10, 10]               0
   SPPBottleneck-134           [2, 512, 10, 10]               0
          Conv2d-135           [2, 256, 10, 10]         131,072
     BatchNorm2d-136           [2, 256, 10, 10]             512
       LeakyReLU-137           [2, 256, 10, 10]               0
        BaseConv-138           [2, 256, 10, 10]               0
          Conv2d-139           [2, 256, 10, 10]         131,072
     BatchNorm2d-140           [2, 256, 10, 10]             512
       LeakyReLU-141           [2, 256, 10, 10]               0
        BaseConv-142           [2, 256, 10, 10]               0
          Conv2d-143           [2, 256, 10, 10]          65,536
     BatchNorm2d-144           [2, 256, 10, 10]             512
       LeakyReLU-145           [2, 256, 10, 10]               0
        BaseConv-146           [2, 256, 10, 10]               0
          Conv2d-147           [2, 256, 10, 10]         589,824
     BatchNorm2d-148           [2, 256, 10, 10]             512
       LeakyReLU-149           [2, 256, 10, 10]               0
        BaseConv-150           [2, 256, 10, 10]               0
      Bottleneck-151           [2, 256, 10, 10]               0
          Conv2d-152           [2, 512, 10, 10]         262,144
     BatchNorm2d-153           [2, 512, 10, 10]           1,024
       LeakyReLU-154           [2, 512, 10, 10]               0
        BaseConv-155           [2, 512, 10, 10]               0
        CSPLayer-156           [2, 512, 10, 10]               0
================================================================
Total params: 4,212,672
Trainable params: 4,212,672
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 2.34
Forward/backward pass size (MB): 303.91
Params size (MB): 16.07
Estimated Total Size (MB): 322.32
----------------------------------------------------------------

