Classic Convolutional Neural Networks: ResNet

I. Background

The residual network (ResNet) was proposed by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun of Microsoft Research. ResNet won the 2015 ILSVRC (ImageNet Large Scale Visual Recognition Challenge). Its key contributions were identifying the "degradation" phenomenon and introducing the "shortcut connection" to counter it, which largely removed the difficulty of training very deep networks. Network depth broke 100 layers for the first time, and the largest networks exceeded 1000 layers.


II. ResNet Architecture

2.0 The Residual Block

[Figure: a residual block]

The following figure, taken from Andrew Ng's Deep Learning course, helps illustrate the idea:

[Figure: skip connection, from Andrew Ng's Deep Learning course]

As the figure shows, a residual block passes $a^{[l]}$ forward and adds it to $z^{[l+2]}$; the sum is then activated to give $a^{[l+2]}$. This step is called a "skip connection": $a^{[l]}$ skips one or more layers and carries its information into deeper layers of the network. Building a ResNet therefore amounts to stacking many such residual blocks into a deep network.
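In code, the whole mechanism is a single addition before the activation. A minimal functional sketch (the names residual_block and layers are illustrative, not from any library):

import torch
import torch.nn as nn
import torch.nn.functional as F

def residual_block(a_l, layers):
    """a_l: the activation entering the block; layers: the weight layers it skips."""
    z = layers(a_l)          # the main path, producing z^[l+2]
    return F.relu(z + a_l)   # skip connection: add a^[l], then activate

# Usage: the skipped layers must preserve the shape of a_l.
layers = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 8))
print(residual_block(torch.randn(2, 8), layers).shape)  # torch.Size([2, 8])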

Why does adding residual blocks work? An intuitive explanation:

[Figure: appending a residual block to a network]

As the figure shows, suppose we deepen the network by two layers, obtaining $a^{[l+2]}$ through an added residual block. Then $a^{[l+2]} = g(z^{[l+2]} + a^{[l]}) = g(w^{[l+2]} a^{[l+1]} + b^{[l+2]} + a^{[l]})$. If L2 regularization is applied, the weights shrink; in the extreme, assume $w^{[l+2]} = 0$ and $b^{[l+2]} = 0$, which gives $a^{[l+2]} = g(a^{[l]}) = a^{[l]}$ (ReLU passes non-negative values through unchanged, and $a^{[l]}$ has already been through a ReLU, so all its entries are non-negative). In other words, even with two extra layers the network performs no worse than the simpler one, so deepening a large network with residual blocks does not hurt its performance. And if the two added layers happen to learn something useful, the network performs better than before.
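This argument is easy to verify numerically. A tiny sketch (assuming the two added layers collapse into one linear layer whose weights and bias we force to zero):

import torch
import torch.nn as nn

lin = nn.Linear(4, 4)
nn.init.zeros_(lin.weight)   # the extreme case w^[l+2] = 0
nn.init.zeros_(lin.bias)     # and b^[l+2] = 0

a_l = torch.relu(torch.randn(3, 4))   # a^[l] is itself a ReLU output, so non-negative
a_l2 = torch.relu(lin(a_l) + a_l)     # a^[l+2] = g(0 + 0 + a^[l])
print(torch.equal(a_l2, a_l))         # True: the added block is exactly the identity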

In the paper, ResNets with 34 or fewer layers and those with 50 or more use different residual blocks. We introduce each below:

2.1 ResNet-34

[Figure: the BasicBlock]

The figure above shows the residual block used by ResNet-34 and shallower networks; we call it a BasicBlock.

The ResNet-34 architecture is shown below. Solid shortcut lines mean the channel count is unchanged; dashed lines mean it changes.

[Figure: ResNet-34 architecture]

Its detailed configuration is given in the table below:

[Table: layer configurations of the ResNet variants, from the paper]

2.2 ResNet-50

[Figure: the Bottleneck block]

The figure shows the residual block used by ResNet-50 and deeper networks; we call it a Bottleneck. It uses 1 × 1 convolutions to change the channel count, which cuts the computation. Its detailed configuration appears in the table above.
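A quick back-of-the-envelope count shows the saving (a sketch; the 256-channel setting mirrors the comparison in the paper, and bias terms are ignored because each convolution is followed by BN):

# Two 3x3 convolutions on 256 channels (a BasicBlock-style pair)
basic = 2 * (256 * 256 * 3 * 3)
# Bottleneck: 1x1 down to 64, 3x3 at 64, 1x1 back up to 256
bottleneck = 256 * 64 + 64 * 64 * 3 * 3 + 64 * 256
print(basic, bottleneck, round(basic / bottleneck, 1))
# 1179648 69632 16.9  -> roughly 17x fewer parameters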

III. Notes on the Paper

  • The paper's first figure shows that a deeper "plain" network (one without residual learning) has a higher error rate on both the training and test sets than a shallower plain network, which raises the core problem: training very deep networks is hard.
    [Figure 1 of the paper: training/test error curves of plain networks]

  • This figure from the paper shows that with residual learning, training deeper networks actually improves results.

    [Figure: error curves of plain vs. residual networks]

  • The paper's options for handling shortcuts whose input and output shapes do not match:

    • Pad the input with extra zeros so its shape matches the output and the addition is possible (a sketch follows this list).
    • Use a 1 × 1 convolution as a projection only where the shapes differ (the approach used today).
    • Use 1 × 1 projections everywhere — too costly and unnecessary.
  • The paper's main contribution is the residual structure, which makes it possible to build extremely deep networks (beyond 1000 layers).

  • Batch normalization is used extensively to speed up training (dropout is dropped).

  • The paper validates the design with extensive experiments on CIFAR-10 and also applies ResNet to object detection.
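Option A (zero padding) rarely appears in reference code; below is a minimal sketch of how such a parameter-free shortcut could look. The class name PadShortcut and the subsample-by-slicing trick are my own illustration, not from the paper:

import torch
import torch.nn as nn
import torch.nn.functional as F

class PadShortcut(nn.Module):
    """Parameter-free shortcut: subsample spatially, zero-pad the extra channels."""
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.stride = stride
        self.extra = out_channels - in_channels  # channels to fill with zeros

    def forward(self, x):
        x = x[:, :, ::self.stride, ::self.stride]  # spatial subsampling
        # F.pad pads the last dims first: (W_left, W_right, H_top, H_bottom, C_front, C_back)
        return F.pad(x, (0, 0, 0, 0, 0, self.extra))

shortcut = PadShortcut(64, 128, stride=2)
print(shortcut(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 128, 28, 28])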

IV. ResNet-18 in PyTorch

import torch
import torch.nn as nn
import torchsummary


# BasicBlock: the residual block used by ResNet-18/34
class BasicBlock(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1):
        super(BasicBlock, self).__init__()
        # The first conv may downsample, i.e. double the channels and halve the
        # feature-map size, so its stride is passed in as a parameter.
        # Convolutions followed by BN need no bias.
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        # The second conv keeps both the size and the channel count.
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        # If the input and output shapes differ, adjust the input with a 1x1 conv
        # so it can be added to the output. This shortcut is also the downsample
        # path: it doubles the channels and halves the feature-map size, letting
        # the input "connect" to (be added to) the output.
        self.shortcut = nn.Sequential()
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, stride=stride, kernel_size=1, bias=False),
                nn.BatchNorm2d(out_channels)
            )

    def forward(self, x):
        # keep the identity
        identity = x
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        # residual connection
        out += self.shortcut(identity)
        # activate after the addition
        out = self.relu(out)
        return out


# ResNet-18
class ResNet18(nn.Module):
    def __init__(self, num_classes=1000):
        super(ResNet18, self).__init__()
        # output_size = [(input_size - kernel_size + 2 * padding) / stride] + 1
        # 112 = [(224 - 7 + 2 * padding) / 2] + 1  ->  padding = 3
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        # 56 = [(112 - 3 + 2 * padding) / 2] + 1  ->  padding = 1
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(in_channels=64, out_channels=64, blocks=2, stride=1)
        self.layer2 = self._make_layer(in_channels=64, out_channels=128, blocks=2, stride=2)
        self.layer3 = self._make_layer(in_channels=128, out_channels=256, blocks=2, stride=2)
        self.layer4 = self._make_layer(in_channels=256, out_channels=512, blocks=2, stride=2)
        # Global average pooling; the argument is the output size.
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512, num_classes)

    def _make_layer(self, in_channels, out_channels, blocks, stride=1):
        layer = []
        # The first block's input channels may differ from its output channels,
        # so it is appended separately.
        layer.append(BasicBlock(in_channels, out_channels, stride))
        for _ in range(1, blocks):
            layer.append(BasicBlock(out_channels, out_channels))
        return nn.Sequential(*layer)

    def forward(self, x):
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.maxpool(out)
        out = self.layer1(out)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.layer4(out)
        out = self.avgpool(out)
        out = torch.flatten(out, 1)
        out = self.fc(out)
        return out


if __name__ == '__main__':
    DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
    model = ResNet18()
    model.to(DEVICE)
    # print(model)
    print(torch.cuda.is_available())
    torchsummary.summary(model, (3, 224, 224), 64)

The main points to note:

  • In BasicBlock (the residual block for 34 layers and below), mind the residual connection: the input and output channel counts must line up, with a 1 × 1 convolution handling the size and channel alignment (a quick shape check follows this list).
  • _make_layer() generates the residual blocks in bulk.
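As a quick shape check, a downsampling BasicBlock should double the channels and halve the spatial size (a sketch; it assumes the BasicBlock class defined above is in scope):

import torch

block = BasicBlock(in_channels=64, out_channels=128, stride=2)
x = torch.randn(1, 64, 56, 56)
print(block(x).shape)  # torch.Size([1, 128, 28, 28]): channels doubled, size halved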

The network structure printed to the console (abridged — repeated blocks are elided):

ResNet18(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (shortcut): Sequential()
    )
    (1): BasicBlock(...)   (identical to (0))
  )
  (layer2): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (shortcut): Sequential(
        (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(...)   (128 -> 128, stride 1, empty shortcut)
  )
  (layer3): Sequential(...)   (same pattern, 128 -> 256, first block stride 2)
  (layer4): Sequential(...)   (same pattern, 256 -> 512, first block stride 2)
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (fc): Linear(in_features=512, out_features=1000, bias=True)
)

Testing the network with torchsummary (abridged; the full table runs through module 68):

----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1         [64, 64, 112, 112]           9,408
       BatchNorm2d-2         [64, 64, 112, 112]             128
              ReLU-3         [64, 64, 112, 112]               0
         MaxPool2d-4           [64, 64, 56, 56]               0
            Conv2d-5           [64, 64, 56, 56]          36,864
       BatchNorm2d-6           [64, 64, 56, 56]             128
              ReLU-7           [64, 64, 56, 56]               0
            Conv2d-8           [64, 64, 56, 56]          36,864
       BatchNorm2d-9           [64, 64, 56, 56]             128
             ReLU-10           [64, 64, 56, 56]               0
       BasicBlock-11           [64, 64, 56, 56]               0
                ...                         ...             ...
           Conv2d-63            [64, 512, 7, 7]       2,359,296
      BatchNorm2d-64            [64, 512, 7, 7]           1,024
             ReLU-65            [64, 512, 7, 7]               0
       BasicBlock-66            [64, 512, 7, 7]               0
AdaptiveAvgPool2d-67            [64, 512, 1, 1]               0
           Linear-68                 [64, 1000]         513,000
================================================================
Total params: 11,689,512
Trainable params: 11,689,512
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 36.75
Forward/backward pass size (MB): 4018.74
Params size (MB): 44.59
Estimated Total Size (MB): 4100.08
----------------------------------------------------------------
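As a sanity check (assuming torchvision is installed), the parameter count can be compared against torchvision's reference ResNet-18; both should report the same 11,689,512 total seen above:

import torch
from torchvision.models import resnet18

print(sum(p.numel() for p in ResNet18().parameters()))                  # 11689512
print(sum(p.numel() for p in resnet18(num_classes=1000).parameters()))  # 11689512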

V. ResNet-50 in PyTorch

import torch
import torch.nn as nn
import torchsummary


# Bottleneck: the residual block used by ResNet-50/101/152
class Bottleneck(nn.Module):
    # expansion = 4: each Bottleneck's output has 4x the channels of its 3x3 stage
    expansion = 4

    def __init__(self, in_channels, out_channels, stride=1):
        super(Bottleneck, self).__init__()
        # Note: 1x1 convolutions need no padding.
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        # padding = 1 keeps the feature-map size
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        # Again, no padding for the 1x1 convolution.
        self.conv3 = nn.Conv2d(out_channels, out_channels * self.expansion, kernel_size=1, stride=1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_channels * self.expansion)
        self.relu = nn.ReLU(inplace=True)
        self.shortcut = nn.Sequential()
        if stride != 1 or in_channels != out_channels * self.expansion:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels * self.expansion, stride=stride, kernel_size=1, bias=False),
                nn.BatchNorm2d(out_channels * self.expansion)
            )

    def forward(self, x):
        identity = x
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)
        out = self.conv3(out)
        out = self.bn3(out)
        # residual connection
        out += self.shortcut(identity)
        out = self.relu(out)
        return out


# ResNet-50
class ResNet50(nn.Module):
    def __init__(self, num_classes=1000):
        super(ResNet50, self).__init__()
        # output_size = [(input_size - kernel_size + 2 * padding) / stride] + 1
        # 112 = [(224 - 7 + 2 * padding) / 2] + 1  ->  padding = 3
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        # 56 = [(112 - 3 + 2 * padding) / 2] + 1  ->  padding = 1
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(64, 64, blocks=3, stride=1)
        self.layer2 = self._make_layer(256, 128, blocks=4, stride=2)
        self.layer3 = self._make_layer(512, 256, blocks=6, stride=2)
        self.layer4 = self._make_layer(1024, 512, blocks=3, stride=2)
        # Global average pooling; the argument is the output size.
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(2048, num_classes)

    def _make_layer(self, in_channels, out_channels, blocks, stride=1):
        layer = []
        # The first block's input channels may differ from its output channels,
        # so it is appended separately.
        layer.append(Bottleneck(in_channels, out_channels, stride))
        for _ in range(1, blocks):
            # Every following block takes 4x out_channels as its input
            layer.append(Bottleneck(out_channels * 4, out_channels))
        return nn.Sequential(*layer)

    def forward(self, x):
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.maxpool(out)
        out = self.layer1(out)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.layer4(out)
        out = self.avgpool(out)
        out = torch.flatten(out, 1)
        out = self.fc(out)
        return out


if __name__ == '__main__':
    DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
    model = ResNet50()
    model.to(DEVICE)
    # print(model)
    torchsummary.summary(model, (3, 224, 224), 64)

The main points to note:

  • In Bottleneck (the residual block for 50 layers and above), watch the channel counts where blocks connect. In BasicBlock, two consecutive blocks share the same channel count, whereas in Bottleneck each block's input has 4x the channels of its out_channels argument, i.e. this code:

        for _ in range(1, blocks):
            # Every following block takes 4x out_channels as its input
            layer.append(Bottleneck(out_channels * 4, out_channels))
    
  • Mind the final fully connected layer: for ResNet-50 and deeper, its in_features is 2048.

  • When making the residual connection, the shortcut must project the identity to exactly the shape of the output, i.e. this code:

    self.shortcut = nn.Sequential()
    if stride != 1 or in_channels != out_channels * self.expansion:
        self.shortcut = nn.Sequential(
            nn.Conv2d(in_channels, out_channels * self.expansion, stride=stride, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels * self.expansion)
        )
    
  • Note that only the first residual block of each stage uses a stride of 2 (except the first stage, which keeps the spatial size); all other blocks use a stride of 1, i.e. (a stage-by-stage shape check follows):

    self.layer1 = self._make_layer(64, 64, blocks=3, stride=1)
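The stride layout is easy to verify with a stage-by-stage shape check (a sketch; it reuses the submodules of the ResNet50 class defined above):

import torch

model = ResNet50()
x = torch.randn(1, 3, 224, 224)
x = model.maxpool(model.relu(model.bn1(model.conv1(x))))   # stem: 224 -> 56
for name in ["layer1", "layer2", "layer3", "layer4"]:
    x = getattr(model, name)(x)
    print(name, tuple(x.shape))
# layer1 (1, 256, 56, 56)   stride 1 keeps the size
# layer2 (1, 512, 28, 28)   the first block of the stage halves it
# layer3 (1, 1024, 14, 14)
# layer4 (1, 2048, 7, 7)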
    

The network structure printed to the console (abridged — repeated blocks are elided):

ResNet50(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (shortcut): Sequential(
        (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(...)   (256 -> 64 -> 256, empty shortcut)
    (2): Bottleneck(...)   (identical to (1))
  )
  (layer2): Sequential(...)   (4 Bottlenecks, 256 -> 512 channels, first block at stride 2)
  (layer3): Sequential(...)   (6 Bottlenecks, 512 -> 1024 channels, first block at stride 2)
  (layer4): Sequential(...)   (3 Bottlenecks, 1024 -> 2048 channels, first block at stride 2)
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (fc): Linear(in_features=2048, out_features=1000, bias=True)
)

Testing the network with torchsummary (abridged; the full table runs through module 174):

----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1         [64, 64, 112, 112]           9,408
       BatchNorm2d-2         [64, 64, 112, 112]             128
              ReLU-3         [64, 64, 112, 112]               0
         MaxPool2d-4           [64, 64, 56, 56]               0
            Conv2d-5           [64, 64, 56, 56]           4,096
       BatchNorm2d-6           [64, 64, 56, 56]             128
              ReLU-7           [64, 64, 56, 56]               0
            Conv2d-8           [64, 64, 56, 56]          36,864
       BatchNorm2d-9           [64, 64, 56, 56]             128
             ReLU-10           [64, 64, 56, 56]               0
           Conv2d-11          [64, 256, 56, 56]          16,384
      BatchNorm2d-12          [64, 256, 56, 56]             512
           Conv2d-13          [64, 256, 56, 56]          16,384
      BatchNorm2d-14          [64, 256, 56, 56]             512
             ReLU-15          [64, 256, 56, 56]               0
       Bottleneck-16          [64, 256, 56, 56]               0
                ...                         ...             ...
          Conv2d-169           [64, 2048, 7, 7]       1,048,576
     BatchNorm2d-170           [64, 2048, 7, 7]           4,096
            ReLU-171           [64, 2048, 7, 7]               0
      Bottleneck-172           [64, 2048, 7, 7]               0
AdaptiveAvgPool2d-173          [64, 2048, 1, 1]               0
          Linear-174                 [64, 1000]       2,049,000
================================================================
Total params: 25,557,032
Trainable params: 25,557,032
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 36.75
Forward/backward pass size (MB): 15200.01
Params size (MB): 97.49
Estimated Total Size (MB): 15334.25
----------------------------------------------------------------
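The same sanity check works here (assuming torchvision is installed). Note that torchvision's Bottleneck puts the stride on the 3 × 3 convolution rather than on the first 1 × 1, but since strides carry no parameters, both models report 25,557,032:

import torch
from torchvision.models import resnet50

print(sum(p.numel() for p in ResNet50().parameters()))                  # 25557032
print(sum(p.numel() for p in resnet50(num_classes=1000).parameters()))  # 25557032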

References:

  • https://arxiv.org/pdf/1512.03385.pdf

  • https://blog.csdn.net/m0_64799972/article/details/132753608

  • https://blog.csdn.net/m0_50127633/article/details/117200212

  • https://www.bilibili.com/video/BV1Bo4y1T7Lc/?spm_id_from=333.999.0.0&vd_source=c7e390079ff3e10b79e23fb333bea49d


随着互联网的发展和普及&#xff0c;越来越多的企业选择在网上进行业务推广和品牌宣传。对于一些想要了解企业工商信息的用户来说&#xff0c;如何获取企业工商信息数据成了一个非常重要的问题。下面分享获取企业工商全量信息的渠道和方式&#xff1a; 首先&#xff0c;我们可以…