[Image Classification] [Deep Learning] [PyTorch] Inception-ResNet Model and Algorithm Explained

Table of Contents

  • [Image Classification] [Deep Learning] [PyTorch] Inception-ResNet Model and Algorithm Explained
  • Preface
  • Inception-ResNet Explained
    • Inception-ResNet-V1
    • Inception-ResNet-V2
    • Scaling of the Residuals
    • Overall Architecture of Inception-ResNet
  • GoogLeNet (Inception-ResNet) PyTorch Code
    • Inception-ResNet-V1
    • Inception-ResNet-V2
  • Complete Code
    • Inception-ResNet-V1
    • Inception-ResNet-V2
  • Summary


Preface

GoogLeNet (Inception-ResNet) was proposed by Szegedy, Christian, et al. of Google in "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning" [AAAI-2017] [paper link]. Inspired by the strong performance of ResNet [reference] on very deep networks, the paper introduces residual connections into the Inception architecture, producing two Inception-ResNet variants: the residual shortcut takes the place of the pooling branch in the original Inception block, and the concatenation is replaced by an element-wise sum, which speeds up the training of Inception.

Because Inception-v4, Inception-ResNet-v1, and Inception-ResNet-v2 all come from the same paper, many readers mistake Inception-v4 for a combination of Inception modules and residual learning. In fact, Inception-v4 does not use residual learning at all; it largely carries over the structure of Inception-v2/v3. Only Inception-ResNet-v1 and Inception-ResNet-v2 combine Inception modules with residual learning.


Inception-ResNet Explained

The core idea of Inception-ResNet is to fuse the Inception module with the ResNet module so as to exploit the strengths of both: an Inception module captures multi-scale features by running several convolutions with different kernel sizes in parallel, while ResNet's residual connections alleviate vanishing and exploding gradients in deep networks and make deep models easier to train. Inception-ResNet uses Inception modules similar to those of Inception-v4 [reference] and adds ResNet-style residual connections to them. Each Inception-ResNet module therefore consists of two paths: a regular Inception-style main branch and a shortcut (residual) connection. This design lets the model learn better feature representations and propagate gradients more effectively during training.
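This pattern is easy to see in code. The sketch below is illustrative only: the module name, the two branches, and the channel numbers are made up for the example (the real blocks later in this article use three branches and the paper's channel counts):

import torch
import torch.nn as nn

# Illustrative residual-Inception block: branches -> concat -> 1x1 conv -> scaled sum
class ResidualInceptionSketch(nn.Module):
    def __init__(self, channels=256, scale=0.2):
        super().__init__()
        self.scale = scale
        # two parallel branches with different receptive fields
        self.branch_0 = nn.Conv2d(channels, 32, kernel_size=1)
        self.branch_1 = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=1),
            nn.Conv2d(32, 32, kernel_size=3, padding=1))
        # a 1x1 conv restores the input channel count so the residual sum is valid
        self.project = nn.Conv2d(64, channels, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        res = torch.cat((self.branch_0(x), self.branch_1(x)), dim=1)
        # the element-wise sum replaces the concatenation of a plain Inception block
        return self.relu(x + self.scale * self.project(res))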

Inception-ResNet-V1

Inception-ResNet-v1: an architecture with roughly the same computational cost as Inception-v3 [reference].

  1. Stem block: the Stem of Inception-ResNet-V1 resembles the layers that precede the Inception groups in the earlier Inception-v3 network (a worked size trace follows this list).

    Convolutions not marked with a V use "SAME" padding, so the output spatial size equals the input; convolutions marked with a V use "VALID" padding, and the output size depends on the kernel and stride.

  2. Inception-resnet-A block: a variant of the Inception-A block of Inception-v4; the trailing 1×1 convolution restores the channel count so that the main branch and the shortcut branch have exactly the same feature-map shape.

    In an Inception-resnet block, the residual shortcut replaces the pooling branch of the original Inception block, and the residual addition replaces the original concatenation.

  3. Inception-resnet-B block: a variant of the Inception-B block of Inception-v4; the trailing 1×1 convolution again makes the main branch match the shortcut's feature-map shape.

  4. Inception-resnet-C block: a variant of the Inception-C block of Inception-v4; the trailing 1×1 convolution again makes the main branch match the shortcut's feature-map shape.

  5. Reduction-A block: identical in structure to the Reduction-A block of Inception-v4; only the numbers of filters differ.

    k and l denote filter counts; each network variant uses its own k and l values in its Reduction-A block.

  6. Reduction-B block:
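As mentioned in item 1, the spatial size of a 299×299 input through the V1 stem can be checked with standard convolution arithmetic: a VALID convolution or pooling outputs floor((n - k) / s) + 1. A minimal helper sketch, mirroring the Stem code later in this article:

def valid(n, k, s=1):
    # output size of a VALID convolution/pooling: floor((n - k) / s) + 1
    return (n - k) // s + 1

n = 299
n = valid(n, 3, 2)  # conv3x3, stride 2, VALID -> 149
n = valid(n, 3)     # conv3x3, VALID           -> 147
                    # conv3x3, SAME            -> 147 (unchanged)
n = valid(n, 3, 2)  # maxpool3x3, stride 2     -> 73
                    # conv1x1                  -> 73 (unchanged)
n = valid(n, 3)     # conv3x3, VALID           -> 71
n = valid(n, 3, 2)  # conv3x3, stride 2, VALID -> 35
print(n)            # 35 -> the V1 stem outputs a 35x35x256 feature map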

Inception-ResNet-V2

Inception-ResNet-v2: an architecture with roughly the same computational cost as Inception-v4, but noticeably faster to train than pure Inception-v4.
The overall framework of Inception-ResNet-v2 matches that of Inception-ResNet-v1: its stem is identical to that of Inception-v4, and the remaining blocks are analogous to their Inception-ResNet-v1 counterparts, just with more filters per convolution.

  1. Stem block: the stem of Inception-ResNet-v2 is identical to that of Inception-v4.
  2. Inception-resnet-A block: a variant of the Inception-A block of Inception-v4; the trailing 1×1 convolution restores the channel count so that the main branch and the shortcut branch have exactly the same feature-map shape.
  3. Inception-resnet-B block: a variant of the Inception-B block of Inception-v4; the trailing 1×1 convolution serves the same purpose.
  4. Inception-resnet-C block: a variant of the Inception-C block of Inception-v4; the trailing 1×1 convolution serves the same purpose.
  5. Reduction-A block: identical in structure to the Reduction-A block of Inception-v4; only the numbers of filters differ.

    k and l denote filter counts; each network variant uses its own k and l values in its Reduction-A block.

  6. Reduction-B block:
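Following the same VALID-convolution arithmetic shown for the V1 stem above, a 299×299 input moves through the V2 stem as 299 → 149 → 147 → 73 (maxpool and stride-2 conv branches concatenated to 160 channels) → 71 (two conv branches concatenated to 192 channels) → 35, so the V2 stem outputs a 35×35×384 feature map.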

Scaling of the Residuals

If the number of filters in a single layer is very large (beyond about 1000), residual training becomes unstable and the network starts to fail early in training: after a few tens of thousands of iterations, the layers before the average pooling begin to output only zeros. Lowering the learning rate or adding extra BN layers does not prevent this. Scaling down the residual-branch output before adding it to the shortcut stabilizes training.

Typically a scaling factor between 0.1 and 0.3 is applied to the residual-block output. Even where scaling is not strictly necessary, it does not appear to hurt the final accuracy, and it benefits training stability.
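In code, the scaling is a single multiplication on the residual branch before the shortcut addition. A minimal sketch (0.2 is one choice from the 0.1-0.3 range; the tensor shapes are illustrative):

import torch

def scaled_residual_add(shortcut, residual, scale=0.2):
    # shrink the residual branch before adding the shortcut to stabilize training
    return shortcut + scale * residual

x = torch.randn(1, 256, 35, 35)    # shortcut branch
res = torch.randn(1, 256, 35, 35)  # residual (Inception) branch output
out = scaled_residual_add(x, res)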

Overall Architecture of Inception-ResNet

The figure below, taken from the original paper, shows the detailed architecture of Inception-ResNet-V1:

The figure below, taken from the original paper, shows the detailed architecture of Inception-ResNet-V2:

Note that some of the channel counts annotated in the original paper for Inception-ResNet-V2 are wrong; they cannot be matched when writing the code.

The two versions share the same overall layout; only the specifics of the Stem, Inception, and Reduction blocks differ.
For image classification, Inception-ResNet-V1 and Inception-ResNet-V2 both split into two parts: a backbone built from the Stem module, Inception-resnet modules, and pooling layers, and a classifier made of fully connected layers.
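Concretely, the V1 pipeline implemented below is, for a 299×299 input: Stem → 35×35×256, 5 × Inception-resnet-A → 35×35×256, Reduction-A → 17×17×896, 10 × Inception-resnet-B → 17×17×896, Reduction-B → 8×8×1792, 5 × Inception-resnet-C → 8×8×1792, then global average pooling, dropout, and the fully connected classifier. V2 follows the same sequence with 384, 1152, and 2144 channels at the corresponding stages.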


GoogLeNet (Inception-ResNet) PyTorch Code

Inception-ResNet-V1

Convolution group: convolution + BN + activation

# Convolution group: Conv2d + BN + ReLU
class BasicConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        return x

Stem module: convolution groups + max-pooling layer

# Stem: BasicConv2d + MaxPool2d
class Stem(nn.Module):
    def __init__(self, in_channels):
        super(Stem, self).__init__()
        # conv3x3(32, stride 2, VALID)
        self.conv1 = BasicConv2d(in_channels, 32, kernel_size=3, stride=2)
        # conv3x3(32, VALID)
        self.conv2 = BasicConv2d(32, 32, kernel_size=3)
        # conv3x3(64)
        self.conv3 = BasicConv2d(32, 64, kernel_size=3, padding=1)
        # maxpool3x3(stride 2, VALID)
        self.maxpool4 = nn.MaxPool2d(kernel_size=3, stride=2)
        # conv1x1(80)
        self.conv5 = BasicConv2d(64, 80, kernel_size=1)
        # conv3x3(192, VALID)
        self.conv6 = BasicConv2d(80, 192, kernel_size=3)
        # conv3x3(256, stride 2, VALID)
        self.conv7 = BasicConv2d(192, 256, kernel_size=3, stride=2)

    def forward(self, x):
        x = self.maxpool4(self.conv3(self.conv2(self.conv1(x))))
        x = self.conv7(self.conv6(self.conv5(x)))
        return x
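A quick shape check (assuming the block above plus import torch), matching the size trace given earlier:

x = torch.randn(1, 3, 299, 299)
print(Stem(3)(x).shape)  # expected: torch.Size([1, 256, 35, 35])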

Inception_ResNet-A module: convolution groups

# Inception_ResNet_A: BasicConv2d
class Inception_ResNet_A(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3red, ch3x3, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_A, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1x1(32)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(32) + conv3x3(32)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3red, 1),
            BasicConv2d(ch3x3red, ch3x3, 3, stride=1, padding=1))
        # conv1x1(32) + conv3x3(32) + conv3x3(32)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, 3, stride=1, padding=1),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, 3, stride=1, padding=1))
        # conv1x1(256): restore the input channel count for the residual add
        self.conv = BasicConv2d(ch1x1 + ch3x3 + ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        # concatenate the branch outputs
        x_res = torch.cat((x0, x1, x2), dim=1)
        x_res = self.conv(x_res)
        # scaled residual addition
        return self.relu(x + self.scale * x_res)
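Because of the residual sum, the block must preserve both the spatial size and the channel count of its input. A quick check with the V1 configuration used in the complete model below:

block = Inception_ResNet_A(256, 32, 32, 32, 32, 32, 32, 256, scale=0.17)
x = torch.randn(1, 256, 35, 35)
print(block(x).shape)  # expected: torch.Size([1, 256, 35, 35])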

Inception_ResNet-B module: convolution groups

# Inception_ResNet_B: BasicConv2d
class Inception_ResNet_B(nn.Module):
    def __init__(self, in_channels, ch1x1, ch_red, ch_1, ch_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_B, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1x1(128)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(128) + conv1x7(128) + conv7x1(128)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch_red, 1),
            BasicConv2d(ch_red, ch_1, (1, 7), stride=1, padding=(0, 3)),
            BasicConv2d(ch_1, ch_2, (7, 1), stride=1, padding=(3, 0)))
        # conv1x1(896): restore the input channel count for the residual add
        self.conv = BasicConv2d(ch1x1 + ch_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate the branch outputs
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        # scaled residual addition
        return self.relu(x + self.scale * x_res)

Inception_ResNet-C module: convolution groups

# Inception_ResNet_C: BasicConv2d
class Inception_ResNet_C(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0, activation=True):
        super(Inception_ResNet_C, self).__init__()
        # residual scaling factor
        self.scale = scale
        # whether to apply the final ReLU
        self.activation = activation
        # conv1x1(192)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(192) + conv1x3(192) + conv3x1(192)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, (1, 3), stride=1, padding=(0, 1)),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, (3, 1), stride=1, padding=(1, 0)))
        # conv1x1(1792): restore the input channel count for the residual add
        self.conv = BasicConv2d(ch1x1 + ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate the branch outputs
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        if self.activation:
            return self.relu(x + self.scale * x_res)
        return x + self.scale * x_res

ReductionA module: convolution groups + max-pooling layer

# ReductionA: BasicConv2d + MaxPool2d
class ReductionA(nn.Module):
    def __init__(self, in_channels, k, l, m, n):
        super(ReductionA, self).__init__()
        # conv3x3(n, stride 2, VALID)
        self.branch1 = nn.Sequential(
            BasicConv2d(in_channels, n, kernel_size=3, stride=2))
        # conv1x1(k) + conv3x3(l) + conv3x3(m, stride 2, VALID)
        self.branch2 = nn.Sequential(
            BasicConv2d(in_channels, k, kernel_size=1),
            BasicConv2d(k, l, kernel_size=3, padding=1),
            BasicConv2d(l, m, kernel_size=3, stride=2))
        # maxpool3x3(stride 2, VALID)
        self.branch3 = nn.MaxPool2d(kernel_size=3, stride=2)

    def forward(self, x):
        branch1 = self.branch1(x)
        branch2 = self.branch2(x)
        branch3 = self.branch3(x)
        # concatenate the branch outputs
        return torch.cat([branch1, branch2, branch3], 1)
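The output channel count is n + m + in_channels, since the max-pooling branch keeps the input channels. With the V1 values used later (k=192, l=192, m=256, n=384) on a 256-channel input this gives 384 + 256 + 256 = 896, and the stride-2 VALID layers reduce 35×35 to 17×17:

red = ReductionA(256, 192, 192, 256, 384)
x = torch.randn(1, 256, 35, 35)
print(red(x).shape)  # expected: torch.Size([1, 896, 17, 17])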

ReductionB module: convolution groups + max-pooling layer

# ReductionB: BasicConv2d + MaxPool2d
class ReductionB(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3_1, ch3x3_2, ch3x3_3, ch3x3_4):
        super(ReductionB, self).__init__()
        # conv1x1(256) + conv3x3(384, stride 2, VALID)
        self.branch_0 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_1, 3, stride=2, padding=0))
        # conv1x1(256) + conv3x3(256, stride 2, VALID)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_2, 3, stride=2, padding=0))
        # conv1x1(256) + conv3x3(256) + conv3x3(256, stride 2, VALID)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_3, 3, stride=1, padding=1),
            BasicConv2d(ch3x3_3, ch3x3_4, 3, stride=2, padding=0))
        # maxpool3x3(stride 2, VALID)
        self.branch_3 = nn.MaxPool2d(3, stride=2, padding=0)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        x3 = self.branch_3(x)
        return torch.cat((x0, x1, x2, x3), dim=1)

Inception-ResNet-V2

Apart from the Stem, the modules of Inception-ResNet-V2 are structurally identical to those of Inception-ResNet-V1.
Convolution group: convolution + BN + activation

# Convolution group: Conv2d + BN + ReLU
class BasicConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        return x

Stem module: convolution groups + max-pooling layers

# Stem: BasicConv2d + MaxPool2d
class Stem(nn.Module):
    def __init__(self, in_channels):
        super(Stem, self).__init__()
        # conv3x3(32, stride 2, VALID)
        self.conv1 = BasicConv2d(in_channels, 32, kernel_size=3, stride=2)
        # conv3x3(32, VALID)
        self.conv2 = BasicConv2d(32, 32, kernel_size=3)
        # conv3x3(64)
        self.conv3 = BasicConv2d(32, 64, kernel_size=3, padding=1)
        # maxpool3x3(stride 2, VALID) & conv3x3(96, stride 2, VALID)
        self.maxpool4 = nn.MaxPool2d(kernel_size=3, stride=2)
        self.conv4 = BasicConv2d(64, 96, kernel_size=3, stride=2)
        # conv1x1(64) + conv3x3(96, VALID)
        self.conv5_1_1 = BasicConv2d(160, 64, kernel_size=1)
        self.conv5_1_2 = BasicConv2d(64, 96, kernel_size=3)
        # conv1x1(64) + conv7x1(64) + conv1x7(64) + conv3x3(96, VALID)
        self.conv5_2_1 = BasicConv2d(160, 64, kernel_size=1)
        self.conv5_2_2 = BasicConv2d(64, 64, kernel_size=(7, 1), padding=(3, 0))
        self.conv5_2_3 = BasicConv2d(64, 64, kernel_size=(1, 7), padding=(0, 3))
        self.conv5_2_4 = BasicConv2d(64, 96, kernel_size=3)
        # conv3x3(192, stride 2, VALID) & maxpool3x3(stride 2, VALID)
        self.conv6 = BasicConv2d(192, 192, kernel_size=3, stride=2)
        self.maxpool6 = nn.MaxPool2d(kernel_size=3, stride=2)

    def forward(self, x):
        # compute the shared trunk once, then branch
        x = self.conv3(self.conv2(self.conv1(x)))
        x1 = torch.cat([self.maxpool4(x), self.conv4(x)], 1)
        x2_1 = self.conv5_1_2(self.conv5_1_1(x1))
        x2_2 = self.conv5_2_4(self.conv5_2_3(self.conv5_2_2(self.conv5_2_1(x1))))
        x2 = torch.cat([x2_1, x2_2], 1)
        x3 = torch.cat([self.conv6(x2), self.maxpool6(x2)], 1)
        return x3
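A quick shape check for the V2 stem (again assuming import torch), matching the trace given earlier:

x = torch.randn(1, 3, 299, 299)
print(Stem(3)(x).shape)  # expected: torch.Size([1, 384, 35, 35])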

Inception_ResNet-A module: convolution groups

# Inception_ResNet_A: BasicConv2d
class Inception_ResNet_A(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3red, ch3x3, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_A, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1x1(32)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(32) + conv3x3(32)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3red, 1),
            BasicConv2d(ch3x3red, ch3x3, 3, stride=1, padding=1))
        # conv1x1(32) + conv3x3(48) + conv3x3(64)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, 3, stride=1, padding=1),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, 3, stride=1, padding=1))
        # conv1x1(384): restore the input channel count for the residual add
        self.conv = BasicConv2d(ch1x1 + ch3x3 + ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        # concatenate the branch outputs
        x_res = torch.cat((x0, x1, x2), dim=1)
        x_res = self.conv(x_res)
        # scaled residual addition
        return self.relu(x + self.scale * x_res)

Inception_ResNet-B module: convolution groups

# Inception_ResNet_B: BasicConv2d
class Inception_ResNet_B(nn.Module):
    def __init__(self, in_channels, ch1x1, ch_red, ch_1, ch_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_B, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1x1(192)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(128) + conv1x7(160) + conv7x1(192)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch_red, 1),
            BasicConv2d(ch_red, ch_1, (1, 7), stride=1, padding=(0, 3)),
            BasicConv2d(ch_1, ch_2, (7, 1), stride=1, padding=(3, 0)))
        # conv1x1(1154 in the paper; 1152 in the code below so the residual add matches)
        self.conv = BasicConv2d(ch1x1 + ch_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate the branch outputs
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        # scaled residual addition
        return self.relu(x + self.scale * x_res)

Inception_ResNet-C module: convolution groups

# Inception_ResNet_C: BasicConv2d
class Inception_ResNet_C(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0, activation=True):
        super(Inception_ResNet_C, self).__init__()
        # residual scaling factor
        self.scale = scale
        # whether to apply the final ReLU
        self.activation = activation
        # conv1x1(192)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(192) + conv1x3(224) + conv3x1(256)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, (1, 3), stride=1, padding=(0, 1)),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, (3, 1), stride=1, padding=(1, 0)))
        # conv1x1(2048 in the paper; 2144 in the code below so the residual add matches)
        self.conv = BasicConv2d(ch1x1 + ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate the branch outputs
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        if self.activation:
            return self.relu(x + self.scale * x_res)
        return x + self.scale * x_res

ReductionA module: convolution groups + max-pooling layer

# ReductionA: BasicConv2d + MaxPool2d
class ReductionA(nn.Module):
    def __init__(self, in_channels, k, l, m, n):
        super(ReductionA, self).__init__()
        # conv3x3(n, stride 2, VALID)
        self.branch1 = nn.Sequential(
            BasicConv2d(in_channels, n, kernel_size=3, stride=2))
        # conv1x1(k) + conv3x3(l) + conv3x3(m, stride 2, VALID)
        self.branch2 = nn.Sequential(
            BasicConv2d(in_channels, k, kernel_size=1),
            BasicConv2d(k, l, kernel_size=3, padding=1),
            BasicConv2d(l, m, kernel_size=3, stride=2))
        # maxpool3x3(stride 2, VALID)
        self.branch3 = nn.MaxPool2d(kernel_size=3, stride=2)

    def forward(self, x):
        branch1 = self.branch1(x)
        branch2 = self.branch2(x)
        branch3 = self.branch3(x)
        # concatenate the branch outputs
        return torch.cat([branch1, branch2, branch3], 1)

ReductionB module: convolution groups + max-pooling layer

# ReductionB: BasicConv2d + MaxPool2d
class ReductionB(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3_1, ch3x3_2, ch3x3_3, ch3x3_4):
        super(ReductionB, self).__init__()
        # conv1x1(256) + conv3x3(384, stride 2, VALID)
        self.branch_0 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_1, 3, stride=2, padding=0))
        # conv1x1(256) + conv3x3(288, stride 2, VALID)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_2, 3, stride=2, padding=0))
        # conv1x1(256) + conv3x3(288) + conv3x3(320, stride 2, VALID)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_3, 3, stride=1, padding=1),
            BasicConv2d(ch3x3_3, ch3x3_4, 3, stride=2, padding=0))
        # maxpool3x3(stride 2, VALID)
        self.branch_3 = nn.MaxPool2d(3, stride=2, padding=0)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        x3 = self.branch_3(x)
        return torch.cat((x0, x1, x2, x3), dim=1)

Complete Code

Inception-ResNet takes 299×299 input images.

Inception-ResNet-V1

import torch
import torch.nn as nn
from torchsummary import summary

# Convolution group: Conv2d + BN + ReLU
class BasicConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        return x

# Stem: BasicConv2d + MaxPool2d
class Stem(nn.Module):
    def __init__(self, in_channels):
        super(Stem, self).__init__()
        # conv3x3(32, stride 2, VALID)
        self.conv1 = BasicConv2d(in_channels, 32, kernel_size=3, stride=2)
        # conv3x3(32, VALID)
        self.conv2 = BasicConv2d(32, 32, kernel_size=3)
        # conv3x3(64)
        self.conv3 = BasicConv2d(32, 64, kernel_size=3, padding=1)
        # maxpool3x3(stride 2, VALID)
        self.maxpool4 = nn.MaxPool2d(kernel_size=3, stride=2)
        # conv1x1(80)
        self.conv5 = BasicConv2d(64, 80, kernel_size=1)
        # conv3x3(192, VALID)
        self.conv6 = BasicConv2d(80, 192, kernel_size=3)
        # conv3x3(256, stride 2, VALID)
        self.conv7 = BasicConv2d(192, 256, kernel_size=3, stride=2)

    def forward(self, x):
        x = self.maxpool4(self.conv3(self.conv2(self.conv1(x))))
        x = self.conv7(self.conv6(self.conv5(x)))
        return x

# Inception_ResNet_A: BasicConv2d
class Inception_ResNet_A(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3red, ch3x3, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_A, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1x1(32)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(32) + conv3x3(32)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3red, 1),
            BasicConv2d(ch3x3red, ch3x3, 3, stride=1, padding=1))
        # conv1x1(32) + conv3x3(32) + conv3x3(32)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, 3, stride=1, padding=1),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, 3, stride=1, padding=1))
        # conv1x1(256): restore the input channel count for the residual add
        self.conv = BasicConv2d(ch1x1 + ch3x3 + ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        # concatenate the branch outputs
        x_res = torch.cat((x0, x1, x2), dim=1)
        x_res = self.conv(x_res)
        # scaled residual addition
        return self.relu(x + self.scale * x_res)

# Inception_ResNet_B: BasicConv2d
class Inception_ResNet_B(nn.Module):
    def __init__(self, in_channels, ch1x1, ch_red, ch_1, ch_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_B, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1x1(128)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(128) + conv1x7(128) + conv7x1(128)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch_red, 1),
            BasicConv2d(ch_red, ch_1, (1, 7), stride=1, padding=(0, 3)),
            BasicConv2d(ch_1, ch_2, (7, 1), stride=1, padding=(3, 0)))
        # conv1x1(896): restore the input channel count for the residual add
        self.conv = BasicConv2d(ch1x1 + ch_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate the branch outputs
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        # scaled residual addition
        return self.relu(x + self.scale * x_res)

# Inception_ResNet_C: BasicConv2d
class Inception_ResNet_C(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0, activation=True):
        super(Inception_ResNet_C, self).__init__()
        # residual scaling factor
        self.scale = scale
        # whether to apply the final ReLU
        self.activation = activation
        # conv1x1(192)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(192) + conv1x3(192) + conv3x1(192)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, (1, 3), stride=1, padding=(0, 1)),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, (3, 1), stride=1, padding=(1, 0)))
        # conv1x1(1792): restore the input channel count for the residual add
        self.conv = BasicConv2d(ch1x1 + ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate the branch outputs
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        if self.activation:
            return self.relu(x + self.scale * x_res)
        return x + self.scale * x_res

# ReductionA: BasicConv2d + MaxPool2d
class ReductionA(nn.Module):
    def __init__(self, in_channels, k, l, m, n):
        super(ReductionA, self).__init__()
        # conv3x3(n, stride 2, VALID)
        self.branch1 = nn.Sequential(
            BasicConv2d(in_channels, n, kernel_size=3, stride=2))
        # conv1x1(k) + conv3x3(l) + conv3x3(m, stride 2, VALID)
        self.branch2 = nn.Sequential(
            BasicConv2d(in_channels, k, kernel_size=1),
            BasicConv2d(k, l, kernel_size=3, padding=1),
            BasicConv2d(l, m, kernel_size=3, stride=2))
        # maxpool3x3(stride 2, VALID)
        self.branch3 = nn.MaxPool2d(kernel_size=3, stride=2)

    def forward(self, x):
        branch1 = self.branch1(x)
        branch2 = self.branch2(x)
        branch3 = self.branch3(x)
        # concatenate the branch outputs
        return torch.cat([branch1, branch2, branch3], 1)

# ReductionB: BasicConv2d + MaxPool2d
class ReductionB(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3_1, ch3x3_2, ch3x3_3, ch3x3_4):
        super(ReductionB, self).__init__()
        # conv1x1(256) + conv3x3(384, stride 2, VALID)
        self.branch_0 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_1, 3, stride=2, padding=0))
        # conv1x1(256) + conv3x3(256, stride 2, VALID)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_2, 3, stride=2, padding=0))
        # conv1x1(256) + conv3x3(256) + conv3x3(256, stride 2, VALID)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_3, 3, stride=1, padding=1),
            BasicConv2d(ch3x3_3, ch3x3_4, 3, stride=2, padding=0))
        # maxpool3x3(stride 2, VALID)
        self.branch_3 = nn.MaxPool2d(3, stride=2, padding=0)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        x3 = self.branch_3(x)
        return torch.cat((x0, x1, x2, x3), dim=1)

class Inception_ResNetv1(nn.Module):
    def __init__(self, num_classes=1000, k=192, l=192, m=256, n=384):
        super(Inception_ResNetv1, self).__init__()
        blocks = []
        blocks.append(Stem(3))
        # 5 x Inception-resnet-A, scale 0.17
        for i in range(5):
            blocks.append(Inception_ResNet_A(256, 32, 32, 32, 32, 32, 32, 256, 0.17))
        blocks.append(ReductionA(256, k, l, m, n))
        # 10 x Inception-resnet-B, scale 0.10
        for i in range(10):
            blocks.append(Inception_ResNet_B(896, 128, 128, 128, 128, 896, 0.10))
        blocks.append(ReductionB(896, 256, 384, 256, 256, 256))
        # 5 x Inception-resnet-C, scale 0.20; the last one has no ReLU
        for i in range(4):
            blocks.append(Inception_ResNet_C(1792, 192, 192, 192, 192, 1792, 0.20))
        blocks.append(Inception_ResNet_C(1792, 192, 192, 192, 192, 1792, activation=False))
        self.features = nn.Sequential(*blocks)
        self.conv = BasicConv2d(1792, 1536, 1)
        self.global_average_pooling = nn.AdaptiveAvgPool2d((1, 1))
        # the paper keeps units with probability 0.8, i.e. drops 20%
        self.dropout = nn.Dropout(0.2)
        self.linear = nn.Linear(1536, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = self.conv(x)
        x = self.global_average_pooling(x)
        x = x.view(x.size(0), -1)
        x = self.dropout(x)
        x = self.linear(x)
        return x

if __name__ == '__main__':
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    model = Inception_ResNetv1().to(device)
    summary(model, input_size=(3, 299, 299))

summary prints the network structure and parameter counts, which makes it easy to inspect the assembled model.
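torchsummary is a third-party package (pip install torchsummary). If it is unavailable, a plain forward pass works as a minimal check: print(Inception_ResNetv1()(torch.randn(1, 3, 299, 299)).shape) should give torch.Size([1, 1000]).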

Inception-ResNet-V2

import torch
import torch.nn as nn
from torchsummary import summary

# Convolution group: Conv2d + BN + ReLU
class BasicConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        return x

# Stem: BasicConv2d + MaxPool2d
class Stem(nn.Module):
    def __init__(self, in_channels):
        super(Stem, self).__init__()
        # conv3x3(32, stride 2, VALID)
        self.conv1 = BasicConv2d(in_channels, 32, kernel_size=3, stride=2)
        # conv3x3(32, VALID)
        self.conv2 = BasicConv2d(32, 32, kernel_size=3)
        # conv3x3(64)
        self.conv3 = BasicConv2d(32, 64, kernel_size=3, padding=1)
        # maxpool3x3(stride 2, VALID) & conv3x3(96, stride 2, VALID)
        self.maxpool4 = nn.MaxPool2d(kernel_size=3, stride=2)
        self.conv4 = BasicConv2d(64, 96, kernel_size=3, stride=2)
        # conv1x1(64) + conv3x3(96, VALID)
        self.conv5_1_1 = BasicConv2d(160, 64, kernel_size=1)
        self.conv5_1_2 = BasicConv2d(64, 96, kernel_size=3)
        # conv1x1(64) + conv7x1(64) + conv1x7(64) + conv3x3(96, VALID)
        self.conv5_2_1 = BasicConv2d(160, 64, kernel_size=1)
        self.conv5_2_2 = BasicConv2d(64, 64, kernel_size=(7, 1), padding=(3, 0))
        self.conv5_2_3 = BasicConv2d(64, 64, kernel_size=(1, 7), padding=(0, 3))
        self.conv5_2_4 = BasicConv2d(64, 96, kernel_size=3)
        # conv3x3(192, stride 2, VALID) & maxpool3x3(stride 2, VALID)
        self.conv6 = BasicConv2d(192, 192, kernel_size=3, stride=2)
        self.maxpool6 = nn.MaxPool2d(kernel_size=3, stride=2)

    def forward(self, x):
        # compute the shared trunk once, then branch
        x = self.conv3(self.conv2(self.conv1(x)))
        x1 = torch.cat([self.maxpool4(x), self.conv4(x)], 1)
        x2_1 = self.conv5_1_2(self.conv5_1_1(x1))
        x2_2 = self.conv5_2_4(self.conv5_2_3(self.conv5_2_2(self.conv5_2_1(x1))))
        x2 = torch.cat([x2_1, x2_2], 1)
        x3 = torch.cat([self.conv6(x2), self.maxpool6(x2)], 1)
        return x3

# Inception_ResNet_A: BasicConv2d
class Inception_ResNet_A(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3red, ch3x3, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_A, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1x1(32)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(32) + conv3x3(32)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3red, 1),
            BasicConv2d(ch3x3red, ch3x3, 3, stride=1, padding=1))
        # conv1x1(32) + conv3x3(48) + conv3x3(64)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, 3, stride=1, padding=1),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, 3, stride=1, padding=1))
        # conv1x1(384): restore the input channel count for the residual add
        self.conv = BasicConv2d(ch1x1 + ch3x3 + ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        # concatenate the branch outputs
        x_res = torch.cat((x0, x1, x2), dim=1)
        x_res = self.conv(x_res)
        # scaled residual addition
        return self.relu(x + self.scale * x_res)

# Inception_ResNet_B: BasicConv2d
class Inception_ResNet_B(nn.Module):
    def __init__(self, in_channels, ch1x1, ch_red, ch_1, ch_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_B, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1x1(192)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(128) + conv1x7(160) + conv7x1(192)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch_red, 1),
            BasicConv2d(ch_red, ch_1, (1, 7), stride=1, padding=(0, 3)),
            BasicConv2d(ch_1, ch_2, (7, 1), stride=1, padding=(3, 0)))
        # conv1x1(1154 in the paper; 1152 here so the residual add matches)
        self.conv = BasicConv2d(ch1x1 + ch_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate the branch outputs
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        # scaled residual addition
        return self.relu(x + self.scale * x_res)

# Inception_ResNet_C: BasicConv2d
class Inception_ResNet_C(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0, activation=True):
        super(Inception_ResNet_C, self).__init__()
        # residual scaling factor
        self.scale = scale
        # whether to apply the final ReLU
        self.activation = activation
        # conv1x1(192)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(192) + conv1x3(224) + conv3x1(256)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, (1, 3), stride=1, padding=(0, 1)),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, (3, 1), stride=1, padding=(1, 0)))
        # conv1x1(2048 in the paper; 2144 here so the residual add matches)
        self.conv = BasicConv2d(ch1x1 + ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate the branch outputs
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        if self.activation:
            return self.relu(x + self.scale * x_res)
        return x + self.scale * x_res

# ReductionA: BasicConv2d + MaxPool2d
class ReductionA(nn.Module):
    def __init__(self, in_channels, k, l, m, n):
        super(ReductionA, self).__init__()
        # conv3x3(n, stride 2, VALID)
        self.branch1 = nn.Sequential(
            BasicConv2d(in_channels, n, kernel_size=3, stride=2))
        # conv1x1(k) + conv3x3(l) + conv3x3(m, stride 2, VALID)
        self.branch2 = nn.Sequential(
            BasicConv2d(in_channels, k, kernel_size=1),
            BasicConv2d(k, l, kernel_size=3, padding=1),
            BasicConv2d(l, m, kernel_size=3, stride=2))
        # maxpool3x3(stride 2, VALID)
        self.branch3 = nn.MaxPool2d(kernel_size=3, stride=2)

    def forward(self, x):
        branch1 = self.branch1(x)
        branch2 = self.branch2(x)
        branch3 = self.branch3(x)
        # concatenate the branch outputs
        return torch.cat([branch1, branch2, branch3], 1)

# ReductionB: BasicConv2d + MaxPool2d
class ReductionB(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3_1, ch3x3_2, ch3x3_3, ch3x3_4):
        super(ReductionB, self).__init__()
        # conv1x1(256) + conv3x3(384, stride 2, VALID)
        self.branch_0 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_1, 3, stride=2, padding=0))
        # conv1x1(256) + conv3x3(288, stride 2, VALID)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_2, 3, stride=2, padding=0))
        # conv1x1(256) + conv3x3(288) + conv3x3(320, stride 2, VALID)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_3, 3, stride=1, padding=1),
            BasicConv2d(ch3x3_3, ch3x3_4, 3, stride=2, padding=0))
        # maxpool3x3(stride 2, VALID)
        self.branch_3 = nn.MaxPool2d(3, stride=2, padding=0)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        x3 = self.branch_3(x)
        return torch.cat((x0, x1, x2, x3), dim=1)

class Inception_ResNetv2(nn.Module):
    def __init__(self, num_classes=1000, k=256, l=256, m=384, n=384):
        super(Inception_ResNetv2, self).__init__()
        blocks = []
        blocks.append(Stem(3))
        # 5 x Inception-resnet-A, scale 0.17
        for i in range(5):
            blocks.append(Inception_ResNet_A(384, 32, 32, 32, 32, 48, 64, 384, 0.17))
        blocks.append(ReductionA(384, k, l, m, n))
        # 10 x Inception-resnet-B, scale 0.10
        for i in range(10):
            blocks.append(Inception_ResNet_B(1152, 192, 128, 160, 192, 1152, 0.10))
        blocks.append(ReductionB(1152, 256, 384, 288, 288, 320))
        # 5 x Inception-resnet-C, scale 0.20; the last one has no ReLU
        for i in range(4):
            blocks.append(Inception_ResNet_C(2144, 192, 192, 224, 256, 2144, 0.20))
        blocks.append(Inception_ResNet_C(2144, 192, 192, 224, 256, 2144, activation=False))
        self.features = nn.Sequential(*blocks)
        self.conv = BasicConv2d(2144, 1536, 1)
        self.global_average_pooling = nn.AdaptiveAvgPool2d((1, 1))
        # the paper keeps units with probability 0.8, i.e. drops 20%
        self.dropout = nn.Dropout(0.2)
        self.linear = nn.Linear(1536, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = self.conv(x)
        x = self.global_average_pooling(x)
        x = x.view(x.size(0), -1)
        x = self.dropout(x)
        x = self.linear(x)
        return x

if __name__ == '__main__':
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    model = Inception_ResNetv2().to(device)
    summary(model, input_size=(3, 299, 299))

summary prints the network structure and parameter counts, which makes it easy to inspect the assembled model.


Summary

This article introduced, as simply and thoroughly as possible, how and why Inception-ResNet combines Inception with ResNet, and walked through the model's structure and its PyTorch code.
