EfficientNet and Compound Scaling Theory Explained (MATLAB)

1. The EfficientNet Network and Model Compound Scaling

1.1 An Overview of EfficientNet

1.1.1 Background, Motivation, and Development

        EfficientNet is an efficient convolutional neural network (CNN) proposed in 2019 by Tan et al. at Google. It was designed to raise network performance while reducing computational cost, which is where the name "EfficientNet" comes from.

        To reach higher performance with fewer model parameters and less computation, Tan et al. first obtained the model's baseline network via Neural Architecture Search (NAS), calling it EfficientNet-B0.

Neural Architecture Search (NAS): the idea behind NAS is similar to hyperparameter optimization for machine-learning models (e.g., grid search or particle swarm optimization). Both define a search space of parameters to explore, and both are, in essence, optimization algorithms.

  • The NAS search space generally contains network-architecture parameters: the types of layers, their number, how they connect, and so on.
  • The search space of hyperparameter optimization is instead the set of model hyperparameters, which define the model's behavior and training process.

The NAS method the authors applied is "multi-objective neural architecture search"; the original paper is: MnasNet: Platform-Aware Neural Architecture Search for Mobile

        Starting from EfficientNet-B0, they then ran a grid search over combinations of scaling parameters, coordinating the "optimal ratio" between input image resolution γ, network depth α, and network width β during scaling, and on this basis proposed a scaling method that compounds all three factors, known as Compound Scaling. Applying compound scaling to the baseline model (EfficientNet-B0) yields EfficientNet-B1 through B7, for a total of eight EfficientNet networks of different scales, which achieved state-of-the-art results on a range of datasets at the time.

Note: in Tan et al.'s study the scaling coefficients (i.e., α, β, γ) were determined by grid search (Grid Search), not by Neural Architecture Search (NAS). Many bloggers get this wrong.
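The compound-scaling rule found by that grid search can be sketched numerically. The coefficients below (α = 1.2, β = 1.1, γ = 1.15) are the values reported in the EfficientNet paper; the script itself is only an illustrative sketch, not the authors' code.

```matlab
% Compound scaling sketch: for a compound coefficient phi, depth, width, and
% resolution are scaled by alpha^phi, beta^phi, and gamma^phi respectively.
% The paper constrains alpha * beta^2 * gamma^2 ~= 2, so each unit of phi
% roughly doubles the FLOPS.
alpha = 1.2; beta = 1.1; gamma = 1.15;    % coefficients from the paper
phi = 3;                                  % e.g., roughly EfficientNet-B3
depthMult  = alpha^phi;                   % multiplier on the number of layers
widthMult  = beta^phi;                    % multiplier on the channel counts
resMult    = gamma^phi;                   % multiplier on the input resolution
flopsRatio = (alpha*beta^2*gamma^2)^phi;  % approximate FLOPS growth (~2^phi)
fprintf("depth x%.2f, width x%.2f, resolution x%.2f, FLOPS x%.2f\n", ...
    depthMult, widthMult, resMult, flopsRatio);
```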

Original paper: Tan M, Le Q. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks[C]//International Conference on Machine Learning. PMLR, 2019: 6105-6114.

Paper link: [1905.11946] EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks

1.1.2 Model Performance

The network performs remarkably well on ImageNet, striking a good balance between accuracy and computational efficiency. EfficientNet uses the Mobile Inverted Bottleneck (MBConv) block as its basic module and shrinks or expands it along several dimensions, giving the network faster inference and a smaller model size.

Judging by benchmark scores, the lightweight EfficientNet-B0 (76.3%, 5.3M) beats ResNet-50 (75.3%, 25M), MobileNetV3-Large (75.2%, 5.4M), and ShuffleNet V2 (75.4%), but trails MobileViT V3-XS (76.7%, 2.5M) and MobileViT-S (78.4%, 5.6M). (It seems hybrid architectures are the way forward.)

The mid-scale EfficientNet-B4 (82.4%, 9.2M) beats ResNet-200 (81.8%), SENet-101 (81.4%, 49.2M), and DeepViT-L (82.2%, 55M).

The larger EfficientNet-B7 (84.4%, 66M) is on par with ResNeSt-269 (84.5%, 111M) and AMD (ViT-B/16) (84.6%, 87M).

From these results, replacing the baseline of many traditional models and methods with an EfficientNet baseline generally brings an improvement; the overall verdict is that it is weaker than hybrid models but stronger than most ConvNets.

Comparison chart

ImageNet leaderboard: ImageNet Benchmark (Image Classification) | Papers With Code

1.1.3 Follow-up Research on EfficientNet

After EfficientNet's success, researchers continued to refine the model and its methodology based on the compound-scaling idea. Representative follow-up work includes:

2019: Noisy Student (EfficientNet)

Paper: Self-training with Noisy Student improves ImageNet classification

Noisy Student is a semi-supervised learning method that extends the ideas of self-training and distillation; its core idea is to use unlabeled data to improve model performance. Xie et al. combined the Noisy Student method with the EfficientNet model, pushing its ImageNet accuracy a step further. The final NoisyStudent (EfficientNet-L2) scored (88.4%, 480M).

The EfficientNet-L2 here was scaled up from EfficientNet-B0 by Xie et al. and is much larger than B7: relative to B0, its width multiplier (Width) is 4.3, its depth multiplier (Depth) is 5.3, and its training resolution (Train Res.) is 800*800. For EfficientNet-B7 these three values are 2.0, 3.1, and 600*600.

2020: FixEfficientNet

Paper: Fixing the train-test resolution discrepancy: FixEfficientNet

Touvron et al. proposed the FixRes method, which jointly optimizes the resolution and scale choices used at training and test time while keeping the same region of classification (RoC) sampling. Combining it with EfficientNet gives FixEfficientNet; FixEfficientNet-L2 ultimately scored (88.5%, 480M).

2020: Meta Pseudo Labels (EfficientNet-L2)

Paper: Meta Pseudo Labels

Pham et al. proposed Meta Pseudo Labels, a semi-supervised learning method. Combined with EfficientNet-L2, it reached (90.2%, 480M) on ImageNet. Before ViT took off it was unbeatable, and it remains one of the top models on the ImageNet benchmark.

2021: EfficientNetV2

Paper: EfficientNetV2: Smaller Models and Faster Training

Building on EfficientNet's success, Tan et al. proposed EfficientNetV2, a new family of smaller and faster convolutional neural networks. Through training-aware NAS and scaling, EfficientNetV2 significantly outperforms earlier models in training speed and parameter efficiency.

1.2 The Structure of EfficientNet

1.2.1 EfficientNet's Basic Unit: the MBConv Block

By the standards of its day, EfficientNet is a patchwork of existing designs: because the network was found by NAS, its structure blends features of MobileNet, ResNet, and other models. Its basic building block is the MBConv block (Mobile Inverted Bottleneck Convolution, a mobile inverted-residual convolution module), which can be seen as a descendant of MobileNet's Inverted Residual Block. It has the following characteristics:

  1. Inverted bottleneck: the MBConv block first expands the channel dimension of the input feature map with a 1x1 convolution, then extracts spatial features with a depthwise convolution (Depthwise Convolution), and finally projects back down with another 1x1 convolution to the original channel count or close to it. This forms an inverted bottleneck: expand the channels first, then compress them.
  2. Residual connection: the MBConv block contains a residual connection (Residual Connection) that adds the block's input to the output of the operations above, promoting gradient flow, speeding up training, and helping to counter the degradation problem of deep networks.
  3. Squeeze-and-Excitation (SE) module: the classic EfficientNet versions include an SE module. The SE module first applies global average pooling (squeeze) and then two fully connected layers to adjust the inter-channel weights, recalibrating the feature channels by strengthening important features and suppressing unimportant ones.
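As a concrete illustration of the three points above, here is a minimal sketch of one MBConv-style block (expansion factor 6, SE ratio 1/4, stride 1) assembled from Deep Learning Toolbox layers. All layer names and sizes are my own choices, and `swishLayer` stands in for the Swish activation; imported official models often emulate Swish with a sigmoid plus an element-wise multiplication instead.

```matlab
% Sketch of one MBConv6-style block for a C-channel, stride-1 input.
% Simplified illustration, not the official implementation.
C = 16; expand = 6; k = 3; Cexp = C*expand;
net = dlnetwork;                         % empty network (recent MATLAB releases)
trunk = [
    imageInputLayer([56 56 C],"Normalization","none","Name","in")
    convolution2dLayer(1,Cexp,"Padding","same","Name","expand")       % 1x1 expansion
    batchNormalizationLayer("Name","bn1")
    swishLayer("Name","swish1")
    groupedConvolution2dLayer(k,1,Cexp,"Padding","same","Name","dw")  % depthwise conv
    batchNormalizationLayer("Name","bn2")
    swishLayer("Name","swish2")];
net = addLayers(net,trunk);
se = [
    globalAveragePooling2dLayer("Name","se_pool")        % squeeze
    convolution2dLayer(1,round(C/4),"Name","se_reduce")  % bottleneck 1x1 conv
    swishLayer("Name","se_swish")
    convolution2dLayer(1,Cexp,"Name","se_expand")
    sigmoidLayer("Name","se_sig")];                      % per-channel gates
net = addLayers(net,se);
tail = [
    multiplicationLayer(2,"Name","se_scale")                   % excite: rescale channels
    convolution2dLayer(1,C,"Padding","same","Name","project")  % 1x1 projection
    batchNormalizationLayer("Name","bn3")
    additionLayer(2,"Name","add")];                            % residual add
net = addLayers(net,tail);
net = connectLayers(net,"swish2","se_pool");
net = connectLayers(net,"swish2","se_scale/in1");
net = connectLayers(net,"se_sig","se_scale/in2");
net = connectLayers(net,"in","add/in2");   % identity shortcut (stride 1, same C)
net = initialize(net);
```

The identity shortcut is only valid when the block keeps the spatial size and channel count; blocks that downsample omit the addition.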

The MBConv block has two main variants, MBConv1 and MBConv6.

The structure of MBConv1

The diagrams are all my own; it is normal for them to differ from other people's, and where they differ I suggest taking mine as the reference. Indeed, many bloggers draw or explain this incorrectly.

MBConv1 is the structure found in EfficientNet's shallow layers:

MBConv1 consists of an ordinary convolution, a depthwise convolution (also called a grouped convolution), batch normalization (BN) layers, the Swish activation function, an SE module, an addition layer, and a multiplication layer. Assume the input has size M*M*N (spatial-spatial-channel, S-S-C). The first Conv expands the channel dimension, typically to six times that of the input; after MBConv1, the spatial size is downsampled to 1/2 of the input, while the output channel count is 1/2 of the original.

Note:

  • In EfficientNet, the activation function inside the SE module is Swish, not ReLU or anything else.
  • In MBConv1, the convolution after the Multiplication Layer downsamples the channel dimension, compressing it to 1/4 of the original; the SE module also contains a downsampling step, to 1/4 of its input channels.
  • The Depthwise-Conv kernel size is k*k and varies; 3*3 and 5*5 are common. See the structure table for the exact values.
The structure of MBConv6

MBConv6 is the deep-layer structure of EfficientNet. The figure below shows MBConv6 with 1, 2, and 3 layers; the 4- and 5-layer versions follow the same pattern.

It mainly nests the MBConv1 structure, joined by skip connections (Skip-Connection). Relative to the input (M*M*C), it downsamples the spatial scale to 1/2 and upsamples the channel scale to 3/2. When an MBConv6 stage has a single layer (Layers = 1), its structure is similar to MBConv1 but is preceded by an extra 1*1 Conv that upsamples the channel dimension by 6x. In two-layer, three-layer, and deeper stages, each MBConv6 layer is itself a single-layer MBConv6; they are not recursively nested. Note in particular that MBConv6 does not necessarily downsample the spatial scale; it depends on the design details of the model.

Notes:

  • Importantly, within the same stage, the depthwise convolution in the second and subsequent MBConv6 layers has a stride of 1, not 2, and the number of filters in each convolution matches that of the first MBConv6 layer, so that the output dimensions stay aligned.
  • Unlike a standalone MBConv1, in the MBConv1 inside MBConv6 the Conv after the Multiplication Layer downsamples the channels to 1/4 rather than 1/2.
  • In the official source code, the dropout behavior of MBConv is implemented during training with "Stochastic Depth", which regularizes the model (i.e., counters overfitting); there is no dropout layer inserted into the MBConv structure itself.
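The stochastic-depth behavior described in the last note can be sketched as a small helper function. The function name, the keep probability `p`, and the inference-time scaling below are illustrative assumptions, not the official implementation.

```matlab
% Stochastic depth sketch (save as stochasticDepth.m): during training, the
% residual branch of a block is kept with probability p and dropped otherwise;
% at inference the branch output is scaled by p (its expected value).
% x is the block input; branchOut is the residual-branch output (same size).
function y = stochasticDepth(x, branchOut, p, isTraining)
    if isTraining
        if rand < p
            y = x + branchOut;   % keep the residual branch
        else
            y = x;               % drop the branch: identity only
        end
    else
        y = x + p*branchOut;     % deterministic, expectation-matched inference
    end
end
```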

Building EfficientNet-B0 in MATLAB

Given the structures of MBConv1 and MBConv6 and the parameter table provided by Tan et al., it is quite easy to build a network with the EfficientNet structure.

Take EfficientNet-B0:

EfficientNet-B0 was found by NAS, whose optimization objective is ACC(m) \times [FLOPS(m)/T]^w, i.e., accuracy multiplied by a floating-point-cost term, where:

  • m denotes the current model.
  • T is a threshold or target value for FLOPS, a constant set by hand, used to trade off the model's computational complexity against its accuracy. (Strictly speaking, FLOPS here counts floating-point operations, not operations per second.)
  • w = -0.07 is a hyperparameter used to strike a balance between accuracy and FLOPS. Concretely, it adjusts the relative importance of accuracy gains versus added computation during the search; when w is negative (e.g., -0.07), the optimization favors models with lower FLOPS but relatively high accuracy.
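As a worked example of this objective (all accuracy and FLOPS numbers below are invented for illustration; only w = -0.07 and the form of the reward follow the paper):

```matlab
% NAS objective sketch: reward = ACC(m) * (FLOPS(m)/T)^w with w = -0.07.
w = -0.07; T = 400e6;            % illustrative target FLOPS
accA = 0.76; flopsA = 390e6;     % model A: slightly less accurate, cheap
accB = 0.77; flopsB = 800e6;     % model B: +1% accuracy at ~2x the cost
rewardA = accA * (flopsA/T)^w;
rewardB = accB * (flopsB/T)^w;
% (flopsB/T)^w is about 2^(-0.07) ~ 0.95, so B's 1% accuracy gain is roughly
% cancelled by its FLOPS penalty; the search prefers model A here.
fprintf("reward A = %.4f, reward B = %.4f\n", rewardA, rewardB);
```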

EfficientNet consists of 9 "Stages". Stage 2 is MBConv1, Stages 3-7 are multi-layer MBConv6 ("#Layers" gives the layer count), and Stage 8 is a single-layer MBConv6. You can see that in each Stage the spatial scale is downsampled to 1/2 and the channels are upsampled to about 3/2; note, however, that in Stage 7 the four-layer MBConv6 does not downsample spatially.

In EfficientNet-B0 the channel upsampling also does not strictly follow the 3/2 ratio, e.g., in Stage 4 and Stage 5. This is because EfficientNet's parameters were found by NAS rather than designed by hand, so they can be counter-intuitive, but the expansion ratio is mostly between about 3/2 and 2. This is not a strict requirement; a little more or less has no obvious effect on model performance.

OK, on this basis let us build EfficientNet-B0 (the code below is copied from the structure-generation code of MATLAB's official EfficientNet-b0 pretrained model):


net = dlnetwork; tempNet = [imageInputLayer([224 224 3],"Name","ImageInput","Normalization","zscore") convolution2dLayer([3 3],32,"Name","efficientnet-b0|model|stem|conv2d|Conv2D","Padding",[0 1 0 1],"Stride",[2 2]) batchNormalizationLayer("Name","efficientnet-b0|model|stem|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|stem|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|stem|MulLayer") groupedConvolution2dLayer([3 3],1,32,"Name","efficientnet-b0|model|blocks_0|depthwise_conv2d|depthwise","Padding",[1 1 1 1]) batchNormalizationLayer("Name","efficientnet-b0|model|blocks_0|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_0|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_0|MulLayer");
net = addLayers(net,tempNet); tempNet = [globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_0|se|GlobAvgPool") convolution2dLayer([1 1],8,"Name","Conv__301")];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_0|se|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_0|se|MulLayer") convolution2dLayer([1 1],32,"Name","Conv__304") sigmoidLayer("Name","efficientnet-b0|model|blocks_0|se|SigmoidLayer_1")];
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_0|se|MulLayer_1") convolution2dLayer([1 1],16,"Name","efficientnet-b0|model|blocks_0|conv2d|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_0|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745) convolution2dLayer([1 1],96,"Name","efficientnet-b0|model|blocks_1|conv2d|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_1|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_1|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_1|MulLayer") groupedConvolution2dLayer([3 3],1,96,"Name","efficientnet-b0|model|blocks_1|depthwise_conv2d|depthwise","Padding",[0 1 0 1],"Stride",[2 2]) batchNormalizationLayer("Name","efficientnet-b0|model|blocks_1|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_1|SigmoidLayer_1");
net = addLayers(net,tempNet); tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_1|MulLayer_1");
net = addLayers(net,tempNet); tempNet = [globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_1|se|GlobAvgPool") convolution2dLayer([1 1],4,"Name","Conv__309")];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_1|se|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_1|se|MulLayer") convolution2dLayer([1 1],96,"Name","Conv__312") sigmoidLayer("Name","efficientnet-b0|model|blocks_1|se|SigmoidLayer_1")];
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_1|se|MulLayer_1") convolution2dLayer([1 1],24,"Name","efficientnet-b0|model|blocks_1|conv2d_1|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_1|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = [convolution2dLayer([1 1],144,"Name","efficientnet-b0|model|blocks_2|conv2d|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_2|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_2|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_2|MulLayer") groupedConvolution2dLayer([3 3],1,144,"Name","efficientnet-b0|model|blocks_2|depthwise_conv2d|depthwise","Padding",[1 1 1 1]) batchNormalizationLayer("Name","efficientnet-b0|model|blocks_2|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_2|SigmoidLayer_1");
net = addLayers(net,tempNet); tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_2|MulLayer_1");
net = addLayers(net,tempNet); tempNet = [globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_2|se|GlobAvgPool") convolution2dLayer([1 1],6,"Name","Conv__319")];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_2|se|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_2|se|MulLayer") convolution2dLayer([1 1],144,"Name","Conv__322") sigmoidLayer("Name","efficientnet-b0|model|blocks_2|se|SigmoidLayer_1")];
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_2|se|MulLayer_1") convolution2dLayer([1 1],24,"Name","efficientnet-b0|model|blocks_2|conv2d_1|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_2|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = [additionLayer(2,"Name","efficientnet-b0|model|blocks_2|Add") convolution2dLayer([1 1],144,"Name","efficientnet-b0|model|blocks_3|conv2d|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_3|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_3|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_3|MulLayer") groupedConvolution2dLayer([5 5],1,144,"Name","efficientnet-b0|model|blocks_3|depthwise_conv2d|depthwise","Padding",[1 2 1 2],"Stride",[2 2]) batchNormalizationLayer("Name","efficientnet-b0|model|blocks_3|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_3|SigmoidLayer_1");
net = addLayers(net,tempNet); tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_3|MulLayer_1");
net = addLayers(net,tempNet); tempNet = [globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_3|se|GlobAvgPool") convolution2dLayer([1 1],6,"Name","Conv__327")];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_3|se|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_3|se|MulLayer") convolution2dLayer([1 1],144,"Name","Conv__330") sigmoidLayer("Name","efficientnet-b0|model|blocks_3|se|SigmoidLayer_1")];
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_3|se|MulLayer_1") convolution2dLayer([1 1],40,"Name","efficientnet-b0|model|blocks_3|conv2d_1|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_3|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = [convolution2dLayer([1 1],240,"Name","efficientnet-b0|model|blocks_4|conv2d|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_4|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_4|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_4|MulLayer") groupedConvolution2dLayer([5 5],1,240,"Name","efficientnet-b0|model|blocks_4|depthwise_conv2d|depthwise","Padding",[2 2 2 2]) batchNormalizationLayer("Name","efficientnet-b0|model|blocks_4|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_4|SigmoidLayer_1");
net = addLayers(net,tempNet); tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_4|MulLayer_1");
net = addLayers(net,tempNet); tempNet = [globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_4|se|GlobAvgPool") convolution2dLayer([1 1],10,"Name","Conv__337")];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_4|se|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_4|se|MulLayer") convolution2dLayer([1 1],240,"Name","Conv__340") sigmoidLayer("Name","efficientnet-b0|model|blocks_4|se|SigmoidLayer_1")];
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_4|se|MulLayer_1") convolution2dLayer([1 1],40,"Name","efficientnet-b0|model|blocks_4|conv2d_1|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_4|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = [additionLayer(2,"Name","efficientnet-b0|model|blocks_4|Add") convolution2dLayer([1 1],240,"Name","efficientnet-b0|model|blocks_5|conv2d|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_5|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_5|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_5|MulLayer") groupedConvolution2dLayer([3 3],1,240,"Name","efficientnet-b0|model|blocks_5|depthwise_conv2d|depthwise","Padding",[0 1 0 1],"Stride",[2 2]) batchNormalizationLayer("Name","efficientnet-b0|model|blocks_5|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_5|SigmoidLayer_1");
net = addLayers(net,tempNet); tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_5|MulLayer_1");
net = addLayers(net,tempNet); tempNet = [globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_5|se|GlobAvgPool") convolution2dLayer([1 1],10,"Name","Conv__345")];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_5|se|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_5|se|MulLayer") convolution2dLayer([1 1],240,"Name","Conv__348") sigmoidLayer("Name","efficientnet-b0|model|blocks_5|se|SigmoidLayer_1")];
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_5|se|MulLayer_1") convolution2dLayer([1 1],80,"Name","efficientnet-b0|model|blocks_5|conv2d_1|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_5|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = [convolution2dLayer([1 1],480,"Name","efficientnet-b0|model|blocks_6|conv2d|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_6|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_6|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_6|MulLayer") groupedConvolution2dLayer([3 3],1,480,"Name","efficientnet-b0|model|blocks_6|depthwise_conv2d|depthwise","Padding",[1 1 1 1]) batchNormalizationLayer("Name","efficientnet-b0|model|blocks_6|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_6|SigmoidLayer_1");
net = addLayers(net,tempNet); tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_6|MulLayer_1");
net = addLayers(net,tempNet); tempNet = [globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_6|se|GlobAvgPool") convolution2dLayer([1 1],20,"Name","Conv__355")];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_6|se|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_6|se|MulLayer") convolution2dLayer([1 1],480,"Name","Conv__358") sigmoidLayer("Name","efficientnet-b0|model|blocks_6|se|SigmoidLayer_1")];
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_6|se|MulLayer_1") convolution2dLayer([1 1],80,"Name","efficientnet-b0|model|blocks_6|conv2d_1|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_6|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = additionLayer(2,"Name","efficientnet-b0|model|blocks_6|Add");
net = addLayers(net,tempNet); tempNet = [convolution2dLayer([1 1],480,"Name","efficientnet-b0|model|blocks_7|conv2d|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_7|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_7|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_7|MulLayer") groupedConvolution2dLayer([3 3],1,480,"Name","efficientnet-b0|model|blocks_7|depthwise_conv2d|depthwise","Padding",[1 1 1 1]) batchNormalizationLayer("Name","efficientnet-b0|model|blocks_7|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_7|SigmoidLayer_1");
net = addLayers(net,tempNet); tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_7|MulLayer_1");
net = addLayers(net,tempNet); tempNet = [globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_7|se|GlobAvgPool") convolution2dLayer([1 1],20,"Name","Conv__365")];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_7|se|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_7|se|MulLayer") convolution2dLayer([1 1],480,"Name","Conv__368") sigmoidLayer("Name","efficientnet-b0|model|blocks_7|se|SigmoidLayer_1")];
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_7|se|MulLayer_1") convolution2dLayer([1 1],80,"Name","efficientnet-b0|model|blocks_7|conv2d_1|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_7|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = [additionLayer(2,"Name","efficientnet-b0|model|blocks_7|Add") convolution2dLayer([1 1],480,"Name","efficientnet-b0|model|blocks_8|conv2d|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_8|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_8|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_8|MulLayer") groupedConvolution2dLayer([5 5],1,480,"Name","efficientnet-b0|model|blocks_8|depthwise_conv2d|depthwise","Padding",[2 2 2 2]) batchNormalizationLayer("Name","efficientnet-b0|model|blocks_8|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_8|SigmoidLayer_1");
net = addLayers(net,tempNet); tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_8|MulLayer_1");
net = addLayers(net,tempNet); tempNet = [globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_8|se|GlobAvgPool") convolution2dLayer([1 1],20,"Name","Conv__373")];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_8|se|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_8|se|MulLayer") convolution2dLayer([1 1],480,"Name","Conv__376") sigmoidLayer("Name","efficientnet-b0|model|blocks_8|se|SigmoidLayer_1")];
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_8|se|MulLayer_1") convolution2dLayer([1 1],112,"Name","efficientnet-b0|model|blocks_8|conv2d_1|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_8|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = [convolution2dLayer([1 1],672,"Name","efficientnet-b0|model|blocks_9|conv2d|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_9|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_9|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_9|MulLayer") groupedConvolution2dLayer([5 5],1,672,"Name","efficientnet-b0|model|blocks_9|depthwise_conv2d|depthwise","Padding",[2 2 2 2]) batchNormalizationLayer("Name","efficientnet-b0|model|blocks_9|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_9|SigmoidLayer_1");
net = addLayers(net,tempNet); tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_9|MulLayer_1");
net = addLayers(net,tempNet); tempNet = [globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_9|se|GlobAvgPool") convolution2dLayer([1 1],28,"Name","Conv__383")];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_9|se|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_9|se|MulLayer") convolution2dLayer([1 1],672,"Name","Conv__386") sigmoidLayer("Name","efficientnet-b0|model|blocks_9|se|SigmoidLayer_1")];
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_9|se|MulLayer_1") convolution2dLayer([1 1],112,"Name","efficientnet-b0|model|blocks_9|conv2d_1|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_9|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = additionLayer(2,"Name","efficientnet-b0|model|blocks_9|Add");
net = addLayers(net,tempNet); tempNet = [convolution2dLayer([1 1],672,"Name","efficientnet-b0|model|blocks_10|conv2d|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_10|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_10|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_10|MulLayer") groupedConvolution2dLayer([5 5],1,672,"Name","efficientnet-b0|model|blocks_10|depthwise_conv2d|depthwise","Padding",[2 2 2 2]) batchNormalizationLayer("Name","efficientnet-b0|model|blocks_10|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_10|SigmoidLayer_1");
net = addLayers(net,tempNet); tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_10|MulLayer_1");
net = addLayers(net,tempNet); tempNet = [globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_10|se|GlobAvgPool") convolution2dLayer([1 1],28,"Name","Conv__393")];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_10|se|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_10|se|MulLayer") convolution2dLayer([1 1],672,"Name","Conv__396") sigmoidLayer("Name","efficientnet-b0|model|blocks_10|se|SigmoidLayer_1")];
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_10|se|MulLayer_1") convolution2dLayer([1 1],112,"Name","efficientnet-b0|model|blocks_10|conv2d_1|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_10|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = [additionLayer(2,"Name","efficientnet-b0|model|blocks_10|Add") convolution2dLayer([1 1],672,"Name","efficientnet-b0|model|blocks_11|conv2d|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_11|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_11|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_11|MulLayer") groupedConvolution2dLayer([5 5],1,672,"Name","efficientnet-b0|model|blocks_11|depthwise_conv2d|depthwise","Padding",[1 2 1 2],"Stride",[2 2]) batchNormalizationLayer("Name","efficientnet-b0|model|blocks_11|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_11|SigmoidLayer_1");
net = addLayers(net,tempNet); tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_11|MulLayer_1");
net = addLayers(net,tempNet); tempNet = [globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_11|se|GlobAvgPool") convolution2dLayer([1 1],28,"Name","Conv__401")];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_11|se|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_11|se|MulLayer") convolution2dLayer([1 1],672,"Name","Conv__404") sigmoidLayer("Name","efficientnet-b0|model|blocks_11|se|SigmoidLayer_1")];
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_11|se|MulLayer_1") convolution2dLayer([1 1],192,"Name","efficientnet-b0|model|blocks_11|conv2d_1|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_11|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = [convolution2dLayer([1 1],1152,"Name","efficientnet-b0|model|blocks_12|conv2d|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_12|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_12|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_12|MulLayer") groupedConvolution2dLayer([5 5],1,1152,"Name","efficientnet-b0|model|blocks_12|depthwise_conv2d|depthwise","Padding",[2 2 2 2]) batchNormalizationLayer("Name","efficientnet-b0|model|blocks_12|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_12|SigmoidLayer_1");
net = addLayers(net,tempNet); tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_12|MulLayer_1");
net = addLayers(net,tempNet); tempNet = [globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_12|se|GlobAvgPool") convolution2dLayer([1 1],48,"Name","Conv__411")];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_12|se|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_12|se|MulLayer") convolution2dLayer([1 1],1152,"Name","Conv__414") sigmoidLayer("Name","efficientnet-b0|model|blocks_12|se|SigmoidLayer_1")];
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_12|se|MulLayer_1") convolution2dLayer([1 1],192,"Name","efficientnet-b0|model|blocks_12|conv2d_1|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_12|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = additionLayer(2,"Name","efficientnet-b0|model|blocks_12|Add");
net = addLayers(net,tempNet); tempNet = [convolution2dLayer([1 1],1152,"Name","efficientnet-b0|model|blocks_13|conv2d|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_13|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_13|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_13|MulLayer") groupedConvolution2dLayer([5 5],1,1152,"Name","efficientnet-b0|model|blocks_13|depthwise_conv2d|depthwise","Padding",[2 2 2 2]) batchNormalizationLayer("Name","efficientnet-b0|model|blocks_13|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_13|SigmoidLayer_1");
net = addLayers(net,tempNet); tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_13|MulLayer_1");
net = addLayers(net,tempNet); tempNet = [globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_13|se|GlobAvgPool") convolution2dLayer([1 1],48,"Name","Conv__421")];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_13|se|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_13|se|MulLayer") convolution2dLayer([1 1],1152,"Name","Conv__424") sigmoidLayer("Name","efficientnet-b0|model|blocks_13|se|SigmoidLayer_1")];
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_13|se|MulLayer_1") convolution2dLayer([1 1],192,"Name","efficientnet-b0|model|blocks_13|conv2d_1|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_13|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = additionLayer(2,"Name","efficientnet-b0|model|blocks_13|Add");
net = addLayers(net,tempNet); tempNet = [convolution2dLayer([1 1],1152,"Name","efficientnet-b0|model|blocks_14|conv2d|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_14|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_14|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_14|MulLayer") groupedConvolution2dLayer([5 5],1,1152,"Name","efficientnet-b0|model|blocks_14|depthwise_conv2d|depthwise","Padding",[2 2 2 2]) batchNormalizationLayer("Name","efficientnet-b0|model|blocks_14|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_14|SigmoidLayer_1");
net = addLayers(net,tempNet); tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_14|MulLayer_1");
net = addLayers(net,tempNet); tempNet = [globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_14|se|GlobAvgPool") convolution2dLayer([1 1],48,"Name","Conv__431")];
net = addLayers(net,tempNet); tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_14|se|SigmoidLayer");
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_14|se|MulLayer") convolution2dLayer([1 1],1152,"Name","Conv__434") sigmoidLayer("Name","efficientnet-b0|model|blocks_14|se|SigmoidLayer_1")];
net = addLayers(net,tempNet); tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_14|se|MulLayer_1") convolution2dLayer([1 1],192,"Name","efficientnet-b0|model|blocks_14|conv2d_1|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_14|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet); tempNet = [additionLayer(2,"Name","efficientnet-b0|model|blocks_14|Add") convolution2dLayer([1 1],1152,"Name","efficientnet-b0|model|blocks_15|conv2d|Conv2D") batchNormalizationLayer("Name","efficientnet-b0|model|blocks_15|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_15|SigmoidLayer");
net = addLayers(net,tempNet);tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_15|MulLayer")groupedConvolution2dLayer([3 3],1,1152,"Name","efficientnet-b0|model|blocks_15|depthwise_conv2d|depthwise","Padding",[1 1 1 1])batchNormalizationLayer("Name","efficientnet-b0|model|blocks_15|tpu_batch_normalization_1|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_15|SigmoidLayer_1");
net = addLayers(net,tempNet);tempNet = multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_15|MulLayer_1");
net = addLayers(net,tempNet);tempNet = [globalAveragePooling2dLayer("Name","efficientnet-b0|model|blocks_15|se|GlobAvgPool")convolution2dLayer([1 1],48,"Name","Conv__439")];
net = addLayers(net,tempNet);tempNet = sigmoidLayer("Name","efficientnet-b0|model|blocks_15|se|SigmoidLayer");
net = addLayers(net,tempNet);tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_15|se|MulLayer")convolution2dLayer([1 1],1152,"Name","Conv__442")sigmoidLayer("Name","efficientnet-b0|model|blocks_15|se|SigmoidLayer_1")];
net = addLayers(net,tempNet);tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|blocks_15|se|MulLayer_1")convolution2dLayer([1 1],320,"Name","efficientnet-b0|model|blocks_15|conv2d_1|Conv2D")batchNormalizationLayer("Name","efficientnet-b0|model|blocks_15|tpu_batch_normalization_2|FusedBatchNorm","Epsilon",0.00100000004749745)convolution2dLayer([1 1],1280,"Name","efficientnet-b0|model|head|conv2d|Conv2D")batchNormalizationLayer("Name","efficientnet-b0|model|head|tpu_batch_normalization|FusedBatchNorm","Epsilon",0.00100000004749745)];
net = addLayers(net,tempNet);tempNet = sigmoidLayer("Name","efficientnet-b0|model|head|SigmoidLayer");
net = addLayers(net,tempNet);tempNet = [multiplicationLayer(2,"Name","efficientnet-b0|model|head|MulLayer")globalAveragePooling2dLayer("Name","efficientnet-b0|model|head|global_average_pooling2d|GlobAvgPool")fullyConnectedLayer(1000,"Name","efficientnet-b0|model|head|dense|MatMul")softmaxLayer("Name","Softmax")];
net = addLayers(net,tempNet);% clean up helper variable
clear tempNet;Connect Layer Branches
Connect all the branches of the network to create the network graph.
net = connectLayers(net,"efficientnet-b0|model|stem|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|stem|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|stem|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|stem|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|stem|SigmoidLayer","efficientnet-b0|model|stem|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_0|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_0|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_0|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_0|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_0|SigmoidLayer","efficientnet-b0|model|blocks_0|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_0|MulLayer","efficientnet-b0|model|blocks_0|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_0|MulLayer","efficientnet-b0|model|blocks_0|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__301","efficientnet-b0|model|blocks_0|se|SigmoidLayer");
net = connectLayers(net,"Conv__301","efficientnet-b0|model|blocks_0|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_0|se|SigmoidLayer","efficientnet-b0|model|blocks_0|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_0|se|SigmoidLayer_1","efficientnet-b0|model|blocks_0|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_1|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_1|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|SigmoidLayer","efficientnet-b0|model|blocks_1|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_1|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_1|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|SigmoidLayer_1","efficientnet-b0|model|blocks_1|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|MulLayer_1","efficientnet-b0|model|blocks_1|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|MulLayer_1","efficientnet-b0|model|blocks_1|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__309","efficientnet-b0|model|blocks_1|se|SigmoidLayer");
net = connectLayers(net,"Conv__309","efficientnet-b0|model|blocks_1|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|se|SigmoidLayer","efficientnet-b0|model|blocks_1|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|se|SigmoidLayer_1","efficientnet-b0|model|blocks_1|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_2|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_1|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_2|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_2|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_2|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|SigmoidLayer","efficientnet-b0|model|blocks_2|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_2|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_2|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|SigmoidLayer_1","efficientnet-b0|model|blocks_2|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|MulLayer_1","efficientnet-b0|model|blocks_2|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|MulLayer_1","efficientnet-b0|model|blocks_2|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__319","efficientnet-b0|model|blocks_2|se|SigmoidLayer");
net = connectLayers(net,"Conv__319","efficientnet-b0|model|blocks_2|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|se|SigmoidLayer","efficientnet-b0|model|blocks_2|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|se|SigmoidLayer_1","efficientnet-b0|model|blocks_2|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_2|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_2|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_3|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_3|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|SigmoidLayer","efficientnet-b0|model|blocks_3|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_3|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_3|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|SigmoidLayer_1","efficientnet-b0|model|blocks_3|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|MulLayer_1","efficientnet-b0|model|blocks_3|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|MulLayer_1","efficientnet-b0|model|blocks_3|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__327","efficientnet-b0|model|blocks_3|se|SigmoidLayer");
net = connectLayers(net,"Conv__327","efficientnet-b0|model|blocks_3|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|se|SigmoidLayer","efficientnet-b0|model|blocks_3|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|se|SigmoidLayer_1","efficientnet-b0|model|blocks_3|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_4|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_3|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_4|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_4|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_4|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|SigmoidLayer","efficientnet-b0|model|blocks_4|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_4|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_4|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|SigmoidLayer_1","efficientnet-b0|model|blocks_4|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|MulLayer_1","efficientnet-b0|model|blocks_4|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|MulLayer_1","efficientnet-b0|model|blocks_4|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__337","efficientnet-b0|model|blocks_4|se|SigmoidLayer");
net = connectLayers(net,"Conv__337","efficientnet-b0|model|blocks_4|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|se|SigmoidLayer","efficientnet-b0|model|blocks_4|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|se|SigmoidLayer_1","efficientnet-b0|model|blocks_4|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_4|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_4|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_5|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_5|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|SigmoidLayer","efficientnet-b0|model|blocks_5|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_5|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_5|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|SigmoidLayer_1","efficientnet-b0|model|blocks_5|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|MulLayer_1","efficientnet-b0|model|blocks_5|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|MulLayer_1","efficientnet-b0|model|blocks_5|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__345","efficientnet-b0|model|blocks_5|se|SigmoidLayer");
net = connectLayers(net,"Conv__345","efficientnet-b0|model|blocks_5|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|se|SigmoidLayer","efficientnet-b0|model|blocks_5|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|se|SigmoidLayer_1","efficientnet-b0|model|blocks_5|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_6|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_5|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_6|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_6|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_6|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|SigmoidLayer","efficientnet-b0|model|blocks_6|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_6|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_6|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|SigmoidLayer_1","efficientnet-b0|model|blocks_6|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|MulLayer_1","efficientnet-b0|model|blocks_6|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|MulLayer_1","efficientnet-b0|model|blocks_6|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__355","efficientnet-b0|model|blocks_6|se|SigmoidLayer");
net = connectLayers(net,"Conv__355","efficientnet-b0|model|blocks_6|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|se|SigmoidLayer","efficientnet-b0|model|blocks_6|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|se|SigmoidLayer_1","efficientnet-b0|model|blocks_6|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_6|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|Add","efficientnet-b0|model|blocks_7|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_6|Add","efficientnet-b0|model|blocks_7|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_7|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_7|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|SigmoidLayer","efficientnet-b0|model|blocks_7|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_7|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_7|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|SigmoidLayer_1","efficientnet-b0|model|blocks_7|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|MulLayer_1","efficientnet-b0|model|blocks_7|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|MulLayer_1","efficientnet-b0|model|blocks_7|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__365","efficientnet-b0|model|blocks_7|se|SigmoidLayer");
net = connectLayers(net,"Conv__365","efficientnet-b0|model|blocks_7|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|se|SigmoidLayer","efficientnet-b0|model|blocks_7|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|se|SigmoidLayer_1","efficientnet-b0|model|blocks_7|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_7|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_7|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_8|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_8|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|SigmoidLayer","efficientnet-b0|model|blocks_8|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_8|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_8|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|SigmoidLayer_1","efficientnet-b0|model|blocks_8|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|MulLayer_1","efficientnet-b0|model|blocks_8|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|MulLayer_1","efficientnet-b0|model|blocks_8|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__373","efficientnet-b0|model|blocks_8|se|SigmoidLayer");
net = connectLayers(net,"Conv__373","efficientnet-b0|model|blocks_8|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|se|SigmoidLayer","efficientnet-b0|model|blocks_8|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|se|SigmoidLayer_1","efficientnet-b0|model|blocks_8|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_9|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_8|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_9|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_9|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_9|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|SigmoidLayer","efficientnet-b0|model|blocks_9|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_9|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_9|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|SigmoidLayer_1","efficientnet-b0|model|blocks_9|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|MulLayer_1","efficientnet-b0|model|blocks_9|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|MulLayer_1","efficientnet-b0|model|blocks_9|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__383","efficientnet-b0|model|blocks_9|se|SigmoidLayer");
net = connectLayers(net,"Conv__383","efficientnet-b0|model|blocks_9|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|se|SigmoidLayer","efficientnet-b0|model|blocks_9|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|se|SigmoidLayer_1","efficientnet-b0|model|blocks_9|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_9|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|Add","efficientnet-b0|model|blocks_10|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_9|Add","efficientnet-b0|model|blocks_10|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_10|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_10|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|SigmoidLayer","efficientnet-b0|model|blocks_10|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_10|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_10|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|SigmoidLayer_1","efficientnet-b0|model|blocks_10|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|MulLayer_1","efficientnet-b0|model|blocks_10|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|MulLayer_1","efficientnet-b0|model|blocks_10|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__393","efficientnet-b0|model|blocks_10|se|SigmoidLayer");
net = connectLayers(net,"Conv__393","efficientnet-b0|model|blocks_10|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|se|SigmoidLayer","efficientnet-b0|model|blocks_10|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|se|SigmoidLayer_1","efficientnet-b0|model|blocks_10|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_10|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_10|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_11|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_11|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|SigmoidLayer","efficientnet-b0|model|blocks_11|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_11|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_11|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|SigmoidLayer_1","efficientnet-b0|model|blocks_11|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|MulLayer_1","efficientnet-b0|model|blocks_11|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|MulLayer_1","efficientnet-b0|model|blocks_11|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__401","efficientnet-b0|model|blocks_11|se|SigmoidLayer");
net = connectLayers(net,"Conv__401","efficientnet-b0|model|blocks_11|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|se|SigmoidLayer","efficientnet-b0|model|blocks_11|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|se|SigmoidLayer_1","efficientnet-b0|model|blocks_11|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_12|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_11|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_12|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_12|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_12|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|SigmoidLayer","efficientnet-b0|model|blocks_12|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_12|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_12|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|SigmoidLayer_1","efficientnet-b0|model|blocks_12|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|MulLayer_1","efficientnet-b0|model|blocks_12|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|MulLayer_1","efficientnet-b0|model|blocks_12|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__411","efficientnet-b0|model|blocks_12|se|SigmoidLayer");
net = connectLayers(net,"Conv__411","efficientnet-b0|model|blocks_12|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|se|SigmoidLayer","efficientnet-b0|model|blocks_12|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|se|SigmoidLayer_1","efficientnet-b0|model|blocks_12|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_12|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|Add","efficientnet-b0|model|blocks_13|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_12|Add","efficientnet-b0|model|blocks_13|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_13|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_13|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|SigmoidLayer","efficientnet-b0|model|blocks_13|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_13|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_13|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|SigmoidLayer_1","efficientnet-b0|model|blocks_13|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|MulLayer_1","efficientnet-b0|model|blocks_13|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|MulLayer_1","efficientnet-b0|model|blocks_13|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__421","efficientnet-b0|model|blocks_13|se|SigmoidLayer");
net = connectLayers(net,"Conv__421","efficientnet-b0|model|blocks_13|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|se|SigmoidLayer","efficientnet-b0|model|blocks_13|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|se|SigmoidLayer_1","efficientnet-b0|model|blocks_13|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_13|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|Add","efficientnet-b0|model|blocks_14|conv2d|Conv2D");
net = connectLayers(net,"efficientnet-b0|model|blocks_13|Add","efficientnet-b0|model|blocks_14|Add/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_14|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_14|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|SigmoidLayer","efficientnet-b0|model|blocks_14|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_14|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_14|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|SigmoidLayer_1","efficientnet-b0|model|blocks_14|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|MulLayer_1","efficientnet-b0|model|blocks_14|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|MulLayer_1","efficientnet-b0|model|blocks_14|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__431","efficientnet-b0|model|blocks_14|se|SigmoidLayer");
net = connectLayers(net,"Conv__431","efficientnet-b0|model|blocks_14|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|se|SigmoidLayer","efficientnet-b0|model|blocks_14|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|se|SigmoidLayer_1","efficientnet-b0|model|blocks_14|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_14|tpu_batch_normalization_2|FusedBatchNorm","efficientnet-b0|model|blocks_14|Add/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_15|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|blocks_15|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|SigmoidLayer","efficientnet-b0|model|blocks_15|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_15|SigmoidLayer_1");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|tpu_batch_normalization_1|FusedBatchNorm","efficientnet-b0|model|blocks_15|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|SigmoidLayer_1","efficientnet-b0|model|blocks_15|MulLayer_1/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|MulLayer_1","efficientnet-b0|model|blocks_15|se|GlobAvgPool");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|MulLayer_1","efficientnet-b0|model|blocks_15|se|MulLayer_1/in2");
net = connectLayers(net,"Conv__439","efficientnet-b0|model|blocks_15|se|SigmoidLayer");
net = connectLayers(net,"Conv__439","efficientnet-b0|model|blocks_15|se|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|se|SigmoidLayer","efficientnet-b0|model|blocks_15|se|MulLayer/in2");
net = connectLayers(net,"efficientnet-b0|model|blocks_15|se|SigmoidLayer_1","efficientnet-b0|model|blocks_15|se|MulLayer_1/in1");
net = connectLayers(net,"efficientnet-b0|model|head|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|head|SigmoidLayer");
net = connectLayers(net,"efficientnet-b0|model|head|tpu_batch_normalization|FusedBatchNorm","efficientnet-b0|model|head|MulLayer/in1");
net = connectLayers(net,"efficientnet-b0|model|head|SigmoidLayer","efficientnet-b0|model|head|MulLayer/in2");
net = initialize(net);
plot(net);

1.3 Compound Scaling Theory and the Scaled EfficientNet Models (B1~B7)

1.3.1 Problems with Scaling a Single Dimension

There are many ways to scale a ConvNet under different resource constraints: ResNet can be scaled down (e.g., ResNet-18) or up (e.g., ResNet-200) by adjusting network depth (#layers), while WideResNet and MobileNets can be scaled by network width (#channels). It is also widely recognized that a larger input image size helps accuracy, at the cost of more FLOPS. Yet although prior work has shown that both network depth and width matter for a ConvNet's expressive power, how to scale a ConvNet effectively for better efficiency and accuracy remained an open question.

Intuitively, for higher-resolution images we should increase network depth, so that the larger receptive field can capture similar features that now span more pixels. Correspondingly, at higher resolution we should also increase network width, in order to capture the more fine-grained patterns those additional pixels contain.

Schematic from Tan et al.: compound scaling must jointly balance several model dimensions, including width, depth, and resolution.

The main difficulty is that the optimal d, w, r depend on one another, and their values change under different resource constraints. Because of this, conventional approaches mostly scale ConvNets along only one dimension: depth, width, or resolution. Related studies show that scaling any one of these dimensions improves accuracy, but the gain diminishes as models grow larger. In other words, repeatedly scaling a single dimension is not compute-efficient; a more principled method is needed to guide model scaling.

1.3.2 Constructing the Compound Scaling Method

The above optimization problem can be formalized mathematically as follows.

Under a fixed computational budget (the total compute doubled), the network's optimal depth, width, and input resolution relate to compute as (Eq. 1):

Depth: d = \alpha^{\phi}

Width: w = \beta^{\phi}

Resolution: r = \gamma^{\phi}

subject to: \alpha \cdot \beta^{2} \cdot \gamma^{2} \approx 2 \quad (\alpha \ge 1,\ \beta \ge 1,\ \gamma \ge 1)

The compound coefficient \phi is set to 1, while \alpha, \beta, \gamma are constants that determine how the extra resources are allocated to network depth, width, and resolution, respectively.
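Eq. 1 is easy to sanity-check numerically. The short Python snippet below (Python is used here purely for the arithmetic; the surrounding tutorial otherwise uses MATLAB) plugs in the \alpha, \beta, \gamma values that Tan et al. report for EfficientNet-B0 and verifies the FLOPS constraint:

```python
# Compound scaling multipliers (Eq. 1): d = alpha^phi, w = beta^phi, r = gamma^phi.
# 1.2 / 1.1 / 1.15 are the grid-searched constants reported for EfficientNet-B0.
alpha, beta, gamma = 1.2, 1.1, 1.15

# FLOPS scale with d * w^2 * r^2 = (alpha * beta^2 * gamma^2)^phi,
# so the base factor alpha * beta^2 * gamma^2 should be close to 2.
flops_factor = alpha * beta**2 * gamma**2
print(round(flops_factor, 4))  # ~1.9203, close to 2 as Eq. 1 requires

phi = 1
d, w, r = alpha**phi, beta**phi, gamma**phi
print(d, w, r)
```

So at \phi = 1 the multipliers simply equal the constants themselves, and each unit increase of \phi roughly doubles FLOPS.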

The optimization problem can then be written as (Eq. 2):

\max_{d,w,r} \; Accuracy(Net(d, w, r))

s.t. \; Net(d, w, r) = \bigodot_{i=1 \dots s} \hat{F}_i^{\, d \cdot \hat{L}_i}\left(X_{\langle r \cdot \hat{H}_i,\ r \cdot \hat{W}_i,\ w \cdot \hat{C}_i \rangle}\right)

Memory(Net) \le \text{target memory}

FLOPS(Net) \le \text{target FLOPS}

Here \bigodot_{i=1 \dots s} \hat{F}_i denotes the composition of the s stages of EfficientNet-B0. d, w, r are the multipliers applied to the depth, width, and resolution of the scaled network, while \hat{F}_i, \hat{L}_i, \hat{H}_i, \hat{W}_i, \hat{C}_i are parameters predefined in the baseline network; the hat (widehat) marks quantities that refer specifically to EfficientNet-B0.

where:

  • \hat{F}_i is the operator of stage i in the baseline network, i.e., its layer structure.
  • \hat{L}_i is the number of times the layer is repeated in stage i of the baseline network; it is scaled by the depth multiplier d.
  • \hat{H}_i is the height of the input tensor at stage i; it is scaled by the resolution multiplier r.
  • \hat{W}_i is the width of the input tensor at stage i; it is scaled by the resolution multiplier r.
  • \hat{C}_i is the channel dimension of the input tensor at stage i; it is scaled by the width multiplier w.

Starting from the baseline EfficientNet-B0, the compound scaling method scales the model in two steps:

Step 1: first fix \phi = 1 (assuming roughly twice the resources are available) and run a small grid search over \alpha (depth), \beta (width), and \gamma (resolution) based on Eq. 1 and Eq. 2. In particular, under the constraint of Eq. 1, Tan et al. found the best values for EfficientNet-B0 to be \alpha = 1.2, \beta = 1.1, and \gamma = 1.15.

Step 2: then fix \alpha, \beta, \gamma as constants (1.2, 1.1, 1.15) and scale up the baseline network via Eq. 1 with different values of \phi, obtaining EfficientNet-B1 through B7.

Note:

  • The values \alpha = 1.2, \beta = 1.1, \gamma = 1.15 are the grid-searched optimum for EfficientNet-B0 specifically; they are not universal scaling constants, and other models yield different optimal \alpha, \beta, \gamma.
  • These values were also optimized on ImageNet; they are empirically searched rather than theoretically derived optima, so their transferability is not guaranteed and they may change for other datasets or application scenarios.
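Step 1 can be sketched as a coarse grid search under the Eq. 1 constraint. The Python sketch below is illustrative only: the grid step and tolerance are assumptions of this sketch, and the real procedure scores every feasible candidate by actually training and evaluating the scaled network, which is omitted here:

```python
import itertools

# Candidate values on a coarse grid (step size is an assumption of this sketch).
grid = [round(1.0 + 0.05 * i, 2) for i in range(9)]  # 1.0 .. 1.4

# Keep only (alpha, beta, gamma) combinations that satisfy
# alpha * beta^2 * gamma^2 ~= 2, i.e., doubled FLOPS at phi = 1.
feasible = [
    (a, b, g)
    for a, b, g in itertools.product(grid, repeat=3)
    if abs(a * b**2 * g**2 - 2) < 0.1
]

# In the paper, each feasible candidate is used to scale B0, the scaled
# model is trained, and the most accurate candidate is kept.
print((1.2, 1.1, 1.15) in feasible)  # -> True
```

The reported optimum (1.2, 1.1, 1.15) is one of the candidates that survives the constraint; the accuracy-driven selection among them is the expensive part of the search.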

1.3.3 Scaling EfficientNet-B0 to B1~B7

Given EfficientNet-B0 and the searched scaling constants \alpha (depth), \beta (width), \gamma (resolution), the scaling multipliers of the larger models B1~B7 are obtained by exponentiating with the compound coefficient \phi (e.g., \alpha^{\phi}).

For EfficientNet-B1~B3, \phi takes the values 0.5, 1, and 2 respectively, and the multipliers are computed as \alpha^{\phi}, \beta^{\phi}, \gamma^{\phi}.

For example, EfficientNet-B1: the resolution is 224 \times 1.15^{0.5} = 240.21, rounded to 240; the width multiplier is 1.1^{0.5} = 1.049, taken as 1.0; the depth multiplier is 1.2^{0.5} = 1.095, taken as 1.1. EfficientNet-B1 therefore uses a 240×240 input with width and depth multipliers of 1.0 and 1.1.
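The B1 numbers above can be reproduced with a few lines of arithmetic (Python used here only for illustration):

```python
alpha, beta, gamma = 1.2, 1.1, 1.15  # grid-searched constants for B0
phi = 0.5                            # compound coefficient for B1

resolution = 224 * gamma**phi   # ~240.21 -> 240x240 input
width      = beta**phi          # ~1.049  -> taken as 1.0
depth      = alpha**phi         # ~1.095  -> taken as 1.1

print(round(resolution, 2), round(width, 3), round(depth, 3))
```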

For EfficientNet-B4~B7, \phi simply equals the model index, i.e., 4, 5, 6, 7; however, the depth multiplier is computed with the modified formula \alpha^{\phi-(\phi+1)\times 0.1}, and the resolution is chosen as a convenient value near 224 \times \gamma^{\phi}.

For example, EfficientNet-B7: the resolution is 224 \times 1.15^{7} = 595.84, taken as 600; the width multiplier is 1.1^{7} = 1.9487, rounded to 2.0; the depth multiplier is 1.2^{7-(7+1)\times 0.1} = 1.2^{6.2} = 3.0969, rounded to 3.1. EfficientNet-B7 therefore uses a 600×600 input with width and depth multipliers of 2.0 and 3.1.
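The B7 case, including the modified depth formula, checks out the same way (again plain Python arithmetic for illustration):

```python
alpha, beta, gamma = 1.2, 1.1, 1.15
phi = 7  # compound coefficient for B7

resolution = 224 * gamma**phi                # ~595.84 -> taken as 600
width      = beta**phi                       # ~1.9487 -> rounded to 2.0
depth      = alpha**(phi - (phi + 1) * 0.1)  # 1.2^6.2 ~ 3.0969 -> rounded to 3.1

print(round(resolution, 1), round(width, 4), round(depth, 4))
```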

The scaling coefficients for B0~B7 are listed in the table below:

| Model | Input resolution | width coefficient | depth coefficient |
|-------|------------------|-------------------|-------------------|
| b0    | 224              | 1.0               | 1.0               |
| b1    | 240              | 1.0               | 1.1               |
| b2    | 260              | 1.1               | 1.2               |
| b3    | 300              | 1.2               | 1.4               |
| b4    | 380              | 1.4               | 1.8               |
| b5    | 456              | 1.6               | 2.2               |
| b6    | 528              | 1.8               | 2.6               |
| b7    | 600              | 2.0               | 3.1               |

Input resolution is the image size fed to the network during training; 224, for example, means a 224×224 input.

width coefficient and depth coefficient are the multipliers applied to the channel dimension (i.e., the number of convolution filters) and to the network depth, respectively.

For example, B2 has a width coefficient of 1.1, so filter counts are scaled by 1.1: a stage with 32 filters becomes 32 × 1.1 = 35.2, which is then adjusted to a multiple of 8. Rounding up would give 40; note, however, that the official implementation rounds to the nearest multiple of 8 (with a floor of 90% of the unrounded value), which here gives 32.

depth coefficient deepens the network: B4's depth coefficient is 1.8, for instance, so relative to B0 the number of MBConv layers in each stage is multiplied by 1.8 and rounded up (ceil).
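The two rounding rules can be sketched as below (a Python sketch modeled on the round_filters / round_repeats helpers of the official TensorFlow implementation; the divisor of 8 and the 90% floor are that implementation's conventions):

```python
import math

def round_filters(filters, width_coefficient, divisor=8):
    """Scale the channel count by the width coefficient, then round to the
    nearest multiple of `divisor`, never dropping below 90% of the
    unrounded value (mirrors the official EfficientNet helper)."""
    filters *= width_coefficient
    new_filters = max(divisor, int(filters + divisor / 2) // divisor * divisor)
    if new_filters < 0.9 * filters:  # guard against rounding too far down
        new_filters += divisor
    return int(new_filters)

def round_repeats(repeats, depth_coefficient):
    """Scale the per-stage layer count by the depth coefficient, rounding up."""
    return int(math.ceil(depth_coefficient * repeats))

print(round_filters(32, 1.1))  # B2: 35.2 -> 32 under the nearest-multiple rule
print(round_filters(32, 1.2))  # B3: 38.4 -> 40
print(round_repeats(2, 1.8))   # B4: 2 * 1.8 = 3.6 -> 4 layers
```

Note that under this nearest-multiple rule 32 × 1.1 = 35.2 maps to 32, whereas a pure ceil-to-multiple-of-8 rule would give 40; implementations differ on this detail.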
