Semantic Segmentation: LR-ASPP Network Study Notes (with Code)

Paper: https://arxiv.org/abs/1905.02244

Code: https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/tree/master/pytorch_segmentation/lraspp

1. What is it?

LR-ASPP is a lightweight semantic segmentation network proposed in the MobileNetV3 paper, which makes it a good fit for mobile deployment. Compared with other semantic segmentation networks, LR-ASPP trades some accuracy for a large gain in speed: inference takes only about 0.3 seconds on a CPU, and its network structure is very simple.

2. Why?

Both LR-ASPP and the ASPP module in DeepLab V3 are segmentation decoders for semantic segmentation, but they differ in several design choices.

Simplified structure: LR-ASPP simplifies the traditional ASPP considerably. A traditional ASPP contains several parallel convolution branches (4 in DeepLab v2, 5 in DeepLab v3), each applying atrous (dilated) convolution with a different dilation rate to capture context at multiple scales. LR-ASPP cuts the number of branches and the dilation rates, which lowers the computational cost and makes the model better suited to resource-constrained mobile devices. For comparison, a stripped-down sketch of a five-branch ASPP is shown below.
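
The following is a minimal, illustrative sketch of a DeepLab v3-style ASPP, not taken from either paper's reference code: the channel sizes and dilation rates (6, 12, 18) are commonly used defaults, and the BN/activation layers are omitted for brevity. It exists purely to make the contrast with the two-branch LR-ASPP head in section 3.3 concrete.

import torch
from torch import nn
from torch.nn import functional as F

class MiniASPP(nn.Module):
    # Five parallel branches: a 1x1 conv, three dilated 3x3 convs, and image-level pooling
    def __init__(self, in_ch=960, out_ch=256, rates=(6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 1)] +
            [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates])
        self.image_pool = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                        nn.Conv2d(in_ch, out_ch, 1))
        self.project = nn.Conv2d(out_ch * (len(rates) + 2), out_ch, 1)

    def forward(self, x):
        feats = [b(x) for b in self.branches]
        pooled = F.interpolate(self.image_pool(x), size=x.shape[-2:],
                               mode="bilinear", align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))

LR-ASPP replaces all of this with just two branches (section 3.2), which is where most of the savings come from.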

Saving compute: Since computational resources on mobile devices are limited, LR-ASPP was designed with efficiency as a first-class concern. Compared with the ASPP in DeepLab V3, LR-ASPP reduces the amount of computation while keeping accuracy at a reasonable level, so the model adapts better to mobile hardware constraints. The snippet below gives a rough sense of the difference in parameter count.
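
A rough, illustrative parameter count, reusing the MiniASPP sketch above. The channel sizes (960 high-level channels, 40 low-level channels, 21 classes) match the MobileNetV3-Large LR-ASPP configuration used later in this post, but the comparison itself is a back-of-the-envelope estimate, not a benchmark from the paper.

from torch import nn

aspp = MiniASPP(in_ch=960, out_ch=256)
print(sum(p.numel() for p in aspp.parameters()))  # roughly 7.5 million parameters

# The LR-ASPP head needs only two 1x1 convs plus two 1x1 classifiers:
lraspp_like = nn.ModuleList([
    nn.Conv2d(960, 128, 1, bias=False),  # CBR branch (BN parameters are negligible)
    nn.Conv2d(960, 128, 1, bias=False),  # scale (attention) branch
    nn.Conv2d(40, 21, 1),                # low-level classifier
    nn.Conv2d(128, 21, 1),               # high-level classifier
])
print(sum(p.numel() for p in lraspp_like.parameters()))  # roughly 0.25 million parameters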

Efficient semantic segmentation: LR-ASPP achieves good segmentation quality on mobile devices. It captures multi-scale context effectively, which leads to more accurate per-pixel classification, and thanks to its efficiency it can run in real time on mobile hardware, making it practical for many real-world applications.

Overall, LR-ASPP is one of the innovations introduced in MobileNetV3: it strikes a good balance between efficiency and accuracy, providing an effective solution for on-device semantic segmentation. The ASPP in DeepLab V3, in contrast, is an improved version of the traditional ASPP that performs well when ample compute is available.

3. How does it work?

3.1 Network Structure

For the semantic segmentation task, the backbone is MobileNet v3, with two modifications:

  1. The backbone no longer downsamples by 32×; it stops at 16× downsampling.
  2. The last few bottleneck blocks in the backbone use dilated (atrous) convolutions.

In many semantic segmentation setups the backbone keeps an 8× downsampled feature map; to make the model lighter, MobileNet v3 LR-ASPP downsamples by 16× instead. The stride-2 depthwise convolutions in the final blocks are replaced with stride-1 dilated convolutions, which preserves the 16× output stride while still enlarging the receptive field, as the sketch below demonstrates.
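
A minimal sketch of the stride-vs-dilation trade-off (my own, with made-up layer sizes): a stride-2 convolution halves the feature map again, while a dilated stride-1 convolution keeps the resolution and widens the receptive field instead.

import torch
from torch import nn

x = torch.randn(1, 16, 32, 32)  # pretend this is a 16x-downsampled feature map

# Stride-2 depthwise conv: would push the network to 32x downsampling
strided = nn.Conv2d(16, 16, 3, stride=2, padding=1, groups=16)
# Dilated depthwise conv: stays at 16x but sees a 5x5 effective receptive field
dilated = nn.Conv2d(16, 16, 3, stride=1, padding=2, dilation=2, groups=16)

print(strided(x).shape)  # torch.Size([1, 16, 16, 16])
print(dilated(x).shape)  # torch.Size([1, 16, 32, 32])

This is exactly what the backbone code in section 3.3 does: when a block has dilation > 1, its depthwise convolution forces stride = 1.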

3.2 Principle Analysis

Next, let's look at the segmentation head, i.e., LR-ASPP itself. The backbone output is fed into two branches (see the architecture figure in the paper):

[First branch] The upper branch passes the feature map through a simple 1 × 1 convolution (bias=False) in the usual Conv → BN → ReLU sandwich, producing an output feature map F_1.
 
[Second branch] The second branch applies a large-kernel average pooling layer (49 × 49 kernel, stride [16, 20] in the paper), followed by a plain 1 × 1 convolution (bias=False) and a Sigmoid, producing an output feature map F_2. The implementation in section 3.3 swaps this for global average pooling; the shape check after this paragraph shows the difference.
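
A quick shape check of the two pooling variants, assuming a Cityscapes-sized 1024 × 2048 input whose 1/16 feature map is 64 × 128 (the sizes are illustrative):

import torch
from torch import nn

feat = torch.randn(1, 960, 64, 128)  # 1/16 feature map of a 1024x2048 input

paper_pool = nn.AvgPool2d(kernel_size=49, stride=(16, 20))  # pooling as described in the paper
print(paper_pool(feat).shape)  # torch.Size([1, 960, 1, 4])

global_pool = nn.AdaptiveAvgPool2d(1)  # what the code in section 3.3 actually uses
print(global_pool(feat).shape)  # torch.Size([1, 960, 1, 1])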

A few observations from reading the source code:

  1. The ReLU in the F_1 branch is a plain ReLU, not ReLU6.
  2. The Bilinear Upsample in the F_2 branch is actually unnecessary: after the Sigmoid, the branch already yields a 128-channel 1 × 1 tensor, which broadcasting applies directly (see the snippet after this list).
  3. The AdaptiveAvgPool2d → 1×1 Conv → Sigmoid pipeline in the F_2 branch is very similar to the Squeeze-and-Excitation (SE) attention module used in MobileNet v3.
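
A two-line demonstration of why no upsampling is needed in the F_2 branch: PyTorch broadcasting expands the 1 × 1 attention map over the full spatial extent (the tensor sizes here are made up for illustration).

import torch

f1 = torch.randn(1, 128, 30, 40)  # F_1: output of the Conv-BN-ReLU branch
f2 = torch.rand(1, 128, 1, 1)     # F_2: per-channel weights after pool -> 1x1 conv -> Sigmoid
print((f1 * f2).shape)            # torch.Size([1, 128, 30, 40]) -- no explicit upsample required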

[First fusion] F_1 ⊗ F_2 (element-wise multiplication), followed by 2× bilinear upsampling and then a plain 1 × 1 convolution, gives the output feature map F_3.
[Third branch] The 8×-downsampled feature map is taken from the backbone and passed through a plain 1 × 1 convolution, giving the output feature map F_4.
[Second fusion] F_3 + F_4 = F_5, the output feature map of LR-ASPP. (Note this fusion is an element-wise addition: the source code sums the two classifier outputs.)
Finally, 8× bilinear upsampling brings the result back to the input image size, with one channel per class, i.e., F_out. The shape walkthrough below traces the whole head end to end.
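
A shape walkthrough of the head, sketched with the same layers as the LRASPPHead in section 3.3. The channel sizes (960/40/128) and 21 classes match the MobileNetV3-Large configuration; the spatial sizes are made up for illustration.

import torch
from torch import nn
from torch.nn import functional as F

num_classes = 21
high = torch.randn(1, 960, 30, 40)  # 1/16-resolution backbone output
low = torch.randn(1, 40, 60, 80)    # 1/8-resolution C2 feature map

cbr = nn.Sequential(nn.Conv2d(960, 128, 1, bias=False),
                    nn.BatchNorm2d(128), nn.ReLU(inplace=True))
scale = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                      nn.Conv2d(960, 128, 1, bias=False), nn.Sigmoid())
low_cls = nn.Conv2d(40, num_classes, 1)
high_cls = nn.Conv2d(128, num_classes, 1)

x = cbr(high) * scale(high)  # F_1 * F_2: first fusion, broadcast over H x W
x = F.interpolate(x, size=low.shape[-2:], mode="bilinear", align_corners=False)  # 2x upsample
out = high_cls(x) + low_cls(low)  # F_3 + F_4 = F_5: second fusion
print(out.shape)  # torch.Size([1, 21, 60, 80]); an 8x bilinear upsample then restores input size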

3.3 Code Implementation

mobilenet_backbone

from typing import Callable, List, Optional
from functools import partial

import torch
from torch import nn, Tensor
from torch.nn import functional as F


def _make_divisible(ch, divisor=8, min_ch=None):
    """
    This function is taken from the original tf repo.
    It ensures that all layers have a channel number that is divisible by 8.
    It can be seen here:
    https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py
    """
    if min_ch is None:
        min_ch = divisor
    new_ch = max(min_ch, int(ch + divisor / 2) // divisor * divisor)
    # Make sure that round down does not go down by more than 10%.
    if new_ch < 0.9 * ch:
        new_ch += divisor
    return new_ch


class ConvBNActivation(nn.Sequential):
    # Conv -> BN -> Activation block; the conv has no bias because BN follows it
    def __init__(self,
                 in_planes: int,
                 out_planes: int,
                 kernel_size: int = 3,
                 stride: int = 1,
                 groups: int = 1,
                 norm_layer: Optional[Callable[..., nn.Module]] = None,
                 activation_layer: Optional[Callable[..., nn.Module]] = None,
                 dilation: int = 1):
        padding = (kernel_size - 1) // 2 * dilation
        if norm_layer is None:
            norm_layer = nn.BatchNorm2d
        if activation_layer is None:
            activation_layer = nn.ReLU6
        super(ConvBNActivation, self).__init__(
            nn.Conv2d(in_channels=in_planes,
                      out_channels=out_planes,
                      kernel_size=kernel_size,
                      stride=stride,
                      dilation=dilation,
                      padding=padding,
                      groups=groups,
                      bias=False),
            norm_layer(out_planes),
            activation_layer(inplace=True))
        self.out_channels = out_planes


class SqueezeExcitation(nn.Module):
    # Squeeze-and-Excitation attention: global pool -> 1x1 conv -> ReLU -> 1x1 conv -> hardsigmoid
    def __init__(self, input_c: int, squeeze_factor: int = 4):
        super(SqueezeExcitation, self).__init__()
        squeeze_c = _make_divisible(input_c // squeeze_factor, 8)
        self.fc1 = nn.Conv2d(input_c, squeeze_c, 1)
        self.fc2 = nn.Conv2d(squeeze_c, input_c, 1)

    def forward(self, x: Tensor) -> Tensor:
        scale = F.adaptive_avg_pool2d(x, output_size=(1, 1))
        scale = self.fc1(scale)
        scale = F.relu(scale, inplace=True)
        scale = self.fc2(scale)
        scale = F.hardsigmoid(scale, inplace=True)
        return scale * x


class InvertedResidualConfig:
    def __init__(self,
                 input_c: int,
                 kernel: int,
                 expanded_c: int,
                 out_c: int,
                 use_se: bool,
                 activation: str,
                 stride: int,
                 dilation: int,
                 width_multi: float):
        self.input_c = self.adjust_channels(input_c, width_multi)
        self.kernel = kernel
        self.expanded_c = self.adjust_channels(expanded_c, width_multi)
        self.out_c = self.adjust_channels(out_c, width_multi)
        self.use_se = use_se
        self.use_hs = activation == "HS"  # whether using h-swish activation
        self.stride = stride
        self.dilation = dilation

    @staticmethod
    def adjust_channels(channels: int, width_multi: float):
        return _make_divisible(channels * width_multi, 8)


class InvertedResidual(nn.Module):
    def __init__(self,
                 cnf: InvertedResidualConfig,
                 norm_layer: Callable[..., nn.Module]):
        super(InvertedResidual, self).__init__()

        if cnf.stride not in [1, 2]:
            raise ValueError("illegal stride value.")

        self.use_res_connect = (cnf.stride == 1 and cnf.input_c == cnf.out_c)

        layers: List[nn.Module] = []
        activation_layer = nn.Hardswish if cnf.use_hs else nn.ReLU

        # expand
        if cnf.expanded_c != cnf.input_c:
            layers.append(ConvBNActivation(cnf.input_c,
                                           cnf.expanded_c,
                                           kernel_size=1,
                                           norm_layer=norm_layer,
                                           activation_layer=activation_layer))

        # depthwise (a dilated block keeps stride 1, preserving the 16x output stride)
        stride = 1 if cnf.dilation > 1 else cnf.stride
        layers.append(ConvBNActivation(cnf.expanded_c,
                                       cnf.expanded_c,
                                       kernel_size=cnf.kernel,
                                       stride=stride,
                                       dilation=cnf.dilation,
                                       groups=cnf.expanded_c,
                                       norm_layer=norm_layer,
                                       activation_layer=activation_layer))

        if cnf.use_se:
            layers.append(SqueezeExcitation(cnf.expanded_c))

        # project
        layers.append(ConvBNActivation(cnf.expanded_c,
                                       cnf.out_c,
                                       kernel_size=1,
                                       norm_layer=norm_layer,
                                       activation_layer=nn.Identity))

        self.block = nn.Sequential(*layers)
        self.out_channels = cnf.out_c
        self.is_strided = cnf.stride > 1

    def forward(self, x: Tensor) -> Tensor:
        result = self.block(x)
        if self.use_res_connect:
            result += x
        return result


class MobileNetV3(nn.Module):
    def __init__(self,
                 inverted_residual_setting: List[InvertedResidualConfig],
                 last_channel: int,
                 num_classes: int = 1000,
                 block: Optional[Callable[..., nn.Module]] = None,
                 norm_layer: Optional[Callable[..., nn.Module]] = None):
        super(MobileNetV3, self).__init__()

        if not inverted_residual_setting:
            raise ValueError("The inverted_residual_setting should not be empty.")
        elif not (isinstance(inverted_residual_setting, List) and
                  all([isinstance(s, InvertedResidualConfig) for s in inverted_residual_setting])):
            raise TypeError("The inverted_residual_setting should be List[InvertedResidualConfig]")

        if block is None:
            block = InvertedResidual
        if norm_layer is None:
            norm_layer = partial(nn.BatchNorm2d, eps=0.001, momentum=0.01)

        layers: List[nn.Module] = []

        # building first layer
        firstconv_output_c = inverted_residual_setting[0].input_c
        layers.append(ConvBNActivation(3,
                                       firstconv_output_c,
                                       kernel_size=3,
                                       stride=2,
                                       norm_layer=norm_layer,
                                       activation_layer=nn.Hardswish))

        # building inverted residual blocks
        for cnf in inverted_residual_setting:
            layers.append(block(cnf, norm_layer))

        # building last several layers
        lastconv_input_c = inverted_residual_setting[-1].out_c
        lastconv_output_c = 6 * lastconv_input_c
        layers.append(ConvBNActivation(lastconv_input_c,
                                       lastconv_output_c,
                                       kernel_size=1,
                                       norm_layer=norm_layer,
                                       activation_layer=nn.Hardswish))

        self.features = nn.Sequential(*layers)
        self.avgpool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Sequential(nn.Linear(lastconv_output_c, last_channel),
                                        nn.Hardswish(inplace=True),
                                        nn.Dropout(p=0.2, inplace=True),
                                        nn.Linear(last_channel, num_classes))

        # initial weights
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode="fan_out")
                if m.bias is not None:
                    nn.init.zeros_(m.bias)
            elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
                nn.init.ones_(m.weight)
                nn.init.zeros_(m.bias)
            elif isinstance(m, nn.Linear):
                nn.init.normal_(m.weight, 0, 0.01)
                nn.init.zeros_(m.bias)

    def _forward_impl(self, x: Tensor) -> Tensor:
        x = self.features(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.classifier(x)
        return x

    def forward(self, x: Tensor) -> Tensor:
        return self._forward_impl(x)


def mobilenet_v3_large(num_classes: int = 1000,
                       reduced_tail: bool = False,
                       dilated: bool = False) -> MobileNetV3:
    """
    Constructs a large MobileNetV3 architecture from
    "Searching for MobileNetV3" <https://arxiv.org/abs/1905.02244>.

    weights_link:
    https://download.pytorch.org/models/mobilenet_v3_large-8738ca79.pth

    Args:
        num_classes (int): number of classes
        reduced_tail (bool): If True, reduces the channel counts of all feature layers
            between C4 and C5 by 2. It is used to reduce the channel redundancy in the
            backbone for Detection and Segmentation.
        dilated: whether using dilated conv
    """
    width_multi = 1.0
    bneck_conf = partial(InvertedResidualConfig, width_multi=width_multi)
    adjust_channels = partial(InvertedResidualConfig.adjust_channels, width_multi=width_multi)

    reduce_divider = 2 if reduced_tail else 1
    dilation = 2 if dilated else 1

    inverted_residual_setting = [
        # input_c, kernel, expanded_c, out_c, use_se, activation, stride, dilation
        bneck_conf(16, 3, 16, 16, False, "RE", 1, 1),
        bneck_conf(16, 3, 64, 24, False, "RE", 2, 1),  # C1
        bneck_conf(24, 3, 72, 24, False, "RE", 1, 1),
        bneck_conf(24, 5, 72, 40, True, "RE", 2, 1),  # C2
        bneck_conf(40, 5, 120, 40, True, "RE", 1, 1),
        bneck_conf(40, 5, 120, 40, True, "RE", 1, 1),
        bneck_conf(40, 3, 240, 80, False, "HS", 2, 1),  # C3
        bneck_conf(80, 3, 200, 80, False, "HS", 1, 1),
        bneck_conf(80, 3, 184, 80, False, "HS", 1, 1),
        bneck_conf(80, 3, 184, 80, False, "HS", 1, 1),
        bneck_conf(80, 3, 480, 112, True, "HS", 1, 1),
        bneck_conf(112, 3, 672, 112, True, "HS", 1, 1),
        bneck_conf(112, 5, 672, 160 // reduce_divider, True, "HS", 2, dilation),  # C4
        bneck_conf(160 // reduce_divider, 5, 960 // reduce_divider, 160 // reduce_divider, True, "HS", 1, dilation),
        bneck_conf(160 // reduce_divider, 5, 960 // reduce_divider, 160 // reduce_divider, True, "HS", 1, dilation),
    ]
    last_channel = adjust_channels(1280 // reduce_divider)  # C5

    return MobileNetV3(inverted_residual_setting=inverted_residual_setting,
                       last_channel=last_channel,
                       num_classes=num_classes)


def mobilenet_v3_small(num_classes: int = 1000,
                       reduced_tail: bool = False,
                       dilated: bool = False) -> MobileNetV3:
    """
    Constructs a small MobileNetV3 architecture from
    "Searching for MobileNetV3" <https://arxiv.org/abs/1905.02244>.

    weights_link:
    https://download.pytorch.org/models/mobilenet_v3_small-047dcff4.pth

    Args:
        num_classes (int): number of classes
        reduced_tail (bool): If True, reduces the channel counts of all feature layers
            between C4 and C5 by 2. It is used to reduce the channel redundancy in the
            backbone for Detection and Segmentation.
        dilated: whether using dilated conv
    """
    width_multi = 1.0
    bneck_conf = partial(InvertedResidualConfig, width_multi=width_multi)
    adjust_channels = partial(InvertedResidualConfig.adjust_channels, width_multi=width_multi)

    reduce_divider = 2 if reduced_tail else 1
    dilation = 2 if dilated else 1

    inverted_residual_setting = [
        # input_c, kernel, expanded_c, out_c, use_se, activation, stride, dilation
        bneck_conf(16, 3, 16, 16, True, "RE", 2, 1),  # C1
        bneck_conf(16, 3, 72, 24, False, "RE", 2, 1),  # C2
        bneck_conf(24, 3, 88, 24, False, "RE", 1, 1),
        bneck_conf(24, 5, 96, 40, True, "HS", 2, 1),  # C3
        bneck_conf(40, 5, 240, 40, True, "HS", 1, 1),
        bneck_conf(40, 5, 240, 40, True, "HS", 1, 1),
        bneck_conf(40, 5, 120, 48, True, "HS", 1, 1),
        bneck_conf(48, 5, 144, 48, True, "HS", 1, 1),
        bneck_conf(48, 5, 288, 96 // reduce_divider, True, "HS", 2, dilation),  # C4
        bneck_conf(96 // reduce_divider, 5, 576 // reduce_divider, 96 // reduce_divider, True, "HS", 1, dilation),
        bneck_conf(96 // reduce_divider, 5, 576 // reduce_divider, 96 // reduce_divider, True, "HS", 1, dilation)
    ]
    last_channel = adjust_channels(1024 // reduce_divider)  # C5

    return MobileNetV3(inverted_residual_setting=inverted_residual_setting,
                       last_channel=last_channel,
                       num_classes=num_classes)
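
A quick sanity check (my own addition, not part of the original file) that can be appended to the mobilenet_backbone module above to confirm that dilated=True keeps the output stride at 16:

if __name__ == "__main__":
    net = mobilenet_v3_large(dilated=True)
    x = torch.randn(1, 3, 224, 224)
    feat = net.features(x)
    print(feat.shape)  # torch.Size([1, 960, 14, 14]) -> 224 / 14 = 16x downsampling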

LR-ASPP

from collections import OrderedDict
from typing import Dict

import torch
from torch import nn, Tensor
from torch.nn import functional as F

from .mobilenet_backbone import mobilenet_v3_large


class IntermediateLayerGetter(nn.ModuleDict):
    """
    Module wrapper that returns intermediate layers from a model

    It has a strong assumption that the modules have been registered
    into the model in the same order as they are used.
    This means that one should **not** reuse the same nn.Module
    twice in the forward if you want this to work.

    Additionally, it is only able to query submodules that are directly
    assigned to the model. So if `model` is passed, `model.feature1` can
    be returned, but not `model.feature1.layer2`.

    Args:
        model (nn.Module): model on which we will extract the features
        return_layers (Dict[name, new_name]): a dict containing the names
            of the modules for which the activations will be returned as
            the key of the dict, and the value of the dict is the name
            of the returned activation (which the user can specify).
    """
    _version = 2
    __annotations__ = {
        "return_layers": Dict[str, str],
    }

    def __init__(self, model: nn.Module, return_layers: Dict[str, str]) -> None:
        if not set(return_layers).issubset([name for name, _ in model.named_children()]):
            raise ValueError("return_layers are not present in model")
        orig_return_layers = return_layers
        return_layers = {str(k): str(v) for k, v in return_layers.items()}

        # Rebuild the backbone, dropping every module that is never used
        layers = OrderedDict()
        for name, module in model.named_children():
            layers[name] = module
            if name in return_layers:
                del return_layers[name]
            if not return_layers:
                break

        super(IntermediateLayerGetter, self).__init__(layers)
        self.return_layers = orig_return_layers

    def forward(self, x: Tensor) -> Dict[str, Tensor]:
        out = OrderedDict()
        for name, module in self.items():
            x = module(x)
            if name in self.return_layers:
                out_name = self.return_layers[name]
                out[out_name] = x
        return out


class LRASPP(nn.Module):
    """
    Implements a Lite R-ASPP Network for semantic segmentation from
    `"Searching for MobileNetV3" <https://arxiv.org/abs/1905.02244>`_.

    Args:
        backbone (nn.Module): the network used to compute the features for the model.
            The backbone should return an OrderedDict[Tensor], with the key being
            "high" for the high level feature map and "low" for the low level feature map.
        low_channels (int): the number of channels of the low level features.
        high_channels (int): the number of channels of the high level features.
        num_classes (int): number of output classes of the model (including the background).
        inter_channels (int, optional): the number of channels for intermediate computations.
    """
    __constants__ = ['aux_classifier']

    def __init__(self,
                 backbone: nn.Module,
                 low_channels: int,
                 high_channels: int,
                 num_classes: int,
                 inter_channels: int = 128) -> None:
        super(LRASPP, self).__init__()
        self.backbone = backbone
        self.classifier = LRASPPHead(low_channels, high_channels, num_classes, inter_channels)

    def forward(self, x: Tensor) -> Dict[str, Tensor]:
        input_shape = x.shape[-2:]
        features = self.backbone(x)
        out = self.classifier(features)
        # final 8x bilinear upsample back to the input resolution
        out = F.interpolate(out, size=input_shape, mode="bilinear", align_corners=False)

        result = OrderedDict()
        result["out"] = out
        return result


class LRASPPHead(nn.Module):
    def __init__(self,
                 low_channels: int,
                 high_channels: int,
                 num_classes: int,
                 inter_channels: int) -> None:
        super(LRASPPHead, self).__init__()
        # F_1 branch: 1x1 Conv -> BN -> ReLU
        self.cbr = nn.Sequential(nn.Conv2d(high_channels, inter_channels, 1, bias=False),
                                 nn.BatchNorm2d(inter_channels),
                                 nn.ReLU(inplace=True))
        # F_2 branch: global pool -> 1x1 Conv -> Sigmoid (SE-like attention weights)
        self.scale = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                   nn.Conv2d(high_channels, inter_channels, 1, bias=False),
                                   nn.Sigmoid())
        self.low_classifier = nn.Conv2d(low_channels, num_classes, 1)
        self.high_classifier = nn.Conv2d(inter_channels, num_classes, 1)

    def forward(self, inputs: Dict[str, Tensor]) -> Tensor:
        low = inputs["low"]
        high = inputs["high"]

        x = self.cbr(high)
        s = self.scale(high)
        x = x * s  # first fusion: element-wise multiplication (broadcast over H x W)
        x = F.interpolate(x, size=low.shape[-2:], mode="bilinear", align_corners=False)

        # second fusion: element-wise addition of the two classifier outputs
        return self.low_classifier(low) + self.high_classifier(x)


def lraspp_mobilenetv3_large(num_classes=21, pretrain_backbone=False):
    # 'mobilenetv3_large_imagenet': 'https://download.pytorch.org/models/mobilenet_v3_large-8738ca79.pth'
    # 'lraspp_mobilenet_v3_large_coco': 'https://download.pytorch.org/models/lraspp_mobilenet_v3_large-d234d4ea.pth'
    backbone = mobilenet_v3_large(dilated=True)

    if pretrain_backbone:
        # load the pretrained weights for the mobilenetv3 large backbone
        backbone.load_state_dict(torch.load("mobilenet_v3_large.pth", map_location='cpu'))

    backbone = backbone.features

    # Gather the indices of blocks which are strided. These are the locations of C1, ..., Cn-1 blocks.
    # The first and last blocks are always included because they are the C0 (conv1) and Cn.
    stage_indices = [0] + [i for i, b in enumerate(backbone) if getattr(b, "is_strided", False)] + [len(backbone) - 1]
    low_pos = stage_indices[-4]  # use C2 here which has output_stride = 8
    high_pos = stage_indices[-1]  # use C5 which has output_stride = 16
    low_channels = backbone[low_pos].out_channels
    high_channels = backbone[high_pos].out_channels

    return_layers = {str(low_pos): "low", str(high_pos): "high"}
    backbone = IntermediateLayerGetter(backbone, return_layers=return_layers)

    model = LRASPP(backbone, low_channels, high_channels, num_classes)
    return model
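
A minimal usage sketch (my own addition; it assumes the two files above are saved as a package so the relative import works, and it runs with random weights rather than the pretrained checkpoints):

import torch

model = lraspp_mobilenetv3_large(num_classes=21, pretrain_backbone=False)
model.eval()
with torch.no_grad():
    out = model(torch.randn(1, 3, 480, 480))["out"]
print(out.shape)  # torch.Size([1, 21, 480, 480]) -- per-pixel class logits at input resolution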

References:

[语义分割] LR-ASPP(MobileNet v3、轻量化、16倍下采样、膨胀卷积、ASPP、SE)

LR-ASPP paper (Searching for MobileNetV3): https://arxiv.org/abs/1905.02244
