[Code Walkthrough] The model module of the OpenCOOD framework (using PointPillarFCooper as an example)

point_pillar_fcooper

  • PointPillarFCooper
  • PointPillars
    • PillarVFE
    • PFNLayer
    • PointPillarScatter
    • BaseBEVBackbone
    • DownsampleConv
      • DoubleConv
  • SpatialFusion
  • Detection head

(Following the framework structure of PointPillarFCooper, we will walk through the code piece by piece.)

PointPillarFCooper

```python
# -*- coding: utf-8 -*-
# Author: Runsheng Xu <rxx3386@ucla.edu>
# License: TDG-Attribution-NonCommercial-NoDistrib
import pprint

import torch.nn as nn

from opencood.models.sub_modules.pillar_vfe import PillarVFE
from opencood.models.sub_modules.point_pillar_scatter import PointPillarScatter
from opencood.models.sub_modules.base_bev_backbone import BaseBEVBackbone
from opencood.models.sub_modules.downsample_conv import DownsampleConv
from opencood.models.sub_modules.naive_compress import NaiveCompressor
from opencood.models.fuse_modules.f_cooper_fuse import SpatialFusion


class PointPillarFCooper(nn.Module):
    """F-Cooper implementation with point pillar backbone."""
    def __init__(self, args):
        super(PointPillarFCooper, self).__init__()
        print("args: ")
        pprint.pprint(args)
        self.max_cav = args['max_cav']
        # Pillar VFE: Voxel Feature Encoding
        self.pillar_vfe = PillarVFE(args['pillar_vfe'],
                                    num_point_features=4,
                                    voxel_size=args['voxel_size'],
                                    point_cloud_range=args['lidar_range'])
        self.scatter = PointPillarScatter(args['point_pillar_scatter'])
        self.backbone = BaseBEVBackbone(args['base_bev_backbone'], 64)
        # used to downsample the feature map for efficient computation
        self.shrink_flag = False
        if 'shrink_header' in args:
            self.shrink_flag = True
            self.shrink_conv = DownsampleConv(args['shrink_header'])
        self.compression = False
        if args['compression'] > 0:
            self.compression = True
            self.naive_compressor = NaiveCompressor(256, args['compression'])

        self.fusion_net = SpatialFusion()

        self.cls_head = nn.Conv2d(128 * 2, args['anchor_number'],
                                  kernel_size=1)
        self.reg_head = nn.Conv2d(128 * 2, 7 * args['anchor_number'],
                                  kernel_size=1)

        if args['backbone_fix']:
            self.backbone_fix()
```
  • args: the parameters passed in from the hypes_yaml configuration file
args:

```
{'anchor_number': 2,
 'backbone_fix': False,
 'base_bev_backbone': {'layer_nums': [3, 5, 8],
                       'layer_strides': [2, 2, 2],
                       'num_filters': [64, 128, 256],
                       'num_upsample_filter': [128, 128, 128],
                       'upsample_strides': [1, 2, 4]},
 'compression': 0,
 'lidar_range': [-140.8, -40, -3, 140.8, 40, 1],
 'max_cav': 5,
 'pillar_vfe': {'num_filters': [64],
                'use_absolute_xyz': True,
                'use_norm': True,
                'with_distance': False},
 'point_pillar_scatter': {'grid_size': array([704, 200, 1], dtype=int64),
                          'num_features': 64},
 'shrink_header': {'dim': [256],
                   'input_dim': 384,
                   'kernal_size': [1],
                   'padding': [0],
                   'stride': [1]},
 'voxel_size': [0.4, 0.4, 4]}
```
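To make the optional branches concrete, here is a tiny pure-Python sketch of how the constructor reads this config (the values are copied from the args above; only the relevant keys are shown):

```python
# Sketch: how __init__ decides whether to build the optional modules.
args = {
    'compression': 0,
    'shrink_header': {'dim': [256], 'input_dim': 384,
                      'kernal_size': [1], 'padding': [0], 'stride': [1]},
}

shrink_flag = 'shrink_header' in args   # True -> DownsampleConv is created
compression = args['compression'] > 0   # False -> no NaiveCompressor

print(shrink_flag, compression)  # True False
```

So with this particular config, the shrink (downsampling) head is built but the naive compressor is skipped.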
```python
    def backbone_fix(self):
        """Fix the parameters of the backbone during finetuning on time delay."""
        for p in self.pillar_vfe.parameters():
            p.requires_grad = False

        for p in self.scatter.parameters():
            p.requires_grad = False

        for p in self.backbone.parameters():
            p.requires_grad = False

        if self.compression:
            for p in self.naive_compressor.parameters():
                p.requires_grad = False

        if self.shrink_flag:
            for p in self.shrink_conv.parameters():
                p.requires_grad = False

        for p in self.cls_head.parameters():
            p.requires_grad = False

        for p in self.reg_head.parameters():
            p.requires_grad = False
```

The backbone_fix method freezes the backbone parameters during finetuning so that they are not updated.
It iterates over every component whose parameters should be fixed and sets their requires_grad attribute to False, which means the optimizer will skip these parameters.
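The same freezing pattern can be tried on a toy module (the backbone and head below are made-up stand-ins, not the OpenCOOD model):

```python
import torch.nn as nn

# Hypothetical toy backbone/head, frozen the same way backbone_fix does.
backbone = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 8, 3))
head = nn.Conv2d(8, 2, kernel_size=1)

for p in backbone.parameters():
    p.requires_grad = False  # the optimizer will not update these

# Only the head's weight and bias remain trainable.
trainable = [p for p in list(backbone.parameters()) + list(head.parameters())
             if p.requires_grad]
print(len(trainable))  # 2
```

When constructing the optimizer, only the trainable parameters need to be passed in (or PyTorch will simply skip the frozen ones during the update).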
Now let's look at the forward method:

```python
    def forward(self, data_dict):
        voxel_features = data_dict['processed_lidar']['voxel_features']
        voxel_coords = data_dict['processed_lidar']['voxel_coords']
        voxel_num_points = data_dict['processed_lidar']['voxel_num_points']
        record_len = data_dict['record_len']

        batch_dict = {'voxel_features': voxel_features,
                      'voxel_coords': voxel_coords,
                      'voxel_num_points': voxel_num_points,
                      'record_len': record_len}
        # n, 4 -> n, c
        batch_dict = self.pillar_vfe(batch_dict)
        # n, c -> N, C, H, W
        batch_dict = self.scatter(batch_dict)
        batch_dict = self.backbone(batch_dict)

        spatial_features_2d = batch_dict['spatial_features_2d']
        # downsample feature to reduce memory
        if self.shrink_flag:
            spatial_features_2d = self.shrink_conv(spatial_features_2d)
        # compressor
        if self.compression:
            spatial_features_2d = self.naive_compressor(spatial_features_2d)

        fused_feature = self.fusion_net(spatial_features_2d, record_len)

        psm = self.cls_head(fused_feature)
        rm = self.reg_head(fused_feature)

        output_dict = {'psm': psm,
                       'rm': rm}
        return output_dict
```

The forward method defines the model's forward pass. It takes a data dictionary as input, containing the preprocessed point cloud.
First, the voxel features, voxel coordinates, and per-voxel point counts are extracted from the input dictionary.
The data then flows through the pillar_vfe, scatter, and backbone modules in turn, producing a tensor of spatial features, spatial_features_2d.
If feature-map downsampling is enabled (shrink_flag is True), spatial_features_2d is downsampled.
If feature compression is enabled (compression is True), spatial_features_2d is compressed.
Finally, the resulting features are fused by fusion_net, and cls_head and reg_head perform classification and regression to produce the predictions.
The forward method thus implements the full data flow of the model, from input data to final output.

  • PointPillarFCooper structure (output of printing the model):
```
PointPillarFCooper(
  (pillar_vfe): PillarVFE(
    (pfn_layers): ModuleList(
      (0): PFNLayer(
        (linear): Linear(in_features=10, out_features=64, bias=False)
        (norm): BatchNorm1d(64, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
      )
    )
  )
  (scatter): PointPillarScatter()
  (backbone): BaseBEVBackbone(
    (blocks): ModuleList(
      (0): Sequential(
        (0): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0)
        (1): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), bias=False)
        (2): BatchNorm2d(64, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (3): ReLU()
        (4): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (5): BatchNorm2d(64, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (6): ReLU()
        (7): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (8): BatchNorm2d(64, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (9): ReLU()
        (10): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (11): BatchNorm2d(64, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (12): ReLU()
      )
      (1): Sequential(
        (0): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0)
        (1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), bias=False)
        (2): BatchNorm2d(128, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (3): ReLU()
        (4): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (5): BatchNorm2d(128, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (6): ReLU()
        (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (8): BatchNorm2d(128, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (9): ReLU()
        (10): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (11): BatchNorm2d(128, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (12): ReLU()
        (13): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (14): BatchNorm2d(128, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (15): ReLU()
        (16): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (17): BatchNorm2d(128, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (18): ReLU()
      )
      (2): Sequential(
        (0): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0)
        (1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), bias=False)
        (2): BatchNorm2d(256, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (3): ReLU()
        (4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (5): BatchNorm2d(256, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (6): ReLU()
        (7): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (8): BatchNorm2d(256, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (9): ReLU()
        (10): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (11): BatchNorm2d(256, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (12): ReLU()
        (13): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (14): BatchNorm2d(256, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (15): ReLU()
        (16): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (17): BatchNorm2d(256, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (18): ReLU()
        (19): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (20): BatchNorm2d(256, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (21): ReLU()
        (22): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (23): BatchNorm2d(256, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (24): ReLU()
        (25): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (26): BatchNorm2d(256, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (27): ReLU()
      )
    )
    (deblocks): ModuleList(
      (0): Sequential(
        (0): ConvTranspose2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(128, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): ReLU()
      )
      (1): Sequential(
        (0): ConvTranspose2d(128, 128, kernel_size=(2, 2), stride=(2, 2), bias=False)
        (1): BatchNorm2d(128, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): ReLU()
      )
      (2): Sequential(
        (0): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(4, 4), bias=False)
        (1): BatchNorm2d(128, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
        (2): ReLU()
      )
    )
  )
  (shrink_conv): DownsampleConv(
    (layers): ModuleList(
      (0): DoubleConv(
        (double_conv): Sequential(
          (0): Conv2d(384, 256, kernel_size=(1, 1), stride=(1, 1))
          (1): ReLU(inplace=True)
          (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (3): ReLU(inplace=True)
        )
      )
    )
  )
  (fusion_net): SpatialFusion()
  (cls_head): Conv2d(256, 2, kernel_size=(1, 1), stride=(1, 1))
  (reg_head): Conv2d(256, 14, kernel_size=(1, 1), stride=(1, 1))
)
```

PointPillars

[Figure: PointPillars network overview]
Network overview: the main components of the network are the PFN, the backbone, and the SSD detection head. The raw point cloud is converted into a stacked pillar tensor and a pillar index tensor. The encoder uses the stacked pillars to learn a set of features, which are scattered back into a 2D pseudo-image for a convolutional neural network. The detection head uses the backbone features to predict 3D bounding boxes for objects. Note: the backbone dimensions shown here are for the car network.

PillarVFE

This is the voxel feature encoder, which performs the initial feature extraction on the point cloud.
The VFE is built from PFNLayer (Pillar Feature Net) layers.

  • model_cfg
{'num_filters': [64], 'use_absolute_xyz': True, 'use_norm': True, 'with_distance': False}
```python
class PillarVFE(nn.Module):
    def __init__(self, model_cfg, num_point_features, voxel_size,
                 point_cloud_range):
        super().__init__()
        self.model_cfg = model_cfg

        self.use_norm = self.model_cfg['use_norm']
        self.with_distance = self.model_cfg['with_distance']
        self.use_absolute_xyz = self.model_cfg['use_absolute_xyz']
        num_point_features += 6 if self.use_absolute_xyz else 3
        if self.with_distance:
            num_point_features += 1

        self.num_filters = self.model_cfg['num_filters']
        assert len(self.num_filters) > 0
        num_filters = [num_point_features] + list(self.num_filters)

        pfn_layers = []
        for i in range(len(num_filters) - 1):
            in_filters = num_filters[i]
            out_filters = num_filters[i + 1]
            pfn_layers.append(
                PFNLayer(in_filters, out_filters, self.use_norm,
                         last_layer=(i >= len(num_filters) - 2)))
        self.pfn_layers = nn.ModuleList(pfn_layers)

        self.voxel_x = voxel_size[0]
        self.voxel_y = voxel_size[1]
        self.voxel_z = voxel_size[2]
        self.x_offset = self.voxel_x / 2 + point_cloud_range[0]
        self.y_offset = self.voxel_y / 2 + point_cloud_range[1]
        self.z_offset = self.voxel_z / 2 + point_cloud_range[2]
```
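We can check why the first PFNLayer's Linear has in_features=10 in the printed model structure: 4 raw point features plus 6 extra offset features when use_absolute_xyz is True:

```python
# Reproducing the input-feature count from PillarVFE.__init__
num_point_features = 4            # x, y, z, intensity
use_absolute_xyz = True
with_distance = False

num_point_features += 6 if use_absolute_xyz else 3
if with_distance:
    num_point_features += 1

print(num_point_features)  # 10 -> Linear(in_features=10, out_features=64)
```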

PFNLayer

This is just a fully connected layer plus normalization (which seems to deviate slightly from the original algorithm).

```python
class PFNLayer(nn.Module):
    def __init__(self,
                 in_channels,
                 out_channels,
                 use_norm=True,
                 last_layer=False):
        super().__init__()

        self.last_vfe = last_layer
        self.use_norm = use_norm
        if not self.last_vfe:
            out_channels = out_channels // 2

        if self.use_norm:
            self.linear = nn.Linear(in_channels, out_channels, bias=False)
            self.norm = nn.BatchNorm1d(out_channels, eps=1e-3, momentum=0.01)
        else:
            self.linear = nn.Linear(in_channels, out_channels, bias=True)

        self.part = 50000
```
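The layer's forward pass (not shown above) applies the linear layer per point and then max-pools over the points in each pillar, as in the PointPillars paper. A minimal sketch of that idea, with made-up sizes:

```python
import torch
import torch.nn as nn

# Sketch of the PFN idea: per-point linear features, then a max over the
# points of each pillar (all sizes below are made up for illustration).
num_pillars, max_points, in_ch, out_ch = 5, 32, 10, 64
points = torch.randn(num_pillars, max_points, in_ch)

linear = nn.Linear(in_ch, out_ch, bias=False)
x = linear(points)                                     # (5, 32, 64)
pillar_feature = torch.max(x, dim=1, keepdim=True)[0]  # (5, 1, 64)
print(pillar_feature.shape)
```

The max over the point dimension is what makes the encoder invariant to point ordering within a pillar.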

PointPillarScatter

Its main job is to collapse the 3D point cloud features into a BEV (bird's-eye-view) pseudo-image.

```python
class PointPillarScatter(nn.Module):
    def __init__(self, model_cfg):
        super().__init__()

        self.model_cfg = model_cfg
        self.num_bev_features = self.model_cfg['num_features']
        self.nx, self.ny, self.nz = model_cfg['grid_size']
        assert self.nz == 1
```
  • model_cfg:
{'grid_size': array([704, 200, 1], dtype=int64), 'num_features': 64}
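The grid_size above follows directly from lidar_range and voxel_size in the config:

```python
# grid_size = (range extent) / (voxel size), per axis
lidar_range = [-140.8, -40, -3, 140.8, 40, 1]   # x/y/z min, then x/y/z max
voxel_size = [0.4, 0.4, 4]

grid_size = [round((lidar_range[i + 3] - lidar_range[i]) / voxel_size[i])
             for i in range(3)]
print(grid_size)  # [704, 200, 1]
```

Since the z axis has a single voxel, the scatter step can flatten everything into a 2D 704 x 200 pseudo-image with 64 channels.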

BaseBEVBackbone

Refer to this figure:
[Figure: BaseBEVBackbone (top-down network and upsampling branches)]
3 * Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)

5 * Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)

8 * Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)

The counts 3, 5, and 8 correspond to the entries of layer_nums.

  • model_cfg
{'layer_nums': [3, 5, 8], 'layer_strides': [2, 2, 2], 'num_filters': [64, 128, 256], 'num_upsample_filter': [128, 128, 128], 'upsample_strides': [1, 2, 4]}
```python
class BaseBEVBackbone(nn.Module):
    def __init__(self, model_cfg, input_channels):
        super().__init__()
        self.model_cfg = model_cfg

        if 'layer_nums' in self.model_cfg:
            assert len(self.model_cfg['layer_nums']) == \
                   len(self.model_cfg['layer_strides']) == \
                   len(self.model_cfg['num_filters'])

            layer_nums = self.model_cfg['layer_nums']
            layer_strides = self.model_cfg['layer_strides']
            num_filters = self.model_cfg['num_filters']
        else:
            layer_nums = layer_strides = num_filters = []

        if 'upsample_strides' in self.model_cfg:
            assert len(self.model_cfg['upsample_strides']) \
                   == len(self.model_cfg['num_upsample_filter'])

            num_upsample_filters = self.model_cfg['num_upsample_filter']
            upsample_strides = self.model_cfg['upsample_strides']
        else:
            upsample_strides = num_upsample_filters = []

        num_levels = len(layer_nums)  # one Sequential per level
        c_in_list = [input_channels, *num_filters[:-1]]

        self.blocks = nn.ModuleList()
        self.deblocks = nn.ModuleList()

        for idx in range(num_levels):
            cur_layers = [
                nn.ZeroPad2d(1),
                nn.Conv2d(c_in_list[idx], num_filters[idx], kernel_size=3,
                          stride=layer_strides[idx], padding=0, bias=False),
                nn.BatchNorm2d(num_filters[idx], eps=1e-3, momentum=0.01),
                nn.ReLU()
            ]
            # layer_nums[idx] repetitions of Conv-BN-ReLU in each Sequential
            for k in range(layer_nums[idx]):
                cur_layers.extend([
                    nn.Conv2d(num_filters[idx], num_filters[idx],
                              kernel_size=3, padding=1, bias=False),
                    nn.BatchNorm2d(num_filters[idx], eps=1e-3, momentum=0.01),
                    nn.ReLU()
                ])
            self.blocks.append(nn.Sequential(*cur_layers))

            # the deblock (upsampling) modules
            if len(upsample_strides) > 0:
                stride = upsample_strides[idx]
                if stride >= 1:
                    self.deblocks.append(nn.Sequential(
                        nn.ConvTranspose2d(num_filters[idx],
                                           num_upsample_filters[idx],
                                           upsample_strides[idx],
                                           stride=upsample_strides[idx],
                                           bias=False),
                        nn.BatchNorm2d(num_upsample_filters[idx],
                                       eps=1e-3, momentum=0.01),
                        nn.ReLU()
                    ))
                else:
                    # note: np.int is deprecated in newer NumPy; plain int works
                    stride = np.round(1 / stride).astype(np.int)
                    self.deblocks.append(nn.Sequential(
                        nn.Conv2d(num_filters[idx], num_upsample_filters[idx],
                                  stride,
                                  stride=stride, bias=False),
                        nn.BatchNorm2d(num_upsample_filters[idx],
                                       eps=1e-3, momentum=0.01),
                        nn.ReLU()
                    ))

        c_in = sum(num_upsample_filters)
        if len(upsample_strides) > num_levels:
            self.deblocks.append(nn.Sequential(
                nn.ConvTranspose2d(c_in, c_in, upsample_strides[-1],
                                   stride=upsample_strides[-1], bias=False),
                nn.BatchNorm2d(c_in, eps=1e-3, momentum=0.01),
                nn.ReLU(),
            ))

        self.num_bev_features = c_in
```
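A quick sanity check of the shape bookkeeping: the three branches are downsampled by cumulative strides 2, 4, and 8 and then upsampled by 1, 2, and 4, so all three outputs share the same spatial size, and concatenating them gives 128 * 3 = 384 channels, matching shrink_header's input_dim (the 704 x 200 grid is taken from point_pillar_scatter):

```python
# Spatial sizes of the three upsampled branches for a 704 x 200 BEV grid
nx, ny = 704, 200
layer_strides = [2, 2, 2]
upsample_strides = [1, 2, 4]
num_upsample_filter = [128, 128, 128]

down = 1
sizes = []
for s, u in zip(layer_strides, upsample_strides):
    down *= s
    sizes.append((nx // down * u, ny // down * u))

c_in = sum(num_upsample_filter)
print(sizes, c_in)  # [(352, 100), (352, 100), (352, 100)] 384
```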

DownsampleConv

This is simply a downsampling module (built from one or more DoubleConv blocks).
Downsampling mainly serves to:

  • Lower the computation cost: in deep networks, parameter count and compute generally grow with the input size. Downsampling shrinks each layer's input, reducing the network's computational cost.
  • Reduce overfitting: by reducing the dimensionality and amount of input data, downsampling lowers model complexity and therefore the risk of overfitting, i.e., performing well on training data but poorly on test data.
  • Improve generalization: lowering the spatial resolution of the input encourages the model to learn more abstract and general features, improving its generalization to different data.
  • Speed up training and inference: since downsampling lowers the computational cost, it accelerates both training and inference, which is especially useful for large-scale data and real-time applications.
```python
class DownsampleConv(nn.Module):
    def __init__(self, config):
        super(DownsampleConv, self).__init__()
        self.layers = nn.ModuleList([])
        input_dim = config['input_dim']

        for (ksize, dim, stride, padding) in zip(config['kernal_size'],
                                                 config['dim'],
                                                 config['stride'],
                                                 config['padding']):
            self.layers.append(DoubleConv(input_dim,
                                          dim,
                                          kernel_size=ksize,
                                          stride=stride,
                                          padding=padding))
            input_dim = dim
```

config parameters:

{'dim': [256], 'input_dim': 384, 'kernal_size': [1], 'padding': [0], 'stride': [1]}

DoubleConv

It is simply two convolution layers.

```python
class DoubleConv(nn.Module):
    """Double convolution

    Args:
        in_channels: input channel num
        out_channels: output channel num
    """
    def __init__(self, in_channels, out_channels, kernel_size,
                 stride, padding):
        super().__init__()
        self.double_conv = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size,
                      stride=stride, padding=padding),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True)
        )
```
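With the config above, the single DoubleConv maps 384 channels down to 256 while leaving the spatial size unchanged. A standalone sketch of the equivalent layer stack (the input shape is hypothetical):

```python
import torch
import torch.nn as nn

# Equivalent of shrink_conv's DoubleConv with the config shown above:
# a 1x1 conv (384 -> 256), then a 3x3 conv with padding 1, each with ReLU.
double_conv = nn.Sequential(
    nn.Conv2d(384, 256, kernel_size=1, stride=1, padding=0),
    nn.ReLU(inplace=True),
    nn.Conv2d(256, 256, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
)

x = torch.randn(2, 384, 100, 352)   # hypothetical batch of BEV features
y = double_conv(x)
print(y.shape)  # torch.Size([2, 256, 100, 352])
```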

SpatialFusion

Fusion here simply takes the element-wise maximum across the collaborating agents' feature maps.

```python
class SpatialFusion(nn.Module):
    def __init__(self):
        super(SpatialFusion, self).__init__()

    def regroup(self, x, record_len):
        cum_sum_len = torch.cumsum(record_len, dim=0)
        split_x = torch.tensor_split(x, cum_sum_len[:-1].cpu())
        return split_x

    def forward(self, x, record_len):
        # x: B, C, H, W, split x: [(B1, C, W, H), (B2, C, W, H)]
        split_x = self.regroup(x, record_len)
        out = []
        for xx in split_x:
            xx = torch.max(xx, dim=0, keepdim=True)[0]
            out.append(xx)
        return torch.cat(out, dim=0)
```
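A toy run of the same regroup-and-max logic: record_len = [2, 3] means the first scene has two collaborating agents and the second has three, and each scene is fused down to a single feature map:

```python
import torch

# record_len tells how many agents' feature maps belong to each scene.
x = torch.arange(5 * 2 * 2 * 2, dtype=torch.float32).reshape(5, 2, 2, 2)
record_len = torch.tensor([2, 3])

cum_sum_len = torch.cumsum(record_len, dim=0)
split_x = torch.tensor_split(x, cum_sum_len[:-1].cpu())
fused = torch.cat([torch.max(s, dim=0, keepdim=True)[0] for s in split_x],
                  dim=0)

print(fused.shape)  # torch.Size([2, 2, 2, 2])
# With strictly increasing values, the max in each group is its last member:
print(torch.equal(fused[0], x[1]), torch.equal(fused[1], x[4]))  # True True
```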

Detection head

(cls_head): Conv2d(256, 2, kernel_size=(1, 1), stride=(1, 1))
(reg_head): Conv2d(256, 14, kernel_size=(1, 1), stride=(1, 1))
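The head dimensions follow from anchor_number = 2: the classification head predicts one score per anchor per BEV cell, and the regression head predicts 7 box parameters per anchor (presumably x, y, z, h, w, l, yaw, the usual 3D box encoding; this encoding is an assumption, not stated in the code above):

```python
import torch
import torch.nn as nn

anchor_number = 2
cls_head = nn.Conv2d(256, anchor_number, kernel_size=1)      # -> 2 channels
reg_head = nn.Conv2d(256, 7 * anchor_number, kernel_size=1)  # -> 14 channels

fused = torch.randn(1, 256, 100, 352)   # hypothetical fused feature map
psm, rm = cls_head(fused), reg_head(fused)
print(psm.shape[1], rm.shape[1])  # 2 14
```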
