Self-supervised semantic segmentation with MAE — reading "Masked Autoencoders Are Scalable Vision Learners" (MAE)

1. Abstract

This paper shows that masked autoencoders (MAE) are scalable self-supervised learners for computer vision. Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels. It is based on two core designs. First, we develop an asymmetric encoder-decoder architecture, with an encoder that operates only on the visible subset of patches (without mask tokens), along with a lightweight decoder that reconstructs the original image from the latent representation and mask tokens. Second, we find that masking a high proportion of the input image, e.g., 75%, yields a nontrivial and meaningful self-supervisory task. Coupling these two designs enables us to train large models efficiently and effectively: we accelerate training (by 3× or more) and improve accuracy. Our scalable approach allows for learning high-capacity models that generalize well: e.g., a vanilla ViT-Huge model achieves the best accuracy (87.8%) among methods that use only ImageNet-1K data. Transfer performance in downstream tasks outperforms supervised pre-training and shows promising scaling behavior.

Figure 1. Our MAE architecture. During pre-training, a large random subset of image patches (e.g., 75%) is masked out. The encoder is applied to the small subset of visible patches. Mask tokens are introduced after the encoder, and the full set of encoded patches and mask tokens is processed by a small decoder that reconstructs the original image in pixels. After pre-training, the decoder is discarded and the encoder is applied to uncorrupted images (full sets of patches) for recognition tasks.
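To make the masking step concrete, here is a minimal PyTorch sketch of per-image random patch masking in the spirit of MAE (not the authors' implementation; only the 75% ratio and patch counts follow the paper):

import torch

def random_masking(patches, mask_ratio=0.75):
    """patches: (B, N, D) patch embeddings; return visible patches and restore ids."""
    B, N, D = patches.shape
    num_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N)                  # one random score per patch
    ids_shuffle = noise.argsort(dim=1)        # ascending: lowest-noise patches are kept
    ids_restore = ids_shuffle.argsort(dim=1)  # to undo the shuffle after decoding
    ids_keep = ids_shuffle[:, :num_keep]
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    return visible, ids_restore

x = torch.randn(2, 196, 768)                  # 14x14 patches from a 224x224 image
visible, ids_restore = random_masking(x)
print(visible.shape)                          # torch.Size([2, 49, 768])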

2. ImageNet Experiments

We do self-supervised pre-training on the ImageNet-1K (IN1K) [13] training set. Then we do supervised training to evaluate the representations with (i) end-to-end fine-tuning or (ii) linear probing. We report top-1 validation accuracy of a single 224×224 crop. Details are in the Appendix.

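As a reminder of how the two evaluation protocols differ, a hedged sketch (the encoder stand-in and sizes below are illustrative, not the paper's recipes):

import torch
import torch.nn as nn

# stand-in for a pre-trained ViT encoder: 16x16 patch embedding + global pooling
encoder = nn.Sequential(
    nn.Conv2d(3, 1024, kernel_size=16, stride=16),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten())
head = nn.Linear(1024, 1000)  # ImageNet-1K classifier head

# (i) end-to-end fine-tuning: encoder and head are both updated
ft_params = list(encoder.parameters()) + list(head.parameters())

# (ii) linear probing: freeze the encoder and train only the linear head
for p in encoder.parameters():
    p.requires_grad = False
lin_params = list(head.parameters())

logits = head(encoder(torch.randn(2, 3, 224, 224)))
print(logits.shape)  # torch.Size([2, 1000])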
(a) Decoder depth. A deep decoder can improve linear probing accuracy.
(b) Decoder width. The decoder can be narrower than the encoder (1024-d).
(c) Mask token. An encoder without mask tokens is more accurate and faster (Table 2).
(d) Reconstruction target. Pixels as the reconstruction target are effective.
(e) Data augmentation. Our MAE works with minimal or no augmentation.
(f) Mask sampling. Random sampling works best.

Table 1. MAE ablation experiments with ViT-L/16 on ImageNet-1K. We report fine-tuning (ft) and linear probing (lin) accuracy (%). If not specified, the default is: the decoder has depth 8 and width 512, the reconstruction target is unnormalized pixels, the data augmentation is random resized cropping, the masking ratio is 75%, and the pre-training length is 800 epochs. Default settings are marked in gray.


3. Transfer Learning Experiments

Table 3. Comparisons with previous results on ImageNet-1K. The pre-training data is the ImageNet-1K training set (except the tokenizer in BEiT was pre-trained on 250M DALLE data [45]). All self-supervised methods are evaluated by end-to-end fine-tuning. The ViT models are B/16, L/16, H/14 [16]. The best for each column is underlined. All results are on an image size of 224, except for ViT-H with an extra result on 448. Here our MAE reconstructs normalized pixels and is pre-trained for 1600 epochs.


Object detection and segmentation.

We fine-tune Mask R-CNN [24] end-to-end on COCO [33]. The ViT backbone is adapted for use with FPN [32] (see A.3). We apply this approach for all entries in Table 4. We report box AP for object detection and mask AP for instance segmentation.

Table 4. COCO object detection and segmentation using a ViT Mask R-CNN baseline. All entries are based on our implementation. Self-supervised entries use IN1K data without labels. Mask AP follows a similar trend as box AP.


Compared to supervised pre-training, our MAE performs better under all configurations (Table 4). With the smaller ViT-B, our MAE is 2.4 points higher than supervised pre-training (50.3 vs. 47.9, APbox). More significantly, with the larger ViT-L, our MAE pre-training outperforms supervised pre-training by 4.0 points (53.3 vs. 49.3).


Semantic segmentation.

We experiment on ADE20K [66] using UperNet [57] (see A.4). Table 5 shows that our pre-training significantly improves results over supervised pre-training, e.g., by 3.7 points for ViT-L. Our pixel-based MAE also outperforms the token-based BEiT. These observations are consistent with those in COCO.


MMSegmentation Usage

To use a pre-trained model from another repository, its state-dict keys need to be converted first.
MMSegmentation provides a beit2mmseg.py script in the tools directory to convert the keys of an MAE model from the official repository to the MMSegmentation style.

python tools/model_converters/beit2mmseg.py ${PRETRAIN_PATH} ${STORE_PATH}
python tools/model_converters/beit2mmseg.py https://dl.fbaipublicfiles.com/mae/pretrain/mae_pretrain_vit_base.pth pretrain/mae_pretrain_vit_base_mmcls.pth
python tools/model_converters/beit2mmseg.py https://dl.fbaipublicfiles.com/mae/pretrain/mae_pretrain_vit_large.pth pretrain/mae_pretrain_vit_large_mmcls.pth

This script converts the model found at PRETRAIN_PATH and stores the converted model at STORE_PATH.
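As a quick sanity check (assuming the conversion above has been run so the local file exists), the converted state dict can be inspected before wiring it into a config:

import torch

# path assumed from the example conversion command above
ckpt = torch.load('pretrain/mae_pretrain_vit_base_mmcls.pth', map_location='cpu')
state_dict = ckpt.get('state_dict', ckpt)  # converted files may or may not wrap the keys
print(len(state_dict))                     # number of parameter tensors
print(list(state_dict)[:5])                # first few key names (patch_embed, layers.*, ...)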
In our default setting, the pre-trained model can be specified as follows:
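The original screenshot is missing here; a minimal sketch of the relevant config line, assuming the converted checkpoint from the commands above is stored under pretrain/ (the full config later in this post uses the same field, just under work_dirs/pretrain/):

model = dict(pretrained='pretrain/mae_pretrain_vit_base_mmcls.pth')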
To evaluate the model's single-scale results:

sh tools/dist_test.sh \
    work_dirs/runs/train/selfup/mae/upernet_mae-base_fp16_8x2_512x512_160k_ade20k.py \
    checkpoints/mae/upernet_mae-base_fp16_8x2_512x512_160k_ade20k_20220426_174752-f92a2975.pth \
    $GPUS --eval mIoU

python tools/test.py \
    work_dirs/runs/train/selfup/mae/upernet_mae-base_fp16_8x2_512x512_160k_ade20k.py \
    checkpoints/mae/upernet_mae-base_fp16_8x2_512x512_160k_ade20k_20220426_174752-f92a2975.pth \
    --eval mIoU

Because the relative position embedding requires the input height and width to be equal, multi-scale inference uses a sliding window. We therefore set min_size=512, i.e., the shortest side is 512, and the multi-scale config is run on its own rather than via '--aug-test'. For multi-scale inference:

sh tools/dist_test.sh \
configs/mae/upernet_mae-base_fp16_512x512_160k_ade20k_ms.py \
upernet_mae-base_fp16_8x2_512x512_160k_ade20k_20220426_174752-f92a2975.pth $GPUS --eval mIoU
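For reference, a hedged sketch of what the test pipeline in the _ms config roughly looks like (the exact img_ratios and flip setting are assumptions; check the config shipped with MMSegmentation):

img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(2048, 512),
        img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],  # assumed multi-scale ratios
        flip=True,
        transforms=[
            # min_size=512 keeps the shortest side at 512 for sliding-window inference
            dict(type='Resize', keep_ratio=True, min_size=512),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img'])
        ])
]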


mmsegmentation/mmseg/models/backbones/mae.py

# Copyright (c) OpenMMLab. All rights reserved.
import math

import torch
import torch.nn as nn
from mmcv.cnn.utils.weight_init import (constant_init, kaiming_init,
                                        trunc_normal_)
from mmcv.runner import ModuleList, _load_checkpoint
from torch.nn.modules.batchnorm import _BatchNorm

from mmseg.utils import get_root_logger
from ..builder import BACKBONES
from .beit import BEiT, BEiTAttention, BEiTTransformerEncoderLayer


class MAEAttention(BEiTAttention):
    """Multi-head self-attention with relative position bias used in MAE.

    This module is different from ``BEiTAttention`` by initializing the
    relative bias table with zeros.
    """

    def init_weights(self):
        """Initialize relative position bias with zeros."""

        # As MAE initializes relative position bias as zeros and this class
        # inherited from BEiT which initializes relative position bias
        # with `trunc_normal`, `init_weights` here does
        # nothing and just passes directly
        pass


class MAETransformerEncoderLayer(BEiTTransformerEncoderLayer):
    """Implements one encoder layer in Vision Transformer.

    This module is different from ``BEiTTransformerEncoderLayer`` by replacing
    ``BEiTAttention`` with ``MAEAttention``.
    """

    def build_attn(self, attn_cfg):
        self.attn = MAEAttention(**attn_cfg)


@BACKBONES.register_module()
class MAE(BEiT):
    """VisionTransformer with support for patch.

    Args:
        img_size (int | tuple): Input image size. Default: 224.
        patch_size (int): The patch size. Default: 16.
        in_channels (int): Number of input channels. Default: 3.
        embed_dims (int): embedding dimension. Default: 768.
        num_layers (int): depth of transformer. Default: 12.
        num_heads (int): number of attention heads. Default: 12.
        mlp_ratio (int): ratio of mlp hidden dim to embedding dim.
            Default: 4.
        out_indices (list | tuple | int): Output from which stages.
            Default: -1.
        attn_drop_rate (float): The drop out rate for attention layer.
            Default 0.0
        drop_path_rate (float): stochastic depth rate. Default 0.0.
        norm_cfg (dict): Config dict for normalization layer.
            Default: dict(type='LN')
        act_cfg (dict): The activation config for FFNs.
            Default: dict(type='GELU').
        patch_norm (bool): Whether to add a norm in PatchEmbed Block.
            Default: False.
        final_norm (bool): Whether to add a additional layer to normalize
            final feature map. Default: False.
        num_fcs (int): The number of fully-connected layers for FFNs.
            Default: 2.
        norm_eval (bool): Whether to set norm layers to eval mode, namely,
            freeze running stats (mean and var). Note: Effect on Batch Norm
            and its variants only. Default: False.
        pretrained (str, optional): model pretrained path. Default: None.
        init_values (float): Initialize the values of Attention and FFN
            with learnable scaling. Defaults to 0.1.
        init_cfg (dict or list[dict], optional): Initialization config dict.
            Default: None.
    """

    def __init__(self,
                 img_size=224,
                 patch_size=16,
                 in_channels=3,
                 embed_dims=768,
                 num_layers=12,
                 num_heads=12,
                 mlp_ratio=4,
                 out_indices=-1,
                 attn_drop_rate=0.,
                 drop_path_rate=0.,
                 norm_cfg=dict(type='LN'),
                 act_cfg=dict(type='GELU'),
                 patch_norm=False,
                 final_norm=False,
                 num_fcs=2,
                 norm_eval=False,
                 pretrained=None,
                 init_values=0.1,
                 init_cfg=None):
        super(MAE, self).__init__(
            img_size=img_size,
            patch_size=patch_size,
            in_channels=in_channels,
            embed_dims=embed_dims,
            num_layers=num_layers,
            num_heads=num_heads,
            mlp_ratio=mlp_ratio,
            out_indices=out_indices,
            qv_bias=False,
            attn_drop_rate=attn_drop_rate,
            drop_path_rate=drop_path_rate,
            norm_cfg=norm_cfg,
            act_cfg=act_cfg,
            patch_norm=patch_norm,
            final_norm=final_norm,
            num_fcs=num_fcs,
            norm_eval=norm_eval,
            pretrained=pretrained,
            init_values=init_values,
            init_cfg=init_cfg)

        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dims))

        self.num_patches = self.patch_shape[0] * self.patch_shape[1]
        self.pos_embed = nn.Parameter(
            torch.zeros(1, self.num_patches + 1, embed_dims))

    def _build_layers(self):
        dpr = [
            x.item()
            for x in torch.linspace(0, self.drop_path_rate, self.num_layers)
        ]
        self.layers = ModuleList()
        for i in range(self.num_layers):
            self.layers.append(
                MAETransformerEncoderLayer(
                    embed_dims=self.embed_dims,
                    num_heads=self.num_heads,
                    feedforward_channels=self.mlp_ratio * self.embed_dims,
                    attn_drop_rate=self.attn_drop_rate,
                    drop_path_rate=dpr[i],
                    num_fcs=self.num_fcs,
                    bias=True,
                    act_cfg=self.act_cfg,
                    norm_cfg=self.norm_cfg,
                    window_size=self.patch_shape,
                    init_values=self.init_values))

    def fix_init_weight(self):
        """Rescale the initialization according to layer id.

        This function is copied from
        https://github.com/microsoft/unilm/blob/master/beit/modeling_pretrain.py.  # noqa: E501
        Copyright (c) Microsoft Corporation
        Licensed under the MIT License
        """

        def rescale(param, layer_id):
            param.div_(math.sqrt(2.0 * layer_id))

        for layer_id, layer in enumerate(self.layers):
            rescale(layer.attn.proj.weight.data, layer_id + 1)
            rescale(layer.ffn.layers[1].weight.data, layer_id + 1)

    def init_weights(self):

        def _init_weights(m):
            if isinstance(m, nn.Linear):
                trunc_normal_(m.weight, std=.02)
                if isinstance(m, nn.Linear) and m.bias is not None:
                    nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.LayerNorm):
                nn.init.constant_(m.bias, 0)
                nn.init.constant_(m.weight, 1.0)

        self.apply(_init_weights)
        self.fix_init_weight()

        if (isinstance(self.init_cfg, dict)
                and self.init_cfg.get('type') == 'Pretrained'):
            logger = get_root_logger()
            checkpoint = _load_checkpoint(
                self.init_cfg['checkpoint'], logger=logger, map_location='cpu')
            state_dict = self.resize_rel_pos_embed(checkpoint)
            state_dict = self.resize_abs_pos_embed(state_dict)
            self.load_state_dict(state_dict, False)
        elif self.init_cfg is not None:
            super(MAE, self).init_weights()
        else:
            # We only implement the 'jax_impl' initialization implemented at
            # https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py#L353  # noqa: E501
            # Copyright 2019 Ross Wightman
            # Licensed under the Apache License, Version 2.0 (the "License")
            trunc_normal_(self.cls_token, std=.02)
            for n, m in self.named_modules():
                if isinstance(m, nn.Linear):
                    trunc_normal_(m.weight, std=.02)
                    if m.bias is not None:
                        if 'ffn' in n:
                            nn.init.normal_(m.bias, mean=0., std=1e-6)
                        else:
                            nn.init.constant_(m.bias, 0)
                elif isinstance(m, nn.Conv2d):
                    kaiming_init(m, mode='fan_in', bias=0.)
                elif isinstance(m, (_BatchNorm, nn.GroupNorm, nn.LayerNorm)):
                    constant_init(m, val=1.0, bias=0.)

    def resize_abs_pos_embed(self, state_dict):
        if 'pos_embed' in state_dict:
            pos_embed_checkpoint = state_dict['pos_embed']
            embedding_size = pos_embed_checkpoint.shape[-1]
            num_extra_tokens = self.pos_embed.shape[-2] - self.num_patches
            # height (== width) for the checkpoint position embedding
            orig_size = int(
                (pos_embed_checkpoint.shape[-2] - num_extra_tokens)**0.5)
            # height (== width) for the new position embedding
            new_size = int(self.num_patches**0.5)
            # class_token and dist_token are kept unchanged
            if orig_size != new_size:
                extra_tokens = pos_embed_checkpoint[:, :num_extra_tokens]
                # only the position tokens are interpolated
                pos_tokens = pos_embed_checkpoint[:, num_extra_tokens:]
                pos_tokens = pos_tokens.reshape(-1, orig_size, orig_size,
                                                embedding_size).permute(
                                                    0, 3, 1, 2)
                pos_tokens = torch.nn.functional.interpolate(
                    pos_tokens,
                    size=(new_size, new_size),
                    mode='bicubic',
                    align_corners=False)
                pos_tokens = pos_tokens.permute(0, 2, 3, 1).flatten(1, 2)
                new_pos_embed = torch.cat((extra_tokens, pos_tokens), dim=1)
                state_dict['pos_embed'] = new_pos_embed
        return state_dict

    def forward(self, inputs):
        B = inputs.shape[0]

        x, hw_shape = self.patch_embed(inputs)

        # stole cls_tokens impl from Phil Wang, thanks
        cls_tokens = self.cls_token.expand(B, -1, -1)
        x = torch.cat((cls_tokens, x), dim=1)
        x = x + self.pos_embed

        outs = []
        for i, layer in enumerate(self.layers):
            x = layer(x)
            if i == len(self.layers) - 1:
                if self.final_norm:
                    x = self.norm1(x)
            if i in self.out_indices:
                out = x[:, 1:]
                B, _, C = out.shape
                out = out.reshape(B, hw_shape[0], hw_shape[1],
                                  C).permute(0, 3, 1, 2).contiguous()
                outs.append(out)

        return tuple(outs)
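A hedged usage sketch of this backbone (assuming mmsegmentation 0.x with mmcv-full installed; shapes assume a 512×512 input with patch_size=16, so the four outputs have stride 16):

import torch
from mmseg.models import build_backbone

backbone = build_backbone(
    dict(
        type='MAE',
        img_size=(512, 512),
        patch_size=16,
        embed_dims=768,
        num_layers=12,
        num_heads=12,
        mlp_ratio=4,
        out_indices=[3, 5, 7, 11],  # feature maps taken from these four blocks
        init_values=1.0))
feats = backbone(torch.randn(1, 3, 512, 512))
print([tuple(f.shape) for f in feats])  # four maps of shape (1, 768, 32, 32)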

Config File Breakdown

norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
    type='EncoderDecoder',  # type of segmentor to build
    pretrained='work_dirs/pretrain/mae/mae_pretrain_vit_base_mmcls.pth',  # path to the converted pre-trained weights
    backbone=dict(
        type='MAE',  # backbone type
        img_size=(512, 512),
        patch_size=16,
        in_channels=3,
        embed_dims=768,
        num_layers=12,
        num_heads=12,
        mlp_ratio=4,
        out_indices=[3, 5, 7, 11],
        attn_drop_rate=0.0,
        drop_path_rate=0.1,
        norm_cfg=dict(type='LN', eps=1e-06),
        act_cfg=dict(type='GELU'),
        norm_eval=False,
        init_values=1.0),
    neck=dict(type='Feature2Pyramid', embed_dim=768, rescales=[4, 2, 1, 0.5]),  # neck connecting the ViT backbone and the decode head (rescaling illustrated below)
    decode_head=dict(
        type='UPerHead',
        in_channels=[768, 768, 768, 768],
        in_index=[0, 1, 2, 3],
        pool_scales=(1, 2, 3, 6),
        channels=768,
        dropout_ratio=0.1,
        num_classes=150,
        norm_cfg=dict(type='SyncBN', requires_grad=True),
        align_corners=False,
        loss_decode=dict(
            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
    auxiliary_head=dict(
        type='FCNHead',
        in_channels=768,
        in_index=2,
        channels=256,
        num_convs=1,
        concat_input=False,
        dropout_ratio=0.1,
        num_classes=150,
        norm_cfg=dict(type='SyncBN', requires_grad=True),
        align_corners=False,
        loss_decode=dict(
            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
    train_cfg=dict(),
    test_cfg=dict(mode='slide', crop_size=(512, 512), stride=(341, 341)))
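The Feature2Pyramid neck turns the four single-stride (1/16) ViT feature maps into a conventional pyramid for UPerHead. A hedged illustration of what rescales=[4, 2, 1, 0.5] means spatially (the real module uses transposed convolutions for upsampling and max-pooling for downsampling; plain interpolation is used here only to show the effect):

import torch
import torch.nn.functional as F

feat = torch.randn(1, 768, 32, 32)  # one ViT output for a 512x512 input (stride 16)
pyramid = [F.interpolate(feat, scale_factor=s, mode='nearest') for s in (4, 2, 1, 0.5)]
print([tuple(p.shape[-2:]) for p in pyramid])  # [(128, 128), (64, 64), (32, 32), (16, 16)]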
dataset_type = 'ADE20KDataset'
data_root = 'data/ade/ADEChallengeData2016'
img_norm_cfg = dict(mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (512, 512)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', reduce_zero_label=True),
    dict(type='Resize', img_scale=(2048, 512), ratio_range=(0.5, 2.0)),
    dict(type='RandomCrop', crop_size=(512, 512), cat_max_ratio=0.75),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PhotoMetricDistortion'),
    dict(
        type='Normalize',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        to_rgb=True),
    dict(type='Pad', size=(512, 512), pad_val=0, seg_pad_val=255),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_semantic_seg'])
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(2048, 512),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img'])
        ])
]
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=4,
    train=dict(
        type='ADE20KDataset',
        data_root='data/ade/ADEChallengeData2016',
        img_dir='images/training',
        ann_dir='annotations/training',
        pipeline=train_pipeline),
    val=dict(
        type='ADE20KDataset',
        data_root='data/ade/ADEChallengeData2016',
        img_dir='images/validation',
        ann_dir='annotations/validation',
        pipeline=test_pipeline),
    test=dict(
        type='ADE20KDataset',
        data_root='data/ade/ADEChallengeData2016',
        img_dir='images/validation',
        ann_dir='annotations/validation',
        pipeline=test_pipeline))
log_config = dict(
    interval=50,
    hooks=[
        dict(type='TextLoggerHook', by_epoch=False),
        dict(type='TensorboardLoggerHook')
    ])
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]
cudnn_benchmark = True
optimizer = dict(
    type='AdamW',
    lr=0.0001,
    betas=(0.9, 0.999),
    weight_decay=0.05,
    constructor='LayerDecayOptimizerConstructor',  # layer-wise lr decay (illustrated after the config)
    paramwise_cfg=dict(num_layers=12, layer_decay_rate=0.65))
optimizer_config = dict()
lr_config = dict(
    policy='poly',
    warmup='linear',
    warmup_iters=1500,
    warmup_ratio=1e-06,
    power=1.0,
    min_lr=0.0,
    by_epoch=False)
runner = dict(type='IterBasedRunner', max_iters=160000)
checkpoint_config = dict(by_epoch=False, interval=16000)
evaluation = dict(
    interval=16000,
    metric=['mIoU', 'mFscore'],
    pre_eval=True,
    save_best='auto')
fp16 = dict(loss_scale='dynamic')
work_dir = 'work_dirs/runs/train/selfup/mae'
gpu_ids = [0]
auto_resume = False
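One detail worth calling out from the optimizer block above: LayerDecayOptimizerConstructor gives each transformer layer its own learning rate, so later layers are updated more strongly than early ones. A hedged illustration of the idea with layer_decay_rate=0.65 (the exact exponent convention is the constructor's, not this sketch's):

base_lr, decay, num_layers = 0.0001, 0.65, 12
for i in range(num_layers + 1):  # i = 0 is roughly the patch embedding, i = 12 the last block
    print(f'layer {i}: lr = {base_lr * decay ** (num_layers - i):.2e}')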
