MMCV 1.6.0 Runner/Hook/EMAHook (model EMA)

mmcv/mmcv/runner/hooks/ema.py

The EMAHook class applies an exponential moving average (EMA) to the model parameters during training. EMA is a smoothing technique: by updating a running average of the parameters at every iteration, it damps the fluctuations of the raw parameter updates. This hook runs before EvalHook and CheckpointSaverHook.
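Before diving into the source, here is a minimal standalone sketch (plain PyTorch, not MMCV code) of the recurrence the hook maintains, Xema_{t+1} = (1 - momentum) * Xema_t + momentum * X_t:

import torch

momentum = 0.0002            # EMAHook default
param = torch.randn(3)       # stands in for one model parameter X_t
ema = param.clone()          # the EMA backup Xema_t, initialized from the model

for _ in range(5):
    param = param + 0.01 * torch.randn(3)        # pretend one optimizer step
    # Xema_{t+1} = (1 - momentum) * Xema_t + momentum * X_t
    ema.mul_(1 - momentum).add_(param, alpha=momentum)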

@HOOKS.register_module()
class EMAHook(Hook):
    r"""Exponential Moving Average Hook.

    Use Exponential Moving Average on all parameters of model in training
    process. All parameters have a ema backup, which update by the formula
    as below. EMAHook takes priority over EvalHook and CheckpointSaverHook.

    .. math::

        Xema\_{t+1} = (1 - \text{momentum}) \times
        Xema\_{t} + \text{momentum} \times X_t

    Args:
        momentum (float): The momentum used for updating ema parameter.
            Defaults to 0.0002.
        interval (int): Update ema parameter every interval iteration.
            Defaults to 1.
        warm_up (int): During first warm_up steps, we may use smaller momentum
            to update ema parameters more slowly. Defaults to 100.
        resume_from (str, optional): The checkpoint path. Defaults to None.
    """

    def __init__(self,
                 momentum: float = 0.0002,
                 interval: int = 1,
                 warm_up: int = 100,
                 resume_from: Optional[str] = None):
        assert isinstance(interval, int) and interval > 0
        self.warm_up = warm_up
        self.interval = interval
        assert momentum > 0 and momentum < 1
        self.momentum = momentum**interval
        self.checkpoint = resume_from

    def before_run(self, runner):
        """To resume model with it's ema parameters more friendly.

        Register ema parameter as ``named_buffer`` to model
        """
        model = runner.model
        if is_module_wrapper(model):
            model = model.module
        self.param_ema_buffer = {}
        self.model_parameters = dict(model.named_parameters(recurse=True))
        for name, value in self.model_parameters.items():
            # "." is not allowed in module's buffer name
            buffer_name = f"ema_{name.replace('.', '_')}"
            self.param_ema_buffer[name] = buffer_name
            model.register_buffer(buffer_name, value.data.clone())
        self.model_buffers = dict(model.named_buffers(recurse=True))
        if self.checkpoint is not None:
            runner.resume(self.checkpoint)

    def after_train_iter(self, runner):
        """Update ema parameter every self.interval iterations."""
        curr_step = runner.iter
        # We warm up the momentum considering the instability at beginning
        momentum = min(self.momentum,
                       (1 + curr_step) / (self.warm_up + curr_step))
        if curr_step % self.interval != 0:
            return
        for name, parameter in self.model_parameters.items():
            buffer_name = self.param_ema_buffer[name]
            buffer_parameter = self.model_buffers[buffer_name]
            buffer_parameter.mul_(1 - momentum).add_(momentum, parameter.data)

    def after_train_epoch(self, runner):
        """We load parameter values from ema backup to model before the
        EvalHook."""
        self._swap_ema_parameters()

    def before_train_epoch(self, runner):
        """We recover model's parameter from ema backup after last epoch's
        EvalHook."""
        self._swap_ema_parameters()

    def _swap_ema_parameters(self):
        """Swap the parameter of model with parameter in ema_buffer."""
        for name, value in self.model_parameters.items():
            temp = value.data.clone()
            ema_buffer = self.model_buffers[self.param_ema_buffer[name]]
            value.data.copy_(ema_buffer.data)
            ema_buffer.data.copy_(temp)

Parameters
momentum (float): momentum used to update the EMA parameters. Defaults to 0.0002.
interval (int): update the EMA parameters every interval iterations. Defaults to 1.
warm_up (int): during the first warm_up steps a smaller momentum may be used, so the EMA parameters update more slowly. Defaults to 100.
resume_from (str, optional): checkpoint path to resume from. Defaults to None.
Code summary
The EMAHook class smooths parameter updates by applying an exponential moving average to the model parameters during training. It registers the EMA values as named buffers when training starts, updates them every interval iterations according to the (warmed-up) momentum, and swaps the model parameters with the EMA buffers before and after each training epoch, so that evaluation always runs on the EMA-smoothed weights.
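One detail worth calling out from after_train_iter is the momentum warm-up clamp. A quick sketch of its effect (the momentum of 0.02 is illustrative; with the default 0.0002 the clamp never triggers, because the warm-up term is at least 1/warm_up = 0.01):

def effective_momentum(step, momentum=0.0002, warm_up=100):
    # same expression as in EMAHook.after_train_iter
    return min(momentum, (1 + step) / (warm_up + step))

for step in (0, 10, 100, 1000):
    print(step, effective_momentum(step, momentum=0.02))
# 0    -> 0.01  (clamped by the warm-up term)
# 10   -> 0.02  (warm-up no longer binding)
# 100  -> 0.02
# 1000 -> 0.02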

mmdetection/mmdet/core/hook/ema.py

class BaseEMAHook(Hook):
    """Exponential Moving Average Hook.

    Use Exponential Moving Average on all parameters of model in training
    process. All parameters have a ema backup, which update by the formula
    as below. EMAHook takes priority over EvalHook and CheckpointHook. Note,
    the original model parameters are actually saved in ema field after train.

    Args:
        momentum (float): The momentum used for updating ema parameter.
            Ema's parameter are updated with the formula:
            `ema_param = (1-momentum) * ema_param + momentum * cur_param`.
            Defaults to 0.0002.
        skip_buffers (bool): Whether to skip the model buffers, such as
            batchnorm running stats (running_mean, running_var), it does not
            perform the ema operation. Default to False.
        interval (int): Update ema parameter every interval iteration.
            Defaults to 1.
        resume_from (str, optional): The checkpoint path. Defaults to None.
        momentum_fun (func, optional): The function to change momentum
            during early iteration (also warmup) to help early training.
            It uses `momentum` as a constant. Defaults to None.
    """

    def __init__(self,
                 momentum=0.0002,
                 interval=1,
                 skip_buffers=False,
                 resume_from=None,
                 momentum_fun=None):
        assert 0 < momentum < 1
        self.momentum = momentum
        self.skip_buffers = skip_buffers
        self.interval = interval
        self.checkpoint = resume_from
        self.momentum_fun = momentum_fun

    def before_run(self, runner):
        """To resume model with it's ema parameters more friendly.

        Register ema parameter as ``named_buffer`` to model.
        """
        model = runner.model
        if is_module_wrapper(model):
            model = model.module
        self.param_ema_buffer = {}
        if self.skip_buffers:
            self.model_parameters = dict(model.named_parameters())
        else:
            self.model_parameters = model.state_dict()
        for name, value in self.model_parameters.items():
            # "." is not allowed in module's buffer name
            buffer_name = f"ema_{name.replace('.', '_')}"
            self.param_ema_buffer[name] = buffer_name
            model.register_buffer(buffer_name, value.data.clone())
        self.model_buffers = dict(model.named_buffers())
        if self.checkpoint is not None:
            runner.resume(self.checkpoint)

    def get_momentum(self, runner):
        return self.momentum_fun(runner.iter) if self.momentum_fun else \
            self.momentum

    def after_train_iter(self, runner):
        """Update ema parameter every self.interval iterations."""
        if (runner.iter + 1) % self.interval != 0:
            return
        momentum = self.get_momentum(runner)
        for name, parameter in self.model_parameters.items():
            # exclude num_tracking
            if parameter.dtype.is_floating_point:
                buffer_name = self.param_ema_buffer[name]
                buffer_parameter = self.model_buffers[buffer_name]
                buffer_parameter.mul_(1 - momentum).add_(
                    parameter.data, alpha=momentum)

    def after_train_epoch(self, runner):
        """We load parameter values from ema backup to model before the
        EvalHook."""
        self._swap_ema_parameters()

    def before_train_epoch(self, runner):
        """We recover model's parameter from ema backup after last epoch's
        EvalHook."""
        self._swap_ema_parameters()

    def _swap_ema_parameters(self):
        """Swap the parameter of model with parameter in ema_buffer."""
        for name, value in self.model_parameters.items():
            temp = value.data.clone()
            ema_buffer = self.model_buffers[self.param_ema_buffer[name]]
            value.data.copy_(ema_buffer.data)
            ema_buffer.data.copy_(temp)


@HOOKS.register_module()
class ExpMomentumEMAHook(BaseEMAHook):
    """EMAHook using exponential momentum strategy.

    Args:
        total_iter (int): The total number of iterations of EMA momentum.
            Defaults to 2000.
    """

    def __init__(self, total_iter=2000, **kwargs):
        super(ExpMomentumEMAHook, self).__init__(**kwargs)
        self.momentum_fun = lambda x: (1 - self.momentum) * math.exp(-(
            1 + x) / total_iter) + self.momentum


@HOOKS.register_module()
class LinearMomentumEMAHook(BaseEMAHook):
    """EMAHook using linear momentum strategy.

    Args:
        warm_up (int): During first warm_up steps, we may use smaller decay
            to update ema parameters more slowly. Defaults to 100.
    """

    def __init__(self, warm_up=100, **kwargs):
        super(LinearMomentumEMAHook, self).__init__(**kwargs)
        self.momentum_fun = lambda x: min(self.momentum**self.interval,
                                          (1 + x) / (warm_up + x))
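The two subclasses only differ in the schedule they install as momentum_fun. A small standalone sketch of both schedules with the default values: the exponential one starts near 1 (so the EMA copy tracks the raw weights almost exactly early in training) and decays toward the configured momentum, while the linear one is the mmcv-style warm-up clamp, which with the small default momentum is effectively constant.

import math

momentum, interval, total_iter, warm_up = 0.0002, 1, 2000, 100

# same expressions as ExpMomentumEMAHook / LinearMomentumEMAHook
exp_fun = lambda x: (1 - momentum) * math.exp(-(1 + x) / total_iter) + momentum
lin_fun = lambda x: min(momentum**interval, (1 + x) / (warm_up + x))

for it in (0, 200, 2000, 10000):
    print(f'iter {it}: exp={exp_fun(it):.4f}, linear={lin_fun(it):.4f}')
# exp decays from ~1.0 toward 0.0002; linear stays at 0.0002 here because
# (1 + x) / (warm_up + x) >= 0.01, which is always larger than the default momentum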

mmengine/mmengine/hooks/ema_hook.py

# Copyright (c) OpenMMLab. All rights reserved.
import copy
import itertools
import logging
from typing import Dict, Optional

from mmengine.logging import print_log
from mmengine.model import is_model_wrapper
from mmengine.registry import HOOKS, MODELS
from .hook import DATA_BATCH, Hook


@HOOKS.register_module()
class EMAHook(Hook):
    """A Hook to apply Exponential Moving Average (EMA) on the model during
    training.

    Note:
        - EMAHook takes priority over CheckpointHook.
        - The original model parameters are actually saved in ema field after
          train.
        - ``begin_iter`` and ``begin_epoch`` cannot be set at the same time.

    Args:
        ema_type (str): The type of EMA strategy to use. You can find the
            supported strategies in :mod:`mmengine.model.averaged_model`.
            Defaults to 'ExponentialMovingAverage'.
        strict_load (bool): Whether to strictly enforce that the keys of
            ``state_dict`` in checkpoint match the keys returned by
            ``self.module.state_dict``. Defaults to False.
            Changed in v0.3.0.
        begin_iter (int): The number of iteration to enable ``EMAHook``.
            Defaults to 0.
        begin_epoch (int): The number of epoch to enable ``EMAHook``.
            Defaults to 0.
        **kwargs: Keyword arguments passed to subclasses of
            :obj:`BaseAveragedModel`
    """

    priority = 'NORMAL'

    def __init__(self,
                 ema_type: str = 'ExponentialMovingAverage',
                 strict_load: bool = False,
                 begin_iter: int = 0,
                 begin_epoch: int = 0,
                 **kwargs):
        self.strict_load = strict_load
        self.ema_cfg = dict(type=ema_type, **kwargs)
        assert not (begin_iter != 0 and begin_epoch != 0), (
            '`begin_iter` and `begin_epoch` should not be both set.')
        assert begin_iter >= 0, (
            '`begin_iter` must larger than or equal to 0, '
            f'but got begin_iter: {begin_iter}')
        assert begin_epoch >= 0, (
            '`begin_epoch` must larger than or equal to 0, '
            f'but got begin_epoch: {begin_epoch}')
        self.begin_iter = begin_iter
        self.begin_epoch = begin_epoch
        # If `begin_epoch` and `begin_iter` are not set, `EMAHook` will be
        # enabled at 0 iteration.
        self.enabled_by_epoch = self.begin_epoch > 0

    def before_run(self, runner) -> None:
        """Create an ema copy of the model.

        Args:
            runner (Runner): The runner of the training process.
        """
        model = runner.model
        if is_model_wrapper(model):
            model = model.module
        self.src_model = model
        self.ema_model = MODELS.build(
            self.ema_cfg, default_args=dict(model=self.src_model))

    def before_train(self, runner) -> None:
        """Check the begin_epoch/iter is smaller than max_epochs/iters.

        Args:
            runner (Runner): The runner of the training process.
        """
        if self.enabled_by_epoch:
            assert self.begin_epoch <= runner.max_epochs, (
                'self.begin_epoch should be smaller than or equal to '
                f'runner.max_epochs: {runner.max_epochs}, but got '
                f'begin_epoch: {self.begin_epoch}')
        else:
            assert self.begin_iter <= runner.max_iters, (
                'self.begin_iter should be smaller than or equal to '
                f'runner.max_iters: {runner.max_iters}, but got '
                f'begin_iter: {self.begin_iter}')

    def after_train_iter(self,
                         runner,
                         batch_idx: int,
                         data_batch: DATA_BATCH = None,
                         outputs: Optional[dict] = None) -> None:
        """Update ema parameter.

        Args:
            runner (Runner): The runner of the training process.
            batch_idx (int): The index of the current batch in the train loop.
            data_batch (Sequence[dict], optional): Data from dataloader.
                Defaults to None.
            outputs (dict, optional): Outputs from model. Defaults to None.
        """
        if self._ema_started(runner):
            self.ema_model.update_parameters(self.src_model)
        else:
            ema_params = self.ema_model.module.state_dict()
            src_params = self.src_model.state_dict()
            for k, p in ema_params.items():
                p.data.copy_(src_params[k].data)

    def before_val_epoch(self, runner) -> None:
        """We load parameter values from ema model to source model before
        validation.

        Args:
            runner (Runner): The runner of the training process.
        """
        self._swap_ema_parameters()

    def after_val_epoch(self,
                        runner,
                        metrics: Optional[Dict[str, float]] = None) -> None:
        """We recover source model's parameter from ema model after
        validation.

        Args:
            runner (Runner): The runner of the validation process.
            metrics (Dict[str, float], optional): Evaluation results of all
                metrics on validation dataset. The keys are the names of the
                metrics, and the values are corresponding results.
        """
        self._swap_ema_parameters()

    def before_test_epoch(self, runner) -> None:
        """We load parameter values from ema model to source model before
        test.

        Args:
            runner (Runner): The runner of the training process.
        """
        self._swap_ema_parameters()

    def after_test_epoch(self,
                         runner,
                         metrics: Optional[Dict[str, float]] = None) -> None:
        """We recover source model's parameter from ema model after test.

        Args:
            runner (Runner): The runner of the testing process.
            metrics (Dict[str, float], optional): Evaluation results of all
                metrics on test dataset. The keys are the names of the
                metrics, and the values are corresponding results.
        """
        self._swap_ema_parameters()

    def before_save_checkpoint(self, runner, checkpoint: dict) -> None:
        """Save ema parameters to checkpoint.

        Args:
            runner (Runner): The runner of the testing process.
        """
        checkpoint['ema_state_dict'] = self.ema_model.state_dict()
        # Save ema parameters to the source model's state dict so that we
        # can directly load the averaged model weights for deployment.
        # Swapping the state_dict key-values instead of swapping model
        # parameters because the state_dict is a shallow copy of model
        # parameters.
        self._swap_ema_state_dict(checkpoint)

    def after_load_checkpoint(self, runner, checkpoint: dict) -> None:
        """Resume ema parameters from checkpoint.

        Args:
            runner (Runner): The runner of the testing process.
        """
        from mmengine.runner.checkpoint import load_state_dict
        if 'ema_state_dict' in checkpoint and runner._resume:
            # The original model parameters are actually saved in ema
            # field swap the weights back to resume ema state.
            self._swap_ema_state_dict(checkpoint)
            self.ema_model.load_state_dict(
                checkpoint['ema_state_dict'], strict=self.strict_load)
        # Support load checkpoint without ema state dict.
        else:
            if runner._resume:
                print_log(
                    'There is no `ema_state_dict` in checkpoint. '
                    '`EMAHook` will make a copy of `state_dict` as the '
                    'initial `ema_state_dict`', 'current', logging.WARNING)
            load_state_dict(
                self.ema_model.module,
                copy.deepcopy(checkpoint['state_dict']),
                strict=self.strict_load)

    def _swap_ema_parameters(self) -> None:
        """Swap the parameter of model with ema_model."""
        avg_param = (
            itertools.chain(self.ema_model.module.parameters(),
                            self.ema_model.module.buffers())
            if self.ema_model.update_buffers else
            self.ema_model.module.parameters())
        src_param = (
            itertools.chain(self.src_model.parameters(),
                            self.src_model.buffers())
            if self.ema_model.update_buffers else self.src_model.parameters())
        for p_avg, p_src in zip(avg_param, src_param):
            tmp = p_avg.data.clone()
            p_avg.data.copy_(p_src.data)
            p_src.data.copy_(tmp)

    def _swap_ema_state_dict(self, checkpoint):
        """Swap the state dict values of model with ema_model."""
        model_state = checkpoint['state_dict']
        ema_state = checkpoint['ema_state_dict']
        for k in ema_state:
            if k[:7] == 'module.':
                tmp = ema_state[k]
                ema_state[k] = model_state[k[7:]]
                model_state[k[7:]] = tmp

    def _ema_started(self, runner) -> bool:
        """Whether ``EMAHook`` has been initialized at current iteration or
        epoch.

        :attr:`ema_model` will be initialized when ``runner.iter`` or
        ``runner.epoch`` is greater than ``self.begin`` for the first time.

        Args:
            runner (Runner): Runner of the training, validation process.

        Returns:
            bool: Whether ``EMAHook`` has been initialized.
        """
        if self.enabled_by_epoch:
            return runner.epoch + 1 >= self.begin_epoch
        else:
            return runner.iter + 1 >= self.begin_iter

mmengine.hooks.EMAHook(ema_type='ExponentialMovingAverage', strict_load=False, begin_iter=0, begin_epoch=0, **kwargs)
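A minimal sketch of registering this hook in an MMEngine config; any extra keywords are forwarded through **kwargs to the averaged model, so the EMA momentum can be set here (values illustrative):

custom_hooks = [
    dict(
        type='EMAHook',
        ema_type='ExponentialMovingAverage',  # default averaging strategy
        momentum=0.0002,   # forwarded to ExponentialMovingAverage via **kwargs
        begin_epoch=0)     # EMA enabled from the start of training (default)
]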

mmengine/mmengine/model/averaged_model.py

# Copyright (c) OpenMMLab. All rights reserved.
import logging
from abc import abstractmethod
from copy import deepcopy
from typing import Optional

import torch
import torch.nn as nn
from torch import Tensor

from mmengine.logging import print_log
from mmengine.registry import MODELS


class BaseAveragedModel(nn.Module):
    """A base class for averaging model weights.

    Weight averaging, such as SWA and EMA, is a widely used technique for
    training neural networks. This class implements the averaging process
    for a model. All subclasses must implement the `avg_func` method.
    This class creates a copy of the provided module :attr:`model`
    on the :attr:`device` and allows computing running averages of the
    parameters of the :attr:`model`.

    The code is referenced from: https://github.com/pytorch/pytorch/blob/master/torch/optim/swa_utils.py.

    Different from the `AveragedModel` in PyTorch, we use in-place operation
    to improve the parameter updating speed, which is about 5 times faster
    than the non-in-place version.

    In mmengine, we provide two ways to use the model averaging:

    1. Use the model averaging module in hook:
       We provide an :class:`mmengine.hooks.EMAHook` to apply the model
       averaging during training. Add ``custom_hooks=[dict(type='EMAHook')]``
       to the config or the runner.

    2. Use the model averaging module directly in the algorithm. Take the ema
       teacher in semi-supervise as an example:

       >>> from mmengine.model import ExponentialMovingAverage
       >>> student = ResNet(depth=50)
       >>> # use ema model as teacher
       >>> ema_teacher = ExponentialMovingAverage(student)

    Args:
        model (nn.Module): The model to be averaged.
        interval (int): Interval between two updates. Defaults to 1.
        device (torch.device, optional): If provided, the averaged model will
            be stored on the :attr:`device`. Defaults to None.
        update_buffers (bool): if True, it will compute running averages for
            both the parameters and the buffers of the model. Defaults to
            False.
    """  # noqa: E501

    def __init__(self,
                 model: nn.Module,
                 interval: int = 1,
                 device: Optional[torch.device] = None,
                 update_buffers: bool = False) -> None:
        super().__init__()
        self.module = deepcopy(model).requires_grad_(False)
        self.interval = interval
        if device is not None:
            self.module = self.module.to(device)
        self.register_buffer('steps',
                             torch.tensor(0, dtype=torch.long, device=device))
        self.update_buffers = update_buffers
        if update_buffers:
            self.avg_parameters = self.module.state_dict()
        else:
            self.avg_parameters = dict(self.module.named_parameters())

    @abstractmethod
    def avg_func(self, averaged_param: Tensor, source_param: Tensor,
                 steps: int) -> None:
        """Use in-place operation to compute the average of the parameters.
        All subclasses must implement this method.

        Args:
            averaged_param (Tensor): The averaged parameters.
            source_param (Tensor): The source parameters.
            steps (int): The number of times the parameters have been
                updated.
        """

    def forward(self, *args, **kwargs):
        """Forward method of the averaged model."""
        return self.module(*args, **kwargs)

    def update_parameters(self, model: nn.Module) -> None:
        """Update the parameters of the model. This method will execute the
        ``avg_func`` to compute the new parameters and update the model's
        parameters.

        Args:
            model (nn.Module): The model whose parameters will be averaged.
        """
        src_parameters = (
            model.state_dict()
            if self.update_buffers else dict(model.named_parameters()))
        if self.steps == 0:
            for k, p_avg in self.avg_parameters.items():
                p_avg.data.copy_(src_parameters[k].data)
        elif self.steps % self.interval == 0:
            for k, p_avg in self.avg_parameters.items():
                if p_avg.dtype.is_floating_point:
                    device = p_avg.device
                    self.avg_func(p_avg.data,
                                  src_parameters[k].data.to(device),
                                  self.steps)
        if not self.update_buffers:
            # If not update the buffers,
            # keep the buffers in sync with the source model.
            for b_avg, b_src in zip(self.module.buffers(), model.buffers()):
                b_avg.data.copy_(b_src.data.to(b_avg.device))
        self.steps += 1


@MODELS.register_module()
class StochasticWeightAverage(BaseAveragedModel):
    """Implements the stochastic weight averaging (SWA) of the model.

    Stochastic Weight Averaging was proposed in `Averaging Weights Leads to
    Wider Optima and Better Generalization, UAI 2018.
    <https://arxiv.org/abs/1803.05407>`_ by Pavel Izmailov, Dmitrii
    Podoprikhin, Timur Garipov, Dmitry Vetrov and Andrew Gordon Wilson.
    """

    def avg_func(self, averaged_param: Tensor, source_param: Tensor,
                 steps: int) -> None:
        """Compute the average of the parameters using stochastic weight
        average.

        Args:
            averaged_param (Tensor): The averaged parameters.
            source_param (Tensor): The source parameters.
            steps (int): The number of times the parameters have been
                updated.
        """
        averaged_param.add_(
            source_param - averaged_param,
            alpha=1 / float(steps // self.interval + 1))


@MODELS.register_module()
class ExponentialMovingAverage(BaseAveragedModel):
    r"""Implements the exponential moving average (EMA) of the model.

    All parameters are updated by the formula as below:

    .. math::

        Xema_{t+1} = (1 - momentum) * Xema_{t} + momentum * X_t

    .. note::
        This :attr:`momentum` argument is different from one used in optimizer
        classes and the conventional notion of momentum. Mathematically,
        :math:`Xema_{t+1}` is the moving average and :math:`X_t` is the
        new observed value. The value of momentum is usually a small number,
        allowing observed values to slowly update the ema parameters.

    Args:
        model (nn.Module): The model to be averaged.
        momentum (float): The momentum used for updating ema parameter.
            Defaults to 0.0002.
            Ema's parameter are updated with the formula
            :math:`averaged\_param = (1-momentum) * averaged\_param +
            momentum * source\_param`.
        interval (int): Interval between two updates. Defaults to 1.
        device (torch.device, optional): If provided, the averaged model will
            be stored on the :attr:`device`. Defaults to None.
        update_buffers (bool): if True, it will compute running averages for
            both the parameters and the buffers of the model. Defaults to
            False.
    """  # noqa: W605

    def __init__(self,
                 model: nn.Module,
                 momentum: float = 0.0002,
                 interval: int = 1,
                 device: Optional[torch.device] = None,
                 update_buffers: bool = False) -> None:
        super().__init__(model, interval, device, update_buffers)
        assert 0.0 < momentum < 1.0, 'momentum must be in range (0.0, 1.0)'\
            f'but got {momentum}'
        if momentum > 0.5:
            print_log(
                'The value of momentum in EMA is usually a small number,'
                'which is different from the conventional notion of '
                f'momentum but got {momentum}. Please make sure the '
                f'value is correct.',
                logger='current',
                level=logging.WARNING)
        self.momentum = momentum

    def avg_func(self, averaged_param: Tensor, source_param: Tensor,
                 steps: int) -> None:
        """Compute the moving average of the parameters using exponential
        moving average.

        Args:
            averaged_param (Tensor): The averaged parameters.
            source_param (Tensor): The source parameters.
            steps (int): The number of times the parameters have been
                updated.
        """
        averaged_param.lerp_(source_param, self.momentum)


@MODELS.register_module()
class MomentumAnnealingEMA(ExponentialMovingAverage):
    r"""Exponential moving average (EMA) with momentum annealing strategy.

    Args:
        model (nn.Module): The model to be averaged.
        momentum (float): The momentum used for updating ema parameter.
            Defaults to 0.0002.
            Ema's parameter are updated with the formula
            :math:`averaged\_param = (1-momentum) * averaged\_param +
            momentum * source\_param`.
        gamma (int): Use a larger momentum early in training and gradually
            annealing to a smaller value to update the ema model smoothly. The
            momentum is calculated as max(momentum, gamma / (gamma + steps)).
            Defaults to 100.
        interval (int): Interval between two updates. Defaults to 1.
        device (torch.device, optional): If provided, the averaged model will
            be stored on the :attr:`device`. Defaults to None.
        update_buffers (bool): if True, it will compute running averages for
            both the parameters and the buffers of the model. Defaults to
            False.
    """

    def __init__(self,
                 model: nn.Module,
                 momentum: float = 0.0002,
                 gamma: int = 100,
                 interval: int = 1,
                 device: Optional[torch.device] = None,
                 update_buffers: bool = False) -> None:
        super().__init__(
            model=model,
            momentum=momentum,
            interval=interval,
            device=device,
            update_buffers=update_buffers)
        assert gamma > 0, f'gamma must be greater than 0, but got {gamma}'
        self.gamma = gamma

    def avg_func(self, averaged_param: Tensor, source_param: Tensor,
                 steps: int) -> None:
        """Compute the moving average of the parameters using the linear
        momentum strategy.

        Args:
            averaged_param (Tensor): The averaged parameters.
            source_param (Tensor): The source parameters.
            steps (int): The number of times the parameters have been
                updated.
        """
        momentum = max(self.momentum,
                       self.gamma / (self.gamma + self.steps.item()))
        averaged_param.lerp_(source_param, momentum)
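Beyond the hook, the averaged models can be used directly, as the BaseAveragedModel docstring suggests for a semi-supervised EMA teacher. A minimal, self-contained sketch (the toy network and training loop are placeholders):

import torch
import torch.nn as nn
from mmengine.model import ExponentialMovingAverage

student = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
ema_teacher = ExponentialMovingAverage(student, momentum=0.0002)

optimizer = torch.optim.SGD(student.parameters(), lr=0.1)
for _ in range(10):
    loss = student(torch.randn(4, 8)).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # refresh the EMA copy from the current student weights
    ema_teacher.update_parameters(student)

# the EMA copy can be run like a normal module
with torch.no_grad():
    teacher_out = ema_teacher(torch.randn(4, 8))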

EMAHook configuration

EMAHook applies an exponential moving average to the model weights during training in order to improve model robustness. Note that the exponentially averaged model is only used for validation and testing; it does not affect training.

Configuration in MMCV 1.6

custom_hooks = [dict(type='EMAHook')]
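If the defaults need tuning in the MMCV 1.x setting, the hook arguments described above can be passed in the same dict; a sketch with the default values spelled out:

custom_hooks = [
    dict(
        type='EMAHook',
        momentum=0.0002,   # weight of the current parameters in the average
        interval=1,        # update the EMA buffers every iteration
        warm_up=100,       # warm-up steps for the momentum clamp
        resume_from=None)  # optional checkpoint to resume EMA state from
]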

Configuration in MMEngine

custom_hooks = [dict(type='EMAHook')]
runner = Runner(custom_hooks=custom_hooks, ...)
runner.train()

EMAHook uses ExponentialMovingAverage by default; StochasticWeightAverage and MomentumAnnealingEMA are also available. Set ema_type to switch to a different averaging strategy, for example:

custom_hooks = [dict(type='EMAHook', ema_type='StochasticWeightAverage')]
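Likewise, MomentumAnnealingEMA can be selected, with its gamma forwarded through **kwargs (shown here with the documented defaults):

custom_hooks = [
    dict(
        type='EMAHook',
        ema_type='MomentumAnnealingEMA',
        momentum=0.0002,
        gamma=100)   # momentum = max(momentum, gamma / (gamma + steps))
]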

See the EMAHook API reference for more usage details.
