MindSpore Learning Notes 6-02: Computer Vision - ResNet50 Transfer Learning

Abstract:

        These notes record the process of classifying ImageNet wolf and dog images with the MindSpore AI framework using ResNet50 transfer learning: environment setup, dataset download and loading, model construction, fixed-feature training, training and evaluation, and model prediction.

I. The Transfer Learning Method

        Train a model on a large dataset to obtain a pretrained model

        Initialize the network weights from it

        Freeze the feature extractor

        Apply the model to a specific task

The task here is to classify the wolf and dog images from the ImageNet dataset.

II. Environment Setup

%%capture captured_output
# The environment comes with mindspore==2.2.14 preinstalled; change the version number below to switch versions
!pip uninstall mindspore -y
!pip install -i https://pypi.mirrors.ustc.edu.cn/simple mindspore==2.2.14
# Show the current mindspore version
!pip show mindspore

Output:

Name: mindspore
Version: 2.2.14
Summary: MindSpore is a new open source deep learning training/inference framework that could be used for mobile, edge and cloud scenarios.
Home-page: https://www.mindspore.cn
Author: The MindSpore Authors
Author-email: contact@mindspore.cn
License: Apache 2.0
Location: /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages
Requires: asttokens, astunparse, numpy, packaging, pillow, protobuf, psutil, scipy
Required-by: 

III. Data Preparation

1. Download the dataset

The Canidae subset of ImageNet provides, for each of the wolf and dog classes:

        120 training images

        30 validation images

The download interface:

        downloads the dataset

        automatically extracts it into the current directory

from download import download

dataset_url = "https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/datasets/intermediate/Canidae_data.zip"

download(dataset_url, "./datasets-Canidae", kind="zip", replace=True)

Output:

Creating data folder...
Downloading data from https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/datasets/intermediate/Canidae_data.zip (11.3 MB)
file_sizes: 100%|███████████████████████████| 11.9M/11.9M [00:00<00:00, 140MB/s]
Extracting zip file...
Successfully downloaded / unzipped to ./datasets-Canidae
'./datasets-Canidae'

Dataset directory structure:

datasets-Canidae/data/
└── Canidae
    ├── train
    │   ├── dogs
    │   └── wolves
    └── val
        ├── dogs
        └── wolves
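As noted later in the notebook, ImageFolderDataset assigns integer labels in ascending order of sub-folder name, so dogs maps to 0 and wolves to 1. A stdlib-only sketch of that convention (the directory it builds is a throwaway stand-in for the tree above):

```python
import os
import tempfile

# Rebuild the train/val sub-tree in a throwaway directory (illustrative paths only)
with tempfile.TemporaryDirectory() as root:
    for split in ("train", "val"):
        for cls in ("wolves", "dogs"):      # created out of alphabetical order on purpose
            os.makedirs(os.path.join(root, split, cls))

    # Folder names are sorted, then numbered from 0: dogs gets 0, wolves gets 1
    classes = sorted(os.listdir(os.path.join(root, "train")))
    label_map = {name: idx for idx, name in enumerate(classes)}
    print(label_map)  # {'dogs': 0, 'wolves': 1}
```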

2. Load the dataset

The mindspore.dataset.ImageFolderDataset interface:

        loads the dataset

        applies the image augmentation operations

Define the input parameters for the run:

batch_size = 18                             # batch size
image_size = 224                            # spatial size of the training images
num_epochs = 5                              # number of training epochs
lr = 0.001                                  # learning rate
momentum = 0.9                              # momentum
workers = 4                                 # number of parallel workers

import mindspore as ms
import mindspore.dataset as ds
import mindspore.dataset.vision as vision

# Dataset directory paths
data_path_train = "./datasets-Canidae/data/Canidae/train/"
data_path_val = "./datasets-Canidae/data/Canidae/val/"

# Create the training dataset

def create_dataset_canidae(dataset_path, usage):
    """Load the dataset."""
    data_set = ds.ImageFolderDataset(dataset_path,
                                     num_parallel_workers=workers,
                                     shuffle=True)

    # Augmentation parameters
    mean = [0.485 * 255, 0.456 * 255, 0.406 * 255]
    std = [0.229 * 255, 0.224 * 255, 0.225 * 255]
    scale = 32

    if usage == "train":
        # Define map operations for the training dataset
        trans = [
            vision.RandomCropDecodeResize(size=image_size, scale=(0.08, 1.0), ratio=(0.75, 1.333)),
            vision.RandomHorizontalFlip(prob=0.5),
            vision.Normalize(mean=mean, std=std),
            vision.HWC2CHW()
        ]
    else:
        # Define map operations for the inference dataset
        trans = [
            vision.Decode(),
            vision.Resize(image_size + scale),
            vision.CenterCrop(image_size),
            vision.Normalize(mean=mean, std=std),
            vision.HWC2CHW()
        ]

    # Apply the map operations
    data_set = data_set.map(operations=trans, input_columns='image', num_parallel_workers=workers)

    # Batch the data
    data_set = data_set.batch(batch_size)

    return data_set


dataset_train = create_dataset_canidae(data_path_train, "train")
step_size_train = dataset_train.get_dataset_size()

dataset_val = create_dataset_canidae(data_path_val, "val")
step_size_val = dataset_val.get_dataset_size()
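Since each class provides 120 training and 30 validation images, the values of step_size_train and step_size_val can be predicted by hand: batch() keeps the final partial batch by default, so each step count is a ceiling division. A stdlib check (the variable names below are local to this sketch):

```python
import math

batch = 18
train_images = 120 * 2   # 120 per class, 2 classes
val_images = 30 * 2

expected_train_steps = math.ceil(train_images / batch)
expected_val_steps = math.ceil(val_images / batch)
print(expected_train_steps, expected_val_steps)  # 14 4
```

The 14 training steps also match the "per step time" figures printed during training later on.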

3. Visualize the dataset

The mindspore.dataset.ImageFolderDataset interface:

        loads the training dataset

        returns a dictionary per item

The create_dict_iterator interface:

        creates a data iterator

        next() fetches one batch of the dataset

        with batch_size set to 18

                each next() call returns 18 images and their labels

data = next(dataset_train.create_dict_iterator())
images = data["image"]
labels = data["label"]

print("Tensor of image", images.shape)
print("Labels:", labels)

Output:

Tensor of image (18, 3, 224, 224)
Labels: [0 0 1 0 0 1 0 1 0 0 0 1 0 0 1 0 1 1]

Display the images and their labels

The title shows each image's label name

import matplotlib.pyplot as plt
import numpy as np

# class_name maps labels to names; labels are assigned in ascending order of folder name
class_name = {0: "dogs", 1: "wolves"}

plt.figure(figsize=(5, 5))
for i in range(4):
    # Fetch an image and its label
    data_image = images[i].asnumpy()
    data_label = labels[i]
    # Prepare the image for display
    data_image = np.transpose(data_image, (1, 2, 0))
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    data_image = std * data_image + mean
    data_image = np.clip(data_image, 0, 1)
    # Show the image
    plt.subplot(2, 2, i+1)
    plt.imshow(data_image)
    plt.title(class_name[int(labels[i].asnumpy())])
    plt.axis("off")

plt.show()
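The std * data_image + mean line above inverts the Normalize transform applied during loading, which computed (x - mean) / std per channel. A quick round-trip sanity check in plain Python, using the red-channel constants:

```python
mean, std = 0.485, 0.229   # red-channel constants used in the notebook

x = 0.7                    # an arbitrary pixel value in [0, 1]
normalized = (x - mean) / std
recovered = std * normalized + mean

assert abs(recovered - x) < 1e-9
print(round(recovered, 3))  # 0.7
```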

Output:

IV. Training the Model

Train the ResNet50 model:

        with pretrained=True

        the pretrained ResNet50 checkpoint is downloaded

        and its weight parameters are loaded into the network

1. Build the ResNet50 network

from typing import Type, Union, List, Optional
from mindspore import nn, train
from mindspore.common.initializer import Normal


weight_init = Normal(mean=0, sigma=0.02)
gamma_init = Normal(mean=1, sigma=0.02)


class ResidualBlockBase(nn.Cell):
    expansion: int = 1  # the last conv has the same number of kernels as the first

    def __init__(self, in_channel: int, out_channel: int,
                 stride: int = 1, norm: Optional[nn.Cell] = None,
                 down_sample: Optional[nn.Cell] = None) -> None:
        super(ResidualBlockBase, self).__init__()
        if not norm:
            self.norm = nn.BatchNorm2d(out_channel)
        else:
            self.norm = norm

        self.conv1 = nn.Conv2d(in_channel, out_channel,
                               kernel_size=3, stride=stride,
                               weight_init=weight_init)
        self.conv2 = nn.Conv2d(out_channel, out_channel,
                               kernel_size=3, weight_init=weight_init)
        self.relu = nn.ReLU()
        self.down_sample = down_sample

    def construct(self, x):
        """ResidualBlockBase construct."""
        identity = x  # shortcut branch

        out = self.conv1(x)  # main branch, first layer: 3x3 conv
        out = self.norm(out)
        out = self.relu(out)
        out = self.conv2(out)  # main branch, second layer: 3x3 conv
        out = self.norm(out)

        if self.down_sample is not None:
            identity = self.down_sample(x)
        out += identity  # output is the sum of the main branch and the shortcut
        out = self.relu(out)

        return out


class ResidualBlock(nn.Cell):
    expansion = 4  # the last conv has 4x the number of kernels of the first

    def __init__(self, in_channel: int, out_channel: int,
                 stride: int = 1, down_sample: Optional[nn.Cell] = None) -> None:
        super(ResidualBlock, self).__init__()

        self.conv1 = nn.Conv2d(in_channel, out_channel,
                               kernel_size=1, weight_init=weight_init)
        self.norm1 = nn.BatchNorm2d(out_channel)
        self.conv2 = nn.Conv2d(out_channel, out_channel,
                               kernel_size=3, stride=stride,
                               weight_init=weight_init)
        self.norm2 = nn.BatchNorm2d(out_channel)
        self.conv3 = nn.Conv2d(out_channel, out_channel * self.expansion,
                               kernel_size=1, weight_init=weight_init)
        self.norm3 = nn.BatchNorm2d(out_channel * self.expansion)

        self.relu = nn.ReLU()
        self.down_sample = down_sample

    def construct(self, x):
        identity = x  # shortcut branch

        out = self.conv1(x)  # main branch, first layer: 1x1 conv
        out = self.norm1(out)
        out = self.relu(out)
        out = self.conv2(out)  # main branch, second layer: 3x3 conv
        out = self.norm2(out)
        out = self.relu(out)
        out = self.conv3(out)  # main branch, third layer: 1x1 conv
        out = self.norm3(out)

        if self.down_sample is not None:
            identity = self.down_sample(x)

        out += identity  # output is the sum of the main branch and the shortcut
        out = self.relu(out)

        return out


def make_layer(last_out_channel, block: Type[Union[ResidualBlockBase, ResidualBlock]],
               channel: int, block_nums: int, stride: int = 1):
    down_sample = None  # shortcut branch

    if stride != 1 or last_out_channel != channel * block.expansion:
        down_sample = nn.SequentialCell([
            nn.Conv2d(last_out_channel, channel * block.expansion,
                      kernel_size=1, stride=stride, weight_init=weight_init),
            nn.BatchNorm2d(channel * block.expansion, gamma_init=gamma_init)
        ])

    layers = []
    layers.append(block(last_out_channel, channel, stride=stride, down_sample=down_sample))

    in_channel = channel * block.expansion
    # stack the remaining residual blocks
    for _ in range(1, block_nums):
        layers.append(block(in_channel, channel))

    return nn.SequentialCell(layers)


from mindspore import load_checkpoint, load_param_into_net


class ResNet(nn.Cell):
    def __init__(self, block: Type[Union[ResidualBlockBase, ResidualBlock]],
                 layer_nums: List[int], num_classes: int, input_channel: int) -> None:
        super(ResNet, self).__init__()

        self.relu = nn.ReLU()
        # first conv layer: 3 input channels (RGB image), 64 output channels
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, weight_init=weight_init)
        self.norm = nn.BatchNorm2d(64)
        # max-pooling layer to shrink the feature map
        self.max_pool = nn.MaxPool2d(kernel_size=3, stride=2, pad_mode='same')
        # the residual stages
        self.layer1 = make_layer(64, block, 64, layer_nums[0])
        self.layer2 = make_layer(64 * block.expansion, block, 128, layer_nums[1], stride=2)
        self.layer3 = make_layer(128 * block.expansion, block, 256, layer_nums[2], stride=2)
        self.layer4 = make_layer(256 * block.expansion, block, 512, layer_nums[3], stride=2)
        # average-pooling layer
        self.avg_pool = nn.AvgPool2d()
        # flatten layer
        self.flatten = nn.Flatten()
        # fully connected layer
        self.fc = nn.Dense(in_channels=input_channel, out_channels=num_classes)

    def construct(self, x):
        x = self.conv1(x)
        x = self.norm(x)
        x = self.relu(x)
        x = self.max_pool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        x = self.avg_pool(x)
        x = self.flatten(x)
        x = self.fc(x)

        return x


def _resnet(model_url: str, block: Type[Union[ResidualBlockBase, ResidualBlock]],
            layers: List[int], num_classes: int, pretrained: bool, pretrained_ckpt: str,
            input_channel: int):
    model = ResNet(block, layers, num_classes, input_channel)

    if pretrained:
        # download and load the pretrained checkpoint
        download(url=model_url, path=pretrained_ckpt, replace=True)
        param_dict = load_checkpoint(pretrained_ckpt)
        load_param_into_net(model, param_dict)

    return model


def resnet50(num_classes: int = 1000, pretrained: bool = False):
    "ResNet50 model"
    resnet50_url = "https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/models/application/resnet50_224_new.ckpt"
    resnet50_ckpt = "./LoadPretrainedModel/resnet50_224_new.ckpt"
    return _resnet(resnet50_url, ResidualBlock, [3, 4, 6, 3], num_classes,
                   pretrained, resnet50_ckpt, 2048)

2. Train with fixed features

Fixed-feature training:

        freezes every network layer except the last one

        by setting requires_grad = False on the frozen parameters

        so no gradients are computed for them during backpropagation
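The freezing logic can be sketched without MindSpore: an optimizer built from trainable_params() only ever sees parameters whose requires_grad flag is still True. A toy stand-in (the Param class below is illustrative, not MindSpore's Parameter):

```python
class Param:
    """Minimal stand-in for a framework parameter."""
    def __init__(self, name):
        self.name = name
        self.requires_grad = True

params = [Param("conv1.weight"), Param("layer1.0.conv1.weight"),
          Param("fc.weight"), Param("fc.bias")]

# Freeze everything except the final fully connected layer
for p in params:
    if p.name not in ["fc.weight", "fc.bias"]:
        p.requires_grad = False

trainable = [p.name for p in params if p.requires_grad]
print(trainable)  # ['fc.weight', 'fc.bias']
```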

import mindspore as ms
import matplotlib.pyplot as plt
import os
import time

net_work = resnet50(pretrained=True)

# Input size of the fully connected layer
in_channels = net_work.fc.in_channels
# Output size is 2, the number of wolf/dog classes
head = nn.Dense(in_channels, 2)
# Reset the fully connected layer
net_work.fc = head

# Average-pooling layer with kernel size 7
avg_pool = nn.AvgPool2d(kernel_size=7)
# Reset the average-pooling layer
net_work.avg_pool = avg_pool

# Freeze all parameters except those of the last layer
for param in net_work.get_parameters():
    if param.name not in ["fc.weight", "fc.bias"]:
        param.requires_grad = False

# Define the optimizer and loss function
opt = nn.Momentum(params=net_work.trainable_params(), learning_rate=lr, momentum=0.5)
loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')


def forward_fn(inputs, targets):
    logits = net_work(inputs)
    loss = loss_fn(logits, targets)
    return loss


grad_fn = ms.value_and_grad(forward_fn, None, opt.parameters)


def train_step(inputs, targets):
    loss, grads = grad_fn(inputs, targets)
    opt(grads)
    return loss


# Instantiate the model
model1 = train.Model(net_work, loss_fn, opt, metrics={"Accuracy": train.Accuracy()})
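For intuition, the SoftmaxCrossEntropyWithLogits(sparse=True) loss used above reduces, per sample, to -log(softmax(logits)[label]). A stdlib re-derivation (an illustrative sketch, not the MindSpore kernel; the logit values are made up):

```python
import math

def sparse_softmax_ce(logits, label):
    """-log(softmax(logits)[label]), with a max-shift for numerical stability."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    return -math.log(exps[label] / sum(exps))

# Two logits (dogs, wolves); the correct class has the larger logit
loss = sparse_softmax_ce([2.0, 0.5], 0)
print(round(loss, 4))  # 0.2014
```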

Output:

Downloading data from https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/models/application/resnet50_224_new.ckpt (97.7 MB)
file_sizes: 100%|█████████████████████████████| 102M/102M [00:00<00:00, 115MB/s]
Successfully downloaded file to ./LoadPretrainedModel/resnet50_224_new.ckpt

V. Training and Evaluation

Save the checkpoint with the best validation accuracy under the current path:

        ./BestCheckpoint/resnet50-best-freezing-param.ckpt

import mindspore as ms
import matplotlib.pyplot as plt
import os
import time

dataset_train = create_dataset_canidae(data_path_train, "train")
step_size_train = dataset_train.get_dataset_size()

dataset_val = create_dataset_canidae(data_path_val, "val")
step_size_val = dataset_val.get_dataset_size()

num_epochs = 5

# Create the iterators
data_loader_train = dataset_train.create_tuple_iterator(num_epochs=num_epochs)
data_loader_val = dataset_val.create_tuple_iterator(num_epochs=num_epochs)
best_ckpt_dir = "./BestCheckpoint"
best_ckpt_path = "./BestCheckpoint/resnet50-best-freezing-param.ckpt"

# Start the training loop
print("Start Training Loop ...")

best_acc = 0

for epoch in range(num_epochs):
    losses = []
    net_work.set_train()

    epoch_start = time.time()

    # Read in the data for this epoch
    for i, (images, labels) in enumerate(data_loader_train):
        labels = labels.astype(ms.int32)
        loss = train_step(images, labels)
        losses.append(loss)

    # Validate accuracy after each epoch
    acc = model1.eval(dataset_val)['Accuracy']

    epoch_end = time.time()
    epoch_ms = (epoch_end - epoch_start) * 1000
    step_ms = epoch_ms / step_size_train

    print("-" * 20)
    print("Epoch: [%3d/%3d], Average Train Loss: [%5.3f], Accuracy: [%5.3f]" % (
        epoch+1, num_epochs, sum(losses)/len(losses), acc))
    print("epoch time: %5.3f ms, per step time: %5.3f ms" % (epoch_ms, step_ms))

    if acc > best_acc:
        best_acc = acc
        if not os.path.exists(best_ckpt_dir):
            os.mkdir(best_ckpt_dir)
        ms.save_checkpoint(net_work, best_ckpt_path)

print("=" * 80)
print(f"End of validation the best Accuracy is: {best_acc: 5.3f}, "
      f"save the best ckpt file in {best_ckpt_path}", flush=True)

Output:

Start Training Loop ...
--------------------
Epoch: [  1/  5], Average Train Loss: [0.641], Accuracy: [0.850]
epoch time: 125652.211 ms, per step time: 8975.158 ms
--------------------
Epoch: [  2/  5], Average Train Loss: [0.560], Accuracy: [0.983]
epoch time: 1105.189 ms, per step time: 78.942 ms
--------------------
Epoch: [  3/  5], Average Train Loss: [0.507], Accuracy: [0.983]
epoch time: 838.326 ms, per step time: 59.880 ms
--------------------
Epoch: [  4/  5], Average Train Loss: [0.451], Accuracy: [1.000]
epoch time: 927.405 ms, per step time: 66.243 ms
--------------------
Epoch: [  5/  5], Average Train Loss: [0.377], Accuracy: [0.983]
epoch time: 950.847 ms, per step time: 67.918 ms
================================================================================
End of validation the best Accuracy is:  1.000, save the best ckpt file in ./BestCheckpoint/resnet50-best-freezing-param.ckpt

VI. Visualizing Model Predictions

Use the best checkpoint obtained with fixed-feature training to predict the wolf and dog images of the validation set.

Title in blue         correct prediction

Title in red          wrong prediction
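The prediction step itself is just a per-row argmax over the two logits followed by a class_name lookup. A stdlib sketch of that post-processing (the logit values are invented for illustration):

```python
class_name = {0: "dogs", 1: "wolves"}

# One row of logits per image (made-up numbers)
batch_logits = [[2.1, 0.3], [0.2, 1.7], [1.1, 1.0]]

# Per-row argmax, then a lookup of the class name
pred = [max(range(len(row)), key=row.__getitem__) for row in batch_logits]
pred_names = [class_name[p] for p in pred]
print(pred_names)  # ['dogs', 'wolves', 'dogs']
```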

import matplotlib.pyplot as plt
import mindspore as ms

def visualize_model(best_ckpt_path, val_ds):
    net = resnet50()
    # Input size of the fully connected layer
    in_channels = net.fc.in_channels
    # Output size is 2, the number of wolf/dog classes
    head = nn.Dense(in_channels, 2)
    # Reset the fully connected layer
    net.fc = head
    # Average-pooling layer with kernel size 7
    avg_pool = nn.AvgPool2d(kernel_size=7)
    # Reset the average-pooling layer
    net.avg_pool = avg_pool
    # Load the model parameters
    param_dict = ms.load_checkpoint(best_ckpt_path)
    ms.load_param_into_net(net, param_dict)
    model = train.Model(net)
    # Load one batch of validation data
    data = next(val_ds.create_dict_iterator())
    images = data["image"].asnumpy()
    labels = data["label"].asnumpy()
    class_name = {0: "dogs", 1: "wolves"}
    # Predict the image classes
    output = model.predict(ms.Tensor(data['image']))
    pred = np.argmax(output.asnumpy(), axis=1)

    # Display the images and their predicted classes
    plt.figure(figsize=(5, 5))
    for i in range(4):
        plt.subplot(2, 2, i + 1)
        # Blue if the prediction is correct, red if it is wrong
        color = 'blue' if pred[i] == labels[i] else 'red'
        plt.title('predict:{}'.format(class_name[pred[i]]), color=color)
        picture_show = np.transpose(images[i], (1, 2, 0))
        mean = np.array([0.485, 0.456, 0.406])
        std = np.array([0.229, 0.224, 0.225])
        picture_show = std * picture_show + mean
        picture_show = np.clip(picture_show, 0, 1)
        plt.imshow(picture_show)
        plt.axis('off')

    plt.show()

visualize_model(best_ckpt_path, dataset_val)

Output:
