MindSpore (昇思) Study Notes 3-03: Popular LLMs and Other AI Applications -- Garbage Classification Based on MobileNetV2

Abstract:

Using the MindSpore AI framework and the MobileNetV2 model to build a garbage-classification application: recognize garbage objects in local images and save the results to a file. These notes record the development process step by step, including environment preparation, data download, loading and preprocessing, model construction, training, testing, and inference.

1. Experiment Objectives

Understand how the garbage-classification application code is written (in Python);

Understand the basic use of the Linux operating system;

Master the basic use of the atc command for model conversion.

2. Concepts

Principles of the MobileNetV2 network

        Proposed by Google (MobileNetV1 in 2017, MobileNetV2 in 2018)

        A lightweight CNN designed for

                mobile devices

                embedded and IoT devices

        Key techniques

                depthwise separable convolution (Depthwise Separable Convolution)

                width multiplier α and resolution multiplier β

                inverted residual block (Inverted residual block)

                Linear Bottlenecks

        Advantages

                fewer model parameters

                lower computation cost

                improved accuracy, with a smaller optimized model

Inverted residual block structure (a minimal code sketch follows the next list)

        1x1 convolution to expand the channels

        3x3 DepthWise convolution

        1x1 convolution to reduce the channels

Standard residual block structure

        1x1 convolution to reduce the channels

        3x3 convolution

        1x1 convolution to expand the channels
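A minimal MindSpore sketch of this expand -> depthwise -> project pattern (not the exact implementation used in Section 5; the channel count 32 and expand ratio 6 are chosen arbitrarily for illustration):

import mindspore.nn as nn

class TinyInvertedResidual(nn.Cell):
    """1x1 expand -> 3x3 depthwise -> 1x1 linear projection, with a skip connection."""
    def __init__(self, channels=32, expand_ratio=6):
        super().__init__()
        hidden = channels * expand_ratio
        self.block = nn.SequentialCell([
            nn.Conv2d(channels, hidden, kernel_size=1), nn.BatchNorm2d(hidden), nn.ReLU6(),              # 1x1 expand
            nn.Conv2d(hidden, hidden, kernel_size=3, group=hidden), nn.BatchNorm2d(hidden), nn.ReLU6(),  # 3x3 depthwise
            nn.Conv2d(hidden, channels, kernel_size=1), nn.BatchNorm2d(channels),                        # 1x1 linear bottleneck (no ReLU)
        ])

    def construct(self, x):
        return x + self.block(x)  # residual add: valid because stride is 1 and input/output channels match

Feeding a tensor of shape (1, 32, 56, 56) through this cell returns a tensor of the same shape.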

3、实验环境

支持win_x86和Linux系统,CPU/GPU/Ascend均可

参考《MindSpore环境搭建实验手册》

%%capture captured_output
# mindspore==2.2.14 is pre-installed in the experiment environment; to switch versions, change the version number below
!pip uninstall mindspore -y
!pip install -i https://pypi.mirrors.ustc.edu.cn/simple mindspore==2.2.14
# check the current mindspore version
!pip show mindspore

Output:

Name: mindspore
Version: 2.2.14
Summary: MindSpore is a new open source deep learning training/inference framework that could be used for mobile, edge and cloud scenarios.
Home-page: https://www.mindspore.cn
Author: The MindSpore Authors
Author-email: contact@mindspore.cn
License: Apache 2.0
Location: /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages
Requires: asttokens, astunparse, numpy, packaging, pillow, protobuf, psutil, scipy
Required-by: mindnlp

4. Data Processing

4.1 Dataset Format

In the ImageFolder format, the images of each class are placed in their own folder.

Dataset structure:

└─ImageFolder
    ├─train
    │    class1Folder
    │    ......
    └─eval
         class1Folder
         ......

4.2 Download the Dataset

from download import download

# download the data_en dataset
url = "https://ascend-professional-construction-dataset.obs.cn-north-4.myhuaweicloud.com:443/MindStudio-pc/data_en.zip" 
path = download(url, "./", kind="zip", replace=True)

Output:

Downloading data from https://ascend-professional-construction-dataset.obs.cn-north-4.myhuaweicloud.com:443/MindStudio-pc/data_en.zip (21.3 MB)file_sizes: 100%|███████████████████████████| 22.4M/22.4M [00:00<00:00, 123MB/s]
Extracting zip file...
Successfully downloaded / unzipped to ./
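After unzipping, an optional sketch like the one below can confirm the layout described in 4.1 (it assumes the archive extracts to ./data_en with train/ and test/ subfolders, as used later by create_dataset):

import os

data_root = "./data_en"  # assumed extraction path
for split in ("train", "test"):
    split_dir = os.path.join(data_root, split)
    classes = sorted(os.listdir(split_dir))
    counts = {c: len(os.listdir(os.path.join(split_dir, c))) for c in classes}
    print(split, "-", len(classes), "classes,", sum(counts.values()), "images")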

4.3 Download the Pre-trained Weights

from download import download

# download the pre-trained weight file
url = "https://ascend-professional-construction-dataset.obs.cn-north-4.myhuaweicloud.com:443/ComputerVision/mobilenetV2-200_1067.zip" 
path = download(url, "./", kind="zip", replace=True)

Output:

Downloading data from https://ascend-professional-construction-dataset.obs.cn-north-4.myhuaweicloud.com:443/ComputerVision/mobilenetV2-200_1067.zip (25.5 MB)file_sizes: 100%|███████████████████████████| 26.7M/26.7M [00:00<00:00, 109MB/s]
Extracting zip file...
Successfully downloaded / unzipped to ./
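Optionally, the downloaded checkpoint can be inspected before it is used: calling load_checkpoint without a network returns a parameter dictionary, so a quick sketch can print a few parameter names and shapes:

from mindspore import load_checkpoint

param_dict = load_checkpoint("./mobilenetV2-200_1067.ckpt")
print(len(param_dict), "parameters")
for name in list(param_dict)[:5]:   # show the first few entries
    print(name, tuple(param_dict[name].shape))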

4.4 Data Loading

Import the required modules:

import math
import numpy as np
import os
import random

from matplotlib import pyplot as plt
from easydict import EasyDict
from PIL import Image
import mindspore.nn as nn
from mindspore import ops as P
from mindspore.ops import add
from mindspore import Tensor
import mindspore.common.dtype as mstype
import mindspore.dataset as de
import mindspore.dataset.vision as C
import mindspore.dataset.transforms as C2
import mindspore as ms
from mindspore import set_context, nn, Tensor, load_checkpoint, save_checkpoint, export
from mindspore.train import Model
from mindspore.train import Callback, LossMonitor, ModelCheckpoint, CheckpointConfig

os.environ['GLOG_v'] = '3'                 # log level: 3(ERROR), 2(WARNING), 1(INFO), 0(DEBUG)
os.environ['GLOG_logtostderr'] = '0'       # 0: write logs to file, 1: write logs to stderr
os.environ['GLOG_log_dir'] = '../../log'   # log directory
os.environ['GLOG_stderrthreshold'] = '2'   # also echo logs at this level or above to stderr: 3(ERROR), 2(WARNING), 1(INFO), 0(DEBUG)
set_context(mode=ms.GRAPH_MODE, device_target="CPU", device_id=0)  # run in graph mode on the CPU device

Configure the parameters used later for training, validation, and inference:

# Labels of the garbage-classification dataset and the dictionaries used for label mapping.
garbage_classes = {
    '干垃圾': ['贝壳', '打火机', '旧镜子', '扫把', '陶瓷碗', '牙刷', '一次性筷子', '脏污衣服'],
    '可回收物': ['报纸', '玻璃制品', '篮球', '塑料瓶', '硬纸板', '玻璃瓶', '金属制品', '帽子', '易拉罐', '纸张'],
    '湿垃圾': ['菜叶', '橙皮', '蛋壳', '香蕉皮'],
    '有害垃圾': ['电池', '药片胶囊', '荧光灯', '油漆桶']
}

class_cn = ['贝壳', '打火机', '旧镜子', '扫把', '陶瓷碗', '牙刷', '一次性筷子', '脏污衣服',
            '报纸', '玻璃制品', '篮球', '塑料瓶', '硬纸板', '玻璃瓶', '金属制品', '帽子', '易拉罐', '纸张',
            '菜叶', '橙皮', '蛋壳', '香蕉皮',
            '电池', '药片胶囊', '荧光灯', '油漆桶']
class_en = ['Seashell', 'Lighter', 'Old Mirror', 'Broom', 'Ceramic Bowl', 'Toothbrush', 'Disposable Chopsticks', 'Dirty Cloth',
            'Newspaper', 'Glassware', 'Basketball', 'Plastic Bottle', 'Cardboard', 'Glass Bottle', 'Metalware', 'Hats', 'Cans', 'Paper',
            'Vegetable Leaf', 'Orange Peel', 'Eggshell', 'Banana Peel',
            'Battery', 'Tablet capsules', 'Fluorescent lamp', 'Paint bucket']

index_en = {'Seashell': 0, 'Lighter': 1, 'Old Mirror': 2, 'Broom': 3, 'Ceramic Bowl': 4, 'Toothbrush': 5,
            'Disposable Chopsticks': 6, 'Dirty Cloth': 7, 'Newspaper': 8, 'Glassware': 9, 'Basketball': 10,
            'Plastic Bottle': 11, 'Cardboard': 12, 'Glass Bottle': 13, 'Metalware': 14, 'Hats': 15, 'Cans': 16,
            'Paper': 17, 'Vegetable Leaf': 18, 'Orange Peel': 19, 'Eggshell': 20, 'Banana Peel': 21,
            'Battery': 22, 'Tablet capsules': 23, 'Fluorescent lamp': 24, 'Paint bucket': 25}

# Training hyperparameters
config = EasyDict({
    "num_classes": 26,
    "image_height": 224,
    "image_width": 224,
    # "data_split": [0.9, 0.1],
    "backbone_out_channels": 1280,
    "batch_size": 16,
    "eval_batch_size": 8,
    "epochs": 10,
    "lr_max": 0.05,
    "momentum": 0.9,
    "weight_decay": 1e-4,
    "save_ckpt_epochs": 1,
    "dataset_path": "./data_en",
    "class_index": index_en,
    "pretrained_ckpt": "./mobilenetV2-200_1067.ckpt"  # mobilenetV2-200_1067.ckpt
})

4.5 Data Preprocessing

Read the garbage-classification dataset

        ImageFolderDataset method

                specify the training and test sets

Preprocessing

        normalization

        change the image channel order (HWC to CHW)

        augment the training set

                RandomCropDecodeResize

                RandomHorizontalFlip

                RandomColorAdjust

                Shuffle

        test set

                Decode

                Resize

                CenterCrop

def create_dataset(dataset_path, config, training=True, buffer_size=1000):
    """create a train or eval dataset

    Args:
        dataset_path(string): the path of dataset.
        config(struct): the config of train and eval in different platform.

    Returns:
        train_dataset, val_dataset
    """
    data_path = os.path.join(dataset_path, 'train' if training else 'test')
    ds = de.ImageFolderDataset(data_path, num_parallel_workers=4, class_indexing=config.class_index)
    resize_height = config.image_height
    resize_width = config.image_width

    normalize_op = C.Normalize(mean=[0.485*255, 0.456*255, 0.406*255], std=[0.229*255, 0.224*255, 0.225*255])
    change_swap_op = C.HWC2CHW()
    type_cast_op = C2.TypeCast(mstype.int32)

    if training:
        crop_decode_resize = C.RandomCropDecodeResize(resize_height, scale=(0.08, 1.0), ratio=(0.75, 1.333))
        horizontal_flip_op = C.RandomHorizontalFlip(prob=0.5)
        color_adjust = C.RandomColorAdjust(brightness=0.4, contrast=0.4, saturation=0.4)

        train_trans = [crop_decode_resize, horizontal_flip_op, color_adjust, normalize_op, change_swap_op]
        train_ds = ds.map(input_columns="image", operations=train_trans, num_parallel_workers=4)
        train_ds = train_ds.map(input_columns="label", operations=type_cast_op, num_parallel_workers=4)

        train_ds = train_ds.shuffle(buffer_size=buffer_size)
        ds = train_ds.batch(config.batch_size, drop_remainder=True)
    else:
        decode_op = C.Decode()
        resize_op = C.Resize((int(resize_width/0.875), int(resize_width/0.875)))
        center_crop = C.CenterCrop(resize_width)

        eval_trans = [decode_op, resize_op, center_crop, normalize_op, change_swap_op]
        eval_ds = ds.map(input_columns="image", operations=eval_trans, num_parallel_workers=4)
        eval_ds = eval_ds.map(input_columns="label", operations=type_cast_op, num_parallel_workers=4)
        ds = eval_ds.batch(config.eval_batch_size, drop_remainder=True)

    return ds

Display some of the processed data:

ds = create_dataset(dataset_path=config.dataset_path, config=config, training=False)
print(ds.get_dataset_size())
data = next(ds.create_dict_iterator(output_numpy=True))
images = data['image']
labels = data['label']

for i in range(1, 5):
    plt.subplot(2, 2, i)
    plt.imshow(np.transpose(images[i], (1, 2, 0)))
    plt.title('label: %s' % class_en[labels[i]])
    plt.xticks([])
plt.show()

Output:

Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Got range [-1.9831933..2.64].
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Got range [-2.117904..2.4285715].
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Got range [-2.0494049..2.273987].
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Got range [-1.9809059..2.64].
32
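The "Clipping input data" warnings appear because the displayed arrays are already normalized. For a cleaner preview, a small sketch like this (reusing the images/labels arrays from the previous cell and the same mean/std as create_dataset) undoes the normalization before calling imshow:

import numpy as np
import matplotlib.pyplot as plt

mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

img = np.transpose(images[0], (1, 2, 0))       # CHW -> HWC, still normalized
img = np.clip(img * std + mean, 0.0, 1.0)      # undo Normalize (pixels were scaled by 255 before normalizing)
plt.imshow(img)
plt.title('label: %s' % class_en[labels[0]])
plt.show()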

5. Building the MobileNetV2 Model

Define the MobileNetV2 network

Inherit from the mindspore.nn.Cell base class

        __init__ method      defines the layers of the network

        construct method     defines the forward computation

        ReLU6                the activation function of the original model

        pooling module       a global average pooling layer

__all__ = ['MobileNetV2', 'MobileNetV2Backbone', 'MobileNetV2Head', 'mobilenet_v2']

def _make_divisible(v, divisor, min_value=None):
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v

class GlobalAvgPooling(nn.Cell):
    """
    Global avg pooling definition.

    Args:

    Returns:
        Tensor, output tensor.

    Examples:
        >>> GlobalAvgPooling()
    """

    def __init__(self):
        super(GlobalAvgPooling, self).__init__()

    def construct(self, x):
        x = P.mean(x, (2, 3))
        return x

class ConvBNReLU(nn.Cell):
    """
    Convolution/Depthwise fused with Batchnorm and ReLU block definition.

    Args:
        in_planes (int): Input channel.
        out_planes (int): Output channel.
        kernel_size (int): Input kernel size.
        stride (int): Stride size for the first convolutional layer. Default: 1.
        groups (int): channel group. Convolution is 1 while Depthwise is input channel. Default: 1.

    Returns:
        Tensor, output tensor.

    Examples:
        >>> ConvBNReLU(16, 256, kernel_size=1, stride=1, groups=1)
    """

    def __init__(self, in_planes, out_planes, kernel_size=3, stride=1, groups=1):
        super(ConvBNReLU, self).__init__()
        padding = (kernel_size - 1) // 2
        in_channels = in_planes
        out_channels = out_planes
        if groups == 1:
            conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, pad_mode='pad', padding=padding)
        else:
            out_channels = in_planes
            conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, pad_mode='pad',
                             padding=padding, group=in_channels)

        layers = [conv, nn.BatchNorm2d(out_planes), nn.ReLU6()]
        self.features = nn.SequentialCell(layers)

    def construct(self, x):
        output = self.features(x)
        return output

class InvertedResidual(nn.Cell):
    """
    Mobilenetv2 residual block definition.

    Args:
        inp (int): Input channel.
        oup (int): Output channel.
        stride (int): Stride size for the first convolutional layer. Default: 1.
        expand_ratio (int): expand ratio of input channel

    Returns:
        Tensor, output tensor.

    Examples:
        >>> ResidualBlock(3, 256, 1, 1)
    """

    def __init__(self, inp, oup, stride, expand_ratio):
        super(InvertedResidual, self).__init__()
        assert stride in [1, 2]

        hidden_dim = int(round(inp * expand_ratio))
        self.use_res_connect = stride == 1 and inp == oup

        layers = []
        if expand_ratio != 1:
            layers.append(ConvBNReLU(inp, hidden_dim, kernel_size=1))
        layers.extend([
            ConvBNReLU(hidden_dim, hidden_dim, stride=stride, groups=hidden_dim),
            nn.Conv2d(hidden_dim, oup, kernel_size=1, stride=1, has_bias=False),
            nn.BatchNorm2d(oup),
        ])
        self.conv = nn.SequentialCell(layers)
        self.cast = P.Cast()

    def construct(self, x):
        identity = x
        x = self.conv(x)
        if self.use_res_connect:
            return P.add(identity, x)
        return x

class MobileNetV2Backbone(nn.Cell):
    """
    MobileNetV2 architecture.

    Args:
        class_num (int): number of classes.
        width_mult (int): Channels multiplier for round to 8/16 and others. Default is 1.
        has_dropout (bool): Is dropout used. Default is false
        inverted_residual_setting (list): Inverted residual settings. Default is None
        round_nearest (list): Channel round to. Default is 8

    Returns:
        Tensor, output tensor.

    Examples:
        >>> MobileNetV2(num_classes=1000)
    """

    def __init__(self, width_mult=1., inverted_residual_setting=None, round_nearest=8,
                 input_channel=32, last_channel=1280):
        super(MobileNetV2Backbone, self).__init__()
        block = InvertedResidual
        # setting of inverted residual blocks
        self.cfgs = inverted_residual_setting
        if inverted_residual_setting is None:
            self.cfgs = [
                # t, c, n, s
                [1, 16, 1, 1],
                [6, 24, 2, 2],
                [6, 32, 3, 2],
                [6, 64, 4, 2],
                [6, 96, 3, 1],
                [6, 160, 3, 2],
                [6, 320, 1, 1],
            ]

        # building first layer
        input_channel = _make_divisible(input_channel * width_mult, round_nearest)
        self.out_channels = _make_divisible(last_channel * max(1.0, width_mult), round_nearest)
        features = [ConvBNReLU(3, input_channel, stride=2)]
        # building inverted residual blocks
        for t, c, n, s in self.cfgs:
            output_channel = _make_divisible(c * width_mult, round_nearest)
            for i in range(n):
                stride = s if i == 0 else 1
                features.append(block(input_channel, output_channel, stride, expand_ratio=t))
                input_channel = output_channel
        features.append(ConvBNReLU(input_channel, self.out_channels, kernel_size=1))
        self.features = nn.SequentialCell(features)
        self._initialize_weights()

    def construct(self, x):
        x = self.features(x)
        return x

    def _initialize_weights(self):
        """
        Initialize weights.

        Args:

        Returns:
            None.

        Examples:
            >>> _initialize_weights()
        """
        self.init_parameters_data()
        for _, m in self.cells_and_names():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.set_data(Tensor(np.random.normal(0, np.sqrt(2. / n),
                                                          m.weight.data.shape).astype("float32")))
                if m.bias is not None:
                    m.bias.set_data(Tensor(np.zeros(m.bias.data.shape, dtype="float32")))
            elif isinstance(m, nn.BatchNorm2d):
                m.gamma.set_data(Tensor(np.ones(m.gamma.data.shape, dtype="float32")))
                m.beta.set_data(Tensor(np.zeros(m.beta.data.shape, dtype="float32")))

    @property
    def get_features(self):
        return self.features

class MobileNetV2Head(nn.Cell):
    """
    MobileNetV2 architecture.

    Args:
        class_num (int): Number of classes. Default is 1000.
        has_dropout (bool): Is dropout used. Default is false

    Returns:
        Tensor, output tensor.

    Examples:
        >>> MobileNetV2(num_classes=1000)
    """

    def __init__(self, input_channel=1280, num_classes=1000, has_dropout=False, activation="None"):
        super(MobileNetV2Head, self).__init__()
        # mobilenet head
        head = ([GlobalAvgPooling(), nn.Dense(input_channel, num_classes, has_bias=True)] if not has_dropout else
                [GlobalAvgPooling(), nn.Dropout(0.2), nn.Dense(input_channel, num_classes, has_bias=True)])
        self.head = nn.SequentialCell(head)
        self.need_activation = True
        if activation == "Sigmoid":
            self.activation = nn.Sigmoid()
        elif activation == "Softmax":
            self.activation = nn.Softmax()
        else:
            self.need_activation = False
        self._initialize_weights()

    def construct(self, x):
        x = self.head(x)
        if self.need_activation:
            x = self.activation(x)
        return x

    def _initialize_weights(self):
        """
        Initialize weights.

        Args:

        Returns:
            None.

        Examples:
            >>> _initialize_weights()
        """
        self.init_parameters_data()
        for _, m in self.cells_and_names():
            if isinstance(m, nn.Dense):
                m.weight.set_data(Tensor(np.random.normal(0, 0.01, m.weight.data.shape).astype("float32")))
                if m.bias is not None:
                    m.bias.set_data(Tensor(np.zeros(m.bias.data.shape, dtype="float32")))

    @property
    def get_head(self):
        return self.head

class MobileNetV2(nn.Cell):
    """
    MobileNetV2 architecture.

    Args:
        class_num (int): number of classes.
        width_mult (int): Channels multiplier for round to 8/16 and others. Default is 1.
        has_dropout (bool): Is dropout used. Default is false
        inverted_residual_setting (list): Inverted residual settings. Default is None
        round_nearest (list): Channel round to. Default is 8

    Returns:
        Tensor, output tensor.

    Examples:
        >>> MobileNetV2(backbone, head)
    """

    def __init__(self, num_classes=1000, width_mult=1., has_dropout=False, inverted_residual_setting=None,
                 round_nearest=8, input_channel=32, last_channel=1280):
        super(MobileNetV2, self).__init__()
        self.backbone = MobileNetV2Backbone(width_mult=width_mult,
                                            inverted_residual_setting=inverted_residual_setting,
                                            round_nearest=round_nearest, input_channel=input_channel,
                                            last_channel=last_channel).get_features
        self.head = MobileNetV2Head(input_channel=self.backbone.out_channel,
                                    num_classes=num_classes, has_dropout=has_dropout).get_head

    def construct(self, x):
        x = self.backbone(x)
        x = self.head(x)
        return x

class MobileNetV2Combine(nn.Cell):
    """
    MobileNetV2Combine architecture.

    Args:
        backbone (Cell): the features extract layers.
        head (Cell): the fully connected layers.

    Returns:
        Tensor, output tensor.

    Examples:
        >>> MobileNetV2(num_classes=1000)
    """

    def __init__(self, backbone, head):
        super(MobileNetV2Combine, self).__init__(auto_prefix=False)
        self.backbone = backbone
        self.head = head

    def construct(self, x):
        x = self.backbone(x)
        x = self.head(x)
        return x

def mobilenet_v2(backbone, head):
    return MobileNetV2Combine(backbone, head)
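A quick way to check that the pieces fit together is to run a dummy batch through the backbone and head; this is only a sanity-check sketch, with the input shape matching the 224x224 preprocessing above and num_classes matching config.num_classes:

import numpy as np
from mindspore import Tensor

backbone = MobileNetV2Backbone()
head = MobileNetV2Head(input_channel=backbone.out_channels, num_classes=26)
net = mobilenet_v2(backbone, head)

dummy = Tensor(np.random.rand(1, 3, 224, 224).astype(np.float32))
print(backbone(dummy).shape)   # expected (1, 1280, 7, 7): feature map before global pooling
print(net(dummy).shape)        # expected (1, 26): one logit per garbage class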

6. Training and Testing the MobileNetV2 Model

6.1 Training Strategy

The simplest choice is a static learning rate, e.g. 0.01.

As training proceeds and the model gradually converges, the update magnitude of the weights should be reduced to lessen training oscillation.

Common dynamic learning-rate decay strategies:

        polynomial decay / square decay;

        cosine decay (used in this example);

        exponential decay;

        stage decay.

def cosine_decay(total_steps, lr_init=0.0, lr_end=0.0, lr_max=0.1, warmup_steps=0):
    """
    Applies cosine decay to generate learning rate array.

    Args:
        total_steps(int): all steps in training.
        lr_init(float): init learning rate.
        lr_end(float): end learning rate
        lr_max(float): max learning rate.
        warmup_steps(int): all steps in warmup epochs.

    Returns:
        list, learning rate array.
    """
    lr_init, lr_end, lr_max = float(lr_init), float(lr_end), float(lr_max)
    decay_steps = total_steps - warmup_steps
    lr_all_steps = []
    inc_per_step = (lr_max - lr_init) / warmup_steps if warmup_steps else 0
    for i in range(total_steps):
        if i < warmup_steps:
            lr = lr_init + inc_per_step * (i + 1)
        else:
            cosine_decay = 0.5 * (1 + math.cos(math.pi * (i - warmup_steps) / decay_steps))
            lr = (lr_max - lr_end) * cosine_decay + lr_end
        lr_all_steps.append(lr)

    return lr_all_steps

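To see the kind of schedule this generates, a short sketch (the step counts below are made up purely for illustration):

lr_steps = cosine_decay(total_steps=1000, lr_max=0.05, warmup_steps=100)
print(len(lr_steps))           # 1000 values, one learning rate per training step
print(round(lr_steps[0], 5))   # small value at the start of warmup
print(round(lr_steps[99], 5))  # reaches lr_max (0.05) at the end of warmup
print(round(lr_steps[-1], 8))  # decays towards lr_end (0.0) by the last step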
6.2 Saving the Trained Model

Add checkpoints during training to save the model parameters.

Scenarios for saving a model:

        Inference after training

                used for subsequent inference or prediction;

                save the parameters with the highest validation accuracy for prediction.

        Resumed training

                for long training runs, continue from a breakpoint.

        Fine-tuning

                fine-tune the model for other, similar tasks.

The example below loads a MobileNetV2 pre-trained on ImageNet and fine-tunes it

        only the newly replaced fully connected (FC) layer is trained

        checkpoints are saved during training (see the callback sketch after the next code block)

def switch_precision(net, data_type):
    if ms.get_context('device_target') == "Ascend":
        net.to_float(data_type)
        for _, cell in net.cells_and_names():
            if isinstance(cell, nn.Dense):
                cell.to_float(ms.float32)
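Checkpoint saving can also be handled by MindSpore's built-in callbacks when training with the high-level Model API. The following is only a sketch: it assumes the network, loss, opt, train_dataset and step_size created in Section 6.3, and the prefix, directory and keep_checkpoint_max values are chosen arbitrarily.

# Sketch: save a checkpoint at the end of every epoch via callbacks (high-level Model API).
config_ck = CheckpointConfig(save_checkpoint_steps=step_size * config.save_ckpt_epochs,
                             keep_checkpoint_max=5)
ckpt_cb = ModelCheckpoint(prefix="mobilenetv2_finetune", directory="./ckpt", config=config_ck)
model = Model(network, loss_fn=loss, optimizer=opt, metrics={"acc"})
model.train(config.epochs, train_dataset, callbacks=[ckpt_cb, LossMonitor(per_print_times=10)],
            dataset_sink_mode=False)  # sink mode disabled for the CPU environment used here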

6.3 Model Training and Testing

Before training

        define the training function

        read the data and instantiate the model

        define the optimizer and the loss function

Concepts:

Loss function (objective function)

        measures how far the predictions are from the true values.

        Deep learning improves the model by iteratively reducing the loss value.

Optimizer

        minimizes the loss function and thereby improves the model during training.

Gradient of the loss with respect to the weights

        tells the optimizer in which direction to update.

Parameter updates

        the MobileNetV2Backbone parameters are frozen and not updated during training;

        only the MobileNetV2Head parameters are updated.

MindSpore loss functions

        SoftmaxCrossEntropyWithLogits (used in this example; a small sketch follows this list)

        L1Loss

        MSELoss
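A tiny sketch of how the sparse cross-entropy loss used below behaves (the numbers are made up for illustration): it takes raw logits of shape (batch, num_classes) and integer class labels, and returns the mean loss over the batch.

import numpy as np
import mindspore.nn as nn
from mindspore import Tensor

loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
logits = Tensor(np.array([[2.0, 0.5, 0.1], [0.2, 0.1, 3.0]], np.float32))  # batch of 2, 3 classes
labels = Tensor(np.array([0, 2], np.int32))                                # integer class indices
print(loss_fn(logits, labels))   # scalar mean loss over the batch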

During training and testing, the printed loss values show an overall decreasing trend while the accuracy gradually improves.

At the end of each epoch the test accuracy is computed, tracking the improving predictive ability of the MobileNetV2 model.

from mindspore.amp import FixedLossScaleManager
import time

LOSS_SCALE = 1024

train_dataset = create_dataset(dataset_path=config.dataset_path, config=config)
eval_dataset = create_dataset(dataset_path=config.dataset_path, config=config)
step_size = train_dataset.get_dataset_size()

backbone = MobileNetV2Backbone()  # last_channel=config.backbone_out_channels
# Freeze parameters of backbone. You can comment these two lines.
for param in backbone.get_parameters():
    param.requires_grad = False
# load parameters from pretrained model
load_checkpoint(config.pretrained_ckpt, backbone)

head = MobileNetV2Head(input_channel=backbone.out_channels, num_classes=config.num_classes)
network = mobilenet_v2(backbone, head)

# define loss, optimizer, and model
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
loss_scale = FixedLossScaleManager(LOSS_SCALE, drop_overflow_update=False)
lrs = cosine_decay(config.epochs * step_size, lr_max=config.lr_max)
opt = nn.Momentum(network.trainable_params(), lrs, config.momentum, config.weight_decay, loss_scale=LOSS_SCALE)

# Define the train_loop function used for training.
def train_loop(model, dataset, loss_fn, optimizer):
    # Define the forward computation
    def forward_fn(data, label):
        logits = model(data)
        loss = loss_fn(logits, label)
        return loss

    # Define the gradient function with mindspore.value_and_grad, which returns grad_fn outputting both the loss and the gradients.
    # Since we differentiate with respect to the model parameters, grad_position is None and the trainable parameters are passed in.
    grad_fn = ms.value_and_grad(forward_fn, None, optimizer.parameters)

    # Define the one-step training function
    def train_step(data, label):
        loss, grads = grad_fn(data, label)
        optimizer(grads)
        return loss

    size = dataset.get_dataset_size()
    model.set_train()
    for batch, (data, label) in enumerate(dataset.create_tuple_iterator()):
        loss = train_step(data, label)

        if batch % 10 == 0:
            loss, current = loss.asnumpy(), batch
            print(f"loss: {loss:>7f}  [{current:>3d}/{size:>3d}]")


# Define the test_loop function used for evaluation.
def test_loop(model, dataset, loss_fn):
    num_batches = dataset.get_dataset_size()
    model.set_train(False)
    total, test_loss, correct = 0, 0, 0
    for data, label in dataset.create_tuple_iterator():
        pred = model(data)
        total += len(data)
        test_loss += loss_fn(pred, label).asnumpy()
        correct += (pred.argmax(1) == label).asnumpy().sum()
    test_loss /= num_batches
    correct /= total
    print(f"Test: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")

print("============== Starting Training ==============")
# 由于时间问题,训练过程只进行了2个epoch ,可以根据需求调整。
epoch_begin_time = time.time()
epochs = 2
for t in range(epochs):begin_time = time.time()print(f"Epoch {t+1}\n-------------------------------")train_loop(network, train_dataset, loss, opt)ms.save_checkpoint(network, "save_mobilenetV2_model.ckpt")end_time = time.time()times = end_time - begin_timeprint(f"per epoch time: {times}s")test_loop(network, eval_dataset, loss)
epoch_end_time = time.time()
times = epoch_end_time - epoch_begin_time
print(f"total time:  {times}s")
print("============== Training Success ==============")

Output:

============== Starting Training ==============
Epoch 1
-------------------------------
[ERROR] CORE(185595,ffff82d60930,python):2024-06-07-01:11:59.680.056 [mindspore/core/utils/file_utils.cc:253] GetRealPath] Get realpath failed, path[/tmp/ipykernel_185595/1438112663.py]
[ERROR] CORE(185595,ffff82d60930,python):2024-06-07-01:11:59.680.141 [mindspore/core/utils/file_utils.cc:253] GetRealPath] Get realpath failed, path[/tmp/ipykernel_185595/1438112663.py]
[ERROR] CORE(185595,ffff82d60930,python):2024-06-07-01:11:59.680.195 [mindspore/core/utils/file_utils.cc:253] GetRealPath] Get realpath failed, path[/tmp/ipykernel_185595/1438112663.py]
loss: 3.241972  [  0/162]
loss: 3.257221  [ 10/162]
loss: 3.279036  [ 20/162]
loss: 3.319016  [ 30/162]
loss: 3.203032  [ 40/162]
loss: 3.281466  [ 50/162]
loss: 3.279725  [ 60/162]
loss: 3.229515  [ 70/162]
loss: 3.267290  [ 80/162]
loss: 3.188127  [ 90/162]
loss: 3.212995  [100/162]
loss: 3.241497  [110/162]
loss: 3.209007  [120/162]
loss: 3.172501  [130/162]
loss: 3.197280  [140/162]
loss: 3.272911  [150/162]
loss: 3.216317  [160/162]
per epoch time: 76.58693552017212s
[ERROR] CORE(185595,ffff82d60930,python):2024-06-07-01:13:15.037.060 [mindspore/core/utils/file_utils.cc:253] GetRealPath] Get realpath failed, path[/tmp/ipykernel_185595/3136751602.py]
[ERROR] CORE(185595,ffff82d60930,python):2024-06-07-01:13:15.037.167 [mindspore/core/utils/file_utils.cc:253] GetRealPath] Get realpath failed, path[/tmp/ipykernel_185595/3136751602.py]
Test: 
 Accuracy: 8.0%, Avg loss: 3.194478 

Epoch 2
-------------------------------
[ERROR] CORE(185595,ffff82d60930,python):2024-06-07-01:14:30.208.773 [mindspore/core/utils/file_utils.cc:253] GetRealPath] Get realpath failed, path[/tmp/ipykernel_185595/1438112663.py]
[ERROR] CORE(185595,ffff82d60930,python):2024-06-07-01:14:30.208.853 [mindspore/core/utils/file_utils.cc:253] GetRealPath] Get realpath failed, path[/tmp/ipykernel_185595/1438112663.py]
[ERROR] CORE(185595,ffff82d60930,python):2024-06-07-01:14:30.208.907 [mindspore/core/utils/file_utils.cc:253] GetRealPath] Get realpath failed, path[/tmp/ipykernel_185595/1438112663.py]
loss: 3.188095  [  0/162]
loss: 3.165776  [ 10/162]
loss: 3.191228  [ 20/162]
loss: 3.138081  [ 30/162]
loss: 3.074074  [ 40/162]
loss: 3.163727  [ 50/162]
loss: 3.165467  [ 60/162]
loss: 3.173247  [ 70/162]
loss: 3.159379  [ 80/162]
loss: 3.143544  [ 90/162]
loss: 3.200886  [100/162]
loss: 3.201387  [110/162]
loss: 3.165534  [120/162]
loss: 3.171612  [130/162]
loss: 3.203765  [140/162]
loss: 3.166876  [150/162]
loss: 3.125233  [160/162]
per epoch time: 76.00073027610779s
Test: 
 Accuracy: 17.4%, Avg loss: 3.111141 

total time:  295.01550674438477s
============== Training Success ==============

7. Model Inference

Load the model checkpoint

        load the checkpoint data with the load_checkpoint API

        pass the loaded parameters into the original network

CKPT="save_mobilenetV2_model.ckpt"
[13]:def image_process(image):"""Precess one image per time.Args:image: shape (H, W, C)"""mean=[0.485*255, 0.456*255, 0.406*255]std=[0.229*255, 0.224*255, 0.225*255]image = (np.array(image) - mean) / stdimage = image.transpose((2,0,1))img_tensor = Tensor(np.array([image], np.float32))return img_tensor
​
def infer_one(network, image_path):image = Image.open(image_path).resize((config.image_height, config.image_width))logits = network(image_process(image))pred = np.argmax(logits.asnumpy(), axis=1)[0]print(image_path, class_en[pred])
​
def infer():backbone = MobileNetV2Backbone(last_channel=config.backbone_out_channels)head = MobileNetV2Head(input_channel=backbone.out_channels, num_classes=config.num_classes)network = mobilenet_v2(backbone, head)load_checkpoint(CKPT, network)for i in range(91, 100):infer_one(network, f'data_en/test/Cardboard/000{i}.jpg')
infer()

Output:

[ERROR] CORE(185595,ffff82d60930,python):2024-06-07-01:16:53.811.718 [mindspore/core/utils/file_utils.cc:253] GetRealPath] Get realpath failed, path[/tmp/ipykernel_185595/3136751602.py]
[ERROR] CORE(185595,ffff82d60930,python):2024-06-07-01:16:53.811.810 [mindspore/core/utils/file_utils.cc:253] GetRealPath] Get realpath failed, path[/tmp/ipykernel_185595/3136751602.py]
data_en/test/Cardboard/00091.jpg Lighter
data_en/test/Cardboard/00092.jpg Broom
data_en/test/Cardboard/00093.jpg Basketball
data_en/test/Cardboard/00094.jpg Basketball
data_en/test/Cardboard/00095.jpg Basketball
data_en/test/Cardboard/00096.jpg Glassware
data_en/test/Cardboard/00097.jpg Lighter
data_en/test/Cardboard/00098.jpg Basketball
data_en/test/Cardboard/00099.jpg Basketball
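The class_cn and garbage_classes dictionaries defined in Section 4.4 are not used above; as an optional sketch, a predicted English label can be mapped back to its Chinese name and to one of the four garbage categories:

def describe_prediction(pred_idx):
    """Map a predicted class index to its English name, Chinese name and garbage category."""
    en_name = class_en[pred_idx]
    cn_name = class_cn[pred_idx]
    category = next(cat for cat, items in garbage_classes.items() if cn_name in items)
    return en_name, cn_name, category

print(describe_prediction(12))   # ('Cardboard', '硬纸板', '可回收物')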

8. Exporting AIR/GEIR/ONNX Model Files

Export an AIR model file for subsequent model conversion (with the atc command) and inference on the Atlas 200 DK.

AIR/GEIR export currently requires a MindSpore + Ascend environment; in the CPU environment used here, only the ONNX export below is run.

backbone = MobileNetV2Backbone(last_channel=config.backbone_out_channels)
head = MobileNetV2Head(input_channel=backbone.out_channels, num_classes=config.num_classes)
network = mobilenet_v2(backbone, head)
load_checkpoint(CKPT, network)
​
input = np.random.uniform(0.0, 1.0, size=[1, 3, 224, 224]).astype(np.float32)
# export(network, Tensor(input), file_name='mobilenetv2.air', file_format='AIR')
# export(network, Tensor(input), file_name='mobilenetv2.pb', file_format='GEIR')
export(network, Tensor(input), file_name='mobilenetv2.onnx', file_format='ONNX')
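If the ONNX export succeeds, a quick sketch like the following can verify that the exported model runs and produces 26 logits. This assumes the onnxruntime package is installed (it is not part of this experiment's environment) and that the exported file is named mobilenetv2.onnx as above.

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("mobilenetv2.onnx")
input_name = sess.get_inputs()[0].name
dummy = np.random.uniform(0.0, 1.0, size=[1, 3, 224, 224]).astype(np.float32)
outputs = sess.run(None, {input_name: dummy})
print(outputs[0].shape)   # expected (1, 26)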
