Garbage Classification Based on MobileNetV2
This document describes how to develop the garbage classification code. Local image data is read as input, the garbage objects in the images are classified, and the result images are saved to files.
1. Objectives
- Become familiar with writing a garbage classification application (in Python);
- Learn the basics of using the Linux operating system;
- Master the basic use of the atc command for model conversion.
2. Introduction to the MobileNetV2 Model
MobileNet, proposed by a Google team in 2017, is a lightweight CNN aimed at mobile, embedded, and IoT devices. Compared with traditional convolutional neural networks, MobileNet uses depthwise separable convolutions to cut model parameters and computation substantially at the cost of a small drop in accuracy, and it introduces a width multiplier α and a resolution multiplier ρ so the model can be adapted to different application scenarios.
Because the ReLU activation in MobileNet loses a large amount of information when processing low-dimensional features, MobileNetV2 designs the network around the inverted residual block and linear bottlenecks, which improves accuracy and makes the optimized model smaller.
The inverted residual block first uses a 1x1 convolution to expand the channels, then a 3x3 depthwise convolution, and finally a 1x1 convolution to reduce the channels. This is the reverse of the residual block, which first reduces the channels with a 1x1 convolution, applies a 3x3 convolution, and then expands the channels with a 1x1 convolution.
- Note: see the MobileNetV2 paper for details.
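To make the savings from depthwise separable convolution concrete, here is a back-of-envelope sketch comparing the parameter count of a standard 3x3 convolution with its depthwise separable counterpart; the channel sizes are illustrative only, not taken from the paper:

# Rough parameter counts, ignoring bias terms. Sizes are illustrative.
k, c_in, c_out = 3, 144, 144

standard = k * k * c_in * c_out              # one dense 3x3 convolution
depthwise = k * k * c_in                     # a 3x3 filter applied per channel
pointwise = 1 * 1 * c_in * c_out             # a 1x1 convolution mixes channels
separable = depthwise + pointwise

print(f"standard: {standard}, separable: {separable}, "
      f"ratio: {separable / standard:.3f}")  # roughly 1/c_out + 1/k^2

For a 3x3 kernel this works out to roughly a 9x reduction in parameters per layer, which is where most of MobileNet's savings come from.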
3. Experiment Environment
This case supports win_x86 and Linux; it runs on CPU, GPU, and Ascend.
Before starting, make sure MindSpore is installed correctly. For environment setup on the different platforms, refer to the MindSpore Environment Setup Lab Manual.
4. Data Processing
4.1 Data Preparation
The MobileNetV2 code manages the dataset in ImageFolder format by default, with each class's images collected in its own folder. The dataset structure is as follows (a quick layout check appears after the download step below):
└─ImageFolder
    ├─train
    │   class1Folder
    │   ......
    └─eval
        class1Folder
        ......
%%capture captured_output
# The environment has mindspore==2.2.14 preinstalled; to switch versions, change the version number below
!pip uninstall mindspore -y
!pip install -i https://pypi.mirrors.ustc.edu.cn/simple mindspore==2.2.14
# Check the current mindspore version
!pip show mindspore
Name: mindspore
Version: 2.2.14
Summary: MindSpore is a new open source deep learning training/inference framework that could be used for mobile, edge and cloud scenarios.
Home-page: https://www.mindspore.cn
Author: The MindSpore Authors
Author-email: contact@mindspore.cn
License: Apache 2.0
Location: /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages
Requires: asttokens, astunparse, numpy, packaging, pillow, protobuf, psutil, scipy
Required-by: mindnlp
# Download the data_en dataset
from download import download
url = "https://ascend-professional-construction-dataset.obs.cn-north-4.myhuaweicloud.com:443/MindStudio-pc/data_en.zip"
path = download(url, "./", kind="zip", replace=True)
Downloading data from https://ascend-professional-construction-dataset.obs.cn-north-4.myhuaweicloud.com:443/MindStudio-pc/data_en.zip (21.3 MB)

file_sizes: 100%|███████████████████████████| 22.4M/22.4M [00:00<00:00, 123MB/s]
Extracting zip file...
Successfully downloaded / unzipped to ./
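As a quick sanity check that the extracted data_en folder matches the ImageFolder layout described in 4.1, the classes and image counts can be listed with standard library calls. This is a minimal sketch; it assumes the train/test subfolders that this notebook uses later:

import os

data_root = "./data_en"
for split in ("train", "test"):
    split_dir = os.path.join(data_root, split)
    if not os.path.isdir(split_dir):
        continue
    classes = sorted(os.listdir(split_dir))
    total = sum(len(os.listdir(os.path.join(split_dir, c))) for c in classes)
    print(f"{split}: {len(classes)} classes, {total} images")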
# Download the pretrained weight file
from download import download
url = "https://ascend-professional-construction-dataset.obs.cn-north-4.myhuaweicloud.com:443/ComputerVision/mobilenetV2-200_1067.zip"
path = download(url, "./", kind="zip", replace=True)
Downloading data from https://ascend-professional-construction-dataset.obs.cn-north-4.myhuaweicloud.com:443/ComputerVision/mobilenetV2-200_1067.zip (25.5 MB)

file_sizes: 100%|███████████████████████████| 26.7M/26.7M [00:00<00:00, 109MB/s]
Extracting zip file...
Successfully downloaded / unzipped to ./
4.2 Data Loading
Import the required modules:
import math
import os
import random

import numpy as np
from matplotlib import pyplot as plt
from easydict import EasyDict
from PIL import Image

import mindspore as ms
import mindspore.nn as nn
from mindspore import ops as P
from mindspore.ops import add
from mindspore import Tensor
import mindspore.common.dtype as mstype
import mindspore.dataset as de
import mindspore.dataset.vision as C
import mindspore.dataset.transforms as C2
from mindspore import set_context, load_checkpoint, save_checkpoint, export
from mindspore.train import Model
from mindspore.train import Callback, LossMonitor, ModelCheckpoint, CheckpointConfig

os.environ['GLOG_v'] = '3'                # Log level: 3(ERROR), 2(WARNING), 1(INFO), 0(DEBUG)
os.environ['GLOG_logtostderr'] = '0'      # 0: log to file, 1: log to screen
os.environ['GLOG_log_dir'] = '../../log'  # log directory
os.environ['GLOG_stderrthreshold'] = '2'  # mirror to screen at level: 3(ERROR), 2(WARNING), 1(INFO), 0(DEBUG)

set_context(mode=ms.GRAPH_MODE, device_target="CPU", device_id=0)  # run in graph mode on CPU
Configure the parameters used later for training, validation, and inference:
# Garbage classification dataset labels and the dictionaries used for label mapping.
garbage_classes = {
    '干垃圾': ['贝壳', '打火机', '旧镜子', '扫把', '陶瓷碗', '牙刷', '一次性筷子', '脏污衣服'],
    '可回收物': ['报纸', '玻璃制品', '篮球', '塑料瓶', '硬纸板', '玻璃瓶', '金属制品', '帽子', '易拉罐', '纸张'],
    '湿垃圾': ['菜叶', '橙皮', '蛋壳', '香蕉皮'],
    '有害垃圾': ['电池', '药片胶囊', '荧光灯', '油漆桶']
}

class_cn = ['贝壳', '打火机', '旧镜子', '扫把', '陶瓷碗', '牙刷', '一次性筷子', '脏污衣服',
            '报纸', '玻璃制品', '篮球', '塑料瓶', '硬纸板', '玻璃瓶', '金属制品', '帽子', '易拉罐', '纸张',
            '菜叶', '橙皮', '蛋壳', '香蕉皮',
            '电池', '药片胶囊', '荧光灯', '油漆桶']
class_en = ['Seashell', 'Lighter', 'Old Mirror', 'Broom', 'Ceramic Bowl', 'Toothbrush', 'Disposable Chopsticks', 'Dirty Cloth',
            'Newspaper', 'Glassware', 'Basketball', 'Plastic Bottle', 'Cardboard', 'Glass Bottle', 'Metalware', 'Hats', 'Cans', 'Paper',
            'Vegetable Leaf', 'Orange Peel', 'Eggshell', 'Banana Peel',
            'Battery', 'Tablet capsules', 'Fluorescent lamp', 'Paint bucket']

index_en = {'Seashell': 0, 'Lighter': 1, 'Old Mirror': 2, 'Broom': 3, 'Ceramic Bowl': 4, 'Toothbrush': 5,
            'Disposable Chopsticks': 6, 'Dirty Cloth': 7, 'Newspaper': 8, 'Glassware': 9, 'Basketball': 10,
            'Plastic Bottle': 11, 'Cardboard': 12, 'Glass Bottle': 13, 'Metalware': 14, 'Hats': 15, 'Cans': 16,
            'Paper': 17, 'Vegetable Leaf': 18, 'Orange Peel': 19, 'Eggshell': 20, 'Banana Peel': 21,
            'Battery': 22, 'Tablet capsules': 23, 'Fluorescent lamp': 24, 'Paint bucket': 25}

# Training hyperparameters
config = EasyDict({
    "num_classes": 26,
    "image_height": 224,
    "image_width": 224,
    # "data_split": [0.9, 0.1],
    "backbone_out_channels": 1280,
    "batch_size": 16,
    "eval_batch_size": 8,
    "epochs": 10,
    "lr_max": 0.05,
    "momentum": 0.9,
    "weight_decay": 1e-4,
    "save_ckpt_epochs": 1,
    "dataset_path": "./data_en",
    "class_index": index_en,
    "pretrained_ckpt": "./mobilenetV2-200_1067.ckpt"  # mobilenetV2-200_1067.ckpt
})
Data preprocessing
Read the garbage classification dataset with the ImageFolderDataset method and process the dataset as a whole.
When reading, the training and test sets are specified separately. The whole dataset is normalized and the image channel layout is converted (HWC to CHW). The training data then goes through RandomCropDecodeResize, RandomHorizontalFlip, RandomColorAdjust, and shuffle to enrich the training data; the test data goes through Decode, Resize, and CenterCrop. Finally the processed dataset is returned.
def create_dataset(dataset_path, config, training=True, buffer_size=1000):
    """Create a train or eval dataset.

    Args:
        dataset_path (string): the path of the dataset.
        config (struct): the config of train and eval on different platforms.

    Returns:
        train_dataset or val_dataset
    """
    data_path = os.path.join(dataset_path, 'train' if training else 'test')
    ds = de.ImageFolderDataset(data_path, num_parallel_workers=4, class_indexing=config.class_index)
    resize_height = config.image_height
    resize_width = config.image_width

    normalize_op = C.Normalize(mean=[0.485*255, 0.456*255, 0.406*255],
                               std=[0.229*255, 0.224*255, 0.225*255])
    change_swap_op = C.HWC2CHW()
    type_cast_op = C2.TypeCast(mstype.int32)

    if training:
        crop_decode_resize = C.RandomCropDecodeResize(resize_height, scale=(0.08, 1.0), ratio=(0.75, 1.333))
        horizontal_flip_op = C.RandomHorizontalFlip(prob=0.5)
        color_adjust = C.RandomColorAdjust(brightness=0.4, contrast=0.4, saturation=0.4)
        train_trans = [crop_decode_resize, horizontal_flip_op, color_adjust, normalize_op, change_swap_op]
        train_ds = ds.map(input_columns="image", operations=train_trans, num_parallel_workers=4)
        train_ds = train_ds.map(input_columns="label", operations=type_cast_op, num_parallel_workers=4)
        train_ds = train_ds.shuffle(buffer_size=buffer_size)
        ds = train_ds.batch(config.batch_size, drop_remainder=True)
    else:
        decode_op = C.Decode()
        resize_op = C.Resize((int(resize_width/0.875), int(resize_width/0.875)))
        center_crop = C.CenterCrop(resize_width)
        eval_trans = [decode_op, resize_op, center_crop, normalize_op, change_swap_op]
        eval_ds = ds.map(input_columns="image", operations=eval_trans, num_parallel_workers=4)
        eval_ds = eval_ds.map(input_columns="label", operations=type_cast_op, num_parallel_workers=4)
        ds = eval_ds.batch(config.eval_batch_size, drop_remainder=True)
    return ds
Display some of the processed data:
ds = create_dataset(dataset_path=config.dataset_path, config=config, training=False)
print(ds.get_dataset_size())
data = next(ds.create_dict_iterator(output_numpy=True))
images = data['image']
labels = data['label']

for i in range(1, 5):
    plt.subplot(2, 2, i)
    plt.imshow(np.transpose(images[i], (1, 2, 0)))
    plt.title('label: %s' % class_en[labels[i]])
    plt.xticks([])
plt.show()
32
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Got range [-1.9481792..2.4134204].
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Got range [-1.7240347..2.64].
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Got range [-0.9852941..2.3410366].
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). Got range [-2.1007793..2.64].
5. Building the MobileNetV2 Model
When defining the modules of the MobileNetV2 network with MindSpore, each module inherits from mindspore.nn.Cell, the base class of all neural network layers (Conv2d and the rest).
The layers are declared in advance in the __init__ method, and the forward pass is defined in the construct method. The original model uses the ReLU6 activation, and the pooling module is a global average pooling layer.
__all__ = ['MobileNetV2', 'MobileNetV2Backbone', 'MobileNetV2Head', 'mobilenet_v2']

def _make_divisible(v, divisor, min_value=None):
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v

class GlobalAvgPooling(nn.Cell):
    """Global avg pooling definition.

    Returns:
        Tensor, output tensor.

    Examples:
        >>> GlobalAvgPooling()
    """
    def __init__(self):
        super(GlobalAvgPooling, self).__init__()

    def construct(self, x):
        x = P.mean(x, (2, 3))
        return x

class ConvBNReLU(nn.Cell):
    """Convolution/Depthwise fused with Batchnorm and ReLU block definition.

    Args:
        in_planes (int): Input channel.
        out_planes (int): Output channel.
        kernel_size (int): Input kernel size.
        stride (int): Stride size for the first convolutional layer. Default: 1.
        groups (int): Channel group. 1 for convolution, input channel for depthwise. Default: 1.

    Returns:
        Tensor, output tensor.

    Examples:
        >>> ConvBNReLU(16, 256, kernel_size=1, stride=1, groups=1)
    """
    def __init__(self, in_planes, out_planes, kernel_size=3, stride=1, groups=1):
        super(ConvBNReLU, self).__init__()
        padding = (kernel_size - 1) // 2
        in_channels = in_planes
        out_channels = out_planes
        if groups == 1:
            conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, pad_mode='pad', padding=padding)
        else:
            out_channels = in_planes
            conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, pad_mode='pad',
                             padding=padding, group=in_channels)
        layers = [conv, nn.BatchNorm2d(out_planes), nn.ReLU6()]
        self.features = nn.SequentialCell(layers)

    def construct(self, x):
        output = self.features(x)
        return output

class InvertedResidual(nn.Cell):
    """MobileNetV2 inverted residual block definition.

    Args:
        inp (int): Input channel.
        oup (int): Output channel.
        stride (int): Stride size for the first convolutional layer. Default: 1.
        expand_ratio (int): Expand ratio of the input channel.

    Returns:
        Tensor, output tensor.

    Examples:
        >>> InvertedResidual(3, 256, 1, 1)
    """
    def __init__(self, inp, oup, stride, expand_ratio):
        super(InvertedResidual, self).__init__()
        assert stride in [1, 2]
        hidden_dim = int(round(inp * expand_ratio))
        self.use_res_connect = stride == 1 and inp == oup
        layers = []
        if expand_ratio != 1:
            layers.append(ConvBNReLU(inp, hidden_dim, kernel_size=1))
        layers.extend([
            ConvBNReLU(hidden_dim, hidden_dim, stride=stride, groups=hidden_dim),
            nn.Conv2d(hidden_dim, oup, kernel_size=1, stride=1, has_bias=False),
            nn.BatchNorm2d(oup),
        ])
        self.conv = nn.SequentialCell(layers)
        self.cast = P.Cast()

    def construct(self, x):
        identity = x
        x = self.conv(x)
        if self.use_res_connect:
            return P.add(identity, x)
        return x

class MobileNetV2Backbone(nn.Cell):
    """MobileNetV2 backbone architecture.

    Args:
        width_mult (int): Channel multiplier, rounded to a multiple of 8/16. Default is 1.
        inverted_residual_setting (list): Inverted residual settings. Default is None.
        round_nearest (int): Round channels to a multiple of this value. Default is 8.

    Returns:
        Tensor, output tensor.

    Examples:
        >>> MobileNetV2Backbone()
    """
    def __init__(self, width_mult=1., inverted_residual_setting=None, round_nearest=8,
                 input_channel=32, last_channel=1280):
        super(MobileNetV2Backbone, self).__init__()
        block = InvertedResidual
        # setting of inverted residual blocks
        self.cfgs = inverted_residual_setting
        if inverted_residual_setting is None:
            self.cfgs = [
                # t, c, n, s
                [1, 16, 1, 1],
                [6, 24, 2, 2],
                [6, 32, 3, 2],
                [6, 64, 4, 2],
                [6, 96, 3, 1],
                [6, 160, 3, 2],
                [6, 320, 1, 1],
            ]
        # building the first layer
        input_channel = _make_divisible(input_channel * width_mult, round_nearest)
        self.out_channels = _make_divisible(last_channel * max(1.0, width_mult), round_nearest)
        features = [ConvBNReLU(3, input_channel, stride=2)]
        # building the inverted residual blocks
        for t, c, n, s in self.cfgs:
            output_channel = _make_divisible(c * width_mult, round_nearest)
            for i in range(n):
                stride = s if i == 0 else 1
                features.append(block(input_channel, output_channel, stride, expand_ratio=t))
                input_channel = output_channel
        features.append(ConvBNReLU(input_channel, self.out_channels, kernel_size=1))
        self.features = nn.SequentialCell(features)
        self._initialize_weights()

    def construct(self, x):
        x = self.features(x)
        return x

    def _initialize_weights(self):
        """Initialize weights."""
        self.init_parameters_data()
        for _, m in self.cells_and_names():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.set_data(Tensor(np.random.normal(0, np.sqrt(2. / n),
                                                          m.weight.data.shape).astype("float32")))
                if m.bias is not None:
                    m.bias.set_data(Tensor(np.zeros(m.bias.data.shape, dtype="float32")))
            elif isinstance(m, nn.BatchNorm2d):
                m.gamma.set_data(Tensor(np.ones(m.gamma.data.shape, dtype="float32")))
                m.beta.set_data(Tensor(np.zeros(m.beta.data.shape, dtype="float32")))

    @property
    def get_features(self):
        return self.features

class MobileNetV2Head(nn.Cell):
    """MobileNetV2 classification head.

    Args:
        input_channel (int): Number of input channels. Default is 1280.
        num_classes (int): Number of classes. Default is 1000.
        has_dropout (bool): Whether dropout is used. Default is False.

    Returns:
        Tensor, output tensor.

    Examples:
        >>> MobileNetV2Head(num_classes=1000)
    """
    def __init__(self, input_channel=1280, num_classes=1000, has_dropout=False, activation="None"):
        super(MobileNetV2Head, self).__init__()
        # mobilenet head
        head = ([GlobalAvgPooling(), nn.Dense(input_channel, num_classes, has_bias=True)] if not has_dropout else
                [GlobalAvgPooling(), nn.Dropout(0.2), nn.Dense(input_channel, num_classes, has_bias=True)])
        self.head = nn.SequentialCell(head)
        self.need_activation = True
        if activation == "Sigmoid":
            self.activation = nn.Sigmoid()
        elif activation == "Softmax":
            self.activation = nn.Softmax()
        else:
            self.need_activation = False
        self._initialize_weights()

    def construct(self, x):
        x = self.head(x)
        if self.need_activation:
            x = self.activation(x)
        return x

    def _initialize_weights(self):
        """Initialize weights."""
        self.init_parameters_data()
        for _, m in self.cells_and_names():
            if isinstance(m, nn.Dense):
                m.weight.set_data(Tensor(np.random.normal(0, 0.01, m.weight.data.shape).astype("float32")))
                if m.bias is not None:
                    m.bias.set_data(Tensor(np.zeros(m.bias.data.shape, dtype="float32")))

    @property
    def get_head(self):
        return self.head

class MobileNetV2(nn.Cell):
    """MobileNetV2 architecture (backbone plus head).

    Args:
        num_classes (int): Number of classes.
        width_mult (int): Channel multiplier, rounded to a multiple of 8/16. Default is 1.
        has_dropout (bool): Whether dropout is used. Default is False.
        inverted_residual_setting (list): Inverted residual settings. Default is None.
        round_nearest (int): Round channels to a multiple of this value. Default is 8.

    Returns:
        Tensor, output tensor.

    Examples:
        >>> MobileNetV2(num_classes=26)
    """
    def __init__(self, num_classes=1000, width_mult=1., has_dropout=False, inverted_residual_setting=None,
                 round_nearest=8, input_channel=32, last_channel=1280):
        super(MobileNetV2, self).__init__()
        # Build the backbone first so its out_channels can size the head.
        backbone = MobileNetV2Backbone(width_mult=width_mult,
                                       inverted_residual_setting=inverted_residual_setting,
                                       round_nearest=round_nearest, input_channel=input_channel,
                                       last_channel=last_channel)
        self.backbone = backbone.get_features
        self.head = MobileNetV2Head(input_channel=backbone.out_channels,
                                    num_classes=num_classes, has_dropout=has_dropout).get_head

    def construct(self, x):
        x = self.backbone(x)
        x = self.head(x)
        return x

class MobileNetV2Combine(nn.Cell):
    """Combine a feature-extraction backbone with a fully connected head.

    Args:
        backbone (Cell): the feature extraction layers.
        head (Cell): the fully connected layers.

    Returns:
        Tensor, output tensor.

    Examples:
        >>> MobileNetV2Combine(backbone, head)
    """
    def __init__(self, backbone, head):
        super(MobileNetV2Combine, self).__init__(auto_prefix=False)
        self.backbone = backbone
        self.head = head

    def construct(self, x):
        x = self.backbone(x)
        x = self.head(x)
        return x

def mobilenet_v2(backbone, head):
    return MobileNetV2Combine(backbone, head)
6. Training and Testing the MobileNetV2 Model
Training strategy
Usually a model is trained with a static learning rate such as 0.01. As training progresses and the model converges, the magnitude of the weight updates should shrink to reduce oscillation late in training, so a dynamically decaying learning rate is often used. Common decay strategies are:
- polynomial decay/square decay;
- cosine decay;
- exponential decay;
- stage decay.
Here cosine decay is used:
def cosine_decay(total_steps, lr_init=0.0, lr_end=0.0, lr_max=0.1, warmup_steps=0):
    """Applies cosine decay to generate a learning rate array.

    Args:
        total_steps (int): all steps in training.
        lr_init (float): initial learning rate.
        lr_end (float): final learning rate.
        lr_max (float): maximum learning rate.
        warmup_steps (int): all steps in the warmup epochs.

    Returns:
        list, learning rate array.
    """
    lr_init, lr_end, lr_max = float(lr_init), float(lr_end), float(lr_max)
    decay_steps = total_steps - warmup_steps
    lr_all_steps = []
    inc_per_step = (lr_max - lr_init) / warmup_steps if warmup_steps else 0
    for i in range(total_steps):
        if i < warmup_steps:
            lr = lr_init + inc_per_step * (i + 1)
        else:
            cosine_decay = 0.5 * (1 + math.cos(math.pi * (i - warmup_steps) / decay_steps))
            lr = (lr_max - lr_end) * cosine_decay + lr_end
        lr_all_steps.append(lr)
    return lr_all_steps
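To see what this schedule looks like, the array returned by cosine_decay can be plotted directly. The settings are illustrative: 162 steps per epoch matches the dataset size printed during training below, and the single warmup epoch is an assumption made for this plot:

# Plot an example schedule: 10 epochs, 1 warmup epoch (illustrative).
steps_per_epoch = 162
lrs_demo = cosine_decay(10 * steps_per_epoch, lr_max=0.05,
                        warmup_steps=steps_per_epoch)

plt.plot(lrs_demo)
plt.xlabel('step')
plt.ylabel('learning rate')
plt.title('cosine decay with warmup')
plt.show()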
During model training, checkpoints can be saved to store the model parameters, both for inference and for resuming training after an interruption. Typical scenarios:
- Inference after training
  - Save the parameters after training finishes, for inference or prediction.
  - Validate accuracy in real time during training and keep the parameters with the highest accuracy for prediction.
- Resumed training
  - Save checkpoints during a long training job so that, after an abnormal exit, training does not have to restart from scratch.
  - Fine-tuning: train a model and save its parameters, then train on a second, similar task starting from that model.
Here a MobileNetV2 pretrained on ImageNet is loaded for fine-tuning: only the replaced final FC layer is trained, and checkpoints are saved during training.
def switch_precision(net, data_type):
    if ms.get_context('device_target') == "Ascend":
        net.to_float(data_type)
        for _, cell in net.cells_and_names():
            if isinstance(cell, nn.Dense):
                cell.to_float(ms.float32)
Model training and testing
Before formal training, define the training function, read the data, instantiate the model, and define the optimizer and loss function.
First, a brief introduction to the loss function and the optimizer:
- Loss function: also called the objective function, it measures how far the predictions are from the true values. Deep learning shrinks the loss through repeated iteration, and a well-chosen loss function can markedly improve model performance.
- Optimizer: minimizes the loss function and thereby improves the model during training.
Once a loss function is defined, its gradient with respect to the weights can be computed. The gradient tells the optimizer in which direction to adjust the weights to improve the model, as the minimal example below shows.
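The loss-and-gradient computation in the training loop below is built on mindspore.value_and_grad. A minimal standalone example with a toy function (not part of the training code) shows what it returns:

# value_and_grad wraps a function and returns (value, gradient).
# By default it differentiates with respect to the first positional input.
def f(x):
    return (x ** 2).sum()

grad_fn = ms.value_and_grad(f)
value, grad = grad_fn(Tensor(np.array([1.0, 2.0], np.float32)))
print(value)  # 5.0
print(grad)   # [2. 4.]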
Before training MobileNetV2, the parameters of the MobileNetV2Backbone module are frozen so that its weights are not updated during training; only the parameters of the MobileNetV2Head module are updated.
MindSpore provides loss functions such as SoftmaxCrossEntropyWithLogits, L1Loss, and MSELoss. SoftmaxCrossEntropyWithLogits is used here (a quick interface check follows).
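A small sanity check of SoftmaxCrossEntropyWithLogits on dummy data (2 samples, 3 classes, sparse integer labels) illustrates its interface; the numbers are made up for this example:

demo_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
logits = Tensor(np.array([[2.0, 0.5, 0.1],
                          [0.1, 2.2, 0.3]], np.float32))
labels = Tensor(np.array([0, 1], np.int32))
print(demo_loss(logits, labels))  # small loss: both rows favor the true class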
The loss is printed during training and testing. It fluctuates, but overall it decreases step by step while accuracy rises. Each run has some randomness, so the loss values will not be exactly the same.
After each epoch the model computes its accuracy on the test set; the printed values show that the predictive ability of the MobileNetV2 model keeps improving.
from mindspore.amp import FixedLossScaleManager
import time
LOSS_SCALE = 1024

train_dataset = create_dataset(dataset_path=config.dataset_path, config=config)
eval_dataset = create_dataset(dataset_path=config.dataset_path, config=config, training=False)  # evaluate on the test split
step_size = train_dataset.get_dataset_size()

backbone = MobileNetV2Backbone()  # last_channel=config.backbone_out_channels
# Freeze the parameters of the backbone. You can comment out these two lines.
for param in backbone.get_parameters():
    param.requires_grad = False
# Load parameters from the pretrained model.
load_checkpoint(config.pretrained_ckpt, backbone)

head = MobileNetV2Head(input_channel=backbone.out_channels, num_classes=config.num_classes)
network = mobilenet_v2(backbone, head)

# Define the loss, optimizer, and model.
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
loss_scale = FixedLossScaleManager(LOSS_SCALE, drop_overflow_update=False)
lrs = cosine_decay(config.epochs * step_size, lr_max=config.lr_max)
opt = nn.Momentum(network.trainable_params(), lrs, config.momentum, config.weight_decay,
                  loss_scale=LOSS_SCALE)

# Define the train_loop function used for training.
def train_loop(model, dataset, loss_fn, optimizer):
    # Forward computation.
    def forward_fn(data, label):
        logits = model(data)
        loss = loss_fn(logits, label)
        return loss
    # Use mindspore.value_and_grad to obtain the gradient function grad_fn,
    # which outputs the loss and the gradients. Since we differentiate with
    # respect to the model parameters, grad_position is None and the trainable
    # parameters are passed in.
    grad_fn = ms.value_and_grad(forward_fn, None, optimizer.parameters)
    # One-step training.
    def train_step(data, label):
        loss, grads = grad_fn(data, label)
        optimizer(grads)
        return loss

    size = dataset.get_dataset_size()
    model.set_train()
    for batch, (data, label) in enumerate(dataset.create_tuple_iterator()):
        loss = train_step(data, label)
        if batch % 10 == 0:
            loss, current = loss.asnumpy(), batch
            print(f"loss: {loss:>7f} [{current:>3d}/{size:>3d}]")

# Define the test_loop function used for evaluation.
def test_loop(model, dataset, loss_fn):
    num_batches = dataset.get_dataset_size()
    model.set_train(False)
    total, test_loss, correct = 0, 0, 0
    for data, label in dataset.create_tuple_iterator():
        pred = model(data)
        total += len(data)
        test_loss += loss_fn(pred, label).asnumpy()
        correct += (pred.argmax(1) == label).asnumpy().sum()
    test_loss /= num_batches
    correct /= total
    print(f"Test: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")

print("============== Starting Training ==============")
# For time reasons, training runs for only 2 epochs here; adjust as needed.
epoch_begin_time = time.time()
epochs = 2
for t in range(epochs):
    begin_time = time.time()
    print(f"Epoch {t+1}\n-------------------------------")
    train_loop(network, train_dataset, loss, opt)
    ms.save_checkpoint(network, "save_mobilenetV2_model.ckpt")
    end_time = time.time()
    times = end_time - begin_time
    print(f"per epoch time: {times}s")
    test_loop(network, eval_dataset, loss)
epoch_end_time = time.time()
times = epoch_end_time - epoch_begin_time
print(f"total time: {times}s")
print("============== Training Success ==============")
============== Starting Training ==============
Epoch 1
-------------------------------
loss: 3.231806 [  0/162]
loss: 3.274458 [ 10/162]
loss: 3.288357 [ 20/162]
loss: 3.295315 [ 30/162]
loss: 3.266681 [ 40/162]
loss: 3.232827 [ 50/162]
loss: 3.297465 [ 60/162]
loss: 3.208950 [ 70/162]
loss: 3.171727 [ 80/162]
loss: 3.242758 [ 90/162]
loss: 3.154859 [100/162]
loss: 3.182621 [110/162]
loss: 3.216053 [120/162]
loss: 3.188997 [130/162]
loss: 3.240057 [140/162]
loss: 3.146155 [150/162]
loss: 3.177748 [160/162]
per epoch time: 77.57616019248962s
Test: 
 Accuracy: 9.3%, Avg loss: 3.181313 

Epoch 2
-------------------------------
loss: 3.176034 [  0/162]
loss: 3.144456 [ 10/162]
loss: 3.212293 [ 20/162]
loss: 3.193235 [ 30/162]
loss: 3.173748 [ 40/162]
loss: 3.204128 [ 50/162]
loss: 3.166653 [ 60/162]
loss: 3.177643 [ 70/162]
loss: 3.202046 [ 80/162]
loss: 3.124865 [ 90/162]
loss: 3.182923 [100/162]
loss: 3.134150 [110/162]
loss: 3.077918 [120/162]
loss: 3.128475 [130/162]
loss: 3.131626 [140/162]
loss: 3.163661 [150/162]
loss: 3.156061 [160/162]
per epoch time: 97.27487087249756s
Test: 
 Accuracy: 20.0%, Avg loss: 3.103985 

total time: 337.72511100769043s
============== Training Success ==============
7. Model Inference
Load the model checkpoint for inference. When loading parameters with the load_checkpoint interface, pass them into the raw network, not into a training network that wraps the optimizer and loss function.
CKPT = "save_mobilenetV2_model.ckpt"

def image_process(image):
    """Process one image at a time.

    Args:
        image: shape (H, W, C)
    """
    mean = [0.485*255, 0.456*255, 0.406*255]
    std = [0.229*255, 0.224*255, 0.225*255]
    image = (np.array(image) - mean) / std
    image = image.transpose((2, 0, 1))
    img_tensor = Tensor(np.array([image], np.float32))
    return img_tensor

def infer_one(network, image_path):
    image = Image.open(image_path).resize((config.image_height, config.image_width))
    logits = network(image_process(image))
    pred = np.argmax(logits.asnumpy(), axis=1)[0]
    print(image_path, class_en[pred])

def infer():
    backbone = MobileNetV2Backbone(last_channel=config.backbone_out_channels)
    head = MobileNetV2Head(input_channel=backbone.out_channels, num_classes=config.num_classes)
    network = mobilenet_v2(backbone, head)
    load_checkpoint(CKPT, network)
    for i in range(91, 100):
        infer_one(network, f'data_en/test/Cardboard/000{i}.jpg')
infer()
data_en/test/Cardboard/00091.jpg Fluorescent lamp
data_en/test/Cardboard/00092.jpg Old Mirror
data_en/test/Cardboard/00093.jpg Vegetable Leaf
data_en/test/Cardboard/00094.jpg Fluorescent lamp
data_en/test/Cardboard/00095.jpg Fluorescent lamp
data_en/test/Cardboard/00096.jpg Orange Peel
data_en/test/Cardboard/00097.jpg Old Mirror
data_en/test/Cardboard/00098.jpg Fluorescent lamp
data_en/test/Cardboard/00099.jpg Glass Bottle
8. Exporting AIR/GEIR/ONNX Model Files
Export an AIR model file for subsequent model conversion and inference on the Atlas 200 DK. Currently this is supported only in a MindSpore+Ascend environment.
backbone = MobileNetV2Backbone(last_channel=config.backbone_out_channels)
head = MobileNetV2Head(input_channel=backbone.out_channels, num_classes=config.num_classes)
network = mobilenet_v2(backbone, head)
load_checkpoint(CKPT, network)

input = np.random.uniform(0.0, 1.0, size=[1, 3, 224, 224]).astype(np.float32)
# export(network, Tensor(input), file_name='mobilenetv2.air', file_format='AIR')
# export(network, Tensor(input), file_name='mobilenetv2.pb', file_format='GEIR')
export(network, Tensor(input), file_name='mobilenetv2.onnx', file_format='ONNX')
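To verify the exported file, the ONNX model can be loaded back with onnxruntime and run on the same random input. This is an optional check; onnxruntime is not preinstalled in this environment and would need pip install onnxruntime first:

import onnxruntime as ort

sess = ort.InferenceSession('mobilenetv2.onnx')
input_name = sess.get_inputs()[0].name
outputs = sess.run(None, {input_name: input})  # reuse the random input above
print(outputs[0].shape)                        # expected: (1, 26)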
print("yangge mindspore打卡23天之基于MobileNetv2的垃圾分类函数式自动微分 2024 07 11")
yangge mindspore打卡23天之基于MobileNetv2的垃圾分类函数式自动微分 2024 07 11
Red Wine Clustering with the K-Nearest-Neighbor Algorithm
This experiment uses MindSpore to run a KNN experiment on part of the wine dataset.
1. Objectives
- Understand the basic concept of KNN;
- Learn how to use MindSpore to run a KNN experiment.
2. Introduction to the K-Nearest-Neighbor Algorithm
K-nearest neighbor (KNN) is a non-parametric statistical method for classification and regression, first proposed by Cover and Hart in 1967 (Cover et al., 1967), and one of the most fundamental machine learning algorithms. The idea is simple: to determine the class of a sample, compute its distance to all training samples, take the k samples closest to it, tally the classes of those neighbors, and vote; the class with the most votes is the prediction. KNN has three basic elements:
- K value: a sample's classification is decided by a "majority vote" of its K neighbors. A small K is susceptible to noise; a large K blurs the boundaries between classes.
- Distance metric: reflects the similarity between two samples in feature space; the smaller the distance, the more similar. Common choices are the Lp distance (p=2 gives the Euclidean distance), the Manhattan distance, the Hamming distance, and so on.
- Classification decision rule: usually majority voting, or distance-weighted majority voting (with weights inversely proportional to distance).
2.1 Classification
The prediction (classification) procedure is:
(1) Find the k training samples closest to the test sample x_test and collect them in a set N;
(2) Count the number of samples of each class in N: $C_i,\ i = 1, 2, \ldots, c$;
(3) The final classification result is $\arg\max_i C_i$, i.e. the class with the largest $C_i$.
In this procedure the choice of k matters a great deal; it can be tuned to the problem and the data. A refinement is to give each sample its own voting weight, which yields the weighted k-nearest-neighbor algorithm, a variant of KNN; a minimal sketch follows.
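As a concrete illustration of the distance-weighted variant just described, here is a small NumPy-only sketch. The helper name weighted_knn_predict and the 1/distance weighting scheme are choices made for this example, not part of the original notebook:

import numpy as np

def weighted_knn_predict(x, X_train, Y_train, k=5, eps=1e-8):
    """Distance-weighted kNN vote: each neighbor votes with weight 1/distance."""
    dist = np.sqrt(((X_train - x) ** 2).sum(axis=1))  # Euclidean distances
    nearest = np.argsort(dist)[:k]                    # indices of k closest samples
    votes = {}
    for i in nearest:
        votes[Y_train[i]] = votes.get(Y_train[i], 0.0) + 1.0 / (dist[i] + eps)
    return max(votes, key=votes.get)                  # class with the largest weighted vote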
2.2 Regression
Suppose the k training samples nearest to the test sample have label values $y_i$. The regression prediction for the sample is

$$\hat{y} = \frac{1}{k}\sum_{i=1}^{k} y_i$$

that is, the mean of the neighbors' labels.

With sample weights, the regression prediction becomes

$$\hat{y} = \frac{1}{k}\sum_{i=1}^{k} w_i y_i$$

where $w_i$ is the weight of the $i$-th sample.
2.3 Definition of Distance
KNN depends on the distances between samples, and the most commonly used distance function is the Euclidean distance. For two points $x$ and $y$ in $\mathbb{R}^{n}$, the Euclidean distance is

$$d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$$

Note that when using the Euclidean distance, each component of the feature vector should be normalized to reduce interference caused by differing feature scales; otherwise feature components with small values are swamped by those with large values.
Other distance measures include the Mahalanobis distance, the Bhattacharyya distance, and others.
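In line with the normalization advice above, a z-score standardization step can be applied before running KNN. The notebook below runs on raw features; this sketch (the helper zscore is hypothetical, not part of the original code) shows one way to standardize using training-set statistics only:

import numpy as np

def zscore(X_train, X_test):
    # Standardize each feature with the training set's mean and std,
    # then apply the same transform to the test set.
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0) + 1e-8  # avoid division by zero
    return (X_train - mu) / sigma, (X_test - mu) / sigma

Standardizing typically helps on the wine data, since attributes such as Proline are orders of magnitude larger than the others and would otherwise dominate the Euclidean distance.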
3. Experiment Environment
Prerequisites:
- Proficiency with Python.
- Basic machine learning theory: KNN, unsupervised learning, the Euclidean distance, etc.
Environment:
- MindSpore 2.0 (MindSpore is updated regularly, and this guide is refreshed to match the version);
- This case supports win_x86 and Linux; it runs on CPU, GPU, and Ascend.
- To run this experiment locally, install MindSpore by following the MindSpore Environment Setup Lab Manual.
4. Data Processing
4.1 Data Preparation
The Wine dataset is one of the best-known datasets in pattern recognition (official page: Wine Data Set). The data are the results of a chemical analysis of wines grown in the same region of Italy but derived from three different cultivars. The dataset records the quantities of 13 constituents found in each of the three types of wine. The 13 attributes are:
- Alcohol
- Malic acid
- Ash
- Alcalinity of ash
- Magnesium
- Total phenols
- Flavanoids
- Nonflavanoid phenols
- Proanthocyanins
- Color intensity
- Hue
- OD280/OD315 of diluted wines
- Proline

- Method 1: download the wine.data file from the Wine Data Set official website.
- Method 2: download the wine.data file from Huawei Cloud OBS.
| Key | Value | Key | Value |
|---|---|---|---|
| Data Set Characteristics: | Multivariate | Number of Instances: | 178 |
| Attribute Characteristics: | Integer, Real | Number of Attributes: | 13 |
| Associated Tasks: | Classification | Missing Values? | No |
%%capture captured_output
# The environment has mindspore==2.2.14 preinstalled; to switch versions, change the version number below
!pip uninstall mindspore -y
!pip install -i https://pypi.mirrors.ustc.edu.cn/simple mindspore==2.2.14
# Check the current mindspore version
!pip show mindspore
Name: mindspore
Version: 2.2.14
Summary: MindSpore is a new open source deep learning training/inference framework that could be used for mobile, edge and cloud scenarios.
Home-page: https://www.mindspore.cn
Author: The MindSpore Authors
Author-email: contact@mindspore.cn
License: Apache 2.0
Location: /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages
Requires: asttokens, astunparse, numpy, packaging, pillow, protobuf, psutil, scipy
Required-by: mindnlp
# Download the red wine dataset
from download import download
url = "https://ascend-professional-construction-dataset.obs.cn-north-4.myhuaweicloud.com:443/MachineLearning/wine.zip"
path = download(url, "./", kind="zip", replace=True)
Downloading data from https://ascend-professional-construction-dataset.obs.cn-north-4.myhuaweicloud.com:443/MachineLearning/wine.zip (4 kB)

file_sizes: 100%|██████████████████████████| 4.09k/4.09k [00:00<00:00, 2.35MB/s]
Extracting zip file...
Successfully downloaded / unzipped to ./
4.2 Reading and Processing the Data
Import MindSpore modules and helper modules
Before generating data, import the required Python libraries.
For now only the os library is needed; to keep things easy to follow, other libraries are explained where they are first used.
Detailed documentation of the MindSpore modules can be searched on the MindSpore API page.
Runtime information such as the execution mode, backend, and hardware can be configured via context.set_context.
Import the context module and configure the runtime information.
%matplotlib inline
import os
import csv
import numpy as np
import matplotlib.pyplot as plt

import mindspore as ms
from mindspore import nn, ops

ms.set_context(device_target="CPU")
Read the Wine dataset wine.data and view part of the data:
with open('wine.data') as csv_file:
    data = list(csv.reader(csv_file, delimiter=','))
print(data[56:62] + data[130:133])
[['1', '14.22', '1.7', '2.3', '16.3', '118', '3.2', '3', '.26', '2.03', '6.38', '.94', '3.31', '970'], ['1', '13.29', '1.97', '2.68', '16.8', '102', '3', '3.23', '.31', '1.66', '6', '1.07', '2.84', '1270'], ['1', '13.72', '1.43', '2.5', '16.7', '108', '3.4', '3.67', '.19', '2.04', '6.8', '.89', '2.87', '1285'], ['2', '12.37', '.94', '1.36', '10.6', '88', '1.98', '.57', '.28', '.42', '1.95', '1.05', '1.82', '520'], ['2', '12.33', '1.1', '2.28', '16', '101', '2.05', '1.09', '.63', '.41', '3.27', '1.25', '1.67', '680'], ['2', '12.64', '1.36', '2.02', '16.8', '100', '2.02', '1.41', '.53', '.62', '5.75', '.98', '1.59', '450'], ['3', '12.86', '1.35', '2.32', '18', '122', '1.51', '1.25', '.21', '.94', '4.1', '.76', '1.29', '630'], ['3', '12.88', '2.99', '2.4', '20', '104', '1.3', '1.22', '.24', '.83', '5.4', '.74', '1.42', '530'], ['3', '12.81', '2.31', '2.4', '24', '98', '1.15', '1.09', '.27', '.83', '5.7', '.66', '1.36', '560']]
Take all three classes of samples (178 rows in total), use the 13 attributes of the dataset as the independent variable $X$, and use the 3 class labels as the dependent variable $Y$.
X = np.array([[float(x) for x in s[1:]] for s in data[:178]], np.float32)
Y = np.array([s[0] for s in data[:178]], np.int32)
Visualize pairs of attributes in 2D to see how the samples are distributed and how separable they are on those two attributes:
attrs = ['Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Magnesium', 'Total phenols','Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins', 'Color intensity', 'Hue','OD280/OD315 of diluted wines', 'Proline']
plt.figure(figsize=(10, 8))
for i in range(0, 4):
    plt.subplot(2, 2, i+1)
    a1, a2 = 2 * i, 2 * i + 1
    plt.scatter(X[:59, a1], X[:59, a2], label='1')
    plt.scatter(X[59:130, a1], X[59:130, a2], label='2')
    plt.scatter(X[130:, a1], X[130:, a2], label='3')
    plt.xlabel(attrs[a1])
    plt.ylabel(attrs[a2])
    plt.legend()
plt.show()
Split the dataset 128:50 into a training set (samples with known classes) and a validation set (samples to be verified):
train_idx = np.random.choice(178, 128, replace=False)
test_idx = np.array(list(set(range(178)) - set(train_idx)))
X_train, Y_train = X[train_idx], Y[train_idx]
X_test, Y_test = X[test_idx], Y[test_idx]
X_train.shape,Y_train.shape
((128, 13), (128,))
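Note that np.random.choice above is unseeded, so each run produces a different split and hence a different validation accuracy. For a repeatable experiment, NumPy can be seeded first; this is an optional tweak, not in the original code:

np.random.seed(0)  # any fixed seed makes the 128/50 split reproducible
train_idx = np.random.choice(178, 128, replace=False)
test_idx = np.array(list(set(range(178)) - set(train_idx)))
X_train, Y_train = X[train_idx], Y[train_idx]
X_test, Y_test = X[test_idx], Y[test_idx]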
5. Model Building: Computing Distances
Using MindSpore operators such as tile, square, ReduceSum, sqrt, and TopK, compute the distances between the input sample x and all the already-classified samples in X_train in one batch of matrix operations, and pick out the top k nearest neighbors.
class KnnNet(nn.Cell):
    def __init__(self, k):
        super(KnnNet, self).__init__()
        self.k = k

    def construct(self, x, X_train):
        # Tile the input x to match the number of samples in X_train.
        x_tile = ops.tile(x, (128, 1))
        square_diff = ops.square(x_tile - X_train)
        square_dist = ops.sum(square_diff, 1)
        dist = ops.sqrt(square_dist)
        # Negating dist means a larger value corresponds to a closer sample.
        values, indices = ops.topk(-dist, self.k)
        return indices

def knn(knn_net, x, X_train, Y_train):
    x, X_train = ms.Tensor(x), ms.Tensor(X_train)
    indices = knn_net(x, X_train)
    topk_cls = [0] * len(indices.asnumpy())
    for idx in indices.asnumpy():
        topk_cls[Y_train[idx]] += 1
    cls = np.argmax(topk_cls)
    return cls
This code defines a simple K-nearest-neighbors (KNN) classifier implemented with MindSpore. The procedure is:

- Initialization (`__init__` method): set the parameter `k`, the number of nearest neighbors to consider.
- Construction (`construct` method):
  - The input `x` is the feature vector of the sample to classify, and `X_train` is the feature matrix of the training samples.
  - `x_tile` tiles `x` to match the number of rows of `X_train` (128 samples here).
  - `square_diff` computes the element-wise squared differences between `x_tile` and `X_train`.
  - `square_dist` sums each row of `square_diff`, giving the squared distance between `x` and every training sample.
  - `dist` takes the square root of `square_dist` to obtain the actual Euclidean distances.
  - `values, indices = ops.topk(-dist, self.k)` finds the `k` nearest neighbors: negating the distances turns "smallest distance" into "largest value", and `indices` gives the neighbors' positions in `X_train`.
- KNN classification (`knn` function):
  - Convert `x` and `X_train` into MindSpore Tensors.
  - Call the `knn_net` to obtain the indices of the nearest neighbors.
  - Initialize a list `topk_cls` to count how often each class appears among the neighbors.
  - For each index in `indices`, increment the count in `topk_cls` for the corresponding label from `Y_train`.
  - The predicted class `cls` is the index of the largest count in `topk_cls`, i.e. the most frequent class among the neighbors.

Mathematically, let $x \in \mathbb{R}^d$ be the feature vector of the sample to classify and $X_{\text{train}} \in \mathbb{R}^{n \times d}$ the training feature matrix, where $n$ is the number of training samples and $d$ is the feature dimension. For each training sample $X_{\text{train},i}$, compute the Euclidean distance to $x$:

$$d_i = \sqrt{\sum_{j=1}^{d} (x_j - X_{\text{train},i,j})^2}$$

Then find the index set $I_k$ of the $k$ smallest distances $d_i$, count the occurrences of each class in $Y_{\text{train}}[I_k]$, and predict the most frequent class:

$$\hat{c} = \arg\max_{c} \, \big|\{\, i \in I_k : Y_{\text{train},i} = c \,\}\big|$$

This is the basic mathematical principle of the KNN classifier.
6. Model Prediction
Validate the KNN algorithm on the validation set with $k = 5$. The validation accuracy is around 72% in the run below (roughly 70% to 80% depending on the random split), which shows that KNN is effective on this 3-class task: the wine variety can be judged from the 13 attributes.
acc = 0
knn_net = KnnNet(5)
for x, y in zip(X_test, Y_test):
    pred = knn(knn_net, x, X_train, Y_train)
    acc += (pred == y)
    print('label: %d, prediction: %s' % (y, pred))
print('Validation accuracy is %f' % (acc/len(Y_test)))
label: 1, prediction: 1
label: 1, prediction: 1
label: 3, prediction: 3
label: 3, prediction: 2
label: 1, prediction: 1
label: 1, prediction: 1
label: 1, prediction: 1
label: 1, prediction: 1
label: 3, prediction: 3
label: 3, prediction: 2
label: 1, prediction: 3
label: 3, prediction: 2
label: 1, prediction: 3
label: 1, prediction: 1
label: 1, prediction: 1
label: 3, prediction: 2
label: 3, prediction: 3
label: 3, prediction: 3
label: 1, prediction: 1
label: 3, prediction: 1
label: 1, prediction: 1
label: 1, prediction: 1
label: 1, prediction: 1
label: 3, prediction: 2
label: 1, prediction: 1
label: 1, prediction: 3
label: 3, prediction: 3
label: 1, prediction: 1
label: 1, prediction: 1
label: 1, prediction: 1
label: 1, prediction: 1
label: 1, prediction: 1
label: 1, prediction: 1
label: 2, prediction: 2
label: 2, prediction: 2
label: 2, prediction: 2
label: 2, prediction: 2
label: 2, prediction: 1
label: 2, prediction: 2
label: 2, prediction: 1
label: 2, prediction: 2
label: 2, prediction: 3
label: 2, prediction: 2
label: 2, prediction: 3
label: 2, prediction: 2
label: 2, prediction: 3
label: 2, prediction: 2
label: 2, prediction: 2
label: 2, prediction: 2
label: 2, prediction: 2
Validation accuracy is 0.720000
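To see where the 72% comes from, a per-class breakdown can be computed by reusing knn_net and the split above. This is a small diagnostic sketch, not part of the original experiment:

import collections

correct = collections.Counter()
total = collections.Counter()
for x, y in zip(X_test, Y_test):
    pred = knn(knn_net, x, X_train, Y_train)
    total[int(y)] += 1
    correct[int(y)] += int(pred == y)
for c in sorted(total):
    print(f"class {c}: {correct[c]}/{total[c]} correct")

In the run above most errors are class-2 and class-3 samples being confused with each other, which is consistent with the unnormalized features discussed in section 2.3.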
Summary
This experiment used MindSpore to implement the KNN algorithm and solve a 3-class classification problem. The three classes of samples in the wine dataset were split into known-class samples and samples to be verified. The validation results show that KNN is effective on this task and can determine the wine variety from the 13 attributes.