PyTorch Study Notes, Day 4 — Training on the MNIST Dataset and a First Close Read

What had to come has finally come, hahaha — basically no machine-learning beginner gets around this example. It is open source, the data quality is high, the samples are uniformly sized, and the problem is simple: perfect material for beginners to chew on.

Today the goal is to get the code running, and then use the weekend to work through the details properly.

Code Implementation

First, a thank-you to Baidu AI — the generated code really does run as-is, which is a treat. Roughly nine years ago, as a college sophomore, obtaining this little bit of code meant sitting through several hours of English video lectures. And even though the network is very shallow, it already achieves quite good accuracy.

import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms

# Define the model
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)   # 1 input channel, 10 output channels, 5x5 kernel
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.fc = nn.Linear(20 * 4 * 4, 10)

    def forward(self, x):
        batch_size = x.size(0)          # the first dimension is the batch; for 1x28x28 images the input is 64x1x28x28
        x = torch.relu(self.conv1(x))   # in: 64x1x28x28,  out: 64x10x24x24
        x = torch.max_pool2d(x, 2, 2)   # in: 64x10x24x24, out: 64x10x12x12 (pooling layer)
        x = torch.relu(self.conv2(x))   # in: 64x10x12x12, out: 64x20x8x8
        x = torch.max_pool2d(x, 2, 2)   # in: 64x20x8x8,   out: 64x20x4x4
        x = x.view(batch_size, -1)      # in: 64x20x4x4,   out: 64x320
        x = self.fc(x)                  # in: 64x320,      out: 64x10
        return x

if __name__ == "__main__":
    # Hyperparameters
    batch_size = 64
    epochs = 10
    learning_rate = 0.01

    # Preprocessing: normalize with the MNIST mean (0.1307) and std (0.3081)
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])

    # Load the train/test data. batch_size: samples per step; shuffle: reshuffle the data every epoch
    train_dataset = datasets.MNIST('data', train=True, download=True, transform=transform)
    train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
    test_dataset = datasets.MNIST('data', train=False, transform=transform)
    test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=True)

    # Instantiate the model, loss function, and optimizer
    model = Net()
    optimizer = optim.Adam(model.parameters(), lr=learning_rate)
    criterion = nn.CrossEntropyLoss()

    # Train the model
    for epoch in range(epochs):
        for batch_idx, (data, target) in enumerate(train_loader):  # the loader batches automatically
            optimizer.zero_grad()   # the canonical training steps
            output = model(data)
            loss = criterion(output, target)
            loss.backward()
            optimizer.step()
            if batch_idx % 100 == 0:
                print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                    epoch, batch_idx * len(data), len(train_loader.dataset),
                    100. * batch_idx / len(train_loader), loss.item()))

    # Test the model
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            output = model(data)
            test_loss += criterion(output, target).item()  # criterion returns the per-batch mean, so this "average" is scaled down by roughly the batch size
            pred = output.argmax(dim=1, keepdim=True)
            correct += pred.eq(target.view_as(pred)).sum().item()
    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))

The output is as follows:

Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
Failed to download (trying next):
HTTP Error 403: Forbidden
Downloading https://ossci-datasets.s3.amazonaws.com/mnist/train-images-idx3-ubyte.gz
Downloading https://ossci-datasets.s3.amazonaws.com/mnist/train-images-idx3-ubyte.gz to data\MNIST\raw\train-images-idx3-ubyte.gz
100%|██████████| 9912422/9912422 [02:41<00:00, 61401.03it/s]
Extracting data\MNIST\raw\train-images-idx3-ubyte.gz to data\MNIST\raw
Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz
Failed to download (trying next):
HTTP Error 403: Forbidden
Downloading https://ossci-datasets.s3.amazonaws.com/mnist/train-labels-idx1-ubyte.gz
Downloading https://ossci-datasets.s3.amazonaws.com/mnist/train-labels-idx1-ubyte.gz to data\MNIST\raw\train-labels-idx1-ubyte.gz
100%|██████████| 28881/28881 [00:00<00:00, 97971.03it/s]
Extracting data\MNIST\raw\train-labels-idx1-ubyte.gz to data\MNIST\raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz
Failed to download (trying next):
HTTP Error 403: Forbidden
Downloading https://ossci-datasets.s3.amazonaws.com/mnist/t10k-images-idx3-ubyte.gz
Downloading https://ossci-datasets.s3.amazonaws.com/mnist/t10k-images-idx3-ubyte.gz to data\MNIST\raw\t10k-images-idx3-ubyte.gz
100%|██████████| 1648877/1648877 [00:29<00:00, 56423.58it/s]
Extracting data\MNIST\raw\t10k-images-idx3-ubyte.gz to data\MNIST\raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz
Failed to download (trying next):
HTTP Error 403: Forbidden
Downloading https://ossci-datasets.s3.amazonaws.com/mnist/t10k-labels-idx1-ubyte.gz
Downloading https://ossci-datasets.s3.amazonaws.com/mnist/t10k-labels-idx1-ubyte.gz to data\MNIST\raw\t10k-labels-idx1-ubyte.gz
100%|██████████| 4542/4542 [00:00<00:00, 4339528.19it/s]
Extracting data\MNIST\raw\t10k-labels-idx1-ubyte.gz to data\MNIST\raw
Train Epoch: 0 [0/60000 (0%)]	Loss: 2.275243
Train Epoch: 0 [6400/60000 (11%)]	Loss: 0.200208
Train Epoch: 0 [12800/60000 (21%)]	Loss: 0.064670
Train Epoch: 0 [19200/60000 (32%)]	Loss: 0.066074
Train Epoch: 0 [25600/60000 (43%)]	Loss: 0.115960
Train Epoch: 0 [32000/60000 (53%)]	Loss: 0.171170
Train Epoch: 0 [38400/60000 (64%)]	Loss: 0.041663
Train Epoch: 0 [44800/60000 (75%)]	Loss: 0.179172
Train Epoch: 0 [51200/60000 (85%)]	Loss: 0.014898
Train Epoch: 0 [57600/60000 (96%)]	Loss: 0.035095
Train Epoch: 1 [0/60000 (0%)]	Loss: 0.016566
Train Epoch: 1 [6400/60000 (11%)]	Loss: 0.008371
Train Epoch: 1 [12800/60000 (21%)]	Loss: 0.006069
Train Epoch: 1 [19200/60000 (32%)]	Loss: 0.009995
Train Epoch: 1 [25600/60000 (43%)]	Loss: 0.020422
Train Epoch: 1 [32000/60000 (53%)]	Loss: 0.155348
Train Epoch: 1 [38400/60000 (64%)]	Loss: 0.059595
Train Epoch: 1 [44800/60000 (75%)]	Loss: 0.038654
Train Epoch: 1 [51200/60000 (85%)]	Loss: 0.084179
Train Epoch: 1 [57600/60000 (96%)]	Loss: 0.147250
Train Epoch: 2 [0/60000 (0%)]	Loss: 0.040161
Train Epoch: 2 [6400/60000 (11%)]	Loss: 0.147080
Train Epoch: 2 [12800/60000 (21%)]	Loss: 0.037228
Train Epoch: 2 [19200/60000 (32%)]	Loss: 0.257872
Train Epoch: 2 [25600/60000 (43%)]	Loss: 0.052811
Train Epoch: 2 [32000/60000 (53%)]	Loss: 0.005805
Train Epoch: 2 [38400/60000 (64%)]	Loss: 0.092318
Train Epoch: 2 [44800/60000 (75%)]	Loss: 0.084066
Train Epoch: 2 [51200/60000 (85%)]	Loss: 0.000331
Train Epoch: 2 [57600/60000 (96%)]	Loss: 0.011482
Train Epoch: 3 [0/60000 (0%)]	Loss: 0.042851
Train Epoch: 3 [6400/60000 (11%)]	Loss: 0.004001
Train Epoch: 3 [12800/60000 (21%)]	Loss: 0.008942
Train Epoch: 3 [19200/60000 (32%)]	Loss: 0.045065
Train Epoch: 3 [25600/60000 (43%)]	Loss: 0.099309
Train Epoch: 3 [32000/60000 (53%)]	Loss: 0.054098
Train Epoch: 3 [38400/60000 (64%)]	Loss: 0.059155
Train Epoch: 3 [44800/60000 (75%)]	Loss: 0.016098
Train Epoch: 3 [51200/60000 (85%)]	Loss: 0.114458
Train Epoch: 3 [57600/60000 (96%)]	Loss: 0.231477
Train Epoch: 4 [0/60000 (0%)]	Loss: 0.003781
Train Epoch: 4 [6400/60000 (11%)]	Loss: 0.068822
Train Epoch: 4 [12800/60000 (21%)]	Loss: 0.103501
Train Epoch: 4 [19200/60000 (32%)]	Loss: 0.002396
Train Epoch: 4 [25600/60000 (43%)]	Loss: 0.174503
Train Epoch: 4 [32000/60000 (53%)]	Loss: 0.027796
Train Epoch: 4 [38400/60000 (64%)]	Loss: 0.013167
Train Epoch: 4 [44800/60000 (75%)]	Loss: 0.011576
Train Epoch: 4 [51200/60000 (85%)]	Loss: 0.000726
Train Epoch: 4 [57600/60000 (96%)]	Loss: 0.069251
Train Epoch: 5 [0/60000 (0%)]	Loss: 0.006919
Train Epoch: 5 [6400/60000 (11%)]	Loss: 0.015165
Train Epoch: 5 [12800/60000 (21%)]	Loss: 0.117820
Train Epoch: 5 [19200/60000 (32%)]	Loss: 0.031030
Train Epoch: 5 [25600/60000 (43%)]	Loss: 0.031566
Train Epoch: 5 [32000/60000 (53%)]	Loss: 0.046268
Train Epoch: 5 [38400/60000 (64%)]	Loss: 0.055709
Train Epoch: 5 [44800/60000 (75%)]	Loss: 0.021299
Train Epoch: 5 [51200/60000 (85%)]	Loss: 0.004246
Train Epoch: 5 [57600/60000 (96%)]	Loss: 0.014340
Train Epoch: 6 [0/60000 (0%)]	Loss: 0.056358
Train Epoch: 6 [6400/60000 (11%)]	Loss: 0.104084
Train Epoch: 6 [12800/60000 (21%)]	Loss: 0.097005
Train Epoch: 6 [19200/60000 (32%)]	Loss: 0.009379
Train Epoch: 6 [25600/60000 (43%)]	Loss: 0.078417
Train Epoch: 6 [32000/60000 (53%)]	Loss: 0.217889
Train Epoch: 6 [38400/60000 (64%)]	Loss: 0.079795
Train Epoch: 6 [44800/60000 (75%)]	Loss: 0.052873
Train Epoch: 6 [51200/60000 (85%)]	Loss: 0.127716
Train Epoch: 6 [57600/60000 (96%)]	Loss: 0.087016
Train Epoch: 7 [0/60000 (0%)]	Loss: 0.045884
Train Epoch: 7 [6400/60000 (11%)]	Loss: 0.087923
Train Epoch: 7 [12800/60000 (21%)]	Loss: 0.164549
Train Epoch: 7 [19200/60000 (32%)]	Loss: 0.111163
Train Epoch: 7 [25600/60000 (43%)]	Loss: 0.300172
Train Epoch: 7 [32000/60000 (53%)]	Loss: 0.045357
Train Epoch: 7 [38400/60000 (64%)]	Loss: 0.087294
Train Epoch: 7 [44800/60000 (75%)]	Loss: 0.110581
Train Epoch: 7 [51200/60000 (85%)]	Loss: 0.001932
Train Epoch: 7 [57600/60000 (96%)]	Loss: 0.066714
Train Epoch: 8 [0/60000 (0%)]	Loss: 0.047415
Train Epoch: 8 [6400/60000 (11%)]	Loss: 0.106327
Train Epoch: 8 [12800/60000 (21%)]	Loss: 0.016832
Train Epoch: 8 [19200/60000 (32%)]	Loss: 0.013452
Train Epoch: 8 [25600/60000 (43%)]	Loss: 0.035256
Train Epoch: 8 [32000/60000 (53%)]	Loss: 0.026502
Train Epoch: 8 [38400/60000 (64%)]	Loss: 0.011809
Train Epoch: 8 [44800/60000 (75%)]	Loss: 0.171943
Train Epoch: 8 [51200/60000 (85%)]	Loss: 0.209570
Train Epoch: 8 [57600/60000 (96%)]	Loss: 0.047113
Train Epoch: 9 [0/60000 (0%)]	Loss: 0.126423
Train Epoch: 9 [6400/60000 (11%)]	Loss: 0.016720
Train Epoch: 9 [12800/60000 (21%)]	Loss: 0.210951
Train Epoch: 9 [19200/60000 (32%)]	Loss: 0.072410
Train Epoch: 9 [25600/60000 (43%)]	Loss: 0.042366
Train Epoch: 9 [32000/60000 (53%)]	Loss: 0.002912
Train Epoch: 9 [38400/60000 (64%)]	Loss: 0.074261
Train Epoch: 9 [44800/60000 (75%)]	Loss: 0.004673
Train Epoch: 9 [51200/60000 (85%)]	Loss: 0.074964
Train Epoch: 9 [57600/60000 (96%)]	Loss: 0.040360

Test set: Average loss: 0.0011, Accuracy: 9795/10000 (98%)

A Few Notes on the Code

The line below defines a two-dimensional convolutional layer:

nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')

For reference, see this blog post: https://blog.csdn.net/qq_60245590/article/details/135856418
Baidu AI also gave an explanation.
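
To see how the default arguments (stride=1, padding=0) produce the shapes annotated in Net.forward, here is a quick sanity check — just an illustrative sketch reusing the first layer of the model above, not part of the original script:

import torch
import torch.nn as nn

# 5x5 kernel, stride 1, no padding: output size = 28 - 5 + 1 = 24
conv1 = nn.Conv2d(1, 10, kernel_size=5)
x = torch.randn(64, 1, 28, 28)                  # a dummy MNIST-sized batch
print(conv1(x).shape)                           # torch.Size([64, 10, 24, 24])
print(torch.max_pool2d(conv1(x), 2, 2).shape)   # torch.Size([64, 10, 12, 12])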
The training data is downloaded on the fly by Python. Opening the data folder, there is quite a bit in there; the most important parts should be the training data and the test data. But in that case, why download a train_dataset and a test_dataset separately? I was a little puzzled by this.
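
As far as I can tell, the reason is that MNIST itself is shipped as separate training and test files — the four .gz archives visible in the download log above — and the train= flag of datasets.MNIST simply selects which split to load. A quick check (assuming the data is already in the data folder):

from torchvision import datasets

train_dataset = datasets.MNIST('data', train=True, download=True)   # the 60000-image training split
test_dataset = datasets.MNIST('data', train=False)                  # the 10000-image test split
print(len(train_dataset), len(test_dataset))                        # 60000 10000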
And we don't even have to batch the data ourselves — oh? Does MindSpore have this feature too? Can data I cook up myself be batched automatically? If so, that would be very convenient.
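
On the PyTorch side, at least, it can: DataLoader will batch anything that behaves like a Dataset, and TensorDataset wraps plain tensors for exactly this purpose. A minimal sketch with hand-made random data (the names and shapes here are just for illustration):

import torch
from torch.utils.data import TensorDataset, DataLoader

features = torch.randn(1000, 1, 28, 28)          # made-up inputs
labels = torch.randint(0, 10, (1000,))           # made-up integer labels
loader = DataLoader(TensorDataset(features, labels), batch_size=64, shuffle=True)

for data, target in loader:                      # batches arrive ready-made
    print(data.shape, target.shape)              # torch.Size([64, 1, 28, 28]) torch.Size([64])
    break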
Alright! The Honkai: Star Rail version preview stream is on today — off to play games!
