Deep Learning Week 17: Optimizer Comparison Experiment

Table of Contents
Deep Learning Week 17: Optimizer Comparison Experiment
I. Preface
II. My Environment
III. Preliminaries
1. Configure the environment
2. Import the data
2.1 Load the data
2.2 Inspect the data
2.3 Configure the dataset
2.4 Visualize the data
IV. Build the Model
V. Train the Models
VI. Model Evaluation
1. Accuracy and Loss curves
2. Model evaluation

I. Preface

  • 🍨 This article is a learning-log post for the 🔗365天深度学习训练营 (365-day deep learning training camp)
  • 🍖 Original author: K同学啊 | tutoring and custom projects available

At last, the basics-stage check-ins are essentially complete. I have learned a great deal of deep-learning fundamentals and can now use the relevant functions fairly fluently. This final post of the stage investigates how different optimizers and parameter configurations affect a model; an optimizer comparison of this kind can also be included in a paper to broaden its experimental scope.

I remember that in April this year a senior student and I published an EI conference paper in which the optimizer comparison took up a fair share of the text. Through this exercise I want to work out for myself how such a comparison is done. Since final exams are approaching, this post covers only the basics without further extensions; I plan to set aside a week later to complete it.
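
As a point of reference for what "different optimizers and parameter configurations" can mean in practice, here is a minimal sketch (assuming TensorFlow 2.x / tf.keras, which this post uses) of several optimizer configurations instantiated for a side-by-side experiment. The learning rates and momentum value are illustrative placeholders, not tuned recommendations:

import tensorflow as tf

# Illustrative optimizer configurations for a comparison experiment.
# The hyperparameter values below are placeholders, not tuned settings.
optimizers_to_compare = {
    "Adam":         tf.keras.optimizers.Adam(learning_rate=1e-3),
    "SGD":          tf.keras.optimizers.SGD(learning_rate=1e-2),
    "SGD-momentum": tf.keras.optimizers.SGD(learning_rate=1e-2, momentum=0.9),
    "RMSprop":      tf.keras.optimizers.RMSprop(learning_rate=1e-3),
}

for name, opt in optimizers_to_compare.items():
    print(name, "->", type(opt).__name__)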

II. My Environment

  • OS: Windows 10
  • Language environment: Python 3.8.0
  • IDE: PyCharm 2023.2.3
  • Deep learning framework: TensorFlow
  • GPU and memory: RTX 3060 8G

III. Preliminaries

1. Configure the environment

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")

if gpus:
    gpu0 = gpus[0]                                        # if there are multiple GPUs, use only GPU 0
    tf.config.experimental.set_memory_growth(gpu0, True)  # allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0], "GPU")

from tensorflow import keras
import matplotlib.pyplot as plt
import pandas            as pd
import numpy             as np
import warnings, os, PIL, pathlib

warnings.filterwarnings("ignore")                 # suppress warning messages

plt.rcParams['font.sans-serif']    = ['SimHei']   # display Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False        # display minus signs correctly

2. Import the data

Import all the cat and dog image data: the training-set images (train_images), training-set labels (train_labels), test-set images (test_images), and test-set labels (test_labels), in that order. The dataset comes from K同学啊.

2.1 Load the data
data_dir   = "/home/mw/input/dogcat3675/365-7-data"
data_dir   = pathlib.Path(data_dir)

image_count = len(list(data_dir.glob('*/*')))
print("Total number of images:", image_count)

Total number of images: 3400
img_height = 256
img_width  = 256
batch_size = 32

"""
For a detailed introduction to image_dataset_from_directory(), see:
https://mtyjkh.blog.csdn.net/article/details/117018789
"""
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=12,
    image_size=(img_height, img_width),
    batch_size=batch_size)

Output:

Found 3400 files belonging to 2 classes.
Using 2720 files for training.
"""
关于image_dataset_from_directory()的详细介绍可以参考文章:https://mtyjkh.blog.csdn.net/article/details/117018789
"""
val_ds = tf.keras.preprocessing.image_dataset_from_directory(data_dir,validation_split=0.2,subset="validation",seed=12,image_size=(img_height, img_width),batch_size=batch_size)
Found 3400 files belonging to 2 classes.
Using 680 files for validation.

We can print the dataset's labels via class_names; the labels correspond to the directory names in alphabetical order.

class_names = train_ds.class_names
print(class_names)

['cat', 'dog']

2.2 Inspect the data
for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break

(32, 256, 256, 3)
(32,)

That is, each batch contains 32 images of 256×256 pixels with 3 color channels (RGB), together with 32 corresponding labels.
2.3 Configure the dataset
AUTOTUNE = tf.data.AUTOTUNE

def train_preprocessing(image, label):
    return (image / 255.0, label)

train_ds = (
    train_ds.cache()
    .shuffle(1000)
    .map(train_preprocessing)       # the preprocessing function can be set here
#   .batch(batch_size)              # batch_size was already set in image_dataset_from_directory
    .prefetch(buffer_size=AUTOTUNE)
)

val_ds = (
    val_ds.cache()
    .shuffle(1000)
    .map(train_preprocessing)       # the preprocessing function can be set here
#   .batch(batch_size)              # batch_size was already set in image_dataset_from_directory
    .prefetch(buffer_size=AUTOTUNE)
)
2.4 Visualize the data
plt.figure(figsize=(10, 8))  # figure 10 wide by 8 tall
plt.suptitle("Data preview")

for images, labels in train_ds.take(1):
    for i in range(15):
        plt.subplot(4, 5, i + 1)
        plt.xticks([])
        plt.yticks([])
        plt.grid(False)
        plt.imshow(images[i])               # show the image
        plt.xlabel(class_names[labels[i]])  # show the label

plt.show()

(Figure: a 4×5 grid of 15 sample images with their class labels.)

IV. Build the Model

from tensorflow.keras.layers import Dropout, Dense, BatchNormalization
from tensorflow.keras.models import Model

def create_model(optimizer='adam'):
    # load the pre-trained VGG16 backbone
    vgg16_base_model = tf.keras.applications.vgg16.VGG16(
        weights='imagenet',
        include_top=False,
        input_shape=(img_width, img_height, 3),
        pooling='avg')

    # freeze the backbone; only the new head is trained
    for layer in vgg16_base_model.layers:
        layer.trainable = False

    X = vgg16_base_model.output
    X = Dense(170, activation='relu')(X)
    X = BatchNormalization()(X)
    X = Dropout(0.5)(X)

    output = Dense(len(class_names), activation='softmax')(X)
    vgg16_model = Model(inputs=vgg16_base_model.input, outputs=output)

    vgg16_model.compile(optimizer=optimizer,
                        loss='sparse_categorical_crossentropy',
                        metrics=['accuracy'])
    return vgg16_model

model1 = create_model(optimizer=tf.keras.optimizers.Adam())
model2 = create_model(optimizer=tf.keras.optimizers.SGD())
model2.summary()
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5
58892288/58889256 [==============================] - 3s 0us/step
Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_2 (InputLayer)         [(None, 256, 256, 3)]     0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 256, 256, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 256, 256, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 128, 128, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 128, 128, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 128, 128, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 64, 64, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 64, 64, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 64, 64, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 64, 64, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 32, 32, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 32, 32, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 32, 32, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 32, 32, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 16, 16, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 16, 16, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 16, 16, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 16, 16, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 8, 8, 512)         0         
_________________________________________________________________
global_average_pooling2d_1 ( (None, 512)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 170)               87210     
_________________________________________________________________
batch_normalization_1 (Batch (None, 170)               680       
_________________________________________________________________
dropout_1 (Dropout)          (None, 170)               0         
_________________________________________________________________
dense_3 (Dense)              (None, 2)                 342       
=================================================================
Total params: 14,802,920
Trainable params: 87,892
Non-trainable params: 14,715,028
_________________________________________________________________
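
Because create_model() takes the optimizer as a parameter, the same helper could be reused for a broader comparison than the Adam-vs-SGD pair trained below. A hypothetical extension (not run in this post) might look like this:

# Hypothetical extension of the comparison (not part of this run):
# build one model per additional optimizer, keyed by name.
extra_models = {
    "RMSprop": create_model(optimizer=tf.keras.optimizers.RMSprop()),
    "Adagrad": create_model(optimizer=tf.keras.optimizers.Adagrad()),
}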

V. Train the Models

NO_EPOCHS = 50

history_model1 = model1.fit(train_ds, epochs=NO_EPOCHS, verbose=1, validation_data=val_ds)
history_model2 = model2.fit(train_ds, epochs=NO_EPOCHS, verbose=1, validation_data=val_ds)

The first block of 50 epochs below is model1 (Adam); the second block is model2 (SGD).
Epoch 1/50
85/85 [==============================] - 40s 230ms/step - loss: 0.2183 - accuracy: 0.8948 - val_loss: 0.1916 - val_accuracy: 0.9912
Epoch 2/50
85/85 [==============================] - 15s 182ms/step - loss: 0.0372 - accuracy: 0.9901 - val_loss: 0.0799 - val_accuracy: 0.9956
Epoch 3/50
85/85 [==============================] - 16s 184ms/step - loss: 0.0213 - accuracy: 0.9941 - val_loss: 0.0456 - val_accuracy: 0.9897
Epoch 4/50
85/85 [==============================] - 16s 185ms/step - loss: 0.0191 - accuracy: 0.9916 - val_loss: 0.0220 - val_accuracy: 0.9941
Epoch 5/50
85/85 [==============================] - 16s 187ms/step - loss: 0.0177 - accuracy: 0.9938 - val_loss: 0.0319 - val_accuracy: 0.9868
Epoch 6/50
85/85 [==============================] - 16s 188ms/step - loss: 0.0135 - accuracy: 0.9952 - val_loss: 0.0139 - val_accuracy: 0.9941
Epoch 7/50
85/85 [==============================] - 16s 189ms/step - loss: 0.0108 - accuracy: 0.9965 - val_loss: 0.0343 - val_accuracy: 0.9868
Epoch 8/50
85/85 [==============================] - 16s 190ms/step - loss: 0.0088 - accuracy: 0.9979 - val_loss: 0.0919 - val_accuracy: 0.9676
Epoch 9/50
85/85 [==============================] - 16s 191ms/step - loss: 0.0141 - accuracy: 0.9958 - val_loss: 0.0476 - val_accuracy: 0.9824
Epoch 10/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0093 - accuracy: 0.9985 - val_loss: 0.0067 - val_accuracy: 0.9985
Epoch 11/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0055 - accuracy: 0.9995 - val_loss: 0.0160 - val_accuracy: 0.9941
Epoch 12/50
85/85 [==============================] - 16s 194ms/step - loss: 0.0085 - accuracy: 0.9986 - val_loss: 0.0076 - val_accuracy: 0.9956
Epoch 13/50
85/85 [==============================] - 17s 195ms/step - loss: 0.0073 - accuracy: 0.9982 - val_loss: 0.0131 - val_accuracy: 0.9971
Epoch 14/50
85/85 [==============================] - 17s 195ms/step - loss: 0.0060 - accuracy: 0.9999 - val_loss: 0.0126 - val_accuracy: 0.9941
Epoch 15/50
85/85 [==============================] - 17s 196ms/step - loss: 0.0031 - accuracy: 0.9993 - val_loss: 0.0053 - val_accuracy: 0.9985
Epoch 16/50
85/85 [==============================] - 17s 196ms/step - loss: 0.0031 - accuracy: 0.9999 - val_loss: 0.0108 - val_accuracy: 0.9941
Epoch 17/50
85/85 [==============================] - 17s 196ms/step - loss: 0.0038 - accuracy: 0.9989 - val_loss: 0.0054 - val_accuracy: 0.9985
Epoch 18/50
85/85 [==============================] - 17s 195ms/step - loss: 0.0014 - accuracy: 1.0000 - val_loss: 0.0048 - val_accuracy: 0.9985
Epoch 19/50
85/85 [==============================] - 17s 195ms/step - loss: 0.0017 - accuracy: 1.0000 - val_loss: 0.0060 - val_accuracy: 0.9985
Epoch 20/50
85/85 [==============================] - 17s 195ms/step - loss: 0.0023 - accuracy: 0.9988 - val_loss: 0.0045 - val_accuracy: 0.9985
Epoch 21/50
85/85 [==============================] - 16s 194ms/step - loss: 0.0066 - accuracy: 0.9980 - val_loss: 0.0048 - val_accuracy: 1.0000
Epoch 22/50
85/85 [==============================] - 16s 194ms/step - loss: 0.0019 - accuracy: 0.9999 - val_loss: 0.0212 - val_accuracy: 0.9941
Epoch 23/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0038 - accuracy: 0.9990 - val_loss: 0.0049 - val_accuracy: 0.9985
Epoch 24/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0037 - accuracy: 0.9987 - val_loss: 0.0054 - val_accuracy: 0.9985
Epoch 25/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0017 - accuracy: 1.0000 - val_loss: 0.0839 - val_accuracy: 0.9721
Epoch 26/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0037 - accuracy: 0.9983 - val_loss: 0.0084 - val_accuracy: 0.9985
Epoch 27/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0022 - accuracy: 1.0000 - val_loss: 0.0078 - val_accuracy: 0.9985
Epoch 28/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0030 - accuracy: 0.9992 - val_loss: 0.0044 - val_accuracy: 0.9985
Epoch 29/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0011 - accuracy: 0.9999 - val_loss: 0.0047 - val_accuracy: 0.9971
Epoch 30/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0023 - accuracy: 1.0000 - val_loss: 0.0082 - val_accuracy: 0.9985
Epoch 31/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0015 - accuracy: 0.9998 - val_loss: 0.0099 - val_accuracy: 0.9956
Epoch 32/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0014 - accuracy: 1.0000 - val_loss: 0.0058 - val_accuracy: 0.9985
Epoch 33/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0012 - accuracy: 1.0000 - val_loss: 0.0067 - val_accuracy: 0.9985
Epoch 34/50
85/85 [==============================] - 16s 193ms/step - loss: 9.7591e-04 - accuracy: 1.0000 - val_loss: 0.0058 - val_accuracy: 0.9985
Epoch 35/50
85/85 [==============================] - 16s 193ms/step - loss: 6.5250e-04 - accuracy: 1.0000 - val_loss: 0.0106 - val_accuracy: 0.9971
Epoch 36/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0016 - accuracy: 0.9997 - val_loss: 0.0140 - val_accuracy: 0.9926
Epoch 37/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0070 - accuracy: 0.9960 - val_loss: 0.0359 - val_accuracy: 0.9868
Epoch 38/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0023 - accuracy: 0.9996 - val_loss: 0.0309 - val_accuracy: 0.9897
Epoch 39/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0023 - accuracy: 0.9991 - val_loss: 0.0062 - val_accuracy: 0.9985
Epoch 40/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0014 - accuracy: 0.9999 - val_loss: 0.0133 - val_accuracy: 0.9956
Epoch 41/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0028 - accuracy: 0.9986 - val_loss: 0.1411 - val_accuracy: 0.9603
Epoch 42/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0036 - accuracy: 0.9991 - val_loss: 0.0233 - val_accuracy: 0.9941
Epoch 43/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0025 - accuracy: 0.9995 - val_loss: 0.0018 - val_accuracy: 1.0000
Epoch 44/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0015 - accuracy: 1.0000 - val_loss: 0.0043 - val_accuracy: 0.9985
Epoch 45/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0040 - accuracy: 0.9988 - val_loss: 0.0049 - val_accuracy: 0.9985
Epoch 46/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0011 - accuracy: 1.0000 - val_loss: 0.0034 - val_accuracy: 0.9985
Epoch 47/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0012 - accuracy: 1.0000 - val_loss: 0.0055 - val_accuracy: 0.9971
Epoch 48/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0023 - accuracy: 0.9989 - val_loss: 0.0639 - val_accuracy: 0.9824
Epoch 49/50
85/85 [==============================] - 16s 193ms/step - loss: 8.3466e-04 - accuracy: 1.0000 - val_loss: 0.0049 - val_accuracy: 0.9985
Epoch 50/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0021 - accuracy: 0.9993 - val_loss: 0.0163 - val_accuracy: 0.9926
Epoch 1/50
85/85 [==============================] - 17s 194ms/step - loss: 0.3235 - accuracy: 0.8649 - val_loss: 0.4082 - val_accuracy: 0.8088
Epoch 2/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0971 - accuracy: 0.9641 - val_loss: 0.2085 - val_accuracy: 0.9882
Epoch 3/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0705 - accuracy: 0.9773 - val_loss: 0.1053 - val_accuracy: 0.9926
Epoch 4/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0675 - accuracy: 0.9780 - val_loss: 0.0565 - val_accuracy: 0.9926
Epoch 5/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0510 - accuracy: 0.9841 - val_loss: 0.0317 - val_accuracy: 0.9941
Epoch 6/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0466 - accuracy: 0.9802 - val_loss: 0.0229 - val_accuracy: 0.9956
Epoch 7/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0424 - accuracy: 0.9869 - val_loss: 0.0160 - val_accuracy: 0.9985
Epoch 8/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0488 - accuracy: 0.9843 - val_loss: 0.0152 - val_accuracy: 0.9956
Epoch 9/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0405 - accuracy: 0.9892 - val_loss: 0.0134 - val_accuracy: 0.9971
Epoch 10/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0398 - accuracy: 0.9875 - val_loss: 0.0128 - val_accuracy: 0.9956
Epoch 11/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0387 - accuracy: 0.9856 - val_loss: 0.0139 - val_accuracy: 0.9956
Epoch 12/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0334 - accuracy: 0.9900 - val_loss: 0.0155 - val_accuracy: 0.9956
Epoch 13/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0321 - accuracy: 0.9915 - val_loss: 0.0119 - val_accuracy: 0.9956
Epoch 14/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0358 - accuracy: 0.9878 - val_loss: 0.0116 - val_accuracy: 0.9971
Epoch 15/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0270 - accuracy: 0.9898 - val_loss: 0.0098 - val_accuracy: 0.9985
Epoch 16/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0233 - accuracy: 0.9936 - val_loss: 0.0102 - val_accuracy: 0.9956
Epoch 17/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0274 - accuracy: 0.9915 - val_loss: 0.0106 - val_accuracy: 0.9971
Epoch 18/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0233 - accuracy: 0.9942 - val_loss: 0.0090 - val_accuracy: 0.9971
Epoch 19/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0284 - accuracy: 0.9894 - val_loss: 0.0111 - val_accuracy: 0.9971
Epoch 20/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0232 - accuracy: 0.9934 - val_loss: 0.0098 - val_accuracy: 0.9956
Epoch 21/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0262 - accuracy: 0.9928 - val_loss: 0.0108 - val_accuracy: 0.9941
Epoch 22/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0239 - accuracy: 0.9940 - val_loss: 0.0112 - val_accuracy: 0.9941
Epoch 23/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0251 - accuracy: 0.9927 - val_loss: 0.0089 - val_accuracy: 0.9956
Epoch 24/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0251 - accuracy: 0.9910 - val_loss: 0.0086 - val_accuracy: 0.9956
Epoch 25/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0254 - accuracy: 0.9904 - val_loss: 0.0102 - val_accuracy: 0.9956
Epoch 26/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0245 - accuracy: 0.9916 - val_loss: 0.0084 - val_accuracy: 0.9985
Epoch 27/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0226 - accuracy: 0.9932 - val_loss: 0.0086 - val_accuracy: 0.9985
Epoch 28/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0289 - accuracy: 0.9905 - val_loss: 0.0088 - val_accuracy: 0.9971
Epoch 29/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0219 - accuracy: 0.9928 - val_loss: 0.0091 - val_accuracy: 0.9956
Epoch 30/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0215 - accuracy: 0.9943 - val_loss: 0.0077 - val_accuracy: 0.9985
Epoch 31/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0178 - accuracy: 0.9952 - val_loss: 0.0074 - val_accuracy: 0.9971
Epoch 32/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0231 - accuracy: 0.9929 - val_loss: 0.0117 - val_accuracy: 0.9956
Epoch 33/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0183 - accuracy: 0.9962 - val_loss: 0.0078 - val_accuracy: 0.9971
Epoch 34/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0186 - accuracy: 0.9950 - val_loss: 0.0076 - val_accuracy: 0.9971
Epoch 35/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0185 - accuracy: 0.9949 - val_loss: 0.0089 - val_accuracy: 0.9956
Epoch 36/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0199 - accuracy: 0.9924 - val_loss: 0.0080 - val_accuracy: 0.9971
Epoch 37/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0165 - accuracy: 0.9960 - val_loss: 0.0076 - val_accuracy: 0.9971
Epoch 38/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0180 - accuracy: 0.9945 - val_loss: 0.0083 - val_accuracy: 0.9985
Epoch 39/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0219 - accuracy: 0.9904 - val_loss: 0.0087 - val_accuracy: 0.9971
Epoch 40/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0179 - accuracy: 0.9971 - val_loss: 0.0086 - val_accuracy: 0.9956
Epoch 41/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0166 - accuracy: 0.9937 - val_loss: 0.0076 - val_accuracy: 0.9971
Epoch 42/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0159 - accuracy: 0.9946 - val_loss: 0.0074 - val_accuracy: 0.9985
Epoch 43/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0161 - accuracy: 0.9942 - val_loss: 0.0079 - val_accuracy: 0.9971
Epoch 44/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0173 - accuracy: 0.9957 - val_loss: 0.0073 - val_accuracy: 0.9971
Epoch 45/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0149 - accuracy: 0.9965 - val_loss: 0.0069 - val_accuracy: 0.9985
Epoch 46/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0157 - accuracy: 0.9963 - val_loss: 0.0076 - val_accuracy: 0.9971
Epoch 47/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0153 - accuracy: 0.9959 - val_loss: 0.0074 - val_accuracy: 0.9971
Epoch 48/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0162 - accuracy: 0.9957 - val_loss: 0.0068 - val_accuracy: 0.9985
Epoch 49/50
85/85 [==============================] - 16s 193ms/step - loss: 0.0132 - accuracy: 0.9955 - val_loss: 0.0079 - val_accuracy: 0.9971
Epoch 50/50
85/85 [==============================] - 16s 192ms/step - loss: 0.0131 - accuracy: 0.9954 - val_loss: 0.0085 - val_accuracy: 0.9971
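
Beyond accuracy, training speed is another axis worth reporting when comparing optimizers; in the logs above the per-step times of the two models are broadly similar, as expected since the optimizer update is cheap relative to the VGG16 forward pass. A small sketch of my own (an assumption, not part of the original run) that wraps fit() with wall-clock timing:

import time

def timed_fit(model, name):
    # Train the model and report total wall-clock time (illustrative helper).
    start = time.time()
    history = model.fit(train_ds, epochs=NO_EPOCHS, verbose=0, validation_data=val_ds)
    print(f"{name}: {time.time() - start:.1f} s for {NO_EPOCHS} epochs")
    return history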

VI. Model Evaluation

1. Accuracy and Loss curves

from matplotlib.ticker import MultipleLocator

plt.rcParams['savefig.dpi'] = 300  # saved-image resolution
plt.rcParams['figure.dpi']  = 300  # display resolution

acc1     = history_model1.history['accuracy']
acc2     = history_model2.history['accuracy']
val_acc1 = history_model1.history['val_accuracy']
val_acc2 = history_model2.history['val_accuracy']

loss1     = history_model1.history['loss']
loss2     = history_model2.history['loss']
val_loss1 = history_model1.history['val_loss']
val_loss2 = history_model2.history['val_loss']

epochs_range = range(len(acc1))

plt.figure(figsize=(16, 4))

plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc1,     label='Training Accuracy-Adam')
plt.plot(epochs_range, acc2,     label='Training Accuracy-SGD')
plt.plot(epochs_range, val_acc1, label='Validation Accuracy-Adam')
plt.plot(epochs_range, val_acc2, label='Validation Accuracy-SGD')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
# set the tick interval: one tick per epoch on the x-axis
ax = plt.gca()
ax.xaxis.set_major_locator(MultipleLocator(1))

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss1,     label='Training Loss-Adam')
plt.plot(epochs_range, loss2,     label='Training Loss-SGD')
plt.plot(epochs_range, val_loss1, label='Validation Loss-Adam')
plt.plot(epochs_range, val_loss2, label='Validation Loss-SGD')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
# set the tick interval: one tick per epoch on the x-axis
ax = plt.gca()
ax.xaxis.set_major_locator(MultipleLocator(1))

plt.show()

(Figure: side-by-side accuracy and loss curves for the Adam and SGD models over 50 epochs.)

2. Model evaluation

def test_accuracy_report(model):
    score = model.evaluate(val_ds, verbose=0)
    print('Loss function: %s, accuracy:' % score[0], score[1])

test_accuracy_report(model2)
Loss function: 0.008488086983561516, accuracy: 0.9970588088035583
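
The same helper can of course be called on model1 as well, so the final validation scores of Adam and SGD can be reported side by side:

test_accuracy_report(model1)  # Adam model
test_accuracy_report(model2)  # SGD model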
