TensorFlow Case Study 4: Face Recognition (Loss Function Selection, Using the VGG16 Model, and an Improved Implementation)

  • 🍨 This article is a learning-log post from the 🔗365天深度学习训练营 (365-day deep learning training camp)
  • 🍖 Original author: K同学啊

Preface

  • Counting the earlier PyTorch version, this model has taken quite a bit of time, yet the results never reached the ideal: the gap between training and validation accuracy is far too large.
  • I suspected the model might not be complex enough, but stacking more layers causes model degradation instead; a possible way out is to try a ResNet, which I will cover in a later update.
  • This revision of VGG16 changes three things, explained in detail below.
  • Feel free to bookmark and follow; this series will be updated continuously.

1. Concepts and API Notes

1. Model Overview and Improvements

The VGG16 Model

VGG16 is a very standard baseline model, built from 13 convolutional layers and 3 fully connected layers, as shown below:

(Figure: the VGG16 architecture)

Modifications to VGG16 in This Experiment

  • Freeze the 13 convolutional layers and retrain only the classifier head
  • Insert a BatchNormalization layer and a GlobalAveragePooling layer before the dense layers; the pooling layer reduces dimensionality, which matters because VGG16 is computationally expensive
  • Add Dropout layers between the dense layers
  • Modified code:
# Load the official VGG16 model (convolutional base only)
vgg16_model = tf.keras.applications.VGG16(include_top=False, weights='imagenet', input_shape=(256, 256, 3))

# Freeze the convolutional weights
for layer in vgg16_model.layers:
    layer.trainable = False

# Take the output of the convolutional base
x = vgg16_model.output
# Add a BatchNormalization layer
x = layers.BatchNormalization()(x)
# Global average pooling reduces each feature map to one value, cutting computation
x = layers.GlobalAveragePooling2D()(x)
# Fully connected layers with Dropout
x = layers.Dense(1024, activation='relu')(x)
x = layers.Dropout(0.5)(x)
x = layers.Dense(512, activation='relu')(x)
x = layers.Dropout(0.5)(x)
predict = layers.Dense(len(classnames))(x)  # logits; the loss is configured with from_logits=True

# Build the model
model = models.Model(inputs=vgg16_model.input, outputs=predict)
model.summary()

Results

The best result was loss: 0.0505 - accuracy: 0.9847 - val_loss: 3.7758 - val_accuracy: 0.4750. To push accuracy further, the simplest route is probably to combine the network with ResNet; I will try that later.

2. API Notes

Loss Functions

Loss functions in detail (choose one to match the label_mode parameter used when loading the data):

1. binary_crossentropy (log loss)

The loss paired with a sigmoid output; used for binary classification problems.

2. categorical_crossentropy (multi-class log loss)

The loss paired with a softmax output. Use categorical_crossentropy when the labels are one-hot encoded; for plain integer labels, use sparse_categorical_crossentropy instead.
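To make the distinction concrete, here is a small NumPy sketch (not the Keras implementation) of what the two multi-class variants compute; they produce the same value and differ only in the label format they accept:

```python
import numpy as np

def categorical_ce(one_hot, probs):
    # cross-entropy against a one-hot label vector (label_mode='categorical')
    return -np.sum(one_hot * np.log(probs))

def sparse_categorical_ce(class_index, probs):
    # the same loss, but the label is a plain integer index (label_mode='int')
    return -np.log(probs[class_index])

probs = np.array([0.7, 0.2, 0.1])    # softmax output over 3 classes
one_hot = np.array([1.0, 0.0, 0.0])  # one-hot label for class 0

print(categorical_ce(one_hot, probs))   # -ln(0.7), about 0.357
print(sparse_categorical_ce(0, probs))  # same value
```

In this post the data is loaded with label_mode='categorical', so CategoricalCrossentropy is the matching choice; with label_mode='int' you would switch to SparseCategoricalCrossentropy.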

Loading VGG16 in TensorFlow

1. Load VGG16 including the top fully connected layers
from tensorflow.keras.applications import VGG16

# Load VGG16 with the top classifier, using ImageNet pretrained weights
model = VGG16(include_top=True, weights='imagenet', input_shape=(224, 224, 3))
model.summary()
2. Load VGG16 without the top fully connected layers
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

# Load VGG16 without the top classifier, using ImageNet pretrained weights
base_model = VGG16(include_top=False, weights='imagenet', input_shape=(224, 224, 3))

# Freeze the convolutional base (optional)
for layer in base_model.layers:
    layer.trainable = False

# Take the output of the convolutional base
x = base_model.output

# Add new fully connected layers -- this is the part to adapt to your own task
x = Flatten()(x)
x = Dense(256, activation='relu')(x)
predictions = Dense(2, activation='softmax')(x)  # 2 output classes

# Build the new model
model = Model(inputs=base_model.input, outputs=predictions)
model.summary()
3. Use a custom input tensor
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Input

# Define the input tensor
input_tensor = Input(shape=(224, 224, 3))

# Load VGG16 on top of the custom input tensor
model = VGG16(include_top=True, weights='imagenet', input_tensor=input_tensor)
model.summary()
4. Without pretrained weights
from tensorflow.keras.applications import VGG16

# Load the VGG16 architecture with randomly initialized weights
model = VGG16(include_top=True, weights=None, input_shape=(224, 224, 3))
model.summary()
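The examples above either freeze the whole base or nothing; a common middle ground is to freeze only the first N layers and fine-tune the rest. A minimal sketch of that idea (freeze_up_to is my own helper, not a Keras API; the stand-in class only mimics the trainable attribute that real Keras layers carry, so in practice you would pass model.layers):

```python
class FakeLayer:
    """Stand-in for a Keras layer; real code would iterate over model.layers."""
    def __init__(self, name):
        self.name = name
        self.trainable = True

def freeze_up_to(layers, n):
    # Freeze the first n layers; leave the rest trainable for fine-tuning
    for i, layer in enumerate(layers):
        layer.trainable = (i >= n)

stack = [FakeLayer(f"block{i}") for i in range(5)]
freeze_up_to(stack, 3)
print([l.trainable for l in stack])  # [False, False, False, True, True]
```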

2. Solving the Face Recognition Task

1. Data Preparation

1. Import libraries

import tensorflow as tf
from tensorflow.keras import datasets, models, layers
import numpy as np

# List all physical GPUs
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    gpu0 = gpus[0]                                        # if there are several, take the first
    tf.config.experimental.set_memory_growth(gpu0, True)  # allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0], "GPU")          # make only the first GPU visible

gpus  # show the detected GPUs
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

2. Inspect the data directory

In the data folder, each person's images sit in their own subfolder.

import os, PIL, pathlib

data_dir = './data/'
data_dir = pathlib.Path(data_dir)

# List the names of all subfolders under the data directory
classnames = os.listdir(data_dir)
classnames
['Angelina Jolie','Brad Pitt','Denzel Washington','Hugh Jackman','Jennifer Lawrence','Johnny Depp','Kate Winslet','Leonardo DiCaprio','Megan Fox','Natalie Portman','Nicole Kidman','Robert Downey Jr','Sandra Bullock','Scarlett Johansson','Tom Cruise','Tom Hanks','Will Smith']

3. Split the data

batch_size = 32

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    './data/',
    batch_size=batch_size,
    shuffle=True,
    validation_split=0.2,      # 20% validation, 80% training
    subset='training',
    seed=42,
    label_mode='categorical',  # one-hot encode the labels
    image_size=(256, 256)
)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    './data/',
    batch_size=batch_size,
    shuffle=True,
    validation_split=0.2,
    seed=42,
    subset='validation',
    image_size=(256, 256),
    label_mode='categorical'   # one-hot encode the labels
)
Found 1800 files belonging to 17 classes.
Using 1440 files for training.
Found 1800 files belonging to 17 classes.
Using 360 files for validation.
# Print the shape of one batch
for image, label in train_ds.take(1):
    print(image.shape)
    print(label.shape)
(32, 256, 256, 3)
(32, 17)
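A quick sanity check on these numbers (plain arithmetic): 1800 files at a 0.2 validation split give the 1440/360 split reported above, and 1440 training images in batches of 32 give the "45/45" steps per epoch seen in the training log:

```python
import math

total_files = 1800
val_split = 0.2
batch_size = 32

train_files = int(total_files * (1 - val_split))
val_files = int(total_files * val_split)
print(train_files, val_files)  # 1440 360

steps_per_epoch = math.ceil(train_files / batch_size)
print(steps_per_epoch)         # 45
```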

4. Visualize the data

# Show a batch of sample images
import matplotlib.pyplot as plt

plt.figure(figsize=(20, 10))
for images, labels in train_ds.take(1):
    for i in range(20):
        plt.subplot(5, 10, i + 1)
        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(classnames[np.argmax(labels[i], axis=0)])
        plt.axis('off')
plt.show()


(Figure: a sample batch from the training set)

2. Building the VGG16 Model

Modifications to VGG16:

  • Freeze the 13 convolutional layers and retrain only the classifier head
  • Insert a BatchNormalization layer and a GlobalAveragePooling layer before the dense layers; the pooling layer reduces dimensionality, which matters because VGG16 is computationally expensive
  • Add Dropout layers between the dense layers
# Load the official VGG16 model (convolutional base only)
vgg16_model = tf.keras.applications.VGG16(include_top=False, weights='imagenet', input_shape=(256, 256, 3))

# Freeze the convolutional weights
for layer in vgg16_model.layers:
    layer.trainable = False

# Take the output of the convolutional base
x = vgg16_model.output
# Add a BatchNormalization layer
x = layers.BatchNormalization()(x)
# Global average pooling reduces each feature map to one value, cutting computation
x = layers.GlobalAveragePooling2D()(x)
# Fully connected layers with Dropout
x = layers.Dense(1024, activation='relu')(x)
x = layers.Dropout(0.5)(x)
x = layers.Dense(512, activation='relu')(x)
x = layers.Dropout(0.5)(x)
predict = layers.Dense(len(classnames))(x)  # logits; the loss is configured with from_logits=True

# Build the model
model = models.Model(inputs=vgg16_model.input, outputs=predict)
model.summary()
Model: "model"
_________________________________________________________________
 Layer (type)                              Output Shape           Param #
=========================================================================
 input_1 (InputLayer)                      [(None, 256, 256, 3)]  0
 block1_conv1 (Conv2D)                     (None, 256, 256, 64)   1792
 block1_conv2 (Conv2D)                     (None, 256, 256, 64)   36928
 block1_pool (MaxPooling2D)                (None, 128, 128, 64)   0
 block2_conv1 (Conv2D)                     (None, 128, 128, 128)  73856
 block2_conv2 (Conv2D)                     (None, 128, 128, 128)  147584
 block2_pool (MaxPooling2D)                (None, 64, 64, 128)    0
 block3_conv1 (Conv2D)                     (None, 64, 64, 256)    295168
 block3_conv2 (Conv2D)                     (None, 64, 64, 256)    590080
 block3_conv3 (Conv2D)                     (None, 64, 64, 256)    590080
 block3_pool (MaxPooling2D)                (None, 32, 32, 256)    0
 block4_conv1 (Conv2D)                     (None, 32, 32, 512)    1180160
 block4_conv2 (Conv2D)                     (None, 32, 32, 512)    2359808
 block4_conv3 (Conv2D)                     (None, 32, 32, 512)    2359808
 block4_pool (MaxPooling2D)                (None, 16, 16, 512)    0
 block5_conv1 (Conv2D)                     (None, 16, 16, 512)    2359808
 block5_conv2 (Conv2D)                     (None, 16, 16, 512)    2359808
 block5_conv3 (Conv2D)                     (None, 16, 16, 512)    2359808
 block5_pool (MaxPooling2D)                (None, 8, 8, 512)      0
 batch_normalization (BatchNormalization)  (None, 8, 8, 512)      2048
 global_average_pooling2d                  (None, 512)            0
 (GlobalAveragePooling2D)
 dense (Dense)                             (None, 1024)           525312
 dropout (Dropout)                         (None, 1024)           0
 dense_1 (Dense)                           (None, 512)            524800
 dropout_1 (Dropout)                       (None, 512)            0
 dense_2 (Dense)                           (None, 17)             8721
=========================================================================
Total params: 15,775,569
Trainable params: 15,774,545
Non-trainable params: 1,024
_________________________________________________________________
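Two things in this summary are worth checking by hand. First, global average pooling is what keeps the head cheap: flattening the final (8, 8, 512) feature map instead would multiply the first Dense layer's parameters by roughly 64. Second, note that about 15.77M of the 15.78M parameters are reported as trainable, which suggests the freezing loop did not take effect in the run that produced this summary (a trainble/trainable typo in the loop would do exactly that, since assigning to a misspelled attribute freezes nothing). The arithmetic below is plain Python, checked against the printed numbers:

```python
# Dense parameters = input_features * units + units (bias)
gap_dense  = 512 * 1024 + 1024             # head after GlobalAveragePooling2D (512 features)
flat_dense = (8 * 8 * 512) * 1024 + 1024   # hypothetical head after Flatten (32768 features)
print(gap_dense, flat_dense)               # 525312 33555456 -- GAP is ~64x smaller

# BatchNormalization stores gamma, beta, moving mean and moving variance per channel
print(4 * 512)                             # 2048, matching the batch_normalization row

# Final logits layer over the 17 classes
print(512 * 17 + 17)                       # 8721, matching the dense_2 row
```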

3. Training the Model

1. Set the hyperparameters

# Initial learning rate
learning_rate = 1e-3

# Exponentially decaying learning-rate schedule
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    learning_rate,
    decay_steps=60,    # decay once every 60 steps
    decay_rate=0.96,   # multiply the rate by 0.96 at each decay
    staircase=True
)

# Optimizer -- pass the schedule so the decay is actually applied
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)

# Compile: optimizer, loss and metrics
model.compile(optimizer=optimizer,
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
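For reference, with staircase=True the schedule multiplies the rate by decay_rate once every decay_steps optimizer steps; the decayed value can be reproduced in plain Python:

```python
def decayed_lr(step, initial=1e-3, decay_steps=60, decay_rate=0.96):
    # staircase exponential decay: one 0.96x drop every 60 steps
    return initial * decay_rate ** (step // decay_steps)

print(decayed_lr(0))    # 0.001
print(decayed_lr(59))   # still 0.001 -- the first "stair" lasts 60 steps
print(decayed_lr(60))   # ~0.00096 after the first drop
print(decayed_lr(600))  # ~0.000665 after ten drops
```

Note that the decay only takes effect if lr_schedule itself (not a constant 1e-3) is passed to the Adam constructor; with 45 steps per epoch, one decay happens roughly every 1.3 epochs.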

2. Train the model

from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

epochs = 100

# Checkpoint the best model seen during training
checkpointer = ModelCheckpoint('best_model.h5',
                               monitor='val_accuracy',  # metric to monitor
                               verbose=1,
                               save_best_only=True,
                               save_weights_only=True)

# Early stopping: stop if val_accuracy fails to improve by 0.01 within 20 epochs
earlystopper = EarlyStopping(monitor='val_accuracy',
                             verbose=1,
                             patience=20,
                             min_delta=0.01)

history = model.fit(train_ds,
                    validation_data=val_ds,
                    epochs=epochs,
                    callbacks=[checkpointer, earlystopper])
Epoch 1/100
2024-11-01 19:31:48.093783: I tensorflow/stream_executor/cuda/cuda_dnn.cc:384] Loaded cuDNN version 8101
2024-11-01 19:31:50.361608: I tensorflow/stream_executor/cuda/cuda_blas.cc:1786] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.
45/45 [==============================] - ETA: 0s - loss: 2.8548 - accuracy: 0.0826
Epoch 1: val_accuracy improved from -inf to 0.13056, saving model to best_model.h5
45/45 [==============================] - 14s 205ms/step - loss: 2.8548 - accuracy: 0.0826 - val_loss: 8.0561 - val_accuracy: 0.1306
Epoch 2/100
45/45 [==============================] - ETA: 0s - loss: 2.7271 - accuracy: 0.1181
Epoch 2: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 142ms/step - loss: 2.7271 - accuracy: 0.1181 - val_loss: 3.7047 - val_accuracy: 0.0639
Epoch 3/100
45/45 [==============================] - ETA: 0s - loss: 2.6583 - accuracy: 0.1354
Epoch 3: val_accuracy did not improve from 0.13056
45/45 [==============================] - 7s 144ms/step - loss: 2.6583 - accuracy: 0.1354 - val_loss: 8.0687 - val_accuracy: 0.0806
Epoch 4/100
45/45 [==============================] - ETA: 0s - loss: 2.5833 - accuracy: 0.1444
Epoch 4: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 143ms/step - loss: 2.5833 - accuracy: 0.1444 - val_loss: 4.7184 - val_accuracy: 0.1000
Epoch 5/100
45/45 [==============================] - ETA: 0s - loss: 2.5115 - accuracy: 0.1576
Epoch 5: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 142ms/step - loss: 2.5115 - accuracy: 0.1576 - val_loss: 61.5911 - val_accuracy: 0.0639
Epoch 6/100
45/45 [==============================] - ETA: 0s - loss: 2.4402 - accuracy: 0.1674
Epoch 6: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 142ms/step - loss: 2.4402 - accuracy: 0.1674 - val_loss: 4.6790 - val_accuracy: 0.0944
Epoch 7/100
45/45 [==============================] - ETA: 0s - loss: 2.3911 - accuracy: 0.1951
Epoch 7: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 143ms/step - loss: 2.3911 - accuracy: 0.1951 - val_loss: 2.7717 - val_accuracy: 0.1028
Epoch 8/100
45/45 [==============================] - ETA: 0s - loss: 2.3331 - accuracy: 0.1931
Epoch 8: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 144ms/step - loss: 2.3331 - accuracy: 0.1931 - val_loss: 8.2605 - val_accuracy: 0.0639
Epoch 9/100
45/45 [==============================] - ETA: 0s - loss: 2.2922 - accuracy: 0.2021
Epoch 9: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 143ms/step - loss: 2.2922 - accuracy: 0.2021 - val_loss: 51.5976 - val_accuracy: 0.0306
Epoch 10/100
45/45 [==============================] - ETA: 0s - loss: 2.2182 - accuracy: 0.2313
Epoch 10: val_accuracy did not improve from 0.13056
45/45 [==============================] - 6s 143ms/step - loss: 2.2182 - accuracy: 0.2313 - val_loss: 4.3942 - val_accuracy: 0.0611
Epoch 11/100
45/45 [==============================] - ETA: 0s - loss: 2.2049 - accuracy: 0.2361
Epoch 11: val_accuracy improved from 0.13056 to 0.17778, saving model to best_model.h5
45/45 [==============================] - 7s 145ms/step - loss: 2.2049 - accuracy: 0.2361 - val_loss: 2.4072 - val_accuracy: 0.1778
Epoch 12/100
45/45 [==============================] - ETA: 0s - loss: 2.1242 - accuracy: 0.2576
Epoch 12: val_accuracy improved from 0.17778 to 0.18056, saving model to best_model.h5
45/45 [==============================] - 7s 145ms/step - loss: 2.1242 - accuracy: 0.2576 - val_loss: 2.6218 - val_accuracy: 0.1806
Epoch 13/100
45/45 [==============================] - ETA: 0s - loss: 2.0634 - accuracy: 0.2639
Epoch 13: val_accuracy did not improve from 0.18056
45/45 [==============================] - 6s 142ms/step - loss: 2.0634 - accuracy: 0.2639 - val_loss: 14.2102 - val_accuracy: 0.1556
Epoch 14/100
45/45 [==============================] - ETA: 0s - loss: 2.0379 - accuracy: 0.2861
Epoch 14: val_accuracy did not improve from 0.18056
45/45 [==============================] - 6s 143ms/step - loss: 2.0379 - accuracy: 0.2861 - val_loss: 931.4739 - val_accuracy: 0.1556
Epoch 15/100
45/45 [==============================] - ETA: 0s - loss: 1.9782 - accuracy: 0.3063
Epoch 15: val_accuracy improved from 0.18056 to 0.21667, saving model to best_model.h5
45/45 [==============================] - 7s 144ms/step - loss: 1.9782 - accuracy: 0.3063 - val_loss: 2.3025 - val_accuracy: 0.2167
Epoch 16/100
45/45 [==============================] - ETA: 0s - loss: 1.9299 - accuracy: 0.3306
Epoch 16: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 143ms/step - loss: 1.9299 - accuracy: 0.3306 - val_loss: 2.2587 - val_accuracy: 0.2000
Epoch 17/100
45/45 [==============================] - ETA: 0s - loss: 1.8289 - accuracy: 0.3590
Epoch 17: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 143ms/step - loss: 1.8289 - accuracy: 0.3590 - val_loss: 2.5047 - val_accuracy: 0.1722
Epoch 18/100
45/45 [==============================] - ETA: 0s - loss: 1.7912 - accuracy: 0.3694
Epoch 18: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 142ms/step - loss: 1.7912 - accuracy: 0.3694 - val_loss: 3.1102 - val_accuracy: 0.1722
Epoch 19/100
45/45 [==============================] - ETA: 0s - loss: 1.7762 - accuracy: 0.3764
Epoch 19: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 142ms/step - loss: 1.7762 - accuracy: 0.3764 - val_loss: 2.7225 - val_accuracy: 0.2083
Epoch 20/100
45/45 [==============================] - ETA: 0s - loss: 1.7182 - accuracy: 0.3979
Epoch 20: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 143ms/step - loss: 1.7182 - accuracy: 0.3979 - val_loss: 3.4486 - val_accuracy: 0.1528
Epoch 21/100
45/45 [==============================] - ETA: 0s - loss: 1.6341 - accuracy: 0.4208
Epoch 21: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 143ms/step - loss: 1.6341 - accuracy: 0.4208 - val_loss: 2.7709 - val_accuracy: 0.1806
Epoch 22/100
45/45 [==============================] - ETA: 0s - loss: 1.5667 - accuracy: 0.4486
Epoch 22: val_accuracy did not improve from 0.21667
45/45 [==============================] - 6s 144ms/step - loss: 1.5667 - accuracy: 0.4486 - val_loss: 4.2764 - val_accuracy: 0.1583
Epoch 23/100
45/45 [==============================] - ETA: 0s - loss: 1.4579 - accuracy: 0.4875
Epoch 23: val_accuracy improved from 0.21667 to 0.26111, saving model to best_model.h5
45/45 [==============================] - 7s 147ms/step - loss: 1.4579 - accuracy: 0.4875 - val_loss: 32579.7422 - val_accuracy: 0.2611
Epoch 24/100
45/45 [==============================] - ETA: 0s - loss: 1.4373 - accuracy: 0.4854
Epoch 24: val_accuracy did not improve from 0.26111
45/45 [==============================] - 7s 145ms/step - loss: 1.4373 - accuracy: 0.4854 - val_loss: 8038.8555 - val_accuracy: 0.1972
Epoch 25/100
45/45 [==============================] - ETA: 0s - loss: 1.3630 - accuracy: 0.5139
Epoch 25: val_accuracy did not improve from 0.26111
45/45 [==============================] - 7s 145ms/step - loss: 1.3630 - accuracy: 0.5139 - val_loss: 2.3408 - val_accuracy: 0.2528
Epoch 26/100
45/45 [==============================] - ETA: 0s - loss: 1.3181 - accuracy: 0.5375
Epoch 26: val_accuracy did not improve from 0.26111
45/45 [==============================] - 7s 144ms/step - loss: 1.3181 - accuracy: 0.5375 - val_loss: 2.1877 - val_accuracy: 0.2500
Epoch 27/100
45/45 [==============================] - ETA: 0s - loss: 1.2544 - accuracy: 0.5583
Epoch 27: val_accuracy did not improve from 0.26111
45/45 [==============================] - 7s 144ms/step - loss: 1.2544 - accuracy: 0.5583 - val_loss: 2.6184 - val_accuracy: 0.1861
Epoch 28/100
45/45 [==============================] - ETA: 0s - loss: 1.1877 - accuracy: 0.5813
Epoch 28: val_accuracy did not improve from 0.26111
45/45 [==============================] - 6s 144ms/step - loss: 1.1877 - accuracy: 0.5813 - val_loss: 3.0485 - val_accuracy: 0.2500
Epoch 29/100
45/45 [==============================] - ETA: 0s - loss: 1.0968 - accuracy: 0.6132
Epoch 29: val_accuracy did not improve from 0.26111
45/45 [==============================] - 6s 143ms/step - loss: 1.0968 - accuracy: 0.6132 - val_loss: 61754.2734 - val_accuracy: 0.1917
Epoch 30/100
45/45 [==============================] - ETA: 0s - loss: 1.0537 - accuracy: 0.6424
Epoch 30: val_accuracy improved from 0.26111 to 0.26667, saving model to best_model.h5
45/45 [==============================] - 7s 148ms/step - loss: 1.0537 - accuracy: 0.6424 - val_loss: 2.3469 - val_accuracy: 0.2667
Epoch 31/100
45/45 [==============================] - ETA: 0s - loss: 1.0427 - accuracy: 0.6306
Epoch 31: val_accuracy did not improve from 0.26667
45/45 [==============================] - 6s 143ms/step - loss: 1.0427 - accuracy: 0.6306 - val_loss: 3.4498 - val_accuracy: 0.2250
Epoch 32/100
45/45 [==============================] - ETA: 0s - loss: 1.0697 - accuracy: 0.6403
Epoch 32: val_accuracy improved from 0.26667 to 0.37222, saving model to best_model.h5
45/45 [==============================] - 7s 146ms/step - loss: 1.0697 - accuracy: 0.6403 - val_loss: 2.8960 - val_accuracy: 0.3722
Epoch 33/100
45/45 [==============================] - ETA: 0s - loss: 0.9062 - accuracy: 0.6840
Epoch 33: val_accuracy did not improve from 0.37222
45/45 [==============================] - 6s 143ms/step - loss: 0.9062 - accuracy: 0.6840 - val_loss: 102.1351 - val_accuracy: 0.3028
Epoch 34/100
45/45 [==============================] - ETA: 0s - loss: 0.8220 - accuracy: 0.7118
Epoch 34: val_accuracy did not improve from 0.37222
45/45 [==============================] - 7s 144ms/step - loss: 0.8220 - accuracy: 0.7118 - val_loss: 3.1855 - val_accuracy: 0.2583
Epoch 35/100
45/45 [==============================] - ETA: 0s - loss: 0.7424 - accuracy: 0.7431
Epoch 35: val_accuracy did not improve from 0.37222
45/45 [==============================] - 7s 144ms/step - loss: 0.7424 - accuracy: 0.7431 - val_loss: 34309.0664 - val_accuracy: 0.3028
Epoch 36/100
45/45 [==============================] - ETA: 0s - loss: 0.7257 - accuracy: 0.7535
Epoch 36: val_accuracy did not improve from 0.37222
45/45 [==============================] - 6s 144ms/step - loss: 0.7257 - accuracy: 0.7535 - val_loss: 89.2148 - val_accuracy: 0.2361
Epoch 37/100
45/45 [==============================] - ETA: 0s - loss: 0.6695 - accuracy: 0.7799
Epoch 37: val_accuracy did not improve from 0.37222
45/45 [==============================] - 7s 146ms/step - loss: 0.6695 - accuracy: 0.7799 - val_loss: 3590.8940 - val_accuracy: 0.1889
Epoch 38/100
45/45 [==============================] - ETA: 0s - loss: 0.5841 - accuracy: 0.7917
Epoch 38: val_accuracy did not improve from 0.37222
45/45 [==============================] - 7s 145ms/step - loss: 0.5841 - accuracy: 0.7917 - val_loss: 5.1283 - val_accuracy: 0.2222
Epoch 39/100
45/45 [==============================] - ETA: 0s - loss: 0.5989 - accuracy: 0.7840
Epoch 39: val_accuracy did not improve from 0.37222
45/45 [==============================] - 7s 145ms/step - loss: 0.5989 - accuracy: 0.7840 - val_loss: 3.7647 - val_accuracy: 0.2833
Epoch 40/100
45/45 [==============================] - ETA: 0s - loss: 0.5431 - accuracy: 0.8181
Epoch 40: val_accuracy did not improve from 0.37222
45/45 [==============================] - 7s 144ms/step - loss: 0.5431 - accuracy: 0.8181 - val_loss: 3.9703 - val_accuracy: 0.3028
Epoch 41/100
45/45 [==============================] - ETA: 0s - loss: 0.4810 - accuracy: 0.8333
Epoch 41: val_accuracy improved from 0.37222 to 0.40278, saving model to best_model.h5
45/45 [==============================] - 7s 147ms/step - loss: 0.4810 - accuracy: 0.8333 - val_loss: 2.7934 - val_accuracy: 0.4028
Epoch 42/100
45/45 [==============================] - ETA: 0s - loss: 0.5016 - accuracy: 0.8278
Epoch 42: val_accuracy did not improve from 0.40278
45/45 [==============================] - 7s 145ms/step - loss: 0.5016 - accuracy: 0.8278 - val_loss: 58485.9453 - val_accuracy: 0.2583
Epoch 43/100
45/45 [==============================] - ETA: 0s - loss: 0.4782 - accuracy: 0.8424
Epoch 43: val_accuracy did not improve from 0.40278
45/45 [==============================] - 7s 144ms/step - loss: 0.4782 - accuracy: 0.8424 - val_loss: 3.6065 - val_accuracy: 0.3694
Epoch 44/100
45/45 [==============================] - ETA: 0s - loss: 0.3587 - accuracy: 0.8785
Epoch 44: val_accuracy did not improve from 0.40278
45/45 [==============================] - 7s 145ms/step - loss: 0.3587 - accuracy: 0.8785 - val_loss: 5.5882 - val_accuracy: 0.3806
Epoch 45/100
45/45 [==============================] - ETA: 0s - loss: 0.3143 - accuracy: 0.8889
Epoch 45: val_accuracy did not improve from 0.40278
45/45 [==============================] - 7s 145ms/step - loss: 0.3143 - accuracy: 0.8889 - val_loss: 2.7883 - val_accuracy: 0.3861
Epoch 46/100
45/45 [==============================] - ETA: 0s - loss: 0.3707 - accuracy: 0.8757
Epoch 46: val_accuracy did not improve from 0.40278
45/45 [==============================] - 7s 145ms/step - loss: 0.3707 - accuracy: 0.8757 - val_loss: 3.2097 - val_accuracy: 0.3583
Epoch 47/100
45/45 [==============================] - ETA: 0s - loss: 0.3418 - accuracy: 0.8799
Epoch 47: val_accuracy did not improve from 0.40278
45/45 [==============================] - 6s 144ms/step - loss: 0.3418 - accuracy: 0.8799 - val_loss: 3.1672 - val_accuracy: 0.4028
Epoch 48/100
45/45 [==============================] - ETA: 0s - loss: 0.3202 - accuracy: 0.8931
Epoch 48: val_accuracy did not improve from 0.40278
45/45 [==============================] - 7s 145ms/step - loss: 0.3202 - accuracy: 0.8931 - val_loss: 16.9275 - val_accuracy: 0.3944
Epoch 49/100
45/45 [==============================] - ETA: 0s - loss: 0.2668 - accuracy: 0.9118
Epoch 49: val_accuracy improved from 0.40278 to 0.41944, saving model to best_model.h5
45/45 [==============================] - 7s 147ms/step - loss: 0.2668 - accuracy: 0.9118 - val_loss: 2.8230 - val_accuracy: 0.4194
Epoch 50/100
45/45 [==============================] - ETA: 0s - loss: 0.2676 - accuracy: 0.9021
Epoch 50: val_accuracy did not improve from 0.41944
45/45 [==============================] - 7s 144ms/step - loss: 0.2676 - accuracy: 0.9021 - val_loss: 2671.1196 - val_accuracy: 0.3639
Epoch 51/100
45/45 [==============================] - ETA: 0s - loss: 0.2152 - accuracy: 0.9306
Epoch 51: val_accuracy improved from 0.41944 to 0.45556, saving model to best_model.h5
45/45 [==============================] - 7s 147ms/step - loss: 0.2152 - accuracy: 0.9306 - val_loss: 2.5370 - val_accuracy: 0.4556
Epoch 52/100
45/45 [==============================] - ETA: 0s - loss: 0.1308 - accuracy: 0.9611
Epoch 52: val_accuracy did not improve from 0.45556
45/45 [==============================] - 7s 144ms/step - loss: 0.1308 - accuracy: 0.9611 - val_loss: 2.9426 - val_accuracy: 0.4444
Epoch 53/100
45/45 [==============================] - ETA: 0s - loss: 0.1306 - accuracy: 0.9556
Epoch 53: val_accuracy did not improve from 0.45556
45/45 [==============================] - 7s 145ms/step - loss: 0.1306 - accuracy: 0.9556 - val_loss: 3.2494 - val_accuracy: 0.3917
Epoch 54/100
45/45 [==============================] - ETA: 0s - loss: 0.1515 - accuracy: 0.9500
Epoch 54: val_accuracy did not improve from 0.45556
45/45 [==============================] - 6s 144ms/step - loss: 0.1515 - accuracy: 0.9500 - val_loss: 4461.8813 - val_accuracy: 0.3611
Epoch 55/100
45/45 [==============================] - ETA: 0s - loss: 0.2079 - accuracy: 0.9285
Epoch 55: val_accuracy did not improve from 0.45556
45/45 [==============================] - 6s 144ms/step - loss: 0.2079 - accuracy: 0.9285 - val_loss: 4.7424 - val_accuracy: 0.3917
Epoch 56/100
45/45 [==============================] - ETA: 0s - loss: 0.2407 - accuracy: 0.9076
Epoch 56: val_accuracy did not improve from 0.45556
45/45 [==============================] - 7s 145ms/step - loss: 0.2407 - accuracy: 0.9076 - val_loss: 3.3555 - val_accuracy: 0.3889
Epoch 57/100
45/45 [==============================] - ETA: 0s - loss: 0.1948 - accuracy: 0.9333
Epoch 57: val_accuracy did not improve from 0.45556
45/45 [==============================] - 7s 145ms/step - loss: 0.1948 - accuracy: 0.9333 - val_loss: 3.4168 - val_accuracy: 0.3861
Epoch 58/100
45/45 [==============================] - ETA: 0s - loss: 0.1534 - accuracy: 0.9431
Epoch 58: val_accuracy improved from 0.45556 to 0.47222, saving model to best_model.h5
45/45 [==============================] - 7s 146ms/step - loss: 0.1534 - accuracy: 0.9431 - val_loss: 2.7895 - val_accuracy: 0.4722
Epoch 59/100
45/45 [==============================] - ETA: 0s - loss: 0.1457 - accuracy: 0.9549
Epoch 59: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 144ms/step - loss: 0.1457 - accuracy: 0.9549 - val_loss: 6.3610 - val_accuracy: 0.3444
Epoch 60/100
45/45 [==============================] - ETA: 0s - loss: 0.2078 - accuracy: 0.9306
Epoch 60: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 145ms/step - loss: 0.2078 - accuracy: 0.9306 - val_loss: 3.5834 - val_accuracy: 0.4056
Epoch 61/100
45/45 [==============================] - ETA: 0s - loss: 0.2005 - accuracy: 0.9361
Epoch 61: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 144ms/step - loss: 0.2005 - accuracy: 0.9361 - val_loss: 4.0683 - val_accuracy: 0.3861
Epoch 62/100
45/45 [==============================] - ETA: 0s - loss: 0.1815 - accuracy: 0.9375
Epoch 62: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 145ms/step - loss: 0.1815 - accuracy: 0.9375 - val_loss: 3.1445 - val_accuracy: 0.4611
Epoch 63/100
45/45 [==============================] - ETA: 0s - loss: 0.1027 - accuracy: 0.9722
Epoch 63: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 145ms/step - loss: 0.1027 - accuracy: 0.9722 - val_loss: 3.0654 - val_accuracy: 0.4500
Epoch 64/100
45/45 [==============================] - ETA: 0s - loss: 0.1370 - accuracy: 0.9535
Epoch 64: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 145ms/step - loss: 0.1370 - accuracy: 0.9535 - val_loss: 3.1589 - val_accuracy: 0.4667
Epoch 65/100
45/45 [==============================] - ETA: 0s - loss: 0.1530 - accuracy: 0.9576
Epoch 65: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 145ms/step - loss: 0.1530 - accuracy: 0.9576 - val_loss: 19.4580 - val_accuracy: 0.3722
Epoch 66/100
45/45 [==============================] - ETA: 0s - loss: 0.1092 - accuracy: 0.9625
Epoch 66: val_accuracy did not improve from 0.47222
45/45 [==============================] - 6s 143ms/step - loss: 0.1092 - accuracy: 0.9625 - val_loss: 263474.1250 - val_accuracy: 0.2639
Epoch 67/100
45/45 [==============================] - ETA: 0s - loss: 0.1094 - accuracy: 0.9639
Epoch 67: val_accuracy did not improve from 0.47222
45/45 [==============================] - 7s 144ms/step - loss: 0.1094 - accuracy: 0.9639 - val_loss: 50495.4219 - val_accuracy: 0.4222
Epoch 68/100
45/45 [==============================] - ETA: 0s - loss: 0.0843 - accuracy: 0.9694
Epoch 68: val_accuracy improved from 0.47222 to 0.47500, saving model to best_model.h5
45/45 [==============================] - 7s 145ms/step - loss: 0.0843 - accuracy: 0.9694 - val_loss: 20.9734 - val_accuracy: 0.4750
Epoch 69/100
45/45 [==============================] - ETA: 0s - loss: 0.1767 - accuracy: 0.9458
Epoch 69: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 145ms/step - loss: 0.1767 - accuracy: 0.9458 - val_loss: 1322.2261 - val_accuracy: 0.3583
Epoch 70/100
45/45 [==============================] - ETA: 0s - loss: 0.1305 - accuracy: 0.9479
Epoch 70: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 144ms/step - loss: 0.1305 - accuracy: 0.9479 - val_loss: 4.3810 - val_accuracy: 0.3889
Epoch 71/100
45/45 [==============================] - ETA: 0s - loss: 0.1202 - accuracy: 0.9569
Epoch 71: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 144ms/step - loss: 0.1202 - accuracy: 0.9569 - val_loss: 144.1233 - val_accuracy: 0.1361
Epoch 72/100
45/45 [==============================] - ETA: 0s - loss: 0.0746 - accuracy: 0.9785
Epoch 72: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 144ms/step - loss: 0.0746 - accuracy: 0.9785 - val_loss: 3.0208 - val_accuracy: 0.4417
Epoch 73/100
45/45 [==============================] - ETA: 0s - loss: 0.1549 - accuracy: 0.9542
Epoch 73: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 145ms/step - loss: 0.1549 - accuracy: 0.9542 - val_loss: 4.0066 - val_accuracy: 0.4333
Epoch 74/100
45/45 [==============================] - ETA: 0s - loss: 0.1743 - accuracy: 0.9444
Epoch 74: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 145ms/step - loss: 0.1743 - accuracy: 0.9444 - val_loss: 373.7328 - val_accuracy: 0.4250
Epoch 75/100
45/45 [==============================] - ETA: 0s - loss: 0.1104 - accuracy: 0.9611
Epoch 75: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 145ms/step - loss: 0.1104 - accuracy: 0.9611 - val_loss: 4.0707 - val_accuracy: 0.4222
Epoch 76/100
45/45 [==============================] - ETA: 0s - loss: 0.1021 - accuracy: 0.9639
Epoch 76: val_accuracy did not improve from 0.47500
45/45 [==============================] - 6s 144ms/step - loss: 0.1021 - accuracy: 0.9639 - val_loss: 4.0057 - val_accuracy: 0.3944
Epoch 77/100
45/45 [==============================] - ETA: 0s - loss: 0.1100 - accuracy: 0.9618
Epoch 77: val_accuracy did not improve from 0.47500
45/45 [==============================] - 6s 143ms/step - loss: 0.1100 - accuracy: 0.9618 - val_loss: 4.1805 - val_accuracy: 0.4389
Epoch 78/100
45/45 [==============================] - ETA: 0s - loss: 0.0505 - accuracy: 0.9847
Epoch 78: val_accuracy did not improve from 0.47500
45/45 [==============================] - 7s 145ms/step - loss: 0.0505 - accuracy: 0.9847 - val_loss: 3.7758 - val_accuracy: 0.4750
Epoch 78: early stopping

4. Results and Prediction

1. Plot the results

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(len(loss))

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

(Figure: training and validation accuracy/loss curves)

  • The validation loss spikes are something I could not get rid of: across several runs there was always at least one epoch with an enormous val_loss, which I never saw with the PyTorch version. The best result of this run: loss: 0.0505 - accuracy: 0.9847 - val_loss: 3.7758 - val_accuracy: 0.4750.
  • Validation accuracy never really climbs. I suspect combining the network with ResNet would work better; likely causes include too few training images and a rather computation-heavy model, so a ResNet-based variant is something I will try in a later update.

2. Predict

from PIL import Image

# Load the best weights
model.load_weights('best_model.h5')

# Load an image
img = Image.open("./data/Brad Pitt/001_c04300ef.jpg")
image = tf.image.resize(img, [256, 256])
img_array = tf.expand_dims(image, 0)  # add a batch dimension

predict = model.predict(img_array)
print("Prediction: ", classnames[np.argmax(predict)])
1/1 [==============================] - 0s 384ms/step
Prediction:  Brad Pitt
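Note that the modified model's final Dense layer has no activation, so `model.predict` returns raw logits; `np.argmax` still selects the correct class, but a softmax is needed if calibrated probabilities are wanted. A small self-contained sketch (the logit values below are made up for illustration):

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability, then normalize.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits for a 3-class problem
logits = np.array([2.0, 0.5, -1.0])
probs = softmax(logits)
print(probs.argmax())  # index of the predicted class, same as argmax on logits
```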
