A hand-sign (gesture) recognition task implemented with TensorFlow 1.x, then improved with image augmentation. Baseline: training accuracy 0.92, test accuracy 0.77. After augmentation: training accuracy 0.97, test accuracy 0.88.
1 Importing packages
import math
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
%matplotlib inline
np.random.seed(1)
2 Loading the dataset
def load_dataset():
    train_dataset = h5py.File('datasets/train_signs.h5', "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:])  # training set features
    train_set_y_orig = np.array(train_dataset["train_set_y"][:])  # training set labels
    test_dataset = h5py.File('datasets/test_signs.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:])  # test set features
    test_set_y_orig = np.array(test_dataset["test_set_y"][:])  # test set labels
    classes = np.array(test_dataset["list_classes"][:])  # list of classes
    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))
    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
# Convert labels to one-hot encoding
def convert_to_one_hot(Y, C):
    Y = np.eye(C)[Y.reshape(-1)].T
    return Y
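As a quick sanity check, the `np.eye` trick above can be demonstrated on a toy label vector (the labels below are illustrative, not from the dataset):

```python
import numpy as np

# np.eye(C) is the C x C identity matrix; indexing its rows with the
# flattened labels selects one one-hot row per example, and .T turns
# the result into shape (C, m).
def convert_to_one_hot(Y, C):
    return np.eye(C)[Y.reshape(-1)].T

Y = np.array([[1, 2, 3, 0]])  # toy labels, shape (1, 4)
one_hot = convert_to_one_hot(Y, 6)
print(one_hot.shape)          # (6, 4)
print(one_hot.argmax(axis=0)) # [1 2 3 0]
```

Note the transpose: the helper returns shape (C, m), which is why the main script applies `.T` again to get (m, C) rows.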
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
index = 6
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
X_train = X_train_orig/255.
X_test = X_test_orig/255.
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
conv_layers = {}
number of training examples = 1080
number of test examples = 120
X_train shape: (1080, 64, 64, 3)
Y_train shape: (1080, 6)
X_test shape: (120, 64, 64, 3)
Y_test shape: (120, 6)
3 Creating placeholders
① TensorFlow requires you to create placeholders for the input data that will be fed into the model when you run a session.
② Now implement the function that creates these placeholders. Because mini-batches are used, the number of input examples is not fixed, so use None for the batch dimension.
def create_placeholders(n_H0, n_W0, n_C0, n_y):
    """
    Creates placeholders for the session.
    Arguments:
    n_H0 -- scalar, height of an input image
    n_W0 -- scalar, width of an input image
    n_C0 -- scalar, number of input channels
    n_y -- scalar, number of classes
    Returns:
    X -- placeholder for the input data, of shape [None, n_H0, n_W0, n_C0], dtype "float"
    Y -- placeholder for the input labels, of shape [None, n_y], dtype "float"
    """
    X = tf.placeholder(tf.float32, [None, n_H0, n_W0, n_C0])
    Y = tf.placeholder(tf.float32, [None, n_y])
    return X, Y
# Test
X , Y = create_placeholders(64,64,3,6)
print ("X = " + str(X))
print ("Y = " + str(Y))
X = Tensor("Placeholder:0", shape=(?, 64, 64, 3), dtype=float32)
Y = Tensor("Placeholder_1:0", shape=(?, 6), dtype=float32)
4 Initializing parameters
① We will initialize the weights/filters W1 and W2 with Xavier initialization, originally tf.contrib.layers.xavier_initializer(seed = 0); the code below uses tf.keras.initializers.glorot_normal(), the Xavier-normal variant.
② There is no need to handle the biases here; TensorFlow takes care of them.
③ You only need to initialize the filters for the 2D convolutions; TensorFlow initializes the fully connected layer automatically.
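For intuition, Glorot (Xavier) normal initialization draws weights from N(0, 2 / (fan_in + fan_out)). A minimal NumPy sketch for the first conv filter, using the (4, 4, 3, 8) shape from the code below (the fan computation mirrors how Keras treats conv kernels):

```python
import numpy as np

def glorot_normal(shape, rng):
    # Glorot/Xavier normal: std = sqrt(2 / (fan_in + fan_out)).
    # For a conv filter of shape (f_h, f_w, c_in, c_out):
    #   fan_in  = f_h * f_w * c_in
    #   fan_out = f_h * f_w * c_out
    receptive = shape[0] * shape[1]
    fan_in, fan_out = receptive * shape[2], receptive * shape[3]
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=shape)

rng = np.random.default_rng(0)
W1 = glorot_normal((4, 4, 3, 8), rng)
print(W1.shape)  # (4, 4, 3, 8)
```

Keeping the variance tied to fan-in and fan-out keeps activation magnitudes roughly constant across layers at the start of training.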
def initialize_parameters():
    '''
    Filter bank for the first conv layer: W1
    Filter bank for the second conv layer: W2
    '''
    # Xavier (Glorot) initialization
    initializer = tf.keras.initializers.glorot_normal()
    W1 = tf.compat.v1.Variable(initializer([4, 4, 3, 8]))
    W2 = tf.compat.v1.Variable(initializer([2, 2, 8, 16]))
    parameters = {'W1': W1, 'W2': W2}
    return parameters

# Test
tf.reset_default_graph()
with tf.Session() as sess_test:
    parameters = initialize_parameters()
    init = tf.global_variables_initializer()
    sess_test.run(init)
    print("W1 = " + str(parameters["W1"].eval()[1, 1, 1]))
    print("W2 = " + str(parameters["W2"].eval()[1, 1, 1]))
    sess_test.close()
5 Forward propagation
① TensorFlow provides some functions you can use directly:
- tf.nn.conv2d(X, W1, strides=[1,s,s,1], padding='SAME'): given an input X and a filter bank W1, convolves W1 over X. The strides argument **[1,s,s,1]** gives the stride in each dimension of an input of shape (m, n_H_prev, n_W_prev, n_C_prev).
- tf.nn.max_pool(A, ksize=[1,f,f,1], strides=[1,s,s,1], padding='SAME'): given an input A, slides a window of size (f,f) with stride (s,s) over it and takes the maximum.
- tf.nn.relu(Z1): computes the elementwise ReLU activation of Z1.
- tf.contrib.layers.flatten(P): given an input P, flattens each example into a 1-D vector and returns a tensor of shape (batch_size, k).
- tf.contrib.layers.fully_connected(F, num_outputs): given an already-flattened input F, returns the output of a fully connected layer.
② When using tf.contrib.layers.fully_connected(F, num_outputs), the fully connected layer initializes its own weights and trains them along with the rest of the model, so you do not need to initialize them yourself when initializing parameters.
① When implementing forward propagation, first define the overall shape of the model:
- CONV2D→RELU→MAXPOOL→CONV2D→RELU→MAXPOOL→FLATTEN→FULLYCONNECTED
② In the implementation, use the following steps and parameters:
- Conv2d: stride 1, padding "SAME"
- ReLU
- Max pool: filter size 8x8, stride 8x8, padding "SAME"
- Conv2d: stride 1, padding "SAME"
- ReLU
- Max pool: filter size 4x4, stride 4x4, padding "SAME"
- Flatten the output of the previous layer
- Fully connected (FC) layer: a fully connected layer with no nonlinear activation. Do not apply softmax here: this layer outputs 6 neurons, which are passed to a softmax later. In TensorFlow, the softmax and the cost function are combined into a single function, which is called separately when computing the cost.
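Tracing the shapes through the steps above (with "SAME" padding, the spatial output size is ceil(n / stride)) shows that the flattened vector fed to the FC layer has 2 * 2 * 16 = 64 entries per example:

```python
import math

def same_out(n, stride):
    # With "SAME" padding, output size depends only on the stride:
    # out = ceil(n / stride), regardless of the filter size.
    return math.ceil(n / stride)

h = w = 64                                     # input: 64 x 64 x 3
h, w, c = same_out(h, 1), same_out(w, 1), 8    # CONV2D 4x4 s=1 -> 64 x 64 x 8
h, w = same_out(h, 8), same_out(w, 8)          # MAXPOOL 8x8 s=8 -> 8 x 8 x 8
h, w, c = same_out(h, 1), same_out(w, 1), 16   # CONV2D 2x2 s=1 -> 8 x 8 x 16
h, w = same_out(h, 4), same_out(w, 4)          # MAXPOOL 4x4 s=4 -> 2 x 2 x 16
print(h, w, c, h * w * c)                      # 2 2 16 64
```

This matches the (?, 6) logits shape printed by the test below: 64 flattened features go through one dense layer with 6 outputs.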
def forward_propagation(X, parameters):
    '''
    CONV2D->RELU->MAXPOOL->CONV2D->RELU->MAXPOOL->FLATTEN->FULLYCONNECTED
    '''
    W1, W2 = parameters['W1'], parameters['W2']
    # "SAME" convolution
    Z1 = tf.nn.conv2d(X, W1, strides=[1, 1, 1, 1], padding="SAME")
    # ReLU activation
    A1 = tf.nn.relu(Z1)
    # Max pooling
    P1 = tf.nn.max_pool(A1, ksize=[1, 8, 8, 1], strides=[1, 8, 8, 1], padding="SAME")
    # Second "SAME" convolution
    Z2 = tf.nn.conv2d(P1, W2, strides=[1, 1, 1, 1], padding="SAME")
    # ReLU activation
    A2 = tf.nn.relu(Z2)
    # Max pooling
    P2 = tf.nn.max_pool(A2, ksize=[1, 4, 4, 1], strides=[1, 4, 4, 1], padding="SAME")
    # Flatten the conv output
    P = tf.compat.v1.layers.flatten(P2)
    # Fully connected layer
    Z3 = tf.compat.v1.layers.dense(P, 6)
    return Z3
print("=====Test=====")
tf.reset_default_graph()
np.random.seed(1)
with tf.Session() as sess_test:
    X, Y = create_placeholders(64, 64, 3, 6)
    parameters = initialize_parameters()
    Z3 = forward_propagation(X, parameters)
    init = tf.global_variables_initializer()
    sess_test.run(init)
    a = sess_test.run(Z3, {X: np.random.randn(2, 64, 64, 3), Y: np.random.randn(2, 6)})
    print("Z3 = " + str(a))
    sess_test.close()
6 Defining the cost function
def compute_cost(Z3, Y):
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=Z3, labels=Y))
    return cost

def random_mini_batches(X, Y, mini_batch_size=64):
    """
    Creates a list of random minibatches from (X, Y).
    Arguments:
    X -- input data, of shape (m, Hi, Wi, Ci)
    Y -- true "label" vector, of shape (m, n_y)
    mini_batch_size -- size of the mini-batches, integer
    Returns:
    mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
    """
    m = X.shape[0]  # number of training examples
    mini_batches = []
    # Step 1: Shuffle (X, Y)
    permutation = list(np.random.permutation(m))
    shuffled_X = X[permutation, :, :, :]
    shuffled_Y = Y[permutation, :]
    # Step 2: Partition (shuffled_X, shuffled_Y), minus the end case.
    num_complete_minibatches = math.floor(m / mini_batch_size)
    for k in range(0, num_complete_minibatches):
        mini_batch_X = shuffled_X[k * mini_batch_size : (k + 1) * mini_batch_size, :, :, :]
        mini_batch_Y = shuffled_Y[k * mini_batch_size : (k + 1) * mini_batch_size, :]
        mini_batches.append((mini_batch_X, mini_batch_Y))
    # Handle the end case (last mini-batch < mini_batch_size)
    if m % mini_batch_size != 0:
        mini_batch_X = shuffled_X[num_complete_minibatches * mini_batch_size : m, :, :, :]
        mini_batch_Y = shuffled_Y[num_complete_minibatches * mini_batch_size : m, :]
        mini_batches.append((mini_batch_X, mini_batch_Y))
    return mini_batches
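The shuffle-and-partition logic can be checked in isolation on toy arrays. The following is a compact, framework-free restatement of the same behavior (the array shapes are illustrative):

```python
import math
import numpy as np

def random_mini_batches(X, Y, mini_batch_size=64):
    # Shuffle examples along axis 0, then slice into consecutive batches;
    # the last batch may be smaller than mini_batch_size.
    m = X.shape[0]
    perm = np.random.permutation(m)
    X, Y = X[perm], Y[perm]
    n_full = math.floor(m / mini_batch_size)
    batches = [(X[k * mini_batch_size:(k + 1) * mini_batch_size],
                Y[k * mini_batch_size:(k + 1) * mini_batch_size])
               for k in range(n_full)]
    if m % mini_batch_size != 0:
        batches.append((X[n_full * mini_batch_size:], Y[n_full * mini_batch_size:]))
    return batches

X = np.zeros((10, 4, 4, 3))  # 10 toy examples
Y = np.zeros((10, 2))
batches = random_mini_batches(X, Y, mini_batch_size=4)
print([b[0].shape[0] for b in batches])  # [4, 4, 2]
```

Every example appears exactly once per epoch; only the order and grouping change between epochs.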
7 Training the model
7.1 Method 1 (baseline)
def model(X_train, Y_train, X_test, Y_test, learning_rate=0.009, epochs=100, mini_batch_size=64):
    # tf.random.set_seed(1)
    tf.random.set_random_seed(1)
    # Input dimensions
    m, n_h0, n_w0, n_c0 = X_train.shape
    # Number of classes
    C = Y_train.shape[1]
    costs = []
    # Create placeholders for the inputs and labels
    X, Y = create_placeholders(n_h0, n_w0, n_c0, C)
    # Initialize the filter variables
    parameters = initialize_parameters()
    # Forward propagation
    Z3 = forward_propagation(X, parameters)
    cost = compute_cost(Z3, Y)
    # Create the optimizer (the gradient-descent step)
    optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
    # Initialize all variables
    init = tf.compat.v1.global_variables_initializer()
    with tf.compat.v1.Session() as sess:
        sess.run(init)
        for epoch in range(epochs):
            epoch_cost = 0
            mini_batch_num = m // mini_batch_size
            mini_batchs = random_mini_batches(X_train, Y_train, mini_batch_size)
            for (mini_x, mini_y) in mini_batchs:
                # Run the optimizer / gradient-descent step
                _, mini_batch_cost = sess.run([optimizer, cost], feed_dict={X: mini_x, Y: mini_y})
                epoch_cost = epoch_cost + mini_batch_cost / mini_batch_num
            if epoch % 5 == 0:
                costs.append(epoch_cost)
                print("Cost after epoch " + str(epoch) + ": " + str(epoch_cost))
        plt.plot(costs)
        plt.ylabel('cost')
        plt.xlabel('epoch')
        plt.show()
        # Fetch the trained parameters from the session
        parameters = sess.run(parameters)
        # Mark correctly predicted examples
        correct_prediction = tf.equal(tf.argmax(Z3, axis=1), tf.argmax(Y, axis=1))
        accuracy = tf.compat.v1.reduce_mean(tf.cast(correct_prediction, "float"))
        print("Training accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
        print("Test accuracy:", accuracy.eval({X: X_test, Y: Y_test}))
    return parameters
import time
start_time = time.perf_counter()
parameters = model(X_train, Y_train, X_test, Y_test, learning_rate=0.007, epochs=100, mini_batch_size=64)
end_time = time.perf_counter()
print("CPU execution time = " + str(end_time - start_time) + " seconds")
epoch 100
Training accuracy: 0.92314816
Test accuracy: 0.775
CPU execution time = 56.44441370000004 seconds
7.2 Method 2 (with image augmentation)
Improve the model with image augmentation: random horizontal flips, random brightness adjustment, and random contrast adjustment. Random flips increase the diversity of the data, while the random brightness and contrast adjustments make the model more robust.
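The three augmentations map directly onto tf.image ops in the code below. As a framework-free illustration of what each op does, here is a NumPy sketch; the delta and contrast ranges mirror the TF calls, while the final clipping step is an assumption added to keep float pixel values in a displayable [0, 1] range:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    # Random horizontal flip with probability 0.5
    if rng.random() < 0.5:
        image = image[:, ::-1, :]
    # Random brightness: add a single delta in [-0.2, 0.2] to all pixels
    image = image + rng.uniform(-0.2, 0.2)
    # Random contrast: scale each pixel's deviation from the per-channel
    # mean by a factor in [0.8, 1.2]
    factor = rng.uniform(0.8, 1.2)
    mean = image.mean(axis=(0, 1), keepdims=True)
    image = (image - mean) * factor + mean
    # Keep pixel values in [0, 1] (assumed post-processing step)
    return np.clip(image, 0.0, 1.0)

img = rng.random((64, 64, 3))  # toy image in [0, 1]
aug = augment(img)
print(aug.shape)  # (64, 64, 3)
```

Because the flip, delta, and factor are redrawn every time an image is fetched, the network effectively sees a slightly different training set each epoch.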
# Define the model function
def model_aug(X_train, Y_train, X_test, Y_test, learning_rate=0.009, epochs=100, mini_batch_size=64):
    tf.random.set_random_seed(1)
    # Input dimensions
    m, n_h0, n_w0, n_c0 = X_train.shape
    # Number of classes
    C = Y_train.shape[1]
    costs = []
    # Create placeholders for the inputs and labels
    X, Y = create_placeholders(n_h0, n_w0, n_c0, C)
    # Initialize the filter variables
    parameters = initialize_parameters()
    # Forward propagation
    Z3 = forward_propagation(X, parameters)
    cost = compute_cost(Z3, Y)
    # Create the optimizer (the gradient-descent step)
    optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

    '''Improvement: image augmentation'''
    def image_augmentation(image, label):
        image = tf.image.random_flip_left_right(image)                 # random horizontal flip
        image = tf.image.random_brightness(image, max_delta=0.2)       # random brightness
        image = tf.image.random_contrast(image, lower=0.8, upper=1.2)  # random contrast
        return image, label

    # Build the dataset, apply the augmentation, and batch it
    dataset = tf.data.Dataset.from_tensor_slices((X_train, Y_train))
    dataset = dataset.map(image_augmentation)
    dataset = dataset.batch(mini_batch_size).repeat()
    # Define the iterator
    iterator = dataset.make_initializable_iterator()
    next_batch = iterator.get_next()

    # Initialize the iterator inside the session
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(iterator.initializer)
        for epoch in range(epochs):
            epoch_cost = 0.
            mini_batch_num = m // mini_batch_size
            # Fetch batches from next_batch instead of random_mini_batches
            for _ in range(mini_batch_num):
                mini_x, mini_y = sess.run(next_batch)
                _, mini_batch_cost = sess.run([optimizer, cost], feed_dict={X: mini_x, Y: mini_y})
                epoch_cost += mini_batch_cost / mini_batch_num
            if epoch % 5 == 0:
                costs.append(epoch_cost)
                print("Cost after epoch " + str(epoch) + ": " + str(epoch_cost))
        plt.plot(costs)
        plt.ylabel('cost')
        plt.xlabel('epoch')
        plt.show()
        # Fetch the trained parameters from the session
        parameters = sess.run(parameters)
        # Mark correctly predicted examples
        correct_prediction = tf.equal(tf.argmax(Z3, axis=1), tf.argmax(Y, axis=1))
        accuracy = tf.compat.v1.reduce_mean(tf.cast(correct_prediction, "float"))
        print("Training accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
        print("Test accuracy:", accuracy.eval({X: X_test, Y: Y_test}))
    return parameters
import time
start_time = time.perf_counter()
parameters = model_aug(X_train, Y_train, X_test, Y_test,learning_rate=0.007,epochs=200,mini_batch_size=64)
end_time = time.perf_counter()
print("CPU execution time = " + str(end_time - start_time) + " seconds")
epoch 200
Training accuracy: 0.97037035
Test accuracy: 0.8833333
CPU execution time = 84.42098270000002 seconds