[TensorFlow-windows] Study Notes 4: Model Building, Saving, and Use

Preface

The previous post covered the basic building blocks of a neural network: layers, activation functions, loss functions, optimizers, and so on. This post tackles the problems left over from that chapter: building a handwritten-digit recognition network with a CNN, saving the model parameters, and classifying a single image.

As usual, the reference blogs:

Saving and loading models in TensorFlow

[tensorflow] Saving a model, reloading it, and related operations

Testing single images with a tensorflow binary classification model

TensorFlow-Examples

Implementing the Training

First, the toy dataset: Link: https://pan.baidu.com/s/1ugEy85182vjcXQ8VoMJAbg Password: 1o83

It is simply the handwritten-digit dataset rendered to png images for storage, with a txt file recording each image's path and label; read the previous post and it will be clear. Without further ado, let's get to it.
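The label file is plain text, one image per line: the path, a space, then the integer label. A minimal sketch of parsing that format (the sample lines here are made up for illustration):

```python
# Each line of the label file: "<image_path> <integer_label>"
sample = """mnist/train/0/0_1.png 0
mnist/train/5/5_9.png 5
mnist/train/7/7_3.png 7"""

imagepaths, labels = [], []
for line in sample.splitlines():
    path, label = line.split(' ')
    imagepaths.append(path)
    labels.append(int(label))

print(imagepaths[1], labels[1])  # mnist/train/5/5_9.png 5
```

This is the same split-on-space logic the `read_images` function below applies to the real `train_labels.txt`.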

Dataset Processing

import tensorflow as tf

IMG_HEIGHT = 28  # height
IMG_WIDTH = 28   # width
CHANNELS = 3     # number of channels

def read_images(dataset_path, batch_size):
    imagepaths, labels = list(), list()
    data = open(dataset_path, 'r').read().splitlines()
    for d in data:
        imagepaths.append(d.split(' ')[0])
        labels.append(int(d.split(' ')[1]))
    # convert to tensors
    imagepaths = tf.convert_to_tensor(imagepaths, dtype=tf.string)
    labels = tf.convert_to_tensor(labels, dtype=tf.int32)
    # build a TF queue and shuffle the data
    image, label = tf.train.slice_input_producer([imagepaths, labels], shuffle=True)
    # read the data
    image = tf.read_file(image)
    image = tf.image.decode_jpeg(image, channels=CHANNELS)
    # resize the image to the specified size
    image = tf.image.resize_images(image, [IMG_HEIGHT, IMG_WIDTH])
    # manual normalization
    image = image * 1.0 / 127.5 - 1.0
    # create batches
    inputX, inputY = tf.train.batch([image, label], batch_size=batch_size,
                                    capacity=batch_size * 8, num_threads=4)
    return inputX, inputY

Building the Network

# Training parameters
learning_rate = 0.001
num_steps = 1000
batch_size = 128
display_step = 10

# Network parameters
num_classes = 10
dropout = 0.75

# Convolution operation
def conv2d(x, W, b, strides=1):
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)

# Max-pooling operation
def maxpool2d(x, k=2):
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1], padding='SAME')

# Network structure
def conv_net(x, weights, biases, dropout):
    # input data
    x = tf.reshape(x, shape=[-1, IMG_HEIGHT, IMG_WIDTH, CHANNELS])
    # first convolutional layer
    conv1 = conv2d(x, weights['wc1'], biases['bc1'])
    conv1 = maxpool2d(conv1, k=2)
    # second convolutional layer
    conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
    conv2 = maxpool2d(conv2, k=2)
    # fully connected layer
    fc1 = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]])
    fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
    fc1 = tf.nn.relu(fc1)
    fc1 = tf.nn.dropout(fc1, dropout)
    # output layer
    out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
    return out

# Initialize the weights, so they can be saved later
weights = {
    'wc1': tf.Variable(tf.random_normal([5, 5, CHANNELS, 32])),
    'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
    'wd1': tf.Variable(tf.random_normal([7*7*64, 1024])),
    'out': tf.Variable(tf.random_normal([1024, num_classes]))
}
# Initialize the biases, so they can be saved later
biases = {
    'bc1': tf.Variable(tf.random_normal([32])),
    'bc2': tf.Variable(tf.random_normal([64])),
    'bd1': tf.Variable(tf.random_normal([1024])),
    'out': tf.Variable(tf.random_normal([num_classes]))
}
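The `7*7*64` in `wd1` comes from the shape arithmetic: each SAME-padded 2x2 max-pool halves the 28x28 spatial size (28 to 14 to 7), and the second conv layer outputs 64 channels. A quick sanity check of that arithmetic:

```python
import math

def same_pool_size(size, k=2):
    # SAME padding with stride k: output size = ceil(input / k)
    return math.ceil(size / k)

h = w = 28
for _ in range(2):   # two 2x2 max-pool layers
    h, w = same_pool_size(h), same_pool_size(w)
channels = 64        # output channels of the second conv layer

print(h, w, channels, h * w * channels)  # 7 7 64 3136
```

So the flattened feature vector entering the fully connected layer has 3136 elements, matching the first dimension of `wd1`.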

Defining the Network Inputs and Outputs

Note: if you want to leave a hook so you can later feed in a single image for testing, you must expose the input interface, like this:

# Inputs of the graph
X = tf.placeholder(tf.float32, [None, IMG_HEIGHT, IMG_WIDTH, CHANNELS], name='X')
Y = tf.placeholder(tf.float32, [None, num_classes], name='Y')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')

Be sure to pass a name, because later we fetch these placeholders by that name; it is the most commonly used retrieval method.

Defining the Loss, Optimizer, and Training/Evaluation Ops

# Build the model
logits = conv_net(X, weights, biases, keep_prob)
prediction = tf.nn.softmax(logits, name='prediction')
# Loss function
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
# Evaluation
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
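What `softmax_cross_entropy_with_logits_v2` computes can be sketched in NumPy: softmax over the logits, then cross-entropy against the one-hot labels (a simple sketch of the math, not the fused, numerically hardened kernel TensorFlow actually uses):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=1, keepdims=True)

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])  # one-hot: the true class is 0

probs = softmax(logits)
loss = -np.sum(labels * np.log(probs), axis=1)
print(loss)  # cross-entropy of the true class, about 0.417
```

`tf.reduce_mean` then averages this per-example loss over the batch.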

Training

Remember the one-hot encoding with tf.one_hot().
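`tf.one_hot(labels, num_classes, 1, 0)` turns each integer label into a row with a 1 at the label's index and 0 elsewhere; the NumPy equivalent:

```python
import numpy as np

labels = np.array([3, 0, 5])
num_classes = 10
one_hot = np.eye(num_classes, dtype=np.int32)[labels]
print(one_hot[0])  # [0 0 0 1 0 0 0 0 0 0]
```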

# Saver for the model
saver = tf.train.Saver()
# tf.add_to_collection('predict', prediction)

# Training
init = tf.global_variables_initializer()  # initialize the model
print('Reading the dataset:')
input_img, input_label = read_images('./mnist/train_labels.txt', batch_size=batch_size)
print('Training the model')
with tf.Session() as sess:
    coord = tf.train.Coordinator()
    sess.run(init)  # initialize the parameters
    tf.train.start_queue_runners(sess=sess, coord=coord)
    for step in range(1, num_steps + 1):
        batch_x, batch_y = sess.run([input_img, tf.one_hot(input_label, num_classes, 1, 0)])
        sess.run(train_op, feed_dict={X: batch_x, Y: batch_y, keep_prob: 0.8})
        if step % display_step == 0 or step == 1:
            # calculate batch loss and accuracy
            loss, acc = sess.run([loss_op, accuracy],
                                 feed_dict={X: batch_x, Y: batch_y, keep_prob: 1.0})
            print("Step " + str(step) + ", Minibatch Loss= " +
                  "{:.4f}".format(loss) + ", Training Accuracy= " +
                  "{:.3f}".format(acc))
    coord.request_stop()
    coord.join()
    print("Optimization Finished!")
    saver.save(sess, './cnn_mnist_model/CNN_Mnist')

Output:

Reading the dataset:
Training the model
2018-08-03 12:38:05.596648: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2018-08-03 12:38:05.882851: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1392] Found device 0 with properties:
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.759
pciBusID: 0000:01:00.0
totalMemory: 6.00GiB freeMemory: 4.96GiB
2018-08-03 12:38:05.889153: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1471] Adding visible gpu devices: 0
2018-08-03 12:38:06.600411: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:952] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-08-03 12:38:06.604687: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:958]      0
2018-08-03 12:38:06.606494: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:971] 0:   N
2018-08-03 12:38:06.608588: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4726 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
Step 1, Minibatch Loss= 206875.5000, Training Accuracy= 0.219
Step 10, Minibatch Loss= 71490.0000, Training Accuracy= 0.164
Step 20, Minibatch Loss= 27775.7266, Training Accuracy= 0.398
Step 30, Minibatch Loss= 15692.2725, Training Accuracy= 0.641
Step 40, Minibatch Loss= 18211.4141, Training Accuracy= 0.625
Step 50, Minibatch Loss= 7250.1758, Training Accuracy= 0.789
Step 60, Minibatch Loss= 10694.9902, Training Accuracy= 0.750
Step 70, Minibatch Loss= 10783.8535, Training Accuracy= 0.766
Step 80, Minibatch Loss= 6080.1138, Training Accuracy= 0.844
Step 90, Minibatch Loss= 6720.9380, Training Accuracy= 0.867
Step 100, Minibatch Loss= 3673.7524, Training Accuracy= 0.922
Step 110, Minibatch Loss= 7893.8228, Training Accuracy= 0.836
Step 120, Minibatch Loss= 6805.0176, Training Accuracy= 0.852
Step 130, Minibatch Loss= 2863.3728, Training Accuracy= 0.906
Step 140, Minibatch Loss= 3335.6992, Training Accuracy= 0.883
Step 150, Minibatch Loss= 3514.4031, Training Accuracy= 0.914
Step 160, Minibatch Loss= 1842.5328, Training Accuracy= 0.945
Step 170, Minibatch Loss= 3443.9966, Training Accuracy= 0.914
Step 180, Minibatch Loss= 1961.7180, Training Accuracy= 0.945
Step 190, Minibatch Loss= 2919.5215, Training Accuracy= 0.898
Step 200, Minibatch Loss= 4270.7686, Training Accuracy= 0.891
Step 210, Minibatch Loss= 3591.2534, Training Accuracy= 0.922
Step 220, Minibatch Loss= 4692.2163, Training Accuracy= 0.867
Step 230, Minibatch Loss= 1537.0554, Training Accuracy= 0.914
Step 240, Minibatch Loss= 3574.1797, Training Accuracy= 0.898
Step 250, Minibatch Loss= 5143.3276, Training Accuracy= 0.898
Step 260, Minibatch Loss= 2142.9756, Training Accuracy= 0.922
Step 270, Minibatch Loss= 1323.6707, Training Accuracy= 0.945
Step 280, Minibatch Loss= 2004.2051, Training Accuracy= 0.961
Step 290, Minibatch Loss= 1112.9484, Training Accuracy= 0.938
Step 300, Minibatch Loss= 1977.6018, Training Accuracy= 0.922
Step 310, Minibatch Loss= 876.0104, Training Accuracy= 0.977
Step 320, Minibatch Loss= 3448.3142, Training Accuracy= 0.953
Step 330, Minibatch Loss= 1173.9749, Training Accuracy= 0.961
Step 340, Minibatch Loss= 2152.9966, Training Accuracy= 0.938
Step 350, Minibatch Loss= 3113.6838, Training Accuracy= 0.938
Step 360, Minibatch Loss= 1779.6680, Training Accuracy= 0.922
Step 370, Minibatch Loss= 2738.2637, Training Accuracy= 0.930
Step 380, Minibatch Loss= 1666.9695, Training Accuracy= 0.922
Step 390, Minibatch Loss= 2076.6716, Training Accuracy= 0.914
Step 400, Minibatch Loss= 3356.1475, Training Accuracy= 0.914
Step 410, Minibatch Loss= 1222.7729, Training Accuracy= 0.953
Step 420, Minibatch Loss= 2422.6355, Training Accuracy= 0.898
Step 430, Minibatch Loss= 4377.9385, Training Accuracy= 0.914
Step 440, Minibatch Loss= 1566.1058, Training Accuracy= 0.969
Step 450, Minibatch Loss= 3540.1555, Training Accuracy= 0.875
Step 460, Minibatch Loss= 1136.4354, Training Accuracy= 0.961
Step 470, Minibatch Loss= 2821.9456, Training Accuracy= 0.938
Step 480, Minibatch Loss= 1804.5267, Training Accuracy= 0.945
Step 490, Minibatch Loss= 625.0988, Training Accuracy= 0.977
Step 500, Minibatch Loss= 2406.8958, Training Accuracy= 0.930
Step 510, Minibatch Loss= 1198.2866, Training Accuracy= 0.961
Step 520, Minibatch Loss= 680.7784, Training Accuracy= 0.953
Step 530, Minibatch Loss= 2329.2104, Training Accuracy= 0.961
Step 540, Minibatch Loss= 848.0190, Training Accuracy= 0.945
Step 550, Minibatch Loss= 1327.9423, Training Accuracy= 0.938
Step 560, Minibatch Loss= 1020.9082, Training Accuracy= 0.961
Step 570, Minibatch Loss= 1885.4563, Training Accuracy= 0.922
Step 580, Minibatch Loss= 820.5620, Training Accuracy= 0.953
Step 590, Minibatch Loss= 1448.5205, Training Accuracy= 0.938
Step 600, Minibatch Loss= 857.7993, Training Accuracy= 0.969
Step 610, Minibatch Loss= 1193.5856, Training Accuracy= 0.930
Step 620, Minibatch Loss= 1337.5518, Training Accuracy= 0.961
Step 630, Minibatch Loss= 2121.9165, Training Accuracy= 0.953
Step 640, Minibatch Loss= 1516.9609, Training Accuracy= 0.938
Step 650, Minibatch Loss= 666.7323, Training Accuracy= 0.977
Step 660, Minibatch Loss= 1004.4291, Training Accuracy= 0.953
Step 670, Minibatch Loss= 193.3173, Training Accuracy= 0.984
Step 680, Minibatch Loss= 1339.3765, Training Accuracy= 0.945
Step 690, Minibatch Loss= 709.9714, Training Accuracy= 0.961
Step 700, Minibatch Loss= 1380.6301, Training Accuracy= 0.953
Step 710, Minibatch Loss= 630.5464, Training Accuracy= 0.977
Step 720, Minibatch Loss= 667.1447, Training Accuracy= 0.953
Step 730, Minibatch Loss= 1253.6014, Training Accuracy= 0.977
Step 740, Minibatch Loss= 473.8666, Training Accuracy= 0.984
Step 750, Minibatch Loss= 809.3101, Training Accuracy= 0.961
Step 760, Minibatch Loss= 508.8592, Training Accuracy= 0.984
Step 770, Minibatch Loss= 308.9244, Training Accuracy= 0.969
Step 780, Minibatch Loss= 1291.0034, Training Accuracy= 0.984
Step 790, Minibatch Loss= 1884.8574, Training Accuracy= 0.938
Step 800, Minibatch Loss= 1481.6635, Training Accuracy= 0.961
Step 810, Minibatch Loss= 463.2684, Training Accuracy= 0.969
Step 820, Minibatch Loss= 1116.5591, Training Accuracy= 0.961
Step 830, Minibatch Loss= 2422.9155, Training Accuracy= 0.953
Step 840, Minibatch Loss= 471.8990, Training Accuracy= 0.984
Step 850, Minibatch Loss= 1480.4053, Training Accuracy= 0.945
Step 860, Minibatch Loss= 1062.6339, Training Accuracy= 0.938
Step 870, Minibatch Loss= 833.3881, Training Accuracy= 0.953
Step 880, Minibatch Loss= 2153.9014, Training Accuracy= 0.953
Step 890, Minibatch Loss= 1617.7456, Training Accuracy= 0.953
Step 900, Minibatch Loss= 347.2119, Training Accuracy= 0.969
Step 910, Minibatch Loss= 175.5020, Training Accuracy= 0.977
Step 920, Minibatch Loss= 680.8482, Training Accuracy= 0.969
Step 930, Minibatch Loss= 240.1681, Training Accuracy= 0.977
Step 940, Minibatch Loss= 882.4927, Training Accuracy= 0.977
Step 950, Minibatch Loss= 407.1322, Training Accuracy= 0.977
Step 960, Minibatch Loss= 300.9460, Training Accuracy= 0.969
Step 970, Minibatch Loss= 1848.9391, Training Accuracy= 0.945
Step 980, Minibatch Loss= 496.5137, Training Accuracy= 0.969
Step 990, Minibatch Loss= 473.6212, Training Accuracy= 0.969
Step 1000, Minibatch Loss= 124.8958, Training Accuracy= 0.992
Optimization Finished!

[Update Log] 2019-09-02
tf.train.Saver accepts some additional parameters; see the official documentation for details:

  • max_to_keep: how many of the most recent checkpoints to keep, so checkpoints don't pile up until the disk is full; once the limit is reached, the oldest checkpoint is deleted so the number stored stays at the set value
  • keep_checkpoint_every_n_hours: additionally keep one checkpoint every n hours
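The max_to_keep rotation can be pictured as a fixed-length queue of checkpoint names: when a new one is saved past the limit, the oldest is dropped (a pure-Python sketch of the rotation behavior, not Saver's actual implementation):

```python
from collections import deque

max_to_keep = 5
checkpoints = deque(maxlen=max_to_keep)  # the oldest entry drops automatically

for step in range(0, 80, 10):            # pretend we save every 10 steps
    checkpoints.append('CNN_Mnist-%d' % step)

print(list(checkpoints))  # only the 5 most recent remain: steps 30..70
```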

Loading the Model and Testing a Single Image

What held me up for ages: when defining the training network's inputs I didn't pass a name, so get_tensor_by_name could never retrieve the corresponding input interface and the test image couldn't be fed into the network. When I finally found the cause, I nearly squealed with rage.

Reading the Image

Read and preprocess the image directly with OpenCV; be sure to process it the same way as during training, and finally reshape it to (1, 28, 28, 3). This part also tripped me up for a while; more on that below.

import cv2
import numpy as np

images = []
image = cv2.imread('./mnist/test/5/5_9.png')
images.append(image)
images = np.array(images, dtype=np.uint8)
images = images.astype('float32')
images = np.subtract(np.multiply(images, 1.0 / 127.5), 1.0)
x_batch = images.reshape(1, 28, 28, 3)

Loading the Model

sess = tf.Session()
saver = tf.train.import_meta_graph('./cnn_mnist_model/CNN_Mnist.meta')
saver.restore(sess, './cnn_mnist_model/CNN_Mnist')

Prediction

Retrieve the prediction op and the test-image input placeholder:

graph = tf.get_default_graph()
pred = graph.get_tensor_by_name('prediction:0')
# X = graph.get_operation_by_name('X').outputs[0]
X = graph.get_tensor_by_name('X:0')
keep_prob = graph.get_tensor_by_name('keep_prob:0')

Predict directly:

result = sess.run(pred, feed_dict={X: x_batch, keep_prob: 1.0})
print(result)

Result

2018-08-03 12:46:50.098990: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:958]      0
2018-08-03 12:46:50.101351: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:971] 0:   N
2018-08-03 12:46:50.104446: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4726 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
[[0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]]
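The prediction is a softmax row vector; the predicted digit is its argmax. For the vector above:

```python
import numpy as np

result = np.array([[0., 0., 0., 0., 0., 1., 0., 0., 0., 0.]])
digit = int(np.argmax(result, axis=1)[0])
print(digit)  # 5
```

This matches the test image, which came from the `5` folder.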

I also tested several other images, and the results were basically all correct.

I said above that reading the image tripped me up for a while. The reason: at first I read the test image the same way as during training, i.e. with TensorFlow's tf.image ops:

image = tf.read_file('./mnist/test/5/5_9.png')
image = tf.image.decode_jpeg(image, channels=3)
# resize the image to the specified size
image = tf.image.resize_images(image, [28, 28])
# manual normalization
image = image * 1.0 / 127.5 - 1.0
image = tf.reshape(image, shape=[1, 28, 28, 3])

This raised the following error:

TypeError: The value of a feed cannot be a tf.Tensor object. Acceptable feed values include Python scalars, strings, lists, numpy ndarrays, or TensorHandles. For reference, the tensor object was Tensor("Reshape:0", shape=(1, 28, 28, 3), dtype=float32) which was passed to the feed with key Tensor("X:0", shape=(?, 28, 28, 3), dtype=float32).

The error says that a tf.Tensor cannot be fed to the input placeholder; acceptable feed values are Python scalars, strings, lists, numpy ndarrays, and so on. Annoying. So the value has to be pulled out of the tf.Tensor first; if you have read my earlier posts, you know eval() does this, just like in Theano:

Change the original test line

result = sess.run(pred, feed_dict={X: image, keep_prob: 1.0})

to:

result = sess.run(pred, feed_dict={X: image.eval(), keep_prob: 1.0})

and it works. So for testing, I recommend processing with plain numpy variables directly rather than converting to TensorFlow tensors only to convert back again.

Postscript

I really envy how concise model building, saving, and loading are in TensorLayer and tflearn; I'd love to climb out of this pit.

Code for this post:

Training: Link: https://pan.baidu.com/s/1zZYGgnGj3kttklzZnyJLQA Password: zepl

Testing: Link: https://pan.baidu.com/s/1BygjSatjxtuVIq_HHt9o7A Password: ky87

