Implementing a Deep Belief Network (DBN) in TensorFlow with Python 3: MNIST Handwritten Digit Recognition


Recognizing handwritten digits with a DBN

One problem with traditional multilayer perceptrons and neural networks is that backpropagation can converge to a local minimum. When the error surface contains multiple troughs, gradient descent will not necessarily find the deepest one. Below you will see how a DBN addresses this problem.
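
As a toy illustration of this (not part of the DBN code), plain gradient descent on a one-dimensional error surface with two troughs ends up in whichever trough is nearest to its starting point, not necessarily the deepest one:

# Toy 1-D "error surface" with two troughs: a deeper one near x = -1.0
# and a shallower one near x = +0.96 (illustrative only).
f  = lambda x: x**4 - 2 * x**2 + 0.3 * x
df = lambda x: 4 * x**3 - 4 * x + 0.3   # its gradient

for x0 in (-2.0, 2.0):
    x = x0
    for _ in range(200):
        x -= 0.01 * df(x)  # plain gradient descent
    print('start %+.1f -> x = %+.3f, error = %+.3f' % (x0, x, f(x)))
# Started from the right, descent gets trapped in the shallower trough.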

Deep belief networks

A deep belief network tackles the local-minimum problem with an additional pre-training procedure. Pre-training runs before backpropagation, so the network starts out close to an optimal solution rather than far from it; backpropagation then gradually reduces the error from there.
A DBN consists of two parts. The first is a stack of restricted Boltzmann machines (RBMs), used to pre-train the network. The second is a feed-forward backpropagation network, which fine-tunes the result of the stacked RBMs.
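
In outline, the whole procedure of this post looks as follows (a sketch wiring together the RBM and NN classes defined in sections 2 and 6 below):

# Sketch of the two-phase DBN training pipeline (RBM and NN are defined below).
def train_dbn(layer_sizes, X, Y):
    # Phase 1: greedy, unsupervised layer-wise pre-training, one RBM per layer
    rbms, inp = [], X
    for size in layer_sizes:
        rbm = RBM(inp.shape[1], size)
        rbm.train(inp)            # contrastive divergence on this layer's input
        inp = rbm.rbm_outpt(inp)  # this layer's output feeds the next RBM
        rbms.append(rbm)
    # Phase 2: supervised fine-tuning of the whole stack with backpropagation
    net = NN(layer_sizes, X, Y)
    net.load_from_rbms(layer_sizes, rbms)
    net.train()
    return net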

1. Load the required libraries

# urllib is used to download the utils file from deeplearning.net
import urllib.request
response = urllib.request.urlopen('http://deeplearning.net/tutorial/code/utils.py')
content = response.read().decode('utf-8')
target = open('utils.py', 'w')
target.write(content)
target.close()
# Import the math function for calculations
import math
# Tensorflow library. Used to implement machine learning models
import tensorflow as tf
# Numpy contains helpful functions for efficient mathematical calculations
import numpy as np
# Image library for image manipulation
from PIL import Image
# import Image
# Utils file
from utils import tile_raster_images

2. Build the RBM layer

For the details of RBMs, see the earlier post 受限玻尔兹曼机(RBM)与python在Tensorflow的实现 (CSDN blog).

(Figure: RBM structure)

To implement the DBN in TensorFlow, we create an RBM class below:

class RBM(object):

    def __init__(self, input_size, output_size):
        # Defining the hyperparameters
        self._input_size = input_size    # Size of input
        self._output_size = output_size  # Size of output
        self.epochs = 5                  # Amount of training iterations
        self.learning_rate = 1.0         # The step used in gradient descent
        self.batchsize = 100             # How much data is used for training per sub-iteration

        # Initializing weights and biases as matrices full of zeroes
        self.w = np.zeros([input_size, output_size], np.float32)  # Creates and initializes the weights with 0
        self.hb = np.zeros([output_size], np.float32)  # Creates and initializes the hidden biases with 0
        self.vb = np.zeros([input_size], np.float32)  # Creates and initializes the visible biases with 0

    # Fits the result from the weighted visible layer plus the bias into a sigmoid curve
    def prob_h_given_v(self, visible, w, hb):
        # Sigmoid
        return tf.nn.sigmoid(tf.matmul(visible, w) + hb)

    # Fits the result from the weighted hidden layer plus the bias into a sigmoid curve
    def prob_v_given_h(self, hidden, w, vb):
        return tf.nn.sigmoid(tf.matmul(hidden, tf.transpose(w)) + vb)

    # Generate a binary sample from the given probabilities
    def sample_prob(self, probs):
        return tf.nn.relu(tf.sign(probs - tf.random_uniform(tf.shape(probs))))

    # Training method for the model
    def train(self, X):
        # Create the placeholders for our parameters
        _w = tf.placeholder("float", [self._input_size, self._output_size])
        _hb = tf.placeholder("float", [self._output_size])
        _vb = tf.placeholder("float", [self._input_size])

        prv_w = np.zeros([self._input_size, self._output_size], np.float32)  # Creates and initializes the weights with 0
        prv_hb = np.zeros([self._output_size], np.float32)  # Creates and initializes the hidden biases with 0
        prv_vb = np.zeros([self._input_size], np.float32)  # Creates and initializes the visible biases with 0

        cur_w = np.zeros([self._input_size, self._output_size], np.float32)
        cur_hb = np.zeros([self._output_size], np.float32)
        cur_vb = np.zeros([self._input_size], np.float32)

        v0 = tf.placeholder("float", [None, self._input_size])

        # One Gibbs step: sample h0 from v0, reconstruct v1 from h0, recompute h1
        h0 = self.sample_prob(self.prob_h_given_v(v0, _w, _hb))
        v1 = self.sample_prob(self.prob_v_given_h(h0, _w, _vb))
        h1 = self.prob_h_given_v(v1, _w, _hb)

        # Create the gradients (positive and negative phase of contrastive divergence)
        positive_grad = tf.matmul(tf.transpose(v0), h0)
        negative_grad = tf.matmul(tf.transpose(v1), h1)

        # Update rules for the weights and biases
        update_w = _w + self.learning_rate * (positive_grad - negative_grad) / tf.to_float(tf.shape(v0)[0])
        update_vb = _vb + self.learning_rate * tf.reduce_mean(v0 - v1, 0)
        update_hb = _hb + self.learning_rate * tf.reduce_mean(h0 - h1, 0)

        # Find the error rate (mean squared reconstruction error)
        err = tf.reduce_mean(tf.square(v0 - v1))

        # Training loop
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            # For each epoch
            for epoch in range(self.epochs):
                # For each step/batch
                for start, end in zip(range(0, len(X), self.batchsize),
                                      range(self.batchsize, len(X), self.batchsize)):
                    batch = X[start:end]
                    # Update the weights and biases
                    cur_w = sess.run(update_w, feed_dict={v0: batch, _w: prv_w, _hb: prv_hb, _vb: prv_vb})
                    cur_hb = sess.run(update_hb, feed_dict={v0: batch, _w: prv_w, _hb: prv_hb, _vb: prv_vb})
                    cur_vb = sess.run(update_vb, feed_dict={v0: batch, _w: prv_w, _hb: prv_hb, _vb: prv_vb})
                    prv_w = cur_w
                    prv_hb = cur_hb
                    prv_vb = cur_vb
                error = sess.run(err, feed_dict={v0: X, _w: cur_w, _vb: cur_vb, _hb: cur_hb})
                print('Epoch: %d' % epoch, 'reconstruction error: %f' % error)
            self.w = prv_w
            self.hb = prv_hb
            self.vb = prv_vb

    # Create the expected output for our DBN: propagate the input through the trained RBM
    def rbm_outpt(self, X):
        input_X = tf.constant(X)
        _w = tf.constant(self.w)
        _hb = tf.constant(self.hb)
        out = tf.nn.sigmoid(tf.matmul(input_X, _w) + _hb)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            return sess.run(out)
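
To make sample_prob concrete, here is a plain numpy analogue (illustrative only): each unit switches on with probability equal to its sigmoid activation, producing a stochastic binary state.

import numpy as np

probs = np.array([0.1, 0.5, 0.9], np.float32)
# Compare each probability against a uniform random number, as sample_prob does
samples = (probs > np.random.uniform(size=probs.shape)).astype(np.float32)
print(samples)  # e.g. [0. 1. 1.] -- the unit with p=0.9 fires most of the time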

3. Load the MNIST data

Load the MNIST images, with the labels in one-hot encoding.

# Getting the MNIST data provided by Tensorflow
from tensorflow.examples.tutorials.mnist import input_data

# Loading in the mnist data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
trX, trY, teX, teY = mnist.train.images, mnist.train.labels, \
                     mnist.test.images, mnist.test.labels

Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
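
For orientation: the loader splits off 5,000 validation images, so the arrays have the following shapes, with each image flattened to 784 pixels and each label one-hot over 10 classes.

print(trX.shape, trY.shape)  # (55000, 784) (55000, 10)
print(teX.shape, teY.shape)  # (10000, 784) (10000, 10)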

4. Build the DBN

RBM_hidden_sizes = [500, 200, 50]  # three stacked RBMs with sizes 784-500-200-50

# Since we are training, set input as training data
inpX = trX

# Create list to hold our RBMs
rbm_list = []

# Size of inputs is the number of inputs in the training set
input_size = inpX.shape[1]

# For each RBM we want to generate
for i, size in enumerate(RBM_hidden_sizes):
    print('RBM: ', i, ' ', input_size, '->', size)
    rbm_list.append(RBM(input_size, size))
    input_size = size
RBM:  0   784 -> 500
RBM:  1   500 -> 200
RBM:  2   200 -> 50

With the RBM class created and the data loaded, we can build the DBN. In this example we use three RBMs: the first has 500 hidden units, the second 200, and the last 50. We want to generate a deep hierarchical representation of the training data.

5. Train the RBMs

We start the pre-training step with rbm.train(), training each RBM in the stack separately and feeding the output of the current RBM to the next RBM as its input.

# For each RBM in our list
for rbm in rbm_list:
    print('New RBM:')
    # Train a new one
    rbm.train(inpX)
    # Return the output layer
    inpX = rbm.rbm_outpt(inpX)
New RBM:
Epoch: 0 reconstruction error: 0.061174
Epoch: 1 reconstruction error: 0.052962
Epoch: 2 reconstruction error: 0.049679
Epoch: 3 reconstruction error: 0.047683
Epoch: 4 reconstruction error: 0.045691
New RBM:
Epoch: 0 reconstruction error: 0.035260
Epoch: 1 reconstruction error: 0.030811
Epoch: 2 reconstruction error: 0.028873
Epoch: 3 reconstruction error: 0.027428
Epoch: 4 reconstruction error: 0.026980
New RBM:
Epoch: 0 reconstruction error: 0.059593
Epoch: 1 reconstruction error: 0.056837
Epoch: 2 reconstruction error: 0.055571
Epoch: 3 reconstruction error: 0.053817
Epoch: 4 reconstruction error: 0.054142

Now we can turn the learned representation of the input data into a supervised prediction with a simple classifier. Specifically, we use the output of the final layer of this stacked network to classify the digits.

6. The neural network

The following class implements the feed-forward neural network, initialized from the RBMs pre-trained above.

import numpy as np
import math
import tensorflow as tf


class NN(object):

    def __init__(self, sizes, X, Y):
        # Initialize hyperparameters
        self._sizes = sizes
        self._X = X
        self._Y = Y
        self.w_list = []
        self.b_list = []
        self._learning_rate = 1.0
        self._momentum = 0.0
        self._epoches = 10
        self._batchsize = 100
        input_size = X.shape[1]

        # Initialization loop over the hidden layers plus the output layer
        for size in self._sizes + [Y.shape[1]]:
            # Define upper limit for the uniform distribution range
            max_range = 4 * math.sqrt(6. / (input_size + size))

            # Initialize weights through a random uniform distribution
            self.w_list.append(
                np.random.uniform(-max_range, max_range, [input_size, size]).astype(np.float32))

            # Initialize biases as zeroes
            self.b_list.append(np.zeros([size], np.float32))
            input_size = size

    # Load weights and biases from the pre-trained RBMs
    def load_from_rbms(self, dbn_sizes, rbm_list):
        # Check if the expected sizes are correct
        assert len(dbn_sizes) == len(self._sizes)
        for i in range(len(self._sizes)):
            # Check if the expected size of each RBM is correct
            assert dbn_sizes[i] == self._sizes[i]

        # If everything is correct, bring over the weights and biases
        for i in range(len(self._sizes)):
            self.w_list[i] = rbm_list[i].w
            self.b_list[i] = rbm_list[i].hb

    # Training method
    def train(self):
        # Create placeholders for input, weights, biases, output
        _a = [None] * (len(self._sizes) + 2)
        _w = [None] * (len(self._sizes) + 1)
        _b = [None] * (len(self._sizes) + 1)
        _a[0] = tf.placeholder("float", [None, self._X.shape[1]])
        y = tf.placeholder("float", [None, self._Y.shape[1]])

        # Define variables and activation function
        for i in range(len(self._sizes) + 1):
            _w[i] = tf.Variable(self.w_list[i])
            _b[i] = tf.Variable(self.b_list[i])
        for i in range(1, len(self._sizes) + 2):
            _a[i] = tf.nn.sigmoid(tf.matmul(_a[i - 1], _w[i - 1]) + _b[i - 1])

        # Define the cost function
        cost = tf.reduce_mean(tf.square(_a[-1] - y))

        # Define the training operation (momentum optimizer minimizing the cost function)
        train_op = tf.train.MomentumOptimizer(self._learning_rate, self._momentum).minimize(cost)

        # Prediction operation
        predict_op = tf.argmax(_a[-1], 1)

        # Training loop
        with tf.Session() as sess:
            # Initialize variables
            sess.run(tf.global_variables_initializer())

            # For each epoch
            for i in range(self._epoches):
                # For each step
                for start, end in zip(range(0, len(self._X), self._batchsize),
                                      range(self._batchsize, len(self._X), self._batchsize)):
                    # Run the training operation on the input data
                    sess.run(train_op, feed_dict={_a[0]: self._X[start:end], y: self._Y[start:end]})
                for j in range(len(self._sizes) + 1):
                    # Retrieve weights and biases
                    self.w_list[j] = sess.run(_w[j])
                    self.b_list[j] = sess.run(_b[j])
                print("Accuracy rating for epoch " + str(i) + ": " +
                      str(np.mean(np.argmax(self._Y, axis=1) ==
                                  sess.run(predict_op, feed_dict={_a[0]: self._X, y: self._Y}))))
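
As an aside, the uniform initialization range 4 * sqrt(6 / (fan_in + fan_out)) used in __init__ is the classic Glorot-style heuristic for sigmoid units; for the first layer (784 -> 500) it works out to about 0.273:

import math

max_range = 4 * math.sqrt(6. / (784 + 500))
print(round(max_range, 3))  # ~0.273, so weights start in [-0.273, 0.273]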

7. Run

nNet = NN(RBM_hidden_sizes, trX, trY)
nNet.load_from_rbms(RBM_hidden_sizes, rbm_list)
nNet.train()
Accuracy rating for epoch 0: 0.46683636363636366
Accuracy rating for epoch 1: 0.6561272727272728
Accuracy rating for epoch 2: 0.7678363636363637
Accuracy rating for epoch 3: 0.8370727272727273
Accuracy rating for epoch 4: 0.8684181818181819
Accuracy rating for epoch 5: 0.885
Accuracy rating for epoch 6: 0.8947636363636363
Accuracy rating for epoch 7: 0.9024909090909091
Accuracy rating for epoch 8: 0.9080363636363636
Accuracy rating for epoch 9: 0.9124181818181818
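
The test set (teX, teY) is loaded but never used above; the accuracies printed by train() are on the training data. Here is a minimal sketch (not in the original post) to evaluate the fine-tuned network on the held-out test set, reusing the weights that train() stores back into w_list and b_list:

import numpy as np

def nn_predict(net, X):
    # Forward pass with numpy, mirroring the sigmoid layers in NN.train()
    a = X
    for w, b in zip(net.w_list, net.b_list):
        a = 1.0 / (1.0 + np.exp(-(a.dot(w) + b)))
    return np.argmax(a, axis=1)

test_acc = np.mean(nn_predict(nNet, teX) == np.argmax(teY, axis=1))
print('Test accuracy: %f' % test_acc)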

Complete code

pip install tensorflow==1.13.1
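
The code throughout this post uses TensorFlow 1.x APIs (tf.placeholder, tf.Session, tf.train.MomentumOptimizer), so pinning 1.13.1 as above is the simplest route. If only TensorFlow 2.x is available, the usual compatibility shim is the following (untested here; note also that tensorflow.examples.tutorials.mnist was removed from TF 2.x packages, so the data loading would need replacing as well):

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()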

The full script is simply the pieces above combined in one file: the imports, the helper functions from deeplearning.net's utils.py (scale_to_unit_interval and tile_raster_images, pasted in so that no download is needed), the RBM class from section 2, the NN class from section 6, and the following main block:

# Import the math function for calculations
import math
# Tensorflow library. Used to implement machine learning models
import tensorflow as tf
# Numpy contains helpful functions for efficient mathematical calculations
import numpy as np
# Getting the MNIST data provided by Tensorflow
from tensorflow.examples.tutorials.mnist import input_data

# ... scale_to_unit_interval and tile_raster_images from utils.py ...
# ... the RBM and NN classes exactly as defined in sections 2 and 6 ...

if __name__ == '__main__':
    # Loading in the mnist data
    mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
    trX, trY, teX, teY = mnist.train.images, mnist.train.labels, \
                         mnist.test.images, mnist.test.labels

    RBM_hidden_sizes = [500, 200, 50]  # three stacked RBMs: 784-500-200-50

    # Since we are training, set input as training data
    inpX = trX

    # Create list to hold our RBMs
    rbm_list = []

    # Size of inputs is the number of inputs in the training set
    input_size = inpX.shape[1]

    # For each RBM we want to generate
    for i, size in enumerate(RBM_hidden_sizes):
        print('RBM: ', i, ' ', input_size, '->', size)
        rbm_list.append(RBM(input_size, size))
        input_size = size

    # For each RBM in our list
    for rbm in rbm_list:
        print('New RBM:')
        # Train a new one
        rbm.train(inpX)
        # Return the output layer
        inpX = rbm.rbm_outpt(inpX)

    nNet = NN(RBM_hidden_sizes, trX, trY)
    nNet.load_from_rbms(RBM_hidden_sizes, rbm_list)
    nNet.train()
