RNN-based Language Models
Theoretical foundations of the RNN language model
References
Limitations of CBOW / skip-gram
- Solution
RNN model details
- Mathematical representation
A model that maps a single input to a single output, with no hidden state fed back into itself, is not a recurrent neural network.
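As a minimal sketch of the mathematical representation (the weight names below are my own notation, not taken from the notes), a vanilla RNN keeps a hidden state that is fed back at every time step:

h_t = \tanh(W_{xh} x_t + W_{hh} h_{t-1} + b_h)
\hat{y}_t = \mathrm{softmax}(W_{hy} h_t + b_y)

It is the dependence of h_t on h_{t-1} that makes the network recurrent.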
The RNN language model in practice
demo1
- 1A. Improve the RNN model from the previous lecture
In this first version, we wrap the code from the previous lecture into a class and use TensorFlow's built-in RNN implementation for forward propagation.
We will also discuss how to use cross-validation.
import numpy as np

class BatchGenerator(object):
    # Minibatch generator: each call to next_batch returns one minibatch.
    def __init__(self, tensor_in, tensor_out, batch_size, seq_length):
        """Initialize the minibatch generator (base class).
        Input:
            batch_size: number of samples in each minibatch.
            seq_length: length of each sample; together with batch_size it
                        determines the amount of data in each minibatch.
        """
        self.batch_size = batch_size
        self.seq_length = seq_length
        self.tensor_in = tensor_in
        self.tensor_out = tensor_out
        self.create_batches()
        self.reset_batch_pointer()

    def reset_batch_pointer(self):
        self.pointer = 0

    def create_batches(self):
        self.num_batches = int(self.tensor_in.size / (self.batch_size * self.seq_length))
        self.tensor_in = self.tensor_in[:self.num_batches * self.batch_size * self.seq_length]
        self.tensor_out = self.tensor_out[:self.num_batches * self.batch_size * self.seq_length]
        # When the data (tensor) is too small, give a better error message.
        if self.num_batches == 0:
            assert False, "Not enough data. Make seq_length and batch_size small."
        # np.split splits tensor_in along axis 1 into num_batches groups.
        self.x_batches = np.split(self.tensor_in.reshape(self.batch_size, -1), self.num_batches, 1)
        self.y_batches = np.split(self.tensor_out.reshape(self.batch_size, -1), self.num_batches, 1)

    def next_batch(self):
        x, y = self.x_batches[self.pointer], self.y_batches[self.pointer]
        self.pointer += 1
        return x, y

class CopyBatchGenerator(BatchGenerator):
    def __init__(self, data, batch_size, seq_length):
        """Initialize the minibatch generator.
        Given a single sequence of length T, the first T-1 elements are the input
        and the last T-1 elements are the output. Used to train the RNNLM.
        Input:
            batch_size: number of samples in each minibatch.
            seq_length: length of each sample; together with batch_size it
                        determines the amount of data in each minibatch.
        """
        self.batch_size = batch_size
        self.seq_length = seq_length
        tensor_in = np.array(data)
        tensor_out = np.copy(tensor_in)
        tensor_out[:-1] = tensor_in[1:]
        tensor_out[-1] = tensor_in[0]
        super(CopyBatchGenerator, self).__init__(tensor_in, tensor_out, batch_size, seq_length)

class PredBatchGenerator(BatchGenerator):
    def __init__(self, data_in, data_out, batch_size, seq_length):
        """Initialize the minibatch generator.
        Given two sequences of length T, one is the input sequence and the other
        is the output sequence.
        Input:
            batch_size: number of samples in each minibatch.
            seq_length: length of each sample; together with batch_size it
                        determines the amount of data in each minibatch.
        """
        self.batch_size = batch_size
        self.seq_length = seq_length
        tensor_in = np.array(data_in)
        tensor_out = np.array(data_out)
        super(PredBatchGenerator, self).__init__(tensor_in, tensor_out, batch_size, seq_length)
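A minimal usage sketch of CopyBatchGenerator on a toy integer sequence (the toy data below is made up for illustration; in the lecture the data comes from a real corpus):

data = list(range(100))              # toy "character id" sequence
gen = CopyBatchGenerator(data, batch_size=2, seq_length=5)
x, y = gen.next_batch()
print(x.shape, y.shape)              # (2, 5) (2, 5)
# y is x shifted left by one position, i.e. the next-character targets.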
- Defining the CharRNN model
As in the previous lecture, the inputs and outputs of our RNN model are sequences of the same length; we call this a char-level RNN model.
Next week we will look at models that take whole sentences as input and output.
BasicRNNCell is the simplest implementation of the abstract class RNNCell and ships with TensorFlow itself. Note that its call method returns the output twice: once as the cell output and once as the state handed to the next time step.
class BasicRNNCell(RNNCell):
    def __init__(self, num_units, activation=None, reuse=None):
        super(BasicRNNCell, self).__init__(_reuse=reuse)
        self._num_units = num_units
        self._activation = activation or math_ops.tanh
        self._linear = None

    @property
    def state_size(self):
        return self._num_units

    @property
    def output_size(self):
        return self._num_units

    def call(self, inputs, state):
        if self._linear is None:
            self._linear = _Linear([inputs, state], self._num_units, True)
        output = self._activation(self._linear([inputs, state]))
        return output, output
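To see the "returns the output twice" behaviour described above, here is a one-step usage sketch (the batch size and dimensions are made up for illustration):

import tensorflow as tf

cell = tf.nn.rnn_cell.BasicRNNCell(num_units=16)
x_t = tf.zeros([8, 4])                      # a batch of 8 inputs, each of size 4
h_prev = cell.zero_state(8, tf.float32)     # previous hidden state, shape [8, 16]
output, h_next = cell(x_t, h_prev)          # for BasicRNNCell, output and h_next are the same tensor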
The depth of the embedded input tensor is the dimension of the word vectors.
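A shape sketch of the embedding lookup used in the model below (the sizes and the variable name are hypothetical): integer inputs of shape [batch_size, num_unrollings] become a float tensor of shape [batch_size, num_unrollings, embedding_size], so the "depth" is the embedding dimension.

import tensorflow as tf

batch_size, num_unrollings, vocab_size, embedding_size = 2, 3, 5, 4
ids = tf.placeholder(tf.int64, [batch_size, num_unrollings])
embedding = tf.get_variable('embedding_demo', [vocab_size, embedding_size])
embedded = tf.nn.embedding_lookup(embedding, ids)
print(embedded.shape)    # (2, 3, 4)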
import time
import numpy as np
import tensorflow as tf

class CharRNNLM(object):
    def __init__(self, batch_size, num_unrollings, vocab_size,
                 hidden_size, embedding_size, learning_rate):
        """Character-to-character RNN model.

        The training data consists of two sequences of equal length: one is the
        input sequence and the other is the output (target) sequence.
        """
        self.batch_size = batch_size
        self.num_unrollings = num_unrollings
        self.hidden_size = hidden_size
        self.vocab_size = vocab_size
        self.embedding_size = embedding_size

        self.input_data = tf.placeholder(tf.int64, [self.batch_size, self.num_unrollings], name='inputs')
        self.targets = tf.placeholder(tf.int64, [self.batch_size, self.num_unrollings], name='targets')

        cell_fn = tf.nn.rnn_cell.BasicRNNCell
        params = dict()
        cell = cell_fn(self.hidden_size, **params)

        with tf.name_scope('initial_state'):
            self.zero_state = cell.zero_state(self.batch_size, tf.float32)
            self.initial_state = tf.placeholder(
                tf.float32, [self.batch_size, cell.state_size], 'initial_state')

        with tf.name_scope('embedding_layer'):
            # Define the embedding parameters and convert each integer in the input
            # sequence to its embedding vector via lookup.
            # If an embedding size is given, declare an embedding matrix (one word
            # vector per row); otherwise use the identity matrix as the embedding matrix.
            if embedding_size > 0:
                self.embedding = tf.get_variable('embedding', [self.vocab_size, self.embedding_size])
            else:
                self.embedding = tf.constant(np.eye(self.vocab_size), dtype=tf.float32)
            inputs = tf.nn.embedding_lookup(self.embedding, self.input_data)

        with tf.name_scope('slice_inputs'):
            # static_rnn expects the length-num_unrollings sequence to be split into
            # num_unrollings tensors stored in a list, i.e. the input format is
            # [num_unrollings, (batch_size, embedding_size)].
            sliced_inputs = [tf.squeeze(input_, [1]) for input_ in tf.split(
                axis=1, num_or_size_splits=self.num_unrollings, value=inputs)]

        # Call static_rnn to do the forward propagation.
        # For readability, the gist of the static_rnn documentation:
        # tf.nn.static_rnn unrolls the graph to a fixed length: with 200 time steps you
        # get a static graph with 200 RNN steps. Graph creation is slower, and you cannot
        # feed sequences longer than the length specified at construction time (> 200).
        # tf.nn.dynamic_rnn solves this: when executed, it builds the graph with a loop,
        # so graph creation is faster and variable-sized batches can be fed.
        # In short, static_rnn first builds all the cells and only then runs them.
        #
        # Inputs:
        #   inputs: a length-T list of inputs, each a Tensor of shape [batch_size, input_size]
        #   initial_state: an initial state for the RNN. If cell.state_size is an integer,
        #                  this must be a Tensor of shape [batch_size, cell.state_size]
        # Outputs:
        #   outputs: a length-T list of outputs (one for each input)
        #   state: the final state
        outputs, final_state = tf.nn.static_rnn(
            cell=cell,                          # the internal structure of the RNN cell
            inputs=sliced_inputs,               # the inputs
            initial_state=self.initial_state)
        self.final_state = final_state

        with tf.name_scope('flatten_outputs'):
            flat_outputs = tf.reshape(tf.concat(axis=1, values=outputs), [-1, hidden_size])

        with tf.name_scope('flatten_targets'):
            flat_targets = tf.reshape(tf.concat(axis=1, values=self.targets), [-1])

        with tf.variable_scope('softmax') as sm_vs:
            softmax_w = tf.get_variable('softmax_w', [hidden_size, vocab_size])
            softmax_b = tf.get_variable('softmax_b', [vocab_size])
            self.logits = tf.matmul(flat_outputs, softmax_w) + softmax_b
            self.probs = tf.nn.softmax(self.logits)

        with tf.name_scope('loss'):
            loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
                logits=self.logits, labels=flat_targets)
            self.mean_loss = tf.reduce_mean(loss)

        with tf.name_scope('loss_montor'):
            count = tf.Variable(1.0, name='count')
            sum_mean_loss = tf.Variable(1.0, name='sum_mean_loss')
            self.reset_loss_monitor = tf.group(
                sum_mean_loss.assign(0.0), count.assign(0.0), name='reset_loss_monitor')
            self.update_loss_monitor = tf.group(
                sum_mean_loss.assign(sum_mean_loss + self.mean_loss),
                count.assign(count + 1), name='update_loss_monitor')
            with tf.control_dependencies([self.update_loss_monitor]):
                self.average_loss = sum_mean_loss / count
                self.ppl = tf.exp(self.average_loss)

        self.global_step = tf.get_variable('global_step', [], initializer=tf.constant_initializer(0.0))
        self.learning_rate = tf.placeholder(tf.float32, [], name='learning_rate')

        tvars = tf.trainable_variables()
        grads = tf.gradients(self.mean_loss, tvars)
        optimizer = tf.train.AdamOptimizer(self.learning_rate)
        self.train_op = optimizer.apply_gradients(zip(grads, tvars), global_step=self.global_step)

    # Run one epoch.
    # Note that we pass the session in as an input argument.
    def run_epoch(self, session, batch_generator, learning_rate, freq=10):
        epoch_size = batch_generator.num_batches
        extra_op = self.train_op

        state = self.zero_state.eval()
        self.reset_loss_monitor.run()
        batch_generator.reset_batch_pointer()
        start_time = time.time()
        for step in range(epoch_size):
            x, y = batch_generator.next_batch()
            ops = [self.average_loss, self.ppl, self.final_state, extra_op, self.global_step]
            feed_dict = {self.input_data: x, self.targets: y,
                         self.initial_state: state,
                         self.learning_rate: learning_rate}
            results = session.run(ops, feed_dict)
            # option 1: use the final state of the previous minibatch
            # as the initial state of the next minibatch
            average_loss, ppl, state, _, global_step = results
            # option 2: always use a zero tensor as the initial state of the next minibatch
            # average_loss, ppl, final_state, _, global_step = results
        return ppl, global_step
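The comments above contrast static_rnn with dynamic_rnn. As a hedged sketch (not the version used in this lecture), the slice_inputs step and the static_rnn call in CharRNNLM.__init__ could be replaced by a single dynamic_rnn call, since dynamic_rnn consumes the [batch_size, num_unrollings, embedding_size] tensor directly:

outputs, final_state = tf.nn.dynamic_rnn(
    cell=cell,                         # same RNN cell as above
    inputs=inputs,                     # [batch_size, num_unrollings, embedding_size]
    initial_state=self.initial_state)
# outputs now has shape [batch_size, num_unrollings, hidden_size],
# so flattening for the softmax layer becomes a single reshape:
flat_outputs = tf.reshape(outputs, [-1, hidden_size])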
Import the module that generates synthetic data:
from data.synthetic.synthetic_binary import gen_data
Demonstrating a variable scope conflict
If the code cell below is run twice in a row, the following error appears (note the hint about reuse):
ValueError: Variable embedding already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:
File "<ipython-input-1-2c5b9a1002a7>", line 36, in __init__self.embedding = tf.get_variable('embedding', [self.vocab_size, self.embedding_size])File "<ipython-input-2-44e044623871>", line 9, in <module>vocab_size, hidden_size, embedding_size, learning_rate)File "/home/dong/anaconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2881, in run_codeexec(code_obj, self.user_global_ns, self.user_ns)
Testing it out
batch_size = 16
num_unrollings = 20
vocab_size = 2
hidden_size = 16
embedding_size = 16
learning_rate = 0.01
model = CharRNNLM(batch_size, num_unrollings,
                  vocab_size, hidden_size, embedding_size, learning_rate)
dataset = gen_data(size = 1000000)
batch_size = 16
seq_length = num_unrollings
batch_generator = PredBatchGenerator(data_in=dataset[0],
                                     data_out=dataset[1],
                                     batch_size=batch_size,
                                     seq_length=seq_length)
#batch_generator = BatchGenerator(dataset[0], batch_size, seq_length)
session = tf.Session()
with session.as_default():
    for epoch in range(1):
        session.run(tf.global_variables_initializer())
        ppl, global_step = model.run_epoch(session, batch_generator, learning_rate, freq=10)
        print(ppl)
Output
1.58694
1.59246
1.59855
1.59121
1.59335
Print the variables
all_vars = [node.name for node in tf.global_variables()]
for var in all_vars:
    print(var)
Output:
embedding:0
rnn/basic_rnn_cell/kernel:0
rnn/basic_rnn_cell/bias:0
softmax/softmax_w:0
softmax/softmax_b:0
loss_montor/count:0
loss_montor/sum_mean_loss:0
global_step:0
beta1_power:0
beta2_power:0
embedding/Adam:0
embedding/Adam_1:0
rnn/basic_rnn_cell/kernel/Adam:0
rnn/basic_rnn_cell/kernel/Adam_1:0
rnn/basic_rnn_cell/bias/Adam:0
rnn/basic_rnn_cell/bias/Adam_1:0
softmax/softmax_w/Adam:0
softmax/softmax_w/Adam_1:0
softmax/softmax_b/Adam:0
softmax/softmax_b/Adam_1:0
Improvement: how do we do cross-validation?
Define a second CharRNNLM object and use it to compute the perplexity on the validation data.
tf.get_variable_scope().reuse_variables()
valid_model = CharRNNLM(batch_size, num_unrollings,
                        vocab_size, hidden_size, embedding_size, learning_rate)
Key point: a debugging exercise
ValueError: Variable embedding/Adam_2/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?
The key is Adam_2: compare with the variable list above, which only goes up to Adam_1.
When we create the validation (and test) model objects, we should disable the optimizer:
- When evaluating the model we do not want to update its parameters.
- Removing the optimizer also avoids the error above.
In addition, we add summary support; see version 2.
To solve the problem above, we add an is_training parameter.
import time
import numpy as np
import tensorflow as tf

class CharRNNLM(object):
    def __init__(self, is_training, batch_size, num_unrollings, vocab_size,
                 hidden_size, embedding_size, learning_rate):
        """New argument:
        is_training: whether we are in the training phase.
        """
The optimizer is defined only when training; if we are not training, no optimizer is created, so the error from version 1 no longer occurs.
# mark: update from version 1 to version 2:
if is_training:
    tvars = tf.trainable_variables()
    grads = tf.gradients(self.mean_loss, tvars)
    optimizer = tf.train.AdamOptimizer(self.learning_rate)
    self.train_op = optimizer.apply_gradients(zip(grads, tvars), global_step=self.global_step)
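With the is_training flag, a training model and a validation model can share parameters without creating the extra Adam slot variables that caused the error above. A sketch, assuming the version-2 constructor signature:

train_model = CharRNNLM(True, batch_size, num_unrollings,
                        vocab_size, hidden_size, embedding_size, learning_rate)

# Reuse the variables created by the training model; since is_training=False,
# no optimizer (and no Adam slot variables) is built for the validation model.
tf.get_variable_scope().reuse_variables()
valid_model = CharRNNLM(False, batch_size, num_unrollings,
                        vocab_size, hidden_size, embedding_size, learning_rate)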
Adding the summary functionality:
# mark: version 1 --> version 2
# Add summaries so the training process can be monitored in TensorBoard.
average_loss_summary = tf.summary.scalar(name='average_loss', tensor=self.average_loss)
ppl_summary = tf.summary.scalar(name='perplexity', tensor=self.ppl)
self.summaries = tf.summary.merge(inputs=[average_loss_summary, ppl_summary], name='loss_monitor')
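A minimal sketch of how these summaries could be written out for TensorBoard (the log directory and the fetch pattern are assumptions, not from the lecture code):

writer = tf.summary.FileWriter('logs/char_rnn', graph=tf.get_default_graph())
# Inside the training loop, fetch model.summaries along with the other ops:
#     summary_str = session.run(model.summaries, feed_dict)
#     writer.add_summary(summary_str, global_step=step)
writer.flush()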
- ppl stands for perplexity. It is never 0, and it does not necessarily converge monotonically to a single value: the training-set perplexity does, but the test/validation perplexity may first decrease and then increase (overfitting). See the formula sketch after this list.
- The optimizer is applied only after the loss function has been computed.
- The char-RNN is a special case; conceptually a full RNN language model works the same way.
- Hyperparameter tuning: when the training and validation data are generated by the same rule, the results are very good.
- What a DNN/RNN-based chatbot will say cannot be predicted in advance, whereas a template (rule-based) chatbot, e.g. one restricted to a fixed domain, produces predictable responses.
- The task throughout: predict what the next character is.
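For reference, the perplexity reported by the code above is the exponential of the average per-character cross-entropy (this matches self.ppl = tf.exp(self.average_loss)); as a minimal sketch, with N predicted characters:

\mathrm{PPL} = \exp\Big(-\frac{1}{N}\sum_{t=1}^{N} \log p(w_t \mid w_{<t})\Big)

For the binary synthetic data used here (vocab_size = 2), uniform random guessing gives a perplexity of 2, so the values around 1.59 printed above indicate the model has learned some structure.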