I am trying to reproduce, in TensorFlow, the character-level language modelling demonstrated in the excellent article http://karpathy.github.io/2015/05/21/rnn-effectiveness/.
So far my attempts have failed. My network typically collapses to outputting a single character after processing 800 or so characters. I believe I have fundamentally misunderstood how TensorFlow implements LSTMs, and perhaps RNNs in general, and I am finding the documentation hard to follow.
Here is the essence of my code:
Graph definition
idata = tf.placeholder(tf.int32, [None, 1])   # input byte, use value 256 for start and end of file
odata = tf.placeholder(tf.int32, [None, 1])   # target output byte, ie, next byte in sequence
source = tf.to_float(tf.one_hot(idata, 257))  # input byte as 1-hot float
target = tf.to_float(tf.one_hot(odata, 257))  # target output as 1-hot float
with tf.variable_scope("lstm01"):
    cell1 = tf.nn.rnn_cell.BasicLSTMCell(257)
    val1, state1 = tf.nn.dynamic_rnn(cell1, source, dtype=tf.float32)
output = val1
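For reference, here is how I understand the shapes involved (this understanding may itself be part of my mistake):

# idata:  [batch, 1]        int32 byte values
# source: [batch, 1, 257]   after one-hot encoding
# dynamic_rnn defaults to time_major=False, so it reads axis 1 as time,
# i.e. it is being handed sequences of length 1.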
Loss calculation
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(output, target))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
output_am = tf.argmax(output, 2)
target_am = tf.argmax(target, 2)
correct_prediction = tf.equal(output_am, target_am)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
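And the shapes on the loss side, as I read them:

# output:    [batch, 1, 257]  raw LSTM outputs, used directly as logits
# target:    [batch, 1, 257]  one-hot targets
# output_am: [batch, 1]       argmax over axis 2
# target_am: [batch, 1]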
Training
for i in range(0, source_data.size - 1, batch_size):
    start = i
    stop = i + batch_size
    i_data = source_data[start:stop].reshape([-1, 1])
    o_data = source_data[start+1:stop+1].reshape([-1, 1])
    train_step.run(feed_dict={idata: i_data, odata: o_data})

    if i % (report_interval * batch_size) == 0:
        batch_out, fa = sess.run([output_am, accuracy],
                                 feed_dict={idata: i_data, odata: o_data, keep_prob: 1.0})
        print("step %d, training accuracy %s" % (i, str(fa)))
        print("i_data sample: %s" % str(squeeze(i_data)))
        print("o_data sample: %s" % str(squeeze(o_data)))
        print("batch sample: %s" % str(squeeze(batch_out)))
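To illustrate the slicing, here is a worked example with a hypothetical batch_size of 4, writing the entries of source_data as the characters of "First" rather than their byte values:

# i_data = [[F], [i], [r], [s]]   shape [4, 1]
# o_data = [[i], [r], [s], [t]]   shape [4, 1]
# So consecutive characters land in the batch dimension, not the time dimension,
# and each call to train_step.run() starts dynamic_rnn from a fresh zero state.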
Output, training on a 1MB Shakespeare file
step 0, training accuracy 0.0
i_data sample: [ 256.   70.  105.  114.  115.  116.   32.   67.  105.  116.]
o_data sample: [  70.  105.  114.  115.  116.   32.   67.  105.  116.  105.]
batch sample: [254  18 151  64  51 199  83 174 151 199]

step 400, training accuracy 0.2
i_data sample: [  32.   98.  101.   32.  100.  111.  110.  101.   58.   32.]
o_data sample: [  98.  101.   32.  100.  111.  110.  101.   58.   32.   97.]
batch sample: [ 32 101  32  32  32  32  10  32 101  32]

step 800, training accuracy 0.0
i_data sample: [ 112.   97.  114.  116.  105.   99.  117.  108.   97.  114.]
o_data sample: [  97.  114.  116.  105.   99.  117.  108.   97.  114.  105.]
batch sample: [101 101 101  32 101 101  32 101 101 101]

step 1200, training accuracy 0.1
i_data sample: [  63.   10.   10.   70.  105.  114.  115.  116.   32.   67.]
o_data sample: [  10.   10.   70.  105.  114.  115.  116.   32.   67.  105.]
batch sample: [ 32  32  32 101  32  32  32  32  32  32]

step 1600, training accuracy 0.2
i_data sample: [  32.  116.  105.  108.  108.   32.  116.  104.  101.   32.]
o_data sample: [ 116.  105.  108.  108.   32.  116.  104.  101.   32.   97.]
batch sample: [32 32 32 32 32 32 32 32 32 32]
This is clearly not correct.
I think I am confused about the difference between 'batches' and 'sequences', and about whether the LSTM state is preserved between what I am calling 'batches' (i.e. sub-sequences).
My impression is that I am training it 'batch-wise' on sequences of length 1, and that between batches the state is thrown away. As a result it simply learns the most commonly occurring symbol.
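To make the question concrete: is something along these lines what I should be doing instead? This is an untested sketch of my guess, not working code; seq_len, the zero_state call and the way the state is fed back in are all my own assumptions.

seq_len = 50                                         # arbitrary choice for illustration
idata = tf.placeholder(tf.int32, [None, seq_len])    # a batch of character sequences
odata = tf.placeholder(tf.int32, [None, seq_len])    # the same sequences shifted by one
source = tf.to_float(tf.one_hot(idata, 257))         # [batch, seq_len, 257]
target = tf.to_float(tf.one_hot(odata, 257))

with tf.variable_scope("lstm01"):
    cell1 = tf.nn.rnn_cell.BasicLSTMCell(257)
    # batch_size here would mean the number of parallel text streams,
    # not a block of adjacent characters as in my loop above
    init_state = cell1.zero_state(batch_size, tf.float32)
    val1, state1 = tf.nn.dynamic_rnn(cell1, source, initial_state=init_state)

# ...and in the training loop, feed the returned state back in so that it
# carries over from one chunk of text to the next, something like:
#     state = sess.run(init_state)
#     for each consecutive chunk:
#         _, state = sess.run([train_step, state1],
#                             feed_dict={idata: i_chunk, odata: o_chunk,
#                                        init_state: state})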
Can anyone confirm this (or otherwise correct my mistake), and give some pointers on how character-by-character prediction should be done when training on very long sequences?
Many thanks.