I have the following code, based on the MNIST example. It is modified in two ways:
1) I'm not using one-hot vectors, so I simply use tf.equal(y, y_)
2) My results are binary: either 0 or 1
import tensorflow as tf
import numpy as np

# get the data
train_data, train_results = get_data(2000, 2014)
test_data, test_results = get_data(2014, 2015)

# setup a session
sess = tf.Session()

x_len = len(train_data[0])
y_len = len(train_results[0])

# make placeholders for inputs and outputs
x = tf.placeholder(tf.float32, shape=[None, x_len])
y_ = tf.placeholder(tf.float32, shape=[None, y_len])

# create the weights and bias
W = tf.Variable(tf.zeros([x_len, 1]))
b = tf.Variable(tf.zeros([1]))

# initialize everything
sess.run(tf.initialize_all_variables())

# create the "equation" for y in terms of x
y_prime = tf.matmul(x, W) + b
y = tf.nn.softmax(y_prime)

# construct the error function
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(y_prime, y_)

# setup the training algorithm
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

# train the thing
for i in range(1000):
    rand_rows = np.random.choice(train_data.shape[0], 100, replace=False)
    _, w_out, b_out, ce_out = sess.run([train_step, W, b, cross_entropy],
                                       feed_dict={x: train_data[rand_rows, :],
                                                  y_: train_results[rand_rows, :]})

    print("%d: %s %s %s" % (i, str(w_out), str(b_out), str(ce_out)))

# compute how many times it was correct
correct_prediction = tf.equal(y, y_)

# find the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(sess.run(accuracy, feed_dict={x: test_data, y_: test_results}))

for i in range(0, len(test_data)):
    res = sess.run(y, {x: [test_data[i]]})
    print("RES: " + str(res) + " ACT: " + str(test_results[i]))
The accuracy is always 0.5 (because my test data has about as many 1s as 0s). The values of W and b always seem to increase, probably because cross_entropy is always a vector of all zeros.
When I try to use this model for prediction, the prediction is always 1:
RES: [[ 1.]] ACT: [ 0.]
RES: [[ 1.]] ACT: [ 1.]
RES: [[ 1.]] ACT: [ 0.]
RES: [[ 1.]] ACT: [ 1.]
RES: [[ 1.]] ACT: [ 0.]
RES: [[ 1.]] ACT: [ 1.]
RES: [[ 1.]] ACT: [ 0.]
RES: [[ 1.]] ACT: [ 0.]
RES: [[ 1.]] ACT: [ 1.]
RES: [[ 1.]] ACT: [ 0.]
RES: [[ 1.]] ACT: [ 1.]
What am I doing wrong here?
It looks like you are predicting a single scalar, rather than a vector. The softmax op produces a vector-valued prediction for each example, and that vector must always sum to 1. When the vector contains only one element, that element must therefore always be 1. If you want to use a softmax for this problem, you could use [1, 0] as the target wherever you currently use [0], and [0, 1] wherever you currently use [1]. The other option is to keep a single output number, but change the output layer to a sigmoid instead of a softmax, and change the cost function to a sigmoid-based cost as well (a sketch of this variant follows below).
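A minimal sketch of the sigmoid variant might look like the following. It reuses your get_data helper, learning rate, and the TF 0.x-style positional API from your code (in newer TensorFlow versions, sigmoid_cross_entropy_with_logits takes labels= and logits= keyword arguments), so treat it as an outline rather than a drop-in replacement:

import tensorflow as tf
import numpy as np

# same data helper as in the question
train_data, train_results = get_data(2000, 2014)
test_data, test_results = get_data(2014, 2015)

sess = tf.Session()
x_len = len(train_data[0])

x = tf.placeholder(tf.float32, shape=[None, x_len])
y_ = tf.placeholder(tf.float32, shape=[None, 1])   # one binary target per example

W = tf.Variable(tf.zeros([x_len, 1]))
b = tf.Variable(tf.zeros([1]))

y_prime = tf.matmul(x, W) + b      # logits
y = tf.nn.sigmoid(y_prime)         # probability of class 1

# sigmoid cross-entropy instead of softmax cross-entropy
cross_entropy = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(y_prime, y_))

train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
sess.run(tf.initialize_all_variables())

for i in range(1000):
    rand_rows = np.random.choice(train_data.shape[0], 100, replace=False)
    sess.run(train_step, feed_dict={x: train_data[rand_rows, :],
                                    y_: train_results[rand_rows, :]})

# threshold the probability at 0.5 to get a 0/1 prediction
predicted = tf.round(y)
correct_prediction = tf.equal(predicted, y_)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: test_data, y_: test_results}))

If you prefer the softmax option instead, keep two output columns (so y_len is 2), encode the targets as [1, 0] and [0, 1], and compare predictions with tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) rather than tf.equal(y, y_).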