I am running into trouble with TensorFlow when executing the following code:
import tensorflow as tf
import input_data

learning_rate = 0.01
training_epochs = 25
batch_size = 100
display_step = 1

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# TensorFlow graph input
X = tf.placeholder('float', [None, 784])  # mnist data image of shape 28 * 28 = 784
Y = tf.placeholder('float', [None, 10])   # 0-9 digits recognition => 10 classes

# Set model weights
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

# Our hypothesis
activation = tf.add(tf.matmul(X, W), b)  # Softmax

# Cost function: cross entropy
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=activation, logits=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)  # Gradient Descent
I get the following error:

ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables ['Tensor("Variable/read:0", shape=(784, 10), dtype=float32)', 'Tensor("Variable_1/read:0", shape=(10,), dtype=float32)'] and loss Tensor("Mean:0", shape=(), dtype=float32).
The problem is caused by this line: tf.nn.softmax_cross_entropy_with_logits(labels=activation, logits=Y). This op only propagates gradients into its logits argument (the labels are treated as constant), and here logits is the placeholder Y, which has no trainable variables behind it, so no gradient ever reaches W or b.
According to the documentation, you should have:

labels: Each row labels[i] must be a valid probability distribution.

logits: Unscaled log probabilities.
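For intuition, here is a minimal sketch of what each argument looks like for a single MNIST example (the numbers are made up purely for illustration):

import numpy as np

# labels: a valid probability distribution per row,
# e.g. the one-hot vector for the digit 3 (entries sum to 1)
labels_row = np.array([0, 0, 0, 1, 0, 0, 0, 0, 0, 0], dtype=np.float32)

# logits: unscaled scores straight out of X @ W + b; any real values are allowed
logits_row = np.array([-1.2, 0.3, 0.8, 4.1, -0.5, 0.0, 1.7, -2.2, 0.9, 0.1], dtype=np.float32)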
So logits is supposed to be your hypothesis, and thus equals activation, while the valid probability distribution is Y. So just change it to tf.nn.softmax_cross_entropy_with_logits(labels=Y, logits=activation).
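Putting it together, here is a minimal sketch of the corrected cost plus a bare-bones training loop, assuming TensorFlow 1.x and the mnist object loaded above (the evaluation and display_step logic is omitted):

# Corrected: the one-hot labels go to labels, the model output to logits
cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=Y, logits=activation))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(training_epochs):
        # iterate over the training set in mini-batches
        for _ in range(mnist.train.num_examples // batch_size):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(optimizer, feed_dict={X: batch_xs, Y: batch_ys})

With this fix the loss depends on W and b through activation, so the optimizer can compute gradients and the ValueError goes away.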