
Cannot optimize multivariate linear regression in TensorFlow


I worked through the single-variable example from the TensorFlow tutorial, but I'm having trouble optimizing a multivariate linear regression in TensorFlow.

I'm using the Portland housing prices dataset used here.

I'm new to TensorFlow, so I'm sure there is something badly wrong here.

The optimization doesn't seem to work at all; it quickly blows up to infinity. Any help is appreciated.

import tensorflow as tf
import numpy as np

X = np.array( [[  2.10400000e+03,   3.00000000e+00],
   [  1.60000000e+03,   3.00000000e+00],
   [  2.40000000e+03,   3.00000000e+00],
   [  1.41600000e+03,   2.00000000e+00],
   [  3.00000000e+03,   4.00000000e+00],
   [  1.98500000e+03,   4.00000000e+00],
   [  1.53400000e+03,   3.00000000e+00],
   [  1.42700000e+03,   3.00000000e+00],
   [  1.38000000e+03,   3.00000000e+00],
   [  1.49400000e+03,   3.00000000e+00],
   [  1.94000000e+03,   4.00000000e+00],
   [  2.00000000e+03,   3.00000000e+00],
   [  1.89000000e+03,   3.00000000e+00],
   [  4.47800000e+03,   5.00000000e+00],
   [  1.26800000e+03,   3.00000000e+00],
   [  2.30000000e+03,   4.00000000e+00],
   [  1.32000000e+03,   2.00000000e+00],
   [  1.23600000e+03,   3.00000000e+00],
   [  2.60900000e+03,   4.00000000e+00],
   [  3.03100000e+03,   4.00000000e+00],
   [  1.76700000e+03,   3.00000000e+00],
   [  1.88800000e+03,   2.00000000e+00],
   [  1.60400000e+03,   3.00000000e+00],
   [  1.96200000e+03,   4.00000000e+00],
   [  3.89000000e+03,   3.00000000e+00],
   [  1.10000000e+03,   3.00000000e+00],
   [  1.45800000e+03,   3.00000000e+00],
   [  2.52600000e+03,   3.00000000e+00],
   [  2.20000000e+03,   3.00000000e+00],
   [  2.63700000e+03,   3.00000000e+00],
   [  1.83900000e+03,   2.00000000e+00],
   [  1.00000000e+03,   1.00000000e+00],
   [  2.04000000e+03,   4.00000000e+00],
   [  3.13700000e+03,   3.00000000e+00],
   [  1.81100000e+03,   4.00000000e+00],
   [  1.43700000e+03,   3.00000000e+00],
   [  1.23900000e+03,   3.00000000e+00],
   [  2.13200000e+03,   4.00000000e+00],
   [  4.21500000e+03,   4.00000000e+00],
   [  2.16200000e+03,   4.00000000e+00],
   [  1.66400000e+03,   2.00000000e+00],
   [  2.23800000e+03,   3.00000000e+00],
   [  2.56700000e+03,   4.00000000e+00],
   [  1.20000000e+03,   3.00000000e+00],
   [  8.52000000e+02,   2.00000000e+00],
   [  1.85200000e+03,   4.00000000e+00],
   [  1.20300000e+03,   3.00000000e+00]]
).astype('float32')

y_data = np.array([[ 399900.],
   [ 329900.],
   [ 369000.],
   [ 232000.],
   [ 539900.],
   [ 299900.],
   [ 314900.],
   [ 198999.],
   [ 212000.],
   [ 242500.],
   [ 239999.],
   [ 347000.],
   [ 329999.],
   [ 699900.],
   [ 259900.],
   [ 449900.],
   [ 299900.],
   [ 199900.],
   [ 499998.],
   [ 599000.],
   [ 252900.],
   [ 255000.],
   [ 242900.],
   [ 259900.],
   [ 573900.],
   [ 249900.],
   [ 464500.],
   [ 469000.],
   [ 475000.],
   [ 299900.],
   [ 349900.],
   [ 169900.],
   [ 314900.],
   [ 579900.],
   [ 285900.],
   [ 249900.],
   [ 229900.],
   [ 345000.],
   [ 549000.],
   [ 287000.],
   [ 368500.],
   [ 329900.],
   [ 314000.],
   [ 299000.],
   [ 179900.],
   [ 299900.],
   [ 239500.]]
).astype('float32')

m = 47

W = tf.Variable(tf.zeros([2,1]))
b = tf.Variable(tf.zeros([1]))

b = tf.Print(b, [b], "Bias: ")
W = tf.Print(W, [W], "Weights: ")

y = tf.add( tf.matmul(X,W), b)
y = tf.Print(y, [y], "y: ")

loss = tf.reduce_sum(tf.square(y - y_data)) / (2 * m)
loss = tf.Print(loss, [loss], "loss: ")
optimizer = tf.train.GradientDescentOptimizer(.01)

train = optimizer.minimize(loss)

init = tf.initialize_all_variables()

sess = tf.Session()
sess.run(init)                                

for i in range(10):
  sess.run(train)
  #if i % 20 == 0:
        #print(sess.run(W), sess.run(b))  

sess.close()

For output, I get the following:

I tensorflow/core/kernels/logging_ops.cc:79] Weights: [0 0]
I tensorflow/core/kernels/logging_ops.cc:79] Bias: [0]
I tensorflow/core/kernels/logging_ops.cc:79] y: [0 0 0...]
I tensorflow/core/kernels/logging_ops.cc:79] loss: [6.5591554e+10]
I tensorflow/core/kernels/logging_ops.cc:79] Weights: [38210460 56018.387]
I tensorflow/core/kernels/logging_ops.cc:79] Bias: [17020.631]
I tensorflow/core/kernels/logging_ops.cc:79] y: [8.0394994e+10 6.1136921e+10 9.1705295e+10...]
I tensorflow/core/kernels/logging_ops.cc:79] loss: [3.373289e+21]
I tensorflow/core/kernels/logging_ops.cc:79] Bias: [-3.8223224e+09]
I tensorflow/core/kernels/logging_ops.cc:79] Weights: [-8.8281616e+12 -1.2750791e+10]
I tensorflow/core/kernels/logging_ops.cc:79] y: [-1.8574494e+16 -1.4125102e+16 -2.118763e+16...]
I tensorflow/core/kernels/logging_ops.cc:79] loss: [1.8006666e+32]
I tensorflow/core/kernels/logging_ops.cc:79] Weights: [2.0396713e+18 2.9459613e+15]
I tensorflow/core/kernels/logging_ops.cc:79] Bias: [8.8311514e+14]
I tensorflow/core/kernels/logging_ops.cc:79] y: [4.2914781e+21 3.2634836e+21 4.8952214e+21...]
I tensorflow/core/kernels/logging_ops.cc:79] loss: [inf]
I tensorflow/core/kernels/logging_ops.cc:79] Bias: [-2.040362e+20]
I tensorflow/core/kernels/logging_ops.cc:79] Weights: [-4.7124867e+23 -6.8063922e+20]
I tensorflow/core/kernels/logging_ops.cc:79] y: [-9.9150947e+26 -7.5400019e+26 -1.1309991e+27...]
I tensorflow/core/kernels/logging_ops.cc:79] loss: [inf]
I tensorflow/core/kernels/logging_ops.cc:79] Bias: [4.7140825e+25]
I tensorflow/core/kernels/logging_ops.cc:79] Weights: [1.0887797e+29 1.5725587e+26]
I tensorflow/core/kernels/logging_ops.cc:79] y: [2.2907974e+32 1.7420524e+32 2.6130761e+32...]
I tensorflow/core/kernels/logging_ops.cc:79] loss: [inf]
I tensorflow/core/kernels/logging_ops.cc:79] Bias: [-1.0891484e+31]
I tensorflow/core/kernels/logging_ops.cc:79] Weights: [-2.515532e+34 -3.6332629e+31]
I tensorflow/core/kernels/logging_ops.cc:79] y: [-5.2926912e+37 -4.0248632e+37 -6.0372884e+37...]
I tensorflow/core/kernels/logging_ops.cc:79] loss: [inf]
I tensorflow/core/kernels/logging_ops.cc:79] Bias: [2.5163837e+36]
I tensorflow/core/kernels/logging_ops.cc:79] Weights: [inf 8.3943417e+36]
I tensorflow/core/kernels/logging_ops.cc:79] y: [inf inf inf...]
I tensorflow/core/kernels/logging_ops.cc:79] loss: [inf]
I tensorflow/core/kernels/logging_ops.cc:79] Bias: [-inf]
I tensorflow/core/kernels/logging_ops.cc:79] Weights: [-nan -inf]
I tensorflow/core/kernels/logging_ops.cc:79] y: [-nan -nan -nan...]
I tensorflow/core/kernels/logging_ops.cc:79] loss: [-nan]
I tensorflow/core/kernels/logging_ops.cc:79] Bias: [-nan]
I tensorflow/core/kernels/logging_ops.cc:79] Weights: [-nan -nan]
I tensorflow/core/kernels/logging_ops.cc:79] y: [-nan -nan -nan...]
I tensorflow/core/kernels/logging_ops.cc:79] loss: [-nan]

I built TensorFlow from source and I'm running it under Python 3, which is allowed now that the fix-up scripts have been run. I doubt that has anything to do with it, but I wanted to mention it to be sure. I'm fairly certain this is just a lack of knowledge on my part.



1> mdaoust:

Your learning rate is too high, so the solution jumps back and forth, getting farther away each time.
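To give a sense of scale: with raw features on the order of 10^3, the curvature of this squared loss is on the order of 10^6 to 10^7, so plain gradient descent only stays stable with a step size somewhere below roughly 1e-6. Below is a minimal sketch of just shrinking the step size, reusing X, y_data and m from the question (the exact cutoff depends on the data):

import tensorflow as tf

# Same model as in the question, minus the tf.Print wrappers,
# but with a much smaller step size (reuses X, y_data, m from above).
W = tf.Variable(tf.zeros([2, 1]))
b = tf.Variable(tf.zeros([1]))
y = tf.matmul(X, W) + b
loss = tf.reduce_sum(tf.square(y - y_data)) / (2 * m)

# Roughly 1e-7 instead of 0.01; much larger values diverge on the unscaled data.
train = tf.train.GradientDescentOptimizer(1e-7).minimize(loss)

sess = tf.Session()
sess.run(tf.initialize_all_variables())
for i in range(100):
    sess.run(train)
print(sess.run(loss))  # decreases steadily instead of blowing up
sess.close()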

For a problem like this, it's usually good practice to normalize the input ranges so that they have mean 0 and variance 1.
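A minimal sketch of that normalization, again reusing X, y_data and m from the question (standardizing the target as well is not required, it just keeps the loss in a readable range):

import tensorflow as tf

# Standardize each feature column (and, optionally, the target) to mean 0, variance 1.
X_mean, X_std = X.mean(axis=0), X.std(axis=0)
y_mean, y_std = y_data.mean(), y_data.std()
X_norm = (X - X_mean) / X_std
y_norm = (y_data - y_mean) / y_std

W = tf.Variable(tf.zeros([2, 1]))
b = tf.Variable(tf.zeros([1]))
y_pred = tf.matmul(X_norm, W) + b
loss = tf.reduce_sum(tf.square(y_pred - y_norm)) / (2 * m)

# With standardized inputs, a learning rate of 0.1 (or the original 0.01) is stable.
train = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

sess = tf.Session()
sess.run(tf.initialize_all_variables())
for i in range(500):
    sess.run(train)

# Undo the target scaling to get predictions back in the original price units.
print(sess.run(y_pred * y_std + y_mean))
sess.close()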
