
TensorFlow does not seem to use the GPU


I am using TensorFlow on Windows 8 with Python 3.5. I adapted the short example below to check whether GPU support (Titan X) is working. Unfortunately, the runtime is the same with the GPU (tf.device("/gpu:0")) and without it (tf.device("/cpu:0")). The Windows CPU monitor shows that in both cases the CPU load is around 100% during the computation.

Here is the code example:

import numpy as np
import tensorflow as tf
import datetime

#num of multiplications to perform
n = 100

# Create random large matrix
matrix_size = 1e3
A = np.random.rand(matrix_size, matrix_size).astype('float32')
B = np.random.rand(matrix_size, matrix_size).astype('float32')

# Creates a graph to store results
c1 = []

# Define matrix power
def matpow(M, n):
    if n < 1: #Abstract cases where n < 1
        return M
    else:
        return tf.matmul(M, matpow(M, n-1))

with tf.device("/gpu:0"):
    a = tf.constant(A)
    b = tf.constant(B)
    #compute A^n and B^n and store results in c1
    c1.append(matpow(a, n))
    c1.append(matpow(b, n))

    sum = tf.add_n(c1)

t1 = datetime.datetime.now()
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    # Runs the op.
    sess.run(sum)
t2 = datetime.datetime.now()

print("computation time: " + str(t2-t1))

Here is the output for the GPU case:

I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library cublas64_80.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library cudnn64_5.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library cufft64_80.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library nvcuda.dll locally
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\dso_loader.cc:128] successfully opened CUDA library curand64_80.dll locally
C:/Users/schlichting/.spyder-py3/temp.py:16: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
  A = np.random.rand(matrix_size, matrix_size).astype('float32')
C:/Users/schlichting/.spyder-py3/temp.py:17: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
  B = np.random.rand(matrix_size, matrix_size).astype('float32')
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:885] Found device 0 with properties: 
name: GeForce GTX TITAN X
major: 5 minor: 2 memoryClockRate (GHz) 1.076
pciBusID 0000:01:00.0
Total memory: 12.00GiB
Free memory: 2.40GiB
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:906] DMA: 0 
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:916] 0:   Y 
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:01:00.0)
D c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\direct_session.cc:255] Device mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:01:00.0

Ievice mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:01:00.0

C:0/task:0/gpu:0
host/replica:0/task:0/gpu:0
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\simple_placer.cc:827] MatMul_108: (MatMul)/job:localhost/replica:0/task:0/gpu:0
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\simple_placer.cc:827] MatMul_109: (MatMul)/job:localhost/replica:0/task:0/gpu:0
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\simple_placer.cc:827] MatMul_110: (MatMul)/job:localhost/replicacalhost/replica:0/task:0/gpu:0
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\simple_placer.cc:827] MatMul_107: (MatMul)/job:localgpu:0
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\simple_placer.cc:827] MatMul_103: (MatMul)/job:localhost/replica:0/task:0/gpu:0
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\simple_placer.cc:827] MatMul_104: (MatMul)/job:localhost/replica:0/task:0/gpu:0
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\simple_placer.cc:827] MatMul_105: (MatMul)/job:localhost/replica:0/task:0/gpu:0
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\simple_placer.cc:827] MatMul_106: (MatMul)/job:lo c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\simple_placer.cc:827] Const_1: (Const)/job:localhost/replica:0/task:0/gpu:0
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\simple_placer.cc:827] MatMul_100: (MatMul)/job:localhost/replica:0/task:0/gpu:0
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\simple_placer.cc:827] MatMul_101: (MatMul)/job:localhost/replica:0/task:0/gpu:0
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\simple_placer.cc:827] MatMul_102: (MatMul)/job:localhost/replica:0/task:0/Ionst_1: (Const): /job:localhost/replica:0/task:0/gpu:0


MatMul_100: (MatMul): /job:localhost/replica:0/task:0/gpu:0
MatMul_101: (MatMul): /job:localhost/replica:0/task:0/gpu:0
...
MatMul_198: (MatMul): /job:localhost/replica:0/task:0/gpu:0
MatMul_199: (MatMul): /job:localhost/replica:0/task:0/gpu:0
Const: (Const): /job:localhost/replica:0/task:0/gpu:0
MatMul: (MatMul): /job:localhost/replica:0/task:0/gpu:0
MatMul_1: (MatMul): /job:localhost/replica:0/task:0/gpu:0
MatMul_2: (MatMul): /job:localhost/replica:0/task:0/gpu:0
MatMul_3: (MatMul): /job:localhost/replica:0/task:0/gpu:0
...
MatMul_98: (MatMul): /job:localhost/replica:0/task:0/gpu:0
MatMul_99: (MatMul): /job:localhost/replica:0/task:0/gpu:0
AddN: (AddN): /job:localhost/replica:0/task:0/gpu:0
computation time: 0:00:05.066000

In the CPU case the output is the same, only with cpu:0 instead of gpu:0. The computation time does not change. Even when I use more operations, e.g. with a runtime of about 1 minute, GPU and CPU take the same time. Thanks in advance!
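
Note also that the measured time includes session creation and GPU initialization, not only the matrix multiplications, since tf.Session is opened between t1 and t2. A minimal sketch (assuming the same graph and sum op as above) that warms up once and times only a second run would look like this:

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    sess.run(sum)                      # warm-up run: session/CUDA init, memory allocation
    t1 = datetime.datetime.now()
    sess.run(sum)                      # timed run only
    t2 = datetime.datetime.now()

print("computation time (run only): " + str(t2 - t1))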
