I have recently been playing around with TensorFlow, and I noticed that the framework is not able to use all of my available computational resources. In the Convolutional Neural Networks tutorial they mention that
Naively employing asynchronous updates of model parameters leads to sub-optimal training performance because an individual model replica might be trained on a stale copy of the model parameters. Conversely, employing fully synchronous updates will be as slow as the slowest model replica.
Although they mention it both in the tutorial and in a whitepaper, I did not really find a way to do asynchronous parallel computation on a local machine. Is it even possible? Or is it part of the to-be-released distributed version of TensorFlow? If it is, how?
Asynchronous gradient descent is supported in the open-source release of TensorFlow, without even modifying your graph. The easiest way to do it is to execute multiple concurrent steps in parallel:
import threading

import tensorflow as tf

loss = ...

# Any of the optimizer classes can be used here.
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

sess = tf.Session()
sess.run(tf.initialize_all_variables())

def train_function():
  # TODO: Better termination condition, e.g. using a `max_steps` counter.
  while True:
    sess.run(train_op)

# Create multiple threads to run `train_function()` in parallel.
train_threads = []
for _ in range(NUM_CONCURRENT_STEPS):
  train_threads.append(threading.Thread(target=train_function))

# Start the threads, and block on their completion.
for t in train_threads:
  t.start()
for t in train_threads:
  t.join()
This example sets up NUM_CONCURRENT_STEPS concurrent calls to sess.run(train_op). Since there is no coordination between these threads, they proceed asynchronously.
Synchronous parallel training is (currently) more challenging to implement, because it requires additional coordination to ensure that all replicas read the same version of the parameters, and that all of their updates become visible at the same time. The multi-GPU example for CIFAR-10 training performs synchronous updates by making multiple copies of a "tower" in the training graph with shared parameters, and explicitly averaging the gradients across the towers before applying the update.
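To make the gradient-averaging idea concrete, here is a minimal sketch in the spirit of the multi-GPU CIFAR-10 example (not a verbatim copy of it). NUM_GPUS and the per-tower loss construction (loss = ...) are assumed to be defined elsewhere:

opt = tf.train.GradientDescentOptimizer(0.01)

tower_grads = []
for i in range(NUM_GPUS):
  with tf.device("/gpu:%d" % i):
    # Build the tower for GPU `i` with shared parameters.
    loss = ...
    # `compute_gradients()` returns a list of (gradient, variable) pairs.
    tower_grads.append(opt.compute_gradients(loss))

# Average the gradient for each variable across all towers.
averaged_grads = []
for grad_and_vars in zip(*tower_grads):
  grads = [g for g, _ in grad_and_vars]
  var = grad_and_vars[0][1]
  averaged_grads.append((tf.add_n(grads) / len(grads), var))

# A single training op applies the averaged gradients, so all parameter
# updates become visible at the same time.
train_op = opt.apply_gradients(averaged_grads)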
N.B. The code in this answer places all of the computation on the same device, which will not be optimal if you have multiple GPUs in your machine. If you want to use all of your GPUs, follow the example of the multi-GPU CIFAR-10 model, and create multiple "towers" with their operations pinned to each GPU. The code would look roughly as follows:
train_ops = []

for i in range(NUM_GPUS):
  with tf.device("/gpu:%d" % i):
    # Define a tower on GPU `i`.
    loss = ...
    train_ops.append(tf.train.GradientDescentOptimizer(0.01).minimize(loss))

def train_function(train_op):
  # TODO: Better termination condition, e.g. using a `max_steps` counter.
  while True:
    sess.run(train_op)

# Create multiple threads to run `train_function()` in parallel.
train_threads = []
for train_op in train_ops:
  train_threads.append(threading.Thread(target=train_function, args=(train_op,)))

# Start the threads, and block on their completion.
for t in train_threads:
  t.start()
for t in train_threads:
  t.join()
Note that you will probably find it convenient to use "variable scope" to facilitate variable sharing between the towers.
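As a rough illustration of that idea (a sketch, not code from the CIFAR-10 example), build_tower() below is a hypothetical function that constructs a per-tower loss and creates all of its parameters with tf.get_variable(), so that scope.reuse_variables() makes every tower after the first reuse the same parameters:

train_ops = []
with tf.variable_scope("model") as scope:
  for i in range(NUM_GPUS):
    with tf.device("/gpu:%d" % i):
      # `build_tower()` is hypothetical: it must create its parameters
      # with `tf.get_variable()` so that they can be shared.
      loss = build_tower()
      train_ops.append(
          tf.train.GradientDescentOptimizer(0.01).minimize(loss))
      # Reuse the same variables for the towers on the remaining GPUs.
      scope.reuse_variables()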