Python: Printing the loss during TensorFlow training

Disclaimer: this page is a translation of a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original: http://stackoverflow.com/questions/33833818/

Tags: python, tensorflow

Asked by Karnivaurus

I am looking at the TensorFlow "MNIST For ML Beginners" tutorial, and I want to print out the training loss after every training step.

My training loop currently looks like this:

for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

Now, train_step is defined as:

train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

Where cross_entropy is the loss which I want to print out:

cross_entropy = -tf.reduce_sum(y_ * tf.log(y))

One way to print this would be to explicitly compute cross_entropy in the training loop:

for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
    print 'loss = ' + str(cross_entropy)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

I now have two questions regarding this:

  1. Given that cross_entropy is already computed during sess.run(train_step, ...), it seems inefficient to compute it twice, requiring twice the number of forward passes over the training data. Is there a way to access the value of cross_entropy when it was computed during sess.run(train_step, ...)?

  2. How do I even print a tf.Variable? Using str(cross_entropy) gives me an error...

Thank you!

Accepted answer by mrry

You can fetch the value of cross_entropy by adding it to the list of arguments to sess.run(...). For example, your for-loop could be rewritten as follows:

for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    # Fetching cross_entropy in the same call returns its value from
    # the same execution, with no second forward pass.
    _, loss_val = sess.run([train_step, cross_entropy],
                           feed_dict={x: batch_xs, y_: batch_ys})
    print 'loss = %s' % loss_val

The same approach can be used to print the current value of a variable. Let's say that, in addition to the value of cross_entropy, you wanted to print the value of a tf.Variable called W. You could do the following:

for i in range(100):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    _, loss_val, W_val = sess.run([train_step, cross_entropy, W],
                                  feed_dict={x: batch_xs, y_: batch_ys})
    print 'loss = %s' % loss_val
    print 'W = %s' % W_val
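
One practical variation, as a minimal sketch: since cross_entropy is evaluated inside the same sess.run call as train_step, you can fetch it on every step but print only occasionally to keep the output manageable. This assumes the same mnist, x, y_, train_step, and cross_entropy names as in the tutorial:

for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    _, loss_val = sess.run([train_step, cross_entropy],
                           feed_dict={x: batch_xs, y_: batch_ys})
    # The loss is fetched from the same run as the training step,
    # so only the printing itself is throttled here.
    if i % 100 == 0:
        print 'step %d, loss = %s' % (i, loss_val)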

Answer by dga

Instead of just running the train_step, also run the cross_entropy node so that its value is returned to you. Remember that:

var_as_a_python_value = sess.run(tensorflow_variable)

will give you what you want, so you can do this:

[_, cross_entropy_py] = sess.run([train_step, cross_entropy],
                                 feed_dict={x: batch_xs, y_: batch_ys})

to both run the training and pull out the value of the cross entropy as it was computed during the iteration. Note that I turned both the arguments to sess.run and the return values into a list so that both happen.

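To make the "returned as a Python value" point concrete, here is a small self-contained sketch (a toy graph with made-up names, not the tutorial's model) showing that sess.run hands back ordinary numpy values, which is also why calling str() on the Tensor object itself only prints the symbolic node, not a number:

import tensorflow as tf

# Toy graph; the names a, b, product are purely illustrative.
a = tf.constant(2.0)
b = tf.constant(3.0)
product = a * b

with tf.Session() as sess:
    print str(product)                   # prints the symbolic Tensor, not 6.0
    product_val = sess.run(product)      # a plain numpy float32
    print 'product = %s' % product_val   # prints: product = 6.0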