Python: How to turn off dropout for testing in TensorFlow?

Disclaimer: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/44971349/

Date: 2020-08-20 00:42:24  Source: igfitidea

How to turn off dropout for testing in Tensorflow?

python, machine-learning, tensorflow, neural-network, conv-neural-network

Asked by G. Mesch

I am fairly new to Tensorflow and ML in general, so I hereby apologize for a (likely) trivial question.


I use the dropout technique to improve learning rates of my network, and it seems to work just fine. Then, I would like to test the network on some data to see if it works like this:


def Ask(self, image):
    return self.session.run(self.model, feed_dict={self.inputPh: image})

Obviously, it yields different results each time, as the dropout is still in place. One solution I can think of is to create two separate models: one for training and the other for actual later use of the network. However, such a solution seems impractical to me.

What's the common approach to solving this problem?


Answered by nessuno

The easiest way is to change the keep_prob parameter using a placeholder_with_default:

prob = tf.placeholder_with_default(1.0, shape=())
layer = tf.nn.dropout(layer, prob)

This way, during training you can set the parameter like this:

sess.run(train_step, feed_dict={prob: 0.5})

and when you evaluate, the default value of 1.0 is used, so dropout is effectively disabled.
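The mechanism this trick relies on can be sketched without TensorFlow. Below is a minimal pure-Python illustration of inverted dropout (the variant tf.nn.dropout implements): with keep_prob=1.0 the operation reduces to the identity, which is exactly why a placeholder defaulting to 1.0 disables dropout at evaluation time. The function name and variables here are hypothetical, for illustration only.

```python
import random

def dropout(values, keep_prob):
    """Inverted dropout: zero each value with probability 1 - keep_prob,
    and scale survivors by 1 / keep_prob so the expected value is unchanged."""
    if keep_prob == 1.0:  # evaluation: dropout becomes a no-op
        return list(values)
    return [v / keep_prob if random.random() < keep_prob else 0.0
            for v in values]

activations = [0.5, 1.0, 1.5, 2.0]
train_out = dropout(activations, keep_prob=0.5)  # noisy: each unit either doubled or zeroed
eval_out = dropout(activations, keep_prob=1.0)   # deterministic: identical to the input
```

Running the training call repeatedly gives different outputs, while the evaluation call always returns the input unchanged, mirroring the behavior the asker observed.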

Answered by robbisg

You should set keep_prob in the TensorFlow dropout layer; that is the probability of keeping a weight. During training you would typically set it to a value between 0.5 and 0.8. When testing the network, you simply feed keep_prob with 1.0.

You should define something like this:

keep_prob = tf.placeholder(tf.float32, name='keep_prob')
drop = tf.contrib.rnn.DropoutWrapper(layer1, output_keep_prob=keep_prob)  # layer1 is an RNN cell here

Then change the values in the session:


# assuming `inputs` and `labels` are the corresponding input placeholders
_ = sess.run(cost, feed_dict={inputs: training_set, labels: training_labels, keep_prob: 0.8})  # during training
_ = sess.run(cost, feed_dict={inputs: testing_set, labels: testing_labels, keep_prob: 1.0})    # during testing

Answered by Jarno

With the new tf.estimator API you specify a model function that returns different models based on whether you are training or testing, while still allowing you to reuse your model code. In your model function you would do something similar to:

def model_fn(features, labels, mode):

    training = (mode == tf.estimator.ModeKeys.TRAIN)
    ...
    t = tf.layers.dropout(t, rate=0.25, training=training, name='dropout_1')
    ...

The mode argument is passed automatically depending on whether you call estimator.train(...) or estimator.predict(...).
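The mode-dispatch pattern itself is independent of TensorFlow. The following is a hypothetical plain-Python sketch of the same idea: one model definition, with dropout active only when the mode indicates training (TRAIN/PREDICT are stand-ins for tf.estimator.ModeKeys, and model_fn is illustrative, not the real Estimator signature):

```python
import random

# stand-ins for tf.estimator.ModeKeys.TRAIN / PREDICT
TRAIN, PREDICT = "train", "predict"

def model_fn(features, mode, rate=0.25):
    """One model definition; dropout is active only when mode == TRAIN."""
    training = (mode == TRAIN)
    if not training:
        return list(features)  # inference: dropout is a no-op
    keep = 1.0 - rate
    # inverted dropout: drop with probability `rate`, scale survivors by 1/keep
    return [x / keep if random.random() < keep else 0.0 for x in features]

preds = model_fn([1.0, 2.0, 3.0], mode=PREDICT)  # deterministic, unchanged
```

The design advantage is the same as with the real Estimator API: training and inference share one code path, and no feed_dict bookkeeping is needed to switch between them.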

Answered by cinqS

If you don't want to use the Estimator API, you can create the dropout this way:

is_training_pl = tf.placeholder_with_default(True, shape=())
tf_drop_out = tf.layers.dropout(last_output, rate=0.8, training=is_training_pl)

So, when doing evaluation you feed the session with feed_dict={is_training_pl: False} instead of changing the dropout rate. (Note that rate here is the fraction of units dropped, not kept.)

Answered by David Bernat

With the update of TensorFlow, the class tf.layers.dropout should be used instead of tf.nn.dropout.

It supports a training parameter. Using it allows your model to define the dropout behavior once, instead of relying on feed_dict to manage an external keep_prob parameter. This allows for better-refactored code.

More info: https://www.tensorflow.org/api_docs/python/tf/layers/dropout


Answered by Epsilon1024

With classic (non-inverted) dropout, at test time you are supposed to multiply the output of the layer by keep_prob to compensate for the units dropped during training. In that case you would have to put an additional multiplication step in the test phase.

Note, however, that TensorFlow's tf.nn.dropout implements inverted dropout: it already scales the kept activations by 1/keep_prob during training, so no extra scaling is needed at test time.
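The equivalence of the two scaling conventions can be checked numerically. Below is a small stdlib-only sketch (no TensorFlow required; all names are illustrative) comparing classic dropout with its test-time multiplication against inverted dropout, which needs none:

```python
import random

random.seed(42)

def classic_train(xs, keep_prob):
    # classic dropout: drop units during training, no scaling
    return [x if random.random() < keep_prob else 0.0 for x in xs]

def classic_test(xs, keep_prob):
    # classic dropout: compensate at test time by multiplying by keep_prob
    return [x * keep_prob for x in xs]

def inverted_train(xs, keep_prob):
    # inverted dropout (what tf.nn.dropout does): scale survivors by 1/keep_prob
    return [x / keep_prob if random.random() < keep_prob else 0.0 for x in xs]

keep_prob, n = 0.5, 200_000
ones = [1.0] * n

# in both schemes, the test-time output matches the train-time expectation
mean_classic = sum(classic_train(ones, keep_prob)) / n    # approximately 0.5
mean_inverted = sum(inverted_train(ones, keep_prob)) / n  # approximately 1.0
test_classic = classic_test([1.0], keep_prob)[0]          # exactly 0.5
test_inverted = 1.0  # inverted dropout is the identity at test time
```

Both schemes keep the expected activation consistent between training and testing; inverted dropout simply moves the correction into the training step, which is why feeding keep_prob=1.0 at test time is all that TensorFlow requires.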