Python: How to tell Keras to stop training based on loss value?

Warning: this page is a translation of a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me): StackOverflow. Original question: http://stackoverflow.com/questions/37293642/


How to tell Keras stop training based on loss value?

python, machine-learning, neural-network, conv-neural-network, keras

Asked by ZFTurbo

Currently I use the following code:


callbacks = [
    EarlyStopping(monitor='val_loss', patience=2, verbose=0),
    ModelCheckpoint(kfold_weights_path, monitor='val_loss', save_best_only=True, verbose=0),
]
model.fit(X_train.astype('float32'), Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
      shuffle=True, verbose=1, validation_data=(X_valid, Y_valid),
      callbacks=callbacks)

It tells Keras to stop training when the loss hasn't improved for 2 epochs. But I want to stop training once the loss becomes smaller than some constant "THR":


if val_loss < THR:
    break

I've seen in the documentation that it is possible to write your own callback: http://keras.io/callbacks/ But I found nothing about how to stop the training process. I need advice.


Accepted answer by ZFTurbo

I found the answer. I looked into the Keras sources and found the code for EarlyStopping. I made my own callback based on it:


import warnings

from keras.callbacks import Callback


class EarlyStoppingByLossVal(Callback):
    """Stop training as soon as the monitored quantity drops below `value`."""

    def __init__(self, monitor='val_loss', value=0.00001, verbose=0):
        super(EarlyStoppingByLossVal, self).__init__()
        self.monitor = monitor
        self.value = value
        self.verbose = verbose

    def on_epoch_end(self, epoch, logs={}):
        current = logs.get(self.monitor)
        if current is None:
            warnings.warn("Early stopping requires %s available!" % self.monitor, RuntimeWarning)
            return  # nothing to compare against this epoch

        if current < self.value:
            if self.verbose > 0:
                print("Epoch %05d: early stopping THR" % epoch)
            self.model.stop_training = True

And usage:


callbacks = [
    EarlyStoppingByLossVal(monitor='val_loss', value=0.00001, verbose=1),
    # EarlyStopping(monitor='val_loss', patience=2, verbose=0),
    ModelCheckpoint(kfold_weights_path, monitor='val_loss', save_best_only=True, verbose=0),
]
model.fit(X_train.astype('float32'), Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
      shuffle=True, verbose=1, validation_data=(X_valid, Y_valid),
      callbacks=callbacks)

Answered by devin

The keras.callbacks.EarlyStopping callback does have a min_delta argument. From the Keras documentation:


min_delta: minimum change in the monitored quantity to qualify as an improvement, i.e. an absolute change of less than min_delta, will count as no improvement.

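For illustration, here is a minimal sketch of how this might be used (the threshold and patience values are placeholders, the model and data variables are assumed from the question, and a Keras version whose EarlyStopping accepts min_delta is assumed). Note that min_delta sets an improvement threshold, not an absolute target for the loss, so by itself it does not stop at a fixed constant like THR:

from keras.callbacks import EarlyStopping

# Stop when val_loss has failed to improve by at least 0.001 for 3 consecutive epochs.
early_stopping = EarlyStopping(monitor='val_loss', min_delta=0.001, patience=3, verbose=1)

model.fit(X_train.astype('float32'), Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
      shuffle=True, verbose=1, validation_data=(X_valid, Y_valid),
      callbacks=[early_stopping])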

Answered by 1''

One solution is to call model.fit(nb_epoch=1, ...) inside a for loop; then you can put a break statement inside the for loop and do whatever other custom control flow you want.

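A rough sketch of that idea, reusing the names from the question (THR, X_train, nb_epoch, etc. are assumed to be defined as above):

# Train one epoch at a time and check the validation loss by hand.
for epoch in range(nb_epoch):
    history = model.fit(X_train.astype('float32'), Y_train, batch_size=batch_size, nb_epoch=1,
          shuffle=True, verbose=1, validation_data=(X_valid, Y_valid))
    if history.history['val_loss'][-1] < THR:
        print("Epoch %d: val_loss below THR, stopping." % epoch)
        break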

Answered by Rushin Tilva

I am a bit late to the answer, but I solved the same problem using a custom callback.


In the following custom callback code, assign THR the value at which you want to stop training, and add the callback to your model.


from keras.callbacks import Callback

class stopAtLossValue(Callback):

    def on_batch_end(self, batch, logs={}):
        THR = 0.03  # Assign THR the value at which you want to stop training.
        if logs.get('loss') <= THR:
            self.model.stop_training = True
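For example, attaching it to a fit call might look like this (a sketch; the model and data variable names are assumed from the question above):

model.fit(X_train.astype('float32'), Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
      shuffle=True, verbose=1, validation_data=(X_valid, Y_valid),
      callbacks=[stopAtLossValue()])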

Answered by Juan Antonio Barragan

For me, the model would only stop training if I added a return statement after setting the stop_training parameter to True, because I was calling self.model.evaluate afterwards. So either make sure stop_training = True is the last statement of the function, or add a return statement.


def on_epoch_end(self, epoch, logs):
    self.epoch += 1
    self.stoppingCounter += 1
    print('\nstopping counter \n', self.stoppingCounter)

    # Stop training if there hasn't been any improvement in 'patience' epochs
    if self.stoppingCounter >= self.patience:
        self.model.stop_training = True
        return  # skip the extra evaluation below once training is stopped

    # Test on an additional set if there is one
    if self.testingOnAdditionalSet:
        evaluation = self.model.evaluate(self.val2X, self.val2Y, verbose=0)
        self.validationLoss2.append(evaluation[0])
        self.validationAcc2.append(evaluation[1])

Answered by Suvo

While I was taking the TensorFlow in Practice specialization, I learned a very elegant technique. It is just a small modification of the accepted answer.


Let's set up the example with our favorite MNIST data.


import tensorflow as tf

class new_callback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):  # the hook must be named on_epoch_end for Keras to call it
        if logs.get('accuracy') > 0.90:  # select the accuracy
            print("\n !!! 90% accuracy, no further training !!!")
            self.model.stop_training = True

mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # normalize

callbacks = new_callback()

# A minimal illustrative model; the original answer leaves the architecture to the reader.
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer=tf.optimizers.Adam(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, callbacks=[callbacks])

So, here I set metrics=['accuracy'], and thus in the callback class the condition is set to 'accuracy' > 0.90.


You can choose any metric and monitor the training as in this example. Most importantly, you can set different conditions for different metrics and use them simultaneously.

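As a sketch of that idea (the thresholds and the particular combination of metrics below are illustrative, not from the original answer), a single callback can check several conditions at once:

import tensorflow as tf

class multi_metric_callback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        # Stop as soon as either illustrative condition is met.
        if logs.get('accuracy', 0) > 0.90 or logs.get('loss', float('inf')) < 0.05:
            print("\n !!! target metric reached, no further training !!!")
            self.model.stop_training = True

model.fit(x_train, y_train, epochs=10, callbacks=[multi_metric_callback()])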

Hopefully this helps!
