
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me). Original StackOverflow question: http://stackoverflow.com/questions/48285129/

Date: 2020-08-19 18:37:53  Source: igfitidea

Saving best model in keras

Tags: python, keras, deep-learning, neural-network

Asked by dJOKER_dUMMY

I use the following code when training a model in keras


from keras.callbacks import EarlyStopping

model = Sequential()
model.add(Dense(100, activation='relu', input_shape = input_shape))
model.add(Dense(1))

model_2.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])


model.fit(X, y, epochs=15, validation_split=0.4, callbacks=[early_stopping_monitor], verbose=False)

model.predict(X_test)

but recently I wanted to save the best trained model, since the data I am training on produces a lot of spikes in the val_loss vs. epochs graph, and I want to use the best weights the model has reached.


Is there any method or function to help with that?


Answered by Shridhar R Kulkarni

EarlyStopping and ModelCheckpoint are what you need, from the Keras documentation.


You should set save_best_only=True in ModelCheckpoint. Any other adjustments needed are trivial.


Just to help you more, you can see a usage example here on Kaggle.




Adding the code here in case the above Kaggle example link is not available:


from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau

model = getModel()
model.summary()

batch_size = 32

earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save = ModelCheckpoint('.mdl_wts.hdf5', save_best_only=True, monitor='val_loss', mode='min')
# 'epsilon' was renamed to 'min_delta' in newer Keras versions
reduce_lr_loss = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=7, verbose=1, min_delta=1e-4, mode='min')

model.fit(Xtr_more, Ytr_more, batch_size=batch_size, epochs=50, verbose=0, callbacks=[earlyStopping, mcp_save, reduce_lr_loss], validation_split=0.25)
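Once training finishes, the checkpoint file holds the weights from the epoch with the lowest val_loss, which you can reload before predicting. A minimal self-contained sketch of the save-then-reload round trip (using a toy model, synthetic data, and a stand-in file name, not the Kaggle model above):

```python
import numpy as np
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Dense
from tensorflow.keras.callbacks import ModelCheckpoint

# Toy data and model, stand-ins for the real ones.
X = np.random.rand(100, 8)
y = np.random.rand(100, 1)

model = Sequential([Dense(16, activation='relu', input_shape=(8,)),
                    Dense(1)])
model.compile(optimizer='adam', loss='mean_squared_error')

# save_best_only=True overwrites the file only when val_loss improves.
mcp_save = ModelCheckpoint('best_model.h5', save_best_only=True,
                           monitor='val_loss', mode='min')
model.fit(X, y, epochs=3, validation_split=0.25,
          callbacks=[mcp_save], verbose=0)

# Reload the best-epoch weights before predicting.
best_model = load_model('best_model.h5')
preds = best_model.predict(X[:5], verbose=0)
```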

Answered by Vivek

I guess model_2.compile was a typo. This should help if you want to save the best model w.r.t. the val_loss:


from keras.callbacks import ModelCheckpoint

checkpoint = ModelCheckpoint('model-{epoch:03d}-{acc:03f}-{val_acc:03f}.h5', verbose=1, monitor='val_loss', save_best_only=True, mode='auto')

model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])

model.fit(X, y, epochs=15, validation_split=0.4, callbacks=[checkpoint], verbose=False)
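One caveat: the {acc}/{val_acc} keys in that filename template match the metric names of older Keras; newer tf.keras logs them as 'accuracy'/'val_accuracy' instead. A self-contained sketch of the same pattern for newer versions (toy classification data and model, all names are stand-ins):

```python
import glob
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.callbacks import ModelCheckpoint

# Toy binary-classification problem so 'accuracy' is meaningful.
X = np.random.rand(100, 4)
y = (X.sum(axis=1) > 2).astype('float32')

model = Sequential([Dense(8, activation='relu', input_shape=(4,)),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])

# Newer tf.keras logs 'accuracy'/'val_accuracy' rather than 'acc'/'val_acc',
# so those are the keys available to the filename template.
checkpoint = ModelCheckpoint(
    'model-{epoch:03d}-{accuracy:03f}-{val_accuracy:03f}.h5',
    verbose=0, monitor='val_loss', save_best_only=True, mode='auto')

model.fit(X, y, epochs=2, validation_split=0.4,
          callbacks=[checkpoint], verbose=0)
saved = glob.glob('model-*.h5')
```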

Answered by jorijnsmit

EarlyStopping's restore_best_weights argument will do the trick:


restore_best_weights: whether to restore model weights from the epoch with the best value of the monitored quantity. If False, the model weights obtained at the last step of training are used.


So, not sure how your early_stopping_monitor is defined, but going with all the default settings, and seeing you already imported EarlyStopping, you could do this:


early_stopping_monitor = EarlyStopping(
    monitor='val_loss',
    min_delta=0,
    patience=0,
    verbose=0,
    mode='auto',
    baseline=None,
    restore_best_weights=True
)

And then just call model.fit() with callbacks=[early_stopping_monitor] like you already do.

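Putting that together, a minimal self-contained sketch of this approach (toy data with assumed shapes, not the asker's actual dataset; a little patience is added so one val_loss spike does not stop training immediately):

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.callbacks import EarlyStopping

# Stand-in data; the asker's X/y shapes are unknown.
X = np.random.rand(200, 10)
y = np.random.rand(200, 1)

model = Sequential([Dense(100, activation='relu', input_shape=(10,)),
                    Dense(1)])
model.compile(optimizer='adam', loss='mean_squared_error')

early_stopping_monitor = EarlyStopping(
    monitor='val_loss',
    patience=3,                 # tolerate a few spikes before stopping
    restore_best_weights=True,  # roll back to the best-val_loss epoch
)

history = model.fit(X, y, epochs=15, validation_split=0.4,
                    callbacks=[early_stopping_monitor], verbose=0)

# The model now carries the weights from its best val_loss epoch.
preds = model.predict(X[:3], verbose=0)
```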