Saving the best model in Keras (Python)

Note: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license and attribute it to the original authors (not me): StackOverflow.

Original question: http://stackoverflow.com/questions/48285129/

Asked by dJOKER_dUMMY
I use the following code when training a model in Keras:
from keras.callbacks import EarlyStopping
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(100, activation='relu', input_shape=input_shape))
model.add(Dense(1))
model_2.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
model.fit(X, y, epochs=15, validation_split=0.4, callbacks=[early_stopping_monitor], verbose=False)
model.predict(X_test)
Recently, however, I have wanted to save the best trained model, because the data I am training on produces a lot of spikes in the val_loss vs. epochs curve, and I want to use the best model weights reached during training.
Is there any method or function to help with that?
Answered by Shridhar R Kulkarni
EarlyStopping and ModelCheckpoint from the Keras documentation are what you need.
You should set save_best_only=True in ModelCheckpoint. Any other adjustments that are needed are trivial.
To help you further, you can see an example usage here on Kaggle.
The code is included here in case the Kaggle example link above becomes unavailable:
from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau

model = getModel()
model.summary()

batch_size = 32

# Stop when val_loss stops improving, save the weights of the best epoch, and lower the learning rate on plateaus
earlyStopping = EarlyStopping(monitor='val_loss', patience=10, verbose=0, mode='min')
mcp_save = ModelCheckpoint('.mdl_wts.hdf5', save_best_only=True, monitor='val_loss', mode='min')
reduce_lr_loss = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=7, verbose=1, epsilon=1e-4, mode='min')  # `epsilon` was renamed to `min_delta` in newer Keras versions
model.fit(Xtr_more, Ytr_more, batch_size=batch_size, epochs=50, verbose=0, callbacks=[earlyStopping, mcp_save, reduce_lr_loss], validation_split=0.25)
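Once training has finished, the weights from the best epoch live in the checkpoint file, not necessarily in the in-memory model. A minimal sketch (assuming the same model object and an X_test array as in the question) of loading them back before predicting:

# Restore the best weights saved by ModelCheckpoint above, then predict with them
model.load_weights('.mdl_wts.hdf5')
predictions = model.predict(X_test)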
Answered by Vivek
I guess model_2.compile was a typo. This should help if you want to save the best model with respect to val_loss:
from keras.callbacks import ModelCheckpoint

checkpoint = ModelCheckpoint('model-{epoch:03d}-{acc:03f}-{val_acc:03f}.h5', verbose=1, monitor='val_loss', save_best_only=True, mode='auto')
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
model.fit(X, y, epochs=15, validation_split=0.4, callbacks=[checkpoint], verbose=False)
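Because save_best_only=True is combined with a templated filename, each improvement in val_loss writes a new file named after that epoch and its metrics. A minimal sketch (with a hypothetical filename; substitute whichever checkpoint was actually written) of loading one of them back as a complete model:

from keras.models import load_model

# Hypothetical checkpoint filename produced by the template above
best_model = load_model('model-010-0.850000-0.820000.h5')
best_model.predict(X_test)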
Answered by jorijnsmit
EarlyStopping's restore_best_weights argument will do the trick:
restore_best_weights: whether to restore model weights from the epoch with the best value of the monitored quantity. If False, the model weights obtained at the last step of training are used.
I'm not sure how your early_stopping_monitor is defined, but going with all the default settings, and seeing that you already imported EarlyStopping, you could do this:
early_stopping_monitor = EarlyStopping(
    monitor='val_loss',
    min_delta=0,
    patience=0,
    verbose=0,
    mode='auto',
    baseline=None,
    # Restore the weights from the epoch with the lowest val_loss once training stops
    restore_best_weights=True
)
And then just call model.fit() with callbacks=[early_stopping_monitor] like you already do.
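For completeness, a minimal sketch of that call, reusing X, y, and X_test from the question. Note that with patience=0 as above, training stops after the first epoch in which val_loss fails to improve, so a larger patience is usually worth trying:

model.fit(X, y, epochs=15, validation_split=0.4, callbacks=[early_stopping_monitor], verbose=False)
# The model now holds the best weights seen during training, not the weights from the final epoch
model.predict(X_test)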