Show progress bar for each epoch during batchwise training in Keras

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use it, you must follow the same CC BY-SA license and attribute it to the original authors (not me). Original: http://stackoverflow.com/questions/39124676/

Tags: python, machine-learning, keras

Asked by Anish Shah

When I load the whole dataset into memory and train the network in Keras using the following code:

model.fit(X, y, nb_epoch=40, batch_size=32, validation_split=0.2, verbose=1)

This generates a progress bar per epoch with metrics like ETA, accuracy, loss, etc.

When I train the network in batches, I use the following code:

for e in range(40):
    for X, y in data.next_batch():
        model.fit(X, y, nb_epoch=1, batch_size=data.batch_size, verbose=1)

This will generate a progress bar for each batch instead of each epoch. Is it possible to generate a progress bar for each epoch during batchwise training?

Answered by Abhijay Ghildyal

1.

model.fit(X, y, nb_epoch=40, batch_size=32, validation_split=0.2, verbose=1)

In the call above, change to verbose=2, as mentioned in the documentation: "verbose: 0 for no logging to stdout, 1 for progress bar logging, 2 for one log line per epoch."

It'll show your output as:

Epoch 1/100
0s - loss: 0.2506 - acc: 0.5750 - val_loss: 0.2501 - val_acc: 0.3750
Epoch 2/100
0s - loss: 0.2487 - acc: 0.6250 - val_loss: 0.2498 - val_acc: 0.6250
Epoch 3/100
0s - loss: 0.2495 - acc: 0.5750 - val_loss: 0.2496 - val_acc: 0.6250
.....
.....

2.

If you want to show a progress bar for completion of epochs, keep verbose=0 (which turns off logging to stdout) and implement it in the following manner:

import sys

epochs = 10

for e in range(epochs):
    sys.stdout.write('\r')

    for X, y in data.next_batch():
        model.fit(X, y, nb_epoch=1, batch_size=data.batch_size, verbose=0)

    # print loss and accuracy here if desired

    # integer division (//) keeps the bar width an int under Python 3
    sys.stdout.write("[%-60s] %d%%" % ('=' * (60 * (e + 1) // epochs), 100 * (e + 1) // epochs))
    sys.stdout.write(", epoch %d" % (e + 1))
    sys.stdout.flush()

The output will be as follows:

[============================================================] 100%, epoch 10

3.

If you want to show loss after every n batches, you can use:

out_batch = NBatchLogger(display=1000)
model.fit([X_train_aux, X_train_main], Y_train, batch_size=128, callbacks=[out_batch])

I haven't tried it myself, though. The above example was taken from this Keras GitHub issue: Show Loss Every N Batches #2850

You can also follow this demo of NBatchLogger:

from keras.callbacks import Callback

class NBatchLogger(Callback):
    def __init__(self, display):
        self.seen = 0
        self.display = display  # print metrics every `display` samples

    def on_batch_end(self, batch, logs={}):
        self.seen += logs.get('size', 0)
        if self.seen % self.display == 0:
            metrics_log = ''
            for k in self.params['metrics']:
                if k in logs:
                    val = logs[k]
                    if abs(val) > 1e-3:
                        metrics_log += ' - %s: %.4f' % (k, val)
                    else:
                        metrics_log += ' - %s: %.4e' % (k, val)
            print('{}/{} ... {}'.format(self.seen,
                                        self.params['samples'],
                                        metrics_log))
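
Note that self.params['metrics'] and self.params['samples'] come from the old standalone Keras; newer tf.keras versions may not populate them the same way. A rough, untested adaptation that instead reads the metric values straight from the batch logs could look like this:

import tensorflow as tf

class SimpleNBatchLogger(tf.keras.callbacks.Callback):
    """Sketch of a tf.keras variant: print every logged metric every `display` batches."""
    def __init__(self, display):
        super().__init__()
        self.display = display

    def on_train_batch_end(self, batch, logs=None):
        logs = logs or {}
        if (batch + 1) % self.display == 0:
            metrics_log = ' - '.join('%s: %.4f' % (k, v) for k, v in logs.items())
            print('batch {} ... {}'.format(batch + 1, metrics_log))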

4.

You can also use Progbar for progress, but it will print progress batchwise:

from keras.utils import generic_utils

progbar = generic_utils.Progbar(X_train.shape[0])

# note: datagen.flow() yields batches indefinitely, so break after one full pass
samples_seen = 0
for X_batch, Y_batch in datagen.flow(X_train, Y_train):
    loss, acc = model_test.train([X_batch]*2, Y_batch, accuracy=True)
    progbar.add(X_batch.shape[0], values=[("train loss", loss), ("acc", acc)])
    samples_seen += X_batch.shape[0]
    if samples_seen >= X_train.shape[0]:
        break
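
The model_test.train(..., accuracy=True) call is the very old Keras 0.x API; on Keras 1/2 the equivalent one-batch update is train_on_batch, and Progbar can be imported from keras.utils directly. A minimal sketch under those assumptions:

from keras.utils import Progbar

progbar = Progbar(X_train.shape[0])
samples_seen = 0
for X_batch, Y_batch in datagen.flow(X_train, Y_train, batch_size=32):
    # assumes the model was compiled without extra metrics, so this returns a scalar loss
    loss = model.train_on_batch(X_batch, Y_batch)
    progbar.add(X_batch.shape[0], values=[("train loss", loss)])
    samples_seen += X_batch.shape[0]
    if samples_seen >= X_train.shape[0]:
        break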

Answered by casper.dcl

tqdm (version >= 4.41.0) has also added built-in support for keras, so you can do:

from tqdm.keras import TqdmCallback
...
model.fit(..., verbose=0, callbacks=[TqdmCallback(verbose=2)])

This turns off keras' progress (verbose=0) and uses tqdm instead. For the callback, verbose=2 means separate progress bars for epochs and batches, 1 means clear batch bars when done, and 0 means only show epochs (never show batch bars).

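Note that TqdmCallback requires tqdm >= 4.41.0, the release that introduced tqdm.keras, so you may need to upgrade first (pip install -U tqdm).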

Answered by quester

You can set verbose=0 and attach callbacks that update the progress at the end of each fit call:

clf.fit(X, y, nb_epoch=1, batch_size=data.batch_size, verbose=0, callbacks=[some_callback])

https://keras.io/callbacks/#example-model-checkpoints

Or set a callback: https://keras.io/callbacks/#remotemonitor

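As a concrete sketch of such a callback (hypothetical example using keras.callbacks.LambdaCallback; the name progress below is not from the original answer):

from keras.callbacks import LambdaCallback

# print a one-line progress update at the end of each fit call
progress = LambdaCallback(
    on_epoch_end=lambda epoch, logs: print('epoch %d - loss: %.4f' % (epoch + 1, logs['loss']))
)

clf.fit(X, y, nb_epoch=1, batch_size=data.batch_size, verbose=0, callbacks=[progress])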