Python: How to return the history of validation loss in Keras

Disclaimer: this page is a translated copy of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/36952763/

Date: 2020-08-19 18:35:53 | Source: igfitidea

How to return history of validation loss in Keras

python, neural-network, nlp, deep-learning, keras

Asked by ishido

Using Anaconda Python 2.7 on Windows 10.

I am training a language model using the Keras example:

# the usual char-RNN preprocessing is assumed to have run already, defining
# text, chars, char_indices, indices_char, maxlen, X and y
import sys
import random
import numpy as np
from keras.models import Sequential
from keras.layers import GRU, Dropout, Dense, Activation

print('Build model...')
model = Sequential()
model.add(GRU(512, return_sequences=True, input_shape=(maxlen, len(chars))))
model.add(Dropout(0.2))
model.add(GRU(512, return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(len(chars)))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

def sample(a, temperature=1.0):
    # helper function to sample an index from a probability array
    a = np.log(a) / temperature
    a = np.exp(a) / np.sum(np.exp(a))
    return np.argmax(np.random.multinomial(1, a, 1))


# train the model, output generated text after each iteration
for iteration in range(1, 3):
    print()
    print('-' * 50)
    print('Iteration', iteration)
    model.fit(X, y, batch_size=128, nb_epoch=1)
    start_index = random.randint(0, len(text) - maxlen - 1)

    for diversity in [0.2, 0.5, 1.0, 1.2]:
        print()
        print('----- diversity:', diversity)

        generated = ''
        sentence = text[start_index: start_index + maxlen]
        generated += sentence
        print('----- Generating with seed: "' + sentence + '"')
        sys.stdout.write(generated)

        for i in range(400):
            x = np.zeros((1, maxlen, len(chars)))
            for t, char in enumerate(sentence):
                x[0, t, char_indices[char]] = 1.

            preds = model.predict(x, verbose=0)[0]
            next_index = sample(preds, diversity)
            next_char = indices_char[next_index]

            generated += next_char
            sentence = sentence[1:] + next_char

            sys.stdout.write(next_char)
            sys.stdout.flush()
        print()

According to the Keras documentation, the model.fit method returns a History callback, which has a history attribute containing the lists of successive losses and other metrics.

hist = model.fit(X, y, validation_split=0.2)
print(hist.history)

After training my model, if I run print(model.history) I get the error:

 AttributeError: 'Sequential' object has no attribute 'history'

How do I return my model history after training my model with the above code?

UPDATE

The issue was that:

The following had to be defined first:

from keras.callbacks import History 
history = History()

The callbacks option had to be passed:

model.fit(X_train, Y_train, nb_epoch=5, batch_size=16, callbacks=[history])

But now if I print

print(history.History)

it returns

{}

even though I ran an iteration.

Accepted answer by ishido

It's been solved.

The losses are only saved to the History across epochs. I was running iterations instead of using the built-in Keras epochs option.

So instead of doing 4 iterations, I now have

model.fit(......, nb_epoch = 4)

Now it returns the loss for each epoch run:

print(hist.history)
{'loss': [1.4358016599558268, 1.399221191623641, 1.381293383180471, 1.3758836857303727]}
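
Since the question was about validation loss specifically: if validation_split (or validation_data) is also passed, the same dictionary gains a val_loss list with one entry per epoch. A minimal sketch, assuming the same X and y as above and the old nb_epoch argument name used in this thread:

hist = model.fit(X, y, validation_split=0.2, batch_size=128, nb_epoch=4)
print(hist.history['loss'])      # training loss, one value per epoch
print(hist.history['val_loss'])  # validation loss, one value per epoch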

Answered by Jeremy Anifacc

Just an example, starting from

history = model.fit(X, Y, validation_split=0.33, nb_epoch=150, batch_size=10, verbose=0)

You can use

print(history.history.keys())

to list all data in history.

Then, you can print the history of validation loss like this:

print(history.history['val_loss'])
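
Note that the 'val_loss' key only exists if fit() was given validation_split or validation_data; a small defensive sketch (same history variable assumed):

if 'val_loss' in history.history:
    print(history.history['val_loss'])
else:
    print('no validation data was used; available keys:', list(history.history.keys()))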

Answered by Rami Alloush

The following simple code works great for me:

from keras_tqdm import TQDMNotebookCallback  # progress-bar callback from the third-party keras_tqdm package

seqModel = model.fit(x_train, y_train,
                     batch_size      = batch_size,
                     epochs          = num_epochs,
                     validation_data = (x_test, y_test),
                     shuffle         = True,
                     verbose         = 0,
                     callbacks       = [TQDMNotebookCallback()])  # for visualization

Make sure you assign the return value of the fit function to a variable. Then you can access that variable very easily:

# visualizing losses and accuracy
import matplotlib.pyplot as plt

train_loss = seqModel.history['loss']
val_loss   = seqModel.history['val_loss']
train_acc  = seqModel.history['acc']   # only present if the model was compiled with metrics=['accuracy']
val_acc    = seqModel.history['val_acc']
xc         = range(num_epochs)

plt.figure()
plt.plot(xc, train_loss)
plt.plot(xc, val_loss)
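
A small optional extension of the snippet above (same seqModel, xc, train_loss, and val_loss variables assumed): axis labels and a legend make the two curves easier to tell apart:

plt.plot(xc, train_loss, label='train loss')
plt.plot(xc, val_loss, label='val loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()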

Hope this helps. Source: https://keras.io/getting-started/faq/#how-can-i-record-the-training-validation-loss-accuracy-at-each-epoch

Answered by Marcin Możejko

The dictionary with the histories of "acc", "loss", etc. is available and saved in the hist.history variable.

Answered by Jimmy

Another option is CSVLogger: https://keras.io/callbacks/#csvlogger. It creates a CSV file, appending the result of each epoch. Even if you interrupt training, you get to see how it evolved.
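
For illustration, a minimal sketch of the CSVLogger approach; the filename training.log and the fit arguments are just placeholders:

from keras.callbacks import CSVLogger

csv_logger = CSVLogger('training.log', append=True)  # append=True keeps earlier rows if training is resumed
model.fit(X, y, validation_split=0.2, epochs=10, batch_size=128, callbacks=[csv_logger])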

Answered by Roozbeh Zabihollahi

I have also found that you can use verbose=2 to make Keras print out the losses:

history = model.fit(X, Y, validation_split=0.33, nb_epoch=150, batch_size=10, verbose=2)

And that would print nice lines like this:

Epoch 1/1
 - 5s - loss: 0.6046 - acc: 0.9999 - val_loss: 0.4403 - val_acc: 0.9999

According to their documentation:

verbose: 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch.

Answered by Raven Cheuk

Actually, you can also do it with the iteration method, because sometimes we need the iteration method instead of the built-in epochs option to visualize the training results after each iteration.

history = []  # empty list to hold the loss from each iteration
for iteration in range(1, 3):
    print()
    print('-' * 50)
    print('Iteration', iteration)
    result = model.fit(X, y, batch_size=128, nb_epoch=1)  # fit returns a History object for this iteration
    history.append(result.history['loss'])  # append this iteration's loss to the list
    start_index = random.randint(0, len(text) - maxlen - 1)
print(history)

This way, you get the loss you want while keeping your iteration method.
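
One detail worth knowing about the sketch above: each result.history['loss'] is itself a list (with a single element here, since nb_epoch=1), so history ends up as a list of lists; flatten it if you want one value per iteration:

flat_losses = [loss for epoch_losses in history for loss in epoch_losses]
print(flat_losses)  # one training-loss value per iteration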

Answered by horseshoe

To plot the loss directly, the following works:

import matplotlib.pyplot as plt

model_ = model.fit(X, Y, epochs=..., verbose=1)  # number of epochs left unspecified in the original answer
plt.plot(list(model_.history.values())[0], 'k-o')
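
Relying on the dictionary's value order is a little fragile; looking the metric up by key is more explicit (a small variant under the same assumptions):

plt.plot(model_.history['loss'], 'k-o')        # training loss by name
# plt.plot(model_.history['val_loss'], 'r-o')  # uncomment if validation data was used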

Answered by Aniket Sawale

For those who, like me, still got the error:

Convert model.fit_generator() to model.fit()
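
Whichever method you end up using, the common pattern throughout this thread is to capture the return value of fit and read its history attribute afterwards; a minimal sketch with hypothetical data names:

history = model.fit(x_train, y_train, validation_split=0.2, epochs=5, batch_size=16)
print(history.history['loss'])
print(history.history['val_loss'])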