Python: How to get the accuracy of a model using Keras?

Disclaimer: this page is a translation of a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same CC BY-SA license and attribute it to the original authors (not me). Original StackOverflow question: http://stackoverflow.com/questions/51047676/


How to get accuracy of model using keras?

python, tensorflow, machine-learning, keras, deep-learning

Asked by ZelelB

After fitting the model (which was running for a couple of hours), I wanted to get the accuracy with the following code:


train_loss=hist.history['loss']
val_loss=hist.history['val_loss']
train_acc=hist.history['acc']
val_acc=hist.history['val_acc']
xc=range(nb_epoch)

of the trained model, but I was getting an error, which was caused by the deprecated methods I was using.


---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-233-081ed5e89aa4> in <module>()
      3 train_loss=hist.history['loss']
      4 val_loss=hist.history['val_loss']
----> 5 train_acc=hist.history['acc']
      6 val_acc=hist.history['val_acc']
      7 xc=range(nb_epoch)

KeyError: 'acc'

The code I used to fit the model before trying to read the accuracy is the following:


hist = model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
            verbose=1, validation_data=(X_test, Y_test))


hist = model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch, 
            verbose=1, validation_split=0.2)

This produces the following output when run:


Epoch 1/20
237/237 [==============================] - 104s 440ms/step - loss: 6.2802 - val_loss: 2.4209
.....
.....
.....
Epoch 19/20
189/189 [==============================] - 91s 480ms/step - loss: 0.0590 - val_loss: 0.2193
Epoch 20/20
189/189 [==============================] - 85s 451ms/step - loss: 0.0201 - val_loss: 0.2312

I've noticed that I was running deprecated methods & arguments.


So how can I read the accuracy and val_accuracy without having to fit again and wait for a couple of hours? I tried to replace train_acc=hist.history['acc'] with train_acc=hist.history['accuracy'], but it didn't help.


Answered by Daniel Möller

You probably didn't add "acc" as a metric when compiling the model.


model.compile(optimizer=..., loss=..., metrics=['accuracy',...])

You can get the metrics and loss from any data without training again with:


model.evaluate(X, Y)
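
For example, a minimal sketch combining both points, assuming the model, X_test, and Y_test from the question; the optimizer and loss names below are placeholders, and recompiling only changes the training configuration, it does not reset the already-trained weights:

# Re-compile the trained model so that accuracy is tracked as a metric.
# The optimizer and loss here are placeholders; use the ones from your model.
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# evaluate() returns the loss followed by one value per metric,
# so with a single accuracy metric this yields [loss, accuracy].
test_loss, test_acc = model.evaluate(X_test, Y_test, verbose=1)
print('test loss:', test_loss, '- test accuracy:', test_acc)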

Answered by user1906450

  1. Add metrics=['accuracy'] when you compile the model.

  2. Simply get the accuracy of the last epoch: hist.history.get('acc')[-1]

  3. What I would actually do is use GridSearchCV and then read the best_score_ attribute to print the best metric (a rough sketch follows below).

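A rough sketch of that GridSearchCV approach, assuming X_train/Y_train from the question are a 2-D feature matrix with integer labels, a hypothetical create_model() builder with made-up layer sizes and loss, and the scikit-learn wrapper that ships with older Keras releases (newer setups use the separate scikeras package instead):

from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV

def create_model():
    # Hypothetical builder: a small dense network compiled with an
    # accuracy metric, so the score reported by the search is accuracy.
    model = Sequential()
    model.add(Dense(64, activation='relu', input_shape=(X_train.shape[1],)))
    model.add(Dense(10, activation='softmax'))
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

clf = KerasClassifier(build_fn=create_model, verbose=0)
param_grid = {'batch_size': [16, 32], 'epochs': [10, 20]}  # example grid
grid = GridSearchCV(clf, param_grid, cv=3)
grid_result = grid.fit(X_train, Y_train)
print('best score:', grid_result.best_score_)
print('best params:', grid_result.best_params_)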

Answered by Daniel B.

I just tried this in tensorflow==2.0.0, with the following result:


Given a training call like:


history = model.fit(train_data, train_labels, epochs=100,
                    validation_data=(test_images, test_labels))

The final accuracy for the above call can be read out as follows:


history.history['accuracy']

Printing the entire dict history.history gives you an overview of all the contained values. You will find that all of the values reported in a line such as:


7570/7570 [==============================] - 42s 6ms/sample - loss: 1.1612 - accuracy: 0.5715 - val_loss: 0.5541 - val_accuracy: 0.8300

can be read out from that dict.

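For instance, a quick sketch, assuming the history object returned by the fit call above, that lists the recorded keys and reads the final training and validation accuracy:

# The keys mirror the names shown in the progress bar,
# e.g. ['loss', 'accuracy', 'val_loss', 'val_accuracy'].
print(history.history.keys())

# Each entry is a list with one value per epoch; [-1] is the last epoch.
print('final accuracy:', history.history['accuracy'][-1])
print('final val_accuracy:', history.history['val_accuracy'][-1])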

For the sake of completeness, I created the model as follows:


model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.0001,
                                       beta_1=0.9,
                                       beta_2=0.999,
                                       epsilon=1e-07,
                                       amsgrad=False,
                                       name='Adam'
                                       ),
          loss='sparse_categorical_crossentropy',
          metrics=['accuracy'])