Python: What do model.predict() and model.fit() do?

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use or share it, but you must do so under the same license, cite the original address, and attribute it to the original authors (not me). Original StackOverflow question: http://stackoverflow.com/questions/37973005/


What do model.predict() and model.fit() do?

python deep-learning keras reinforcement-learning

Asked by Soham

I'm going through this reinforcement learning tutorial and it's been really great so far, but could someone please explain what


newQ = model.predict(new_state.reshape(1,64), batch_size=1)

and

model.fit(X_train, y_train, batch_size=batchSize, nb_epoch=1, verbose=1)

mean?


As in, what do the arguments batch_size, nb_epoch and verbose do? I know neural networks, so an explanation in those terms would be helpful.


Could you also send me a link to where the documentation for these functions can be found?


Accepted answer by nemo

First of all, it surprises me that you could not find the documentation, but I guess you just had bad luck while searching.


The documentation for model.fit states:


fit(self, x, y, batch_size=32, nb_epoch=10, verbose=1, callbacks=[], validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None)

  • batch_size: integer. Number of samples per gradient update.
  • nb_epoch: integer, the number of times to iterate over the training data arrays.
  • verbose: 0, 1, or 2. Verbosity mode. 0 = silent, 1 = verbose, 2 = one log line per epoch.
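
To make the three arguments above concrete, here is a minimal, self-contained sketch (not from the original post or tutorial). It follows the Keras 1.x API used in the question, where the argument is called nb_epoch; in Keras 2.x it was renamed to epochs. The network shape and the random toy data are assumptions chosen purely for illustration.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Toy data: 256 samples with 64 features each and 4 target values per sample.
X_train = np.random.rand(256, 64)
y_train = np.random.rand(256, 4)

model = Sequential()
model.add(Dense(32, input_dim=64, activation='relu'))
model.add(Dense(4, activation='linear'))
model.compile(optimizer='rmsprop', loss='mse')

# batch_size=16: each gradient update is computed from 16 samples,
#                so one epoch here means 256 / 16 = 16 updates.
# nb_epoch=1:    iterate over the full training data exactly once.
# verbose=1:     print a progress bar with the running loss.
model.fit(X_train, y_train, batch_size=16, nb_epoch=1, verbose=1)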


The batch_size parameter in the case of model.predict is just the number of samples used for each prediction step. So each prediction step consumes batch_size data samples; model.predict simply works through its input in chunks of that size. This helps on devices that can process large matrices quickly (such as GPUs).

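Continuing the sketch above (so model is assumed to take a 64-dimensional input and produce 4 outputs; these are illustrative choices, not details from the tutorial), the key difference from fit is that predict only runs a forward pass and never updates any weights, and batch_size just controls how many samples are pushed through the network at once.

import numpy as np

# A single state: reshape to (1, 64) so Keras sees a batch containing one sample.
new_state = np.random.rand(64)   # hypothetical stand-in for the tutorial's game state
newQ = model.predict(new_state.reshape(1, 64), batch_size=1)
print(newQ.shape)                # (1, 4): one predicted value per output unit

# For many states at once, batch_size only controls how the forward passes are
# chunked (which is what helps on GPUs); it does not change the predictions.
many_states = np.random.rand(1000, 64)
q_values = model.predict(many_states, batch_size=128)
print(q_values.shape)            # (1000, 4)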