TensorFlow Inference

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must likewise follow the CC BY-SA license and attribute it to the original authors (not the translator): StackOverflow
Original question: http://stackoverflow.com/questions/43708616/
Asked by David Crook
I've been digging around on this for a while. I have found a ton of articles, but none really show plain tensorflow inference as just that: plain inference. It's always "use the serving engine" or using a graph that is pre-coded/defined.
Here is the problem: I have a device which occasionally checks for updated models. It then needs to load that model and run input predictions through the model.
In keras this was simple: build a model, train the model, and call model.predict(). Same thing in scikit-learn.
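For reference, the Keras workflow being contrasted here looks roughly like this (a minimal sketch; the layer size and the random stand-in data are illustrative, not from the original post):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

x_train = np.random.rand(100, 32)                     # stand-in inputs
y_train = np.eye(10)[np.random.randint(0, 10, 100)]   # stand-in one-hot labels

model = Sequential([Dense(10, activation='softmax', input_shape=(32,))])
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.fit(x_train, y_train, epochs=1)
preds = model.predict(np.random.rand(5, 32))          # inference is a single call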
I am able to grab a new model and load it; I can print out all of the weights; but how in the world do I run inference against it?
Code to load model and print weights:
with tf.Session() as sess:
    new_saver = tf.train.import_meta_graph(MODEL_PATH + '.meta', clear_devices=True)
    new_saver.restore(sess, MODEL_PATH)
    for var in tf.trainable_variables():
        print(sess.run(var))
I printed out all of my collections and I have: ['queue_runners', 'variables', 'losses', 'summaries', 'train_op', 'cond_context', 'trainable_variables']
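A quick way to dig one level deeper than the collections is to list every operation in the restored graph; this is handy for finding a candidate output tensor in a graph you didn't build yourself (a small sketch, meant to run inside the session above after the restore):

for op in tf.get_default_graph().get_operations():
    print(op.name, op.type)  # every operation's name and type in the graph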
I tried using sess.run(train_op); however, that just kicked off a full training session, which is not what I want to do. I just want to run inference against a different set of inputs that I provide, which are not TF Records.
Just a little more detail:
The device can use C++ or Python, as long as I can produce a .exe. I can set up a feed dict if I want to feed the system. I trained with TFRecords, but in production I'm not going to use TFRecords; it's a real/near-real-time system.
Thanks for any input. I am posting sample code to this repo: https://github.com/drcrook1/CIFAR10/TensorFlow which does all the training and sample inference.
Any hints are greatly appreciated!
------------EDITS-----------------

I rebuilt the model to be as below:
def inference(images):
    '''
    Portion of the compute graph that takes an input and converts it into a Y output
    '''
    with tf.variable_scope('Conv1') as scope:
        C_1_1 = ld.cnn_layer(images, (5, 5, 3, 32), (1, 1, 1, 1), scope, name_postfix='1')
        C_1_2 = ld.cnn_layer(C_1_1, (5, 5, 32, 32), (1, 1, 1, 1), scope, name_postfix='2')
        P_1 = ld.pool_layer(C_1_2, (1, 2, 2, 1), (1, 2, 2, 1), scope)
    with tf.variable_scope('Dense1') as scope:
        P_1 = tf.reshape(C_1_2, (CONSTANTS.BATCH_SIZE, -1))
        dim = P_1.get_shape()[1].value
        D_1 = ld.mlp_layer(P_1, dim, NUM_DENSE_NEURONS, scope, act_func=tf.nn.relu)
    with tf.variable_scope('Dense2') as scope:
        D_2 = ld.mlp_layer(D_1, NUM_DENSE_NEURONS, CONSTANTS.NUM_CLASSES, scope)
    H = tf.nn.softmax(D_2, name='prediction')
    return H
Notice I add the name 'prediction' to the TF operation so I can retrieve it later.
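One detail worth flagging (an observation, not part of the original post): for sess.run to hand values back you want the tensor, not the operation; running an operation returns None. Also note that if the softmax line were created inside one of the variable scopes, its name would get prefixed (e.g. 'Dense2/prediction'). A sketch, assuming the op was created at the top level of the function:

graph = tf.get_default_graph()
# get_operation_by_name returns the op object; sess.run(op) yields None.
# get_tensor_by_name('<op_name>:0') returns the op's first output tensor,
# which is what you actually pass to sess.run.
pred = graph.get_tensor_by_name('prediction:0')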
When training I used the input pipeline for tfrecords and input queues.
GRAPH = tf.Graph()
with GRAPH.as_default():
    examples, labels = Inputs.read_inputs(CONSTANTS.RecordPaths,
                                          batch_size=CONSTANTS.BATCH_SIZE,
                                          img_shape=CONSTANTS.IMAGE_SHAPE,
                                          num_threads=CONSTANTS.INPUT_PIPELINE_THREADS)
    examples = tf.reshape(examples, [CONSTANTS.BATCH_SIZE, CONSTANTS.IMAGE_SHAPE[0],
                                     CONSTANTS.IMAGE_SHAPE[1], CONSTANTS.IMAGE_SHAPE[2]])
    logits = Vgg3CIFAR10.inference(examples)
    loss = Vgg3CIFAR10.loss(logits, labels)
    OPTIMIZER = tf.train.AdamOptimizer(CONSTANTS.LEARNING_RATE)
I am attempting to use feed_dict on the loaded operation in the graph; however, now it is just hanging....
MODEL_PATH = 'models/' + CONSTANTS.MODEL_NAME + '.model'
images = tf.placeholder(tf.float32, shape=(1, 32, 32, 3))

def run_inference():
    '''Runs inference against a loaded model'''
    with tf.Session() as sess:
        #sess.run(tf.global_variables_initializer())
        new_saver = tf.train.import_meta_graph(MODEL_PATH + '.meta', clear_devices=True)
        new_saver.restore(sess, MODEL_PATH)
        pred = tf.get_default_graph().get_operation_by_name('prediction')
        rand = np.random.rand(1, 32, 32, 3)
        print(rand)
        print(pred)
        print(sess.run(pred, feed_dict={images: rand}))
        print('done')

run_inference()
I believe this is not working because the original network was trained using TFRecords. In the sample CIFAR data set the data is small; our real data set is huge, and it is my understanding that TFRecords is the default best practice for training a network. The feed_dict makes perfect sense from a productionizing perspective; we can spin up some threads and populate that thing from our input systems.
So I guess I have a network that is trained and I can get the predict operation; but how do I tell it to stop using the input queues and start using the feed_dict? Remember that from the production perspective I do not have access to whatever the scientists did to make it. They do their thing, and we stick it in production using whatever agreed-upon standard.
-------INPUT OPS--------
tf.Operation 'input/input_producer/Const' type=Const
tf.Operation 'input/input_producer/Size' type=Const
tf.Operation 'input/input_producer/Greater/y' type=Const
tf.Operation 'input/input_producer/Greater' type=Greater
tf.Operation 'input/input_producer/Assert/Const' type=Const
tf.Operation 'input/input_producer/Assert/Assert/data_0' type=Const
tf.Operation 'input/input_producer/Assert/Assert' type=Assert
tf.Operation 'input/input_producer/Identity' type=Identity
tf.Operation 'input/input_producer/RandomShuffle' type=RandomShuffle
tf.Operation 'input/input_producer' type=FIFOQueueV2
tf.Operation 'input/input_producer/input_producer_EnqueueMany' type=QueueEnqueueManyV2
tf.Operation 'input/input_producer/input_producer_Close' type=QueueCloseV2
tf.Operation 'input/input_producer/input_producer_Close_1' type=QueueCloseV2
tf.Operation 'input/input_producer/input_producer_Size' type=QueueSizeV2
tf.Operation 'input/input_producer/Cast' type=Cast
tf.Operation 'input/input_producer/mul/y' type=Const
tf.Operation 'input/input_producer/mul' type=Mul
tf.Operation 'input/input_producer/fraction_of_32_full/tags' type=Const
tf.Operation 'input/input_producer/fraction_of_32_full' type=ScalarSummary
tf.Operation 'input/TFRecordReaderV2' type=TFRecordReaderV2
tf.Operation 'input/ReaderReadV2' type=ReaderReadV2
------END INPUT OPS-----
----UPDATE 3----
I believe what I need to do is to kill the input section of the graph trained with TF Records and rewire the input to the first layer to a new input. It's kinda like performing surgery; but this is the only way I can find to do inference if I trained using TFRecords, as crazy as it sounds...
Full Graph: (graph screenshot not included in this copy)
Section to kill: (screenshot not included in this copy)
So I think the question becomes: how does one kill the input section of the graph and replace it with a feed_dict?
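For what it's worth, tf.train.import_meta_graph accepts an input_map argument designed for exactly this kind of rewiring: it splices tensors from the current graph in place of named tensors from the imported one. A minimal sketch; the tensor name 'input/Reshape:0' is hypothetical and would have to be found by listing the graph's operations, as in the INPUT OPS dump above:

images = tf.placeholder(tf.float32, shape=(1, 32, 32, 3))
new_saver = tf.train.import_meta_graph(
    MODEL_PATH + '.meta',
    clear_devices=True,
    input_map={'input/Reshape:0': images})  # 'input/Reshape:0' is a guessed name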
A follow-up to this would be: is this really the right way to do it? This seems bonkers.
----END UPDATE 3----
---link to checkpoint files---
--end link to checkpoint files---
-----UPDATE 4 -----
I gave in and just gave a shot at the 'normal' way of performing inference, assuming I could have the scientists simply pickle their models and we could grab the model pickle, unpack it, and then run inference on it. So to test, I tried the normal way, assuming we had already unpacked it... It doesn't work worth beans either...
import tensorflow as tf
import CONSTANTS
import Vgg3CIFAR10
import numpy as np
from scipy import misc
import time

MODEL_PATH = 'models/' + CONSTANTS.MODEL_NAME + '.model'
imgs_bsdir = 'C:/data/cifar_10/train/'

images = tf.placeholder(tf.float32, shape=(1, 32, 32, 3))
logits = Vgg3CIFAR10.inference(images)

def run_inference():
    '''Runs inference against a loaded model'''
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        new_saver = tf.train.import_meta_graph(MODEL_PATH + '.meta')  #, import_scope='1', input_map={'input:0': images})
        new_saver.restore(sess, MODEL_PATH)
        pred = tf.get_default_graph().get_operation_by_name('prediction')
        # NOTE: enqueue_op and rand are undefined in this snippet; they are
        # leftovers from earlier experiments with the queue-fed graph.
        enq = sess.graph.get_operation_by_name(enqueue_op)
        #tf.train.start_queue_runners(sess)
        print(rand)
        print(pred)
        print(enq)
        for i in range(1, 25):
            img = misc.imread(imgs_bsdir + str(i) + '.png').astype(np.float32) / 255.0
            img = img.reshape(1, 32, 32, 3)
            print(sess.run(logits, feed_dict={images : img}))
            time.sleep(3)
        print('done')

run_inference()
Tensorflow ends up building a new graph with the inference function from the loaded model; then it appends all the other stuff from the other graph to the end of it. So when I populate a feed_dict expecting to get inferences back, I just get a bunch of random garbage, as if it were the first pass through the network...
Again; this seems nuts; do I really need to write my own framework for serializing and deserializing random networks? This must have been done before...
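For what it's worth, TensorFlow's SavedModel format (the on-disk format, distinct from the full Serving engine) is aimed at exactly this serialize/deserialize hand-off and was available in the TF 1.x releases of the time. A minimal sketch, assuming sess holds the trained graph and export_dir is a fresh directory:

import tensorflow as tf

export_dir = '/tmp/exported_model'  # hypothetical path

# Exporting side: assumes `sess` is a live session holding the trained graph.
builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
builder.add_meta_graph_and_variables(sess, [tf.saved_model.tag_constants.SERVING])
builder.save()

# Inference side, in a fresh process: load graph + variables in one call.
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], export_dir)
    # tensors can then be looked up by name, e.g.
    # pred = sess.graph.get_tensor_by_name('prediction:0')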
-----END UPDATE 4-----
Again; thanks!
Accepted answer by David Crook
Alright, this took way too much time to figure out; so here is the answer for the rest of the world.
Quick Reminder: I needed to persist a model that can be dynamically loaded and inferred against without knowledge as to the underpinnings or insides of how it works.
Step 1: Create a model as a Class and ideally use an interface definition
class Vgg3Model:

    NUM_DENSE_NEURONS = 50
    DENSE_RESHAPE = 32 * (CONSTANTS.IMAGE_SHAPE[0] // 2) * (CONSTANTS.IMAGE_SHAPE[1] // 2)

    def inference(self, images):
        '''
        Portion of the compute graph that takes an input and converts it into a Y output
        '''
        with tf.variable_scope('Conv1') as scope:
            C_1_1 = ld.cnn_layer(images, (5, 5, 3, 32), (1, 1, 1, 1), scope, name_postfix='1')
            C_1_2 = ld.cnn_layer(C_1_1, (5, 5, 32, 32), (1, 1, 1, 1), scope, name_postfix='2')
            P_1 = ld.pool_layer(C_1_2, (1, 2, 2, 1), (1, 2, 2, 1), scope)
        with tf.variable_scope('Dense1') as scope:
            P_1 = tf.reshape(P_1, (-1, self.DENSE_RESHAPE))
            dim = P_1.get_shape()[1].value
            D_1 = ld.mlp_layer(P_1, dim, self.NUM_DENSE_NEURONS, scope, act_func=tf.nn.relu)
        with tf.variable_scope('Dense2') as scope:
            D_2 = ld.mlp_layer(D_1, self.NUM_DENSE_NEURONS, CONSTANTS.NUM_CLASSES, scope)
        H = tf.nn.softmax(D_2, name='prediction')
        return H

    def loss(self, logits, labels):
        '''
        Adds Loss to all variables
        '''
        cross_entr = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=labels)
        cross_entr = tf.reduce_mean(cross_entr)
        tf.summary.scalar('cost', cross_entr)
        tf.add_to_collection('losses', cross_entr)
        return tf.add_n(tf.get_collection('losses'), name='total_loss')
Step 2: Train your network with whatever inputs you want; in my case I used Queue Runners and TF Records. Note that this step is done by a different team which iterates, builds, designs and optimizes models. This can also change over time. The output they produce must be able to be pulled from a remote location so we can dynamically load the updated models on devices (reflashing hardware is a pain, especially if it is geographically distributed). In this instance, the team drops the 3 files associated with a graph saver, but also a pickle of the model class used for that training session.
import time
import pickle

import tensorflow as tf

# NOTE: Inputs, CONSTANTS and the module behind `vgg3` are project-local
# (see the linked CIFAR10 repo); the exact import name of vgg3 is assumed here.
import CONSTANTS
import Inputs
import vgg3

model = vgg3.Vgg3Model()

def create_sess_ops():
    '''
    Creates and returns operations needed for running
    a tensorflow training session
    '''
    GRAPH = tf.Graph()
    with GRAPH.as_default():
        examples, labels = Inputs.read_inputs(CONSTANTS.RecordPaths,
                                              batch_size=CONSTANTS.BATCH_SIZE,
                                              img_shape=CONSTANTS.IMAGE_SHAPE,
                                              num_threads=CONSTANTS.INPUT_PIPELINE_THREADS)
        examples = tf.reshape(examples, [-1, CONSTANTS.IMAGE_SHAPE[0],
                                         CONSTANTS.IMAGE_SHAPE[1], CONSTANTS.IMAGE_SHAPE[2]], name='infer/input')
        logits = model.inference(examples)
        loss = model.loss(logits, labels)
        OPTIMIZER = tf.train.AdamOptimizer(CONSTANTS.LEARNING_RATE)
        gradients = OPTIMIZER.compute_gradients(loss)
        apply_gradient_op = OPTIMIZER.apply_gradients(gradients)
        gradients_summary(gradients)  # project-local summary helper
        summaries_op = tf.summary.merge_all()
        return [apply_gradient_op, summaries_op, loss, logits], GRAPH

def main():
    '''
    Run and Train CIFAR 10
    '''
    print('starting...')
    ops, GRAPH = create_sess_ops()
    total_duration = 0.0
    with tf.Session(graph=GRAPH) as SESSION:
        COORDINATOR = tf.train.Coordinator()
        THREADS = tf.train.start_queue_runners(SESSION, COORDINATOR)
        SESSION.run(tf.global_variables_initializer())
        SUMMARY_WRITER = tf.summary.FileWriter('Tensorboard/' + CONSTANTS.MODEL_NAME, graph=GRAPH)
        GRAPH_SAVER = tf.train.Saver()
        for EPOCH in range(CONSTANTS.EPOCHS):
            duration = 0
            error = 0.0
            start_time = time.time()
            for batch in range(CONSTANTS.MINI_BATCHES):
                _, summaries, cost_val, prediction = SESSION.run(ops)
                error += cost_val
            duration += time.time() - start_time
            total_duration += duration
            SUMMARY_WRITER.add_summary(summaries, EPOCH)
            print('Epoch %d: loss = %.2f (%.3f sec)' % (EPOCH, error, duration))
            if EPOCH == CONSTANTS.EPOCHS - 1 or error < 0.005:
                print(
                    'Done training for %d epochs. (%.3f sec)' % (EPOCH, total_duration)
                )
                break
        GRAPH_SAVER.save(SESSION, 'models/' + CONSTANTS.MODEL_NAME + '.model')
        with open('models/' + CONSTANTS.MODEL_NAME + '.pkl', 'wb') as output:
            pickle.dump(model, output)
        COORDINATOR.request_stop()
        COORDINATOR.join(THREADS)
Step 3: Run some inference. Load your pickled model; create a new graph by piping the new placeholder into the logits; and then call session restore. DO NOT RESTORE THE WHOLE GRAPH; JUST THE VARIABLES. This works because tf.train.Saver matches variables by name: the unpickled class rebuilds the same variable scopes around the new placeholder, so the checkpoint weights drop straight into the new graph.
import pickle

import numpy as np
import tensorflow as tf
from scipy import misc

import CONSTANTS

MODEL_PATH = 'models/' + CONSTANTS.MODEL_NAME + '.model'
imgs_bsdir = 'C:/data/cifar_10/train/'

images = tf.placeholder(tf.float32, shape=(1, 32, 32, 3))

with open('models/vgg3.pkl', 'rb') as model_in:
    model = pickle.load(model_in)

logits = model.inference(images)

def run_inference():
    '''Runs inference against a loaded model'''
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        new_saver = tf.train.Saver()
        new_saver.restore(sess, MODEL_PATH)
        print("Starting...")
        for i in range(20, 30):
            print(str(i) + '.png')
            img = misc.imread(imgs_bsdir + str(i) + '.png').astype(np.float32) / 255.0
            img = img.reshape(1, 32, 32, 3)
            pred = sess.run(logits, feed_dict={images : img})
            max_node = np.argmax(pred)
            print('predicted label: ' + str(max_node))
        print('done')

run_inference()
There are definitely ways to improve on this using interfaces and maybe packaging everything up better; but this is working and sets the stage for how we will be moving forward.
FINAL NOTE: When we finally pushed this to production, we ended up having to ship the stupid `mymodel_model.py` file down with everything needed to build up the graph. So we now enforce a naming convention for all models, and there is also a coding standard for production model runs so we can do this properly.
Good Luck!
Answered by David Parks
While it's not as cut-and-dried as model.predict(), it's still really trivial.
In your model you should have a tensor that computes the final output you're interested in; let's name that tensor output. You may currently just have a loss function. If so, create another tensor (variable in the model) that actually computes the output you want.
For example, if your loss function is:
tf.nn.sigmoid_cross_entropy_with_logits(logits=last_layer_activation, labels=labels)
and you expect your outputs to be in the range [0, 1] per class, then create another variable:
output = tf.sigmoid(last_layer_activation)
Now, when you call sess.run(...), just request the output tensor. Don't request the optimization op you would normally run to train it. When you request this tensor, tensorflow will do the minimum work necessary to produce the value; e.g. it won't bother with backprop, the loss function, and all that, because a simple feed-forward pass is all that's necessary to compute output.
So if you're creating a service to return inferences of the model you'll want to keep the model loaded in memory/gpu, and repeat:
sess.run(output, feed_dict={X: input_data})
You won't need to feed it the labels because tensorflow won't bother to compute ops that aren't needed to produce the output you are requesting. You don't have to change your model or anything.
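To make the pattern concrete, here is a minimal self-contained sketch of the idea (the placeholder names, layer sizes, and random data are illustrative only, not taken from the question's model):

import numpy as np
import tensorflow as tf

X = tf.placeholder(tf.float32, shape=(None, 4))
labels = tf.placeholder(tf.float32, shape=(None, 2))
last_layer_activation = tf.layers.dense(X, 2)  # logits
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
    logits=last_layer_activation, labels=labels))
train_op = tf.train.AdamOptimizer(0.01).minimize(loss)
output = tf.sigmoid(last_layer_activation)     # the inference tensor

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # training would repeatedly run train_op with X and labels fed in;
    # inference requests only `output`, so no labels are needed:
    preds = sess.run(output, feed_dict={X: np.random.rand(3, 4)})
    print(preds)  # shape (3, 2), values in [0, 1]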
While this approach might not be as obvious as model.predict(...), I'd argue that it's vastly more flexible. If you start playing with more complex models, you'll probably learn to love this approach. model.predict() is like "thinking inside the box."