Python Tensorflow:如何按名称获取张量?
声明:本页面是StackOverFlow热门问题的中英对照翻译,遵循CC BY-SA 4.0协议,如果您需要使用它,必须同样遵循CC BY-SA许可,注明原文地址和作者信息,同时你必须将它归于原作者(不是我):StackOverFlow
原文地址: http://stackoverflow.com/questions/36612512/
Warning: these are provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me):
StackOverFlow
Tensorflow: How to get a tensor by name?
提问by protas
I'm having trouble recovering a tensor by name, I don't even know if it's possible.
我无法按名称恢复张量,我甚至不知道这是否可行。
I have a function that creates my graph:
我有一个创建图形的函数:
def create_structure(tf, x, input_size, dropout):
    with tf.variable_scope("scale_1") as scope:
        W_S1_conv1 = deep_dive.weight_variable_scaling([7,7,3,64], name='W_S1_conv1')
        b_S1_conv1 = deep_dive.bias_variable([64])
        S1_conv1 = tf.nn.relu(deep_dive.conv2d(x_image, W_S1_conv1, strides=[1, 2, 2, 1], padding='SAME') + b_S1_conv1, name="Scale1_first_relu")
        .
        .
        .
    return S3_conv1, regularizer
I want to access the variable S1_conv1 outside this function. I tried:
我想在这个函数之外访问变量 S1_conv1。我试过:
with tf.variable_scope('scale_1') as scope_conv:
    tf.get_variable_scope().reuse_variables()
    ft = tf.get_variable('Scale1_first_relu')
But that is giving me an error:
但这给了我一个错误:
ValueError: Under-sharing: Variable scale_1/Scale1_first_relu does not exist, disallowed. Did you mean to set reuse=None in VarScope?
ValueError: Under-sharing: 变量 scale_1/Scale1_first_relu 不存在,不允许。您是想在 VarScope 中设置 reuse=None 吗?
But this works:
但这有效:
with tf.variable_scope('scale_1') as scope_conv:
    tf.get_variable_scope().reuse_variables()
    ft = tf.get_variable('W_S1_conv1')
I can get around this with
我可以用下面的方式绕过这个问题
return S3_conv1,regularizer, S1_conv1
but I don't want to do that.
但我不想那样做。
I think my problem is that S1_conv1 is not really a variable, it's just a tensor. Is there a way to do what I want?
我认为我的问题是 S1_conv1 并不是一个真正的变量,它只是一个张量。有没有办法做我想做的事?
回答by apfalz
There is a function tf.Graph.get_tensor_by_name(). For instance:
有一个函数 tf.Graph.get_tensor_by_name()。例如:
import tensorflow as tf

c = tf.constant([[1.0, 2.0], [3.0, 4.0]])
d = tf.constant([[1.0, 1.0], [0.0, 1.0]])
e = tf.matmul(c, d, name='example')

with tf.Session() as sess:
    test = sess.run(e)
    print(e.name)  # example:0
    test = tf.get_default_graph().get_tensor_by_name("example:0")
    print(test)  # Tensor("example:0", shape=(2, 2), dtype=float32)
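Applied to the question above, a minimal sketch (assuming TF 1.x and the names used in create_structure, i.e. a ReLU op named Scale1_first_relu inside the scale_1 scope; the placeholder shape is an assumption) could look like this:
套用到上面的问题,一个最小示例(假设使用 TF 1.x,且名称沿用 create_structure 中的写法,即 scale_1 作用域下名为 Scale1_first_relu 的 ReLU 操作;placeholder 的形状是假设的)大致如下:
import tensorflow as tf

# Minimal reconstruction of the question's setup; shapes and names are assumptions.
x_image = tf.placeholder(tf.float32, [None, 224, 224, 3], name="x_image")
with tf.variable_scope("scale_1"):
    W_S1_conv1 = tf.get_variable("W_S1_conv1", [7, 7, 3, 64])
    S1_conv1 = tf.nn.relu(
        tf.nn.conv2d(x_image, W_S1_conv1, strides=[1, 2, 2, 1], padding="SAME"),
        name="Scale1_first_relu")

# Recover the same tensor by name, without returning it from the function:
ft = tf.get_default_graph().get_tensor_by_name("scale_1/Scale1_first_relu:0")
print(ft is S1_conv1)  # True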
回答by Yaroslav Bulatov
All tensors have string names which you can see as follows
所有张量都有字符串名称,您可以按如下方式查看
[tensor.name for tensor in tf.get_default_graph().as_graph_def().node]
Once you know the name you can fetch the Tensor using <name>:0
(0 refers to endpoint which is somewhat redundant)
一旦知道名称,就可以用 <name>:0 来获取该张量
(0 指的是端点,这有点多余)
For instance if you do this
例如,如果你这样做
tf.constant(1)+tf.constant(2)
You have the following Tensor names
您有以下张量名称
[u'Const', u'Const_1', u'add']
So you can fetch output of addition as
所以你可以获取加法的输出作为
sess.run('add:0')
Note, this is not part of the public API. Automatically generated string tensor names are an implementation detail and may change.
请注意,这不是公共 API 的一部分。自动生成的字符串张量名称是实现细节,可能会更改。
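If you want names you can rely on, one option (a sketch, not from the original answer) is to pass name= explicitly, or to create a stable alias with tf.identity:
如果希望名称稳定可靠,一种做法(示意写法,并非原回答内容)是显式传入 name=,或用 tf.identity 创建一个固定的别名:
import tensorflow as tf

a = tf.constant(1, name="a")
b = tf.constant(2, name="b")
total = tf.add(a, b, name="total")          # tensor name: "total:0"
alias = tf.identity(total, name="my_sum")   # stable alias: "my_sum:0"

with tf.Session() as sess:
    print(sess.run("total:0"))   # 3
    print(sess.run("my_sum:0"))  # 3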
回答by Kislay Kunal
All you gotta do in this case is:
在这种情况下你要做的就是:
ft=tf.get_variable('scale1/Scale1_first_relu:0')
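Note (not part of the original answer): tf.get_variable expects a variable name without the ':0' suffix, so for an op output such as Scale1_first_relu the graph lookup from the first answer is usually the more direct route, e.g.:
注(非原回答内容):tf.get_variable 期望的是不带 ':0' 后缀的变量名,因此对于像 Scale1_first_relu 这样的 op 输出,通常用第一个回答中的按名称查找更直接,例如:
ft = tf.get_default_graph().get_tensor_by_name("scale_1/Scale1_first_relu:0")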