Looping over a tensor in Python

Note: the content below is taken from a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not this site): StackOverflow, original question: http://stackoverflow.com/questions/43327668/


Looping over a tensor

Tags: python, tensorflow

Asked by Mohamed Lakhal

I am trying to process a tensor of variable size, in a Pythonic way, something like:


# X is of shape [m, n]
for x in X:
    process(x)

I have tried to use tf.scan. The thing is that I want to process every sub-tensor, so I tried a nested scan, but I was unable to do it, because tf.scan works with an accumulator and, if no initializer is given, it takes the first entry of elems as the initializer, which I don't want. As an example, suppose I want to add one to every element of my tensor (this is just an example), and I want to process it element by element. If I run the code below, I will only have one added to a sub-tensor, because scan considers the first tensor as the initializer, along with the first element of every sub-tensor.


import numpy as np
import tensorflow as tf

batch_x = np.random.randint(0, 10, size=(5, 10))
x = tf.placeholder(tf.float32, shape=[None, 10])

def inner_loop(x_in):
    return tf.scan(lambda _, x_: x_ + 1, x_in)

outer_loop = tf.scan(lambda _, input_: inner_loop(input_), x, back_prop=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    rs = sess.run(outer_loop, feed_dict={x: batch_x})
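
As a side note on the initializer behaviour described above: if you stay with tf.scan, passing an explicit initializer keeps scan from consuming the first entry of elems as the accumulator. Below is a minimal sketch, assuming the same placeholder x as in the snippet above; the _with_init names are purely illustrative.

# With an explicit initializer, tf.scan passes every entry of elems to the
# callback instead of using the first entry as the starting accumulator.
def inner_loop_with_init(x_in):
    return tf.scan(lambda _, x_: x_ + 1, x_in,
                   initializer=tf.zeros_like(x_in[0]))

outer_loop_with_init = tf.scan(lambda _, row: inner_loop_with_init(row), x,
                               initializer=tf.zeros_like(x[0]))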

Any suggestions?


Accepted answer by Dmitriy Danevskiy

Most of TensorFlow's built-in functions can be applied elementwise, so you can simply pass the whole tensor into the function, like:


outer_loop = inner_loop(x)
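
For the add-one example in the question, this amounts to dropping the explicit loop entirely; standard TensorFlow ops already operate on every element. A minimal illustrative sketch, using the placeholder x defined in the question:

# Elementwise ops cover the whole tensor at once, no explicit loop required.
incremented = x + 1        # equivalent to tf.add(x, 1), applied to every element
squared = tf.square(x)     # another elementwise example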

However, if you have some function that cannot be applied this way (I'd be really curious to see that function), you could use tf.map_fn.


Say, your function simply adds 1 to every element of a tensor (or whatever):


import tensorflow as tf

# e.g. the same [None, 10] placeholder as in the question
inputs = tf.placeholder(tf.float32, shape=[None, 10])

def my_elementwise_func(x):
    return x + 1

def recursive_map(inputs):
    # Recurse with tf.map_fn until individual (rank-0) elements are reached.
    # The rank check uses the static shape (inputs.shape.ndims) rather than
    # tf.shape(inputs), since a Python `if` cannot branch on a tensor value.
    if inputs.shape.ndims > 0:
        return tf.map_fn(recursive_map, inputs)
    else:
        return my_elementwise_func(inputs)

result = recursive_map(inputs)
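
A quick usage sketch follows; it assumes the batch_x array defined in the question is still in scope to feed in.

# Usage sketch: `result` is an ordinary tensor and can be evaluated directly
# (batch_x is the (5, 10) NumPy array defined in the question).
with tf.Session() as sess:
    mapped = sess.run(result, feed_dict={inputs: batch_x})
    print(mapped.shape)  # (5, 10), each element incremented by 1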

Answer by Dzjkb

To loop over a tensor you could try tf.unstack


Unpacks the given dimension of a rank-R tensor into rank-(R-1) tensors.


So adding 1 to each tensor would look something like:


import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(None, 10))
# axis 0 by default; num must be given when that dimension is not statically known
x_unpacked = tf.unstack(x, num=5)  # returns a list of tensors

processed = []  # this will be the list of processed tensors
for t in x_unpacked:
    # do whatever
    result_tensor = t + 1
    processed.append(result_tensor)

output = tf.concat(processed, 0)  # concatenates into a flat (50,) tensor

with tf.Session() as sess:
    print(sess.run([output], feed_dict={x: np.zeros((5, 10))}))
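
One small variation (not part of the original answer): if you want the output to keep the (5, 10) shape instead of a flat (50,) concatenation, tf.stack repacks the processed list along a new leading axis.

# Variation: tf.stack restores the (5, 10) shape instead of a flat (50,) result.
output_stacked = tf.stack(processed)  # shape (5, 10)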

Obviously you can further unpack each tensor from the list to process it, down to single elements. To avoid lots of nested unpacking, though, you could first flatten x with tf.reshape(x, [-1]) and then loop over it like:


flattened_unpacked = tf.unstack(tf.reshape(x, [-1]))
for elem in flattened_unpacked:
    process(elem)

In this case elem is a scalar.
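
Putting the pieces together, here is a minimal self-contained sketch of this flatten-then-unstack pattern, with the add-one operation standing in for process and a fixed (5, 10) shape assumed so that tf.unstack can infer the number of elements.

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(5, 10))

# Flatten to a rank-1 tensor of 50 statically-known elements, unstack into
# scalars, apply the per-element operation, then rebuild the (5, 10) shape.
flattened_unpacked = tf.unstack(tf.reshape(x, [-1]))
processed_scalars = [elem + 1 for elem in flattened_unpacked]  # stand-in for process(elem)
restored = tf.reshape(tf.stack(processed_scalars), [5, 10])

with tf.Session() as sess:
    print(sess.run(restored, feed_dict={x: np.zeros((5, 10))}))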
