Python unhashable type: 'numpy.ndarray' error in TensorFlow

Disclaimer: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/43081403/

unhashable type: 'numpy.ndarray' error in tensorflow

python, numpy, tensorflow, linear-regression

Asked by madsthaks

import numpy as np
import pandas as pd
import tensorflow as tf

data = pd.read_excel("/Users/madhavthaker/Downloads/Reduced_Car_Data.xlsx")

train = np.random.rand(len(data)) < 0.8

data_train = data[train]
data_test = data[~train]


x_train = data_train.ix[:,0:3].values
y_train = data_train.ix[:,-1].values
x_test = data_test.ix[:,0:3].values
y_test = data_test.ix[:,-1].values

y_label = tf.placeholder(shape=[None,1], dtype=tf.float32, name='y_label')
x = tf.placeholder(shape=[None,3], dtype=tf.float32, name='x')
W = tf.Variable(tf.random_normal([3,1]), name='weights')
b = tf.Variable(tf.random_normal([1]), name='bias')
y = tf.matmul(x,W)  + b

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    summary_op = tf.summary.merge_all()
    #Fit all training data
    for epoch in range(1000):
        sess.run(train, feed_dict={x: x_train, y_label: y_train})

        # Display logs per epoch step
        if (epoch+1) % display_step == 0:
            c = sess.run(loss, feed_dict={x: x_train, y_label:y_train})
            print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(c), \
                "W=", sess.run(W), "b=", sess.run(b))

    print("Optimization Finished!")
    training_cost = sess.run(loss, feed_dict={x: x_train, y_label: y_train})
    print("Training cost=", training_cost, "W=", sess.run(W), "b=", sess.run(b), '\n')

Here is the error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-37-50102cbac823> in <module>()
      6     #Fit all training data
      7     for epoch in range(1000):
----> 8         sess.run(train, feed_dict={x: x_train, y_label: y_train})
      9 
     10         # Display logs per epoch step

TypeError: unhashable type: 'numpy.ndarray'

Here are the shapes of both of the numpy arrays that I am inputting:

y_train.shape = (78,)
x_train.shape = (78, 3)

I have no idea what is causing this. All of my shapes match up, so I shouldn't have any issues. Let me know if you need any more information.

Edit: From my comment on one of the answers below, it seems as though I had to specify a specific size for my placeholders; None was not satisfactory. When I changed that and re-ran my code, everything worked fine. Still not quite sure why that is.
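
For reference, a rough sketch of what that change might look like; this is only my reading of the edit, not the asker's actual fix. The 78 comes from the shapes printed above, and since y_train has shape (78,) it also needs a trailing axis to match a [..., 1] placeholder:

y_label = tf.placeholder(shape=[78, 1], dtype=tf.float32, name='y_label')  # explicit batch size instead of None
x = tf.placeholder(shape=[78, 3], dtype=tf.float32, name='x')

# reshape (78,) -> (78, 1) so the fed array matches the placeholder shape
feed = {x: x_train, y_label: y_train.reshape(-1, 1)}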

Answered by Andreas Forslöw

In my case, the problem was naming the input parameter the same as the placeholder variable. This, of course, replaces your tensorflow variable with the input variable; resulting in a different key for the feed_dict.

A tensorflow variable is hashable, but your input parameter (np.ndarray) isn't. The unhashable error is therefore a result of you trying to pass your parameter as the key instead of a tensorflow variable. Some code to visualize what I'm trying to say:

a = tf.placeholder(dtype=tf.float32, shape=[1,2,3])
b = tf.identity(a)

with tf.Session() as sess:
    your_var = np.ones((1,2,3))
    a = your_var                     # 'a' now refers to the numpy array, not the placeholder
    sess.run(b, feed_dict={a: a})    # TypeError: unhashable type: 'numpy.ndarray'
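
For comparison, the snippet runs without the error if the placeholder name is never rebound; a minimal sketch under the same assumptions as above:

import numpy as np
import tensorflow as tf

a = tf.placeholder(dtype=tf.float32, shape=[1, 2, 3])
b = tf.identity(a)

with tf.Session() as sess:
    your_var = np.ones((1, 2, 3))
    # keep 'a' pointing at the placeholder and use it as the feed_dict key
    print(sess.run(b, feed_dict={a: your_var}))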

Hope this helps anyone stumbling upon this problem in the future!

Answered by zero

Please carefully check the datatype of the "x_train/y_train" arrays you feed against the "x/y_label" tensors you defined with tf.placeholder(...).

I ran into the same problem. In my case the reason was that x_train in my code was np.float64, but what I defined with tf.placeholder() was tf.float32; the data types float64 and float32 were mismatched.
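
A minimal sketch of that kind of check and cast, assuming the x_train/y_train arrays from the question (whether the dtype alone explains the unhashable error is this answer's claim, not something verified here):

print(x_train.dtype, y_train.dtype)    # pandas .values frequently gives float64
x_train = x_train.astype(np.float32)   # match the tf.float32 placeholders
y_train = y_train.astype(np.float32)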

Answered by hpaulj

I think the problem is in how the dictionary is defined. A dictionary key has to be a hashable type; a number, a string, or a tuple are common. A list or an array doesn't work:

In [256]: {'x':np.array([1,2,3])}
Out[256]: {'x': array([1, 2, 3])}
In [257]: x=np.array([1,2,3])
In [258]: {x:np.array([1,2,3])}
...
TypeError: unhashable type: 'numpy.ndarray'
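
(For completeness, and not part of the original answer: a hashable key built from the same values does work.)

{tuple(x): 'ok'}   # tuple(x) == (1, 2, 3) is hashable, so this dictionary is fine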

I don't know enough about tensorflow to know what these are:

y_label = tf.placeholder(shape=[None,1], dtype=tf.float32, name='y_label')
x = tf.placeholder(shape=[None,3], dtype=tf.float32, name='x')

The error indicates that they are numpy arrays, not strings. Does x have a name attribute?

Or maybe the dictionary should be specified as:

{'x': x_train, 'y_label': y_train}
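
In TF1 the pattern that is known to work is to key the feed_dict by the placeholder tensors themselves rather than by plain strings; a sketch reusing the names from the question, where loss is whatever loss op the asker defined elsewhere:

feed = {x: x_train, y_label: y_train}  # keys are the placeholder tensors, not numpy arrays or bare names
c = sess.run(loss, feed_dict=feed)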

Answered by ChaosPredictor

Strange, I had this issue too. After I closed the Python shell and ran the code from a file, I couldn't reproduce it even in the shell (it just worked without an error).