Python Pandas concat: ValueError: Shape of passed values is blah, indices imply blah2

Note: this page is a translation of a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/27719407/

Date: 2020-08-19 02:10:42  Source: igfitidea

Pandas concat: ValueError: Shape of passed values is blah, indices imply blah2

python pandas

Asked by birone

I'm trying to merge a (pandas 0.14.1) dataframe and a series. The series should form a new column, with some NAs (since the index values of the series are a subset of the index values of the dataframe).


This works for a toy example, but not with my data (detailed below).


Example:


import pandas as pd
import numpy as np

df1 = pd.DataFrame(np.random.randn(6, 4), columns=['A', 'B', 'C', 'D'], index=pd.date_range('1/1/2011', periods=6, freq='D'))
df1

A   B   C   D
2011-01-01  -0.487926   0.439190    0.194810    0.333896
2011-01-02  1.708024    0.237587    -0.958100   1.418285
2011-01-03  -1.228805   1.266068    -1.755050   -1.476395
2011-01-04  -0.554705   1.342504    0.245934    0.955521
2011-01-05  -0.351260   -0.798270   0.820535    -0.597322
2011-01-06  0.132924    0.501027    -1.139487   1.107873

s1 = pd.Series(np.random.randn(3), name='foo', index=pd.date_range('1/1/2011', periods=3, freq='2D'))
s1

2011-01-01   -1.660578
2011-01-03   -0.209688
2011-01-05    0.546146
Freq: 2D, Name: foo, dtype: float64

pd.concat([df1, s1],axis=1)

A   B   C   D   foo
2011-01-01  -0.487926   0.439190    0.194810    0.333896    -1.660578
2011-01-02  1.708024    0.237587    -0.958100   1.418285    NaN
2011-01-03  -1.228805   1.266068    -1.755050   -1.476395   -0.209688
2011-01-04  -0.554705   1.342504    0.245934    0.955521    NaN
2011-01-05  -0.351260   -0.798270   0.820535    -0.597322   0.546146
2011-01-06  0.132924    0.501027    -1.139487   1.107873    NaN

The situation with the data (see below) seems basically identical - concatting a series with a DatetimeIndex whose values are a subset of the dataframe's. But it gives the ValueError in the title (blah1 = (5, 286) blah2 = (5, 276) ). Why doesn't it work?:


In[187]: df.head()
Out[188]:
high    low loc_h   loc_l
time                
2014-01-01 17:00:00 1.376235    1.375945    1.376235    1.375945
2014-01-01 17:01:00 1.376005    1.375775    NaN NaN
2014-01-01 17:02:00 1.375795    1.375445    NaN 1.375445
2014-01-01 17:03:00 1.375625    1.375515    NaN NaN
2014-01-01 17:04:00 1.375585    1.375585    NaN NaN
In [186]: df.index
Out[186]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2014-01-01 17:00:00, ..., 2014-01-01 21:30:00]
Length: 271, Freq: None, Timezone: None

In [189]: hl.head()
Out[189]:
2014-01-01 17:00:00    1.376090
2014-01-01 17:02:00    1.375445
2014-01-01 17:05:00    1.376195
2014-01-01 17:10:00    1.375385
2014-01-01 17:12:00    1.376115
dtype: float64

In [187]:hl.index
Out[187]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2014-01-01 17:00:00, ..., 2014-01-01 21:30:00]
Length: 89, Freq: None, Timezone: None

In: pd.concat([df, hl], axis=1)
Out: [stack trace] ValueError: Shape of passed values is (5, 286), indices imply (5, 276)

Answered by birone

Aus_lacy's post gave me the idea of trying related methods, of which join does work:


In [196]: hl.name = 'hl'
Out[196]: 'hl'

In [199]: df.join(hl).head(4)
Out[199]:
high    low loc_h   loc_l   hl
2014-01-01 17:00:00 1.376235    1.375945    1.376235    1.375945    1.376090
2014-01-01 17:01:00 1.376005    1.375775    NaN NaN NaN
2014-01-01 17:02:00 1.375795    1.375445    NaN 1.375445    1.375445
2014-01-01 17:03:00 1.375625    1.375515    NaN NaN NaN

Some insight into why concat works on the example but not this data would be nice though!


Answered by lmart999

I had a similar problem (join worked, but concat failed).


Check for duplicate index values in df1 and s1 (e.g. df1.index.is_unique).


Removing duplicate index values (e.g., df.drop_duplicates(inplace=True)) or one of the methods here https://stackoverflow.com/a/34297689/7163376 should resolve it.

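As a rough sketch of that check (the timestamps and values below are invented for illustration, not the asker's data): a repeated label makes index.is_unique return False, and dropping the repeats with index.duplicated() leaves a unique index that concat can align on.

import pandas as pd

# a series whose index repeats one timestamp (made-up data)
idx = pd.to_datetime(['2014-01-01 17:00', '2014-01-01 17:02', '2014-01-01 17:02'])
hl = pd.Series([1.376090, 1.375445, 1.375450], index=idx, name='hl')

print(hl.index.is_unique)          # False -- an axis=1 concat may fail to align this

# keep the first occurrence of each duplicated timestamp before concatenating
hl_unique = hl[~hl.index.duplicated(keep='first')]
print(hl_unique.index.is_unique)   # True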

Answered by flow

In my case the problem was differing indices; the following code solved it.


df1.reset_index(drop=True, inplace=True)
df2.reset_index(drop=True, inplace=True)
df = pd.concat([df1, df2], axis=1)
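
A note on this approach: resetting both indexes makes concat line the rows up purely by position rather than by label, so it only gives the intended result when the two objects are already ordered row-for-row; it will not reproduce the label-aligned NaN pattern from the question.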

Answered by Károly Frendrich

Your indexes probably contain duplicated values.


import pandas as pd

T1_INDEX = [
    0,
    1,  # <= !!! if I write e.g. 0 here (a duplicate of the first index value), then it fails
    0.2,
]
T1_COLUMNS = [
    'A', 'B', 'C', 'D'
]
T1 = [
    [1.0, 1.1, 1.2, 1.3],
    [2.0, 2.1, 2.2, 2.3],
    [3.0, 3.1, 3.2, 3.3],
]

T2_INDEX = [
    1.2,
    2.11,
]

T2_COLUMNS = [
    'D', 'E', 'F',
]
T2 = [
    [54.0, 5324.1, 3234.2],
    [55.0, 14.5324, 2324.2],
    # [3.0, 3.1, 3.2],
]
df1 = pd.DataFrame(T1, columns=T1_COLUMNS, index=T1_INDEX)
df2 = pd.DataFrame(T2, columns=T2_COLUMNS, index=T2_INDEX)


print(pd.concat([pd.DataFrame({})] + [df2, df1], axis=1))

Answered by jibran abbasi

Try sorting the index after concatenating them:


result = pd.concat([df1, df2]).sort_index()

Answered by Jeremy Matt

To drop duplicate indices, use df = df.loc[df.index.drop_duplicates()]. C.f. pandas.pydata.org/pandas-docs/stable/generated/… – BallpointBen Apr 18 at 15:25


This is wrong, but I can't reply directly to BallpointBen's comment due to low reputation. The reason it's wrong is that df.index.drop_duplicates() returns the unique index values, but when you index back into the dataframe using those unique values it still returns all records. This is likely because selecting by one of the duplicated labels returns every row carrying that label.


Instead, use df.index.duplicated(), which returns a boolean array (negate it with ~ to keep the non-duplicated records):


df = df.loc[~df.index.duplicated()]

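A small self-contained illustration of that difference (the frame below is hypothetical): selecting by the deduplicated index values still brings back every row that carries a repeated label, while the boolean mask keeps exactly one row per label.

import pandas as pd

# hypothetical frame with a repeated index label
df = pd.DataFrame({'x': [1, 2, 3]}, index=['a', 'a', 'b'])

print(len(df.loc[df.index.drop_duplicates()]))   # 3 -- label selection returns both 'a' rows
print(len(df.loc[~df.index.duplicated()]))       # 2 -- one row kept per index label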

Answered by Mmonwu Enugu-Ezike

I tried join and append but neither of them worked. I used a 'try: ..., except: continue' around that section of my code and it worked perfectly.

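For completeness, a rough sketch of what such a guard might look like (the function name and structure are placeholders, not the answerer's code). Swallowing the ValueError keeps the rest of the script running, but the underlying index mismatch is left unfixed.

import pandas as pd

def concat_or_skip(df, s):
    # try the column-wise concat; fall back to the unchanged frame if it fails
    try:
        return pd.concat([df, s], axis=1)
    except ValueError:
        # e.g. "Shape of passed values is ..., indices imply ..." -- skip and carry on
        return df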