Drop duplicates in Python Pandas DataFrame not removing duplicates
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use or share it, but you must do so under the same license, cite the original source, and attribute it to the original authors (not me): StackOverFlow
Original source: http://stackoverflow.com/questions/16331581/
drop duplicates in Python Pandas DataFrame not removing duplicates
Asked by Oniropolo
I have a problem with removing duplicates. My program is based around a loop which generates (x, y) tuples, which are then used as nodes in a graph. The final array/matrix of nodes is:
[[ 1. 1. ]
[ 1.12273268 1.15322175]
[..........etc..........]
[ 0.94120695 0.77802849]
**[ 0.84301344 0.91660517]**
[ 0.93096269 1.21383287]
**[ 0.84301344 0.91660517]**
[ 0.75506418 1.0798641 ]]
The length of the array is 22. Now, I need to remove the duplicate entries (see **). So I used:
import pandas

def urows(array):
    df = pandas.DataFrame(array)
    # drop_duplicates() returns a new DataFrame, so this bare call has no effect on df
    df.drop_duplicates(take_last=True)
    # take_last=True keeps the last occurrence of each duplicate (spelled keep='last' in modern pandas)
    return df.drop_duplicates(take_last=True).values
Fantastic, but I still get:
0 1
0 1.000000 1.000000
....... etc...........
17 1.039400 1.030320
18 0.941207 0.778028
**19 0.843013 0.916605**
20 0.930963 1.213833
**21 0.843013 0.916605**
So drop_duplicates is not removing anything. I tested to see whether the nodes were actually the same, and I get:
print urows(total_nodes)[19,:]
---> [ 0.84301344 0.91660517]
print urows(total_nodes)[21,:]
---> [ 0.84301344 0.91660517]
print urows(total_nodes)[12,:] - urows(total_nodes)[13,:]
---> [ 0. 0.]
Why is it not working? How can I remove those duplicate values?
One more question....
Say two values are "nearly" equal (say x1 and x2), is there any way to replace them so that they end up exactly equal? What I want is to replace x2 with x1 if they are "nearly" equal.
Answered by Dougal
If I copy-paste in your data, I get:
>>> df
0 1
0 1.000000 1.000000
1 1.122733 1.153222
2 0.941207 0.778028
3 0.843013 0.916605
4 0.930963 1.213833
5 0.843013 0.916605
6 0.755064 1.079864
>>> df.drop_duplicates()
0 1
0 1.000000 1.000000
1 1.122733 1.153222
2 0.941207 0.778028
3 0.843013 0.916605
4 0.930963 1.213833
6 0.755064 1.079864
so it is actually removed, and your problem is that the arrays aren't exactly equal (though their difference rounds to 0 for display).
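A quick way to see the hidden difference (an added sketch, not part of the original answer; it assumes total_nodes is the 22x2 array from the question, whose rows 19 and 21 are the near-duplicates):

import numpy as np

np.set_printoptions(precision=17)          # stop the display from rounding tiny differences away
print(total_nodes[19] - total_nodes[21])   # should show a tiny nonzero difference if this diagnosis is right

# Tolerance-based comparison (defaults: rtol=1e-05, atol=1e-08)
print(np.allclose(total_nodes[19], total_nodes[21]))     # True  -> "nearly" equal
print(np.array_equal(total_nodes[19], total_nodes[21]))  # False -> not exactly equal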
One workaround would be to round the data to however many decimal places are applicable with something like df.apply(np.round, args=[4]), then drop the duplicates. If you want to keep the original data but remove rows that are duplicate up to rounding, you can use something like
df = df.ix[~df.apply(np.round, args=[4]).duplicated()]
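On current pandas versions .ix has been removed; a roughly equivalent sketch of the same idea (added here, not part of the original answer) uses DataFrame.round plus boolean indexing with .loc:

# Keep the original, unrounded values, but drop rows that become duplicates
# once rounded to 4 decimal places
df = df.loc[~df.round(4).duplicated()]
# (use .duplicated(keep='last') to mirror the question's take_last=True behaviour)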
Here's one really clumsy way to do what you're asking for with setting nearly-equal values to be actually equal:
grouped = df.groupby([df[i].round(4) for i in df.columns])
subbed = grouped.apply(lambda g: g.apply(lambda row: g.irow(0), axis=1))
subbed.reset_index(level=list(df.columns), drop=True, inplace=True)
This reorders the dataframe, but you can then call .sort() to get them back in the original order if you need that.
Explanation: the first line uses groupby to group the data frame by the rounded values. Unfortunately, if you give a function to groupby it applies it to the labels rather than the rows (so you could maybe do df.groupby(lambda k: np.round(df.ix[k], 4)), but that sucks too).
The second line uses the apply method on the groupby to replace the dataframe of near-duplicate rows, g, with a new dataframe g.apply(lambda row: g.irow(0), axis=1). That uses the apply method on dataframes to replace each row with the first row of the group.
The result then looks like:
                        0         1
0      1
0.7551 1.0799 6  0.755064  1.079864
0.8430 0.9166 3  0.843013  0.916605
              5  0.843013  0.916605
0.9310 1.2138 4  0.930963  1.213833
0.9412 0.7780 2  0.941207  0.778028
1.0000 1.0000 0  1.000000  1.000000
1.1227 1.1532 1  1.122733  1.153222
where groupby has inserted the rounded values as an index. The reset_index line then drops those columns.
Hopefully someone who knows pandas better than I do will drop by and show how to do this better.
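For readers on newer pandas: irow and .ix used above have since been removed (row access is now .iloc[0]), and the whole "replace each row with the first row of its rounded group" step can be written more compactly with groupby(...).transform('first'). A sketch of that idea, added here rather than taken from the original answers:

import pandas as pd

def snap_near_duplicates(df: pd.DataFrame, decimals: int = 4) -> pd.DataFrame:
    """Replace each row with the first row of the group it falls into after rounding."""
    keys = [df[c].round(decimals) for c in df.columns]
    # transform('first') broadcasts each group's first row back onto the original
    # index, so the row order is preserved and no re-sorting is needed
    return df.groupby(keys).transform('first')

# e.g. snapped = snap_near_duplicates(df); unique_rows = snapped.drop_duplicates()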
Answered by Jeff
Similar to @Dougal's answer, but in a slightly different way:
In [20]: df.ix[~(df*1e6).astype('int64').duplicated(cols=[0])]
Out[20]:
0 1
0 1.000000 1.000000
1 1.122733 1.153222
2 0.941207 0.778028
3 0.843013 0.916605
4 0.930963 1.213833
6 0.755064 1.079864
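As above, .ix and the cols= keyword are gone from modern pandas (cols= later became subset=). An added sketch of the same scale-and-truncate idea, here checking both columns rather than only column 0 as in the original:

# Scale by 1e6, truncate to integers, and drop rows whose truncated values
# have already been seen (duplicated() with no subset checks all columns)
df.loc[~(df * 1e6).astype('int64').duplicated()]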

