Pandas/Python: How to concatenate two dataframes without duplicates?

Disclaimer: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/21317384/


Pandas/Python: How to concatenate two dataframes without duplicates?

Tags: python, pandas

Asked by MJP

I'd like to concatenate two dataframes, A and B, into a new one without duplicate rows (if a row in B already exists in A, don't add it):

Dataframe A: Dataframe B:

   I    II    I    II
0  1    2     5    6
1  3    1     3    1

New Dataframe:

     I    II
  0  1    2
  1  3    1
  2  5    6

How can I do this?

Accepted answer by Ryan G

The simplest way is to just do the concatenation, and then drop duplicates.

>>> df1
   A  B
0  1  2
1  3  1
>>> df2
   A  B
0  5  6
1  3  1
>>> pandas.concat([df1,df2]).drop_duplicates().reset_index(drop=True)
   A  B
0  1  2
1  3  1
2  5  6

The reset_index(drop=True) is to fix up the index after the concat() and drop_duplicates(). Without it you will have an index of [0,1,0] instead of [0,1,2]. This could cause problems for further operations on this dataframe down the road if it isn't reset right away.

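For illustration, here is the same chain without the reset, reusing the df1/df2 above; the surviving row from df2 keeps its original label 0:

>>> pandas.concat([df1,df2]).drop_duplicates()
   A  B
0  1  2
1  3  1
0  5  6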

Answer by marwan

If DataFrame A already contains a duplicate row, then concatenating and then dropping duplicate rows will remove rows from DataFrame A that you might want to keep.

In this case, you will need to create a new column with a cumulative count and then drop duplicates. It all depends on your use case, but this is common in time-series data.

Here is an example:

import pandas as pd

df_1 = pd.DataFrame([
    {'date': '11/20/2015', 'id': 4, 'value': 24},
    {'date': '11/20/2015', 'id': 4, 'value': 24},   # intentional duplicate row
    {'date': '11/20/2015', 'id': 6, 'value': 34},
])

df_2 = pd.DataFrame([
    {'date': '11/20/2015', 'id': 4, 'value': 24},
    {'date': '11/20/2015', 'id': 6, 'value': 14},
])

# Number repeated (date, id, value) rows within each frame so that
# legitimate duplicates inside a single frame are not collapsed.
df_1['count'] = df_1.groupby(['date', 'id', 'value']).cumcount()
df_2['count'] = df_2.groupby(['date', 'id', 'value']).cumcount()

df_tot = pd.concat([df_1, df_2], ignore_index=False)
df_tot = df_tot.drop_duplicates()
df_tot = df_tot.drop(['count'], axis=1)
>>> df_tot

         date  id  value
0  11/20/2015   4     24
1  11/20/2015   4     24
2  11/20/2015   6     34
1  11/20/2015   6     14
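For comparison, here is what happens without the helper 'count' column (a minimal sketch reusing the df_1 and df_2 defined above): the second, intentional duplicate row in df_1 is collapsed away.

pd.concat([df_1, df_2]).drop(['count'], axis=1).drop_duplicates()
#          date  id  value
# 0  11/20/2015   4     24
# 2  11/20/2015   6     34
# 1  11/20/2015   6     14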

Answer by Daniel Hoop

I'm surprised that pandas doesn't offer a native solution for this task. I don't think that it's efficient to just drop the duplicates if you work with large datasets (as Ryan G suggested).

It is probably most efficient to use sets to find the non-overlapping indices, and then use a list comprehension to translate from index to 'row location' (boolean), which you need in order to access rows with iloc[,]. Below you find a function that performs the task. If you don't choose a specific column (col) to check for duplicates, then the indexes will be used, as you requested. If you choose a specific column, be aware that existing duplicate entries in 'a' will remain in the result.

import pandas as pd

def append_non_duplicates(a, b, col=None):
    if ((a is not None and type(a) is not pd.core.frame.DataFrame)
            or (b is not None and type(b) is not pd.core.frame.DataFrame)):
        raise ValueError('a and b must be of type pandas.core.frame.DataFrame.')
    if a is None:
        return b
    if b is None:
        return a
    if col is not None:
        # Compare on the values of the chosen column ...
        aind = a.iloc[:, col].values
        bind = b.iloc[:, col].values
    else:
        # ... or on the index labels if no column is given.
        aind = a.index.values
        bind = b.index.values
    # Keep only the rows of b whose key does not already occur in a.
    take_rows = list(set(bind) - set(aind))
    take_rows = [i in take_rows for i in bind]
    return a.append(b.iloc[take_rows, :])

# Usage
a = pd.DataFrame([[1,2,3],[1,5,6],[1,12,13]], index=[1000,2000,5000])
b = pd.DataFrame([[1,2,3],[4,5,6],[7,8,9]], index=[1000,2000,3000])

append_non_duplicates(a,b)
#        0   1   2
# 1000   1   2   3    <- from a
# 2000   1   5   6    <- from a
# 5000   1  12  13    <- from a
# 3000   7   8   9    <- from b

append_non_duplicates(a,b,0)
#       0   1   2
# 1000  1   2   3    <- from a
# 2000  1   5   6    <- from a
# 5000  1  12  13    <- from a
# 2000  4   5   6    <- from b
# 3000  7   8   9    <- from b
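Note: DataFrame.append was removed in pandas 2.0, so on recent pandas versions the last line of the function above will fail. A minimal equivalent using pd.concat (same behaviour, just a different call) would be:

    return pd.concat([a, b.iloc[take_rows, :]])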