Python pandas: How do I split text in a column into multiple rows?

Warning: this content comes from a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me): http://stackoverflow.com/questions/17116814/

python, pandas, dataframe

Asked by Bradley

I'm working with a large csv file and the next to last column has a string of text that I want to split by a specific delimiter. I was wondering if there is a simple way to do this using pandas or python?

CustNum  CustomerName     ItemQty  Item   Seatblocks                 ItemExt
32363    McCartney, Paul      3     F04    2:218:10:4,6                   60
31316    Lennon, John        25     F01    1:13:36:1,12 1:13:37:1,13     300

I want to split by the space (' ') and then the colon (':') in the Seatblocks column, but each cell would result in a different number of columns. I have a function to rearrange the columns so the Seatblocks column is at the end of the sheet, but I'm not sure what to do from there. I can do it in Excel with the built-in text-to-columns function and a quick macro, but my dataset has too many records for Excel to handle.

Ultimately, I want to take records such as John Lennon's and create multiple lines, with the info from each set of seats on a separate line.

Accepted answer by Dan Allan

This splits the Seatblocks by space and gives each seat block its own row. (The session below assumes from pandas import Series has been run.)

In [43]: df
Out[43]: 
   CustNum     CustomerName  ItemQty Item                 Seatblocks  ItemExt
0    32363  McCartney, Paul        3  F04               2:218:10:4,6       60
1    31316     Lennon, John       25  F01  1:13:36:1,12 1:13:37:1,13      300

In [44]: s = df['Seatblocks'].str.split(' ').apply(Series, 1).stack()

In [45]: s.index = s.index.droplevel(-1) # to line up with df's index

In [46]: s.name = 'Seatblocks' # needs a name to join

In [47]: s
Out[47]: 
0    2:218:10:4,6
1    1:13:36:1,12
1    1:13:37:1,13
Name: Seatblocks, dtype: object

In [48]: del df['Seatblocks']

In [49]: df.join(s)
Out[49]: 
   CustNum     CustomerName  ItemQty Item  ItemExt    Seatblocks
0    32363  McCartney, Paul        3  F04       60  2:218:10:4,6
1    31316     Lennon, John       25  F01      300  1:13:36:1,12
1    31316     Lennon, John       25  F01      300  1:13:37:1,13

Or, to give each colon-separated string its own column:

In [50]: df.join(s.apply(lambda x: Series(x.split(':'))))
Out[50]: 
   CustNum     CustomerName  ItemQty Item  ItemExt  0    1   2     3
0    32363  McCartney, Paul        3  F04       60  2  218  10   4,6
1    31316     Lennon, John       25  F01      300  1   13  36  1,12
1    31316     Lennon, John       25  F01      300  1   13  37  1,13

This is a little ugly, but maybe someone will chime in with a prettier solution.

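On newer pandas (0.25 or later), DataFrame.explode can do the same split-into-rows step in one chain. A minimal sketch on the original df (explode did not exist when this answer was written, and this starts from df before the del above):

# split each cell into a list of seat blocks, then give every list element its own row
exploded = df.assign(Seatblocks=df['Seatblocks'].str.split(' ')).explode('Seatblocks')
print(exploded)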

Answer by Pietro Battiston

Unlike Dan, I consider his answer quite elegant... but unfortunately it is also very, very inefficient. So, since the question mentioned "a large csv file", let me suggest trying Dan's solution in a shell:

time python -c "import pandas as pd;
df = pd.DataFrame(['a b c']*100000, columns=['col']);
print df['col'].apply(lambda x : pd.Series(x.split(' '))).head()"

... compared to this alternative:

time python -c "import pandas as pd;
from scipy import array, concatenate;
df = pd.DataFrame(['a b c']*100000, columns=['col']);
print pd.DataFrame(concatenate(df['col'].apply( lambda x : [x.split(' ')]))).head()"

... and this:

time python -c "import pandas as pd;
df = pd.DataFrame(['a b c']*100000, columns=['col']);
print pd.DataFrame(dict(zip(range(3), [df['col'].apply(lambda x : x.split(' ')[i]) for i in range(3)]))).head()"

The second simply refrains from allocating 100,000 Series, and this is enough to make it around 10 times faster. But the third solution, which somewhat ironically wastes a lot of calls to str.split() (it is called once per column per row, so three times more than in the other two solutions), is around 40 times faster than the first, because it even avoids instantiating the 100,000 lists. And yes, it is certainly a little ugly...

EDIT: this answer suggests how to use "to_list()" and avoid the need for a lambda. The result is something like

time python -c "import pandas as pd;
df = pd.DataFrame(['a b c']*100000, columns=['col']);
print pd.DataFrame(df.col.str.split().tolist()).head()"

which is even more efficient than the third solution, and certainly much more elegant.

EDIT: the even simpler

time python -c "import pandas as pd;
df = pd.DataFrame(['a b c']*100000, columns=['col']);
print pd.DataFrame(list(df.col.str.split())).head()"

works too, and is almost as efficient.

EDIT: even simpler! And it handles NaNs (but is less efficient):

time python -c "import pandas as pd;
df = pd.DataFrame(['a b c']*100000, columns=['col']);
print df.col.str.split(expand=True).head()"

Answer by jezrael

import pandas as pd
import numpy as np

df = pd.DataFrame({'ItemQty': {0: 3, 1: 25}, 
                   'Seatblocks': {0: '2:218:10:4,6', 1: '1:13:36:1,12 1:13:37:1,13'}, 
                   'ItemExt': {0: 60, 1: 300}, 
                   'CustomerName': {0: 'McCartney, Paul', 1: 'Lennon, John'}, 
                   'CustNum': {0: 32363, 1: 31316}, 
                   'Item': {0: 'F04', 1: 'F01'}}, 
                    columns=['CustNum','CustomerName','ItemQty','Item','Seatblocks','ItemExt'])

print (df)
   CustNum     CustomerName  ItemQty Item                 Seatblocks  ItemExt
0    32363  McCartney, Paul        3  F04               2:218:10:4,6       60
1    31316     Lennon, John       25  F01  1:13:36:1,12 1:13:37:1,13      300

Another similar solution with chaining is to use reset_index and rename:

print (df.drop('Seatblocks', axis=1)
             .join
             (
             df.Seatblocks
             .str
             .split(expand=True)
             .stack()
             .reset_index(drop=True, level=1)
             .rename('Seatblocks')           
             ))

   CustNum     CustomerName  ItemQty Item  ItemExt    Seatblocks
0    32363  McCartney, Paul        3  F04       60  2:218:10:4,6
1    31316     Lennon, John       25  F01      300  1:13:36:1,12
1    31316     Lennon, John       25  F01      300  1:13:37:1,13


If the column does NOT contain NaN values, the fastest solution is to use a list comprehension with the DataFrame constructor:

df = pd.DataFrame(['a b c']*100000, columns=['col'])

In [141]: %timeit (pd.DataFrame(dict(zip(range(3), [df['col'].apply(lambda x : x.split(' ')[i]) for i in range(3)]))))
1 loop, best of 3: 211 ms per loop

In [142]: %timeit (pd.DataFrame(df.col.str.split().tolist()))
10 loops, best of 3: 87.8 ms per loop

In [143]: %timeit (pd.DataFrame(list(df.col.str.split())))
10 loops, best of 3: 86.1 ms per loop

In [144]: %timeit (df.col.str.split(expand=True))
10 loops, best of 3: 156 ms per loop

In [145]: %timeit (pd.DataFrame([ x.split() for x in df['col'].tolist()]))
10 loops, best of 3: 54.1 ms per loop

But if the column contains NaN, only str.split with the parameter expand=True works (it returns a DataFrame; see the documentation), and that explains why it is slower:

df = pd.DataFrame(['a b c']*10, columns=['col'])
df.loc[0] = np.nan
print (df.head())
     col
0    NaN
1  a b c
2  a b c
3  a b c
4  a b c

print (df.col.str.split(expand=True))
     0     1     2
0  NaN  None  None
1    a     b     c
2    a     b     c
3    a     b     c
4    a     b     c
5    a     b     c
6    a     b     c
7    a     b     c
8    a     b     c
9    a     b     c
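
If the NaN rows are simply not needed, one option is to drop them first and then fall back to the faster list-comprehension constructor. A minimal sketch (reusing the df with the NaN row from above; passing index= keeps the result aligned with the surviving rows):

valid = df['col'].dropna()
print (pd.DataFrame([x.split() for x in valid.tolist()], index=valid.index).head())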

Answer by Ben2018

You can also use groupby(), with no need for join and stack().

Using the example data above:

import pandas as pd
import numpy as np


df = pd.DataFrame({'ItemQty': {0: 3, 1: 25}, 
                   'Seatblocks': {0: '2:218:10:4,6', 1: '1:13:36:1,12 1:13:37:1,13'}, 
                   'ItemExt': {0: 60, 1: 300}, 
                   'CustomerName': {0: 'McCartney, Paul', 1: 'Lennon, John'}, 
                   'CustNum': {0: 32363, 1: 31316}, 
                   'Item': {0: 'F04', 1: 'F01'}}, 
                    columns=['CustNum','CustomerName','ItemQty','Item','Seatblocks','ItemExt']) 
print(df)

   CustNum     CustomerName  ItemQty Item                 Seatblocks  ItemExt
0  32363    McCartney, Paul  3        F04  2:218:10:4,6               60     
1  31316    Lennon, John     25       F01  1:13:36:1,12 1:13:37:1,13  300  


# first define a function: given a Series of strings, split each element into a new Series
def split_series(ser,sep):
    return pd.Series(ser.str.cat(sep=sep).split(sep=sep)) 
# test the function
split_series(pd.Series(['a b','c']),sep=' ')
0    a
1    b
2    c
dtype: object

df2=(df.groupby(df.columns.drop('Seatblocks').tolist()) #group by all but one column
          ['Seatblocks'] #select the column to be split
          .apply(split_series,sep=' ') # split 'Seatblocks' in each group
         .reset_index(drop=True,level=-1).reset_index()) #remove extra index created

print(df2)
   CustNum     CustomerName  ItemQty Item  ItemExt    Seatblocks
0    31316     Lennon, John       25  F01      300  1:13:36:1,12
1    31316     Lennon, John       25  F01      300  1:13:37:1,13
2    32363  McCartney, Paul        3  F04       60  2:218:10:4,6
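
Note that groupby sorts on the grouping keys by default, which is why the output order changed (McCartney, Paul is now last). If the original row order matters, a sketch of the same chain with sort=False added (everything else unchanged):

df2=(df.groupby(df.columns.drop('Seatblocks').tolist(), sort=False) # keep groups in order of appearance
         ['Seatblocks']
         .apply(split_series,sep=' ')
         .reset_index(drop=True,level=-1).reset_index())
print(df2)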