pandas: DataFrame.mean() very slow. How can I calculate means of columns faster?
Original question: http://stackoverflow.com/questions/18701569/
Warning: this content is provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must attribute it to the original authors (not me): StackOverflow.
Asked by piokuc
I have a rather large CSV file: it contains 9917530 rows (without the header) and 54 columns. The columns are real or integer; only one contains dates. There are a few NULL values in the file, which are translated to nan after I load it into a pandas DataFrame, which I do like this:
import pandas as pd
data = pd.read_csv('data.csv')
Loading was very fast, I think, since it took around 30 seconds (pretty much the same time as counting lines with the Unix tool wc), and afterwards the process was taking around 4 GB of RAM (the size of the file on disk: 2.2 GB). So far so good.
Then I tried to do the following:
column_means = data.mean()
The process's memory usage grew to ~22 GB very quickly. I could also see that the processor (one core) was very, very busy for about three hours, after which I killed the process because I needed to use the machine for other things. I have a pretty fast PC running Linux: it has 2 processors, each with 4 cores, so 8 cores altogether, and 32 GB of RAM. I cannot believe calculating column means should take so long.
Can anybody explain why DataFrame.mean() is so slow? And more importantly, what is a better way of calculating the means of the columns of a file like that? Did I not load the file the best way possible? Should I use a different function instead of DataFrame.mean(), or perhaps a completely different tool?
Many thanks in advance.
EDIT. Here is what df.info() shows:
<class 'pandas.core.frame.DataFrame'>
Int64Index: 9917530 entries, 0 to 9917529
Data columns (total 54 columns):
srch_id                        9917530  non-null values
date_time                      9917530  non-null values
site_id                        9917530  non-null values
visitor_location_country_id    9917530  non-null values
visitor_hist_starrating        505297  non-null values
visitor_hist_adr_usd           507612  non-null values
prop_country_id                9917530  non-null values
prop_id                        9917530  non-null values
prop_starrating                9917530  non-null values
prop_review_score              9902900  non-null values
prop_brand_bool                9917530  non-null values
prop_location_score1           9917530  non-null values
prop_location_score2           7739150  non-null values
prop_log_historical_price      9917530  non-null values
position                       9917530  non-null values
price_usd                      9917530  non-null values
promotion_flag                 9917530  non-null values
srch_destination_id            9917530  non-null values
srch_length_of_stay            9917530  non-null values
srch_booking_window            9917530  non-null values
srch_adults_count              9917530  non-null values
srch_children_count            9917530  non-null values
srch_room_count                9917530  non-null values
srch_saturday_night_bool       9917530  non-null values
srch_query_affinity_score      635564  non-null values
orig_destination_distance      6701069  non-null values
random_bool                    9917530  non-null values
comp1_rate                     235806  non-null values
comp1_inv                      254433  non-null values
comp1_rate_percent_diff        184907  non-null values
comp2_rate                     4040633  non-null values
comp2_inv                      4251538  non-null values
comp2_rate_percent_diff        1109847  non-null values
comp3_rate                     3059273  non-null values
comp3_inv                      3292221  non-null values
comp3_rate_percent_diff        944007  non-null values
comp4_rate                     620099  non-null values
comp4_inv                      692471  non-null values
comp4_rate_percent_diff        264213  non-null values
comp5_rate                     4444294  non-null values
comp5_inv                      4720833  non-null values
comp5_rate_percent_diff        1681006  non-null values
comp6_rate                     482487  non-null values
comp6_inv                      524145  non-null values
comp6_rate_percent_diff        193312  non-null values
comp7_rate                     631077  non-null values
comp7_inv                      713175  non-null values
comp7_rate_percent_diff        277838  non-null values
comp8_rate                     3819043  non-null values
comp8_inv                      3960388  non-null values
comp8_rate_percent_diff        1225707  non-null values
click_bool                     9917530  non-null values
gross_bookings_usd             276592  non-null values
booking_bool                   9917530  non-null values
dtypes: float64(34), int64(19), object(1)
Answered by Jeff
Here's a similarly sized DataFrame, but without an object column:
In [9]: import numpy as np; import pandas as pd
   ...: from numpy.random import randn, randint; from pandas import DataFrame
In [10]: nrows = 10000000
In [11]: df = pd.concat([DataFrame(randn(nrows, 34), columns=['f%s' % i for i in range(34)]),
   ....:                 DataFrame(randint(0, 10, size=nrows * 19).reshape(nrows, 19),
   ....:                           columns=['i%s' % i for i in range(19)])], axis=1)
In [12]: df.iloc[1000:10000, 0:20] = np.nan
In [13]: df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 10000000 entries, 0 to 9999999
Data columns (total 53 columns):
f0     9991000  non-null values
f1     9991000  non-null values
f2     9991000  non-null values
f3     9991000  non-null values
f4     9991000  non-null values
f5     9991000  non-null values
f6     9991000  non-null values
f7     9991000  non-null values
f8     9991000  non-null values
f9     9991000  non-null values
f10    9991000  non-null values
f11    9991000  non-null values
f12    9991000  non-null values
f13    9991000  non-null values
f14    9991000  non-null values
f15    9991000  non-null values
f16    9991000  non-null values
f17    9991000  non-null values
f18    9991000  non-null values
f19    9991000  non-null values
f20    10000000  non-null values
f21    10000000  non-null values
f22    10000000  non-null values
f23    10000000  non-null values
f24    10000000  non-null values
f25    10000000  non-null values
f26    10000000  non-null values
f27    10000000  non-null values
f28    10000000  non-null values
f29    10000000  non-null values
f30    10000000  non-null values
f31    10000000  non-null values
f32    10000000  non-null values
f33    10000000  non-null values
i0     10000000  non-null values
i1     10000000  non-null values
i2     10000000  non-null values
i3     10000000  non-null values
i4     10000000  non-null values
i5     10000000  non-null values
i6     10000000  non-null values
i7     10000000  non-null values
i8     10000000  non-null values
i9     10000000  non-null values
i10    10000000  non-null values
i11    10000000  non-null values
i12    10000000  non-null values
i13    10000000  non-null values
i14    10000000  non-null values
i15    10000000  non-null values
i16    10000000  non-null values
i17    10000000  non-null values
i18    10000000  non-null values
dtypes: float64(34), int64(19)
Timings (similar machine specs to yours):
In [14]: %timeit df.mean()
1 loops, best of 3: 21.5 s per loop
You can get a 2x speedup by pre-converting to floats (mean() does this internally, but in a more general way, so it is slower):
In [15]: %timeit df.astype('float64').mean()
1 loops, best of 3: 9.45 s per loop
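If converting the whole frame to float at once is too memory-hungry, you can also take the mean one column at a time, so only a single column is ever being converted. A minimal sketch (my addition, not from the original answer; it assumes the df built above):

import pandas as pd

# one column at a time: a full float64 copy of the frame is never materialized
column_means = pd.Series({col: df[col].mean() for col in df.columns})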
Your problem is the object column. mean() will try to calculate over all of the columns, but because of the object column everything is upcast to object dtype, which is not efficient for calculating.
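To see that upcast in isolation, here is a tiny made-up example (mine, not from the original answer):

import pandas as pd

small = pd.DataFrame({'x': [1.0, 2.0, 3.0], 'y': ['a', 'b', 'c']})
print(small['x'].dtype)    # float64 -- each column keeps its own dtype
print(small.values.dtype)  # object  -- the combined 2-D array is upcast to object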
Your best bet is to do:
 df._get_numeric_data().mean()
There is an option to do this, numeric_only, at the lower level, but for some reason we don't directly support it via the top-level functions (e.g. mean). I think I will create an issue to add this parameter. However, it will probably be False by default (to not exclude columns).
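(For what it's worth, later pandas releases expose public ways to do the same thing; a sketch, assuming a reasonably recent pandas:)

# select_dtypes is the public counterpart of _get_numeric_data
column_means = data.select_dtypes(include='number').mean()

# DataFrame.mean also accepts a numeric_only flag in later versions
column_means = data.mean(numeric_only=True)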

