Counting the number of non-NaN elements in a numpy ndarray in Python
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same CC BY-SA license, cite the original URL, and attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/21778118/
Asked by jjepsuomi
I need to calculate the number of non-NaN elements in a numpy ndarray matrix. How would one efficiently do this in Python? Here is my simple code for achieving this:
import numpy as np

def numberOfNonNans(data):
    count = 0
    # data.flat iterates element-wise, so this also works for multi-dimensional arrays
    for i in data.flat:
        if not np.isnan(i):
            count += 1
    return count
Is there a built-in function for this in numpy? Efficiency is important because I'm doing Big Data analysis.
Thanks for any help!
Accepted answer by M4rtini
np.count_nonzero(~np.isnan(data))
~ inverts the boolean matrix returned from np.isnan.
np.count_nonzero counts values that are not 0/False. .sum should give the same result, but it may be clearer to use count_nonzero.
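For illustration, here is a minimal sketch of the accepted approach on a small example array (the example values are mine, not part of the original answer):

import numpy as np

example = np.array([[1.0, np.nan, 3.0],
                    [np.nan, 5.0, 6.0]])

mask = ~np.isnan(example)        # True where the value is not NaN
print(np.count_nonzero(mask))    # 4
print(mask.sum())                # 4, summing booleans gives the same count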
Testing speed:
In [23]: data = np.random.random((10000,10000))
In [24]: data[[np.random.random_integers(0,10000, 100)],:][:, [np.random.random_integers(0,99, 100)]] = np.nan
In [25]: %timeit data.size - np.count_nonzero(np.isnan(data))
1 loops, best of 3: 309 ms per loop
In [26]: %timeit np.count_nonzero(~np.isnan(data))
1 loops, best of 3: 345 ms per loop
In [27]: %timeit data.size - np.isnan(data).sum()
1 loops, best of 3: 339 ms per loop
data.size - np.count_nonzero(np.isnan(data)) seems to be marginally the fastest here. Other data might give different relative speed results.
Answered by Manuel
An alternative, though a bit slower, is to do it via boolean indexing.
np.isnan(data)[np.isnan(data) == False].size
In [30]: %timeit np.isnan(data)[np.isnan(data) == False].size
1 loops, best of 3: 498 ms per loop 
The double use of np.isnan(data) and the == operator might be a bit of an overkill, so I posted the answer only for completeness.
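The same indexing idea can be written with a single isnan call by indexing data directly; this is my own sketch, not part of the original answer:

import numpy as np

# Sketch: count non-NaN values with one isnan call (assumes `data` is the ndarray being counted)
mask = np.isnan(data)
non_nan_count = data[~mask].size   # boolean indexing keeps only the non-NaN elements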
Answered by G M
Quick-to-write alternative
Even though it is not the fastest choice, if performance is not an issue you can use:
sum(~np.isnan(data)).
Performance:
In [7]: %timeit data.size - np.count_nonzero(np.isnan(data))
10 loops, best of 3: 67.5 ms per loop
In [8]: %timeit sum(~np.isnan(data))
10 loops, best of 3: 154 ms per loop
In [9]: %timeit np.sum(~np.isnan(data))
10 loops, best of 3: 140 ms per loop
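One caveat, added here as my own note rather than part of the original answer: on a 2-D array the built-in sum only reduces along the first axis and returns per-column counts, so np.sum (or .sum()) is needed to get a single scalar:

import numpy as np

example = np.array([[1.0, np.nan],
                    [3.0, 4.0]])

print(sum(~np.isnan(example)))     # [2 1]  per-column counts, not a scalar
print(np.sum(~np.isnan(example)))  # 3      total number of non-NaN elements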
Answered by Darren Weber
To determine whether the array is sparse, it may help to get the proportion of NaN values:
np.isnan(ndarr).sum() / ndarr.size
If that proportion exceeds a threshold, then use a sparse array, e.g. https://sparse.pydata.org/en/latest/
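A minimal sketch of that threshold check; the helper name and the cut-off value are my own choices, not from the original answer:

import numpy as np

def should_use_sparse(ndarr, threshold=0.9):
    # Fraction of entries that are NaN; above the cut-off, a sparse container may pay off
    nan_fraction = np.isnan(ndarr).sum() / ndarr.size
    return nan_fraction > threshold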

