Python: how to find median and quantiles using Spark
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license and attribute it to the original authors (not me): StackOverflow
Original question: http://stackoverflow.com/questions/31432843/
How to find median and quantiles using Spark
Asked by pr338
How can I find the median of an RDD of integers using a distributed method, IPython, and Spark? The RDD is approximately 700,000 elements and therefore too large to collect and find the median.
This question is similar to this one; however, the answer there uses Scala, which I do not know:
How can I calculate exact median with Apache Spark?
Using the reasoning from the Scala answer, I am trying to write a similar answer in Python.
I know I first want to sort the RDD, but I do not know how. I see the sortBy (sorts this RDD by the given keyfunc) and sortByKey (sorts this RDD, which is assumed to consist of (key, value) pairs) methods. I think both use key values, and my RDD only has integer elements.
- First, I was thinking of doing myrdd.sortBy(lambda x: x)?
- Next I will find the length of the rdd (rdd.count()).
- Finally, I want to find the element or 2 elements at the center of the rdd. I need help with this method too (a sketch of the whole plan follows below).
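A minimal sketch of this three-step plan (assuming an integer RDD named myrdd; the middle-element lookup via zipWithIndex is just one possible way to finish the last step):

sorted_rdd = myrdd.sortBy(lambda x: x).zipWithIndex().map(lambda xi: (xi[1], xi[0])).cache()  # (index, value)
n = sorted_rdd.count()
if n % 2 == 1:
    median = sorted_rdd.lookup(n // 2)[0]      # odd count: the single middle element
else:
    middle = sorted_rdd.lookup(n // 2 - 1)[0], sorted_rdd.lookup(n // 2)[0]
    median = sum(middle) / 2.0                 # even count: average of the two middle elements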
EDIT:
I had an idea. Maybe I can index my RDD and then key = index and value = element. And then I can try to sort by value? I don't know if this is possible because there is only a sortByKey method.
Accepted answer by zero323
Ongoing work
SPARK-30569 - Add DSL functions invoking percentile_approx
Spark 2.0+:
You can use the approxQuantile method, which implements the Greenwald-Khanna algorithm:
Python:
df.approxQuantile("x", [0.5], 0.25)
Scala:
df.stat.approxQuantile("x", Array(0.5), 0.25)
where the last parameter is the relative error. The lower the number, the more accurate the results and the more expensive the computation.
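For intuition, the same call with different relative errors trades accuracy against cost; setting the relative error to 0 requests the exact quantile at full computational cost. A quick sketch, reusing the DataFrame df and column x from above:

# coarse but cheap estimate
df.approxQuantile("x", [0.5], 0.25)
# tighter estimate, more expensive
df.approxQuantile("x", [0.5], 0.01)
# relative error 0 computes the exact quantile (can be very expensive)
df.approxQuantile("x", [0.5], 0.0)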
Since Spark 2.2 (SPARK-14352) it supports estimation on multiple columns:
df.approxQuantile(["x", "y", "z"], [0.5], 0.25)
and
df.approxQuantile(Array("x", "y", "z"), Array(0.5), 0.25)
The underlying methods can also be used in SQL aggregation (both global and grouped) using the approx_percentile function:
> SELECT approx_percentile(10.0, array(0.5, 0.4, 0.1), 100);
[10.0,10.0,10.0]
> SELECT approx_percentile(10.0, 0.5, 100);
10.0
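The same function can also be called from PySpark for a grouped aggregation via expr. A minimal sketch, assuming a DataFrame df with a grouping column g and a numeric column x (illustrative names):

from pyspark.sql import functions as F

(df
    .groupBy("g")
    .agg(F.expr("approx_percentile(x, 0.5, 100)").alias("median_x"))
    .show())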
Spark < 2.0
Python
As I've mentioned in the comments, it is most likely not worth all the fuss. If the data is relatively small, as in your case, then simply collect it and compute the median locally:
import numpy as np

np.random.seed(323)
rdd = sc.parallelize(np.random.randint(1000000, size=700000))

# collect to the driver and compute the median locally
%time np.median(rdd.collect())
# size of the collected data in bytes
np.array(rdd.collect()).nbytes
It takes around 0.01 second on my few-years-old computer and around 5.5 MB of memory.
If the data is much larger, sorting will be a limiting factor, so instead of getting an exact value it is probably better to sample, collect, and compute locally. But if you really want to use Spark, something like this should do the trick (if I didn't mess up anything):
import numpy as np
from numpy import floor
import time

def quantile(rdd, p, sample=None, seed=None):
    """Compute a quantile of order p ∈ [0, 1]
    :rdd a numeric rdd
    :p quantile (between 0 and 1)
    :sample fraction of an rdd to use. If not provided we use a whole dataset
    :seed random number generator seed to be used with sample
    """
    assert 0 <= p <= 1
    assert sample is None or 0 < sample <= 1

    seed = seed if seed is not None else time.time()
    rdd = rdd if sample is None else rdd.sample(False, sample, seed)

    rddSortedWithIndex = (rdd.
        sortBy(lambda x: x).
        zipWithIndex().
        map(lambda xi: (xi[1], xi[0])).  # (index, value) pairs so lookup by position works
        cache())

    n = rddSortedWithIndex.count()
    h = (n - 1) * p

    rddX, rddXPlusOne = (
        rddSortedWithIndex.lookup(x)[0]
        for x in int(floor(h)) + np.array([0, 1]))

    return rddX + (h - floor(h)) * (rddXPlusOne - rddX)
And some tests:
np.median(rdd.collect()), quantile(rdd, 0.5)
## (500184.5, 500184.5)
np.percentile(rdd.collect(), 25), quantile(rdd, 0.25)
## (250506.75, 250506.75)
np.percentile(rdd.collect(), 75), quantile(rdd, 0.75)
## (750069.25, 750069.25)
Finally, let's define the median:
from functools import partial
median = partial(quantile, p=0.5)
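For example, reusing the rdd defined above (the sampled call only illustrates the sample argument and returns an estimate):

median(rdd)              # same as quantile(rdd, 0.5) above
median(rdd, sample=0.1)  # approximate median computed from a 10% sample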
So far so good, but it takes 4.66 s in local mode without any network communication. There is probably a way to improve this, but why even bother?
Language independent (Hive UDAF):
If you use HiveContext you can also use Hive UDAFs. With integral values:
rdd.map(lambda x: (float(x), )).toDF(["x"]).registerTempTable("df")
sqlContext.sql("SELECT percentile_approx(x, 0.5) FROM df")
With continuous values:
sqlContext.sql("SELECT percentile(x, 0.5) FROM df")
In percentile_approx you can pass an additional argument which determines the number of records to use.
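For example (10000 here is just an illustrative value for that extra argument):

sqlContext.sql("SELECT percentile_approx(x, 0.5, 10000) FROM df")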
Answered by Vedant
Adding a solution if you want an RDD method only and don't want to move to a DataFrame. This snippet can get you a percentile for an RDD of doubles.
If you input percentile as 50, you should obtain your required median. Let me know if there are any corner cases not accounted for.
/**
  * Gets the nth percentile entry for an RDD of doubles
  *
  * @param inputScore : Input scores consisting of a RDD of doubles
  * @param percentile : The percentile cutoff required (between 0 to 100), e.g. 90%ile of [1,4,5,9,19,23,44] = ~23.
  *                     It prefers the higher value when the desired quantile lies between two data points
  * @return : The number best representing the percentile in the RDD of doubles
  */
def getRddPercentile(inputScore: RDD[Double], percentile: Double): Double = {
  val numEntries = inputScore.count().toDouble
  val retrievedEntry = (percentile * numEntries / 100.0 ).min(numEntries).max(0).toInt

  inputScore
    .sortBy { case (score) => score }
    .zipWithIndex()
    .filter { case (score, index) => index == retrievedEntry }
    .map { case (score, index) => score }
    .collect()(0)
}
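For reference, a rough PySpark sketch of the same idea (sort, index, pick the entry at the cutoff); clamping the index to num_entries - 1 is my adjustment to keep it in range:

def get_rdd_percentile(input_score, percentile):
    """Rough PySpark equivalent: nth percentile of a numeric RDD."""
    num_entries = input_score.count()
    # clamp the target index to the valid range [0, num_entries - 1]
    retrieved_entry = min(max(int(percentile * num_entries / 100.0), 0), num_entries - 1)
    return (input_score
            .sortBy(lambda score: score)
            .zipWithIndex()                              # (score, index) pairs
            .filter(lambda si: si[1] == retrieved_entry)
            .map(lambda si: si[0])
            .collect()[0])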
Answered by Benoît Carne
Here is the method I used, with window functions (pyspark 2.2.0).
from pyspark.sql import DataFrame

class median():
    """ Create median class with over method to pass partition """
    def __init__(self, df, col, name):
        assert col
        self.column = col
        self.df = df
        self.name = name

    def over(self, window):
        from pyspark.sql.functions import percent_rank, pow, first

        first_window = window.orderBy(self.column)                                  # first, order by column we want to compute the median for
        df = self.df.withColumn("percent_rank", percent_rank().over(first_window))  # add percent_rank column, percent_rank = 0.5 corresponds to median
        second_window = window.orderBy(pow(df.percent_rank - 0.5, 2))               # order by (percent_rank - 0.5)^2 ascending
        return df.withColumn(self.name, first(self.column).over(second_window))     # the first row of the window corresponds to median

def addMedian(self, col, median_name):
    """ Method to be added to spark native DataFrame class """
    return median(self, col, median_name)

# Add method to DataFrame class
DataFrame.addMedian = addMedian
Then call the addMedian method to calculate the median of col2:
from pyspark.sql import Window
median_window = Window.partitionBy("col1")
df = df.addMedian("col2", "median").over(median_window)
Finally, you can group by if needed.
df.groupby("col1", "median")
Answered by Ankit Kumar Namdeo
I have written a function which takes a data frame as input and returns a data frame with the median as output over a partition; order_col is the column for which we want to calculate the median, and part_col is the level at which we want to calculate the median:
from pyspark.sql import Window
import pyspark.sql.functions as F

def calculate_median(dataframe, part_col, order_col):
    win = Window.partitionBy(*part_col).orderBy(order_col)
    # count_row = dataframe.groupby(*part_col).distinct().count()
    dataframe.persist()
    dataframe.count()
    temp = dataframe.withColumn("rank", F.row_number().over(win))
    temp = temp.withColumn(
        "count_row_part",
        F.count(order_col).over(Window.partitionBy(part_col))
    )
    temp = temp.withColumn(
        "even_flag",
        F.when(
            F.col("count_row_part") % 2 == 0,
            F.lit(1)
        ).otherwise(
            F.lit(0)
        )
    ).withColumn(
        "mid_value",
        F.floor(F.col("count_row_part") / 2)
    )
    temp = temp.withColumn(
        "avg_flag",
        F.when(
            (F.col("even_flag") == 1) &
            (F.col("rank") == F.col("mid_value")) |
            ((F.col("rank") - 1) == F.col("mid_value")),
            F.lit(1)
        ).otherwise(
            F.when(
                F.col("rank") == F.col("mid_value") + 1,
                F.lit(1)
            )
        )
    )
    temp.show(10)
    return temp.filter(
        F.col("avg_flag") == 1
    ).groupby(
        part_col + ["avg_flag"]
    ).agg(
        F.avg(F.col(order_col)).alias("median")
    ).drop("avg_flag")
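A hypothetical call, assuming a DataFrame df with a grouping column col1 and a numeric column col2 (names are illustrative), might look like:

medians = calculate_median(df, ["col1"], "col2")
medians.show()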