How to use window functions in PySpark?

Warning: this content is translated from a popular StackOverflow question and provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me). Original: http://stackoverflow.com/questions/31857863/



Tags: python, sql, apache-spark, pyspark, window-functions

Asked by jegordon

I'm trying to use some window functions (ntile and percentRank) for a data frame, but I don't know how to use them.


Can anyone help me with this please? In the Python API documentation there are no examples of it.


Specifically, I'm trying to get quantiles of a numeric field in my data frame.


I'm using Spark 1.4.0.


Accepted answer by zero323

To be able to use a window function you have to create a window first. The definition is pretty much the same as for normal SQL, which means you can define an order, a partition, or both. First, let's create some dummy data:


import numpy as np
np.random.seed(1)

# 10 "foo" keys and 10 "bar" keys, paired with draws from two normals
keys = ["foo"] * 10 + ["bar"] * 10
values = np.hstack([np.random.normal(0, 1, 10), np.random.normal(10, 1, 10)])

# sqlContext is assumed to exist already, as in the PySpark shell
df = sqlContext.createDataFrame([
   {"k": k, "v": round(float(v), 3)} for k, v in zip(keys, values)])

Make sure you're using HiveContext (Spark < 2.0 only):


from pyspark.sql import HiveContext

# Window functions in Spark < 2.0 are only available with a HiveContext
assert isinstance(sqlContext, HiveContext)
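
For readers on newer versions: from Spark 2.0 onwards HiveContext is not needed, and window functions work with a plain SparkSession. A minimal sketch of that setup (an addition for context, not part of the original 1.4-era answer):

# Spark >= 2.0: no HiveContext required for window functions
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [{"k": k, "v": round(float(v), 3)} for k, v in zip(keys, values)])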

Create a window:


from pyspark.sql.window import Window

w = Window.partitionBy(df.k).orderBy(df.v)

which is equivalent to


(PARTITION BY k ORDER BY v) 

in SQL.

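The same window can also be used from raw SQL; a sketch, assuming the frame has been registered as a temporary table (registerTempTable is the Spark 1.4-era API; Spark 2.0+ uses createOrReplaceTempView):

# Register the frame, then run the window functions in plain SQL
df.registerTempTable("df")

sqlContext.sql("""
    SELECT k, v,
           PERCENT_RANK() OVER (PARTITION BY k ORDER BY v) AS percent_rank,
           NTILE(3) OVER (PARTITION BY k ORDER BY v) AS ntile3
    FROM df
""")
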

As a rule of thumb, window definitions should always contain a PARTITION BY clause, otherwise Spark will move all the data to a single partition. An ORDER BY is required for some functions, while in other cases (typically aggregates) it may be optional; for plain aggregates a partition-only window is enough, as sketched below.

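For example, a plain aggregate such as avg works fine over a partition-only window; a short sketch (the w_part and mean_v names are illustrative, not from the original answer):

# No ORDER BY needed for a plain aggregate: per-key mean of v
from pyspark.sql.functions import avg

w_part = Window.partitionBy(df.k)
df.select("k", "v", avg("v").over(w_part).alias("mean_v"))
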

There are also two optional clauses which can be used to define the window span - ROWS BETWEEN and RANGE BETWEEN. They won't be useful for us in this particular scenario, but a sketch follows for completeness.

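Here, rowsBetween(-2, 0) lets each row see itself plus the two preceding rows within its partition; the sum_ alias and the moving_sum column name are illustrative assumptions:

# A moving sum over the current row and the two rows before it
from pyspark.sql.functions import sum as sum_

w_span = Window.partitionBy(df.k).orderBy(df.v).rowsBetween(-2, 0)
df.select("k", "v", sum_("v").over(w_span).alias("moving_sum"))
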

Finally, we can use it in a query:


from pyspark.sql.functions import percentRank, ntile

df.select(
    "k", "v",
    percentRank().over(w).alias("percent_rank"),  # relative rank within the partition, in [0, 1]
    ntile(3).over(w).alias("ntile3")  # splits each partition into 3 roughly equal buckets
)
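
Note that select is lazy, so nothing is computed until an action runs. To inspect the output, bind the result and call show() (the result name here is just an illustration):

result = df.select(
    "k", "v",
    percentRank().over(w).alias("percent_rank"),
    ntile(3).over(w).alias("ntile3"))
result.show()  # action: triggers the computation and prints the rows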

Note that ntile is not related in any way to the quantiles.

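Since the question is really after quantiles, one hedged way to approximate them with this window is to filter on percent_rank; a sketch where 0.5 targets a rough per-key median and the 0.1 tolerance is an arbitrary choice for illustration:

# Approximate per-key median: rows whose percent_rank lands near 0.5
from pyspark.sql.functions import abs as abs_

ranked = df.select("k", "v", percentRank().over(w).alias("percent_rank"))
ranked.where(abs_(ranked.percent_rank - 0.5) <= 0.1)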