Python Spark SQL Row_number() PartitionBy Sort Desc

Disclaimer: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/35247168/

Tags: python, apache-spark, pyspark, apache-spark-sql, window-functions

Asked by jKraut

I've successfully created a row_number() partitionBy window in Spark, but would like to sort it descending instead of the default ascending. Here is my working code:

from pyspark.sql import HiveContext  # HiveContext lives in pyspark.sql, not the top-level pyspark package
from pyspark.sql.types import *
from pyspark.sql import Row, functions as F
from pyspark.sql.window import Window

# F.rowNumber() is the Spark 1.x name; later versions renamed it to F.row_number()
data_cooccur.select("driver", "also_item", "unit_count",
    F.rowNumber().over(Window.partitionBy("driver").orderBy("unit_count")).alias("rowNum")).show()

That gives me this result:

 +------+---------+----------+------+
 |driver|also_item|unit_count|rowNum|
 +------+---------+----------+------+
 |   s10|      s11|         1|     1|
 |   s10|      s13|         1|     2|
 |   s10|      s17|         1|     3|

And here I add desc() to sort in descending order:

data_cooccur.select("driver", "also_item", "unit_count",
    F.rowNumber().over(Window.partitionBy("driver").orderBy("unit_count").desc()).alias("rowNum")).show()

And get this error:

AttributeError: 'WindowSpec' object has no attribute 'desc'

What am I doing wrong here?

Accepted answer by zero323

desc should be applied to a column, not a window definition. You can use either a method on a column:

from pyspark.sql.functions import col, row_number
from pyspark.sql.window import Window

row_number().over(
    Window.partitionBy("driver").orderBy(col("unit_count").desc())
)

or a standalone function:

from pyspark.sql.functions import desc, row_number

row_number().over(
    Window.partitionBy("driver").orderBy(desc("unit_count"))
)
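
Applied to the original query, the fix looks like this (a minimal sketch, assuming the same data_cooccur DataFrame from the question):

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# rowNum now counts from the highest unit_count down within each driver partition
data_cooccur.select(
    "driver", "also_item", "unit_count",
    F.row_number().over(
        Window.partitionBy("driver").orderBy(F.col("unit_count").desc())
    ).alias("rowNum")
).show()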

Answered by kennyut

Or you can use SQL directly in Spark SQL:

from pyspark.sql import SparkSession

spark = SparkSession\
    .builder\
    .master('local[*]')\
    .appName('Test')\
    .getOrCreate()

# Register the DataFrame as a temporary view so that SQL can reference it
data_cooccur.createOrReplaceTempView("data_cooccur")

spark.sql("""
    select driver
        ,also_item
        ,unit_count
        ,ROW_NUMBER() OVER (PARTITION BY driver ORDER BY unit_count DESC) AS rowNum
    from data_cooccur
""").show()

Answered by information_interchange

Update: Actually, I tried looking more into this, and it appears to not work (in fact, it throws an error). The reason it had seemed to work is that I had this code under a call to display() in Databricks, and code after the display() call is never run. It seems that orderBy() on a dataframe and orderBy() on a window are not actually the same. I will keep this answer up just for negative confirmation.
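
The difference can be illustrated directly; a minimal sketch, assuming a hypothetical DataFrame df with a "count" column (the exact error wording may vary by PySpark version):

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# On a DataFrame, orderBy() accepts the ascending keyword:
df.orderBy("count", ascending=False)

# Window.orderBy() only takes columns, so passing ascending=False there
# raises an error; use a descending column expression instead:
w = Window.partitionBy("COLLECTOR_NUMBER").orderBy(F.col("count").desc())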

As of PySpark 2.4 (and probably earlier), simply adding the keyword ascending=False into the orderBy call works for me.

Ex.

personal_recos.withColumn("row_number", F.row_number().over(Window.partitionBy("COLLECTOR_NUMBER").orderBy("count", ascending=False)))

and

personal_recos.withColumn("row_number", F.row_number().over(Window.partitionBy("COLLECTOR_NUMBER").orderBy(F.col("count").desc())))

seem to give me the same behaviour.
