Scala: how to do count(*) within a Spark DataFrame groupBy
Original URL: http://stackoverflow.com/questions/46417118/
Warning: this content is provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must attribute it to the original authors (not me): Stack Overflow
Asked by javadba
My intention is to do the equivalent of the basic SQL:
select shipgrp, shipstatus, count(*) cnt
from shipstatus group by shipgrp, shipstatus
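For reference, that query can also be run as literal SQL in Spark by registering the DataFrame as a temporary view. A minimal sketch, assuming an existing SparkSession named spark and a DataFrame df with shipgrp and shipstatus columns (names taken from the question):

// Assumption: `spark` and `df` already exist in scope
df.createOrReplaceTempView("shipstatus")
spark.sql(
  """select shipgrp, shipstatus, count(*) as cnt
    |from shipstatus
    |group by shipgrp, shipstatus""".stripMargin
).show()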
The examples that I have seen for Spark DataFrames include rollups by other columns, e.g.
df.groupBy($"shipgrp", $"shipstatus").agg(sum($"quantity"))
But no other column is needed in my case shown above. So what is the syntax and/or method call combination here?
Update: A reader has suggested this question is a duplicate of dataframe: how to groupBy/count then filter on count in Scala, but that one is about filtering by count; there is no filtering here.
Answered by Psidom
You can similarly do count("*") in Spark's agg function:
df.groupBy("shipgrp", "shipstatus").agg(count("*").as("cnt"))
// Toy data: the ("a", 1) pair appears twice, so its group count is 2
val df = Seq(("a", 1), ("a", 1), ("b", 2), ("b", 3)).toDF("A", "B")
df.groupBy("A", "B").agg(count("*").as("cnt")).show
+---+---+---+
| A| B|cnt|
+---+---+---+
| b| 2| 1|
| a| 1| 2|
| b| 3| 1|
+---+---+---+
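For completeness, count comes from org.apache.spark.sql.functions, and toDF needs the session implicits in scope. Below is a self-contained sketch of the same example, assuming Spark 2.x or later run locally; it also shows the groupBy(...).count() shorthand, which produces a column named "count" that you can rename to "cnt":

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.count

// Assumption: a throwaway local session just for this demo
val spark = SparkSession.builder().master("local[*]").appName("groupby-count").getOrCreate()
import spark.implicits._

val df = Seq(("a", 1), ("a", 1), ("b", 2), ("b", 3)).toDF("A", "B")

// The answer's approach: agg with count("*")
df.groupBy("A", "B").agg(count("*").as("cnt")).show()

// Shorthand: RelationalGroupedDataset.count() adds a "count" column;
// rename it if you want "cnt" as in the original SQL
df.groupBy("A", "B").count().withColumnRenamed("count", "cnt").show()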

