Java: How do I collect a List of Strings from a Spark DataFrame Column after a GroupBy operation?
Disclaimer: this page is derived from a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me): StackOverflow.
Original: http://stackoverflow.com/questions/35324049/
How do I collect a List of Strings from spark DataFrame Column after a GroupBy operation?
Asked by Kai
The solution described here (by zero323) is very close to what I want, with two twists:
- How do I do it in Java?
- What if the column holds a List of Strings instead of a single String, and I want to collect all such lists into a single list after a GroupBy on some other column? (See the sketch just below.)
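For concreteness, using the sample data from the answer below, the desired transformation looks roughly like this (the element order inside each collected list is not guaranteed):

id | vs            id | collected
 1 | [a, b]         1 | [a, b, c, d]
 1 | [c, d]   =>    2 | [e, f, g, h]
 2 | [e, f]
 2 | [g, h]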
I am using Spark 1.6 and have tried to use
org.apache.spark.sql.functions.collect_list(Column col)
as described in the solution to that question, but got the following error:
Exception in thread "main" org.apache.spark.sql.AnalysisException: undefined function collect_list;
    at org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry$$anonfun$2.apply(FunctionRegistry.scala:65)
    at org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry$$anonfun$2.apply(FunctionRegistry.scala:65)
    at scala.Option.getOrElse(Option.scala:121)
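Presumably the failing setup looked something like this (a reconstruction; the question does not show how the context was built):

// Hypothetical reconstruction -- not shown in the original question.
SQLContext sqlContext = new SQLContext(sc);          // plain SQLContext, no Hive support
df.groupBy(col("id")).agg(collect_list(col("vs"))); // throws "undefined function collect_list"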
Answered by zero323
The error you see suggests that you are using a plain SQLContext, not a HiveContext. collect_list is a Hive UDF and as such requires a HiveContext. It also doesn't support complex columns, so the only option is to explode first:
import org.apache.spark.api.java.*;
import org.apache.spark.SparkConf;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.hive.HiveContext;
import java.util.*;
import org.apache.spark.sql.DataFrame;
import static org.apache.spark.sql.functions.*;

public class App {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(new SparkConf());
        // collect_list is a Hive UDF in Spark 1.6, so a HiveContext is required.
        SQLContext sqlContext = new HiveContext(sc);

        List<String> data = Arrays.asList(
            "{\"id\": 1, \"vs\": [\"a\", \"b\"]}",
            "{\"id\": 1, \"vs\": [\"c\", \"d\"]}",
            "{\"id\": 2, \"vs\": [\"e\", \"f\"]}",
            "{\"id\": 2, \"vs\": [\"g\", \"h\"]}"
        );

        DataFrame df = sqlContext.read().json(sc.parallelize(data));

        // collect_list cannot aggregate array columns directly, so explode
        // each array into one row per element first, then re-collect by id.
        df.withColumn("vs", explode(col("vs")))
          .groupBy(col("id"))
          .agg(collect_list(col("vs")))
          .show();
    }
}
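With the sample data above, show() prints something like the following (row order, and the element order within each collected list, are not guaranteed):

+---+----------------+
| id|collect_list(vs)|
+---+----------------+
|  1|    [a, b, c, d]|
|  2|    [e, f, g, h]|
+---+----------------+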
It is rather unlikely to perform well, though.
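If the explode-then-collect round trip turns out to be too slow, one possible alternative (a sketch, not part of the original answer, assuming the goal is simply id -> concatenated list) is to drop down to the RDD API and concatenate the lists with reduceByKey, which avoids materializing one row per array element in the DataFrame layer:

import org.apache.spark.api.java.JavaPairRDD;
import scala.Tuple2;
import java.util.ArrayList;
import java.util.List;

// Sketch only: reuses the DataFrame `df` built in the example above.
JavaPairRDD<Long, List<String>> collected = df.javaRDD()
    .mapToPair(row -> new Tuple2<Long, List<String>>(
        row.getLong(row.fieldIndex("id")),
        // copy into a mutable list; Row.getList returns a read-only view
        new ArrayList<String>(row.<String>getList(row.fieldIndex("vs")))))
    .reduceByKey((a, b) -> {
        List<String> merged = new ArrayList<>(a); // don't mutate the inputs
        merged.addAll(b);
        return merged;
    });

collected.collect().forEach(
    t -> System.out.println(t._1() + " -> " + t._2()));

Whether this actually beats the DataFrame version depends on the data: it still shuffles every string, so treat it as a starting point rather than a drop-in replacement.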