Which function in Python Spark combines two RDDs by key

Disclaimer: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. You are free to use or share it, but you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/26908031/


Which function in spark is used to combine two RDDs by keys

python scala apache-spark rdd

Asked by MetallicPriest

Let us say I have the following two RDDs, with the following key-value pairs.

假设我有以下两个 RDD,具有以下密钥对值。

rdd1 = [ (key1, [value1, value2]), (key2, [value3, value4]) ]

and

rdd2 = [ (key1, [value5, value6]), (key2, [value7]) ]

Now, I want to join them by key values, so for example I want to return the following


ret = [ (key1, [value1, value2, value5, value6]), (key2, [value3, value4, value7]) ] 

How can I do this in Spark, using Python or Scala? One way is to use join, but join would create a tuple inside the tuple, whereas I want only one tuple per key-value pair.

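For illustration, here is a minimal PySpark sketch (not part of the original question) of the nested-tuple shape a plain join produces; it assumes a local Spark installation and placeholder string values matching the example above.

from pyspark import SparkContext

sc = SparkContext.getOrCreate()  # assumes a local Spark setup
rdd1 = sc.parallelize([("key1", ["value1", "value2"]), ("key2", ["value3", "value4"])])
rdd2 = sc.parallelize([("key1", ["value5", "value6"]), ("key2", ["value7"])])

# A plain join keeps the two value lists as a nested tuple per key:
# [('key1', (['value1', 'value2'], ['value5', 'value6'])),
#  ('key2', (['value3', 'value4'], ['value7']))]
print(rdd1.join(rdd2).collect())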

Accepted answer by maasg

I would union the two RDDs and do a reduceByKey to merge the values.


(rdd1 union rdd2).reduceByKey(_ ++ _)
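
For reference, a rough PySpark equivalent of this Scala one-liner could look like the following; it assumes rdd1 and rdd2 are pair RDDs whose values are Python lists, so that + concatenates them per key:

merged = rdd1.union(rdd2).reduceByKey(lambda left, right: left + right)
# [('key1', ['value1', 'value2', 'value5', 'value6']),
#  ('key2', ['value3', 'value4', 'value7'])]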

Answer by lmm

Just use join and then map the resulting RDD.


rdd1.join(rdd2).map { case (k, (ls, rs)) => (k, ls ++ rs) }
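
A comparable PySpark sketch of the same idea (join, then flatten the nested tuple of lists into one list per key), again assuming list-valued pair RDDs as in the question:

joined = rdd1.join(rdd2).map(lambda kv: (kv[0], kv[1][0] + kv[1][1]))
# [('key1', ['value1', 'value2', 'value5', 'value6']),
#  ('key2', ['value3', 'value4', 'value7'])]

Note that join only keeps keys that appear in both RDDs, while the union/reduceByKey approach in the accepted answer also keeps keys present in just one of them.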