Explode in PySpark (Python)

Disclaimer: the content below is taken from a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/38210507/


Explode in PySpark

Tags: python, apache-spark, pyspark, apache-spark-sql

Asked by user1982118

I would like to transform from a DataFrame that contains lists of words into a DataFrame with each word in its own row.


How do I do explode on a column in a DataFrame?


Here is an example with some of my attempts where you can uncomment each code line and get the error listed in the following comment. I use PySpark in Python 2.7 with Spark 1.6.1.


from pyspark.sql.functions import split, explode
DF = sqlContext.createDataFrame([('cat \n\n elephant rat \n rat cat', )], ['word'])
print 'Dataset:'
DF.show()
print '\n\n Trying to do explode: \n'
DFsplit_explode = (
 DF
 .select(split(DF['word'], ' '))
#  .select(explode(DF['word']))  # AnalysisException: u"cannot resolve 'explode(word)' due to data type mismatch: input to function explode should be array or map type, not StringType;"
#   .map(explode)  # AttributeError: 'PipelinedRDD' object has no attribute 'show'
#   .explode()  # AttributeError: 'DataFrame' object has no attribute 'explode'
).show()

# Trying without split
print '\n\n Only explode: \n'

DFsplit_explode = (
 DF 
 .select(explode(DF['word']))  # AnalysisException: u"cannot resolve 'explode(word)' due to data type mismatch: input to function explode should be array or map type, not StringType;"
).show()

Please advise.


Answered by zero323

explode and split are SQL functions. Both operate on a SQL Column. split takes a Java regular expression as a second argument. If you want to separate data on arbitrary whitespace you'll need something like this:


from pyspark.sql.functions import col, explode, split

df = sqlContext.createDataFrame(
    [('cat \n\n elephant rat \n rat cat', )], ['word']
)

# split() produces an array<string> column; explode() then emits one row per element.
df.select(explode(split(col("word"), "\s+")).alias("word")).show()

## +--------+
## |    word|
## +--------+
## |     cat|
## |elephant|
## |     rat|
## |     rat|
## |     cat|
## +--------+
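
For completeness, here is a minimal two-step sketch (reusing the df defined above; the intermediate name words is just illustrative) that shows why the asker's direct explode failed: split() first turns the StringType column into an ArrayType column, and only then does explode() accept it and flatten it into one row per word.

from pyspark.sql.functions import col, explode, split

# Step 1: split() converts the single string column into array<string>.
words = df.select(split(col("word"), "\s+").alias("words"))
words.printSchema()  # words: array<string> -- the type explode() requires

# Step 2: explode() now succeeds, emitting one row per array element.
words.select(explode(col("words")).alias("word")).show()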

Answered by Alexander

To split on whitespace and also remove blank lines, add the where clause.


from pyspark.sql.functions import explode, split

DF = sqlContext.createDataFrame([('cat \n\n elephant rat \n rat cat\nmat\n', )], ['word'])

>>> (DF.select(explode(split(DF.word, "\s")).alias("word"))
       .where('word != ""')
       .show())

+--------+
|    word|
+--------+
|     cat|
|elephant|
|     rat|
|     rat|
|     cat|
|     mat|
+--------+
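
The same blank-line filter can also be written as a Column expression instead of a SQL string; a quick sketch of that equivalent form, reusing the DF defined just above:

from pyspark.sql.functions import col, explode, split

# Equivalent to .where('word != ""') above, expressed as a Column condition.
(DF.select(explode(split(col("word"), "\s")).alias("word"))
   .where(col("word") != "")
   .show())

Either form should behave the same; the Column expression simply avoids quoting issues inside the SQL string.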