Python: 'PipelinedRDD' object has no attribute 'toDF' in PySpark

Disclaimer: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/32788387/

Date: 2020-08-19 12:15:04  Source: igfitidea

'PipelinedRDD' object has no attribute 'toDF' in PySpark

python, apache-spark, pyspark, apache-spark-sql, rdd

Asked by Frederico Oliveira

I'm trying to load an SVM file and convert it to a DataFrame so I can use the ML module (Pipeline ML) from Spark. I've just installed a fresh Spark 1.5.0 on Ubuntu 14.04 (no spark-env.sh configured).

My my_script.py is:

from pyspark.mllib.util import MLUtils
from pyspark import SparkContext

sc = SparkContext("local", "Teste Original")
data = MLUtils.loadLibSVMFile(sc, "/home/svm_capture").toDF()

and I'm running using: ./spark-submit my_script.py

And I get the error:

Traceback (most recent call last):
File "/home/fred-spark/spark-1.5.0-bin-hadoop2.6/pipeline_teste_original.py", line 34, in <module>
data = MLUtils.loadLibSVMFile(sc, "/home/fred-spark/svm_capture").toDF()
AttributeError: 'PipelinedRDD' object has no attribute 'toDF'

What I can't understand is that if I run:

data = MLUtils.loadLibSVMFile(sc, "/home/svm_capture").toDF()

directly inside the PySpark shell, it works.

Accepted answer by zero323

The toDF method is a monkey patch executed inside the SparkSession constructor (the SQLContext constructor in 1.x), so to be able to use it you have to create a SQLContext (or SparkSession) first:

# SQLContext or HiveContext in Spark 1.x
from pyspark.sql import SparkSession
from pyspark import SparkContext

sc = SparkContext()

rdd = sc.parallelize([("a", 1)])
hasattr(rdd, "toDF")
## False

spark = SparkSession(sc)
hasattr(rdd, "toDF")
## True

rdd.toDF().show()
## +---+---+
## | _1| _2|
## +---+---+
## |  a|  1|
## +---+---+

Not to mention you need a SQLContext or SparkSession to work with DataFrames in the first place.

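The interactive pyspark shell works because it already creates a SparkSession (and, in 1.x, a SQLContext) for you at startup. For reference, here is a minimal sketch of the asker's my_script.py with this fix applied (assuming Spark 1.x, where SQLContext is the class to construct; the path and app name are taken from the question):

from pyspark import SparkContext
from pyspark.sql import SQLContext  # use SparkSession in Spark 2.x+
from pyspark.mllib.util import MLUtils

sc = SparkContext("local", "Teste Original")
sqlContext = SQLContext(sc)  # constructing it monkey-patches RDD.toDF

# toDF is now available on the RDD returned by loadLibSVMFile
data = MLUtils.loadLibSVMFile(sc, "/home/svm_capture").toDF()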

Answered by Amirreza Mohammadi

Make sure you have a Spark session too.

from pyspark import SparkContext
from pyspark.sql import SparkSession
sc = SparkContext("local", "first app")
spark = SparkSession(sc)
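In Spark 2.x and later, a minimal sketch (not from the original answer) is to build the session first and take the SparkContext from it; the builder returns an existing session if one is already running:

from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local").appName("first app").getOrCreate()
sc = spark.sparkContext  # the context comes with the session

rdd = sc.parallelize([("a", 1)])
rdd.toDF(["letter", "number"]).show()  # toDF works because the session exists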