Original URL: http://stackoverflow.com/questions/32788387/
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same license and attribute it to the original authors (not me): StackOverflow.
'PipelinedRDD' object has no attribute 'toDF' in PySpark
Asked by Frederico Oliveira
I'm trying to load an SVM file and convert it to a DataFrame so I can use the ML module (Pipeline ML) from Spark. I've just installed a fresh Spark 1.5.0 on Ubuntu 14.04 (no spark-env.sh configured).
My my_script.py is:
from pyspark.mllib.util import MLUtils
from pyspark import SparkContext
sc = SparkContext("local", "Teste Original")
data = MLUtils.loadLibSVMFile(sc, "/home/svm_capture").toDF()
and I'm running it using: ./spark-submit my_script.py
And I get the error:
Traceback (most recent call last):
File "/home/fred-spark/spark-1.5.0-bin-hadoop2.6/pipeline_teste_original.py", line 34, in <module>
data = MLUtils.loadLibSVMFile(sc, "/home/fred-spark/svm_capture").toDF()
AttributeError: 'PipelinedRDD' object has no attribute 'toDF'
What I can't understand is that if I run:
data = MLUtils.loadLibSVMFile(sc, "/home/svm_capture").toDF()
directly inside the PySpark shell, it works.
Accepted answer by zero323
The toDF method is a monkey patch executed inside the SparkSession constructor (the SQLContext constructor in 1.x), so to be able to use it you have to create a SQLContext (or SparkSession) first:
# SQLContext or HiveContext in Spark 1.x
from pyspark.sql import SparkSession
from pyspark import SparkContext

sc = SparkContext()
rdd = sc.parallelize([("a", 1)])

hasattr(rdd, "toDF")
## False

spark = SparkSession(sc)  # constructing the session runs the monkey patch

hasattr(rdd, "toDF")
## True

rdd.toDF().show()
## +---+---+
## | _1| _2|
## +---+---+
## |  a|  1|
## +---+---+
Not to mention you need a SQLContext or SparkSession to work with DataFrames in the first place. (This is also why the same call works in the PySpark shell: the shell creates a SQLContext/SparkSession for you at startup, so the patch has already been applied.)
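Applied to the question's script, a minimal sketch of the fix (assuming the same Spark 1.5 setup and the /home/svm_capture path from the question) would be:

from pyspark.mllib.util import MLUtils
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext("local", "Teste Original")
sqlContext = SQLContext(sc)  # constructing this runs the monkey patch that adds toDF
data = MLUtils.loadLibSVMFile(sc, "/home/svm_capture").toDF()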
Answered by Amirreza Mohammadi
Make sure you have a Spark session too.
from pyspark import SparkContext
from pyspark.sql import SparkSession

sc = SparkContext("local", "first app")
spark = SparkSession(sc)
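For what it's worth, on Spark 2.x and later the more idiomatic way to create the session is the builder, which creates the underlying SparkContext for you (a sketch; the master and app name here are arbitrary):

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .master("local") \
    .appName("first app") \
    .getOrCreate()
sc = spark.sparkContext  # grab the underlying SparkContext if you still need it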