Python AttributeError: 'DataFrame' object has no attribute 'map'
Disclaimer: This page is a translation of a popular StackOverflow question and answer, provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same CC BY-SA license and attribute it to the original authors (not me): StackOverFlow
Original question: http://stackoverflow.com/questions/39535447/
AttributeError: 'DataFrame' object has no attribute 'map'
Asked by Edamame
I wanted to convert the Spark data frame to an RDD of dense vectors using the code below:
from pyspark.mllib.clustering import KMeans
spark_df = sqlContext.createDataFrame(pandas_df)
rdd = spark_df.map(lambda data: Vectors.dense([float(c) for c in data]))
model = KMeans.train(rdd, 2, maxIterations=10, runs=30, initializationMode="random")
The detailed error message is:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-11-a19a1763d3ac> in <module>()
1 from pyspark.mllib.clustering import KMeans
2 spark_df = sqlContext.createDataFrame(pandas_df)
----> 3 rdd = spark_df.map(lambda data: Vectors.dense([float(c) for c in data]))
4 model = KMeans.train(rdd, 2, maxIterations=10, runs=30, initializationMode="random")
/home/edamame/spark/spark-2.0.0-bin-hadoop2.6/python/pyspark/sql/dataframe.pyc in __getattr__(self, name)
842 if name not in self.columns:
843 raise AttributeError(
--> 844 "'%s' object has no attribute '%s'" % (self.__class__.__name__, name))
845 jc = self._jdf.apply(name)
846 return Column(jc)
AttributeError: 'DataFrame' object has no attribute 'map'
Does anyone know what I did wrong here? Thanks!
Answered by David
You can't map a dataframe, but you can convert the dataframe to an RDD and map that by doing spark_df.rdd.map(). Prior to Spark 2.0, spark_df.map would alias to spark_df.rdd.map(). With Spark 2.0, you must explicitly call .rdd first.
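Applied to the question's snippet, this is a one-line change: go through .rdd before calling .map. A minimal sketch, assuming the same sqlContext and pandas_df from the question; the Vectors import is added here because the original snippet uses Vectors.dense without importing it:

from pyspark.mllib.clustering import KMeans
from pyspark.mllib.linalg import Vectors

spark_df = sqlContext.createDataFrame(pandas_df)
# A DataFrame has no .map() in Spark 2.0+; map over the underlying RDD of Rows instead
rdd = spark_df.rdd.map(lambda data: Vectors.dense([float(c) for c in data]))
model = KMeans.train(rdd, 2, maxIterations=10, runs=30, initializationMode="random")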