Note: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same license and attribute it to the original authors (not me). Original StackOverflow question: http://stackoverflow.com/questions/32742004/

Create Spark DataFrame. Can not infer schema for type: <type 'float'>

python apache-spark dataframe pyspark apache-spark-sql

Asked by Breach

Could someone help me solve this problem I have with Spark DataFrame?

When I do myFloatRDD.toDF() I get an error:

TypeError: Can not infer schema for type: <type 'float'>

I don't understand why...

Example:

myFloatRdd = sc.parallelize([1.0,2.0,3.0])
df = myFloatRdd.toDF()

Thanks

Accepted answer by zero323

SparkSession.createDataFrame, which is used under the hood, requires an RDD/list of Row/tuple/list/dict* or a pandas.DataFrame, unless a schema with a DataType is provided. Try to convert the float to a tuple like this:

myFloatRdd.map(lambda x: (x, )).toDF()

or even better:

from pyspark.sql import Row

row = Row("val") # Or some other column name
myFloatRdd.map(row).toDF()
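
Putting it together, a minimal end-to-end sketch (assuming the SparkContext sc from the question and the row object defined above):

myFloatRdd = sc.parallelize([1.0, 2.0, 3.0])
df = myFloatRdd.map(row).toDF()
df.show()

## +---+
## |val|
## +---+
## |1.0|
## |2.0|
## |3.0|
## +---+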

To create a DataFrame from a list of scalars you'll have to use SparkSession.createDataFrame directly and provide a schema***:

from pyspark.sql.types import FloatType

df = spark.createDataFrame([1.0, 2.0, 3.0], FloatType())

df.show()

## +-----+
## |value|
## +-----+
## |  1.0|
## |  2.0|
## |  3.0|
## +-----+

but for a simple range it would be better to use SparkSession.range:

from pyspark.sql.functions import col

spark.range(1, 4).select(col("id").cast("double"))
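
To inspect the result, a quick check (the alias is optional and is added here only to name the output column):

spark.range(1, 4).select(col("id").cast("double").alias("val")).show()

## +---+
## |val|
## +---+
## |1.0|
## |2.0|
## |3.0|
## +---+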


* No longer supported.
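
For context, this is the pattern that footnote refers to; in Spark 2.x a list of dicts still ran but emitted a deprecation warning suggesting Row instead (exact behavior varies by version):

spark.createDataFrame([{"val": 1.0}, {"val": 2.0}]).show()

## +---+
## |val|
## +---+
## |1.0|
## |2.0|
## +---+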

** Spark SQL also provides limited support for schema inference on Python objects exposing __dict__.
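
A minimal sketch of that inference path (the Record class is hypothetical, used here purely for illustration; behavior may vary across versions):

# Hypothetical class; Spark can read its __dict__ to infer a schema
class Record(object):
    def __init__(self, val):
        self.val = val

spark.createDataFrame([Record(1.0), Record(2.0)]).show()

## +---+
## |val|
## +---+
## |1.0|
## |2.0|
## +---+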

*** Supported only in Spark 2.0 or later.
