Python: How to create a dataframe from a list in Spark SQL?

Disclaimer: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. You are free to use or share it, but you must attribute it to the original authors (not me). Original source: http://stackoverflow.com/questions/43444925/


How to create dataframe from list in Spark SQL?

Tags: python, apache-spark, pyspark

Asked by Liangju Zeng

Spark version: 2.1

For example, in pyspark, I create a list:

test_list = [['Hello', 'world'], ['I', 'am', 'fine']]

Then how to create a dataframe from the test_list, where the dataframe's type is like below:

DataFrame[words: array<string>]

Answered by Pushkr

Here is how:

from pyspark.sql.types import *

cSchema = StructType([StructField("WordList", ArrayType(StringType()))])

# notice the extra square brackets around each element of the list
test_list = [[['Hello', 'world']], [['I', 'am', 'fine']]]

df = spark.createDataFrame(test_list, schema=cSchema)
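
As a quick check (my addition, not part of the original answer), the resulting dataframe can be inspected; WordList should come back as an array of strings, something like:

df.printSchema()
# root
#  |-- WordList: array (nullable = true)
#  |    |-- element: string (containsNull = true)
df.show()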

Answered by Grant Shannon

I had to work with multiple columns and types - the example below has one string column and one integer column. A slight adjustment to Pushkr's code (above) gives:

from pyspark.sql.types import *

cSchema = StructType([StructField("Words", StringType())\
                      ,StructField("total", IntegerType())])

test_list = [['Hello', 1], ['I am fine', 3]]

df = spark.createDataFrame(test_list, schema=cSchema)

Output:

df.show()
+---------+-----+
|    Words|total|
+---------+-----+
|    Hello|    1|
|I am fine|    3|
+---------+-----+
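
For completeness (my addition, not part of the original answer), df.printSchema() should confirm the two column types:

df.printSchema()
# root
#  |-- Words: string (nullable = true)
#  |-- total: integer (nullable = true)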

Answered by hamza tuna

You should use a list of Row objects ([Row]) to create the data frame.

from pyspark.sql import Row

spark.createDataFrame(list(map(lambda x: Row(words=x), test_list)))
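
Assuming test_list is the nested list from the question, printing the resulting dataframe should show the requested type (my note, not part of the original answer):

df = spark.createDataFrame(list(map(lambda x: Row(words=x), test_list)))
print(df)  # DataFrame[words: array<string>]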

Answered by Raju Bairishetti

You can create an RDD first from the input and then convert it to a dataframe (this answer is in Scala):

import sqlContext.implicits._

val testList = Array(Array("Hello", "world"), Array("I", "am", "fine"))
// create an RDD from the input
val testListRDD = sc.parallelize(testList)
val flatTestListRDD = testListRDD.flatMap(entry => entry)
// convert the RDD to a DataFrame
val testListDF = flatTestListRDD.toDF
testListDF.show
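
Since the question is about pyspark, here is a rough PySpark sketch of the same RDD-first approach (my addition, not part of the original answer). Unlike the Scala snippet's flatMap, it keeps each inner list as one record, so the resulting column has the array<string> type asked for in the question:

from pyspark.sql import Row

test_list = [['Hello', 'world'], ['I', 'am', 'fine']]

# create an RDD from the input, keeping each inner list as one record
test_rdd = spark.sparkContext.parallelize(test_list)

# convert the RDD to a DataFrame; each record becomes one array<string> row
df = spark.createDataFrame(test_rdd.map(lambda words: Row(words=words)))
df.show()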