How to convert a JSON string to a DataFrame on Spark

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license, cite the original URL, and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/38271611/



Tags: json, scala, apache-spark, dataframe

Asked by lucas kim

I want to convert the string variable below to a DataFrame on Spark.

val jsonStr = """{ "metadata": { "key": 84896, "value": 54 }}"""

I know how to create a DataFrame from a JSON file:

sqlContext.read.json("file.json")

But I don't know how to create a DataFrame from a string variable.

How can I convert a JSON string variable to a DataFrame?

Answered by Jean Logeart

For Spark 2.2+:

import spark.implicits._ // brings in the .toDS conversion
val jsonStr = """{ "metadata": { "key": 84896, "value": 54 }}"""
val df = spark.read.json(Seq(jsonStr).toDS) // spark is the SparkSession
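
As a quick check, printing the schema shows the nested metadata struct (a sketch of the expected output; the long types come from Spark's schema inference):

df.printSchema()
// root
//  |-- metadata: struct (nullable = true)
//  |    |-- key: long (nullable = true)
//  |    |-- value: long (nullable = true)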

For Spark 2.1.x:

val events = sc.parallelize("""{"action":"create","timestamp":"2016-01-07T00:01:17Z"}""" :: Nil) // one-element RDD[String]
val df = sqlContext.read.json(events)

Hint: this uses the sqlContext.read.json(jsonRDD: RDD[String]) overload. There is also sqlContext.read.json(path: String), which reads a JSON file directly.
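For contrast, a minimal sketch showing the two overloads side by side (file.json is a placeholder path):

val dfFromRdd = sqlContext.read.json(events)       // the RDD[String] overload
val dfFromFile = sqlContext.read.json("file.json") // the path overload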

For older versions:

val jsonStr = """{ "metadata": { "key": 84896, "value": 54 }}"""
val rdd = sc.parallelize(Seq(jsonStr))
val df = sqlContext.read.json(rdd)
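
As a usage sketch on the resulting DataFrame (assuming the nested schema inferred above), the struct fields can be selected with dotted paths:

df.select("metadata.key", "metadata.value").show()
// +-----+-----+
// |  key|value|
// +-----+-----+
// |84896|   54|
// +-----+-----+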

Answered by markus

Since the function for reading JSON from an RDD was deprecated in Spark 2.2, here is another option:

val jsonStr = """{ "metadata": { "key": 84896, "value": 54 }}"""
import spark.implicits._ // spark is your SparkSession object
val df = spark.read.json(Seq(jsonStr).toDS)

Answered by Dinesh Shinkar

To convert a list of JSON strings into a DataFrame in Spark 2.2:

val spark = SparkSession
          .builder()
          .master("local")
          .appName("Test")
          .getOrCreate()

val jsonString1 = """{"ID" : "111","NAME":"Arkay","LOC":"Pune"}"""
val jsonString2 = """{"ID" : "222","NAME":"DineshS","LOC":"PCMC"}"""
val strList = List(jsonString1, jsonString2)

val rddData = spark.sparkContext.parallelize(strList)
val resultDF = spark.read.json(rddData)
resultDF.show()

Result:

+---+----+-------+
| ID| LOC|   NAME|
+---+----+-------+
|111|Pune|  Arkay|
|222|PCMC|DineshS|
+---+----+-------+
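
As a hedged follow-up, ordinary DataFrame operations apply to the result (column names as inferred above):

import spark.implicits._ // for the $"..." column syntax
resultDF.filter($"LOC" === "Pune").select("ID", "NAME").show()
// +---+-----+
// | ID| NAME|
// +---+-----+
// |111|Arkay|
// +---+-----+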

Answered by Andrushenko Alexander

Here is an example of how to convert a JSON string to a DataFrame in Java (Spark 2.2+):

import java.util.ArrayList;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SQLContext;

String str1 = "{\"_id\":\"123\",\"ITEM\":\"Item 1\",\"CUSTOMER\":\"Billy\",\"AMOUNT\":285.2}";
String str2 = "{\"_id\":\"124\",\"ITEM\":\"Item 2\",\"CUSTOMER\":\"Sam\",\"AMOUNT\":245.85}";
List<String> jsonList = new ArrayList<>();
jsonList.add(str1);
jsonList.add(str2);
SparkContext sparkContext = new SparkContext(new SparkConf()
        .setAppName("myApp").setMaster("local"));
JavaSparkContext javaSparkContext = new JavaSparkContext(sparkContext);
SQLContext sqlContext = new SQLContext(sparkContext);
JavaRDD<String> javaRdd = javaSparkContext.parallelize(jsonList);
Dataset<Row> data = sqlContext.read().json(javaRdd);
data.show();

Here is the result:

+------+--------+------+---+
|AMOUNT|CUSTOMER|  ITEM|_id|
+------+--------+------+---+
| 285.2|   Billy|Item 1|123|
|245.85|     Sam|Item 2|124|
+------+--------+------+---+

Answered by kaushalop

simple_json = '{"results":[{"a":1,"b":2,"c":"name"},{"a":2,"b":5,"c":"foo"}]}'
rddjson = sc.parallelize([simple_json])
df = sqlContext.read.json(rddjson)

This answer is adapted from https://stackoverflow.com/a/49399359/2187751

Answered by MD Rijwan

In some cases you may hit an error such as Illegal pattern component: XXX. To handle it, add a timestampFormat option to spark.read, so the updated code becomes:

val spark = SparkSession
          .builder()
          .master("local")
          .appName("Test")
          .getOrCreate()
import spark.implicits._
val jsonStr = """{ "metadata": { "key": 84896, "value": 54 }}"""
val df = spark.read.option("timestampFormat", "yyyy/MM/dd HH:mm:ss ZZ").json(Seq(jsonStr).toDS)
df.show()
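
For context, timestampFormat overrides Spark's default timestamp pattern (whose XXX component is what triggers the error on some builds). A minimal sketch with a hypothetical timestamp field and an explicit schema, assuming the pattern matches the data:

import org.apache.spark.sql.types._
val tsJson = """{"event":"login","ts":"2020/09/03 18:29:55 +00:00"}"""
val tsSchema = new StructType()
  .add("event", StringType)
  .add("ts", TimestampType) // parsed using the timestampFormat option
val tsDf = spark.read
  .schema(tsSchema)
  .option("timestampFormat", "yyyy/MM/dd HH:mm:ss ZZ")
  .json(Seq(tsJson).toDS)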

Answered by linehrr

You can now read JSON directly from a Dataset[String]: https://spark.apache.org/docs/latest/sql-data-sources-json.html

val otherPeopleDataset = spark.createDataset(
  """{"name":"Yin","address":{"city":"Columbus","state":"Ohio"}}""" :: Nil)
val otherPeople = spark.read.json(otherPeopleDataset)
otherPeople.show()
// +---------------+----+
// |        address|name|
// +---------------+----+
// |[Columbus,Ohio]| Yin|
// +---------------+----+
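
A hedged follow-up on the same DataFrame, pulling fields out of the nested address struct:

otherPeople.select("name", "address.city").show()
// +----+--------+
// |name|    city|
// +----+--------+
// | Yin|Columbus|
// +----+--------+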