
Disclaimer: this page is a translation of a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you use or share it, you must likewise follow the CC BY-SA license and attribute it to the original authors (not me). Original: http://stackoverflow.com/questions/44886772/


How to convert a string column with milliseconds to a timestamp with milliseconds in Spark 2.1 using Scala?

Tags: scala, datetime, apache-spark

Asked by keiv.fly

I am using Spark 2.1 with Scala.

How to convert a string column with milliseconds to a timestamp with milliseconds?

I tried the following code from the question Better way to convert a string field into timestamp in Spark:

import org.apache.spark.sql.functions.unix_timestamp
import spark.implicits._ // for $"..." and toDF; already in scope in spark-shell

val tdf = Seq((1L, "05/26/2016 01:01:01.601"), (2L, "#$@#@#")).toDF("id", "dts")
// unix_timestamp parses the string to whole seconds, then casts back to timestamp
val tts = unix_timestamp($"dts", "MM/dd/yyyy HH:mm:ss.SSS").cast("timestamp")
tdf.withColumn("ts", tts).show(2, false)

But I get the result without milliseconds:

+---+-----------------------+---------------------+
|id |dts                    |ts                   |
+---+-----------------------+---------------------+
|1  |05/26/2016 01:01:01.601|2016-05-26 01:01:01.0|
|2  |#$@#@#                 |null                 |
+---+-----------------------+---------------------+

Answered by keiv.fly

A UDF with SimpleDateFormat works: unix_timestamp resolves only to whole seconds, which is why the built-in conversion drops the fractional part, while SimpleDateFormat keeps the milliseconds. The idea is taken from Ram Ghadiyaram's link to a UDF implementation.

import java.text.SimpleDateFormat
import java.sql.Timestamp
import org.apache.spark.sql.functions.udf
import scala.util.{Try, Success, Failure}

// Parse with SimpleDateFormat, which keeps the milliseconds;
// empty or unparseable input yields None, i.e. null in the DataFrame.
val getTimestamp: (String => Option[Timestamp]) = s => s match {
  case "" => None
  case _ =>
    val format = new SimpleDateFormat("MM/dd/yyyy' 'HH:mm:ss.SSS")
    Try(new Timestamp(format.parse(s).getTime)) match {
      case Success(t) => Some(t)
      case Failure(_) => None
    }
}

val getTimestampUDF = udf(getTimestamp)
val tdf = Seq((1L, "05/26/2016 01:01:01.601"), (2L, "#$@#@#")).toDF("id", "dts")
val tts = getTimestampUDF($"dts")
tdf.withColumn("ts", tts).show(2, false)

with output:

+---+-----------------------+-----------------------+
|id |dts                    |ts                     |
+---+-----------------------+-----------------------+
|1  |05/26/2016 01:01:01.601|2016-05-26 01:01:01.601|
|2  |#$@#@#                 |null                   |
+---+-----------------------+-----------------------+

Answered by Paul Bendevis

There is an easier way than writing a UDF: parse the millisecond digits separately and add them to the unix timestamp as a fraction. The following code is PySpark, and should be very close to the Scala equivalent (see the sketch after the result below):

from pyspark.sql.functions import substring, unix_timestamp

timeFmt = "yyyy/MM/dd HH:mm:ss.SSS"
# unix_timestamp yields whole seconds; add the last three digits back as a fraction
df = df.withColumn('ux_t', unix_timestamp(df.t, format=timeFmt) + substring(df.t, -3, 3).cast('float')/1000)

Result: '2017/03/05 14:02:41.865' is converted to 1488722561.865
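
For reference, a minimal Scala sketch of the same approach, assuming a DataFrame df with a string column t in the format above:

import org.apache.spark.sql.functions.{col, substring, unix_timestamp}

val timeFmt = "yyyy/MM/dd HH:mm:ss.SSS"
// whole seconds from unix_timestamp, plus the last three digits as a fraction
val dfWithTs = df.withColumn(
  "ux_t",
  unix_timestamp(col("t"), timeFmt) + substring(col("t"), -3, 3).cast("float") / 1000
)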

Answered by gokulnath s

import org.apache.spark.sql.types.DataTypes;

// Note: casting a numeric column to TimestampType interprets the value as
// *seconds* since the epoch, so divide epoch milliseconds by 1000 first.
dataFrame.withColumn(
    "time_stamp",
    dataFrame.col("milliseconds_in_string")
        .cast(DataTypes.DoubleType)
        .divide(1000)
        .cast(DataTypes.TimestampType)
);

The code is in Java and is easy to convert to Scala; a minimal sketch follows.
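
A minimal Scala equivalent, under the same assumption that milliseconds_in_string holds epoch milliseconds as a string:

import org.apache.spark.sql.functions.col

// string millis -> fractional seconds -> timestamp (the fraction is preserved)
val withTs = dataFrame.withColumn(
  "time_stamp",
  (col("milliseconds_in_string").cast("double") / 1000).cast("timestamp")
)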