• Spark 2.1: working with JSON (save/read)


    Building configuration info:

    case class BuildingConfig(buildingid: String, building_height: Long, gridcount: Long, gis_display_name: String, wear_loss: Double, path_loss: Double) extends Serializable

    Writing a JSON file to HDFS:

     sql(
          s"""|select buildingid,
              |height,
              |gridcount,
              |collect_list(gis_display_name)[0] as gis_display_name,
              |avg(wear_loss) as wear_loss,
              |avg(path_loss) as path_loss
              |from
              |xxx
              |group by buildingid, height, gridcount
              |""".stripMargin)
          // .map on a DataFrame requires import spark.implicits._ in scope
          .map(s => BuildingConfig(s.getAs[String]("buildingid"), s.getAs[Int]("height").toLong, s.getAs[Long]("gridcount"), s.getAs[String]("gis_display_name"), s.getAs[Double]("wear_loss"), s.getAs[Double]("path_loss")))
          // "json" is the short name for the built-in JSON data source; the save path must match the read path below
          .toDF.write.format("json").mode(SaveMode.Overwrite).save(s"/user/my/buildingconfigjson/${p_city}")
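
    The snippet above assumes a SparkSession with Hive support (so that sql() can resolve the source table) and the implicits import in scope; a minimal setup sketch, in which the app name and the way p_city is obtained are assumptions not present in the original:

     import org.apache.spark.sql.{SaveMode, SparkSession}

     // Hive support is needed so that spark.sql(...) can resolve the source table.
     val spark = SparkSession.builder()
       .appName("BuildingConfigJson")  // hypothetical app name
       .enableHiveSupport()
       .getOrCreate()
     import spark.implicits._          // required for .map on a DataFrame and for .toDF

     val p_city = "city_code"          // placeholder: the city key interpolated into the save path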

    Reading the JSON file from HDFS:

     /**
          * scala> buildingConfig.printSchema
          * root
          * |-- building_height: long (nullable = true)
          * |-- buildingid: string (nullable = true)
          * |-- gis_display_name: string (nullable = true)
          * |-- gridcount: long (nullable = true)
          * |-- path_loss: double (nullable = true)
          * |-- wear_loss: double (nullable = true)
          */
        spark.read.json(s"/user/my/buildingconfigjson/${p_city}")
          .map(s => BuildingConfig(s.getAs[String]("buildingid"), s.getAs[Long]("building_height"), s.getAs[Long]("gridcount"), s.getAs[String]("gis_display_name"), s.getAs[Double]("wear_loss"), s.getAs[Double]("path_loss")))
          .createOrReplaceTempView("building_scene_config")
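
    Once the view is registered it can be queried like any other table; an illustrative query (the column list and the height threshold of 30 are made up for the example):

     spark.sql(
       """|select buildingid, gis_display_name, path_loss
          |from building_scene_config
          |where building_height > 30
          |""".stripMargin)
       .show(10, false)  // print a few rows without truncating long names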
  • Original post: https://www.cnblogs.com/yy3b2007com/p/8564220.html