• restapi(8) - restapi-sql: user-driven services


    My original motivation for learning functional programming was the sense that the OOP languages and SQL databases I knew so well had a dim future in the modern commercial world, and I was ready to abandon the Windows stack entirely for distributed big-data technology. In practice, reality rarely matches the ideal. I had hoped to flex my muscles at a smaller company, assuming a small shop would carry less legacy baggage and be easier to overhaul technically. What I found instead is that even in a small company, once there is a mature product, a wholesale technology refresh is essentially impossible: the company has to survive, and developers cannot switch between old and new stacks at will. Short of fanatical enthusiasm, staff reluctance and even outright resistance are hard to overcome. The only workable approach is gradual migration: leave maintenance of the existing product untouched and build new products with new technology. In our case that means the existing pile of C# and SQL Server code must stay, while new capabilities such as big data, AI, and recognition are built with tools like Scala, Python, Dart, Akka, Kafka, Cassandra, and MongoDB. Which, of course, turns the integration between the old and new platforms into a problem of its own.

    We now have a concrete requirement: push the results of data processing done against MongoDB in a Linux/Ubuntu akka-cluster environment into SQL Server running on Windows Server. This is a classic cross-system integration scenario. My solution is a restapi service acting as a data bridge between the two systems. Its basic requirements are:

    1. Support front ends on any operating system: not a problem, since data is exchanged as json over http.

    2. Read and write MongoDB: already implemented in the restapi-mongo service discussed earlier.

    3. Read and write SQL Server running on Windows Server: the subject of this post.

    4. Let users operate on the back-end databases with minimal ceremony, ideally sparing both sides from negotiating a model for every kind of operation; in other words, users should be able to call the service freely.

    Earlier I built a jdbc-engine project on top of scalikejdbc, though it only demonstrated slick/h2-related functionality. Now I need a SQL Server JDBC driver to see whether the JVM can talk to SQL Server on Windows. I could not find the SQL Server driver on Maven, but mssql-jdbc-7.0.0.jre8.jar can be downloaded from Microsoft's website. As a plain jar it is what sbt calls an unmanaged jar: it does not go into the dependency list in build.sbt but into the lib directory under the project root (or into whatever path `unmanagedBase := ...` points to in build.sbt). Next comes the database connection; here is an application.conf that supports SQL Server:

    # JDBC settings
    prod {
      db {
        h2 {
          driver = "org.h2.Driver"
          url = "jdbc:h2:tcp://localhost/~/slickdemo"
          user = ""
          password = ""
          poolFactoryName = "hikaricp"
          numThreads = 10
          maxConnections = 12
          minConnections = 4
          keepAliveConnection = true
        }
        mysql {
          driver = "com.mysql.cj.jdbc.Driver"
          url = "jdbc:mysql://localhost:3306/testdb"
          user = "root"
          password = "123"
          poolFactoryName = "hikaricp"
          numThreads = 10
          maxConnections = 12
          minConnections = 4
          keepAliveConnection = true
        }
        postgres {
          driver = "org.postgresql.Driver"
          url = "jdbc:postgresql://localhost:5432/testdb"
          user = "root"
          password = "123"
          poolFactoryName = "hikaricp"
          numThreads = 10
          maxConnections = 12
          minConnections = 4
          keepAliveConnection = true
        }
        mssql {
          driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
          url = "jdbc:sqlserver://192.168.11.164:1433;integratedSecurity=false;Connect Timeout=3000"
          user = "sa"
          password = "Tiger2020"
          poolFactoryName = "hikaricp"
          numThreads = 10
          maxConnections = 12
          minConnections = 4
          keepAliveConnection = true
          connectionTimeout = 3000
        }
        termtxns {
          driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
          url = "jdbc:sqlserver://192.168.11.164:1433;DATABASE=TERMTXNS;integratedSecurity=false;Connect Timeout=3000"
          user = "sa"
          password = "Tiger2020"
          poolFactoryName = "hikaricp"
          numThreads = 10
          maxConnections = 12
          minConnections = 4
          keepAliveConnection = true
          connectionTimeout = 3000
        }
        crmdb {
          driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
          url = "jdbc:sqlserver://192.168.11.164:1433;DATABASE=CRMDB;integratedSecurity=false;Connect Timeout=3000"
          user = "sa"
          password = "Tiger2020"
          poolFactoryName = "hikaricp"
          numThreads = 10
          maxConnections = 12
          minConnections = 4
          keepAliveConnection = true
          connectionTimeout = 3000
        }
      }
      # scalikejdbc global settings
      scalikejdbc.global.loggingSQLAndTime.enabled = true
      scalikejdbc.global.loggingSQLAndTime.logLevel = info
      scalikejdbc.global.loggingSQLAndTime.warningEnabled = true
      scalikejdbc.global.loggingSQLAndTime.warningThresholdMillis = 1000
      scalikejdbc.global.loggingSQLAndTime.warningLogLevel = warn
      scalikejdbc.global.loggingSQLAndTime.singleLineMode = false
      scalikejdbc.global.loggingSQLAndTime.printUnprocessedStackTrace = false
      scalikejdbc.global.loggingSQLAndTime.stackTraceDepth = 10
    }

    The mssql, termtxns, and crmdb sections of this file are all for SQL Server, and all of them use the HikariCP connection pool.
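As a side note on the unmanaged driver jar mentioned above, here is a minimal build.sbt sketch. lib/ is sbt's default unmanaged path, so the setting only needs to be stated explicitly when you want a non-default location:

```scala
// build.sbt (sketch): sbt already scans lib/ for unmanaged jars;
// override unmanagedBase only to point somewhere else.
unmanagedBase := baseDirectory.value / "lib"
```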

    The databases are started in jdbc-engine like this:

      ConfigDBsWithEnv("prod").setup('termtxns)
      ConfigDBsWithEnv("prod").setup('crmdb)
      ConfigDBsWithEnv("prod").loadGlobalSettings()

    This opens the databases declared as termtxns and crmdb in the configuration file.

    Below is the code for SqlHttpServer.scala:

    package com.datatech.rest.sql
    import akka.http.scaladsl.Http
    import akka.http.scaladsl.server.Directives._
    import pdi.jwt._
    import AuthBase._
    import MockUserAuthService._
    import com.datatech.sdp.jdbc.config.ConfigDBsWithEnv
    
    import akka.actor.ActorSystem
    import akka.stream.ActorMaterializer
    
    import Repo._
    import SqlRoute._
    
    object SqlHttpServer extends App {
    
      implicit val httpSys = ActorSystem("sql-http-sys")
      implicit val httpMat = ActorMaterializer()
      implicit val httpEC = httpSys.dispatcher
    
      ConfigDBsWithEnv("prod").setup('termtxns)
      ConfigDBsWithEnv("prod").setup('crmdb)
      ConfigDBsWithEnv("prod").loadGlobalSettings()
    
      implicit val authenticator = new AuthBase()
        .withAlgorithm(JwtAlgorithm.HS256)
        .withSecretKey("OpenSesame")
        .withUserFunc(getValidUser)
    
      val route =
        path("auth") {
          authenticateBasic(realm = "auth", authenticator.getUserInfo) { userinfo =>
            post { complete(authenticator.issueJwt(userinfo))}
          }
        } ~
          pathPrefix("api") {
            authenticateOAuth2(realm = "api", authenticator.authenticateToken) { token =>
              new SqlRoute("sql", token)(new JDBCRepo)
                .route
              // ~ ...
            }
          }
    
      val (port, host) = (50081,"192.168.11.189")
    
      val bindingFuture = Http().bindAndHandle(route,host,port)
    
      println(s"Server running at $host:$port. Press RETURN to stop ...")
    
      scala.io.StdIn.readLine()
      
      bindingFuture.flatMap(_.unbind())
        .onComplete(_ => httpSys.terminate())
    
    }

    The service entry point is at http://mydemo.com/api/sql. It offers three kinds of operations: get, post, and put. See SqlRoute:

    package com.datatech.rest.sql
    import akka.http.scaladsl.server.Directives
    import akka.stream.ActorMaterializer
    import akka.http.scaladsl.model._
    import akka.actor.ActorSystem
    import com.datatech.rest.sql.Repo.JDBCRepo
    import akka.http.scaladsl.common._
    import spray.json.DefaultJsonProtocol
    import akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport
    
    trait JsFormats extends SprayJsonSupport with DefaultJsonProtocol
    object JsConverters extends JsFormats {
      import SqlModels._
      implicit val brandFormat = jsonFormat2(Brand)
      implicit val customerFormat = jsonFormat6(Customer)
    }
    
    object SqlRoute {
      import JsConverters._
      implicit val jsonStreamingSupport = EntityStreamingSupport.json()
        .withParallelMarshalling(parallelism = 8, unordered = false)
    
      class SqlRoute(val pathName: String, val jwt: String)(repo: JDBCRepo)(
      implicit  sys: ActorSystem, mat: ActorMaterializer) extends Directives with JsonConverter {
        val route = pathPrefix(pathName) {
          path(Segment / Remaining) { case (db, tbl) =>
            (get & parameter('sqltext)) { sql => {
              val rsc = new RSConverter
              val rows = repo.query[Map[String,Any]](db, sql, rsc.resultSet2Map)
              complete(rows.map(m => toJson(m)))
            }
            } ~ (post & parameter('sqltext)) { sql =>
                  entity(as[String]){ json =>
                    repo.batchInsert(db,tbl,sql,json)
                    complete(StatusCodes.OK)
                  }
            } ~ put {
              entity(as[Seq[String]]) { sqls =>
                repo.update(db, sqls)
                complete(StatusCodes.OK)
              }
            }
          }
        }
      }
    }

    The defining feature of jdbc-engine is that operations are driven by sql statements in plain string form, so service calls can be made simply by passing sql strings: a low barrier to entry and easy to generalize. restapi-sql provides the ordinary SQL Server operations on the server side: read (get), insert (post), and update (put). The SQL Server operations live in JDBCRepo:

    package com.datatech.rest.sql
    import com.datatech.sdp.jdbc.engine.JDBCEngine._
    import com.datatech.sdp.jdbc.engine.{JDBCQueryContext, JDBCUpdateContext}
    import scalikejdbc._
    import akka.stream.ActorMaterializer
    import com.datatech.sdp.result.DBOResult.DBOResult
    import akka.stream.scaladsl._
    import scala.concurrent._
    import SqlModels._
    
    object Repo {
    
      class JDBCRepo(implicit ec: ExecutionContextExecutor, mat: ActorMaterializer) {
        def query[R](db: String, sqlText: String, toRow: WrappedResultSet => R): Source[R,Any] = {
          //construct the context
          val ctx = JDBCQueryContext(
            dbName = Symbol(db),
            statement = sqlText
          )
          jdbcAkkaStream(ctx,toRow)
        }
    
        def query(db: String, tbl: String, sqlText: String) = {
          //construct the context
          val ctx = JDBCQueryContext(
            dbName = Symbol(db),
            statement = sqlText
          )
          jdbcQueryResult[Vector,RS](ctx,getConverter(tbl)).toFuture[Vector[RS]]
        }
    
        def update(db: String, sqlTexts: Seq[String]): DBOResult[Seq[Long]] = {
          val ctx = JDBCUpdateContext(
            dbName = Symbol(db),
            statements = sqlTexts
          )
          jdbcTxUpdates(ctx)
        }
        def bulkInsert[P](db: String, sqlText: String, prepParams: P => Seq[Any], params: Source[P,_]) = {
          val insertAction = JDBCActionStream(
            dbName = Symbol(db),
            parallelism = 4,
            processInOrder = false,
            statement = sqlText,
            prepareParams = prepParams
          )
          params.via(insertAction.performOnRow).to(Sink.ignore).run()
        }
        def batchInsert(db: String, tbl: String, sqlText: String, jsonParams: String):DBOResult[Seq[Long]] = {
          val ctx = JDBCUpdateContext(
            dbName = Symbol(db),
            statements = Seq(sqlText),
            batch = true,
            parameters = getSeqParams(jsonParams,sqlText)
          )
          jdbcBatchUpdate[Seq](ctx)
        }
      }
      import monix.execution.Scheduler.Implicits.global
      implicit class DBResultToFuture(dbr: DBOResult[_]){
        def toFuture[R] = {
          dbr.value.value.runToFuture.map {
            eor =>
              eor match {
                case Right(or) => or match {
                  case Some(r) => r.asInstanceOf[R]
                  case None => throw new RuntimeException("Operation produced None result!")
                }
                case Left(err) => throw new RuntimeException(err)
              }
          }
        }
      }
    }

    The read part, def query[R](db: String, sqlText: String, toRow: WrappedResultSet => R): Source[R,Any] = {...}, returns Source[R,Any]. Let's take a good look at that R. R is the result of a read, usually some class or model; reading Person records, say, returns a set of Person instances. There is a strongly-typed feel to this, and at first I too followed the crowd, insisting on defining a model and converting wire data with toJson[E] and fromJson[E]. The problem is that restapi-sql is a shared service: its users know which tables exist on the SQL Server and want to read them with sql statements of their own. Those statements may reach beyond any single table, with joins, unions, and so on. If we insist that every result must have a corresponding model, we clearly sacrifice the generality of the service. In fact, data exchanged over http cannot really be strongly typed anyway, because it passes through json conversion, and json conversion only requires that field names and field types match on both sides; which concrete type gets converted into which other concrete type does not matter. A representation of field name plus field value is exactly Map[K,V], so we can use Map[K,V] as a universal model and nobody will be the wiser. In other words, the caller specifies the returned field names in the sql statement, their values typed as Any, with the concrete types supplied by the database. The service reads the ResultSet, converts it to Map[K,V], then to json, and returns it; the caller can build any type it wants from the Map[String,Any]. That is what I mean by user autonomy. Now let's look at how to convert a ResultSet into Map[String,Any]:

    package com.datatech.rest.sql
    import scalikejdbc._
    import java.sql.ResultSetMetaData
    class RSConverter {
      import RSConverterUtil._
      var rsMeta: ResultSetMetaData = _
      var columnCount: Int = 0
      var rsFields: List[(String,String)] = List[(String,String)]()
    
      def getFieldsInfo: List[(String,String)] =
        (1 to columnCount).foldLeft(List[(String,String)]()) {  // JDBC column indexes run from 1 to columnCount inclusive
        case (cons,i) =>
          (rsMeta.getColumnLabel(i) -> rsMeta.getColumnTypeName(i)) :: cons
      }
      def resultSet2Map(rs: WrappedResultSet): Map[String,Any] = {
        if(columnCount == 0) {
          rsMeta =  rs.underlying.getMetaData
          columnCount = rsMeta.getColumnCount
          rsFields = getFieldsInfo
        }
        rsFields.foldLeft(Map[String,Any]()) {
          case (m,(n,t)) =>
            m + (n -> rsFieldValue(n,t,rs))
        }
      }
    }
    object RSConverterUtil {
      import scala.collection.immutable.TreeMap
      def map2Params(stm: String, m: Map[String,Any]): Seq[Any] = {
        val sortedParams = m.foldLeft(TreeMap[Int,Any]()) {
          case (t,(k,v)) => t + (stm.toUpperCase.indexOfSlice(k.toUpperCase) -> v)  // order values by field position in the sql template, case-insensitively
        }
        sortedParams.map(_._2).toSeq
      }
      def rsFieldValue(fldname: String, fldType: String, rs: WrappedResultSet): Any = fldType match {
        case "LONGVARCHAR" => rs.string(fldname)
        case "VARCHAR" => rs.string(fldname)
        case "CHAR" => rs.string(fldname)
        case "BIT" => rs.boolean(fldname)
        case "TIME" => rs.time(fldname)
        case "TIMESTAMP" => rs.timestamp(fldname)
        case "ARRAY" => rs.array(fldname)
        case "NUMERIC" => rs.bigDecimal(fldname)
        case "BLOB" => rs.blob(fldname)
        case "TINYINT" => rs.byte(fldname)
        case "VARBINARY" => rs.bytes(fldname)
        case "BINARY" => rs.bytes(fldname)
        case "CLOB" => rs.clob(fldname)
        case "DATE" => rs.date(fldname)
        case "DOUBLE" => rs.double(fldname)
        case "REAL" => rs.float(fldname)
        case "FLOAT" => rs.float(fldname)
        case "INTEGER" => rs.int(fldname)
        case "SMALLINT" => rs.int(fldname)
        case "BIGINT" => rs.long(fldname)
        case _ => rs.any(fldname)   // fall back to the raw value for any type name not listed above
      }
    }

    The main job of this code is converting a JDBC ResultSet into Map[String,Any]. In the restapi-mongo service discussed earlier we converted Document into Map[String,Any] for the same purpose.
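To illustrate why Map[String,Any] works as a universal model, here is a minimal self-contained sketch. The project's real json conversion lives in its JsonConverter trait, which is not shown in this post; this stand-in only renders strings and numbers:

```scala
// Stand-in for the json step: any row shape, including join results,
// fits Map[String, Any], and a json object can be rendered from it by
// inspecting the runtime type of each value.
object MapJsonSketch {
  def toJson(m: Map[String, Any]): String =
    m.map {
      case (k, v: String) => s""""$k":"$v""""  // quote string values
      case (k, v)         => s""""$k":$v"""    // numbers, booleans as-is
    }.mkString("{", ",", "}")
}
```

For example, MapJsonSketch.toJson(Map("NAME" -> "a", "QTY" -> 3)) renders {"NAME":"a","QTY":3} without any model being declared for the row.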

    Here is an example of calling the query service:

        val getAllRequest = HttpRequest(
          HttpMethods.GET,
          uri = "http://192.168.11.189:50081/api/sql/termtxns/brand?sqltext=SELECT%20*%20FROM%20BRAND",
        ).addHeader(authentication)
    
        (for {
          response <- Http().singleRequest(getAllRequest)
          json <- Unmarshal(response.entity).to[String]
        } yield json).andThen {
          case Success(json) => println(s"Received json collection: $json")
          case Failure(err) => println(s"Error: ${err.getMessage}")
        }

    The point is that I only have to supply a sql statement and the service returns a json array; how I turn that json into whatever type I want is entirely up to me.
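As a sketch of that freedom, a caller might map the decoded rows onto its own case class. Person and its field names here are assumptions for illustration, not part of the service:

```scala
// The service only promises field-name/value pairs; the caller decides
// the target type. Person and its field names are hypothetical.
object ClientSideTyping {
  case class Person(fullname: String, code: Int)

  // row as it would look after decoding one json object to Map[String, Any]
  def toPerson(row: Map[String, Any]): Person =
    Person(row("fullname").toString, row("code").toString.toInt)
}
```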

    Now for the post service. The goal here is batch insertion, for instance moving data from one table into another. In JDBC this normally starts with a template such as insert into person(fullname,code) values(?,?), followed by sets of parameter values for the batch insert. To be safe we still need to place each parameter in the correct position, which we can derive from the sql statement itself:

      def map2Params(stm: String, m: Map[String,Any]): Seq[Any] = {
        val sortedParams = m.foldLeft(TreeMap[Int,Any]()) {
          case (t,(k,v)) => t + (stm.toUpperCase.indexOfSlice(k.toUpperCase) -> v)
        }
        sortedParams.map(_._2).toSeq
      }
    
      def getSeqParams(json: String, sql: String): Seq[Seq[Any]] = {
        val seqOfjson = fromJson[Seq[String]](json)
        val prs = seqOfjson.map(fromJson[Map[String,Any]])
        prs.map(RSConverterUtil.map2Params(sql,_))
      }
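The position-matching logic can be exercised on its own. This standalone sketch repeats map2Params with plain data to show that the order of fields in the incoming parameter map is irrelevant:

```scala
import scala.collection.immutable.TreeMap

// Standalone copy of the position-matching logic: each value is keyed
// by where its field name occurs (case-insensitively) in the sql
// template, so the field order in the incoming json does not matter.
// Note it assumes no field name is a substring of another.
object Map2ParamsDemo {
  def map2Params(stm: String, m: Map[String, Any]): Seq[Any] = {
    val sortedParams = m.foldLeft(TreeMap[Int, Any]()) {
      case (t, (k, v)) => t + (stm.toUpperCase.indexOfSlice(k.toUpperCase) -> v)
    }
    sortedParams.map(_._2).toSeq
  }
}
```

Called with the template "insert into person(fullname,code) values(?,?)" and the reversed-order map Map("code" -> "A01", "fullname" -> "Tiger Chan"), it returns Seq("Tiger Chan", "A01"): fullname's value comes first because fullname appears first in the template.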

    Here is sample code for a batch insert:

        val encodedSelect = URLEncoder.encode("select id as code, name as fullname from members", "UTF-8")  // java.net.URLEncoder
        val encodedInsert = URLEncoder.encode("insert into person(fullname,code) values(?,?)", "UTF-8")
        val getMembers = HttpRequest(
           HttpMethods.GET,
           uri = "http://192.168.0.189:50081/api/sql/h2/members?sqltext="+encodedSelect
          ).addHeader(authentication)
        val postRequest = HttpRequest(
          HttpMethods.POST,
          uri = "http://192.168.0.189:50081/api/sql/h2/person?sqltext="+encodedInsert,
        ).addHeader(authentication)
        
        (for {
          _ <- update("http://192.168.0.189:50081/api/sql/h2/person",Seq(createCTX))
          respMembers <- Http().singleRequest(getMembers)
          message <- Unmarshal(respMembers.entity).to[String]
          reqEntity <- Marshal(message).to[RequestEntity]
          respInsert <- Http().singleRequest(postRequest.copy(entity = reqEntity))
     //       HttpEntity(ContentTypes.`application/json`,ByteString(message))))
        } yield respInsert).onComplete {
          case Success(r@HttpResponse(StatusCodes.OK, _, entity, _)) =>
            println("bulk insert successful!")
          case Success(_) => println("bulk insert failed!")
          case Failure(err) => println(s"Error: ${err.getMessage}")
        }

    Notice that I deliberately reversed the field order in the parameter list relative to the field order in the insert sql, yet still got the correct result.

    Finally, put: this is designed for batch transaction processing. It accepts one or more parameterless sql commands; multiple commands are executed inside a single transaction. It is used as follows:

        def update(url: String, cmds: Seq[String])(implicit token: Authorization): Future[HttpResponse] =
        for {
          reqEntity <- Marshal(cmds).to[RequestEntity]
          response <- Http().singleRequest(HttpRequest(
            method=HttpMethods.PUT,uri=url,entity=reqEntity)
          .addHeader(token))
        } yield response

    The discussion above introduced a rest service backed by SQL Server. In principle it differs little from the restapi-mongo service discussed earlier; the key point is that it realizes user-driven database operations.

  • Original post: https://www.cnblogs.com/tiger-xc/p/11754487.html