• sbt Compilation and Packaging (Part 6)


    1. Install sbt

    cd /home/hadoop/apps
    mkdir sbt
    cd sbt
    cp ~/Download/sbt-1.3.8.tgz .
    
    // Extract the archive
    tar -zxvf sbt-1.3.8.tgz
    
    // Copy sbt-launch.jar up to the outer directory
    cp sbt/bin/sbt-launch.jar .
    
    // Create run.sh, which compiles and packages Scala programs
    vim run.sh
    
    #!/bin/bash
    # Note: MaxPermSize was removed in Java 8; newer JVMs just print a warning and ignore it
    SBT_OPTS="-Xms512M -Xmx1536M -Xss1M -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=256M"
    java $SBT_OPTS -jar "$(dirname "$0")/sbt-launch.jar" "$@"
    
    // Current directory layout
    [hadoop@hadoop1 sbt]$ ll
    total 57592
    -rwxrwxr-x 1 hadoop hadoop      153 Oct 24 15:17 run.sh
    drwxrwxr-x 5 hadoop hadoop     4096 Feb  4  2020 sbt
    -rw-r--r-- 1 hadoop hadoop 57567520 Oct 24 15:13 sbt-1.3.8.tgz
    -rwxrwxr-x 1 hadoop hadoop  1389808 Oct 24 15:15 sbt-launch.jar
    drwxrwxr-x 3 hadoop hadoop     4096 Oct 24 15:29 target
    
    // Check the sbt version; the first run downloads some dependencies
    ./run.sh sbtVersion
    

    2. Compile a Standalone Scala Application with sbt

    1) Create the application directory:

    cd /home/hadoop/apps
    mkdir my_code
    cd my_code
    

    2) Write the Scala application:

    mkdir -p first_spark/src/main/scala/
    vim first_spark/src/main/scala/SimpleApp.scala
    
    import org.apache.spark.SparkContext
    import org.apache.spark.SparkContext._
    import org.apache.spark.SparkConf
    
    
    object SimpleApp {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("Simple Application")
        val sc = new SparkContext(conf)
    
        // Keep only elements strictly greater than 20
        val rdd = sc.parallelize(List(30, 50, 7, 6, 1, 20), 2).filter(x => x > 20)
    
        rdd.collect().foreach(println)
        sc.stop()
      }
    }   
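
    The filter step above can be sketched in plain Scala without a cluster; the RDD version applies the same predicate, only distributed across two partitions (`FilterDemo` is a hypothetical name used here for illustration):

```scala
// Plain-Scala sketch of the filter logic above; no Spark required.
object FilterDemo {
  def main(args: Array[String]): Unit = {
    val data = List(30, 50, 7, 6, 1, 20)
    // Same predicate as the RDD version: strictly greater than 20,
    // so the boundary value 20 itself is dropped.
    val kept = data.filter(_ > 20)
    println(kept) // List(30, 50)
  }
}
```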
    

    3) Write the sbt build file:

    cd /home/hadoop/apps/my_code/first_spark
    vim simple.sbt
    
    
    name := "Simple Project"
    version := "1.0"
    scalaVersion := "2.11.8"
    libraryDependencies += "org.apache.spark" %% "spark-core" % "2.2.0"
    

    Note: the matching Scala and spark-core versions can be read from the spark-shell startup output
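
    A note on the `%%` in the dependency line: sbt appends the project's Scala binary version to the artifact name, so with `scalaVersion := "2.11.8"` the line resolves to the artifact `spark-core_2.11`. Spelled out with a plain `%`, an equivalent sketch would be:

```scala
// Equivalent to the %% form above: the Scala binary suffix written explicitly.
libraryDependencies += "org.apache.spark" % "spark-core_2.11" % "2.2.0"
```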

    4) Package the Scala program with sbt:

    cd /home/hadoop/apps/my_code/first_spark
    
    // Compile; the first build downloads dependencies, so it is a bit slow
    [hadoop@hadoop1 first_spark]$ /home/hadoop/apps/sbt/run.sh package
    
    Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256M; support was removed in 8.0
    [info] Loading project definition from /home/hadoop/apps/my_code/first_spark/project
    [info] Loading settings for project first_spark from simple.sbt ...
    [info] Set current project to Simple Project (in build file:/home/hadoop/apps/my_code/first_spark/)
    [info] Compiling 1 Scala source to /home/hadoop/apps/my_code/first_spark/target/scala-2.11/classes ...
    [success] Total time: 25 s, completed Oct 24, 2021 3:57:27 PM
    

    After a successful build, two new directories appear under the current directory: project/ and target/. The jar is located at target/scala-2.11/simple-project_2.11-1.0.jar

    5) Run the Scala program:

    /home/hadoop/apps/spark-2.2.0/bin/spark-submit --class "SimpleApp" /home/hadoop/apps/my_code/first_spark/target/scala-2.11/simple-project_2.11-1.0.jar
    

    Original article: https://www.cnblogs.com/midworld/p/15647016.html