• Apache Hadoop Cluster Offline Installation and Deployment (2): Installing Spark-2.1.0 on YARN


    Apache Hadoop Cluster Offline Installation and Deployment (1): Installing Hadoop (HDFS, YARN, MR): http://www.cnblogs.com/pojishou/p/6366542.html

    Apache Hadoop Cluster Offline Installation and Deployment (2): Installing Spark-2.1.0 on YARN: http://www.cnblogs.com/pojishou/p/6366570.html

    Apache Hadoop Cluster Offline Installation and Deployment (3): Installing HBase: http://www.cnblogs.com/pojishou/p/6366806.html

    0. Preparing the installation files

    Scala:http://downloads.lightbend.com/scala/2.11.8/scala-2.11.8.tgz

    Spark:http://www.apache.org/dist/spark/spark-2.1.0/spark-2.1.0-bin-hadoop2.7.tgz

    I. Installing Scala

    1. Extract the archive

    tar -zxvf scala-2.11.8.tgz -C /opt/program/
    ln -s /opt/program/scala-2.11.8 /opt/scala

    2. Set the environment variables

    vi /etc/profile
    
    export SCALA_HOME=/opt/scala
    export PATH=$SCALA_HOME/bin:$JAVA_HOME/bin:$PATH
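    The PATH line above prepends Scala's bin directory, so the copy of scala just installed is found before any other version on the system. A minimal sketch of why the ordering matters (paths as set above):

    ```shell
    # Prepending to PATH means the shell searches /opt/scala/bin first.
    SCALA_HOME=/opt/scala
    PATH=$SCALA_HOME/bin:$PATH
    # The first PATH entry is now Scala's bin directory:
    echo "$PATH" | cut -d: -f1
    ```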

    3. Apply the changes

    source /etc/profile

    4. scp to the other nodes
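    The scp step can be sketched as a small loop. The hostnames node02 and node03 are placeholders for your actual worker nodes; as a precaution, the loop below only prints the commands (a dry run) rather than executing them:

    ```shell
    #!/bin/sh
    # Dry-run sketch: print the copy commands for each worker node.
    # NODES is an assumption -- substitute your real hostnames.
    NODES="node02 node03"
    for node in $NODES; do
      echo "scp -r /opt/program/scala-2.11.8 root@${node}:/opt/program/"
      echo "ssh root@${node} ln -s /opt/program/scala-2.11.8 /opt/scala"
      echo "scp /etc/profile root@${node}:/etc/"
    done
    ```

    Remove the echo (keeping the quoted command as-is) to actually copy the files; remember to run source /etc/profile on each node afterwards.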

    II. Installing Spark

    1. Extract the archive

    tar -zxvf spark-2.1.0-bin-hadoop2.7.tgz -C /opt/program/
    ln -s /opt/program/spark-2.1.0-bin-hadoop2.7 /opt/spark

    2. Edit the configuration file

    vi /opt/spark/conf/spark-env.sh
    
    export JAVA_HOME=/opt/java
    export SCALA_HOME=/opt/scala
    export HADOOP_HOME=/opt/hadoop
    export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
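    HADOOP_CONF_DIR is what lets spark-submit locate the YARN ResourceManager and HDFS, so a quick sanity check (a sketch, using the paths set above) is to confirm the Hadoop client configuration files actually exist there:

    ```shell
    #!/bin/sh
    # Sketch: verify the Hadoop client configs Spark needs are present.
    # The path matches HADOOP_CONF_DIR as set in spark-env.sh above.
    HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
    for f in core-site.xml hdfs-site.xml yarn-site.xml; do
      if [ -f "$HADOOP_CONF_DIR/$f" ]; then
        echo "$f: found"
      else
        echo "$f: MISSING"
      fi
    done
    ```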

    3. scp to the other nodes

    4. Test the installation

    /opt/hadoop/sbin/start-all.sh
    
    /opt/spark/bin/spark-submit \
        --class org.apache.spark.examples.SparkPi \
        --master yarn \
        --deploy-mode client \
        --driver-memory 1g \
        --executor-memory 1g \
        --executor-cores 2 \
        /opt/spark/examples/jars/spark-examples*.jar \
        10

    If it prints an approximation of Pi, the installation is working.
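    On success, the driver's stdout contains a line of the form "Pi is roughly 3.14...", buried in fairly verbose log output; grepping for it is a quick check. The sample log text below is a hypothetical illustration, not captured output:

    ```shell
    # Sketch: extract the result line from spark-submit's stdout.
    # The log text here is a made-up sample for illustration.
    printf 'INFO DAGScheduler: Job 0 finished\nPi is roughly 3.1415551415551415\n' \
      | grep '^Pi is roughly'
    # -> Pi is roughly 3.1415551415551415
    ```

    In practice you would pipe the real spark-submit invocation into the same grep.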

    With Java 1.8, Spark will fail at runtime under the default configuration; either switch to Java 1.7 or see: http://www.cnblogs.com/pojishou/p/6358588.html
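    One commonly reported cause of this kind of failure (an assumption here, since the linked post is not reproduced in this article) is YARN killing containers for exceeding the virtual-memory limit, which Java 8 processes hit more readily. The widely cited workaround is to relax the check in yarn-site.xml on every NodeManager and restart YARN; a sketch of that change:

    ```xml
    <!-- Sketch: relax YARN's virtual-memory check (add inside <configuration>
         in yarn-site.xml). This is the commonly cited workaround, not
         necessarily the exact fix described in the linked post. -->
    <property>
      <name>yarn.nodemanager.vmem-check-enabled</name>
      <value>false</value>
    </property>
    <property>
      <name>yarn.nodemanager.vmem-pmem-ratio</name>
      <value>4</value>
    </property>
    ```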

  • Original post: https://www.cnblogs.com/pojishou/p/6366570.html