• Hadoop + Spark environment setup


    Partially based on: https://blog.csdn.net/l1028386804/article/details/80516740

    Change the IP to a static IP:

    Since these are virtual machines, my VM IPs and parameters are as follows (the master is 192.168.242.139, and the other two machines are 192.168.242.140 and 192.168.242.141):

    UTE=yes
    IPV4_FAILURE_FATAL=no
    NAME=eno16777736
    UUID=076d5584-360f-4e57-b203-db6ff98e6341
    ONBOOT=yes
    HWADDR=00:0C:29:DF:37:9F
    IPADDR=192.168.242.141
    NETMASK=255.255.255.0
    GATEWAY0=192.168.242.2
    DNS1=8.8.8.8
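
    These settings go in the network interface configuration file; a minimal sketch of how to apply them (assuming CentOS 7 and the interface name shown above):

    #edit the interface configuration file (the file name matches the NAME= value above)
    vim /etc/sysconfig/network-scripts/ifcfg-eno16777736
    #make sure BOOTPROTO=static is set along with the address values above, then restart networking
    systemctl restart network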
    

    I. Edit the hosts file

    vim /etc/hosts
    

    I am using three machines; add the following to the existing file:

    ip1 master worker0 namenode
    ip2 worker1 datanode1
    ip3 worker2 datanode2
    

    Here ipN stands for an available cluster IP: ip1 is the master node, and ip2 and ip3 are the worker nodes.

    After configuring, try ping worker1 and ping worker2 to check connectivity.
    My configuration is as follows, added at the top of /etc/hosts:

    192.168.242.140 worker1
    192.168.242.141 worker2
    192.168.242.139 master
    

    Note
    There is a pitfall here. In my virtual machines (I have no physical machines to work with), if you do not change the hostname in /etc/hostname, then after starting Hadoop every entry under Node HTTP Address on the Nodes page shows localhost:port, which is wrong. So you must also change each node's hostname to the name configured in the hosts file above.

    • 1. Matching the hosts file above, run the following on the master node:
    vim /etc/hostname
    

    Delete the original contents and replace them with master

    • 2. Matching the hosts file above, run the following on the worker node worker1:
    vim /etc/hostname
    

    Delete the original contents and replace them with worker1

    • 3. Matching the hosts file above, run the following on the worker node worker2:
    vim /etc/hostname
    

    Delete the original contents and replace them with worker2
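
    On CentOS 7 the same change can be made with hostnamectl instead of editing the file by hand; a minimal sketch (assuming a systemd-based system), followed by a quick check that the names resolve:

    #run on each node with its own name (master / worker1 / worker2)
    hostnamectl set-hostname master
    #verify that the hostnames from /etc/hosts resolve and the nodes are reachable
    ping -c 3 worker1
    ping -c 3 worker2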

    II. Disable the firewall

    Takes effect immediately, but is lost after a reboot:

    service iptables stop
    

    Permanent, takes effect after a reboot:

    chkconfig iptables off
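
    On CentOS 7 the default firewall is firewalld rather than the iptables service; if that is what your machines run, the equivalent commands are (a sketch, assuming firewalld is installed):

    #stop firewalld now and keep it from starting at boot
    systemctl stop firewalld
    systemctl disable firewalld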
    

    III. SSH mutual trust (passwordless login)

    Note that I am configuring the root user here, so the home directory below is /root.
    If you are configuring a user xxxx, the home directory would be /home/xxxxx/ instead.

    #Run the following commands on the master node:
    ssh-keygen -t rsa -P '' #press Enter at every prompt until the key pair is generated
    
    scp /root/.ssh/id_rsa.pub root@worker1:/root/.ssh/id_rsa.pub.master #copy id_rsa.pub from the master to the worker host, renaming it to id_rsa.pub.master
    scp /root/.ssh/id_rsa.pub root@worker2:/root/.ssh/id_rsa.pub.master #same as above; from here on workerN stands for worker1 and worker2
    
    scp /etc/hosts root@workerN:/etc/hosts   #unify the hosts file so the machines can resolve each other by hostname
    
    #Run the following command on the master host:
    cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys #master host
    
    #Run the following command on the workerN hosts (execute on both worker1 and worker2):
    cat /root/.ssh/id_rsa.pub.master >> /root/.ssh/authorized_keys #workerN hosts
    

    With this, the master host can log in to the other hosts without a password, so the startup scripts run on the master and the scp commands no longer require typing a password.
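
    A quick way to verify the setup (a sketch; the chmod step is only needed if sshd rejects the key because of loose permissions):

    #on the master: each command should print the worker's hostname without asking for a password
    ssh worker1 hostname
    ssh worker2 hostname
    #if a password is still requested, tighten the permissions on that worker:
    chmod 700 /root/.ssh && chmod 600 /root/.ssh/authorized_keys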

    IV. Install the base environment (Java and Scala)

    • 1. Java 1.8 setup:
      Configure the Java environment on the master
    #download the JDK 1.8 rpm package
    wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u112-b15/jdk-8u112-linux-x64.rpm 
    rpm -ivh jdk-8u112-linux-x64.rpm 
    
    #add JAVA_HOME
    vim /etc/profile
    
    #add the following lines:
    #Java home
    export JAVA_HOME=/usr/java/jdk1.8.0_112/
    
    #reload the configuration:
    source /etc/profile #a reboot would also work
    

    Configure the Java environment on the workerN hosts: from the master, copy the jdk-8u112-linux-x64.rpm file to the /root directory of worker1 and worker2

    #copy with scp
    scp jdk-8u112-linux-x64.rpm root@worker1:/root
    scp jdk-8u112-linux-x64.rpm root@worker2:/root
    #the remaining steps are the same as on the master: in the /root directory of worker1 and worker2, install the rpm and set the environment variables
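
    To confirm the installation, a quick check on master, worker1 and worker2:

    #should report java version "1.8.0_112"
    java -version
    echo $JAVA_HOME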
    
    • 2. Scala 2.12.2 setup:
      Master node:
    #download the Scala package:
    wget -O "scala-2.12.2.rpm" "https://downloads.lightbend.com/scala/2.12.2/scala-2.12.2.rpm"
    #if the download fails, get the package manually from https://www.scala-lang.org/download/2.12.2.html
    
    #install the rpm package:
    rpm -ivh scala-2.12.2.rpm
    
    #add SCALA_HOME
    vim /etc/profile
    
    #add the following lines:
    #Scala Home
    export SCALA_HOME=/usr/share/scala
    
    #reload the configuration
    source /etc/profile
    

    WorkerN nodes:

    #copy from the master with scp
    scp scala-2.12.2.rpm root@worker1:/root
    scp scala-2.12.2.rpm root@worker2:/root
    #the remaining steps are the same as on the master node
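
    To confirm, a quick check on each node:

    #should report Scala code runner version 2.12.2
    scala -version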
    

    V. Hadoop 2.7.3 fully distributed setup

    MASTER node:

    1. Download the binary package:

    wget http://www-eu.apache.org/dist/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
    

    2. Extract it and move it to the target directory

    My habit is to put software under /opt:

    tar -xvf hadoop-2.7.3.tar.gz
    mv hadoop-2.7.3 /opt
    

    3. Edit the configuration files:

    (1)/etc/profile:

    Add the following:

    #hadoop environment
    export HADOOP_HOME=/opt/hadoop-2.7.3/
    export PATH="$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH"
    export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
    export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
    
    • Reload the configuration
    source /etc/profile
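
    With the PATH updated, a quick sanity check that the hadoop command resolves:

    #should print "Hadoop 2.7.3" plus build information
    hadoop version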
    

    (2)$HADOOP_HOME/etc/hadoop/hadoop-env.sh

    Set JAVA_HOME as follows:
    Following the steps above, that is: vim /opt/hadoop-2.7.3/etc/hadoop/hadoop-env.sh

    export JAVA_HOME=/usr/java/jdk1.8.0_112/
    

    (3)$HADOOP_HOME/etc/hadoop/slaves

    Following the steps above, that is: vim /opt/hadoop-2.7.3/etc/hadoop/slaves

    worker1
    worker2
    

    (4)$HADOOP_HOME/etc/hadoop/core-site.xml

    vim /opt/hadoop-2.7.3/etc/hadoop/core-site.xml

    <configuration>
            <property>
                    <name>fs.defaultFS</name>
                    <value>hdfs://master:8020</value>
            </property>
            <property>
             <name>io.file.buffer.size</name>
             <value>131072</value>
           </property>
            <property>
                    <name>hadoop.tmp.dir</name>
                    <value>/opt/hadoop-2.7.3/tmp</value>
            </property>
    </configuration>
    

    (5)$HADOOP_HOME/etc/hadoop/hdfs-site.xml

    vim /opt/hadoop-2.7.3/etc/hadoop/hdfs-site.xml

    <configuration>
        <property>
          <name>dfs.namenode.secondary.http-address</name>
          <value>master:50090</value>
        </property>
        <property>
          <name>dfs.replication</name>
          <value>2</value>
        </property>
        <property>
          <name>dfs.namenode.name.dir</name>
          <value>file:/opt/hadoop-2.7.3/hdfs/name</value>
        </property>
        <property>
          <name>dfs.datanode.data.dir</name>
          <value>file:/opt/hadoop-2.7.3/tmp</value>
        </property>
    </configuration>
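
    The name and data directories referenced above do not exist yet. Hadoop normally creates them on format/startup, but creating them up front avoids surprises (a sketch matching the paths in the config):

    mkdir -p /opt/hadoop-2.7.3/hdfs/name
    mkdir -p /opt/hadoop-2.7.3/tmp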
    

    (6)$HADOOP_HOME/etc/hadoop/mapred-site.xml

    Copy the template to create the xml file:

    cp /opt/hadoop-2.7.3/etc/hadoop/mapred-site.xml.template /opt/hadoop-2.7.3/etc/hadoop/mapred-site.xml
    

    vim /opt/hadoop-2.7.3/etc/hadoop/mapred-site.xml
    Set the contents to:

    <configuration>
     <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
      <property>
              <name>mapreduce.jobhistory.address</name>
              <value>master:10020</value>
      </property>
      <property>
              <name>mapreduce.jobhistory.webapp.address</name>
              <value>master:19888</value>
      </property>
    </configuration>
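
    Note that start-all.sh does not start the JobHistory server configured above; if you want its web UI on port 19888, start it separately on the master once the cluster is up (a sketch):

    /opt/hadoop-2.7.3/sbin/mr-jobhistory-daemon.sh start historyserver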
    

    (7)$HADOOP_HOME/etc/hadoop/yarn-site.xml

    vim /opt/hadoop-2.7.3/etc/hadoop/yarn-site.xml
    Set the contents to:

    <configuration>
    <!-- Site specific YARN configuration properties -->
        <property>
          <name>yarn.nodemanager.aux-services</name>
          <value>mapreduce_shuffle</value>
        </property>
        <property>
          <name>yarn.resourcemanager.address</name>
          <value>master:8032</value>
        </property>
        <property>
          <name>yarn.resourcemanager.scheduler.address</name>
          <value>master:8030</value>
        </property>
        <property>
          <name>yarn.resourcemanager.resource-tracker.address</name>
          <value>master:8031</value>
        </property>
        <property>
          <name>yarn.resourcemanager.admin.address</name>
          <value>master:8033</value>
        </property>
        <property>
          <name>yarn.resourcemanager.webapp.address</name>
          <value>master:8088</value>
        </property>
    </configuration>
    

    At this point the Hadoop setup on the master node is complete.
    Before starting it, we need to format the NameNode:

    hadoop namenode -format
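
    In Hadoop 2.x this command is deprecated in favour of the hdfs front end; either form works, but run the format only once (re-formatting an existing cluster requires clearing the name and data directories first, or the DataNodes will refuse to start due to mismatched cluster IDs):

    #equivalent, non-deprecated form
    hdfs namenode -format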
    

    4. WorkerN nodes:

    (1)Copy the hadoop directory from the master node to the workers:

    scp -r /opt/hadoop-2.7.3 root@worker1:/opt
    scp -r /opt/hadoop-2.7.3 root@worker2:/opt 
    

    (2)Edit /etc/profile:

    Same as on the master; add the following on both worker1 and worker2:

    #hadoop environment
    export HADOOP_HOME=/opt/hadoop-2.7.3/
    export PATH="$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH"
    export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
    export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
    
    • Reload the configuration
    source /etc/profile
    

    VI. Spark 2.1.0 fully distributed setup:

    MASTER node:

    1. Download the package:

    wget -O "spark-2.1.0-bin-hadoop2.7.tgz" "http://d3kbcqa49mib13.cloudfront.net/spark-2.1.0-bin-hadoop2.7.tgz"
    

    2. Extract it and move it to the target directory:

    tar -xvf spark-2.1.0-bin-hadoop2.7.tgz
    mv spark-2.1.0-bin-hadoop2.7 /opt
    

    3. Edit the configuration files:

    (1)/etc/profile

    vim /etc/profile
    
    #Spark environment
    export SPARK_HOME=/opt/spark-2.1.0-bin-hadoop2.7/
    export PATH="$SPARK_HOME/bin:$PATH"
    
    • Reload the configuration
    source /etc/profile
    

    (2)$SPARK_HOME/conf/spark-env.sh

    cd /opt/spark-2.1.0-bin-hadoop2.7/conf
    
    cp spark-env.sh.template spark-env.sh
    

    Configure it as follows:

    vim spark-env.sh
    
    #add the following configuration:
    export SCALA_HOME=/usr/share/scala
    export JAVA_HOME=/usr/java/jdk1.8.0_112/
    export SPARK_MASTER_IP=master
    export SPARK_WORKER_MEMORY=1g
    export HADOOP_CONF_DIR=/opt/hadoop-2.7.3/etc/hadoop
    

    (3)$SPARK_HOME/conf/slaves

    cd /opt/spark-2.1.0-bin-hadoop2.7/conf
    
    cp slaves.template slaves
    
    vim slaves
    

    Set the contents as follows:

    master
    worker1
    worker2
    

    (4)WorkerN nodes:

    Copy the configured spark directory to the workerN nodes

    scp -r /opt/spark-2.1.0-bin-hadoop2.7 root@worker1:/opt
    scp -r /opt/spark-2.1.0-bin-hadoop2.7 root@worker2:/opt 
    

    Edit /etc/profile and add the Spark settings, just as on the MASTER node.

    On both worker1 and worker2, edit /etc/profile and add:

    vim /etc/profile
    
    #Spark environment
    export SPARK_HOME=/opt/spark-2.1.0-bin-hadoop2.7/
    export PATH="$SPARK_HOME/bin:$PATH"
    
    • Reload the configuration
    source /etc/profile
    

    VII. Scripts to start and stop the cluster

    Create the cluster start script start-cluster.sh as follows:

    #!/bin/bash
    echo -e "\033[31m ========Start The Cluster======== \033[0m"
    echo -e "\033[31m Starting Hadoop Now !!! \033[0m"
    /opt/hadoop-2.7.3/sbin/start-all.sh
    echo -e "\033[31m Starting Spark Now !!! \033[0m"
    /opt/spark-2.1.0-bin-hadoop2.7/sbin/start-all.sh
    echo -e "\033[31m The Result Of The Command \"jps\" :  \033[0m"
    jps
    echo -e "\033[31m ========END======== \033[0m"
    

    Start the cluster:

    bash start-cluster.sh
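
    Once the script finishes, the web UIs are a quick sanity check (assuming the default ports for Hadoop 2.7.3 and Spark 2.1.0, plus the 8088 value set in yarn-site.xml):

    #HDFS NameNode UI
    curl -s -o /dev/null -w "%{http_code}\n" http://master:50070
    #YARN ResourceManager UI
    curl -s -o /dev/null -w "%{http_code}\n" http://master:8088
    #Spark standalone master UI
    curl -s -o /dev/null -w "%{http_code}\n" http://master:8080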
    

    Create the cluster stop script stop-cluster.sh as follows:

    #!/bin/bash
    echo -e "\033[31m ===== Stopping The Cluster ====== \033[0m"
    echo -e "\033[31m Stopping Spark Now !!! \033[0m"
    /opt/spark-2.1.0-bin-hadoop2.7/sbin/stop-all.sh
    echo -e "\033[31m Stopping Hadoop Now !!! \033[0m"
    /opt/hadoop-2.7.3/sbin/stop-all.sh
    echo -e "\033[31m The Result Of The Command \"jps\" :  \033[0m"
    jps
    echo -e "\033[31m ======END======== \033[0m"
    

    VIII. Test the cluster:

    Here I will just use the simplest and most common test: WordCount.

    1. Test Hadoop

    Create a wordcount.txt file:

    vim wordcount.txt
    

    Enter:

    Hello hadoop
    hello spark
    hello bigdata
    

    Then run the following commands:

    hadoop fs -mkdir -p /Hadoop/Input
    hadoop fs -put wordcount.txt /Hadoop/Input
    hadoop jar /opt/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /Hadoop/Input /Hadoop/Output
    

    After the MapReduce job finishes, check the result:

    hadoop fs -cat /Hadoop/Output/*
    

    The output:

    Hello	1
    bigdata	1
    hadoop	1
    hello	2
    spark	1
    

    This proves the Hadoop cluster was set up successfully!

    2. Test Spark

    To keep things simple we use spark-shell here for a quick WordCount test.
    Since the source file was already uploaded to HDFS while testing Hadoop, we can use it directly here.

    spark-shell
    

    Then enter the following:

    val file=sc.textFile("hdfs://master:8020/Hadoop/Input/wordcount.txt")
    val rdd = file.flatMap(line => line.split(" ")).map(word => (word,1)).reduceByKey(_+_)
    rdd.collect()
    rdd.foreach(println)
    

    In other words, enter it line by line, like this:

    scala> val file=sc.textFile("hdfs://master:8020/Hadoop/Input/wordcount.txt")
    file: org.apache.spark.rdd.RDD[String] = hdfs://master:8020/Hadoop/Input/wordcount.txt MapPartitionsRDD[11] at textFile at <console>:24
    
    scala> val rdd = file.flatMap(line => line.split(" ")).map(word => (word,1)).reduceByKey(_+_)
    rdd: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[14] at reduceByKey at <console>:26
    
    scala> rdd.collect()
    res2: Array[(String, Int)] = Array((Hello,1), (hello,2), (bigdata,1), (spark,1), (hadoop,1))
    
    scala> rdd.foreach(println)
    (spark,1)
    (hadoop,1)
    (Hello,1)
    (hello,2)
    (bigdata,1)
    
    

    With this, Spark works as well. To exit the shell, use the following command:

    :quit
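
    As an extra check that Spark can also run jobs on YARN, the bundled SparkPi example can be submitted (a sketch; the examples jar path assumes the stock spark-2.1.0-bin-hadoop2.7 distribution):

    #look for a line like "Pi is roughly 3.14..." in the output
    spark-submit --class org.apache.spark.examples.SparkPi \
      --master yarn \
      /opt/spark-2.1.0-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.1.0.jar 100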
    

    After both the Hadoop and Spark tests succeed, running the jps command on the master and worker nodes shows the following:

    • 1. Master node
    10265 SecondaryNameNode
    10010 NameNode
    13146 Jps
    10476 ResourceManager
    
    • 2. Worker node 1
    7130 NodeManager
    8810 Jps
    6955 DataNode
    
    • 3. Worker node 2
    4358 DataNode
    5655 Jps
    3819 NodeManager
    

    Notice that the master node does not run a DataNode.

  • Original article: https://www.cnblogs.com/zh672903/p/14113514.html