hadoop-2.7.1 QJM-Based High-Availability Installation and Configuration


    1. Modify hostnames and the hosts file

    10.205.22.185 nn1 (primary)  roles: namenode, resourcemanager, datanode, JournalNode, zk, zkfc (hive and sqoop optional)
    10.205.22.186 nn2 (standby)  roles: namenode, resourcemanager, datanode, JournalNode, zk, zkfc
    10.205.22.187 dn1            roles: datanode, JournalNode, zk

    1.1 Configure passwordless SSH login

    The primary node must be able to log in to every other node over SSH without a password; verify with:

    ssh nn1
    ssh nn2
    ssh dn1
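
    If the keys are not yet in place, here is a minimal sketch (assuming the root user and RSA keys, which also matches the dfs.ha.fencing.ssh.private-key-files path used later in hdfs-site.xml):

    # On nn1: generate a key pair (accept the defaults, empty passphrase)
    ssh-keygen -t rsa
    # Copy the public key to every node, including nn1 itself
    ssh-copy-id root@nn1
    ssh-copy-id root@nn2
    ssh-copy-id root@dn1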

    2. Install JDK 1.8 and ZooKeeper (hive and sqoop are optional, install as needed)

    2.1 Edit the profile file and configure the environment variables

    export JAVA_HOME=/usr/java/jdk1.8.0_65
    export JRE_HOME=/usr/java/jdk1.8.0_65/jre
    export HADOOP_HOME=/app/hadoop-2.7.1
    export HIVE_HOME=/app/hive
    export SQOOP_HOME=/app/sqoop
    export ZOOKEEPER_HOME=/app/zookeeper-3.4.6
    export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin:$HIVE_HOME/bin:$SQOOP_HOME/bin:$MAVEN_HOME/bin
    export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
    ulimit -SHn 65536
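
    Reload the profile so the new variables take effect in the current shell (assuming they were added to /etc/profile):

    source /etc/profile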

    2.2 Edit the ZooKeeper configuration file zoo.cfg

    Add:

    dataDir=/home/zookeeper
    server.1=nn1:2888:3888
    server.2=nn2:2888:3888
    server.3=dn1:2888:3888
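
    For reference, a complete minimal zoo.cfg might look like the following sketch (tickTime, initLimit, syncLimit, and clientPort are the usual defaults from zoo_sample.cfg; clientPort=2181 must match the ZooKeeper addresses used in core-site.xml and yarn-site.xml below):

    tickTime=2000
    initLimit=10
    syncLimit=5
    clientPort=2181
    dataDir=/home/zookeeper
    server.1=nn1:2888:3888
    server.2=nn2:2888:3888
    server.3=dn1:2888:3888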

    2.3 Set the ZooKeeper server ID (myid)

    /home/zookeeper/myid
    1  # set myid to 1 on the nn1 server
    2  # set myid to 2 on the nn2 server
    3  # set myid to 3 on the dn1 server
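
    For example, on nn1 the file can be created like this (run the matching command on each node):

    mkdir -p /home/zookeeper
    echo 1 > /home/zookeeper/myid    # use 2 on nn2 and 3 on dn1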

    3. Install hadoop-2.7.1 and edit the configuration files

    Create the required directories

    mkdir -p /home/hadoop/tmp
    mkdir -p /home/hadoop/hdfs/data
    mkdir -p /home/hadoop/journal
    mkdir -p /home/hadoop/name

    Edit the slaves file

    nn1
    nn2
    dn1

    Edit the hadoop-env.sh file

    export JAVA_HOME=/usr/java/jdk1.8.0_65
    export HADOOP_LOG_DIR=/home/hadoop/log/hadoop

    Edit the Hadoop logging configuration file log4j.properties

    hadoop.log.dir=/home/hadoop/log/hadoop

    Set the YARN log directory in yarn-env.sh

    YARN_LOG_DIR="/home/hadoop/log/yarn"

    3.1 Configure hdfs-site.xml

    <configuration>
            <property>
                   <name>dfs.nameservices</name>
                   <value>masters</value>
            </property>
            <property>
                   <name>dfs.ha.namenodes.masters</name>
                   <value>nn1,nn2</value>
            </property>
            <property>
                   <name>dfs.namenode.rpc-address.masters.nn1</name>
                   <value>nn1:9000</value>
            </property>
            <property>
                   <name>dfs.namenode.http-address.masters.nn1</name>
                   <value>nn1:50070</value>
            </property>
            <property>
                   <name>dfs.namenode.rpc-address.masters.nn2</name>
                   <value>nn2:9000</value>
            </property>
            <property>
                   <name>dfs.namenode.http-address.masters.nn2</name>
                   <value>nn2:50070</value>
            </property>
            <property>
                   <name>dfs.datanode.data.dir</name>
                   <value>file:/home/hadoop/hdfs/data</value>
            </property>
            <property>
                   <name>dfs.replication</name>
                   <value>2</value>
            </property>
            <property>
                   <name>dfs.namenode.name.dir</name>
                   <value>file:/home/hadoop/name</value>
            </property>
            <property>
                   <name>dfs.namenode.shared.edits.dir</name>
                   <value>qjournal://nn1:8485;nn2:8485;dn1:8485/masters</value>
            </property>
            <property>
                   <name>dfs.journalnode.edits.dir</name>
                   <value>/home/hadoop/journal</value>
            </property>
            <property>
                   <name>dfs.ha.automatic-failover.enabled</name>
                   <value>true</value>
            </property>
            <property>
                   <name>dfs.client.failover.proxy.provider.masters</name>
                   <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
            </property>
            <property>
                   <name>dfs.ha.fencing.methods</name>
                   <value>sshfence</value>
            </property>
            <property>
                <name>dfs.ha.fencing.ssh.private-key-files</name>
                   <value>/root/.ssh/id_rsa</value>
            </property>
            <property>
                   <name>dfs.ha.fencing.ssh.connect-timeout</name>
                   <value>30000</value>
            </property>
    </configuration>

    3.2 Configure core-site.xml

    <configuration>
       <property>
           <name>fs.defaultFS</name>
           <value>hdfs://masters</value>
       </property>
       <property>
           <name>hadoop.tmp.dir</name>
           <value>/home/hadoop/tmp</value>
       </property>
       <property>
           <name>ha.zookeeper.quorum</name>
           <value>nn1:2181,nn2:2181,dn1:2181</value>
       </property>
    
       <property>
           <name>io.compression.codecs</name>
           <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec,org.apache.hadoop.io.compress.BZip2Codec</value>
       </property>
       <property>
           <name>io.compression.codec.lzo.class</name>
           <value>com.hadoop.compression.lzo.LzoCodec</value>
       </property>
    </configuration>

    3.3 Configure yarn-site.xml

    <configuration>
        <property>
           <name>yarn.resourcemanager.ha.enabled</name>
           <value>true</value>
        </property>
        <property>
           <name>yarn.resourcemanager.cluster-id</name>
           <value>rm-cluster</value>
        </property>
        <property>
           <name>yarn.resourcemanager.ha.rm-ids</name>
           <value>rm1,rm2</value>
        </property>
        <property>
           <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
           <value>true</value>
        </property>
        <property>
           <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
           <value>true</value>
        </property>
        <property>
           <name>yarn.resourcemanager.hostname.rm1</name>
           <value>nn1</value>
        </property>
        <property>
           <name>yarn.resourcemanager.hostname.rm2</name>
           <value>nn2</value>
       </property>
        <property>
           <name>yarn.resourcemanager.store.class</name>
           <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
        </property>
        <property>
           <name>yarn.resourcemanager.zk-address</name>
           <value>nn1:2181,nn2:2181,dn1:2181</value>
        </property>
        <property>
           <name>yarn.resourcemanager.scheduler.address.rm1</name>
           <value>nn1:8030</value>
        </property>
        <property>
           <name>yarn.resourcemanager.scheduler.address.rm2</name>
           <value>nn2:8030</value>
        </property>
        <property>
           <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
           <value>nn1:8031</value>
        </property>
        <property>
           <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
           <value>nn2:8031</value>
        </property>
        <property>
           <name>yarn.resourcemanager.address.rm1</name>
           <value>nn1:8032</value>
        </property>
        <property>
           <name>yarn.resourcemanager.address.rm2</name>
           <value>nn2:8032</value>
        </property>
        <property>
           <name>yarn.resourcemanager.admin.address.rm1</name>
           <value>nn1:8033</value>
        </property>
        <property>
           <name>yarn.resourcemanager.admin.address.rm2</name>
           <value>nn2:8033</value>
        </property>
        <property>
           <name>yarn.resourcemanager.webapp.address.rm1</name>
           <value>nn1:8088</value>
        </property>
        <property>
           <name>yarn.resourcemanager.webapp.address.rm2</name>
           <value>nn2:8088</value>
        </property>
      <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>/home/hadoop/log/mapred</value>
      </property>
        <property>
           <name>yarn.nodemanager.aux-services</name>
           <value>mapreduce_shuffle</value>
        </property>
        <property>
           <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
           <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <property>
           <name>yarn.client.failover-proxy-provider</name>
           <value>org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider</value>
        </property>
    </configuration>

    3.4 Configure mapred-site.xml

    <configuration>
       <property>
           <name>mapreduce.framework.name</name>
           <value>yarn</value>
       </property>
       <property>
           <name>mapreduce.jobhistory.address</name>
           <value>nn1:10020</value>
       </property>
       <property>
           <name>mapreduce.jobhistory.webapp.address</name>
           <value>nn2:19888</value>
       </property>
    
       <property>
           <name>mapred.compress.map.output</name>
           <value>true</value>
       </property>
       <property>
           <name>mapred.map.output.compression.codec</name>
           <value>com.hadoop.compression.lzo.LzoCodec</value>
       </property>
       <property>
           <name>mapred.child.env</name>
           <value>LD_LIBRARY_PATH=/usr/local/lzo/lib</value>
       </property>
    </configuration>

    3.5 Sync the hadoop installation to every node and apply the configuration changes above on each of them

    4. Start the services

    4.1 Start ZooKeeper on every node and check its status

    zkServer.sh start
    zkServer.sh status
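
    One node should report Mode: leader and the other two Mode: follower, roughly like this (sample output for ZooKeeper 3.4.6; the exact wording may differ):

    JMX enabled by default
    Using config: /app/zookeeper-3.4.6/bin/../conf/zoo.cfg
    Mode: follower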

    On the primary node, format the HA state in ZooKeeper

    hdfs zkfc -formatZK
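
    To verify, ZooKeeper should now contain a /hadoop-ha znode (a quick check, assuming clientPort 2181):

    zkCli.sh -server nn1:2181 ls /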


    4.2 Start the JournalNode daemon on the three journalnode nodes

    hadoop-daemon.sh start journalnode

    4.3 Format HDFS on the primary namenode node

    hdfs namenode -format    # the older form 'hadoop namenode -format' still works but is deprecated


    4.4 Start the namenode process on the primary namenode node

    hadoop-daemon.sh start namenode


    4.5 On the standby namenode node, run the following to format its namenode directory and pull the metadata over from the primary namenode

    hdfs namenode -bootstrapStandby
    hadoop-daemon.sh start namenode          # start the namenode
    yarn-daemon.sh start resourcemanager     # start the resourcemanager

    4.6 Start the remaining services

    start-dfs.sh
    start-yarn.sh
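
    Once everything is up, jps on each node should roughly match the role layout from section 1 (a sketch; exact process lists may differ slightly):

    jps
    # nn1 / nn2: NameNode, DataNode, JournalNode, DFSZKFailoverController,
    #            ResourceManager, NodeManager, QuorumPeerMain
    # dn1:       DataNode, JournalNode, NodeManager, QuorumPeerMain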

    4.7 Check the high-availability state

    hdfs haadmin -getServiceState nn1     # check the namenode state (likewise for nn2)
    yarn rmadmin -getServiceState rm1     # check the resourcemanager state (likewise for rm2)

    4.8 Check the status in the web UI

    http://nn1:50070
    http://nn1:8088
Original source: https://www.cnblogs.com/wsl222000/p/5148523.html