• Hadoop 2.7.4 HDFS + YARN HA Deployment


    Lab Environment

    Hostname            IP Address      Roles
    sht-sgmhadoopnn-01  172.16.101.55   namenode, resourcemanager
    sht-sgmhadoopnn-02  172.16.101.56   namenode, resourcemanager
    sht-sgmhadoopdn-01  172.16.101.58   datanode, nodemanager, journalnode, zookeeper
    sht-sgmhadoopdn-02  172.16.101.59   datanode, nodemanager, journalnode, zookeeper
    sht-sgmhadoopdn-03  172.16.101.60   datanode, nodemanager, journalnode, zookeeper

    Install directories (identical on all nodes):

    /usr/local/hadoop (symlink) -> /usr/local/hadoop-2.7.4
    /usr/local/zookeeper (symlink) -> /usr/local/zookeeper-3.4.9

    Install user (identical on all nodes): root

    Preparation

    Software

    • Apache Hadoop

    http://archive.apache.org/dist/hadoop/common/hadoop-2.7.4/hadoop-2.7.4.tar.gz

    • Apache Zookeeper

    https://archive.apache.org/dist/zookeeper/zookeeper-3.4.9/zookeeper-3.4.9.tar.gz

    • Java

    https://download.oracle.com/otn/java/jdk/8u45-b14/jdk-8u45-linux-x64.tar.gz

    System packages

    • psmisc

    psmisc provides the fuser command used by the sshfence fencing method; without it, HDFS automatic failover fails with errors like the following:

    2019-03-26 22:48:29,200 WARN org.apache.hadoop.ha.SshFenceByTcpPort: PATH=$PATH:/sbin:/usr/sbin fuser -v -k -n tcp 8020 via ssh: bash: fuser: command not found
    2019-03-26 22:48:29,201 INFO org.apache.hadoop.ha.SshFenceByTcpPort: rc: 127
    2019-03-26 22:48:29,201 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: Disconnecting from sht-sgmhadoopnn-02 port 22
    2019-03-26 22:48:29,201 WARN org.apache.hadoop.ha.NodeFencer: Fencing method org.apache.hadoop.ha.SshFenceByTcpPort(null) was unsuccessful.
    2019-03-26 22:48:29,201 ERROR org.apache.hadoop.ha.NodeFencer: Unable to fence service by any configured method.
    2019-03-26 22:48:29,201 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: Caught an exception, leaving main loop due to Socket closed
    2019-03-26 22:48:29,201 WARN org.apache.hadoop.ha.ActiveStandbyElector: Exception handling the winning of election
    java.lang.RuntimeException: Unable to fence NameNode at sht-sgmhadoopnn-02/172.16.101.56:8020
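Since the sshfence method shells out to fuser, install psmisc on every node before enabling automatic failover. A minimal sketch, assuming CentOS/RHEL (yum) and root SSH access from the node it runs on:

```shell
# Install psmisc (provides the `fuser` binary used by sshfence) on every node.
# Assumes yum (CentOS/RHEL); substitute your distro's package manager if needed.
nodes="sht-sgmhadoopnn-01 sht-sgmhadoopnn-02 sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03"
for h in $nodes; do
  ssh -o ConnectTimeout=5 root@"$h" 'yum install -y psmisc && command -v fuser' \
    || echo "WARN: could not install psmisc on $h"
done
```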

    Part 1: Configure mutual hostname/IP resolution; identical on all nodes

    # cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    172.16.101.55    sht-sgmhadoopnn-01
    172.16.101.56    sht-sgmhadoopnn-02
    172.16.101.58    sht-sgmhadoopdn-01
    172.16.101.59    sht-sgmhadoopdn-02
    172.16.101.60    sht-sgmhadoopdn-03

    Part 2: Configure passwordless SSH login between hosts

    1. Generate the key pair (run on every node)

    # ssh-keygen -t rsa

    2. Copy the public key to all other nodes (run on every node)

    # ssh-copy-id root@sht-sgmhadoopnn-01
    # ssh-copy-id root@sht-sgmhadoopnn-02
    # ssh-copy-id root@sht-sgmhadoopdn-01
    # ssh-copy-id root@sht-sgmhadoopdn-02
    # ssh-copy-id root@sht-sgmhadoopdn-03

    3. Test passwordless login (run on every node)

    # ssh root@sht-sgmhadoopnn-01 date
    # ssh root@sht-sgmhadoopnn-02 date
    # ssh root@sht-sgmhadoopdn-01 date
    # ssh root@sht-sgmhadoopdn-02 date
    # ssh root@sht-sgmhadoopdn-03 date
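Steps 2 and 3 can be collapsed into a single loop; a sketch, using BatchMode so the verification fails fast instead of prompting if the key was not installed:

```shell
# Push the key to every node, then verify passwordless login in the same pass.
nodes="sht-sgmhadoopnn-01 sht-sgmhadoopnn-02 sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03"
for h in $nodes; do
  ssh-copy-id root@"$h" || echo "WARN: key copy to $h failed"
  ssh -o BatchMode=yes root@"$h" date || echo "WARN: passwordless login to $h failed"
done
```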

    Part 3: Configure the .bash_profile environment variables; run on every node

    # .bash_profile
    
    # Get the aliases and functions
    if [ -f ~/.bashrc ]; then
    . ~/.bashrc
    fi
    
    # User specific environment and startup programs
    
    ZOOKEEPER_HOME=/usr/local/zookeeper
    JAVA_HOME=/usr/java/jdk1.8.0_45
    JRE_HOME=$JAVA_HOME/jre
    HADOOP_HOME=/usr/local/hadoop
    CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$JRE_HOME/lib
    PATH=$ZOOKEEPER_HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH:$HOME/bin
    export PATH JAVA_HOME JRE_HOME CLASSPATH HADOOP_HOME ZOOKEEPER_HOME

    Part 4: Install Java; identical on all nodes

    # which java
    /usr/java/jdk1.8.0_45/bin/java

    Part 5: Install the ZooKeeper cluster

    1. Configure zoo.cfg; identical on all ZooKeeper nodes

    # cat /usr/local/zookeeper/conf/zoo.cfg 
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/usr/local/zookeeper/data
    clientPort=2181
    server.1=sht-sgmhadoopdn-01:2888:3888
    server.2=sht-sgmhadoopdn-02:2888:3888
    server.3=sht-sgmhadoopdn-03:2888:3888

    2. Ensure the following file exists on each node, with a value matching that node's server.* entry from the previous step

    sht-sgmhadoopdn-01

    # cat /usr/local/zookeeper/data/myid 
    1

    sht-sgmhadoopdn-02

    # cat /usr/local/zookeeper/data/myid 
    2

    sht-sgmhadoopdn-03

    # cat /usr/local/zookeeper/data/myid 
    3
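The three myid files can be written in one pass from any node with root SSH access; a sketch in which the counter i must track the server.N numbering in zoo.cfg:

```shell
# Write each node's myid to match its server.N line in zoo.cfg (1, 2, 3 in order).
i=1
for h in sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03; do
  ssh -o ConnectTimeout=5 root@"$h" \
    "mkdir -p /usr/local/zookeeper/data && echo $i > /usr/local/zookeeper/data/myid" \
    || echo "WARN: could not write myid on $h"
  i=$((i+1))
done
```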

    3. Start ZooKeeper on each node in turn

    # zkServer.sh start

    4. Check each node's role and running processes

    sht-sgmhadoopdn-01

    # zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Mode: follower
    
    # jps
    5273 QuorumPeerMain
    5678 Jps

    sht-sgmhadoopdn-02

    # zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Mode: leader
    
    # jps
    22720 QuorumPeerMain
    25180 Jps

    sht-sgmhadoopdn-03

    # zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Mode: follower
    
    # jps
    592 Jps
    32527 QuorumPeerMain

    5. Test logging in to the ZooKeeper cluster

    # zkCli.sh -server sht-sgmhadoopdn-01:2181
    # zkCli.sh -server sht-sgmhadoopdn-02:2181
    # zkCli.sh -server sht-sgmhadoopdn-03:2181
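For a scriptable, non-interactive check, ZooKeeper 3.4 also answers the four-letter ruok command on the client port (a healthy server replies imok); a sketch assuming netcat (nc) is installed:

```shell
# Poll each ZooKeeper server with the `ruok` four-letter command.
for h in sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03; do
  reply=$(echo ruok | nc -w 5 "$h" 2181 2>/dev/null)
  echo "$h: ${reply:-no response}"
done
```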

    Part 6: Configure HDFS HA

    1. Edit the configuration files on namenode sht-sgmhadoopnn-01

    1). hadoop-env.sh

    export JAVA_HOME=/usr/java/jdk1.8.0_45

    2). core-site.xml

    <configuration>
    
      <property>
          <name>hadoop.tmp.dir</name>
          <value>/usr/local/hadoop/data</value>
      </property>
    
      <property>
          <name>fs.defaultFS</name>
          <value>hdfs://mycluster</value>
      </property>
    
      <property>
          <name>hadoop.http.staticuser.user</name>
          <value>admin</value>
      </property>
    
      <property>
          <name>dfs.permissions.superusergroup</name>
          <value>admingroup</value>
      </property>
    
      <property>
          <name>fs.trash.interval</name>
          <value>1440</value>
      </property>
    
      <property>
          <name>fs.trash.checkpoint.interval</name>
          <value>0</value>
      </property>
    
      <property>
          <name>io.file.buffer.size</name>
          <value>65536</value>
      </property>
    
      <property>
         <name>hadoop.proxyuser.hduser.groups</name>
         <value>*</value>
      </property>
                 
      <property>
          <name>hadoop.proxyuser.hduser.hosts</name>
          <value>*</value>
      </property>
    
      <property>
            <name>ha.zookeeper.quorum</name>
            <value>sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181</value>
      </property>
    
      <property>
            <name>ha.zookeeper.session-timeout.ms</name>
            <value>5000</value>
      </property>
    
      <property>
            <name>ha.zookeeper.parent-znode</name>
            <value>/hadoop-ha</value>
      </property>
    
    </configuration>

     3). hdfs-site.xml

    # touch /usr/local/hadoop/etc/hadoop/hdfs_includes
    # touch /usr/local/hadoop/etc/hadoop/hdfs_excludes
    <configuration>
    
      <property>
             <name>dfs.webhdfs.enabled</name>
             <value>true</value>
      </property>
    
      <property>
            <name>dfs.permissions.enabled</name>
            <value>false</value>
      </property>
    
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop/data/dfs/name</value>
      </property>
    
      <property>
            <name>dfs.namenode.edits.dir</name>
            <value>${dfs.namenode.name.dir}</value>
      </property>
    
      <property>
            <name>dfs.datanode.data.dir</name>
            <value>/usr/local/hadoop/data/dfs/data</value>
      </property>
    
      <property>
            <name>dfs.replication</name>
            <value>3</value>
      </property>
    
      <property>
            <name>dfs.blocksize</name>
            <value>134217728</value>
      </property>
    
      <property>
            <name>dfs.nameservices</name>
            <value>mycluster</value>
      </property>
    
      <property>
            <name>dfs.ha.namenodes.mycluster</name>
            <value>nn1,nn2</value>
      </property>
    
      <property>
            <name>dfs.namenode.rpc-address.mycluster.nn1</name>
            <value>sht-sgmhadoopnn-01:8020</value>
      </property>
    
      <property>
            <name>dfs.namenode.rpc-address.mycluster.nn2</name>
            <value>sht-sgmhadoopnn-02:8020</value>
      </property>
    
      <property>
            <name>dfs.namenode.http-address.mycluster.nn1</name>
            <value>sht-sgmhadoopnn-01:50070</value>
      </property>
      <property>
            <name>dfs.namenode.http-address.mycluster.nn2</name>
            <value>sht-sgmhadoopnn-02:50070</value>
      </property>
    
      <property>
             <name>dfs.journalnode.http-address</name>
            <value>0.0.0.0:8480</value>
      </property>
    
      <property>            
            <name>dfs.journalnode.https-address</name>
            <value>0.0.0.0:8481</value>
      </property>
    
      <property>
            <name>dfs.journalnode.rpc-address</name>
            <value>0.0.0.0:8485</value>
      </property>
    
      <property>
            <name>dfs.ha.fencing.methods</name>
            <value>sshfence</value>
      </property>
    
      <property>
            <name>dfs.ha.fencing.ssh.private-key-files</name>
            <value>/root/.ssh/id_rsa</value>
      </property>
    
      <property>
            <name>dfs.ha.fencing.ssh.connect-timeout</name>
            <value>30000</value>
      </property>
    
      <property>            
            <name>dfs.namenode.shared.edits.dir</name>
            <value>qjournal://sht-sgmhadoopdn-01:8485;sht-sgmhadoopdn-02:8485;sht-sgmhadoopdn-03:8485/mycluster</value>
      </property>
    
      <property>
            <name>dfs.journalnode.edits.dir</name>
            <value>/usr/local/hadoop/data/dfs/jn</value>
      </property>
    
      <property>
            <name>dfs.client.failover.proxy.provider.mycluster</name>
            <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
      </property>
    
      <property>
            <name>dfs.ha.automatic-failover.enabled</name>
            <value>true</value>
      </property>
    
      <property>
            <name>ha.failover-controller.cli-check.rpc-timeout.ms</name>
            <value>20000</value>
      </property>
    
      <property>
            <name>ipc.client.connect.timeout</name>
            <value>20000</value>
      </property>
    
      <property>
            <name>dfs.hosts</name>
            <value>/usr/local/hadoop/etc/hadoop/hdfs_includes</value>
      </property>
    
      <property>
            <name>dfs.hosts.exclude</name>
            <value>/usr/local/hadoop/etc/hadoop/hdfs_excludes</value>
      </property>
    
      <property>
            <name>dfs.namenode.heartbeat.recheck-interval</name>
            <value>30000</value>
      </property>
    
      <property>
            <name>dfs.heartbeat.interval</name>
            <value>1</value>
      </property>
      
      <property>
            <name>dfs.blockreport.intervalMsec</name>
            <value>3600000</value>
      </property>
      
      <property>
            <name>dfs.datanode.balance.bandwidthPerSec</name>
            <value>67108864</value>
      </property>
    
      <property>
            <name>dfs.datanode.balance.max.concurrent.moves</name>
            <value>1024</value>
      </property>
      
      <property>
            <name>dfs.datanode.handler.count</name>
            <value>100</value>
      </property>
    
    </configuration>

    4). slaves

    sht-sgmhadoopdn-01
    sht-sgmhadoopdn-02
    sht-sgmhadoopdn-03

    2. Copy the configuration files above to the matching directory on every other node

    # rsync -az --progress /usr/local/hadoop/etc/hadoop/* root@sht-sgmhadoopnn-02:/usr/local/hadoop/etc/hadoop
    # rsync -az --progress /usr/local/hadoop/etc/hadoop/* root@sht-sgmhadoopdn-01:/usr/local/hadoop/etc/hadoop
    # rsync -az --progress /usr/local/hadoop/etc/hadoop/* root@sht-sgmhadoopdn-02:/usr/local/hadoop/etc/hadoop
    # rsync -az --progress /usr/local/hadoop/etc/hadoop/* root@sht-sgmhadoopdn-03:/usr/local/hadoop/etc/hadoop
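The four rsync commands differ only in the target host, so a loop works equally well; a sketch:

```shell
# Fan the Hadoop config directory out to the remaining four nodes.
for h in sht-sgmhadoopnn-02 sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03; do
  rsync -az --progress /usr/local/hadoop/etc/hadoop/* \
    root@"$h":/usr/local/hadoop/etc/hadoop \
    || echo "WARN: sync to $h failed"
done
```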

    3. On namenode sht-sgmhadoopnn-01, create the Hadoop znode in the ZooKeeper cluster

    # hdfs zkfc -formatZK

    Output log

    # hdfs zkfc -formatZK
    19/03/27 23:28:51 INFO tools.DFSZKFailoverController: Failover controller configured for NameNode NameNode at sht-sgmhadoopnn-01/172.16.101.55:8020
    19/03/27 23:28:51 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
    19/03/27 23:28:51 INFO zookeeper.ZooKeeper: Client environment:host.name=sht-sgmhadoopnn-01
    19/03/27 23:28:51 INFO zookeeper.ZooKeeper: Client environment:java.version=1.8.0_45
    19/03/27 23:28:51 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
    19/03/27 23:28:51 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/java/jdk1.8.0_45/jre
    19/03/27 23:28:51 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/usr/local/hadoop-2.7.4/etc/hadoop:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/curator-client-2.7.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/activation
-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jsch-0.1.54.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/hadoop-auth-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4
/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/hadoop-annotations-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/hadoop-nfs-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/hadoop-common-2.7.4-tests.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/hadoop-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/li
b/jsr305-3.0.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-2.7.4-tests.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop-2.7.4/s
hare/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-client-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-api-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-ya
rn-server-sharedcachemanager-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-registry-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/hadoop-annot
ations-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.4-tests.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.4.jar:/contrib/capacity-scheduler/*.jar
    19/03/27 23:28:51 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/local/hadoop-2.7.4/lib/native
    19/03/27 23:28:51 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
    19/03/27 23:28:51 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
    19/03/27 23:28:51 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
    19/03/27 23:28:51 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
    19/03/27 23:28:51 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-514.el7.x86_64
    19/03/27 23:28:51 INFO zookeeper.ZooKeeper: Client environment:user.name=root
    19/03/27 23:28:51 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
    19/03/27 23:28:51 INFO zookeeper.ZooKeeper: Client environment:user.dir=/usr/local/hadoop-2.7.4/data
    19/03/27 23:28:51 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@74fe5c40
    19/03/27 23:28:51 INFO zookeeper.ClientCnxn: Opening socket connection to server sht-sgmhadoopdn-01/172.16.101.58:2181. Will not attempt to authenticate using SASL (unknown error)
    19/03/27 23:28:51 INFO zookeeper.ClientCnxn: Socket connection established to sht-sgmhadoopdn-01/172.16.101.58:2181, initiating session
    19/03/27 23:28:51 INFO zookeeper.ClientCnxn: Session establishment complete on server sht-sgmhadoopdn-01/172.16.101.58:2181, sessionid = 0x169bfa59c6a0001, negotiated timeout = 5000
    19/03/27 23:28:51 INFO ha.ActiveStandbyElector: Session connected.
    19/03/27 23:28:51 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.
    19/03/27 23:28:51 INFO zookeeper.ZooKeeper: Session: 0x169bfa59c6a0001 closed
    19/03/27 23:28:51 INFO zookeeper.ClientCnxn: EventThread shut down

    Log in to ZooKeeper to verify

    # zkCli.sh -server sht-sgmhadoopdn-03:2181
    Connecting to sht-sgmhadoopdn-03:2181
    
    2019-03-27 23:30:43,433 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50 GMT
    2019-03-27 23:30:43,440 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=sht-sgmhadoopdn-03
    2019-03-27 23:30:43,441 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_45
    2019-03-27 23:30:43,445 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
    2019-03-27 23:30:43,446 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/java/jdk1.8.0_45/jre
    2019-03-27 23:30:43,446 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/usr/local/zookeeper/bin/../build/classes:/usr/local/zookeeper/bin/../build/lib/*.jar:/usr/local/zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/local/zookeeper/bin/../lib/slf4j-api-1.6.1.jar:/usr/local/zookeeper/bin/../lib/netty-3.10.5.Final.jar:/usr/local/zookeeper/bin/../lib/log4j-1.2.16.jar:/usr/local/zookeeper/bin/../lib/jline-0.9.94.jar:/usr/local/zookeeper/bin/../zookeeper-3.4.9.jar:/usr/local/zookeeper/bin/../src/java/lib/*.jar:/usr/local/zookeeper/bin/../conf:
    2019-03-27 23:30:43,446 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
    2019-03-27 23:30:43,446 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
    2019-03-27 23:30:43,447 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
    2019-03-27 23:30:43,447 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
    2019-03-27 23:30:43,447 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
    2019-03-27 23:30:43,447 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=3.10.0-514.el7.x86_64
    2019-03-27 23:30:43,448 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=root
    2019-03-27 23:30:43,448 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/root
    2019-03-27 23:30:43,448 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/usr/local/zookeeper-3.4.9/conf
    2019-03-27 23:30:43,451 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=sht-sgmhadoopdn-03:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@3eb07fd3
    Welcome to ZooKeeper!
    2019-03-27 23:30:43,520 [myid:] - INFO  [main-SendThread(sht-sgmhadoopdn-03:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server sht-sgmhadoopdn-03/172.16.101.60:2181. Will not attempt to authenticate using SASL (unknown error)
    JLine support is enabled
    2019-03-27 23:30:43,691 [myid:] - INFO  [main-SendThread(sht-sgmhadoopdn-03:2181):ClientCnxn$SendThread@876] - Socket connection established to sht-sgmhadoopdn-03/172.16.101.60:2181, initiating session
    2019-03-27 23:30:43,720 [myid:] - INFO  [main-SendThread(sht-sgmhadoopdn-03:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server sht-sgmhadoopdn-03/172.16.101.60:2181, sessionid = 0x369bfa5e6bf0002, negotiated timeout = 30000
    
    WATCHER::
    
    WatchedEvent state:SyncConnected type:None path:null
    [zk: sht-sgmhadoopdn-03:2181(CONNECTED) 0] 
    [zk: sht-sgmhadoopdn-03:2181(CONNECTED) 0] 
    [zk: sht-sgmhadoopdn-03:2181(CONNECTED) 0] 
    [zk: sht-sgmhadoopdn-03:2181(CONNECTED) 0] 
    [zk: sht-sgmhadoopdn-03:2181(CONNECTED) 0] 
    [zk: sht-sgmhadoopdn-03:2181(CONNECTED) 0] ls /
    [zookeeper, hadoop-ha]
    [zk: sht-sgmhadoopdn-03:2181(CONNECTED) 1] stat /hadoop-ha
    cZxid = 0x100000008
    ctime = Wed Mar 27 23:28:51 CST 2019
    mZxid = 0x100000008
    mtime = Wed Mar 27 23:28:51 CST 2019
    pZxid = 0x100000009
    cversion = 1
    dataVersion = 0
    aclVersion = 0
    ephemeralOwner = 0x0
    dataLength = 0
    numChildren = 1

    4. Start the JournalNode role on each journalnode host

    # hadoop-daemon.sh start journalnode
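hadoop-daemon.sh only starts the daemon on the local machine, so this command has to run on each of the three JournalNode hosts. From sht-sgmhadoopnn-01 that can be scripted; a sketch assuming root SSH and the install paths above:

```shell
# Start a JournalNode on each of the three journal hosts.
for h in sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03; do
  ssh -o ConnectTimeout=5 root@"$h" "/usr/local/hadoop/sbin/hadoop-daemon.sh start journalnode" \
    || echo "WARN: could not start journalnode on $h"
done
```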

    Check the status

    sht-sgmhadoopdn-01

    # netstat -antlp | grep -E ':8480|8481|:8485'
    tcp        0      0 0.0.0.0:8480            0.0.0.0:*               LISTEN      5614/java           
    tcp        0      0 0.0.0.0:8485            0.0.0.0:*               LISTEN      5614/java
    
    # jps
    5735 Jps
    5273 QuorumPeerMain
    5614 JournalNode

     sht-sgmhadoopdn-02

    # netstat -antlp | grep -E ':8480|:8481|:8485'
    tcp        0      0 0.0.0.0:8480            0.0.0.0:*               LISTEN      24917/java          
    tcp        0      0 0.0.0.0:8485            0.0.0.0:*               LISTEN      24917/java          
    
    # jps
    22720 QuorumPeerMain
    25681 Jps
    24917 JournalNode

     sht-sgmhadoopdn-03

    # netstat -antlp | grep -E ':8480|:8481|:8485'
    tcp        0      0 0.0.0.0:8480            0.0.0.0:*               LISTEN      530/java            
    tcp        0      0 0.0.0.0:8485            0.0.0.0:*               LISTEN      530/java            
    # jps
    530 JournalNode
    677 Jps
    32527 QuorumPeerMain
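Before moving on to formatting, it is worth confirming from the namenode that every JournalNode RPC port (8485) is reachable. The helper below is a sketch using bash's `/dev/tcp` pseudo-device; the hostnames are this cluster's journalnode nodes.

```shell
# Returns 0 if the TCP port is reachable within 2 seconds (bash /dev/tcp).
port_open() { timeout 2 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null; }

for node in sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03; do
  if port_open "$node" 8485; then
    echo "$node:8485 journalnode RPC up"
  else
    echo "$node:8485 journalnode RPC DOWN"
  fi
done
```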

     5. Format the HDFS filesystem on the namenode sht-sgmhadoopnn-01 node (run this only once; note the `hadoop namenode` form is deprecated in favor of `hdfs namenode -format`, as the log below warns)

    # hadoop namenode -format mycluster

    Output log:

    # hadoop namenode -format mycluster
    DEPRECATED: Use of this script to execute hdfs command is deprecated.
    Instead use the hdfs command for it.
    
    19/03/27 23:52:51 INFO namenode.NameNode: STARTUP_MSG: 
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG:   host = sht-sgmhadoopnn-01/172.16.101.55
    STARTUP_MSG:   args = [-format, mycluster]
    STARTUP_MSG:   version = 2.7.4
    STARTUP_MSG:   classpath = /usr/local/hadoop-2.7.4/etc/hadoop:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/curator-client-2.7.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common
/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jsch-0.1.54.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/hadoop-auth-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/
hadoop-2.7.4/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/hadoop-annotations-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/hadoop-nfs-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/hadoop-common-2.7.4-tests.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/hadoop-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop-2.7.4/share/had
oop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-2.7.4-tests.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local
/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-client-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-api-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.4.jar:/usr/local/ha
doop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-registry-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoo
p/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.4-tests.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.4.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar
    STARTUP_MSG:   build = https://shv@git-wip-us.apache.org/repos/asf/hadoop.git -r cd915e1e8d9d0131462a0b7301586c175728a282; compiled by 'kshvachk' on 2017-08-01T00:29Z
    STARTUP_MSG:   java = 1.8.0_45
    ************************************************************/
    19/03/27 23:52:51 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
    19/03/27 23:52:51 INFO namenode.NameNode: createNameNode [-format, mycluster]
    19/03/27 23:52:52 WARN common.Util: Path /usr/local/hadoop/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
    19/03/27 23:52:52 WARN common.Util: Path /usr/local/hadoop/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
    Formatting using clusterid: CID-2f443fae-4570-40e1-a09d-936ebcc203e3
    19/03/27 23:52:52 INFO namenode.FSNamesystem: No KeyProvider found.
    19/03/27 23:52:52 INFO namenode.FSNamesystem: fsLock is fair: true
    19/03/27 23:52:52 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
    19/03/27 23:52:52 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
    19/03/27 23:52:52 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
    19/03/27 23:52:52 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
    19/03/27 23:52:52 INFO blockmanagement.BlockManager: The block deletion will start around 2019 Mar 27 23:52:52
    19/03/27 23:52:52 INFO util.GSet: Computing capacity for map BlocksMap
    19/03/27 23:52:52 INFO util.GSet: VM type       = 64-bit
    19/03/27 23:52:52 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
    19/03/27 23:52:52 INFO util.GSet: capacity      = 2^21 = 2097152 entries
    19/03/27 23:52:52 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
    19/03/27 23:52:52 INFO blockmanagement.BlockManager: defaultReplication         = 3
    19/03/27 23:52:52 INFO blockmanagement.BlockManager: maxReplication             = 512
    19/03/27 23:52:52 INFO blockmanagement.BlockManager: minReplication             = 1
    19/03/27 23:52:52 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
    19/03/27 23:52:52 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
    19/03/27 23:52:52 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
    19/03/27 23:52:52 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
    19/03/27 23:52:52 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
    19/03/27 23:52:52 INFO namenode.FSNamesystem: supergroup          = supergroup
    19/03/27 23:52:52 INFO namenode.FSNamesystem: isPermissionEnabled = false
    19/03/27 23:52:52 INFO namenode.FSNamesystem: Determined nameservice ID: mycluster
    19/03/27 23:52:52 INFO namenode.FSNamesystem: HA Enabled: true
    19/03/27 23:52:52 INFO namenode.FSNamesystem: Append Enabled: true
    19/03/27 23:52:52 INFO util.GSet: Computing capacity for map INodeMap
    19/03/27 23:52:52 INFO util.GSet: VM type       = 64-bit
    19/03/27 23:52:52 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
    19/03/27 23:52:52 INFO util.GSet: capacity      = 2^20 = 1048576 entries
    19/03/27 23:52:52 INFO namenode.FSDirectory: ACLs enabled? false
    19/03/27 23:52:52 INFO namenode.FSDirectory: XAttrs enabled? true
    19/03/27 23:52:52 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
    19/03/27 23:52:52 INFO namenode.NameNode: Caching file names occuring more than 10 times
    19/03/27 23:52:52 INFO util.GSet: Computing capacity for map cachedBlocks
    19/03/27 23:52:52 INFO util.GSet: VM type       = 64-bit
    19/03/27 23:52:52 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
    19/03/27 23:52:52 INFO util.GSet: capacity      = 2^18 = 262144 entries
    19/03/27 23:52:52 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
    19/03/27 23:52:52 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
    19/03/27 23:52:52 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
    19/03/27 23:52:52 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
    19/03/27 23:52:52 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
    19/03/27 23:52:52 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
    19/03/27 23:52:52 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
    19/03/27 23:52:52 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
    19/03/27 23:52:52 INFO util.GSet: Computing capacity for map NameNodeRetryCache
    19/03/27 23:52:52 INFO util.GSet: VM type       = 64-bit
    19/03/27 23:52:52 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
    19/03/27 23:52:52 INFO util.GSet: capacity      = 2^15 = 32768 entries
    19/03/27 23:52:53 INFO namenode.FSImage: Allocated new BlockPoolId: BP-698223843-172.16.101.55-1553701973789
    19/03/27 23:52:53 INFO common.Storage: Storage directory /usr/local/hadoop-2.7.4/data/dfs/name has been successfully formatted.
    19/03/27 23:52:54 INFO namenode.FSImageFormatProtobuf: Saving image file /usr/local/hadoop-2.7.4/data/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
    19/03/27 23:52:54 INFO namenode.FSImageFormatProtobuf: Image file /usr/local/hadoop-2.7.4/data/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 321 bytes saved in 0 seconds.
    19/03/27 23:52:54 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
    19/03/27 23:52:54 INFO util.ExitUtil: Exiting with status 0
    19/03/27 23:52:54 INFO namenode.NameNode: SHUTDOWN_MSG: 
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at sht-sgmhadoopnn-01/172.16.101.55
    ************************************************************/

    6. Start the namenode role on the sht-sgmhadoopnn-01 node

    # hadoop-daemon.sh start namenode
    starting namenode, logging to /usr/local/hadoop-2.7.4/logs/hadoop-root-namenode-sht-sgmhadoopnn-01.out
    
    # jps
    5041 NameNode
    5116 Jps
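For scripting this check rather than eyeballing `jps`, grep for the NameNode entry. The sample output is the one captured above; on a live host you would replace it with the real `jps` output, as the comment notes.

```shell
# Verify the NameNode JVM appears in jps output.
# Sample taken from the capture above; on a real host use: jps_output="$(jps)"
jps_output="5041 NameNode
5116 Jps"
if echo "$jps_output" | grep -q 'NameNode$'; then
  echo "NameNode is running"
else
  echo "NameNode is NOT running" >&2
  false
fi
```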

    7. On the namenode sht-sgmhadoopnn-02 node, sync the HDFS metadata from sht-sgmhadoopnn-01

    # hdfs namenode -bootstrapStandby

    Output log:

    # hdfs namenode -bootstrapStandby
    19/03/27 23:58:01 INFO namenode.NameNode: STARTUP_MSG: 
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG:   host = sht-sgmhadoopnn-02/172.16.101.56
    STARTUP_MSG:   args = [-bootstrapStandby]
    STARTUP_MSG:   version = 2.7.4
    STARTUP_MSG:   classpath = /usr/local/hadoop-2.7.4/etc/hadoop:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoo
p-2.7.4/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/curator-client-2.7.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/hadoop-annotations-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/hadoop-auth-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.7.4/share/had
oop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jsch-0.1.54.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/hadoop-common-2.7.4-tests.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/hadoop-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/hadoop-nfs-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.7.4/share/
hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-2.7.4-tests.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/lo
cal/hadoop-2.7.4/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-api-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-client-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-registry-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.4.jar:/usr/local/hadoop
-2.7.4/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoo
p/mapreduce/hadoop-mapreduce-client-app-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.4-tests.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.4.jar:/contrib/capacity-scheduler/*.jar
    STARTUP_MSG:   build = https://shv@git-wip-us.apache.org/repos/asf/hadoop.git -r cd915e1e8d9d0131462a0b7301586c175728a282; compiled by 'kshvachk' on 2017-08-01T00:29Z
    STARTUP_MSG:   java = 1.8.0_45
    ************************************************************/
    19/03/27 23:58:01 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
    19/03/27 23:58:01 INFO namenode.NameNode: createNameNode [-bootstrapStandby]
    19/03/27 23:58:02 WARN common.Util: Path /usr/local/hadoop/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
    19/03/27 23:58:02 WARN common.Util: Path /usr/local/hadoop/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
    =====================================================
    About to bootstrap Standby ID nn2 from:
               Nameservice ID: mycluster
            Other Namenode ID: nn1
      Other NN's HTTP address: http://sht-sgmhadoopnn-01:50070
      Other NN's IPC  address: sht-sgmhadoopnn-01/172.16.101.55:8020
                 Namespace ID: 789891431
                Block pool ID: BP-698223843-172.16.101.55-1553701973789
                   Cluster ID: CID-2f443fae-4570-40e1-a09d-936ebcc203e3
               Layout version: -63
           isUpgradeFinalized: true
    =====================================================
    19/03/27 23:58:03 INFO common.Storage: Storage directory /usr/local/hadoop-2.7.4/data/dfs/name has been successfully formatted.
    19/03/27 23:58:03 WARN common.Util: Path /usr/local/hadoop/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
    19/03/27 23:58:03 WARN common.Util: Path /usr/local/hadoop/data/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
    19/03/27 23:58:04 INFO namenode.TransferFsImage: Opening connection to http://sht-sgmhadoopnn-01:50070/imagetransfer?getimage=1&txid=0&storageInfo=-63:789891431:0:CID-2f443fae-4570-40e1-a09d-936ebcc203e3
    19/03/27 23:58:04 INFO namenode.TransferFsImage: Image Transfer timeout configured to 60000 milliseconds
    19/03/27 23:58:04 INFO namenode.TransferFsImage: Transfer took 0.04s at 0.00 KB/s
    19/03/27 23:58:04 INFO namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000000 size 321 bytes.
    19/03/27 23:58:04 INFO util.ExitUtil: Exiting with status 0
    19/03/27 23:58:04 INFO namenode.NameNode: SHUTDOWN_MSG: 
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at sht-sgmhadoopnn-02/172.16.101.56
    ************************************************************/

    8. Start the NameNode role on sht-sgmhadoopnn-02

    # hadoop-daemon.sh start namenode
    starting namenode, logging to /usr/local/hadoop-2.7.4/logs/hadoop-root-namenode-sht-sgmhadoopnn-02.out
    
    # jps
    30180 NameNode
    30255 Jps

    9. Stop the NameNode role on both NameNode hosts

    # hadoop-daemon.sh stop namenode
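
    Since this step touches both NameNode hosts, a small loop saves a round of logins. A sketch only; it assumes the passwordless root SSH already set up between the nodes of this cluster:

    ```shell
    # Stop the NameNode role on both NN hosts from a single node.
    # Assumes passwordless root SSH and the install path used throughout this guide.
    for host in sht-sgmhadoopnn-01 sht-sgmhadoopnn-02; do
        ssh "$host" "/usr/local/hadoop/sbin/hadoop-daemon.sh stop namenode"
    done
    ```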

    10. Start the full HDFS cluster from sht-sgmhadoopnn-01

    # start-dfs.sh

    Output log:

    # start-dfs.sh 
    Starting namenodes on [sht-sgmhadoopnn-01 sht-sgmhadoopnn-02]
    sht-sgmhadoopnn-01: ********************************************************************
    sht-sgmhadoopnn-01: *                                                                  *
    sht-sgmhadoopnn-01: * This system is for the use of authorized users only.  Usage of   *
    sht-sgmhadoopnn-01: * this system may be monitored and recorded by system personnel.   *
    sht-sgmhadoopnn-01: *                                                                  *
    sht-sgmhadoopnn-01: * Anyone using this system expressly consents to such monitoring   *
    sht-sgmhadoopnn-01: * and they are advised that if such monitoring reveals possible    *
    sht-sgmhadoopnn-01: * evidence of criminal activity, system personnel may provide the  *
    sht-sgmhadoopnn-01: * evidence from such monitoring to law enforcement officials.      *
    sht-sgmhadoopnn-01: *                                                                  *
    sht-sgmhadoopnn-01: ********************************************************************
    sht-sgmhadoopnn-01: starting namenode, logging to /usr/local/hadoop-2.7.4/logs/hadoop-root-namenode-sht-sgmhadoopnn-01.out
    sht-sgmhadoopnn-02: starting namenode, logging to /usr/local/hadoop-2.7.4/logs/hadoop-root-namenode-sht-sgmhadoopnn-02.out
    sht-sgmhadoopdn-02: starting datanode, logging to /usr/local/hadoop-2.7.4/logs/hadoop-root-datanode-sht-sgmhadoopdn-02.telenav.cn.out
    sht-sgmhadoopdn-01: starting datanode, logging to /usr/local/hadoop-2.7.4/logs/hadoop-root-datanode-sht-sgmhadoopdn-01.out
    sht-sgmhadoopdn-03: starting datanode, logging to /usr/local/hadoop-2.7.4/logs/hadoop-root-datanode-sht-sgmhadoopdn-03.telenav.cn.out
    Starting journal nodes [sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03]
    sht-sgmhadoopdn-02: journalnode running as process 24917. Stop it first.
    sht-sgmhadoopdn-01: journalnode running as process 5614. Stop it first.
    sht-sgmhadoopdn-03: journalnode running as process 530. Stop it first.
    Starting ZK Failover Controllers on NN hosts [sht-sgmhadoopnn-01 sht-sgmhadoopnn-02]
    sht-sgmhadoopnn-01: starting zkfc, logging to /usr/local/hadoop-2.7.4/logs/hadoop-root-zkfc-sht-sgmhadoopnn-01.out
    sht-sgmhadoopnn-02: starting zkfc, logging to /usr/local/hadoop-2.7.4/logs/hadoop-root-zkfc-sht-sgmhadoopnn-02.out

    11. Verify HDFS

    NameNode nn1 (sht-sgmhadoopnn-01):

    http://172.16.101.55:50070

    NameNode nn2 (sht-sgmhadoopnn-02):

    http://172.16.101.56:50070
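
    Besides the two web UIs, the cluster state can be confirmed from the shell. A quick smoke test (the /tmp/ha-smoke directory is an arbitrary example path, not part of the deployment):

    ```shell
    # Run on either NameNode host; assumes the hadoop binaries are on PATH.
    hdfs haadmin -getServiceState nn1   # one of nn1/nn2 should report "active"
    hdfs haadmin -getServiceState nn2
    hdfs dfsadmin -report               # all three DataNodes should show as live
    hdfs dfs -mkdir -p /tmp/ha-smoke    # a trivial write through the active NN
    hdfs dfs -ls /tmp
    ```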

     

    12. Automatic failover test

    Note: at this point the active NameNode is sht-sgmhadoopnn-02 and the standby is sht-sgmhadoopnn-01.

    This can also be verified from the command line:

    # hdfs haadmin -getServiceState nn1
    standby
    # hdfs haadmin -getServiceState nn2
    active
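
    The two checks above can be folded into one loop; a sketch assuming the NN IDs nn1 and nn2 configured for nameservice mycluster in this cluster:

    ```shell
    # Print the HA state of every configured NameNode ID in one pass.
    for id in nn1 nn2; do
        printf '%s: %s\n' "$id" "$(hdfs haadmin -getServiceState "$id")"
    done
    ```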

    Manually kill the NameNode process on the active node to verify that the standby automatically takes over as active.

    # jps
    30339 NameNode
    30435 DFSZKFailoverController
    30601 Jps
    
    # kill -9 30339

    Meanwhile, watch the zkfc log on the standby node:

    2019-03-28 00:13:15,460 INFO org.apache.hadoop.ha.ZKFailoverController: Trying to make NameNode at sht-sgmhadoopnn-01/172.16.101.55:8020 active...
    2019-03-28 00:13:17,059 INFO org.apache.hadoop.ha.ZKFailoverController: Successfully transitioned NameNode at sht-sgmhadoopnn-01/172.16.101.55:8020 to active state

    Full log:

    2019-03-28 00:13:13,586 INFO org.apache.hadoop.ha.ActiveStandbyElector: Checking for any old active which needs to be fenced...
    2019-03-28 00:13:13,621 INFO org.apache.hadoop.ha.ActiveStandbyElector: Old node exists: 0a096d79636c757374657212036e6e321a127368742d73676d6861646f6f706e6e2d303220d43e28d33e
    2019-03-28 00:13:13,625 INFO org.apache.hadoop.ha.ZKFailoverController: Should fence: NameNode at sht-sgmhadoopnn-02/172.16.101.56:8020
    2019-03-28 00:13:14,633 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: sht-sgmhadoopnn-02/172.16.101.56:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000 MILLISECONDS)
    2019-03-28 00:13:14,636 WARN org.apache.hadoop.ha.FailoverController: Unable to gracefully make NameNode at sht-sgmhadoopnn-02/172.16.101.56:8020 standby (unable to connect)
    java.net.ConnectException: Call From sht-sgmhadoopnn-01/172.16.101.55 to sht-sgmhadoopnn-02:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    	at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
    	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    	at org.apache.hadoop.ipc.Client.call(Client.java:1480)
    	at org.apache.hadoop.ipc.Client.call(Client.java:1413)
    	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    	at com.sun.proxy.$Proxy9.transitionToStandby(Unknown Source)
    	at org.apache.hadoop.ha.protocolPB.HAServiceProtocolClientSideTranslatorPB.transitionToStandby(HAServiceProtocolClientSideTranslatorPB.java:112)
    	at org.apache.hadoop.ha.FailoverController.tryGracefulFence(FailoverController.java:172)
    	at org.apache.hadoop.ha.ZKFailoverController.doFence(ZKFailoverController.java:515)
    	at org.apache.hadoop.ha.ZKFailoverController.fenceOldActive(ZKFailoverController.java:505)
    	at org.apache.hadoop.ha.ZKFailoverController.access$1100(ZKFailoverController.java:61)
    	at org.apache.hadoop.ha.ZKFailoverController$ElectorCallbacks.fenceOldActive(ZKFailoverController.java:892)
    	at org.apache.hadoop.ha.ActiveStandbyElector.fenceOldActive(ActiveStandbyElector.java:921)
    	at org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:820)
    	at org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:418)
    	at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:599)
    	at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
    Caused by: java.net.ConnectException: Connection refused
    	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615)
    	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713)
    	at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
    	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
    	at org.apache.hadoop.ipc.Client.call(Client.java:1452)
    	... 14 more
    2019-03-28 00:13:14,641 INFO org.apache.hadoop.ha.NodeFencer: ====== Beginning Service Fencing Process... ======
    2019-03-28 00:13:14,642 INFO org.apache.hadoop.ha.NodeFencer: Trying method 1/1: org.apache.hadoop.ha.SshFenceByTcpPort(null)
    2019-03-28 00:13:14,680 INFO org.apache.hadoop.ha.SshFenceByTcpPort: Connecting to sht-sgmhadoopnn-02...
    2019-03-28 00:13:14,682 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: Connecting to sht-sgmhadoopnn-02 port 22
    2019-03-28 00:13:14,689 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: Connection established
    2019-03-28 00:13:14,700 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: Remote version string: SSH-2.0-OpenSSH_6.6.1
    2019-03-28 00:13:14,700 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: Local version string: SSH-2.0-JSCH-0.1.54
    2019-03-28 00:13:14,700 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: CheckCiphers: aes256-ctr,aes192-ctr,aes128-ctr,aes256-cbc,aes192-cbc,aes128-cbc,3des-ctr,arcfour,arcfour128,arcfour256
    2019-03-28 00:13:15,043 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: aes256-ctr is not available.
    2019-03-28 00:13:15,043 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: aes192-ctr is not available.
    2019-03-28 00:13:15,043 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: aes256-cbc is not available.
    2019-03-28 00:13:15,043 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: aes192-cbc is not available.
    2019-03-28 00:13:15,043 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: CheckKexes: diffie-hellman-group14-sha1,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521
    2019-03-28 00:13:15,156 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: CheckSignatures: ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521
    2019-03-28 00:13:15,160 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: SSH_MSG_KEXINIT sent
    2019-03-28 00:13:15,160 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: SSH_MSG_KEXINIT received
    2019-03-28 00:13:15,160 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: kex: server: curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
    2019-03-28 00:13:15,160 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: kex: server: ssh-rsa,ecdsa-sha2-nistp256,ssh-ed25519
    2019-03-28 00:13:15,161 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: kex: server: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
    2019-03-28 00:13:15,161 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: kex: server: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
    2019-03-28 00:13:15,161 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: kex: server: hmac-md5-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-ripemd160-etm@openssh.com,hmac-sha1-96-etm@openssh.com,hmac-md5-96-etm@openssh.com,hmac-md5,hmac-sha1,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
    2019-03-28 00:13:15,161 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: kex: server: hmac-md5-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-ripemd160-etm@openssh.com,hmac-sha1-96-etm@openssh.com,hmac-md5-96-etm@openssh.com,hmac-md5,hmac-sha1,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
    2019-03-28 00:13:15,161 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: kex: server: none,zlib@openssh.com
    2019-03-28 00:13:15,161 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: kex: server: none,zlib@openssh.com
    2019-03-28 00:13:15,161 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: kex: server: 
    2019-03-28 00:13:15,161 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: kex: server: 
    2019-03-28 00:13:15,161 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: kex: client: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group1-sha1
    2019-03-28 00:13:15,161 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: kex: client: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521
    2019-03-28 00:13:15,161 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: kex: client: aes128-ctr,aes128-cbc,3des-ctr,3des-cbc,blowfish-cbc
    2019-03-28 00:13:15,161 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: kex: client: aes128-ctr,aes128-cbc,3des-ctr,3des-cbc,blowfish-cbc
    2019-03-28 00:13:15,161 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: kex: client: hmac-md5,hmac-sha1,hmac-sha2-256,hmac-sha1-96,hmac-md5-96
    2019-03-28 00:13:15,161 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: kex: client: hmac-md5,hmac-sha1,hmac-sha2-256,hmac-sha1-96,hmac-md5-96
    2019-03-28 00:13:15,161 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: kex: client: none
    2019-03-28 00:13:15,162 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: kex: client: none
    2019-03-28 00:13:15,162 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: kex: client: 
    2019-03-28 00:13:15,162 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: kex: client: 
    2019-03-28 00:13:15,162 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: kex: server->client aes128-ctr hmac-md5 none
    2019-03-28 00:13:15,162 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: kex: client->server aes128-ctr hmac-md5 none
    2019-03-28 00:13:15,168 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: SSH_MSG_KEX_ECDH_INIT sent
    2019-03-28 00:13:15,168 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: expecting SSH_MSG_KEX_ECDH_REPLY
    2019-03-28 00:13:15,180 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: ssh_rsa_verify: signature true
    2019-03-28 00:13:15,186 WARN org.apache.hadoop.ha.SshFenceByTcpPort.jsch: Permanently added 'sht-sgmhadoopnn-02' (RSA) to the list of known hosts.
    2019-03-28 00:13:15,187 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: SSH_MSG_NEWKEYS sent
    2019-03-28 00:13:15,187 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: SSH_MSG_NEWKEYS received
    2019-03-28 00:13:15,193 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: SSH_MSG_SERVICE_REQUEST sent
    2019-03-28 00:13:15,195 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: SSH_MSG_SERVICE_ACCEPT received
    2019-03-28 00:13:15,200 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: Authentications that can continue: gssapi-with-mic,publickey,keyboard-interactive,password
    2019-03-28 00:13:15,200 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: Next authentication method: gssapi-with-mic
    2019-03-28 00:13:15,205 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: Authentications that can continue: publickey,keyboard-interactive,password
    2019-03-28 00:13:15,205 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: Next authentication method: publickey
    2019-03-28 00:13:15,346 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: Authentication succeeded (publickey).
    2019-03-28 00:13:15,346 INFO org.apache.hadoop.ha.SshFenceByTcpPort: Connected to sht-sgmhadoopnn-02
    2019-03-28 00:13:15,346 INFO org.apache.hadoop.ha.SshFenceByTcpPort: Looking for process running on port 8020
    2019-03-28 00:13:15,411 INFO org.apache.hadoop.ha.SshFenceByTcpPort: Indeterminate response from trying to kill service. Verifying whether it is running using nc...
    2019-03-28 00:13:15,433 WARN org.apache.hadoop.ha.SshFenceByTcpPort: nc -z sht-sgmhadoopnn-02 8020 via ssh: bash: nc: command not found
    2019-03-28 00:13:15,434 INFO org.apache.hadoop.ha.SshFenceByTcpPort: Verified that the service is down.
    2019-03-28 00:13:15,434 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: Disconnecting from sht-sgmhadoopnn-02 port 22
    2019-03-28 00:13:15,439 INFO org.apache.hadoop.ha.NodeFencer: ====== Fencing successful by method org.apache.hadoop.ha.SshFenceByTcpPort(null) ======
    2019-03-28 00:13:15,439 INFO org.apache.hadoop.ha.ActiveStandbyElector: Writing znode /hadoop-ha/mycluster/ActiveBreadCrumb to indicate that the local node is the most recent active...
    2019-03-28 00:13:15,439 INFO org.apache.hadoop.ha.SshFenceByTcpPort.jsch: Caught an exception, leaving main loop due to Socket closed
    2019-03-28 00:13:15,460 INFO org.apache.hadoop.ha.ZKFailoverController: Trying to make NameNode at sht-sgmhadoopnn-01/172.16.101.55:8020 active...
    2019-03-28 00:13:17,059 INFO org.apache.hadoop.ha.ZKFailoverController: Successfully transitioned NameNode at sht-sgmhadoopnn-01/172.16.101.55:8020 to active state

    As shown above, the former standby node has automatically transitioned to the active state.
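
    To finish the test, the killed NameNode can be restarted; with automatic failover enabled it rejoins the pair as standby instead of reclaiming the active role. A sketch, run on sht-sgmhadoopnn-02:

    ```shell
    # Bring the previously killed NameNode back up.
    hadoop-daemon.sh start namenode
    sleep 10                            # allow it to register with the ZKFC
    hdfs haadmin -getServiceState nn2   # should now report "standby"
    ```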

    VII. Configure YARN HA

    1. Edit the configuration files on ResourceManager node sht-sgmhadoopnn-01

    1). yarn-env.sh

    export JAVA_HOME=/usr/java/jdk1.8.0_45

    2). mapred-site.xml

    <configuration>
    
      <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
      </property>
    
      <property>
            <name>mapreduce.jobtracker.http.address</name>
            <value>0.0.0.0:50030</value>
      </property>
    
      <property>
            <name>mapreduce.jobhistory.address</name>
            <value>0.0.0.0:10020</value>
      </property>
    
      <property>
            <name>mapreduce.jobhistory.webapp.address</name>
            <value>0.0.0.0:19888</value>
      </property>
    
    </configuration>

    3). yarn-site.xml

    Create the empty YARN node include/exclude files first (typically wired up via yarn.resourcemanager.nodes.include-path and yarn.resourcemanager.nodes.exclude-path):

    # touch /usr/local/hadoop/etc/hadoop/yarn_includes
    # touch /usr/local/hadoop/etc/hadoop/yarn_excludes
    <configuration>
    
      <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
      </property>
    
      <property>
            <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
            <value>org.apache.hadoop.mapred.ShuffleHandler</value>
      </property>
    
      <property>
            <name>yarn.nodemanager.localizer.address</name>
            <value>0.0.0.0:8040</value>
      </property>
    
      <property>
            <name>yarn.nodemanager.webapp.address</name>
            <value>0.0.0.0:8042</value>
      </property>
    
      <property>
            <name>yarn.resourcemanager.connect.retry-interval.ms</name>
            <value>2000</value>
      </property>
    
      <property>
            <name>yarn.nodemanager.resource.memory-mb</name>
            <value>4096</value>
      </property>
    
      <property>
            <name>yarn.scheduler.minimum-allocation-mb</name>
            <value>1024</value>
      </property>
    
      <property>
            <name>yarn.scheduler.maximum-allocation-mb</name>
            <value>4096</value>
      </property>
    
      <property>
            <name>yarn.resourcemanager.ha.enabled</name>
            <value>true</value>
      </property>
    
      <property>
            <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
            <value>true</value>
      </property>
    
      <property>
            <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
            <value>true</value>
      </property>
    
      <property>
            <name>yarn.resourcemanager.cluster-id</name>
            <value>yarn-cluster</value>
      </property>
    
      <property>
            <name>yarn.resourcemanager.ha.rm-ids</name>
            <value>rm1,rm2</value>
      </property>
    
      <property>
            <name>yarn.resourcemanager.ha.id</name>
            <value>rm1</value>
      </property>
    
      <property>
            <name>yarn.resourcemanager.scheduler.class</name>
            <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
      </property>
    
      <property>
            <name>yarn.client.failover-proxy-provider</name>
            <value>org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider</value>
      </property>
    
      <property>
            <name>yarn.resourcemanager.recovery.enabled</name>
            <value>true</value>
      </property>
    
      <property>
            <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
            <value>5000</value>
      </property>
    
      <property>
            <name>yarn.resourcemanager.store.class</name>
            <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
      </property>
    
      <property>
            <name>yarn.resourcemanager.zk-address</name>
            <value>sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181</value>
      </property>
    
      <property>
           <name>yarn.resourcemanager.zk-num-retries</name>
           <value>1000</value>
      </property>
    
      <property>
           <name>yarn.resourcemanager.zk-retry-interval-ms</name>
           <value>1000</value>
      </property>
    
      <property>
           <name>yarn.resourcemanager.zk-timeout-ms</name>
           <value>10000</value>
      </property>
    
      <property>
            <name>yarn.resourcemanager.address.rm1</name>
            <value>sht-sgmhadoopnn-01:8032</value>
      </property>
    
      <property>
            <name>yarn.resourcemanager.address.rm2</name>
            <value>sht-sgmhadoopnn-02:8032</value>
      </property>
    
      <property>
            <name>yarn.resourcemanager.scheduler.address.rm1</name>
            <value>sht-sgmhadoopnn-01:8030</value>
      </property>
    
      <property>
            <name>yarn.resourcemanager.scheduler.address.rm2</name>
            <value>sht-sgmhadoopnn-02:8030</value>
      </property>
    
      <property>
            <name>yarn.resourcemanager.admin.address.rm1</name>
            <value>sht-sgmhadoopnn-01:8033</value>
      </property>
    
      <property>
            <name>yarn.resourcemanager.admin.address.rm2</name>
            <value>sht-sgmhadoopnn-02:8033</value>
      </property>
    
      <property>
            <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
            <value>sht-sgmhadoopnn-01:8031</value>
      </property>
    
      <property>
            <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
            <value>sht-sgmhadoopnn-02:8031</value>
      </property>
    
      <property>
             <name>yarn.resourcemanager.webapp.address.rm1</name>
             <value>sht-sgmhadoopnn-01:8088</value>
      </property>
      
      <property>
             <name>yarn.resourcemanager.webapp.address.rm2</name>
             <value>sht-sgmhadoopnn-02:8088</value>
      </property>
    
      <property>
             <name>yarn.resourcemanager.webapp.https.address.rm1</name>
             <value>sht-sgmhadoopnn-01:8090</value>
      </property>
    
      <property>
             <name>yarn.resourcemanager.webapp.https.address.rm2</name>
             <value>sht-sgmhadoopnn-02:8090</value>
      </property>
    
      <property>
             <name>yarn.nm.liveness-monitor.expiry-interval-ms</name>
             <value>180000</value>
      </property>
    
      <property>
             <name>yarn.nodemanager.health-checker.interval-ms</name>
             <value>60000</value>
      </property>
    
      <property>
             <name>yarn.resourcemanager.nodes.include-path</name>
             <value>/usr/local/hadoop/etc/hadoop/yarn_includes</value>
      </property>
    
      <property>
             <name>yarn.resourcemanager.nodes.exclude-path</name>
             <value>/usr/local/hadoop/etc/hadoop/yarn_excludes</value>
      </property>
    
    </configuration>
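    Before distributing these files, it is worth confirming that each edited *-site.xml is still well-formed XML, since a stray tag will stop every daemon that reads it. A minimal sketch (check_conf is an illustrative helper, not a Hadoop tool; it assumes python3 is available on the node):

```shell
# check_conf: verify that every *-site.xml under a directory parses as XML.
# Relies only on python3's standard-library parser.
check_conf() {
    local dir=$1 f
    for f in "$dir"/*-site.xml; do
        [ -e "$f" ] || continue
        if python3 -c 'import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1])' "$f" 2>/dev/null; then
            echo "OK: $f"
        else
            echo "MALFORMED: $f"
        fi
    done
}

# Check the active config directory (path from this deployment)
check_conf /usr/local/hadoop/etc/hadoop
```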

    2. Copy the configuration files above to the corresponding directory on all other nodes

    # rsync -az --progress /usr/local/hadoop/etc/hadoop/* root@sht-sgmhadoopnn-02:/usr/local/hadoop/etc/hadoop
    # rsync -az --progress /usr/local/hadoop/etc/hadoop/* root@sht-sgmhadoopdn-01:/usr/local/hadoop/etc/hadoop
    # rsync -az --progress /usr/local/hadoop/etc/hadoop/* root@sht-sgmhadoopdn-02:/usr/local/hadoop/etc/hadoop
    # rsync -az --progress /usr/local/hadoop/etc/hadoop/* root@sht-sgmhadoopdn-03:/usr/local/hadoop/etc/hadoop
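    The four commands above differ only in the target host, so they can be driven by a single loop. A sketch (push_conf is a hypothetical helper; the leading echo makes it a dry run that only prints each command — drop the echo to actually execute):

```shell
# push_conf: print the rsync command that would sync the Hadoop config
# directory to each host passed as an argument (dry run via echo).
push_conf() {
    local src=/usr/local/hadoop/etc/hadoop
    local host
    for host in "$@"; do
        echo rsync -az --progress "$src/" "root@$host:$src"
    done
}

push_conf sht-sgmhadoopnn-02 sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03
```

    Note the trailing slash: `$src/` syncs the directory contents (including dotfiles), whereas the `$src/*` form above expands the glob in the local shell.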

    Note: if yarn-site.xml sets the "yarn.resourcemanager.ha.id" parameter to rm1, the value must be changed to rm2 on resourcemanager sht-sgmhadoopnn-02; otherwise starting the ResourceManager on sht-sgmhadoopnn-02 fails with the following error:

    2019-03-27 18:01:35,210 INFO org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
    2019-03-27 18:01:35,214 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
    java.net.BindException: Port in use: sht-sgmhadoopnn-01:8088
    	at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:940)
    	at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:876)
    	at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:306)
    	at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:952)
    	at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1052)
    	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    	at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1186)
    Caused by: java.net.BindException: Cannot assign requested address
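    Rather than editing yarn-site.xml by hand on sht-sgmhadoopnn-02, the flip from rm1 to rm2 can be scripted. A sketch with sed (fix_ha_id is a hypothetical helper); this works only because yarn.resourcemanager.ha.id is the sole property in this file whose value is exactly rm1 — the rm-ids value rm1,rm2 does not match the pattern:

```shell
# fix_ha_id: in the given yarn-site.xml, rewrite the value rm1 to rm2.
# Safe here only because no other property's value is exactly "rm1".
fix_ha_id() {
    sed -i 's|<value>rm1</value>|<value>rm2</value>|' "$1"
}

conf=/usr/local/hadoop/etc/hadoop/yarn-site.xml
if [ -f "$conf" ]; then
    fix_ha_id "$conf"    # run this on sht-sgmhadoopnn-02 only
fi
```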

    3. Start the YARN cluster on the sht-sgmhadoopnn-01 node

    # start-yarn.sh
    
    # jps
    8928 Jps
    5345 NameNode
    8833 ResourceManager
    5647 DFSZKFailoverController

    4. Start the ResourceManager on the sht-sgmhadoopnn-02 node

    # yarn-daemon.sh start resourcemanager
    
    # jps
    30435 DFSZKFailoverController
    30997 NameNode
    31420 Jps
    31342 ResourceManager

    5. Check the role status on each NodeManager node

    sht-sgmhadoopdn-01

    # jps
    8275 Jps
    5273 QuorumPeerMain
    5897 DataNode
    5614 JournalNode
    7950 NodeManager

    sht-sgmhadoopdn-02

    # jps
    22720 QuorumPeerMain
    24917 JournalNode
    26358 DataNode
    8278 NodeManager
    8619 Jps

    sht-sgmhadoopdn-03

    # jps
    530 JournalNode
    4722 NodeManager
    5240 Jps
    858 DataNode
    32527 QuorumPeerMain

    6. Check ResourceManager HA via the web UIs

    http://172.16.101.55:8088/cluster/cluster

    http://172.16.101.56:8088/cluster/cluster

    7. YARN failover test

    1). Check node roles

    # yarn rmadmin -getServiceState rm1
    active
    # yarn rmadmin -getServiceState rm2
    standby
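    The two rmadmin queries above can be wrapped in a small helper that reports which RM id currently holds the active role. A sketch (rm_state and active_rm are hypothetical helpers; rm_state simply shells out to the real `yarn rmadmin -getServiceState` command shown above):

```shell
# rm_state: query the HA state ("active"/"standby") of one ResourceManager id.
rm_state() {
    yarn rmadmin -getServiceState "$1" 2>/dev/null
}

# active_rm: print the first RM id reporting "active"; fail if neither does.
active_rm() {
    local id
    for id in rm1 rm2; do
        if [ "$(rm_state "$id")" = "active" ]; then
            echo "$id"
            return 0
        fi
    done
    echo "no active resourcemanager found" >&2
    return 1
}
```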

    2). Kill the ResourceManager on the current active node

    # jps
    5345 NameNode
    9778 Jps
    9318 ResourceManager
    5647 DFSZKFailoverController
    
    # kill -9 9318

    Meanwhile, observe the standby node's log

    2019-03-28 15:42:16,670 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioned to active state

    As shown, after the active node crashes, the standby node takes over as the new active node.

    Full log output:

    2019-03-28 15:42:16,064 INFO org.apache.hadoop.ha.ActiveStandbyElector: Checking for any old active which needs to be fenced...
    2019-03-28 15:42:16,067 INFO org.apache.hadoop.ha.ActiveStandbyElector: Old node exists: 0a0c7961726e2d636c75737465721203726d31
    2019-03-28 15:42:16,067 INFO org.apache.hadoop.ha.ActiveStandbyElector: Writing znode /yarn-leader-election/yarn-cluster/ActiveBreadCrumb to indicate that the local node is the most recent active...
    2019-03-28 15:42:16,084 INFO org.apache.hadoop.conf.Configuration: found resource yarn-site.xml at file:/usr/local/hadoop-2.7.4/etc/hadoop/yarn-site.xml
    2019-03-28 15:42:16,088 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root	OPERATION=refreshAdminAcls	TARGET=AdminService	RESULT=SUCCESS
    2019-03-28 15:42:16,089 INFO org.apache.hadoop.conf.Configuration: found resource capacity-scheduler.xml at file:/usr/local/hadoop-2.7.4/etc/hadoop/capacity-scheduler.xml
    2019-03-28 15:42:16,129 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Re-initializing queues...
    2019-03-28 15:42:16,131 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration: max alloc mb per queue for root is undefined
    2019-03-28 15:42:16,131 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration: max alloc vcore per queue for root is undefined
    2019-03-28 15:42:16,132 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: root, capacity=1.0, asboluteCapacity=1.0, maxCapacity=1.0, asboluteMaxCapacity=1.0, state=RUNNING, acls=SUBMIT_APP:*ADMINISTER_QUEUE:*, labels=*,
    , reservationsContinueLooking=true
    2019-03-28 15:42:16,132 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Initialized parent-queue root name=root, fullname=root
    2019-03-28 15:42:16,135 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration: max alloc mb per queue for root.default is undefined
    2019-03-28 15:42:16,135 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration: max alloc vcore per queue for root.default is undefined
    2019-03-28 15:42:16,135 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Initializing default
    capacity = 1.0 [= (float) configuredCapacity / 100 ]
    asboluteCapacity = 1.0 [= parentAbsoluteCapacity * capacity ]
    maxCapacity = 1.0 [= configuredMaxCapacity ]
    absoluteMaxCapacity = 1.0 [= 1.0 maximumCapacity undefined, (parentAbsoluteMaxCapacity * maximumCapacity) / 100 otherwise ]
    userLimit = 100 [= configuredUserLimit ]
    userLimitFactor = 1.0 [= configuredUserLimitFactor ]
    maxApplications = 10000 [= configuredMaximumSystemApplicationsPerQueue or (int)(configuredMaximumSystemApplications * absoluteCapacity)]
    maxApplicationsPerUser = 10000 [= (int)(maxApplications * (userLimit / 100.0f) * userLimitFactor) ]
    usedCapacity = 0.0 [= usedResourcesMemory / (clusterResourceMemory * absoluteCapacity)]
    absoluteUsedCapacity = 0.0 [= usedResourcesMemory / clusterResourceMemory]
    maxAMResourcePerQueuePercent = 0.1 [= configuredMaximumAMResourcePercent ]
    minimumAllocationFactor = 0.75 [= (float)(maximumAllocationMemory - minimumAllocationMemory) / maximumAllocationMemory ]
    maximumAllocation = <memory:4096, vCores:4> [= configuredMaxAllocation ]
    numContainers = 0 [= currentNumContainers ]
    state = RUNNING [= configuredState ]
    acls = SUBMIT_APP:*ADMINISTER_QUEUE:* [= configuredAcls ]
    nodeLocalityDelay = 40
    labels=*,
    nodeLocalityDelay = 40
    reservationsContinueLooking = true
    preemptionDisabled = true
    
    2019-03-28 15:42:16,135 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Initialized queue: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=0, numContainers=0
    2019-03-28 15:42:16,136 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Initialized queue: root: numChildQueue= 1, capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>usedCapacity=0.0, numApps=0, numContainers=0
    2019-03-28 15:42:16,138 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration: max alloc mb per queue for root is undefined
    2019-03-28 15:42:16,138 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration: max alloc vcore per queue for root is undefined
    2019-03-28 15:42:16,139 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: root, capacity=1.0, asboluteCapacity=1.0, maxCapacity=1.0, asboluteMaxCapacity=1.0, state=RUNNING, acls=SUBMIT_APP:*ADMINISTER_QUEUE:*, labels=*,
    , reservationsContinueLooking=true
    2019-03-28 15:42:16,140 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration: max alloc mb per queue for root.default is undefined
    2019-03-28 15:42:16,140 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration: max alloc vcore per queue for root.default is undefined
    2019-03-28 15:42:16,141 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Initializing default
    capacity = 1.0 [= (float) configuredCapacity / 100 ]
    asboluteCapacity = 1.0 [= parentAbsoluteCapacity * capacity ]
    maxCapacity = 1.0 [= configuredMaxCapacity ]
    absoluteMaxCapacity = 1.0 [= 1.0 maximumCapacity undefined, (parentAbsoluteMaxCapacity * maximumCapacity) / 100 otherwise ]
    userLimit = 100 [= configuredUserLimit ]
    userLimitFactor = 1.0 [= configuredUserLimitFactor ]
    maxApplications = 10000 [= configuredMaximumSystemApplicationsPerQueue or (int)(configuredMaximumSystemApplications * absoluteCapacity)]
    maxApplicationsPerUser = 10000 [= (int)(maxApplications * (userLimit / 100.0f) * userLimitFactor) ]
    usedCapacity = 0.0 [= usedResourcesMemory / (clusterResourceMemory * absoluteCapacity)]
    absoluteUsedCapacity = 0.0 [= usedResourcesMemory / clusterResourceMemory]
    maxAMResourcePerQueuePercent = 0.1 [= configuredMaximumAMResourcePercent ]
    minimumAllocationFactor = 0.75 [= (float)(maximumAllocationMemory - minimumAllocationMemory) / maximumAllocationMemory ]
    maximumAllocation = <memory:4096, vCores:4> [= configuredMaxAllocation ]
    numContainers = 0 [= currentNumContainers ]
    state = RUNNING [= configuredState ]
    acls = SUBMIT_APP:*ADMINISTER_QUEUE:* [= configuredAcls ]
    nodeLocalityDelay = 40
    labels=*,
    nodeLocalityDelay = 40
    reservationsContinueLooking = true
    preemptionDisabled = true
    
    2019-03-28 15:42:16,141 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: root: re-configured queue: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=0, numContainers=0
    2019-03-28 15:42:16,141 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Initialized queue mappings, override: false
    2019-03-28 15:42:16,142 INFO org.apache.hadoop.conf.Configuration: found resource yarn-site.xml at file:/usr/local/hadoop-2.7.4/etc/hadoop/yarn-site.xml
    2019-03-28 15:42:16,146 INFO org.apache.hadoop.util.HostsFileReader: Setting the includes file to 
    2019-03-28 15:42:16,146 INFO org.apache.hadoop.util.HostsFileReader: Setting the excludes file to 
    2019-03-28 15:42:16,146 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
    2019-03-28 15:42:16,146 INFO org.apache.hadoop.conf.Configuration: found resource core-site.xml at file:/usr/local/hadoop-2.7.4/etc/hadoop/core-site.xml
    2019-03-28 15:42:16,146 INFO org.apache.hadoop.conf.Configuration: found resource yarn-site.xml at file:/usr/local/hadoop-2.7.4/etc/hadoop/yarn-site.xml
    2019-03-28 15:42:16,151 INFO org.apache.hadoop.conf.Configuration: found resource core-site.xml at file:/usr/local/hadoop-2.7.4/etc/hadoop/core-site.xml
    2019-03-28 15:42:16,151 INFO org.apache.hadoop.security.Groups: clearing userToGroupsMap cache
    2019-03-28 15:42:16,151 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioning to active state
    2019-03-28 15:42:16,152 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181 sessionTimeout=10000 watcher=null
    2019-03-28 15:42:16,156 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Created new ZK connection
    2019-03-28 15:42:16,160 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server sht-sgmhadoopdn-03/172.16.101.60:2181. Will not attempt to authenticate using SASL (unknown error)
    2019-03-28 15:42:16,161 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to sht-sgmhadoopdn-03/172.16.101.60:2181, initiating session
    2019-03-28 15:42:16,166 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server sht-sgmhadoopdn-03/172.16.101.60:2181, sessionid = 0x369bfa5e6bf0007, negotiated timeout = 10000
    2019-03-28 15:42:16,201 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Fencing node /rmstore/ZKRMStateRoot/RM_ZK_FENCING_LOCK doesn't exist to delete
    2019-03-28 15:42:16,231 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Recovery started
    2019-03-28 15:42:16,236 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Loaded RM state version info 1.2
    2019-03-28 15:42:16,276 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Watcher event type: None with state:SyncConnected for path:null for Service org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore in state org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: STARTED
    2019-03-28 15:42:16,277 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: ZKRMStateStore Session connected
    2019-03-28 15:42:16,277 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: ZooKeeper sync operation succeeded. path: /rmstore/ZKRMStateRoot
    2019-03-28 15:42:16,347 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager: recovering RMDelegationTokenSecretManager.
    2019-03-28 15:42:16,352 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager: Recovering 0 applications
    2019-03-28 15:42:16,352 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Recovery ended
    2019-03-28 15:42:16,353 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager: Rolling master-key for container-tokens
    2019-03-28 15:42:16,355 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Rolling master-key for nm-tokens
    2019-03-28 15:42:16,355 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
    2019-03-28 15:42:16,356 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager: storing master key with keyID 5
    2019-03-28 15:42:16,356 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing RMDTMasterKey.
    2019-03-28 15:42:16,362 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
    2019-03-28 15:42:16,362 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
    2019-03-28 15:42:16,362 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager: storing master key with keyID 6
    2019-03-28 15:42:16,362 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing RMDTMasterKey.
    2019-03-28 15:42:16,365 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.nodelabels.event.NodeLabelsStoreEventType for class org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager$ForwardingEventHandler
    2019-03-28 15:42:16,376 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 5000
    2019-03-28 15:42:16,378 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8031
    2019-03-28 15:42:16,393 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.api.ResourceTrackerPB to the server
    2019-03-28 15:42:16,394 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
    2019-03-28 15:42:16,394 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8031: starting
    2019-03-28 15:42:16,441 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 5000
    2019-03-28 15:42:16,451 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8030
    2019-03-28 15:42:16,589 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB to the server
    2019-03-28 15:42:16,590 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
    2019-03-28 15:42:16,590 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8030: starting
    2019-03-28 15:42:16,641 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 5000
    2019-03-28 15:42:16,642 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8032
    2019-03-28 15:42:16,647 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ApplicationClientProtocolPB to the server
    2019-03-28 15:42:16,655 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
    2019-03-28 15:42:16,655 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8032: starting
    2019-03-28 15:42:16,670 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioned to active state
    2019-03-28 15:42:16,670 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root	OPERATION=transitionToActive	TARGET=RMHAProtocolService	RESULT=SUCCESS
    2019-03-28 15:42:16,705 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: Node not found resyncing sht-sgmhadoopdn-02:34736
    2019-03-28 15:42:17,738 INFO org.apache.hadoop.yarn.util.RackResolver: Resolved sht-sgmhadoopdn-02 to /default-rack
    2019-03-28 15:42:17,741 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: NodeManager from node sht-sgmhadoopdn-02(cmPort: 34736 httpPort: 8042) registered with capability: <memory:4096, vCores:8>, assigned nodeId sht-sgmhadoopdn-02:34736
    2019-03-28 15:42:17,745 INFO org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: sht-sgmhadoopdn-02:34736 Node Transitioned from NEW to RUNNING
    2019-03-28 15:42:17,750 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added node sht-sgmhadoopdn-02:34736 clusterResource: <memory:4096, vCores:8>
    2019-03-28 15:42:17,753 INFO org.apache.hadoop.yarn.server.resourcemanager.RMActiveServiceContext: Scheduler recovery is done. Start allocating new containers.
    2019-03-28 15:42:19,164 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: Node not found resyncing sht-sgmhadoopdn-03:14993
    2019-03-28 15:42:20,167 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: Node not found resyncing sht-sgmhadoopdn-01:21177
    2019-03-28 15:42:20,169 INFO org.apache.hadoop.yarn.util.RackResolver: Resolved sht-sgmhadoopdn-03 to /default-rack
    2019-03-28 15:42:20,169 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: NodeManager from node sht-sgmhadoopdn-03(cmPort: 14993 httpPort: 8042) registered with capability: <memory:4096, vCores:8>, assigned nodeId sht-sgmhadoopdn-03:14993
    2019-03-28 15:42:20,169 INFO org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: sht-sgmhadoopdn-03:14993 Node Transitioned from NEW to RUNNING
    2019-03-28 15:42:20,171 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added node sht-sgmhadoopdn-03:14993 clusterResource: <memory:8192, vCores:16>
    2019-03-28 15:42:21,171 INFO org.apache.hadoop.yarn.util.RackResolver: Resolved sht-sgmhadoopdn-01 to /default-rack
    2019-03-28 15:42:21,171 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: NodeManager from node sht-sgmhadoopdn-01(cmPort: 21177 httpPort: 8042) registered with capability: <memory:4096, vCores:8>, assigned nodeId sht-sgmhadoopdn-01:21177
    2019-03-28 15:42:21,172 INFO org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: sht-sgmhadoopdn-01:21177 Node Transitioned from NEW to RUNNING
    2019-03-28 15:42:21,172 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added node sht-sgmhadoopdn-01:21177 clusterResource: <memory:12288, vCores:24>

    3). Restart the ResourceManager on the former active node

    # yarn-daemon.sh start resourcemanager

    Meanwhile, observe the former active node's log

    2019-03-28 15:52:37,822 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioning to standby state
    2019-03-28 15:52:37,823 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioned to standby state

    After the former active node comes back and detects that the cluster already has an active node, it demotes itself to standby.

    Full log output:

    2019-03-28 15:52:35,747 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: STARTUP_MSG: 
    /************************************************************
    STARTUP_MSG: Starting ResourceManager
    STARTUP_MSG:   host = sht-sgmhadoopnn-01/172.16.101.55
    STARTUP_MSG:   args = []
    STARTUP_MSG:   version = 2.7.4
    STARTUP_MSG:   classpath = /usr/local/hadoop-2.7.4/etc/hadoop:/usr/local/hadoop-2.7.4/etc/hadoop:/usr/local/hadoop-2.7.4/etc/hadoop:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/curator-client-2.7.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/com
mon/lib/activation-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jsch-0.1.54.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/hadoop-auth-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/
local/hadoop-2.7.4/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/hadoop-annotations-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/hadoop-nfs-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/hadoop-common-2.7.4-tests.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/hadoop-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.7.4/sh
are/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-2.7.4-tests.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/lo
cal/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-client-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-api-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.4.jar:/usr/local/hadoop-2.7.4/share/had
oop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-registry-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduc
e/lib/hadoop-annotations-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.4-tests.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.4.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-client-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-api-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-registry-2.7.4.jar:/usr/local/hadoop-2.7.4/share/had
oop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop
-2.7.4/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.7.4/etc/hadoop/rm-config/log4j.properties
    STARTUP_MSG:   build = https://shv@git-wip-us.apache.org/repos/asf/hadoop.git -r cd915e1e8d9d0131462a0b7301586c175728a282; compiled by 'kshvachk' on 2017-08-01T00:29Z
    STARTUP_MSG:   java = 1.8.0_45
    ************************************************************/
    2019-03-28 15:52:35,765 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: registered UNIX signal handlers for [TERM, HUP, INT]
    2019-03-28 15:52:36,320 INFO org.apache.hadoop.conf.Configuration: found resource core-site.xml at file:/usr/local/hadoop-2.7.4/etc/hadoop/core-site.xml
    2019-03-28 15:52:36,499 INFO org.apache.hadoop.security.Groups: clearing userToGroupsMap cache
    2019-03-28 15:52:36,635 INFO org.apache.hadoop.conf.Configuration: found resource yarn-site.xml at file:/usr/local/hadoop-2.7.4/etc/hadoop/yarn-site.xml
    2019-03-28 15:52:36,949 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.RMFatalEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMFatalEventDispatcher
    2019-03-28 15:52:37,206 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: NMTokenKeyRollingInterval: 86400000ms and NMTokenKeyActivationDelay: 900000ms
    2019-03-28 15:52:37,211 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager: ContainerTokenKeyRollingInterval: 86400000ms and ContainerTokenKeyActivationDelay: 900000ms
    2019-03-28 15:52:37,221 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: AMRMTokenKeyRollingInterval: 86400000ms and AMRMTokenKeyActivationDelay: 900000 ms
    2019-03-28 15:52:37,281 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStoreFactory: Using RMStateStore implementation - class org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore
    2019-03-28 15:52:37,285 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStoreEventType for class org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler
    2019-03-28 15:52:37,309 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.NodesListManagerEventType for class org.apache.hadoop.yarn.server.resourcemanager.NodesListManager
    2019-03-28 15:52:37,309 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Using Scheduler: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
    2019-03-28 15:52:37,331 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.SchedulerEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher
    2019-03-28 15:52:37,332 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher
    2019-03-28 15:52:37,333 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher
    2019-03-28 15:52:37,334 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$NodeEventDispatcher
    2019-03-28 15:52:37,422 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
    2019-03-28 15:52:37,536 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
    2019-03-28 15:52:37,536 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: ResourceManager metrics system started
    2019-03-28 15:52:37,558 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.RMAppManagerEventType for class org.apache.hadoop.yarn.server.resourcemanager.RMAppManager
    2019-03-28 15:52:37,567 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncherEventType for class org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher
    2019-03-28 15:52:37,571 INFO org.apache.hadoop.yarn.server.resourcemanager.RMNMInfo: Registered RMNMInfo MBean
    2019-03-28 15:52:37,574 INFO org.apache.hadoop.yarn.security.YarnAuthorizationProvider: org.apache.hadoop.yarn.security.ConfiguredYarnAuthorizer is instiantiated.
    2019-03-28 15:52:37,575 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
    2019-03-28 15:52:37,577 INFO org.apache.hadoop.conf.Configuration: found resource capacity-scheduler.xml at file:/usr/local/hadoop-2.7.4/etc/hadoop/capacity-scheduler.xml
    2019-03-28 15:52:37,653 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration: max alloc mb per queue for root is undefined
    2019-03-28 15:52:37,654 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration: max alloc vcore per queue for root is undefined
    2019-03-28 15:52:37,660 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: root, capacity=1.0, asboluteCapacity=1.0, maxCapacity=1.0, asboluteMaxCapacity=1.0, state=RUNNING, acls=SUBMIT_APP:*ADMINISTER_QUEUE:*, labels=*,
    , reservationsContinueLooking=true
    2019-03-28 15:52:37,660 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Initialized parent-queue root name=root, fullname=root
    2019-03-28 15:52:37,675 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration: max alloc mb per queue for root.default is undefined
    2019-03-28 15:52:37,675 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration: max alloc vcore per queue for root.default is undefined
    2019-03-28 15:52:37,677 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Initializing default
    capacity = 1.0 [= (float) configuredCapacity / 100 ]
    asboluteCapacity = 1.0 [= parentAbsoluteCapacity * capacity ]
    maxCapacity = 1.0 [= configuredMaxCapacity ]
    absoluteMaxCapacity = 1.0 [= 1.0 maximumCapacity undefined, (parentAbsoluteMaxCapacity * maximumCapacity) / 100 otherwise ]
    userLimit = 100 [= configuredUserLimit ]
    userLimitFactor = 1.0 [= configuredUserLimitFactor ]
    maxApplications = 10000 [= configuredMaximumSystemApplicationsPerQueue or (int)(configuredMaximumSystemApplications * absoluteCapacity)]
    maxApplicationsPerUser = 10000 [= (int)(maxApplications * (userLimit / 100.0f) * userLimitFactor) ]
    usedCapacity = 0.0 [= usedResourcesMemory / (clusterResourceMemory * absoluteCapacity)]
    absoluteUsedCapacity = 0.0 [= usedResourcesMemory / clusterResourceMemory]
    maxAMResourcePerQueuePercent = 0.1 [= configuredMaximumAMResourcePercent ]
    minimumAllocationFactor = 0.75 [= (float)(maximumAllocationMemory - minimumAllocationMemory) / maximumAllocationMemory ]
    maximumAllocation = <memory:4096, vCores:32> [= configuredMaxAllocation ]
    numContainers = 0 [= currentNumContainers ]
    state = RUNNING [= configuredState ]
    acls = SUBMIT_APP:*ADMINISTER_QUEUE:* [= configuredAcls ]
    nodeLocalityDelay = 40
    labels=*,
    nodeLocalityDelay = 40
    reservationsContinueLooking = true
    preemptionDisabled = true
    
    2019-03-28 15:52:37,677 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Initialized queue: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=0, numContainers=0
    2019-03-28 15:52:37,678 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Initialized queue: root: numChildQueue= 1, capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>usedCapacity=0.0, numApps=0, numContainers=0
    2019-03-28 15:52:37,679 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Initialized root queue root: numChildQueue= 1, capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>usedCapacity=0.0, numApps=0, numContainers=0
    2019-03-28 15:52:37,679 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Initialized queue mappings, override: false
    2019-03-28 15:52:37,679 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Initialized CapacityScheduler with calculator=class org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator, minimumAllocation=<<memory:1024, vCores:1>>, maximumAllocation=<<memory:4096, vCores:32>>, asynchronousScheduling=false, asyncScheduleInterval=5ms
    2019-03-28 15:52:37,726 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
    2019-03-28 15:52:37,726 INFO org.apache.zookeeper.ZooKeeper: Client environment:host.name=sht-sgmhadoopnn-01
    2019-03-28 15:52:37,726 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.8.0_45
    2019-03-28 15:52:37,726 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
    2019-03-28 15:52:37,726 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.home=/usr/java/jdk1.8.0_45/jre
    2019-03-28 15:52:37,726 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.class.path=/usr/local/hadoop-2.7.4/etc/hadoop:/usr/local/hadoop-2.7.4/etc/hadoop:/usr/local/hadoop-2.7.4/etc/hadoop:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/curator-client-2.7.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/c
ommons-configuration-1.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jsch-0.1.54.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/hadoop-auth-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/paranamer-2.3.jar:/usr/l
ocal/hadoop-2.7.4/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/hadoop-annotations-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/hadoop-nfs-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/hadoop-common-2.7.4-tests.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/hadoop-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.7.4/share
/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-2.7.4-tests.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr
/local/hadoop-2.7.4/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-client-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-api-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/had
oop-yarn-server-web-proxy-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-registry-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/
lib/aopalliance-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.4-tests.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.4.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-client-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-api-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/
yarn/hadoop-yarn-registry-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop-2.
7.4/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.7.4/etc/hadoop/rm-config/log4j.properties
    2019-03-28 15:52:37,727 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/usr/local/hadoop-2.7.4/lib/native
    2019-03-28 15:52:37,727 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
    2019-03-28 15:52:37,727 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
    2019-03-28 15:52:37,727 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
    2019-03-28 15:52:37,727 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
    2019-03-28 15:52:37,727 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=3.10.0-514.el7.x86_64
    2019-03-28 15:52:37,727 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=root
    2019-03-28 15:52:37,727 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/root
    2019-03-28 15:52:37,727 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/usr/local/hadoop-2.7.4
    2019-03-28 15:52:37,729 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181 sessionTimeout=10000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@102cec62
    2019-03-28 15:52:37,758 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server sht-sgmhadoopdn-03/172.16.101.60:2181. Will not attempt to authenticate using SASL (unknown error)
    2019-03-28 15:52:37,768 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to sht-sgmhadoopdn-03/172.16.101.60:2181, initiating session
    2019-03-28 15:52:37,779 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server sht-sgmhadoopdn-03/172.16.101.60:2181, sessionid = 0x369bfa5e6bf0008, negotiated timeout = 10000
    2019-03-28 15:52:37,805 INFO org.apache.hadoop.ha.ActiveStandbyElector: Successfully created /yarn-leader-election/yarn-cluster in ZK.
    2019-03-28 15:52:37,818 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session connected.
    2019-03-28 15:52:37,822 INFO org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsPublisher: YARN system metrics publishing service is not enabled
    2019-03-28 15:52:37,822 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioning to standby state
    2019-03-28 15:52:37,823 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioned to standby state
    2019-03-28 15:52:38,257 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
    2019-03-28 15:52:38,269 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
    2019-03-28 15:52:38,280 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.resourcemanager is not defined
    2019-03-28 15:52:38,294 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
    2019-03-28 15:52:38,300 INFO org.apache.hadoop.http.HttpServer2: Added filter RMAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context cluster
    2019-03-28 15:52:38,300 INFO org.apache.hadoop.http.HttpServer2: Added filter RMAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context logs
    2019-03-28 15:52:38,300 INFO org.apache.hadoop.http.HttpServer2: Added filter RMAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context static
    2019-03-28 15:52:38,301 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context cluster
    2019-03-28 15:52:38,301 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
    2019-03-28 15:52:38,301 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
    2019-03-28 15:52:38,308 INFO org.apache.hadoop.http.HttpServer2: adding path spec: /cluster/*
    2019-03-28 15:52:38,308 INFO org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
    2019-03-28 15:52:38,937 INFO org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
    2019-03-28 15:52:38,942 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 8088
    2019-03-28 15:52:38,942 INFO org.mortbay.log: jetty-6.1.26
    2019-03-28 15:52:38,984 INFO org.mortbay.log: Extract jar:file:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-common-2.7.4.jar!/webapps/cluster to /tmp/Jetty_sht.sgmhadoopnn.01_8088_cluster____f553m4/webapp
    2019-03-28 15:52:39,327 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
    2019-03-28 15:52:39,343 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
    2019-03-28 15:52:39,343 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
    2019-03-28 15:52:40,641 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@sht-sgmhadoopnn-01:8088
    2019-03-28 15:52:40,641 INFO org.apache.hadoop.yarn.webapp.WebApps: Web app cluster started at 8088
    2019-03-28 15:52:40,679 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 100
    2019-03-28 15:52:40,690 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8033
    2019-03-28 15:52:40,885 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.api.ResourceManagerAdministrationProtocolPB to the server
    2019-03-28 15:52:40,899 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
    2019-03-28 15:52:40,901 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8033: starting
    2019-03-28 15:52:40,932 INFO org.apache.hadoop.conf.Configuration: found resource yarn-site.xml at file:/usr/local/hadoop-2.7.4/etc/hadoop/yarn-site.xml
    2019-03-28 15:52:40,938 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root	OPERATION=refreshAdminAcls	TARGET=AdminService	RESULT=SUCCESS
    2019-03-28 15:52:40,945 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Already in standby state
    2019-03-28 15:52:40,945 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root	OPERATION=transitionToStandby	TARGET=RMHAProtocolService	RESULT=SUCCESS
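The log above shows this ResourceManager transitioned to standby after the ZooKeeper election (the winner holds `/yarn-leader-election/yarn-cluster`). A quick way to confirm which RM is active is to query each RM's HA state and inspect the election znode. This is a minimal sketch: `rm1`/`rm2` are assumed values of `yarn.resourcemanager.ha.rm-ids`; adjust them to match your `yarn-site.xml`.

```shell
# Query the HA state of each ResourceManager.
# rm1/rm2 are assumed yarn.resourcemanager.ha.rm-ids; one should
# report "active" and the other "standby".
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2

# Inspect the leader-election znode that the log shows being created,
# using the zkCli shipped with the ZooKeeper install from this guide.
/usr/local/zookeeper/bin/zkCli.sh -server sht-sgmhadoopdn-01:2181 \
    ls /yarn-leader-election/yarn-cluster
```

If both RMs report `standby`, check the ZooKeeper quorum first, since the election cannot complete without it.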
  • Original article: https://www.cnblogs.com/ilifeilong/p/10610993.html