• HDFS: NameNode fails to start


    1. Environment configuration:

      -1. core-site.xml

    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://bigdata-study-104:8020</value>
        </property>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/opt/modules/cdh/hadoop-2.5.0-cdh5.3.6/data/tmp</value>
        </property>
    </configuration>

      -2. hdfs-site.xml

    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property>
        <property>
            <name>dfs.permissions.enabled</name>
            <value>false</value>
        </property>
        <property>
            <name>dfs.namenode.secondary.http-address</name>
            <value>bigdata-study-104:50090</value>
        </property>
    </configuration>
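
      Before formatting, it can help to confirm that these files are actually being picked up. A quick optional check with the stock hdfs getconf tool, run from the Hadoop install root (the expected values are the ones configured above):

    # Print the effective value of a key exactly as the daemons will resolve it
    bin/hdfs getconf -confKey fs.defaultFS     # expect hdfs://bigdata-study-104:8020
    bin/hdfs getconf -confKey hadoop.tmp.dir   # expect /opt/modules/cdh/hadoop-2.5.0-cdh5.3.6/data/tmp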

     

    2. Formatting log

    [walloce@bigdata-study-104 hadoop-2.5.0-cdh5.3.6]$ bin/hdfs namenode -fotmate
    18/01/01 04:20:11 INFO namenode.NameNode: STARTUP_MSG: 
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG:   host = bigdata-study-104/192.168.192.104
    STARTUP_MSG:   args = [-fotmate]
    STARTUP_MSG:   version = 2.5.0-cdh5.3.6
    STARTUP_MSG:   classpath = /opt/modules/cdh/hadoop-2.5.0-cdh5.3.6/etc/hadoop:/opt/modules/cdh/hadoop-2.5.0-cdh5.3.6/share/hadoop/common/lib/stax-api-1.0-2.jar: ... (several thousand characters of jar paths trimmed for readability) ... :/contrib/capacity-scheduler/*.jar
    STARTUP_MSG:   build = http://github.com/cloudera/hadoop -r 6743ef286bfdd317b600adbdb154f982cf2fac7a; compiled by 'jenkins' on 2015-07-28T22:14Z
    STARTUP_MSG:   java = 1.7.0_67
    ************************************************************/
    18/01/01 04:20:11 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
    18/01/01 04:20:11 INFO namenode.NameNode: createNameNode [-fotmate]
    Usage: java NameNode [-backup] | 
        [-checkpoint] | 
        [-format [-clusterid cid ] [-force] [-nonInteractive] ] | 
        [-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] | 
        [-upgradeOnly [-clusterid cid] [-renameReserved<k-v pairs>] ] | 
        [-rollback] | 
        [-rollingUpgrade <rollback|downgrade|started> ] | 
        [-finalize] | 
        [-importCheckpoint] | 
        [-initializeSharedEdits] | 
        [-bootstrapStandby] | 
        [-recover [ -force] ] | 
        [-metadataVersion ]  ]

    18/01/01 04:20:11 INFO namenode.NameNode: SHUTDOWN_MSG: 
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at bigdata-study-104/192.168.192.104
    ************************************************************/

    3. The format seemed to complete with no obvious error message (in hindsight, the usage text printed above was the warning sign), so I started the DataNode and NameNode; the likely commands are sketched below.
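
      The exact start commands are not shown at this point in the post; judging from the restart in step 4 below, they were presumably the per-daemon scripts:

    # Presumed start sequence (the same scripts are used again in step 4)
    sbin/hadoop-daemon.sh start datanode
    sbin/hadoop-daemon.sh start namenode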

      Checking the processes with jps showed only the DataNode; there was no NameNode.

      So I looked at the NameNode log: tail -100 logs/hadoop-walloce-namenode-bigdata-study-104.log

      

    2018-01-01 04:21:06,497 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
    2018-01-01 04:21:06,588 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
    2018-01-01 04:21:06,588 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
    2018-01-01 04:21:06,589 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://bigdata-study-104:8020
    2018-01-01 04:21:06,590 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use bigdata-study-104:8020 to access this namenode/service.
    2018-01-01 04:21:06,722 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    2018-01-01 04:21:06,907 INFO org.apache.hadoop.hdfs.DFSUtil: Starting web server as: ${dfs.web.authentication.kerberos.principal}
    2018-01-01 04:21:06,908 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
    2018-01-01 04:21:06,961 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
    2018-01-01 04:21:06,968 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
    2018-01-01 04:21:06,990 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
    2018-01-01 04:21:06,993 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
    2018-01-01 04:21:06,999 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
    2018-01-01 04:21:06,999 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
    2018-01-01 04:21:07,059 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
    2018-01-01 04:21:07,070 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
    2018-01-01 04:21:07,099 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
    2018-01-01 04:21:07,099 INFO org.mortbay.log: jetty-6.1.26.cloudera.4
    2018-01-01 04:21:07,326 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
    2018-01-01 04:21:07,402 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
    2018-01-01 04:21:07,402 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
    2018-01-01 04:21:07,449 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
    2018-01-01 04:21:07,455 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
    2018-01-01 04:21:07,492 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
    2018-01-01 04:21:07,492 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
    2018-01-01 04:21:07,494 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
    2018-01-01 04:21:07,495 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2018 Jan 01 04:21:07
    2018-01-01 04:21:07,497 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
    2018-01-01 04:21:07,497 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
    2018-01-01 04:21:07,498 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB
    2018-01-01 04:21:07,498 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
    2018-01-01 04:21:07,502 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
    2018-01-01 04:21:07,503 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 1
    2018-01-01 04:21:07,503 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
    2018-01-01 04:21:07,503 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
    2018-01-01 04:21:07,503 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
    2018-01-01 04:21:07,503 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
    2018-01-01 04:21:07,503 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
    2018-01-01 04:21:07,503 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
    2018-01-01 04:21:07,503 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
    2018-01-01 04:21:07,507 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = walloce (auth:SIMPLE)
    2018-01-01 04:21:07,507 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
    2018-01-01 04:21:07,507 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = false
    2018-01-01 04:21:07,507 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
    2018-01-01 04:21:07,509 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
    2018-01-01 04:21:07,668 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
    2018-01-01 04:21:07,668 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
    2018-01-01 04:21:07,668 INFO org.apache.hadoop.util.GSet: 1.0% max memory 889 MB = 8.9 MB
    2018-01-01 04:21:07,668 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
    2018-01-01 04:21:07,669 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
    2018-01-01 04:21:07,675 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
    2018-01-01 04:21:07,676 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
    2018-01-01 04:21:07,676 INFO org.apache.hadoop.util.GSet: 0.25% max memory 889 MB = 2.2 MB
    2018-01-01 04:21:07,676 INFO org.apache.hadoop.util.GSet: capacity      = 2^18 = 262144 entries
    2018-01-01 04:21:07,677 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
    2018-01-01 04:21:07,677 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
    2018-01-01 04:21:07,677 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
    2018-01-01 04:21:07,678 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
    2018-01-01 04:21:07,678 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
    2018-01-01 04:21:07,680 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
    2018-01-01 04:21:07,680 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
    2018-01-01 04:21:07,680 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
    2018-01-01 04:21:07,680 INFO org.apache.hadoop.util.GSet: capacity      = 2^15 = 32768 entries
    2018-01-01 04:21:07,684 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: ACLs enabled? false
    2018-01-01 04:21:07,684 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: XAttrs enabled? true
    2018-01-01 04:21:07,684 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: Maximum size of an xattr: 16384
    2018-01-01 04:21:07,685 WARN org.apache.hadoop.hdfs.server.common.Storage: Storage directory /opt/modules/cdh/hadoop-2.5.0-cdh5.3.6/data/tmp/dfs/name does not exist
    2018-01-01 04:21:07,686 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
    org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /opt/modules/cdh/hadoop-2.5.0-cdh5.3.6/data/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:314)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1006)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:736)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:533)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:589)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:756)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:740)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1430)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1496)
    2018-01-01 04:21:07,689 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
    2018-01-01 04:21:07,790 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
    2018-01-01 04:21:07,791 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
    2018-01-01 04:21:07,791 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
    2018-01-01 04:21:07,791 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
    org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /opt/modules/cdh/hadoop-2.5.0-cdh5.3.6/data/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:314)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1006)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:736)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:533)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:589)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:756)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:740)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1430)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1496)
    2018-01-01 04:21:07,795 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
    2018-01-01 04:21:07,797 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at bigdata-study-104/192.168.192.104
    ************************************************************/

      The clearly visible error near the end is:

    2018-01-01 04:21:07,791 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
    org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /opt/modules/cdh/hadoop-2.5.0-cdh5.3.6/data/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.

    The directory was missing because the format never actually ran: the command earlier was mistyped as -fotmate, so the NameNode did not recognize the argument, printed its usage text, and exited without creating anything.

    Fix:

      Delete the stale directory (I had installed Hadoop here before; the problem only appeared after removing that install and setting it up again): /user/tmp (see the cleanup sketch below).
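
      A minimal sketch of the cleanup, assuming the leftover data lives under the hadoop.tmp.dir configured in core-site.xml (adjust the path to whichever directory your previous install actually used):

    # Stop any daemons that are still running
    sbin/hadoop-daemon.sh stop namenode
    sbin/hadoop-daemon.sh stop datanode

    # Remove the leftover storage directory from the previous install.
    # WARNING: this destroys all HDFS metadata and block data under that path.
    rm -rf /opt/modules/cdh/hadoop-2.5.0-cdh5.3.6/data/tmp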

      Re-run the format, with the option spelled correctly this time: bin/hdfs namenode -format

    18/01/01 04:28:00 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
    18/01/01 04:28:00 INFO namenode.NameNode: createNameNode [-format]
    18/01/01 04:28:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Formatting using clusterid: CID-292b734e-c2be-4a96-98c8-4399eaec7a59
    18/01/01 04:28:01 INFO namenode.FSNamesystem: No KeyProvider found.
    18/01/01 04:28:01 INFO namenode.FSNamesystem: fsLock is fair:true
    18/01/01 04:28:01 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
    18/01/01 04:28:01 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
    18/01/01 04:28:01 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
    18/01/01 04:28:01 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Jan 01 04:28:01
    18/01/01 04:28:01 INFO util.GSet: Computing capacity for map BlocksMap
    18/01/01 04:28:01 INFO util.GSet: VM type       = 64-bit
    18/01/01 04:28:01 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
    18/01/01 04:28:01 INFO util.GSet: capacity      = 2^21 = 2097152 entries
    18/01/01 04:28:01 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
    18/01/01 04:28:01 INFO blockmanagement.BlockManager: defaultReplication         = 1
    18/01/01 04:28:01 INFO blockmanagement.BlockManager: maxReplication             = 512
    18/01/01 04:28:01 INFO blockmanagement.BlockManager: minReplication             = 1
    18/01/01 04:28:01 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
    18/01/01 04:28:01 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
    18/01/01 04:28:01 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
    18/01/01 04:28:01 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
    18/01/01 04:28:01 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
    18/01/01 04:28:02 INFO namenode.FSNamesystem: fsOwner             = walloce (auth:SIMPLE)
    18/01/01 04:28:02 INFO namenode.FSNamesystem: supergroup          = supergroup
    18/01/01 04:28:02 INFO namenode.FSNamesystem: isPermissionEnabled = false
    18/01/01 04:28:02 INFO namenode.FSNamesystem: HA Enabled: false
    18/01/01 04:28:02 INFO namenode.FSNamesystem: Append Enabled: true
    18/01/01 04:28:02 INFO util.GSet: Computing capacity for map INodeMap
    18/01/01 04:28:02 INFO util.GSet: VM type       = 64-bit
    18/01/01 04:28:02 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
    18/01/01 04:28:02 INFO util.GSet: capacity      = 2^20 = 1048576 entries
    18/01/01 04:28:02 INFO namenode.NameNode: Caching file names occuring more than 10 times
    18/01/01 04:28:02 INFO util.GSet: Computing capacity for map cachedBlocks
    18/01/01 04:28:02 INFO util.GSet: VM type       = 64-bit
    18/01/01 04:28:02 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
    18/01/01 04:28:02 INFO util.GSet: capacity      = 2^18 = 262144 entries
    18/01/01 04:28:02 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
    18/01/01 04:28:02 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
    18/01/01 04:28:02 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
    18/01/01 04:28:02 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
    18/01/01 04:28:02 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
    18/01/01 04:28:02 INFO util.GSet: Computing capacity for map NameNodeRetryCache
    18/01/01 04:28:02 INFO util.GSet: VM type       = 64-bit
    18/01/01 04:28:02 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
    18/01/01 04:28:02 INFO util.GSet: capacity      = 2^15 = 32768 entries
    18/01/01 04:28:02 INFO namenode.NNConf: ACLs enabled? false
    18/01/01 04:28:02 INFO namenode.NNConf: XAttrs enabled? true
    18/01/01 04:28:02 INFO namenode.NNConf: Maximum size of an xattr: 16384
    18/01/01 04:28:02 INFO namenode.FSImage: Allocated new BlockPoolId: BP-849058852-192.168.192.104-1514752082294
    18/01/01 04:28:02 INFO common.Storage: Storage directory /opt/modules/cdh/hadoop-2.5.0-cdh5.3.6/data/tmp/dfs/name has been successfully formatted.
    18/01/01 04:28:02 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
    18/01/01 04:28:02 INFO util.ExitUtil: Exiting with status 0
    18/01/01 04:28:02 INFO namenode.NameNode: SHUTDOWN_MSG: 
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at bigdata-study-104/192.168.192.104
    ************************************************************/

    The key lines:

    18/01/01 04:28:02 INFO namenode.FSImage: Allocated new BlockPoolId: BP-849058852-192.168.192.104-1514752082294
    18/01/01 04:28:02 INFO common.Storage: Storage directory /opt/modules/cdh/hadoop-2.5.0-cdh5.3.6/data/tmp/dfs/name has been successfully formatted.
    18/01/01 04:28:02 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
    18/01/01 04:28:02 INFO util.ExitUtil: Exiting with status 0

    These confirm that the format succeeded.
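
    As an optional double-check, the format step writes a VERSION file under the newly created name directory, which can be inspected directly:

    # The format creates fsimage and VERSION files under dfs/name/current
    ls  /opt/modules/cdh/hadoop-2.5.0-cdh5.3.6/data/tmp/dfs/name/current
    cat /opt/modules/cdh/hadoop-2.5.0-cdh5.3.6/data/tmp/dfs/name/current/VERSION
    # Expect storageType=NAME_NODE and the clusterID printed during the format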

    4. Restart HDFS.

      

    [walloce@bigdata-study-104 hadoop-2.5.0-cdh5.3.6]$ sbin/hadoop-daemon.sh start datanode
    starting datanode, logging to /opt/modules/cdh/hadoop-2.5.0-cdh5.3.6/logs/hadoop-walloce-datanode-bigdata-study-104.out
    [walloce@bigdata-study-104 hadoop-2.5.0-cdh5.3.6]$ sbin/hadoop-daemon.sh start namenode
    starting namenode, logging to /opt/modules/cdh/hadoop-2.5.0-cdh5.3.6/logs/hadoop-walloce-namenode-bigdata-study-104.out
    [walloce@bigdata-study-104 hadoop-2.5.0-cdh5.3.6]$ ^C
    [walloce@bigdata-study-104 hadoop-2.5.0-cdh5.3.6]$ jps
    3097 Jps
    2970 NameNode
    2894 DataNode

    Done: the rebuild succeeded (still a beginner finding my way)!
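
    Beyond jps, a couple of optional sanity checks (the web port comes from the startup log earlier; hostname and paths are the same as above):

    # Ask the NameNode for a cluster summary over RPC; this fails if it is down
    bin/hdfs dfsadmin -report

    # The NameNode web UI should answer on the port shown in the startup log
    curl -s -o /dev/null -w "%{http_code}\n" http://bigdata-study-104:50070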

    Reference: http://blog.csdn.net/caoshichaocaoshichao/article/details/12879821

    Back to first intentions, though the time is already gone!