• Two common errors when setting up Hadoop: "Bad connection to FS. command aborted. exception" and "Shutting down NameNode at hadoop"


    1. Problem symptoms:

    Error output:

    Bad connection to FS. command aborted. exception: Call to localhost/127.0.0.1:8888 failed on connection exception: java.net.ConnectException: Connection refused: no further information

    At first I suspected that the FS service (the NameNode) had not started, but stopping and restarting it several times made no difference. On the advice of a more experienced user, I reformatted the NameNode, and that fixed the problem.

    The format command is as follows (run from Hadoop's bin directory):

     

    Shell command:

    $ ./hadoop namenode -format

    Once the format succeeds, restart Hadoop and the error should be gone.
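    A minimal sketch of the full stop / reformat / restart / verify sequence on a Hadoop 1.x single-node setup, assuming you are in the Hadoop installation directory (adjust paths to your own layout):

    # Stop any running daemons before touching the NameNode metadata
    $ bin/stop-all.sh

    # Reformat the NameNode -- this wipes the HDFS metadata, so only do it
    # on a cluster whose data you can afford to lose
    $ bin/hadoop namenode -format

    # Restart and verify
    $ bin/start-all.sh
    $ jps                    # should list NameNode, DataNode, SecondaryNameNode, ...
    $ bin/hadoop fs -ls /    # should no longer fail with "Bad connection to FS"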

     

    2. If the error persists, delete the DFS files by hand

     

    Error output:

    $ bin/hadoop fs -ls /
    11/08/18 17:02:35 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
    11/08/18 17:02:37 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
    11/08/18 17:02:39 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
    11/08/18 17:02:41 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
    11/08/18 17:02:43 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
    11/08/18 17:02:45 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
    11/08/18 17:02:46 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
    11/08/18 17:02:48 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
    11/08/18 17:02:50 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).
    11/08/18 17:02:52 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s).
    Bad connection to FS. command aborted.
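    All ten retries to localhost:9000 fail with "Connection refused". Before wiping any data, it can be worth confirming that no NameNode is actually listening on that port (i.e. whatever fs.default.name points at); a quick check with standard tools:

    $ jps | grep NameNode        # is a NameNode process running at all?
    $ netstat -tln | grep 9000   # is anything listening on the fs.default.name port?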

    If you get the message "Bad connection to FS. command aborted.", delete all of the DFS data on your DataNode and reformat the NameNode.

    That is: first delete everything under the tmp directory (the tmp directory on drive D in this setup), then repeat the steps from point 1 above, as sketched below.
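    A minimal sketch of that clean-up on a Linux-style install, assuming the DFS data lives under hadoop.tmp.dir (the /usr/java/hadoop/tmp path matches the logs further down; substitute your own location, such as the drive-D tmp directory mentioned above):

    # Stop all daemons before deleting anything
    $ bin/stop-all.sh

    # Remove the stale DFS data; use whatever hadoop.tmp.dir, dfs.name.dir and
    # dfs.data.dir point to in core-site.xml / hdfs-site.xml
    $ rm -rf /usr/java/hadoop/tmp/dfs

    # Reformat the NameNode and bring the cluster back up
    $ bin/hadoop namenode -format
    $ bin/start-all.sh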

    Why "hadoop namenode -format" can fail

    [root@hadoop home]# hadoop namenode -format
    Warning: $HADOOP_HOME is deprecated.


    14/01/24 11:41:59 INFO namenode.NameNode: STARTUP_MSG: 
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG:   host = hadoop/192.168.174.174
    STARTUP_MSG:   args = [-format]
    STARTUP_MSG:   version = 1.1.2
    STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:03:24 UTC 2013
    ************************************************************/
    Re-format filesystem in /usr/java/hadoop/tmp/dfs/name ? (Y or N) y
    Format aborted in /usr/java/hadoop/tmp/dfs/name
    14/01/24 11:42:04 INFO namenode.NameNode: SHUTDOWN_MSG: 
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at hadoop/192.168.174.174
    ************************************************************/

    The format aborted because the re-format prompt in this Hadoop version is case sensitive: it only accepts an uppercase "Y", so answering "y" aborts the format. Hadoop was started anyway, and the NameNode web page at http://hdp0:5007 would not load.

    Deleting the entire /usr/java/hadoop/tmp/dfs folder and then formatting again (so that no re-format prompt appears) succeeded:

    [root@hadoop dfs]# hadoop namenode -format
    Warning: $HADOOP_HOME is deprecated.


    14/01/24 11:44:59 INFO namenode.NameNode: STARTUP_MSG: 
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG:   host = hadoop/192.168.174.174
    STARTUP_MSG:   args = [-format]
    STARTUP_MSG:   version = 1.1.2
    STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:03:24 UTC 2013
    ************************************************************/
    14/01/24 11:44:59 INFO util.GSet: VM type       = 32-bit
    14/01/24 11:44:59 INFO util.GSet: 2% max memory = 19.33375 MB
    14/01/24 11:44:59 INFO util.GSet: capacity      = 2^22 = 4194304 entries
    14/01/24 11:44:59 INFO util.GSet: recommended=4194304, actual=4194304
    14/01/24 11:45:00 INFO namenode.FSNamesystem: fsOwner=root
    14/01/24 11:45:00 INFO namenode.FSNamesystem: supergroup=supergroup
    14/01/24 11:45:00 INFO namenode.FSNamesystem: isPermissionEnabled=false
    14/01/24 11:45:00 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
    14/01/24 11:45:00 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
    14/01/24 11:45:00 INFO namenode.NameNode: Caching file names occuring more than 10 times 
    14/01/24 11:45:00 INFO common.Storage: Image file of size 110 saved in 0 seconds.
    14/01/24 11:45:00 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/usr/java/hadoop/tmp/dfs/name/current/edits
    14/01/24 11:45:00 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/usr/java/hadoop/tmp/dfs/name/current/edits
    14/01/24 11:45:01 INFO common.Storage: Storage directory /usr/java/hadoop/tmp/dfs/name has been successfully formatted.
    14/01/24 11:45:01 INFO namenode.NameNode: SHUTDOWN_MSG: 
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at hadoop/192.168.174.174
    ************************************************************/

    A successful format on another node (hdp0, running Hadoop 0.20.2) produces similar output:

    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG:   host = hdp0/192.168.221.100
    STARTUP_MSG:   args = [-format]
    STARTUP_MSG:   version = 0.20.2
    STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
    ************************************************************/
    11/04/12 15:33:30 INFO namenode.FSNamesystem: fsOwner=hadoop,hadoop,adm,dialout,cdrom,plugdev,lpadmin,admin,sambashare
    11/04/12 15:33:30 INFO namenode.FSNamesystem: supergroup=supergroup
    11/04/12 15:33:30 INFO namenode.FSNamesystem: isPermissionEnabled=true
    11/04/12 15:33:31 INFO common.Storage: Image file of size 96 saved in 0 seconds.
    11/04/12 15:33:31 INFO common.Storage: Storage directory /home/hadoop/dfs/name has been successfully formatted.
    11/04/12 15:33:31 INFO namenode.NameNode: SHUTDOWN_MSG: 
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at hdp0/192.168.221.100
    ************************************************************/
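    After a successful format and restart, the NameNode web UI should come back as well. A quick check, assuming the default Hadoop 0.20/1.x NameNode HTTP port 50070 (set by dfs.http.address) and substituting your own hostname:

    $ jps | grep NameNode          # the NameNode process should be running
    $ curl -sI http://hdp0:50070/  # the NameNode web UI should respond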


  • Original article: https://www.cnblogs.com/yangkai-cn/p/4016687.html