• HDFS datanode fails to start


    In hadoop-root-datanode-ubuntu.log:
    2015-03-12 23:52:33,671 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000. Exiting.
    java.io.IOException: Incompatible clusterIDs in /hdfs/name/dfs/data: namenode clusterID = CID-70d64aad-1dfe-4f87-af15-d53ff80db3dd; datanode clusterID = CID-388a9ec6-cb87-4b0d-97c4-3b4d5c787b76
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:646)
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:320)
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:403)
            at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:422)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1311)
            at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1276)
            at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:314)
            at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
            at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:828)
            at java.lang.Thread.run(Thread.java:745)
    2015-03-12 23:52:33,680 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000
    2015-03-12 23:52:33,788 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
    2015-03-12 23:52:35,790 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
    2015-03-12 23:52:35,791 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
    2015-03-12 23:52:35,792 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down DataNode at ubuntu/127.0.1.1
    ************************************************************/
     
    Cause:
    After the namenode is reformatted, its clusterID no longer matches the clusterID recorded by the datanode, so the datanode cannot start.
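    The mismatch can be confirmed by comparing the clusterID values in the two VERSION files. A minimal check, assuming the datanode directory /hdfs/name/dfs/data from the log above and a namenode directory of /hdfs/name/dfs/name (the latter is an assumption; use your actual dfs.name.dir and dfs.data.dir values):
    # clusterID generated when the namenode was formatted (namenode path assumed)
    grep clusterID /hdfs/name/dfs/name/current/VERSION
    # clusterID the datanode was initialized with (path taken from the log)
    grep clusterID /hdfs/name/dfs/data/current/VERSION
    # the datanode refuses to start whenever the two values differ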
    Additionally:
    This error also causes the following failure when loading data into Hive (the CREATE TABLE statement itself succeeds, because the table metadata is not stored in HDFS):
    hive> load data local inpath '/root/dbfile' overwrite into table employees PARTITION (country='US', state='IL');
    Loading data to table default.employees partition (country=US, state=IL)
    Failed with exception Unable to move source file:/root/dbfile to destination hdfs://localhost:9000/user/hive/warehouse/employees/country=US/state=IL/dbfile
    FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTas
     
    Solution:
    Delete the directories in which HDFS stores its data and reformat HDFS (relevant parameters: dfs.name.dir, dfs.data.dir):
    hadoop namenode -format
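    A sketch of the full recovery sequence under the same path assumptions as above (this wipes all data stored in HDFS; substitute your own dfs.name.dir and dfs.data.dir directories):
    # stop HDFS before touching the storage directories
    stop-dfs.sh
    # remove the old namenode and datanode storage directories
    rm -rf /hdfs/name/dfs/name /hdfs/name/dfs/data
    # reformat the namenode so a fresh clusterID is generated
    hadoop namenode -format
    # restart HDFS; the datanode should now register with the matching clusterID
    start-dfs.sh
    After HDFS is back up, the Hive load shown above should succeed as well.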
     
  • Original post: https://www.cnblogs.com/JingJ/p/4336926.html