• HDFS: the absolute path does not exist, and the file's physical location on disk cannot be found


    This is set in the dfs.datanode.data.dir property, which defaults to file://${hadoop.tmp.dir}/dfs/data (see hdfs-default.xml for details).
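
    If you want to confirm what this property actually resolves to on your own installation, one way (a sketch, assuming a standard Hadoop setup with the hdfs client on your PATH) is to query the configuration directly:

    # Print the effective datanode storage directory
    hdfs getconf -confKey dfs.datanode.data.dir

    # Print hadoop.tmp.dir, which the default value expands from
    hdfs getconf -confKey hadoop.tmp.dir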

    However, in your case, the problem is that you are not using the full path within HDFS. Instead, do:

    hadoop fs -ls /usr/local/myhadoop-tmp/

    Note that you also seem to be confusing the path within HDFS with the path in your local file system. Within HDFS, your file is in /usr/local/myhadoop-tmp/. In your local file system (given your configuration), it is under /usr/local/myhadoop-tmp/dfs/data/; in there, there is a directory structure and naming convention defined by HDFS that is independent of whatever path in HDFS you decide to use. The file also won't keep its name, since it is divided into blocks and each block is assigned a unique ID; a block's name looks like blk_1073741826.
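
    If you want to see which blocks back a particular HDFS file without digging through the datanode's local directories, the fsck tool can report them. A sketch, where the file name somefile.txt is only a placeholder:

    # List the blocks (with IDs like blk_1073741826) and their datanode locations for a file
    hdfs fsck /usr/local/myhadoop-tmp/somefile.txt -files -blocks -locations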

    To conclude: the local path used by the datanode is NOT the same as the path you use in HDFS. You can go into your local directory and look for files, but you should not, since you could mess up the HDFS metadata management. Just use the hadoop command-line tools to copy/move/read files within HDFS, using any logical path (in HDFS) that you wish. These paths within HDFS do not need to be tied to the paths you used for your local datanode storage (there is no reason to do so, and no advantage in doing so).
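
    For illustration, a few typical operations done entirely through the HDFS client (the local file example.txt and the target path /data/ are just placeholders):

    # Copy a local file into HDFS under whatever logical path you choose
    hadoop fs -put ./example.txt /usr/local/myhadoop-tmp/example.txt

    # Read and move it, again purely through the HDFS namespace
    hadoop fs -cat /usr/local/myhadoop-tmp/example.txt
    hadoop fs -mv /usr/local/myhadoop-tmp/example.txt /data/example.txt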

  • Original post: https://www.cnblogs.com/zwingblog/p/6800624.html