All HDFS shell commands:
[uploaduser@rickiyang ~]$ hadoop fs
Usage: hadoop fs [generic options]
[-appendToFile <localsrc> ... <dst>]
[-cat [-ignoreCrc] <src> ...]
[-checksum <src> ...]
[-chgrp [-R] GROUP PATH...]
[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
[-chown [-R] [OWNER][:[GROUP]] PATH...]
[-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst>]
[-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-count [-q] [-h] <path> ...]
[-cp [-f] [-p | -p[topax]] <src> ... <dst>]
[-createSnapshot <snapshotDir> [<snapshotName>]]
[-deleteSnapshot <snapshotDir> <snapshotName>]
[-df [-h] [<path> ...]]
[-du [-s] [-h] <path> ...]
[-expunge]
[-find <path> ... <expression> ...]
[-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-getfacl [-R] <path>]
[-getfattr [-R] {-n name | -d} [-e en] <path>]
[-getmerge [-nl] <src> <localdst>]
[-help [cmd ...]]
[-ls [-d] [-h] [-R] [<path> ...]]
[-mkdir [-p] <path> ...]
[-moveFromLocal <localsrc> ... <dst>]
[-moveToLocal <src> <localdst>]
[-mv <src> ... <dst>]
[-put [-f] [-p] [-l] <localsrc> ... <dst>]
[-renameSnapshot <snapshotDir> <oldName> <newName>]
[-rm [-f] [-r|-R] [-skipTrash] <src> ...]
[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
[-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
[-setfattr {-n name [-v value] | -x name} <path>]
[-setrep [-R] [-w] <rep> <path> ...]
[-stat [format] <path> ...]
[-tail [-f] <file>]
[-test -[defsz] <path>]
[-text [-ignoreCrc] <src> ...]
[-touchz <path> ...]
[-truncate [-w] <length> <path> ...]
[-usage [cmd ...]]
Generic options supported are
-conf <configuration file> specify an application configuration file
-D <property=value> use value for given property
-fs <local|namenode:port> specify a namenode
-jt <local|resourcemanager:port> specify a ResourceManager
-files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars> specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines.
The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
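The generic options go before the subcommand. For example, to run -ls against an explicitly named namenode (the hostname and port below are placeholders; substitute your cluster's):
[uploaduser@rickiyang ~]$ hadoop fs -fs hdfs://namenode:8020 -ls /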
1> -ls: display the directory contents (here starting from the root path)
[uploaduser@rickiyang ~]$ hadoop fs -ls /
2> -R: display the directory tree recursively
[uploaduser@rickiyang ~]$ hadoop fs -ls -R /
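The -ls flags can be combined; per the usage above, -h prints file sizes in human-readable units:
[uploaduser@rickiyang ~]$ hadoop fs -ls -h /tmp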
3> Create a text file with some content
echo "hello world" > test1.txt
4> Upload files
Upload a file, renaming it at the destination
[uploaduser@rickiyang tmp]$ hadoop fs -put t1.txt /tmp/test1.txt
Upload a directory (if /tmp/pkg does not exist, newPkg is uploaded as /tmp/pkg; if it already exists, the result is /tmp/pkg/newPkg)
[uploaduser@rickiyang tmp]$ hadoop fs -put newPkg /tmp/pkg
Upload multiple files at once (the last argument is the destination and must be an existing directory)
[uploaduser@rickiyang tmp]$ hadoop fs -put t1.txt test1.txt /tmp
Overwriting on upload: if a file with the same name already exists at the destination, the upload fails unless -f is given to force the overwrite. With no destination path, -put writes to the user's HDFS home directory:
[uploaduser@rickiyang tmp]$ hadoop fs -put -f /tmp/test1.txt
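copyFromLocal behaves the same as put, except that the source is restricted to the local file system, so this is equivalent to the first upload above:
[uploaduser@rickiyang tmp]$ hadoop fs -copyFromLocal t1.txt /tmp/test1.txt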
5> Append content to the end of a file
[uploaduser@rickiyang tmp]$ hadoop fs -appendToFile /tmp/test2.txt /user/uploaduser/test1.txt
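-appendToFile can also read from stdin when the source is given as -, which is handy for appending ad-hoc text without creating a local file first:
[uploaduser@rickiyang tmp]$ echo "more lines" | hadoop fs -appendToFile - /user/uploaduser/test1.txt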
6> View the contents of a file on HDFS
[uploaduser@rickiyang tmp]$ hadoop fs -cat /user/uploaduser/test1.txt
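For a quick look at just the end of a file there is also -tail, which prints the last kilobyte; -f keeps following appended data, as with the Linux tail:
[uploaduser@rickiyang tmp]$ hadoop fs -tail /user/uploaduser/test1.txt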
7> Download a file/directory
From HDFS to the local file system
[uploaduser@rickiyang ~]$ hadoop fs -get /user/uploaduser/test2.txt /tmp/1.txt
An equivalent command to get is copyToLocal
[uploaduser@rickiyang ~]$ hadoop fs -copyToLocal /user/uploaduser/test2.txt /tmp/1.txt
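When a directory holds many part files (typical MapReduce output), -getmerge concatenates everything under an HDFS path into a single local file; -nl adds a newline after each file. The output directory name below is illustrative:
[uploaduser@rickiyang ~]$ hadoop fs -getmerge /user/uploaduser/output /tmp/merged.txt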
The remaining commands work much like their regular Linux counterparts; try them out in practice.
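A few of the most common ones, all following their Linux namesakes (paths below are illustrative):
[uploaduser@rickiyang ~]$ hadoop fs -mkdir -p /tmp/a/b    # create directories; -p makes parents as needed
[uploaduser@rickiyang ~]$ hadoop fs -mv /tmp/a/b /tmp/c   # move/rename within HDFS
[uploaduser@rickiyang ~]$ hadoop fs -du -s -h /tmp        # summarized, human-readable space usage
[uploaduser@rickiyang ~]$ hadoop fs -rm -r /tmp/c         # delete a directory recursively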