I. hadoop fs
1. mkdir: create a directory
[root@master hadoop-2.7.0]# hadoop fs -mkdir /testdir1
[root@master hadoop-2.7.0]# hadoop fs -ls /
Found 2 items
drwxr-xr-x   - root supergroup          0 2018-05-07 11:27 /test
drwxr-xr-x   - root supergroup          0 2018-05-18 09:27 /testdir1
Add -p to create nested (multi-level) directories:
[root@master hadoop-2.7.0]# hadoop fs -mkdir -p /aa/bb/cc
[root@master hadoop-2.7.0]# hadoop fs -ls /
Found 3 items
drwxr-xr-x   - root supergroup          0 2018-05-18 09:28 /aa
drwxr-xr-x   - root supergroup          0 2018-05-07 11:27 /test
drwxr-xr-x   - root supergroup          0 2018-05-18 09:27 /testdir1
2. ls: list all files and directories under a given path
[root@master hadoop-2.7.0]# hadoop fs -ls /
Found 1 items
drwxr-xr-x   - root supergroup          0 2018-05-07 11:27 /test
Add -R to list directories and files recursively, at every level:
[root@master hadoop-2.7.0]# hadoop fs -ls -R /
drwxr-xr-x   - root supergroup          0 2018-05-18 09:28 /aa
drwxr-xr-x   - root supergroup          0 2018-05-18 09:28 /aa/bb
drwxr-xr-x   - root supergroup          0 2018-05-18 09:28 /aa/bb/cc
drwxr-xr-x   - root supergroup          0 2018-05-07 11:27 /test
drwxr-xr-x   - root supergroup          0 2018-05-18 09:27 /testdir1
3. copyFromLocal: copy local files into an HDFS directory. It is similar to put, except that the source is restricted to local files. Add -f to force an overwrite of an existing file, and several files can be copied in one command (see the sketch after the example below).
[root@master hadoop-2.7.0]# touch /root/file1.txt
[root@master hadoop-2.7.0]# hadoop fs -copyFromLocal /root/file1.txt /testdir1
[root@master hadoop-2.7.0]# hadoop fs -ls /testdir1
Found 1 items
-rw-r--r--   2 root supergroup          0 2018-05-18 09:33 /testdir1/file1.txt
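A minimal sketch of forced overwrite and multi-file copying (the local file /root/file2.txt is hypothetical and is not created above):

# -f overwrites the existing /testdir1/file1.txt instead of failing
hadoop fs -copyFromLocal -f /root/file1.txt /testdir1
# several local sources can be copied into one HDFS directory at once
hadoop fs -copyFromLocal /root/file1.txt /root/file2.txt /testdir1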
4. put: copy files to HDFS. The input can also be read from stdin (a difference from copyFromLocal; in that case dst is a file).
Usage: hadoop fs -put <localsrc> ... <dst>
1) Copy a local file to an HDFS directory (same as copyFromLocal)
[root@master hadoop-2.7.0]# hadoop fs -put /root/file1.txt /aa
[root@master hadoop-2.7.0]# hadoop fs -ls /aa
Found 2 items
drwxr-xr-x   - root supergroup          0 2018-05-18 09:28 /aa/bb
-rw-r--r--   2 root supergroup          0 2018-05-18 09:40 /aa/file1.txt
2) Write to HDFS from the standard input stream
[root@master hadoop-2.7.0]# echo abc | hadoop fs -put - /aa/file2.txt
[root@master hadoop-2.7.0]# hadoop fs -ls /aa
Found 3 items
drwxr-xr-x   - root supergroup          0 2018-05-18 09:28 /aa/bb
-rw-r--r--   2 root supergroup          0 2018-05-18 09:40 /aa/file1.txt
-rw-r--r--   2 root supergroup          4 2018-05-18 09:44 /aa/file2.txt
5. cat: print the contents of a file
[root@master hadoop-2.7.0]# hadoop fs -cat /test/readme.txt
total 0
drwxr-xr-x.  3 root  root   18 May  7 10:21 dfs
drwxr-xr-x. 10 10021 10021 161 May  7 10:22 hadoop-2.7.0
drwxr-xr-x.  4 root  root   30 May  7 09:15 hdfs
drwxr-xr-x.  3 root  root   17 May  7 10:41 tmp
If the file is large, pipe the output to more to page through it one screen at a time:
hadoop fs -cat /test/readme.txt | more
6. rm: delete files or directories
1) Delete a file
[root@master hadoop-2.7.0]# hadoop fs -rm /aa/file2.txt
rm: Failed to get server trash configuration: null. Consider using -skipTrash option
[root@master hadoop-2.7.0]# hadoop fs -rm -skipTrash /aa/file2.txt
Deleted /aa/file2.txt
2) Delete a directory (add -r to remove recursively)
[root@master hadoop-2.7.0]# hadoop fs -rm -r -skipTrash /aa/bb
Deleted /aa/bb
[root@master hadoop-2.7.0]# hadoop fs -ls /aa
Found 1 items
-rw-r--r--   2 root supergroup          0 2018-05-18 09:40 /aa/file1.txt
7. cp: copy files within HDFS. The command accepts multiple source paths, in which case the destination must be a directory (see the sketch after the example below).
[root@master hadoop-2.7.0]# hadoop fs -cp /aa/file1.txt /aa/file3.txt
18/05/18 10:01:01 WARN hdfs.DFSClient: DFSInputStream has been closed already
[root@master hadoop-2.7.0]# hadoop fs -ls /aa
Found 2 items
-rw-r--r--   2 root supergroup          0 2018-05-18 09:40 /aa/file1.txt
-rw-r--r--   2 root supergroup          0 2018-05-18 10:01 /aa/file3.txt
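A minimal sketch of the multi-source form, reusing the files above (the destination must be an existing directory, here /testdir1):

# with several sources, the last argument must be a directory
hadoop fs -cp /aa/file1.txt /aa/file3.txt /testdir1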
8. get: copy a file from HDFS to the local filesystem
[root@master hadoop-2.7.0]# hadoop fs -get /aa/file3.txt /root/
18/05/18 10:04:11 WARN hdfs.DFSClient: DFSInputStream has been closed already
[root@master hadoop-2.7.0]# ll /root/
total 8
-rw-------. 1 root root 1260 Apr 19 18:36 anaconda-ks.cfg
-rw-r--r--. 1 root root  784 Apr 19 11:35 authorized_keys
-rw-r--r--. 1 root root    0 May 18 09:32 file1.txt
-rw-r--r--. 1 root root    0 May 18 10:04 file3.txt
9. copyToLocal: similar to get, except that the destination is restricted to a local file path.
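A minimal sketch (the local target name file1_copy.txt is hypothetical):

# same effect as get above, but the destination must be a local path
hadoop fs -copyToLocal /aa/file1.txt /root/file1_copy.txt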
10. mv: move (rename) files within HDFS
[root@master hadoop-2.7.0]# hadoop fs -mv /aa/file3.txt /aa/file4.txt
[root@master hadoop-2.7.0]# hadoop fs -ls /aa
Found 2 items
-rw-r--r--   2 root supergroup          0 2018-05-18 09:40 /aa/file1.txt
-rw-r--r--   2 root supergroup          0 2018-05-18 10:01 /aa/file4.txt
In summary, usage closely mirrors the equivalent Linux commands, so the remaining ones are not listed one by one.
II. hdfs dfsadmin (administration commands)
1. -report: show filesystem information and statistics
[root@master hadoop-2.7.0]# hdfs dfsadmin -report
Configured Capacity: 47204802560 (43.96 GB)
Present Capacity: 43612942336 (40.62 GB)
DFS Remaining: 43612909568 (40.62 GB)
DFS Used: 32768 (32 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (2):

Name: 192.168.137.102:50010 (node2)
Hostname: node2
Decommission Status : Normal
Configured Capacity: 23602401280 (21.98 GB)
DFS Used: 16384 (16 KB)
Non DFS Used: 1795612672 (1.67 GB)
DFS Remaining: 21806772224 (20.31 GB)
DFS Used%: 0.00%
DFS Remaining%: 92.39%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri May 18 10:13:58 EDT 2018

Name: 192.168.137.101:50010 (node1)
Hostname: node1
Decommission Status : Normal
Configured Capacity: 23602401280 (21.98 GB)
DFS Used: 16384 (16 KB)
Non DFS Used: 1796247552 (1.67 GB)
DFS Remaining: 21806137344 (20.31 GB)
DFS Used%: 0.00%
DFS Remaining%: 92.39%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri May 18 10:13:58 EDT 2018
2. -safemode enter | leave | get | wait: safe mode commands. Safe mode is a NameNode state in which changes to the namespace are not accepted (read-only) and blocks are neither replicated nor deleted. The NameNode enters safe mode automatically at startup and leaves it automatically once the configured minimum percentage of blocks satisfies the minimum replication condition. enter switches it on manually; leave switches it off.
[root@master hadoop-2.7.0]# hdfs dfsadmin -safemode get
Safe mode is OFF
[root@master hadoop-2.7.0]# hdfs dfsadmin -safemode enter
Safe mode is ON
[root@master hadoop-2.7.0]# hadoop fs -ls /aa
Found 2 items
-rw-r--r--   2 root supergroup          0 2018-05-18 09:40 /aa/file1.txt
-rw-r--r--   2 root supergroup          0 2018-05-18 10:01 /aa/file4.txt
[root@master hadoop-2.7.0]# hadoop fs -rm -skipTrash /aa/file4.txt
rm: Cannot delete /aa/file4.txt. Name node is in safe mode.
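To restore normal operation afterward (the write operations later in this article assume safe mode has been left):

hdfs dfsadmin -safemode leave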
3. -refreshNodes: re-read the hosts and exclude files, so that newly added nodes, or nodes that should leave the cluster, are re-recognized by the NameNode. Used when commissioning or decommissioning nodes.
[root@master hadoop-2.7.0]# hdfs dfsadmin -refreshNodes
Refresh nodes successful
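The files that -refreshNodes re-reads are the ones pointed to by the dfs.hosts and dfs.hosts.exclude properties; a minimal hdfs-site.xml sketch, with hypothetical file paths:

<!-- hdfs-site.xml: lists re-read by hdfs dfsadmin -refreshNodes -->
<property>
  <name>dfs.hosts</name>
  <!-- hypothetical path: hosts allowed to connect as DataNodes -->
  <value>/home/hadoop/hadoop-2.7.0/etc/hadoop/dfs.include</value>
</property>
<property>
  <name>dfs.hosts.exclude</name>
  <!-- hypothetical path: hosts to be decommissioned -->
  <value>/home/hadoop/hadoop-2.7.0/etc/hadoop/dfs.exclude</value>
</property>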
4. -finalizeUpgrade: finalize an HDFS upgrade. The DataNodes delete their working directories from the previous version, and then the NameNode does the same, completing the upgrade process.
[root@master hadoop-2.7.0]# hdfs dfsadmin -finalizeUpgrade
Finalize upgrade successful
5. -metasave <filename>: save the NameNode's primary data structures to <filename> in the directory specified by the hadoop.log.dir property (by default under the Hadoop installation directory, here /home/hadoop/hadoop-2.7.0). <filename> will contain one line for each of the following:
1. DataNodes heartbeating with the NameNode
2. Blocks waiting to be replicated
3. Blocks currently being replicated
4. Blocks waiting to be deleted
[root@master hadoop-2.7.0]# hdfs dfsadmin -metasave newlog.log
Created metasave file newlog.log in the log directory of namenode hdfs://192.168.137.100:9000
[root@master logs]# pwd
/home/hadoop/hadoop-2.7.0/logs
[root@master logs]# ll
total 940
-rw-r--r--. 1 root root 254161 May 18 10:26 hadoop-root-namenode-master.log
-rw-r--r--. 1 root root    714 May 18 09:21 hadoop-root-namenode-master.out
-rw-r--r--. 1 root root    714 May  7 11:36 hadoop-root-namenode-master.out.1
-rw-r--r--. 1 root root    714 May  7 11:24 hadoop-root-namenode-master.out.2
-rw-r--r--. 1 root root    714 May  7 11:19 hadoop-root-namenode-master.out.3
-rw-r--r--. 1 root root    714 May  7 11:00 hadoop-root-namenode-master.out.4
-rw-r--r--. 1 root root    714 May  7 10:49 hadoop-root-namenode-master.out.5
-rw-r--r--. 1 root root 267631 May 18 10:16 hadoop-root-secondarynamenode-master.log
-rw-r--r--. 1 root root    714 May 18 09:21 hadoop-root-secondarynamenode-master.out
-rw-r--r--. 1 root root    714 May  7 11:36 hadoop-root-secondarynamenode-master.out.1
-rw-r--r--. 1 root root    714 May  7 11:24 hadoop-root-secondarynamenode-master.out.2
-rw-r--r--. 1 root root   6074 May  7 11:21 hadoop-root-secondarynamenode-master.out.3
-rw-r--r--. 1 root root   1067 May  7 11:10 hadoop-root-secondarynamenode-master.out.4
-rw-r--r--. 1 root root  12148 May  7 11:05 hadoop-root-secondarynamenode-master.out.5
-rw-r--r--. 1 root root    582 May 18 10:33 newlog.log
-rw-r--r--. 1 root root      0 May  7 10:22 SecurityAuth-root.audit
-rw-r--r--. 1 root root 340628 May 18 09:31 yarn-root-resourcemanager-master.log
-rw-r--r--. 1 root root    700 May 18 09:21 yarn-root-resourcemanager-master.out
-rw-r--r--. 1 root root    700 May  7 11:36 yarn-root-resourcemanager-master.out.1
-rw-r--r--. 1 root root    700 May  7 11:24 yarn-root-resourcemanager-master.out.2
-rw-r--r--. 1 root root    700 May  7 11:19 yarn-root-resourcemanager-master.out.3
-rw-r--r--. 1 root root    700 May  7 11:10 yarn-root-resourcemanager-master.out.4
-rw-r--r--. 1 root root    700 May  7 11:00 yarn-root-resourcemanager-master.out.5
6. -setQuota <quota> <dirname>...<dirname>: set the quota <quota> on each directory <dirname>. A directory quota is a long integer N that places a hard limit on the number of names in the directory tree. Note that the directory itself counts as one name, which is why quota=3 on /aa below is already exhausted by the directory plus its two files.
The command works reliably on the directory; an error is reported if:
1. N is not a positive integer, or
2. the user is not an administrator, or
3. the directory does not exist or is a file, or
4. the directory would immediately exceed the new quota.
[root@master hadoop-2.7.0]# hadoop fs -ls /aa
Found 2 items
-rw-r--r--   2 root supergroup          0 2018-05-18 09:40 /aa/file1.txt
-rw-r--r--   2 root supergroup          0 2018-05-18 10:01 /aa/file4.txt
[root@master hadoop-2.7.0]# hdfs dfsadmin -setQuota 3 /aa
[root@master hadoop-2.7.0]# hadoop fs -touchz /aa/file5.txt
touchz: The NameSpace quota (directories and files) of directory /aa is exceeded: quota=3 file count=4
[root@master hadoop-2.7.0]# hadoop fs -ls /aa
Found 2 items
-rw-r--r--   2 root supergroup          0 2018-05-18 09:40 /aa/file1.txt
-rw-r--r--   2 root supergroup          0 2018-05-18 10:01 /aa/file4.txt
[root@master hadoop-2.7.0]# hadoop fs -ls -R /aa
-rw-r--r--   2 root supergroup          0 2018-05-18 09:40 /aa/file1.txt
-rw-r--r--   2 root supergroup          0 2018-05-18 10:01 /aa/file4.txt
[root@master hadoop-2.7.0]# hadoop fs -touchz /aa/fi.txt
touchz: The NameSpace quota (directories and files) of directory /aa is exceeded: quota=3 file count=4
[root@master hadoop-2.7.0]# hdfs dfsadmin -setQuota 6 /aa
[root@master hadoop-2.7.0]# hadoop fs -touchz /aa/file5.txt
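To inspect a directory's quota, hadoop fs -count -q can be used (a sketch; the output columns are QUOTA, REMAINING_QUOTA, SPACE_QUOTA, REMAINING_SPACE_QUOTA, DIR_COUNT, FILE_COUNT, CONTENT_SIZE, PATHNAME):

# show the name quota and remaining quota for /aa
hadoop fs -count -q /aa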
7. -clrQuota <dirname>...<dirname>: clear the quota on each directory <dirname>.
The command works reliably on the directory; an error is reported if:
1. the directory does not exist or is a file, or
2. the user is not an administrator.
It is not an error if the directory previously had no quota.
hdfs dfsadmin -clrQuota /aa
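The same check as above can confirm the quota was removed; with no name quota set, the first two columns of hadoop fs -count -q are reported as none and inf:

# verify that /aa no longer has a name quota
hadoop fs -count -q /aa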