1. List the contents of a directory: hadoop fs -ls [directory]
[root@cdh01 tmp]# hadoop fs -ls -h /tmp
Found 2 items
drwxrwxrwx - hdfs supergroup 0 2016-01-21 10:24 /tmp/.cloudera_health_monitoring_canary_files
drwx-wx-wx - hive supergroup 0 2016-01-21 10:02 /tmp/hive
[root@cdh01 tmp]# hadoop fs -ls -h /
Found 2 items
drwxrwxrwx - hdfs supergroup 0 2016-01-21 10:02 /tmp
drwxrwxr-x - hdfs supergroup 0 2016-01-21 10:01 /user
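Two standard flag variants worth knowing (the paths below are just the ones already on this cluster): -R recurses into subdirectories and combines with the -h (human-readable sizes) flag used above.
hadoop fs -ls -R /tmp       # list /tmp and everything below it
hadoop fs -ls -h -R /user   # recursive listing with human-readable sizes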
2. Upload a local directory to HDFS: hadoop fs -put [local directory] [HDFS directory]
[root@cdh01 /]# mkdir test_put_dir #create the directory
[root@cdh01 /]# chown hdfs:hadoop test_put_dir #give the directory to the hdfs user
[root@cdh01 /]# su hdfs #switch to the hdfs user
[hdfs@cdh01 /]$ ls
bin boot dev dfs dfs_bak etc home lib lib64 lost+found media misc mnt net opt proc root sbin selinux srv sys test_put_dir tmp usr var wawa.txt wbwb.txt wyp.txt
[hdfs@cdh01 /]$ hadoop fs -put test_put_dir /
[hdfs@cdh01 /]$ hadoop fs -ls /
Found 4 items
drwxr-xr-x - hdfs supergroup 0 2016-01-21 11:07 /hff
drwxr-xr-x - hdfs supergroup 0 2016-01-21 15:25 /test_put_dir
drwxrwxrwt - hdfs supergroup 0 2016-01-21 10:39 /tmp
drwxr-xr-x - hdfs supergroup 0 2016-01-21 10:39 /user
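put also accepts several local sources in a single call when the destination is an HDFS directory; a quick sketch using two of the files visible in the local listing above:
hadoop fs -put wawa.txt wbwb.txt /test_put_dir   # upload two files in one command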
3. Create a new directory at a given HDFS path: hadoop fs -mkdir [directory path]
[root@cdh01 /]# su hdfs
[hdfs@cdh01 /]$ hadoop fs -mkdir /hff
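Like its Linux counterpart, mkdir supports -p to create missing parent directories in one step; the nested path below is illustrative:
hadoop fs -mkdir -p /hff/2016/01   # creates /hff/2016 and /hff/2016/01 together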
4. Create an empty file in a given HDFS directory, using the touchz command:
[hdfs@cdh01 /]$ hadoop fs -touchz /test_put_dir/test_new_file.txt
[hdfs@cdh01 /]$ hadoop fs -ls /test_put_dir
Found 1 items
-rw-r--r-- 3 hdfs supergroup 0 2016-01-21 15:29 /test_put_dir/test_new_file.txt
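To double-check that the new file really is empty, -du reports per-file sizes (-h prints them human-readable):
hadoop fs -du -h /test_put_dir   # test_new_file.txt should show as 0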
5. Upload a local file to HDFS: hadoop fs -put [local path] [HDFS directory]
[hdfs@cdh01 /]$ hadoop fs -put wyp.txt /hff #plain HDFS path
[hdfs@cdh01 /]$ hadoop fs -put wyp.txt hdfs://cdh01.cap.com:8020/hff #fully qualified URI
Note: wyp.txt sits in the local / root directory, which looks like:
bin dfs_bak lib64 mnt root sys var
boot etc lost+found net sbin test_put_dir wawa2.txt
dev home media opt selinux tmp wbwb.txt
dfs lib misc proc srv usr wyp.txt
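put can also read from standard input when the source is given as '-', which is handy for writing a small file without keeping a local copy; the target file name here is hypothetical:
echo "hello hdfs" | hadoop fs -put - /hff/from_stdin.txt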
6. Print the contents of an existing file: hadoop fs -cat [file_path]
[hdfs@cdh01 /]$ hadoop fs -cat /hff/wawa.txt
1 张三 男 135
2 刘丽 女 235
3 王五 男 335
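For large files, avoid streaming the whole thing to the terminal; piping through head, or -tail (which prints the last kilobyte), are the usual tricks:
hadoop fs -cat /hff/wawa.txt | head -n 2   # only the first two lines
hadoop fs -tail /hff/wawa.txt              # the last 1 KB of the file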
7. Rename a file or directory on HDFS: hadoop fs -mv [old path] [new path]
[hdfs@cdh01 /]$ hadoop fs -mv /tmp /tmp_bak #rename a directory
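mv works on individual files as well, and only moves within the same filesystem (it will not copy across filesystems). A sketch, assuming the rename above has already happened so /tmp_bak exists:
hadoop fs -mv /hff/wyp.txt /tmp_bak/wyp.txt   # move and rename a single file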
8. Download a file from HDFS into an existing local directory: hadoop fs -get [HDFS path] [local directory]
[hdfs@cdh01 /]$ hadoop fs -get /hff/wawa.txt /test_put_dir
[hdfs@cdh01 /]$ ls -l /test_put_dir/
total 4
-rw-r--r-- 1 hdfs hdfs 42 Jan 21 15:39 wawa.txt
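-get fetches directories recursively too, and -copyToLocal behaves the same way for local destinations; a sketch with an illustrative local target:
hadoop fs -get /hff /test_put_dir/hff_copy   # download the whole /hff directory
hadoop fs -copyToLocal /hff/wawa.txt /tmp    # equivalent to -get for a single file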
9. Delete a file on HDFS: hadoop fs -rm [file path]
[hdfs@cdh01 /]$ hadoop fs -ls /test_put_dir/
Found 2 items
-rw-r--r-- 3 hdfs supergroup 0 2016-01-21 15:41 /test_put_dir/new2.txt
-rw-r--r-- 3 hdfs supergroup 0 2016-01-21 15:29 /test_put_dir/test_new_file.txt
[hdfs@cdh01 /]$ hadoop fs -rm /test_put_dir/new2.txt
16/01/21 15:42:24 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 1440 minutes, Emptier interval = 0 minutes.
Moved: 'hdfs://cdh01.cap.com:8020/test_put_dir/new2.txt' to trash at: hdfs://cdh01.cap.com:8020/user/hdfs/.Trash/Current
[hdfs@cdh01 /]$ hadoop fs -ls /test_put_dir/
Found 1 items
-rw-r--r-- 3 hdfs supergroup 0 2016-01-21 15:29 /test_put_dir/test_new_file.txt
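Because the trash is enabled on this cluster (note the "Moved ... to trash" line above), a plain -rm is recoverable for 1440 minutes. To delete immediately and irreversibly, add -skipTrash:
hadoop fs -rm -skipTrash /test_put_dir/test_new_file.txt   # bypasses .Trash; cannot be undone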
10. Delete a directory on HDFS, including everything under it: hadoop fs -rm -r [directory path]
[hdfs@cdh01 /]$ hadoop fs -rmr /test_put_dir
16/01/21 15:50:59 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 1440 minutes, Emptier interval = 0 minutes.
Moved: 'hdfs://cdh01.cap.com:8020/test_put_dir' to trash at: hdfs://cdh01.cap.com:8020/user/hdfs/.Trash/Current
[hdfs@cdh01 /]$ hadoop fs -ls /
Found 3 items
drwxr-xr-x - hdfs supergroup 0 2016-01-21 11:07 /hff
drwxrwxrwt - hdfs supergroup 0 2016-01-21 10:39 /tmp
drwxr-xr-x - hdfs supergroup 0 2016-01-21 15:42 /user
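The -rmr used in the transcript still works but is deprecated on Hadoop 2.x in favor of -rm -r, which also composes with -skipTrash:
hadoop fs -rm -r /test_put_dir              # the current, non-deprecated spelling
hadoop fs -rm -r -skipTrash /test_put_dir   # recursive delete that bypasses the trash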
11. Merge all files under an HDFS directory into a single file and download it to the local filesystem:
hadoop fs -getmerge /user /home/t
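getmerge concatenates the source files in order; on recent Hadoop releases an optional -nl flag appends a newline after each input file so records from adjacent files do not run together (older releases spelled this as a trailing addnl argument instead):
hadoop fs -getmerge -nl /user /home/t   # newline between merged files, if your release supports -nl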
12. Kill a running Hadoop job:
hadoop job -kill [job-id]
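To find the id first, list the running jobs; on Hadoop 2.x the hadoop job command is deprecated in favor of mapred job, which takes the same arguments. The job id below is illustrative:
hadoop job -list                          # shows the ids of running jobs
mapred job -kill job_1453343990000_0001   # preferred form on Hadoop 2.x; id is made up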