• Commonly Used HDFS Java APIs Explained


    Reposted from: http://blog.csdn.net/michaelwubo/article/details/50879832

    1. Reading Data with Hadoop URLs

    package hadoop;
    
    import java.io.InputStream;
    import java.net.URL;
    
    import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;
    import org.apache.hadoop.io.IOUtils;
    
    public class URLCat {
    
        // Register Hadoop's URL stream handler factory so that java.net.URL
        // can resolve hdfs:// URLs; the JVM allows this to be set only once
        static {
            URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
        }
    
        public static void readHdfs(String url) throws Exception {
            InputStream in = null;
            try {
            in = new URL(url).openStream();
            // 4096-byte buffer; false = do not close the streams here
            // (they are closed in the finally block)
            IOUtils.copyBytes(in, System.out, 4096, false);
            } finally {
                IOUtils.closeStream(in);
            }
        }
        
        public static void main(String[] args) throws Exception {
            readHdfs("hdfs://192.168.49.131:9000/user/hadoopuser/input20120828/file01");
        }
    }
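
    Note that java.net.URL.setURLStreamHandlerFactory can be called at most once per JVM, so this approach fails if another component of the application has already installed a stream handler factory. The FileSystem API shown next does not have this limitation.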

    The hadoop-core jar on the classpath must exactly match the Hadoop version installed in the distributed environment; otherwise you will get an error such as:

    12/09/11 14:18:59 INFO security.UserGroupInformation: JAAS Configuration already set up for Hadoop, not re-installing.
    Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/thirdparty/guava/common/collect/LinkedListMultimap
        at org.apache.hadoop.hdfs.SocketCache.<init>(SocketCache.java:48)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:240)



    Running the main method prints: hello world bye world, which matches the contents of the file stored in HDFS.

    2. Reading Data with the FileSystem API

    package hadoop;
    
    import java.io.IOException;
    import java.io.InputStream;
    import java.net.URI;
    
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;
    
    public class FileSystemCat {
    
        // Read the file at the given HDFS URL and copy it to stdout.
        public static void readHdfs(String url) throws IOException {
            Configuration conf = new Configuration();
            // Obtain a FileSystem instance for the URI's scheme and authority
            FileSystem fs = FileSystem.get(URI.create(url), conf);
            InputStream in = null;
            try {
                in = fs.open(new Path(url));
                IOUtils.copyBytes(in, System.out, 4096, false);
            } finally {
                IOUtils.closeStream(in);
            }
        }
    
        public static void main(String[] args) throws IOException {
            readHdfs("hdfs://192.168.49.131:9000/user/hadoopuser/output20120828/part-00000");
        }
    }

    Output:

    bye    2
    hadoop    2
    hello    2
    world    2

    3. Creating Directories

         3.1 Writing data: public boolean mkdirs(Path f) throws IOException creates, at the client's request, any parent directories that do not yet exist.
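
    The example below copies a local file into HDFS rather than calling mkdirs directly. For completeness, here is a minimal sketch of mkdirs itself (the target directory path is a placeholder, not from the original post):

    package hadoop;

    import java.io.IOException;
    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class MkdirsExample {

        public static void main(String[] args) throws IOException {
            String url = "hdfs://192.168.49.131:9000/user/hadoopuser/newdir/subdir";
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(URI.create(url), conf);
            // Creates the directory and any missing parents (like mkdir -p);
            // returns true if the directories now exist
            boolean created = fs.mkdirs(new Path(url));
            System.out.println("created: " + created);
        }
    }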

    package hadoop;
    
    import java.io.BufferedInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.URI;
    
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;
    import org.apache.hadoop.util.Progressable;
    
    public class FileCopyWithProgress {
    
        // Copy a local file into HDFS, printing "*" as progress feedback.
        public static void fileCopy(String localFile, String hdfsFile) throws IOException{
            InputStream in = new BufferedInputStream(new FileInputStream(localFile));
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(URI.create(hdfsFile),conf);
            // create() opens an output stream; Hadoop calls progress()
            // periodically as data is written to the datanode pipeline
            OutputStream out  = fs.create(new Path(hdfsFile),new Progressable(){
                public void progress(){
                    System.out.println("*");
                }
            });
            // true = close both streams once the copy completes
            IOUtils.copyBytes(in, out, 4096,true);
        }
    
        public static void main(String[] args) throws IOException {
            fileCopy("D://heat2.txt", "hdfs://192.168.49.131:9000/user/hadoopuser/output20120911/");
        }
    }

    Running it produces the following error:

    Exception in thread "main" org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=libininfo, access=WRITE, inode="/user/hadoopuser":hadoopuser:supergroup:drwxr-xr-x

    This happens because the client user does not have write permission on that HDFS directory.

    Solution: disable permission checking on the server. Edit Hadoop's configuration file conf/hdfs-site.xml, find the dfs.permissions property, and set its value to false, i.e. add the following configuration:
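
    A sketch of the resulting hdfs-site.xml entry (dfs.permissions is the property name used by Hadoop 1.x, the version this post targets; disable permission checking only in a test environment):

    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>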

    Run it again. If you see the following error:

    Exception in thread "main" org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create file/user/hadoopuser/output20120911. Name node is in safe mode.
    The reported blocks 6 has reached the threshold 0.9990 of total blocks 6. Safe mode will be turned off automatically in 5 seconds.

    This means the Hadoop NameNode is in safe mode. So what is Hadoop's safe mode?

    When the distributed file system starts up, it begins in safe mode. While the file system is in safe mode, its contents may not be modified or deleted until safe mode ends. Safe mode exists mainly so that, at startup, the system can check the validity of the data blocks on each DataNode and, where required by policy, replicate or delete blocks. Safe mode can also be entered at runtime with a command. In practice, trying to modify or delete files while the system is starting up triggers the same "safe mode" error; usually you only need to wait a moment.

    Now that this is clear, can we solve the problem directly, without waiting for Hadoop to leave safe mode on its own? Yes: in the Hadoop directory, run

    bin/hadoop dfsadmin -safemode leave

    which turns safe mode off and resolves the problem. Alternatively, wait a few seconds and run the program again; it then executes normally, printing:

    *
    *
    *
    *
    *
    "*",即上传进度,没写入64KB即输出一个"*"
    然后查看hdfs的目录发现文件已经存在。

         3.2 Querying the file system: listing directory and file information

    复制代码
    package hadoop;
    
    import java.io.IOException;
    import java.net.URI;
    
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FileUtil;
    import org.apache.hadoop.fs.Path;
    
    public class ListStatus {
    
        // List the entries under the given HDFS path.
        public static void readStatus(String url) throws IOException {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(URI.create(url), conf);
            Path[] paths = new Path[1];
            paths[0] = new Path(url);
            // listStatus returns one FileStatus per entry;
            // stat2Paths extracts just the paths for printing
            FileStatus[] status = fs.listStatus(paths);
            Path[] listedPaths = FileUtil.stat2Paths(status);
            for (Path p : listedPaths) {
                System.out.println(p);
            }
        }
    
        public static void main(String[] args) throws IOException {
            readStatus("hdfs://192.168.49.131:9000/user/hadoopuser/output20120828/");
        }
    }

    Output:

    hdfs://192.168.49.131:9000/user/hadoopuser/output20120828/_SUCCESS
    hdfs://192.168.49.131:9000/user/hadoopuser/output20120828/_logs
    hdfs://192.168.49.131:9000/user/hadoopuser/output20120828/part-00000
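
    Since only one path is queried here, the Path[] array is not strictly necessary: FileSystem also offers a listStatus(Path) overload, so fs.listStatus(new Path(url)) works directly. The array version is convenient for listing several directories in one call.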
