• Transferring data back and forth between HDFS and HBase


    For simple structured data, moving it between HDFS and HBase can often be done with the framework's built-in tools alone. For more complex transfers, however, and especially in real work, the data we collect is rarely so simply structured, so it has to be reorganized before it is written out. That is where MapReduce comes in: by splitting and regrouping the data at a lower level, it reshapes it into the structure we need to transfer.

    Let's walk through a simple test.

    From HDFS to HBase

    First, create a temporary file named demo locally on the virtual machine, using the simple structure name,age,sex, and add a few rows of values. Then upload it with the following command (with no destination given, the file lands in the current user's HDFS home directory; here that is /user/bda/demo, the path the job below reads from):

    hdfs dfs -put demo 
    
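    For reference, the demo file could look like this (hypothetical sample rows; any values will do as long as each line follows the name,age,sex layout):

    tom,23,male
    lucy,30,female
    jack,41,male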

     The file is now on HDFS. Next, in HBase, create the table that will hold the data:

    create 'tb_test','info'
    

     This creates an HBase table named tb_test with a single column family named info.

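    To confirm that the table exists, you can run the following in the hbase shell (describe prints the table's column families and their settings):

    describe 'tb_test'
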
    Next, we write the corresponding code:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.mapreduce.TableReducer;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    
    import java.io.IOException;
    
    public class HdfstoHbase
    {
        public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
            // Picks up hbase-site.xml from the classpath so the job can reach ZooKeeper/HBase
            Configuration conf = HBaseConfiguration.create();
            Job job = Job.getInstance(conf, "hdfs-to-hbase");
            job.setJarByClass(HdfstoHbase.class);
            // Map phase: read each text line from HDFS and pass it through unchanged
            job.setMapperClass(HdfsMap.class);
            job.setMapOutputKeyClass(Text.class);
            job.setMapOutputValueClass(NullWritable.class);
            // Reduce phase: turn each line into a Put against the tb_test table
            TableMapReduceUtil.initTableReducerJob("tb_test", HbaseReduce.class, job);
            FileInputFormat.addInputPath(job, new Path("/user/bda/demo"));
            System.out.println(job.waitForCompletion(true) ? 0 : 1);
        }
    
        // Mapper: emits every input line as the key so the reducer receives the raw CSV text
        public static class HdfsMap extends Mapper<Object, Text, Text, NullWritable> {
            @Override
            protected void map(Object key, Text value, Context context) throws IOException, InterruptedException {
                context.write(value, NullWritable.get());
            }
        }
        // Reducer: splits each "name,age,sex" line and writes it to HBase as a single Put
        public static class HbaseReduce extends TableReducer<Text, NullWritable, ImmutableBytesWritable> {
            private final byte[] cf = "info".getBytes();
            private final byte[] name = "name".getBytes();
            private final byte[] age = "age".getBytes();
            private final byte[] sex = "sex".getBytes();

            @Override
            protected void reduce(Text key, Iterable<NullWritable> values, Context context) throws IOException, InterruptedException {
                String[] ss = key.toString().split(",");
                // The whole line doubles as the row key; each field goes into the info column family
                Put put = new Put(key.toString().getBytes());
                put.addColumn(cf, name, ss[0].getBytes());
                put.addColumn(cf, age, ss[1].getBytes());
                put.addColumn(cf, sex, ss[2].getBytes());
                // TableOutputFormat ignores the key, so null is acceptable here
                context.write(null, put);
            }
        }
    }

    This is a typical MapReduce program skeleton. Next, package it into a jar. If you build a thin jar, you need to add the required HBase dependencies to the classpath when you run it, or prepare a launch script in advance. The command looks like this:

    HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath`:`${HBASE_HOME}/bin/hbase mapredcp` hadoop jar <packaged-jar-file> <main-class-name>
    
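    For example, if the job above is packaged into a jar named hdfs2hbase.jar (a hypothetical name) and the class has no package declaration, the call becomes:

    HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath`:`${HBASE_HOME}/bin/hbase mapredcp` hadoop jar hdfs2hbase.jar HdfstoHbase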

    After the job runs, the imported data can be seen in the tb_test table in HBase.
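
    A quick way to check from the hbase shell (the output depends on the rows you put in demo):

    scan 'tb_test'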

    From HBase to HDFS

    The workflow is the same in the other direction, so only the code is shown here; the remaining steps are analogous.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    
    import java.io.IOException;
    
    public class HbasetoHdfs {
        public static void main(String[] args) throws Exception {
            Configuration cfg = HBaseConfiguration.create();
            Job job = Job.getInstance(cfg, "hbase-to-hdfs");
            job.setJarByClass(HbasetoHdfs.class);
            // Full-table scan feeding the mapper
            Scan scan = new Scan();
            scan.setCaching(50);        // 1 is the default in Scan, which will be bad for MapReduce jobs
            scan.setCacheBlocks(false); // don't fill the block cache with a one-off full scan
            // Map phase: read rows of tb_test and rebuild the "name,age,sex" lines
            TableMapReduceUtil.initTableMapperJob("tb_test", scan, HbaseMap.class, Text.class, NullWritable.class, job);
            // Reduce phase: write the lines out to HDFS
            job.setReducerClass(HdfsReduce.class);
            FileOutputFormat.setOutputPath(job, new Path("/user/bda/dsj3"));
            System.out.println(job.waitForCompletion(true) ? 0 : 1);
        }
        // Mapper: reads one HBase Result per row and turns it back into a CSV text line
        public static class HbaseMap extends TableMapper<Text, NullWritable> {
            private final byte[] cf = "info".getBytes();
            private final byte[] name = "name".getBytes();
            private final byte[] age = "age".getBytes();
            private final byte[] sex = "sex".getBytes();
            @Override
            protected void map(ImmutableBytesWritable key, Result value, Context context) throws IOException, InterruptedException {
                String n = Bytes.toString(value.getValue(cf, name));
                String a = Bytes.toString(value.getValue(cf, age));
                String s = Bytes.toString(value.getValue(cf, sex));
                String line = n + "," + a + "," + s;
                context.write(new Text(line), NullWritable.get());
            }
        }
        // Reducer: writes each distinct line once to the HDFS output directory
        public static class HdfsReduce extends Reducer<Text, NullWritable, Text, NullWritable> {
            @Override
            protected void reduce(Text key, Iterable<NullWritable> values, Context context) throws IOException, InterruptedException {
                context.write(key, NullWritable.get());
            }
        }
    
    }
    
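    Package and run this job the same way as the first one. Note that the output directory (/user/bda/dsj3 in the code) must not exist before the job starts, or FileOutputFormat will refuse to run. When it finishes, the exported lines sit under that directory in the usual part-r-* files and can be checked with something like:

    hdfs dfs -cat /user/bda/dsj3/part-r-00000
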
  • Original post: https://www.cnblogs.com/qianshuixianyu/p/9871897.html