• Setting HDFS and HBase replica counts. Hadoop 2.5.2, HBase 0.98.6


    HDFS replication and basic read/write.

    core-site.xml
    hdfs-site.xml

    Copy both from /etc/hdfs1/conf into the project workspace so the client picks up the cluster configuration.
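
    Not part of the original notes: the copied hdfs-site.xml carries the cluster's default block replication under the standard dfs.replication key, and a client can also override that default programmatically before opening the FileSystem. A minimal sketch:

    // Optional: set the client-side default replication in code instead of
    // editing hdfs-site.xml; dfs.replication is the standard HDFS key.
    Configuration conf = new Configuration();
    conf.set("dfs.replication", "2");  // example value; the stock default is 3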


    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Hadoop 2.5.2: write a file to HDFS with an explicit replication factor.
    public class CopyOfHadoopDFSFileReadWrite {

        static void printAndExit(String str) {
            System.err.println(str);
            System.exit(1);
        }

        public static void main(String[] argv) throws IOException {
            // Picks up core-site.xml / hdfs-site.xml from the classpath.
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            argv = new String[]{"/tmp/hello.txt"};  // hard-coded target path for the demo
            Path outFile = new Path(argv[0]);
            if (fs.exists(outFile))
                printAndExit("Output already exists");
            // Create the file with a replication factor of 2 instead of the default.
            FSDataOutputStream out = fs.create(outFile, (short) 2);
            try {
                out.write("hello blah blah blah".getBytes());
            } catch (IOException e) {
                System.out.println("Error while writing file");
            } finally {
                out.close();
            }
        }
    }
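
    A quick follow-up check (not in the original post): the replication factor of the file just written can be read back, or changed after the fact, with the standard FileSystem API. A minimal sketch reusing fs from main:

    // Verify, then adjust, the replication of the file written above.
    Path p = new Path("/tmp/hello.txt");
    short rep = fs.getFileStatus(p).getReplication();
    System.out.println("replication = " + rep);  // expect 2 for the demo file
    fs.setReplication(p, (short) 3);             // can also be changed later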

    hbase-site.xml
    Copy it from /etc/hyperbase1/conf.
    Make sure the hyperbase1 service is up via http://192.168.146.128:8180/#/dashboard.


    import java.io.File;
    import java.util.UUID;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.io.compress.Compression.Algorithm;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;
    import com.google.common.io.Files;  // Guava, for Files.toByteArray

    public class DemoReplication {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HBaseAdmin ha = new HBaseAdmin(conf);
            String tableName = "demoReplication";
            // One family "f": 30 versions, Snappy compression, row bloom filter.
            HTableDescriptor htd = new HTableDescriptor(tableName.getBytes());
            HColumnDescriptor hcd = new HColumnDescriptor("f").setMaxVersions(30)
                    .setCompressionType(Algorithm.SNAPPY)
                    .setBloomFilterType(BloomType.ROW);
            htd.addFamily(hcd);
            ha.createTable(htd);
            HTable htable = new HTable(conf, tableName);
            String uuid = UUID.randomUUID().toString();
            uuid = "uuidsaaaabbbbuuide1";  // fixed row key for the demo
            Put put = new Put(Bytes.toBytes(uuid));
            // Backslashes in the Windows path must be escaped in Java source.
            File file = new File("C:\\Users\\microsoft\\workspace\\hbase\\hyper\\hbase-client-0.98.6-transwarp-tdh464.jar");
            put.add("f".getBytes(), Bytes.toBytes("q1"), Files.toByteArray(file));
            htable.put(put);
            htable.flushCommits();
            ha.flush(tableName);  // force a memstore flush so the data reaches HDFS
            htable.close();
        }
    }
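
    As a sanity check (not in the original post), the cell can be read back with a Get (org.apache.hadoop.hbase.client.Get), reusing conf and the table/row names from above. A minimal sketch:

    // Read the stored cell back to confirm the write succeeded.
    HTable t = new HTable(conf, "demoReplication");
    Get get = new Get(Bytes.toBytes("uuidsaaaabbbbuuide1"));
    byte[] value = t.get(get).getValue("f".getBytes(), Bytes.toBytes("q1"));
    System.out.println("stored " + value.length + " bytes");
    t.close();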

     

    Run this in the hbase shell:

    describeInJson 'demoReplication' ,'true','/tmp/demo'

    Then, outside the hbase shell, open the dump with vi /tmp/demo and add a special.column.replication entry at the position shown below to set the replica count to 2:

    {
      "tableName" : "demoReplication",
      "base" : {
        "families" : [ {
          "FAMILY" : "f",

          "special.column.replication" : "2",

    Save /tmp/demo, then apply the updated table definition back in the hbase shell:

    alterUseJson 'demoReplication' ,'/tmp/demo'

    Check the replica count of the family's files with ls:

    hdfs dfs -ls /hyperbase1/data/default/demoReplication/c38f234712a99d45797ef1bdd6c3b09a/f

    (Screenshot: the replication column in the ls output shows 3 before the change and 2 after.)