• splittability: A SequenceFile can be split by Hadoop and distributed across map jobs, whereas a GZIP file cannot be.


    Compressed Data Storage

    Keeping data compressed in Hive tables has, in some cases, been known to give better results than uncompressed storage, both in terms of disk usage and query performance.

    You can import text files compressed with Gzip or Bzip2 directly into a table stored as TextFile. The compression will be detected automatically and the file will be decompressed on-the-fly during query execution. For example:

    CREATE TABLE raw (line STRING)
       ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';
     
    LOAD DATA LOCAL INPATH '/tmp/weblogs/20090603-access.log.gz' INTO TABLE raw;

    The table 'raw' is stored as a TextFile, which is the default storage. However, in this case Hadoop will not be able to split your file into chunks/blocks and run multiple maps in parallel. This can cause underutilization of your cluster's 'mapping' power.

    The recommended practice is to insert data into another table, which is stored as a SequenceFile. A SequenceFile can be split by Hadoop and distributed across map jobs whereas a GZIP file cannot be. For example:

    CREATE TABLE raw (line STRING)
       ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';
     
    CREATE TABLE raw_sequence (line STRING)
       STORED AS SEQUENCEFILE;
     
    LOAD DATA LOCAL INPATH '/tmp/weblogs/20090603-access.log.gz' INTO TABLE raw;
     
    SET hive.exec.compress.output=true;
    SET io.seqfile.compression.type=BLOCK; -- NONE/RECORD/BLOCK (see below)
    INSERT OVERWRITE TABLE raw_sequence SELECT * FROM raw;

    The value of io.seqfile.compression.type determines how the compression is performed. RECORD compresses each value individually, while BLOCK buffers up to 1MB of data (by default) before compressing it.
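    If the default 1MB buffer or the default codec is not a good fit, both can be adjusted before running the INSERT above. The following is a sketch only: io.seqfile.compress.blocksize and mapred.output.compression.codec are standard Hadoop properties rather than settings taken from this page, so verify the exact names and defaults against your Hadoop version. For example:

    SET io.seqfile.compress.blocksize=5000000;  -- assumed Hadoop property: bytes buffered per BLOCK before compression (default is about 1MB)
    SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;  -- assumed Hadoop property: codec used for the compressed output
    INSERT OVERWRITE TABLE raw_sequence SELECT * FROM raw;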

    LZO Compression

    See LZO Compression for information about using LZO with Hive.
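    As a rough sketch of what that page describes, a table backed by LZO-compressed text files is typically declared with the hadoop-lzo input format. The class names below come from the hadoop-lzo project and assume its libraries are installed on the cluster, so treat them as assumptions to verify rather than settings confirmed here:

    CREATE TABLE raw_lzo (line STRING)
       STORED AS
         INPUTFORMAT  'com.hadoop.mapred.DeprecatedLzoTextInputFormat'   -- assumed hadoop-lzo input format class
         OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';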

     
  • Original source: https://www.cnblogs.com/rsapaper/p/7764426.html