• Summary of basic Hive operations


    =================================================Other common Hive operations=====================================================

    1. Create a directory on HDFS from the Hive CLI:

    dfs -mkdir /student;
     
    2. Upload local data (on the Linux VM) to HDFS from the Hive CLI:
    dfs -put /opt/module/datas/student.txt /student;
     
    3. Load a local file into a Hive table:
    load data local inpath '/opt/module/hive/datas/student.txt' into table default.student;
     
    4. Create-table statement:
    create table if not exists location(
    loc int,
    loc_name string
    )
    row format delimited fields terminated by ' ';
     
    5. Export a Hive table to MySQL with Sqoop:

    ./sqoop export --connect jdbc:mysql://localhost:3306/data_result --username root --password 000000 --table result_2 --export-dir /user/hive/warehouse/db_result.db/data_2

     

    =================================================Hive database operations=================================================

    1. Define the field delimiter when creating a table:

    row format delimited fields terminated by '\t'        (fields within each row are separated by a tab)

    2. Choose the database's location on HDFS when creating it:

    create database db_name location '/db_hive2.db';
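    A quick way to confirm the custom location, sketched here with a hypothetical database name:

    ```sql
    -- The HDFS path passed via LOCATION appears in the database's description.
    create database if not exists db_hive2 location '/db_hive2.db';
    desc database db_hive2;
    ```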
     

    3. Filter the list of databases shown:

    show databases like 'db_hive*';
     
    4. Show database information:
    desc database db_name;
     
    4.1 Show detailed database information:
    desc database extended db_name;
     
    5. Switch to a database:
    use db_name;
     
    6. Drop a database (only when it is empty):
    drop database db_name;
     
    6.1 Force-drop a database (when it is not empty):
    drop database db_name cascade;
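    The create/use/drop commands above combine into the following lifecycle sketch (database and table names are hypothetical):

    ```sql
    create database if not exists db_demo;
    use db_demo;
    create table t1(id int);
    -- a plain DROP fails while t1 exists; CASCADE drops the table as well:
    drop database db_demo cascade;
    ```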
     
    ================================================Hive table operations====================================================
    1. Create-table syntax:
    CREATE [EXTERNAL] TABLE [IF NOT EXISTS] table_name
    [(col_name data_type [COMMENT col_comment], ...)]
    [COMMENT table_comment]
    [PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]
    [CLUSTERED BY (col_name, col_name, ...)
    [SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS]
    [ROW FORMAT row_format]
    [STORED AS file_format]
    [LOCATION hdfs_path]
    [TBLPROPERTIES (property_name=property_value, ...)]
    [AS select_statement]
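    As a sketch of the optional clauses in this template, an external, partitioned table might look like the following (table and column names are hypothetical):

    ```sql
    create external table if not exists logs(
      ip  string,
      url string
    )
    partitioned by (dt string)                      -- PARTITIONED BY from the template
    row format delimited fields terminated by '\t'
    stored as textfile
    location '/user/hive/warehouse/logs';           -- EXTERNAL: data survives DROP TABLE
    ```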
     
    2. A basic create-table statement:
    create table if not exists student(
    id int, name string
    )
    row format delimited fields terminated by ' '
    stored as textfile
    location '/user/hive/warehouse/student';
     
    3. Create a table from a query result (CTAS):
    create table if not exists student2 as select id, name from student;
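    Note that CTAS copies both the schema and the query's data, so any filter in the query applies as well; a hedged variant with a hypothetical predicate:

    ```sql
    -- Only rows matching the WHERE clause end up in the new table.
    create table if not exists student2 as
    select id, name from student where id > 100;
    ```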
     
    4. Create a table using an existing table's schema:
    create table if not exists student3 like student;
     
    5. Show table information:
    desc table_name;
     
    6. Load a local file into a Hive table:
    load data local inpath '/opt/module/hive/datas/student.txt' into table default.student;
     
    7. Load data from HDFS into a Hive table:
    load data inpath '/user/atguigu/hive/student.txt' into table default.student;
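    A hedged variant of the load above: adding the `overwrite` keyword replaces the table's existing data instead of appending to it (same path and table as in the source):

    ```sql
    -- 'overwrite' replaces existing rows; without it, the data is appended.
    -- Note: LOAD DATA INPATH moves (not copies) the HDFS file into the table's directory.
    load data inpath '/user/atguigu/hive/student.txt' overwrite into table default.student;
    ```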
     
    8. Export query results to a local directory:
    insert overwrite local directory '/opt/module/hive/data/export/student' select * from student;
     
    8.1 Export query results to a local directory with a custom delimiter:
    insert overwrite local directory
    '/opt/module/hive/data/export/student1'
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ' '
    select * from student;
     
    9. Create-database statement:
    create database db_hive;
     
  • Original article: https://www.cnblogs.com/wfswf/p/15387598.html