• Notes on setting up Hive 2.1.1 on Hadoop 2.6.1 and Ubuntu 16.04


     

     


     

    Hadoop / Hive / HBase version compatibility


    Hive can be downloaded from the official Apache Hive downloads page.


    We use the Hadoop version as the baseline and pick Hive and HBase releases that match it. Since our Hadoop version is 2.6.1, any Hive release from 1.2.1 upward will do, and HBase-1.2.x through HBase-2.0.x are compatible.

     

    Configure MySQL

    • Install MySQL
     
    mutex@mutex-dl:~$ sudo apt-get install mysql-server mysql-client
    • Start MySQL
     
    mutex@mutex-dl:~$ sudo /etc/init.d/mysql start    # (on this Ubuntu version)
    * Starting MySQL database server mysqld [ OK ]

    sudo /etc/init.d/mysql restart    # restart MySQL
    sudo /etc/init.d/mysql stop       # stop MySQL

    • Create a hive user in MySQL (the original post showed this step only as a screenshot; see the sketch below)
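
    Assuming we want a dedicated hive account and a metastore database (the names hive and hive_metastore and the password below are illustrative, not taken from the original post), the statements typically look like this in the mysql client:

    mutex@mutex-dl:~$ mysql -u root -p
    mysql> CREATE DATABASE hive_metastore;
    mysql> CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive_password';
    mysql> GRANT ALL PRIVILEGES ON hive_metastore.* TO 'hive'@'localhost';
    mysql> FLUSH PRIVILEGES;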
     

    Download Hive

     
    mutex@mutex-dl:~$ su - hadoop
    Password:
    hadoop@mutex-dl:~$ cd /tmp
    hadoop@mutex-dl:/tmp$ wget http://www-us.apache.org/dist/hive/hive-2.1.1/apache-hive-2.1.1-bin.tar.gz
    hadoop@mutex-dl:/tmp$ sudo tar xvzf apache-hive-2.1.1-bin.tar.gz -C /usr/local

    Note that we switch to the hadoop user right at the start; do not switch users midway through, to avoid permission problems during the configuration.

    If the link above refuses the connection, download the tarball yourself from the official Hive site mentioned earlier and extract it into /usr/local.

     

    Hive environment variables

    Open ~/.bashrc and add the following:

     
    export HIVE_HOME=/usr/local/apache-hive-2.1.1-bin
    export HIVE_CONF_DIR=/usr/local/apache-hive-2.1.1-bin/conf
    export PATH=$HIVE_HOME/bin:$PATH
    export CLASSPATH=$CLASSPATH:/usr/local/hadoop/lib/*:.
    export CLASSPATH=$CLASSPATH:/usr/local/apache-hive-2.1.1-bin/lib/*:.

    Then make it take effect in the current shell:

     
    hadoop@mutex-dl:/tmp$ source ~/.bashrc
     

    Create the Hive warehouse directory

     
    hadoop@mutex-dl:/tmp$ echo $HADOOP_HOME
    /usr/local/hadoop

    This prints the actual directory that HADOOP_HOME points to. Next, create the directories that Hive will use later:

     
    hadoop@mutex-dl:~$ hdfs dfs -ls /
    drwxr-xr-x - hduser supergroup 0 2018-1-23 11:17 /hbase
    drwx------ - hduser supergroup 0 2018-1-18 16:04 /tmp
    drwxr-xr-x - hduser supergroup 0 2018-1-18 09:13 /user
    hadoop@mutex-dl:~$ hdfs dfs -mkdir -p /user/hive/warehouse
    hadoop@mutex-dl:~$ hdfs dfs -chmod g+w /tmp
    hadoop@mutex-dl:~$ hdfs dfs -chmod g+w /user/hive/warehouse
    hadoop@mutex-dl:~$ hdfs dfs -ls /
    drwxr-xr-x - hduser supergroup 0 2018-1-23 11:17 /hbase
    drwx-w---- - hduser supergroup 0 2018-1-18 16:04 /tmp
    drwxr-xr-x - hduser supergroup 0 2018-1-23 17:18 /user
    hadoop@mutex-dl:~$ hdfs dfs -ls /user
    drwxr-xr-x - hduser supergroup 0 2018-1-18 23:17 /user/hduser
    drwxr-xr-x - hduser supergroup 0 2018-1-23 17:18 /user/hive
     

    Configure Hive

     
    hadoop@mutex-dl:~$ cd $HIVE_HOME/conf
    hadoop@mutex-dl:/usr/local/apache-hive-2.1.1-bin/conf$ sudo cp hive-env.sh.template hive-env.sh

    Edit hive-env.sh and add the following line:

     
    export HADOOP_HOME=/usr/local/hadoop
     

    Download Apache Derby

     
    hadoop@mutex-dl:~$ cd /tmp
    hadoop@mutex-dl:/tmp$ wget http://archive.apache.org/dist/db/derby/db-derby-10.13.1.1/db-derby-10.13.1.1-bin.tar.gz
    hadoop@mutex-dl:/tmp$ sudo tar xvzf db-derby-10.13.1.1-bin.tar.gz -C /usr/local

    If wget is refused, just download the file manually in a browser.

    Continue configuring Derby: open ~/.bashrc and append the following:

     
    export DERBY_HOME=/usr/local/db-derby-10.13.1.1-bin
    export PATH=$PATH:$DERBY_HOME/bin
    export CLASSPATH=$CLASSPATH:$DERBY_HOME/lib/derby.jar:$DERBY_HOME/lib/derbytools.jar

    Don't forget to run source ~/.bashrc afterwards so that the changes take effect immediately.

    Then create a data directory:

     
    hadoop@mutex-dl:/tmp$ sudo mkdir $DERBY_HOME/data
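
    To confirm that the Derby variables are picked up, you can optionally run Derby's bundled sysinfo tool after sourcing ~/.bashrc (this check is not part of the original post):

    hadoop@mutex-dl:/tmp$ source ~/.bashrc
    hadoop@mutex-dl:/tmp$ sysinfo | head    # prints Java and Derby version information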
     

    Configure the Hive Metastore

     
    hadoop@mutex-dl:/tmp$ cd $HIVE_HOME/conf
    hadoop@mutex-dl:/usr/local/apache-hive-2.1.1-bin/conf$ sudo cp hive-default.xml.template hive-site.xml

    Edit hive-site.xml and make sure it contains the following property:

     
    <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
      <description>
        JDBC connect string for a JDBC metastore.
        To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
        For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
      </description>
    </property>

    In the current directory (/usr/local/apache-hive-2.1.1-bin/conf), create a new file named jpox.properties with the following contents:

     
    javax.jdo.PersistenceManagerFactoryClass = org.jpox.PersistenceManagerFactoryImpl
    org.jpox.autoCreateSchema = false
    org.jpox.validateTables = false
    org.jpox.validateColumns = false
    org.jpox.validateConstraints = false
    org.jpox.storeManagerType = rdbms
    org.jpox.autoCreateSchema = true
    org.jpox.autoStartMechanismMode = checked
    org.jpox.transactionIsolation = read_committed
    javax.jdo.option.DetachAllOnCommit = true
    javax.jdo.option.NontransactionalRead = true
    javax.jdo.option.ConnectionDriverName = org.apache.derby.jdbc.ClientDriver
    javax.jdo.option.ConnectionURL = jdbc:derby://hadoop1:1527/metastore_db;create = true
    javax.jdo.option.ConnectionUserName = APP
    javax.jdo.option.ConnectionPassword = mine
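
    Note that the ConnectionURL above points at a Derby network server on hadoop1:1527 (hadoop1 is simply the hostname used in this config). If you actually use that network URL rather than the embedded jdbc:derby:;databaseName=... URL from hive-site.xml, the network server has to be running first; a minimal sketch using the script bundled with Derby:

    hadoop@mutex-dl:~$ nohup startNetworkServer -h 0.0.0.0 -p 1527 &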

    Then change the owner of the entire apache-hive-2.1.1-bin directory:

     
    hadoop@mutex-dl:/usr/local$ sudo chown -R hadoop:hadoop apache-hive-2.1.1-bin
     

    Initialize the Metastore schema

     
    hadoop@mutex-dl:/usr/local/apache-hive-2.1.1-bin/bin$ schematool -dbType derby -initSchema
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/usr/local/apache-hive-2.1.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
    Metastore connection URL: jdbc:derby:;databaseName=metastore_db;create=true
    Metastore Connection Driver : org.apache.derby.jdbc.EmbeddedDriver
    Metastore connection User: APP
    Starting metastore schema initialization to 2.1.1
    Initialization script hive-schema-2.1.1.derby.sql
    Initialization script completed
    schemaTool completed
     

    Verify that Hive was installed successfully

     
    hadoop@mutex-dl:~$ echo $HIVE_HOME
    /usr/local/apache-hive-2.1.1-bin
    hadoop@mutex-dl:~$ $HIVE_HOME/bin/hive
    • Possible problem 1

    Exception in thread "main" java.lang.RuntimeException: Couldn't create directory {hive.session.id}_resources

    Solution: edit hive-site.xml, comment out the line that uses system:java.io.tmpdir, and replace it with /user/hive/tmp/ as shown below:

     
    <property>
      <name>hive.downloaded.resources.dir</name>
      <!--
      <value>${system:java.io.tmpdir}/${hive.session.id}_resources</value>
      -->
      <value>/user/hive/tmp/${hive.session.id}_resources</value>
      <description>Temporary local directory for added resources in the remote file system.</description>
    </property>

    Note in particular that /user/hive/tmp/ here is not a directory in HDFS but a local directory of the hadoop user. I initially assumed it was an HDFS path, which led to a permission error when creating the directory; the problem is the same as this question on StackOverflow, see there for the details.
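
    Because it is a local path, it must exist and be writable by the hadoop user. A minimal sketch, using the path configured above:

    hadoop@mutex-dl:~$ sudo mkdir -p /user/hive/tmp
    hadoop@mutex-dl:~$ sudo chown -R hadoop:hadoop /user/hive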


    • Possible problem 2

    java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D

    Solution: edit hive-site.xml and change the line commented out below to /tmp/mydir. This is also a local directory of the hadoop user, not an HDFS directory.

     
    <property>
      <name>hive.exec.local.scratchdir</name>
      <!--
      <value>${system:java.io.tmpdir}/${system:user.name}</value>
      -->
      <value>/tmp/mydir</value>
      <description>Local scratch space for Hive jobs</description>
    </property>
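
    As before, make sure this local directory exists and is writable by the hadoop user (the path simply mirrors the value above):

    hadoop@mutex-dl:~$ mkdir -p /tmp/mydir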
     

    Hive CLI:

    With the problems above fixed, we can now start Hive:

     
    hadoop@mutex-dl:/usr/local/apache-hive-2.1.1-bin/bin$ hive
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/usr/local/apache-hive-2.1.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
    Logging initialized using configuration in jar:file:/usr/local/apache-hive-2.1.1-bin/lib/hive-common-2.1.1.jar!/hive-log4j2.properties Async: true
    Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
     
    hive> show tables;
    OK
    Time taken: 4.603 seconds
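
    For a slightly stronger smoke test than show tables, you can also run a statement or two non-interactively with hive -e (the table name test_tbl below is purely illustrative, not from the original post):

    hadoop@mutex-dl:~$ hive -e "CREATE TABLE IF NOT EXISTS test_tbl (id INT, name STRING);"
    hadoop@mutex-dl:~$ hive -e "SHOW TABLES;"
    hadoop@mutex-dl:~$ hdfs dfs -ls /user/hive/warehouse    # the new table should show up as a directory here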


     

    Other errors

    If the following error appears when running initSchema:

     
    hadoop@mutex-dl:/usr/local/apache-hive-2.1.1-bin/bin$ schematool -dbType mysql -initSchema
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/usr/local/apache-hive-2.1.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
    Metastore connection URL: jdbc:derby:;databaseName=metastore_db;create=true
    Metastore Connection Driver : org.apache.derby.jdbc.EmbeddedDriver
    Metastore connection User: APP
    Starting metastore schema initialization to 2.1.0
    Initialization script hive-schema-2.1.0.mysql.sql
    Error: Syntax error: Encountered "<EOF>" at line 1, column 64. (state=42X01,code=30000)
    org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
    Underlying cause: java.io.IOException : Schema script failed, errorcode 2
    Use --verbose for detailed stacktrace.
    *** schemaTool failed ***

    The fix is to delete the metastore_db directories left behind under the various directories, typically:

     
    /usr/local/apache-hive-2.1.1-bin/bin
    /usr/local/apache-hive-2.1.1-bin/conf
    /usr/local/apache-hive-2.1.1-bin/scripts/metastore/upgrade/mysql

    After removing those metastore_db directories, run schematool -dbType mysql -initSchema again.
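
    A quick way to locate (and then remove) every stale metastore_db directory under the Hive installation; check the output of the first command before deleting anything:

    hadoop@mutex-dl:~$ find /usr/local/apache-hive-2.1.1-bin -type d -name metastore_db
    hadoop@mutex-dl:~$ find /usr/local/apache-hive-2.1.1-bin -type d -name metastore_db -exec rm -rf {} +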

     

    Notes

    Reference: Apache Hadoop : Hive 2.1.0 install on Ubuntu 16.04

  • Original post: https://www.cnblogs.com/pejsidney/p/8944305.html