I. Concepts
Hive is a data warehouse tool. Its data is stored on HDFS; HiveQL (HQL) statements are compiled into MapReduce jobs, replacing hand-written MapReduce for simple processing; Hive keeps its metadata (the table-to-HDFS mappings) in MySQL.
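As a sketch of the idea (the table name and input path below are hypothetical), a single HQL run is enough to express a word count that would otherwise need a hand-written MapReduce job:
hive -e "
  CREATE TABLE IF NOT EXISTS docs (line STRING);
  LOAD DATA INPATH '/data/docs.txt' INTO TABLE docs;
  SELECT word, count(1) AS cnt
  FROM (SELECT explode(split(line, ' ')) AS word FROM docs) t
  GROUP BY word;
"
Hive translates the SELECT into MapReduce stages automatically and reads the files backing the table from HDFS.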
II. Installation
1. MySQL installation:
1) Check whether mariadb is already installed
rpm -qa | grep mariadb
# if it is present, remove it:
rpm -e mariadb-libs-5.5.52-1.el7.x86_64 --nodeps
# extract the MySQL tarball:
tar -zxvf mysql-5.7.18-linux-glibc2.5-x86_64.tar.gz
2) Pre-installation preparation:
# on host ha1; MySQL version mysql-5.7.18-linux-glibc2.5-x86_64
cp -r mysql-5.7.18-linux-glibc2.5-x86_64 /usr/local/mysql
# create the mysql group and user
groupadd mysql
useradd -r -g mysql mysql
cd /usr/local/mysql
mkdir data
chown -R mysql:mysql /usr/local/mysql
# verify ownership:
ls -trhla
# create the configuration file (it does not exist yet):
vim /etc/my.cnf
[mysqld]
basedir=/usr/local/mysql/
datadir=/usr/local/mysql/data
socket=/tmp/mysql.sock
user=mysql
symbolic-links=0
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
# create the pid directory and file
cd /var/run
mkdir mysqld
cd mysqld
vim mysqld.pid   # write nothing, just save and exit
chown -R mysql:mysql /var/run/mysqld
3) Configure MySQL:
# initialize:
cd /usr/local/mysql/bin
./mysqld --initialize
# an initial random root password is printed, e.g.: gctlsOja8<%0
# install the startup script
chown -R mysql:mysql /usr/local/mysql
cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysql
service mysql start
# check the process:
ps -ef | grep mysql
# log in to mysql
./mysql -u root -p
# enter the initial password, then change it:
set password=password("123456");
# allow clients to connect to the mysql server remotely:
grant all privileges on *.* to root@'%' identified by '123456';
flush privileges;   -- remember the trailing semicolon on SQL statements
# give root access to all databases when connecting from host Master:
grant all privileges on *.* to 'root'@'Master' identified by '123456' with grant option;
flush privileges;
# create the hive metastore database
create database hive default charset utf8 collate utf8_general_ci;
# make mysql start on boot
cd /etc/init.d
chmod +x mysql
chkconfig --add mysql
chkconfig --list   # if runlevels 3 through 5 show "on", it was added successfully
# configure environment variables:
vim /etc/profile
export MYSQL_HOME=/usr/local/mysql
export PATH=$JAVA_HOME/bin:$MYSQL_HOME/bin:$PATH
source /etc/profile
# verify:
reboot
netstat -na | grep 3306
mysql -u root -p123456
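As a quick sanity check (purely illustrative), the hive database created above should now be listed by the client:
mysql -u root -p123456 -e "show databases;"   # the output should include 'hive'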
2. Hive installation (hive-2.1.1)
1) Pre-installation setup
1. Start hadoop; start ZooKeeper first:
   cd $ZK_HOME/bin
   zkServer.sh start
   then run start-all.sh on the namenode
2. Hive can be placed on any node of the hadoop cluster
3. Unpack the archive:
   tar -zxf apache-hive-2.1.1-bin.tar.gz
4. Move it into place and add the MySQL JDBC driver:
   mv apache-hive-2.1.1-bin /usr/hive-2.1.1
   mv mysql-connector-java-5.1.42-bin.jar /usr/hive-2.1.1/lib
5. Set environment variables:
   vim /etc/profile
   export HIVE_HOME=/usr/hive-2.1.1
   export PATH=$PATH:$HIVE_HOME/bin
   export CLASSPATH=$CLASSPATH:$HIVE_HOME/bin
   source /etc/profile
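A quick check that the environment variables took effect (illustrative; expected values follow the paths used above):
echo $HIVE_HOME    # should print /usr/hive-2.1.1
hive --version     # should report Hive 2.1.1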
2) hive-env.sh configuration
cd /usr/hive-2.1.1/conf
cp hive-env.sh.template hive-env.sh
vim hive-env.sh
export HADOOP_HOME=/usr/hadoop-2.7.3
export HIVE_CONF_DIR=/usr/hive-2.1.1/conf
3) hive-site.xml configuration
touch hive-site.xml
vim hive-site.xml
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://ha1:3306/hive?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
  </property>
  <property>
    <name>datanucleus.readOnlyDatastore</name>
    <value>false</value>
  </property>
  <property>
    <name>datanucleus.fixedDatastore</name>
    <value>false</value>
  </property>
  <property>
    <name>datanucleus.autoCreateSchema</name>
    <value>true</value>
  </property>
  <property>
    <name>datanucleus.autoCreateTables</name>
    <value>true</value>
  </property>
  <property>
    <name>datanucleus.autoCreateColumns</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
  </property>
</configuration>
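With Hive 2.x it is also common to initialize the metastore schema explicitly rather than relying on DataNucleus auto-creation; a minimal sketch, assuming the MySQL connection settings above:
cd $HIVE_HOME/bin
./schematool -dbType mysql -initSchema   # creates the metastore tables in the 'hive' database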
4) Start the services (take a snapshot: hive + mysql OK)
cd $HIVE_HOME/bin
hive --service metastore &
hive --service hiveserver2 &
./hive
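Once hiveserver2 is up, a client can also connect through Beeline instead of the hive CLI; a minimal sketch, assuming the default port 10000 on the local host:
beeline -u jdbc:hive2://localhost:10000 -n root
# then run a sanity query, for example:
# show databases;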