1. Introduction to Phoenix
Apache Phoenix is a relational database layer built on top of HBase, delivered as an embedded client JDBC driver that provides low-latency access to data stored in HBase. Phoenix compiles each SQL query into a series of HBase scans and produces a standard JDBC result set for the client. Table metadata is stored in an HBase table and versioned, so queries automatically pick up the correct schema. Because it drives the HBase API directly, together with coprocessors and custom filters, small range queries respond in milliseconds, and queries over tens of millions of rows respond in seconds.
2. Installing Phoenix
Reference: the official documentation at http://phoenix.apache.org/installation.html
HBase 0.98.1 and later requires Phoenix 4.x.
The Phoenix release used here is phoenix-4.3.0-bin.tar.gz.
Per the official documentation, installing a pre-built Phoenix takes the following steps:
1) Download and extract phoenix-[version]-bin.tar.
2) Add phoenix-[version]-server.jar to the classpath of every HBase master and region server; the simplest way is to cp phoenix-[version]-server.jar into the lib/ directory of each master and region server (for Phoenix 3.x the corresponding jar is phoenix-core-[version].jar); see the sketch after this list.
3) Restart HBase.
4) Add phoenix-[version]-client.jar to the classpath of the Phoenix client.
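For example, assuming HBase is installed under /opt/hbase (a placeholder path; substitute your own installation directory), steps 2) and 3) might look like this:
$ cp phoenix-4.3.0-server.jar /opt/hbase/lib/    # repeat on every master and region server
$ /opt/hbase/bin/stop-hbase.sh
$ /opt/hbase/bin/start-hbase.sh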
3. Connecting to HBase
Run the following command to connect to the HBase cluster:
$ bin/sqlline.py chavin.king:2181
-- Note: the chavin.king argument is the hostname of the ZooKeeper quorum for the HBase cluster, and 2181 is the ZooKeeper port.
In Phoenix, the !tables command lists table information:
0: jdbc:phoenix:chavin.king:2181> !table
+-----------+-------------+-------------+---------------+-----------------+
|TABLE_CAT | TABLE_SCHEM | TABLE_NAME | TABLE_TYPE | |
+-----------+-------------+-------------+---------------+-----------------+
| | SYSTEM | CATALOG | SYSTEM TABLE | |
| | SYSTEM | SEQUENCE | SYSTEM TABLE | |
| | SYSTEM | STATS | SYSTEM TABLE | |
| | | WEB_STAT | TABLE | |
+-----------+-------------+-------------+---------------+-----------------+
-- Note: CATALOG, SEQUENCE, and STATS are built-in system tables. Tables that already exist in HBase are not mapped over automatically; a table with the same structure has to be created by hand, as covered in section 7 below.
From the HBase shell, confirm that the same three tables have appeared:
hbase(main):021:0> list
TABLE
SYSTEM.CATALOG
SYSTEM.SEQUENCE
SYSTEM.STATS
WEB_STAT
Phoenix is now installed.
4. CRUD operations in the Phoenix CLI
Reference: http://phoenix.apache.org/language/index.html#alter
1) Create a table in Phoenix:
0: jdbc:phoenix:chavin.king:2181> CREATE TABLE user_temp (id varchar PRIMARY KEY,account varchar ,passwd varchar);
No rows affected (2.626 seconds)
The corresponding USER_TEMP table is now visible in HBase (Phoenix folds unquoted identifiers to uppercase by default; to keep a lowercase name, wrap it in double quotes, as shown in the example after the describe output below):
hbase(main):030:0> describe 'USER_TEMP'
DESCRIPTION ENABLED
'USER_TEMP', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', copro true
cessor$2 => '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|', coprocessor$3 => '|org.apache.p
hoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', coprocessor$4 => '|org.apache.phoenix.coprocessor.ServerCac
hingEndpointImpl|805306366|', coprocessor$5 => '|org.apache.phoenix.hbase.index.Indexer|805306366|index.builder=org.apache
.phoenix.index.PhoenixIndexBuilder,org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec',
coprocessor$6 => '|org.apache.hadoop.hbase.regionserver.LocalIndexSplitter|805306366|'}, {NAME => '0', DATA_BLOCK_ENCODING
=> 'FAST_DIFF', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '
0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
1 row(s) in 0.4060 seconds
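As a quick illustration of the quoting rule, a table created with a double-quoted lowercase name keeps that exact name in HBase (the table name "user_lower" is made up for this example):
0: jdbc:phoenix:chavin.king:2181> CREATE TABLE "user_lower" (id varchar PRIMARY KEY, account varchar);
HBase would then list the table as user_lower rather than USER_LOWER.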
A CREATE TABLE statement that assigns columns to an explicit column family:
0: jdbc:phoenix:chavin.king:2181> CREATE TABLE user_temp_cf (id varchar PRIMARY KEY,INFO.account varchar ,INFO.passwd varchar);
No rows affected (3.467 seconds)
Check the table structure in the HBase shell:
hbase(main):035:0* describe 'USER_TEMP_CF'
DESCRIPTION ENABLED
'USER_TEMP_CF', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', co true
processor$2 => '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|', coprocessor$3 => '|org.apach
e.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', coprocessor$4 => '|org.apache.phoenix.coprocessor.Server
CachingEndpointImpl|805306366|', coprocessor$5 => '|org.apache.phoenix.hbase.index.Indexer|805306366|index.builder=org.apa
che.phoenix.index.PhoenixIndexBuilder,org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec
', coprocessor$6 => '|org.apache.hadoop.hbase.regionserver.LocalIndexSplitter|805306366|'}, {NAME => 'INFO', DATA_BLOCK_EN
CODING => 'FAST_DIFF', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSION
S => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true
'}
1 row(s) in 0.3380 seconds
2) Insert data into the table. The keyword differs from an RDBMS here: it is UPSERT rather than INSERT:
0: jdbc:phoenix:chavin.king:2181> upsert into user_temp(id, account, passwd) values('001', 'admin', 'admin');
1 row affected (0.77 seconds)
3) Query data:
0: jdbc:phoenix:chavin.king:2181> select * from user_temp;
+------+----------+----------+
| ID | ACCOUNT | PASSWD |
+------+----------+----------+
| 001 | admin | admin |
+------+----------+----------+
4) Update data: the syntax is identical to inserting data:
0: jdbc:phoenix:chavin.king:2181> upsert into user_temp(id, account, passwd) values('001', 'admin', 'admin');
1 row affected (0.77 seconds)
5) Delete data:
0: jdbc:phoenix:chavin.king:2181> delete from user_temp where id='001';
1 row affected (0.245 seconds)
6) Drop the table:
0: jdbc:phoenix:chavin.king:2181> drop table user_temp;
No rows affected (4.577 seconds)
5. CRUD operations via the Phoenix JDBC driver
Reference blog posts:
http://blog.csdn.net/maomaosi2009/article/details/45582321
http://blog.csdn.net/maomaosi2009/article/details/45583259
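The links above cover JDBC usage in detail; below is a minimal self-contained sketch, assuming phoenix-4.3.0-client.jar is on the classpath and reusing the user_temp table and ZooKeeper address from the examples above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixJdbcExample {
    public static void main(String[] args) throws Exception {
        // Load the driver explicitly; newer environments also pick it up automatically.
        Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
        // URL format: jdbc:phoenix:<zookeeper quorum>:<port>
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:chavin.king:2181")) {
            try (Statement stmt = conn.createStatement()) {
                stmt.execute("CREATE TABLE IF NOT EXISTS user_temp ("
                        + "id varchar PRIMARY KEY, account varchar, passwd varchar)");
                // UPSERT serves as both INSERT and UPDATE in Phoenix.
                stmt.executeUpdate("UPSERT INTO user_temp VALUES ('001', 'admin', 'admin')");
            }
            conn.commit(); // mutations are buffered until commit unless autocommit is enabled
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT id, account, passwd FROM user_temp WHERE id = ?")) {
                ps.setString(1, "001");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("ID") + " " + rs.getString("ACCOUNT"));
                    }
                }
            }
        }
    }
}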
6. Configuring the SQuirreL GUI to connect to Phoenix
Reference: the SQL Client section of the official documentation: http://phoenix.apache.org/installation.html#SQL_Client
1)Remove prior phoenix-[oldversion]-client.jar from the lib directory of SQuirrel, copy phoenix-[newversion]-client.jar to the lib directory (newversion should be compatible with the version of the phoenix server jar used with your HBase installation)
2)Start SQuirrel and add new driver to SQuirrel (Drivers -> New Driver)
3)In Add Driver dialog box, set Name to Phoenix, and set the Example URL to jdbc:phoenix:localhost.
4)Type “org.apache.phoenix.jdbc.PhoenixDriver” into the Class Name textbox and click OK to close this dialog.
5)Switch to Alias tab and create the new Alias (Aliases -> New Aliases)
6)In the dialog box, Name: any name, Driver: Phoenix, User Name: anything, Password: anything
7)Construct the URL as follows: jdbc:phoenix:<zookeeper quorum server>. For example, to connect to a local HBase use: jdbc:phoenix:localhost
8)Press Test (which should succeed if everything is setup correctly) and press OK to close.
9)Now double click on your newly created Phoenix alias and click Connect. Now you are ready to run SQL queries against Phoenix.
6.1) Download SQuirreL SQL Client:
https://jaist.dl.sourceforge.net/project/squirrel-sql/1-stable/3.8.0/squirrel-sql-3.8.0-standard.jar
Note that the squirrel-sql build must match your JDK version.
6.2) Install SQuirreL SQL Client:
Double-click the downloaded squirrel-sql-3.8.0-standard.jar and step through the installer, accepting the defaults.
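If double-clicking does not launch the installer, it can also be run from a terminal, since the jar is an executable installer:
$ java -jar squirrel-sql-3.8.0-standard.jar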
6.3) Configure SQuirreL SQL Client:
Copy phoenix-4.3.0-client.jar into SQuirreL's lib/ directory.
Go to the SQuirreL installation directory and double-click squirrel-sql.bat to start SQuirreL.
Drivers -> New Driver:
Name: phoenix
Example URL: jdbc:phoenix:chavin.king:2181
Website URL: (leave blank)
Java Class Path: select phoenix-4.3.0-client.jar
Class Name: org.apache.phoenix.jdbc.PhoenixDriver
Then configure an alias (Aliases -> New Alias), selecting the driver created above.
You can now work with HBase through the GUI.
7. Mapping tables created in HBase into Phoenix
After Phoenix is installed, tables that already exist in HBase, or that are created later directly in HBase, are not mapped automatically; to operate on existing HBase tables from Phoenix, the mapping has to be configured by hand.
An existing table in HBase:
hbase(main):043:0> scan 'user'
ROW COLUMN+CELL
10001 column=info:age, timestamp=1490389800933, value=28
10001 column=info:name, timestamp=1490389743583, value=zhangsan
10001 column=info:sex, timestamp=1490389809756, value=man
10002 column=info:address, timestamp=1490846659202, value=beijing
10002 column=info:age, timestamp=1490846659202, value=\x00\x00\x00\x1C
10002 column=info:name, timestamp=1490846659202, value=lisi
2 row(s) in 0.1170 seconds
Map the HBase user table in Phoenix:
create table "user"(
ROW varchar primary key,
"info"."name" varchar,
"info"."age" varchar,
"info"."sex" varchar,
"info"."address" varchar
);
Running it in sqlline:
0: jdbc:phoenix:chavin.king:2181> create table "user"(
. . . . . . . . . . . . . . . . > ROW varchar primary key,
. . . . . . . . . . . . . . . . > "info"."name" varchar,
. . . . . . . . . . . . . . . . > "info"."age" varchar,
. . . . . . . . . . . . . . . . > "info"."sex" varchar,
. . . . . . . . . . . . . . . . > "info"."address" varchar
. . . . . . . . . . . . . . . . > );
2 rows affected (5.593 seconds)
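Once the mapping exists, the rows already stored in HBase are immediately queryable (note the double quotes, since the table name is lowercase):
0: jdbc:phoenix:chavin.king:2181> select * from "user";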
8. Modifying a Phoenix table structure with ALTER TABLE
0: jdbc:phoenix:chavin.king:2181> alter table "user" drop column "address";
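Adding a column works the same way; a minimal sketch (the "info"."phone" column is made up for illustration):
0: jdbc:phoenix:chavin.king:2181> alter table "user" add "info"."phone" varchar;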
9. Running SQL scripts with Phoenix
Phoenix ships with the ability to execute SQL scripts; the main purpose is to ease migrating data from an RDBMS into HBase.
Create a script p_user.sql under the examples/ directory with the following content:
-- create table p_user
create table if not exists p_user (id varchar primary key,account varchar ,passwd varchar);
-- insert data
upsert into p_user(id, account, passwd) values('001', 'admin', 'admin');
upsert into p_user(id, account, passwd) values('002', 'test', 'test');
upsert into p_user(id, account, passwd) values('003', 'zx', 'zx');
-- query data
select * from p_user;
Execute the script with Phoenix's psql.py tool:
bin/psql.py chavin.king:2181 examples/p_user.sql
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
17/10/22 00:59:26 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/10/22 00:59:31 WARN impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-phoenix.properties,hadoop-metrics2.properties
no rows upserted
Time: 3.23 sec(s)
1 row upserted
Time: 0.297 sec(s)
1 row upserted
Time: 0.038 sec(s)
1 row upserted
Time: 0.029 sec(s)
ID ACCOUNT PASSWD
---------------------------------------- ---------------------------------------- ----------------------------------------
001 admin admin
002 test test
003 zx zx
Time: 0.142 sec(s)
-- Here chavin.king:2181 is the ZooKeeper address that HBase is registered with.
10. Loading formatted data files into HBase through Phoenix
1) Importing small data sets with the single-threaded psql.py tool
Create a file data.csv under the examples/ directory:
12345,John,Doe
67890,Mary,Poppins
Create a table in Phoenix to hold the data above:
CREATE TABLE example (
    my_pk bigint not null,
    m.first_name varchar(50),
    m.last_name varchar(50)
    CONSTRAINT pk PRIMARY KEY (my_pk));
Import data.csv into the example table with the psql.py tool:
bin/psql.py chavin.king:2181 -t EXAMPLE examples/data.csv
2) Loading large data sets in parallel with MapReduce
Create a test table:
create table p_user0 (id varchar primary key,account varchar ,passwd varchar);
Generate test data under the examples/ directory:
vim examples/data_mr.csv
001,google,AM
002,baidu,BJ
003,alibaba,HZ
Load the data via MapReduce:
export HBASE_HOME=/opt/cdh-5.3.6/hbase-0.98.6-cdh5.3.6
export HADOOP_HOME=/opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6
HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase mapredcp`:${HBASE_HOME}/conf $HADOOP_HOME/bin/hadoop jar \
    phoenix-4.3.0-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool \
    --table P_USER0 \
    --input file:///opt/cdh-5.3.6/phoenix-4.3.0-bin/examples/data_mr.csv \
    --zookeeper chavin.king:2181
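When the job completes, the loaded rows can be verified from sqlline, which should return the three rows from data_mr.csv:
0: jdbc:phoenix:chavin.king:2181> select * from p_user0;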