Environment:
OS: CentOS 7
Version: 21.6.6.51
Machines:
192.168.1.118
192.168.1.134
192.168.1.135
-----------------------ZooKeeper deployment-----------------------------------------
For the installation steps see: https://www.cnblogs.com/hxlasky/p/14665752.html
------------------------Standalone installation---------------------------------------------
The installation steps are the same on every machine.
1. Download ClickHouse
The following 3 packages are required:
clickhouse-common-static-21.6.6.51-2.x86_64.rpm
clickhouse-client-21.6.6.51-2.noarch.rpm
clickhouse-server-21.6.6.51-2.noarch.rpm
Download from: https://packagecloud.io/Altinity/clickhouse
or from the mirror:
https://mirrors.tuna.tsinghua.edu.cn/clickhouse/rpm/stable/x86_64/
2. Install
rpm -ivh clickhouse-common-static-21.6.6.51-2.x86_64.rpm
rpm -ivh clickhouse-client-21.6.6.51-2.noarch.rpm
rpm -ivh clickhouse-server-21.6.6.51-2.noarch.rpm
The server package prompts for a default password during installation; just press Enter to leave it empty. A password for the user can be set separately later.
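The three rpm commands above must run in this order, because clickhouse-client and clickhouse-server both depend on clickhouse-common-static. A minimal sketch that just prints the commands in dependency order (the `install_order` helper and the `echo` stand-in are mine; swap `echo` for the real `rpm -ivh` on the target host):

```shell
#!/bin/sh
# Print the install commands in dependency order:
# clickhouse-common-static must be installed first, the other two depend on it.
install_order() {
    ver=$1
    for pkg in \
        "clickhouse-common-static-${ver}.x86_64.rpm" \
        "clickhouse-client-${ver}.noarch.rpm" \
        "clickhouse-server-${ver}.noarch.rpm"
    do
        echo "rpm -ivh $pkg"   # replace echo with the real rpm invocation on the host
    done
}
install_order 21.6.6.51-2
```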
3. Change the data and log directories
vi /etc/clickhouse-server/config.xml
Data directory setting:
<!-- Path to data directory, with trailing slash. -->
<path>/var/lib/clickhouse/</path>
Log directory settings:
<level>trace</level>
<log>/var/log/clickhouse-server/clickhouse-server.log</log>
<errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
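If the data directory is moved somewhere non-default, the new directory has to exist before the server starts, and on a real host it must be owned by the service user. A small sketch (the `prepare_dir` helper is mine; the clickhouse user/group name is an assumption, so the chown is left commented out):

```shell
#!/bin/sh
# Sketch: after changing <path> in config.xml, make sure the directory exists,
# otherwise the server will not start. On the real host it must also be owned
# by the clickhouse user.
prepare_dir() {
    mkdir -p "$1"
    # chown clickhouse:clickhouse "$1"   # uncomment on the real host
    echo "prepared $1"
}
prepare_dir /tmp/clickhouse-demo-data
```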
4. Enable remote connections
vi /etc/clickhouse-server/config.xml
Find the commented-out line
<!-- <listen_host>0.0.0.0</listen_host> -->
and uncomment it:
<listen_host>0.0.0.0</listen_host>
5. Start the server
[root@localhost clickhouse21]# systemctl start clickhouse-server.service
6. Log in
clickhouse-client --host localhost --port=9000 -m
clickhouse-client --host 192.168.1.118 --port=9000 -m
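Once all three machines are installed, a small loop can confirm that each node accepts connections on the native TCP port. This is a sketch: the `check_nodes` helper is mine, and it assumes clickhouse-client is on the PATH; it only reports per-host status, it does not stop on failure:

```shell
#!/bin/sh
# Report, for each host, whether clickhouse-client can run SELECT 1 against it.
check_nodes() {
    for h in "$@"; do
        if clickhouse-client --host "$h" --port 9000 --query 'SELECT 1' >/dev/null 2>&1; then
            echo "$h OK"
        else
            echo "$h UNREACHABLE"
        fi
    done
}
check_nodes 192.168.1.118 192.168.1.134 192.168.1.135
```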
------------------------Cluster installation---------------------------------------------
1. Create the /etc/metrika.xml file
<?xml version="1.0"?>
<yandex>
<clickhouse_remote_servers>
<perftest_3shards_1replicas>
<shard>
<internal_replication>true</internal_replication>
<replica>
<host>192.168.1.118</host>
<port>9000</port>
</replica>
</shard>
<shard>
<internal_replication>true</internal_replication>
<replica>
<host>192.168.1.134</host>
<port>9000</port>
</replica>
</shard>
<shard>
<internal_replication>true</internal_replication>
<replica>
<host>192.168.1.135</host>
<port>9000</port>
</replica>
</shard>
</perftest_3shards_1replicas>
</clickhouse_remote_servers>
<!-- ZooKeeper settings -->
<zookeeper-servers>
<node index="1">
<host>192.168.1.118</host>
<port>2182</port>
</node>
<node index="2">
<host>192.168.1.134</host>
<port>2182</port>
</node>
<node index="3">
<host>192.168.1.135</host>
<port>2182</port>
</node>
</zookeeper-servers>
<macros>
<shard_name>01</shard_name>
<replica>192.168.1.118</replica>
</macros>
<networks>
<ip>::/0</ip>
</networks>
<clickhouse_compression>
<case>
<min_part_size>10000000000</min_part_size>
<min_part_size_ratio>0.01</min_part_size_ratio>
<method>lz4</method>
</case>
</clickhouse_compression>
</yandex>
Create this file on every machine. The <macros> section must be adjusted to each machine's IP; the per-machine values are listed below. Also make sure the port in the zookeeper-servers section matches your ZooKeeper deployment (2182 here, not the default 2181).
192.168.1.118:
<macros>
<shard_name>01</shard_name>
<replica>192.168.1.118</replica>
</macros>
192.168.1.134:
<macros>
<shard_name>02</shard_name>
<replica>192.168.1.134</replica>
</macros>
192.168.1.135:
<macros>
<shard_name>03</shard_name>
<replica>192.168.1.135</replica>
</macros>
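Since the per-node differences are mechanical, the <macros> block can be generated instead of edited by hand on each machine. A sketch, using the shard-number/replica-IP pairs from this guide (the `gen_macros` helper is mine):

```shell
#!/bin/sh
# Emit the <macros> block for a given shard number and replica IP,
# matching the layout used in /etc/metrika.xml above.
gen_macros() {
    cat <<EOF
<macros>
<shard_name>$1</shard_name>
<replica>$2</replica>
</macros>
EOF
}
# Example: the block for 192.168.1.118 (shard 01)
gen_macros 01 192.168.1.118
```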
2. Edit the /etc/clickhouse-server/config.xml file
<yandex>
<!-- pull in metrika.xml -->
<include_from>/etc/metrika.xml</include_from>
<!-- reference the ZooKeeper settings defined in metrika.xml -->
<zookeeper incl="zookeeper-servers" optional="true" />
<!-- reference the clickhouse_remote_servers section -->
<remote_servers incl="clickhouse_remote_servers">
<!-- Test only shard config for testing distributed storage -->
<test_shard_localhost>
...
The <include_from> element, the <zookeeper> element and the incl attribute on the existing <remote_servers> element are the parts to add; the rest of config.xml stays as shipped.
3. Restart
systemctl stop clickhouse-server.service
systemctl start clickhouse-server.service
systemctl status clickhouse-server.service
4. Log in and check the cluster
Logging in on any one of the nodes is enough to check:
clickhouse-client --host 192.168.1.118 --port=9000
[root@localhost clickhouse-server]# clickhouse-client --host 192.168.1.118 --port=9000
ClickHouse client version 21.6.6.51 (official build).
Connecting to 192.168.1.118:9000 as user default.
Connected to ClickHouse server version 21.6.6 revision 54448.
localhost :) select * from system.clusters;
SELECT *
FROM system.clusters
Query id: e91b2180-45ae-4174-b733-228b9c14cdb3
┌─cluster──────────────────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name─────┬─host_address──┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─slowdowns_count─┬─estimated_recovery_time─┐
│ perftest_3shards_1replicas │ 1 │ 1 │ 1 │ 192.168.1.118 │ 192.168.1.118 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ perftest_3shards_1replicas │ 2 │ 1 │ 1 │ 192.168.1.134 │ 192.168.1.134 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ perftest_3shards_1replicas │ 3 │ 1 │ 1 │ 192.168.1.135 │ 192.168.1.135 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ test_cluster_two_shards │ 1 │ 1 │ 1 │ 127.0.0.1 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ test_cluster_two_shards │ 2 │ 1 │ 1 │ 127.0.0.2 │ 127.0.0.2 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ test_cluster_two_shards_internal_replication │ 1 │ 1 │ 1 │ 127.0.0.1 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ test_cluster_two_shards_internal_replication │ 2 │ 1 │ 1 │ 127.0.0.2 │ 127.0.0.2 │ 9000 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ test_cluster_two_shards_localhost │ 1 │ 1 │ 1 │ localhost │ ::1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ test_cluster_two_shards_localhost │ 2 │ 1 │ 1 │ localhost │ ::1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ test_shard_localhost │ 1 │ 1 │ 1 │ localhost │ ::1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ test_shard_localhost_secure │ 1 │ 1 │ 1 │ localhost │ ::1 │ 9440 │ 0 │ default │ │ 0 │ 0 │ 0 │
│ test_unavailable_shard │ 1 │ 1 │ 1 │ localhost │ ::1 │ 9000 │ 1 │ default │ │ 0 │ 0 │ 0 │
│ test_unavailable_shard │ 2 │ 1 │ 1 │ localhost │ ::1 │ 1 │ 0 │ default │ │ 0 │ 0 │ 0 │
└──────────────────────────────────────────────┴───────────┴──────────────┴─────────────┴───────────────┴───────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────┴─────────────────────────┘
13 rows in set. Elapsed: 0.006 sec.
5. Check the ZooKeeper information
localhost :) select * from system.zookeeper WHERE path='/';
SELECT *
FROM system.zookeeper
WHERE path = '/'
Query id: 1dad1e44-5f79-4b85-88fd-fd79dd5e407a
┌─name───────┬─value─┬──────czxid─┬──────mzxid─┬───────────────ctime─┬───────────────mtime─┬─version─┬─cversion─┬─aversion─┬─ephemeralOwner─┬─dataLength─┬─numChildren─┬──────pzxid─┬─path─┐
│ zookeeper │ │ 0 │ 0 │ 1970-01-01 08:00:00 │ 1970-01-01 08:00:00 │ 0 │ -2 │ 0 │ 0 │ 0 │ 2 │ 0 │ / │
│ clickhouse │ │ 4294967298 │ 4294967298 │ 2022-01-15 09:02:27 │ 2022-01-15 09:02:27 │ 0 │ 1 │ 0 │ 0 │ 0 │ 1 │ 4294967299 │ / │
└────────────┴───────┴────────────┴────────────┴─────────────────────┴─────────────────────┴─────────┴──────────┴──────────┴────────────────┴────────────┴─────────────┴────────────┴──────┘
2 rows in set. Elapsed: 0.007 sec.
6. Verify with data
Create a database:
Log in to one of the nodes and run the statement; here we log in to 192.168.1.118:
[root@localhost clickhouse-server]# clickhouse-client --host localhost -m
localhost :) CREATE DATABASE db_test ON CLUSTER perftest_3shards_1replicas;
CREATE DATABASE db_test ON CLUSTER perftest_3shards_1replicas
Query id: cd14d017-2333-478a-a914-c7dee2437b9b
┌─host──────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ 192.168.1.118 │ 9000 │ 0 │ │ 2 │ 2 │
└───────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
┌─host──────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ 192.168.1.134 │ 9000 │ 0 │ │ 1 │ 0 │
└───────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
┌─host──────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ 192.168.1.135 │ 9000 │ 0 │ │ 0 │ 0 │
└───────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
3 rows in set. Elapsed: 0.447 sec.
localhost :) show databases;
SHOW DATABASES
Query id: eaecfd6b-f98d-48a8-bfb0-49d80e8e147e
┌─name────┐
│ db_test │
│ default │
│ system │
└─────────┘
Now log in to the other nodes and list the databases; the newly created db_test is visible on all of them.
[root@localhost clickhouse21]# clickhouse-client --host 192.168.1.134 -m
localhost :) show databases;
SHOW DATABASES
Query id: 5b7f7379-3e2f-4f8d-b411-5163ad151ba0
┌─name────┐
│ db_test │
│ default │
│ system │
└─────────┘
[root@localhost clickhouse21]# clickhouse-client --host 192.168.1.135 -m
localhost :) show databases;
SHOW DATABASES
Query id: f446b2c4-eac4-4848-8980-0ad1fe80080e
┌─name────┐
│ db_test │
│ default │
│ system │
└─────────┘
Create a local table
Log in to one of the machines; here we run the statements on 192.168.1.118:
[root@localhost clickhouse-server]# clickhouse-client --host 192.168.1.118 -m
localhost :) use db_test;
With the ON CLUSTER clause the table is created on every node:
CREATE TABLE user_local ON CLUSTER perftest_3shards_1replicas
(
id Int32,
name String
) engine = MergeTree
order by id
settings index_granularity = 8192;
Dropping the table on every node likewise requires the ON CLUSTER clause:
drop table user_local ON CLUSTER perftest_3shards_1replicas;
Create the distributed table. Again, with the ON CLUSTER clause the same table is created on every node:
CREATE TABLE user_all ON CLUSTER perftest_3shards_1replicas(
id Int32,
name String) ENGINE = Distributed(perftest_3shards_1replicas, db_test, user_local,id);
The column order of the distributed table does not have to match the local table; queries map columns to the local table's columns of the same name automatically.
7. Write data
Write rows into the local table on each node:
192.168.1.118:
insert into user_local(id, name) values (1, 'zhangsan');
insert into user_local(id, name) values (2, 'lisi');
192.168.1.134:
insert into user_local(id, name) values (3, 'wangm');
insert into user_local(id, name) values (4, 'lijing');
192.168.1.135:
insert into user_local(id, name) values (5, 'zhangquan');
insert into user_local(id, name) values (6, 'lihua');
Query the distributed table:
Query on any node; the distributed table returns the data from all 3 nodes:
localhost :) select * from user_all order by id;
SELECT *
FROM user_all
ORDER BY id ASC
Query id: 1c5a7bdf-b581-469d-8153-46563eb3e3af
┌─id─┬─name─────┐
│ 1 │ zhangsan │
└────┴──────────┘
┌─id─┬─name─┐
│ 2 │ lisi │
└────┴──────┘
┌─id─┬─name──┐
│ 3 │ wangm │
└────┴───────┘
┌─id─┬─name───┐
│ 4 │ lijing │
└────┴────────┘
┌─id─┬─name──────┐
│ 5 │ zhangquan │
└────┴───────────┘
┌─id─┬─name──┐
│ 6 │ lihua │
└────┴───────┘
Data can also be written through the distributed table; which local node a given row lands on is decided by ClickHouse's internal sharding rule (the sharding key in the Distributed engine definition).
insert into user_all(id, name) values (7, 'wangmingquan');
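For an integer sharding key with equal shard weights, my understanding is that the Distributed engine picks the shard as key modulo the total weight (3 here), so treat this sketch as illustrative, not as the definitive rule:

```shell
#!/bin/sh
# With 3 shards of weight 1, an integer key would map to shard_num (key % 3) + 1.
# This mirrors the assumed modulo rule; verify against system.parts on the nodes.
shard_for() {
    echo $(( $1 % 3 + 1 ))
}
shard_for 7   # under this rule, id 7 lands on shard 2
```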
8. Add a column
Run on any one node:
alter table user_local ON CLUSTER perftest_3shards_1replicas add column name1 String;
Run this on every node to check that the column has been added:
show create table user_local;
The distributed table user_all does not pick up the new column automatically; it has to be added by hand:
alter table user_all ON CLUSTER perftest_3shards_1replicas add column name1 String;
Adding a column that does not exist in the local table:
alter table user_all ON CLUSTER perftest_3shards_1replicas add column name2 String;
The statement succeeds, but querying the table then fails with an error:
SELECT *
FROM user_all
Query id: 4c0c2717-2b30-4d38-805b-5d8aea6248a1
0 rows in set. Elapsed: 0.004 sec.
Received exception from server (version 21.6.6):
Code: 47. DB::Exception: Received from 192.168.1.118:9000. DB::Exception: There's no column 'user_local.name2' in table 'user_local': While processing user_local.name2.
Drop the column:
alter table user_all ON CLUSTER perftest_3shards_1replicas drop column name2;