• ClickHouse 21.6 cluster deployment


    Environment:
    OS: CentOS 7
    Version: 21.6.6.51
    Machines:
    192.168.1.118
    192.168.1.134
    192.168.1.135

     

    ----------------------- ZooKeeper deployment -----------------------------------------
    For the installation steps, see: https://www.cnblogs.com/hxlasky/p/14665752.html

     

    ------------------------ Single-node installation ---------------------------------------------
    The installation steps are identical on every machine.
    1. Download ClickHouse
    The following three packages are needed:
    clickhouse-common-static-21.6.6.51-2.x86_64.rpm
    clickhouse-client-21.6.6.51-2.noarch.rpm
    clickhouse-server-21.6.6.51-2.noarch.rpm

    Download them from: https://packagecloud.io/Altinity/clickhouse
    or from:

    https://mirrors.tuna.tsinghua.edu.cn/clickhouse/rpm/stable/x86_64/
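
    For example, all three RPMs could be fetched in one go with wget (a sketch; it assumes the packages are listed flat under the mirror directory above, so adjust the URLs if the layout differs):

    BASE=https://mirrors.tuna.tsinghua.edu.cn/clickhouse/rpm/stable/x86_64
    VER=21.6.6.51-2
    wget "$BASE/clickhouse-common-static-$VER.x86_64.rpm" \
         "$BASE/clickhouse-client-$VER.noarch.rpm" \
         "$BASE/clickhouse-server-$VER.noarch.rpm"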

     

    2. Install
    rpm -ivh clickhouse-common-static-21.6.6.51-2.x86_64.rpm
    rpm -ivh clickhouse-client-21.6.6.51-2.noarch.rpm
    rpm -ivh clickhouse-server-21.6.6.51-2.noarch.rpm

    Installing clickhouse-server prompts for a password for the default user; pressing Enter leaves it empty, and a password can be set for the user separately later.
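
    If a password is wanted later, one common way (a sketch, not part of the original steps; the file name is hypothetical) is to add an override for the default user under /etc/clickhouse-server/users.d/:

    <!-- /etc/clickhouse-server/users.d/default-password.xml -->
    <yandex>
      <users>
        <default>
          <!-- hash generated with: echo -n 'MyStrongPassword' | sha256sum -->
          <password_sha256_hex>put-the-sha256-hex-here</password_sha256_hex>
        </default>
      </users>
    </yandex>

    The server should pick up changes under users.d on its periodic config reload; restarting clickhouse-server also works.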


    3. Change the data and log directories
    vi /etc/clickhouse-server/config.xml

    The data directory is set here:
    <!-- Path to data directory, with trailing slash. -->
    <path>/var/lib/clickhouse/</path>

    The log level and log files are set here:
    <level>trace</level>
    <log>/var/log/clickhouse-server/clickhouse-server.log</log>
    <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
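
    If the data directory is moved to another disk, the new directory has to exist and be owned by the clickhouse user before the server starts. A sketch with a hypothetical /data/clickhouse/ target:

    mkdir -p /data/clickhouse
    chown -R clickhouse:clickhouse /data/clickhouse
    # then set <path>/data/clickhouse/</path> in config.xml (keep the trailing slash)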

     

    4. Allow remote connections
    vi /etc/clickhouse-server/config.xml

    Uncomment the line
    <!-- <listen_host>0.0.0.0</listen_host> -->

    so that it becomes

     <listen_host>0.0.0.0</listen_host>

    5. Start the service
    [root@localhost clickhouse21]# systemctl start clickhouse-server.service

     

    6. Log in with the client
    clickhouse-client --host localhost --port=9000 -m
    clickhouse-client --host 192.168.1.118 --port=9000 -m

     

    ------------------------ Cluster installation ---------------------------------------------

    1. Create the /etc/metrika.xml file

    <?xml version="1.0"?>
    <yandex>
      <clickhouse_remote_servers>
        <perftest_3shards_1replicas>
          <shard>
            <internal_replication>true</internal_replication>
            <replica>
              <host>192.168.1.118</host>
              <port>9000</port>
            </replica>
          </shard>
          <shard>
            <internal_replication>true</internal_replication>
            <replica>
              <host>192.168.1.134</host>
              <port>9000</port>
            </replica>
          </shard>
          <shard>
            <internal_replication>true</internal_replication>
            <replica>
              <host>192.168.1.135</host>
              <port>9000</port>
            </replica>
          </shard>
        </perftest_3shards_1replicas>
      </clickhouse_remote_servers>
      <!-- ZooKeeper configuration -->
      <zookeeper-servers>
        <node index="1">
          <host>192.168.1.118</host>
          <port>2182</port>
        </node>
        <node index="2">
          <host>192.168.1.134</host>
          <port>2182</port>
        </node>
        <node index="3">
          <host>192.168.1.135</host>
          <port>2182</port>
        </node>
      </zookeeper-servers>
      <macros>
        <shard_name>01</shard_name>
        <replica>192.168.1.118</replica>
      </macros>
      <networks>
        <ip>::/0</ip>
      </networks>
      <clickhouse_compression>
        <case>
          <min_part_size>10000000000</min_part_size>
          <min_part_size_ratio>0.01</min_part_size_ratio>
          <method>lz4</method>
        </case>
      </clickhouse_compression>
    </yandex>

     

    Create this file on every machine. Only the <macros> section differs from machine to machine and must be adjusted to that machine's IP, as listed below; also make sure the port in the zookeeper-servers section matches your ZooKeeper deployment (2182 here).
    192.168.1.118:
    <macros>
    <shard_name>01</shard_name>
    <replica>192.168.1.118</replica>
    </macros>

    192.168.1.134:
    <macros>
    <shard_name>02</shard_name>
    <replica>192.168.1.134</replica>
    </macros>

    192.168.1.135:
    <macros>
    <shard_name>03</shard_name>
    <replica>192.168.1.135</replica>
    </macros>
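
    The macros are not actually used later in this walkthrough (the tables below use plain MergeTree), but they are what would be substituted into ReplicatedMergeTree definitions if replicated tables were added later. A hedged sketch using the macro names defined above and a hypothetical table t_replicated:

    CREATE TABLE t_replicated ON CLUSTER perftest_3shards_1replicas
    (
        id Int32,
        name String
    )
    ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard_name}/t_replicated', '{replica}')
    ORDER BY id;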

     

    2. Edit /etc/clickhouse-server/config.xml

    Inside the existing <yandex> element, add the following:

    <yandex>
        <!-- pull in metrika.xml -->
        <include_from>/etc/metrika.xml</include_from>

        <!-- reference the zookeeper-servers section defined in metrika.xml -->
        <zookeeper incl="zookeeper-servers" optional="true" />

        <!-- reference the clickhouse_remote_servers section defined in metrika.xml -->
        <remote_servers incl="clickhouse_remote_servers">
            <!-- Test only shard config for testing distributed storage -->
            <test_shard_localhost>
            ...

    The <include_from> element, the <zookeeper> reference, and the incl attribute on <remote_servers> are the additions; everything else, including the built-in test_shard_localhost sample cluster, is the stock configuration.

     

    3. Restart
    systemctl stop clickhouse-server.service
    systemctl start clickhouse-server.service
    systemctl status clickhouse-server.service

     

    4. Log in and check the cluster
    Logging in on any one of the nodes is enough.

     

    [root@localhost clickhouse-server]# clickhouse-client --host 192.168.1.118 --port=9000
    ClickHouse client version 21.6.6.51 (official build).
    Connecting to 192.168.1.118:9000 as user default.
    Connected to ClickHouse server version 21.6.6 revision 54448.
    
    localhost :) select * from system.clusters;
    
    SELECT *
    FROM system.clusters
    
    Query id: e91b2180-45ae-4174-b733-228b9c14cdb3
    
    ┌─cluster──────────────────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name─────┬─host_address──┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─slowdowns_count─┬─estimated_recovery_time─┐
    │ perftest_3shards_1replicas                   │         1 │            1 │           1 │ 192.168.1.118 │ 192.168.1.118 │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
    │ perftest_3shards_1replicas                   │         2 │            1 │           1 │ 192.168.1.134 │ 192.168.1.134 │ 9000 │        0 │ default │                  │            0 │               0 │                       0 │
    │ perftest_3shards_1replicas                   │         3 │            1 │           1 │ 192.168.1.135 │ 192.168.1.135 │ 9000 │        0 │ default │                  │            0 │               0 │                       0 │
    │ test_cluster_two_shards                      │         1 │            1 │           1 │ 127.0.0.1     │ 127.0.0.1     │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
    │ test_cluster_two_shards                      │         2 │            1 │           1 │ 127.0.0.2     │ 127.0.0.2     │ 9000 │        0 │ default │                  │            0 │               0 │                       0 │
    │ test_cluster_two_shards_internal_replication │         1 │            1 │           1 │ 127.0.0.1     │ 127.0.0.1     │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
    │ test_cluster_two_shards_internal_replication │         2 │            1 │           1 │ 127.0.0.2     │ 127.0.0.2     │ 9000 │        0 │ default │                  │            0 │               0 │                       0 │
    │ test_cluster_two_shards_localhost            │         1 │            1 │           1 │ localhost     │ ::1           │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
    │ test_cluster_two_shards_localhost            │         2 │            1 │           1 │ localhost     │ ::1           │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
    │ test_shard_localhost                         │         1 │            1 │           1 │ localhost     │ ::1           │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
    │ test_shard_localhost_secure                  │         1 │            1 │           1 │ localhost     │ ::1           │ 9440 │        0 │ default │                  │            0 │               0 │                       0 │
    │ test_unavailable_shard                       │         1 │            1 │           1 │ localhost     │ ::1           │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
    │ test_unavailable_shard                       │         2 │            1 │           1 │ localhost     │ ::1           │    1 │        0 │ default │                  │            0 │               0 │                       0 │
    └──────────────────────────────────────────────┴───────────┴──────────────┴─────────────┴───────────────┴───────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────┴─────────────────────────┘
    
    13 rows in set. Elapsed: 0.006 sec. 

     

    5. Check the ZooKeeper information

    localhost :) select * from system.zookeeper WHERE path='/';
    
    SELECT *
    FROM system.zookeeper
    WHERE path = '/'
    
    Query id: 1dad1e44-5f79-4b85-88fd-fd79dd5e407a
    
    ┌─name───────┬─value─┬──────czxid─┬──────mzxid─┬───────────────ctime─┬───────────────mtime─┬─version─┬─cversion─┬─aversion─┬─ephemeralOwner─┬─dataLength─┬─numChildren─┬──────pzxid─┬─path─┐
    │ zookeeper  │       │          0 │          0 │ 1970-01-01 08:00:00 │ 1970-01-01 08:00:00 │       0 │       -2 │        0 │              0 │          0 │           2 │          0 │ /    │
    │ clickhouse │       │ 4294967298 │ 4294967298 │ 2022-01-15 09:02:27 │ 2022-01-15 09:02:27 │       0 │        1 │        0 │              0 │          0 │           1 │ 4294967299 │ /    │
    └────────────┴───────┴────────────┴────────────┴─────────────────────┴─────────────────────┴─────────┴──────────┴──────────┴────────────────┴────────────┴─────────────┴────────────┴──────┘
    
    2 rows in set. Elapsed: 0.007 sec. 

     

    6. Verify with data
    Create a database:
    Log in on one of the nodes and run the statement there; here we log in to 192.168.1.118:

    [root@localhost clickhouse-server]# clickhouse-client --host localhost -m
    
    
    localhost :) CREATE DATABASE db_test ON CLUSTER perftest_3shards_1replicas;
    
    CREATE DATABASE db_test ON CLUSTER perftest_3shards_1replicas
    
    Query id: cd14d017-2333-478a-a914-c7dee2437b9b
    
    ┌─host──────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
    │ 192.168.1.118 │ 9000 │      0 │       │                   2 │                2 │
    └───────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
    ┌─host──────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
    │ 192.168.1.134 │ 9000 │      0 │       │                   1 │                0 │
    └───────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
    ┌─host──────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
    │ 192.168.1.135 │ 9000 │      0 │       │                   0 │                0 │
    └───────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
    
    3 rows in set. Elapsed: 0.447 sec. 
    
    localhost :) show databases;
    
    SHOW DATABASES
    
    Query id: eaecfd6b-f98d-48a8-bfb0-49d80e8e147e
    
    ┌─name────┐
    │ db_test │
    │ default │
    │ system  │
    └─────────┘

     

    Logging in to the other nodes and listing the databases now shows the newly created db_test on each of them:

    [root@localhost clickhouse21]# clickhouse-client --host 192.168.1.134 -m
    localhost :) show databases;
    
    SHOW DATABASES
    
    Query id: 5b7f7379-3e2f-4f8d-b411-5163ad151ba0
    
    ┌─name────┐
    │ db_test │
    │ default │
    │ system  │
    └─────────┘
    
    [root@localhost clickhouse21]# clickhouse-client --host 192.168.1.135 -m
    
    localhost :) show databases;
    
    SHOW DATABASES
    
    Query id: f446b2c4-eac4-4848-8980-0ad1fe80080e
    
    ┌─name────┐
    │ db_test │
    │ default │
    │ system  │
    └─────────┘

     

    Create the local table

    Log in to one of the machines and run the statements there; here we use 192.168.1.118.

     



    [root@localhost clickhouse-server]# clickhouse-client --host 192.168.1.118 -m
    localhost :) use db_test;

    With the ON CLUSTER clause the table is created on every node:
    CREATE TABLE user_local ON CLUSTER perftest_3shards_1replicas
    (
    id Int32,
    name String
    ) engine = MergeTree
    order by id
    settings index_granularity = 8192;


    Dropping the table from every node likewise requires the ON CLUSTER clause:
    drop table user_local ON CLUSTER perftest_3shards_1replicas;


    Create the distributed table; with ON CLUSTER it too is created on every node:
    CREATE TABLE user_all ON CLUSTER perftest_3shards_1replicas(
    id Int32,
    name String) ENGINE = Distributed(perftest_3shards_1replicas, db_test, user_local,id);

    The column order of the distributed table does not have to match the local table; at query time columns are mapped to the local table's columns of the same name, as the sketch below illustrates.
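
    For example, a distributed table declared with the columns in the opposite order still reads user_local correctly, because columns are matched by name (user_all2 is hypothetical, used only for illustration):

    CREATE TABLE user_all2 ON CLUSTER perftest_3shards_1replicas
    (
        name String,
        id Int32
    ) ENGINE = Distributed(perftest_3shards_1replicas, db_test, user_local, id);

    SELECT id, name FROM user_all2 ORDER BY id;  -- same rows as user_all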

     

     

    7. Insert data
    Insert into the local tables:

    192.168.1.118:
    insert into user_local(id, name) values (1, 'zhangsan');
    insert into user_local(id, name) values (2, 'lisi');
    
    192.168.1.134:
    insert into user_local(id, name) values (3, 'wangm');
    insert into user_local(id, name) values (4, 'lijing');
    
    192.168.1.135:
    insert into user_local(id, name) values (5, 'zhangquan');
    insert into user_local(id, name) values (6, 'lihua');

     

    Query the distributed table:
    Querying on any one node, the distributed table returns the rows from all three nodes:

    localhost :) select * from user_all order by id;
    
    SELECT *
    FROM user_all
    ORDER BY id ASC
    
    Query id: 1c5a7bdf-b581-469d-8153-46563eb3e3af
    
    ┌─id─┬─name─────┐
    │  1 │ zhangsan │
    └────┴──────────┘
    ┌─id─┬─name─┐
    │  2 │ lisi │
    └────┴──────┘
    ┌─id─┬─name──┐
    │  3 │ wangm │
    └────┴───────┘
    ┌─id─┬─name───┐
    │  4 │ lijing │
    └────┴────────┘
    ┌─id─┬─name──────┐
    │  5 │ zhangquan │
    └────┴───────────┘
    ┌─id─┬─name──┐
    │  6 │ lihua │
    └────┴───────┘

     

    Data can also be inserted through the distributed table; which local node a given row lands on is decided by ClickHouse from the sharding key (the last Distributed engine argument, id here) and the shard weights, as the check after the insert below shows.
    insert into user_all(id, name) values (7, 'wangmingquan');
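
    To confirm which shard received the row, hostName() can be selected through the distributed table; it is evaluated on each remote server, so it reports the host that stores each returned row (a quick check, not part of the original article):

    SELECT hostName() AS host, id, name
    FROM user_all
    WHERE id = 7;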

     

    8. Add a column

    Run on any one node:
    alter table user_local ON CLUSTER perftest_3shards_1replicas add column name1 String;

    Run the following on each node to confirm the column has been added:
    show create table user_local;


    However, user_all does not pick up the new column automatically; it has to be added by hand:
    alter table user_all ON CLUSTER perftest_3shards_1replicas add column name1 String;

     

    Add a column that does not exist in the local table:
    alter table user_all ON CLUSTER perftest_3shards_1replicas add column name2 String;

     

    The statement succeeds, but querying then fails with an error:

    SELECT *
    FROM user_all
    
    Query id: 4c0c2717-2b30-4d38-805b-5d8aea6248a1
    
    
    0 rows in set. Elapsed: 0.004 sec. 
    
    Received exception from server (version 21.6.6):
    Code: 47. DB::Exception: Received from 192.168.1.118:9000. DB::Exception: There's no column 'user_local.name2' in table 'user_local': While processing user_local.name2.

     

     

    Drop the column:
    alter table user_all ON CLUSTER perftest_3shards_1replicas drop column name2;

     

  • Original article: https://www.cnblogs.com/hxlasky/p/15813628.html