• Installing TiDB with the Minimal Deployment Topology on Red Hat 7.3 (with Screenshots)


    In a previous post I covered a single-node TiDB installation, where TiDB, PD, and TiKV were all placed on the same server due to limited resources. See: Installing Single-Node TiDB on Red Hat 7.3 (with Screenshots)

    This post covers installing the minimal deployment topology for development and test environments, following the official documentation: https://docs.pingcap.com/zh/tidb/stable/hardware-and-software-requirements

    Component  CPU      Memory  Local Storage           Network      Instances (minimum)
    ---------  ---      ------  -------------           -------      -------------------
    TiDB       8 core+  16 GB+  No special requirement  Gigabit NIC  1 (may share a host with PD)
    PD         4 core+  8 GB+   SAS, 200 GB+            Gigabit NIC  1 (may share a host with TiDB)
    TiKV       8 core+  32 GB+  SSD, 200 GB+            Gigabit NIC  3

    As the table shows, a development or test environment needs only four servers: TiDB and PD deployed together on one server, plus three TiKV servers.

    The only difference between the minimal-topology installation and the single-node installation is the topology.yaml configuration; every other step is identical. For the detailed procedure, see: Installing Single-Node TiDB on Red Hat 7.3 (with Screenshots)

    The topology.yaml for the minimal topology is configured as follows:

    # # Global variables are applied to all deployments and used as the default value of
    # # the deployments if a specific deployment value is missing.
    global:
      user: "tidb"
      ssh_port: 22
      deploy_dir: "/tidb-deploy"
      data_dir: "/tidb-data"
    
    pd_servers:
      - host: 172.16.43.110
    
    tidb_servers:
      - host: 172.16.43.110
    
    tikv_servers:
      - host: 172.16.43.107
      - host: 172.16.43.108
      - host: 172.16.43.109
    
    monitoring_servers:
      - host: 172.16.43.110
    
    grafana_servers:
      - host: 172.16.43.110
    
    alertmanager_servers:
      - host: 172.16.43.110
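
    Before deploying, TiUP can also verify that the target machines meet TiDB's requirements. A sketch, assuming the same topology file and root SSH access as in the deployment step below:

```shell
# Pre-deployment environment check against the topology file
# (prompts for the root SSH password, like the deploy command;
# some failed checks can be auto-repaired by adding --apply).
tiup cluster check ./topology.yaml --user root -p
```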

    Run the deployment command:

    [root@test4 develop]# tiup cluster deploy tidb-test v4.0.0 ./topology.yaml --user root -p
    Starting component `cluster`: /root/.tiup/components/cluster/v1.4.0/tiup-cluster deploy tidb-test v4.0.0 ./topology.yaml --user root -p
    Please confirm your topology:
    Cluster type:    tidb
    Cluster name:    tidb-test
    Cluster version: v4.0.0
    Role          Host           Ports        OS/Arch       Directories
    ----          ----           -----        -------       -----------
    pd            172.16.43.110  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
    tikv          172.16.43.107  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
    tikv          172.16.43.108  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
    tikv          172.16.43.109  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
    tidb          172.16.43.110  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
    prometheus    172.16.43.110  9090         linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
    grafana       172.16.43.110  3000         linux/x86_64  /tidb-deploy/grafana-3000
    alertmanager  172.16.43.110  9093/9094    linux/x86_64  /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093
    Attention:
        1. If the topology is not what you expected, check your yaml file.
        2. Please confirm there is no port/directory conflicts in same host.
    Do you want to continue? [y/N]: (default=N) y
    Input SSH password: 
    + Generate SSH keys ... Done
    + Download TiDB components
      - Download pd:v4.0.0 (linux/amd64) ... Done
      - Download tikv:v4.0.0 (linux/amd64) ... Done
      - Download tidb:v4.0.0 (linux/amd64) ... Done
      - Download prometheus:v4.0.0 (linux/amd64) ... Done
      - Download grafana:v4.0.0 (linux/amd64) ... Done
      - Download alertmanager: (linux/amd64) ... Done
      - Download node_exporter: (linux/amd64) ... Done
      - Download blackbox_exporter: (linux/amd64) ... Done
    + Initialize target host environments
      - Prepare 172.16.43.110:22 ... Done
      - Prepare 172.16.43.107:22 ... Done
      - Prepare 172.16.43.108:22 ... Done
      - Prepare 172.16.43.109:22 ... Done
    + Copy files
      - Copy pd -> 172.16.43.110 ... Done
      - Copy tikv -> 172.16.43.107 ... Done
      - Copy tikv -> 172.16.43.108 ... Done
      - Copy tikv -> 172.16.43.109 ... Done
      - Copy tidb -> 172.16.43.110 ... Done
      - Copy prometheus -> 172.16.43.110 ... Done
      - Copy grafana -> 172.16.43.110 ... Done
      - Copy alertmanager -> 172.16.43.110 ... Done
      - Copy node_exporter -> 172.16.43.110 ... Done
      - Copy node_exporter -> 172.16.43.107 ... Done
      - Copy node_exporter -> 172.16.43.108 ... Done
      - Copy node_exporter -> 172.16.43.109 ... Done
      - Copy blackbox_exporter -> 172.16.43.107 ... Done
      - Copy blackbox_exporter -> 172.16.43.108 ... Done
      - Copy blackbox_exporter -> 172.16.43.109 ... Done
      - Copy blackbox_exporter -> 172.16.43.110 ... Done
    + Check status
    Enabling component pd
            Enabling instance pd 172.16.43.110:2379
            Enable pd 172.16.43.110:2379 success
    Enabling component tikv
            Enabling instance tikv 172.16.43.109:20160
            Enabling instance tikv 172.16.43.107:20160
            Enabling instance tikv 172.16.43.108:20160
            Enable tikv 172.16.43.107:20160 success
            Enable tikv 172.16.43.109:20160 success
            Enable tikv 172.16.43.108:20160 success
    Enabling component node_exporter
    Enabling component blackbox_exporter
    Enabling component node_exporter
    Enabling component blackbox_exporter
    Enabling component node_exporter
    Enabling component blackbox_exporter
    Enabling component tidb
            Enabling instance tidb 172.16.43.110:4000
            Enable tidb 172.16.43.110:4000 success
    Enabling component prometheus
            Enabling instance prometheus 172.16.43.110:9090
            Enable prometheus 172.16.43.110:9090 success
    Enabling component grafana
            Enabling instance grafana 172.16.43.110:3000
            Enable grafana 172.16.43.110:3000 success
    Enabling component alertmanager
            Enabling instance alertmanager 172.16.43.110:9093
            Enable alertmanager 172.16.43.110:9093 success
    Enabling component node_exporter
    Enabling component blackbox_exporter
    Cluster `tidb-test` deployed successfully, you can start it with command: `tiup cluster start tidb-test`

    Check the status of the deployed TiDB cluster:

    [root@test4 develop]# tiup cluster display tidb-test
    Starting component `cluster`: /root/.tiup/components/cluster/v1.4.0/tiup-cluster display tidb-test
    Cluster type:       tidb
    Cluster name:       tidb-test
    Cluster version:    v4.0.0
    SSH type:           builtin
    ID                   Role          Host           Ports        OS/Arch       Status    Data Dir                      Deploy Dir
    --                   ----          ----           -----        -------       ------    --------                      ----------
    172.16.43.110:9093   alertmanager  172.16.43.110  9093/9094    linux/x86_64  inactive  /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093
    172.16.43.110:3000   grafana       172.16.43.110  3000         linux/x86_64  inactive  -                             /tidb-deploy/grafana-3000
    172.16.43.110:2379   pd            172.16.43.110  2379/2380    linux/x86_64  Down      /tidb-data/pd-2379            /tidb-deploy/pd-2379
    172.16.43.110:9090   prometheus    172.16.43.110  9090         linux/x86_64  inactive  /tidb-data/prometheus-9090    /tidb-deploy/prometheus-9090
    172.16.43.110:4000   tidb          172.16.43.110  4000/10080   linux/x86_64  Down      -                             /tidb-deploy/tidb-4000
    172.16.43.107:20160  tikv          172.16.43.107  20160/20180  linux/x86_64  N/A       /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
    172.16.43.108:20160  tikv          172.16.43.108  20160/20180  linux/x86_64  N/A       /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
    172.16.43.109:20160  tikv          172.16.43.109  20160/20180  linux/x86_64  N/A       /tidb-data/tikv-20160         /tidb-deploy/tikv-20160

    Start the cluster:

    [root@test4 develop]# tiup cluster start tidb-test
    Starting component `cluster`: /root/.tiup/components/cluster/v1.4.0/tiup-cluster start tidb-test
    Starting cluster tidb-test...
    + [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub
    + [Parallel] - UserSSH: user=tidb, host=172.16.43.110
    + [Parallel] - UserSSH: user=tidb, host=172.16.43.110
    + [Parallel] - UserSSH: user=tidb, host=172.16.43.107
    + [Parallel] - UserSSH: user=tidb, host=172.16.43.108
    + [Parallel] - UserSSH: user=tidb, host=172.16.43.109
    + [Parallel] - UserSSH: user=tidb, host=172.16.43.110
    + [Parallel] - UserSSH: user=tidb, host=172.16.43.110
    + [Parallel] - UserSSH: user=tidb, host=172.16.43.110
    + [ Serial ] - StartCluster
    Starting component pd
            Starting instance pd 172.16.43.110:2379
            Start pd 172.16.43.110:2379 success
    Starting component node_exporter
            Starting instance 172.16.43.110
            Start 172.16.43.110 success
    Starting component blackbox_exporter
            Starting instance 172.16.43.110
            Start 172.16.43.110 success
    Starting component tikv
            Starting instance tikv 172.16.43.109:20160
            Starting instance tikv 172.16.43.107:20160
            Starting instance tikv 172.16.43.108:20160
            Start tikv 172.16.43.108:20160 success
            Start tikv 172.16.43.107:20160 success
            Start tikv 172.16.43.109:20160 success
    Starting component node_exporter
            Starting instance 172.16.43.107
            Start 172.16.43.107 success
    Starting component blackbox_exporter
            Starting instance 172.16.43.107
            Start 172.16.43.107 success
    Starting component node_exporter
            Starting instance 172.16.43.108
            Start 172.16.43.108 success
    Starting component blackbox_exporter
            Starting instance 172.16.43.108
            Start 172.16.43.108 success
    Starting component node_exporter
            Starting instance 172.16.43.109
            Start 172.16.43.109 success
    Starting component blackbox_exporter
            Starting instance 172.16.43.109
            Start 172.16.43.109 success
    Starting component tidb
            Starting instance tidb 172.16.43.110:4000
            Start tidb 172.16.43.110:4000 success
    Starting component prometheus
            Starting instance prometheus 172.16.43.110:9090
            Start prometheus 172.16.43.110:9090 success
    Starting component grafana
            Starting instance grafana 172.16.43.110:3000
            Start grafana 172.16.43.110:3000 success
    Starting component alertmanager
            Starting instance alertmanager 172.16.43.110:9093
            Start alertmanager 172.16.43.110:9093 success
    + [ Serial ] - UpdateTopology: cluster=tidb-test
    Started cluster `tidb-test` successfully

    Check the TiDB cluster status again:

    [root@test4 develop]# tiup cluster display tidb-test
    Starting component `cluster`: /root/.tiup/components/cluster/v1.4.0/tiup-cluster display tidb-test
    Cluster type:       tidb
    Cluster name:       tidb-test
    Cluster version:    v4.0.0
    SSH type:           builtin
    Dashboard URL:      http://172.16.43.110:2379/dashboard
    ID                   Role          Host           Ports        OS/Arch       Status   Data Dir                      Deploy Dir
    --                   ----          ----           -----        -------       ------   --------                      ----------
    172.16.43.110:9093   alertmanager  172.16.43.110  9093/9094    linux/x86_64  Up       /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093
    172.16.43.110:3000   grafana       172.16.43.110  3000         linux/x86_64  Up       -                             /tidb-deploy/grafana-3000
    172.16.43.110:2379   pd            172.16.43.110  2379/2380    linux/x86_64  Up|L|UI  /tidb-data/pd-2379            /tidb-deploy/pd-2379
    172.16.43.110:9090   prometheus    172.16.43.110  9090         linux/x86_64  Up       /tidb-data/prometheus-9090    /tidb-deploy/prometheus-9090
    172.16.43.110:4000   tidb          172.16.43.110  4000/10080   linux/x86_64  Up       -                             /tidb-deploy/tidb-4000
    172.16.43.107:20160  tikv          172.16.43.107  20160/20180  linux/x86_64  Up       /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
    172.16.43.108:20160  tikv          172.16.43.108  20160/20180  linux/x86_64  Up       /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
    172.16.43.109:20160  tikv          172.16.43.109  20160/20180  linux/x86_64  Up       /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
    Total nodes: 8
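
    With every node reporting Up, a quick smoke test is to connect through any MySQL client on port 4000; on a freshly deployed TiDB v4.0 cluster the root user has an empty password. A sketch:

```shell
# Connect to the TiDB server and print its version
# (fresh v4.0 clusters ship with a passwordless root user).
mysql -h 172.16.43.110 -P 4000 -u root -e "SELECT tidb_version();"
```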

     For the subsequent verification steps, see: Installing Single-Node TiDB on Red Hat 7.3 (with Screenshots)
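
    The monitoring endpoints listed in the display output can also be checked directly. A sketch, assuming the default credentials have not been changed:

```shell
# Reachability checks for the monitoring stack
curl -sI http://172.16.43.110:3000          # Grafana (first login: admin/admin)
curl -s  http://172.16.43.110:9090/-/ready  # Prometheus readiness endpoint
```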

  • Original article: https://www.cnblogs.com/shileibrave/p/14610717.html