    Redis Cluster in K3S


    References

    https://www.cnblogs.com/cheyunhua/p/15619317.html
    https://blog.csdn.net/cqnaqjy/article/details/126001999
    https://segmentfault.com/a/1190000039196137
    

    Background

    Over the weekend I tried Redis Cluster both in Docker and directly on the host
    (plain files), building a six-master / six-replica cluster across two servers
    and running some quick performance checks. The bare-metal deployment came out
    slightly ahead of Docker, but since Docker was started with --net=host, that
    setup cannot model the cost of network forwarding and the extra layers,
    so today I looked into Redis Cluster on K8S.
    My K8S knowledge is mostly theory and my hands-on experience is weak,
    so this took quite a while; here is a brief record of the whole process
    for future reference.
    
    Note that this only covers standing the cluster up; no application has
    connected to it yet, and I am not yet sure how to handle the pod IP changes
    that happen after a cluster restart.
    

    Step 1: Set up shared storage with NFS

    yum install nfs* -y
    yum install rpcbind* -y
    systemctl enable nfs && systemctl enable rpcbind
    systemctl restart nfs && systemctl restart rpcbind 
    
    # Create the shared directory. Avoid using a directory directly under /;
    # that seems to cause problems.
    mkdir -p /usr/local/k8s/redis/pv
    # Edit the exports file
    vim /etc/exports
    /usr/local/k8s/redis/pv *(rw,sync,no_root_squash)
    # This is fine for a test environment; for production, restrict access.
    # After saving, restart nfs for the export to take effect
    systemctl restart nfs
    exportfs    # the export should now be listed
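    
    As a quick sanity check (a minimal sketch; it assumes the NFS client tools are
    installed on the other machine and uses 10.110.139.191, the server address that
    appears later in the provisioner manifest), the export can be confirmed from
    another node before involving K8S at all:
    
    # List the exports published by the NFS server
    showmount -e 10.110.139.191
    # Optionally test-mount the export, write a file, then clean up
    mkdir -p /mnt/nfs-test
    mount -t nfs 10.110.139.191:/usr/local/k8s/redis/pv /mnt/nfs-test
    touch /mnt/nfs-test/write-test && ls -l /mnt/nfs-test
    umount /mnt/nfs-test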
    

    Create the StorageClass and NFS provisioner in K8S

    • Save the manifest below as redis-sc.yaml
    • Deploy it with kubectl apply -f redis-sc.yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      name: redis
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: redis-nfs-storage
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: redis/nfs
    reclaimPolicy: Retain
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nfs-client-provisioner
      labels:
        app: nfs-client-provisioner
      namespace: default
    spec:
      replicas: 1
      strategy:
        type: Recreate
      selector:
        matchLabels:
          app: nfs-client-provisioner
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          serviceAccountName: nfs-client-provisioner
          containers:
            - name: nfs-client-provisioner
              image: gmoney23/nfs-client-provisioner
              volumeMounts:
                - name: nfs-client-root
                  mountPath: /persistentvolumes
              env:
                - name: PROVISIONER_NAME
                  value: redis/nfs
                - name: NFS_SERVER
                  value: 10.110.139.191 ## your NFS server address
                - name: NFS_PATH
                  value: /usr/local/k8s/redis/pv  ## directory exported by the NFS server (must match /etc/exports)
          volumes:
            - name: nfs-client-root
              nfs:
                server: 10.110.139.191
                path: /usr/local/k8s/redis/pv   ## must match the directory exported in /etc/exports
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: nfs-client-provisioner-runner
    rules:
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["persistentvolumes"]
        verbs: ["get", "list", "watch", "create", "delete"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims"]
        verbs: ["get", "list", "watch", "update"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["storageclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["create", "update", "patch"]
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: run-nfs-client-provisioner
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        # replace with namespace where provisioner is deployed
        namespace: default
    roleRef:
      kind: ClusterRole
      name: nfs-client-provisioner-runner
      apiGroup: rbac.authorization.k8s.io
    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
    rules:
      - apiGroups: [""]
        resources: ["endpoints"]
        verbs: ["get", "list", "watch", "create", "update", "patch"]
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        # replace with namespace where provisioner is deployed
        namespace: default
    roleRef:
      kind: Role
      name: leader-locking-nfs-client-provisioner
      apiGroup: rbac.authorization.k8s.io
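    
    After applying redis-sc.yaml, it is worth confirming that the provisioner is
    running and that dynamic provisioning actually works before moving on. A
    minimal sketch (the PVC name test-claim is only for illustration):
    
    kubectl get storageclass                        # redis-nfs-storage should be the default
    kubectl get pods -l app=nfs-client-provisioner
    # Create a throwaway PVC against the new StorageClass
    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-claim
    spec:
      storageClassName: redis-nfs-storage
      accessModes: ["ReadWriteMany"]
      resources:
        requests:
          storage: 1Mi
    EOF
    kubectl get pvc test-claim                      # should become Bound
    kubectl delete pvc test-claim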
    

    Create the Redis configuration (ConfigMap)

    • Save the manifest below (for example as redis-configmap.yaml); it defines the redis-cluster ConfigMap that holds both redis.conf and the fix-ip.sh startup script mounted by the StatefulSet in the next section
    • Apply it with kubectl apply -f redis-configmap.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: redis-cluster
      namespace: default
    data:
      fix-ip.sh: |
        #!/bin/sh
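        # When a pod restarts it usually gets a new IP; before starting
        # redis-server, rewrite the "myself" line in nodes.conf with the
        # current POD_IP so the node can rejoin the cluster under its new address.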
        CLUSTER_CONFIG="/data/nodes.conf"
        if [ -f ${CLUSTER_CONFIG} ]; then
          if [ -z "${POD_IP}" ]; then
            echo "Unable to determine Pod IP address!"
            exit 1
          fi
          echo "Updating my IP to ${POD_IP} in ${CLUSTER_CONFIG}"
          sed -i.bak -e '/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/'${POD_IP}'/' ${CLUSTER_CONFIG}
        fi
        exec "$@"
      redis.conf: |
        cluster-enabled yes
        cluster-config-file /data/nodes.conf
        cluster-node-timeout 10000
        protected-mode no
        daemonize no
        pidfile /var/run/redis.pid
        port 6379
        tcp-backlog 511
        bind 0.0.0.0
        timeout 3600
        tcp-keepalive 1
        loglevel verbose
        logfile /data/redis.log
        databases 16
        save 900 1
        save 300 10
        save 60 10000
        stop-writes-on-bgsave-error yes
        rdbcompression yes
        rdbchecksum yes
        dbfilename dump.rdb
        dir /data
        requirepass Test20131127
        appendonly yes
        appendfilename "appendonly.aof"
        appendfsync everysec
        no-appendfsync-on-rewrite no
        auto-aof-rewrite-percentage 100
        auto-aof-rewrite-min-size 64mb
        lua-time-limit 20000
        slowlog-log-slower-than 10000
        slowlog-max-len 128
        #rename-command FLUSHALL  ""
        latency-monitor-threshold 0
        notify-keyspace-events ""
        hash-max-ziplist-entries 512
        hash-max-ziplist-value 64
        list-max-ziplist-entries 512
        list-max-ziplist-value 64
        set-max-intset-entries 512
        zset-max-ziplist-entries 128
        zset-max-ziplist-value 64
        hll-sparse-max-bytes 3000
        activerehashing yes
        client-output-buffer-limit normal 0 0 0
        client-output-buffer-limit slave 256mb 64mb 60
        client-output-buffer-limit pubsub 32mb 8mb 60
        hz 10
        aof-rewrite-incremental-fsync yes
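    
    Assuming the manifest above is saved as redis-configmap.yaml (the file name is
    just the convention used here), it can be applied and inspected as follows:
    
    kubectl apply -f redis-configmap.yaml
    # Both keys, fix-ip.sh and redis.conf, should be listed
    kubectl describe configmap redis-cluster
    # Print the rendered redis.conf (the dot in the key name must be escaped)
    kubectl get configmap redis-cluster -o jsonpath='{.data.redis\.conf}'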
    

    Create the Redis Cluster Service and StatefulSet

    • Save the manifest below as redis-cluster.yaml
    • Apply it with kubectl apply -f redis-cluster.yaml
    apiVersion: v1
    kind: Service
    metadata:
      namespace: default
      name: redis-cluster
    spec:
      clusterIP: None
      ports:
      - port: 6379
        targetPort: 6379
        name: client
      - port: 16379
        targetPort: 16379
        name: gossip
      selector:
        app: redis-cluster
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      namespace: default
      name: redis-cluster
    spec:
      serviceName: redis-cluster
      replicas: 12
      selector:
        matchLabels:
          app: redis-cluster
      template:
        metadata:
          labels:
            app: redis-cluster
        spec:
          containers:
          - name: redis
            image: harbor.gscloud.online/gscloud/redis:7.0.5
            ports:
            - containerPort: 6379
              name: client
            - containerPort: 16379
              name: gossip
            command: ["/etc/redis/fix-ip.sh", "redis-server", "/etc/redis/redis.conf"]
            env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            volumeMounts:
            - name: conf
              mountPath: /etc/redis/
              readOnly: false
            - name: data
              mountPath: /data
              readOnly: false
          volumes:
          - name: conf
            configMap:
              name: redis-cluster
              defaultMode: 0755
      volumeClaimTemplates:
      - metadata:
          name: data
          annotations:
            volume.beta.kubernetes.io/storage-class: "redis-nfs-storage"
        spec:
          accessModes:
            - ReadWriteMany
          resources:
            requests:
              storage: 10Gi
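    
    Once redis-cluster.yaml is applied, the StatefulSet creates the pods one at a
    time. A sketch of the checks I would run (assuming the default namespace, as
    in the manifests above):
    
    kubectl apply -f redis-cluster.yaml
    # Pods redis-cluster-0 .. redis-cluster-11 should all reach Running
    kubectl get pods -l app=redis-cluster -o wide
    # Each replica should have a Bound PVC named data-redis-cluster-<n>
    kubectl get pvc
    # The headless Service should list one endpoint per pod
    kubectl get endpoints redis-cluster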
    

    Initialize the Redis cluster

    • Step 1: collect the pod IPs from K8S
    kubectl get pods -l app=redis-cluster  -o jsonpath='{range.items[*]}{.status.podIP}:6379 '
    # Note: the output ends with an extra trailing ':6379' that needs to be removed
    
    # Then exec into one of the pods
    kubectl exec -it redis-cluster-0 -- bash
    # and run:
    redis-cli -h 127.0.0.1 -p 6379 --cluster create \
    10.42.236.144:6379 10.42.73.71:6379 10.42.236.149:6379 \
    10.42.73.76:6379 10.42.236.145:6379 10.42.73.72:6379 \
    10.42.236.146:6379 10.42.73.73:6379 10.42.236.147:6379  \
    10.42.73.74:6379 10.42.236.148:6379 10.42.73.75:6379 \
    --cluster-replicas 1
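    
    Instead of pasting the twelve addresses by hand, the node list can be built
    from the same jsonpath query (a sketch; redis.conf above sets requirepass
    Test20131127, so add -a Test20131127 to redis-cli if the password is already
    active when the cluster is created):
    
    # Collect "<podIP>:6379" for every redis-cluster pod
    NODES=$(kubectl get pods -l app=redis-cluster \
      -o jsonpath='{range .items[*]}{.status.podIP}:6379 {end}')
    # Create the cluster from inside the first pod
    kubectl exec -it redis-cluster-0 -- \
      redis-cli --cluster create ${NODES} --cluster-replicas 1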
    

    Check the Redis cluster

    kubectl exec -it redis-cluster-1 -- bash
    # Run the following inside the pod
    # Note: I had not set a password at this point; with requirepass enabled, add -a <password>
    redis-cli cluster nodes
    redis-cli cluster info
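    
    With the password from redis.conf in place, the same checks can also be run
    without an interactive shell (a sketch):
    
    kubectl exec -it redis-cluster-1 -- redis-cli -a Test20131127 cluster info
    kubectl exec -it redis-cluster-1 -- redis-cli -a Test20131127 cluster nodes
    # cluster info should report cluster_state:ok and cluster_slots_assigned:16384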
    
    The benchmark commands for cluster mode are:
    redis-benchmark -h 127.0.0.1 --cluster >K8S1.txt
    redis-benchmark -h 127.0.0.1 -c 100 --cluster >K8S2.txt
    redis-benchmark -h 127.0.0.1 -c 100 -d 1024 --cluster >K8S3.txt
    To extract the relevant lines from a result file:
    grep -E "requests per second|=====" K8S1.txt
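    
    To pull the throughput lines out of all three result files at once (a sketch,
    assuming the file names used above):
    
    for f in K8S1.txt K8S2.txt K8S3.txt; do
      echo "== ${f}"
      grep -E "requests per second|=====" "${f}"
    done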
    

    Redis cluster benchmark results

    Test cases 1-3 correspond to the three redis-benchmark invocations above
    (default, -c 100, and -c 100 -d 1024); values are requests per second.
    
    Test case   Deployment   PING     GET      SET      INCR     MSET
    Case 1      File         199600   199203   199600   199600   199203
    Case 1      Docker       200000   199203   199203   169491   132978
    Case 1      K8S          199203    99700   199203   132978    99601
    Case 2      File         198807   199203   199600   199600   199203
    Case 2      Docker       198807   199600   400000   198807   198807
    Case 2      K8S          132978   132626   132626   132626   133155
    Case 3      File         198807    57142   199203   199203    79428
    Case 3      Docker       198807    99900   199600    65963    79554
    Case 3      K8S           99700   132802    99800   131752    14214