• Running an etcd cluster on localhost


    Purpose

    • Run a cluster on localhost while investigating etcd
    • Use a static cluster (so we have no external dependencies for bootstrapping)

    Background information

    Bootstrap

    • Will use static bootstrapping
    • Client connection port defaults to 2379 (just supporting a single port per node); we will decrement the port for each subsequent node so we do not get a port conflict
    • Peer connection (Raft consensus) port defaults to 2380 (just supporting a single port per node); we will increment the port for each subsequent node so we do not get a port conflict
    • Will use the /tmp/etcdinv directory for the cluster - if you want the cluster to stick around, use a different directory
      • If all nodes are stopped and then restarted, the cluster will try to restart with this state, provided the OS has not already purged the directory
    • Will write node logs to a file and run process in the background
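    The port scheme above can be sketched as a pair of small shell helpers. These function names are my own, not part of etcd - just a way to make the decrement/increment rule explicit:

```shell
# Hypothetical helpers (not part of etcd) illustrating the port scheme above:
# client ports count down from 2379, peer ports count up from 2380
client_port() { echo $((2379 - ($1 - 1))); }   # node 1 -> 2379, node 2 -> 2378, ...
peer_port()   { echo $((2380 + ($1 - 1))); }   # node 1 -> 2380, node 2 -> 2381, ...
```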
    # etcd bin directory
    etcd_bin_dir=/home/pmcgrath/go/src/github.com/coreos/etcd/bin/

    # Ensure we have a root directory for the cluster - Note we are using /tmp here, if you want the cluster to stick around use a different directory
    mkdir -p /tmp/etcdinv

    # Run node 1
    $etcd_bin_dir/etcd -name node1 -data-dir /tmp/etcdinv/node1 \
        -advertise-client-urls http://localhost:2379 -listen-client-urls http://localhost:2379 \
        -listen-peer-urls http://localhost:2380 -initial-advertise-peer-urls http://localhost:2380 \
        -initial-cluster-token MyEtcdCluster \
        -initial-cluster node1=http://localhost:2380,node2=http://localhost:2381,node3=http://localhost:2382 \
        -initial-cluster-state new &> /tmp/etcdinv/node1.log &

    # Run node 2
    $etcd_bin_dir/etcd -name node2 -data-dir /tmp/etcdinv/node2 \
        -advertise-client-urls http://localhost:2378 -listen-client-urls http://localhost:2378 \
        -listen-peer-urls http://localhost:2381 -initial-advertise-peer-urls http://localhost:2381 \
        -initial-cluster-token MyEtcdCluster \
        -initial-cluster node1=http://localhost:2380,node2=http://localhost:2381,node3=http://localhost:2382 \
        -initial-cluster-state new &> /tmp/etcdinv/node2.log &

    # Run node 3
    $etcd_bin_dir/etcd -name node3 -data-dir /tmp/etcdinv/node3 \
        -advertise-client-urls http://localhost:2377 -listen-client-urls http://localhost:2377 \
        -listen-peer-urls http://localhost:2382 -initial-advertise-peer-urls http://localhost:2382 \
        -initial-cluster-token MyEtcdCluster \
        -initial-cluster node1=http://localhost:2380,node2=http://localhost:2381,node3=http://localhost:2382 \
        -initial-cluster-state new &> /tmp/etcdinv/node3.log &

    # List the cluster members
    ETCDCTL_PEERS=http://127.0.0.1:2379 $etcd_bin_dir/etcdctl member list
    • You can see the cluster node pids using
      • pidof etcd
      • ps aux | grep etcd
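    Since every node passes the same -initial-cluster value, it can be generated rather than typed three times. A minimal sketch, assuming the peer-port scheme above (the initial_cluster helper is hypothetical, not part of etcd):

```shell
# Hypothetical helper: build the -initial-cluster value for n nodes,
# assuming peer ports count up from 2380 as above
initial_cluster() {
  local n=$1 out="" i
  for ((i = 1; i <= n; i++)); do
    out="${out:+$out,}node$i=http://localhost:$((2380 + i - 1))"
  done
  echo "$out"
}

initial_cluster 3
# node1=http://localhost:2380,node2=http://localhost:2381,node3=http://localhost:2382
```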

    Interacting with the cluster using etcdctl

    • Will use each node's client port as configured above (2379 for node1, 2378 for node2, 2377 for node3)
    • etcdctl defaults to port 4001 at this time, so we set ETCDCTL_PEERS explicitly
    • I could have added an extra client URL for 4001 when bringing up the nodes, but I'm guessing 4001 will be removed at some stage
    # etcd bin directory
    etcd_bin_dir=/home/pmcgrath/go/src/github.com/coreos/etcd/bin/
    
    # Using node1
    # Write a key
    ETCDCTL_PEERS=http://127.0.0.1:2379 $etcd_bin_dir/etcdctl set /dir1/key1 value1
    # Should echo value1
    
    # Read key
    ETCDCTL_PEERS=http://127.0.0.1:2379 $etcd_bin_dir/etcdctl get /dir1/key1
    # Should echo value1
    
    # Using node3
    # Read key
    ETCDCTL_PEERS=http://127.0.0.1:2377 $etcd_bin_dir/etcdctl get /dir1/key1
    # Should echo value1
    

    Kill one of the nodes

    # etcd bin directory
    etcd_bin_dir=/home/pmcgrath/go/src/github.com/coreos/etcd/bin/
    
    # Kill node2
    pidof etcd
    # Should show 3 pids
    kill $(pgrep -f 'etcd -name node2')
    pidof etcd
    # Should show 2 pids
    
    # Read key using node1
    ETCDCTL_PEERS=http://127.0.0.1:2379 $etcd_bin_dir/etcdctl get /dir1/key1
    # Should echo value1
    
    # Read key using node2
    ETCDCTL_PEERS=http://127.0.0.1:2378 $etcd_bin_dir/etcdctl get /dir1/key1
    # Should fail, indicating the cluster endpoint is not available
    
    # Read key using node3
    ETCDCTL_PEERS=http://127.0.0.1:2377 $etcd_bin_dir/etcdctl get /dir1/key1
    # Should echo value1
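    When scripting this kind of failure test, it helps to probe whether a node's client port is still accepting connections before issuing a read. A minimal sketch using bash's /dev/tcp redirection (port_open is a hypothetical helper, not part of etcd):

```shell
# Hypothetical helper: succeed (exit 0) if host:port accepts a TCP
# connection, fail otherwise; uses bash's /dev/tcp in a subshell so
# the file descriptor is closed automatically
port_open() { (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; }

# e.g. after killing node2, its client port should no longer answer:
# port_open 127.0.0.1 2378 || echo "node2 is down"
```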
    

    Using a proxy

    # etcd bin directory
    etcd_bin_dir=/home/pmcgrath/go/src/github.com/coreos/etcd/bin/
    
    # Run a read/write proxy - on 8080
    $etcd_bin_dir/etcd -proxy on -name proxy \
        -listen-client-urls http://localhost:8080 \
        -initial-cluster node1=http://localhost:2380,node2=http://localhost:2381,node3=http://localhost:2382 &> /tmp/etcdinv/proxy.log &
    
    # Read existing key
    ETCDCTL_PEERS=http://127.0.0.1:8080 $etcd_bin_dir/etcdctl get /dir1/key1
    # Should echo value1
    
    # Write a key
    ETCDCTL_PEERS=http://127.0.0.1:8080 $etcd_bin_dir/etcdctl set /dir1/key2 value2
    # Should echo value2
    
    # Read the new key
    ETCDCTL_PEERS=http://127.0.0.1:8080 $etcd_bin_dir/etcdctl get /dir1/key2
    # Should echo value2




  • Original source: https://www.cnblogs.com/toSeeMyDream/p/5465834.html