• Deploying an HBase Cluster on Kubernetes + Flannel


    Note (2015-12-14): the requirement of adding new nodes without changing the parameters of already-running nodes has since been met; this will be covered in follow-up posts.

    Note: the current solution does not support adding new nodes (master or regionserver nodes) without changing the parameters of already-running nodes; see Part 6 for the discussion.

    Part 1: Background

    First, a quick look at what makes up HBase:

    Master: mainly manages the RegionServer cluster, handling load balancing and resource allocation. It can itself run as a cluster, but only one master is active at any given time; when the active master dies, ZooKeeper fails over to one of the standby masters.

    RegionServer: handles the actual reads and writes of data.

    ZooKeeper: maintains cluster metadata and monitors cluster state to guard against single points of failure. HBase can run with its bundled ZooKeeper or with a standalone cluster; either way ZooKeeper is a prerequisite for HBase.

    HDFS: data written to HBase is ultimately persisted to HDFS, which is likewise a prerequisite for running HBase.

    Part 2: Deploying skyDNS

    skyDNS is not strictly required, but with it you can bind domain names to k8s services, so HBase parameters can use domain names instead of service IP addresses. skyDNS in a k8s environment consists of three parts: etcd, which stores the IP-to-domain mappings; skyDNS itself, which performs the resolution; and kube2sky, the bridge between k8s and skyDNS. A k8s domain name has the form service_name.namespace.k8s_cluster_domain. The k8s documentation gives a brief guide to deploying skyDNS, but it relies on images from Google's registry, which are not reachable from inside China, so DockerHub substitutes can be used instead, e.g.:

    skyDNS: docker pull shenshouer/skydns:2015-09-22

    kube2sky: docker pull shenshouer/kube2sky:1.11

    ETCD: any 2.x version of etcd works; to stay consistent with the above, use docker pull shenshouer/etcd:2.0.9

    After pulling the images, tag them and push them to your private registry.
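
    For example, a minimal sketch assuming the private registry and image names used later in this post (10.11.150.76:5000/openxxs/...):

    docker pull shenshouer/skydns:2015-09-22
    docker tag shenshouer/skydns:2015-09-22 10.11.150.76:5000/openxxs/skydns:2015-09-22
    docker push 10.11.150.76:5000/openxxs/skydns:2015-09-22

    docker pull shenshouer/kube2sky:1.11
    docker tag shenshouer/kube2sky:1.11 10.11.150.76:5000/openxxs/kube2sky:k8s-dns
    docker push 10.11.150.76:5000/openxxs/kube2sky:k8s-dns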

    Below we create one service and one pod to deploy skyDNS. The skyDNS service address is set to 172.16.40.1 (53/UDP, 53/TCP); note that this IP must fall within the service subnet configured when kube-apiserver was started. The k8s cluster domain is set to domeos.sohu; it is best to choose a suffix made of two parts, otherwise kube2sky may run into problems.
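
    For reference, the service subnet is the range passed to kube-apiserver at startup; a sketch of the relevant flag (the CIDR below is an assumption, use whatever your apiserver actually runs with, and make sure 172.16.40.1 falls inside it):

    # sketch: the skyDNS service IP must lie inside --service-cluster-ip-range
    ./kube-apiserver --service-cluster-ip-range=172.16.0.0/16 <other flags>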

    First, create the skydns.yaml file:

    apiVersion: v1
    kind: Service
    metadata:
      name: kube-dns
      labels:
        app: kube-dns
        version: v8
    spec:
      selector:
        app: kube-dns
        version: v8
      type: ClusterIP
      clusterIP: 172.16.40.1
      ports:
        - name: dns
          port: 53
          protocol: UDP
        - name: dns-tcp
          port: 53
          protocol: TCP
    ---
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: kube-dns-v8
      labels:
        app: kube-dns
        version: v8
    spec:
      replicas: 1
      selector:
        app: kube-dns
        version: v8
      template:
        metadata:
          labels:
            app: kube-dns
            version: v8
        spec:
          containers:
            - name: etcd
              image: 10.11.150.76:5000/openxxs/etcd:2.0.3
              command:
                - "etcd"
              args:
                - "--data-dir=/var/etcd/data"
                - "--listen-client-urls=http://127.0.0.1:2379,http://127.0.0.1:4001"
                - "--advertise-client-urls=http://127.0.0.1:2379,http://127.0.0.1:4001"
                - "--initial-cluster-token=skydns-etcd"
              volumeMounts:
                - name: etcd-storage
                  mountPath: /var/etcd/data
            - name: kube2sky
              image: 10.11.150.76:5000/openxxs/kube2sky:k8s-dns
              args:
                - "--domain=domeos.sohu"
                - "--kube_master_url=http://10.16.42.200:8080"
            - name: skydns
              image: 10.11.150.76:5000/openxxs/skydns:2015-09-22
              args:
                - "--machines=http://localhost:4001"
                - "--addr=0.0.0.0:53"
                - "--domain=domeos.sohu"
              ports:
                - containerPort: 53
                  name: dns
                  protocol: UDP
                - containerPort: 53
                  name: dns-tcp
                  protocol: TCP
          volumes:
              - name: etcd-storage
                emptyDir: {}
          dnsPolicy: Default

    The --kube_master_url argument of kube2sky points at the kube-apiserver address; the --domain argument of kube2sky must match the --domain argument of skydns.

    Then create the service and pod with kubectl create -f skydns.yaml:

    $kubectl create -f skydns.yaml 
    service "kube-dns" created
    replicationcontroller "kube-dns-v8" created
    $kubectl get pods
    NAME                    READY     STATUS           RESTARTS   AGE
    kube-dns-v8-61aie       3/3       Running          0          9s
    $kubectl get service
    NAME                    CLUSTER_IP       EXTERNAL_IP    PORT(S)              SELECTOR                     AGE
    kube-dns                172.16.40.1      <none>         53/UDP,53/TCP        app=kube-dns,version=v8      6m

    Finally, restart kubelet with the DNS flags --cluster_dns and --cluster_domain; their values must match what was written into the yaml file above, e.g.:

    ./kubelet --logtostderr=true --v=0 --api_servers=http://bx-42-200:8080 --address=0.0.0.0 --hostname_override=bx-42-198 --allow_privileged=false --pod-infra-container-image=10.11.150.76:5000/kubernetes/pause:latest --cluster_dns=172.16.40.1 --cluster_domain=domeos.sohu &

    Note: only pods created after kubelet has been restarted with the DNS flags will use skyDNS.
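
    To confirm that a pod picked up the setting, check /etc/resolv.conf inside a pod created after the restart (a sketch; the container ID is hypothetical). It should point at the cluster DNS IP:

    docker exec -it <container_id> cat /etc/resolv.conf
    # expected to contain lines similar to:
    #   nameserver 172.16.40.1
    #   search default.domeos.sohu domeos.sohu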

    At this point, entering the etcd container shows that the domain records for the k8s services have already been written into etcd:

    $ docker exec -it 13e243510e3e sh
    / # etcdctl ls --recursive /
    /skydns
    /skydns/sohu
    /skydns/sohu/domeos
    /skydns/sohu/domeos/default
    /skydns/sohu/domeos/default/kube-dns
    /skydns/sohu/domeos/default/kubernetes
    /skydns/sohu/domeos/default/zookeeper-1
    /skydns/sohu/domeos/default/zookeeper-2
    /skydns/sohu/domeos/default/zookeeper-3
    /skydns/sohu/domeos/svc
    /skydns/sohu/domeos/svc/default
    /skydns/sohu/domeos/svc/default/zookeeper-2
    /skydns/sohu/domeos/svc/default/zookeeper-2/b8757496
    /skydns/sohu/domeos/svc/default/zookeeper-3
    /skydns/sohu/domeos/svc/default/zookeeper-3/8687b21f
    /skydns/sohu/domeos/svc/default/kube-dns
    /skydns/sohu/domeos/svc/default/kube-dns/a9f11e6f
    /skydns/sohu/domeos/svc/default/kubernetes
    /skydns/sohu/domeos/svc/default/kubernetes/cf07aead
    /skydns/sohu/domeos/svc/default/zookeeper-1
    /skydns/sohu/domeos/svc/default/zookeeper-1/75512011
    / # etcdctl get /skydns/sohu/domeos/default/zookeeper-1
    {"host":"172.16.11.1","priority":10,"weight":10,"ttl":30,"targetstrip":0}

    Take the record /skydns/sohu/domeos/default/zookeeper-1 as an example: it corresponds to the domain name zookeeper-1.default.domeos.sohu with IP 172.16.11.1, i.e. service name zookeeper-1, k8s namespace default, and cluster domain domeos.sohu. In any pod created after the kubelet restart, the zookeeper-1 service can be reached as zookeeper-1.default.domeos.sohu, e.g.:

    [@bx_42_199 ~]# docker exec -it 0662660e8708 /bin/bash
    [root@test-3-2h0fx /]# curl zookeeper-1.default.domeos.sohu:2181
    curl: (52) Empty reply from server
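
    Resolution can also be checked against the DNS service directly (a sketch; assumes nslookup is available in the container, e.g. via bind-utils):

    nslookup zookeeper-1.default.domeos.sohu 172.16.40.1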

    Part 3: Deploying the HDFS Cluster

    HDFS consists of a namenode and datanodes. First pull suitable images from DockerHub, then push them to your own private registry:

    # pull the remote images
    docker pull bioshrek/hadoop-hdfs-datanode:cdh5
    docker pull bioshrek/hadoop-hdfs-namenode:cdh5
    # tag them
    # docker tag <image ID> <your private registry IP:PORT/name:TAG>
    docker tag c89c3ebcccae 10.11.150.76:5000/hdfs-datanode:latest
    docker tag ca19d4c7e359 10.11.150.76:5000/hdfs-namenode:latest
    # push them to the registry
    docker push 10.11.150.76:5000/hdfs-datanode:latest
    docker push 10.11.150.76:5000/hdfs-namenode:latest

    Then create the following hdfs.yaml file:

    apiVersion: v1
    kind: Service
    metadata:
      name: hdfs-namenode-service
    spec:
      selector:
        app: hdfs-namenode
      type: ClusterIP
      clusterIP: "172.16.20.1"
      ports:
        - name: rpc
          port: 4231
          targetPort: 8020
        - name: p1
          port: 50020
        - name: p2
          port: 50090
        - name: p3
          port: 50070
        - name: p4
          port: 50010
        - name: p5
          port: 50075
        - name: p6
          port: 8031
        - name: p7
          port: 8032
        - name: p8
          port: 8033
        - name: p9
          port: 8040
        - name: p10
          port: 8042
        - name: p11
          port: 49707
        - name: p12
          port: 22
        - name: p13
          port: 8088
        - name: p14
          port: 8030
    ---
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: hdfs-namenode-1
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: hdfs-namenode
        spec:
          containers:
            - name: hdfs-namenode
              image: 10.11.150.76:5000/hdfs-namenode:latest
              volumeMounts:
                - name: data1
                  mountPath: /var/lib/hadoop-hdfs/cache/hdfs/dfs/name
                - name: data2
                  mountPath: /home/chianyu/shared_with_docker_container/cdh5/nn
              ports:
                - containerPort: 50020
                - containerPort: 50090
                - containerPort: 50070
                - containerPort: 50010
                - containerPort: 50075
                - containerPort: 8031
                - containerPort: 8032
                - containerPort: 8033
                - containerPort: 8040
                - containerPort: 8042
                - containerPort: 49707
                - containerPort: 22
                - containerPort: 8088
                - containerPort: 8030
                - containerPort: 8020
          nodeSelector:
            kubernetes.io/hostname: bx-42-199
          volumes:
            - hostPath:
                path: /data1/kubernetes/hdfs-namenode/data1
              name: data1
            - hostPath:
                path: /data1/kubernetes/hdfs-namenode/data2
              name: data2
    ---
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: hdfs-datanode-1
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: hdfs-datanode
            server-id: "1"
        spec:
          containers:
            - name: hdfs-datanode-1
              image: 10.11.150.76:5000/hdfs-datanode:latest
              volumeMounts:
                - name: data1
                  mountPath: /var/lib/hadoop-hdfs/cache/hdfs/dfs/name
                - name: data2
                  mountPath: /home/chianyu/shared_with_docker_container/cdh5/dn
              env:
                - name: HDFSNAMENODERPC_SERVICE_HOST
                  value: "172.16.20.1"
                - name: HDFSNAMENODERPC_SERVICE_PORT
                  value: "4231"
              ports:
                - containerPort: 50020
                - containerPort: 50090
                - containerPort: 50070
                - containerPort: 50010
                - containerPort: 50075
                - containerPort: 8031
                - containerPort: 8032
                - containerPort: 8033
                - containerPort: 8040
                - containerPort: 8042
                - containerPort: 49707
                - containerPort: 22
                - containerPort: 8088
                - containerPort: 8030
                - containerPort: 8020
          nodeSelector:
            kubernetes.io/hostname: bx-42-199
          volumes:
            - hostPath:
                path: /data1/kubernetes/hdfs-datanode1/data1
              name: data1
            - hostPath:
                path: /data1/kubernetes/hdfs-datanode1/data2
              name: data2
    ---
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: hdfs-datanode-2
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: hdfs-datanode
            server-id: "2"
        spec:
          containers:
            - name: hdfs-datanode-2
              image: 10.11.150.76:5000/hdfs-datanode:latest
              volumeMounts:
                - name: data1
                  mountPath: /var/lib/hadoop-hdfs/cache/hdfs/dfs/name
                - name: data2
                  mountPath: /home/chianyu/shared_with_docker_container/cdh5/dn
              env:
                - name: HDFSNAMENODERPC_SERVICE_HOST
                  value: "172.16.20.1"
                - name: HDFSNAMENODERPC_SERVICE_PORT
                  value: "4231"
              ports:
                - containerPort: 50020
                - containerPort: 50090
                - containerPort: 50070
                - containerPort: 50010
                - containerPort: 50075
                - containerPort: 8031
                - containerPort: 8032
                - containerPort: 8033
                - containerPort: 8040
                - containerPort: 8042
                - containerPort: 49707
                - containerPort: 22
                - containerPort: 8088
                - containerPort: 8030
          nodeSelector:
            kubernetes.io/hostname: bx-42-199
          volumes:
            - name: data1
              hostPath:
                path: /data2/kubernetes/hdfs-datanode2/data1
            - name: data2
              hostPath:
                path: /data2/kubernetes/hdfs-datanode2/data2
    ---
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: hdfs-datanode-3
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: hdfs-datanode
            server-id: "3"
        spec:
          containers:
            - name: hdfs-datanode-3
              image: 10.11.150.76:5000/hdfs-datanode:latest
              volumeMounts:
                - name: data1
                  mountPath: /var/lib/hadoop-hdfs/cache/hdfs/dfs/name
                - name: data2
                  mountPath: /home/chianyu/shared_with_docker_container/cdh5/dn
              env:
                - name: HDFSNAMENODERPC_SERVICE_HOST
                  value: "172.16.20.1"
                - name: HDFSNAMENODERPC_SERVICE_PORT
                  value: "4231"
              ports:
                - containerPort: 50020
                - containerPort: 50090
                - containerPort: 50070
                - containerPort: 50010
                - containerPort: 50075
                - containerPort: 8031
                - containerPort: 8032
                - containerPort: 8033
                - containerPort: 8040
                - containerPort: 8042
                - containerPort: 49707
                - containerPort: 22
                - containerPort: 8088
                - containerPort: 8030
          nodeSelector:
            kubernetes.io/hostname: bx-42-199
          volumes:
            - name: data1
              hostPath:
                path: /data3/kubernetes/hdfs-datanode3/data1
            - name: data2
              hostPath:
                path: /data3/kubernetes/hdfs-datanode3/data2

    Running kubectl create -f hdfs.yaml creates one service named hdfs-namenode-service and four RCs named hdfs-namenode-1, hdfs-datanode-1, hdfs-datanode-2 and hdfs-datanode-3. kubectl get services/rc/pods shows that the corresponding service and pods have all started normally.

    Next, test whether HDFS is usable:

    # check the HDFS pods
    kubectl get pods
    
    # use describe to see which k8s node a pod runs on
    kubectl describe pod hdfs-datanode-3-h4jvt
    
    # enter the container
    docker ps | grep hdfs-datanode-3
    docker exec -it 2e2c4df0c0a9 /bin/bash
    
    # switch to the hdfs user
    su hdfs
    
    # create a directory in HDFS
    hadoop fs -mkdir /test
    # create a local file
    echo "Hello" > hello
    # copy the local file into HDFS
    hadoop fs -put hello /test
    # list the file in HDFS
    hadoop fs -ls /test
    
    # similarly, you can docker exec into the other datanodes and check the file, e.g.:
    root@hdfs-datanode-1-nek2l:/# hadoop fs -ls /test
    Found 1 items
    -rw-r--r--   2 hdfs hadoop          6 2015-11-27 08:36 /test/hello
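
    Two further quick checks, sketched on the assumption that the service IP from hdfs.yaml above is used: the NameNode web UI is reachable through the service, and dfsadmin reports the registered datanodes:

    # NameNode web UI exposed through the service IP
    curl http://172.16.20.1:50070/
    # inside a namenode/datanode container, as the hdfs user; should list three live datanodes
    hdfs dfsadmin -report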

    Part 4: Deploying the ZooKeeper Cluster

    The image is based on fabric8/zookeeper with a few modifications; the modified Dockerfile is as follows:

    FROM jboss/base-jdk:7
    
    MAINTAINER iocanel@gmail.com
    
    USER root
    
    ENV ZOOKEEPER_VERSION 3.4.6
    EXPOSE 2181 2888 3888
    
    RUN yum -y install wget bind-utils && yum clean all \
        && wget -q -O - http://apache.mirrors.pair.com/zookeeper/zookeeper-${ZOOKEEPER_VERSION}/zookeeper-${ZOOKEEPER_VERSION}.tar.gz | tar -xzf - -C /opt \
        && mv /opt/zookeeper-${ZOOKEEPER_VERSION} /opt/zookeeper \
        && cp /opt/zookeeper/conf/zoo_sample.cfg /opt/zookeeper/conf/zoo.cfg \
        && mkdir -p /opt/zookeeper/{data,log}
    
    WORKDIR /opt/zookeeper
    VOLUME ["/opt/zookeeper/conf", "/opt/zookeeper/data", "/opt/zookeeper/log"]
    
    COPY config-and-run.sh ./bin/
    COPY zoo.cfg ./conf/
    
    CMD ["/opt/zookeeper/bin/config-and-run.sh"]

    The zoo.cfg file:

    # The number of milliseconds of each tick
    tickTime=2000
    # The number of ticks that the initial
    # synchronization phase can take
    initLimit=10
    # The number of ticks that can pass between
    # sending a request and getting an acknowledgement
    syncLimit=5
    # the directory where the snapshot is stored.
    dataDir=/opt/zookeeper/data
    # This option will direct the machine to write the transaction log to the dataLogDir rather than the dataDir. This allows a dedicated log device to be used, and helps avoid competition between logging and snapshots.
    dataLogDir=/opt/zookeeper/log
    
    # the port at which the clients will connect
    clientPort=2181
    #
    # Be sure to read the maintenance section of the
    # administrator guide before turning on autopurge.
    #
    # http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
    #
    # The number of snapshots to retain in dataDir
    #autopurge.snapRetainCount=3
    # Purge task interval in hours
    # Set to "0" to disable auto purge feature
    #autopurge.purgeInterval=1

    The config-and-run.sh script:

    #!/bin/bash
    
    echo "$SERVER_ID / $MAX_SERVERS" 
    if [ ! -z "$SERVER_ID" ] && [ ! -z "$MAX_SERVERS" ]; then
      echo "Starting up in clustered mode"
      echo "" >> /opt/zookeeper/conf/zoo.cfg
      echo "#Server List" >> /opt/zookeeper/conf/zoo.cfg
      for i in $( eval echo {1..$MAX_SERVERS});do
        HostEnv="ZOOKEEPER_${i}_SERVICE_HOST"
        HOST=${!HostEnv}
        FollowerPortEnv="ZOOKEEPER_${i}_SERVICE_PORT_FOLLOWERS"
        FOLLOWERPORT=${!FollowerPortEnv}
        ElectionPortEnv="ZOOKEEPER_${i}_SERVICE_PORT_ELECTION"
        ELECTIONPORT=${!ElectionPortEnv}
        if [ "$SERVER_ID" = "$i" ];then
          echo "server.$i=0.0.0.0:$FOLLOWERPORT:$ELECTIONPORT" >> /opt/zookeeper/conf/zoo.cfg
        else
          echo "server.$i=$HOST:$FOLLOWERPORT:$ELECTIONPORT" >> /opt/zookeeper/conf/zoo.cfg
        fi
      done
      cat /opt/zookeeper/conf/zoo.cfg
    
      # Persists the ID of the current instance of Zookeeper
      echo ${SERVER_ID} > /opt/zookeeper/data/myid
      else
          echo "Starting up in standalone mode"
    fi
    
    exec /opt/zookeeper/bin/zkServer.sh start-foreground

    After these modifications, build the image and push it to the private registry (image name: 10.11.150.76:5000/zookeeper-kb:3.4.6-1).
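
    For example, a sketch run from the directory containing the Dockerfile, zoo.cfg and config-and-run.sh:

    docker build -t 10.11.150.76:5000/zookeeper-kb:3.4.6-1 .
    docker push 10.11.150.76:5000/zookeeper-kb:3.4.6-1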

    Create the zookeeper.yaml file:

    apiVersion: v1
    kind: Service
    metadata:
      name: zookeeper-1
      labels:
        name: zookeeper-1
    spec:
      ports:
        - name: client
          port: 2181
          targetPort: 2181
        - name: followers
          port: 2888
          targetPort: 2888
        - name: election
          port: 3888
          targetPort: 3888
      selector:
        name: zookeeper
        server-id: "1"
      type: ClusterIP
      clusterIP: 172.16.11.1
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: zookeeper-2
      labels:
        name: zookeeper-2
    spec:
      ports:
        - name: client
          port: 2181
          targetPort: 2181
        - name: followers
          port: 2888
          targetPort: 2888
        - name: election
          port: 3888
          targetPort: 3888
      selector:
        name: zookeeper
        server-id: "2"
      type: ClusterIP
      clusterIP: 172.16.11.2
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: zookeeper-3
      labels:
        name: zookeeper-3
    spec:
      ports:
        - name: client
          port: 2181
          targetPort: 2181
        - name: followers
          port: 2888
          targetPort: 2888
        - name: election
          port: 3888
          targetPort: 3888
      selector:
        name: zookeeper
        server-id: "3"
      type: ClusterIP
      clusterIP: 172.16.11.3
    ---
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: zookeeper-1
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            name: zookeeper
            server-id: "1"
        spec:
          volumes:
            - hostPath:
                path: /data1/kubernetes/zookeeper/data1
              name: data
            - hostPath:
                path: /data1/kubernetes/zookeeper/log1
              name: log
          containers:
            - name: server
              image: 10.11.150.76:5000/zookeeper-kb:3.4.6-1
              env:
                - name: SERVER_ID
                  value: "1"
                - name: MAX_SERVERS
                  value: "3"
              ports:
                - containerPort: 2181
                - containerPort: 2888
                - containerPort: 3888
              volumeMounts:
                - mountPath: /opt/zookeeper/data
                  name: data
                - mountPath: /opt/zookeeper/log
                  name: log
          nodeSelector:
            kubernetes.io/hostname: bx-42-199
    ---
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: zookeeper-2
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            name: zookeeper
            server-id: "2"
        spec:
          volumes:
            - hostPath:
                path: /data1/kubernetes/zookeeper/data2
              name: data
            - hostPath:
                path: /data1/kubernetes/zookeeper/log2
              name: log
          containers:
            - name: server
              image: 10.11.150.76:5000/zookeeper-kb:3.4.6-1
              env:
                - name: SERVER_ID
                  value: "2"
                - name: MAX_SERVERS
                  value: "3"
              ports:
                - containerPort: 2181
                - containerPort: 2888
                - containerPort: 3888
              volumeMounts:
                - mountPath: /opt/zookeeper/data
                  name: data
                - mountPath: /opt/zookeeper/log
                  name: log
          nodeSelector:
            kubernetes.io/hostname: bx-42-199
    ---
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: zookeeper-3
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            name: zookeeper
            server-id: "3"
        spec:
          volumes:
            - hostPath:
                path: /data1/kubernetes/zookeeper/data3
              name: data
            - hostPath:
                path: /data1/kubernetes/zookeeper/log3
              name: log
          containers:
            - name: server
              image: 10.11.150.76:5000/zookeeper-kb:3.4.6-1
              env:
                - name: SERVER_ID
                  value: "3"
                - name: MAX_SERVERS
                  value: "3"
              ports:
                - containerPort: 2181
                - containerPort: 2888
                - containerPort: 3888
              volumeMounts:
                - mountPath: /opt/zookeeper/data
                  name: data
                - mountPath: /opt/zookeeper/log
                  name: log
          nodeSelector:
            kubernetes.io/hostname: bx-42-199

    kubectl create -f zookeeper.yaml creates the three services and their corresponding RCs. Note that the containers map ZooKeeper's data and log directories onto host directories for persistent storage.

    Once created, the cluster can be tested:

    # inside a zookeeper container, locate zkCli.sh and use it as the test client
    /opt/zookeeper/bin/zkCli.sh
    [zk: localhost:2181(CONNECTED) 0]
    
    # connect to a zookeeper service created by k8s (any of the three will do)
    [zk: localhost:2181(CONNECTED) 0] connect 172.16.11.2:2181
    [zk: 172.16.11.2:2181(CONNECTED) 1]
    
    # inspect the znodes
    [zk: 172.16.11.2:2181(CONNECTED) 1] ls /
    [zookeeper]
    [zk: 172.16.11.2:2181(CONNECTED) 2] get /zookeeper
    
    cZxid = 0x0
    ctime = Thu Jan 01 00:00:00 UTC 1970
    mZxid = 0x0
    mtime = Thu Jan 01 00:00:00 UTC 1970
    pZxid = 0x0
    cversion = -1
    dataVersion = 0
    aclVersion = 0
    ephemeralOwner = 0x0
    dataLength = 0
    numChildren = 1
    [zk: 172.16.11.2:2181(CONNECTED) 3]
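
    The ensemble roles can also be checked with ZooKeeper's four-letter commands (a sketch; nc is installed by the Dockerfile above). One node should report leader and the other two follower:

    echo stat | nc 172.16.11.1 2181 | grep Mode
    echo stat | nc 172.16.11.2 2181 | grep Mode
    echo stat | nc 172.16.11.3 2181 | grep Mode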

    Part 5: Deploying HBase

    With the above preparation in place, we now deploy an HBase cluster with two masters and two regionservers, the two masters on two different nodes and the two regionservers likewise, using the standalone HDFS and ZooKeeper services set up earlier.

    First the HBase image needs to be built; the chosen version is hbase-0.98.10.1-hadoop2. The Dockerfile is as follows:

    FROM centos:6.6
    MAINTAINER openxxs <xiaoshengxu@sohu-inc.com>
    
    RUN yum install -y java-1.7.0-openjdk-devel.x86_64
    ENV JAVA_HOME=/usr/lib/jvm/jre
    
    RUN yum install -y nc \
        && yum install -y tar \
        && mkdir /hbase-setup
    
    WORKDIR /hbase-setup
    
    COPY hbase-0.98.10.1-hadoop2-bin.tar.gz /hbase-setup/hbase-0.98.10.1-hadoop2-bin.tar.gz
    RUN tar zxf hbase-0.98.10.1-hadoop2-bin.tar.gz -C /opt/ \
        && ln -s /opt/hbase-0.98.10.1-hadoop2 /opt/hbase
    
    ADD hbase-site.xml /opt/hbase/conf/hbase-site.xml
    ADD start-k8s-hbase.sh /opt/hbase/bin/start-k8s-hbase.sh
    RUN chmod +x /opt/hbase/bin/start-k8s-hbase.sh
    
    WORKDIR /opt/hbase/bin
    
    ENV PATH=$PATH:/opt/hbase/bin
    
    CMD /opt/hbase/bin/start-k8s-hbase.sh

    The hbase-site.xml configuration file:

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
      <configuration>
        <property>
          <name>hbase.cluster.distributed</name>
          <value>true</value>
        </property>
        <property>
          <name>hbase.master.port</name>
          <value>@HBASE_MASTER_PORT@</value>
        </property>
        <property>
          <name>hbase.master.info.port</name>
          <value>@HBASE_MASTER_INFO_PORT@</value>
        </property>
        <property>
          <name>hbase.regionserver.port</name>
          <value>@HBASE_REGION_PORT@</value>
        </property>
        <property>
          <name>hbase.regionserver.info.port</name>
          <value>@HBASE_REGION_INFO_PORT@</value>
        </property>
        <property>
          <name>hbase.rootdir</name>
          <value>hdfs://@HDFS_PATH@/@ZNODE_PARENT@</value>
        </property>
        <property>
          <name>hbase.zookeeper.quorum</name>
          <value>@ZOOKEEPER_IP_LIST@</value>
        </property>
        <property>
          <name>hbase.zookeeper.property.clientPort</name>
          <value>@ZOOKEEPER_PORT@</value>
        </property>
        <property>
          <name>zookeeper.znode.parent</name>
          <value>/@ZNODE_PARENT@</value>
        </property>
      </configuration>

    The startup script start-k8s-hbase.sh substitutes the parameters into hbase-site.xml, writes host entries into /etc/hosts, and starts HBase:

    #!/bin/bash
    
    export HBASE_CONF_FILE=/opt/hbase/conf/hbase-site.xml
    export HADOOP_USER_NAME=hdfs
    export HBASE_MANAGES_ZK=false
    
    # substitute the placeholders in hbase-site.xml with the values passed in as environment variables
    sed -i "s/@HBASE_MASTER_PORT@/$HBASE_MASTER_PORT/g" $HBASE_CONF_FILE
    sed -i "s/@HBASE_MASTER_INFO_PORT@/$HBASE_MASTER_INFO_PORT/g" $HBASE_CONF_FILE
    sed -i "s/@HBASE_REGION_PORT@/$HBASE_REGION_PORT/g" $HBASE_CONF_FILE
    sed -i "s/@HBASE_REGION_INFO_PORT@/$HBASE_REGION_INFO_PORT/g" $HBASE_CONF_FILE
    sed -i "s/@HDFS_PATH@/$HDFS_SERVICE:$HDFS_PORT/g" $HBASE_CONF_FILE
    sed -i "s/@ZOOKEEPER_IP_LIST@/$ZOOKEEPER_SERVICE_LIST/g" $HBASE_CONF_FILE
    sed -i "s/@ZOOKEEPER_PORT@/$ZOOKEEPER_PORT/g" $HBASE_CONF_FILE
    sed -i "s/@ZNODE_PARENT@/$ZNODE_PARENT/g" $HBASE_CONF_FILE
    
    # map the other masters' service IPs to their pod names in /etc/hosts
    for i in ${HBASE_MASTER_LIST[@]}
    do
       arr=(${i//:/ })
       echo "${arr[0]} ${arr[1]}" >> /etc/hosts
    done
    
    # map the other regionservers' service IPs to their pod names in /etc/hosts
    for i in ${HBASE_REGION_LIST[@]}
    do
       arr=(${i//:/ })
       echo "${arr[0]} ${arr[1]}" >> /etc/hosts
    done
    
    if [ "$HBASE_SERVER_TYPE" = "master" ]; then
        /opt/hbase/bin/hbase master start > logmaster.log 2>&1
    elif [ "$HBASE_SERVER_TYPE" = "regionserver" ]; then
        /opt/hbase/bin/hbase regionserver start > logregion.log 2>&1
    fi

    HADOOP_USER_NAME is exported as hdfs because otherwise a Permission Denied error occurs; HBASE_MANAGES_ZK=false means HBase's bundled ZooKeeper is not used; HBASE_MASTER_LIST maps the service addresses to the pod names of the other masters in the cluster (all masters except the current one); HBASE_REGION_LIST does the same for the other regionservers; finally, HBASE_SERVER_TYPE decides whether a master or a regionserver is started.
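
    To make the substitution concrete: with the environment values that hbase.yaml below passes to hbase-master-1, the script would append roughly the following lines to that pod's /etc/hosts (illustrative only):

    # from HBASE_MASTER_LIST="172.16.30.2:hbase-master-2"
    172.16.30.2 hbase-master-2
    # from HBASE_REGION_LIST="172.16.30.3:hbase-region-1 172.16.30.4:hbase-region-2"
    172.16.30.3 hbase-region-1
    172.16.30.4 hbase-region-2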

    With these files in place, the HBase image can be built and pushed:

    docker build -t 10.11.150.76:5000/openxxs/hbase:1.0 .
    docker push 10.11.150.76:5000/openxxs/hbase:1.0

    Next, create the hbase.yaml file with the following content:

    apiVersion: v1
    kind: Service
    metadata:
      name: hbase-master-1
    spec:
      selector:
        app: hbase-master
        server-id: "1"
      type: ClusterIP
      clusterIP: "172.16.30.1"
      ports:
        - name: rpc
          port: 60000
          targetPort: 60000
        - name: info
          port: 60001
          targetPort: 60001
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: hbase-master-2
    spec:
      selector:
        app: hbase-master
        server-id: "2"
      type: ClusterIP
      clusterIP: "172.16.30.2"
      ports:
        - name: rpc
          port: 60000
          targetPort: 60000
        - name: info
          port: 60001
          targetPort: 60001
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: hbase-region-1
    spec:
      selector:
        app: hbase-region
        server-id: "1"
      type: ClusterIP
      clusterIP: "172.16.30.3"
      ports:
        - name: rpc
          port: 60010
          targetPort: 60010
        - name: info
          port: 60011
          targetPort: 60011
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: hbase-region-2
    spec:
      selector:
        app: hbase-region
        server-id: "2"
      type: ClusterIP
      clusterIP: "172.16.30.4"
      ports:
        - name: rpc
          port: 60010
          targetPort: 60010
        - name: info
          port: 60011
          targetPort: 60011
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: hbase-master-1
      labels:
        app: hbase-master
        server-id: "1"
    spec:
      containers:
        - name: hbase-master-1
          image: 10.11.150.76:5000/openxxs/hbase:1.0
          ports:
            - containerPort: 60000
            - containerPort: 60001
          env:
            - name: HBASE_SERVER_TYPE
              value: master
            - name: HBASE_MASTER_PORT
              value: "60000"
            - name: HBASE_MASTER_INFO_PORT
              value: "60001"
            - name: HBASE_REGION_PORT
              value: "60010"
            - name: HBASE_REGION_INFO_PORT
              value: "60011"
            - name: HDFS_SERVICE
              value: "hdfs-namenode-service.default.domeos.sohu"
            - name: HDFS_PORT
              value: "4231"
            - name: ZOOKEEPER_SERVICE_LIST
              value: "zookeeper-1.default.domeos.sohu,zookeeper-2.default.domeos.sohu,zookeeper-3.default.domeos.sohu"
            - name: ZOOKEEPER_PORT
              value: "2181"
            - name: ZNODE_PARENT
              value: hbase
            - name: HBASE_MASTER_LIST
              value: "172.16.30.2:hbase-master-2"
            - name: HBASE_REGION_LIST
              value: "172.16.30.3:hbase-region-1 172.16.30.4:hbase-region-2"
      restartPolicy: Always
      nodeSelector:
        kubernetes.io/hostname: bx-42-199
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: hbase-master-2
      labels:
        app: hbase-master
        server-id: "2"
    spec:
      containers:
        - name: hbase-master-2
          image: 10.11.150.76:5000/openxxs/hbase:1.0
          ports:
            - containerPort: 60000
            - containerPort: 60001
          env:
            - name: HBASE_SERVER_TYPE
              value: master
            - name: HBASE_MASTER_PORT
              value: "60000"
            - name: HBASE_MASTER_INFO_PORT
              value: "60001"
            - name: HBASE_REGION_PORT
              value: "60010"
            - name: HBASE_REGION_INFO_PORT
              value: "60011"
            - name: HDFS_SERVICE
              value: "hdfs-namenode-service.default.domeos.sohu"
            - name: HDFS_PORT
              value: "4231"
            - name: ZOOKEEPER_SERVICE_LIST
              value: "zookeeper-1.default.domeos.sohu,zookeeper-2.default.domeos.sohu,zookeeper-3.default.domeos.sohu"
            - name: ZOOKEEPER_PORT
              value: "2181"
            - name: ZNODE_PARENT
              value: hbase
            - name: HBASE_MASTER_LIST
              value: "172.16.30.1:hbase-master-1"
            - name: HBASE_REGION_LIST
              value: "172.16.30.3:hbase-region-1 172.16.30.4:hbase-region-2"
      restartPolicy: Always
      nodeSelector:
        kubernetes.io/hostname: bx-42-198
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: hbase-region-1
      labels:
        app: hbase-region
        server-id: "1"
    spec:
      containers:
        - name: hbase-region-1
          image: 10.11.150.76:5000/openxxs/hbase:1.0
          ports:
            - containerPort: 60010
            - containerPort: 60011
          env:
            - name: HBASE_SERVER_TYPE
              value: regionserver
            - name: HBASE_MASTER_PORT
              value: "60000"
            - name: HBASE_MASTER_INFO_PORT
              value: "60001"
            - name: HBASE_REGION_PORT
              value: "60010"
            - name: HBASE_REGION_INFO_PORT
              value: "60011"
            - name: HDFS_SERVICE
              value: "hdfs-namenode-service.default.domeos.sohu"
            - name: HDFS_PORT
              value: "4231"
            - name: ZOOKEEPER_SERVICE_LIST
              value: "zookeeper-1.default.domeos.sohu,zookeeper-2.default.domeos.sohu,zookeeper-3.default.domeos.sohu"
            - name: ZOOKEEPER_PORT
              value: "2181"
            - name: ZNODE_PARENT
              value: hbase
            - name: HBASE_MASTER_LIST
              value: "172.16.30.1:hbase-master-1 172.16.30.2:hbase-master-2"
            - name: HBASE_REGION_LIST
              value: "172.16.30.4:hbase-region-2"
      restartPolicy: Always
      nodeSelector:
        kubernetes.io/hostname: bx-42-199
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: hbase-region-2
      labels:
        app: hbase-region
        server-id: "2"
    spec:
      containers:
        - name: hbase-region-2
          image: 10.11.150.76:5000/openxxs/hbase:1.0
          ports:
            - containerPort: 60010
            - containerPort: 60011
          env:
            - name: HBASE_SERVER_TYPE
              value: regionserver
            - name: HBASE_MASTER_PORT
              value: "60000"
            - name: HBASE_MASTER_INFO_PORT
              value: "60001"
            - name: HBASE_REGION_PORT
              value: "60010"
            - name: HBASE_REGION_INFO_PORT
              value: "60011"
            - name: HDFS_SERVICE
              value: "hdfs-namenode-service.default.domeos.sohu"
            - name: HDFS_PORT
              value: "4231"
            - name: ZOOKEEPER_SERVICE_LIST
              value: "zookeeper-1.default.domeos.sohu,zookeeper-2.default.domeos.sohu,zookeeper-3.default.domeos.sohu"
            - name: ZOOKEEPER_PORT
              value: "2181"
            - name: ZNODE_PARENT
              value: hbase
            - name: HBASE_MASTER_LIST
              value: "172.16.30.1:hbase-master-1 172.16.30.2:hbase-master-2"
            - name: HBASE_REGION_LIST
              value: "172.16.30.3:hbase-region-1"
      restartPolicy: Always
      nodeSelector:
        kubernetes.io/hostname: bx-42-198

    Note: this yaml file creates two master services and two regionserver services plus the corresponding two master pods and two regionserver pods. Setting restartPolicy to Always means a pod is restarted whenever it dies. Parameters are passed into the pods as environment variables: HDFS_SERVICE is the skyDNS domain name of the HDFS service (if skyDNS is not set up, use the HDFS service IP instead), and ZOOKEEPER_SERVICE_LIST likewise; entries in HBASE_MASTER_LIST take the form <master service IP>:<master pod name>, separated by spaces, and HBASE_REGION_LIST works the same way.

    Now the HBase services can be created and inspected:

    # create everything
    $kubectl create -f hbase.yaml
    service "hbase-master-1" created
    service "hbase-master-2" created
    service "hbase-region-1" created
    service "hbase-region-2" created
    pod "hbase-master-1" created
    pod "hbase-master-2" created
    pod "hbase-region-1" created
    pod "hbase-region-2" created
    
    # check the pods
    $kubectl get pods
    NAME                    READY     STATUS           RESTARTS   AGE
    hbase-master-1          1/1       Running          0          5s
    hbase-master-2          0/1       Pending          0          5s
    hbase-region-1          1/1       Running          0          5s
    hbase-region-2          0/1       Pending          0          5s
    hdfs-datanode-1-nek2l   1/1       Running          3          7d
    hdfs-datanode-2-vkbbt   1/1       Running          3          7d
    hdfs-datanode-3-h4jvt   1/1       Running          3          7d
    hdfs-namenode-1-cl0pj   1/1       Running          3          7d
    kube-dns-v8-x8igc       3/3       Running          0          4h
    zookeeper-1-ojhmy       1/1       Running          0          12h
    zookeeper-2-cr73i       1/1       Running          0          12h
    zookeeper-3-79ls0       1/1       Running          0          12h
    
    # check the services
    $kubectl get service
    NAME                    CLUSTER_IP       EXTERNAL_IP    PORT(S)                                                                                                                                      SELECTOR                       AGE
    hbase-master-1          172.16.30.1      <none>         60000/TCP,60001/TCP                                                                                                                          app=hbase-master,server-id=1   17m
    hbase-master-2          172.16.30.2      <none>         60000/TCP,60001/TCP                                                                                                                          app=hbase-master,server-id=2   17m
    hbase-region-1          172.16.30.3      <none>         60010/TCP,60011/TCP                                                                                                                          app=hbase-region,server-id=1   17m
    hbase-region-2          172.16.30.4      <none>         60010/TCP,60011/TCP                                                                                                                          app=hbase-region,server-id=2   17m
    hdfs-namenode-service   172.16.20.1      <none>         4231/TCP,50020/TCP,50090/TCP,50070/TCP,50010/TCP,50075/TCP,8031/TCP,8032/TCP,8033/TCP,8040/TCP,8042/TCP,49707/TCP,22/TCP,8088/TCP,8030/TCP   app=hdfs-namenode              7d
    kube-dns                172.16.40.1      <none>         53/UDP,53/TCP                                                                                                                                app=kube-dns,version=v8        10h
    kubernetes              172.16.0.1       <none>         443/TCP                                                                                                                                      <none>                         12d
    zookeeper-1             172.16.11.1      <none>         2181/TCP,2888/TCP,3888/TCP                                                                                                                   name=zookeeper,server-id=1     13h
    zookeeper-2             172.16.11.2      <none>         2181/TCP,2888/TCP,3888/TCP                                                                                                                   name=zookeeper,server-id=2     13h
    zookeeper-3             172.16.11.3      <none>         2181/TCP,2888/TCP,3888/TCP                                                                                                                   name=zookeeper,server-id=3     13h

    Through ZooKeeper's zkCli.sh you can see the master and rs records under /hbase (the garbled characters are just a display-encoding artifact and are harmless):

    [zk: localhost:2181(CONNECTED) 0] ls /hbase
    [meta-region-server, backup-masters, table, draining, region-in-transition, table-lock, running, master, namespace, hbaseid, online-snapshot, replication, splitWAL, recovering-regions, rs]
    [zk: localhost:2181(CONNECTED) 1] ls /hbase/rs
    [172.27.0.0,60010,1448896399329, 172.28.0.115,60010,1448896360650]
    [zk: localhost:2181(CONNECTED) 2] get /hbase/master
    ?master:60000??E*?O=PBUF
    
    base-master-1???????*
    cZxid = 0x100000186
    ctime = Mon Nov 30 15:12:42 UTC 2015
    mZxid = 0x100000186
    mtime = Mon Nov 30 15:12:42 UTC 2015
    pZxid = 0x100000186
    cversion = 0
    dataVersion = 0
    aclVersion = 0
    ephemeralOwner = 0x151563a5e37001a
    dataLength = 60
    numChildren = 0

    You can docker exec into an HBase container and run table operations to check that HBase is working:

    # enter the hbase-master-2 container on node 198
    [@bx_42_198 /opt/scs/openxxs]# docker exec -it f131fcf15a72 /bin/bash
    
    # operate on hbase through the hbase shell
    [root@hbase-master-2 bin]# hbase shell
    2015-11-30 15:15:58,632 INFO  [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
    HBase Shell; enter 'help<RETURN>' for list of supported commands.
    Type "exit<RETURN>" to leave the HBase Shell
    Version 0.98.10.1-hadoop2, rd5014b47660a58485a6bdd0776dea52114c7041e, Tue Feb 10 11:34:09 PST 2015
    
    # check the status; the "2 dead" shown here is left over from earlier tests and is harmless
    hbase(main):001:0> status
    2015-11-30 15:16:03,551 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    2 servers, 2 dead, 1.5000 average load
    
    # create a table
    hbase(main):002:0> create 'test','id','name'
    0 row(s) in 0.8330 seconds
    
    => Hbase::Table - test
    
    # list the tables
    hbase(main):003:0> list
    TABLE                                                                                                                                                                                
    member                                                                                                                                                                               
    test                                                                                                                                                                                 
    2 row(s) in 0.0240 seconds
    
    => ["member", "test"]
    
    # put a row
    hbase(main):004:0> put 'test','test1','id:5','addon'
    0 row(s) in 0.1540 seconds
    
    # read the row back
    hbase(main):005:0> get 'test','test1'
    COLUMN                                         CELL                                                                                                                                  
     id:5                                          timestamp=1448906130803, value=addon                                                                                                  
    1 row(s) in 0.0490 seconds
    
    # enter the hbase-master-1 container on node 199 and read the data inserted from 198
    hbase(main):001:0> get 'test','test1'
    2015-11-30 18:01:23,944 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    COLUMN                                         CELL                                                                                                                                  
     id:5                                          timestamp=1448906130803, value=addon                                                                                                  
    1 row(s) in 0.2430 seconds

    From the results above, HBase is now running in k8s, if in a somewhat rough-and-ready way.

    Part 6: Discussion

    The reason /etc/hosts records have to be maintained is that HBase masters recognize each other, regionservers recognize each other, and masters and regionservers recognize each other by hostname, while the k8s DNS only resolves services and does not resolve pods. Putting a master and a regionserver into the same pod, on the other hand, raises disk-resource sharing conflicts (not yet investigated in detail). The following passage from a GitHub discussion states HBase's dependence on hostnames very plainly:

    1. Master uses ZooKeeper to run master election, the winner puts its
    hostname:port in ZooKeeper.
    2. RegionServers will register themselves to the winning Master through its
    hostname registered in ZooKeeper.
    3. We have a RegionServer with meta table/region (which stores Region ->
    RegionServer map), it stores its hostname:port in ZooKeeper.
    4. When a client starts a request to HBase, it gets the hostname of the
    RegionServer with meta table from ZooKeeper, scan the meta table, and find
    out the hostname of the RegionServer it interests, and finally talks to the
    RegionServer through hostname.

    The approaches tried so far to work around this are:

    1. Configure skyDNS: skyDNS only resolves services and cannot resolve pod names. Inserting hostname records into skyDNS and maintaining them dynamically might solve the problem; this is currently being tried.

    2. Change the various settings used when creating the ReplicationController, Service, Pod and Container, such as name and generateName: this led nowhere.

    3. Change the hostname via a script after the container is created but before the master starts: Docker only allows the hostname to be set at container-creation time (docker run has a hostname flag for this); once the container is running it cannot be changed, and attempting to do so fails with: docker Error: hostname: you must be root to change the hostname. The error is misleading: it is Docker's design that forbids changing the hostname, not a permission problem, and it cannot be changed even as root.

    4. Change HBase parameters so that it registers IP addresses in ZK instead of hostnames: this looked promising for a while, but writing the hostname into ZK is hard-coded in HBase and there is no parameter to control it. A patch for this exists, but it did not test well.

    Some notes on the deployment scheme of Part 5: standalone pods were used instead of ReplicationControllers because k8s appends random characters to the hostname of a container managed by an RC to tell replicas apart, whereas for a standalone pod the pod name and the hostname are identical. Setting restartPolicy to Always is a small compensation for the robustness lost by not using RCs. What about naming the pod after the corresponding service's IP or domain? Unfortunately a hostname may not contain dots. The IPs written into /etc/hosts are the services' rather than the pods', because a pod's IP is unknown before it runs and changes after a restart, while the service IP stays fixed; hence the serviceIP:PodName mapping.

    The fundamental fix is for k8s to support DNS resolution of hostnames (that is, of pods). The ZooKeeper setup above has the same hostname problem, and Kafka, to be deployed later, will as well. The k8s developers have discussed this at length and are preparing to address it; hopefully the next release will ship the relevant settings.

    More discussion of DNS for pods and hostnames can be found in the issues and threads linked from the original post.
