https://www.cnblogs.com/g2thend/p/11616534.html

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.15.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.15.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.15.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.15.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.15.0 k8s.gcr.io/kube-apiserver:v1.15.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.15.0 k8s.gcr.io/kube-controller-manager:v1.15.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.15.0 k8s.gcr.io/kube-scheduler:v1.15.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.15.0 k8s.gcr.io/kube-proxy:v1.15.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

sder@sde_env_01:~/k8s$ sudo kubeadm init --kubernetes-version=v1.15.0 --pod-network-cidr 10.244.0.0/16
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3.
Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.18.1.181]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [10.18.1.181 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [10.18.1.181 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

[apiclient] All control plane components are healthy after 21.003203 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 3pq8az.ef8ucjjr38zb067n
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.18.1.181:6443 --token 3pq8az.ef8ucjjr38zb067n \
    --discovery-token-ca-cert-hash sha256:25ea22aeb28e2afedce4e965bdb71fe38cbecc2f553023b531b0256e9fd75769
sder@sde_env_01:~/k8s$

sder@k8s-master:~/k8s$ kubectl get svc
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP          29h
kubia-http   LoadBalancer   10.110.12.154    <pending>     8080:30105/TCP   22h
nginx        ClusterIP      10.105.222.111   <none>        80/TCP           29h

expose? (TODO: kubia-http is type LoadBalancer, but EXTERNAL-IP stays <pending> because there is no cloud load-balancer provider here; the service is still reachable on each node via NodePort 30105)

apt install bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
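The pull/tag commands at the top can be collapsed into one loop. A sketch (hypothetical helper, not from the original notes): it assumes the docker CLI is available and uses the same Aliyun mirror and image versions listed above.

```shell
#!/bin/sh
# Pull each image from the Aliyun mirror, re-tag it under k8s.gcr.io
# (the name kubeadm expects), then drop the mirror alias.
# Image list matches Kubernetes v1.15.0 as used above.
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers

for image in \
    kube-apiserver:v1.15.0 \
    kube-controller-manager:v1.15.0 \
    kube-scheduler:v1.15.0 \
    kube-proxy:v1.15.0 \
    pause:3.1 \
    etcd:3.3.10 \
    coredns:1.3.1
do
    docker pull "$MIRROR/$image"
    docker tag  "$MIRROR/$image" "k8s.gcr.io/$image"
    docker rmi  "$MIRROR/$image"   # optional: remove the mirror-named alias
done
```

Run it before `kubeadm init` so the preflight image pull finds everything locally.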
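The `source <(kubectl completion bash)` line only affects the current shell. To make completion persist across logins, the usual approach is to append it to ~/.bashrc (adjust for other shells):

```shell
# One-time setup: new interactive shells will load kubectl completion.
echo 'source <(kubectl completion bash)' >> ~/.bashrc
```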