Istio Installation, Configuration, and Usage
1. Installing Istio on Kubernetes
Download: https://github.com/istio/istio/
# 1. Download and extract the package
[root@k8s-master1 istio]# ls
istio-1.10.1-linux-amd64.tar.gz
[root@k8s-master1 istio]# tar xf istio-1.10.1-linux-amd64.tar.gz
[root@k8s-master1 istio]# cd istio-1.10.1/
[root@k8s-master1 istio-1.10.1]# ll
total 24
drwxr-x--- 2 root root 22 Jun 5 04:44 bin # contains the istioctl client binary; istioctl can also be used to manually inject the Envoy sidecar proxy
-rw-r--r-- 1 root root 11348 Jun 5 04:44 LICENSE
drwxr-xr-x 5 root root 52 Jun 5 04:44 manifests
-rw-r----- 1 root root 854 Jun 5 04:44 manifest.yaml
-rw-r--r-- 1 root root 5866 Jun 5 04:44 README.md
drwxr-xr-x 20 root root 332 Jun 5 04:44 samples # sample applications
drwxr-xr-x 3 root root 57 Jun 5 04:44 tools
[root@k8s-master1 istio-1.10.1]# cp bin/istioctl /usr/bin/
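A quick sanity check before installing; istioctl's version subcommand prints the client version (and, once a mesh exists, the control-plane version as well):
[root@k8s-master1 istio-1.10.1]# istioctl version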
# 2. Install
[root@k8s-master1 istio]# istioctl install --set profile=demo -y
✔ Istio core installed
✔ Istiod installed
✔ Egress gateways installed
✔ Ingress gateways installed
✔ Installation complete Thank you for installing Istio 1.10. Please take a few minutes to tell us about your install/upgrade experience! https://forms.gle/KjkrDnMPByq7akrYA
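The demo profile used above enables Istiod plus both the ingress and egress gateways, which suits evaluation and testing. To list the other built-in configuration profiles before choosing one:
[root@k8s-master1 istio-1.10.1]# istioctl profile list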
# 3. Verify that Istio deployed successfully
[root@k8s-master1 istio-1.10.1]# kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
istio-egressgateway-659cc7697b-42jdf 1/1 Running 0 18s
istio-ingressgateway-569f64cdf8-9xn7f 1/1 Running 0 18s
istiod-85c958cd6-j6p5n 1/1 Running 0 22s
# 4. Uninstalling Istio from the cluster (do not run this for now)
~]# istioctl manifest generate --set profile=demo | kubectl delete -f -
2. Deploying the Bookinfo Online Bookstore with Istio
2.1 Bookinfo Overview
Bookinfo is an online bookstore application made up of four separate microservices. It mimics one category of an online bookstore: the page displays a book's description, the book's details (ISBN, number of pages, and so on), and a few reviews of the book.
The Bookinfo application consists of four separate microservices:
1) productpage calls the details and reviews microservices to render the page;
2) details contains the book's information;
3) reviews contains the book reviews, and in turn calls the ratings microservice;
4) ratings contains the ranking information derived from the book reviews.
The reviews microservice has three versions:
1) v1 does not call the ratings service;
2) v2 calls the ratings service and displays each rating as 1 to 5 black stars;
3) v3 calls the ratings service and displays each rating as 1 to 5 red stars.
The Bookinfo microservices are written in different languages. They have no dependency on Istio, yet they form a representative service-mesh example: multiple services, multiple languages, and multiple versions of the reviews service.
Running this application on Istio requires no changes to the application itself. You simply configure and run the services in an Istio-enabled environment; concretely, an Envoy sidecar is injected into every service. The resulting deployment is shown in the figure below:
Every microservice is packaged together with an Envoy sidecar, and all inbound and outbound traffic of the instrumented services is intercepted by the sidecar. This provides the hooks needed for external control, allowing the Istio control plane to supply request routing, telemetry collection, policy enforcement, and other capabilities for the application.
2.2 Deploying Bookinfo
1) Istio injects the sidecar automatically for labeled namespaces; label the default namespace with istio-injection=enabled
[root@k8s-master1 istio-1.10.1]# kubectl label namespace default istio-injection=enabled
[root@k8s-master1 istio-1.10.1]# kubectl describe ns default |grep istio-injection
Labels: istio-injection=enabled
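Note that injection happens at pod creation time, so workloads that were already running before the label was added must be recreated to receive a sidecar. To exclude an individual workload inside an injection-enabled namespace, Istio honors a per-pod annotation; a minimal sketch:
template:
  metadata:
    annotations:
      sidecar.istio.io/inject: "false" # opt this pod out of automatic injection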
2) Deploy the Bookinfo application with kubectl
[root@k8s-master1 istio-1.10.1]# pwd
/root/istio/istio-1.10.1
[root@k8s-master1 istio-1.10.1]# cat samples/bookinfo/platform/kube/bookinfo.yaml
# Copyright Istio Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##################################################################################################
# This file defines the services, service accounts, and deployments for the Bookinfo sample.
#
# To apply all 4 Bookinfo services, their corresponding service accounts, and deployments:
#
# kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
#
# Alternatively, you can deploy any resource separately:
#
# kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -l service=reviews # reviews Service
# kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -l account=reviews # reviews ServiceAccount
# kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -l app=reviews,version=v3 # reviews-v3 Deployment
##################################################################################################
##################################################################################################
# Details service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: details
  labels:
    app: details
    service: details
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: details
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-details
  labels:
    account: details
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: details-v1
  labels:
    app: details
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: details
      version: v1
  template:
    metadata:
      labels:
        app: details
        version: v1
    spec:
      serviceAccountName: bookinfo-details
      containers:
      - name: details
        image: docker.io/istio/examples-bookinfo-details-v1:1.16.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
        securityContext:
          runAsUser: 1000
---
##################################################################################################
# Ratings service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: ratings
  labels:
    app: ratings
    service: ratings
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: ratings
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-ratings
  labels:
    account: ratings
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratings-v1
  labels:
    app: ratings
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ratings
      version: v1
  template:
    metadata:
      labels:
        app: ratings
        version: v1
    spec:
      serviceAccountName: bookinfo-ratings
      containers:
      - name: ratings
        image: docker.io/istio/examples-bookinfo-ratings-v1:1.16.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
        securityContext:
          runAsUser: 1000
---
##################################################################################################
# Reviews service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
    service: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-reviews
  labels:
    account: reviews
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v1
  labels:
    app: reviews
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v1
  template:
    metadata:
      labels:
        app: reviews
        version: v1
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: docker.io/istio/examples-bookinfo-reviews-v1:1.16.2
        imagePullPolicy: IfNotPresent
        env:
        - name: LOG_DIR
          value: "/tmp/logs"
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: wlp-output
          mountPath: /opt/ibm/wlp/output
        securityContext:
          runAsUser: 1000
      volumes:
      - name: wlp-output
        emptyDir: {}
      - name: tmp
        emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v2
  labels:
    app: reviews
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v2
  template:
    metadata:
      labels:
        app: reviews
        version: v2
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: docker.io/istio/examples-bookinfo-reviews-v2:1.16.2
        imagePullPolicy: IfNotPresent
        env:
        - name: LOG_DIR
          value: "/tmp/logs"
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: wlp-output
          mountPath: /opt/ibm/wlp/output
        securityContext:
          runAsUser: 1000
      volumes:
      - name: wlp-output
        emptyDir: {}
      - name: tmp
        emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v3
  labels:
    app: reviews
    version: v3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v3
  template:
    metadata:
      labels:
        app: reviews
        version: v3
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: docker.io/istio/examples-bookinfo-reviews-v3:1.16.2
        imagePullPolicy: IfNotPresent
        env:
        - name: LOG_DIR
          value: "/tmp/logs"
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: wlp-output
          mountPath: /opt/ibm/wlp/output
        securityContext:
          runAsUser: 1000
      volumes:
      - name: wlp-output
        emptyDir: {}
      - name: tmp
        emptyDir: {}
---
##################################################################################################
# Productpage services
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
    service: productpage
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-productpage
  labels:
    account: productpage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: productpage-v1
  labels:
    app: productpage
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: productpage
      version: v1
  template:
    metadata:
      labels:
        app: productpage
        version: v1
    spec:
      serviceAccountName: bookinfo-productpage
      containers:
      - name: productpage
        image: docker.io/istio/examples-bookinfo-productpage-v1:1.16.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        securityContext:
          runAsUser: 1000
      volumes:
      - name: tmp
        emptyDir: {}
---
[root@k8s-master1 istio-1.10.1]# kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
# Confirm that all services and pods are correctly defined and running:
[root@k8s-master1 istio-1.10.1]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
details ClusterIP 10.108.71.3 <none> 9080/TCP 67s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d18h
productpage ClusterIP 10.100.76.24 <none> 9080/TCP 67s
ratings ClusterIP 10.96.56.183 <none> 9080/TCP 67s
reviews ClusterIP 10.102.223.234 <none> 9080/TCP 67s
[root@k8s-master1 istio-1.10.1]# kubectl get pods
NAME READY STATUS RESTARTS AGE
details-v1-79f774bdb9-gqhd6 2/2 Running 0 75s
productpage-v1-6b746f74dc-r9m5v 2/2 Running 0 75s
ratings-v1-b6994bb9-qkm4c 2/2 Running 0 74s
reviews-v1-545db77b95-9jlcz 2/2 Running 0 75s
reviews-v2-7bf8c9648f-dnbvt 2/2 Running 0 75s
reviews-v3-84779c7bbc-p98kv 2/2 Running 0 75s
3) Confirm the Bookinfo application is running by sending a curl request to it from some pod, for example from ratings:
[root@k8s-master1 istio-1.10.1]# kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl productpage:9080/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title> # seeing this title means the app is running correctly
4) Determine the ingress IP and port
Now that the Bookinfo services are up and running, the application needs to be reachable from outside the Kubernetes cluster, for example from a browser; an Istio Gateway is used for this.
# 1. Define an ingress gateway for the application
[root@k8s-master1 istio-1.10.1]# cat samples/bookinfo/networking/bookinfo-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080
[root@k8s-master1 istio-1.10.1]# kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
[root@k8s-master1 istio-1.10.1]# kubectl get gateway
NAME AGE
bookinfo-gateway 16s
[root@k8s-master1 istio-1.10.1]# kubectl get virtualservice
NAME GATEWAYS HOSTS AGE
bookinfo ["bookinfo-gateway"] ["*"] 25s
# 2. Determine the ingress IP and port
[root@k8s-master1 istio-1.10.1]# kubectl get svc istio-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.108.58.69 <pending> 15021:32688/TCP,80:31860/TCP,443:30213/TCP,31400:31526/TCP,15443:32654/TCP 36m
# If the EXTERNAL-IP value is set, the environment has an external load balancer that can serve the ingress gateway. If EXTERNAL-IP is <none> (or stays <pending>), the environment provides no external load balancer for the ingress gateway; in that case you can reach the gateway through the service's NodePort.
# 3. Get the Istio Gateway address
[root@k8s-master1 istio-1.10.1]# export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
[root@k8s-master1 istio-1.10.1]# echo $INGRESS_PORT
31860
[root@k8s-master1 istio-1.10.1]# export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
[root@k8s-master1 istio-1.10.1]# echo $SECURE_INGRESS_PORT
30213
# Set GATEWAY_URL
[root@k8s-master1 istio-1.10.1]# INGRESS_HOST=192.168.40.180
[root@k8s-master1 istio-1.10.1]# export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
[root@k8s-master1 istio-1.10.1]# echo $GATEWAY_URL
192.168.40.180:31860
# 4. Use curl to confirm that the Bookinfo application is reachable from outside the cluster
[root@k8s-master1 istio-1.10.1]# curl -s http://${GATEWAY_URL}/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title>
# 5. Open http://$GATEWAY_URL/productpage in a browser, i.e. 192.168.40.180:31860/productpage, to view the application's web page. Refreshing the page a few times shows the productpage randomly serving different versions of the reviews service (red stars, black stars, or no stars).
For more on Istio's ingress gateway, see: https://istio.io/docs/examples/bookinfo/#determine-the-ingress-ip-and-port
5) Extension: add an external IP (EXTERNAL-IP)
[root@k8s-master1 istio-1.10.1]# kubectl edit svc istio-ingressgateway -n istio-system
spec:
  clusterIP: 10.108.58.69
  clusterIPs:
  - 10.108.58.69
  externalIPs:
  - 192.168.40.180
[root@k8s-master1 istio-1.10.1]# kubectl get svc istio-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.108.58.69 192.168.40.180 15021:32688/TCP,80:31860/TCP,443:30213/TCP,31400:31526/TCP,15443:32654/TCP 49m
# On a Windows machine, append the following entry to the end of C:\Windows\System32\drivers\etc\hosts:
192.168.40.180 productpage.kubeprom.com
Then open http://productpage.kubeprom.com/productpage in the browser.
6) Uninstall the Bookinfo application
# 1. Delete the routing rules and terminate the application pods
sh samples/bookinfo/platform/kube/cleanup.sh
# 2. Confirm the application has been shut down
kubectl get virtualservices #-- there should be no virtual services
kubectl get destinationrules #-- there should be no destination rules
kubectl get gateway #-- there should be no gateway
kubectl get pods #-- the Bookinfo pods should be deleted
3. Canary Releases with Istio
3.1 What Is a Canary Release?
A canary release (also called a gray release or canary deployment) replaces an old version with a new one gradually, by controlling the proportion of traffic each version receives. For example, suppose service A has two versions, version1 and version2, deployed side by side, with version1 taking 90% of the traffic and version2 taking 10%. If the new version behaves well, the split is shifted step by step: 80/20, 70/30, ..., 10/90, and finally 0/100, at which point version1 is taken offline.
Characteristics of canary releases:
1) Old and new versions coexist;
2) The traffic split can be adjusted dynamically, in real time, based on feedback;
3) In theory, the service never goes completely down;
4) Well suited to smooth upgrades and dynamic updates of a service.
3.2 Performing a Canary Release with Istio
1) Create the deployments
[root@k8s-master1 istio-canary]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: appv1
  labels:
    app: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: v1
      apply: canary
  template:
    metadata:
      labels:
        app: v1
        apply: canary
    spec:
      containers:
      - name: nginx
        image: xianchao/canary:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: appv2
  labels:
    app: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: v2
      apply: canary
  template:
    metadata:
      labels:
        app: v2
        apply: canary
    spec:
      containers:
      - name: nginx
        image: xianchao/canary:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
[root@k8s-master1 istio-canary]# kubectl apply -f deployment.yaml
deployment.apps/appv1 created
deployment.apps/appv2 created
[root@k8s-master1 istio-canary]# kubectl get pods
NAME READY STATUS RESTARTS AGE
appv1-6f7b58fd99-2w8xq 2/2 Running 0 9s
appv2-f78cb577-gpj2j 2/2 Running 0 9s
2) Create the service
[root@k8s-master1 istio-canary]# cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: canary
  labels:
    apply: canary
spec:
  selector:
    apply: canary
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
[root@k8s-master1 istio-canary]# kubectl apply -f service.yaml
service/canary created
[root@k8s-master1 istio-canary]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
canary ClusterIP 10.99.119.233 <none> 80/TCP 4s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d19h
[root@k8s-master1 istio-canary]# kubectl describe svc canary
Name: canary
Namespace: default
Labels: apply=canary
Annotations: <none>
Selector: apply=canary
Type: ClusterIP
IP Families: <none>
IP: 10.99.119.233
IPs: 10.99.119.233
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.36.85:80,10.244.36.86:80
Session Affinity: None
Events: <none>
3) Create the gateway
[root@k8s-master1 istio-canary]# cat gateway.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: canary-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
[root@k8s-master1 istio-canary]# kubectl apply -f gateway.yaml
gateway.networking.istio.io/canary-gateway created
[root@k8s-master1 istio-canary]# kubectl get gateway
NAME AGE
canary-gateway 8s
4) Create the VirtualService
[root@k8s-master1 istio-canary]# cat virtual.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: canary
spec:
  hosts:
  - "*"
  gateways:
  - canary-gateway
  http:
  - route:
    - destination:
        host: canary.default.svc.cluster.local
        subset: v1
      weight: 90
    - destination:
        host: canary.default.svc.cluster.local
        subset: v2
      weight: 10
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: canary
spec:
  host: canary.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      app: v1
  - name: v2
    labels:
      app: v2
[root@k8s-master1 istio-canary]# kubectl apply -f virtual.yaml
virtualservice.networking.istio.io/canary created
destinationrule.networking.istio.io/canary created
[root@k8s-master1 istio-canary]# kubectl get virtualservices
NAME GATEWAYS HOSTS AGE
canary ["canary-gateway"] ["*"] 8s
5) Verify the canary release
# Get the ingress port
[root@k8s-master1 istio-canary]# kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
31860
# Verify the canary split: out of 100 requests, roughly 90 responses show v1 and 10 show canary-v2, matching the designed traffic weights.
[root@k8s-master1 istio-canary]# for i in `seq 1 100`; do curl 192.168.40.180:31860;done > 1.txt
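Since the responses were saved to 1.txt, the split can be counted per version; a sketch that assumes each version's response body contains its version string (v1 or canary-v2):
[root@k8s-master1 istio-canary]# grep -c v1 1.txt
[root@k8s-master1 istio-canary]# grep -c v2 1.txt
To continue the rollout, edit the weight values in virtual.yaml (for example 50/50, then 0/100) and re-apply it with kubectl apply -f virtual.yaml; the new split takes effect immediately, without restarting any pods.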
4. Core Istio Resources
Official docs: https://istio.io/latest/docs/concepts/traffic-management/
4.1 Gateway
In a Kubernetes environment, an Ingress controller manages the traffic entering the cluster. In an Istio service mesh, the Istio Ingress Gateway plays that role, using a new configuration model (Gateway and VirtualService) to manage traffic. The overall flow is as follows:
1. A user sends a request to some port;
2. A load balancer listening on that port forwards the request to a node in the cluster, where the Istio Ingress Gateway Service listens for it;
3. The Istio Ingress Gateway Service hands the request to an Istio Ingress Gateway Pod, which processes it according to the configured Gateway and VirtualService rules: the Gateway configures ports, protocols, and certificates, while the VirtualService configures the routing information (finding the app Service that should handle the request);
4. The Istio Ingress Gateway Pod forwards the request to the app Service;
5. The request is finally handled by the app Deployment behind that Service.
# cat gateway.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: canary-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*" # "*" is a wildcard: the gateway can be reached through any domain name
A gateway is a load balancer running at the edge of the mesh that receives incoming or outgoing HTTP/TCP connections. Its main job is to accept external requests and forward them to services inside the mesh. Ingress traffic at the mesh edge enters the cluster through the corresponding Istio IngressGateway controller.
The YAML above configures an ingress gateway listening on port 80; it directs HTTP traffic arriving on port 80 to the matching VirtualService inside the cluster.
4.2 VirtualService
VirtualService is a core piece of Istio traffic management, arguably the most important and most complex one. In form, a VirtualService represents a virtual service that forwards all matching traffic to a backend; the backend can be a service, or a subset of a service defined in a DestinationRule.
# cat virtual.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: canary
spec:
  hosts:
  - "*"
  gateways:
  - canary-gateway
  http:
  - route:
    - destination:
        host: canary.default.svc.cluster.local
        subset: v1
      weight: 90
    - destination:
        host: canary.default.svc.cluster.local
        subset: v2
      weight: 10
# This virtual service receives all HTTP traffic arriving on port 80 of the gateway above
A VirtualService consists mainly of the following parts:
4.2.1 hosts
The virtual host name; within a Kubernetes cluster this can be a Service name. The hosts field lists the virtual service's hosts, i.e. the one or more addresses a client uses when sending requests to the service; requests addressed to them reach the virtual service and, through it, the backend services. Inside the cluster (inside the mesh) this usually matches the name of the Kubernetes Service; for access from outside the cluster (outside the mesh) it is the address requested through the gateway, i.e. it matches the gateway's hosts field.
hosts:
- reviews
A virtual service's host can be an IP address, a DNS name, or a short name (such as a Kubernetes Service short name) that is implicitly or explicitly resolved to a fully qualified domain name (FQDN), depending on the platform Istio runs on. A prefix wildcard ("*") can be used to create a single set of routing rules for all matching services. The hosts of a virtual service need not be part of the Istio service registry; they are merely virtual destinations, allowing users to model traffic for virtual hosts that the mesh otherwise has no route to.
When a virtual service's short host name is expanded to a full domain name, the namespace that gets appended is the one the VirtualService lives in, not the Service's namespace. The hosts in the example above therefore resolve to: reviews.default.svc.cluster.local.
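To avoid this namespace ambiguity, the host can be written fully qualified; a minimal sketch:
hosts:
- reviews.default.svc.cluster.local # fully qualified, independent of which namespace the VirtualService lives in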
Configuring routing rules in a VirtualService
A routing rule works like this: all traffic satisfying the http.match conditions is routed to http.route.destination, and the rule can additionally apply redirect (HTTPRedirect), rewrite (HTTPRewrite), retry (HTTPRetry), fault injection (HTTPFaultInjection), CORS (CorsPolicy), and other policies. An HTTPRoute can therefore not only match and route traffic, but also perform write operations that modify the request itself.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v3
The http field contains the virtual service's routing rules, describing match conditions and routing behavior; the rules send HTTP/1.1, HTTP/2, gRPC, and similar traffic to the destinations specified in the hosts field.
The first routing rule in the example has a condition, beginning with the match field: it takes all requests from the user "jason" and sends them to the v2 subset specified in the destination.
Routing rule precedence:
In the example above, traffic that does not satisfy the first routing rule flows to a default target, specified by the second rule. The second rule has no match condition and simply directs traffic to the v3 subset.
Multiple routing rules:
For detailed configuration options, see: https://istio.io/latest/zh/docs/reference/config/networking/virtual-service/#HTTPMatchRequest
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - bookinfo.com
  http:
  - match:
    - uri:
        prefix: /reviews
    route:
    - destination:
        host: reviews
  - match:
    - uri:
        prefix: /ratings
    route:
    - destination:
        host: ratings
Routing rules are the tool for steering a particular subset of traffic to a particular destination. Match conditions can be set on traffic ports, header fields, URIs, and more. For example, the virtual service above lets users send requests to two separate services, ratings and reviews, as if they were at http://bookinfo.com/ratings and http://bookinfo.com/reviews; the rules route requests to the appropriate destination based on the request URI.
4.2.2 gateways
The gateways the traffic comes from: the rule is applied only to requests that enter through the listed gateways. When the field is omitted, the rule applies to the sidecar proxies inside the mesh.
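A sketch of the two forms; the reserved word mesh stands for all the sidecars in the mesh, so listing it next to a gateway makes the same rules apply to both external and in-mesh traffic:
gateways:
- canary-gateway # traffic entering through this ingress gateway
- mesh # reserved word: traffic from the sidecars inside the mesh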
4.2.3 route
The destination field of a route specifies the actual target for traffic that matched the conditions. Unlike the virtual service's hosts, this host must be a real destination that exists in Istio's service registry (Kubernetes services, Consul services, and so on) or a host declared by a ServiceEntry; otherwise Envoy does not know where to send the traffic. It can be a proxied mesh service or a non-mesh service added with a service entry. When Kubernetes is the platform, host is the name of a Kubernetes Service:
- destination:
    host: canary.default.svc.cluster.local
    subset: v1
  weight: 90
4.3 DestinationRule
The destination rule is an important part of Istio traffic routing. A virtual service can be seen as defining how traffic is dispatched to a given destination; destination rules are then used to configure what happens to the traffic sent to that destination. Destination rules take effect after the virtual service's routing rules (that is, after the virtual service's match -> route -> destination, once traffic has been dispatched to the real service), and they apply to the real destination.
Destination rules can define named service subsets, for example grouping a service's instances by version; the virtual service's routing rules can then reference these subsets to steer traffic to the instances of specific versions.
# cat DestinationRule.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: canary
spec:
  host: canary.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      app: v1
  - name: v2
    labels:
      app: v2
In the virtual service, hosts configures the default address the routes bind to, and the http.route field sets the destination for incoming HTTP traffic; as shown above, traffic is steered to the destination rule's v1 and v2 subsets.
The v1 subset corresponds to pods carrying the following labels:
selector:
  matchLabels:
    app: v1
Traffic control flow: Gateway -> VirtualService -> TCP/HTTP Router -> DestinationWeight -> Subset:Port
5. Demonstrating Core Istio Features
5.1 Circuit Breaking
Circuit breaking is an important pattern for building resilient microservice applications. It lets an application tolerate adverse network conditions such as failures and latency spikes.
Official docs: https://istio.io/latest/zh/docs/tasks/traffic-management/circuit-breaking/
1) Create the backend service in the Kubernetes cluster
[root@k8s-master1 istio-1.10.1]# cat samples/httpbin/httpbin.yaml
# Copyright Istio Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##################################################################################################
# httpbin service
##################################################################################################
apiVersion: v1
kind: ServiceAccount
metadata:
  name: httpbin
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
    service: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      serviceAccountName: httpbin
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 80
[root@k8s-master1 istio-1.10.1]# kubectl apply -f samples/httpbin/httpbin.yaml
[root@k8s-master1 istio-1.10.1]# kubectl get pods | grep httpbin
httpbin-74fb669cc6-f2zm7 2/2 Running 0 36s
2) Configure the circuit breaker
# Create a destination rule that applies circuit-breaker settings when calling the httpbin service
[root@k8s-master1 istio-1.10.1]# cat destination.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveGatewayErrors: 1
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100
[root@k8s-master1 istio-1.10.1]# kubectl apply -f destination.yaml
destinationrule.networking.istio.io/httpbin created
Parameter reference:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool: # connection pool (TCP | HTTP) settings, such as connection counts and concurrent requests
      tcp:
        maxConnections: 1 # maximum number of connections in the TCP pool; beyond this, requests get a 503. If two requests arrive concurrently, one of them is answered with 503.
      http:
        http1MaxPendingRequests: 1 # maximum number of pending (queued) requests to the target host, i.e. the destination configured in the virtualservice routing rule
        maxRequestsPerConnection: 1 # each pooled connection serves at most 1 request before it is closed; connections are recreated on demand
    outlierDetection: # outlier detection, the traditional circuit-breaker settings: monitoring the number of service errors within a time window
      consecutiveGatewayErrors: 1 # number of consecutive gateway errors (HTTP 502-504 responses) before ejection
      interval: 1s # error scan interval of 1s: if consecutiveGatewayErrors (1) errors occur within interval (1s), the breaker trips
      baseEjectionTime: 3m # base ejection time of 3 minutes; the actual ejection time is baseEjectionTime * the number of ejections
      maxEjectionPercent: 100 # at most 100% of the upstream hosts may be ejected
3) Add a client that calls the httpbin service
Create a client to send traffic to the httpbin service. The client is Fortio, a simple load-testing tool that can control the number of connections, the concurrency, and the delay of outgoing HTTP calls. You will use this client to "trip" the circuit-breaker policies set in the DestinationRule.
# Deploy the fortio client by running the commands below
[root@k8s-master1 istio-1.10.1]# cat samples/httpbin/sample-client/fortio-deploy.yaml
apiVersion: v1
kind: Service
metadata:
  name: fortio
  labels:
    app: fortio
    service: fortio
spec:
  ports:
  - port: 8080
    name: http
  selector:
    app: fortio
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fortio-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fortio
  template:
    metadata:
      annotations:
        # This annotation causes Envoy to serve cluster.outbound statistics via 15000/stats
        # in addition to the stats normally served by Istio. The Circuit Breaking example task
        # gives an example of inspecting Envoy stats.
        sidecar.istio.io/statsInclusionPrefixes: cluster.outbound,cluster_manager,listener_manager,http_mixer_filter,tcp_mixer_filter,server,cluster.xds-grpc
      labels:
        app: fortio
    spec:
      containers:
      - name: fortio
        image: fortio/fortio:latest_release
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          name: http-fortio
        - containerPort: 8079
          name: grpc-ping
[root@k8s-master1 istio-1.10.1]# kubectl apply -f samples/httpbin/sample-client/fortio-deploy.yaml
[root@k8s-master1 istio-1.10.1]# kubectl get pods|grep for
fortio-deploy-576dbdfbc4-59rrp 2/2 Running 0 76s
[root@k8s-master1 istio-1.10.1]# kubectl exec fortio-deploy-576dbdfbc4-59rrp -c fortio -- /usr/bin/fortio curl http://httpbin:8000/get
HTTP/1.1 200 OK
server: envoy
date: Wed, 14 Jul 2021 09:14:42 GMT
content-type: application/json
content-length: 594
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 63
{
  "args": {},
  "headers": {
    "Host": "httpbin:8000",
    "User-Agent": "fortio.org/fortio-1.11.3",
    "X-B3-Parentspanid": "e43a2d2c7ab3c784",
    "X-B3-Sampled": "1",
    "X-B3-Spanid": "a319763c32233e40",
    "X-B3-Traceid": "bc2ea89e4ad88616e43a2d2c7ab3c784",
    "X-Envoy-Attempt-Count": "1",
    "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/default/sa/httpbin;Hash=4ae6300ac74341739507fb693662ee381aac6ccb4b8e37fcffec81fd3a9f3dde;Subject=\"\";URI=spiffe://cluster.local/ns/default/sa/default"
  },
  "origin": "127.0.0.6",
  "url": "http://httpbin:8000/get"
}
4) Trip the circuit breaker
The DestinationRule specifies maxConnections: 1 and http1MaxPendingRequests: 1. With these rules in place, once more than one connection and concurrent request is in flight, istio-proxy opens the circuit for further requests and connections, and you should see the behavior below.
# Call the service with two concurrent connections (-c 2) and 20 requests (-n 20)
[root@k8s-master1 istio-1.10.1]# kubectl exec -it fortio-deploy-576dbdfbc4-59rrp -c fortio -- /usr/bin/fortio load -c 2 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
09:16:35 I logger.go:127> Log level is now 3 Warning (was 2 Info)
Fortio 1.11.3 running at 0 queries per second, 4->4 procs, for 20 calls: http://httpbin:8000/get
Starting at max qps with 2 thread(s) [gomax 4] for exactly 20 calls (10 per thread + 0)
09:16:35 W http_client.go:693> Parsed non ok code 503 (HTTP/1.1 503)
09:16:35 W http_client.go:693> Parsed non ok code 503 (HTTP/1.1 503)
09:16:35 W http_client.go:693> Parsed non ok code 503 (HTTP/1.1 503)
09:16:35 W http_client.go:693> Parsed non ok code 503 (HTTP/1.1 503)
Ended after 133.919851ms : 20 calls. qps=149.34
Aggregated Function Time : count 20 avg 0.013183304 +/- 0.01239 min 0.000456046 max 0.04724804 sum 0.263666083
# range, mid point, percentile, count
>= 0.000456046 <= 0.001 , 0.000728023 , 10.00, 2
> 0.005 <= 0.006 , 0.0055 , 20.00, 2
> 0.006 <= 0.007 , 0.0065 , 45.00, 5
> 0.007 <= 0.008 , 0.0075 , 55.00, 2
> 0.009 <= 0.01 , 0.0095 , 65.00, 2
> 0.01 <= 0.011 , 0.0105 , 70.00, 1
> 0.012 <= 0.014 , 0.013 , 75.00, 1
> 0.02 <= 0.025 , 0.0225 , 80.00, 1
> 0.025 <= 0.03 , 0.0275 , 90.00, 2
> 0.04 <= 0.045 , 0.0425 , 95.00, 1
> 0.045 <= 0.047248 , 0.046124 , 100.00, 1
# target 50% 0.0075
# target 75% 0.014
# target 90% 0.03
# target 99% 0.0467984
# target 99.9% 0.0472031
Sockets used: 6 (for perfect keepalive, would be 2)
Jitter: false
Code 200 : 16 (80.0 %)
Code 503 : 4 (20.0 %) # these calls were cut off by the circuit breaker
Response Header Sizes : count 20 avg 184.15 +/- 92.08 min 0 max 231 sum 3683
Response Body/Total Sizes : count 20 avg 707.55 +/- 233.3 min 241 max 825 sum 14151
All done 20 calls (plus 0 warmup) 13.183 ms avg, 149.3 qps
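Per the Istio circuit-breaking task, you can also query the Envoy statistics exposed by the istio-proxy sidecar to see how many calls were flagged for circuit breaking (look for the upstream_rq_pending_overflow counter); a sketch, reusing the fortio pod name from above:
[root@k8s-master1 istio-1.10.1]# kubectl exec fortio-deploy-576dbdfbc4-59rrp -c istio-proxy -- pilot-agent request GET stats | grep httpbin | grep pending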
5.2 Timeouts
In production, a caller that waits too long for a downstream response can accumulate requests that block the caller itself, cascading into an avalanche. Timeout handling avoids the failures caused by waiting indefinitely and so improves service availability; Istio implements timeouts elegantly through virtual services.
The following example simulates a client calling nginx, with nginx forwarding the request to tomcat. The nginx service is given a 2-second timeout: if it is exceeded, the request stops waiting and a timeout error is returned. The tomcat service is given an injected 10-second response delay: every request waits 10 seconds before it returns. The client reaches tomcat through the nginx reverse proxy; since tomcat responds only after 10 seconds while the nginx route waits only 2, the client receives a timeout error.
1) Create the deployments
[root@k8s-master1 timeout]# cat nginx-tomcat-deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-tomcat
  labels:
    server: nginx
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      server: nginx
      app: web
  template:
    metadata:
      name: nginx
      labels:
        server: nginx
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
  labels:
    server: tomcat
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      server: tomcat
      app: web
  template:
    metadata:
      name: tomcat
      labels:
        server: tomcat
        app: web
    spec:
      containers:
      - name: tomcat
        image: docker.io/kubeguide/tomcat-app:v1
        imagePullPolicy: IfNotPresent
[root@k8s-master1 timeout]# kubectl apply -f nginx-tomcat-deployment.yaml
[root@k8s-master1 timeout]# kubectl get pods
NAME READY STATUS RESTARTS AGE
appv1-6f7b58fd99-2w8xq 2/2 Running 2 4h19m
appv2-f78cb577-gpj2j 2/2 Running 2 4h19m
fortio-deploy-576dbdfbc4-59rrp 2/2 Running 2 3h8m
httpbin-74fb669cc6-f2zm7 2/2 Running 2 3h16m
nginx-tomcat-7dd6f74846-jrf5l 2/2 Running 0 7s
tomcat-86ddb8f5c9-s2g9s 2/2 Running 0 7s
2) Create the services
[root@k8s-master1 timeout]# cat nginx-tomcat-svc.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    server: nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-svc
spec:
  selector:
    server: tomcat
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    protocol: TCP
[root@k8s-master1 timeout]# kubectl apply -f nginx-tomcat-svc.yaml
[root@k8s-master1 timeout]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
canary ClusterIP 10.99.119.233 <none> 80/TCP 4h19m
fortio ClusterIP 10.111.48.142 <none> 8080/TCP 3h8m
httpbin ClusterIP 10.97.205.101 <none> 8000/TCP 3h16m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d23h
nginx-svc ClusterIP 10.102.250.203 <none> 80/TCP 43s
tomcat-svc ClusterIP 10.99.36.197 <none> 8080/TCP 43s
3) Create the VirtualServices
[root@k8s-master1 timeout]# cat virtual-tomcat-nginx.yaml
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-vs
spec:
  hosts:
  - nginx-svc
  http:
  - route:
    - destination:
        host: nginx-svc
    timeout: 2s # requests routed to the nginx-svc k8s service time out after 2 seconds
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: tomcat-vs
spec:
  hosts:
  - tomcat-svc
  http:
  - fault:
      delay: # fault injection: every call to the tomcat-svc k8s service is delayed 10s before being forwarded
        percentage:
          value: 100
        fixedDelay: 10s
    route:
    - destination:
        host: tomcat-svc
[root@k8s-master1 timeout]# kubectl apply -f virtual-tomcat-nginx.yaml
virtualservice.networking.istio.io/nginx-vs created
virtualservice.networking.istio.io/tomcat-vs created
[root@k8s-master1 timeout]# kubectl get virtualservices
NAME GATEWAYS HOSTS AGE
canary ["canary-gateway"] ["*"] 4h6m
nginx-vs ["nginx-svc"] 7s
tomcat-vs ["tomcat-svc"] 7s
4) Configure nginx to proxy requests to tomcat
[root@k8s-master1 timeout]# kubectl exec -it nginx-tomcat-7dd6f74846-jrf5l -- /bin/sh
# apt-get update
# apt-get install vim -y
# vim /etc/nginx/conf.d/default.conf
# nginx -t
# nginx -s reload
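The tutorial does not show the edited file; a minimal sketch of what /etc/nginx/conf.d/default.conf is assumed to look like, so that nginx forwards every request to the tomcat service created in step 2 (the proxy_pass target tomcat-svc:8080 matches that Service):
server {
    listen 80;
    server_name localhost;
    location / {
        proxy_pass http://tomcat-svc:8080; # forward all requests to the tomcat service
        proxy_http_version 1.1;
    }
}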
5) Verify the timeout from a client
[root@k8s-master1 timeout]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # time wget -q -O - http://nginx-svc
wget: server returned error: HTTP/1.1 504 Gateway Timeout
Command exited with non-zero status 1
real 0m 2.01s
user 0m 0.00s
sys 0m 0.00s
# Each request fails after about 2 seconds: the nginx route's timeout fires before tomcat (delayed by 10s) can respond, so a 504 timeout error is returned
/ # while true; do wget -q -O - http://nginx-svc; done
wget: server returned error: HTTP/1.1 504 Gateway Timeout
wget: server returned error: HTTP/1.1 504 Gateway Timeout
wget: server returned error: HTTP/1.1 504 Gateway Timeout
wget: server returned error: HTTP/1.1 504 Gateway Timeout
# Verify the fault-injection effect: the following request only returns a result after 10 seconds
/ # time wget -q -O - http://tomcat-svc
wget: server returned error: HTTP/1.1 503 Service Unavailable
Command exited with non-zero status 1
real 0m 10.03s
user 0m 0.00s
sys 0m 0.00s
5.3 Fault Injection and Retries
Retries in Istio set the maximum number of times the Envoy proxy attempts to reach a service when a call fails; configuring them explicitly in the virtual service gives you control over the retry count and the per-try timeout, instead of relying on the mesh's defaults.
The following example simulates a client calling nginx, with nginx forwarding requests to tomcat. Fault injection makes tomcat abort all requests, and the route to nginx is configured to retry up to 3 times on failure.
1) Create the pods
# Delete the previous resources and recreate them
[root@k8s-master1 timeout]# kubectl delete -f .
[root@k8s-master1 timeout]# kubectl apply -f nginx-tomcat-deployment.yaml
[root@k8s-master1 timeout]# kubectl apply -f nginx-tomcat-svc.yaml
[root@k8s-master1 timeout]# kubectl get pods
NAME READY STATUS RESTARTS AGE
busybox 1/2 Error 0 13m
nginx-tomcat-7dd6f74846-86x87 2/2 Running 0 43s
tomcat-86ddb8f5c9-m4wbq 2/2 Running 0 42s
2) Create the VirtualServices
[root@k8s-master1 timeout]# cat virtual-attempt.yaml
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-vs
spec:
  hosts:
  - nginx-svc
  http:
  - route:
    - destination:
        host: nginx-svc
    retries:
      attempts: 3 # calls to the nginx-svc k8s service are retried up to 3 times after the initial failure, each attempt with a 2-second timeout
      perTryTimeout: 2s
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: tomcat-vs
spec:
  hosts:
  - tomcat-svc
  http:
  - fault:
      abort:
        percentage:
          value: 100
        httpStatus: 503 # 100% of calls to the tomcat-svc service return error status code 503
    route:
    - destination:
        host: tomcat-svc
[root@k8s-master1 timeout]# kubectl apply -f virtual-attempt.yaml
3) Verify the retries
[root@k8s-master1 timeout]# kubectl exec -it nginx-tomcat-7dd6f74846-86x87 -- /bin/sh
# apt-get update && apt-get install vim -y
# vim /etc/nginx/conf.d/default.conf
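# (set proxy_pass http://tomcat-svc:8080 in default.conf, the same change sketched in section 5.2)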
# nginx -t
# nginx -s reload
# exit
# Verify that the retries take effect
[root@k8s-master1 timeout]# kubectl delete pods busybox
[root@k8s-master1 timeout]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
/ # wget -q -O - http://nginx-svc
wget: server returned error: HTTP/1.1 503 Service Unavailable
# In a new terminal, watch the istio-proxy logs
[root@k8s-master1 ~]# kubectl logs -f nginx-tomcat-7dd6f74846-86x87 -c istio-proxy
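Assuming the demo profile's default Envoy access logging (each log line records the upstream cluster, e.g. outbound|8080||tomcat-svc.default.svc.cluster.local), the single client request should appear as several consecutive entries for tomcat-svc, the initial attempt plus the retries, each answered with the injected 503. A sketch for filtering them from the captured log:
[root@k8s-master1 ~]# kubectl logs nginx-tomcat-7dd6f74846-86x87 -c istio-proxy | grep tomcat-svc | grep 503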