I. What is Ingress
The Ingress object is an abstraction over the idea of a "reverse proxy": in effect, a global load balancer that routes a request to a backend Service based on the URL it was sent to.
With this abstraction in place, Kubernetes itself does not need to care about the proxying details; in practice you simply pick a concrete Ingress Controller and deploy it. The common reverse-proxy projects in the industry, Nginx, HAProxy, Envoy, and Traefik, all have Ingress Controllers maintained for Kubernetes.
The body of an Ingress object reads much like an Nginx configuration file; the forwarding rules it carries are its IngressRule entries.
Because Ingress is just an object, users can choose whichever Ingress Controller fits their needs. For example, if an application is especially sensitive to interruptions of the proxy service, a controller such as Traefik is a reasonable choice.
Ingress works at layer 7, while Service works at layer 4; HTTPS-related operations, such as configuring TLS for an application in Kubernetes, must therefore go through Ingress.
A Service of type LoadBalancer also creates a load balancer on a public cloud, but one per Service, which is wasteful; Ingress provides a single shared entry point instead.
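To make the abstraction concrete, here is a minimal sketch of what an Ingress object looks like; the host name and Service name are placeholders, and a complete working example follows in section VI:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.com            # virtual host to match
      http:
        paths:
          - path: /app             # URL path to match
            backend:
              serviceName: app-svc # Service that receives matching traffic
              servicePort: 80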
Below, we deploy an Nginx Ingress Controller and use it to proxy cafe services (coffee and tea) that we create ourselves.
Nginx Ingress Controller documentation: https://kubernetes.github.io/ingress-nginx
Related example YAML files on GitHub: https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example
II. Deploying the Nginx Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
Its content is as follows:
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
---
1. The YAML above defines a Pod that runs the nginx-ingress-controller image. The Pod's startup command takes the Pod's own Namespace as a parameter, and the Pod itself is a controller that watches Ingress objects and the backend Services they proxy.
2. When a user creates a new Ingress object, nginx-ingress-controller generates a corresponding Nginx configuration file from the Ingress's content and runs an Nginx service with that configuration.
If the Ingress object is updated, nginx-ingress-controller updates this configuration file accordingly.
Note: if only a proxied Service object is updated, the Nginx managed by nginx-ingress-controller does not need to be reloaded, because nginx-ingress-controller implements dynamic Nginx Upstream configuration via an Nginx Lua mechanism.
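You can see the generated configuration for yourself by dumping it from the controller Pod; the Pod name below is illustrative, so list the Pods first:

kubectl get pods -n ingress-nginx
kubectl exec -n ingress-nginx nginx-ingress-controller-xxxxx -- cat /etc/nginx/nginx.conf

If only Endpoints change, this file stays the same while the Lua-managed upstreams are updated in place.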
3. nginx-ingress-controller also lets you customize the generated Nginx configuration through a Kubernetes ConfigMap, whose name is passed to nginx-ingress-controller as a command-line argument (--configmap above); fields added to this ConfigMap are rendered into the Nginx configuration file.
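For example, a minimal sketch that raises the maximum allowed request body size through the nginx-configuration ConfigMap declared earlier (proxy-body-size is one of the controller's documented ConfigMap keys):

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  proxy-body-size: "8m"   # rendered into the Nginx config as client_max_body_size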
In short, an Nginx Ingress Controller is an Nginx load balancer that updates itself automatically in response to changes in Ingress objects and the backend Services they proxy.
III. Deploying a Service to Expose Nginx
To let users reach this Nginx, we create a Service that exposes it via NodePort:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml
It looks like this:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
This Service's job is to expose ports 80 and 443 of the Pods carrying the ingress-nginx labels.
Let's take a look at it:
~ kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx NodePort 10.43.33.55 <none> 80:30164/TCP,443:32229/TCP 107m
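Rather than reading the ports off the table, you can also extract the assigned NodePorts with jsonpath; this is just a convenience and not required for the rest of the walkthrough:

kubectl get svc ingress-nginx -n ingress-nginx -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}'
kubectl get svc ingress-nginx -n ingress-nginx -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}'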
Port 30164 on any Node now reaches this Service; here we use the Node 172.17.0.215. Since no Ingress rules exist yet, the controller's default backend answers with a 404:
curl 172.17.0.215:30164
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>openresty/1.15.8.2</center>
</body>
</html>
IV. Deploying Our Own Cafe Service
This creates the Deployments and Services; cafe.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coffee
spec:
  replicas: 2
  selector:
    matchLabels:
      app: coffee
  template:
    metadata:
      labels:
        app: coffee
    spec:
      containers:
        - name: coffee
          image: nginxdemos/hello:plain-text
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: coffee
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tea
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tea
  template:
    metadata:
      labels:
        app: tea
    spec:
      containers:
        - name: tea
          image: nginxdemos/hello:plain-text
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: tea-svc
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: tea
Apply it:
➜ ingress-test kubectl apply -f cafe.yaml
deployment.extensions/coffee created
service/coffee-svc created
deployment.extensions/tea created
service/tea-svc created
➜ ingress-test kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
coffee 2/2 2 2 66s
tea 3/3 3 3 66s
➜ ingress-test kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
coffee-svc ClusterIP 10.43.216.45 <none> 80/TCP 70s
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 69d
tea-svc ClusterIP 10.43.208.95 <none> 80/TCP 69s
You can see that the corresponding tea-svc and coffee-svc have been created.
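To confirm that each Service has matched its Pods, you can also list the Endpoints; the IPs shown will differ per cluster:

kubectl get endpoints coffee-svc tea-svc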
V. Deploying the Certificate the Ingress Depends On
cafe-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: cafe-secret
type: Opaque
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURMakNDQWhZQ0NRREFPRjl0THNhWFdqQU5CZ2txaGtpRzl3MEJBUXNGQURCYU1Rc3dDUVlEVlFRR0V3SlYKVXpFTE1Ba0dBMVVFQ0F3Q1EwRXhJVEFmQmdOVkJBb01HRWx1ZEdWeWJtVjBJRmRwWkdkcGRITWdVSFI1SUV4MApaREViTUJrR0ExVUVBd3dTWTJGbVpTNWxlR0Z0Y0d4bExtTnZiU0FnTUI0WERURTRNRGt4TWpFMk1UVXpOVm9YCkRUSXpNRGt4TVRFMk1UVXpOVm93V0RFTE1Ba0dBMVVFQmhNQ1ZWTXhDekFKQmdOVkJBZ01Ba05CTVNFd0h3WUQKVlFRS0RCaEpiblJsY201bGRDQlhhV1JuYVhSeklGQjBlU0JNZEdReEdUQVhCZ05WQkFNTUVHTmhabVV1WlhoaApiWEJzWlM1amIyMHdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFDcDZLbjdzeTgxCnAwanVKL2N5ayt2Q0FtbHNmanRGTTJtdVpOSzBLdGVjcUcyZmpXUWI1NXhRMVlGQTJYT1N3SEFZdlNkd0kyaloKcnVXOHFYWENMMnJiNENaQ0Z4d3BWRUNyY3hkam0zdGVWaVJYVnNZSW1tSkhQUFN5UWdwaW9iczl4N0RsTGM2SQpCQTBaalVPeWwwUHFHOVNKZXhNVjczV0lJYTVyRFZTRjJyNGtTa2JBajREY2o3TFhlRmxWWEgySTVYd1hDcHRDCm42N0pDZzQyZitrOHdnemNSVnA4WFprWldaVmp3cTlSVUtEWG1GQjJZeU4xWEVXZFowZXdSdUtZVUpsc202OTIKc2tPcktRajB2a29QbjQxRUUvK1RhVkVwcUxUUm9VWTNyemc3RGtkemZkQml6Rk8yZHNQTkZ4MkNXMGpYa05MdgpLbzI1Q1pyT2hYQUhBZ01CQUFFd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFLSEZDY3lPalp2b0hzd1VCTWRMClJkSEliMzgzcFdGeW5acS9MdVVvdnNWQTU4QjBDZzdCRWZ5NXZXVlZycTVSSWt2NGxaODFOMjl4MjFkMUpINnIKalNuUXgrRFhDTy9USkVWNWxTQ1VwSUd6RVVZYVVQZ1J5anNNL05VZENKOHVIVmhaSitTNkZBK0NuT0Q5cm4yaQpaQmVQQ0k1ckh3RVh3bm5sOHl3aWozdnZRNXpISXV5QmdsV3IvUXl1aTlmalBwd1dVdlVtNG52NVNNRzl6Q1Y3ClBwdXd2dWF0cWpPMTIwOEJqZkUvY1pISWc4SHc5bXZXOXg5QytJUU1JTURFN2IvZzZPY0s3TEdUTHdsRnh2QTgKN1dqRWVxdW5heUlwaE1oS1JYVmYxTjM0OWVOOThFejM4Zk9USFRQYmRKakZBL1BjQytHeW1lK2lHdDVPUWRGaAp5UkU9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBcWVpcCs3TXZOYWRJN2lmM01wUHJ3Z0pwYkg0N1JUTnBybVRTdENyWG5LaHRuNDFrCkcrZWNVTldCUU5semtzQndHTDBuY0NObzJhN2x2S2wxd2k5cTIrQW1RaGNjS1ZSQXEzTVhZNXQ3WGxZa1YxYkcKQ0pwaVJ6ejBza0lLWXFHN1BjZXc1UzNPaUFRTkdZMURzcGRENmh2VWlYc1RGZTkxaUNHdWF3MVVoZHErSkVwRwp3SStBM0kreTEzaFpWVng5aU9WOEZ3cWJRcCt1eVFvT05uL3BQTUlNM0VWYWZGMlpHVm1WWThLdlVWQ2cxNWhRCmRtTWpkVnhGbldkSHNFYmltRkNaYkp1dmRySkRxeWtJOUw1S0Q1K05SQlAvazJsUkthaTAwYUZHTjY4NE93NUgKYzMzUVlzeFR0bmJEelJjZGdsdEkxNURTN3lxTnVRbWF6b1Z3QndJREFRQUJBb0lCQVFDUFNkU1luUXRTUHlxbApGZlZGcFRPc29PWVJoZjhzSStpYkZ4SU91UmF1V2VoaEp4ZG01Uk9ScEF6bUNMeUw1VmhqdEptZTIyM2dMcncyCk45OUVqVUtiL1ZPbVp1RHNCYzZvQ0Y2UU5SNThkejhjbk9SVGV3Y290c0pSMXBuMWhobG5SNUhxSkpCSmFzazEKWkVuVVFmY1hackw5NGxvOUpIM0UrVXFqbzFGRnM4eHhFOHdvUEJxalpzVjdwUlVaZ0MzTGh4bndMU0V4eUZvNApjeGI5U09HNU9tQUpvelN0Rm9RMkdKT2VzOHJKNXFmZHZ5dGdnOXhiTGFRTC94MGtwUTYyQm9GTUJEZHFPZVBXCktmUDV6WjYvMDcvdnBqNDh5QTFRMzJQem9idWJzQkxkM0tjbjMyamZtMUU3cHJ0V2wrSmVPRmlPem5CUUZKYk4KNHFQVlJ6NWhBb0dCQU50V3l4aE5DU0x1NFArWGdLeWNrbGpKNkY1NjY4Zk5qNUN6Z0ZScUowOXpuMFRsc05ybwpGVExaY3hEcW5SM0hQWU00MkpFUmgySi9xREZaeW5SUW8zY2czb2VpdlVkQlZHWTgrRkkxVzBxZHViL0w5K3l1CmVkT1pUUTVYbUdHcDZyNmpleHltY0ppbS9Pc0IzWm5ZT3BPcmxEN1NQbUJ2ek5MazRNRjZneGJYQW9HQkFNWk8KMHA2SGJCbWNQMHRqRlhmY0tFNzdJbUxtMHNBRzR1SG9VeDBlUGovMnFyblRuT0JCTkU0TXZnRHVUSnp5K2NhVQprOFJxbWRIQ2JIelRlNmZ6WXEvOWl0OHNaNzdLVk4xcWtiSWN1YytSVHhBOW5OaDFUanNSbmU3NFowajFGQ0xrCmhIY3FIMHJpN1BZU0tIVEU4RnZGQ3haWWRidUI4NENtWmlodnhicFJBb0dBSWJqcWFNWVBUWXVrbENkYTVTNzkKWVNGSjFKelplMUtqYS8vdER3MXpGY2dWQ0thMzFqQXdjaXowZi9sU1JxM0hTMUdHR21lemhQVlRpcUxmZVpxYwpSMGlLYmhnYk9jVlZrSkozSzB5QXlLd1BUdW14S0haNnpJbVpTMGMwYW0rUlk5WUdxNVQ3WXJ6cHpjZnZwaU9VCmZmZTNSeUZUN2NmQ21mb09oREN0enVrQ2dZQjMwb0xDMVJMRk9ycW40M3ZDUzUxemM1em9ZNDR1QnpzcHd3WU4KVHd2UC9FeFdNZjNWSnJEakJDSCtULzZzeXNlUGJKRUltbHpNK0l3eXRGcEFOZmlJWEV0LzQ4WGY2ME54OGdXTQp1SHl4Wlp4L05LdER3MFY4dlgxUE9ucTJBNWVpS2ErOGpSQVJZS0pMWU5kZkR1d29seHZHNmJaaGtQaS80RXRUCjNZMThzUUtCZ0h0S2JrKzdsTkpWZXN3WEU1Y1VHNkVEVXNEZS8yVWE3ZlhwN0ZjanFCRW9hcDFMU3crNlRYcDAKWmdybUtFOEFSek00NytFSkhVdmlpcS9udXBFMTVnMGtKVzNzeWhwVTl6WkxPN2x0QjBLSWtPOVpSY21Vam84UQpjcExsSE1BcWJMSjhXWUdKQ2toaVd4eWFsNmhZVHlXWTRjVmtDMHh0VGwvaFVFOUllTktvCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
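The sample certificate above is issued for cafe.example.com. If you prefer to generate your own pair, a self-signed certificate can be created and turned into a Secret roughly like this (a sketch, assuming openssl is installed; kubectl create secret tls produces a kubernetes.io/tls Secret rather than Opaque, which ingress-nginx also accepts):

# generate a self-signed certificate and key for the demo host
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout cafe.key -out cafe.crt -subj "/CN=cafe.example.com"
# create the Secret directly from the files
kubectl create secret tls cafe-secret --cert=cafe.crt --key=cafe.key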
Create it:
➜ ingress-test kubectl apply -f cafe-secret.yaml
secret/cafe-secret created
➜ ingress-test kubectl get secrets
NAME TYPE DATA AGE
cafe-secret Opaque 2 17s
VI. Deploying the Ingress Object
The file, cafe-ingress.yaml, is as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  tls:
    - hosts:
        - cafe.example.com
      secretName: cafe-secret
  rules:
    - host: cafe.example.com
      http:
        paths:
          - path: /tea
            backend:
              serviceName: tea-svc
              servicePort: 80
          - path: /coffee
            backend:
              serviceName: coffee-svc
              servicePort: 80
As you can see, this Ingress defines two paths, /coffee and /tea, pointing to coffee-svc and tea-svc respectively. Apply it:
➜ ingress-test kubectl apply -f cafe-ingress.yaml
ingress.extensions/cafe-ingress created
➜ ingress-test kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
cafe-ingress cafe.example.com 10.43.33.55 80, 443 7s
➜ ingress-test kubectl describe ingress cafe-ingress
Name: cafe-ingress
Namespace: default
Address: 10.43.33.55
Default backend: default-http-backend:80 (<none>)
TLS:
cafe-secret terminates cafe.example.com
Rules:
Host Path Backends
---- ---- --------
cafe.example.com
/tea tea-svc:80 (10.42.0.41:80,10.42.0.42:80,10.42.2.185:80)
/coffee coffee-svc:80 (10.42.0.40:80,10.42.2.184:80)
Annotations:
field.cattle.io/publicEndpoints: [{"addresses":["10.43.33.55"],"port":443,"protocol":"HTTPS","serviceName":"default:tea-svc","ingressName":"default:cafe-ingress","hostname":"cafe.example.com","path":"/tea","allNodes":true},{"addresses":["10.43.33.55"],"port":443,"protocol":"HTTPS","serviceName":"default:coffee-svc","ingressName":"default:cafe-ingress","hostname":"cafe.example.com","path":"/coffee","allNodes":true}]
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"cafe-ingress","namespace":"default"},"spec":{"rules":[{"host":"cafe.example.com","http":{"paths":[{"backend":{"serviceName":"tea-svc","servicePort":80},"path":"/tea"},{"backend":{"serviceName":"coffee-svc","servicePort":80},"path":"/coffee"}]}}],"tls":[{"hosts":["cafe.example.com"],"secretName":"cafe-secret"}]}}
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 2m13s nginx-ingress-controller Ingress default/cafe-ingress
Normal UPDATE 2m13s (x2 over 2m13s) nginx-ingress-controller Ingress default/cafe-ingress
As you can see, the core of this Ingress is the Rules field:
Rules:
Host Path Backends
---- ---- --------
cafe.example.com
/tea tea-svc:80 (10.42.0.41:80,10.42.0.42:80,10.42.2.185:80)
/coffee coffee-svc:80 (10.42.0.40:80,10.42.2.184:80)
The defined host is cafe.example.com, with two forwarding rules, /tea and /coffee.
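Since cafe.example.com is not a real DNS name, requests must resolve it to a Node IP. One option is an /etc/hosts entry (the IP below is this demo's Node, and the HTTPS NodePort still has to be appended to the URL, e.g. https://cafe.example.com:32229/coffee); the next section instead uses curl's --resolve flag, which avoids touching system files:

echo "172.17.0.215 cafe.example.com" | sudo tee -a /etc/hosts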
VII. Accessing the Services Proxied by the Ingress
Next, we can reach the deployed applications through the Ingress's address and port.
The canonical URLs are https://cafe.example.com/coffee and https://cafe.example.com/tea; since the controller is exposed via NodePort here, we access them on the HTTPS NodePort instead.
First, set environment variables for the entry point, specifying a Node IP and the HTTPS NodePort:
IC_IP=172.17.0.215
IC_HTTPS_PORT=32229
1. Access coffee:
curl --resolve cafe.example.com:$IC_HTTPS_PORT:$IC_IP https://cafe.example.com:$IC_HTTPS_PORT/coffee --insecure
Server address: 10.42.0.40:80
Server name: coffee-bbd45c6-jp6bz
Date: 23/Oct/2019:06:35:35 +0000
URI: /coffee
Request ID: df63447fcacfc2ff0acd1f8a9b30d334
2. Access tea:
curl --resolve cafe.example.com:$IC_HTTPS_PORT:$IC_IP https://cafe.example.com:$IC_HTTPS_PORT/tea --insecure
Server address: 10.42.2.185:80
Server name: tea-5857f7786b-qkpn7
Date: 23/Oct/2019:06:37:17 +0000
URI: /tea
Request ID: a5da5ff4d869c3978aed450738b44706
3. Access a nonexistent path, cqh:
curl --resolve cafe.example.com:$IC_HTTPS_PORT:$IC_IP https://cafe.example.com:$IC_HTTPS_PORT/cqh --insecure
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>openresty/1.15.8.2</center>
</body>
</html>
As you can see, when a request matches none of the IngressRules, Nginx returns a 404 page.
At this point, we have successfully deployed an Ingress and used it to proxy our services.
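To tear the demo down when you are done (a convenience, assuming the YAML files from the earlier sections are still at hand):

kubectl delete -f cafe-ingress.yaml
kubectl delete -f cafe-secret.yaml
kubectl delete -f cafe.yaml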