Official documentation: https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal
There are three ways to customize NGINX:
- ConfigMap: use a ConfigMap to set global configuration for NGINX.
- Annotations: use these when you want a specific configuration for a particular Ingress rule.
- Custom template: use this when more specific settings are required, such as open_file_cache, adjusting listen options like rcvbuf, or when it is not possible to change the configuration through the ConfigMap.
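For example, the first two options often expose the same NGINX setting at different scopes. A minimal sketch, assuming the proxy-body-size value below is just an illustrative placeholder: setting data.proxy-body-size in the controller ConfigMap raises client_max_body_size globally, while the annotation nginx.ingress.kubernetes.io/proxy-body-size overrides it for a single Ingress rule.

apiVersion: v1
kind: ConfigMap
metadata:
  name: prod-ingress-template        # the ConfigMap passed to the controller via --configmap
  namespace: ingress-nginx
data:
  proxy-body-size: "20m"             # illustrative global default for client_max_body_size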
Ingress-Controller configuration notes
- External NGINX (fe-nginx)
- Internal NGINX (be-nginx)
Taints and tolerations
When relying on taints and tolerations, make sure to keep at least one untainted node available. Deploying the ingress controller runs two Jobs (ingress-nginx-admission-create and ingress-nginx-admission-patch) that generate the admission webhook certificate and key; if every node is tainted, these Jobs cannot be scheduled and the deployment fails with errors.
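A quick way to confirm that both admission Jobs were able to schedule and complete (command sketch; the pending pod name is a placeholder):

kubectl -n ingress-nginx get jobs                                                   # ingress-nginx-admission-create / -patch should show Completed
kubectl -n ingress-nginx get pods -l app.kubernetes.io/component=admission-webhook -o wide
kubectl -n ingress-nginx describe pod <pending-admission-pod>                       # a Pending pod typically reports that node taints were not tolerated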
- Node labels
# be-nginx labels
kubernetes.io/env: prod
kubernetes.io/ingress: prod-internal
---------------------------------------------------
# fe-nginx labels
kubernetes.io/env: prod
kubernetes.io/ingress: prod
- Node taints
# be-nginx taints
<root@PROD-K8S-CP1 /var/log/kubernetes># kubectl taint node prod-be-k8s-wn1 ingress=prod-internal:NoExecute
<root@PROD-K8S-CP1 /var/log/kubernetes># kubectl taint node prod-be-k8s-wn2 ingress=prod-internal:NoExecute
------------------------------------------------------------------------------------------
# fe-nginx taints
<root@PROD-K8S-CP1 /var/log/kubernetes># kubectl taint node prod-fe-k8s-wn1 ingress=prod:NoExecute
<root@PROD-K8S-CP1 /var/log/kubernetes># kubectl taint node prod-fe-k8s-wn2 ingress=prod:NoExecute
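Before rolling out the controllers, the labels and taints can be double-checked with a short verification sketch:

kubectl get nodes -L kubernetes.io/env,kubernetes.io/ingress      # show the custom labels as extra columns
kubectl describe node prod-fe-k8s-wn1 | grep -A1 Taints           # expect ingress=prod:NoExecute
kubectl describe node prod-be-k8s-wn1 | grep -A1 Taints           # expect ingress=prod-internal:NoExecute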
prod-ingress
- Full manifest (an example Ingress that targets this controller class follows this section)
apiVersion: v1 kind: Namespace metadata: name: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx --- # Source: ingress-nginx/templates/controller-serviceaccount.yaml apiVersion: v1 kind: ServiceAccount metadata: labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx namespace: ingress-nginx --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml apiVersion: v1 kind: ServiceAccount metadata: name: ingress-nginx-admission annotations: helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook namespace: ingress-nginx --- # Source: ingress-nginx/templates/controller-configmap.yaml apiVersion: v1 kind: ConfigMap metadata: labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: prod-ingress-template #name: ingress-nginx-controller namespace: ingress-nginx data: --- # Source: ingress-nginx/templates/clusterrole.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm name: ingress-nginx namespace: ingress-nginx rules: - apiGroups: - '' resources: - configmaps - endpoints - nodes - pods - secrets verbs: - list - watch - apiGroups: - '' resources: - nodes verbs: - get - apiGroups: - '' resources: - services verbs: - get - list - update - watch - apiGroups: - extensions - networking.k8s.io # k8s 1.14+ resources: - ingresses verbs: - get - list - watch - apiGroups: - '' resources: - events verbs: - create - patch - apiGroups: - extensions - networking.k8s.io # k8s 1.14+ resources: - ingresses/status verbs: - update - apiGroups: - networking.k8s.io # k8s 1.14+ resources: - ingressclasses verbs: - get - list - watch --- # Source: ingress-nginx/templates/clusterrolebinding.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm name: ingress-nginx namespace: ingress-nginx roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: ingress-nginx subjects: - kind: ServiceAccount name: ingress-nginx namespace: ingress-nginx --- # Source: ingress-nginx/templates/controller-role.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx namespace: ingress-nginx rules: - apiGroups: - '' resources: - namespaces verbs: - get - apiGroups: - '' resources: - configmaps - pods 
- secrets - endpoints verbs: - get - list - watch - apiGroups: - '' resources: - services verbs: - get - list - update - watch - apiGroups: - extensions - networking.k8s.io # k8s 1.14+ resources: - ingresses verbs: - get - list - watch - apiGroups: - extensions - networking.k8s.io # k8s 1.14+ resources: - ingresses/status verbs: - update - apiGroups: - networking.k8s.io # k8s 1.14+ resources: - ingressclasses verbs: - get - list - watch - apiGroups: - '' resources: - configmaps resourceNames: - ingress-controller-leader-prod verbs: - get - update - apiGroups: - '' resources: - configmaps verbs: - create - apiGroups: - '' resources: - endpoints verbs: - create - get - update - apiGroups: - '' resources: - events verbs: - create - patch --- # Source: ingress-nginx/templates/controller-rolebinding.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx namespace: ingress-nginx roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: ingress-nginx subjects: - kind: ServiceAccount name: ingress-nginx namespace: ingress-nginx --- # Source: ingress-nginx/templates/controller-service-webhook.yaml apiVersion: v1 kind: Service metadata: labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx-controller-admission namespace: ingress-nginx spec: type: ClusterIP ports: - name: https-webhook port: 443 targetPort: webhook selector: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/component: controller --- # Source: ingress-nginx/templates/controller-service.yaml apiVersion: v1 kind: Service metadata: labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: prod-ingress namespace: ingress-nginx spec: type: ClusterIP ports: - name: http port: 80 protocol: TCP targetPort: http - name: https port: 443 protocol: TCP targetPort: https selector: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/component: controller --- # Source: ingress-nginx/templates/controller-deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller #name: ingress-nginx-controller name: prod-ingress namespace: ingress-nginx spec: selector: matchLabels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/component: controller revisionHistoryLimit: 10 minReadySeconds: 0 template: metadata: labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/component: controller spec: dnsPolicy: ClusterFirst containers: - name: pord-ingress image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.33.0 imagePullPolicy: IfNotPresent lifecycle: preStop: exec: command: - 
/wait-shutdown args: - /nginx-ingress-controller - --election-id=ingress-controller-leader - --ingress-class=prod - --configmap=ingress-nginx/prod-ingress-template - --validating-webhook=:8443 - --validating-webhook-certificate=/usr/local/certificates/cert - --validating-webhook-key=/usr/local/certificates/key securityContext: capabilities: drop: - ALL add: - NET_BIND_SERVICE runAsUser: 101 allowPrivilegeEscalation: true env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace livenessProbe: httpGet: path: /healthz port: 10254 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 timeoutSeconds: 1 successThreshold: 1 failureThreshold: 3 readinessProbe: httpGet: path: /healthz port: 10254 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 timeoutSeconds: 1 successThreshold: 1 failureThreshold: 3 ports: - name: http containerPort: 80 protocol: TCP - name: https containerPort: 443 protocol: TCP - name: webhook containerPort: 8443 protocol: TCP volumeMounts: - name: webhook-cert mountPath: /usr/local/certificates/ readOnly: true - name: localtime mountPath: /etc/localtime resources: requests: cpu: 100m memory: 512Mi limits: cpu: 2 memory: 3000Mi nodeSelector: kubernetes.io/os: linux affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/ingress operator: In values: - prod podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: kubernetes.io/env operator: In values: - prod topologyKey: kubernetes.io/hostname tolerations: - key: ingress value: prod effect: NoExecute hostNetwork: true serviceAccountName: ingress-nginx terminationGracePeriodSeconds: 60 volumes: - name: webhook-cert secret: secretName: ingress-nginx-admission - name: localtime hostPath: path: /etc/localtime type: "" --- # Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml # before changing this value, check the required kubernetes version # https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration metadata: labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook name: ingress-nginx-admission namespace: ingress-nginx webhooks: - name: validate.nginx.ingress.kubernetes.io rules: - apiGroups: - extensions - networking.k8s.io apiVersions: - v1beta1 operations: - CREATE - UPDATE resources: - ingresses failurePolicy: Fail sideEffects: None admissionReviewVersions: - v1 - v1beta1 clientConfig: service: namespace: ingress-nginx name: ingress-nginx-controller-admission path: /extensions/v1beta1/ingresses --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: ingress-nginx-admission annotations: helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook namespace: ingress-nginx rules: - 
apiGroups: - admissionregistration.k8s.io resources: - validatingwebhookconfigurations verbs: - get - update --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: ingress-nginx-admission annotations: helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook namespace: ingress-nginx roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: ingress-nginx-admission subjects: - kind: ServiceAccount name: ingress-nginx-admission namespace: ingress-nginx --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml apiVersion: batch/v1 kind: Job metadata: name: ingress-nginx-admission-create annotations: helm.sh/hook: pre-install,pre-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook namespace: ingress-nginx spec: template: metadata: name: ingress-nginx-admission-create labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook spec: containers: - name: create image: docker.io/jettech/kube-webhook-certgen:v1.2.0 imagePullPolicy: IfNotPresent args: - create - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.ingress-nginx.svc - --namespace=ingress-nginx - --secret-name=ingress-nginx-admission restartPolicy: OnFailure serviceAccountName: ingress-nginx-admission securityContext: runAsNonRoot: true runAsUser: 2000 --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml apiVersion: batch/v1 kind: Job metadata: name: ingress-nginx-admission-patch annotations: helm.sh/hook: post-install,post-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook namespace: ingress-nginx spec: template: metadata: name: ingress-nginx-admission-patch labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook spec: containers: - name: patch image: docker.io/jettech/kube-webhook-certgen:v1.2.0 imagePullPolicy: IfNotPresent args: - patch - --webhook-name=ingress-nginx-admission - --namespace=ingress-nginx - --patch-mutating=false - --secret-name=ingress-nginx-admission - --patch-failure-policy=Fail restartPolicy: OnFailure serviceAccountName: ingress-nginx-admission securityContext: runAsNonRoot: true runAsUser: 2000 --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: Role 
metadata: name: ingress-nginx-admission annotations: helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook namespace: ingress-nginx rules: - apiGroups: - '' resources: - secrets verbs: - get - create --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: ingress-nginx-admission annotations: helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook namespace: ingress-nginx roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: ingress-nginx-admission subjects: - kind: ServiceAccount name: ingress-nginx-admission namespace: ingress-nginx
- Rolling update and replica count
spec:
  replicas: 2
  ....
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 25%
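Because the Deployment above starts the controller with --ingress-class=prod, application Ingresses select this (external) controller through the kubernetes.io/ingress.class annotation used by controller 0.33.0; a service meant for the internal controller would set prod-internal instead. A minimal sketch, where the host, Service name and port are placeholder assumptions:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: demo-public                                        # hypothetical application Ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: prod                      # matches --ingress-class=prod
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"     # optional per-Ingress override of a ConfigMap default
spec:
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-svc                            # hypothetical backend Service
          servicePort: 80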
prod-internal-ingress
- Full manifest (verification commands for both controllers follow this section)
apiVersion: v1 kind: Namespace metadata: name: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx --- # Source: ingress-nginx/templates/controller-serviceaccount.yaml apiVersion: v1 kind: ServiceAccount metadata: labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx namespace: ingress-nginx --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml apiVersion: v1 kind: ServiceAccount metadata: name: ingress-nginx-admission annotations: helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook namespace: ingress-nginx --- # Source: ingress-nginx/templates/controller-configmap.yaml apiVersion: v1 kind: ConfigMap metadata: labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: prod-internal-ingress-template #name: ingress-nginx-controller namespace: ingress-nginx data: --- # Source: ingress-nginx/templates/clusterrole.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm name: ingress-nginx namespace: ingress-nginx rules: - apiGroups: - '' resources: - configmaps - endpoints - nodes - pods - secrets verbs: - list - watch - apiGroups: - '' resources: - nodes verbs: - get - apiGroups: - '' resources: - services verbs: - get - list - update - watch - apiGroups: - extensions - networking.k8s.io # k8s 1.14+ resources: - ingresses verbs: - get - list - watch - apiGroups: - '' resources: - events verbs: - create - patch - apiGroups: - extensions - networking.k8s.io # k8s 1.14+ resources: - ingresses/status verbs: - update - apiGroups: - networking.k8s.io # k8s 1.14+ resources: - ingressclasses verbs: - get - list - watch --- # Source: ingress-nginx/templates/clusterrolebinding.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm name: ingress-nginx namespace: ingress-nginx roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: ingress-nginx subjects: - kind: ServiceAccount name: ingress-nginx namespace: ingress-nginx --- # Source: ingress-nginx/templates/controller-role.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx namespace: ingress-nginx rules: - apiGroups: - '' resources: - namespaces verbs: - get - apiGroups: - '' resources: - 
configmaps - pods - secrets - endpoints verbs: - get - list - watch - apiGroups: - '' resources: - services verbs: - get - list - update - watch - apiGroups: - extensions - networking.k8s.io # k8s 1.14+ resources: - ingresses verbs: - get - list - watch - apiGroups: - extensions - networking.k8s.io # k8s 1.14+ resources: - ingresses/status verbs: - update - apiGroups: - networking.k8s.io # k8s 1.14+ resources: - ingressclasses verbs: - get - list - watch - apiGroups: - '' resources: - configmaps resourceNames: - ingress-controller-leader-prod - ingress-controller-leader-prod-internal verbs: - get - update - apiGroups: - '' resources: - configmaps verbs: - create - apiGroups: - '' resources: - endpoints verbs: - create - get - update - apiGroups: - '' resources: - events verbs: - create - patch --- # Source: ingress-nginx/templates/controller-rolebinding.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx namespace: ingress-nginx roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: ingress-nginx subjects: - kind: ServiceAccount name: ingress-nginx namespace: ingress-nginx --- # Source: ingress-nginx/templates/controller-service-webhook.yaml apiVersion: v1 kind: Service metadata: labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx-controller-admission namespace: ingress-nginx spec: type: ClusterIP ports: - name: https-webhook port: 443 targetPort: webhook selector: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/component: controller --- # Source: ingress-nginx/templates/controller-service.yaml apiVersion: v1 kind: Service metadata: labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: prod-internal-ingress namespace: ingress-nginx spec: type: ClusterIP ports: - name: http port: 80 protocol: TCP targetPort: http - name: https port: 443 protocol: TCP targetPort: https selector: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/component: controller --- # Source: ingress-nginx/templates/controller-deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller #name: ingress-nginx-controller name: prod-internal-ingress namespace: ingress-nginx spec: selector: matchLabels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/component: controller revisionHistoryLimit: 10 minReadySeconds: 0 template: metadata: labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/component: controller spec: dnsPolicy: ClusterFirst containers: - name: prod-internal-ingress image: 
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.33.0 imagePullPolicy: IfNotPresent lifecycle: preStop: exec: command: - /wait-shutdown args: - /nginx-ingress-controller - --election-id=ingress-controller-leader - --ingress-class=prod-internal - --configmap=ingress-nginx/prod-internal-ingress-template - --validating-webhook=:8443 - --validating-webhook-certificate=/usr/local/certificates/cert - --validating-webhook-key=/usr/local/certificates/key securityContext: capabilities: drop: - ALL add: - NET_BIND_SERVICE runAsUser: 101 allowPrivilegeEscalation: true env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace livenessProbe: httpGet: path: /healthz port: 10254 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 timeoutSeconds: 1 successThreshold: 1 failureThreshold: 3 readinessProbe: httpGet: path: /healthz port: 10254 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 timeoutSeconds: 1 successThreshold: 1 failureThreshold: 3 ports: - name: http containerPort: 80 protocol: TCP - name: https containerPort: 443 protocol: TCP - name: webhook containerPort: 8443 protocol: TCP volumeMounts: - name: webhook-cert mountPath: /usr/local/certificates/ readOnly: true - name: localtime mountPath: /etc/localtime resources: requests: cpu: 100m memory: 512Mi limits: cpu: 2 memory: 3000Mi nodeSelector: kubernetes.io/os: linux affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/ingress operator: In values: - prod-internal podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: kubernetes.io/env operator: In values: - prod topologyKey: kubernetes.io/hostname tolerations: - key: ingress value: prod-internal effect: NoExecute hostNetwork: true serviceAccountName: ingress-nginx terminationGracePeriodSeconds: 60 volumes: - name: webhook-cert secret: secretName: ingress-nginx-admission - name: localtime hostPath: path: /etc/localtime type: "" --- # Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml # before changing this value, check the required kubernetes version # https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration metadata: labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook name: ingress-nginx-admission namespace: ingress-nginx webhooks: - name: validate.nginx.ingress.kubernetes.io rules: - apiGroups: - extensions - networking.k8s.io apiVersions: - v1beta1 operations: - CREATE - UPDATE resources: - ingresses failurePolicy: Fail sideEffects: None admissionReviewVersions: - v1 - v1beta1 clientConfig: service: namespace: ingress-nginx name: ingress-nginx-controller-admission path: /extensions/v1beta1/ingresses --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: ingress-nginx-admission annotations: helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx 
app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook namespace: ingress-nginx rules: - apiGroups: - admissionregistration.k8s.io resources: - validatingwebhookconfigurations verbs: - get - update --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: ingress-nginx-admission annotations: helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook namespace: ingress-nginx roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: ingress-nginx-admission subjects: - kind: ServiceAccount name: ingress-nginx-admission namespace: ingress-nginx --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml apiVersion: batch/v1 kind: Job metadata: name: ingress-nginx-admission-create annotations: helm.sh/hook: pre-install,pre-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook namespace: ingress-nginx spec: template: metadata: name: ingress-nginx-admission-create labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook spec: containers: - name: create image: docker.io/jettech/kube-webhook-certgen:v1.2.0 imagePullPolicy: IfNotPresent args: - create - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.ingress-nginx.svc - --namespace=ingress-nginx - --secret-name=ingress-nginx-admission restartPolicy: OnFailure serviceAccountName: ingress-nginx-admission securityContext: runAsNonRoot: true runAsUser: 2000 --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml apiVersion: batch/v1 kind: Job metadata: name: ingress-nginx-admission-patch annotations: helm.sh/hook: post-install,post-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook namespace: ingress-nginx spec: template: metadata: name: ingress-nginx-admission-patch labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook spec: containers: - name: patch image: docker.io/jettech/kube-webhook-certgen:v1.2.0 imagePullPolicy: IfNotPresent args: - patch - --webhook-name=ingress-nginx-admission - --namespace=ingress-nginx - --patch-mutating=false - --secret-name=ingress-nginx-admission - --patch-failure-policy=Fail restartPolicy: OnFailure serviceAccountName: 
ingress-nginx-admission securityContext: runAsNonRoot: true runAsUser: 2000 --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: ingress-nginx-admission annotations: helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook namespace: ingress-nginx rules: - apiGroups: - '' resources: - secrets verbs: - get - create --- # Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: ingress-nginx-admission annotations: helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded labels: helm.sh/chart: ingress-nginx-2.9.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.33.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: admission-webhook namespace: ingress-nginx roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: ingress-nginx-admission subjects: - kind: ServiceAccount name: ingress-nginx-admission namespace: ingress-nginx
- Rolling update and replica count
spec:
  replicas: 2
  ....
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 25%
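With both Deployments applied, it is worth verifying that each controller landed on its intended node pool and that the hostNetwork ports are bound (verification sketch; run the last command on an ingress node):

kubectl -n ingress-nginx get deploy prod-ingress prod-internal-ingress
kubectl -n ingress-nginx get pods -o wide                          # NODE column should match the fe-/be- ingress nodes
kubectl get nodes -l kubernetes.io/ingress=prod
kubectl get nodes -l kubernetes.io/ingress=prod-internal
ss -lntp | grep -E ':80 |:443 '                                    # hostNetwork: true binds 80/443 directly on the node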
Log output
- Edit the prod-ingress-template ConfigMap (e.g. in Lens) to adjust the NGINX access-log output format
log-format-upstream: >-
  $realip_remote_addr [$time_local] $server_name "$request" $request_time $status $body_bytes_sent
  "$http_referer" "$http_user_agent" "$http_x_forwarded_for" $proxy_upstream_name $upstream_addr
  "$upstream_response_time" "$upstream_status"
map-hash-bucket-size: '256'
nginx-status-ipv4-whitelist: all
use-forwarded-headers: 'true'
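After the ConfigMap is saved, the controller reloads NGINX; the rendered log_format and fresh access-log entries can be checked in the controller pod (command sketch; the pod name is a placeholder):

kubectl -n ingress-nginx get cm prod-ingress-template -o yaml | grep -A4 log-format-upstream
kubectl -n ingress-nginx exec <prod-ingress-pod> -- grep -A2 'log_format upstreaminfo' /etc/nginx/nginx.conf
kubectl -n ingress-nginx logs <prod-ingress-pod> --tail=20         # new requests should appear in the custom format (unless a custom template redirects access logs to files, as the nginx.tmpl below does)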
- Create the nginx.tmpl template ConfigMap for the ingress controller
apiVersion: v1 kind: ConfigMap metadata: name: nginx-tmpl namespace: ingress-nginx data: nginx.tmpl: | {{ $all := . }} {{ $servers := .Servers }} {{ $cfg := .Cfg }} {{ $IsIPV6Enabled := .IsIPV6Enabled }} {{ $healthzURI := .HealthzURI }} {{ $backends := .Backends }} {{ $proxyHeaders := .ProxySetHeaders }} {{ $addHeaders := .AddHeaders }} # Configuration checksum: {{ $all.Cfg.Checksum }} # setup custom paths that do not require root access pid {{ .PID }}; {{ if $cfg.UseGeoIP2 }} load_module /etc/nginx/modules/ngx_http_geoip2_module.so; {{ end }} {{ if (shouldLoadModSecurityModule $cfg $servers) }} load_module /etc/nginx/modules/ngx_http_modsecurity_module.so; {{ end }} {{ if (shouldLoadOpentracingModule $cfg $servers) }} load_module /etc/nginx/modules/ngx_http_opentracing_module.so; {{ end }} daemon off; worker_processes {{ $cfg.WorkerProcesses }}; {{ if gt (len $cfg.WorkerCPUAffinity) 0 }} worker_cpu_affinity {{ $cfg.WorkerCPUAffinity }}; {{ end }} worker_rlimit_nofile {{ $cfg.MaxWorkerOpenFiles }}; {{/* http://nginx.org/en/docs/ngx_core_module.html#worker_shutdown_timeout */}} {{/* avoid waiting too long during a reload */}} worker_shutdown_timeout {{ $cfg.WorkerShutdownTimeout }} ; {{ if not (empty $cfg.MainSnippet) }} {{ $cfg.MainSnippet }} {{ end }} events { multi_accept {{ if $cfg.EnableMultiAccept }}on{{ else }}off{{ end }}; worker_connections {{ $cfg.MaxWorkerConnections }}; use epoll; } http { lua_package_path "/etc/nginx/lua/?.lua;;"; {{ buildLuaSharedDictionaries $cfg $servers }} init_by_lua_block { collectgarbage("collect") -- init modules local ok, res ok, res = pcall(require, "lua_ingress") if not ok then error("require failed: " .. tostring(res)) else lua_ingress = res lua_ingress.set_config({{ configForLua $all }}) end ok, res = pcall(require, "configuration") if not ok then error("require failed: " .. tostring(res)) else configuration = res end ok, res = pcall(require, "balancer") if not ok then error("require failed: " .. tostring(res)) else balancer = res end {{ if $all.EnableMetrics }} ok, res = pcall(require, "monitor") if not ok then error("require failed: " .. tostring(res)) else monitor = res end {{ end }} ok, res = pcall(require, "certificate") if not ok then error("require failed: " .. tostring(res)) else certificate = res certificate.is_ocsp_stapling_enabled = {{ $cfg.EnableOCSP }} end ok, res = pcall(require, "plugins") if not ok then error("require failed: " .. tostring(res)) else plugins = res end -- load all plugins that'll be used here plugins.init({ {{ range $idx, $plugin := $cfg.Plugins }}{{ if $idx }},{{ end }}{{ $plugin | quote }}{{ end }} }) } init_worker_by_lua_block { lua_ingress.init_worker() balancer.init_worker() {{ if $all.EnableMetrics }} monitor.init_worker() {{ end }} plugins.run() } {{/* Enable the real_ip module only if we use either X-Forwarded headers or Proxy Protocol. 
*/}} {{/* we use the value of the real IP for the geo_ip module */}} {{ if or $cfg.UseForwardedHeaders $cfg.UseProxyProtocol }} {{ if $cfg.UseProxyProtocol }} real_ip_header proxy_protocol; {{ else }} real_ip_header {{ $cfg.ForwardedForHeader }}; {{ end }} real_ip_recursive on; {{ range $trusted_ip := $cfg.ProxyRealIPCIDR }} set_real_ip_from {{ $trusted_ip }}; {{ end }} {{ end }} {{ if $all.Cfg.EnableModsecurity }} modsecurity on; modsecurity_rules_file /etc/nginx/modsecurity/modsecurity.conf; {{ if $all.Cfg.EnableOWASPCoreRules }} modsecurity_rules_file /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf; {{ else if (not (empty $all.Cfg.ModsecuritySnippet)) }} modsecurity_rules ' {{ $all.Cfg.ModsecuritySnippet }} '; {{ end }} {{ end }} {{ if $cfg.UseGeoIP }} {{/* databases used to determine the country depending on the client IP address */}} {{/* http://nginx.org/en/docs/http/ngx_http_geoip_module.html */}} {{/* this is require to calculate traffic for individual country using GeoIP in the status page */}} geoip_country /etc/nginx/geoip/GeoIP.dat; geoip_city /etc/nginx/geoip/GeoLiteCity.dat; geoip_org /etc/nginx/geoip/GeoIPASNum.dat; geoip_proxy_recursive on; {{ end }} {{ if $cfg.UseGeoIP2 }} # https://github.com/leev/ngx_http_geoip2_module#example-usage {{ range $index, $file := $all.MaxmindEditionFiles }} {{ if eq $file "GeoLite2-City.mmdb" }} geoip2 /etc/nginx/geoip/GeoLite2-City.mmdb { $geoip2_city_country_code source=$remote_addr country iso_code; $geoip2_city_country_name source=$remote_addr country names en; $geoip2_city source=$remote_addr city names en; $geoip2_postal_code source=$remote_addr postal code; $geoip2_dma_code source=$remote_addr location metro_code; $geoip2_latitude source=$remote_addr location latitude; $geoip2_longitude source=$remote_addr location longitude; $geoip2_time_zone source=$remote_addr location time_zone; $geoip2_region_code source=$remote_addr subdivisions 0 iso_code; $geoip2_region_name source=$remote_addr subdivisions 0 names en; } {{ end }} {{ if eq $file "GeoIP2-City.mmdb" }} geoip2 /etc/nginx/geoip/GeoIP2-City.mmdb { $geoip2_city_country_code source=$remote_addr country iso_code; $geoip2_city_country_name source=$remote_addr country names en; $geoip2_city source=$remote_addr city names en; $geoip2_postal_code source=$remote_addr postal code; $geoip2_dma_code source=$remote_addr location metro_code; $geoip2_latitude source=$remote_addr location latitude; $geoip2_longitude source=$remote_addr location longitude; $geoip2_time_zone source=$remote_addr location time_zone; $geoip2_region_code source=$remote_addr subdivisions 0 iso_code; $geoip2_region_name source=$remote_addr subdivisions 0 names en; } {{ end }} {{ if eq $file "GeoLite2-ASN.mmdb" }} geoip2 /etc/nginx/geoip/GeoLite2-ASN.mmdb { $geoip2_asn source=$remote_addr autonomous_system_number; $geoip2_org source=$remote_addr autonomous_system_organization; } {{ end }} {{ if eq $file "GeoIP2-ASN.mmdb" }} geoip2 /etc/nginx/geoip/GeoIP2-ASN.mmdb { $geoip2_asn source=$remote_addr autonomous_system_number; $geoip2_org source=$remote_addr autonomous_system_organization; } {{ end }} {{ if eq $file "GeoIP2-ISP.mmdb" }} geoip2 /etc/nginx/geoip/GeoIP2-ISP.mmdb { $geoip2_isp isp; $geoip2_isp_org organization; } {{ end }} {{ if eq $file "GeoIP2-Connection-Type.mmdb" }} geoip2 /etc/nginx/geoip/GeoIP2-Connection-Type.mmdb { $geoip2_connection_type connection_type; } {{ end }} {{ if eq $file "GeoIP2-Anonymous-IP.mmdb" }} geoip2 /etc/nginx/geoip/GeoIP2-Anonymous-IP.mmdb { $geoip2_is_anon source=$remote_addr 
is_anonymous; $geoip2_is_hosting_provider source=$remote_addr is_hosting_provider; $geoip2_is_public_proxy source=$remote_addr is_public_proxy; } {{ end }} {{ end }} {{ end }} aio threads; aio_write on; tcp_nopush on; tcp_nodelay on; log_subrequest on; reset_timedout_connection on; keepalive_timeout {{ $cfg.KeepAlive }}s; keepalive_requests {{ $cfg.KeepAliveRequests }}; client_body_temp_path /tmp/client-body; fastcgi_temp_path /tmp/fastcgi-temp; proxy_temp_path /tmp/proxy-temp; ajp_temp_path /tmp/ajp-temp; client_header_buffer_size {{ $cfg.ClientHeaderBufferSize }}; client_header_timeout {{ $cfg.ClientHeaderTimeout }}s; large_client_header_buffers {{ $cfg.LargeClientHeaderBuffers }}; client_body_buffer_size {{ $cfg.ClientBodyBufferSize }}; client_body_timeout {{ $cfg.ClientBodyTimeout }}s; http2_max_field_size {{ $cfg.HTTP2MaxFieldSize }}; http2_max_header_size {{ $cfg.HTTP2MaxHeaderSize }}; http2_max_requests {{ $cfg.HTTP2MaxRequests }}; http2_max_concurrent_streams {{ $cfg.HTTP2MaxConcurrentStreams }}; types_hash_max_size 2048; server_names_hash_max_size {{ $cfg.ServerNameHashMaxSize }}; server_names_hash_bucket_size {{ $cfg.ServerNameHashBucketSize }}; map_hash_bucket_size {{ $cfg.MapHashBucketSize }}; proxy_headers_hash_max_size {{ $cfg.ProxyHeadersHashMaxSize }}; proxy_headers_hash_bucket_size {{ $cfg.ProxyHeadersHashBucketSize }}; variables_hash_bucket_size {{ $cfg.VariablesHashBucketSize }}; variables_hash_max_size {{ $cfg.VariablesHashMaxSize }}; underscores_in_headers {{ if $cfg.EnableUnderscoresInHeaders }}on{{ else }}off{{ end }}; ignore_invalid_headers {{ if $cfg.IgnoreInvalidHeaders }}on{{ else }}off{{ end }}; limit_req_status {{ $cfg.LimitReqStatusCode }}; limit_conn_status {{ $cfg.LimitConnStatusCode }}; {{ buildOpentracing $cfg $servers }} include /etc/nginx/mime.types; default_type text/html; {{ if $cfg.EnableBrotli }} brotli on; brotli_comp_level {{ $cfg.BrotliLevel }}; brotli_types {{ $cfg.BrotliTypes }}; {{ end }} #{{ if $cfg.UseGzip }} #gzip on; #gzip_comp_level {{ $cfg.GzipLevel }}; #gzip_http_version 1.1; #gzip_min_length {{ $cfg.GzipMinLength}}; #gzip_types {{ $cfg.GzipTypes }}; #gzip_proxied any; #gzip_vary on; #{{ end }} # Custom headers for response {{ range $k, $v := $addHeaders }} more_set_headers {{ printf "%s: %s" $k $v | quote }}; {{ end }} server_tokens {{ if $cfg.ShowServerTokens }}on{{ else }}off{{ end }}; {{ if not $cfg.ShowServerTokens }} more_clear_headers Server; {{ end }} # disable warnings uninitialized_variable_warn off; # Additional available variables: # $namespace # $ingress_name # $service_name # $service_port log_format upstreaminfo {{ if $cfg.LogFormatEscapeJSON }}escape=json {{ end }}'{{ $cfg.LogFormatUpstream }}'; {{/* map urls that should not appear in access.log */}} {{/* http://nginx.org/en/docs/http/ngx_http_log_module.html#access_log */}} map $request_uri $loggable { {{ range $reqUri := $cfg.SkipAccessLogURLs }} {{ $reqUri }} 0;{{ end }} default 1; } {{ if $cfg.EnableSyslog }} error_log syslog:server={{ $cfg.SyslogHost }}:{{ $cfg.SyslogPort }} {{ $cfg.ErrorLogLevel }}; {{ else }} error_log {{ $cfg.ErrorLogPath }} {{ $cfg.ErrorLogLevel }}; {{ end }} {{ buildResolvers $cfg.Resolver $cfg.DisableIpv6DNS }} # See https://www.nginx.com/blog/websocket-nginx map $http_upgrade $connection_upgrade { default upgrade; {{ if (gt $cfg.UpstreamKeepaliveConnections 0) }} # See http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive '' ''; {{ else }} '' close; {{ end }} } # Reverse proxies can detect if a client provides a X-Request-ID 
header, and pass it on to the backend server. # If no such header is provided, it can provide a random value. map $http_x_request_id $req_id { default $http_x_request_id; {{ if $cfg.GenerateRequestID }} "" $request_id; {{ end }} } {{ if and $cfg.UseForwardedHeaders $cfg.ComputeFullForwardedFor }} # We can't use $proxy_add_x_forwarded_for because the realip module # replaces the remote_addr too soon map $http_x_forwarded_for $full_x_forwarded_for { {{ if $all.Cfg.UseProxyProtocol }} default "$http_x_forwarded_for, $proxy_protocol_addr"; '' "$proxy_protocol_addr"; {{ else }} default "$http_x_forwarded_for, $realip_remote_addr"; '' "$realip_remote_addr"; {{ end}} } map $http_x_forwarded_proto $full_x_forwarded_proto { default $http_x_forwarded_proto; "" $scheme; } {{ end }} # Create a variable that contains the literal $ character. # This works because the geo module will not resolve variables. geo $literal_dollar { default "$"; } server_name_in_redirect off; port_in_redirect off; ssl_protocols {{ $cfg.SSLProtocols }}; ssl_early_data {{ if $cfg.SSLEarlyData }}on{{ else }}off{{ end }}; # turn on session caching to drastically improve performance {{ if $cfg.SSLSessionCache }} ssl_session_cache builtin:1000 shared:SSL:{{ $cfg.SSLSessionCacheSize }}; ssl_session_timeout {{ $cfg.SSLSessionTimeout }}; {{ end }} # allow configuring ssl session tickets ssl_session_tickets {{ if $cfg.SSLSessionTickets }}on{{ else }}off{{ end }}; {{ if not (empty $cfg.SSLSessionTicketKey ) }} ssl_session_ticket_key /etc/nginx/tickets.key; {{ end }} # slightly reduce the time-to-first-byte ssl_buffer_size {{ $cfg.SSLBufferSize }}; {{ if not (empty $cfg.SSLCiphers) }} # allow configuring custom ssl ciphers ssl_ciphers '{{ $cfg.SSLCiphers }}'; ssl_prefer_server_ciphers on; {{ end }} {{ if not (empty $cfg.SSLDHParam) }} # allow custom DH file http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam ssl_dhparam {{ $cfg.SSLDHParam }}; {{ end }} ssl_ecdh_curve {{ $cfg.SSLECDHCurve }}; # PEM sha: {{ $cfg.DefaultSSLCertificate.PemSHA }} ssl_certificate {{ $cfg.DefaultSSLCertificate.PemFileName }}; ssl_certificate_key {{ $cfg.DefaultSSLCertificate.PemFileName }}; {{ if gt (len $cfg.CustomHTTPErrors) 0 }} proxy_intercept_errors on; {{ end }} {{ range $errCode := $cfg.CustomHTTPErrors }} error_page {{ $errCode }} = @custom_upstream-default-backend_{{ $errCode }};{{ end }} proxy_ssl_session_reuse on; {{ if $cfg.AllowBackendServerHeader }} proxy_pass_header Server; {{ end }} {{ range $header := $cfg.HideHeaders }}proxy_hide_header {{ $header }}; {{ end }} {{ if not (empty $cfg.HTTPSnippet) }} # Custom code snippet configured in the configuration configmap {{ $cfg.HTTPSnippet }} {{ end }} upstream upstream_balancer { ### Attention!!! # # We no longer create "upstream" section for every backend. # Backends are handled dynamically using Lua. If you would like to debug # and see what backends ingress-nginx has in its memory you can # install our kubectl plugin https://kubernetes.github.io/ingress-nginx/kubectl-plugin. # Once you have the plugin you can use "kubectl ingress-nginx backends" command to # inspect current backends. 
# ### server 0.0.0.1; # placeholder balancer_by_lua_block { balancer.balance() } {{ if (gt $cfg.UpstreamKeepaliveConnections 0) }} keepalive {{ $cfg.UpstreamKeepaliveConnections }}; keepalive_timeout {{ $cfg.UpstreamKeepaliveTimeout }}s; keepalive_requests {{ $cfg.UpstreamKeepaliveRequests }}; {{ end }} } {{ range $rl := (filterRateLimits $servers ) }} # Ratelimit {{ $rl.Name }} geo $remote_addr $whitelist_{{ $rl.ID }} { default 0; {{ range $ip := $rl.Whitelist }} {{ $ip }} 1;{{ end }} } # Ratelimit {{ $rl.Name }} map $whitelist_{{ $rl.ID }} $limit_{{ $rl.ID }} { 0 {{ $cfg.LimitConnZoneVariable }}; 1 ""; } {{ end }} {{/* build all the required rate limit zones. Each annotation requires a dedicated zone */}} {{/* 1MB -> 16 thousand 64-byte states or about 8 thousand 128-byte states */}} {{ range $zone := (buildRateLimitZones $servers) }} {{ $zone }} {{ end }} # Cache for internal auth checks proxy_cache_path /tmp/nginx-cache-auth levels=1:2 keys_zone=auth_cache:10m max_size=128m inactive=30m use_temp_path=off; # Global filters {{ range $ip := $cfg.BlockCIDRs }}deny {{ trimSpace $ip }}; {{ end }} {{ if gt (len $cfg.BlockUserAgents) 0 }} map $http_user_agent $block_ua { default 0; {{ range $ua := $cfg.BlockUserAgents }}{{ trimSpace $ua }} 1; {{ end }} } {{ end }} {{ if gt (len $cfg.BlockReferers) 0 }} map $http_referer $block_ref { default 0; {{ range $ref := $cfg.BlockReferers }}{{ trimSpace $ref }} 1; {{ end }} } {{ end }} {{/* Build server redirects (from/to www) */}} {{ range $redirect := .RedirectServers }} ## start server {{ $redirect.From }} server { server_name {{ $redirect.From }}; {{ buildHTTPListener $all $redirect.From }} {{ buildHTTPSListener $all $redirect.From }} ssl_certificate_by_lua_block { certificate.call() } {{ if gt (len $cfg.BlockUserAgents) 0 }} if ($block_ua) { return 403; } {{ end }} {{ if gt (len $cfg.BlockReferers) 0 }} if ($block_ref) { return 403; } {{ end }} {{ if ne $all.ListenPorts.HTTPS 443 }} {{ $redirect_port := (printf ":%v" $all.ListenPorts.HTTPS) }} return {{ $all.Cfg.HTTPRedirectCode }} $scheme://{{ $redirect.To }}{{ $redirect_port }}$request_uri; {{ else }} return {{ $all.Cfg.HTTPRedirectCode }} $scheme://{{ $redirect.To }}$request_uri; {{ end }} } ## end server {{ $redirect.From }} {{ end }} {{ range $server := $servers }} ## start server {{ $server.Hostname }} server { server_name {{ $server.Hostname }} {{range $server.Aliases }}{{ . 
}} {{ end }}; {{ if gt (len $cfg.BlockUserAgents) 0 }} if ($block_ua) { return 403; } {{ end }} {{ if gt (len $cfg.BlockReferers) 0 }} if ($block_ref) { return 403; } {{ end }} {{ if $cfg.DisableAccessLog }} access_log off; {{ else }} {{ if $cfg.EnableSyslog }} access_log syslog:server={{ $cfg.SyslogHost }}:{{ $cfg.SyslogPort }} upstreaminfo if=$loggable; {{ else }} access_log /var/log/nginx/{{ $server.Hostname }}.log upstreaminfo {{ $cfg.AccessLogParams }} if=$loggable; {{ end }} {{ end }} {{ template "SERVER" serverConfig $all $server }} {{ if not (empty $cfg.ServerSnippet) }} # Custom code snippet configured in the configuration configmap {{ $cfg.ServerSnippet }} {{ end }} {{ template "CUSTOM_ERRORS" (buildCustomErrorDeps "upstream-default-backend" $cfg.CustomHTTPErrors $all.EnableMetrics) }} } ## end server {{ $server.Hostname }} {{ end }} # backend for when default-backend-service is not configured or it does not have endpoints server { listen {{ $all.ListenPorts.Default }} default_server {{ if $all.Cfg.ReusePort }}reuseport{{ end }} backlog={{ $all.BacklogSize }}; {{ if $IsIPV6Enabled }}listen [::]:{{ $all.ListenPorts.Default }} default_server {{ if $all.Cfg.ReusePort }}reuseport{{ end }} backlog={{ $all.BacklogSize }};{{ end }} set $proxy_upstream_name "internal"; location / { return 404; } } # default server, used for NGINX healthcheck and access to nginx stats server { listen 127.0.0.1:{{ .StatusPort }}; set $proxy_upstream_name "internal"; keepalive_timeout 0; gzip off; access_log off; {{ if $cfg.EnableOpentracing }} opentracing off; {{ end }} location {{ $healthzURI }} { return 200; } location /is-dynamic-lb-initialized { content_by_lua_block { local configuration = require("configuration") local backend_data = configuration.get_backends_data() if not backend_data then ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR) return end ngx.say("OK") ngx.exit(ngx.HTTP_OK) } } location {{ .StatusPath }} { stub_status on; } location /configuration { client_max_body_size {{ luaConfigurationRequestBodySize $cfg }}m; client_body_buffer_size {{ luaConfigurationRequestBodySize $cfg }}m; proxy_buffering off; content_by_lua_block { configuration.call() } } location / { content_by_lua_block { ngx.exit(ngx.HTTP_NOT_FOUND) } } } } stream { lua_package_path "/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/?.lua;;"; lua_shared_dict tcp_udp_configuration_data 5M; init_by_lua_block { collectgarbage("collect") -- init modules local ok, res ok, res = pcall(require, "configuration") if not ok then error("require failed: " .. tostring(res)) else configuration = res end ok, res = pcall(require, "tcp_udp_configuration") if not ok then error("require failed: " .. tostring(res)) else tcp_udp_configuration = res end ok, res = pcall(require, "tcp_udp_balancer") if not ok then error("require failed: " .. 
tostring(res)) else tcp_udp_balancer = res end } init_worker_by_lua_block { tcp_udp_balancer.init_worker() } lua_add_variable $proxy_upstream_name; log_format log_stream '{{ $cfg.LogFormatStream }}'; {{ if $cfg.DisableAccessLog }} access_log off; {{ else }} access_log {{ or $cfg.StreamAccessLogPath $cfg.AccessLogPath }} log_stream {{ $cfg.AccessLogParams }}; {{ end }} error_log {{ $cfg.ErrorLogPath }}; upstream upstream_balancer { server 0.0.0.1:1234; # placeholder balancer_by_lua_block { tcp_udp_balancer.balance() } } server { listen 127.0.0.1:{{ .StreamPort }}; access_log off; content_by_lua_block { tcp_udp_configuration.call() } } # TCP services {{ range $tcpServer := .TCPBackends }} server { preread_by_lua_block { ngx.var.proxy_upstream_name="tcp-{{ $tcpServer.Backend.Namespace }}-{{ $tcpServer.Backend.Name }}-{{ $tcpServer.Backend.Port }}"; } {{ range $address := $all.Cfg.BindAddressIpv4 }} listen {{ $address }}:{{ $tcpServer.Port }}{{ if $tcpServer.Backend.ProxyProtocol.Decode }} proxy_protocol{{ end }}; {{ else }} listen {{ $tcpServer.Port }}{{ if $tcpServer.Backend.ProxyProtocol.Decode }} proxy_protocol{{ end }}; {{ end }} {{ if $IsIPV6Enabled }} {{ range $address := $all.Cfg.BindAddressIpv6 }} listen {{ $address }}:{{ $tcpServer.Port }}{{ if $tcpServer.Backend.ProxyProtocol.Decode }} proxy_protocol{{ end }}; {{ else }} listen [::]:{{ $tcpServer.Port }}{{ if $tcpServer.Backend.ProxyProtocol.Decode }} proxy_protocol{{ end }}; {{ end }} {{ end }} proxy_timeout {{ $cfg.ProxyStreamTimeout }}; proxy_pass upstream_balancer; {{ if $tcpServer.Backend.ProxyProtocol.Encode }} proxy_protocol on; {{ end }} } {{ end }} # UDP services {{ range $udpServer := .UDPBackends }} server { preread_by_lua_block { ngx.var.proxy_upstream_name="udp-{{ $udpServer.Backend.Namespace }}-{{ $udpServer.Backend.Name }}-{{ $udpServer.Backend.Port }}"; } {{ range $address := $all.Cfg.BindAddressIpv4 }} listen {{ $address }}:{{ $udpServer.Port }} udp; {{ else }} listen {{ $udpServer.Port }} udp; {{ end }} {{ if $IsIPV6Enabled }} {{ range $address := $all.Cfg.BindAddressIpv6 }} listen {{ $address }}:{{ $udpServer.Port }} udp; {{ else }} listen [::]:{{ $udpServer.Port }} udp; {{ end }} {{ end }} proxy_responses {{ $cfg.ProxyStreamResponses }}; proxy_timeout {{ $cfg.ProxyStreamTimeout }}; proxy_pass upstream_balancer; } {{ end }} } {{/* definition of templates to avoid repetitions */}} {{ define "CUSTOM_ERRORS" }} {{ $enableMetrics := .EnableMetrics }} {{ $upstreamName := .UpstreamName }} {{ range $errCode := .ErrorCodes }} location @custom_{{ $upstreamName }}_{{ $errCode }} { internal; proxy_intercept_errors off; proxy_set_header X-Code {{ $errCode }}; proxy_set_header X-Format $http_accept; proxy_set_header X-Original-URI $request_uri; proxy_set_header X-Namespace $namespace; proxy_set_header X-Ingress-Name $ingress_name; proxy_set_header X-Service-Name $service_name; proxy_set_header X-Service-Port $service_port; proxy_set_header X-Request-ID $req_id; proxy_set_header Host $best_http_host; set $proxy_upstream_name {{ $upstreamName | quote }}; rewrite (.*) / break; proxy_pass http://upstream_balancer; log_by_lua_block { {{ if $enableMetrics }} monitor.call() {{ end }} } } {{ end }} {{ end }} {{/* CORS support from https://michielkalkman.com/snippets/nginx-cors-open-configuration.html */}} {{ define "CORS" }} {{ $cors := .CorsConfig }} # Cors Preflight methods needs additional options and different Return Code if ($request_method = 'OPTIONS') { more_set_headers 'Access-Control-Allow-Origin: {{ $cors.CorsAllowOrigin 
}}'; {{ if $cors.CorsAllowCredentials }} more_set_headers 'Access-Control-Allow-Credentials: {{ $cors.CorsAllowCredentials }}'; {{ end }} more_set_headers 'Access-Control-Allow-Methods: {{ $cors.CorsAllowMethods }}'; more_set_headers 'Access-Control-Allow-Headers: {{ $cors.CorsAllowHeaders }}'; more_set_headers 'Access-Control-Max-Age: {{ $cors.CorsMaxAge }}'; more_set_headers 'Content-Type: text/plain charset=UTF-8'; more_set_headers 'Content-Length: 0'; return 204; } more_set_headers 'Access-Control-Allow-Origin: {{ $cors.CorsAllowOrigin }}'; {{ if $cors.CorsAllowCredentials }} more_set_headers 'Access-Control-Allow-Credentials: {{ $cors.CorsAllowCredentials }}'; {{ end }} more_set_headers 'Access-Control-Allow-Methods: {{ $cors.CorsAllowMethods }}'; more_set_headers 'Access-Control-Allow-Headers: {{ $cors.CorsAllowHeaders }}'; {{ end }} {{/* definition of server-template to avoid repetitions with server-alias */}} {{ define "SERVER" }} {{ $all := .First }} {{ $server := .Second }} {{ buildHTTPListener $all $server.Hostname }} {{ buildHTTPSListener $all $server.Hostname }} set $proxy_upstream_name "-"; ssl_certificate_by_lua_block { certificate.call() } {{ if not (empty $server.AuthTLSError) }} # {{ $server.AuthTLSError }} return 403; {{ else }} {{ if not (empty $server.CertificateAuth.CAFileName) }} # PEM sha: {{ $server.CertificateAuth.CASHA }} ssl_client_certificate {{ $server.CertificateAuth.CAFileName }}; ssl_verify_client {{ $server.CertificateAuth.VerifyClient }}; ssl_verify_depth {{ $server.CertificateAuth.ValidationDepth }}; {{ if not (empty $server.CertificateAuth.CRLFileName) }} # PEM sha: {{ $server.CertificateAuth.CRLSHA }} ssl_crl {{ $server.CertificateAuth.CRLFileName }}; {{ end }} {{ if not (empty $server.CertificateAuth.ErrorPage)}} error_page 495 496 = {{ $server.CertificateAuth.ErrorPage }}; {{ end }} {{ end }} {{ if not (empty $server.ProxySSL.CAFileName) }} # PEM sha: {{ $server.ProxySSL.CASHA }} proxy_ssl_trusted_certificate {{ $server.ProxySSL.CAFileName }}; proxy_ssl_ciphers {{ $server.ProxySSL.Ciphers }}; proxy_ssl_protocols {{ $server.ProxySSL.Protocols }}; proxy_ssl_verify {{ $server.ProxySSL.Verify }}; proxy_ssl_verify_depth {{ $server.ProxySSL.VerifyDepth }}; {{ if not (empty $server.ProxySSL.ProxySSLName) }} proxy_ssl_name {{ $server.ProxySSL.ProxySSLName }}; {{ end }} {{ end }} {{ if not (empty $server.ProxySSL.PemFileName) }} proxy_ssl_certificate {{ $server.ProxySSL.PemFileName }}; proxy_ssl_certificate_key {{ $server.ProxySSL.PemFileName }}; {{ end }} {{ if not (empty $server.SSLCiphers) }} ssl_ciphers {{ $server.SSLCiphers }}; {{ end }} {{ if not (empty $server.SSLPreferServerCiphers) }} ssl_prefer_server_ciphers {{ $server.SSLPreferServerCiphers }}; {{ end }} {{ if not (empty $server.ServerSnippet) }} {{ $server.ServerSnippet }} {{ end }} {{ range $errorLocation := (buildCustomErrorLocationsPerServer $server) }} {{ template "CUSTOM_ERRORS" (buildCustomErrorDeps $errorLocation.UpstreamName $errorLocation.Codes $all.EnableMetrics) }} {{ end }} {{ buildMirrorLocations $server.Locations }} {{ $enforceRegex := enforceRegexModifier $server.Locations }} {{ range $location := $server.Locations }} {{ $path := buildLocation $location $enforceRegex }} {{ $proxySetHeader := proxySetHeader $location }} {{ $authPath := buildAuthLocation $location $all.Cfg.GlobalExternalAuth.URL }} {{ $applyGlobalAuth := shouldApplyGlobalAuth $location $all.Cfg.GlobalExternalAuth.URL }} {{ $externalAuth := $location.ExternalAuth }} {{ if eq $applyGlobalAuth true }} {{ $externalAuth = 
$all.Cfg.GlobalExternalAuth }} {{ end }} {{ if not (empty $location.Rewrite.AppRoot) }} if ($uri = /) { return 302 $scheme://$http_host{{ $location.Rewrite.AppRoot }}; } {{ end }} {{ if $authPath }} location = {{ $authPath }} { internal; {{ if $all.Cfg.EnableOpentracing }} opentracing on; opentracing_propagate_context; {{ end }} {{ if $externalAuth.AuthCacheKey }} set $tmp_cache_key '{{ $server.Hostname }}{{ $authPath }}{{ $externalAuth.AuthCacheKey }}'; set $cache_key ''; rewrite_by_lua_block { ngx.var.cache_key = ngx.encode_base64(ngx.sha1_bin(ngx.var.tmp_cache_key)) } proxy_cache auth_cache; {{- range $dur := $externalAuth.AuthCacheDuration }} proxy_cache_valid {{ $dur }}; {{- end }} proxy_cache_key "$cache_key"; {{ end }} # ngx_auth_request module overrides variables in the parent request, # therefore we have to explicitly set this variable again so that when the parent request # resumes it has the correct value set for this variable so that Lua can pick backend correctly set $proxy_upstream_name {{ buildUpstreamName $location | quote }}; proxy_pass_request_body off; proxy_set_header Content-Length ""; proxy_set_header X-Forwarded-Proto ""; proxy_set_header X-Request-ID $req_id; {{ if $externalAuth.Method }} proxy_method {{ $externalAuth.Method }}; proxy_set_header X-Original-URI $request_uri; proxy_set_header X-Scheme $pass_access_scheme; {{ end }} proxy_set_header Host {{ $externalAuth.Host }}; proxy_set_header X-Original-URL $scheme://$http_host$request_uri; proxy_set_header X-Original-Method $request_method; proxy_set_header X-Sent-From "nginx-ingress-controller"; proxy_set_header X-Real-IP $remote_addr; {{ if and $all.Cfg.UseForwardedHeaders $all.Cfg.ComputeFullForwardedFor }} proxy_set_header X-Forwarded-For $full_x_forwarded_for; {{ else }} proxy_set_header X-Forwarded-For $remote_addr; {{ end }} {{ if $externalAuth.RequestRedirect }} proxy_set_header X-Auth-Request-Redirect {{ $externalAuth.RequestRedirect }}; {{ else }} proxy_set_header X-Auth-Request-Redirect $request_uri; {{ end }} {{ if $externalAuth.AuthCacheKey }} proxy_buffering "on"; {{ else }} proxy_buffering {{ $location.Proxy.ProxyBuffering }}; {{ end }} proxy_buffer_size {{ $location.Proxy.BufferSize }}; proxy_buffers {{ $location.Proxy.BuffersNumber }} {{ $location.Proxy.BufferSize }}; proxy_request_buffering {{ $location.Proxy.RequestBuffering }}; proxy_http_version {{ $location.Proxy.ProxyHTTPVersion }}; proxy_ssl_server_name on; proxy_pass_request_headers on; {{ if isValidByteSize $location.Proxy.BodySize true }} client_max_body_size {{ $location.Proxy.BodySize }}; {{ end }} {{ if isValidByteSize $location.ClientBodyBufferSize false }} client_body_buffer_size {{ $location.ClientBodyBufferSize }}; {{ end }} # Pass the extracted client certificate to the auth provider {{ if not (empty $server.CertificateAuth.CAFileName) }} {{ if $server.CertificateAuth.PassCertToUpstream }} proxy_set_header ssl-client-cert $ssl_client_escaped_cert; {{ end }} proxy_set_header ssl-client-verify $ssl_client_verify; proxy_set_header ssl-client-subject-dn $ssl_client_s_dn; proxy_set_header ssl-client-issuer-dn $ssl_client_i_dn; {{ end }} {{- range $line := buildAuthProxySetHeaders $externalAuth.ProxySetHeaders}} {{ $line }} {{- end }} {{ if not (empty $externalAuth.AuthSnippet) }} {{ $externalAuth.AuthSnippet }} {{ end }} set $target {{ $externalAuth.URL }}; proxy_pass $target; } {{ end }} {{ if isLocationAllowed $location }} {{ if $externalAuth.SigninURL }} location {{ buildAuthSignURLLocation $location.Path $externalAuth.SigninURL }} 
{ internal; add_header Set-Cookie $auth_cookie; return 302 {{ buildAuthSignURL $externalAuth.SigninURL }}; } {{ end }} {{ end }} location {{ $path }} { {{ $ing := (getIngressInformation $location.Ingress $server.Hostname $location.Path) }} set $namespace {{ $ing.Namespace | quote}}; set $ingress_name {{ $ing.Rule | quote }}; set $service_name {{ $ing.Service | quote }}; set $service_port {{ $ing.ServicePort | quote }}; set $location_path {{ $location.Path | escapeLiteralDollar | quote }}; {{ buildOpentracingForLocation $all.Cfg.EnableOpentracing $location }} {{ if $location.Mirror.Source }} mirror {{ $location.Mirror.Source }}; mirror_request_body {{ $location.Mirror.RequestBody }}; {{ end }} rewrite_by_lua_block { lua_ingress.rewrite({{ locationConfigForLua $location $all }}) balancer.rewrite() plugins.run() } # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)` # other authentication method such as basic auth or external auth useless - all requests will be allowed. #access_by_lua_block { #} header_filter_by_lua_block { lua_ingress.header() plugins.run() } body_filter_by_lua_block { } log_by_lua_block { balancer.log() {{ if $all.EnableMetrics }} monitor.call() {{ end }} plugins.run() } {{ if not $location.Logs.Access }} access_log off; {{ end }} {{ if $location.Logs.Rewrite }} rewrite_log on; {{ end }} {{ if $location.HTTP2PushPreload }} http2_push_preload on; {{ end }} port_in_redirect {{ if $location.UsePortInRedirects }}on{{ else }}off{{ end }}; set $balancer_ewma_score -1; set $proxy_upstream_name {{ buildUpstreamName $location | quote }}; set $proxy_host $proxy_upstream_name; set $pass_access_scheme $scheme; {{ if $all.Cfg.UseProxyProtocol }} set $pass_server_port $proxy_protocol_server_port; {{ else }} set $pass_server_port $server_port; {{ end }} set $best_http_host $http_host; set $pass_port $pass_server_port; set $proxy_alternative_upstream_name ""; {{ buildModSecurityForLocation $all.Cfg $location }} {{ if isLocationAllowed $location }} {{ if gt (len $location.Whitelist.CIDR) 0 }} {{ range $ip := $location.Whitelist.CIDR }} allow {{ $ip }};{{ end }} deny all; {{ end }} {{ if not (isLocationInLocationList $location $all.Cfg.NoAuthLocations) }} {{ if $authPath }} # this location requires authentication auth_request {{ $authPath }}; auth_request_set $auth_cookie $upstream_http_set_cookie; add_header Set-Cookie $auth_cookie; {{- range $line := buildAuthResponseHeaders $externalAuth.ResponseHeaders }} {{ $line }} {{- end }} {{ end }} {{ if $externalAuth.SigninURL }} set_escape_uri $escaped_request_uri $request_uri; error_page 401 = {{ buildAuthSignURLLocation $location.Path $externalAuth.SigninURL }}; {{ end }} {{ if $location.BasicDigestAuth.Secured }} {{ if eq $location.BasicDigestAuth.Type "basic" }} auth_basic {{ $location.BasicDigestAuth.Realm | quote }}; auth_basic_user_file {{ $location.BasicDigestAuth.File }}; {{ else }} auth_digest {{ $location.BasicDigestAuth.Realm | quote }}; auth_digest_user_file {{ $location.BasicDigestAuth.File }}; {{ end }} proxy_set_header Authorization ""; {{ end }} {{ end }} {{/* if the location contains a rate limit annotation, create one */}} {{ $limits := buildRateLimit $location }} {{ range $limit := $limits }} {{ $limit }}{{ end }} {{ if $location.CorsConfig.CorsEnabled }} {{ template "CORS" $location }} {{ end }} {{ buildInfluxDB $location.InfluxDB }} {{ if isValidByteSize 
$location.Proxy.BodySize true }} client_max_body_size {{ $location.Proxy.BodySize }}; {{ end }} {{ if isValidByteSize $location.ClientBodyBufferSize false }} client_body_buffer_size {{ $location.ClientBodyBufferSize }}; {{ end }} {{/* By default use vhost as Host to upstream, but allow overrides */}} {{ if not (eq $proxySetHeader "grpc_set_header") }} {{ if not (empty $location.UpstreamVhost) }} {{ $proxySetHeader }} Host {{ $location.UpstreamVhost | quote }}; {{ else }} {{ $proxySetHeader }} Host $best_http_host; {{ end }} {{ end }} # Pass the extracted client certificate to the backend {{ if not (empty $server.CertificateAuth.CAFileName) }} {{ if $server.CertificateAuth.PassCertToUpstream }} {{ $proxySetHeader }} ssl-client-cert $ssl_client_escaped_cert; {{ end }} {{ $proxySetHeader }} ssl-client-verify $ssl_client_verify; {{ $proxySetHeader }} ssl-client-subject-dn $ssl_client_s_dn; {{ $proxySetHeader }} ssl-client-issuer-dn $ssl_client_i_dn; {{ end }} # Allow websocket connections {{ $proxySetHeader }} Upgrade $http_upgrade; {{ if $location.Connection.Enabled}} {{ $proxySetHeader }} Connection {{ $location.Connection.Header }}; {{ else }} {{ $proxySetHeader }} Connection $connection_upgrade; {{ end }} {{ $proxySetHeader }} X-Request-ID $req_id; {{ $proxySetHeader }} X-Real-IP $remote_addr; {{ if and $all.Cfg.UseForwardedHeaders $all.Cfg.ComputeFullForwardedFor }} {{ $proxySetHeader }} X-Forwarded-For $full_x_forwarded_for; {{ $proxySetHeader }} X-Forwarded-Proto $full_x_forwarded_proto; {{ else }} {{ $proxySetHeader }} X-Forwarded-For $remote_addr; {{ end }} {{ $proxySetHeader }} X-Forwarded-Host $best_http_host; {{ $proxySetHeader }} X-Forwarded-Port $pass_port; {{ $proxySetHeader }} X-Forwarded-Proto $pass_access_scheme; {{ if $all.Cfg.ProxyAddOriginalURIHeader }} {{ $proxySetHeader }} X-Original-URI $request_uri; {{ end }} {{ $proxySetHeader }} X-Scheme $pass_access_scheme; # Pass the original X-Forwarded-For {{ $proxySetHeader }} X-Original-Forwarded-For {{ buildForwardedFor $all.Cfg.ForwardedForHeader }}; # mitigate HTTPoxy Vulnerability # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/ {{ $proxySetHeader }} Proxy ""; # Custom headers to proxied server {{ range $k, $v := $all.ProxySetHeaders }} {{ $proxySetHeader }} {{ $k }} {{ $v | quote }}; {{ end }} proxy_connect_timeout {{ $location.Proxy.ConnectTimeout }}s; proxy_send_timeout {{ $location.Proxy.SendTimeout }}s; proxy_read_timeout {{ $location.Proxy.ReadTimeout }}s; proxy_buffering {{ $location.Proxy.ProxyBuffering }}; proxy_buffer_size {{ $location.Proxy.BufferSize }}; proxy_buffers {{ $location.Proxy.BuffersNumber }} {{ $location.Proxy.BufferSize }}; {{ if isValidByteSize $location.Proxy.ProxyMaxTempFileSize true }} proxy_max_temp_file_size {{ $location.Proxy.ProxyMaxTempFileSize }}; {{ end }} proxy_request_buffering {{ $location.Proxy.RequestBuffering }}; proxy_http_version {{ $location.Proxy.ProxyHTTPVersion }}; proxy_cookie_domain {{ $location.Proxy.CookieDomain }}; proxy_cookie_path {{ $location.Proxy.CookiePath }}; # In case of errors try the next upstream server before returning an error proxy_next_upstream {{ buildNextUpstream $location.Proxy.NextUpstream $all.Cfg.RetryNonIdempotent }}; proxy_next_upstream_timeout {{ $location.Proxy.NextUpstreamTimeout }}; proxy_next_upstream_tries {{ $location.Proxy.NextUpstreamTries }}; {{/* Add any additional configuration defined */}} {{ $location.ConfigurationSnippet }} {{ if not (empty $all.Cfg.LocationSnippet) }} # Custom code snippet configured in 
the configuration configmap {{ $all.Cfg.LocationSnippet }} {{ end }} {{/* if we are sending the request to a custom default backend, we add the required headers */}} {{ if (hasPrefix $location.Backend "custom-default-backend-") }} proxy_set_header X-Code 503; proxy_set_header X-Format $http_accept; proxy_set_header X-Namespace $namespace; proxy_set_header X-Ingress-Name $ingress_name; proxy_set_header X-Service-Name $service_name; proxy_set_header X-Service-Port $service_port; proxy_set_header X-Request-ID $req_id; {{ end }} {{ if $location.Satisfy }} satisfy {{ $location.Satisfy }}; {{ end }} {{/* if a location-specific error override is set, add the proxy_intercept here */}} {{ if $location.CustomHTTPErrors }} # Custom error pages per ingress proxy_intercept_errors on; {{ end }} {{ range $errCode := $location.CustomHTTPErrors }} error_page {{ $errCode }} = @custom_{{ $location.DefaultBackendUpstreamName }}_{{ $errCode }};{{ end }} {{ if (eq $location.BackendProtocol "FCGI") }} include /etc/nginx/fastcgi_params; {{ end }} {{- if $location.FastCGI.Index -}} fastcgi_index {{ $location.FastCGI.Index | quote }}; {{- end -}} {{ range $k, $v := $location.FastCGI.Params }} fastcgi_param {{ $k }} {{ $v | quote }}; {{ end }} {{ if not (empty $location.Redirect.URL) }} return {{ $location.Redirect.Code }} {{ $location.Redirect.URL }}; {{ end }} {{ buildProxyPass $server.Hostname $all.Backends $location }} {{ if (or (eq $location.Proxy.ProxyRedirectFrom "default") (eq $location.Proxy.ProxyRedirectFrom "off")) }} proxy_redirect {{ $location.Proxy.ProxyRedirectFrom }}; {{ else if not (eq $location.Proxy.ProxyRedirectTo "off") }} proxy_redirect {{ $location.Proxy.ProxyRedirectFrom }} {{ $location.Proxy.ProxyRedirectTo }}; {{ end }} {{ else }} # Location denied. Reason: {{ $location.Denied | quote }} return 503; {{ end }} {{ if not (empty $location.ProxySSL.CAFileName) }} # PEM sha: {{ $location.ProxySSL.CASHA }} proxy_ssl_trusted_certificate {{ $location.ProxySSL.CAFileName }}; proxy_ssl_ciphers {{ $location.ProxySSL.Ciphers }}; proxy_ssl_protocols {{ $location.ProxySSL.Protocols }}; proxy_ssl_verify {{ $location.ProxySSL.Verify }}; proxy_ssl_verify_depth {{ $location.ProxySSL.VerifyDepth }}; {{ end }} {{ if not (empty $location.ProxySSL.ProxySSLName) }} proxy_ssl_name {{ $location.ProxySSL.ProxySSLName }}; {{ end }} {{ if not (empty $location.ProxySSL.PemFileName) }} proxy_ssl_certificate {{ $location.ProxySSL.PemFileName }}; proxy_ssl_certificate_key {{ $location.ProxySSL.PemFileName }}; {{ end }} } {{ end }} {{ end }} {{ if eq $server.Hostname "_" }} # health checks in cloud providers require the use of port {{ $all.ListenPorts.HTTP }} location {{ $all.HealthzURI }} { {{ if $all.Cfg.EnableOpentracing }} opentracing off; {{ end }} access_log off; return 200; } # this is required to avoid error if nginx is being monitored # with an external software (like sysdig) location /nginx_status { {{ if $all.Cfg.EnableOpentracing }} opentracing off; {{ end }} {{ range $v := $all.NginxStatusIpv4Whitelist }} allow {{ $v }}; {{ end }} {{ if $all.IsIPV6Enabled -}} {{ range $v := $all.NginxStatusIpv6Whitelist }} allow {{ $v }}; {{ end }} {{ end -}} deny all; access_log off; stub_status on; } {{ end }} {{ end }}

# Create the ConfigMap with the following command
<root@PROD-K8S-CP1 ~># kubectl apply -f ingress-configmap.yaml
configmap/nginx-tmpl created
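Before wiring the template into the controller, it is worth confirming that the ConfigMap exists and actually carries the full template data. A minimal check, assuming the ConfigMap is named nginx-tmpl in the ingress-nginx namespace as shown in the output above:

# Confirm the ConfigMap exists
kubectl -n ingress-nginx get configmap nginx-tmpl

# Show its keys and the beginning of the stored template
kubectl -n ingress-nginx describe configmap nginx-tmpl | head -n 30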
- On the ingress nodes, create the matching user and log directory (uid/gid 101 matches the www-data user that nginx runs as inside the controller image, so the hostPath log directory stays writable)
<root@PROD-FE-K8S-WN1 ~># mkdir /var/log/nginx
<root@PROD-FE-K8S-WN1 ~># groupadd -g 101 www-data
<root@PROD-FE-K8S-WN1 ~># useradd -u 101 -g 101 -G www-data www-data
<root@PROD-FE-K8S-WN1 ~># id www-data
uid=101(www-data) gid=101(www-data) groups=101(www-data)
<root@PROD-FE-K8S-WN1 ~># chown -R www-data:www-data /var/log/nginx/

# Shortcut: the same steps without the prompt
mkdir /var/log/nginx
groupadd -g 101 www-data
useradd -u 101 -g 101 -G www-data www-data
chown -R www-data:www-data /var/log/nginx/
id www-data
- Mount the ConfigMap created above and the hostPath directories into the ingress controller
spec:
  volumes:
    - name: webhook-cert
      secret:
        secretName: ingress-nginx-admission
        defaultMode: 420
    - name: nginx-tmpl
      configMap:
        name: nginx-tmpl
        defaultMode: 420
    - name: nginxlog
      hostPath:
        path: /var/log/nginx
        type: Directory
  ...
      volumeMounts:
        - name: webhook-cert
          readOnly: true
          mountPath: /usr/local/certificates/
        - name: nginx-tmpl
          readOnly: true
          mountPath: /etc/nginx/template
        - name: nginxlog
          mountPath: /var/log/nginx
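Once the controller pods restart with these volumes, a quick sanity check is to confirm that the custom template and the log directory are visible inside a controller pod, and that the hostPath on the node is owned by the uid 101 www-data user created earlier. This is only a sketch: the label selector assumes the standard chart labels, and the paths match the mounts above.

# Resolve one controller pod (assumes the usual app.kubernetes.io/component=controller label)
POD=$(kubectl -n ingress-nginx get pods -l app.kubernetes.io/component=controller -o name | head -n 1)

# The custom template should be mounted read-only inside the container
kubectl -n ingress-nginx exec "$POD" -- ls -l /etc/nginx/template

# The log directory should be present and writable by the nginx worker (uid 101)
kubectl -n ingress-nginx exec "$POD" -- ls -ld /var/log/nginx

# On the ingress node itself, the hostPath should be owned by www-data (uid 101)
ls -ld /var/log/nginx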
- Fix the time zone (mount the node's /etc/localtime into the controller)
volumes:
  ....
  - name: localtime
    hostPath:
      path: /etc/localtime
      type: ""
----------------------------------------------
volumeMounts:
  ...
  - name: localtime
    mountPath: /etc/localtime
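With /etc/localtime mounted from the host, timestamps inside the controller should match the node's local time. A quick way to verify, resolving the pod the same way as in the previous check:

# Compare the time reported inside the controller pod with the node's clock
POD=$(kubectl -n ingress-nginx get pods -l app.kubernetes.io/component=controller -o name | head -n 1)
kubectl -n ingress-nginx exec "$POD" -- date
date   # run on the ingress node for comparison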