Hi everyone, I'm Boge (博哥爱运维). This lesson covers Ingress, the traffic entry point of Kubernetes. As the public-facing entrance for your services, its importance is self-evident, so please read carefully and follow the hands-on steps in this tutorial to really understand it. (Course videos are also shared on Toutiao and Bilibili.)
Ingress basics

In a Kubernetes cluster, Ingress is the access point through which in-cluster services are exposed externally; it carries almost all traffic destined for services inside the cluster. Ingress is a Kubernetes resource object that manages how external traffic reaches services within the cluster. Through Ingress resources you configure forwarding rules, so that requests matching different rules reach different Services and thus their backend Pods.

An Ingress resource only supports rules for HTTP traffic; advanced features such as load-balancing algorithms or session affinity cannot be expressed in the Ingress itself and must be configured on the Ingress Controller.

How an Ingress Controller works

For Ingress resources to take effect, the cluster must run an Ingress Controller that interprets the Ingress forwarding rules. On receiving a request, the Ingress Controller matches it against the Ingress rules and forwards it to the corresponding backend Service; the Service forwards to a Pod, and the Pod finally handles the request. In Kubernetes, Service, Ingress, and Ingress Controller relate as follows:

A Service is an abstraction over the real backends; one Service can represent multiple identical backend instances. An Ingress is a set of reverse-proxy rules specifying which Service an HTTP/HTTPS request should be forwarded to, for example routing requests to different Services based on Host and URL path. An Ingress Controller is a reverse-proxy program responsible for interpreting those rules: whenever an Ingress is created, updated, or deleted, the controller promptly refreshes its forwarding rules, and when it receives a request it forwards it to the matching Service per those rules.

The Ingress Controller watches Ingress resources through the API Server, dynamically renders the load balancer's configuration file (for Nginx, nginx.conf), and then reloads the load balancer (e.g. `nginx -s reload`) to activate the new routing rules.
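To make the rule model concrete, here is a minimal Ingress sketch that routes two hosts to two different Services. The host names and Service names (api-svc, web-svc) are hypothetical; note that the Ingress object only declares rules, while the controller does the actual proxying:

```yaml
# Hypothetical example: route by Host header to two Services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-routing
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com        # requests with this Host header...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-svc        # ...go to this Service's Pods
            port:
              number: 80
  - host: web.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
```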
This lesson uses a lot of YAML, and I've found that Toutiao mangles its formatting, so I keep a text copy of these notes on my GitHub; copy the YAML from there when creating the services:
https://github.com/bogeit/LearnK8s/blob/main/2023/%E7%AC%AC12%E5%85%B3%20%20%E7%94%9F%E4%BA%A7%E7%8E%AF%E5%A2%83%E4%B8%8B%E7%9A%84%E6%B5%81%E9%87%8F%E5%85%A5%E5%8F%A3%E6%8E%A7%E5%88%B6%E5%99%A8Ingress-nginx-controller.md
Earlier we learned to access Pods through a Service, and to expose a Service outside the cluster by changing its type to NodePort and mapping a public IP's port to it, but that is not an elegant approach.

Normally Services and Pods are reachable only from the cluster network; all traffic arriving at the edge router is dropped or forwarded elsewhere.

Conceptually that looks like:

internet
    |
------------
[ Services ]

We could also expose external traffic with a LoadBalancer, but in real production that model is inconvenient: every service would then need its own load balancer and its own public IP.

So we use Ingress instead: a single public IP can serve every service in the cluster. Ingress works at layer 7 (HTTP) and decides which service each request goes to based on its host name and path, as shown in the diagram below. An Ingress is a set of rules that allow inbound connections to reach cluster services, i.e. a set of forwarding rules sitting between the physical network and the cluster's Services.

In effect it gives us L4/L7 load balancing:

Note that Ingress does not push external traffic through the Service to the service Pods; it only uses the Service to find the corresponding Endpoints, discover the Pods, and forward to them directly:

internet
    |
[ Ingress ] --- [ Services ] --- [ Endpoints ]
    |
[ Pod, Pod, ... ]

To use Ingress on Kubernetes we must deploy an Ingress controller; only with one running in the cluster can Ingress resources work. There are many controllers, e.g. Traefik, but here we will use ingress-nginx, which is built on OpenResty (Nginx combined with Lua rules).

Here comes the key point. Throughout this course I keep stressing that everything is based on production experience, so the ingress-nginx we use is not the plain community build but Alibaba Cloud's fork of the community version. It has been battle-tested by the huge production traffic of China's largest cloud platform and heavily customized, which makes it more production-ready, essentially usable out of the box. Below is an excerpt of the aliyun-ingress-controller introduction (only the newest part; see the official docs for more): https://help.aliyun.com/zh/ack/product-overview/nginx-ingress-controller#title-ek8-hx4-hlm

Component introduction
Ingress basics

In a Kubernetes cluster, Ingress is the access point exposing in-cluster services externally and carries almost all traffic bound for services in the cluster. Ingress is a Kubernetes resource object that manages how external traffic reaches services inside the cluster; through Ingress resources you can configure different forwarding rules and thereby reach the backend Pods behind different Services.

How the Nginx Ingress Controller works

For Nginx Ingress resources to work, the cluster must run an Nginx Ingress Controller that interprets the Nginx Ingress forwarding rules. When it receives a request, the controller matches the Nginx Ingress rules and forwards the request to the Pods behind the matching Service, which handle it. In Kubernetes, Service, Nginx Ingress, and Nginx Ingress Controller relate as follows: a Service abstracts the real backends, and one Service can represent multiple identical backends; an Nginx Ingress holds the reverse-proxy rules deciding which Service's Pods an HTTP/HTTPS request goes to, e.g. routing by Host and URL path; the Nginx Ingress Controller is the cluster component that interprets those rules, updating its forwarding table whenever an Nginx Ingress is created, changed, or deleted, and forwarding incoming requests to the matching Service's Pods.

Changelog
October 2023
Version: v1.9.3-aliyun.1
Image: registry-cn-hangzhou.ack.aliyuncs.com/acs/aliyun-ingress-controller:v1.9.3-aliyun.1
Changed: 2023-10-24
Changes:

Important

For security reasons, this version disables all snippet annotations (e.g. nginx.ingress.kubernetes.io/configuration-snippet) by default. Enabling snippet annotations is not recommended due to security and stability risks; if you must use them, evaluate the risk first and enable them manually by adding allow-snippet-annotations: "true" to the ConfigMap kube-system/nginx-configuration. The --enable-annotation-validation flag was added and annotation validation is now on by default, mitigating CVE-2023-5044. CVE-2023-44487 was fixed. Impact: upgrade during off-peak hours; established connections may be briefly interrupted during the change.

One very important change in aliyun-ingress-controller is support for dynamic route updates. Nginx users know that after editing Nginx's configuration you must run `nginx -s reload` for it to take effect, and the same applies on Kubernetes. But a cluster runs a great many services, so its configuration changes very frequently; without dynamic updates, the resulting constant Nginx reloads cause noticeable request problems under a high rate of change:

- a certain amount of QPS jitter and failed requests
- long-lived connections are frequently cut, leaving many Nginx worker processes stuck in the shutting-down state, which in turn inflates memory usage
A detailed analysis is in this article: https://developer.aliyun.com/article/692732
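The reload-free mechanism can be sketched with OpenResty's balancer_by_lua: instead of writing upstream servers into nginx.conf (changing which requires a reload), the endpoint list is kept in Lua memory and a peer is picked per request. This is only an illustrative sketch of the idea with a hard-coded peer, not the controller's actual configuration:

```
# Illustrative OpenResty config sketch (not the controller's real nginx.conf)
upstream dynamic_backend {
    server 0.0.0.1;          # placeholder; never actually used
    balancer_by_lua_block {
        local balancer = require("ngx.balancer")
        -- In the real controller the peer list lives in Lua shared memory and
        -- is refreshed through an internal endpoint when Endpoints change,
        -- so no `nginx -s reload` is needed. Hard-coded here for illustration:
        local ok, err = balancer.set_current_peer("172.20.0.10", 80)
        if not ok then
            ngx.log(ngx.ERR, "failed to set peer: ", err)
        end
    }
}
```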
Now we are ready to deploy aliyun-ingress-controller. Below is the exact YAML used in production; save it as aliyun-ingress-nginx.yaml. Each part of the configuration is explained inline.

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: "1.9.3"
  name: ingress-nginx
  namespace: kube-system
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resourceNames:
  - ingress-controller-leader-nginx
  resources:
  - configmaps
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - coordination.k8s.io
  resourceNames:
  - ingress-controller-leader-nginx
  resources:
  - leases
  verbs:
  - get
  - update
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: "1.9.3"
  name: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: "1.9.3"
  name: ingress-nginx
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: "1.9.3"
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: kube-system

# https://www.cnblogs.com/dudu/p/12197646.html
#---
#apiVersion: monitoring.coreos.com/v1
#kind: ServiceMonitor
#metadata:
# labels:
# app: ingress-nginx
# name: nginx-ingress-scraping
# namespace: kube-system
#spec:
# endpoints:
# - interval: 30s
# path: /metrics
# port: metrics
# jobLabel: app
# namespaceSelector:
# matchNames:
# - ingress-nginx
# selector:
# matchLabels:
#     app: ingress-nginx

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ingress-nginx
  name: nginx-ingress-lb
  namespace: kube-system
spec:
  # DaemonSet need:
  # ----------------
  type: ClusterIP
  # ----------------
  # Deployment need:
  # ----------------
  # type: NodePort
  # ----------------
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  - name: metrics
    port: 10254
    protocol: TCP
    targetPort: 10254
  selector:
    app: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller-admission
  namespace: kube-system
spec:
  type: ClusterIP
  ports:
  - name: https-webhook
    port: 443
    targetPort: 8443
  selector:
    app: ingress-nginx
---
# all ConfigMap keys are documented at:
# https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: kube-system
  labels:
    app: ingress-nginx
data:
  allow-snippet-annotations: "true"
  allow-backend-server-header: "true"
  disable-access-log: "false"
  enable-underscores-in-headers: "true"
  generate-request-id: "true"
  ignore-invalid-headers: "true"
  keep-alive: "900"
  keep-alive-requests: "10000"
  large-client-header-buffers: 5 20k
  log-format-upstream: '{"timestamp": "$time_iso8601","remote_addr": "$remote_addr","x-forward-for": "$proxy_add_x_forwarded_for","request_id": "$req_id","remote_user": "$remote_user","bytes_sent": $bytes_sent,"request_time": $request_time,"status": $status,"vhost": "$host","request_proto": "$server_protocol","path": "$uri","request_query": "$args","request_length": $request_length,"duration": $request_time,"method": "$request_method","http_referrer": "$http_referer","http_user_agent": "$http_user_agent","upstream-sever":"$proxy_upstream_name","proxy_alternative_upstream_name":"$proxy_alternative_upstream_name","upstream_addr":"$upstream_addr","upstream_response_length":"$upstream_response_length","upstream_response_time":"$upstream_response_time","upstream_status":"$upstream_status"}'
  max-worker-connections: "65536"
  proxy-body-size: 20m
  proxy-connect-timeout: "10"
  proxy-read-timeout: "60"
  proxy-send-timeout: "60"
  reuse-port: "true"
  server-tokens: "false"
  ssl-redirect: "false"
  upstream-keepalive-connections: "300"
  upstream-keepalive-requests: "1000"
  upstream-keepalive-timeout: "900"
  worker-cpu-affinity: ""
  worker-processes: "1"
  http-redirect-code: "301"
  proxy_next_upstream: error timeout http_502
  ssl-ciphers: "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDHAESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA"
  ssl-protocols: TLSv1 TLSv1.1 TLSv1.2
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: kube-system
  labels:
    app: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: kube-system
  labels:
    app: ingress-nginx
---
apiVersion: apps/v1
kind: DaemonSet
#kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
  labels:
    app: ingress-nginx
  annotations:
    component.revision: "2"
    component.version: "1.9.3"
spec:
  # Deployment need:
  # ----------------
  # replicas: 1
  # ----------------
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # DaemonSet need:
      # ----------------
      hostNetwork: true
      # ----------------
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - ingress-nginx
              topologyKey: kubernetes.io/hostname
            weight: 100
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: type
                operator: NotIn
                values:
                - virtual-kubelet
              - key: k8s.aliyun.com
                operator: NotIn
                values:
                - "true"
      containers:
      - args:
        - /nginx-ingress-controller
        - --election-id=ingress-controller-leader-nginx
        - --ingress-class=nginx
        - --watch-ingress-without-class
        - --controller-class=k8s.io/ingress-nginx
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --annotations-prefix=nginx.ingress.kubernetes.io
        - --publish-service=$(POD_NAMESPACE)/nginx-ingress-lb
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        - --enable-metrics=false
        - --v=2
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LD_PRELOAD
          value: /usr/local/lib/libmimalloc.so
        image: registry-cn-hangzhou.ack.aliyuncs.com/acs/aliyun-ingress-controller:v1.9.3-aliyun.1
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /wait-shutdown
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
        name: nginx-ingress-controller
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        - name: https
          containerPort: 443
          protocol: TCP
        - name: webhook
          containerPort: 8443
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
#        resources:
#          limits:
#            cpu: 1
#            memory: 2G
#          requests:
#            cpu: 1
#            memory: 2G
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
          runAsUser: 101
          # if get "mount: mounting rw on /proc/sys failed: Permission denied", use:
          # privileged: true
          # procMount: Default
          # runAsUser: 0
        volumeMounts:
        - name: webhook-cert
          mountPath: /usr/local/certificates/
          readOnly: true
        - mountPath: /etc/localtime
          name: localtime
          readOnly: true
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - /bin/sh
        - -c
        - |
          if [ "$POD_IP" != "$HOST_IP" ]; then
            mount -o remount rw /proc/sys
            sysctl -w net.core.somaxconn=65535
            sysctl -w net.ipv4.ip_local_port_range="1024 65535"
            sysctl -w kernel.core_uses_pid=0
          fi
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: HOST_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.hostIP
        image: registry.cn-shanghai.aliyuncs.com/acs/busybox:v1.29.2
        imagePullPolicy: IfNotPresent
        name: init-sysctl
        resources:
          limits:
            cpu: 100m
            memory: 70Mi
          requests:
            cpu: 100m
            memory: 70Mi
        securityContext:
          capabilities:
            add:
            - SYS_ADMIN
            drop:
            - ALL
          # if get "mount: mounting rw on /proc/sys failed: Permission denied", use:
          privileged: true
          procMount: Default
          runAsUser: 0
      # only schedule onto nodes carrying this label:
      # kubectl label node xx.xx.xx.xx boge/ingress-controller-ready=true
      # kubectl get node --show-labels
      # kubectl label node xx.xx.xx.xx boge/ingress-controller-ready-
      nodeSelector:
        boge/ingress-controller-ready: "true"
      priorityClassName: system-node-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: ingress-nginx
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      # kubectl taint nodes xx.xx.xx.xx boge/ingress-controller-ready=true:NoExecute
      # kubectl taint nodes xx.xx.xx.xx boge/ingress-controller-ready:NoExecute-
      tolerations:
      - operator: Exists
#      tolerations:
#      - effect: NoExecute
#        key: boge/ingress-controller-ready
#        operator: Equal
#        value: "true"
      volumes:
      - name: webhook-cert
        secret:
          defaultMode: 420
          secretName: ingress-nginx-admission
      - hostPath:
          path: /etc/localtime
          type: File
        name: localtime
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-master
  namespace: kube-system
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    name: ingress-nginx
  name: ingress-nginx-admission
webhooks:
- name: validate.nginx.ingress.kubernetes.io
  matchPolicy: Equivalent
  rules:
  - apiGroups:
    - networking.k8s.io
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - ingresses
  failurePolicy: Fail
  sideEffects: None
  admissionReviewVersions:
  - v1
  clientConfig:
    service:
      namespace: kube-system
      name: ingress-nginx-controller-admission
      path: /networking/v1/ingresses
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
rules:
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - validatingwebhookconfigurations
  verbs:
  - get
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
  name: ingress-nginx-admission
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  namespace: kube-system
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
  name: ingress-nginx-admission
  namespace: kube-system
---
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  labels:
    name: ingress-nginx
  namespace: kube-system
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        name: ingress-nginx
    spec:
      containers:
      - name: create
#        image: registry-vpc.cn-hangzhou.aliyuncs.com/acs/kube-webhook-certgen:v1.1.1
        image: registry.cn-beijing.aliyuncs.com/acs/kube-webhook-certgen:v1.5.1
        imagePullPolicy: IfNotPresent
        args:
        - create
        - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
        - --namespace=$(POD_NAMESPACE)
        - --secret-name=ingress-nginx-admission
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  labels:
    name: ingress-nginx
  namespace: kube-system
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        name: ingress-nginx
    spec:
      containers:
      - name: patch
        image: registry.cn-hangzhou.aliyuncs.com/acs/kube-webhook-certgen:v1.1.1  # if error use this image
        imagePullPolicy: IfNotPresent
        args:
        - patch
        - --webhook-name=ingress-nginx-admission
        - --namespace=$(POD_NAMESPACE)
        - --patch-mutating=false
        - --secret-name=ingress-nginx-admission
        - --patch-failure-policy=Fail
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        securityContext:
          allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
## Deployment need for aliyunk8s:
#apiVersion: v1
#kind: Service
#metadata:
# annotations:
# service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: lb-xxxxxxxxxxxxxxxxx
# service.beta.kubernetes.io/alibaba-cloud-loadbalancer-force-override-listeners: true
# labels:
# app: nginx-ingress-lb
# name: nginx-ingress-lb-boge
# namespace: kube-system
#spec:
# externalTrafficPolicy: Local
# ports:
# - name: http
# port: 80
# protocol: TCP
# targetPort: 80
# - name: https
# port: 443
# protocol: TCP
# targetPort: 443
# selector:
# app: ingress-nginx
# type: LoadBalancer
Start the deployment:
# kubectl apply -f aliyun-ingress-nginx.yaml
namespace/ingress-nginx created
serviceaccount/nginx-ingress-controller created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-controller created
service/nginx-ingress-lb created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
daemonset.apps/nginx-ingress-controller created

# Installed here as a DaemonSet resource
# A DaemonSet's YAML looks much like a Deployment's; the difference is that a
# Deployment schedules a chosen number of pod replicas across the cluster,
# while a DaemonSet runs exactly one pod on every (matching) node
# Deploying ingress-nginx is a good opportunity to explain the DaemonSet resource

# If we list the pods now, we find none at all. Why is that?
# kubectl -n kube-system get pod

Note that the YAML above uses a node selector, so pods are only scheduled onto nodes carrying the label I specified:

nodeSelector:
  boge/ingress-controller-ready: "true"

# Now label the nodes
# kubectl label node 10.0.1.201 boge/ingress-controller-ready=true
node/10.0.1.201 labeled
# kubectl label node 10.0.1.202 boge/ingress-controller-ready=true
node/10.0.1.202 labeled

# The pods are now scheduled onto these two nodes and start up
# kubectl -n kube-system get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ingress-controller-lchgr   1/1     Running   0          9m1s    10.0.1.202   10.0.1.202   <none>   <none>
nginx-ingress-controller-x87rp   1/1     Running   0          9m6s    10.0.1.201   10.0.1.201   <none>   <none>

Building on the Deployment and Service resources we learned earlier, let's create a matching nginx service and save it as nginx.yaml. Remember to delete the earlier test resources first to avoid conflicts.

---
kind: Service
apiVersion: v1
metadata:
  name: new-nginx
spec:
  selector:
    app: new-nginx
  ports:
  - name: http-port
    port: 80
    protocol: TCP
    targetPort: 80
---
# Ingress configuration for newer Kubernetes versions
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: new-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($host != "www.boge.com" ) {
        rewrite ^ http://www.boge.com$request_uri permanent;
      }
spec:
  rules:
  - host: boge.com
    http:
      paths:
      - backend:
          service:
            name: new-nginx
            port:
              number: 80
        path: /
        pathType: Prefix
  - host: m.boge.com
    http:
      paths:
      - backend:
          service:
            name: new-nginx
            port:
              number: 80
        path: /
        pathType: Prefix
  - host: www.boge.com
    http:
      paths:
      - backend:
          service:
            name: new-nginx
            port:
              number: 80
        path: /
        pathType: Prefix
# tls:
# - hosts:
# - boge.com
# - m.boge.com
# - www.boge.com
#    secretName: boge-com-tls

# tls secret create command:
# kubectl -n namespace create secret tls boge-com-tls --key boge-com.key --cert boge-com.csr
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: new-nginx
  labels:
    app: new-nginx
spec:
  replicas: 3   # set the count according to your number of nodes
  selector:
    matchLabels:
      app: new-nginx
  template:
    metadata:
      labels:
        app: new-nginx
    spec:
      containers:
      #--------------------------------------------------
      - name: new-nginx
        image: nginx:1.21.6
        env:
        - name: TZ
          value: Asia/Shanghai
        ports:
        - containerPort: 80
        volumeMounts:
        - name: html-files
          mountPath: /usr/share/nginx/html
      #--------------------------------------------------
      - name: busybox
        image: registry.cn-hangzhou.aliyuncs.com/acs/busybox:v1.29.2
        args:
        - /bin/sh
        - -c
        - |
          while :; do
            if [ -f /html/index.html ]; then
              echo "[$(date +%F\ %T)] ${MY_POD_NAMESPACE}-${MY_POD_NAME}-${MY_POD_IP}" > /html/index.html
              sleep 1
            else
              touch /html/index.html
            fi
          done
        env:
        - name: TZ
          value: Asia/Shanghai
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        volumeMounts:
        - name: html-files
          mountPath: /html
        - mountPath: /etc/localtime
          name: tz-config
      #--------------------------------------------------
      volumes:
      - name: html-files
        emptyDir:
          medium: Memory
          sizeLimit: 10Mi
      - name: tz-config
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
---
Apply it:

# kubectl apply -f nginx.yaml

# Check the Ingress resource we just created
# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
new-nginx   nginx-master   boge.com,m.boge.com,www.boge.com             80      8s

# Add local hosts entries on other nodes to test the setup
10.0.1.201 boge.com m.boge.com www.boge.com

# The request succeeds
[root@node-2 ~]# curl www.boge.com

# Back on node 201, check the ingress-nginx log
# kubectl -n kube-system logs --tail=1 nginx-ingress-controller-nblb9
Defaulted container "nginx-ingress-controller" out of: nginx-ingress-controller, init-sysctl (init)
{"timestamp": "2023-11-22T22:13:14+08:00","remote_addr": "10.0.1.1","x-forward-for": "10.0.1.1","request_id": "f21a1e569751fb55299ef5f1b039852d","remote_user": "-","bytes_sent": 250,"request_time": 0.003,"status": 200,"vhost": "www.boge.com","request_proto": "HTTP/2.0","path": "/","request_query": "-","request_length": 439,"duration": 0.003,"method": "GET","http_referrer": "-","http_user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36","upstream-sever":"default-new-nginx-80","proxy_alternative_upstream_name":"","upstream_addr":"172.20.247.18:80","upstream_response_length":"71","upstream_response_time":"0.003","upstream_status":"200"}
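Because log-format-upstream emits one JSON object per line, the access log is easy to post-process. A small sketch below pulls a few fields out of a sample line (the line is illustrative, not captured from a real cluster):

```shell
# Parse one JSON-formatted access-log line and extract key fields.
LOG='{"timestamp":"2023-11-22T22:13:14+08:00","status":200,"vhost":"www.boge.com","request_time":0.003,"upstream_addr":"172.20.247.18:80"}'
echo "$LOG" | python3 -c '
import json, sys
d = json.load(sys.stdin)
print(d["vhost"], d["status"], d["request_time"])
'
# -> www.boge.com 200 0.003
```

The same pattern works on a real log stream, e.g. piping `kubectl logs -f` through the parser.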
HTTP transmits data in plaintext, which is insecure, so in production we usually configure HTTPS for encrypted communication. Let's practice the Ingress TLS configuration now.

# First, self-sign an HTTPS certificate

# 1. Generate the private key
# openssl genrsa -out boge.key 2048
Generating RSA private key, 2048 bit long modulus
..............................................................................................
.....
e is 65537 (0x10001)

# 2. Generate the TLS certificate from the key (note I use *.boge.com, i.e. a
#    wildcard certificate, so any third-level domains added later can reuse it)
# openssl req -new -x509 -key boge.key -out boge.csr -days 360 -subj "/CN=*.boge.com"

# Check the result
# ll
total 8
-rw-r--r-- 1 root root 1099 Nov 27 11:44 boge.csr
-rw-r--r-- 1 root root 1679 Nov 27 11:43 boge.key
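Before loading the pair into a Kubernetes Secret, it's worth confirming the certificate actually matches the key (despite the .csr extension, `openssl req -x509` above produced a certificate, not a signing request). The two modulus digests must be identical:

```shell
# Verify that the certificate and private key belong together:
# identical modulus digests mean a matching pair.
openssl x509 -noout -modulus -in boge.csr | md5sum
openssl rsa  -noout -modulus -in boge.key | md5sum
```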
In production with a self-built datacenter, we normally run the ingress-nginx pods on at least two nodes, so it is worth putting load-balancing software in front of those nodes for high availability; here we use haproxy + keepalived. If your production environment is on a cloud, say Alibaba Cloud, you only need to buy an SLB load balancer, add the nodes running the ingress-nginx pods to its backend, and point your request domains' DNS at the SLB's public IP. For our binary-deployed K8s cluster, the communication architecture is as follows. Note that every node already runs kube-lb, a slimmed-down nginx doing L4 load balancing for apiserver requests; we only need to pick two nodes, deploy keepalived on them, and reconfigure kube-lb to serve a VIP for the HA effect. Follow this document for the details:
https://github.com/easzlab/kubeasz/blob/master/docs/setup/ex-lb.md
Feeling a sense of accomplishment by now? Having seen what Ingress can do for us, let's circle back and understand how it works; grasping the mechanism after using it makes the knowledge stick much better, and that's how I usually study.

As shown below: the client does a DNS lookup for nginx.boge.com (here DNS is our local hosts entries), and DNS returns the Ingress controller's IP, i.e. our VIP 10.0.1.222. The client then sends an HTTP request to the Ingress controller with nginx.boge.com in the Host header. From that header the controller determines which service the client wants, looks up the Pod IPs through the Endpoints object tied to that Service, and forwards the client's request to one of the Pods. In production, one Ingress normally fronts one Service, but in some special cases you need to reuse a single Ingress to reach multiple services; let's practice that below.

Create another nginx Deployment and Service. Be careful to change the names so they don't conflict.

# Finally you need to execute these commands:
# kubectl create deployment old-nginx --image=nginx:1.21.6 --replicas=1
# kubectl expose deployment old-nginx --port=80 --target-port=80

# If all done, test with:
# curl -H "Host: www.boge.com" -H "foo: bar" http://10.0.0.201
# curl -H "Host: www.boge.com" http://10.0.0.201

# 1 pod 2 containers and ingress-nginx L7 2 service
---
# namespace
apiVersion: v1
kind: Namespace
metadata:
  name: test-nginx
---
# SVC
kind: Service
apiVersion: v1
metadata:
  name: new-nginx
  namespace: test-nginx
spec:
  selector:
    app: new-nginx
  ports:
  - name: http-port
    port: 80
    protocol: TCP
    targetPort: 80
#    nodePort: 30088
#  type: NodePort
---
# ingress-nginx L7
# https://yq.aliyun.com/articles/594019
# https://help.aliyun.com/document_detail/200941.html?spma2c4g.11186623.6.787.254168fapBIi0A
# KAE: matching multiple request-parameter values, e.g. query(boge_id, /^aaa$|^bbb$/)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-nginx
  namespace: test-nginx
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    # Request-parameter match: curl www.boge.com?boge_id=ID1
    nginx.ingress.kubernetes.io/service-match: |
      new-nginx: query(boge_id, /^aaa$|^bbb$/)
    # Only requests whose "foo" header matches the regex "bar" are routed to
    # the new-version service new-nginx
    #nginx.ingress.kubernetes.io/service-match: |
    #  new-nginx: header(foo, /^bar$/)
    # On top of the match rule above, route only 50% of the matching traffic
    # to the new-version service new-nginx
    #nginx.ingress.kubernetes.io/service-weight: |
    #  new-nginx: 50, old-nginx: 50
spec:
  rules:
  - host: www.boge.com
    http:
      paths:
      - backend:
          service:
            name: new-nginx   # new-version service
            port:
              number: 80
        path: /
        pathType: Prefix
      - backend:
          service:
            name: old-nginx   # old-version service
            port:
              number: 80
        path: /
        pathType: Prefix
---
# deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: new-nginx
  namespace: test-nginx
  labels:
    app: new-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: new-nginx
  template:
    metadata:
      labels:
        app: new-nginx
    spec:
      containers:
      #--------------------------------------------------
      - name: new-nginx
        image: nginx:1.21.6
        ports:
        - containerPort: 80
        volumeMounts:
        - name: html-files
          mountPath: /usr/share/nginx/html
      #--------------------------------------------------
      - name: busybox
        image: registry.cn-hangzhou.aliyuncs.com/acs/busybox:v1.29.2
        args:
        - /bin/sh
        - -c
        - |
          while :; do
            if [ -f /html/index.html ]; then
              echo "[$(date +%F\ %T)] hello" > /html/index.html
              sleep 1
            else
              touch /html/index.html
            fi
          done
        volumeMounts:
        - name: html-files
          mountPath: /html
      #--------------------------------------------------
      volumes:
      - name: html-files
        # Disk of the working node running the pod
        #emptyDir: {}
        # Memory of the working node running the pod
        emptyDir:
          medium: Memory
          # use temp disk space instead
          #medium: ""
          # if the emptyDir grows past 1Gi, the pod is Evicted (in effect a pod restart)
          sizeLimit: 1Gi