This article builds on the earlier posts and walks through a high-availability (HA) setup for a k8s cluster. Broadly speaking, k8s cluster HA covers the following:
1. HA for the etcd cluster
2. HA for the cluster DNS service
3. HA for the master components: kube-apiserver, kube-controller-manager, kube-scheduler

Of these, etcd is the easiest to handle; see the earlier post for details: http://blog.51cto.com/ylw6006/2095871
The cluster DNS service can be made highly available by setting the DNS pod replica count to 2 and using labels to place the two replicas on different nodes (a sketch follows below).
For kube-apiserver there are several viable approaches; a good overview is available at https://jishu.io/kubernetes/kubernetes-master-ha/
kube-controller-manager and kube-scheduler are comparatively easy: simply run multiple instances.
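A minimal sketch of the DNS scale-out, assuming the cluster runs kube-dns as a Deployment named kube-dns in the kube-system namespace (the deployment name and label below are assumptions; adjust them to your deployment):

# kubectl -n kube-system scale deployment kube-dns --replicas=2
# kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide

The second command shows which node each replica landed on; if both end up on the same node, a pod anti-affinity rule or node labels can be used to force them apart.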
I. Environment Overview

Master node 1: 192.168.115.5/24, hostname vm1
Master node 2: 192.168.115.6/24, hostname vm2
VIP: 192.168.115.4/24 (managed by keepalived)
Node 1: 192.168.115.6/24, hostname vm2
Node 2: 192.168.115.7/24, hostname vm3
OS version: CentOS 7.2 64-bit
K8s version: 1.9.6, binary deployment

The demo environment builds on the existing cluster from the earlier posts (one master node, two node nodes) and adds HA for the master components. For the base k8s deployment, see:
1. Configuring the etcd cluster and TLS authentication -- http://blog.51cto.com/ylw6006/2095871
2. Deploying the Flannel network component -- http://blog.51cto.com/ylw6006/2097303
3. Upgrading the Docker service -- http://blog.51cto.com/ylw6006/2103064
4. Deploying the K8S master node from binaries -- http://blog.51cto.com/ylw6006/2104031
5. Deploying the K8S node from binaries -- http://blog.51cto.com/ylw6006/2104692

II. Certificate Renewal

Renew the certificate on vm1; the key point is to include every master-related IP (both masters and the VIP) in the hosts list:

# mkdir api-ha && cd api-ha
# cat k8s-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.115.4",
    "192.168.115.5",
    "192.168.115.6",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "FuZhou",
      "L": "FuZhou",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
# cfssl gencert -ca=/etc/ssl/etcd/ca.pem \
  -ca-key=/etc/ssl/etcd/ca-key.pem \
  -config=/etc/ssl/etcd/ca-config.json \
  -profile=kubernetes k8s-csr.json | cfssljson -bare kubernetes
# mv *.pem /etc/kubernetes/ssl/
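To confirm that the renewed certificate really carries all the master addresses and the VIP, the SAN list can be inspected with openssl (a quick sanity check, not part of the original procedure):

# openssl x509 -in /etc/kubernetes/ssl/kubernetes.pem -noout -text | grep -A1 'Subject Alternative Name'

The output should list 192.168.115.4, 192.168.115.5, and 192.168.115.6 among the IP addresses.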
III. Configuring the Master Components

1. Copy the kube-apiserver, kube-controller-manager, and kube-scheduler binaries from vm1 to vm2:

# cd /usr/local/sbin
# scp -rp kube-apiserver kube-controller-manager kube-scheduler vm2:/usr/local/sbin/

2. Copy the certificate files from vm1 to vm2:

# cd /etc/kubernetes/ssl
# scp -rp ./* vm2:/etc/kubernetes/ssl

3. Configure and start the services:

# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/sbin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --advertise-address=0.0.0.0 \
  --bind-address=0.0.0.0 \
  --insecure-bind-address=127.0.0.1 \
  --authorization-mode=RBAC \
  --runtime-config=rbac.authorization.k8s.io/v1alpha1 \
  --kubelet-https=true \
  --enable-bootstrap-token-auth=true \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-cluster-ip-range=10.254.0.0/16 \
  --service-node-port-range=1024-65535 \
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/etc/ssl/etcd/ca.pem \
  --service-account-key-file=/etc/ssl/etcd/ca-key.pem \
  --etcd-cafile=/etc/ssl/etcd/ca.pem \
  --etcd-certfile=/etc/ssl/etcd/server.pem \
  --etcd-keyfile=/etc/ssl/etcd/server-key.pem \
  --etcd-servers=https://192.168.115.5:2379,https://192.168.115.6:2379,https://192.168.115.7:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/lib/audit.log \
  --event-ttl=1h \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/sbin/kube-scheduler \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --leader-elect=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/sbin/kube-controller-manager \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --allocate-node-cidrs=true \
  --service-cluster-ip-range=10.254.0.0/16 \
  --cluster-cidr=172.30.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/ssl/etcd/ca.pem \
  --cluster-signing-key-file=/etc/ssl/etcd/ca-key.pem \
  --service-account-private-key-file=/etc/ssl/etcd/ca-key.pem \
  --root-ca-file=/etc/ssl/etcd/ca.pem \
  --leader-elect=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

# systemctl enable kube-apiserver
# systemctl enable kube-controller-manager
# systemctl enable kube-scheduler
# systemctl start kube-apiserver
# systemctl start kube-controller-manager
# systemctl start kube-scheduler

Note: in vm1's kube-apiserver unit file, the --advertise-address and --bind-address parameters must be changed so the apiserver listens on all interfaces (0.0.0.0).
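With the units running on both masters, it is worth confirming that each apiserver answers on its secure port before putting keepalived in front of them. The /healthz path is the apiserver's standard health endpoint; -k skips certificate verification for a quick probe (if RBAC denies anonymous access, pass the client certificate with --cacert/--cert/--key instead):

# curl -k https://192.168.115.5:6443/healthz
# curl -k https://192.168.115.6:6443/healthz

Both should return ok.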
IV. Installing and Configuring keepalived

# yum -y install keepalived
# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        ylw@fjhb.cn
    }
    notification_email_from admin@fjhb.cn
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_MASTER
}
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 3
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 60
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s.59iedu.com
    }
    virtual_ipaddress {
        192.168.115.4/24
    }
    track_script {
        check_apiserver
    }
}

# cat /usr/lib/systemd/system/keepalived.service
[Unit]
Description=LVS and VRRP High Availability Monitor
After=syslog.target network-online.target kube-apiserver.service
Requires=kube-apiserver.service

[Service]
Type=forking
PIDFile=/var/run/keepalived.pid
KillMode=process
EnvironmentFile=-/etc/sysconfig/keepalived
ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target

Note: on vm2, change state to BACKUP and priority to 99; the priority value must be lower than the one configured on the master node.
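Those two fields are the only differences on vm2, so after copying vm1's keepalived.conf they can be patched in place (a convenience sketch; editing the file by hand works just as well):

# scp /etc/keepalived/keepalived.conf vm2:/etc/keepalived/
# ssh vm2 "sed -i 's/state MASTER/state BACKUP/; s/priority 100/priority 99/' /etc/keepalived/keepalived.conf"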
# cat /etc/keepalived/check_apiserver.sh
#!/bin/bash
# The exit status of "systemctl status" is non-zero when the unit is not active.
flag=$(systemctl status kube-apiserver &> /dev/null; echo $?)
if [[ $flag != 0 ]]; then
    echo "kube-apiserver is down, stopping keepalived"
    systemctl stop keepalived
fi
# chmod +x /etc/keepalived/check_apiserver.sh
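Before wiring the script into keepalived, it can be exercised by hand; with kube-apiserver active it should print nothing and exit 0, and with the apiserver stopped it should stop keepalived:

# /etc/keepalived/check_apiserver.sh; echo $?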
# systemctl daemon-reload
# systemctl enable keepalived
# systemctl start keepalived
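Once keepalived is up, the VIP should be bound to vm1's interface (ens33 in this environment); a quick check on both nodes:

# ip addr show ens33 | grep 192.168.115.4

The address should appear on vm1 and be absent on vm2 for as long as vm1 holds the MASTER role.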
V. Updating the Client Configuration

1. kubelet.kubeconfig, bootstrap.kubeconfig, and kube-proxy.kubeconfig:

# grep server /etc/kubernetes/kubelet.kubeconfig
server: https://192.168.115.4:6443
# grep server /etc/kubernetes/bootstrap.kubeconfig
server: https://192.168.115.4:6443
# grep server /etc/kubernetes/kube-proxy.kubeconfig
server: https://192.168.115.4:6443

2. The kubectl config file:

# grep server /root/.kube/config
server: https://192.168.115.4:6443
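Since all four files point at the same address, the switch to the VIP can also be scripted; a sketch assuming the files previously targeted https://192.168.115.5:6443 directly:

# sed -i 's#https://192.168.115.5:6443#https://192.168.115.4:6443#' /etc/kubernetes/kubelet.kubeconfig /etc/kubernetes/bootstrap.kubeconfig /etc/kubernetes/kube-proxy.kubeconfig /root/.kube/config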
3. Restart the client services:

# systemctl restart kubelet
# systemctl restart kube-proxy

VI. Testing

1. Cluster state before stopping any services: the VIP sits on vm1.
2. Stop the kube-apiserver service on vm1: the VIP is dropped from vm1 (the keepalived log shows the VIP being removed automatically), yet clients can still connect to the master and fetch pod information.
3. On vm2 the VIP is registered automatically, and the kubectl client connects normally.
4. Start kube-apiserver and keepalived on vm1 again: because the configuration is master/backup (preemptive), vm1 reclaims the VIP.
5. On vm2 the VIP is released and keepalived re-enters the BACKUP state.
6. Throughout the whole process, another client connected to the master VIP can be used to test service continuity (a probe sketch follows below).
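A simple continuity probe for step 6, run from any client while the services are being stopped and started; it queries the master VIP once per second, so any interruption is immediately visible:

# while true; do date; kubectl get nodes; sleep 1; done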
VII. Improving the Setup with HAProxy

With keepalived alone, the master HA setup hits a performance bottleneck once apiserver traffic grows, because all requests land on whichever node holds the VIP. Adding haproxy provides both master HA and load balancing of the traffic across the apiservers.

1. Install and configure haproxy (identical configuration on both masters):

# yum -y install haproxy
# cat /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                tcp
    log                 global
    option              tcplog
    option              dontlognull
    option              redispatch
    retries             3
    timeout queue       1m
    timeout connect     10s
    timeout client      1m
    timeout server      1m
    timeout check       10s
    maxconn             3000

listen stats
    mode  http
    bind  :10086
    stats enable
    stats uri /admin?stats
    stats auth admin:admin
    stats admin if TRUE

frontend k8s_https *:8443
    mode    tcp
    maxconn 2000
    default_backend https_sri

backend https_sri
    balance roundrobin
    server s1 192.168.115.5:6443 check inter 10000 fall 2 rise 2 weight 1
    server s2 192.168.115.6:6443 check inter 10000 fall 2 rise 2 weight 1
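The configuration can be syntax-checked before haproxy is started, and the stats listener declared above gives a live view of backend health (-c only validates the file; it does not start the daemon):

# haproxy -c -f /etc/haproxy/haproxy.cfg

Once haproxy is running, the stats page is reachable at http://<master-ip>:10086/admin?stats with the admin:admin credentials from the config.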
2. Modify the kube-apiserver configuration (adjust the IP address to the node in question):

# grep address /usr/lib/systemd/system/kube-apiserver.service
  --advertise-address=192.168.115.5 \
  --bind-address=192.168.115.5 \
  --insecure-bind-address=127.0.0.1 \

3. Modify the keepalived unit file and configuration; adjust the IP address in the vrrp script to the local node:

# cat /usr/lib/systemd/system/keepalived.service
[Unit]
Description=LVS and VRRP High Availability Monitor
After=syslog.target network-online.target
Requires=haproxy.service
######## remaining output omitted ########

# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        ylw@fjhb.cn
    }
    notification_email_from admin@fjhb.cn
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_MASTER
}
vrrp_script check_apiserver {
    script "curl -o /dev/null -s -w '%{http_code}' -k https://192.168.115.5:6443"
    interval 3
    timeout 3
    fall 2
    rise 2
}
######## remaining output omitted ########

4. Point the kubelet and kubectl client configuration files at haproxy's port 8443:

# grep 192 /etc/kubernetes/bootstrap.kubeconfig
server: https://192.168.115.4:8443
# grep 192 /etc/kubernetes/kubelet.kubeconfig
server: https://192.168.115.4:8443
# grep 192 /etc/kubernetes/kube-proxy.kubeconfig
server: https://192.168.115.4:8443
# grep 192 /root/.kube/config
server: https://192.168.115.4:8443

5. Restart the services and verify the master:

# systemctl daemon-reload
# systemctl enable haproxy
# systemctl start haproxy
# systemctl restart keepalived
# systemctl restart kube-apiserver
# systemctl restart kubelet
# systemctl restart kube-proxy
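As a final check, exercise the whole chain (VIP, haproxy on 8443, both apiservers) from a client; kubectl now reaches the masters through haproxy, and stopping a single kube-apiserver should no longer interrupt requests:

# kubectl get nodes
# kubectl get componentstatuses

Both nodes should report Ready, and the component statuses should show the scheduler, controller-manager, and etcd members as Healthy.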