Table of Contents
1. k8s environment planning
2. kubeadm vs. binary installation of k8s: which scenarios each suits
3. Install required tools
3. Initialization
3.1 Configure a static IP
3.2 Configure hostnames
3.3 Configure the hosts file
3.4 Configure passwordless SSH between hosts (repeat on every machine)
3.5 Disable the firewalld firewall
3.6 Disable SELinux
3.7 Disable the swap partition
3.8 Modify kernel parameters
3.9 Install docker-ce from the Aliyun repository
3.10 Configure a Docker registry mirror
4. Build the etcd cluster
4.1 Create the etcd working directories
4.2 Install the certificate-signing tool cfssl
4.3 Configure the CA certificate
4.4 Generate the etcd certificates
4.5 Deploy the etcd cluster
5. Install the Kubernetes components
5.1 Download the packages
5.2 Deploy the apiserver component
5.3 Deploy the kubectl component
5.4 Deploy the kube-controller-manager component
5.5 Deploy the kube-scheduler component
5.6 Import the offline image archive
5.7 Deploy the kubelet component
5.8 Deploy the kube-proxy component
5.9 Deploy the calico component
5.10 Deploy the coredns component
6. Install keepalived + nginx for a highly available k8s apiserver
7. Taint the master nodes to forbid scheduling

1. k8s environment planning
Pod network: 10.0.0.0/16    Service network: 10.255.0.0/16
Cluster role    IP            Hostname   Installed components
Control node    10.10.0.10    master01   apiserver, controller-manager, scheduler, etcd, docker, keepalived, nginx
Control node    10.10.0.11    master02   apiserver, controller-manager, scheduler, etcd, docker, keepalived, nginx
Control node    10.10.0.12    master03   apiserver, controller-manager, scheduler, etcd, docker, keepalived, nginx
Worker node     10.10.0.14    node01     kubelet, kube-proxy, docker, calico, coredns
VIP             10.10.0.100
2. kubeadm vs. binary installation of k8s: which scenarios each suits
kubeadm is the official open-source tool (an open-source project) for quickly standing up a Kubernetes cluster, and it is currently the convenient, recommended way. The kubeadm init and kubeadm join commands can create a Kubernetes cluster quickly. With kubeadm, all control-plane components run as pods, so they can recover from failures automatically. kubeadm is a tool that builds the cluster for you, essentially an automated installer. It simplifies deployment but hides many details, so you get little insight into the individual modules; if you do not understand the k8s architecture and components well, problems are harder to troubleshoot. kubeadm suits scenarios where k8s is deployed frequently or a high degree of automation is required.
With the binary method you download each component's binary package from the official site; installing by hand also gives a more complete understanding of Kubernetes. Both kubeadm and the binary method are suitable for production and both run stably there; which one to choose can be evaluated against the actual project.
3. Install required tools
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
Install ntpdate:
[root@master01 ~]# yum install -y ntpdate
[root@master01 ~]# systemctl enable ntpdate.service --now
3. Initialization
3.1 Configure a static IP
Omitted.
3.2 Configure hostnames
Set each hostname with a command like the following, then run bash to reload the shell:
hostnamectl set-hostname master03
bash
3.3 Configure the hosts file
# On master01, master02, master03 and node01, add the following four lines to /etc/hosts:
10.10.0.10 master01
10.10.0.11 master02
10.10.0.12 master03
10.10.0.14 node01
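A quick way to apply this on each machine is a heredoc append (a sketch; skip it on hosts where the entries already exist):
cat >> /etc/hosts <<'EOF'
10.10.0.10 master01
10.10.0.11 master02
10.10.0.12 master03
10.10.0.14 node01
EOF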
3.4 Configure passwordless SSH between hosts (repeat on every machine)
# Generate an SSH key pair
ssh-keygen -t rsa    # press Enter through every prompt and leave the passphrase empty
# Then install the local public key into the corresponding account on each remote host
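That distribution step can be done with ssh-copy-id; a minimal sketch assuming root logins and the hostnames above (each host asks for its password once):
for host in master01 master02 master03 node01; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub root@$host
done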
3.5 Disable the firewalld firewall
On master01, master02, master03 and node01: systemctl stop firewalld ; systemctl disable firewalld
3.6 Disable SELinux
On master01, master02, master03 and node01:
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# Editing the SELinux config file only becomes permanent after a reboot. After rebooting, verify:
getenforce    # "Disabled" means SELinux is off
3.7 Disable the swap partition
On master01, master02, master03 and node01:
# Temporarily:
swapoff -a
# Permanently: comment out the swap mount line in /etc/fstab
vim /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0
# If the machine is a cloned VM, also remove the UUID from the fstab entry.
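The fstab edit can also be scripted; a sketch assuming GNU sed that turns swap off immediately and comments out any still-active swap entry:
swapoff -a
# prefix a '#' to every uncommented line that mounts swap
sed -ri '/\sswap\s/ s/^([^#])/#\1/' /etc/fstab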
3.8 Modify kernel parameters
On master01, master02, master03 and node01.
Install ipvsadm on all nodes to use the IPVS traffic-scheduling mode: yum install ipvsadm ipset sysstat conntrack libseccomp -y
Configure the IPVS modules on all nodes. From kernel 4.19 onward nf_conntrack_ipv4 was renamed nf_conntrack; on 4.18 and older use nf_conntrack_ipv4:
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
[root@master01 ~]# vim /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
Then run: systemctl enable --now systemd-modules-load.service
Kernel parameters: the br_netfilter module passes bridged traffic to the iptables chains, and forwarding must be enabled for it.
[root@master01 ~]# modprobe br_netfilter
# Modify the kernel parameters
Enable the kernel parameters a k8s cluster needs; configure on all nodes:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
net.ipv4.conf.all.route_localnet = 1
vm.overcommit_memory = 1
vm.panic_on_oom = 0
fs.inotify.max_user_watches = 89100
fs.file-max = 52706963
fs.nr_open = 52706963
net.netfilter.nf_conntrack_max = 2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
Running sysctl -p /etc/sysctl.d/k8s.conf may report:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
Fix: modprobe br_netfilter
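modprobe only lasts until the next reboot; a sketch for making br_netfilter persistent through systemd-modules-load (the same mechanism used for ipvs.conf above):
cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF
systemctl restart systemd-modules-load.service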
After configuring the kernel parameters, reboot all nodes and confirm the modules are still loaded after the reboot:
reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack
[root@master01 ~]# lsmod | grep --color=auto -e ip_vs -e nf_conntrack
ip_vs_ftp 16384 0
nf_nat 32768 1 ip_vs_ftp
ip_vs_sed 16384 0
ip_vs_nq 16384 0
ip_vs_fo 16384 0
ip_vs_sh 16384 0
ip_vs_dh 16384 0
ip_vs_lblcr 16384 0
ip_vs_lblc 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs_wlc 16384 0
ip_vs_lc 16384 0
ip_vs 151552 24 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_conntrack 143360 2 nf_nat,ip_vs
nf_defrag_ipv6 20480 1 nf_conntrack
nf_defrag_ipv4 16384 1 nf_conntrack
libcrc32c 16384 4 nf_conntrack,nf_nat,xfs,ip_vs

3.9 Install docker-ce from the Aliyun repository
# Step 1: install the required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the repository
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: point the repo at the Aliyun mirror
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
# Step 4: refresh the cache and install Docker CE
sudo yum makecache fast
sudo yum -y install docker-ce
# Step 5: start the Docker service
sudo service docker start

3.10 Configure a Docker registry mirror
[root@master01 ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["http://abcd1234.m.daocloud.io"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
systemctl daemon-reload
systemctl restart docker
systemctl status docker

4. Build the etcd cluster
4.1 Create the etcd working directories
# Create the directories for the config and certificate files
[root@master01 ~]# mkdir -p /etc/etcd
[root@master01 ~]# mkdir -p /etc/etcd/ssl
[root@master02 ~]# mkdir -p /etc/etcd
[root@master02 ~]# mkdir -p /etc/etcd/ssl
[root@master03 ~]# mkdir -p /etc/etcd
[root@master03 ~]# mkdir -p /etc/etcd/ssl
4.2 Install the certificate-signing tool cfssl
[root@master01 ~]# mkdir /data/work -p
[root@master01 ~]# cd /data/work/
# Upload cfssl-certinfo_linux-amd64, cfssljson_linux-amd64 and cfssl_linux-amd64 to /data/work/
[rootmaster01 work ]#ll
total 18808
-rw-r--r-- 1 root root 6595195 Oct 25 15:39 cfssl-certinfo_linux-amd64
-rw-r--r-- 1 root root 2277873 Oct 25 15:39 cfssljson_linux-amd64
-rw-r--r-- 1 root root 10376657 Oct 25 15:39 cfssl_linux-amd64
# Make the files executable
[root@master01 work]# chmod +x *
[rootmaster01 work ]#ll
total 18808
-rwxr-xr-x 1 root root 6595195 Oct 25 15:39 cfssl-certinfo_linux-amd64
-rwxr-xr-x 1 root root 2277873 Oct 25 15:39 cfssljson_linux-amd64
-rwxr-xr-x 1 root root 10376657 Oct 25 15:39 cfssl_linux-amd64
[root@master01 work]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@master01 work]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@master01 work]# mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

4.3 Configure the CA certificate
# Generate the CA certificate signing request
Generate the CA certificate.
Initialize the default templates:
cfssl print-defaults config > ca-config.json
cfssl print-defaults csr > ca-csr.json
# The CA certificate signing request file
[root@master01 work]# cat ca-csr.json
{
  "CN": "kubernetes",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Hubei", "L": "Wuhan", "O": "k8s", "OU": "system" }
  ],
  "ca": { "expiry": "87600h" }
}
Notes:
CN (Common Name): kube-apiserver extracts this field from a certificate as the requesting User Name. Browsers check it to validate a website; for an SSL certificate it is usually the site domain, for a code-signing certificate the applicant organization, and for a client certificate the applicant's name.
O (Organization): kube-apiserver extracts this field as the group (Group) the requesting user belongs to. For an SSL certificate it is usually the site domain, for a code-signing certificate the applicant organization, and for a client organization certificate the applicant's organization.
L: city. ST: province. C: the two-letter country code, e.g. CN for China.
[rootmaster01 work ]#cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2022/10/25 16:59:15 [INFO] generating a new CA key and certificate from CSR
2022/10/25 16:59:15 [INFO] generate received request
2022/10/25 16:59:15 [INFO] received CSR
2022/10/25 16:59:15 [INFO] generating key: rsa-2048
2022/10/25 16:59:15 [INFO] encoded CSR
2022/10/25 16:59:15 [INFO] signed certificate with serial number 389912219972037047043791867430049210836195704387[rootmaster01 work ]#ll total 16 -rw-r–r-- 1 root root 997 Oct 25 16:59 ca.csr -rw-r–r-- 1 root root 253 Oct 25 15:39 ca-csr.json -rw------- 1 root root 1679 Oct 25 16:59 ca-key.pem -rw-r–r-- 1 root root 1346 Oct 25 16:59 ca.pem
# The CA signing-policy file
[root@master01 work]# cat ca-config.json
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "kubernetes": {
        "usages": [ "signing", "key encipherment", "server auth", "client auth" ],
        "expiry": "87600h"
      }
    }
  }
}
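It is worth checking what the new CA looks like before signing anything with it; a quick sketch using the cfssl-certinfo binary installed above (openssl works equally well):
cfssl-certinfo -cert ca.pem
openssl x509 -noout -in ca.pem -subject -issuer -dates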
4.4 Generate the etcd certificates
# etcd certificate signing request; change the hosts IPs to the IPs of your own etcd nodes
[root@master01 work]# cat etcd-csr.json
{
  "CN": "etcd",
  "hosts": [ "127.0.0.1", "10.10.0.10", "10.10.0.11", "10.10.0.12", "10.10.0.100" ],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Hubei", "L": "Wuhan", "O": "k8s", "OU": "system" }
  ]
}
# The hosts field must contain the cluster-internal IPs of every etcd node plus the VIP; you can reserve a few spare IPs for future scaling.
[root@master01 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
2022/10/25 17:08:11 [INFO] generate received request
2022/10/25 17:08:11 [INFO] received CSR
2022/10/25 17:08:11 [INFO] generating key: rsa-2048
2022/10/25 17:08:11 [INFO] encoded CSR
2022/10/25 17:08:11 [INFO] signed certificate with serial number 135048769106462136999813410398927441265408992932
2022/10/25 17:08:11 [WARNING] This certificate lacks a hosts field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 (Information Requirements).
The name passed to -profile must match a profile name under "profiles" in ca-config.json.
[root@master01 work]# ls etcd*.pem
etcd-key.pem  etcd.pem
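Before distributing the certificate, confirm that its SANs really contain the node IPs and the VIP (a sketch using openssl):
openssl x509 -noout -text -in etcd.pem | grep -A1 'Subject Alternative Name'
# expect: IP Address:127.0.0.1, 10.10.0.10, 10.10.0.11, 10.10.0.12, 10.10.0.100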
4.5 Deploy the etcd cluster
Download etcd from https://github.com/etcd-io/etcd/releases
Upload etcd-v3.4.13-linux-amd64.tar.gz to /data/work:
[root@master01 work]# ll
-rw-r--r-- 1 root root 17373136 Oct 25 15:39 etcd-v3.4.13-linux-amd64.tar.gz
[root@master01 work]# tar xf etcd-v3.4.13-linux-amd64.tar.gz
[root@master01 etcd-v3.4.13-linux-amd64]# cp -a etcdctl etcd /usr/local/bin/
Copy the etcd binaries to the other two master nodes:
[root@master01 etcd-v3.4.13-linux-amd64]# scp -p etcd* 10.10.0.11:/usr/local/bin/
etcd                 100%   23MB 119.7MB/s   00:00
etcdctl              100%   17MB  69.5MB/s   00:00
[root@master01 etcd-v3.4.13-linux-amd64]# scp -p etcd* 10.10.0.12:/usr/local/bin/
etcd                 100%   23MB 112.7MB/s   00:00
etcdctl
# Create the etcd configuration file
[root@master01 work]# cat etcd.conf
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.10.0.10:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.10.0.10:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.10.0.10:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.10.0.10:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://10.10.0.10:2380,etcd2=https://10.10.0.11:2380,etcd3=https://10.10.0.12:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
# Notes:
# ETCD_NAME: node name, unique within the cluster
# ETCD_DATA_DIR: data directory
# ETCD_LISTEN_PEER_URLS: peer (cluster) listen address
# ETCD_LISTEN_CLIENT_URLS: client listen address
# ETCD_INITIAL_ADVERTISE_PEER_URLS: peer advertise address
# ETCD_ADVERTISE_CLIENT_URLS: client advertise address
# ETCD_INITIAL_CLUSTER: cluster member addresses
# ETCD_INITIAL_CLUSTER_TOKEN: cluster token
# ETCD_INITIAL_CLUSTER_STATE: state when joining; "new" for a new cluster, "existing" to join an existing one
# Create the systemd unit file for etcd
[root@master01 work]# cat etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

[root@master01 work]# cp ca*.pem /etc/etcd/ssl/
[root@master01 work]# cp etcd*.pem /etc/etcd/ssl/
[root@master01 work]# cp etcd.conf /etc/etcd/
[root@master01 work]# cp etcd.service /usr/lib/systemd/system/
[root@master01 work]# for i in master02 master03;do rsync -vaz etcd.conf $i:/etc/etcd/;done
[root@master01 work]# for i in master02 master03;do rsync -vaz etcd*.pem ca*.pem $i:/etc/etcd/ssl/;done
[root@master01 work]# for i in master02 master03;do rsync -vaz etcd.service $i:/usr/lib/systemd/system/;done
# Start the etcd cluster
[root@master01 work]# mkdir -p /var/lib/etcd/default.etcd
[root@master02 work]# mkdir -p /var/lib/etcd/default.etcd
[root@master03 work]# mkdir -p /var/lib/etcd/default.etcd
Edit the configuration file on master02:
[root@master02 ~]# cat /etc/etcd/etcd.conf
#[Member]
ETCD_NAME="etcd2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.10.0.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.10.0.11:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.10.0.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.10.0.11:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://10.10.0.10:2380,etcd2=https://10.10.0.11:2380,etcd3=https://10.10.0.12:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Edit the configuration file on master03:
[root@master03 ~]# vim /etc/etcd/etcd.conf
#[Member]
ETCD_NAME="etcd3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.10.0.12:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.10.0.12:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.10.0.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.10.0.12:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://10.10.0.10:2380,etcd2=https://10.10.0.11:2380,etcd3=https://10.10.0.12:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

[root@master01 work]# systemctl daemon-reload
[root@master02 ~]# systemctl daemon-reload
[root@master03 ~]# systemctl daemon-reload
[root@master01 work]# systemctl enable etcd.service --now
[root@master02 work]# systemctl enable etcd.service --now
[root@master03 work]# systemctl enable etcd.service --now
When starting etcd, start the service on master01 first; it will sit in an activating state until etcd on master02 comes up, after which the master01 member becomes healthy.
Then restart all of them once more:
[root@master01 work]# systemctl restart etcd.service
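A quick sanity check before the health queries below; a sketch that lists the members, assuming the same certificate paths:
ETCDCTL_API=3 etcdctl --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://10.10.0.10:2379 member list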
# Check the etcd cluster
[root@master01 work]# export ETCDCTL_API=3
[root@master01 work]# etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://10.10.0.10:2379,https://10.10.0.11:2379,https://10.10.0.12:2379 endpoint health
+--------------------------+--------+-------------+-------+
|         ENDPOINT         | HEALTH |    TOOK     | ERROR |
+--------------------------+--------+-------------+-------+
| https://10.10.0.12:2379  |   true | 10.214118ms |       |
| https://10.10.0.10:2379  |   true |  9.085152ms |       |
| https://10.10.0.11:2379  |   true | 10.12115ms  |       |
+--------------------------+--------+-------------+-------+
[root@master01 work]#
[root@master01 work]# etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://10.10.0.10:2379,https://10.10.0.11:2379,https://10.10.0.12:2379 endpoint status
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|         ENDPOINT         |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://10.10.0.10:2379  | 7c3b81b30c59fb64 |  3.4.13 |   20 kB |      true |      false |        11 |         15 |                 15 |        |
| https://10.10.0.11:2379  | 86041dd24c0806ff |  3.4.13 |   25 kB |     false |      false |        11 |         15 |                 15 |        |
| https://10.10.0.12:2379  | 76002ef45e4ee68e |  3.4.13 |   20 kB |     false |      false |        11 |         15 |                 15 |        |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

5. Install the Kubernetes components
5.1 Download the packages
The binary packages are published on GitHub: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/
Download from https://github.com/kubernetes/kubernetes/releases/tag/v1.23.8
On the release page follow CHANGELOG -> Download for v1.23.8 -> Server Binaries -> kubernetes-server-linux-amd64.tar.gz
Avoid brand-new .0 releases such as 1.21.0; versions that have just come out tend to have more problems.
# Upload kubernetes-server-linux-amd64.tar.gz to /data/work on master01:
[root@master01 work]# tar xf kubernetes-server-linux-amd64.tar.gz
[root@master01 work]# cd kubernetes/
[root@master01 kubernetes]# ll
total 33672
drwxr-xr-x 2 root root        6 May 12  2021 addons
-rw-r--r-- 1 root root 34477274 May 12  2021 kubernetes-src.tar.gz
drwxr-xr-x 3 root root       49 May 12  2021 LICENSES
drwxr-xr-x 3 root root       17 May 12  2021 server
[rootmaster01 server ]#cd bin/
[rootmaster01 bin ]#ll
total 986524
-rwxr-xr-x 1 root root 46678016 May 12 2021 apiextensions-apiserver
-rwxr-xr-x 1 root root 39215104 May 12 2021 kubeadm
-rwxr-xr-x 1 root root 44675072 May 12 2021 kube-aggregator
-rwxr-xr-x 1 root root 118210560 May 12 2021 kube-apiserver
-rw-r--r-- 1 root root 8 May 12 2021 kube-apiserver.docker_tag
-rw------- 1 root root 123026944 May 12 2021 kube-apiserver.tar
-rwxr-xr-x 1 root root 112746496 May 12 2021 kube-controller-manager
-rw-r--r-- 1 root root 8 May 12 2021 kube-controller-manager.docker_tag
-rw------- 1 root root 117562880 May 12 2021 kube-controller-manager.tar
-rwxr-xr-x 1 root root 40226816 May 12 2021 kubectl
-rwxr-xr-x 1 root root 114097256 May 12 2021 kubelet
-rwxr-xr-x 1 root root 39481344 May 12 2021 kube-proxy
-rw-r--r-- 1 root root 8 May 12 2021 kube-proxy.docker_tag
-rw------- 1 root root 120374784 May 12 2021 kube-proxy.tar
-rwxr-xr-x 1 root root 43716608 May 12 2021 kube-scheduler
-rw-r--r-- 1 root root 8 May 12 2021 kube-scheduler.docker_tag
-rw------- 1 root root 48532992 May 12 2021 kube-scheduler.tar
-rwxr-xr-x 1 root root   1634304 May 12  2021 mounter
[root@master01 bin]# cp kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet /usr/local/bin/
Copy the binaries to the other master nodes:
[rootmaster01 bin ]#rsync -avz kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet master02:/usr/local/bin/
sending incremental file list
kube-apiserver
kube-controller-manager
kube-scheduler
kubectl
kubeletsent 109,825,016 bytes received 111 bytes 6,656,068.30 bytes/sec
total size is 428,997,736 speedup is 3.91
[rootmaster01 bin ]#rsync -avz kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet master03:/usr/local/bin/
sending incremental file list
kube-apiserver
kube-controller-manager
kube-scheduler
kubectl
kubeletsent 109,825,016 bytes received 111 bytes 5,108,145.44 bytes/sec
total size is 428,997,736 speedup is 3.91
The server package also contains the client binaries. Copy the binaries the worker node needs:
[rootmaster01 bin ]#pwd
/data/work/kubernetes/server/bin
[root@master01 bin]# scp kube-proxy kubelet node01:/usr/local/bin/
The authenticity of host 'node01 (10.10.0.14)' can't be established.
ECDSA key fingerprint is SHA256:KNFJAL1IY7QwPJevBpHBLelq/cGGjS4Iu3qb3gxgqgs.
ECDSA key fingerprint is MD5:6b:82:08:da:02:d6:1b:d0:ec:d1:93:c3:b8:21:6a:b7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added node01 (ECDSA) to the list of known hosts.
kube-proxy 100% 38MB 27.6MB/s 00:01
kubelet
[root@master01 work]# mkdir -p /etc/kubernetes/
[root@master01 work]# mkdir -p /etc/kubernetes/ssl
[root@master01 work]# mkdir /var/log/kubernetes
[root@master02 ~]# mkdir -p /etc/kubernetes/
[root@master02 ~]# mkdir -p /etc/kubernetes/ssl
[root@master02 ~]# mkdir /var/log/kubernetes
[root@master03 ~]# mkdir -p /etc/kubernetes/
[root@master03 ~]# mkdir -p /etc/kubernetes/ssl
[root@master03 ~]# mkdir /var/log/kubernetes

5.2 Deploy the apiserver component
# The TLS bootstrapping mechanism. Once the apiserver enables TLS authentication, the kubelet on every node must use a valid certificate signed by the apiserver's CA to communicate with the apiserver. With many nodes, issuing those client certificates by hand is a lot of work and makes scaling the cluster more complex.
To simplify this, Kubernetes introduced TLS bootstrapping: the kubelet starts as a low-privilege user and automatically requests a certificate from the apiserver, which dynamically signs and issues the kubelet's certificate.
Bootstrap programs exist in many systems (the Linux bootstrap, for example); they are loaded as pre-set configuration at power-on or system start and bring up a specified environment. The Kubernetes kubelet can likewise load a configuration file of this kind at startup, whose content looks roughly like this:
apiVersion: v1
clusters: null
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user: {}

# The TLS bootstrapping flow in detail
1. What TLS provides: TLS encrypts the traffic and prevents eavesdropping by a man in the middle; moreover, if the certificate is not trusted, a connection to the apiserver cannot even be established, let alone be authorized to request anything.
2. What RBAC provides: role-based access control. Once TLS has solved the transport problem, authorization is handled by RBAC (other models such as ABAC also exist). RBAC declares which APIs a user or group (subject) may request; combined with TLS, the apiserver reads the client certificate's CN field as the user name and the O field as the group.
From the above: first, to talk to the apiserver at all you must use a certificate signed by the apiserver's CA, which establishes trust and the TLS connection;
second, the certificate's CN and O fields supply the user and group that RBAC needs.
# kubelet first start. TLS bootstrapping lets the kubelet request a certificate from the apiserver and then use it to connect; but on the very first start there is no certificate yet, so how does it connect to the apiserver?
The apiserver configuration points at a token.csv file that contains a pre-set user and is loaded at startup. That user's token, together with the CA used by the apiserver, is written into the bootstrap.kubeconfig file used by the kubelet. When the kubelet starts it loads bootstrap.kubeconfig; on the first request it establishes TLS with the apiserver using the trusted CA from bootstrap.kubeconfig and declares its RBAC identity with the user token from the same file.
token.csv format: 3940fd7fbb391d1b4d861ad17a1f0613,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
Create the token file referenced by that configuration. Generate a token:
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
cat > /data/kubernetes/token/token.csv <<EOF
8414f742b0b05960998699427f780978,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
On first start the kubelet may report a 401 Unauthorized error from the apiserver. That is because by default the kubelet only declares its identity with the pre-set user token from bootstrap.kubeconfig and then creates a CSR; until we grant it something, that user has no permissions at all, not even the permission to create a CSR. So a ClusterRoleBinding is needed that binds the pre-set kubelet-bootstrap user to the built-in ClusterRole system:node-bootstrapper so that it can submit CSRs.
This is demonstrated later when the kubelet is installed; the command is sketched below for reference.
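A minimal sketch of that binding (the user name must match the one in token.csv; the same step appears again in the kubelet section):
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap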
# Create the token.csv file
[root@master01 work]# cat > token.csv <<EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
# Format: token,user name,UID,user group
[root@master01 work]# cat token.csv
fe63c95ecacff5f161138fddee6d0a5e,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
# Create the CSR request file; replace the IPs with those of your own machines
[root@master01 work]# cat kube-apiserver-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.10.0.10",
    "10.10.0.11",
    "10.10.0.12",
    "10.10.0.14",
    "10.10.0.100",
    "10.255.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Hubei", "L": "Wuhan", "O": "k8s", "OU": "system" }
  ]
}
# Note: if the hosts field is not empty, it must list every IP or domain authorized to use the certificate. Because this certificate is used by the whole master cluster, include all master IPs (reserve a few extras, and ideally the node IPs too) plus the first IP of the service network (generally the first address of the range given to kube-apiserver via --service-cluster-ip-range, here 10.255.0.1).
# Generate the certificate
[root@master01 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
2022/10/26 09:17:42 [INFO] generate received request
2022/10/26 09:17:42 [INFO] received CSR
2022/10/26 09:17:42 [INFO] generating key: rsa-2048
2022/10/26 09:17:43 [INFO] encoded CSR
2022/10/26 09:17:43 [INFO] signed certificate with serial number 472605278820153059718832369709947675981512800305
2022/10/26 09:17:43 [WARNING] This certificate lacks a hosts field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 (Information Requirements).
-profile=kubernetes is the profile name defined in ca-config.json.
# Create the kube-apiserver configuration file; replace the IPs with your own
[root@master01 work]# cat kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=10.10.0.10 \
  --secure-port=6443 \
  --advertise-address=10.10.0.10 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.255.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://10.10.0.10:2379,https://10.10.0.11:2379,https://10.10.0.12:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"
# Notes:
# --logtostderr: log to stderr
# --v: log level
# --log-dir: log directory
# --etcd-servers: etcd cluster addresses
# --bind-address: listen address
# --secure-port: https secure port
# --advertise-address: address advertised to the cluster
# --allow-privileged: allow privileged containers
# --service-cluster-ip-range: Service virtual IP range
# --enable-admission-plugins: admission control plugins
# --authorization-mode: authorization mode; enables RBAC and Node self-management
# --enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
# --token-auth-file: bootstrap token file
# --service-node-port-range: default NodePort port range for Services
# --kubelet-client-xxx: client certificate the apiserver uses to reach kubelets
# --tls-xxx-file: apiserver https certificate
# --etcd-xxxfile: certificates for connecting to the etcd cluster
# --audit-log-xxx: audit logging
# Create the systemd unit file
[root@master01 work]# cat kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

[root@master01 work]# cp ca*.pem /etc/kubernetes/ssl
[root@master01 work]# cp kube-apiserver*.pem /etc/kubernetes/ssl/
[root@master01 work]# cp token.csv /etc/kubernetes/
[root@master01 work]# cp kube-apiserver.conf /etc/kubernetes/
[root@master01 work]# cp kube-apiserver.service /usr/lib/systemd/system/
[rootmaster01 work ]#rsync -vaz token.csv master02:/etc/kubernetes/
sending incremental file list
token.csvsent 161 bytes received 35 bytes 392.00 bytes/sec
total size is 84 speedup is 0.43
[rootmaster01 work ]#rsync -vaz token.csv master03:/etc/kubernetes/
sending incremental file list
token.csvsent 161 bytes received 35 bytes 78.40 bytes/sec
total size is 84 speedup is 0.43
[rootmaster01 work ]#rsync -vaz kube-apiserver*.pem master02:/etc/kubernetes/ssl/
sending incremental file list
kube-apiserver-key.pem
kube-apiserver.pemsent 2,604 bytes received 54 bytes 5,316.00 bytes/sec
total size is 3,310 speedup is 1.25
[rootmaster01 work ]#rsync -vaz kube-apiserver*.pem master03:/etc/kubernetes/ssl/
sending incremental file list
kube-apiserver-key.pem
kube-apiserver.pemsent 2,604 bytes received 54 bytes 5,316.00 bytes/sec
total size is 3,310 speedup is 1.25
[rootmaster01 work ]#rsync -vaz ca*.pem master02:/etc/kubernetes/ssl/
sending incremental file list
ca-key.pem
ca.pemsent 2,420 bytes received 54 bytes 4,948.00 bytes/sec
total size is 3,025 speedup is 1.22
[rootmaster01 work ]#rsync -vaz ca*.pem master03:/etc/kubernetes/ssl/
sending incremental file list
ca-key.pem
ca.pemsent 2,420 bytes received 54 bytes 1,649.33 bytes/sec
total size is 3,025 speedup is 1.22
[rootmaster01 work ]#rsync -vaz kube-apiserver.conf master02:/etc/kubernetes/
sending incremental file list
kube-apiserver.confsent 1,005 bytes received 54 bytes 2,118.00 bytes/sec
total size is 1,959 speedup is 1.85
[rootmaster01 work ]#rsync -vaz kube-apiserver.conf master03:/etc/kubernetes/
sending incremental file list
kube-apiserver.confsent 707 bytes received 35 bytes 1,484.00 bytes/sec
total size is 1,597 speedup is 2.15
[rootmaster01 work ]#rsync -vaz kube-apiserver.service master02:/usr/lib/systemd/system/
sending incremental file list
kube-apiserver.servicesent 340 bytes received 35 bytes 750.00 bytes/sec
total size is 362 speedup is 0.97
[rootmaster01 work ]#rsync -vaz kube-apiserver.service master03:/usr/lib/systemd/system/
sending incremental file list
kube-apiserver.servicesent 340 bytes received 35 bytes 750.00 bytes/sec
total size is 362 speedup is 0.97
Note: in kube-apiserver.conf on master02 and master03, change the IP addresses to each machine's own local IP.
master02 configuration file:
[rootmaster02 kubernetes ]#cat kube-apiserver.conf
KUBE_APISERVER_OPTS--enable-admission-pluginsNamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \--anonymous-authfalse \--bind-address10.10.0.11 \--secure-port6443 \--advertise-address10.10.0.11 \--insecure-port0 \--authorization-modeNode,RBAC \--runtime-configapi/alltrue \--enable-bootstrap-token-auth \--service-cluster-ip-range10.255.0.0/16 \--token-auth-file/etc/kubernetes/token.csv \--service-node-port-range30000-50000 \--tls-cert-file/etc/kubernetes/ssl/kube-apiserver.pem \--tls-private-key-file/etc/kubernetes/ssl/kube-apiserver-key.pem \--client-ca-file/etc/kubernetes/ssl/ca.pem \--kubelet-client-certificate/etc/kubernetes/ssl/kube-apiserver.pem \--kubelet-client-key/etc/kubernetes/ssl/kube-apiserver-key.pem \--service-account-key-file/etc/kubernetes/ssl/ca-key.pem \--service-account-signing-key-file/etc/kubernetes/ssl/ca-key.pem \--service-account-issuerhttps://kubernetes.default.svc.cluster.local \--etcd-cafile/etc/etcd/ssl/ca.pem \--etcd-certfile/etc/etcd/ssl/etcd.pem \--etcd-keyfile/etc/etcd/ssl/etcd-key.pem \--etcd-servershttps://10.10.0.10:2379,https://10.10.0.11:2379,https://10.10.0.12:2379 \--enable-swagger-uitrue \--allow-privilegedtrue \--apiserver-count3 \--audit-log-maxage30 \--audit-log-maxbackup3 \--audit-log-maxsize100 \--audit-log-path/var/log/kube-apiserver-audit.log \--event-ttl1h \--alsologtostderrtrue \--logtostderrfalse \--log-dir/var/log/kubernetes \--v4master03配置文件
[rootmaster03 kubernetes ]#cat kube-apiserver.conf
KUBE_APISERVER_OPTS--enable-admission-pluginsNamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \--anonymous-authfalse \--bind-address10.10.0.12 \--secure-port6443 \--advertise-address10.10.0.12 \--insecure-port0 \--authorization-modeNode,RBAC \--runtime-configapi/alltrue \--enable-bootstrap-token-auth \--service-cluster-ip-range10.255.0.0/16 \--token-auth-file/etc/kubernetes/token.csv \--service-node-port-range30000-50000 \--tls-cert-file/etc/kubernetes/ssl/kube-apiserver.pem \--tls-private-key-file/etc/kubernetes/ssl/kube-apiserver-key.pem \--client-ca-file/etc/kubernetes/ssl/ca.pem \--kubelet-client-certificate/etc/kubernetes/ssl/kube-apiserver.pem \--kubelet-client-key/etc/kubernetes/ssl/kube-apiserver-key.pem \--service-account-key-file/etc/kubernetes/ssl/ca-key.pem \--service-account-signing-key-file/etc/kubernetes/ssl/ca-key.pem \--service-account-issuerhttps://kubernetes.default.svc.cluster.local \--etcd-cafile/etc/etcd/ssl/ca.pem \--etcd-certfile/etc/etcd/ssl/etcd.pem \--etcd-keyfile/etc/etcd/ssl/etcd-key.pem \--etcd-servershttps://10.10.0.10:2379,https://10.10.0.11:2379,https://10.10.0.12:2379 \--enable-swagger-uitrue \--allow-privilegedtrue \--apiserver-count3 \--audit-log-maxage30 \--audit-log-maxbackup3 \--audit-log-maxsize100 \--audit-log-path/var/log/kube-apiserver-audit.log \--event-ttl1h \--alsologtostderrtrue \--logtostderrfalse \--log-dir/var/log/kubernetes \--v4[rootmaster01 work ]#systemctl daemon-reload [rootmaster02 kubernetes ]#systemctl daemon-reload [rootmaster03 kubernetes ]#systemctl daemon-reload
[rootmaster01 work ]#systemctl enable kube-apiserver.service --now [rootmaster02 kubernetes ]#systemctl enable kube-apiserver.service --now [rootmaster03 kubernetes ]#systemctl enable kube-apiserver.service --now
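A quick check on each master that the apiserver actually came up and is listening on the secure port (a sketch):
systemctl is-active kube-apiserver
ss -tlnp | grep 6443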
[root@master01 work]# curl --insecure https://10.10.0.10:6443/
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
The 401 above is the expected state: the request has not been authenticated yet.
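To confirm that an authenticated request succeeds, the bootstrap token from token.csv can be presented; a sketch (with the default RBAC bindings, /version should be readable by any authenticated identity through the built-in system:public-info-viewer binding):
TOKEN=$(cut -d, -f1 /etc/kubernetes/token.csv)
curl -k -H "Authorization: Bearer $TOKEN" https://10.10.0.10:6443/version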
5.3 Deploy the kubectl component
kubectl is the client tool for operating on k8s resources (create, delete, update, query, and so on). How does kubectl know which cluster to connect to? It needs a kubeconfig file such as /etc/kubernetes/admin.conf (generated by a kubeadm install); that file records the cluster to access and the certificates to use.
You can point at it with the KUBECONFIG environment variable:
[root@master01 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl then loads KUBECONFIG automatically to decide which cluster's resources to manage.
Alternatively, use the method kubeadm prints after initializing a cluster:
[root@master01 ~]# cp /etc/kubernetes/admin.conf /root/.kube/config
kubectl then loads /root/.kube/config when operating on k8s resources.
If KUBECONFIG is set it takes precedence; otherwise kubectl uses /root/.kube/config to decide which cluster to manage.
With a binary install, /root/.kube/config is not generated automatically; it has to be built by hand, as in the steps below.
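Once the kubeconfig from the steps below is in place, you can always confirm what kubectl will actually talk to (a sketch):
kubectl config current-context        # the context that is active
kubectl config view --minify          # the cluster and user that context resolves to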
# Create the CSR request file
[root@master01 work]# cat admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "Hubei", "L": "Wuhan", "O": "system:masters", "OU": "system" }
  ]
}
# Explanation: kube-apiserver later authorizes client requests (kubelet, kube-proxy, Pods) with RBAC. It predefines some RBAC RoleBindings; for example cluster-admin binds the group system:masters to the role cluster-admin, which grants every API on the apiserver. O sets this certificate's group to system:masters; when the certificate is used against kube-apiserver, authentication passes because it is signed by the CA, and because the certificate's group is the pre-authorized system:masters it is granted access to all APIs.
Note: this admin certificate is later used to generate the administrator's kubeconfig. We generally recommend RBAC for role and permission control; Kubernetes takes the certificate's CN field as the User and the O field as the Group. "O": "system:masters" must be exactly system:masters, otherwise the later kubectl create clusterrolebinding fails.
# With O set to system:masters, the built-in cluster-admin ClusterRoleBinding already binds the system:masters group to the cluster-admin ClusterRole.
# Generate the certificate
[root@master01 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
2022/10/26 13:13:36 [INFO] generate received request
2022/10/26 13:13:36 [INFO] received CSR
2022/10/26 13:13:36 [INFO] generating key: rsa-2048
2022/10/26 13:13:37 [INFO] encoded CSR
2022/10/26 13:13:37 [INFO] signed certificate with serial number 446576960760276520597218013698142034111813000185
2022/10/26 13:13:37 [WARNING] This certificate lacks a hosts field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 (Information Requirements).
-profile=kubernetes is the profile name defined in ca-config.json.
This makes admin, and every member of the system:masters group, trusted by the apiserver.
[root@master01 work]# cp admin*.pem /etc/kubernetes/ssl/
Configure the security context.
# Create the kubeconfig file (important). A kubeconfig is kubectl's configuration file and contains everything needed to reach the apiserver: the apiserver address, the CA certificate, and the client certificate kubectl presents. (If a later step complains that the kubeconfig path cannot be found, copy the file there by hand; otherwise ignore this.)
A binary install does not generate ~/.kube/config, so build it manually.
1. Set the cluster parameters. The cluster name is kubernetes; --kubeconfig sets the name and location of the generated file.
[root@master01 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.10.0.10:6443 --kubeconfig=kube.config
Cluster "kubernetes" set.
# Inspect kube.config
[root@master01 work]# cat kube.config
apiVersion: v1
clusters:
- cluster:certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVUkV4RXhidjQ1ZlVBY1hCQlpENGZEZnRqZmtNd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkxTURnMU5EQXdXaGNOTXpJeE1ESXlNRGcxTkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxVV2lYcGZPMFZ6dTBrek8xYllaVUo2dFVnTHp2TDUKTlJwWnpoS2pNZ3pDbmZYM0pHWUM0SWpFdGoxRWNhUGdFOUErbkZ1UndCWlk0a0U3NkE1RjdhaS8rNzdiak5tMApJcWdzY3o2VmdzM0JuMEZSSXVmMlFkQm9OUmNlWWhXL011Z0ZWUlZRSUVQTjRuZGxCVUFScHpNbDk4clpxSVU5CkxTMkZ1WWRwYTRRSi8weVp3YWlYYllxazFDUk5JUkRBMUZJRDh6VEpHeXhRdmZoZ05wRGJTQkhQVlhqK3oxZW4KTU5XQzVDMGt1QWcya3IzcExHUkg3T2dMR1dHV0RuRXVwMUZKMUJONDl2V3IwTW8zQWd6TE1TcTdaRnRGYS9KYgpvN3lOWmZDZnd5NTZQTi9Rckc5eSs0aFBONHVRdG16em1YbUJYT2JJTFF2SXhEN2lKYTVWb0I4Q0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRk4rdFZGbnBVTS9JcmMyc21qWkRYT3BlRE1VVE1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRApYT3BlRE1VVE1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjk3NXBmODY5THN6NkdDWkdxckNyWE1GMm00aENXCmFIRGx4UWQvK3NPQU5neEsxUnV1QWZ5d2lMU2pRTGFaYmxSTnZBMWQ1cGVndXBmcUdyQXljSjE1MkpyUGQ1aE8KNkpGMmlhMWxJMS80bno4OFE1STBZcEpJc0FSUDZBTSttb2RrWjFKM3V5UHFZbFVmSnBJU1pLSlhpQStERVJCaQo1eEhjZWMyWVlPTHJqK2xTMk90eUJ4K3ZpRnlMWFU0aGkrN08zRVRwNjFjRWVKUHE0YzcvWE95UlNnMWZudFFRCkN5SC81eHFBSStSYmhXMzA4TXdVWGh1M2QxcXFyRTBoUXpOVDVoOW4zSHN6V0F3dGthQVUzOHEzMEZqZFdwYXgKeFdDcURBdyt5UElMc2JzdjdBMnVnTUkxY1o0Zk9uWmkwK1MyRHFXdnZ4UEZ6enZza013SS9BNjQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQoserver: https://10.10.0.10:6443name: kubernetes
contexts: null
current-context:
kind: Config
preferences: {}
users: null
The certificate-authority-data field is the base64-encoded content of ca.pem.
2. Set the client credential parameters.
set-credentials admin: "admin" is the CN value from admin-csr.json.
[root@master01 work]# kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
User "admin" set.
[rootmaster01 work ]#cat kube.config
apiVersion: v1
clusters:
- cluster:certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVUkV4RXhidjQ1ZlVBY1hCQlpENGZEZnRqZmtNd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkxTURnMU5EQXdXaGNOTXpJeE1ESXlNRGcxTkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxVV2lYcGZPMFZ6dTBrek8xYllaVUo2dFVnTHp2TDUKTlJwWnpoS2pNZ3pDbmZYM0pHWUM0SWpFdGoxRWNhUGdFOUErbkZ1UndCWlk0a0U3NkE1RjdhaS8rNzdiak5tMApJcWdzY3o2VmdzM0JuMEZSSXVmMlFkQm9OUmNlWWhXL011Z0ZWUlZRSUVQTjRuZGxCVUFScHpNbDk4clpxSVU5CkxTMkZ1WWRwYTRRSi8weVp3YWlYYllxazFDUk5JUkRBMUZJRDh6VEpHeXhRdmZoZ05wRGJTQkhQVlhqK3oxZW4KTU5XQzVDMGt1QWcya3IzcExHUkg3T2dMR1dHV0RuRXVwMUZKMUJONDl2V3IwTW8zQWd6TE1TcTdaRnRGYS9KYgpvN3lOWmZDZnd5NTZQTi9Rckc5eSs0aFBONHVRdG16em1YbUJYT2JJTFF2SXhEN2lKYTVWb0I4Q0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRk4rdFZGbnBVTS9JcmMyc21qWkRYT3BlRE1VVE1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRApYT3BlRE1VVE1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjk3NXBmODY5THN6NkdDWkdxckNyWE1GMm00aENXCmFIRGx4UWQvK3NPQU5neEsxUnV1QWZ5d2lMU2pRTGFaYmxSTnZBMWQ1cGVndXBmcUdyQXljSjE1MkpyUGQ1aE8KNkpGMmlhMWxJMS80bno4OFE1STBZcEpJc0FSUDZBTSttb2RrWjFKM3V5UHFZbFVmSnBJU1pLSlhpQStERVJCaQo1eEhjZWMyWVlPTHJqK2xTMk90eUJ4K3ZpRnlMWFU0aGkrN08zRVRwNjFjRWVKUHE0YzcvWE95UlNnMWZudFFRCkN5SC81eHFBSStSYmhXMzA4TXdVWGh1M2QxcXFyRTBoUXpOVDVoOW4zSHN6V0F3dGthQVUzOHEzMEZqZFdwYXgKeFdDcURBdyt5UElMc2JzdjdBMnVnTUkxY1o0Zk9uWmkwK1MyRHFXdnZ4UEZ6enZza013SS9BNjQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQoserver: https://10.10.0.10:6443name: kubernetes
contexts: null
current-context:
kind: Config
preferences: {}
users:
- name: adminuser:client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQxVENDQXIyZ0F3SUJBZ0lVVGprMEdIUG1CRWNRVVhGeFZrL21wb05pRy9rd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkyTURVd09UQXdXaGNOTXpJeE1ESXpNRFV3T1RBd1dqQm5NUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVJjd0ZRWURWUVFLRXc1egplWE4wWlcwNmJXRnpkR1Z5Y3pFUE1BMEdBMVVFQ3hNR2MzbHpkR1Z0TVE0d0RBWURWUVFERXdWaFpHMXBiakNDCkFTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDRqTGlNaXJmUWlJdHp2MHFOOE1iM3AKSExMRGNMVXB3TUtlTFQwb3NsZ0xFL21jdmMvSG12TlBLNGZMTWk1OVhYTk5IV0JiL0ZVa0RPdlFwKytOenZZegpFTVFkTnNNTkdQOUsxbGFTMUw4V1BJMmYwc0xSM2h1UGt0L09OTmdoNDBxUW81aFYwSFpZTms0TzRpTE1lejhZCmNjbjFvaTVJdUFOTHg2Q2dkNURqQWR6NWJHeTdQL0gvM0phKzVNblNkNU8yb05SL0NMRmVkWVdJb055S1VlTDkKV3A2T0l1S0pUdW94T2JMT1lCWUtITnBSUXVnRmFIQmx4aUxCd2FjR3g4VVBtTUQrRlI0SXlMNldBWGFtTUVPQwpDTi9Yayt6bXUrOGdsQ1hoMUNsblJaTGNENXhDR29sTlhpVFZGclIyc3dKcTN0RHJXSUpzV2FKZnBvTGczWDhDCkF3RUFBYU4vTUgwd0RnWURWUjBQQVFIL0JBUURBZ1dnTUIwR0ExVWRKUVFXTUJRR0NDc0dBUVVGQndNQkJnZ3IKQmdFRkJRY0RBakFNQmdOVkhSTUJBZjhFQWpBQU1CMEdBMVVkRGdRV0JCU1lJaXl3V3M5MzFjZmZJSzRNVWlrdQpWQ2J3NWpBZkJnTlZIU01FR0RBV2dCVGZyVlJaNlZEUHlLM05ySm8yUTF6cVhnekZFekFOQmdrcWhraUc5dzBCCkFRc0ZBQU9DQVFFQVljTW5YdUNmVEV4RkwvRGRkUUYyT3pFTEx3Y1hHOFNMUjJKRUNXSnI0QXpsd1JUVUhjTDkKVGkxWXczZDI4TWh0d09NekNEYzM5ZURLM2RHZGVlMFozeFhXM0FlVGlRcFUrZnBCbURmQW1MRy9Mdzk1YzZsMQoxeXJiN1dXV2pyRHJDZHVVN1JCQnhJT3RqOERnNG1mQk1uUEhFQS9RY04yR1drc2ZwRVpleElzWWc1b242Z0RaCjdyNWxtSzZHTGZSWW83emlJY0NFak1zc281U1owbmJqYXZJUzZ0TUY0U2NLdWJKQUpVSERscFVMSnNqTWxTRW8KY01nQnVtd2ZXUnlRdU1HOXlnbHF0RDlJZDJxTUJsdU5tdlpwRy80ejdVMkhRVjVKOC9uaDFtckladmZKNDNMRQowOTNYOGdIamQvVE5kRjduVFY5ZWphTTlMZCt1TUVJQzhnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQoclient-key-data: 
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdmlNdUl5S3Q5Q0lpM08vU28zd3h2ZWtjc3NOd3RTbkF3cDR0UFNpeVdBc1QrWnk5Cno4ZWE4MDhyaDhzeUxuMWRjMDBkWUZ2OFZTUU02OUNuNzQzTzlqTVF4QjAyd3cwWS8wcldWcExVdnhZOGpaL1MKd3RIZUc0K1MzODQwMkNIalNwQ2ptRlhRZGxnMlRnN2lJc3g3UHhoeHlmV2lMa2k0QTB2SG9LQjNrT01CM1BscwpiTHMvOGYvY2xyN2t5ZEozazdhZzFIOElzVjUxaFlpZzNJcFI0djFhbm80aTRvbE82akU1c3M1Z0Znb2MybEZDCjZBVm9jR1hHSXNIQnB3Ykh4UStZd1A0Vkhnakl2cFlCZHFZd1E0SUkzOWVUN09hNzd5Q1VKZUhVS1dkRmt0d1AKbkVJYWlVMWVKTlVXdEhhekFtcmUwT3RZZ214Wm9sK21ndURkZndJREFRQUJBb0lCQVFDWkxjSjNyL0t3b2VldwpVczFCeEVaV2x6ejFqNXAzZVFIQVNLcHRnU0hjNkYvWlVydGdiNUNYd0FwenhmSFJubEh4R0FrNG5pSzFmT3VqCjkxKzBFR3pSeitZTCtQVXJRcHdHNEFXNWpXVXo1UGczcUxDbEgycHVqY1puNDdxUy9Rb2VBbFNwMzBpb2J2eWcKK2tDWWhHQXVQc1U5VFZTeE1RaCtMMGpPVVRqQ1VacHU2V0RLSVhiVzFqWU5mdWo5bVBuTlhWMTNvOWR6VFpLMApGSm1pd085VVd0WS81Z0FCVDBqQmxhWHpCSVd1QlI5QnMxcHZTS1BsNnBRVXJMMmVKZjJkR2FaRExSWHVXb0crCkp5RUtkNGRkRFZUR0ZzcFVSc0NCYXBOYTg4RjNnNzBmSS9DekR2MG1Mb1Z4cE10RGRQRlljeDhXb0pzRDNGeUQKRFArMVhZNEJBb0dCQVBiVEhXMmhaNkJSbjVXNitEM1F1U3NINDAzWmJUL09Naysyb3JuK1FqNExYR3o0cVBsSQp4YXFhcElycm9XYU05VzJJUEo1dVJZdnM0NTZUcmhWLzl0cWFxOSsrOWtlREJsalE2Wi9kQkJxSTdRUHUrdTVxCnpiK0RxcHg1TVgwVy8rS1dydFlkVHEyaHBwQlQxK0lvZTJ6STN4MXNiUk4zQ3d0VW9CZ2Z4YkQvQW9HQkFNVTAKbXl1SHl3MGVIY21iN3dvTzF5V05NbVNPb0tQK1FJMEV4U2ZCblF1UWNJNFl1dFdITUFUYWJGTXhjazZzNnhPNwppVVdPRWVLck4zbXdjM3NSN3dGUHRWNGUyQ3lWZlZjKzJIV2xmL3JzTGp6eUgrbzM1ZVdLL2NaaW9uT1Y2YzJWCmFLa01tZlR1OXdnbm5UN20vKzlSK0tPdVhubFBreVg4S3J5NElGT0JBb0dBWkZ1TWtLSGE5NVdZbEpIVUU1WkYKWTlpdU5GNGVqSjN6V1BRQ2tDdHdsYmVhMmZmMUJIN3hXQi9PbldtWFU1SW16R1ZqZUd1UHZZZ1JPTTRGTDFxNwpiVUVNZDBvMjZ2YThZdXAydzNoakRjTDAwKytjZWNwVlkvUk9MNWNiWnlndDNOeTFzL3R3blNxb0JmRUJTMFI0CmdzL2Q0Q0hRNitRd1NtZ2JQQlBYRnRNQ2dZQVErSjRCK1FXNGMwY00rcVp2cnlkRXpBbnlMWFFWcU9QVlB2dlkKbUFqejNkSlI2RDdyOFY1b2pJT1dCVU5aRWZpSkVqS1dFY3ZvUGVQZ1RSY2pHRUFCVk9LKzN0aXJ2Wkd6Mkd5NApjeTI0WW1yNFE3NExZaFFlMVA5Uisxc1BwMjhmaWlRZnFEMzNuamtVTXBTTnZVTjVUUXlneVhqSDU5azZBNkdKCjdDNmNBUUtCZ1FDSEdibTJOdXpEaUcyMEJvcURqVWR1dk9ycmYxV3p6THd0L05TZUFGOGdMNGNGcUY2WDAxOUoKcTFuV0VkSkU5UFlEcW9TQ1I4aWovTmxsbVZHVUdjNzdwLytwUDEzQTNWL1ZwdndETU5aK2FDRDJWQkYvUXBUTworUmdTTHhDbWpaeHVBTVlBd0V1c3d2dW5LNG9jeWNWWjBFSHpVSTZnblJDNkxqVk1IZVQxZ0E9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo3.设置上下文参数
The goal is to tie the admin user to the kubernetes cluster; the context is what links them.
[root@master01 work]# kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
Context "kubernetes" created.
[root@master01 work]# cat kube.config
apiVersion: v1
clusters:
- cluster:certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVUkV4RXhidjQ1ZlVBY1hCQlpENGZEZnRqZmtNd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkxTURnMU5EQXdXaGNOTXpJeE1ESXlNRGcxTkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxVV2lYcGZPMFZ6dTBrek8xYllaVUo2dFVnTHp2TDUKTlJwWnpoS2pNZ3pDbmZYM0pHWUM0SWpFdGoxRWNhUGdFOUErbkZ1UndCWlk0a0U3NkE1RjdhaS8rNzdiak5tMApJcWdzY3o2VmdzM0JuMEZSSXVmMlFkQm9OUmNlWWhXL011Z0ZWUlZRSUVQTjRuZGxCVUFScHpNbDk4clpxSVU5CkxTMkZ1WWRwYTRRSi8weVp3YWlYYllxazFDUk5JUkRBMUZJRDh6VEpHeXhRdmZoZ05wRGJTQkhQVlhqK3oxZW4KTU5XQzVDMGt1QWcya3IzcExHUkg3T2dMR1dHV0RuRXVwMUZKMUJONDl2V3IwTW8zQWd6TE1TcTdaRnRGYS9KYgpvN3lOWmZDZnd5NTZQTi9Rckc5eSs0aFBONHVRdG16em1YbUJYT2JJTFF2SXhEN2lKYTVWb0I4Q0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRk4rdFZGbnBVTS9JcmMyc21qWkRYT3BlRE1VVE1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRApYT3BlRE1VVE1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjk3NXBmODY5THN6NkdDWkdxckNyWE1GMm00aENXCmFIRGx4UWQvK3NPQU5neEsxUnV1QWZ5d2lMU2pRTGFaYmxSTnZBMWQ1cGVndXBmcUdyQXljSjE1MkpyUGQ1aE8KNkpGMmlhMWxJMS80bno4OFE1STBZcEpJc0FSUDZBTSttb2RrWjFKM3V5UHFZbFVmSnBJU1pLSlhpQStERVJCaQo1eEhjZWMyWVlPTHJqK2xTMk90eUJ4K3ZpRnlMWFU0aGkrN08zRVRwNjFjRWVKUHE0YzcvWE95UlNnMWZudFFRCkN5SC81eHFBSStSYmhXMzA4TXdVWGh1M2QxcXFyRTBoUXpOVDVoOW4zSHN6V0F3dGthQVUzOHEzMEZqZFdwYXgKeFdDcURBdyt5UElMc2JzdjdBMnVnTUkxY1o0Zk9uWmkwK1MyRHFXdnZ4UEZ6enZza013SS9BNjQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQoserver: https://10.10.0.10:6443name: kubernetes
contexts:
- context:cluster: kubernetesuser: adminname: kubernetes
current-context:
kind: Config
preferences: {}
users:
- name: adminuser:client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQxVENDQXIyZ0F3SUJBZ0lVVGprMEdIUG1CRWNRVVhGeFZrL21wb05pRy9rd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkyTURVd09UQXdXaGNOTXpJeE1ESXpNRFV3T1RBd1dqQm5NUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVJjd0ZRWURWUVFLRXc1egplWE4wWlcwNmJXRnpkR1Z5Y3pFUE1BMEdBMVVFQ3hNR2MzbHpkR1Z0TVE0d0RBWURWUVFERXdWaFpHMXBiakNDCkFTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDRqTGlNaXJmUWlJdHp2MHFOOE1iM3AKSExMRGNMVXB3TUtlTFQwb3NsZ0xFL21jdmMvSG12TlBLNGZMTWk1OVhYTk5IV0JiL0ZVa0RPdlFwKytOenZZegpFTVFkTnNNTkdQOUsxbGFTMUw4V1BJMmYwc0xSM2h1UGt0L09OTmdoNDBxUW81aFYwSFpZTms0TzRpTE1lejhZCmNjbjFvaTVJdUFOTHg2Q2dkNURqQWR6NWJHeTdQL0gvM0phKzVNblNkNU8yb05SL0NMRmVkWVdJb055S1VlTDkKV3A2T0l1S0pUdW94T2JMT1lCWUtITnBSUXVnRmFIQmx4aUxCd2FjR3g4VVBtTUQrRlI0SXlMNldBWGFtTUVPQwpDTi9Yayt6bXUrOGdsQ1hoMUNsblJaTGNENXhDR29sTlhpVFZGclIyc3dKcTN0RHJXSUpzV2FKZnBvTGczWDhDCkF3RUFBYU4vTUgwd0RnWURWUjBQQVFIL0JBUURBZ1dnTUIwR0ExVWRKUVFXTUJRR0NDc0dBUVVGQndNQkJnZ3IKQmdFRkJRY0RBakFNQmdOVkhSTUJBZjhFQWpBQU1CMEdBMVVkRGdRV0JCU1lJaXl3V3M5MzFjZmZJSzRNVWlrdQpWQ2J3NWpBZkJnTlZIU01FR0RBV2dCVGZyVlJaNlZEUHlLM05ySm8yUTF6cVhnekZFekFOQmdrcWhraUc5dzBCCkFRc0ZBQU9DQVFFQVljTW5YdUNmVEV4RkwvRGRkUUYyT3pFTEx3Y1hHOFNMUjJKRUNXSnI0QXpsd1JUVUhjTDkKVGkxWXczZDI4TWh0d09NekNEYzM5ZURLM2RHZGVlMFozeFhXM0FlVGlRcFUrZnBCbURmQW1MRy9Mdzk1YzZsMQoxeXJiN1dXV2pyRHJDZHVVN1JCQnhJT3RqOERnNG1mQk1uUEhFQS9RY04yR1drc2ZwRVpleElzWWc1b242Z0RaCjdyNWxtSzZHTGZSWW83emlJY0NFak1zc281U1owbmJqYXZJUzZ0TUY0U2NLdWJKQUpVSERscFVMSnNqTWxTRW8KY01nQnVtd2ZXUnlRdU1HOXlnbHF0RDlJZDJxTUJsdU5tdlpwRy80ejdVMkhRVjVKOC9uaDFtckladmZKNDNMRQowOTNYOGdIamQvVE5kRjduVFY5ZWphTTlMZCt1TUVJQzhnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQoclient-key-data: 
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdmlNdUl5S3Q5Q0lpM08vU28zd3h2ZWtjc3NOd3RTbkF3cDR0UFNpeVdBc1QrWnk5Cno4ZWE4MDhyaDhzeUxuMWRjMDBkWUZ2OFZTUU02OUNuNzQzTzlqTVF4QjAyd3cwWS8wcldWcExVdnhZOGpaL1MKd3RIZUc0K1MzODQwMkNIalNwQ2ptRlhRZGxnMlRnN2lJc3g3UHhoeHlmV2lMa2k0QTB2SG9LQjNrT01CM1BscwpiTHMvOGYvY2xyN2t5ZEozazdhZzFIOElzVjUxaFlpZzNJcFI0djFhbm80aTRvbE82akU1c3M1Z0Znb2MybEZDCjZBVm9jR1hHSXNIQnB3Ykh4UStZd1A0Vkhnakl2cFlCZHFZd1E0SUkzOWVUN09hNzd5Q1VKZUhVS1dkRmt0d1AKbkVJYWlVMWVKTlVXdEhhekFtcmUwT3RZZ214Wm9sK21ndURkZndJREFRQUJBb0lCQVFDWkxjSjNyL0t3b2VldwpVczFCeEVaV2x6ejFqNXAzZVFIQVNLcHRnU0hjNkYvWlVydGdiNUNYd0FwenhmSFJubEh4R0FrNG5pSzFmT3VqCjkxKzBFR3pSeitZTCtQVXJRcHdHNEFXNWpXVXo1UGczcUxDbEgycHVqY1puNDdxUy9Rb2VBbFNwMzBpb2J2eWcKK2tDWWhHQXVQc1U5VFZTeE1RaCtMMGpPVVRqQ1VacHU2V0RLSVhiVzFqWU5mdWo5bVBuTlhWMTNvOWR6VFpLMApGSm1pd085VVd0WS81Z0FCVDBqQmxhWHpCSVd1QlI5QnMxcHZTS1BsNnBRVXJMMmVKZjJkR2FaRExSWHVXb0crCkp5RUtkNGRkRFZUR0ZzcFVSc0NCYXBOYTg4RjNnNzBmSS9DekR2MG1Mb1Z4cE10RGRQRlljeDhXb0pzRDNGeUQKRFArMVhZNEJBb0dCQVBiVEhXMmhaNkJSbjVXNitEM1F1U3NINDAzWmJUL09Naysyb3JuK1FqNExYR3o0cVBsSQp4YXFhcElycm9XYU05VzJJUEo1dVJZdnM0NTZUcmhWLzl0cWFxOSsrOWtlREJsalE2Wi9kQkJxSTdRUHUrdTVxCnpiK0RxcHg1TVgwVy8rS1dydFlkVHEyaHBwQlQxK0lvZTJ6STN4MXNiUk4zQ3d0VW9CZ2Z4YkQvQW9HQkFNVTAKbXl1SHl3MGVIY21iN3dvTzF5V05NbVNPb0tQK1FJMEV4U2ZCblF1UWNJNFl1dFdITUFUYWJGTXhjazZzNnhPNwppVVdPRWVLck4zbXdjM3NSN3dGUHRWNGUyQ3lWZlZjKzJIV2xmL3JzTGp6eUgrbzM1ZVdLL2NaaW9uT1Y2YzJWCmFLa01tZlR1OXdnbm5UN20vKzlSK0tPdVhubFBreVg4S3J5NElGT0JBb0dBWkZ1TWtLSGE5NVdZbEpIVUU1WkYKWTlpdU5GNGVqSjN6V1BRQ2tDdHdsYmVhMmZmMUJIN3hXQi9PbldtWFU1SW16R1ZqZUd1UHZZZ1JPTTRGTDFxNwpiVUVNZDBvMjZ2YThZdXAydzNoakRjTDAwKytjZWNwVlkvUk9MNWNiWnlndDNOeTFzL3R3blNxb0JmRUJTMFI0CmdzL2Q0Q0hRNitRd1NtZ2JQQlBYRnRNQ2dZQVErSjRCK1FXNGMwY00rcVp2cnlkRXpBbnlMWFFWcU9QVlB2dlkKbUFqejNkSlI2RDdyOFY1b2pJT1dCVU5aRWZpSkVqS1dFY3ZvUGVQZ1RSY2pHRUFCVk9LKzN0aXJ2Wkd6Mkd5NApjeTI0WW1yNFE3NExZaFFlMVA5Uisxc1BwMjhmaWlRZnFEMzNuamtVTXBTTnZVTjVUUXlneVhqSDU5azZBNkdKCjdDNmNBUUtCZ1FDSEdibTJOdXpEaUcyMEJvcURqVWR1dk9ycmYxV3p6THd0L05TZUFGOGdMNGNGcUY2WDAxOUoKcTFuV0VkSkU5UFlEcW9TQ1I4aWovTmxsbVZHVUdjNzdwLytwUDEzQTNWL1ZwdndETU5aK2FDRDJWQkYvUXBUTworUmdTTHhDbWpaeHVBTVlBd0V1c3d2dW5LNG9jeWNWWjBFSHpVSTZnblJDNkxqVk1IZVQxZ0E9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo4.设置当前上下文 当前上下文就是admin这个用户访问当前集群 use-context kubernetes 这里kubernetes是上下文的名字
[rootmaster01 work ]#kubectl config use-context kubernetes --kubeconfigkube.config Switched to context “kubernetes”.
[rootmaster01 work ]#cat kube.config
apiVersion: v1
clusters:
- cluster:certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVUkV4RXhidjQ1ZlVBY1hCQlpENGZEZnRqZmtNd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkxTURnMU5EQXdXaGNOTXpJeE1ESXlNRGcxTkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxVV2lYcGZPMFZ6dTBrek8xYllaVUo2dFVnTHp2TDUKTlJwWnpoS2pNZ3pDbmZYM0pHWUM0SWpFdGoxRWNhUGdFOUErbkZ1UndCWlk0a0U3NkE1RjdhaS8rNzdiak5tMApJcWdzY3o2VmdzM0JuMEZSSXVmMlFkQm9OUmNlWWhXL011Z0ZWUlZRSUVQTjRuZGxCVUFScHpNbDk4clpxSVU5CkxTMkZ1WWRwYTRRSi8weVp3YWlYYllxazFDUk5JUkRBMUZJRDh6VEpHeXhRdmZoZ05wRGJTQkhQVlhqK3oxZW4KTU5XQzVDMGt1QWcya3IzcExHUkg3T2dMR1dHV0RuRXVwMUZKMUJONDl2V3IwTW8zQWd6TE1TcTdaRnRGYS9KYgpvN3lOWmZDZnd5NTZQTi9Rckc5eSs0aFBONHVRdG16em1YbUJYT2JJTFF2SXhEN2lKYTVWb0I4Q0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRk4rdFZGbnBVTS9JcmMyc21qWkRYT3BlRE1VVE1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRApYT3BlRE1VVE1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjk3NXBmODY5THN6NkdDWkdxckNyWE1GMm00aENXCmFIRGx4UWQvK3NPQU5neEsxUnV1QWZ5d2lMU2pRTGFaYmxSTnZBMWQ1cGVndXBmcUdyQXljSjE1MkpyUGQ1aE8KNkpGMmlhMWxJMS80bno4OFE1STBZcEpJc0FSUDZBTSttb2RrWjFKM3V5UHFZbFVmSnBJU1pLSlhpQStERVJCaQo1eEhjZWMyWVlPTHJqK2xTMk90eUJ4K3ZpRnlMWFU0aGkrN08zRVRwNjFjRWVKUHE0YzcvWE95UlNnMWZudFFRCkN5SC81eHFBSStSYmhXMzA4TXdVWGh1M2QxcXFyRTBoUXpOVDVoOW4zSHN6V0F3dGthQVUzOHEzMEZqZFdwYXgKeFdDcURBdyt5UElMc2JzdjdBMnVnTUkxY1o0Zk9uWmkwK1MyRHFXdnZ4UEZ6enZza013SS9BNjQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQoserver: https://10.10.0.10:6443name: kubernetes
contexts:
- context:cluster: kubernetesuser: adminname: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: adminuser:client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQxVENDQXIyZ0F3SUJBZ0lVVGprMEdIUG1CRWNRVVhGeFZrL21wb05pRy9rd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkyTURVd09UQXdXaGNOTXpJeE1ESXpNRFV3T1RBd1dqQm5NUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVJjd0ZRWURWUVFLRXc1egplWE4wWlcwNmJXRnpkR1Z5Y3pFUE1BMEdBMVVFQ3hNR2MzbHpkR1Z0TVE0d0RBWURWUVFERXdWaFpHMXBiakNDCkFTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDRqTGlNaXJmUWlJdHp2MHFOOE1iM3AKSExMRGNMVXB3TUtlTFQwb3NsZ0xFL21jdmMvSG12TlBLNGZMTWk1OVhYTk5IV0JiL0ZVa0RPdlFwKytOenZZegpFTVFkTnNNTkdQOUsxbGFTMUw4V1BJMmYwc0xSM2h1UGt0L09OTmdoNDBxUW81aFYwSFpZTms0TzRpTE1lejhZCmNjbjFvaTVJdUFOTHg2Q2dkNURqQWR6NWJHeTdQL0gvM0phKzVNblNkNU8yb05SL0NMRmVkWVdJb055S1VlTDkKV3A2T0l1S0pUdW94T2JMT1lCWUtITnBSUXVnRmFIQmx4aUxCd2FjR3g4VVBtTUQrRlI0SXlMNldBWGFtTUVPQwpDTi9Yayt6bXUrOGdsQ1hoMUNsblJaTGNENXhDR29sTlhpVFZGclIyc3dKcTN0RHJXSUpzV2FKZnBvTGczWDhDCkF3RUFBYU4vTUgwd0RnWURWUjBQQVFIL0JBUURBZ1dnTUIwR0ExVWRKUVFXTUJRR0NDc0dBUVVGQndNQkJnZ3IKQmdFRkJRY0RBakFNQmdOVkhSTUJBZjhFQWpBQU1CMEdBMVVkRGdRV0JCU1lJaXl3V3M5MzFjZmZJSzRNVWlrdQpWQ2J3NWpBZkJnTlZIU01FR0RBV2dCVGZyVlJaNlZEUHlLM05ySm8yUTF6cVhnekZFekFOQmdrcWhraUc5dzBCCkFRc0ZBQU9DQVFFQVljTW5YdUNmVEV4RkwvRGRkUUYyT3pFTEx3Y1hHOFNMUjJKRUNXSnI0QXpsd1JUVUhjTDkKVGkxWXczZDI4TWh0d09NekNEYzM5ZURLM2RHZGVlMFozeFhXM0FlVGlRcFUrZnBCbURmQW1MRy9Mdzk1YzZsMQoxeXJiN1dXV2pyRHJDZHVVN1JCQnhJT3RqOERnNG1mQk1uUEhFQS9RY04yR1drc2ZwRVpleElzWWc1b242Z0RaCjdyNWxtSzZHTGZSWW83emlJY0NFak1zc281U1owbmJqYXZJUzZ0TUY0U2NLdWJKQUpVSERscFVMSnNqTWxTRW8KY01nQnVtd2ZXUnlRdU1HOXlnbHF0RDlJZDJxTUJsdU5tdlpwRy80ejdVMkhRVjVKOC9uaDFtckladmZKNDNMRQowOTNYOGdIamQvVE5kRjduVFY5ZWphTTlMZCt1TUVJQzhnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQoclient-key-data: 
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdmlNdUl5S3Q5Q0lpM08vU28zd3h2ZWtjc3NOd3RTbkF3cDR0UFNpeVdBc1QrWnk5Cno4ZWE4MDhyaDhzeUxuMWRjMDBkWUZ2OFZTUU02OUNuNzQzTzlqTVF4QjAyd3cwWS8wcldWcExVdnhZOGpaL1MKd3RIZUc0K1MzODQwMkNIalNwQ2ptRlhRZGxnMlRnN2lJc3g3UHhoeHlmV2lMa2k0QTB2SG9LQjNrT01CM1BscwpiTHMvOGYvY2xyN2t5ZEozazdhZzFIOElzVjUxaFlpZzNJcFI0djFhbm80aTRvbE82akU1c3M1Z0Znb2MybEZDCjZBVm9jR1hHSXNIQnB3Ykh4UStZd1A0Vkhnakl2cFlCZHFZd1E0SUkzOWVUN09hNzd5Q1VKZUhVS1dkRmt0d1AKbkVJYWlVMWVKTlVXdEhhekFtcmUwT3RZZ214Wm9sK21ndURkZndJREFRQUJBb0lCQVFDWkxjSjNyL0t3b2VldwpVczFCeEVaV2x6ejFqNXAzZVFIQVNLcHRnU0hjNkYvWlVydGdiNUNYd0FwenhmSFJubEh4R0FrNG5pSzFmT3VqCjkxKzBFR3pSeitZTCtQVXJRcHdHNEFXNWpXVXo1UGczcUxDbEgycHVqY1puNDdxUy9Rb2VBbFNwMzBpb2J2eWcKK2tDWWhHQXVQc1U5VFZTeE1RaCtMMGpPVVRqQ1VacHU2V0RLSVhiVzFqWU5mdWo5bVBuTlhWMTNvOWR6VFpLMApGSm1pd085VVd0WS81Z0FCVDBqQmxhWHpCSVd1QlI5QnMxcHZTS1BsNnBRVXJMMmVKZjJkR2FaRExSWHVXb0crCkp5RUtkNGRkRFZUR0ZzcFVSc0NCYXBOYTg4RjNnNzBmSS9DekR2MG1Mb1Z4cE10RGRQRlljeDhXb0pzRDNGeUQKRFArMVhZNEJBb0dCQVBiVEhXMmhaNkJSbjVXNitEM1F1U3NINDAzWmJUL09Naysyb3JuK1FqNExYR3o0cVBsSQp4YXFhcElycm9XYU05VzJJUEo1dVJZdnM0NTZUcmhWLzl0cWFxOSsrOWtlREJsalE2Wi9kQkJxSTdRUHUrdTVxCnpiK0RxcHg1TVgwVy8rS1dydFlkVHEyaHBwQlQxK0lvZTJ6STN4MXNiUk4zQ3d0VW9CZ2Z4YkQvQW9HQkFNVTAKbXl1SHl3MGVIY21iN3dvTzF5V05NbVNPb0tQK1FJMEV4U2ZCblF1UWNJNFl1dFdITUFUYWJGTXhjazZzNnhPNwppVVdPRWVLck4zbXdjM3NSN3dGUHRWNGUyQ3lWZlZjKzJIV2xmL3JzTGp6eUgrbzM1ZVdLL2NaaW9uT1Y2YzJWCmFLa01tZlR1OXdnbm5UN20vKzlSK0tPdVhubFBreVg4S3J5NElGT0JBb0dBWkZ1TWtLSGE5NVdZbEpIVUU1WkYKWTlpdU5GNGVqSjN6V1BRQ2tDdHdsYmVhMmZmMUJIN3hXQi9PbldtWFU1SW16R1ZqZUd1UHZZZ1JPTTRGTDFxNwpiVUVNZDBvMjZ2YThZdXAydzNoakRjTDAwKytjZWNwVlkvUk9MNWNiWnlndDNOeTFzL3R3blNxb0JmRUJTMFI0CmdzL2Q0Q0hRNitRd1NtZ2JQQlBYRnRNQ2dZQVErSjRCK1FXNGMwY00rcVp2cnlkRXpBbnlMWFFWcU9QVlB2dlkKbUFqejNkSlI2RDdyOFY1b2pJT1dCVU5aRWZpSkVqS1dFY3ZvUGVQZ1RSY2pHRUFCVk9LKzN0aXJ2Wkd6Mkd5NApjeTI0WW1yNFE3NExZaFFlMVA5Uisxc1BwMjhmaWlRZnFEMzNuamtVTXBTTnZVTjVUUXlneVhqSDU5azZBNkdKCjdDNmNBUUtCZ1FDSEdibTJOdXpEaUcyMEJvcURqVWR1dk9ycmYxV3p6THd0L05TZUFGOGdMNGNGcUY2WDAxOUoKcTFuV0VkSkU5UFlEcW9TQ1I4aWovTmxsbVZHVUdjNzdwLytwUDEzQTNWL1ZwdndETU5aK2FDRDJWQkYvUXBUTworUmdTTHhDbWpaeHVBTVlBd0V1c3d2dW5LNG9jeWNWWjBFSHpVSTZnblJDNkxqVk1IZVQxZ0E9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo将config文件拷贝到/root/.kube/config [rootmaster01 work ]#mkdir ~/.kube -p [rootmaster01 work ]#cp kube.config ~/.kube/config
5. Grant the kubernetes certificate access to the kubelet API. To create resources through kubectl, the kubernetes user (the CN defined in ca-csr.json) also needs to be bound to a clusterrole:
[root@master01 work]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
clusterrolebinding.rbac.authorization.k8s.io/kube-apiserver:kubelet-apis created
# Check the cluster component status
[root@master01 work]# kubectl cluster-info
Kubernetes control plane is running at https://10.10.0.10:6443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@master01 work]# kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-1               Healthy     {"health":"true"}
etcd-0               Healthy     {"health":"true"}
etcd-2               Healthy     {"health":"true"}
(scheduler and controller-manager show Unhealthy at this point because they have not been deployed yet.)
[root@master01 work]# kubectl get all --all-namespaces
NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.255.0.1   <none>        443/TCP   147m
# Copy the kubectl config to the other master nodes, so the cluster can still be managed if a single machine fails
[rootmaster02 ~ ]#mkdir /root/.kube/ [rootmaster03 kubernetes ]#mkdir /root/.kube/
[rootmaster01 work ]#rsync -vaz /root/.kube/config master02:/root/.kube/ sending incremental file list config
sent 4,193 bytes received 35 bytes 8,456.00 bytes/sec total size is 6,234 speedup is 1.47 [rootmaster01 work ]#rsync -vaz /root/.kube/config master03:/root/.kube/ sending incremental file list config
sent 4,193 bytes received 35 bytes 2,818.67 bytes/sec total size is 6,234 speedup is 1.47
#Configure kubectl subcommand completion
[root@master01 work]# yum install -y bash-completion
[root@master01 work]# source /usr/share/bash-completion/bash_completion
[root@master01 work]# source <(kubectl completion bash)
[root@master01 work]# kubectl completion bash > ~/.kube/completion.bash.inc
[root@master01 work]# source /root/.kube/completion.bash.inc
[root@master01 work]# source $HOME/.bash_profile
Press Tab twice after a partial command and everything that can be completed is listed:
[rootmaster01 work ]#kubectl get
apiservices.apiregistration.k8s.io namespaces
certificatesigningrequests.certificates.k8s.io networkpolicies.networking.k8s.io
clusterrolebindings.rbac.authorization.k8s.io nodes
clusterroles.rbac.authorization.k8s.io persistentvolumeclaims
componentstatuses persistentvolumes
configmaps poddisruptionbudgets.policy
controllerrevisions.apps pods
cronjobs.batch podsecuritypolicies.policy
csidrivers.storage.k8s.io podtemplates
csinodes.storage.k8s.io priorityclasses.scheduling.k8s.io
customresourcedefinitions.apiextensions.k8s.io prioritylevelconfigurations.flowcontrol.apiserver.k8s.io
daemonsets.apps replicasets.apps
deployments.apps replicationcontrollers
endpoints resourcequotas
endpointslices.discovery.k8s.io rolebindings.rbac.authorization.k8s.io
events roles.rbac.authorization.k8s.io
events.events.k8s.io runtimeclasses.node.k8s.io
flowschemas.flowcontrol.apiserver.k8s.io secrets
horizontalpodautoscalers.autoscaling serviceaccounts
ingressclasses.networking.k8s.io services
ingresses.extensions statefulsets.apps
ingresses.networking.k8s.io storageclasses.storage.k8s.io
jobs.batch storageversions.internal.apiserver.k8s.io
leases.coordination.k8s.io validatingwebhookconfigurations.admissionregistration.k8s.io
limitranges volumeattachments.storage.k8s.io
mutatingwebhookconfigurations.admissionregistration.k8s.io
Official kubectl cheat sheet and completion setup reference: https://kubernetes.io/zh/docs/reference/kubectl/cheatsheet/
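The source commands above only affect the current shell. A small sketch to make completion persistent across logins (assumes bash and the files created above):
# append to ~/.bashrc so completion is loaded at every login
cat >> ~/.bashrc <<'EOF'
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
EOF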
5.4 部署kube-controller-manager组件
#Create the CSR request file
[root@master01 work]# cat kube-controller-manager-csr.json
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "10.10.0.10",
    "10.10.0.11",
    "10.10.0.12",
    "10.10.0.100"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "system:kube-controller-manager",
      "OU": "system"
    }
  ]
}
Note: the hosts list contains all kube-controller-manager node IPs. CN and O are both system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs to work.
#Generate the certificate
[root@master01 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
2022/10/26 14:27:54 [INFO] generate received request
2022/10/26 14:27:54 [INFO] received CSR
2022/10/26 14:27:54 [INFO] generating key: rsa-2048
2022/10/26 14:27:54 [INFO] encoded CSR
2022/10/26 14:27:54 [INFO] signed certificate with serial number 721895207641224154550279977497269955483545721536
2022/10/26 14:27:54 [WARNING] This certificate lacks a hosts field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 (Information Requirements).
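Before building the kubeconfig you can optionally confirm that the certificate carries the expected subject (an extra check, not required by the procedure; assumes openssl is installed):
openssl x509 -in kube-controller-manager.pem -noout -subject -dates
# the subject should contain O=system:kube-controller-manager and CN=system:kube-controller-manager
cfssl certinfo -cert kube-controller-manager.pem | grep -E 'common_name|organization'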
#Create the kubeconfig for kube-controller-manager
1. Set the cluster parameters
[root@master01 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.10.0.10:6443 --kubeconfig=kube-controller-manager.kubeconfig
Cluster "kubernetes" set.
[root@master01 work]# cat kube-controller-manager.kubeconfig
apiVersion: v1
clusters:
- cluster:certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVUkV4RXhidjQ1ZlVBY1hCQlpENGZEZnRqZmtNd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkxTURnMU5EQXdXaGNOTXpJeE1ESXlNRGcxTkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxVV2lYcGZPMFZ6dTBrek8xYllaVUo2dFVnTHp2TDUKTlJwWnpoS2pNZ3pDbmZYM0pHWUM0SWpFdGoxRWNhUGdFOUErbkZ1UndCWlk0a0U3NkE1RjdhaS8rNzdiak5tMApJcWdzY3o2VmdzM0JuMEZSSXVmMlFkQm9OUmNlWWhXL011Z0ZWUlZRSUVQTjRuZGxCVUFScHpNbDk4clpxSVU5CkxTMkZ1WWRwYTRRSi8weVp3YWlYYllxazFDUk5JUkRBMUZJRDh6VEpHeXhRdmZoZ05wRGJTQkhQVlhqK3oxZW4KTU5XQzVDMGt1QWcya3IzcExHUkg3T2dMR1dHV0RuRXVwMUZKMUJONDl2V3IwTW8zQWd6TE1TcTdaRnRGYS9KYgpvN3lOWmZDZnd5NTZQTi9Rckc5eSs0aFBONHVRdG16em1YbUJYT2JJTFF2SXhEN2lKYTVWb0I4Q0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRk4rdFZGbnBVTS9JcmMyc21qWkRYT3BlRE1VVE1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRApYT3BlRE1VVE1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjk3NXBmODY5THN6NkdDWkdxckNyWE1GMm00aENXCmFIRGx4UWQvK3NPQU5neEsxUnV1QWZ5d2lMU2pRTGFaYmxSTnZBMWQ1cGVndXBmcUdyQXljSjE1MkpyUGQ1aE8KNkpGMmlhMWxJMS80bno4OFE1STBZcEpJc0FSUDZBTSttb2RrWjFKM3V5UHFZbFVmSnBJU1pLSlhpQStERVJCaQo1eEhjZWMyWVlPTHJqK2xTMk90eUJ4K3ZpRnlMWFU0aGkrN08zRVRwNjFjRWVKUHE0YzcvWE95UlNnMWZudFFRCkN5SC81eHFBSStSYmhXMzA4TXdVWGh1M2QxcXFyRTBoUXpOVDVoOW4zSHN6V0F3dGthQVUzOHEzMEZqZFdwYXgKeFdDcURBdyt5UElMc2JzdjdBMnVnTUkxY1o0Zk9uWmkwK1MyRHFXdnZ4UEZ6enZza013SS9BNjQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQoserver: https://10.10.0.10:6443name: kubernetes
contexts: null
current-context:
kind: Config
preferences: {}
users: null

2. Set the client authentication parameters
The user system:kube-controller-manager is the value set as CN in kube-controller-manager-csr.json.
[root@master01 work]# kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
User "system:kube-controller-manager" set.
[root@master01 work]# cat kube-controller-manager.kubeconfig
apiVersion: v1
clusters:
- cluster:certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVUkV4RXhidjQ1ZlVBY1hCQlpENGZEZnRqZmtNd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkxTURnMU5EQXdXaGNOTXpJeE1ESXlNRGcxTkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxVV2lYcGZPMFZ6dTBrek8xYllaVUo2dFVnTHp2TDUKTlJwWnpoS2pNZ3pDbmZYM0pHWUM0SWpFdGoxRWNhUGdFOUErbkZ1UndCWlk0a0U3NkE1RjdhaS8rNzdiak5tMApJcWdzY3o2VmdzM0JuMEZSSXVmMlFkQm9OUmNlWWhXL011Z0ZWUlZRSUVQTjRuZGxCVUFScHpNbDk4clpxSVU5CkxTMkZ1WWRwYTRRSi8weVp3YWlYYllxazFDUk5JUkRBMUZJRDh6VEpHeXhRdmZoZ05wRGJTQkhQVlhqK3oxZW4KTU5XQzVDMGt1QWcya3IzcExHUkg3T2dMR1dHV0RuRXVwMUZKMUJONDl2V3IwTW8zQWd6TE1TcTdaRnRGYS9KYgpvN3lOWmZDZnd5NTZQTi9Rckc5eSs0aFBONHVRdG16em1YbUJYT2JJTFF2SXhEN2lKYTVWb0I4Q0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRk4rdFZGbnBVTS9JcmMyc21qWkRYT3BlRE1VVE1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRApYT3BlRE1VVE1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjk3NXBmODY5THN6NkdDWkdxckNyWE1GMm00aENXCmFIRGx4UWQvK3NPQU5neEsxUnV1QWZ5d2lMU2pRTGFaYmxSTnZBMWQ1cGVndXBmcUdyQXljSjE1MkpyUGQ1aE8KNkpGMmlhMWxJMS80bno4OFE1STBZcEpJc0FSUDZBTSttb2RrWjFKM3V5UHFZbFVmSnBJU1pLSlhpQStERVJCaQo1eEhjZWMyWVlPTHJqK2xTMk90eUJ4K3ZpRnlMWFU0aGkrN08zRVRwNjFjRWVKUHE0YzcvWE95UlNnMWZudFFRCkN5SC81eHFBSStSYmhXMzA4TXdVWGh1M2QxcXFyRTBoUXpOVDVoOW4zSHN6V0F3dGthQVUzOHEzMEZqZFdwYXgKeFdDcURBdyt5UElMc2JzdjdBMnVnTUkxY1o0Zk9uWmkwK1MyRHFXdnZ4UEZ6enZza013SS9BNjQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQoserver: https://10.10.0.10:6443name: kubernetes
contexts: null
current-context:
kind: Config
preferences: {}
users:
- name: system:kube-controller-manageruser:client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVLakNDQXhLZ0F3SUJBZ0lVZm5MbWtpdDdmZEJuY2lmdE5pQ01FdjZ0V3NBd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkyTURZeU16QXdXaGNOTXpJeE1ESXpNRFl5TXpBd1dqQ0JrREVMTUFrR0ExVUUKQmhNQ1EwNHhEakFNQmdOVkJBZ1RCVWgxWW1WcE1RNHdEQVlEVlFRSEV3VlhkV2hoYmpFbk1DVUdBMVVFQ2hNZQpjM2x6ZEdWdE9tdDFZbVV0WTI5dWRISnZiR3hsY2kxdFlXNWhaMlZ5TVE4d0RRWURWUVFMRXdaemVYTjBaVzB4Ckp6QWxCZ05WQkFNVEhuTjVjM1JsYlRwcmRXSmxMV052Ym5SeWIyeHNaWEl0YldGdVlXZGxjakNDQVNJd0RRWUoKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS01RQWFjZElxUEZ3K29LckR1aTdQcmxnaThpT1hKMworTTBTZUNDaFlMZjd5dU9VWlRZV0wyZk81UmZMQ1RRODFCWXVQOUxrY0k4aFRIRHZMZkQ3ZmdWcmZIOVMyaUZ3CllHSGNCc2ZjNFZGVVg5YUlDU0VlM0RRY3NDWG55VnJEVkQzUVUvY3VZQVRuZEMxdFMvZU1pNHFEVFVKTU1maFUKQnR1L3NwUmY4bGpkOWVtdk1ZQzZkQ3dIT1FsZmF0WThLcHJxZjdYTUtKSWJRdnhsSEhlL3RaYW1RWGo0Si9tagpqNHRCYW1uckQrNEp2dXpDY3ZOaEhtc09PbXA5Mkkxd3lNNlNWQUpjM2ZPTFRrakFGZGQ3MnJRUHo4elhhK3NkClpzVWR3SXNZbGp5bW5NcUdaWlZuU1dYL2RxQ2VxQzRxcnZybmo4dmN2NlE4OVVBTjNSU2pmdTBDQXdFQUFhT0IKcVRDQnBqQU9CZ05WSFE4QkFmOEVCQU1DQmFBd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUdDQ3NHQVFVRgpCd01DTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEVlIwT0JCWUVGTktGSjhqQXIvanQyUmN4MWNNbStFNUJVRTNsCk1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRFhPcGVETVVUTUNjR0ExVWRFUVFnTUI2SEJIOEEKQUFHSEJBb0tBQXFIQkFvS0FBdUhCQW9LQUF5SEJBb0tBR1F3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUFkcQo4cE4rOXdKdUJFRVpDT2FEN2MrZ0tMK3ZtL2lJTzdQL3ZlbFgrNzd0cVgrTTBqaUwwa2IvTmczRWNuVUhieDVECi9haUJLZU1rRVltUG5rc2hHWng1eGhmN0oraFlkOWsrNm5wOGRmUk80bHlpeENCMWNYODBoMFhZbjBmZ2N0eE8Kd0RCaVhNcUI2OEo4WnkvYVBoSVY0OGpCaUZZZlJMcUpxZWZEMEx2NFh2RVpISkt6bVpZdnlWV3dYRWFzRFI0bQo4cGRSWEtOakRXTXkyTHI1WGNRLzNKaFRCZ2g4cEV4czBFaUQwTUxDUmNwaGdjK3p0UndXZmR0T2FQR0tjNUpDCis3aEVjQmZJZ1MwNTE5bmt5Yjg0dEJpWmRhK21PQVB4RTNqVTRLK01hRWxZK3pFNHR6a1pMOGJkbEV0Vm9Jb2YKSVF2bEl6akJxSzE4VG9tUmNmQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQoclient-key-data: 
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBb3hBQnB4MGlvOFhENmdxc082THMrdVdDTHlJNWNuZjR6Uko0SUtGZ3Qvdks0NVJsCk5oWXZaODdsRjhzSk5EelVGaTQvMHVSd2p5Rk1jTzh0OFB0K0JXdDhmMUxhSVhCZ1lkd0d4OXpoVVZSZjFvZ0oKSVI3Y05CeXdKZWZKV3NOVVBkQlQ5eTVnQk9kMExXMUw5NHlMaW9OTlFrd3grRlFHMjcreWxGL3lXTjMxNmE4eApnTHAwTEFjNUNWOXExandxbXVwL3Rjd29raHRDL0dVY2Q3KzFscVpCZVBnbithT1BpMEZxYWVzUDdnbSs3TUp5CjgyRWVhdzQ2YW4zWWpYREl6cEpVQWx6ZDg0dE9TTUFWMTN2YXRBL1B6TmRyNngxbXhSM0FpeGlXUEthY3lvWmwKbFdkSlpmOTJvSjZvTGlxdSt1ZVB5OXkvcER6MVFBM2RGS04rN1FJREFRQUJBb0lCQUc4WFU1anZ2NDdHQ0lCbAp2d3R1SjNlVGJ3ci9qUlhRYUlBR0tqTkkzcVRaOVZMdzRiZGtpKzEwUmgzY3BLdWpHWGIzRVdKelljQVJsb3VHClY4MUsrWU5seEU3V09tZjNzS0piRFgrU215c1dpYWlWeTJwMkpOMllBZVlCTU93V0VVbC9xZ1RING9EVTB4Q3oKMnNLUFRPNFVJRW1mc1plV1g0bk00elEwM2QzdVc4Qi90R3BtcHhvNk1Da2FPNW9yVVRyUFYvUHpTdnA5R3l1VwpzUnE3b3J0QjdNZnRjTXlWdUUzWWVXYlJYbWRFSlk3cWpBWW9qVXVjdEJzNnVlSHhUbGRKNWNWdzQ2aXBPRzE4CnhBZE5xLzNNRVl2b21TbzhJQXJUWTBIc0o4djlXQ0VhUnF5ZVAwTHB5eDdIamxDb0ZhRnhTUTdxTCt0VjlPUnEKZmxIVnlXRUNnWUVBMGdlK3h0amtMOXdFVkpkZHBrektFWGJTc09TaHhya0V6blhNY0ZTNThkQ0hJS0tYalZIagpUaDV4K2EzWkpucTduRDJNUVRZeWcxYmNERUhQMW0vNGdrQnRJS0ZvOFgvcjdEM1c2OHFRNEZJTlZjRnVOZk9vCmFmc3RNQkF4ZnJQaGVyYlRVdUtJTTRvOUxzT3lvN1lHcTBFTTcxWmRYZDg5QVlSVUxibElxRGtDZ1lFQXhzQ1oKN210TG5wZzhaZlJZeWRhVEh4dmtBWTFWNVI3S2JmQ0pmMDhhb0c2N1c0UWF3TXVNZHVrZ3M1QmVwTUNidkRVTQovNmJnQUVncjFENnJTTDVlL3BiYjZkQ0diYVRhSVFsZmdqNlZpakpDZzNLazBTZEw2L2Q1UEdsL3N3ZHB5azljCmMyL1NzaHdoUFpjK1Evd3FqNGw5MGRkblFBWTBBWmhVWExSNHhGVUNnWUVBdGZncDdVU0xaMy9iYktMOGE1SUsKWE5rek1EbldoRU5YQzczNkU3VUVxYUwvQUdKK3BkMDE4RC9taGVsK3c1MEFvUnllUVAzQkJCUWtjS1l3ZVZ6bgoxWW9XUW5nMllVNXd6R3pEb2VVT1lwd1VtNkVNYU1nanVUYjY3cktJLzNyQU43N2hGdVhZRmJlR3pOYVhGc29sCnV3aVFPV2o5V2RDSm5aL1dBd3VPRE5rQ2dZQTdNUlVtK25GMDlDWFl2MkxLQ2N1YkVqVmZlUFpCM0YreFNsZkkKd0loUGkycmxJSHpQT2svRkFqMG8vVEFTcFFJOGxSZ2Y4MVQzQUlkOUdJVHVqek8vWXJKditoaHZBdytya3gwTQpyeExlSzRXL25COFY0endyTkhLNDJUcWMyUEphdkRQdWRUa3NybEFBQmRFWGNqeENyMUgzY3MxZk5mbTdGK0RZCkV5OThXUUtCZ0JycnFMZjc4WE91SWN5K0RnYWkwQ3FKdmg3RU9pMFBPN3BDWGtvUHFQM2Q4VlUrcFFyV0QyTG8KQndOaVh6dzVUQ0Y5UXQ3blJxUW1BRXJjdmJBcHg4UEk5TE9HZ3hYbmpPTHVNWmRWQ2h6Mjg3T0tEbitTL0JhLwppQVJjZ0R6YXlTUllUUCs5Z3lBQm4rNGV0TS9PWFFVYWw0RisyYU5wOG1OV0d2TVhYdCsxCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg3.设置上下文参数
[root@master01 work]# kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Context "system:kube-controller-manager" created.

4. Set the current context
[root@master01 work]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Switched to context "system:kube-controller-manager".
[root@master01 work]# cat kube-controller-manager.kubeconfig
apiVersion: v1
clusters:
- cluster:certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVUkV4RXhidjQ1ZlVBY1hCQlpENGZEZnRqZmtNd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkxTURnMU5EQXdXaGNOTXpJeE1ESXlNRGcxTkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxVV2lYcGZPMFZ6dTBrek8xYllaVUo2dFVnTHp2TDUKTlJwWnpoS2pNZ3pDbmZYM0pHWUM0SWpFdGoxRWNhUGdFOUErbkZ1UndCWlk0a0U3NkE1RjdhaS8rNzdiak5tMApJcWdzY3o2VmdzM0JuMEZSSXVmMlFkQm9OUmNlWWhXL011Z0ZWUlZRSUVQTjRuZGxCVUFScHpNbDk4clpxSVU5CkxTMkZ1WWRwYTRRSi8weVp3YWlYYllxazFDUk5JUkRBMUZJRDh6VEpHeXhRdmZoZ05wRGJTQkhQVlhqK3oxZW4KTU5XQzVDMGt1QWcya3IzcExHUkg3T2dMR1dHV0RuRXVwMUZKMUJONDl2V3IwTW8zQWd6TE1TcTdaRnRGYS9KYgpvN3lOWmZDZnd5NTZQTi9Rckc5eSs0aFBONHVRdG16em1YbUJYT2JJTFF2SXhEN2lKYTVWb0I4Q0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRk4rdFZGbnBVTS9JcmMyc21qWkRYT3BlRE1VVE1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRApYT3BlRE1VVE1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjk3NXBmODY5THN6NkdDWkdxckNyWE1GMm00aENXCmFIRGx4UWQvK3NPQU5neEsxUnV1QWZ5d2lMU2pRTGFaYmxSTnZBMWQ1cGVndXBmcUdyQXljSjE1MkpyUGQ1aE8KNkpGMmlhMWxJMS80bno4OFE1STBZcEpJc0FSUDZBTSttb2RrWjFKM3V5UHFZbFVmSnBJU1pLSlhpQStERVJCaQo1eEhjZWMyWVlPTHJqK2xTMk90eUJ4K3ZpRnlMWFU0aGkrN08zRVRwNjFjRWVKUHE0YzcvWE95UlNnMWZudFFRCkN5SC81eHFBSStSYmhXMzA4TXdVWGh1M2QxcXFyRTBoUXpOVDVoOW4zSHN6V0F3dGthQVUzOHEzMEZqZFdwYXgKeFdDcURBdyt5UElMc2JzdjdBMnVnTUkxY1o0Zk9uWmkwK1MyRHFXdnZ4UEZ6enZza013SS9BNjQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQoserver: https://10.10.0.10:6443name: kubernetes
contexts:
- context:cluster: kubernetesuser: system:kube-controller-managername: system:kube-controller-manager
current-context: system:kube-controller-manager
kind: Config
preferences: {}
users:
- name: system:kube-controller-manageruser:client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVLakNDQXhLZ0F3SUJBZ0lVZm5MbWtpdDdmZEJuY2lmdE5pQ01FdjZ0V3NBd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkyTURZeU16QXdXaGNOTXpJeE1ESXpNRFl5TXpBd1dqQ0JrREVMTUFrR0ExVUUKQmhNQ1EwNHhEakFNQmdOVkJBZ1RCVWgxWW1WcE1RNHdEQVlEVlFRSEV3VlhkV2hoYmpFbk1DVUdBMVVFQ2hNZQpjM2x6ZEdWdE9tdDFZbVV0WTI5dWRISnZiR3hsY2kxdFlXNWhaMlZ5TVE4d0RRWURWUVFMRXdaemVYTjBaVzB4Ckp6QWxCZ05WQkFNVEhuTjVjM1JsYlRwcmRXSmxMV052Ym5SeWIyeHNaWEl0YldGdVlXZGxjakNDQVNJd0RRWUoKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS01RQWFjZElxUEZ3K29LckR1aTdQcmxnaThpT1hKMworTTBTZUNDaFlMZjd5dU9VWlRZV0wyZk81UmZMQ1RRODFCWXVQOUxrY0k4aFRIRHZMZkQ3ZmdWcmZIOVMyaUZ3CllHSGNCc2ZjNFZGVVg5YUlDU0VlM0RRY3NDWG55VnJEVkQzUVUvY3VZQVRuZEMxdFMvZU1pNHFEVFVKTU1maFUKQnR1L3NwUmY4bGpkOWVtdk1ZQzZkQ3dIT1FsZmF0WThLcHJxZjdYTUtKSWJRdnhsSEhlL3RaYW1RWGo0Si9tagpqNHRCYW1uckQrNEp2dXpDY3ZOaEhtc09PbXA5Mkkxd3lNNlNWQUpjM2ZPTFRrakFGZGQ3MnJRUHo4elhhK3NkClpzVWR3SXNZbGp5bW5NcUdaWlZuU1dYL2RxQ2VxQzRxcnZybmo4dmN2NlE4OVVBTjNSU2pmdTBDQXdFQUFhT0IKcVRDQnBqQU9CZ05WSFE4QkFmOEVCQU1DQmFBd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUdDQ3NHQVFVRgpCd01DTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEVlIwT0JCWUVGTktGSjhqQXIvanQyUmN4MWNNbStFNUJVRTNsCk1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRFhPcGVETVVUTUNjR0ExVWRFUVFnTUI2SEJIOEEKQUFHSEJBb0tBQXFIQkFvS0FBdUhCQW9LQUF5SEJBb0tBR1F3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUFkcQo4cE4rOXdKdUJFRVpDT2FEN2MrZ0tMK3ZtL2lJTzdQL3ZlbFgrNzd0cVgrTTBqaUwwa2IvTmczRWNuVUhieDVECi9haUJLZU1rRVltUG5rc2hHWng1eGhmN0oraFlkOWsrNm5wOGRmUk80bHlpeENCMWNYODBoMFhZbjBmZ2N0eE8Kd0RCaVhNcUI2OEo4WnkvYVBoSVY0OGpCaUZZZlJMcUpxZWZEMEx2NFh2RVpISkt6bVpZdnlWV3dYRWFzRFI0bQo4cGRSWEtOakRXTXkyTHI1WGNRLzNKaFRCZ2g4cEV4czBFaUQwTUxDUmNwaGdjK3p0UndXZmR0T2FQR0tjNUpDCis3aEVjQmZJZ1MwNTE5bmt5Yjg0dEJpWmRhK21PQVB4RTNqVTRLK01hRWxZK3pFNHR6a1pMOGJkbEV0Vm9Jb2YKSVF2bEl6akJxSzE4VG9tUmNmQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQoclient-key-data: 
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBb3hBQnB4MGlvOFhENmdxc082THMrdVdDTHlJNWNuZjR6Uko0SUtGZ3Qvdks0NVJsCk5oWXZaODdsRjhzSk5EelVGaTQvMHVSd2p5Rk1jTzh0OFB0K0JXdDhmMUxhSVhCZ1lkd0d4OXpoVVZSZjFvZ0oKSVI3Y05CeXdKZWZKV3NOVVBkQlQ5eTVnQk9kMExXMUw5NHlMaW9OTlFrd3grRlFHMjcreWxGL3lXTjMxNmE4eApnTHAwTEFjNUNWOXExandxbXVwL3Rjd29raHRDL0dVY2Q3KzFscVpCZVBnbithT1BpMEZxYWVzUDdnbSs3TUp5CjgyRWVhdzQ2YW4zWWpYREl6cEpVQWx6ZDg0dE9TTUFWMTN2YXRBL1B6TmRyNngxbXhSM0FpeGlXUEthY3lvWmwKbFdkSlpmOTJvSjZvTGlxdSt1ZVB5OXkvcER6MVFBM2RGS04rN1FJREFRQUJBb0lCQUc4WFU1anZ2NDdHQ0lCbAp2d3R1SjNlVGJ3ci9qUlhRYUlBR0tqTkkzcVRaOVZMdzRiZGtpKzEwUmgzY3BLdWpHWGIzRVdKelljQVJsb3VHClY4MUsrWU5seEU3V09tZjNzS0piRFgrU215c1dpYWlWeTJwMkpOMllBZVlCTU93V0VVbC9xZ1RING9EVTB4Q3oKMnNLUFRPNFVJRW1mc1plV1g0bk00elEwM2QzdVc4Qi90R3BtcHhvNk1Da2FPNW9yVVRyUFYvUHpTdnA5R3l1VwpzUnE3b3J0QjdNZnRjTXlWdUUzWWVXYlJYbWRFSlk3cWpBWW9qVXVjdEJzNnVlSHhUbGRKNWNWdzQ2aXBPRzE4CnhBZE5xLzNNRVl2b21TbzhJQXJUWTBIc0o4djlXQ0VhUnF5ZVAwTHB5eDdIamxDb0ZhRnhTUTdxTCt0VjlPUnEKZmxIVnlXRUNnWUVBMGdlK3h0amtMOXdFVkpkZHBrektFWGJTc09TaHhya0V6blhNY0ZTNThkQ0hJS0tYalZIagpUaDV4K2EzWkpucTduRDJNUVRZeWcxYmNERUhQMW0vNGdrQnRJS0ZvOFgvcjdEM1c2OHFRNEZJTlZjRnVOZk9vCmFmc3RNQkF4ZnJQaGVyYlRVdUtJTTRvOUxzT3lvN1lHcTBFTTcxWmRYZDg5QVlSVUxibElxRGtDZ1lFQXhzQ1oKN210TG5wZzhaZlJZeWRhVEh4dmtBWTFWNVI3S2JmQ0pmMDhhb0c2N1c0UWF3TXVNZHVrZ3M1QmVwTUNidkRVTQovNmJnQUVncjFENnJTTDVlL3BiYjZkQ0diYVRhSVFsZmdqNlZpakpDZzNLazBTZEw2L2Q1UEdsL3N3ZHB5azljCmMyL1NzaHdoUFpjK1Evd3FqNGw5MGRkblFBWTBBWmhVWExSNHhGVUNnWUVBdGZncDdVU0xaMy9iYktMOGE1SUsKWE5rek1EbldoRU5YQzczNkU3VUVxYUwvQUdKK3BkMDE4RC9taGVsK3c1MEFvUnllUVAzQkJCUWtjS1l3ZVZ6bgoxWW9XUW5nMllVNXd6R3pEb2VVT1lwd1VtNkVNYU1nanVUYjY3cktJLzNyQU43N2hGdVhZRmJlR3pOYVhGc29sCnV3aVFPV2o5V2RDSm5aL1dBd3VPRE5rQ2dZQTdNUlVtK25GMDlDWFl2MkxLQ2N1YkVqVmZlUFpCM0YreFNsZkkKd0loUGkycmxJSHpQT2svRkFqMG8vVEFTcFFJOGxSZ2Y4MVQzQUlkOUdJVHVqek8vWXJKditoaHZBdytya3gwTQpyeExlSzRXL25COFY0endyTkhLNDJUcWMyUEphdkRQdWRUa3NybEFBQmRFWGNqeENyMUgzY3MxZk5mbTdGK0RZCkV5OThXUUtCZ0JycnFMZjc4WE91SWN5K0RnYWkwQ3FKdmg3RU9pMFBPN3BDWGtvUHFQM2Q4VlUrcFFyV0QyTG8KQndOaVh6dzVUQ0Y5UXQ3blJxUW1BRXJjdmJBcHg4UEk5TE9HZ3hYbmpPTHVNWmRWQ2h6Mjg3T0tEbitTL0JhLwppQVJjZ0R6YXlTUllUUCs5Z3lBQm4rNGV0TS9PWFFVYWw0RisyYU5wOG1OV0d2TVhYdCsxCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg#创建配置文件kube-controller-manager.conf
[root@master01 work]# cat kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \
  --secure-port=10252 \
  --bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.255.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.0.0.0/16 \
  --experimental-cluster-signing-duration=87600h \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"
(Binding to 127.0.0.1 follows the official recommendation; the secure port is then not reachable from outside.)

#Create the startup file
[root@master01 work]# cat kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

#Start the service
[rootmaster01 work ]#cp kube-controller-manager*.pem /etc/kubernetes/ssl/
[rootmaster01 work ]#cp kube-controller-manager.kubeconfig /etc/kubernetes/
[rootmaster01 work ]#cp kube-controller-manager.conf /etc/kubernetes/
[rootmaster01 work ]#cp kube-controller-manager.service /usr/lib/systemd/system/
[rootmaster01 work ]#
[rootmaster01 work ]#rsync -vaz kube-controller-manager*.pem master02:/etc/kubernetes/ssl/
sending incremental file list
kube-controller-manager-key.pem
kube-controller-manager.pemsent 2,498 bytes received 54 bytes 5,104.00 bytes/sec
total size is 3,180 speedup is 1.25
[rootmaster01 work ]#rsync -vaz kube-controller-manager*.pem master03:/etc/kubernetes/ssl/
sending incremental file list
kube-controller-manager-key.pem
kube-controller-manager.pemsent 2,498 bytes received 54 bytes 1,701.33 bytes/sec
total size is 3,180 speedup is 1.25
[rootmaster01 work ]#rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf master02:/etc/kubernetes/
sending incremental file list
kube-controller-manager.conf
kube-controller-manager.kubeconfigsent 4,869 bytes received 54 bytes 9,846.00 bytes/sec
total size is 7,575 speedup is 1.54
[rootmaster01 work ]#rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf master03:/etc/kubernetes/
sending incremental file list
kube-controller-manager.conf
kube-controller-manager.kubeconfigsent 4,869 bytes received 54 bytes 3,282.00 bytes/sec
total size is 7,575 speedup is 1.54
[rootmaster01 work ]#rsync -vaz kube-controller-manager.service master02:/usr/lib/systemd/system/
sending incremental file list
kube-controller-manager.servicesent 328 bytes received 35 bytes 726.00 bytes/sec
total size is 325 speedup is 0.90
[rootmaster01 work ]#rsync -vaz kube-controller-manager.service master03:/usr/lib/systemd/system/
sending incremental file list
kube-controller-manager.servicesent 328 bytes received 35 bytes 242.00 bytes/sec
total size is 325 speedup is 0.90
[root@master01 work]# systemctl daemon-reload
[rootmaster01 work ]#
[rootmaster01 work ]#systemctl enable kube-controller-manager.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master02 ~]# systemctl daemon-reload
[rootmaster02 ~ ]#systemctl enable kube-controller-manager.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master03 kubernetes]# systemctl daemon-reload
[rootmaster03 kubernetes ]#systemctl enable kube-controller-manager.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
The port being open indicates the service is working:
[root@master02 kubernetes]# ss -lanptu | grep 10252
tcp LISTEN 0 16384 127.0.0.1:10252 *:* users:(("kube-controller",pid=61345,fd=8))
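Besides checking the listening port, the service state and the health endpoint can be checked as well (a sketch; /healthz on the secure port is normally on the always-allow list, so the curl should answer without authentication):
systemctl status kube-controller-manager.service --no-pager | head -n 5
curl -sk https://127.0.0.1:10252/healthz ; echo     # expect: ok
journalctl -u kube-controller-manager --since "5 min ago" | tail -n 5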
5.5 部署kube-scheduler组件
#Create the CSR request
[root@master01 work]# cat kube-scheduler-csr.json
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "10.10.0.10",
    "10.10.0.11",
    "10.10.0.12",
    "10.10.0.100"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "system:kube-scheduler",
      "OU": "system"
    }
  ]
}
Note: the hosts list contains all kube-scheduler node IPs. CN and O are both system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs to work.
#Generate the certificate
[root@master01 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
2022/10/26 14:45:07 [INFO] generate received request
2022/10/26 14:45:07 [INFO] received CSR
2022/10/26 14:45:07 [INFO] generating key: rsa-2048
2022/10/26 14:45:07 [INFO] encoded CSR
2022/10/26 14:45:07 [INFO] signed certificate with serial number 635930628159443813223981479189831176983910270655
2022/10/26 14:45:07 [WARNING] This certificate lacks a hosts field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 (Information Requirements).

#Create the kubeconfig for kube-scheduler
1. Set the cluster parameters
[root@master01 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.10.0.10:6443 --kubeconfig=kube-scheduler.kubeconfig
Cluster "kubernetes" set.

2. Set the client authentication parameters (the user system:kube-scheduler is the CN set in kube-scheduler-csr.json)
[root@master01 work]# kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
User "system:kube-scheduler" set.

3. Set the context parameters
[root@master01 work]# kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Context "system:kube-scheduler" created.

4. Set the current context
[root@master01 work]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Switched to context "system:kube-scheduler".

#Create the configuration file kube-scheduler.conf
[root@master01 work]# cat kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"

#Create the service startup file
[root@master01 work]# cat kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

#Start the service
[rootmaster01 work ]#cp kube-scheduler*.pem /etc/kubernetes/ssl/
[rootmaster01 work ]#cp kube-scheduler.kubeconfig /etc/kubernetes/
[rootmaster01 work ]#cp kube-scheduler.conf /etc/kubernetes/
[rootmaster01 work ]#cp kube-scheduler.service /usr/lib/systemd/system/
[rootmaster01 work ]#rsync -vaz kube-scheduler*.pem master02:/etc/kubernetes/ssl/
sending incremental file list
kube-scheduler-key.pem
kube-scheduler.pemsent 2,531 bytes received 54 bytes 1,723.33 bytes/sec
total size is 3,159 speedup is 1.22
[rootmaster01 work ]#rsync -vaz kube-scheduler*.pem master03:/etc/kubernetes/ssl/
sending incremental file list
kube-scheduler-key.pem
kube-scheduler.pemsent 2,531 bytes received 54 bytes 1,723.33 bytes/sec
total size is 3,159 speedup is 1.22
[rootmaster01 work ]#rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf master02:/etc/kubernetes/
sending incremental file list
kube-scheduler.conf
kube-scheduler.kubeconfigsent 4,495 bytes received 54 bytes 9,098.00 bytes/sec
total size is 6,617 speedup is 1.45
[rootmaster01 work ]#rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf master03:/etc/kubernetes/
sending incremental file list
kube-scheduler.conf
kube-scheduler.kubeconfigsent 4,495 bytes received 54 bytes 9,098.00 bytes/sec
total size is 6,617 speedup is 1.45
[rootmaster01 work ]#rsync -vaz kube-scheduler.service master02:/usr/lib/systemd/system/
sending incremental file list
kube-scheduler.servicesent 301 bytes received 35 bytes 672.00 bytes/sec
total size is 293 speedup is 0.87
[rootmaster01 work ]#rsync -vaz kube-scheduler.service master03:/usr/lib/systemd/system/
sending incremental file list
kube-scheduler.servicesent 301 bytes received 35 bytes 224.00 bytes/sec
total size is 293 speedup is 0.87
[root@master01 work]# systemctl daemon-reload
[root@master01 work]# systemctl enable kube-scheduler.service --now
[root@master02 kubernetes]# systemctl daemon-reload
[root@master02 kubernetes]# systemctl enable kube-scheduler.service --now
[root@master03 kubernetes]# systemctl daemon-reload
[root@master03 kubernetes]# systemctl enable kube-scheduler.service --now
The port being open indicates the service is working:
[root@master01 kubernetes]# ss -lanptu | grep 10251
tcp LISTEN 0 16384 127.0.0.1:10251 *:* users:(("kube-scheduler",pid=823,fd=8))
A kubeadm installation also binds it to 127.0.0.1.
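A quick health check against port 10251, which --address=127.0.0.1 exposes locally (optional sketch, run on any master):
curl -s http://127.0.0.1:10251/healthz ; echo       # expect: ok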
5.6 导入离线镜像压缩包
#把pause-cordns.tar.gz上传到node01节点手动解压
[rootnode01 ~ ]#docker load -i pause-cordns.tar.gz
225df95e717c: Loading layer [] 336.4kB/336.4kB
96d17b0b58a7: Loading layer [] 45.02MB/45.02MB
Loaded image: k8s.gcr.io/coredns:1.7.0
ba0dae6243cc: Loading layer [] 684.5kB/684.5kB
Loaded image: k8s.gcr.io/pause:3.2
[rootnode01 ~ ]#
[rootnode01 ~ ]#
[rootnode01 ~ ]#docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/coredns 1.7.0 bfe3a36ebd25 2 years ago 45.2MB
k8s.gcr.io/pause 3.2 80d28bedfe5d 2 years ago 683kB

5.7 部署kubelet组件
kubelet: the kubelet on every node periodically calls the API server's REST interface to report its own status; the API server then writes the node status into etcd. The kubelet also watches Pod information through the API server and manages the Pods on its node (create, delete, update).
Control-plane nodes are not allowed to schedule pods; pods are scheduled onto the worker nodes, so kubelet only needs to be deployed and started on the worker nodes.
Perform the following steps on master01.
Create kubelet-bootstrap.kubeconfig
[rootmaster01 kubernetes ]#cd /data/work/
[root@master01 work]# BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
[root@master01 work]# rm -r kubelet-bootstrap.kubeconfig
rm: cannot remove 'kubelet-bootstrap.kubeconfig': No such file or directory
1. Set the cluster parameters
[root@master01 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.10.0.10:6443 --kubeconfig=kubelet-bootstrap.kubeconfig
Cluster "kubernetes" set.
2. Set the client authentication parameters
[root@master01 work]# kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
User "kubelet-bootstrap" set.
3. Set the context parameters
[root@master01 work]# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
Context "default" created.
4. Set the current context
[root@master01 work]# kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
Switched to context "default".
[rootmaster01 work ]#cat kubelet-bootstrap.kubeconfig
apiVersion: v1
clusters:
- cluster:certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVUkV4RXhidjQ1ZlVBY1hCQlpENGZEZnRqZmtNd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXhNREkxTURnMU5EQXdXaGNOTXpJeE1ESXlNRGcxTkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxVV2lYcGZPMFZ6dTBrek8xYllaVUo2dFVnTHp2TDUKTlJwWnpoS2pNZ3pDbmZYM0pHWUM0SWpFdGoxRWNhUGdFOUErbkZ1UndCWlk0a0U3NkE1RjdhaS8rNzdiak5tMApJcWdzY3o2VmdzM0JuMEZSSXVmMlFkQm9OUmNlWWhXL011Z0ZWUlZRSUVQTjRuZGxCVUFScHpNbDk4clpxSVU5CkxTMkZ1WWRwYTRRSi8weVp3YWlYYllxazFDUk5JUkRBMUZJRDh6VEpHeXhRdmZoZ05wRGJTQkhQVlhqK3oxZW4KTU5XQzVDMGt1QWcya3IzcExHUkg3T2dMR1dHV0RuRXVwMUZKMUJONDl2V3IwTW8zQWd6TE1TcTdaRnRGYS9KYgpvN3lOWmZDZnd5NTZQTi9Rckc5eSs0aFBONHVRdG16em1YbUJYT2JJTFF2SXhEN2lKYTVWb0I4Q0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRk4rdFZGbnBVTS9JcmMyc21qWkRYT3BlRE1VVE1COEdBMVVkSXdRWU1CYUFGTit0VkZucFVNL0lyYzJzbWpaRApYT3BlRE1VVE1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQjk3NXBmODY5THN6NkdDWkdxckNyWE1GMm00aENXCmFIRGx4UWQvK3NPQU5neEsxUnV1QWZ5d2lMU2pRTGFaYmxSTnZBMWQ1cGVndXBmcUdyQXljSjE1MkpyUGQ1aE8KNkpGMmlhMWxJMS80bno4OFE1STBZcEpJc0FSUDZBTSttb2RrWjFKM3V5UHFZbFVmSnBJU1pLSlhpQStERVJCaQo1eEhjZWMyWVlPTHJqK2xTMk90eUJ4K3ZpRnlMWFU0aGkrN08zRVRwNjFjRWVKUHE0YzcvWE95UlNnMWZudFFRCkN5SC81eHFBSStSYmhXMzA4TXdVWGh1M2QxcXFyRTBoUXpOVDVoOW4zSHN6V0F3dGthQVUzOHEzMEZqZFdwYXgKeFdDcURBdyt5UElMc2JzdjdBMnVnTUkxY1o0Zk9uWmkwK1MyRHFXdnZ4UEZ6enZza013SS9BNjQKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQoserver: https://10.10.0.10:6443name: kubernetes
contexts:
- context:cluster: kubernetesuser: kubelet-bootstrapname: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user: {}

#Authorize
[root@master01 work]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created

#Create the configuration file kubelet.json
"cgroupDriver": "systemd" must match the driver docker is actually using; replace address with your own node01 IP address.
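A quick way to confirm what docker actually reports on node01 before filling in kubelet.json (just a check; nothing needs to change if it already prints systemd):
docker info 2>/dev/null | grep -i 'cgroup driver'   # should print: Cgroup Driver: systemd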
[root@master01 work]# cat kubelet.json
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "10.10.0.14",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.255.0.2"]
}
[root@master01 work]# cat kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet.json \
  --network-plugin=cni \
  --pod-infra-container-image=k8s.gcr.io/pause:3.2 \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

#Notes:
--hostname-override: display name, must be unique within the cluster
--network-plugin: enable CNI
--kubeconfig: the file does not exist yet; it is generated automatically on the first start (via bootstrapping) and is used to connect to the apiserver afterwards
--bootstrap-kubeconfig: used only on the first start to request a certificate from the apiserver
--config: the configuration parameter file
--cert-dir: the directory where kubelet certificates are generated
--pod-infra-container-image: the image for the pod infrastructure (pause) container
#Note: change address in kubelet.json to each node's own IP and start the service on every worker node.
[root@node01 ~]# mkdir /etc/kubernetes/ssl -p
[root@master01 work]# scp kubelet-bootstrap.kubeconfig kubelet.json node01:/etc/kubernetes/
kubelet-bootstrap.kubeconfig 100% 2107 3.7MB/s 00:00
kubelet.json
[root@master01 work]# scp ca.pem node01:/etc/kubernetes/ssl/
ca.pem
[root@master01 work]# scp kubelet.service node01:/usr/lib/systemd/system/
kubelet.service
#Start the kubelet service. It is best to remove /var/lib/kubelet first; leftovers from a previous installation will conflict.
[root@node01 ~]# mkdir /var/lib/kubelet
[root@node01 ~]# mkdir /var/log/kubernetes
[rootnode01 system ]#systemctl daemon-reload
[rootnode01 system ]#
[rootnode01 system ]#
[rootnode01 system ]#
[rootnode01 system ]#systemctl enable kubelet.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
After confirming that kubelet started successfully, go to master01 and approve the bootstrap request.
Running the following command on a master node shows that a worker node has submitted a CSR:
[rootmaster01 work ]#kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-auJz6SMLC0bu-hO2MXiibM-syMll0YQHZ1kAXyo3gaM 32s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
Approve the request:
[root@master01 work]# kubectl certificate approve node-csr-auJz6SMLC0bu-hO2MXiibM-syMll0YQHZ1kAXyo3gaM
certificatesigningrequest.certificates.k8s.io/node-csr-auJz6SMLC0bu-hO2MXiibM-syMll0YQHZ1kAXyo3gaM approved
[root@master01 work]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-auJz6SMLC0bu-hO2MXiibM-syMll0YQHZ1kAXyo3gaM 11m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Approved,Issued
[root@master01 work]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node01 NotReady none 34s v1.20.7
#Note: STATUS is NotReady because no network plugin has been installed yet.
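Optional: if more workers will join later and approving each CSR by hand becomes tedious, bootstrap CSRs can be auto-approved with the built-in cluster roles. This is only a sketch with hypothetical binding names, and a security trade-off the original procedure does not make:
# auto-approve client CSRs submitted by the kubelet-bootstrap user
kubectl create clusterrolebinding auto-approve-csrs-for-bootstrap \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
  --user=kubelet-bootstrap
# auto-approve renewal CSRs from nodes that have already joined
kubectl create clusterrolebinding auto-approve-renewals-for-nodes \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient \
  --group=system:nodes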
With a binary installation, kubectl get nodes does not show the control-plane nodes, only the worker nodes.
By default the masters do not run kubelet and therefore cannot have workloads scheduled onto them; this is how it is usually done in production. If you want the masters to be schedulable, deploy the kubelet-related services on them just like on a worker node, and they will then appear in kubectl get nodes as well.
The node has dynamically generated some certificates:
[rootnode01 ~ ]#cd /etc/kubernetes/
[rootnode01 kubernetes ]#ll
total 12
-rw------- 1 root root 2148 Oct 27 08:44 kubelet-bootstrap.kubeconfig
-rw-r--r-- 1 root root 800 Oct 27 08:44 kubelet.json
-rw------- 1 root root 2277 Oct 27 08:49 kubelet.kubeconfig
drwxr-xr-x 2 root root 138 Oct 27 08:49 ssl
[rootnode01 kubernetes ]#cd ssl/
[rootnode01 ssl ]#ll
total 16
-rw-r--r-- 1 root root 1346 Oct 27 08:44 ca.pem
-rw------- 1 root root 1212 Oct 27 08:49 kubelet-client-2022-10-27-08-49-42.pem
lrwxrwxrwx 1 root root 58 Oct 27 08:49 kubelet-client-current.pem - /etc/kubernetes/ssl/kubelet-client-2022-10-27-08-49-42.pem
-rw-r--r-- 1 root root 2237 Oct 27 08:47 kubelet.crt
-rw------- 1 root root 1675 Oct 27 08:47 kubelet.key
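The client certificate obtained through bootstrapping identifies the node; its subject and expiry can be checked like this (run on node01; assumes openssl is available):
openssl x509 -in /etc/kubernetes/ssl/kubelet-client-current.pem -noout -subject -enddate
# the subject is expected to contain O=system:nodes and CN=system:node:node01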
5.8 部署kube-proxy组件
[root@master01 work]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.255.0.1 none 443/TCP 22h
10.255.0.1 is not reachable from outside; it only exists in iptables/IPVS rules, and kube-proxy is what generates those rules.
#Create the CSR request
[root@master01 work]# cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
Generate the certificate:
[root@master01 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2022/10/27 08:52:02 [INFO] generate received request
2022/10/27 08:52:02 [INFO] received CSR
2022/10/27 08:52:02 [INFO] generating key: rsa-2048
2022/10/27 08:52:02 [INFO] encoded CSR
2022/10/27 08:52:02 [INFO] signed certificate with serial number 621761669559249301102184867130469483006128723356
2022/10/27 08:52:02 [WARNING] This certificate lacks a hosts field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 (Information Requirements).
[root@master01 work]# ll kube-proxy*
-rw-r--r-- 1 root root 1005 Oct 27 08:52 kube-proxy.csr
-rw-r--r-- 1 root root 212 Oct 25 15:39 kube-proxy-csr.json
-rw------- 1 root root 1679 Oct 27 08:52 kube-proxy-key.pem
-rw------- 1 root root 6238 Oct 27 08:54 kube-proxy.kubeconfig
-rw-r--r-- 1 root root 1391 Oct 27 08:52 kube-proxy.pem
-rw-r--r-- 1 root root 297 Oct 25 15:39 kube-proxy.yaml

#Create the kubeconfig file (the security context that tells kube-proxy which cluster to talk to)
Set the cluster parameters:
[root@master01 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.10.0.10:6443 --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.
Set the client authentication parameters:
[root@master01 work]# kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.
Set the context parameters:
[root@master01 work]# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
Context "default" created.
--user=kube-proxy is the CN user set in kube-proxy-csr.json.
If OU is set to system, the bare username can be used directly for --user; the effect is the same as adding the system: prefix.
Set the current context:
[root@master01 work]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Switched to context "default".
#Create the kube-proxy configuration file
[root@master01 work]# cat kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 10.10.0.14                  #node IP
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.10.0.0/24                #use the physical host network segment here
healthzBindAddress: 10.10.0.14:10256     #node IP
kind: KubeProxyConfiguration
metricsBindAddress: 10.10.0.14:10249     #node IP, metrics endpoint
mode: ipvs                               #use the IPVS forwarding mode

#Create the service startup file
[root@master01 work]# cat kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

[root@master01 work]# scp kube-proxy.kubeconfig kube-proxy.yaml node01:/etc/kubernetes/
kube-proxy.kubeconfig 100% 6238 5.9MB/s 00:00
kube-proxy.yaml 100% 282 418.8KB/s 00:00
[rootmaster01 work ]#scp kube-proxy.service node01:/usr/lib/systemd/system/
kube-proxy.service
#Start the service; create the working directory on the node first
[root@node01 kubernetes]# mkdir -p /var/lib/kube-proxy
[root@node01 kubernetes]# systemctl daemon-reload
[root@node01 kubernetes]# systemctl enable kube-proxy.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node01 kubernetes]# systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Kube-Proxy ServerLoaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)Active: active (running) since Thu 2022-10-27 09:39:33 CST; 1min 22s agoDocs: https://github.com/kubernetes/kubernetesMain PID: 12737 (kube-proxy)Tasks: 7Memory: 57.5MCGroup: /system.slice/kube-proxy.service└─12737 /usr/local/bin/kube-proxy --config/etc/kubernetes/kube-proxy.yaml --alsologtostderrtrue --logtostderrfalse --log-dir/var/log/ku...Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.136193 12737 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.136212 12737 config.go:315] Starting service config controller
Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.136214 12737 shared_informer.go:240] Waiting for caches to sync for service config
Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.136432 12737 reflector.go:219] Starting reflector *v1beta1.EndpointSlice (15m0s) f...ry.go:134
Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.136441 12737 reflector.go:219] Starting reflector *v1.Service (15m0s) from k8s.io/...ry.go:134
Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.172077 12737 service.go:275] Service default/kubernetes updated: 1 ports
Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.236331 12737 shared_informer.go:247] Caches are synced for service config
Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.236505 12737 shared_informer.go:247] Caches are synced for endpoint slice config
Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.236532 12737 proxier.go:1036] Not syncing ipvs rules until Services and Endpoints ...om master
Oct 27 09:39:35 node01 kube-proxy[12737]: I1027 09:39:35.236928 12737 service.go:390] Adding new service port default/kubernetes:https at...1:443/TCP
Hint: Some lines were ellipsized, use -l to show in full.
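Because mode: ipvs was configured, the Service rules should now show up as IPVS virtual servers on node01 (an optional check; ipvsadm was installed during initialization):
ipvsadm -Ln | grep -A 3 '10.255.0.1:443'
# expect a TCP virtual server for the kubernetes Service forwarding to the apiserver endpoint 10.10.0.10:6443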
5.9 部署calico组件
#Unpack the offline image archive
#Upload calico.tar.gz to the node01 node and load it manually
[rootnode01 ~ ]#docker load -i calico.tar.gz
#Upload the calico.yaml file to the /data/work directory on master01
Installing the Calico component
Calico: https://www.projectcalico.org/ and https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises
curl https://docs.projectcalico.org/manifests/calico.yaml -O        (older version)
Click Manifest to download it, or:
curl https://projectcalico.docs.tigera.io/manifests/calico.yaml -O  (newer version)
Click requirements to see which Kubernetes versions Calico supports.
Edit the following place in calico.yaml: uncomment it and change it to the podSubnet segment you planned:
  - name: CALICO_IPV4POOL_CIDR
    value: "10.0.0.0/16"
For a binary installation this must be changed, otherwise the generated Pod IPs default to the 192.168.0.0/16 segment.
With the default left in place:
[root@master01 work]# vim calico.yaml
  # - name: CALICO_IPV4POOL_CIDR
  #   value: "192.168.0.0/16"
[root@master01 ~]# kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 55s 192.168.196.130 node01
After the change:
[root@master01 work]# vim calico.yaml
  - name: CALICO_IPV4POOL_CIDR
    value: "10.0.0.0/16"
[root@master01 work]# kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 2m9s 10.0.196.130 node01
[root@master01 work]# kubectl apply -f calico.yaml
[root@master01 work]# kubectl get pods -n kube-system -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-6949477b58-dpwmn 1/1 Running 0 24s 10.0.196.129 node01 none none
calico-node-rmkbm 1/1 Running 0 24s 10.10.0.14 node01 none none
calico-node-rmkbm assigns IPs; calico-kube-controllers-6949477b58-dpwmn handles network policy.
[rootmaster01 work ]#kubectl get nodes
NAME STATUS ROLES AGE VERSION
node01 Ready none 62m v1.20.7

Error analysis
[rootmaster03 kubernetes ]#kubectl get pods -n kube-system -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-6949477b58-cr5lh 1/1 Running 0 5h44m 10.0.196.129 node01 none none
calico-node-49kj4 1/1 Running 6 35m 10.10.0.11 master02 none none
calico-node-bgv4n 1/1 Running 5 35m 10.10.0.10 master01 none none
calico-node-bjsmm 1/1 Running 0 5h44m 10.10.0.14 node01 none none
calico-node-t2ljz 0/1 Running 1 8m46s 10.10.0.12 master03 none none
coredns-7bf4bd64bd-572z9 1/1 Running 0 5h31m 10.0.196.131 node01 none none
The calico component on master03 is not ready.
The Calico network component of the k8s cluster reports a BIRD error:
calico/node is not ready: BIRD is not ready: BGP not established with
Warning Unhealthy 8s kubelet Readiness probe failed: 2022-10-27 08:00:12.256 [INFO][374] confd/health.go 180: Number of node(s) with BGP peering established = 0 calico/node is not ready: BIRD is not ready: BGP not established with 10.10.0.10,10.10.0.11,10.10.0.14
The cause of this error is a conflicting network interface on the node; after deleting the problematic interface, the Calico pod starts normally.
The abnormal interfaces are all named with the br prefix.
[rootmaster03 kubernetes ]#ip link
1: lo: LOOPBACK,UP,LOWER_UP mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000link/ether 00:0c:29:77:30:7c brd ff:ff:ff:ff:ff:ff
3: tunl0NONE: NOARP,UP,LOWER_UP mtu 1480 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000link/ipip 0.0.0.0 brd 0.0.0.0
4: docker0: NO-CARRIER,BROADCAST,MULTICAST,UP mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default link/ether 02:42:c2:48:1f:bc brd ff:ff:ff:ff:ff:ff
5: br-8763acf3655c: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc noqueue state UP mode DEFAULT group default link/ether 02:42:da:e8:e8:55 brd ff:ff:ff:ff:ff:ff
This is the one: br-8763acf3655c.
Delete the interface whose name starts with br:
[root@master03 kubernetes]# ip link delete br-8763acf3655c
Recreate the calico pod; it now succeeds:
[rootmaster03 kubernetes ]#kubectl get pods -n kube-system -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-6949477b58-5rqgc 1/1 Running 2 20m 10.0.241.64 master01 none none
calico-node-g48ng 1/1 Running 0 7m53s 10.10.0.12 master03 none none
calico-node-nb4g4 1/1 Running 0 20m 10.10.0.11 master02 none none
calico-node-ppcqd 1/1 Running 0 20m 10.10.0.10 master01 none none
calico-node-tsrd4 1/1 Running 0 20m 10.10.0.14 node01 none none
coredns-7bf4bd64bd-572z9 1/1 Running 0 5h57m 10.0.196.131 node01 none none
Alternatively, the real cause may be that Calico did not detect the correct physical interface. Solution:
/* Adjust the interface-detection mechanism of the Calico network plugin by changing the value of IP_AUTODETECTION_METHOD.
   In the official yaml the IP detection policy is not configured, so it defaults to first-found, which can register an IP from a broken network as the nodeIP and break the node-to-node mesh.
   Change it to the can-reach or interface policy, which tries to reach the IP of a Ready node, so that a correct IP is selected. */
// Add the following two lines to calico.yaml:
            - name: IP_AUTODETECTION_METHOD
              value: "interface=ens.*"        # set the prefix according to your actual NIC name
// The configuration then looks like this:
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            - name: IP_AUTODETECTION_METHOD
              value: "interface=ens.*"        # or value: "interface=ens160"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
5.10 部署coredns组件
coreDNS is the DNS server that resolves names to IPs; for every Service it creates an FQDN of the form svcname.namespace.svc.cluster.local.
The image is the one already loaded from pause-cordns.tar.gz.
[rootmaster01 work ]#cat coredns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:name: corednsnamespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:labels:kubernetes.io/bootstrapping: rbac-defaultsname: system:coredns
rules:- apiGroups:- resources:- endpoints- services- pods- namespacesverbs:- list- watch- apiGroups:- discovery.k8s.ioresources:- endpointslicesverbs:- list- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:annotations:rbac.authorization.kubernetes.io/autoupdate: truelabels:kubernetes.io/bootstrapping: rbac-defaultsname: system:coredns
roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: system:coredns
subjects:
- kind: ServiceAccountname: corednsnamespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:name: corednsnamespace: kube-system
data:Corefile: |.:53 {errorshealth {lameduck 5s}readykubernetes cluster.local in-addr.arpa ip6.arpa {fallthrough in-addr.arpa ip6.arpa}prometheus :9153forward . /etc/resolv.conf {max_concurrent 1000}cache 30loopreloadloadbalance}
---
apiVersion: apps/v1
kind: Deployment
metadata:name: corednsnamespace: kube-systemlabels:k8s-app: kube-dnskubernetes.io/name: CoreDNS
spec:# replicas: not specified here:# 1. Default is 1.# 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.strategy:type: RollingUpdaterollingUpdate:maxUnavailable: 1selector:matchLabels:k8s-app: kube-dnstemplate:metadata:labels:k8s-app: kube-dnsspec:priorityClassName: system-cluster-criticalserviceAccountName: corednstolerations:- key: CriticalAddonsOnlyoperator: ExistsnodeSelector:kubernetes.io/os: linuxaffinity:podAntiAffinity:preferredDuringSchedulingIgnoredDuringExecution:- weight: 100podAffinityTerm:labelSelector:matchExpressions:- key: k8s-appoperator: Invalues: [kube-dns]topologyKey: kubernetes.io/hostnamecontainers:- name: corednsimage: coredns/coredns:1.7.0imagePullPolicy: IfNotPresentresources:limits:memory: 170Mirequests:cpu: 100mmemory: 70Miargs: [ -conf, /etc/coredns/Corefile ]volumeMounts:- name: config-volumemountPath: /etc/corednsreadOnly: trueports:- containerPort: 53name: dnsprotocol: UDP- containerPort: 53name: dns-tcpprotocol: TCP- containerPort: 9153name: metricsprotocol: TCPsecurityContext:allowPrivilegeEscalation: falsecapabilities:add:- NET_BIND_SERVICEdrop:- allreadOnlyRootFilesystem: truelivenessProbe:httpGet:path: /healthport: 8080scheme: HTTPinitialDelaySeconds: 60timeoutSeconds: 5successThreshold: 1failureThreshold: 5readinessProbe:httpGet:path: /readyport: 8181scheme: HTTPdnsPolicy: Defaultvolumes:- name: config-volumeconfigMap:name: corednsitems:- key: Corefilepath: Corefile
---
apiVersion: v1
kind: Service
metadata:name: kube-dnsnamespace: kube-systemannotations:prometheus.io/port: 9153prometheus.io/scrape: truelabels:k8s-app: kube-dnskubernetes.io/cluster-service: truekubernetes.io/name: CoreDNS
spec:selector:k8s-app: kube-dnsclusterIP: 10.255.0.2ports:- name: dnsport: 53protocol: UDP- name: dns-tcpport: 53protocol: TCP- name: metricsport: 9153protocol: TCP[rootmaster01 work ]#kubectl apply -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
[root@master01 work]# kubectl get pods -n kube-system -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-6949477b58-cr5lh 1/1 Running 0 13m 10.0.196.129 node01 none none
calico-node-bjsmm 1/1 Running 0 13m 10.10.0.14 node01 none none
coredns-7bf4bd64bd-572z9 1/1 Running 0 16s 10.0.196.131 node01 none none
[root@master01 work]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.255.0.2 none 53/UDP,53/TCP,9153/TCP 2m6s
Both calico and coredns run as pods.
4.查看集群状态
[rootmaster01 work ]#kubectl get nodes
NAME STATUS ROLES AGE VERSION
node01 Ready none 102m v1.20.7

5.测试k8s集群部署tomcat服务
#Upload tomcat.tar.gz and busybox-1-28.tar.gz to node01 and load them manually
[rootnode01 ~ ]#docker load -i busybox-1-28.tar.gz
432b65032b94: Loading layer [] 1.36MB/1.36MB
Loaded image: busybox:1.28
[rootnode01 ~ ]#docker load -i tomcat.tar.gz
f1b5933fe4b5: Loading layer [] 5.796MB/5.796MB
9b9b7f3d56a0: Loading layer [] 3.584kB/3.584kB
edd61588d126: Loading layer [] 80.28MB/80.28MB
48988bb7b861: Loading layer [] 2.56kB/2.56kB
8e0feedfd296: Loading layer [] 24.06MB/24.06MB
aac21c2169ae: Loading layer [] 2.048kB/2.048kB
Loaded image: tomcat:8.5-jre8-alpine
Run the following on master01:
[root@master01 work]# cat tomcat.yaml
apiVersion: v1              #Pod belongs to the v1 core group
kind: Pod                   #the resource being created is a Pod
metadata:                   #metadata
  name: demo-pod            #pod name
  namespace: default        #namespace the pod belongs to
  labels:
    app: myapp              #label carried by the pod
    env: dev                #label carried by the pod
spec:
  containers:               #define the containers; this is a list, so several entries are allowed
  - name: tomcat-pod-java   #container name
    ports:
    - containerPort: 8080
    image: tomcat:8.5-jre8-alpine   #image used by the container
    imagePullPolicy: IfNotPresent
  - name: busybox
    image: busybox:latest
    command:                #command is a list; each element starts with a dash
    - /bin/sh
    - -c
    - sleep 3600
[root@master01 work]# kubectl apply -f tomcat.yaml
pod/demo-pod created
[rootmaster01 work ]#kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demo-pod 2/2 Running 0 17s 10.0.196.132 node01 none none
Create a Service to expose the pod:
[root@master01 work]# cat tomcat-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30080
  selector:
    app: myapp
    env: dev
[root@master01 work]# kubectl apply -f tomcat-service.yaml
service/tomcat created
[rootmaster01 work ]#kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.255.0.1 none 443/TCP 23h
tomcat NodePort 10.255.199.247 none 8080:30080/TCP 12s
Open http://<node01 IP>:30080 in a browser.
Verify that the page is displayed correctly.
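If no browser is at hand, the NodePort can also be checked from the command line (node01's IP is 10.10.0.14 according to the planning table):
curl -I http://10.10.0.14:30080/
# any HTTP response header (200 or a Tomcat redirect) means the Service and the kube-proxy rules work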
6.验证cordns是否正常
[rootmaster01 work ]#kubectl run busybox --image busybox:1.28 --restartNever --rm -it busybox -- sh
If you dont see a command prompt, try pressing enter.
/ # ping www.baidu.com
PING www.baidu.com (14.215.177.39): 56 data bytes
64 bytes from 14.215.177.39: seq0 ttl127 time5.189 ms
64 bytes from 14.215.177.39: seq1 ttl127 time17.240 ms
^C
--- www.baidu.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 5.189/11.214/17.240 ms
/ # nslookup kubernetes.default.svc.cluster.local
Server:    10.255.0.2
Address 1: 10.255.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default.svc.cluster.local
Address 1: 10.255.0.1 kubernetes.default.svc.cluster.local
/ # nslookup tomcat.default.svc.cluster.local
Server:    10.255.0.2
Address 1: 10.255.0.2 kube-dns.kube-system.svc.cluster.local

Name:      tomcat.default.svc.cluster.local
Address 1: 10.255.199.247 tomcat.default.svc.cluster.local

#Note: use the specified busybox 1.28 image, not the latest version; with the latest version nslookup cannot resolve the DNS name and IP and reports errors like the following:
/ # nslookup kubernetes.default.svc.cluster.local
Server: 10.255.0.2
Address: 10.255.0.2:53
*** Cant find kubernetes.default.svc.cluster.local: No answer
*** Can't find kubernetes.default.svc.cluster.local: No answer

10.255.0.2 is the clusterIP of our coreDNS, which shows that coreDNS is configured correctly. Internal Service names are resolved through coreDNS.
6.安装keepalivednginx实现k8s apiserver高可用
Upload epel.repo to the /etc/yum.repos.d directory on xianchaomaster1 so that keepalived and nginx can be installed; also copy epel.repo to xianchaomaster2, xianchaomaster3 and node01.
To summarize: install nginx and keepalived on each node.
On the three master nodes, install directly from the epel repository: yum install nginx keepalived -y
Configure nginx on all master nodes (see the nginx documentation for details); the nginx configuration is identical on all masters.
Add the stream block outside and below the http block; stream is a sibling of http, not nested inside it.
[rootmaster01 work ]#cat /etc/nginx/nginx.conf
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;events {worker_connections 1024;
}http {log_format main $remote_addr - $remote_user [$time_local] $request $status $body_bytes_sent $http_referer $http_user_agent $http_x_forwarded_for;access_log /var/log/nginx/access.log main;sendfile on;tcp_nopush on;tcp_nodelay on;keepalive_timeout 65;types_hash_max_size 4096;include /etc/nginx/mime.types;default_type application/octet-stream;# Load modular configuration files from the /etc/nginx/conf.d directory.# See http://nginx.org/en/docs/ngx_core_module.html#include# for more information.include /etc/nginx/conf.d/*.conf;server {listen 80;listen [::]:80;server_name _;root /usr/share/nginx/html;# Load configuration files for the default server block.include /etc/nginx/default.d/*.conf;error_page 404 /404.html;location /404.html {}error_page 500 502 503 504 /50x.html;location /50x.html {}}# Settings for a TLS enabled server.
#
# server {
# listen 443 ssl http2;
# listen [::]:443 ssl http2;
# server_name _;
# root /usr/share/nginx/html;
#
# ssl_certificate /etc/pki/nginx/server.crt;
# ssl_certificate_key /etc/pki/nginx/private/server.key;
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 10m;
# ssl_ciphers HIGH:!aNULL:!MD5;
# ssl_prefer_server_ciphers on;
#
# # Load configuration files for the default server block.
# include /etc/nginx/default.d/*.conf;
#
# error_page 404 /404.html;
# location /40x.html {
# }
#
# error_page 500 502 503 504 /50x.html;
# location /50x.html {
# }
#    }

}

stream {
    upstream apiserver {
        server 10.10.0.10:6443 max_fails=3 fail_timeout=30s;
        server 10.10.0.11:6443 max_fails=3 fail_timeout=30s;
        server 10.10.0.12:6443 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass apiserver;
    }
}

Start nginx. If it fails with an error like this:
[root@master01 work]# systemctl status nginx.service
● nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2022-10-27 10:55:16 CST; 12s ago
  Process: 3019 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=1/FAILURE)
  Process: 3017 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)

Oct 27 10:55:16 master01 systemd[1]: Starting The nginx HTTP and reverse proxy server...
Oct 27 10:55:16 master01 nginx[3019]: nginx: [emerg] unknown directive "stream" in /etc/nginx/nginx.conf:37
Oct 27 10:55:16 master01 nginx[3019]: nginx: configuration file /etc/nginx/nginx.conf test failed
Oct 27 10:55:16 master01 systemd[1]: nginx.service: control process exited, code=exited status=1
Oct 27 10:55:16 master01 systemd[1]: Failed to start The nginx HTTP and reverse proxy server.
Oct 27 10:55:16 master01 systemd[1]: Unit nginx.service entered failed state.
Oct 27 10:55:16 master01 systemd[1]: nginx.service failed.

then the stream module needs to be installed. Not every nginx build ships it; if yours does not, you would have to compile it in. The nginx 1.20 package currently in the epel repo does provide it:
yum install nginx-mod-stream -y
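After installing the module, it can be worth validating the configuration before enabling the service; this is plain nginx usage, nothing specific to this setup:
# re-check the config now that the stream module is available;
# a successful test reports that the syntax is ok and the test is successful
nginx -t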
[root@master01 work]# systemctl enable nginx.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
[root@master02 yum.repos.d]# systemctl enable nginx.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
[root@master03 yum.repos.d]# systemctl enable nginx.service --now
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.

keepalived configuration
Configure keepalived on all master nodes. The configuration differs from node to node, so keep them apart. Note that public clouds do not support keepalived (VRRP).
Only three settings differ between the master and the backups: router_id, state, and priority; everything else is identical.
Master keepalived - master01:
[root@master01 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id 10.10.0.10
}

vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33              # change to the actual NIC name
    virtual_router_id 51         # VRRP router ID, unique per instance; if multiple keepalived instances run, this id must not be reused
    priority 100                 # priority; set to 90 on the backup servers
    advert_int 1                 # VRRP advertisement (heartbeat) interval, default 1 second
    nopreempt                    # the default is preemptive mode; switch the master to non-preemptive mode to keep the VIP from flapping back and forth
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.10.0.100
    }
    track_script {
        chk_nginx
    }
}

If nginx dies, client requests fail, but keepalived will not fail over on its own. A script is needed to check whether nginx is still alive; if nginx is confirmed dead, the script stops keepalived on that machine so that the VIP can fail over properly.
[root@master01 keepalived]# cat check_nginx.sh
#!/bin/bash
while true; do
    nginxpid=$(ps -C nginx --no-header | wc -l) &> /dev/null
    if [ $nginxpid -eq 0 ]; then
        systemctl start nginx
        echo "nginx has been restarted!"
        sleep 5
        nginxpid=$(ps -C nginx --no-header | wc -l) &> /dev/null
        if [ $nginxpid -eq 0 ]; then
            systemctl stop keepalived
            echo "this node has failed!"
            exit 1
        fi
    fi
    sleep 5
    echo "running normally"
done

Make the script executable:
[root@master01 keepalived]# chmod +x check_nginx.sh
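The backup configurations below differ from master01's only in router_id, state, and priority. If you would rather not hand-edit them, a sketch like the following could generate them from master01's file and copy the check script along with them (purely a convenience, not part of the original steps; it assumes the passwordless ssh set up in section 3.4):
# run on master01 after its keepalived.conf and check_nginx.sh are in place
for n in "master02 10.10.0.11" "master03 10.10.0.12"; do
    set -- $n
    sed -e "s/router_id 10.10.0.10/router_id $2/" \
        -e "s/state MASTER/state BACKUP/" \
        -e "s/priority 100/priority 90/" \
        -e "/nopreempt/d" \
        /etc/keepalived/keepalived.conf | ssh "$1" "cat > /etc/keepalived/keepalived.conf"
    scp /etc/keepalived/check_nginx.sh "$1":/etc/keepalived/
done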
Backup keepalived - master02:
[root@master02 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id 10.10.0.11
}

vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.10.0.100
    }
    track_script {
        chk_nginx
    }
}

Backup keepalived - master03:
[root@master03 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id 10.10.0.12
}

vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.10.0.100
    }
    track_script {
        chk_nginx
    }
}

Test keepalived
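The original steps do not show starting keepalived explicitly, so before testing make sure it is running on all three masters and that the VIP is bound on master01 (a minimal sketch, assuming the configs above are already in place):
# on master01, master02 and master03
systemctl enable keepalived.service --now
# on master01: the VIP from the plan in section 1 should be attached to ens33
ip addr show ens33 | grep 10.10.0.100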
Stop nginx on master01; the VIP should fail over to master02 or master03.
Tested: the VIP does fail over.
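To see which master currently holds the VIP after the failover, something like this works (assumes the passwordless ssh from section 3.4):
for h in master01 master02 master03; do
    echo -n "$h: "
    ssh "$h" "ip -4 addr show ens33 | grep -c 10.10.0.100"
done
# the node that prints 1 is the one holding 10.10.0.100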
Point the worker nodes at the VIP. Right now every worker node component still connects to master01; unless they are switched to the VIP behind the load balancer, the control plane remains a single point of failure. So the next step is to change the component config files on all worker nodes (the nodes listed by kubectl get node) from the original 10.10.0.10 to the VIP 10.10.0.100. Run on every worker node:
[root@node01 ~]# sed -i 's#10.10.0.10:6443#10.10.0.100:7443#' /etc/kubernetes/kubelet-bootstrap.kubeconfig
[root@node01 ~]# sed -i 's#10.10.0.10:6443#10.10.0.100:7443#' /etc/kubernetes/kubelet.kubeconfig
[root@node01 ~]# sed -i 's#10.10.0.10:6443#10.10.0.100:7443#' /etc/kubernetes/kube-proxy.yaml
[root@node01 ~]# sed -i 's#10.10.0.10:6443#10.10.0.100:7443#' /etc/kubernetes/kube-proxy.kubeconfig
[root@node01 ~]# systemctl restart kubelet kube-proxy

On each master node:
[root@master02 ~]# sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /root/.kube/config
[root@master03 keepalived]# sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /root/.kube/config
[root@master01 work]# sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /root/.kube/config
[root@master03 ~]# sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /etc/kubernetes/kube-scheduler.kubeconfig
[root@master03 ~]# sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /etc/kubernetes/kube-controller-manager.kubeconfig
[root@master02 ~]# sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /etc/kubernetes/kube-scheduler.kubeconfig
[root@master02 ~]# sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /etc/kubernetes/kube-controller-manager.kubeconfig
[root@master01 ~]# sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /etc/kubernetes/kube-scheduler.kubeconfig
[root@master01 ~]# sed -i 's/10.10.0.10:6443/10.10.0.100:7443/g' /etc/kubernetes/kube-controller-manager.kubeconfig
[root@master03 ~]# systemctl restart kube-scheduler.service kube-controller-manager.service
[root@master02 ~]# systemctl restart kube-scheduler.service kube-controller-manager.service
[root@master01 ~]# systemctl restart kube-scheduler.service kube-controller-manager.service

[root@master03 ~]# kubectl cluster-info
Kubernetes control plane is running at https://10.10.0.100:7443
CoreDNS is running at https://10.10.0.100:7443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

With that, the highly available cluster is set up.
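As an extra sanity check (not in the original steps), confirm that no kubeconfig still points at the old single apiserver address; ideally this prints nothing:
# run on node01 and on each master
grep -r '10.10.0.10:6443' /etc/kubernetes/ /root/.kube/config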
At this point kubectl get nodes only shows the worker node. To have the control-plane nodes show up as well, repeat the kubelet and kube-proxy deployment steps on each control node, taking care to adjust the IPs in kubelet.json, kubelet-bootstrap.kubeconfig, kube-proxy.yaml, and kube-proxy.kubeconfig; a rough sketch of this follows.
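The sketch below is not part of the original steps; all paths and file contents are assumptions carried over from the node01 procedure in sections 5.7 and 5.8 (shown for master01, using each master's own IP), so adjust it to whatever layout you actually used:
# assumes the kubelet/kube-proxy binaries, certificates and config files were first copied over from node01
# replace node01's IP with this master's own IP in the node-local configs (hypothetical file contents)
sed -i 's/10.10.0.14/10.10.0.10/g' /etc/kubernetes/kubelet.json /etc/kubernetes/kube-proxy.yaml
# make sure the kubeconfigs point at the VIP rather than the old apiserver address
sed -i 's#10.10.0.10:6443#10.10.0.100:7443#g' /etc/kubernetes/kubelet-bootstrap.kubeconfig /etc/kubernetes/kube-proxy.kubeconfig
systemctl enable kubelet.service kube-proxy.service --now
kubectl get csr    # approve the new node's CSR if your bootstrap flow requires manual approval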
Label the master nodes:
[root@master01 work]# kubectl label node master01 node-role.kubernetes.io/controlplane=true
node/master01 labeled
[root@master01 work]# kubectl label node master02 node-role.kubernetes.io/controlplane=true
node/master02 labeled
[root@master01 work]# kubectl label node master03 node-role.kubernetes.io/controlplane=true
node/master03 labeled
[root@master01 work]# kubectl label node master01 node-role.kubernetes.io/etcd=true
node/master01 labeled
[root@master01 work]# kubectl label node master02 node-role.kubernetes.io/etcd=true
node/master02 labeled
[root@master01 work]# kubectl label node master03 node-role.kubernetes.io/etcd=true

Label the worker node:
[root@master01 work]# kubectl label node node01 node-role.kubernetes.io/worker=true

[root@master01 work]# kubectl get nodes
NAME       STATUS   ROLES               AGE     VERSION
master01   Ready    controlplane,etcd   67m     v1.20.7
master02   Ready    controlplane,etcd   67m     v1.20.7
master03   Ready    controlplane,etcd   67m     v1.20.7
node01     Ready    worker              7h42m   v1.20.7

7. Taint the master nodes to disable scheduling on them
[root@master01 work]# kubectl taint node master01 master01=null:NoSchedule
node/master01 tainted
[root@master01 work]# kubectl taint node master02 master02=null:NoSchedule
node/master02 tainted
[root@master01 work]# kubectl taint node master03 master03=null:NoSchedule
node/master03 tainted

That's it: the complete workflow for building a highly available Kubernetes cluster from binary packages. I hope it helps with your study and work. ღ( ´ᴗ` )