1. Introduction
Previous articles installed Kubernetes clusters with kubeadm, but many companies build their clusters from binary packages instead. This article walks through building a complete, highly available cluster from binaries. Compared with kubeadm, the binary approach is much more involved: you generate the signing certificates yourself, and every component has to be configured and installed step by step.
2. Environment preparation
2.1 Machine plan

| IP address | Hostname | Specs | OS | Role | Installed software |
|---|---|---|---|---|---|
| 172.10.1.11 | master1 | 2C4G | CentOS 7.6 | master | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
| 172.10.1.12 | master2 | 2C4G | CentOS 7.6 | master | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
| 172.10.1.13 | master3 | 2C4G | CentOS 7.6 | master | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
| 172.10.1.14 | node1 | 2C4G | CentOS 7.6 | worker | kubelet, kube-proxy |
| 172.10.1.15 | node2 | 2C4G | CentOS 7.6 | worker | kubelet, kube-proxy |
| 172.10.1.16 | node3 | 2C4G | CentOS 7.6 | worker | kubelet, kube-proxy |
| 172.10.0.20 | / | / | / | load-balancer VIP | / |

Note: the VIP here is a cloud provider SLB; you could also implement it with haproxy + keepalived.
2.2 Software versions

| Software | Version |
|---|---|
| kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy | v1.20.2 |
| etcd | v3.4.13 |
| calico | v3.14 |
| coredns | 1.7.0 |

3. Building the cluster
3.1 Basic machine configuration
Perform the following configuration on all six machines.
3.1.1 Set the hostnames
Set the hostnames to master1, master2, master3, node1, node2 and node3 respectively.
3.1.2 Configure the hosts file
Add the following entries to /etc/hosts on every machine:
cat >> /etc/hosts << EOF
172.10.1.11 master1
172.10.1.12 master2
172.10.1.13 master3
172.10.1.14 node1
172.10.1.15 node2
172.10.1.16 node3
EOF
3.1.3 Disable the firewall and SELinux
systemctl stop firewalld
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
3.1.4 Disable swap
swapoff -a
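To make the change permanent without hand-editing, the swap entry in fstab can be commented out with sed. The snippet below is a sketch that practices on a throwaway copy (the sample fstab content is made up); once the output looks right, run the same sed against the real /etc/fstab:

```shell
# Practice on a disposable copy first; the sample content is hypothetical.
fstab_copy=$(mktemp)
printf '%s\n' \
  '/dev/mapper/centos-root /    xfs  defaults 0 0' \
  '/dev/mapper/centos-swap swap swap defaults 0 0' > "$fstab_copy"

# Comment out any uncommented line that mentions swap.
sed -ri 's/^[^#].*\bswap\b.*/#&/' "$fstab_copy"
cat "$fstab_copy"
```

The swap line comes back prefixed with `#` while the root filesystem line is untouched.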
To disable swap permanently, edit /etc/fstab and comment out the swap line.
3.1.5 Time synchronization
yum install -y chrony
systemctl start chronyd
systemctl enable chronyd
chronyc sources
3.1.6 Tune the kernel parameters
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
3.1.7 Load the ipvs modules
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
lsmod | grep ip_vs
lsmod | grep nf_conntrack_ipv4
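modprobe does not survive a reboot. Assuming systemd's systemd-modules-load service (standard on CentOS 7), a drop-in file makes these modules load at boot. A sketch (writing to a temp directory here; on a real node the target is /etc/modules-load.d/ipvs.conf):

```shell
# MODULES_DIR stands in for /etc/modules-load.d in this sketch.
MODULES_DIR=$(mktemp -d)
cat > "$MODULES_DIR/ipvs.conf" << 'EOF'
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
cat "$MODULES_DIR/ipvs.conf"
```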
yum install -y ipvsadm
3.2 Set up the working directory
Every machine needs certificate files, component configuration files, and systemd unit files. We generate all of them centrally on master1 and then distribute them to the other machines. Perform the following on master1:
[root@master1 ~]# mkdir -p /data/work
Note: this directory holds all generated configuration and certificate files; all file-generation steps below are performed inside it.
[root@master1 ~]# ssh-keygen -t rsa -b 2048
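One way to distribute the key is ssh-copy-id in a loop, shown here in dry-run form (DRY_RUN=1 only prints the commands, which is the safe default for this sketch; set DRY_RUN=0 to actually push the key):

```shell
DRY_RUN=${DRY_RUN:-1}
for host in master2 master3 node1 node2 node3; do
  cmd="ssh-copy-id -i /root/.ssh/id_rsa.pub root@$host"
  if [ "$DRY_RUN" = 1 ]; then
    echo "$cmd"        # print what would run
  else
    $cmd               # prompts for each machine's password once
  fi
done
```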
Distribute the public key to the other five machines so that master1 can log in to them without a password.
3.3 Building the etcd cluster
3.3.1 Create the etcd working directories
[root@master1 ~]# mkdir -p /etc/etcd      # configuration file directory
[root@master1 ~]# mkdir -p /etc/etcd/ssl  # certificate file directory
3.3.2 Create the etcd certificates
Download the tools:
[root@master1 work]# cd /data/work/
[root@master1 work]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@master1 work]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@master1 work]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
Install the tools:
[root@master1 work]# chmod +x cfssl*
[root@master1 work]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@master1 work]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@master1 work]# mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
Create the CA CSR file:
[root@master1 work]# vim ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "k8s",
      "OU": "system"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
Note:
CN (Common Name): kube-apiserver extracts this field from the certificate as the requesting User Name; browsers use it to verify whether a site is legitimate.
O (Organization): kube-apiserver extracts this field as the Group the requesting user belongs to.
Create the CA certificate:
[root@master1 work]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
Configure the CA signing policy:
[root@master1 work]# vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
Create the etcd CSR file:
[root@master1 work]# vim etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "172.10.1.11",
    "172.10.1.12",
    "172.10.1.13"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
Generate the certificate:
[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
[root@master1 work]# ls etcd*.pem
etcd-key.pem  etcd.pem
3.3.3 Deploy the etcd cluster
Download the etcd package:
[root@master1 work]# wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
[root@master1 work]# tar -xf etcd-v3.4.13-linux-amd64.tar.gz
[root@master1 work]# cp -p etcd-v3.4.13-linux-amd64/etcd* /usr/local/bin/
[root@master1 work]# rsync -vaz etcd-v3.4.13-linux-amd64/etcd* master2:/usr/local/bin/
[root@master1 work]# rsync -vaz etcd-v3.4.13-linux-amd64/etcd* master3:/usr/local/bin/
Create the configuration file:
[root@master1 work]# vim etcd.conf
#[Member]
ETCD_NAME=etcd1
ETCD_DATA_DIR=/var/lib/etcd/default.etcd
ETCD_LISTEN_PEER_URLS=https://172.10.1.11:2380
ETCD_LISTEN_CLIENT_URLS=https://172.10.1.11:2379,http://127.0.0.1:2379
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://172.10.1.11:2380
ETCD_ADVERTISE_CLIENT_URLS=https://172.10.1.11:2379
ETCD_INITIAL_CLUSTER=etcd1=https://172.10.1.11:2380,etcd2=https://172.10.1.12:2380,etcd3=https://172.10.1.13:2380
ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster
ETCD_INITIAL_CLUSTER_STATE=new
Notes:
ETCD_NAME: node name; must be unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: listen address for cluster (peer) traffic
ETCD_LISTEN_CLIENT_URLS: listen address for client traffic
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS: client address advertised to the cluster
ETCD_INITIAL_CLUSTER: addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining; "new" for a new cluster, "existing" to join one that already exists
Create the systemd unit file. Option one: start from the configuration file:
[root@master1 work]# vim etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Option two: start without a configuration file:
[root@master1 work]# vim etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --name=etcd1 \
  --data-dir=/var/lib/etcd/default.etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --listen-peer-urls=https://172.10.1.11:2380 \
  --listen-client-urls=https://172.10.1.11:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://172.10.1.11:2379 \
  --initial-advertise-peer-urls=https://172.10.1.11:2380 \
  --initial-cluster=etcd1=https://172.10.1.11:2380,etcd2=https://172.10.1.12:2380,etcd3=https://172.10.1.13:2380 \
  --initial-cluster-token=etcd-cluster \
  --initial-cluster-state=new
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Note: this article uses option one.
Sync the files to each node:
[root@master1 work]# cp ca*.pem /etc/etcd/ssl/
[root@master1 work]# cp etcd*.pem /etc/etcd/ssl/
[root@master1 work]# cp etcd.conf /etc/etcd/
[root@master1 work]# cp etcd.service /usr/lib/systemd/system/
[root@master1 work]# for i in master2 master3;do rsync -vaz etcd.conf $i:/etc/etcd/;done
[root@master1 work]# for i in master2 master3;do rsync -vaz etcd*.pem ca*.pem $i:/etc/etcd/ssl/;done
[root@master1 work]# for i in master2 master3;do rsync -vaz etcd.service $i:/usr/lib/systemd/system/;done
Note: on master2 and master3, change the etcd name and IP addresses in the configuration file accordingly, and create the directory /var/lib/etcd/default.etcd.
Start the etcd cluster:
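Before starting etcd on master2 and master3, their copies of etcd.conf still carry master1's values. A hypothetical helper (not part of the original steps) that derives a member's file from master1's: note that the ETCD_INITIAL_CLUSTER line lists every member's IP, so the IP substitution deliberately skips that line.

```shell
# patch_etcd_conf <src> <dst> <new-name> <new-ip>
# Rewrites ETCD_NAME and replaces master1's IP everywhere except the
# ETCD_INITIAL_CLUSTER line, which must keep all three members.
patch_etcd_conf() {
  sed -e "s/^ETCD_NAME=.*/ETCD_NAME=$3/" \
      -e "/^ETCD_INITIAL_CLUSTER=/!s/172\.10\.1\.11/$4/g" \
      "$1" > "$2"
}

# e.g. patch_etcd_conf etcd.conf etcd.conf.master2 etcd2 172.10.1.12
```

Eyeball the result before rsyncing it into /etc/etcd/ on the target node.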
[root@master1 work]# mkdir -p /var/lib/etcd/default.etcd
[root@master1 work]# systemctl daemon-reload
[root@master1 work]# systemctl enable etcd.service
[root@master1 work]# systemctl start etcd.service
[root@master1 work]# systemctl status etcd
Note: the first start may appear to hang for a while, because each node waits for the others to come up.
Check the cluster status:
[root@master1 work]# ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://172.10.1.11:2379,https://172.10.1.12:2379,https://172.10.1.13:2379 endpoint health
3.4 Deploying the Kubernetes components
3.4.1 Download the packages
[root@master1 work]# wget https://dl.k8s.io/v1.20.2/kubernetes-server-linux-amd64.tar.gz
[root@master1 work]# tar -xf kubernetes-server-linux-amd64.tar.gz
[root@master1 work]# cd kubernetes/server/bin/
[root@master1 bin]# cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
[root@master1 bin]# rsync -vaz kube-apiserver kube-controller-manager kube-scheduler kubectl master2:/usr/local/bin/
[root@master1 bin]# rsync -vaz kube-apiserver kube-controller-manager kube-scheduler kubectl master3:/usr/local/bin/
[root@master1 bin]# for i in node1 node2 node3;do rsync -vaz kubelet kube-proxy $i:/usr/local/bin/;done
[root@master1 bin]# cd /data/work/
3.4.2 Create the working directories
[root@master1 work]# mkdir -p /etc/kubernetes/     # component configuration file directory
[root@master1 work]# mkdir -p /etc/kubernetes/ssl  # component certificate file directory
[root@master1 work]# mkdir /var/log/kubernetes     # component log file directory
3.4.3 Deploy kube-apiserver
Create the CSR file:
[root@master1 work]# vim kube-apiserver-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "172.10.1.11",
    "172.10.1.12",
    "172.10.1.13",
    "172.10.1.14",
    "172.10.1.15",
    "172.10.1.16",
    "172.10.0.20",
    "10.255.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
Note: if the hosts field is non-empty, it must list every IP or domain name authorized to use this certificate. Since the certificate is used by the whole master cluster, include every master IP, the VIP, and the first IP of the service network (the first IP of the --service-cluster-ip-range passed to kube-apiserver; here 10.255.0.1).
Generate the certificate and token file:
[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
[root@master1 work]# cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
Create the configuration file:
[root@master1 work]# vim kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=172.10.1.11 \
  --secure-port=6443 \
  --advertise-address=172.10.1.11 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.255.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://172.10.1.11:2379,https://172.10.1.12:2379,https://172.10.1.13:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"
Notes:
--logtostderr: log to stderr
--v: log level
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: https secure port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization mode; enables RBAC and node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate for apiserver access to kubelet
--tls-xxx-file: apiserver https certificate
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit logging
--service-account-signing-key-file and --service-account-issuer are mandatory from v1.20 on. Keep comments out of the multi-line value; a # inside it would truncate the options.
Create the unit file:
[root@master1 work]# vim kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Sync the files to each node:
[root@master1 work]# cp ca*.pem /etc/kubernetes/ssl/
[root@master1 work]# cp kube-apiserver*.pem /etc/kubernetes/ssl/
[root@master1 work]# cp token.csv /etc/kubernetes/
[root@master1 work]# cp kube-apiserver.conf /etc/kubernetes/
[root@master1 work]# cp kube-apiserver.service /usr/lib/systemd/system/
[root@master1 work]# rsync -vaz token.csv master2:/etc/kubernetes/
[root@master1 work]# rsync -vaz token.csv master3:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-apiserver*.pem master2:/etc/kubernetes/ssl/  # note: rsync only creates the last directory level; ssl is created if missing, but the parent /etc/kubernetes must already exist
[root@master1 work]# rsync -vaz kube-apiserver*.pem master3:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz ca*.pem master2:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz ca*.pem master3:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz kube-apiserver.conf master2:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-apiserver.conf master3:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-apiserver.service master2:/usr/lib/systemd/system/
[root@master1 work]# rsync -vaz kube-apiserver.service master3:/usr/lib/systemd/system/
Note: on master2 and master3, change the IP addresses in the configuration file to the actual local IP.
Start the service:
[root@master1 work]# systemctl daemon-reload
[root@master1 work]# systemctl enable kube-apiserver
[root@master1 work]# systemctl start kube-apiserver
[root@master1 work]# systemctl status kube-apiserver
Test:
[root@master1 work]# curl --insecure https://172.10.1.11:6443/
Any response means the server started correctly.
3.4.4 Deploy kubectl
Create the CSR file:
[root@master1 work]# vim admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "system:masters",
      "OU": "system"
    }
  ]
}
Notes:
kube-apiserver later uses RBAC to authorize client requests (from kubelet, kube-proxy, Pods, and so on).
kube-apiserver predefines some RoleBindings for RBAC; for example, cluster-admin binds Group system:masters to Role cluster-admin, which grants permission to call every kube-apiserver API.
O sets the certificate's Group to system:masters. When this certificate is used to access kube-apiserver, it passes authentication because it is signed by our CA, and because its group is the pre-authorized system:masters it is granted access to all APIs.
This admin certificate is used later to generate the administrator's kubeconfig file; RBAC is the recommended way to control roles and permissions in Kubernetes. Kubernetes uses the certificate's CN field as the User and the O field as the Group. "O": "system:masters" must be exactly that, or the later kubectl create clusterrolebinding will fail.
Generate the certificate:
[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
[root@master1 work]# cp admin*.pem /etc/kubernetes/ssl/
Create the kubeconfig file. A kubeconfig is kubectl's configuration file; it contains everything needed to reach the apiserver: its address, the CA certificate, and the client's own certificate.
Set the cluster parameters:
[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.10.0.20:6443 --kubeconfig=kube.config
Set the client credentials:
[root@master1 work]# kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
Set the context:
[root@master1 work]# kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
Set the default context:
[root@master1 work]# kubectl config use-context kubernetes --kubeconfig=kube.config
[root@master1 work]# mkdir ~/.kube
[root@master1 work]# cp kube.config ~/.kube/config
Grant the kubernetes certificate access to the kubelet API:
[root@master1 work]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
Check the component status. With the steps above complete, kubectl can now talk to kube-apiserver:
[root@master1 work]# kubectl cluster-info
[root@master1 work]# kubectl get componentstatuses
[root@master1 work]# kubectl get all --all-namespaces
Sync the kubectl configuration file to the other master nodes:
[root@master1 work]# rsync -vaz /root/.kube/config master2:/root/.kube/
[root@master1 work]# rsync -vaz /root/.kube/config master3:/root/.kube/
Configure kubectl command completion:
[root@master1 work]# yum install -y bash-completion
[root@master1 work]# source /usr/share/bash-completion/bash_completion
[root@master1 work]# source <(kubectl completion bash)
[root@master1 work]# kubectl completion bash > ~/.kube/completion.bash.inc
[root@master1 work]# source /root/.kube/completion.bash.inc
[root@master1 work]# source $HOME/.bash_profile
3.4.5 Deploy kube-controller-manager
Create the CSR file:
[root@master1 work]# vim kube-controller-manager-csr.json
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "172.10.1.11",
    "172.10.1.12",
    "172.10.1.13"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "system:kube-controller-manager",
      "OU": "system"
    }
  ]
}
Notes: the hosts list contains all kube-controller-manager node IPs. CN and O are both system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants the permissions the controller manager needs.
Generate the certificate:
[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
[root@master1 work]# ls kube-controller-manager*.pem
Create the kube-controller-manager kubeconfig:
Set the cluster parameters:
[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.10.0.20:6443 --kubeconfig=kube-controller-manager.kubeconfig
Set the client credentials:
[root@master1 work]# kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
Set the context:
[root@master1 work]# kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Set the default context:
[root@master1 work]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Create the configuration file:
[root@master1 work]# vim kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \
  --secure-port=10252 \
  --bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.255.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.0.0.0/16 \
  --experimental-cluster-signing-duration=87600h \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"
Create the unit file:
[root@master1 work]# vim kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Sync the files to each node:
[root@master1 work]# cp kube-controller-manager*.pem /etc/kubernetes/ssl/
[root@master1 work]# cp kube-controller-manager.kubeconfig /etc/kubernetes/
[root@master1 work]# cp kube-controller-manager.conf /etc/kubernetes/
[root@master1 work]# cp kube-controller-manager.service /usr/lib/systemd/system/
[root@master1 work]# rsync -vaz kube-controller-manager*.pem master2:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz kube-controller-manager*.pem master3:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf master2:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf master3:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-controller-manager.service master2:/usr/lib/systemd/system/
[root@master1 work]# rsync -vaz kube-controller-manager.service master3:/usr/lib/systemd/system/
Start the service:
[root@master1 work]# systemctl daemon-reload
[root@master1 work]# systemctl enable kube-controller-manager
[root@master1 work]# systemctl start kube-controller-manager
[root@master1 work]# systemctl status kube-controller-manager
3.4.6 Deploy kube-scheduler
Create the CSR file:
[root@master1 work]# vim kube-scheduler-csr.json
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "172.10.1.11",
    "172.10.1.12",
    "172.10.1.13"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "system:kube-scheduler",
      "OU": "system"
    }
  ]
}
Notes: the hosts list contains all kube-scheduler node IPs. CN and O are both system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants the permissions the scheduler needs.
Generate the certificate:
[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
[root@master1 work]# ls kube-scheduler*.pem
Create the kube-scheduler kubeconfig:
Set the cluster parameters:
[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.10.0.20:6443 --kubeconfig=kube-scheduler.kubeconfig
Set the client credentials:
[root@master1 work]# kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
Set the context:
[root@master1 work]# kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Set the default context:
[root@master1 work]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Create the configuration file:
[root@master1 work]# vim kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"
Create the unit file:
[root@master1 work]# vim kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Sync the files to each node:
[root@master1 work]# cp kube-scheduler*.pem /etc/kubernetes/ssl/
[root@master1 work]# cp kube-scheduler.kubeconfig /etc/kubernetes/
[root@master1 work]# cp kube-scheduler.conf /etc/kubernetes/
[root@master1 work]# cp kube-scheduler.service /usr/lib/systemd/system/
[root@master1 work]# rsync -vaz kube-scheduler*.pem master2:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz kube-scheduler*.pem master3:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf master2:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf master3:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-scheduler.service master2:/usr/lib/systemd/system/
[root@master1 work]# rsync -vaz kube-scheduler.service master3:/usr/lib/systemd/system/
Start the service:
[root@master1 work]# systemctl daemon-reload
[root@master1 work]# systemctl enable kube-scheduler
[root@master1 work]# systemctl start kube-scheduler
[root@master1 work]# systemctl status kube-scheduler
3.4.7 Deploy docker
Install docker on the three worker nodes:
[root@node1 ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
[root@node1 ~]# yum install -y docker-ce
[root@node1 ~]# systemctl enable docker
[root@node1 ~]# systemctl start docker
[root@node1 ~]# docker --version
Configure the registry mirrors and cgroup driver:
[root@node1 ~]# cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://1nj0zren.mirror.aliyuncs.com",
    "https://kfwkfulq.mirror.aliyuncs.com",
    "https://2lqq34jg.mirror.aliyuncs.com",
    "https://pee6w651.mirror.aliyuncs.com",
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn",
    "http://f1361db2.m.daocloud.io",
    "https://registry.docker-cn.com"
  ]
}
EOF
[root@node1 ~]# systemctl restart docker
[root@node1 ~]# docker info | grep "Cgroup Driver"
Pull the dependency images:
[root@node1 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
[root@node1 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
[root@node1 ~]# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
[root@node1 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
[root@node1 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
[root@node1 ~]# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
3.4.8 Deploy kubelet
Perform the following on master1. Create kubelet-bootstrap.kubeconfig:
[root@master1 work]# BOOTSTRAP_TOKEN=$(awk -F',' '{print $1}' /etc/kubernetes/token.csv)
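The bootstrap token generated earlier is 16 random bytes hex-encoded, so a mangled token.csv is easy to catch with a quick length/character check. A self-contained illustration (it generates a fresh token with the same recipe rather than reading the real file):

```shell
# Same recipe as used for token.csv above.
token=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' \n')
echo "${#token}"   # 32: 16 bytes -> 32 hex characters
```

If ${#BOOTSTRAP_TOKEN} is not 32 on your machine, the awk extraction or the token file is off.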
Set the cluster parameters:
[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.10.0.20:6443 --kubeconfig=kubelet-bootstrap.kubeconfig
Set the client credentials:
[root@master1 work]# kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
Set the context:
[root@master1 work]# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
Set the default context:
[root@master1 work]# kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
Create the role binding:
[root@master1 work]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
Create the configuration file:
[root@master1 work]# vim kubelet.json
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "172.10.1.14",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.255.0.2"]
}
Note: cgroupDriver must match docker's driver. Since daemon.json above sets native.cgroupdriver=systemd, use systemd here; with a cgroupfs docker, use cgroupfs instead. This setting is important, as a mismatch prevents the nodes from joining the cluster.
Create the unit file:
[root@master1 work]# vim kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet.json \
  --network-plugin=cni \
  --pod-infra-container-image=k8s.gcr.io/pause:3.2 \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Notes:
--hostname-override: display name; must be unique within the cluster
--network-plugin: enable CNI
--kubeconfig: an empty path; the file is generated automatically on first start and then used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration parameter file
--cert-dir: directory where kubelet certificates are generated
--pod-infra-container-image: image of the pause container that manages the Pod network
Sync the files to each node:
[root@master1 work]# cp kubelet-bootstrap.kubeconfig /etc/kubernetes/
[root@master1 work]# cp kubelet.json /etc/kubernetes/
[root@master1 work]# cp kubelet.service /usr/lib/systemd/system/
Skip the cp commands above if the master nodes will not run kubelet.
[root@master1 work]# for i in node1 node2 node3;do rsync -vaz kubelet-bootstrap.kubeconfig kubelet.json $i:/etc/kubernetes/;done
[root@master1 work]# for i in node1 node2 node3;do rsync -vaz ca.pem $i:/etc/kubernetes/ssl/;done
[root@master1 work]# for i in node1 node2 node3;do rsync -vaz kubelet.service $i:/usr/lib/systemd/system/;done
Note: in kubelet.json, change address to each node's own IP.
Start the service (on each worker node):
[root@node1 ~]# mkdir /var/lib/kubelet
[root@node1 ~]# mkdir /var/log/kubernetes
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl enable kubelet
[root@node1 ~]# systemctl start kubelet
[root@node1 ~]# systemctl status kubelet
Once kubelet is confirmed running, go back to the master and approve the bootstrap requests. The following commands show that the three worker nodes each sent a CSR request:
[root@master1 work]# kubectl get csr
[root@master1 work]# kubectl certificate approve node-csr-HlX3cExsZohWsu8Dd6Rp_ztFejmMdpzvti_qgxo4SAQ
[root@master1 work]# kubectl certificate approve node-csr-oykYfnH_coRF2PLJH4fOHlGznOZUBPDg5BPZXDo2wgk
[root@master1 work]# kubectl certificate approve node-csr-ytRB2fikhL6dykcekGg4BdD87o-zw9WPU44SZ1nFT50
[root@master1 work]# kubectl get csr
[root@master1 work]# kubectl get nodes
3.4.9 Deploy kube-proxy
Create the CSR file:
[root@master1 work]# vim kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
Generate the certificate:
[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
[root@master1 work]# ls kube-proxy*.pem
Create the kubeconfig file:
[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.10.0.20:6443 --kubeconfig=kube-proxy.kubeconfig
[root@master1 work]# kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
[root@master1 work]# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
[root@master1 work]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Create the kube-proxy configuration file:
[root@master1 work]# vim kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 172.10.1.14
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 192.168.0.0/16  # must match the network component's CIDR, or deploying the network component will fail
healthzBindAddress: 172.10.1.14:10256
kind: KubeProxyConfiguration
metricsBindAddress: 172.10.1.14:10249
mode: ipvs
Create the unit file:
[root@master1 work]# vim kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Sync the files to each node:
[root@master1 work]# cp kube-proxy*.pem /etc/kubernetes/ssl/
[root@master1 work]# cp kube-proxy.kubeconfig kube-proxy.yaml /etc/kubernetes/
[root@master1 work]# cp kube-proxy.service /usr/lib/systemd/system/
Skip the cp commands above if the master nodes will not run kube-proxy.
[root@master1 work]# for i in node1 node2 node3;do rsync -vaz kube-proxy.kubeconfig kube-proxy.yaml $i:/etc/kubernetes/;done
[root@master1 work]# for i in node1 node2 node3;do rsync -vaz kube-proxy.service $i:/usr/lib/systemd/system/;done
Note: in kube-proxy.yaml, change the addresses to each node's actual IP.
Start the service (on each worker node):
[root@node1 ~]# mkdir -p /var/lib/kube-proxy
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl enable kube-proxy
[root@node1 ~]# systemctl restart kube-proxy
[root@node1 ~]# systemctl status kube-proxy
3.4.10 Deploy the network component
[root@master1 work]# wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml
[root@master1 work]# kubectl apply -f calico.yaml
After this, every node shows Ready:
[root@master1 work]# kubectl get pods -A
[root@master1 work]# kubectl get nodes
3.4.11 Deploy coredns
Download the coredns yaml file: https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed
Modify the yaml as follows:
- kubernetes cluster.local in-addr.arpa ip6.arpa
- forward . /etc/resolv.conf
- clusterIP: 10.255.0.2 (the clusterDNS value from the kubelet configuration file)
[root@master1 work]# cat coredns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: coredns/coredns:1.8.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.255.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
[root@master1 work]# kubectl apply -f coredns.yaml
3.5 Verification
3.5.1 Deploy nginx
[root@master1 ~]# vim nginx.yaml
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-controller
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.6
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-nodeport
spec:
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30001
    protocol: TCP
  type: NodePort
  selector:
    name: nginx
[root@master1 ~]# kubectl apply -f nginx.yaml
[root@master1 ~]# kubectl get svc
[root@master1 ~]# kubectl get pods
3.5.2 Verify
Verify by pinging the nginx Service's cluster IP from a node, then access nginx through the NodePort.
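A quick external check of the NodePort service: each worker IP should answer on port 30001. The loop below only prints the curl commands to run (node IPs are taken from the plan above):

```shell
# Print one curl probe per worker node; run them against the live cluster.
for node in 172.10.1.14 172.10.1.15 172.10.1.16; do
  echo "curl -s -o /dev/null -w '%{http_code}\n' http://$node:30001"
done
```

An HTTP 200 from any node confirms that kube-proxy's ipvs rules and the calico network are working end to end.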