Table of Contents

1. System Base Setup
  1.1 Disable the firewall
  1.2 Disable SELinux
  1.3 Disable swap
  1.4 Set hostnames
  1.5 Pass bridged IPv4 traffic to iptables chains
  1.6 Time synchronization
  1.7 Prerequisites for enabling IPVS
  1.8 Install iproute-tc and dig
2. Deploy keepalived on all master nodes
  2.1 Install keepalived and related packages
  2.2 Configure the master nodes
  2.3 Start and verify
3. Deploy haproxy
  3.1 Install
  3.2 Configure
  3.3 Start and verify
4. Install Docker/kubeadm/kubelet on all nodes
  4.1 Install Docker
  4.2 Install cri-dockerd
  4.3 Add the Aliyun YUM repository
  4.4 Install kubeadm, kubelet, and kubectl
  4.5 kubernetes-cni
5. Deploy the Kubernetes masters
  5.1 Create the kubeadm config file
  5.2 Run on master1
6. Install the cluster network
7. Join master2 to the cluster
  7.1 Copy certificates and related files
  7.2 Join master2 to the cluster
  7.3 Join master3 to the cluster
  7.4 Check status
8. Join the Kubernetes worker nodes
  8.1 Run the join command on node1, node2, and node3
  8.2 Reinstall the cluster network (new nodes added)
  8.3 Check status
9. Test the Kubernetes cluster
10. Switch kube-proxy to IPVS mode and restart it
11. Common commands

1. System Base Setup
1.1 Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

1.2 Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

1.3 Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent (comments out the swap entry)
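To confirm swap is fully off (the Swap row should show all zeros):

free -h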
1.4 Set hostnames

hostnamectl set-hostname zxhy-master   # run on each node with that node's name

cat >> /etc/hosts << EOF
192.168.0.15 zxhy-vip
192.168.0.14 zxhy-master
192.168.0.222 zxhy-slave1
192.168.0.77 zxhy-slave2
192.168.0.188 zxhy-slave3
192.168.0.193 zxhy-slave4
192.168.0.227 zxhy-slave5
EOF

1.5 Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
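The net.bridge.* keys only exist while the br_netfilter kernel module is loaded; if sysctl --system reports them as missing, load the module first:

modprobe br_netfilter
lsmod | grep br_netfilter   # verify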
sysctl --system  # apply

1.6 Time synchronization
yum install ntpdate -y
ntpdate time.windows.com

1.7 Prerequisites for enabling IPVS
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
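These modprobe calls do not persist across reboots. One way to make them permanent (a sketch using systemd's modules-load mechanism):

cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
lsmod | grep -e ip_vs -e nf_conntrack   # verify the modules are loaded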
1.8 Install iproute-tc and dig

yum install iproute-tc -y
yum -y install bind-utils
2. Deploy keepalived on all master nodes
2.1 Install keepalived and related packages
yum install -y conntrack-tools libseccomp libtool-ltdl
yum install -y keepalived

2.2 Configure the master nodes
master1 configuration:
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id k8s
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 250
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }
    virtual_ipaddress {
        192.168.0.15
    }
    track_script {
        check_haproxy
    }
}
EOF

master2 configuration (state BACKUP; the virtual_ipaddress is the same VIP, 192.168.0.15, and the priority is lower than master1's; master3 uses the same BACKUP configuration with a still lower priority):
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
    router_id k8s
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }
    virtual_ipaddress {
        192.168.0.15
    }
    track_script {
        check_haproxy
    }
}
EOF

2.3 Start and verify
Run on all three master nodes:
# start keepalived
$ systemctl start keepalived.service
# enable at boot
$ systemctl enable keepalived.service
# check status
$ systemctl status keepalived.service

After startup, check master1's NIC; the VIP should be bound to eth0:
ip a s eth0

If you are building on cloud servers, request a virtual IP in the cloud console, bind it to the three master nodes, and add the matching network/security-group rules; otherwise the VIP will not be reachable by ping.
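A quick failover test (a sketch using this guide's node names): stop keepalived on master1, confirm the VIP moves to a backup node, then restore it:

# on master1
systemctl stop keepalived
# on master2 or master3: 192.168.0.15 should appear within a few seconds
ip a s eth0
# on master1: restore
systemctl start keepalived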
3. Deploy haproxy
3.1 Install
yum install -y haproxy

3.2 Configure
The configuration is identical on all master nodes. It declares the three master API servers as backends and binds haproxy to port 16443, so port 16443 becomes the cluster entry point.
cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the listen and backend sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode                 tcp
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server      zxhy-nacos 192.168.0.14:6443 check
    server      zxhy-redis 192.168.0.77:6443 check
    server      zxhy-mysql 192.168.0.222:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:1080
    stats auth           admin:awesomePassword
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin?stats
EOF

3.3 Start and verify
Start on all three masters:
# enable at boot
$ systemctl enable haproxy
# start haproxy
$ systemctl start haproxy
# check status
$ systemctl status haproxy
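A quick check that haproxy is actually listening on the cluster entry port:

ss -lnt | grep 16443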
4. Install Docker/kubeadm/kubelet on all nodes

This guide uses Docker as the container runtime. Note that Kubernetes 1.24 removed the built-in dockershim, so Docker must be paired with the cri-dockerd adapter (installed in 4.2).
4.1 Install Docker
$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
$ yum -y install docker-ce-24.0.5
$ systemctl enable docker && systemctl start docker
$ docker --version
Docker version 24.0.5, build e68fc7a

Configure the registry mirror and log options:

$ cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://6f09bd673d1d4e8d98dab0ab278fc7c2.mirror.swr.myhuaweicloud.com"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  }
}
EOF
# restart the docker service
systemctl restart docker

4.2 Install cri-dockerd
Download the cri-dockerd package:
cd /opt
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.6/cri-dockerd-0.3.6.20231018204925.877dc6a4-0.el7.x86_64.rpm

Install the service:
yum install -y cri-dockerd-0.3.6.20231018204925.877dc6a4-0.el7.x86_64.rpm
vim /usr/lib/systemd/system/cri-docker.service
# append the pause-image flag to the ExecStart line:
--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7

systemctl daemon-reload
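The same edit can be scripted instead of done in vim (a sketch; it assumes the stock ExecStart line from the rpm and appends the flag right after the binary path, so double-check the file afterwards):

sed -i 's|^ExecStart=/usr/bin/cri-dockerd|& --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7|' \
    /usr/lib/systemd/system/cri-docker.service
systemctl daemon-reload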
vim /usr/lib/systemd/system/cri-docker.socket

Start the service and check its status:
# enable at boot
$ systemctl enable cri-docker
# start cri-docker
$ systemctl start cri-docker
# check status
$ systemctl status cri-docker

Check whether the CRI plugin is disabled in containerd:
vi /etc/containerd/config.toml
# if disabled_plugins contains "cri", remove it:
# disabled_plugins = ["cri"]
disabled_plugins = []

Restart the container runtime:
systemctl restart containerd

4.3 Add the Aliyun YUM repository
$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

4.4 Install kubeadm, kubelet, and kubectl
Versions change frequently, so pin an explicit version here:
$ yum install -y kubelet-1.24.7 kubeadm-1.24.7 kubectl-1.24.7
$ systemctl enable kubelet
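The kubeadm packages pull in cri-tools as a dependency, so crictl can now confirm that cri-dockerd answers on its socket (a hedged sanity check; skip if crictl is absent):

crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version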
4.5 kubernetes-cni

The kubernetes-cni package is pulled in as a dependency of kubelet and provides the CNI plugin binaries. Until a network add-on is deployed (section 6), nodes stay NotReady and the kubelet logs:

network plugin is not ready: cni config uninitialized
5. Deploy the Kubernetes masters
5.1 Create the kubeadm config file
Operate on the master that holds the VIP (here, master1):
$ mkdir -p /usr/local/kubernetes/manifests
$ cd /usr/local/kubernetes/manifests/
$ vi kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
localAPIEndpoint:
  advertiseAddress: 192.168.0.14
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  imagePullPolicy: IfNotPresent
  name: zxhy-nacos
  taints: null
---
apiServer:
  certSANs:
  - zxhy-nacos
  - zxhy-redis
  - zxhy-mysql
  - zxhy-vip
  - 192.168.0.14
  - 192.168.0.222
  - 192.168.0.77
  - 192.168.0.15
  - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: zxhy-vip:16443
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.24.7
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.1.0.0/16
scheduler: {}

5.2 Run on master1
$ kubeadm init --config kubeadm-config.yaml
$ export KUBECONFIG=/etc/kubernetes/admin.conf

Follow the prompts to set up kubectl:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes
$ kubectl get pods -n kube-system

Save the join command printed by kubeadm init; it is needed shortly:
kubeadm join zxhy-vip:16443 --token gp4qgj.3x8wal0o2gmbcpis --discovery-token-ca-cert-hash sha256:af5fe3bb4f2ada51967c34053e94ed4c703287e3e26487d6d8dbe450a2550013 --cri-socket unix:///var/run/cri-dockerd.sock
# If you forgot to copy it, regenerate the join command with:
kubeadm token create --print-join-command

6. Install the cluster network
Fetch the flannel YAML from the official repo and apply it on master1:
mkdir /usr/local/kubernetes/manifests/flannel
cd /usr/local/kubernetes/manifests/flannel
wget -c https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Install the flannel network:
kubectl apply -f kube-flannel.yml

Check:
kubectl get pods -n kube-system
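Depending on the flannel version, its pods land in kube-system or in a dedicated kube-flannel namespace; a version-agnostic check:

kubectl get pods --all-namespaces | grep flannel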
7. Join master2 to the cluster

7.1 Copy certificates and related files
Copy the certificates and related files from master1 to master2:
# ssh root@192.168.0.222 mkdir -p /etc/kubernetes/pki/etcd
# scp /etc/kubernetes/admin.conf root@192.168.0.222:/etc/kubernetes
# scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@192.168.0.222:/etc/kubernetes/pki
# scp /etc/kubernetes/pki/etcd/ca.* root@192.168.0.222:/etc/kubernetes/pki/etcd
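As an alternative to copying certificates by hand, kubeadm can distribute them through the cluster itself (a sketch; the command prints a certificate key that is then appended to the join command as --certificate-key):

# on master1: re-upload the control-plane certs and print the certificate key
kubeadm init phase upload-certs --upload-certs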
7.2 Join master2 to the cluster

Run the join command that kubeadm init printed on master1, adding --control-plane so the node joins as a control-plane member:
kubeadm join zxhy-vip:16443 --token gp4qgj.3x8wal0o2gmbcpis --discovery-token-ca-cert-hash sha256:af5fe3bb4f2ada51967c34053e94ed4c703287e3e26487d6d8dbe450a2550013 --control-plane --cri-socket unix:///var/run/cri-dockerd.sock

Follow the prompts to set up kubectl:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes
$ kubectl get pods -n kube-system

7.3 Join master3 to the cluster
Same procedure as master2.
7.4 Check status
kubectl get node
kubectl get pods --all-namespaces

8. Join the Kubernetes worker nodes
8.1 Run the join command on node1, node2, and node3
To add worker nodes, run the kubeadm join command printed by kubeadm init:
kubeadm join zxhy-vip:16443 --token gp4qgj.3x8wal0o2gmbcpis --discovery-token-ca-cert-hash sha256:af5fe3bb4f2ada51967c34053e94ed4c703287e3e26487d6d8dbe450a2550013 --cri-socket unix:///var/run/cri-dockerd.sock

8.2 Reinstall the cluster network (new nodes added)
After all nodes have joined, reinstall the flannel network:
# go to the flannel manifests directory
cd /usr/local/kubernetes/manifests/flannel
# delete the previous network
kubectl delete -f kube-flannel.yml
# re-apply the network
kubectl apply -f kube-flannel.yml

8.3 Check status
kubectl get node
kubectl get pods --all-namespaces

9. Test the Kubernetes cluster
Create a pod in the cluster and verify it runs:
$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc
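Once the nginx pod is Running, the service is reachable on its NodePort from any node (a sketch; the port 30080 below is hypothetical, substitute the NodePort that kubectl get svc printed):

curl http://192.168.0.14:30080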
10. Switch kube-proxy to IPVS mode and restart it

1. Edit the kube-proxy config
kubectl edit cm kube-proxy -n kube-system
Change mode: iptables to mode: ipvs (the field defaults to an empty string "", which also means iptables).
2. Restart kube-proxy
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
3. Check the logs
kubectl logs -f -n kube-system kube-proxy-62fk7   # substitute one of your kube-proxy pod names
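If the switch worked, the log mentions the ipvs proxier. ipvsadm gives a more direct view of the rules kube-proxy programmed (ipvsadm is not installed by default):

yum install -y ipvsadm
ipvsadm -Ln   # should list virtual servers for the cluster/service IPs with rr scheduling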
11. Common commands
# enter a container
kubectl exec -it nacos-0 -- bash
# check the DNS server in use
cat /etc/resolv.conf
# resolve a headless service via the cluster DNS (10.1.0.10)
dig @10.1.0.10 nacos-headless.default.svc.cluster.local