K8S Cluster Setup (1 master + 2 workers)

Contents:
1. Environment preparation — 1.1 Unified versions; 1.2 System requirements for k8s; 1.3 Prepare three CentOS 7 VMs
2. Cluster setup — 2.1 Update yum and install dependencies; 2.2 Install Docker; 2.3 Set hostnames and edit the hosts file; 2.4 Apply k8s system prerequisites; 2.5 Install kubeadm, kubelet and kubectl; 2.6 Install the k8s core images (proxy, pause, scheduler, ...); 2.7 kubeadm init to initialize the master
3. Trying out a Pod

Kubernetes docs: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl
GitHub: https://github.com/kubernetes/kubeadm

This guide uses kubeadm to build a three-machine k8s cluster: 1 master node and 2 worker nodes.

1. Environment Preparation
1.1 Unified Versions

A k8s install has many moving parts; to avoid surprises, this guide pins all software to the following versions.
Docker 18.09.0
---
kubeadm-1.14.0-0
kubelet-1.14.0-0
kubectl-1.14.0-0
---
k8s.gcr.io/kube-apiserver:v1.14.0
k8s.gcr.io/kube-controller-manager:v1.14.0
k8s.gcr.io/kube-scheduler:v1.14.0
k8s.gcr.io/kube-proxy:v1.14.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
---
calico:v3.9

1.2 System Requirements for k8s

Supported operating systems (one or more machines running one of):
- Ubuntu 16.04
- Debian 9
- CentOS 7 (used in this guide)
- Red Hat Enterprise Linux (RHEL) 7
- Fedora 25
- HypriotOS v1.0.1
- Container Linux (tested with 1800.6.0)

Hardware and network requirements:
- At least 2 GB of RAM per machine (with less, little is left for your applications)
- 2 or more CPUs
- Full network connectivity between all nodes (they must be able to ping each other)
- Unique hostname, MAC address, and product_uuid on every node
- Firewall disabled, or the required ports opened
- Swap disabled (required for the kubelet to work properly)

1.3 Prepare Three CentOS 7 VMs
(1) Ensure the three VMs have unique MAC addresses.
All three VMs run at the same time, so each must have a unique MAC address.

(2) VM configuration:

Node           OS         IP                CPU cores   Memory (GB)
K8S_master     CentOS 7   192.168.116.170   2           2
K8S_worker01   CentOS 7   192.168.116.171   2           2
K8S_worker02   CentOS 7   192.168.116.172   2           2

(3) Verify that all nodes can ping each other.

2. Cluster Setup
2.1 Update yum and Install Dependencies

Note: run on all 3 machines. This step needs internet access and takes quite a while.

(1) Update yum:

[root@localhost ~]# yum -y update

(2) Install the dependency packages:

[root@localhost ~]# yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp

2.2 Install Docker
Note: run on all 3 machines.
Install Docker 18.09.0 on every machine.

(0) Remove any previously installed Docker:

[root@localhost /]# sudo yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine

(1) Install the required dependencies:

[root@localhost ~]# sudo yum install -y yum-utils device-mapper-persistent-data lvm2

(2) Add the Docker yum repository (Aliyun mirror):

[root@localhost ~]# sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

(3) Configure the USTC registry mirror (strongly recommended).

Note: if /etc/docker/daemon.json does not exist, create it by hand.

① Create the /etc/docker directory:
[root@localhost ~]# mkdir -p /etc/docker
[root@localhost ~]#
[root@localhost ~]# vi /etc/docker/daemon.json

② Create and edit daemon.json with this content:

{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}

③ Reload so the change takes effect:

[root@localhost ~]# sudo systemctl daemon-reload

(See also: configuring the USTC registry mirror by editing /etc/docker/daemon.json.)

(4) Install the pinned Docker version (v18.09.0).
① List the available Docker versions, newest first:

[root@localhost /]# yum list docker-ce --showduplicates | sort -r

② Install the pinned version, v18.09.0 (needs internet access):

[root@localhost /]# sudo yum -y install docker-ce-18.09.0 docker-ce-cli-18.09.0 containerd.io

(5) Start Docker.

Tip: the install creates the docker user group, but puts no users in it.

Start command:
[root@localhost /]# sudo systemctl start docker
[root@localhost ~]#
[root@localhost ~]# ps -ef | grep docker
[root@localhost ~]#

Or start Docker and enable it at boot (recommended):

[root@localhost /]# sudo systemctl start docker && sudo systemctl enable docker
[root@localhost ~]#
[root@localhost ~]# ps -ef | grep docker
[root@localhost ~]#

To stop Docker: systemctl stop docker

(6) Verify the Docker installation.

Tip: run the hello-world image to verify that Docker Engine - Community is installed correctly.

Run the hello-world container:
[root@localhost /]# sudo docker run hello-world

If the "Hello from Docker!" message appears, the installation succeeded.

2.3 Set Hostnames and Edit the hosts File

Note: run on all 3 machines.

The hosts file maps hostnames to IP addresses, serving the same purpose as c:\windows\system32\drivers\etc\hosts on Windows. To reach another machine by name, either add that machine's name and IP to the local hosts file or configure DNS.

(1) On the master node:
① Set the master VM's hostname to m:

[root@localhost /]# sudo hostnamectl set-hostname m

Note: hostname or uname -n shows the current hostname, e.g. [root@localhost ~]# hostname

② Edit the hosts file. Append the IP-to-hostname mappings to the end of /etc/hosts:

[root@localhost /]# vi /etc/hosts

Content to append to /etc/hosts:

192.168.116.170 m
192.168.116.171 w1
192.168.116.172 w2

(2) On the worker01 node:
① Set the worker01 VM's hostname to w1:

[root@localhost /]# sudo hostnamectl set-hostname w1

② Edit the hosts file. Append the same IP-to-hostname mappings to the end of /etc/hosts:

[root@localhost /]# vi /etc/hosts

192.168.116.170 m
192.168.116.171 w1
192.168.116.172 w2

(3) On the worker02 node:
① Set the worker02 VM's hostname to w2:

[root@localhost /]# sudo hostnamectl set-hostname w2

② Edit the hosts file. Append the same IP-to-hostname mappings to the end of /etc/hosts:

[root@localhost /]# vi /etc/hosts

192.168.116.170 m
192.168.116.171 w1
192.168.116.172 w2
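Since the same three mappings must be appended on every node, the edit can be wrapped in a small idempotent helper so that re-running it never creates duplicates. This is a sketch, not part of the original guide: `add_host_entry` is a hypothetical helper name, demonstrated here against a temp file; on a real node you would pass /etc/hosts.

```shell
#!/bin/sh
# add_host_entry FILE "IP NAME": append the mapping only if it is not already present
add_host_entry() {
  grep -qF "$2" "$1" || echo "$2" >> "$1"
}

f=$(mktemp)   # stand-in for /etc/hosts
add_host_entry "$f" "192.168.116.170 m"
add_host_entry "$f" "192.168.116.171 w1"
add_host_entry "$f" "192.168.116.172 w2"
add_host_entry "$f" "192.168.116.170 m"   # second run: no duplicate is added
cat "$f"
```

Running the block twice leaves the file unchanged, which is exactly what you want when a node's setup steps get re-executed.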
(4) Use ping to verify connectivity between the three VMs:

[root@localhost ~]# ping m
[root@localhost ~]# ping w1
[root@localhost ~]# ping w2

2.4 Apply k8s System Prerequisites
Note: run on all 3 machines.

(1) Disable the firewall (permanently):

[root@localhost ~]# systemctl stop firewalld && systemctl disable firewalld

(2) Disable SELinux.

SELinux (Security-Enhanced Linux) was developed by the US National Security Agency (NSA) together with other security organizations (such as SCC) to harden Linux, addressing weaknesses of traditional discretionary access control (DAC), such as the overly powerful root account.

Run the following commands:
[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

(3) Disable swap.

When physical memory runs low, the swap partition frees up RAM by moving memory pages of long-idle programs to disk; the data is swapped back into RAM when those programs run again. The kubelet requires swap to be off.

Run the following commands:
[root@localhost ~]# swapoff -a
[root@localhost ~]# sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
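To see exactly what the sed command in step (3) does to /etc/fstab, it can be tried against a throwaway copy first (the fstab lines below are made-up examples, not from the original guide):

```shell
#!/bin/sh
f=$(mktemp)   # stand-in for /etc/fstab
cat > "$f" <<'EOF'
/dev/mapper/centos-root  /      xfs   defaults  0 0
/dev/mapper/centos-swap  swap   swap  defaults  0 0
EOF

# Same command as above: comment out every line that mentions swap
sed -i '/swap/s/^\(.*\)$/#\1/g' "$f"
cat "$f"
```

Only the swap line gains a leading '#'; the root filesystem entry is untouched, so the machine still boots normally with swap permanently disabled.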
(4) Reset iptables to ACCEPT (flush all rules):

[root@localhost ~]# iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT

(5) Set kernel parameters.

Note: cat <<EOF reads stdin up to the EOF marker.

① Run:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

② Reload the configuration:

[root@localhost ~]# sysctl --system

2.5 Install kubeadm, kubelet and kubectl
Note: run on all 3 machines.

(1) Configure the kubernetes yum repository. Run:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

(2) Install kubeadm, kubelet, and kubectl.
Run the following in order:

# (1) List the kubeadm versions available in the repository
[root@localhost ~]# yum list kubeadm --showduplicates | sort -r

# (2) Install kubelet (this pulls in the dependency kubernetes-cni.x86_64 0:0.7.5-0).
# Important: install kubelet-1.14.0-0 FIRST. If you install the other packages first,
# yum resolves the kubelet dependency to a newer version (kubelet-1.20.0-0 in my test),
# which does not match our pinned version and makes the master initialization fail later.
[root@localhost ~]# yum install -y kubelet-1.14.0-0

# (3) Install kubeadm and kubectl
[root@localhost ~]# yum install -y kubeadm-1.14.0-0 kubectl-1.14.0-0

# (4) Check the kubelet version; v1.14.0 means the install is complete
[root@localhost ~]# kubelet --version
Kubernetes v1.14.0

Note: to uninstall (e.g. before installing a different version):
(1) List yum-installed packages with: yum list installed
    (look for kubeadm.x86_64, kubectl.x86_64, kubelet.x86_64, kubernetes-cni.x86_64)
(2) Remove them:
    yum remove kubeadm.x86_64
    yum remove kubectl.x86_64
    yum remove kubelet.x86_64
    yum remove kubernetes-cni.x86_64

(3) Make Docker and k8s use the same cgroup driver.
① Edit /etc/docker/daemon.json:

[root@localhost ~]# vi /etc/docker/daemon.json

Add the following entry:

"exec-opts": ["native.cgroupdriver=systemd"]

The complete daemon.json now reads:

{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

② Restart Docker:

# systemctl daemon-reload
# systemctl restart docker
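A malformed daemon.json stops dockerd from starting at all, so it is worth checking the file's syntax before the restart. A minimal sketch, using Python's stdlib json.tool against a temp copy (on a real node, point it at /etc/docker/daemon.json instead):

```shell
#!/bin/sh
f=$(mktemp)   # stand-in for /etc/docker/daemon.json
cat > "$f" <<'EOF'
{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# json.tool exits non-zero on a syntax error (e.g. a trailing comma)
python3 -m json.tool "$f" > /dev/null && echo "daemon.json OK"
```

If the check fails, fix the file before running systemctl restart docker, since dockerd refuses to start with invalid configuration.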
③ Configure the kubelet, then start it and enable it at boot.

Ⅰ. Configure the kubelet:

[root@localhost ~]# sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

This may report: "sed: can't read /etc/systemd/system/kubelet.service.d/10-kubeadm.conf: No such file or directory".
That is normal; ignore it and continue with the next step. If the path above does not exist, the file is most likely at /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf.

Ⅱ. Start the kubelet and enable it at boot:

[root@localhost ~]# systemctl enable kubelet && systemctl start kubelet

Note: at this point ps -ef | grep kubelet shows no process; ignore that for now.

2.6 Install the k8s Core Images (proxy, pause, scheduler, ...)
Note: run on all 3 machines.

(1) List the images kubeadm requires; they are all on foreign registries (k8s.gcr.io):
[root@localhost ~]# kubeadm config images list
I0106 02:06:13.785323 101920 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL https://dl.k8s.io/release/stable-1.txt: Get https://storage.googleapis.com/kubernetes-release/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0106 02:06:13.785667 101920 version.go:97] falling back to the local client version: v1.14.0
k8s.gcr.io/kube-apiserver:v1.14.0
k8s.gcr.io/kube-controller-manager:v1.14.0
k8s.gcr.io/kube-scheduler:v1.14.0
k8s.gcr.io/kube-proxy:v1.14.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
[root@localhost ~]#

(2) Work around the unreachable foreign registry.

Note: k8s.gcr.io is blocked from mainland China (a Hong Kong server, for example, can reach it directly). The usual workaround — pull each image from the domestic mirror "registry.cn-hangzhou.aliyuncs.com/google_containers", re-tag it, and delete the original tag — is tedious to do image by image, so this section wraps the whole process in a script.

① Create and edit the kubeadm.sh script.

Note: creating the script in the current directory is fine; its only purpose is to download the images.

Run:
[root@localhost ~]# vi kubeadm.sh

Script content:

#!/bin/bash

set -e

KUBE_VERSION=v1.14.0
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.10
CORE_DNS_VERSION=1.3.1

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi  $ALIYUN_URL/$imageName
done

② Run kubeadm.sh:
[root@localhost ~]# sh ./kubeadm.sh

Notes:

(1) Running "sh ./kubeadm.sh" may fail with a DNS timeout:

Error response from daemon: Get https://registry.cn-hangzhou.aliyuncs.com/v2/: dial tcp: lookup registry.cn-hangzhou.aliyuncs.com on 8.8.8.8:53: read udp 192.168.116.170:51212->8.8.8.8:53: i/o timeout

(2) It may also fail with a syntax/character error:

[root@m xiao]# sh ./kubeadm.sh
./kubeadm.sh: line 2: $'\r': command not found
./kubeadm.sh: line 3: set: -: invalid option
set: usage: set [-abefhkmnptuvxBCHP] [-o option-name] [--] [arg ...]
[root@m xiao]#

Fix: strip the \r characters from the shell script:

[root@m xiao]# sed -i 's/\r//' kubeadm.sh

My own procedure: first test-pull a single image, e.g. docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.0; once that succeeds, run the script with sh ./kubeadm.sh
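The $'\r' failure in note (2) can be reproduced and repaired in isolation. This sketch fabricates a script saved with Windows (CRLF) line endings, which breaks /bin/sh, then applies the same sed fix:

```shell
#!/bin/sh
f=$(mktemp)
# Simulate a script that was edited on Windows: each line ends in \r\n
printf '#!/bin/sh\r\necho hello\r\n' > "$f"

# The fix from above: strip every carriage return
sed -i 's/\r//' "$f"
sh "$f"   # now runs cleanly and prints: hello
```

The underlying cause is that the shell sees "echo hello\r" as one token, so avoid round-tripping scripts through Windows editors, or run the sed fix after copying.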
An actual run (this log was captured with a newer version set, v1.24.0; with the versions pinned above you will see v1.14.0 tags instead):

[root@m xiao]# sh kubeadm.sh
v1.24.0: Pulling from google_containers/kube-proxy
Digest: sha256:c957d602267fa61082ab8847914b2118955d0739d592cc7b01e278513478d6a8
Status: Image is up to date for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.24.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.24.0
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.24.0
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy@sha256:c957d602267fa61082ab8847914b2118955d0739d592cc7b01e278513478d6a8
v1.24.0: Pulling from google_containers/kube-scheduler
36698cfa5275: Pull complete
fe7d6916a3e1: Pull complete
a47c83d2acf4: Pull complete
Digest: sha256:db842a7c431fd51db7e1911f6d1df27a7b6b6963ceda24852b654d2cd535b776
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.0
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.0
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler@sha256:db842a7c431fd51db7e1911f6d1df27a7b6b6963ceda24852b654d2cd535b776
v1.24.0: Pulling from google_containers/kube-controller-manager
36698cfa5275: Already exists
fe7d6916a3e1: Already exists
45a9a072021e: Pull complete
Digest: sha256:df044a154e79a18f749d3cd9d958c3edde2b6a00c815176472002b7bbf956637
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.24.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.24.0
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.24.0
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager@sha256:df044a154e79a18f749d3cd9d958c3edde2b6a00c815176472002b7bbf956637
v1.24.0: Pulling from google_containers/kube-apiserver
36698cfa5275: Already exists
fe7d6916a3e1: Already exists
8ebfef20cea7: Pull complete
Digest: sha256:a04522b882e919de6141b47d72393fb01226c78e7388400f966198222558c955
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.24.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.24.0
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.24.0
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver@sha256:a04522b882e919de6141b47d72393fb01226c78e7388400f966198222558c955
3.7: Pulling from google_containers/pause
7582c2cc65ef: Pull complete
Digest: sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c
3.5.3-0: Pulling from google_containers/etcd
36698cfa5275: Already exists
924f6cbb1ab3: Pull complete
11ade7be2717: Pull complete
8c6339f7974a: Pull complete
d846fbeccd2d: Pull complete
Digest: sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.3-0
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.3-0
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.3-0
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5
v1.8.6: Pulling from google_containers/coredns
d92bdee79785: Pull complete
6e1b7c06e42d: Pull complete
Digest: sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e
[root@m xiao]#

③ List the Docker images:
[root@localhost ~]# docker images

2.7 kubeadm init: Initialize the Master
Note: run on the master node ONLY.

Docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

(1) Reset the cluster state (optional).

Notes:
1) Under normal circumstances, skip this step.
2) If a later step fails, run this step, then pick up again at the next step.
3) Command: [root@m ~]# kubeadm reset

(2) Initialize the master node.

If the following error appears, see the fix in the post "CentOS 7 K8S install error [ERROR KubeletVersion]: the kubelet version is higher than the control plane version" (the fix must be applied on all 3 machines). On my machine, kubeadm init reported:

[ERROR KubeletVersion]: the kubelet version is higher than the control plane version. This is not a supported version skew and may lead to a malfunctional cluster. Kubelet version: "1.20.1" Control plane version: "1.14.0"

Run the init command (master node only):
[root@m ~]# kubeadm init --v=5 \
    --kubernetes-version=1.14.0 \
    --apiserver-advertise-address=192.168.116.170 \
    --pod-network-cidr=10.244.0.0/16

Parameter notes:
--apiserver-advertise-address: the master node's IP (be sure to change it to yours)
--v=5: print detailed preflight/diagnostic logs
--pod-network-cidr=10.244.0.0/16: keep as-is

Note: if kubeadm init still tries to pull images, check whether an earlier step already pulled them. If an image exists but under the wrong name, simply re-tag it:

Format: docker tag <source-image>:<tag> <new-image>:<tag>
Tag:    docker tag k8s.gcr.io/coredns:v1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6
Remove the original image: docker rmi k8s.gcr.io/coredns:v1.8.6

When you see output like the following, initialization succeeded. Read the whole log carefully; it contains commands you must run next:
[root@m ~]# kubeadm init --v=5 \
    --kubernetes-version=1.14.0 \
    --apiserver-advertise-address=192.168.116.170 \
    --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.14.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using kubeadm config images pull
[kubelet-start] Writing kubelet environment file with flags to file /var/lib/kubelet/kubeadm-flags.env
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder /etc/kubernetes/pki
[certs] Generating ca certificate and key
[certs] Generating apiserver certificate and key
[certs] apiserver serving cert is signed for DNS names [m kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.116.170]
[certs] Generating apiserver-kubelet-client certificate and key
[certs] Generating front-proxy-ca certificate and key
[certs] Generating front-proxy-client certificate and key
[certs] Generating etcd/ca certificate and key
[certs] Generating etcd/healthcheck-client certificate and key
[certs] Generating etcd/server certificate and key
[certs] etcd/server serving cert is signed for DNS names [m localhost] and IPs [192.168.116.170 127.0.0.1 ::1]
[certs] Generating etcd/peer certificate and key
[certs] etcd/peer serving cert is signed for DNS names [m localhost] and IPs [192.168.116.170 127.0.0.1 ::1]
[certs] Generating apiserver-etcd-client certificate and key
[certs] Generating sa key and public key
[kubeconfig] Using kubeconfig folder /etc/kubernetes
[kubeconfig] Writing admin.conf kubeconfig file
[kubeconfig] Writing kubelet.conf kubeconfig file
[kubeconfig] Writing controller-manager.conf kubeconfig file
[kubeconfig] Writing scheduler.conf kubeconfig file
[control-plane] Using manifest folder /etc/kubernetes/manifests
[control-plane] Creating static Pod manifest for kube-apiserver
[control-plane] Creating static Pod manifest for kube-controller-manager
[control-plane] Creating static Pod manifest for kube-scheduler
[etcd] Creating static Pod manifest for local etcd in /etc/kubernetes/manifests
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory /etc/kubernetes/manifests. This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 108.532279 seconds
[upload-config] storing the configuration used in ConfigMap kubeadm-config in the kube-system Namespace
[kubelet] Creating a ConfigMap kubelet-config-1.14 in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node m as control-plane by adding the label node-role.kubernetes.io/master
[mark-control-plane] Marking the node m as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: qzgakq.brqfhysglgh1z285
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the cluster-info ConfigMap in the kube-public namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.116.170:6443 --token qzgakq.brqfhysglgh1z285 \
    --discovery-token-ca-cert-hash sha256:ec5a689ac221ad489452c96730e27814f0faa798b058fbe843f8a47388fe910f
[root@m ~]#

The log above shows that etcd, the controller-manager, the scheduler, and the other components were all installed successfully as pods.
Be sure to keep the last part of the output (the kubeadm join command); it is needed later.

Next, run the following commands (master node only):
[root@m ~]# mkdir -p $HOME/.kube
[root@m ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@m ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the pods (master node):

[root@m ~]# kubectl get pods -n kube-system

Note: at this point the coredns pods stay in the Pending state. That is because no inter-node network plugin is installed yet; install one as described in the next step.

(3) Install the calico network plugin (master node only).
Choose a network add-on: https://kubernetes.io/docs/concepts/cluster-administration/addons/
This guide uses the calico plugin: https://docs.projectcalico.org/v3.9/getting-started/kubernetes/
(Tip: the newest version I have installed is calico v3.21.)

# (1) Download calico.yaml into the current directory
[root@m ~]# wget https://docs.projectcalico.org/v3.9/manifests/calico.yaml

# (2) Check which images calico.yaml needs
[root@m ~]# cat calico.yaml | grep image
          image: calico/cni:v3.9.6
          image: calico/cni:v3.9.6
          image: calico/pod2daemon-flexvol:v3.9.6
          image: calico/node:v3.9.6
          image: calico/kube-controllers:v3.9.6
[root@m ~]#

# (3) For convenience, manually pull all the images first
(Pull the tags that grep actually reported; the log below is from a run where calico.yaml referenced v3.9.3.)

[root@m ~]# docker pull calico/cni:v3.9.3
[root@m ~]# docker pull calico/pod2daemon-flexvol:v3.9.3
[root@m ~]# docker pull calico/node:v3.9.3
[root@m ~]# docker pull calico/kube-controllers:v3.9.3

# (4) Apply calico.yaml
[root@m ~]# kubectl apply -f calico.yaml

# (5) Keep checking the pods (master node); after a while all pods reach the Running state, which means the install succeeded
[root@m ~]# kubectl get pods -n kube-system

(4) Join the worker01 and worker02 nodes to the master.
# (1) On the master node, list all nodes
[root@m ~]# kubectl get nodes

# (2) On the worker01 node, run the join command (taken from the kubeadm init output)
[root@w1 ~]# kubeadm join 192.168.116.170:6443 --token qzgakq.brqfhysglgh1z285 \
    --discovery-token-ca-cert-hash sha256:ec5a689ac221ad489452c96730e27814f0faa798b058fbe843f8a47388fe910f

# (3) On the worker02 node, run the same join command
[root@w2 ~]# kubeadm join 192.168.116.170:6443 --token qzgakq.brqfhysglgh1z285 \
    --discovery-token-ca-cert-hash sha256:ec5a689ac221ad489452c96730e27814f0faa798b058fbe843f8a47388fe910f

# (4) On the master node, list the nodes again and wait until all of them are Ready (takes about 3-5 minutes)
[root@m ~]# kubectl get nodes

Note: if kubeadm join fails, see the troubleshooting section below.

Useful commands:
(1) Watch all pods live on the master node (foreground monitor):
    kubectl get pods --all-namespaces -w
(2) List all nodes on the master node:
    kubectl get nodes

Troubleshooting kubeadm join:

Error: accepts at most 1 arg(s), received 2
Analysis: the command accepts at most 1 argument but got 2. This is almost always a formatting issue: I had copied the join command to Windows and pasted it back, and a leftover line break turned part of the command into an extra argument.
Fix: remove the line break and run the command again.

Error:
[preflight] Running pre-flight checks
error execution phase preflight: couldn't validate the identity of the API Server: invalid discovery token CA certificate hash: invalid hash "sha256:9f89085f6bfbf60a35935b502a57532204c12a3da3d6412", expected a 32 byte SHA-256 hash, found 23 bytes
Analysis: the master's token and sha256 hash were no longer valid (I only ran the join command the next day).
Fix:
(1) On the master node, generate a new token:
    [root@m ~]# kubeadm token create
(2) On the master node, regenerate the sha256 hash:
    [root@m ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
(3) On worker01 and worker02, rerun the join command:
    kubeadm join 192.168.116.201:6443 --token fsyl1v.i4cglmc2i0b2d86p --discovery-token-ca-cert-hash sha256:9f89085f6bfbf60a35935b502a57532204c12a3da3d6412443b7b2c700aa6b3c
(4) On the master node, check the nodes again and wait until all of them are Ready (about 3-5 minutes):
    [root@m ~]# kubectl get nodes

The installation is complete.
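For context, the value passed to --discovery-token-ca-cert-hash is nothing more than the SHA-256 digest of the cluster CA certificate's DER-encoded public key. The sketch below runs the same openssl pipeline as step (2) of the fix, but against a throwaway self-signed certificate standing in for /etc/kubernetes/pki/ca.crt, which exists only on a real master:

```shell
#!/bin/sh
dir=$(mktemp -d)

# Throwaway CA certificate standing in for /etc/kubernetes/pki/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" -days 1 2>/dev/null

# Extract the public key, convert it to DER, and hash it
hash=$(openssl x509 -pubkey -in "$dir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

The printed value has the sha256:<64 hex digits> shape that kubeadm join expects; the "found 23 bytes" error above indicates a hash that does not match this shape.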
3. Trying Out a Pod

(1) Define/create the pod yaml file.

Run the following command to create pod_nginx_rs.yaml in the current directory:
cat <<EOF > pod_nginx_rs.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx
  labels:
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      name: nginx
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

(2) Create the pod from pod_nginx_rs.yaml:
[root@m ~]# kubectl apply -f pod_nginx_rs.yaml
replicaset.apps/nginx created
[root@m ~]#

(3) Inspect the pods.

Note: creating the pods takes a moment; wait until every pod is in the Running state.

# Overview of all pods
[root@m ~]# kubectl get pods

# Overview of all pods, including which cluster node each one landed on
[root@m ~]# kubectl get pods -o wide

# Detailed information for a specific pod (here: the nginx pods)
[root@m ~]# kubectl describe pod nginx

(4) Scale the pods up through the ReplicaSet.
# (1) Scale up via the ReplicaSet
[root@m ~]# kubectl scale rs nginx --replicas=5
replicaset.extensions/nginx scaled
[root@m ~]#

# (2) Overview of the pods; wait until all of them are Running
[root@m ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP               NODE   NOMINATED NODE   READINESS GATES
nginx-7b465   1/1     Running   0          16m   192.168.190.67   w1     <none>           <none>
nginx-fn8rx   1/1     Running   0          16m   192.168.80.194   w2     <none>           <none>
nginx-gp44d   1/1     Running   0          26m   192.168.190.65   w1     <none>           <none>
nginx-q2cbz   1/1     Running   0          26m   192.168.80.193   w2     <none>           <none>
nginx-xw85v   1/1     Running   0          26m   192.168.190.66   w1     <none>           <none>
[root@m ~]#

(5) Delete the test pods:
[root@m ~]# kubectl delete -f pod_nginx_rs.yaml
replicaset.apps "nginx" deleted
[root@m ~]#
[root@m ~]#

# Check the pods: they are terminating...
[root@m ~]# kubectl get pods -o wide
NAME          READY   STATUS        RESTARTS   AGE   IP               NODE   NOMINATED NODE   READINESS GATES
nginx-7b465   1/1     Terminating   0          21m   192.168.190.67   w1     <none>           <none>
nginx-fn8rx   1/1     Terminating   0          21m   192.168.80.194   w2     <none>           <none>
nginx-gp44d   1/1     Terminating   0          31m   192.168.190.65   w1     <none>           <none>
nginx-q2cbz   1/1     Terminating   0          31m   192.168.80.193   w2     <none>           <none>
nginx-xw85v   1/1     Terminating   0          31m   192.168.190.66   w1     <none>           <none>

# Check the pods again: nothing is left
[root@m ~]# kubectl get pods
No resources found in default namespace.
[root@m ~]#