Author | 天元浪子
Source | CSDN Blog

Overview

As a container engine, Docker provides an open standard for containerized applications, letting developers manage infrastructure the same way they manage applications and enabling rapid delivery, testing, and deployment of code. As containers came into heavy use, the question arose of how to coordinate, schedule, and manage them, and container orchestration for Docker was born. Container orchestration can loosely be understood as cluster management. There are many orchestration tools for Docker; the best known are Compose, Machine, and Swarm, together known as the "three musketeers" of Docker. Compose and Machine are third-party tools, while Swarm is Docker's official orchestration tool and is already built into Docker itself.

Swarm consists of three parts:

- swarm: cluster management
- node: node management
- service: service management

Cluster and Node Management

The docker swarm command creates or joins a cluster. Nodes in a Docker cluster come in two kinds, manager and worker. Both kinds can run containers, but only manager nodes have management functions. A cluster can work normally even if it contains only manager nodes.

2.1 Creating a cluster

My test environment consists of two machines, with IP addresses 192.168.1.220 and 192.168.1.116. Create the cluster on 192.168.1.220:

# docker swarm init
Swarm initialized: current node (ppmurem8j7mdbmgpdhssjh0h9) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3e4l8crbt04xlqfxwyw6nuf9gtcwpw72zggtayzy8clyqmvb5h-7o6ww4ftwm38dz7ydbolsz3kd 192.168.1.220:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

After docker swarm init completes, the cluster exists. The current machine automatically becomes the cluster's manager node, and the command other machines use to join the cluster is printed, i.e. docker swarm join --token SWMTKN-1-3e4l8crbt04xlqfxwyw6nuf9gtcwpw72zggtayzy8clyqmvb5h-7o6ww4ftwm38dz7ydbolsz3kd 192.168.1.220:2377. A node that joins with this token becomes a worker node. To add a new manager node instead, run docker swarm join-token manager; it prints a similar join command that adds the node as a manager. If you forget the join command, docker swarm join-token worker displays it again.
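A side note not covered above: join tokens can be reprinted and also rotated at any time with the join-token subcommand; rotating invalidates the old token without affecting nodes that already joined. A quick sketch:

# show the current join command for each role
docker swarm join-token worker
docker swarm join-token manager

# replace the worker join token (e.g. if it leaked)
docker swarm join-token --rotate worker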
2.2 Joining a cluster

Run the join command on 192.168.1.116:

# docker swarm join --token SWMTKN-1-12dlq70adr3z38mlkltc288rdzevtjn73xse7d0qndnjmx45zs-b1kwenzmrsqb4o5nvni5rafcr 192.168.1.220:2377
This node joined a swarm as a worker.

A small hiccup happened here: the time zones of my two machines were inconsistent, which made the worker fail to join with the error "Error response from daemon: error while validating Root CA Certificate: x509: certificate has expired or is not yet valid". Even after updating the time zone on 192.168.1.220, the node still could not join. So I deleted the cluster and created it again, and the join then succeeded. I did not try whether docker swarm update would also have fixed it.
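Also untested here, but worth knowing: Docker's CLI includes a subcommand for rotating the swarm's root CA certificate, which might be worth trying before tearing a cluster down over certificate errors. A sketch, to be run on a manager node:

# rotate the swarm root CA and reissue node TLS certificates
docker swarm ca --rotate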
After joining, you can list the cluster's nodes on the manager node:

# docker node ls
ID                            HOSTNAME                  STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
9b4cmakc4hpc9ra4rruy5x5yo *   localhost.localdomain     Ready    Active         Leader           20.10.3
hz50cnwrbk4vxa7h0g23ccil9     zhangmh-virtual-machine   Ready    Active                          20.10.1
2.3 Leaving a cluster

Run the following on 192.168.1.116 to leave the cluster:

# docker swarm leave
Node left the swarm.

List the nodes again:

# docker node ls
ID                            HOSTNAME                  STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
9b4cmakc4hpc9ra4rruy5x5yo *   localhost.localdomain     Ready    Active         Leader           20.10.3
hz50cnwrbk4vxa7h0g23ccil9     zhangmh-virtual-machine   Down     Active                          20.10.1

The node that just left is still listed; only its status has changed to Down. It needs to be removed on the manager node:

# docker node rm hz50cnwrbk4vxa7h0g23ccil9
hz50cnwrbk4vxa7h0g23ccil9
# docker node ls
ID                            HOSTNAME                  STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
9b4cmakc4hpc9ra4rruy5x5yo *   localhost.localdomain     Ready    Active         Leader           20.10.3
xby86ffkqw3axyfkwd4s7nubz     zhangmh-virtual-machine   Ready    Active                          20.10.1

Only now is the node truly deleted. (192.168.1.116 was apparently joined to the cluster again afterwards, which is why it shows up above as Ready under a new node ID, xby86ffkqw3axyfkwd4s7nubz.) If the node leaving the cluster is a manager node, it has to leave forcibly, i.e. docker swarm leave -f.

2.4 Promoting a node to manager

A cluster with a single manager is fragile: when the manager node crashes, the whole cluster is leaderless. Docker recommends that a cluster have at least three manager nodes, and more than half of the managers must be reachable for the cluster to keep working. Note that with only two managers, the cluster still becomes unavailable when one of them fails, since one out of two is not a majority. For our test that doesn't matter; two managers are enough to check whether leader failover works. The following command promotes a worker node directly to manager:

# docker node promote xby86ffkqw3axyfkwd4s7nubz
Node xby86ffkqw3axyfkwd4s7nubz promoted to a manager in the swarm.
# docker node ls
ID                            HOSTNAME                  STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
9b4cmakc4hpc9ra4rruy5x5yo *   localhost.localdomain     Ready    Active         Leader           20.10.3
xby86ffkqw3axyfkwd4s7nubz     zhangmh-virtual-machine   Ready    Active         Reachable        20.10.1

OK, there are now two manager nodes: 192.168.1.220 has the status Leader, i.e. it is the current leader node, and 192.168.1.116 has the status Reachable, i.e. it is reachable. Now stop the Docker service on the 192.168.1.220 node:

# systemctl stop docker
Warning: Stopping docker.service, but it can still be activated by:
  docker.socket

Stopping prints a warning: the Docker service has been stopped, but it can still be woken up again by the docker.socket service.
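Because docker.socket can reactivate the daemon, the stop above may not keep Docker down for long. If you want it to stay stopped during a test, one option (not used here) is to stop the socket unit as well; this is plain systemd usage:

# stop the socket unit too, so Docker is not socket-activated again
systemctl stop docker.socket docker.service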
Check the node status again:

# docker node ls
ID                            HOSTNAME                  STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
9b4cmakc4hpc9ra4rruy5x5yo *   localhost.localdomain     Ready    Active         Reachable        20.10.3
xby86ffkqw3axyfkwd4s7nubz     zhangmh-virtual-machine   Ready    Active         Leader           20.10.1

As you can see, 192.168.1.116 has become the Leader, and 192.168.1.220 has already been woken up again. The stability of a Docker cluster is quite good.

Service Management

Once the cluster's nodes are configured, you can create services. A Docker service essentially starts containers and gives them replication and load-balancing capabilities. Using the ws:1.0 image created earlier as an example, create a service with 5 replicas:

# docker service create --replicas 5 --name ws -p 80:8000 ws:1.0
image ws:1.0 could not be accessed on a registry to record
its digest. Each node will access ws:1.0 independently,
possibly leading to different nodes running different
versions of the image.

1nj3o38slbo2zwt5p69l1qi5t
overall progress: 5 out of 5 tasks
1/5: running   [==================================================>]
2/5: running   [==================================================>]
3/5: running   [==================================================>]
4/5: running   [==================================================>]
5/5: running   [==================================================>]
verify: Service converged

The service is created and running; port 80 on both 192.168.1.220 and 192.168.1.116 can be reached from a browser.
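About the warning in the create output: it appears because ws:1.0 exists only in each node's local image store, so the nodes cannot verify they run the same image digest. One way around it, sketched here with a hypothetical private registry address, is to push the image to a registry every node can reach and create the service from there:

# push ws:1.0 to a private registry (the address 192.168.1.220:5000 is assumed;
# a plain-HTTP registry would also need to be listed under insecure-registries
# in each node's daemon.json)
docker tag ws:1.0 192.168.1.220:5000/ws:1.0
docker push 192.168.1.220:5000/ws:1.0

# create the service from the registry image so all nodes pull the same digest
docker service create --replicas 5 --name ws -p 80:8000 192.168.1.220:5000/ws:1.0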
The ws service can be viewed with the docker service ls command:

# docker service ls
ID             NAME   MODE         REPLICAS   IMAGE    PORTS
1nj3o38slbo2   ws     replicated   5/5        ws:1.0   *:80->8000/tcp
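The published port *:80->8000/tcp goes through Swarm's ingress routing mesh, which is why the service answers on every node's IP, not only on nodes that happen to run a replica. A quick check from any machine that can reach both hosts:

# both nodes answer on the published port, whichever node runs the replica
curl http://192.168.1.220/
curl http://192.168.1.116/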
The tasks of the ws service can be viewed with the docker service ps ws command:

# docker service ps ws
ID             NAME   IMAGE    NODE                      DESIRED STATE   CURRENT STATE               ERROR   PORTS
jpckj0mn24ae   ws.1   ws:1.0   zhangmh-virtual-machine   Running         Running 6 minutes ago
yrrdn4ntb089   ws.2   ws:1.0   localhost.localdomain     Running         Running 6 minutes ago
mdjxadbmlmhs   ws.3   ws:1.0   zhangmh-virtual-machine   Running         Running 6 minutes ago
kqdwfrddbaxd   ws.4   ws:1.0   localhost.localdomain     Running         Running 6 minutes ago
is2iimz1v4eb   ws.5   ws:1.0   zhangmh-virtual-machine   Running         Running 6 minutes ago

Two tasks are running on 192.168.1.220 and three on 192.168.1.116. After hitting the service from a browser a few times, I checked the service log with docker service logs ws (output below).
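For a longer-running service the full log gets noisy; docker service logs also accepts the standard flags for following and truncating output, e.g.:

# follow the log stream, starting from the last 20 lines
docker service logs -f --tail 20 ws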
# docker service logs ws
ws.5.is2iimz1v4eb@zhangmh-virtual-machine    | [I 210219 01:57:23 web:2239] 200 GET / (10.0.0.2) 3.56ms
ws.5.is2iimz1v4eb@zhangmh-virtual-machine    | [W 210219 01:57:23 web:2239] 404 GET /favicon.ico (10.0.0.2) 0.97ms
ws.5.is2iimz1v4eb@zhangmh-virtual-machine    | [I 210219 01:57:28 web:2239] 200 GET / (10.0.0.4) 0.82ms
ws.5.is2iimz1v4eb@zhangmh-virtual-machine    | [W 210219 01:57:28 web:2239] 404 GET /favicon.ico (10.0.0.4) 0.79ms
ws.1.jpckj0mn24ae@zhangmh-virtual-machine    | [I 210219 02:01:45 web:2239] 304 GET / (10.0.0.2) 1.82ms
ws.1.jpckj0mn24ae@zhangmh-virtual-machine    | [I 210219 02:01:59 web:2239] 304 GET / (10.0.0.2) 0.49ms
ws.1.jpckj0mn24ae@zhangmh-virtual-machine    | [I 210219 02:02:01 web:2239] 304 GET / (10.0.0.2) 2.05ms
ws.1.jpckj0mn24ae@zhangmh-virtual-machine    | [I 210219 02:02:02 web:2239] 304 GET / (10.0.0.2) 0.89ms
ws.1.jpckj0mn24ae@zhangmh-virtual-machine    | [I 210219 02:02:02 web:2239] 304 GET / (10.0.0.2) 1.13ms
ws.1.jpckj0mn24ae@zhangmh-virtual-machine    | [I 210219 02:02:03 web:2239] 304 GET / (10.0.0.2) 0.92ms
ws.1.jpckj0mn24ae@zhangmh-virtual-machine    | [I 210219 02:02:03 web:2239] 304 GET / (10.0.0.2) 2.19ms
ws.1.jpckj0mn24ae@zhangmh-virtual-machine    | [I 210219 02:02:20 web:2239] 304 GET / (10.0.0.2) 1.00ms

Even though I was accessing 192.168.1.220, the requests were actually served by tasks on 192.168.1.116. If 192.168.1.116 is shut down, the tasks running on it are automatically moved to the 192.168.1.220 node. Since 192.168.1.116 is currently a manager node, and stopping it would make the cluster unavailable, it first needs to be demoted back to a worker node.
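An alternative worth noting, though not used in this test: instead of demoting a node and powering it off, you can drain it, which migrates its tasks elsewhere while the node stays up. The docker node update command supports this directly:

# stop scheduling tasks on the node and move its existing tasks away
docker node update --availability drain xby86ffkqw3axyfkwd4s7nubz

# later, allow the node to receive tasks again
docker node update --availability active xby86ffkqw3axyfkwd4s7nubz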
# docker node demote xby86ffkqw3axyfkwd4s7nubz
Manager xby86ffkqw3axyfkwd4s7nubz demoted in the swarm.

Then shut down 192.168.1.116 and check the service's tasks:

# docker service ps ws
ID             NAME   IMAGE    NODE                    DESIRED STATE   CURRENT STATE               ERROR   PORTS
jrj9ben9vr5c   ws.1   ws:1.0   localhost.localdomain   Running         Running 57 minutes ago
yrrdn4ntb089   ws.2   ws:1.0   localhost.localdomain   Running         Running about an hour ago
opig9zrmp261   ws.3   ws:1.0   localhost.localdomain   Running         Running 57 minutes ago
kqdwfrddbaxd   ws.4   ws:1.0   localhost.localdomain   Running         Running about an hour ago
hiz8730pl3je   ws.5   ws:1.0   localhost.localdomain   Running         Running 57 minutes ago

All 5 tasks have been moved to 192.168.1.220:

# docker ps
CONTAINER ID   IMAGE    COMMAND                 CREATED       STATUS       PORTS   NAMES
bc4c457ce769   ws:1.0   "/bin/sh -c python …"   3 hours ago   Up 3 hours           ws.5.hiz8730pl3je7qvo2lv6k554b
c846ac1c4d91   ws:1.0   "/bin/sh -c python …"   3 hours ago   Up 3 hours           ws.3.opig9zrmp2619t4e1o3ntnj2w
214daa36c138   ws:1.0   "/bin/sh -c python …"   3 hours ago   Up 3 hours           ws.1.jrj9ben9vr5c3biuc90xtoffh
17842db9dc47   ws:1.0   "/bin/sh -c python …"   3 hours ago   Up 3 hours           ws.4.kqdwfrddbaxd5z78uo3zsy5sd
47185ba9a4fd   ws:1.0   "/bin/sh -c python …"   3 hours ago   Up 3 hours           ws.2.yrrdn4ntb089t6i66w8xvq8r9

# docker kill bc4c457ce769
bc4c457ce769

After killing the container of the fifth task, wait a few seconds and list the containers again:

# docker ps
CONTAINER ID   IMAGE    COMMAND                 CREATED              STATUS              PORTS   NAMES
416b55e8d174   ws:1.0   "/bin/sh -c python …"   About a minute ago   Up About a minute           ws.5.fvpm334t2zqbj5l50tyx5glr6
c846ac1c4d91   ws:1.0   "/bin/sh -c python …"   3 hours ago          Up 3 hours                  ws.3.opig9zrmp2619t4e1o3ntnj2w
214daa36c138   ws:1.0   "/bin/sh -c python …"   3 hours ago          Up 3 hours                  ws.1.jrj9ben9vr5c3biuc90xtoffh
17842db9dc47   ws:1.0   "/bin/sh -c python …"   3 hours ago          Up 3 hours                  ws.4.kqdwfrddbaxd5z78uo3zsy5sd
47185ba9a4fd   ws:1.0   "/bin/sh -c python …"   3 hours ago          Up 3 hours                  ws.2.yrrdn4ntb089t6i66w8xvq8r9

The fifth task has been started again. The replica count of a Docker service can be adjusted dynamically; for example, when system load is high and more replicas are needed, simply scale the service with docker service scale, as shown below.
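For the record, docker service update offers an equivalent way to change the replica count; it is standard CLI, though only docker service scale is used in this test:

# equivalent to 'docker service scale ws=6'
docker service update --replicas 6 ws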
# docker service scale ws=6
ws scaled to 6
overall progress: 6 out of 6 tasks
1/6: running   [==================================================>]
2/6: running   [==================================================>]
3/6: running   [==================================================>]
4/6: running   [==================================================>]
5/6: running   [==================================================>]
6/6: running   [==================================================>]
verify: Service converged

This adds one more replica. Once created, services start together with the Docker system service: just run systemctl enable docker, and the cluster and services created above will come up at boot, so a machine restart won't leave the programs in a broken state.
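A minimal sketch of that (plain systemd usage; --now additionally starts the service immediately if it is not already running):

# start Docker at boot; swarm state and services are restored automatically
systemctl enable --now docker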
Shared Data Volumes

First create a data volume with the docker volume create command:

# docker volume create ws_volume
ws_volume

Once created, the existing volumes can be listed with the docker volume ls command:

# docker volume ls
DRIVER    VOLUME NAME
local     ws_volume
Details of a volume can be viewed with the docker inspect command:

# docker inspect ws_volume
[
    {
        "CreatedAt": "2021-02-19T14:09:58+08:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/ws_volume/_data",
        "Name": "ws_volume",
        "Options": {},
        "Scope": "local"
    }
]

When creating a service, the --mount parameter mounts the volume into the service's containers:
# docker service create --replicas 2 --name ws -p 80:8000 --mount type=volume,src=ws_volume,dst=/volume ws:1.0
image ws:1.0 could not be accessed on a registry to record
its digest. Each node will access ws:1.0 independently,
possibly leading to different nodes running different
versions of the image.

iiiit9slq9qqwcdwwi0w0mcz5
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged

--mount takes many sub-parameters; write them as key=value pairs separated by commas. In the simplest case, only the three parameters type, src, and dst are needed.
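Two small variations on that command, using only documented --mount keys; the service names ws-ro and ws-bind and the host path are assumptions for illustration:

# mount the same named volume read-only into a second service
docker service create --replicas 2 --name ws-ro -p 8080:8000 \
    --mount type=volume,src=ws_volume,dst=/volume,readonly ws:1.0

# a bind mount works the same way, with type=bind and a host path as src
# (the path must exist on every node that can run a task)
docker service create --replicas 2 --name ws-bind -p 8081:8000 \
    --mount type=bind,src=/srv/ws_data,dst=/volume ws:1.0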