Keepalived
Overview

Keepalived implements high-availability clustering. It was originally designed for LVS, specifically to monitor the state of the server nodes; VRRP support was added later to prevent single points of failure.

VRRP (Virtual Router Redundancy Protocol): VRRP can combine several routers into one virtual router without changing the network layout. By configuring the virtual router's IP address as the default gateway, it provides gateway backup.
Purpose

Provides high availability for services or networks via VRRP. Designed for LVS, it monitors the state of every service node in the cluster: when a node fails, Keepalived detects this and removes the node from the cluster; after the node has been repaired (the repair itself is manual), Keepalived automatically adds it back to the cluster.
Features

Simple to deploy: only a single configuration file is needed. With VRRP added, services and the network can run stably without interruption.
Core functions

Health checking: keeps the real servers behind the load balancer alive using TCP three-way handshakes, ICMP requests, HTTP requests, UDP echo requests, and other methods.

Failover: mainly used on servers configured as master/backup. VRRP maintains a heartbeat between master and backup; when the master fails, the backup takes over the corresponding service, minimizing loss and keeping the service stable.
How it works

Keepalived works at layers 3 to 5 of the TCP/IP five-layer model: the network, transport, and application layers.

Network layer: sends ICMP packets to every node in the cluster and decides from the replies whether a node is running normally; failed nodes are removed from the cluster.
Transport layer: uses TCP port probing and scanning to judge whether a cluster node is healthy; a node whose port does not respond is treated as failed.
Application layer: users can write shell scripts (or playbooks) for Keepalived to run; Keepalived uses the results to decide whether programs or services are healthy.
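As a concrete example of an application-layer check, the sketch below tests whether a local TCP port is listening. The `check_port` helper name is my own; Keepalived itself only cares about the script's exit code (0 = healthy, non-zero = failed):

```shell
#!/bin/bash
# Hypothetical application-layer check: return 0 if a local TCP port is
# listening, non-zero otherwise -- the exit-code contract that keepalived's
# vrrp_script expects.
check_port() {
    # ss -tln lists listening TCP sockets; grep -q exits 0 on a match
    ss -tln | grep -q ":$1 "
}

# A standalone script for keepalived would end with a call such as:
# check_port 80
```
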
Configuring a highly available web cluster with Keepalived
# Install keepalived on both web servers
[root@pubserver cluster]# vim 07-install-keepalived.yml
---
- name: install keepalived
  hosts: webservers
  tasks:
    - name: install keepalived    # install keepalived
      yum:
        name: keepalived
        state: present
[root@pubserver cluster]# ansible-playbook 07-install-keepalived.yml

# Edit the configuration file
[root@web1 ~]# vim /etc/keepalived/keepalived.conf
router_id web1               # unique identifier for this node in the cluster
vrrp_iptables                # automatically add iptables accept rules
... ...
vrrp_instance VI_1 {
    state MASTER             # MASTER for the primary, BACKUP for the standby
    interface eth0           # network interface
    virtual_router_id 51     # virtual router id
    priority 100             # priority
    advert_int 1             # interval between heartbeat advertisements
    authentication {
        auth_type PASS       # authentication type: shared password
        auth_pass 1111       # machines must share this password to join the cluster
    }
    virtual_ipaddress {
        192.168.88.80/24     # VIP address
    }
}
# Delete all remaining lines below this point
[root@web1 ~]# systemctl start keepalived
# Wait a few seconds for the service to fully start; the VIP then becomes visible
[root@web1 ~]# ip a s eth0 | grep 88
    inet 192.168.88.100/24 brd 192.168.88.255 scope global noprefixroute eth0
    inet 192.168.88.80/24 scope global secondary eth0

# Configure web2
[root@web1 ~]# scp /etc/keepalived/keepalived.conf 192.168.88.200:/etc/keepalived/
[root@web2 ~]# vim /etc/keepalived/keepalived.conf
router_id web2               # change the id
vrrp_iptables
... ...
vrrp_instance VI_1 {
    state BACKUP             # change the state
    interface eth0
    virtual_router_id 51
    priority 80              # change the priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.88.80/24
    }
}

# Start the service
[root@web2 ~]# systemctl start keepalived
# Check addresses: the VIP does not appear on web2's eth0
[root@web2 ~]# ip a s | grep 88
    inet 192.168.88.200/24 brd 192.168.88.255 scope global noprefixroute eth0

# Test: accessing 88.80 now returns web1's content
[root@client1 ~]# curl http://192.168.88.80
Welcome from web1

# Simulate a failure on web1
[root@web1 ~]# systemctl stop keepalived.service

# Test: accessing 88.80 now returns web2's content
[root@client1 ~]# curl http://192.168.88.80
Welcome from web2

# On web2, the VIP 192.168.88.80 is now visible
[root@web2 ~]# ip a s | grep 88
    inet 192.168.88.200/24 brd 192.168.88.255 scope global noprefixroute eth0
    inet 192.168.88.80/24 scope global secondary eth0

Monitoring port 80 to switch between master and backup
How it works: when the web cluster is configured for high availability, Keepalived only provides the servers with a VIP; it does not know which services run on them. The MASTER server can watch its own port 80 with a tracking script: as soon as port 80 stops working, the VIP is switched to the BACKUP server. Keepalived requires the script to exit with code 0 for success and 1 for failure.

Implementation
# 1. Create the monitoring script on the MASTER
[root@web1 ~]# vim /etc/keepalived/check_http.sh
#!/bin/bash
ss -tlnp | grep :80 &> /dev/null && exit 0 || exit 1

[root@web1 ~]# chmod +x /etc/keepalived/check_http.sh

# 2. Edit the MASTER configuration to use the script
[root@web1 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
... omitted ...
}

vrrp_script chk_http_port {              # define the monitoring script
    script "/etc/keepalived/check_http.sh"
    interval 2                           # run the script every 2 seconds
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.88.80/24
    }
    track_script {                       # reference the script
        chk_http_port
    }
}

# 3. Restart the service
[root@web1 ~]# systemctl restart keepalived.service

# 4. Test: after stopping nginx on web1, the VIP moves to web2
[root@web1 ~]# systemctl stop nginx.service
[root@web1 ~]# ip a s | grep 88
    inet 192.168.88.100/24 brd 192.168.88.255 scope global noprefixroute eth0
[root@web2 ~]# ip a s | grep 88
    inet 192.168.88.200/24 brd 192.168.88.255 scope global noprefixroute eth0
    inet 192.168.88.80/24 scope global secondary eth0

# 5. Once nginx on the MASTER is repaired, the VIP moves back to web1
[root@web1 ~]# systemctl start nginx.service
[root@web1 ~]# ip a s | grep 88
    inet 192.168.88.100/24 brd 192.168.88.255 scope global noprefixroute eth0
    inet 192.168.88.80/24 scope global secondary eth0
[root@web2 ~]# ip a s | grep 88
    inet 192.168.88.200/24 brd 192.168.88.255 scope global noprefixroute eth0

Configuring a highly available, load-balanced web cluster
Bring up the virtual interface lo:0 (carrying 192.168.88.15) on web1 and web2 with `ifup lo:0`. To make this start automatically at boot, add `ifup lo:0` to /etc/rc.d/rc.local, which holds commands executed at boot time.
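For reference, a sketch of the files this step typically relies on. The exact ifcfg layout is an assumption (RHEL-style network scripts); the ARP kernel parameters are the standard ones LVS-DR real servers need so they neither answer ARP for the VIP nor advertise it:

```
# /etc/sysconfig/network-scripts/ifcfg-lo:0 (assumed RHEL-style layout):
DEVICE=lo:0
IPADDR=192.168.88.15
NETMASK=255.255.255.255
ONBOOT=yes

# ARP kernel parameters for DR-mode real servers, e.g. in /etc/sysctl.conf:
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
```
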
LVS-DR mode

client1    eth0 - 192.168.88.10
lvs1       eth0 - 192.168.88.5
lvs2       eth0 - 192.168.88.6
web1       eth0 - 192.168.88.100
web2       eth0 - 192.168.88.200
Environment preparation

# Stop keepalived on both web servers and uninstall it
[root@pubserver cluster]# vim 08-rm-keepalived.yml
---
- name: remove keepalived
  hosts: webservers
  tasks:
    - name: stop keepalived        # stop the service
      service:
        name: keepalived
        state: stopped
    - name: uninstall keepalived   # uninstall
      yum:
        name: keepalived
        state: absent
[root@pubserver cluster]# ansible-playbook 08-rm-keepalived.yml

# Create a new virtual machine, lvs2
[root@myhost ~]# vm clone lvs2
# Assign lvs2 an IP address
[root@myhost ~]# vm setip lvs2 192.168.88.6
# Connect
[root@myhost ~]# ssh 192.168.88.6

Configuring high availability and load balancing
1. Configure the VIP on lo of both web servers
2. Configure the kernel parameters on both web servers
3. Delete the VIP from eth0 on lvs1, because the VIP will be taken over by keepalived
[root@pubserver cluster]# vim 09-del-lvs1-vip.yml
---
- name: del lvs1 vip
  hosts: lvs1
  tasks:
    - name: rm vip
      lineinfile:               # remove a line from the given file
        path: /etc/sysconfig/network-scripts/ifcfg-eth0
        regexp: IPADDR2         # regular-expression match
        state: absent
      notify: restart system
  handlers:
    - name: restart system
      shell: reboot
[root@pubserver cluster]# ansible-playbook 09-del-lvs1-vip.yml

# Check the result
[root@lvs1 ~]# ip a s eth0 | grep 88
    inet 192.168.88.5/24 brd 192.168.88.255 scope global noprefixroute eth0

4. Delete the LVS rules on lvs1, because the rules will be created by keepalived
[root@lvs1 ~]# ipvsadm -Ln      # view the rules
[root@lvs1 ~]# ipvsadm -D -t 192.168.88.15:80

5. Configure keepalived on the LVS servers
# Add lvs2 to the inventory file
[root@pubserver cluster]# vim inventory
... omitted ...
[lb]
lvs1 ansible_host=192.168.88.5
lvs2 ansible_host=192.168.88.6
... omitted ...

# Install the packages
[root@pubserver cluster]# cp 01-upload-repo.yml 10-upload-repo.yml
[root@pubserver cluster]# vim 10-upload-repo.yml
---
- name: config repos.d
  hosts: lb
  tasks:
    - name: delete repos.d
      file:
        path: /etc/yum.repos.d
        state: absent
    - name: create repos.d
      file:
        path: /etc/yum.repos.d
        state: directory
        mode: '0755'
    - name: upload local88
      copy:
        src: files/local88.repo
        dest: /etc/yum.repos.d/
[root@pubserver cluster]# ansible-playbook 10-upload-repo.yml
[root@pubserver cluster]# vim 11-install-lvs2.yml
---
- name: install lvs keepalived
  hosts: lb
  tasks:
    - name: install pkgs           # install the packages
      yum:
        name: ipvsadm,keepalived
        state: present
[root@pubserver cluster]# ansible-playbook 11-install-lvs2.yml

[root@lvs1 ~]# vim /etc/keepalived/keepalived.conf
router_id lvs1                     # a unique id for this machine
vrrp_iptables                      # automatically add iptables accept rules
... ...
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.88.15              # the VIP, identical to the web servers' VIP
    }
}

# The following section has keepalived create the LVS rules
virtual_server 192.168.88.15 80 {  # declare the virtual server address
    delay_loop 6                   # wait 6 seconds before starting health checks
    lb_algo wrr                    # scheduling algorithm: wrr
    lb_kind DR                     # forwarding mode: DR
    persistence_timeout 50         # same client goes to the same server within 50 seconds
    protocol TCP                   # protocol: TCP

    real_server 192.168.88.100 80 {    # declare a real server
        weight 1                       # weight
        TCP_CHECK {                    # health-check the real server over TCP
            connect_timeout 3          # connection timeout: 3 seconds
            nb_get_retry 3             # 3 failed attempts mark the real server as down
            delay_before_retry 3       # 3 seconds between two checks
        }
    }
    real_server 192.168.88.200 80 {
        weight 2
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
# Delete everything below this point

# Start the keepalived service
[root@lvs1 ~]# systemctl start keepalived

# Verify
[root@lvs1 ~]# ip a s eth0 | grep 88
    inet 192.168.88.5/24 brd 192.168.88.255 scope global noprefixroute eth0
    inet 192.168.88.15/32 scope global eth0
[root@lvs1 ~]# ipvsadm -Ln      # the rules now exist
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.88.15:80 wrr persistent 50
  -> 192.168.88.100:80            Route   1      0          0
  -> 192.168.88.200:80            Route   2      0          0

# Connection test from the client
[root@client1 ~]# for i in {1..6}; do curl http://192.168.88.15/; done
Welcome from web2
Welcome from web2
Welcome from web2
Welcome from web2
Welcome from web2
Welcome from web2

# For efficiency, the same client is sent to the same server within 50 seconds.
# To see the round-robin effect from a single client, comment out the
# corresponding line in the configuration file and restart keepalived.
[root@lvs1 ~]# vim /etc/keepalived/keepalived.conf
... omitted ...
#    persistence_timeout 50
... omitted ...
[root@lvs1 ~]# systemctl restart keepalived.service

# Verify from the client
[root@client1 ~]# for i in {1..6}; do curl http://192.168.88.15/; done
Welcome from web2
Welcome from web1
Welcome from web2
Welcome from web2
Welcome from web1
Welcome from web2

# Configure lvs2
[root@lvs1 ~]# scp /etc/keepalived/keepalived.conf 192.168.88.6:/etc/keepalived/
[root@lvs2 ~]# vim /etc/keepalived/keepalived.conf
router_id lvs2                 # change the id
... ...
    state BACKUP               # change the state
... ...
    priority 80                # change the priority
[root@lvs2 ~]# systemctl start keepalived
[root@lvs2 ~]# ipvsadm -Ln      # the rules now exist
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.88.15:80 wrr
  -> 192.168.88.100:80            Route   1      0          0
  -> 192.168.88.200:80            Route   2      0          0

6. Verification
# 1. Verify real-server health checking
[root@web1 ~]# systemctl stop nginx
[root@lvs1 ~]# ipvsadm -Ln      # web1 disappears from the rules
[root@lvs2 ~]# ipvsadm -Ln
[root@web1 ~]# systemctl start nginx
[root@lvs1 ~]# ipvsadm -Ln      # web1 reappears in the rules
[root@lvs2 ~]# ipvsadm -Ln

# 2. Verify LVS high availability
[root@lvs1 ~]# shutdown -h now   # power off
[root@lvs2 ~]# ip a s | grep 88  # the VIP is now visible
    inet 192.168.88.6/24 brd 192.168.88.255 scope global noprefixroute eth0
    inet 192.168.88.15/32 scope global eth0

# The client can still reach the VIP
[root@client1 ~]# for i in {1..6}; do curl http://192.168.88.15/; done
Welcome from web1
Welcome from web2
Welcome from web2
Welcome from web1
Welcome from web2
Welcome from web2

HAProxy

HAProxy is another scheduler that implements load balancing, suited to web sites under especially heavy load.

HAProxy working modes:
mode http : only suitable for web services
mode tcp : suitable for any kind of service
mode health : performs health checks only; rarely used
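As a sketch of how these modes appear in practice, a minimal haproxy.cfg `listen` block for the topology below might look like this. The block name and the `roundrobin` balance choice are my own assumptions, not taken from the original:

```
listen webcluster
    bind 192.168.88.5:80
    mode http                            # one of: http, tcp, health
    balance roundrobin                   # scheduling algorithm (assumed)
    server web1 192.168.88.100:80 check  # 'check' enables health checking
    server web2 192.168.88.200:80 check
```
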
Initial setup

client1    eth0 - 192.168.88.10
HAProxy    eth0 - 192.168.88.5
web1       eth0 - 192.168.88.100
web2       eth0 - 192.168.88.200
# Power off 192.168.88.6
[root@lvs2 ~]# shutdown -h now

# Configure 192.168.88.5 as the haproxy server
[root@pubserver cluster]# vim 12-config-haproxy.yml
---
- name: config haproxy
  hosts: lvs1
  tasks:
    - name: rm lvs keepalived      # remove the packages
      yum:
        name: ipvsadm,keepalived
        state: absent
    - name: rename hostname        # change the hostname
      shell: hostnamectl set-hostname haproxy1
    - name: install haproxy        # install the package
      yum:
        name: haproxy
        state: present
[root@pubserver cluster]# ansible-playbook 12-config-haproxy.yml

# The web servers no longer need the VIP or the kernel-parameter changes,
# but leaving them in place does haproxy no harm.