HAProxy + Keepalived Load Balancing
Introduction to HAProxy
HAProxy is a reverse-proxy server that supports active/standby failover and virtual hosts. Its configuration is simple and its server health checking is very good: when a proxied backend server fails, HAProxy automatically removes it from rotation and adds it back once it recovers. Version 1.3 introduced frontends and backends: a frontend can match rules against any HTTP request header and then direct the request to the appropriate backend.
HAProxy provides high availability, load balancing, and proxying for TCP- and HTTP-based applications, with virtual-host support; it is a free, fast, and reliable solution. It is particularly suited to heavily loaded web sites that need session persistence or layer-7 processing. On current hardware, HAProxy can easily sustain tens of thousands of concurrent connections, and its operating model makes it simple and safe to integrate into an existing architecture while keeping your web servers off the public network.
Introduction to Keepalived
Keepalived monitors the state of the web servers. If a web server dies or starts malfunctioning, Keepalived detects it and removes the faulty server from the pool; once the server is working normally again, Keepalived automatically adds it back. All of this is fully automatic, with no manual intervention; the only manual work left is repairing the failed web server.
In the previous post I used an nginx reverse proxy plus keepalived to build an active/active highly available pair. HAProxy is also reverse-proxy software, so this experiment builds the same active/active setup with haproxy plus keepalived.
I. Environment
This experiment again uses two haproxy proxy servers with keepalived; for convenience, the backends run Apache to serve pages for testing.
OS: CentOS 7, 64-bit
Package mirror: Aliyun
Two servers (haproxy1, haproxy2) run keepalived and haproxy as the reverse proxies
Two servers (web1, web2) run Apache to serve content
Firewalls and SELinux are disabled on all servers
haproxy1 IP:192.168.163.166
haproxy2 IP:192.168.163.167
web1 IP: 192.168.163.164
web2 IP: 192.168.163.165
Virtual IPs: 192.168.163.100 and 192.168.163.200
The topology is as follows:
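The diagram from the original post is not reproduced here; a rough text sketch of the layout:

                           client
                              |
       VIP 192.168.163.100          VIP 192.168.163.200
      (haproxy1 is master)          (haproxy2 is master)
                \                          /
   haproxy1 192.168.163.166 <-- VRRP --> haproxy2 192.168.163.167
           (haproxy + keepalived on both nodes)
                |                          |
     web1 192.168.163.164          web2 192.168.163.165
     (Apache, img.xhk.com)         (Apache, test.xhk.com)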
II. Installation
First, set up the web servers.
1. Install Apache on the two backend web servers
[root@web1 ~]# yum install -y httpd
[root@web2 ~]# yum install -y httpd
2. Set the ServerName on web1 to img.xhk.com and on web2 to test.xhk.com
ServerName img.xhk.com:80
ServerName test.xhk.com:80
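These directives live in Apache's main configuration file, which on CentOS 7 is typically /etc/httpd/conf/httpd.conf. One way to set them, a sketch assuming the stock file (which ships with a commented-out ServerName line):
[root@web1 ~]# sed -i 's/^#ServerName.*/ServerName img.xhk.com:80/' /etc/httpd/conf/httpd.conf
[root@web2 ~]# sed -i 's/^#ServerName.*/ServerName test.xhk.com:80/' /etc/httpd/conf/httpd.conf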
3. Create a test page on each server
[root@web1 ~]# echo "web1" > /var/www/html/index.html
[root@web2 ~]# echo "web2" > /var/www/html/index.html
4. Start the service
[root@web1 ~]# systemctl start httpd
[root@web2 ~]# systemctl start httpd
5. On the client, edit the hosts file and test the pages
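At this stage the client should resolve both domains straight to the backend servers; a minimal /etc/hosts sketch for that (assuming no DNS is available for these names):
192.168.163.164 img.xhk.com
192.168.163.165 test.xhk.com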
[root@client ~]# curl img.xhk.com
web1
[root@client ~]# curl test.xhk.com
web2
Next, install haproxy and keepalived on the haproxy servers.
1. For convenience, use yum to install on both haproxy servers
[root@haproxy ~]# yum install -y haproxy keepalived
2. Edit the haproxy configuration file first
[root@haproxy ~]# vim /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend main *:80
    option forwardfor except 127.0.0.0/8
    rspadd X-via:\ haproxy/www.xhk.com
    rspidel Server.*
    stats enable                      # add the stats page on this frontend
    stats uri /xhk?stats              # custom URL for the stats page
    stats realm Stats\ Page\ Area     # realm text (spaces escaped)
    stats auth xhk:xhk                # stats username:password
    stats refresh 5s                  # refresh the stats page every 5s
    stats hide-version                # hide version information
    stats admin if TRUE
    maxconn 10000                     # limit to 10000 concurrent connections
    acl url_img  hdr(host) -i img.xhk.com
    acl url_test hdr(host) -i test.xhk.com
    use_backend img_web  if url_img
    use_backend test_web if url_test

backend img_web
    balance roundrobin
    server web1 192.168.163.164:80 check

backend test_web
    balance roundrobin
    server web2 192.168.163.165:80 check
Save and exit.
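Before restarting, the configuration syntax can be validated with haproxy's check mode:
[root@haproxy ~]# haproxy -c -f /etc/haproxy/haproxy.cfg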
3. Restart the haproxy service
[root@haproxy haproxy]# systemctl restart haproxy
[root@haproxy haproxy]# systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/usr/lib/systemd/system/haproxy.service; disabled; vendor preset: disabled)
Active: active (running) since Sat 2017-10-21 22:31:03 EDT; 5s ago
Main PID: 2265 (haproxy-systemd)
CGroup: /system.slice/haproxy.service
├─2265 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
├─2266 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
└─2267 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
Oct 21 22:31:03 haproxy systemd[1]: Started HAProxy Load Balancer.
Oct 21 22:31:03 haproxy systemd[1]: Starting HAProxy Load Balancer...
Oct 21 22:31:03 haproxy haproxy-systemd-wrapper[2265]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /et... -Ds
Hint: Some lines were ellipsized, use -l to show in full.
Visit HAProxy's built-in stats page, using the username and password set in the configuration file.
The URL is the haproxy server's IP address plus the custom stats URI:
http://192.168.163.166/xhk?stats
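The same check can be done from the command line with curl, passing the credentials from the configuration above:
[root@client ~]# curl -u xhk:xhk 'http://192.168.163.166/xhk?stats'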
4. Next, make sure keepalived is installed (it was already installed by the combined yum command above; running this again is harmless)
[root@haproxy haproxy]# yum install -y keepalived
5. Edit the keepalived configuration file
[root@haproxy haproxy]# vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {                # mail recipients
        root@localhost
    }
    notification_email_from keepalived@localhost   # mail sender
    smtp_server 127.0.0.1               # mail server address
    smtp_connect_timeout 30             # connection timeout
    router_id xhk                       # note: router_id identifies this load balancer
    vrrp_mcast_group4 224.0.100.19      # IPv4 multicast address; make sure multicast is enabled
}

vrrp_script check_haproxy {
    script "/root/check_haproxy.sh"     # the check script; this path is wherever you saved it
    interval 2                          # run the check every 2 seconds
#   weight -2                           # subtract 2 from the priority on failure
}

vrrp_instance VI_1 {
    state MASTER                        # either MASTER or BACKUP
    interface ens32                     # network interface
    virtual_router_id 51                # virtual router ID
    priority 100                        # priority
    advert_int 1                        # interval, in seconds, between MASTER/BACKUP advertisements
    authentication {
        auth_type PASS                  # authentication type and password
        auth_pass xhk                   # plain-text password (AH is problematic)
    }
    virtual_ipaddress {
        192.168.163.100 dev ens32       # virtual IP address; more than one may be listed
    }
    track_script {                      # run the check script defined above
        check_haproxy
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface ens32
    virtual_router_id 52
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.163.200 dev ens32
    }
    track_script {
        check_haproxy
    }
}
Save and exit.
6. Write a script that checks the state of the haproxy service; if haproxy is not running, the script stops keepalived so the VIPs fail over to the other node
vim /root/check_haproxy.sh
#!/bin/bash
# If no haproxy process is found, stop keepalived so the VIPs fail over.
ps -C haproxy -o pid
if [ $? != 0 ]; then
    systemctl stop keepalived
fi
Make the script executable
[root@haproxy2 ~]# chmod +x check_haproxy.sh
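As an optional variant (not part of the original setup), the script could first try to restart haproxy once and only stop keepalived if that fails, so a brief haproxy crash does not immediately trigger a failover:
#!/bin/bash
# Hypothetical variant: attempt one restart of haproxy before giving up.
if ! ps -C haproxy -o pid > /dev/null; then
    systemctl restart haproxy
    sleep 2
    if ! ps -C haproxy -o pid > /dev/null; then
        systemctl stop keepalived
    fi
fi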
Perform the same steps on the second haproxy server; the only difference is the keepalived configuration file. The keepalived configuration for the second haproxy server is shown below.
[root@haproxy2 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {                # mail recipients
        root@localhost
    }
    notification_email_from keepalived@localhost   # mail sender
    smtp_server 127.0.0.1               # mail server address
    smtp_connect_timeout 30             # connection timeout
    router_id xhk                       # note: router_id identifies this load balancer
    vrrp_mcast_group4 224.0.100.19      # IPv4 multicast address; make sure multicast is enabled
}

vrrp_script check_haproxy {
    script "/root/check_haproxy.sh"     # the check script
    interval 2                          # run the check every 2 seconds
#   weight -2                           # subtract 2 from the priority on failure
}

vrrp_instance VI_1 {
    state BACKUP                        # either MASTER or BACKUP
    interface ens32                     # network interface
    virtual_router_id 51                # virtual router ID
    priority 99                         # priority
    advert_int 1                        # interval, in seconds, between MASTER/BACKUP advertisements
    authentication {
        auth_type PASS                  # authentication type
        auth_pass xhk                   # plain-text password (AH is problematic)
    }
    virtual_ipaddress {
        192.168.163.100 dev ens32       # virtual IP address; more than one may be listed
    }
    track_script {                      # run the check script defined above
        check_haproxy
    }
}

vrrp_instance VI_2 {
    state MASTER
    interface ens32
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.163.200 dev ens32
    }
    track_script {
        check_haproxy
    }
}
The only real difference is that the states and priorities of the two instances are mirrored, so each node is the master for one VIP and the backup for the other.
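For reference, the mirrored settings on the two nodes are:
            VI_1 (VIP 192.168.163.100)    VI_2 (VIP 192.168.163.200)
haproxy1    MASTER, priority 100          BACKUP, priority 99
haproxy2    BACKUP, priority 99           MASTER, priority 100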
7. Start the keepalived service
[root@haproxy ~]# systemctl start keepalived
[root@haproxy2 ~]# systemctl start keepalived
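The systemctl status output in this post shows both units as "disabled"; if you also want them to start at boot, they can be enabled on both nodes:
[root@haproxy ~]# systemctl enable haproxy keepalived
[root@haproxy2 ~]# systemctl enable haproxy keepalived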
Check the keepalived status
[root@haproxy ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
Active: active (running) since Sat 2017-10-21 22:44:46 EDT; 4min 5s ago
Process: 2278 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 2279 (keepalived)
CGroup: /system.slice/keepalived.service
├─2279 /usr/sbin/keepalived -D
├─2280 /usr/sbin/keepalived -D
└─2281 /usr/sbin/keepalived -D
Oct 21 22:45:04 haproxy Keepalived_vrrp[2281]: Sending gratuitous ARP on ens32 for 192.168.163.100
Oct 21 22:45:06 haproxy Keepalived_vrrp[2281]: Sending gratuitous ARP on ens32 for 192.168.163.200
Oct 21 22:45:06 haproxy Keepalived_vrrp[2281]: VRRP_Instance(VI_2) Sending/queueing gratuitous ARPs on ens32 for ...3.200
Oct 21 22:45:06 haproxy Keepalived_vrrp[2281]: Sending gratuitous ARP on ens32 for 192.168.163.200
Oct 21 22:45:06 haproxy Keepalived_vrrp[2281]: Sending gratuitous ARP on ens32 for 192.168.163.200
Oct 21 22:45:06 haproxy Keepalived_vrrp[2281]: Sending gratuitous ARP on ens32 for 192.168.163.200
Oct 21 22:45:06 haproxy Keepalived_vrrp[2281]: Sending gratuitous ARP on ens32 for 192.168.163.200
Oct 21 22:45:36 haproxy Keepalived_vrrp[2281]: VRRP_Instance(VI_2) Received advert with higher priority 100, ours 99
Oct 21 22:45:36 haproxy Keepalived_vrrp[2281]: VRRP_Instance(VI_2) Entering BACKUP STATE
Oct 21 22:45:36 haproxy Keepalived_vrrp[2281]: VRRP_Instance(VI_2) removing protocol VIPs.
Hint: Some lines were ellipsized, use -l to show in full.
[root@haproxy ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:4a:86:70 brd ff:ff:ff:ff:ff:ff
inet 192.168.163.166/24 brd 192.168.163.255 scope global ens32
valid_lft forever preferred_lft forever
inet 192.168.163.100/32 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe4a:8670/64 scope link
valid_lft forever preferred_lft forever
As expected, because VI_1 on the first server has the higher priority, haproxy1 holds the VIP 192.168.163.100,
while its VI_2 has the lower priority, so haproxy2 holds 192.168.163.200 and this node stays in the BACKUP state for that VIP.
III. Testing
Bind the domain names on the client so that requests to img.xhk.com and test.xhk.com go to 192.168.163.100 and 192.168.163.200.
[root@client ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.163.100 img.xhk.com test.xhk.com
192.168.163.200 img.xhk.com test.xhk.com
Access the pages
[root@client ~]# curl img.xhk.com
web1
[root@client ~]# curl test.xhk.com
web2
Stop the haproxy service on the first server
[root@haproxy ~]# systemctl stop haproxy
[root@haproxy ~]# systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/usr/lib/systemd/system/haproxy.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Oct 21 22:31:03 haproxy systemd[1]: Stopping HAProxy Load Balancer...
Oct 21 22:31:03 haproxy haproxy-systemd-wrapper[2253]: haproxy-systemd-wrapper: SIGTERM -> 2256.
Oct 21 22:31:03 haproxy haproxy-systemd-wrapper[2253]: haproxy-systemd-wrapper: exit, haproxy RC=0
Oct 21 22:31:03 haproxy systemd[1]: Started HAProxy Load Balancer.
Oct 21 22:31:03 haproxy systemd[1]: Starting HAProxy Load Balancer...
Oct 21 22:31:03 haproxy haproxy-systemd-wrapper[2265]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /et... -Ds
Oct 21 23:17:19 haproxy systemd[1]: Stopping HAProxy Load Balancer...
Oct 21 23:17:19 haproxy haproxy-systemd-wrapper[2265]: haproxy-systemd-wrapper: SIGTERM -> 2267.
Oct 21 23:17:19 haproxy haproxy-systemd-wrapper[2265]: haproxy-systemd-wrapper: exit, haproxy RC=0
Oct 21 23:17:19 haproxy systemd[1]: Stopped HAProxy Load Balancer.
Hint: Some lines were ellipsized, use -l to show in full.
[root@haproxy ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Oct 21 22:45:06 haproxy Keepalived_vrrp[2281]: Sending gratuitous ARP on ens32 for 192.168.163.200
Oct 21 22:45:06 haproxy Keepalived_vrrp[2281]: Sending gratuitous ARP on ens32 for 192.168.163.200
Oct 21 22:45:36 haproxy Keepalived_vrrp[2281]: VRRP_Instance(VI_2) Received advert with higher priority 100, ours 99
Oct 21 22:45:36 haproxy Keepalived_vrrp[2281]: VRRP_Instance(VI_2) Entering BACKUP STATE
Oct 21 22:45:36 haproxy Keepalived_vrrp[2281]: VRRP_Instance(VI_2) removing protocol VIPs.
Oct 21 23:17:20 haproxy Keepalived[2279]: Stopping
Oct 21 23:17:20 haproxy systemd[1]: Stopping LVS and VRRP High Availability Monitor...
Oct 21 23:17:20 haproxy Keepalived_vrrp[2281]: VRRP_Instance(VI_1) sent 0 priority
Oct 21 23:17:20 haproxy Keepalived_vrrp[2281]: VRRP_Instance(VI_1) removing protocol VIPs.
Oct 21 23:17:21 haproxy systemd[1]: Stopped LVS and VRRP High Availability Monitor.
After haproxy is stopped on the first server, the keepalived check script detects that haproxy is down and stops the local keepalived process.
Check the keepalived status on the second haproxy server (haproxy2)
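At this point you can also confirm on haproxy1 that the VIPs are gone; only the physical address 192.168.163.166 should remain on the interface:
[root@haproxy ~]# ip addr show ens32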
[root@haproxy2 ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
Active: active (running) since Sat 2017-10-21 22:45:35 EDT; 35min ago
Process: 2451 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 2452 (keepalived)
CGroup: /system.slice/keepalived.service
├─2452 /usr/sbin/keepalived -D
├─2453 /usr/sbin/keepalived -D
└─2454 /usr/sbin/keepalived -D
Oct 21 23:17:21 haproxy2 Keepalived_vrrp[2454]: Sending gratuitous ARP on ens32 for 192.168.163.100
[root@haproxy2 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:d5:fc:a2 brd ff:ff:ff:ff:ff:ff
inet 192.168.163.167/24 brd 192.168.163.255 scope global dynamic ens32
valid_lft 1369sec preferred_lft 1369sec
inet 192.168.163.200/32 scope global ens32
valid_lft forever preferred_lft forever
inet 192.168.163.100/32 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fed5:fca2/64 scope link
valid_lft forever preferred_lft forever
Both VIPs have now floated over to haproxy2.
Access the pages again
[root@client ~]# curl img.xhk.com
web1
[root@client ~]# curl test.xhk.com
web2
The experiment succeeded!
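To fail back afterwards, start haproxy and then keepalived again on the first node; with keepalived's default preemption behaviour the higher-priority MASTER instances should reclaim their VIPs:
[root@haproxy ~]# systemctl start haproxy
[root@haproxy ~]# systemctl start keepalived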
Reposted from: https://blog.51cto.com/xhk777/1975006