LVS+Keepalived

Posted by Mathew on 2017-02-11

Introduction to keepalived

   Keepalived is a routing software written in C. The main goal of this project is to provide simple and robust facilities for loadbalancing and high-availability to Linux system and Linux based infrastructures. Loadbalancing framework relies on well-known and widely used Linux Virtual Server (IPVS) kernel module providing Layer4 loadbalancing. Keepalived implements a set of checkers to dynamically and adaptively maintain and manage loadbalanced server pool according their health. On the other hand high-availability is achieved by VRRP protocol. VRRP is a fundamental brick for router failover. In addition, Keepalived implements a set of hooks to the VRRP finite state machine providing low-level and high-speed protocol interactions. Keepalived frameworks can be used independently or all together to provide resilient infrastructures.

   Keepalived is a project written in C whose main goal is to provide high availability and load balancing for Linux systems and Linux-based infrastructures. The load-balancing framework relies on the well-known **Linux Virtual Server (IPVS)** kernel module, which provides Layer-4 load balancing. Keepalived manages the load-balanced server pool by dynamically checking the health of its members. High availability, on the other hand, is keepalived's implementation of the **VRRP (Virtual Router Redundancy Protocol)**. For details on VRRP, see the H3C technical whitepaper.

Keepalived architecture diagram

How keepalived works

    Keepalived is a VRRP-based solution for making an LVS service highly available.
    For one LVS service, two servers run keepalived: one as the MASTER and one as the BACKUP, and together they present a single virtual IP (VIP) to the outside. The master keeps sending VRRP advertisements to the backup; when the backup stops receiving them, that is, when the master has gone down, it takes over the VIP and continues serving, which is what provides the high availability.
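This master/backup exchange can be observed directly on the wire: VRRP advertisements are sent as IP protocol 112 to the multicast address 224.0.0.18, and the current VIP holder shows up in ip addr. A minimal check, assuming the interface name eno16777736 used in the configuration later in this post:

## watch the master's VRRP advertisements (IP protocol 112, multicast 224.0.0.18)
[root@director1 ~]# tcpdump -i eno16777736 -nn 'ip proto 112'
## see which node currently holds the virtual IPs
[root@director1 ~]# ip addr show dev eno16777736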

Lab topology

Host plan

| Hostname  | IP address   | Role        |
| --------- | ------------ | ----------- |
| Director1 | 192.168.1.21 | Director    |
| Director2 | 192.168.1.22 | Director    |
| Rs1       | 192.168.1.23 | Real Server |
| Rs2       | 192.168.1.24 | Real Server |

VIP1:192.168.1.41
VIP2:192.168.1.42

Note: iptables and SELinux are disabled on all nodes.
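For completeness, one way to do that on CentOS 7 (firewalld being the usual front end for iptables there), run on every node:

[root@director1 ~]# systemctl stop firewalld.service; systemctl disable firewalld.service
[root@director1 ~]# setenforce 0
[root@director1 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    ## persist across reboots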

Configuring Keepalived

Synchronize the clocks on all nodes; I use the ansible automation tool here to run the commands on every host in one batch.

## test connectivity to every host defined in the ansible inventory
[root@director1 ~]# ansible all -m ping
192.168.1.23 | SUCCESS => {
"changed": false,
"ping": "pong"
}
192.168.1.24 | SUCCESS => {
"changed": false,
"ping": "pong"
}
192.168.1.22 | SUCCESS => {
"changed": false,
"ping": "pong"
}
192.168.1.21 | SUCCESS => {
"changed": false,
"ping": "pong"
}
[root@director1 ~]# ansible all -m command -a "ntpdate 0.centos.pool.ntp.org"
192.168.1.22 | SUCCESS | rc=0 >>
25 Dec 15:54:02 ntpdate[2805]: step time server 202.112.29.82 offset -51.455956 sec

192.168.1.24 | SUCCESS | rc=0 >>
25 Dec 15:54:02 ntpdate[2449]: step time server 202.112.29.82 offset -86450.516769 sec

192.168.1.21 | SUCCESS | rc=0 >>
25 Dec 15:54:02 ntpdate[2894]: step time server 202.112.29.82 offset -51.292752 sec

192.168.1.23 | SUCCESS | rc=0 >>
25 Dec 15:54:03 ntpdate[2425]: step time server 202.112.29.82 offset 86408.418292 sec
## install keepalived on Director1 and Director2
[root@director1 ~]# ansible Director -m yum -a "name=keepalived state=latest"
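Since the inventory already has a Director group, the keepalived configuration written in the next sections can also be distributed and the service restarted in one batch. This is only a sketch using ansible's copy and service modules; the local file names keepalived-d1.conf and keepalived-d2.conf are placeholders, not files created earlier in this post:

[root@director1 ~]# ansible 192.168.1.21 -m copy -a "src=keepalived-d1.conf dest=/etc/keepalived/keepalived.conf"
[root@director1 ~]# ansible 192.168.1.22 -m copy -a "src=keepalived-d2.conf dest=/etc/keepalived/keepalived.conf"
[root@director1 ~]# ansible Director -m service -a "name=keepalived state=restarted"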

#### Make the two Directors' VIPs master and backup for each other (dual-master)

Director1 configuration

[root@director1 ~]# yum install ipvsadm httpd -y 
[root@director1 ~]# echo "<h1>Sorry,this is wrong page of D1</h1>" > /var/www/html/index.html
[root@director1 ~]# systemctl start httpd.service

Director1 keepalived configuration

[root@director1 ~]# vim /etc/keepalived/keepalived.conf  
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_script chk_mt {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -2
}

vrrp_instance VI_1 {
    state MASTER
    interface eno16777736
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass Daniel
    }
    virtual_ipaddress {
        192.168.1.41/32
    }

    track_script {
        chk_mt
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface eno16777736
    virtual_router_id 61
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass Daniel
    }
    virtual_ipaddress {
        192.168.1.42/32
    }

    track_script {
        chk_mt
    }
}

virtual_server 192.168.1.41 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    nat_mask 255.255.255.255
    protocol TCP
    sorry_server 127.0.0.1 80

    real_server 192.168.1.23 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.1.24 80 {
        weight 2
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 192.168.1.42 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    nat_mask 255.255.255.255
    protocol TCP
    sorry_server 127.0.0.1 80

    real_server 192.168.1.23 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.1.24 80 {
        weight 2
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
## the keepalived configuration above is the complete dual-master configuration for Director1
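The vrrp_script chk_mt defined above only checks whether the file /etc/keepalived/down exists; when it does, the check fails and the node's priority drops by 2 (from 100 to 98, below the peer's 99), so a failover can be forced by hand without stopping keepalived:

## force this director to give up its MASTER role
[root@director1 ~]# touch /etc/keepalived/down
## remove the file to let it preempt the VIP again
[root@director1 ~]# rm -f /etc/keepalived/down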

Director2 configuration

[root@director2 ~]# yum install ipvsadm httpd -y 
[root@director2 ~]# echo "<h1>Sorry,this is wrong page of D2</h1>" > /var/www/html/index.html
[root@director2 ~]# systemctl start httpd.service

Director2 keepalived configuration

! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_script chk_mt {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eno16777736
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass Daniel
    }
    virtual_ipaddress {
        192.168.1.41/32
    }

    track_script {
        chk_mt
    }
}

vrrp_instance VI_2 {
    state MASTER
    interface eno16777736
    virtual_router_id 61
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass Daniel
    }
    virtual_ipaddress {
        192.168.1.42/32
    }

    track_script {
        chk_mt
    }
}

virtual_server 192.168.1.41 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    nat_mask 255.255.255.255
    protocol TCP
    sorry_server 127.0.0.1 80

    real_server 192.168.1.23 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.1.24 80 {
        weight 2
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 192.168.1.42 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    nat_mask 255.255.255.255
    protocol TCP
    sorry_server 127.0.0.1 80

    real_server 192.168.1.23 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.1.24 80 {
        weight 2
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
## the keepalived configuration above is the complete dual-master configuration for Director2

Start the keepalived service on both Director1 and Director2

[root@director1 ~]# systemctl start keepalived.service
[root@director2 ~]# systemctl start keepalived.service

Testing

By default, director1 holds VIP1 (192.168.1.41) and director2 holds VIP2 (192.168.1.42).

After the keepalived service on director1 is stopped, its VIP automatically fails over to director2.

After keepalived on director1 is started again, the VIP moves back to director1.
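These three states can also be checked from the shell (eno16777736 is the interface from the keepalived configuration):

## each director should hold its own VIP at first
[root@director1 ~]# ip addr show dev eno16777736 | grep 192.168.1.4
[root@director2 ~]# ip addr show dev eno16777736 | grep 192.168.1.4
## stop keepalived on director1: both VIPs should now appear on director2
[root@director1 ~]# systemctl stop keepalived.service
[root@director2 ~]# ip addr show dev eno16777736 | grep 192.168.1.4
## start it again: 192.168.1.41 should move back to director1
[root@director1 ~]# systemctl start keepalived.service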

Configuring LVS

This experiment uses the DR model. Because keepalived generates the rules itself through the ipvs interface, ipvsadm is not needed for configuration, but the ipvsadm command is still used to inspect the generated ipvs rules.

The real server definitions are already part of the keepalived configuration above, so after restarting the keepalived service we can look at the ipvs rules it generated.
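The restart itself is nothing special, the same systemctl unit as before on both directors:

[root@director1 ~]# systemctl restart keepalived.service
[root@director2 ~]# systemctl restart keepalived.service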

[root@director1 keepalived]# ipvsadm -L -n 
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.1.41:80 wrr
-> 192.168.1.23:80 Route 1 0 0
-> 192.168.1.24:80 Route 2 0 0
TCP 192.168.1.42:80 wrr
-> 192.168.1.23:80 Route 1 0 0
-> 192.168.1.24:80 Route 2 0 0
[root@director1 keepalived]#
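While running the tests below, the scheduler can also be watched live; -c lists the current connection entries and --stats shows per-service packet counters:

[root@director1 keepalived]# ipvsadm -L -n -c
[root@director1 keepalived]# ipvsadm -L -n --stats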

RS1 configuration

[root@rs1 ~]# echo "<h1>This is Rs1.Service</h1>" > /var/www/html/index.html
[root@rs1 ~]# cat /var/www/html/index.html
<h1>This is Rs1.Service</h1>
[root@rs1 ~]# systemctl start httpd.service
[root@rs1 ~]# cat setup.sh    # script that sets the ARP kernel parameters and binds the VIPs
#!/bin/bash
case $1 in
start)
    # do not answer or announce ARP for addresses bound to lo
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    # bind both VIPs to loopback aliases so the RS accepts DR traffic
    ifconfig lo:0 192.168.1.41/32 broadcast 192.168.1.41 up
    ifconfig lo:1 192.168.1.42/32 broadcast 192.168.1.42 up
    ;;
stop)
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig lo:0 192.168.1.41/32 broadcast 192.168.1.41 down
    ifconfig lo:1 192.168.1.42/32 broadcast 192.168.1.42 down
    ;;
esac

[root@rs1 ~]# bash setup.sh start
[root@rs1 ~]# scp setup.sh 192.168.1.24:/root    # copy the script to rs2
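A quick sanity check on the RS after running the script: both VIPs should be bound to lo, the ARP sysctls should be set, and the local httpd should answer:

[root@rs1 ~]# ip addr show dev lo | grep 192.168.1.4
[root@rs1 ~]# sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce
[root@rs1 ~]# curl http://127.0.0.1/    ## should return <h1>This is Rs1.Service</h1>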

RS2 configuration

[root@rs2 ~]# echo "<h1>This is Rs2.Service</h1>" > /var/www/html/index.html
[root@rs2 ~]# cat /var/www/html/index.html
<h1>This is Rs2.Service</h1>
[root@rs2 ~]# systemctl start httpd.service
[root@rs2 ~]# bash setup.sh start    # run the script

Testing LVS

Test Director1 and Director2
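A basic client-side test is to curl both VIPs from another host on the 192.168.1.0/24 network (not one of the directors or RSs); with wrr and weights 1:2, roughly two out of every three responses should come from Rs2. The client prompt below is generic:

[client]$ for i in {1..6}; do curl -s http://192.168.1.41/; done
[client]$ for i in {1..6}; do curl -s http://192.168.1.42/; done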

When the web service on rs1 is stopped, the health check notices and rs1 is removed from the pool automatically.
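This can be reproduced by stopping httpd on rs1 and listing the ipvs rules again; once the HTTP_GET check fails, rs1 drops out of both virtual servers:

[root@rs1 ~]# systemctl stop httpd.service
## a moment later, 192.168.1.23 is gone from the pool
[root@director1 ~]# ipvsadm -L -n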

When the web services on both rs1 and rs2 are stopped, the sorry server is enabled automatically.
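And when rs2 is stopped as well, the sorry_server 127.0.0.1 80 defined in the virtual_server blocks takes over, so a client hitting a VIP gets the "Sorry" page served by whichever director currently holds that VIP:

[root@rs2 ~]# systemctl stop httpd.service
[root@director1 ~]# ipvsadm -L -n          ## only 127.0.0.1:80 should remain under each virtual server
[client]$ curl -s http://192.168.1.41/     ## returns the director's "Sorry,this is wrong page" content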

With that, the LVS + Keepalived dual-master model is up and running. This experiment took a whole day and I hit quite a few pitfalls along the way, but it works in the end. Writing it down here so I can look it up again when I forget.