
LVS TUN

2018-06-22
wilmosfang
lvs
Original article: http://soft.dog/2018/06/22/lvs-tun/

Preface

LVS (Linux Virtual Server) is an open-source load balancing (LB) package for Linux:

The Linux Virtual Server is a highly scalable and highly available server built on a cluster of real servers, with the load balancer running on the Linux operating system. The architecture of the server cluster is fully transparent to end users, and the users interact as if it were a single high-performance virtual server

The main goal of LVS is to build a high-performance, highly available, scalable, and reliable Linux cluster service:

Build a high-performance and highly available server for Linux using clustering technology, which provides good scalability, reliability and serviceability.

LVS has three main forwarding modes:

  • NAT
  • TUN
  • DR

Their main differences are summarized in this table:

                    VS/NAT          VS/TUN          VS/DR
  server            any             tunneling       non-arp device
  server network    private         LAN/WAN         LAN
  server number     low (10~20)     high            high
  server gateway    load balancer   own router      own router

For a detailed comparison, see How virtual server works.

This post demonstrates how to configure LVS in TUN mode.

See also: 负载均衡LVS基本介绍 (an introduction to LVS load balancing) and Virtual Server via NAT.

Tip: the IPVS version used here is 1.2.1


Procedure

System environment

DS

[root@ds1 ~]# hostnamectl 
   Static hostname: ds1
         Icon name: computer-vm
           Chassis: vm
        Machine ID: fce1d1d9ca0345dca5e300f4ee8f6b0a
           Boot ID: 04693ae27f67491ea0d1f23fd456904f
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-862.2.3.el7.x86_64
      Architecture: x86-64
[root@ds1 ~]# ip a 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:c9:c7:04 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 86231sec preferred_lft 86231sec
    inet6 fe80::5054:ff:fec9:c704/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:fc:bf:30 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.181/24 brd 192.168.1.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fefc:bf30/64 scope link 
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:b3:ae:0b brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.181/24 brd 192.168.56.255 scope global noprefixroute eth2
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feb3:ae0b/64 scope link 
       valid_lft forever preferred_lft forever
[root@ds1 ~]# cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core) 
[root@ds1 ~]# 

RS

[vagrant@rs1 ~]$ hostnamectl 
   Static hostname: rs1
         Icon name: computer-vm
           Chassis: vm
        Machine ID: f7f1e12e7c2b48cd90739bb41d24c97d
           Boot ID: f50c7cf156a24573a9be96f998dc8d51
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-862.2.3.el7.x86_64
      Architecture: x86-64
[vagrant@rs1 ~]$ ip a 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:c9:c7:04 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 86240sec preferred_lft 86240sec
    inet6 fe80::5054:ff:fec9:c704/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:ab:52:cb brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.183/24 brd 192.168.56.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feab:52cb/64 scope link 
       valid_lft forever preferred_lft forever
[vagrant@rs1 ~]$ cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core) 
[vagrant@rs1 ~]$ 

Network environment

ds1:
    192.168.1.181
    192.168.56.181
rs1:
    192.168.56.183
rs2:
    192.168.56.184

The VIP is 192.168.56.185

Topology

client--->ds1
      `<--rs1
      `<--rs2

Terminology

  • DS: Director Server, the front-end load balancer node
  • RS: Real Server, a back-end server that does the actual work
  • VIP: Virtual IP, the externally facing IP address that client requests target
  • DIP: Director Server IP, used mainly to communicate with the internal hosts
  • RIP: Real Server IP, the IP address of a back-end server
  • CIP: Client IP, the IP address of the visiting client
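A quick sketch instantiating these terms with this lab's addresses (the CIP is whichever host runs curl and is not fixed here):

```shell
# The terminology above, mapped to this lab's addresses.
VIP=192.168.56.185          # virtual IP the clients target
DIP=192.168.56.181          # director's IP on the RS-facing network
RIP1=192.168.56.183         # real server 1 (rs1)
RIP2=192.168.56.184         # real server 2 (rs2)
echo "client -> $VIP (DS at $DIP) -> $RIP1 / $RIP2"
```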

Install nginx

[root@rs1 ~]# rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
Retrieving http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
warning: /var/tmp/rpm-tmp.dehFJg: Header V4 RSA/SHA1 Signature, key ID 7bd9bf62: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:nginx-release-centos-7-0.el7.ngx ################################# [100%]
[root@rs1 ~]# yum list all | grep -i nginx
nginx-release-centos.noarch                 7-0.el7.ngx                installed
nginx.x86_64                                1:1.14.0-1.el7_4.ngx       nginx    
nginx-debug.x86_64                          1:1.8.0-1.el7.ngx          nginx    
nginx-debuginfo.x86_64                      1:1.14.0-1.el7_4.ngx       nginx    
nginx-module-geoip.x86_64                   1:1.14.0-1.el7_4.ngx       nginx    
nginx-module-geoip-debuginfo.x86_64         1:1.14.0-1.el7_4.ngx       nginx    
nginx-module-image-filter.x86_64            1:1.14.0-1.el7_4.ngx       nginx    
nginx-module-image-filter-debuginfo.x86_64  1:1.14.0-1.el7_4.ngx       nginx    
nginx-module-njs.x86_64                     1:1.14.0.0.2.0-1.el7_4.ngx nginx    
nginx-module-njs-debuginfo.x86_64           1:1.14.0.0.2.0-1.el7_4.ngx nginx    
nginx-module-perl.x86_64                    1:1.14.0-1.el7_4.ngx       nginx    
nginx-module-perl-debuginfo.x86_64          1:1.14.0-1.el7_4.ngx       nginx    
nginx-module-xslt.x86_64                    1:1.14.0-1.el7_4.ngx       nginx    
nginx-module-xslt-debuginfo.x86_64          1:1.14.0-1.el7_4.ngx       nginx    
nginx-nr-agent.noarch                       2.0.0-12.el7.ngx           nginx    
pcp-pmda-nginx.x86_64                       3.12.2-5.el7               base     
[root@rs1 ~]# yum install -y nginx.x86_64
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.vinahost.vn
 * extras: mirrors.vinahost.vn
 * updates: centos.ipserverone.com
Resolving Dependencies
--> Running transaction check
---> Package nginx.x86_64 1:1.14.0-1.el7_4.ngx will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package       Arch           Version                       Repository     Size
================================================================================
Installing:
 nginx         x86_64         1:1.14.0-1.el7_4.ngx          nginx         750 k

Transaction Summary
================================================================================
Install  1 Package

Total download size: 750 k
Installed size: 2.6 M
Downloading packages:
nginx-1.14.0-1.el7_4.ngx.x86_64.rpm                        | 750 kB   00:02     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : 1:nginx-1.14.0-1.el7_4.ngx.x86_64                            1/1 
----------------------------------------------------------------------

Thanks for using nginx!

Please find the official documentation for nginx here:
* http://nginx.org/en/docs/

Please subscribe to nginx-announce mailing list to get
the most important news about nginx:
* http://nginx.org/en/support.html

Commercial subscriptions for nginx are available on:
* http://nginx.com/products/

----------------------------------------------------------------------
  Verifying  : 1:nginx-1.14.0-1.el7_4.ngx.x86_64                            1/1 

Installed:
  nginx.x86_64 1:1.14.0-1.el7_4.ngx                                             

Complete!
[root@rs1 ~]# systemctl start nginx.service
[root@rs1 ~]# curl localhost:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@rs1 ~]# 
[root@rs1 conf.d]# echo 'rs1' > /usr/share/nginx/html/index.html 
[root@rs1 conf.d]# curl localhost:80
rs1
[root@rs1 conf.d]# 

Run the same steps on RS2:

[root@rs2 ~]# echo 'rs2' > /usr/share/nginx/html/index.html 
[root@rs2 ~]# curl localhost:80
rs2
[root@rs2 ~]# 
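The per-RS setup above can be condensed into a short provisioning sketch (same nginx.org repo RPM as above; the page content is simply the hostname):

```shell
# Install nginx from the nginx.org repo and serve the hostname,
# as done interactively above. Run as root on each RS.
rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
yum install -y nginx
systemctl start nginx.service
hostname > /usr/share/nginx/html/index.html
curl -s localhost:80    # should print the hostname
```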

Install ipvsadm

[root@ds1 ~]# yum install ipvsadm
Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: mirrors.vhost.vn
 * extras: mirrors.vhost.vn
 * updates: mirrors.vhost.vn
base                                                     | 3.6 kB     00:00     
extras                                                   | 3.4 kB     00:00     
updates                                                  | 3.4 kB     00:00     
(1/4): extras/7/x86_64/primary_db                          | 147 kB   00:00     
(2/4): base/7/x86_64/group_gz                              | 166 kB   00:00     
(3/4): updates/7/x86_64/primary_db                         | 2.0 MB   00:02     
(4/4): base/7/x86_64/primary_db                            | 5.9 MB   00:04     
Resolving Dependencies
--> Running transaction check
---> Package ipvsadm.x86_64 0:1.27-7.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package           Arch             Version                Repository      Size
================================================================================
Installing:
 ipvsadm           x86_64           1.27-7.el7             base            45 k

Transaction Summary
================================================================================
Install  1 Package

Total download size: 45 k
Installed size: 75 k
Is this ok [y/d/N]: y
Downloading packages:
warning: /var/cache/yum/x86_64/7/base/packages/ipvsadm-1.27-7.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Public key for ipvsadm-1.27-7.el7.x86_64.rpm is not installed
ipvsadm-1.27-7.el7.x86_64.rpm                              |  45 kB   00:00     
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Importing GPG key 0xF4A80EB5:
 Userid     : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"
 Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
 Package    : centos-release-7-5.1804.el7.centos.x86_64 (@anaconda)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : ipvsadm-1.27-7.el7.x86_64                                    1/1 
  Verifying  : ipvsadm-1.27-7.el7.x86_64                                    1/1 

Installed:
  ipvsadm.x86_64 0:1.27-7.el7                                                   

Complete!
[root@ds1 ~]# 
[root@ds1 ~]# ipvsadm --version
ipvsadm v1.27 2008/5/15 (compiled with popt and IPVS v1.2.1)
[root@ds1 ~]# 

The package installs very little:

[root@ds1 ~]# rpm -ql ipvsadm
/etc/sysconfig/ipvsadm-config
/usr/lib/systemd/system/ipvsadm.service
/usr/sbin/ipvsadm
/usr/sbin/ipvsadm-restore
/usr/sbin/ipvsadm-save
/usr/share/doc/ipvsadm-1.27
/usr/share/doc/ipvsadm-1.27/README
/usr/share/man/man8/ipvsadm-restore.8.gz
/usr/share/man/man8/ipvsadm-save.8.gz
/usr/share/man/man8/ipvsadm.8.gz
[root@ds1 ~]# 

View the built-in help:

[root@ds1 ~]# ipvsadm -h
ipvsadm v1.27 2008/5/15 (compiled with popt and IPVS v1.2.1)
Usage:
  ipvsadm -A|E -t|u|f service-address [-s scheduler] [-p [timeout]] [-M netmask] [--pe persistence_engine] [-b sched-flags]
  ipvsadm -D -t|u|f service-address
  ipvsadm -C
  ipvsadm -R
  ipvsadm -S [-n]
  ipvsadm -a|e -t|u|f service-address -r server-address [options]
  ipvsadm -d -t|u|f service-address -r server-address
  ipvsadm -L|l [options]
  ipvsadm -Z [-t|u|f service-address]
  ipvsadm --set tcp tcpfin udp
  ipvsadm --start-daemon state [--mcast-interface interface] [--syncid sid]
  ipvsadm --stop-daemon state
  ipvsadm -h

Commands:
Either long or short options are allowed.
  --add-service     -A        add virtual service with options
  --edit-service    -E        edit virtual service with options
  --delete-service  -D        delete virtual service
  --clear           -C        clear the whole table
  --restore         -R        restore rules from stdin
  --save            -S        save rules to stdout
  --add-server      -a        add real server with options
  --edit-server     -e        edit real server with options
  --delete-server   -d        delete real server
  --list            -L|-l     list the table
  --zero            -Z        zero counters in a service or all services
  --set tcp tcpfin udp        set connection timeout values
  --start-daemon              start connection sync daemon
  --stop-daemon               stop connection sync daemon
  --help            -h        display this help message

Options:
  --tcp-service  -t service-address   service-address is host[:port]
  --udp-service  -u service-address   service-address is host[:port]
  --fwmark-service  -f fwmark         fwmark is an integer greater than zero
  --ipv6         -6                   fwmark entry uses IPv6
  --scheduler    -s scheduler         one of rr|wrr|lc|wlc|lblc|lblcr|dh|sh|sed|nq,
                                      the default scheduler is wlc.
  --pe            engine              alternate persistence engine may be sip,
                                      not set by default.
  --persistent   -p [timeout]         persistent service
  --netmask      -M netmask           persistent granularity mask
  --real-server  -r server-address    server-address is host (and port)
  --gatewaying   -g                   gatewaying (direct routing) (default)
  --ipip         -i                   ipip encapsulation (tunneling)
  --masquerading -m                   masquerading (NAT)
  --weight       -w weight            capacity of real server
  --u-threshold  -x uthreshold        upper threshold of connections
  --l-threshold  -y lthreshold        lower threshold of connections
  --mcast-interface interface         multicast interface for connection sync
  --syncid sid                        syncid for connection sync (default=255)
  --connection   -c                   output of current IPVS connections
  --timeout                           output of timeout (tcp tcpfin udp)
  --daemon                            output of daemon information
  --stats                             output of statistics information
  --rate                              output of rate information
  --exact                             expand numbers (display exact values)
  --thresholds                        output of thresholds information
  --persistent-conn                   output of persistent connection info
  --nosort                            disable sorting output of service/server entries
  --sort                              does nothing, for backwards compatibility
  --ops          -o                   one-packet scheduling
  --numeric      -n                   numeric output of addresses and ports
  --sched-flags  -b flags             scheduler flags (comma-separated)
[root@ds1 ~]# 

Configure the VIP on the RSes

Bind the VIP to the tunnel interface on RS1, then add a host route for it:

[root@rs1 ~]# ifconfig tunl0 192.168.56.185 broadcast 192.168.56.185 netmask 255.255.255.255 up
[root@rs1 ~]# route add -host 192.168.56.185 tunl0
[root@rs1 ~]# ip a 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:c9:c7:04 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 85486sec preferred_lft 85486sec
    inet6 fe80::5054:ff:fec9:c704/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:ab:52:cb brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.183/24 brd 192.168.56.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feab:52cb/64 scope link 
       valid_lft forever preferred_lft forever
4: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 192.168.56.185/32 brd 192.168.56.185 scope global tunl0
       valid_lft forever preferred_lft forever
[root@rs1 ~]# ip route 
default via 10.0.2.2 dev eth0 proto dhcp metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.183 metric 101 
192.168.56.185 dev tunl0 scope link 
[root@rs1 ~]# 

Do the same on RS2:

[root@rs2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:c9:c7:04 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 85459sec preferred_lft 85459sec
    inet6 fe80::5054:ff:fec9:c704/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:60:01:2d brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.184/24 brd 192.168.56.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe60:12d/64 scope link 
       valid_lft forever preferred_lft forever
[root@rs2 ~]# ifconfig tunl0 192.168.56.185 broadcast 192.168.56.185 netmask 255.255.255.255 up
[root@rs2 ~]# route add -host 192.168.56.185 tunl0
[root@rs2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:c9:c7:04 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 85441sec preferred_lft 85441sec
    inet6 fe80::5054:ff:fec9:c704/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:60:01:2d brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.184/24 brd 192.168.56.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe60:12d/64 scope link 
       valid_lft forever preferred_lft forever
4: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 192.168.56.185/32 brd 192.168.56.185 scope global tunl0
       valid_lft forever preferred_lft forever
[root@rs2 ~]# ip route
default via 10.0.2.2 dev eth0 proto dhcp metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.184 metric 101 
192.168.56.185 dev tunl0 scope link 
[root@rs2 ~]# 
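ifconfig and route come from the legacy net-tools package; an equivalent sketch using iproute2 (the same ip tool used above for inspection) would be:

```shell
# iproute2 equivalent of the ifconfig/route pair above (run as root,
# assuming the ipip module is available so tunl0 exists).
ip link set tunl0 up
ip addr add 192.168.56.185/32 brd 192.168.56.185 dev tunl0
# Adding the /32 address already implies this host route on tunl0;
# "replace" makes the command idempotent.
ip route replace 192.168.56.185 dev tunl0
```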

Tune RS kernel parameters

On RS1:

[root@rs1 ~]# cat /proc/sys/net/ipv4/conf/tunl0/arp_ignore
0
[root@rs1 ~]# cat /proc/sys/net/ipv4/conf/tunl0/arp_announce
0
[root@rs1 ~]# cat /proc/sys/net/ipv4/conf/all/arp_ignore
0
[root@rs1 ~]# cat /proc/sys/net/ipv4/conf/all/arp_announce
0
[root@rs1 ~]# cat /proc/sys/net/ipv4/conf/tunl0/rp_filter
1
[root@rs1 ~]# cat /proc/sys/net/ipv4/conf/all/rp_filter
1
[root@rs1 ~]# echo 1 > /proc/sys/net/ipv4/conf/tunl0/arp_ignore
[root@rs1 ~]# echo 2 > /proc/sys/net/ipv4/conf/tunl0/arp_announce
[root@rs1 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@rs1 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
[root@rs1 ~]# echo 0 > /proc/sys/net/ipv4/conf/tunl0/rp_filter
[root@rs1 ~]# echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
[root@rs1 ~]# cat /proc/sys/net/ipv4/conf/tunl0/arp_ignore
1
[root@rs1 ~]# cat /proc/sys/net/ipv4/conf/tunl0/arp_announce
2
[root@rs1 ~]# cat /proc/sys/net/ipv4/conf/all/arp_ignore
1
[root@rs1 ~]# cat /proc/sys/net/ipv4/conf/all/arp_announce
2
[root@rs1 ~]# cat /proc/sys/net/ipv4/conf/tunl0/rp_filter
0
[root@rs1 ~]# cat /proc/sys/net/ipv4/conf/all/rp_filter
0
[root@rs1 ~]# 

Repeat the same steps on RS2:

[root@rs2 ~]# cat /proc/sys/net/ipv4/conf/tunl0/arp_ignore
0
[root@rs2 ~]# cat /proc/sys/net/ipv4/conf/tunl0/arp_announce
0
[root@rs2 ~]# cat /proc/sys/net/ipv4/conf/all/arp_ignore
0
[root@rs2 ~]# cat /proc/sys/net/ipv4/conf/all/arp_announce
0
[root@rs2 ~]# cat /proc/sys/net/ipv4/conf/tunl0/rp_filter
1
[root@rs2 ~]# cat /proc/sys/net/ipv4/conf/all/rp_filter
1
[root@rs2 ~]# echo 1 > /proc/sys/net/ipv4/conf/tunl0/arp_ignore
[root@rs2 ~]# echo 2 > /proc/sys/net/ipv4/conf/tunl0/arp_announce
[root@rs2 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@rs2 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
[root@rs2 ~]# echo 0 > /proc/sys/net/ipv4/conf/tunl0/rp_filter
[root@rs2 ~]# echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
[root@rs2 ~]# cat /proc/sys/net/ipv4/conf/tunl0/arp_ignore
1
[root@rs2 ~]# cat /proc/sys/net/ipv4/conf/tunl0/arp_announce
2
[root@rs2 ~]# cat /proc/sys/net/ipv4/conf/all/arp_ignore
1
[root@rs2 ~]# cat /proc/sys/net/ipv4/conf/all/arp_announce
2
[root@rs2 ~]# cat /proc/sys/net/ipv4/conf/tunl0/rp_filter
0
[root@rs2 ~]# cat /proc/sys/net/ipv4/conf/all/rp_filter
0
[root@rs2 ~]# 

The main purpose of this step is to keep the RSes silent: they must neither answer nor advertise ARP for the VIP, and they must not drop the decapsulated packets, which arrive on tunl0 carrying the client's source address.

The kernel parameters involved are:

  • arp_ignore defines the response level for incoming ARP requests
  • arp_announce defines the announce level used when advertising local addresses on the network
  • rp_filter controls reverse-path filtering, i.e. whether the kernel validates a packet's source address against its routing table
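Values written under /proc do not survive a reboot; one way to make these settings persistent is a sysctl drop-in file, e.g. (the file name 90-lvs-tun.conf is an arbitrary choice):

```shell
# Persist the ARP/rp_filter settings across reboots (run as root on
# each RS; assumes tunl0 already exists when sysctl runs at boot).
cat > /etc/sysctl.d/90-lvs-tun.conf <<'EOF'
net.ipv4.conf.tunl0.arp_ignore = 1
net.ipv4.conf.tunl0.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.tunl0.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
EOF
sysctl --system
```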

Bind the VIP on the DS

[root@ds1 ~]# ifconfig tunl0 192.168.56.185 broadcast 192.168.56.185 netmask 255.255.255.255 up
[root@ds1 ~]# route add -host 192.168.56.185 tunl0
[root@ds1 ~]# ip a 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:c9:c7:04 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 84705sec preferred_lft 84705sec
    inet6 fe80::5054:ff:fec9:c704/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:fc:bf:30 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.181/24 brd 192.168.1.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fefc:bf30/64 scope link 
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:b3:ae:0b brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.181/24 brd 192.168.56.255 scope global noprefixroute eth2
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feb3:ae0b/64 scope link 
       valid_lft forever preferred_lft forever
5: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 192.168.56.185/32 brd 192.168.56.185 scope global tunl0
       valid_lft forever preferred_lft forever
[root@ds1 ~]# ip route 
default via 10.0.2.2 dev eth0 proto dhcp metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
192.168.1.0/24 dev eth1 proto kernel scope link src 192.168.1.181 metric 101 
192.168.56.0/24 dev eth2 proto kernel scope link src 192.168.56.181 metric 102 
192.168.56.185 dev tunl0 scope link 
[root@ds1 ~]# 

LVS TUN configuration

[root@ds1 ~]# ipvsadm -L -n 
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@ds1 ~]# ipvsadm -A -t 192.168.56.185:80 -s wrr 
[root@ds1 ~]# ipvsadm -a -t 192.168.56.185:80 -r 192.168.56.183:80 -i -w 1
[root@ds1 ~]# ipvsadm -a -t 192.168.56.185:80 -r 192.168.56.184:80 -i -w 1
[root@ds1 ~]# ipvsadm -L -n 
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.56.185:80 wrr
  -> 192.168.56.183:80            Tunnel  1      0          0         
  -> 192.168.56.184:80            Tunnel  1      0          0         
[root@ds1 ~]#
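These rules live only in the kernel and vanish on reboot. The package ships ipvsadm.service and /etc/sysconfig/ipvsadm-config (see the rpm -ql output above); a sketch for saving the table so the service can restore it at boot:

```shell
# Dump the current IPVS table in restorable form and enable the unit
# that replays it at boot (run as root on the DS).
ipvsadm-save -n > /etc/sysconfig/ipvsadm
systemctl enable ipvsadm.service
```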

Send some test requests:

wilmos@Nothing:~$ curl http://192.168.56.185
rs1
wilmos@Nothing:~$ curl http://192.168.56.185
rs2
wilmos@Nothing:~$ curl http://192.168.56.185
rs1
wilmos@Nothing:~$ curl http://192.168.56.185
rs2
wilmos@Nothing:~$ curl http://192.168.56.185
rs1
wilmos@Nothing:~$ curl http://192.168.56.185
rs2
wilmos@Nothing:~$ 

As you can see, rs1 and rs2 are served in round-robin order, which means LVS TUN is configured and working.

Adjust RS weights online

[root@ds1 ~]# ipvsadm -L -n 
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.56.185:80 wrr
  -> 192.168.56.183:80            Tunnel  1      0          5         
  -> 192.168.56.184:80            Tunnel  1      0          6         
[root@ds1 ~]# ipvsadm -e -t 192.168.56.185:80 -r 192.168.56.183:80 -i -w 2
[root@ds1 ~]# ipvsadm -L -n 
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.56.185:80 wrr
  -> 192.168.56.183:80            Tunnel  2      0          5         
  -> 192.168.56.184:80            Tunnel  1      0          6         
[root@ds1 ~]#

Test again directly:

wilmos@Nothing:~$ curl http://192.168.56.185
rs2
wilmos@Nothing:~$ curl http://192.168.56.185
rs1
wilmos@Nothing:~$ curl http://192.168.56.185
rs1
wilmos@Nothing:~$ curl http://192.168.56.185
rs2
wilmos@Nothing:~$ curl http://192.168.56.185
rs1
wilmos@Nothing:~$ curl http://192.168.56.185
rs1
wilmos@Nothing:~$ curl http://192.168.56.185
rs2
wilmos@Nothing:~$ curl http://192.168.56.185
rs1
wilmos@Nothing:~$ curl http://192.168.56.185
rs1
wilmos@Nothing:~$ curl http://192.168.56.185
rs2
wilmos@Nothing:~$ 

With its weight doubled, rs1 is now selected twice as often in the rotation.
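The effect of the weights can be sketched with a toy loop. This only illustrates the 2:1 ratio; it is not the kernel's actual wrr algorithm, whose interleaving differs:

```shell
# Toy illustration of 2:1 weighted round robin over 9 requests:
# rs1 (weight 2) should serve 6 of them, rs2 (weight 1) the other 3.
w1=2; w2=1
c1=0; c2=0
i=0
while [ "$i" -lt 9 ]; do
  # Positions 0..w1-1 of each cycle go to rs1, the rest to rs2.
  if [ $((i % (w1 + w2))) -lt "$w1" ]; then
    c1=$((c1 + 1))
  else
    c2=$((c2 + 1))
  fi
  i=$((i + 1))
done
echo "rs1=$c1 rs2=$c2"    # rs1=6 rs2=3
```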

Try port mapping

Add a second VIP, 192.168.56.186, to the DS:

[root@ds1 ~]# ifconfig tunl0:1 192.168.56.186 broadcast 192.168.56.186 netmask 255.255.255.255 up
[root@ds1 ~]# route add -host 192.168.56.186 dev tunl0:1
[root@ds1 ~]# ip route 
default via 10.0.2.2 dev eth0 proto dhcp metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
192.168.1.0/24 dev eth1 proto kernel scope link src 192.168.1.181 metric 101 
192.168.56.0/24 dev eth2 proto kernel scope link src 192.168.56.181 metric 102 
192.168.56.185 dev tunl0 scope link 
192.168.56.186 dev tunl0 scope link src 192.168.56.186 
[root@ds1 ~]#
[root@ds1 ~]# ip a 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:c9:c7:04 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 84367sec preferred_lft 84367sec
    inet6 fe80::5054:ff:fec9:c704/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:fc:bf:30 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.181/24 brd 192.168.1.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fefc:bf30/64 scope link 
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:b3:ae:0b brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.181/24 brd 192.168.56.255 scope global noprefixroute eth2
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feb3:ae0b/64 scope link 
       valid_lft forever preferred_lft forever
5: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 192.168.56.185/32 brd 192.168.56.185 scope global tunl0
       valid_lft forever preferred_lft forever
    inet 192.168.56.186/32 brd 192.168.56.186 scope global tunl0:1
       valid_lft forever preferred_lft forever
[root@ds1 ~]# 

Configure a port mapping, trying to expose port 80 on the RSes as port 8099 on the new VIP:

[root@ds1 ~]# ipvsadm -L -n 
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.56.185:80 wrr
  -> 192.168.56.183:80            Tunnel  2      0          0         
  -> 192.168.56.184:80            Tunnel  1      0          0         
[root@ds1 ~]# ipvsadm -A -t 192.168.56.186:8099 -s wrr 
[root@ds1 ~]# ipvsadm -a -t 192.168.56.186:8099 -r 192.168.56.183:80 -i -w 1
[root@ds1 ~]# ipvsadm -a -t 192.168.56.186:8099 -r 192.168.56.184:80 -i -w 1
[root@ds1 ~]# ipvsadm -L -n 
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.56.185:80 wrr
  -> 192.168.56.183:80            Tunnel  2      0          0         
  -> 192.168.56.184:80            Tunnel  1      0          0         
TCP  192.168.56.186:8099 wrr
  -> 192.168.56.183:8099          Tunnel  1      0          0         
  -> 192.168.56.184:8099          Tunnel  1      0          0         
[root@ds1 ~]# 

Notice that the mapping did not take effect: ipvsadm lists the real servers on port 8099, not the port 80 we specified.

The reason is that TUN mode only does simple layer-3 forwarding of the encapsulated packet and never touches layer 4, so a layer-4 port mapping is impossible.

In this respect it behaves just like DR (direct routing) mode.
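This layer-3-only forwarding also explains the mtu 1480 shown for tunl0 in the ip a output: IPIP encapsulation wraps the original packet (CIP → VIP) in an outer IPv4 header (DIP → RIP), and that outer header costs 20 bytes:

```shell
# The IPIP outer IPv4 header is 20 bytes, so the tunnel MTU is the
# Ethernet MTU minus that overhead.
eth_mtu=1500
ipv4_header=20
tunl0_mtu=$((eth_mtu - ipv4_header))
echo "tunl0 mtu = $tunl0_mtu"    # tunl0 mtu = 1480
```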

Add the new VIP on the RSes:

[root@rs1 ~]# ifconfig tunl0:1 192.168.56.186 broadcast 192.168.56.186 netmask 255.255.255.255 up
[root@rs1 ~]# route add -host 192.168.56.186 tunl0:1
[root@rs1 ~]# 
----------
[root@rs2 ~]# ifconfig tunl0:1 192.168.56.186 broadcast 192.168.56.186 netmask 255.255.255.255 up
[root@rs2 ~]# route add -host 192.168.56.186 tunl0:1
[root@rs2 ~]#

Run an access test

wilmos@Nothing:~$ curl http://192.168.56.186:8099
curl: (7) Failed to connect to 192.168.56.186 port 8099: Connection refused
wilmos@Nothing:~$ curl http://192.168.56.186:8099
curl: (7) Failed to connect to 192.168.56.186 port 8099: Connection refused
wilmos@Nothing:~$ 

Accessed this way the service is unreachable; however, we can adjust the firewall on the RSes to translate the destination port

[root@rs1 ~]# iptables -t nat -A PREROUTING -d 192.168.56.186 -p tcp -m tcp --dport 8099 -j DNAT --to-destination 192.168.56.186:80
[root@rs1 ~]# 
----------
[root@rs2 ~]# iptables -t nat -A PREROUTING -d 192.168.56.186 -p tcp -m tcp --dport 8099 -j DNAT --to-destination 192.168.56.186:80
[root@rs2 ~]#
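What that iptables rule does can be modelled the same way (a sketch of the matching logic, not netfilter internals; `prerouting_dnat` is a hypothetical name for illustration): in the nat PREROUTING chain on each RS, a packet addressed to the VIP on port 8099 gets its destination port rewritten to 80 before local delivery, which is where the listening web server actually is.

```python
# Sketch of the DNAT rule added on each RS (illustrative, not netfilter code):
#   iptables -t nat -A PREROUTING -d 192.168.56.186 -p tcp --dport 8099 \
#            -j DNAT --to-destination 192.168.56.186:80

def prerouting_dnat(packet):
    """Rewrite dport 8099 -> 80 for packets addressed to the VIP."""
    if packet["dst"] == "192.168.56.186" and packet["dport"] == 8099:
        packet = {**packet, "dport": 80}   # only the port changes here
    return packet

# A decapsulated packet as the RS sees it after stripping the IPIP header:
decapsulated = {"src": "192.168.56.1", "dst": "192.168.56.186", "dport": 8099}
print(prerouting_dnat(decapsulated)["dport"])   # 80 -- now reaches the httpd
```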

Test access again

wilmos@Nothing:~$ curl http://192.168.56.186:8099
rs1
wilmos@Nothing:~$ curl http://192.168.56.186:8099
rs2
wilmos@Nothing:~$ curl http://192.168.56.186:8099
rs1
wilmos@Nothing:~$ curl http://192.168.56.186:8099
rs2
wilmos@Nothing:~$ curl http://192.168.56.186:8099
rs1
wilmos@Nothing:~$ curl http://192.168.56.186:8099
rs2
wilmos@Nothing:~$ 

Adjust the weights

[root@ds1 ~]# ipvsadm -e -t 192.168.56.186:8099 -r 192.168.56.183:8099 -i -w 2
[root@ds1 ~]# 

Access it again

wilmos@Nothing:~$ curl http://192.168.56.186:8099
rs2
wilmos@Nothing:~$ curl http://192.168.56.186:8099
rs1
wilmos@Nothing:~$ curl http://192.168.56.186:8099
rs1
wilmos@Nothing:~$ curl http://192.168.56.186:8099
rs2
wilmos@Nothing:~$ curl http://192.168.56.186:8099
rs1
wilmos@Nothing:~$ curl http://192.168.56.186:8099
rs1
wilmos@Nothing:~$ curl http://192.168.56.186:8099
rs2
wilmos@Nothing:~$ curl http://192.168.56.186:8099
rs1
wilmos@Nothing:~$ curl http://192.168.56.186:8099
rs1
wilmos@Nothing:~$ 

After rs1's weight was raised, it is polled twice as often, as expected
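The 2:1 pattern can be reproduced with a small simulation of weighted round robin. This is a simplified sketch of the scheduling loop described in the LVS documentation, not the actual IPVS `wrr` implementation, and the `wrr` generator name is my own.

```python
from functools import reduce
from math import gcd

def wrr(servers):
    """Simplified weighted round-robin loop in the style described by the
    LVS docs: walk the server list, lowering a current-weight threshold by
    the gcd of all weights each full pass, and pick servers whose weight
    meets the threshold."""
    n = len(servers)
    max_w = max(w for _, w in servers)
    step = reduce(gcd, (w for _, w in servers))
    i, cw = -1, 0
    while True:
        i = (i + 1) % n
        if i == 0:
            cw -= step
            if cw <= 0:
                cw = max_w
        if servers[i][1] >= cw:
            yield servers[i][0]

# rs1 has weight 2, rs2 has weight 1 -- as set with ipvsadm -e ... -w 2
sched = wrr([("rs1", 2), ("rs2", 1)])
picks = [next(sched) for _ in range(9)]
print(picks)   # rs1 appears twice for every rs2
print(picks.count("rs1"), picks.count("rs2"))   # -> 6 3
```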

Add a VIP on a non-tunnel interface

[root@ds1 ~]# ifconfig eth2:1 192.168.56.187 broadcast 192.168.56.187 netmask 255.255.255.255 up
[root@ds1 ~]# route add -host 192.168.56.187 dev eth2:1
[root@ds1 ~]# ip route 
default via 10.0.2.2 dev eth0 proto dhcp metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
192.168.1.0/24 dev eth1 proto kernel scope link src 192.168.1.181 metric 101 
192.168.56.0/24 dev eth2 proto kernel scope link src 192.168.56.181 metric 102 
192.168.56.185 dev tunl0 scope link 
192.168.56.186 dev tunl0 scope link src 192.168.56.186 
192.168.56.187 dev eth2 scope link src 192.168.56.187 
[root@ds1 ~]# ip a 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:c9:c7:04 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 83754sec preferred_lft 83754sec
    inet6 fe80::5054:ff:fec9:c704/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:fc:bf:30 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.181/24 brd 192.168.1.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fefc:bf30/64 scope link 
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:b3:ae:0b brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.181/24 brd 192.168.56.255 scope global noprefixroute eth2
       valid_lft forever preferred_lft forever
    inet 192.168.56.187/32 brd 192.168.56.187 scope global eth2:1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feb3:ae0b/64 scope link 
       valid_lft forever preferred_lft forever
5: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 192.168.56.185/32 brd 192.168.56.185 scope global tunl0
       valid_lft forever preferred_lft forever
    inet 192.168.56.186/32 brd 192.168.56.186 scope global tunl0:1
       valid_lft forever preferred_lft forever
[root@ds1 ~]# 

Configure the forwarding

[root@ds1 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.56.185:80 wrr
  -> 192.168.56.183:80            Tunnel  2      0          0         
  -> 192.168.56.184:80            Tunnel  1      0          0         
TCP  192.168.56.186:8099 wrr
  -> 192.168.56.183:8099          Tunnel  1      0          0         
  -> 192.168.56.184:8099          Tunnel  1      0          0         
[root@ds1 ~]# ipvsadm -A -t 192.168.56.187:80 -s wrr
[root@ds1 ~]# ipvsadm -a -t 192.168.56.187:80 -r 192.168.56.183:80 -i -w 1
[root@ds1 ~]# ipvsadm -a -t 192.168.56.187:80 -r 192.168.56.184:80 -i -w 2
[root@ds1 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.56.185:80 wrr
  -> 192.168.56.183:80            Tunnel  2      0          0         
  -> 192.168.56.184:80            Tunnel  1      0          0         
TCP  192.168.56.186:8099 wrr
  -> 192.168.56.183:8099          Tunnel  1      0          0         
  -> 192.168.56.184:8099          Tunnel  1      0          0         
TCP  192.168.56.187:80 wrr
  -> 192.168.56.183:80            Tunnel  1      0          0         
  -> 192.168.56.184:80            Tunnel  2      0          0         
[root@ds1 ~]# 

The configuration on rs1 and rs2 is as follows

[root@rs1 ~]# ifconfig tunl0:1 192.168.56.187 broadcast 192.168.56.187 netmask 255.255.255.255 up
[root@rs1 ~]# route add -host 192.168.56.187 tunl0:1
[root@rs1 ~]# 
------------
[root@rs2 ~]# ifconfig tunl0:1 192.168.56.187 broadcast 192.168.56.187 netmask 255.255.255.255 up
[root@rs2 ~]# route add -host 192.168.56.187 tunl0:1
[root@rs2 ~]# 

Run an access test

wilmos@Nothing:~$ curl http://192.168.56.187
rs2
wilmos@Nothing:~$ curl http://192.168.56.187
rs2
wilmos@Nothing:~$ curl http://192.168.56.187
rs1
wilmos@Nothing:~$ curl http://192.168.56.187
rs2
wilmos@Nothing:~$ curl http://192.168.56.187
rs2
wilmos@Nothing:~$ curl http://192.168.56.187
rs1
wilmos@Nothing:~$ curl http://192.168.56.187
rs2
wilmos@Nothing:~$ curl http://192.168.56.187
rs2
wilmos@Nothing:~$ curl http://192.168.56.187
rs1
wilmos@Nothing:~$ curl http://192.168.56.187
rs2
wilmos@Nothing:~$ curl http://192.168.56.187
rs2
wilmos@Nothing:~$

Summary

Overall, the TUN mode of LVS is quite simple and direct

The official site indicates that this forwarding mode can support clusters of more than 100 servers:

  • NAT: 1-20
  • DR: 20-100
  • TUN: 100+

A later post will demonstrate LVS combined with keepalived

