
LVS in Practice (3): An LVS-NAT Mode Case Study

source link: https://blog.51cto.com/shone/5141938

LVS-NAT Mode Case Study

LVS-NAT is essentially multi-target DNAT: forwarding works by rewriting the destination address and destination port of the request packet to the RIP and PORT of a selected RS.

(1) RIP and DIP should be on the same IP network and use private addresses; each RS's gateway must point to the DIP

(2) Both request and response packets must pass through the Director, so the Director easily becomes the system bottleneck

(3) Port mapping is supported: the destination PORT of the request packet can be rewritten

(4) The VS must be a Linux system; the RS can run any OS
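Rule (1) is the usual tripwire in NAT deployments. A quick sanity check can be run on each RS to compare its default gateway against the Director's DIP (a sketch; the DIP value below is this case's 192.168.250.8):

```shell
# Sanity check for rule (1), run on each RS: if the default gateway is not
# the Director's DIP, replies bypass the LVS and the client sees timeouts.
DIP=192.168.250.8   # this case's DIP; adjust for your environment
gw=$(ip route show default | awk '{print $3; exit}')
if [ "$gw" = "$DIP" ]; then
    echo "gateway OK: $gw"
else
    echo "gateway is '$gw', expected DIP $DIP"
fi
```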

1. Architecture and Hosts

[Figure: LVS-NAT case architecture diagram]

# Four hosts
1) Two RS (real) servers:
Hostname: RS1-IP18
CentOS 8.4
RIP1 IP: 192.168.250.18 GW: 192.168.250.8
httpd web service, page content: RS1-IP18 IP:192.168.250.18

Hostname: RS2-IP28
CentOS 8.4
RIP2 IP: 192.168.250.28 GW: 192.168.250.8
httpd web service, page content: RS2-IP28 IP:192.168.250.28

2) One LVS server, also called the Director:
Hostname: LVS-IP08
CentOS 8.4
VIP eth1 IP: 172.16.0.8/24 GW: none
DIP eth0 IP: 192.168.250.8/24 GW: none
ipvsadm


3) One client machine:
Hostname: Client-IP48
CentOS 8.4
eth0 IP: 172.16.0.48/24 GW: none

2. Configuring the Two RS Servers

########################################################################################################
#### RS1 configuration (IP 192.168.250.18)
# Verify the firewall and SELinux are disabled; set the hostname, sync time, and apply other OS tuning
[root@CentOS84 ]#hostnamectl set-hostname RS1-IP18
[root@CentOS84 ]#exit
[root@RS1-IP18 ]#systemctl enable --now chronyd.service
# Install Apache httpd, create the default page, and enable the service
[root@RS1-IP18 ]#yum -y install httpd;hostname > /var/www/html/index.html;systemctl enable --now httpd

# Customize the page content so later tests are easier to read
[root@RS1-IP18 ]#vim /var/www/html/index.html
[root@RS1-IP18 ]#cat /var/www/html/index.html
RS1-IP18 IP:192.168.250.18

# Check and adjust the NIC configuration
[root@RS1-IP18 ]#nmcli connection
NAME UUID TYPE DEVICE
eth0 b5e0e3e5-7738-403f-9912-cf32e0f90a75 ethernet eth0

[root@RS1-IP18 ]#vim /etc/sysconfig/network-scripts/ifcfg-Profile_1
TYPE=Ethernet
DEVICE=eth0
NAME="eth0"
IPADDR=192.168.250.18
PREFIX=24
GATEWAY=192.168.250.8
DEFROUTE=yes
ONBOOT=yes
[root@RS1-IP18 ]#

# Apply the NIC configuration
[root@RS1-IP18 ]#nmcli con reload eth0
[root@RS1-IP18 ]#nmcli con up eth0

[root@RS1-IP18 ]#ip route
default via 192.168.250.8 dev eth0 proto static metric 100
192.168.250.0/24 dev eth0 proto kernel scope link src 192.168.250.18 metric 100

# Verify the page
[root@RS1-IP18 ]#curl 192.168.250.18
RS1-IP18 IP:192.168.250.18


############################################################################################################

#### RS2 configuration (IP 192.168.250.28)
# Verify the firewall and SELinux are disabled; set the hostname, sync time, and apply other OS tuning
[root@CentOS84 ]#hostnamectl set-hostname RS2-IP28
[root@CentOS84 ]#exit
[root@RS2-IP28 ]#systemctl enable --now chronyd.service

# Install Apache httpd, create the default page, and enable the service
[root@RS2-IP28 ]#yum -y install httpd;hostname > /var/www/html/index.html;systemctl enable --now httpd

# Customize the page content so later tests are easier to read
[root@RS2-IP28 ]#vim /var/www/html/index.html
[root@RS2-IP28 ]#cat /var/www/html/index.html
RS2-IP28 IP:192.168.250.28
[root@RS2-IP28 ]#vim /etc/sysconfig/network-scripts/ifcfg-Profile_1
TYPE=Ethernet
DEVICE=eth0
NAME="eth0"
IPADDR=192.168.250.28
PREFIX=24
GATEWAY=192.168.250.8
DEFROUTE=yes
ONBOOT=yes

# Apply the NIC configuration (here by rebooting)
[root@RS2-IP28 ]#reboot

[root@RS2-IP28 ]#nmcli connection
NAME UUID TYPE DEVICE
eth0 0fe6428d-d1b2-44ee-a4e1-5f1c1f97a00c ethernet eth0
[root@RS2-IP28 ]#ip route
default via 192.168.250.8 dev eth0 proto static metric 100
192.168.250.0/24 dev eth0 proto kernel scope link src 192.168.250.28 metric 100
[root@RS2-IP28 ]#

# Verify the page
[root@RS2-IP28 ]#curl 192.168.250.28
RS2-IP28 IP:192.168.250.28

3. Configuring the LVS Server

Tasks and outline: prepare and tune the base server environment; install the ipvsadm package; add a second NIC to the LVS server through the VCSA console; configure and verify the network; then configure the LVS cluster with the RS addresses, ports, and other forwarding details.

# Verify the firewall and SELinux are disabled; set the hostname, sync time, and apply other OS tuning
[root@CentOS84 ]#hostnamectl set-hostname LVS-IP08
[root@CentOS84 ]#exit
[root@LVS-IP08 ]#systemctl enable --now chronyd.service

# After finishing the CentOS tuning, confirm ip_forward is enabled. If it is not, then even with the cluster and forwarding rules configured, client tests will fail: a packet capture shows the first three stages complete normally, but the fourth stage — the reply from the LVS server back to the client — never completes.
[root@LVS-IP08 ]#cat /etc/sysctl.conf | grep ip_forward
net.ipv4.ip_forward = 1
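The grep above only reads the persistent config; the live kernel value can be checked (and, as root, changed) at runtime as well. A minimal sketch:

```shell
# Read the live forwarding switch; 1 means the kernel relays packets
# between interfaces, which LVS-NAT's return path depends on.
cat /proc/sys/net/ipv4/ip_forward

# To turn it on immediately and persistently (as root):
#   sysctl -w net.ipv4.ip_forward=1
#   grep -q '^net.ipv4.ip_forward' /etc/sysctl.conf || \
#       echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
#   sysctl -p
```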

# Install the LVS management tool ipvsadm first — after the NIC changes this host can no longer reach the Internet, so download it now
[root@LVS-IP08 ]#yum -y install ipvsadm

# In the VCSA virtual cluster manager, add a second NIC to the LVS server, then configure its IP address, gateway, etc. per the plan
[root@LVS-IP08 ]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:a3:9e:ee brd ff:ff:ff:ff:ff:ff
inet 192.168.250.8/24 brd 192.168.250.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 # this is the newly added NIC
link/ether 00:50:56:a3:e3:9b brd ff:ff:ff:ff:ff:ff

[root@LVS-IP08 ]#ll /etc/sysconfig/network-scripts/ifcfg-Profile_1
-rw-r--r-- 1 root root 339 Jan 15 20:36 /etc/sysconfig/network-scripts/ifcfg-Profile_1
[root@LVS-IP08 ]#vim /etc/sysconfig/network-scripts/ifcfg-Profile_1
TYPE=Ethernet
DEVICE=eth0
NAME="eth0"
IPADDR=192.168.250.8
PREFIX=24
GATEWAY=192.168.250.254 # the gateway is optional here; keeping an Internet-facing gateway does not affect LVS operation
DEFROUTE=yes
ONBOOT=yes

[root@LVS-IP08 ]#cp /etc/sysconfig/network-scripts/ifcfg-Profile_1 /etc/sysconfig/network-scripts/ifcfg-Profile_2
[root@LVS-IP08 ]#vim /etc/sysconfig/network-scripts/ifcfg-Profile_2
TYPE=Ethernet
DEVICE=eth1
NAME="eth1"
IPADDR=172.16.0.8
PREFIX=24
DEFROUTE=yes
ONBOOT=yes

# Reboot to apply the configuration, then verify the NICs
[root@LVS-IP08 ]#reboot
[root@LVS-IP08 ]#nmcli con
NAME UUID TYPE DEVICE
eth0 b5e0e3e5-7738-403f-9912-cf32e0f90a75 ethernet eth0
eth1 1f162eb7-8128-c2ab-afbb-c099cbc4b75f ethernet eth1
[root@LVS-IP08 ]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:a3:9e:ee brd ff:ff:ff:ff:ff:ff
inet 192.168.250.8/24 brd 192.168.250.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:a3:e3:9b brd ff:ff:ff:ff:ff:ff
inet 172.16.0.8/24 brd 172.16.0.255 scope global noprefixroute eth1
valid_lft forever preferred_lft forever
[root@LVS-IP08 ]#

# Check the LVS rule table; it should be empty
[root@LVS-IP08 ]#ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

# Configure the LVS cluster and verify. Note: in production, keep the default scheduler; for this lab it is changed to round-robin (rr) so the test output below is easier to follow
[root@LVS-IP08 ]#ipvsadm -A -t 172.16.0.8:80 -s rr
[root@LVS-IP08 ]#ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.16.0.8:80 rr

# Add forwarding rules for the two back-end RS
[root@LVS-IP08 ]#ipvsadm -a -t 172.16.0.8:80 -r 192.168.250.18 -m
[root@LVS-IP08 ]#ipvsadm -a -t 172.16.0.8:80 -r 192.168.250.28:80 -m
[root@LVS-IP08 ]#ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.16.0.8:80 rr
-> 192.168.250.18:80 Masq 1 0 0
-> 192.168.250.28:80 Masq 1 0 0
[root@LVS-IP08 ]#

# Save the configuration and enable LVS to start as a service at boot
[root@LVS-IP08 ]#ipvsadm -Sn > /etc/sysconfig/ipvsadm
[root@LVS-IP08 ]#systemctl enable --now ipvsadm.service
[root@LVS-IP08 ]#
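For reference, the file written by `ipvsadm -Sn` should then contain roughly the following (a sketch; the `-w 1` weights are ipvsadm's defaults):

```
-A -t 172.16.0.8:80 -s rr
-a -t 172.16.0.8:80 -r 192.168.250.18:80 -m -w 1
-a -t 172.16.0.8:80 -r 192.168.250.28:80 -m -w 1
```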


[root@LVS-IP08 ]#ipvsadm -Ln --stats
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Conns InPkts OutPkts InBytes OutBytes
-> RemoteAddress:Port
TCP 172.16.0.8:80 299 1794 1196 97474 131859
-> 192.168.250.18:80 149 894 596 48574 65709
-> 192.168.250.28:80 150 900 600 48900 66150
[root@LVS-IP08 ]#

[root@LVS-IP08 ]#cat /proc/net/ip_vs
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP AC100008:0050 rr
-> C0A8FA1C:0050 Masq 1 0 8
-> C0A8FA12:0050 Masq 1 0 8
[root@LVS-IP08 ]#
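`/proc/net/ip_vs` prints addresses and ports in hexadecimal. A small helper can convert them back to the familiar notation (a sketch; `decode` is a hypothetical name, not part of ipvsadm):

```shell
# Convert ip_vs's HEXIP:HEXPORT notation to dotted-quad:port.
decode() {
    ip=${1%:*}; port=${1#*:}
    printf '%d.%d.%d.%d:%d\n' \
        "0x$(echo "$ip" | cut -c1-2)" "0x$(echo "$ip" | cut -c3-4)" \
        "0x$(echo "$ip" | cut -c5-6)" "0x$(echo "$ip" | cut -c7-8)" \
        "0x${port}"
}

decode AC100008:0050   # 172.16.0.8:80
decode C0A8FA12:0050   # 192.168.250.18:80
decode C0A8FA1C:0050   # 192.168.250.28:80
```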

[root@LVS-IP08 ]#cat /proc/net/ip_vs_conn

4. Preparing the Test Client and Verifying LVS

# Verify the firewall and SELinux are disabled; set the hostname, sync time, and apply other OS tuning
[root@CentOS84 ]#hostnamectl set-hostname Client-IP48
[root@CentOS84 ]#exit
[root@Client-IP48 ]#systemctl enable --now chronyd.service

# Configure the NIC; verify as follows
[root@Client-IP48 ]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:a3:48:a4 brd ff:ff:ff:ff:ff:ff
inet 172.16.0.48/24 brd 172.16.0.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever

[root@Client-IP48 ]#ip route
172.16.0.0/24 dev eth0 proto kernel scope link src 172.16.0.48 metric 100
[root@Client-IP48 ]#

# With the LVS server's IPs all configured but before the cluster and forwarding rules exist, pinging its eth1 address 172.16.0.8 should succeed.
[root@Client-IP48 ]#ping 172.16.0.8
PING 172.16.0.8 (172.16.0.8) 56(84) bytes of data.
64 bytes from 172.16.0.8: icmp_seq=1 ttl=64 time=0.915 ms
...............
21 packets transmitted, 21 received, 0% packet loss, time 20486ms

# With the LVS server's IPs all configured but before the cluster and forwarding rules exist, pings to the two RS behind the LVS (and to the DIP) should all fail
[root@Client-IP48 ]#ping 192.168.250.18
connect: Network is unreachable
[root@Client-IP48 ]#ping 192.168.250.28
connect: Network is unreachable
[root@Client-IP48 ]#ping 192.168.250.8
connect: Network is unreachable

# With the LVS server fully configured, including the cluster and forwarding rules, test page access
[root@Client-IP48 ]#curl 172.16.0.8
RS1-IP18 IP:192.168.250.18
[root@Client-IP48 ]#curl 172.16.0.8
RS2-IP28 IP:192.168.250.28
[root@Client-IP48 ]#curl 172.16.0.8
RS1-IP18 IP:192.168.250.18
[root@Client-IP48 ]#curl 172.16.0.8
RS2-IP28 IP:192.168.250.28
[root@Client-IP48 ]#curl 172.16.0.8
RS1-IP18 IP:192.168.250.18
[root@Client-IP48 ]#curl 172.16.0.8
RS2-IP28 IP:192.168.250.28

# You can also watch the client's access continuously with the following command
[root@Client-IP48 ]#while :;do curl 172.16.0.8;sleep 1;done
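To see the rr distribution at a glance rather than scrolling output, the responses can be tallied (a sketch; `tally` is a hypothetical helper, and 172.16.0.8 is this case's VIP). With the cluster up, 100 requests should split evenly between the two RS pages:

```shell
# Count identical response lines; with -s rr and two equal-weight RS,
# the two counts should come out equal.
tally() { sort | uniq -c | awk '{print $1, $2}'; }

# Real run (from the client, with the cluster up):
#   for i in $(seq 1 100); do curl -s 172.16.0.8; done | tally

# The pipeline itself, verified on captured sample output:
printf 'RS1\nRS2\nRS1\nRS2\n' | tally   # prints "2 RS1" then "2 RS2"
```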

5. Appendix

5.1 Recommended Kernel and System Parameters for CentOS

#### Recommended kernel and system parameter tuning for CentOS

[root@LVS-IP08 ]#cat /etc/sysctl.conf
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

# Controls the default maximum size of a message queue
kernel.msgmnb = 65536

# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

# TCP kernel parameters
net.ipv4.tcp_mem = 786432 1048576 1572864
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1

# socket buffer
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 20480
net.core.optmem_max = 81920


# TCP conn
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_syn_retries = 3
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_retries2 = 15

# tcp conn reuse
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_timestamps = 0

net.ipv4.tcp_max_tw_buckets = 20000
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syncookies = 1

# keepalive conn
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.ip_local_port_range = 10001 65000

# swap
vm.overcommit_memory = 0
vm.swappiness = 10

#net.ipv4.conf.eth1.rp_filter = 0
#net.ipv4.conf.lo.arp_ignore = 1
#net.ipv4.conf.lo.arp_announce = 2
#net.ipv4.conf.all.arp_ignore = 1
#net.ipv4.conf.all.arp_announce = 2
[root@LVS-IP08 ]#cat /etc/sysctl.conf | grep ip_forward
net.ipv4.ip_forward = 1
[root@LVS-IP08 ]#
# Note: if forwarding (net.ipv4.ip_forward = 1) is not enabled above, the client cannot connect. A packet capture shows the packets of stages 1 through 5 transfer normally, but the stage-6 packet never gets through (see the packet flow diagram below). This is because with LVS, much as with iptables, the inbound leg does not traverse ip_forward while the return packets do. Lao Wang demonstrates this process with a packet-capture tool in the video tutorial.

5.2 LVS-NAT Packet Flow Diagram

Brief note: to make LVS-NAT mode easier to understand, a packet flow diagram is attached; it is best studied alongside how iptables works.

[Figure: LVS-NAT packet flow diagram]
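If the diagram is unavailable, the addressing at each stage of this case can be summarized as follows (a sketch using this case's addresses; only the Director rewrites addresses, and the RS replies through its gateway, the DIP):

```
1. Client   -> Director :  src 172.16.0.48 (CIP)  dst 172.16.0.8 (VIP)
2. Director -> RS       :  src 172.16.0.48 (CIP)  dst 192.168.250.18 (RIP)  # DNAT
3. RS -> Director (GW)  :  src 192.168.250.18     dst 172.16.0.48           # via GW = DIP
4. Director -> Client   :  src 172.16.0.8 (VIP)   dst 172.16.0.48           # reverse NAT
```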

