Installing & Configuring Linux Load Balancer Cluster (Direct Routing Method)
In Fedora, CentOS, and Red Hat Enterprise Linux, an IP load-balancing solution is provided by a package called ‘Piranha’.
Piranha load balances inbound IP network traffic (requests) and distributes it across a farm of server machines. The technique used to load balance the IP traffic is based on the Linux Virtual Server (LVS) tools.
This high availability is provided purely in software by Piranha, which also gives the system administrator a convenient graphical user interface for management.
The Piranha monitoring tool is responsible for the following functions:
Heartbeating between active and backup load balancers.
Checking the availability of the services on each of the real servers.
Components of Piranha Cluster Software:
IPVS kernel code and LVS (the IPVS routing table is managed via the ipvsadm tool)
Nanny (monitors the servers and services on the real servers in a cluster)
Pulse (controls the other daemons and handles failover between the IPVS routing boxes)
We will configure our computers, or nodes, as follows:
Our load balancing will be done using 2 Linux Virtual Server nodes (routing boxes).
We will install two or more web servers to be load balanced.
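For reference, here is the address plan used throughout this walkthrough (it matches the /etc/hosts file below):
# 10.32.29.140  www.pul.com  (VIP -- floats on eth0:1 of the active director)
# 10.32.29.174  lbnode1      (primary LVS director, also our NTP server)
# 10.32.29.178  lbnode2      (backup LVS director)
# 10.32.29.185  websrv1      (real server 1)
# 10.32.29.186  websrv2      (real server 2)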
--------
First of all, stop any services that we don’t need running on the nodes, for example:
[root@websrv1 ~]# service bluetooth stop && chkconfig --level 235 bluetooth off
[root@websrv1 ~]# service sendmail stop && chkconfig --level 235 sendmail off
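To decide what else can be switched off, it can help to list what is enabled at the usual runlevels first; this check is optional:
[root@websrv1 ~]# chkconfig --list | grep '3:on'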
We will modify the hosts configuration file at /etc/hosts on each of the nodes in our setup.
This load-balancing scheme requires 4 VM instances, each of them having a single network card on the same subnet.
Make sure all machines have the correct host name. The shell prompts should look like below.
[root@websrv1 ~]#
[root@websrv2 ~]#
[root@lbnode1 ~]#
[root@lbnode2 ~]#
Edit the hosts file using vim
[root@websrv1 ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.32.29.140 www.pul.com www
10.32.29.174 lbnode1.pul.com lbnode1
10.32.29.178 lbnode2.pul.com lbnode2
10.32.29.185 websrv1.pul.com websrv1
10.32.29.186 websrv2.pul.com websrv2
Copy the /etc/hosts file to all the servers (This step is not required if you have DNS)
[root@websrv1 ~]# scp /etc/hosts websrv2:/etc
[root@websrv1 ~]# scp /etc/hosts lbnode1:/etc
[root@websrv1 ~]# scp /etc/hosts lbnode2:/etc
After copying the hosts file to all the nodes, we need to generate SSH keys.
[root@websrv1 ~]# ssh-keygen -t rsa
[root@websrv1 ~]# ssh-keygen -t dsa
[root@websrv1 ~]# cd /root/.ssh/
[root@websrv1 .ssh]# cat *.pub > authorized_keys
Now copy the SSH keys to all the other nodes for passwordless login, which is required by the pulse daemon.
[root@websrv1 .ssh]# scp -r /root/.ssh/ websrv2:/root/
[root@websrv1 .ssh]# scp -r /root/.ssh/ lbnode1:/root/
[root@websrv1 .ssh]# scp -r /root/.ssh/ lbnode2:/root/
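As a quick sanity check (optional), each node should now accept a passwordless root login from websrv1:
[root@websrv1 .ssh]# for h in websrv2 lbnode1 lbnode2; do ssh $h hostname; done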
We can build up a global fingerprint list as follows:
[root@websrv1 .ssh]# ssh-keyscan -t rsa websrv1 websrv2 lbnode1 lbnode2
[root@websrv1 .ssh]# ssh-keyscan -t dsa websrv1 websrv2 lbnode1 lbnode2
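ssh-keyscan only prints the keys; to make the fingerprints take effect you can append them to known_hosts and push that file to the other nodes (an assumed convenience step, not part of the original commands):
[root@websrv1 .ssh]# ssh-keyscan -t rsa,dsa websrv1 websrv2 lbnode1 lbnode2 >> /root/.ssh/known_hosts
[root@websrv1 .ssh]# for h in websrv2 lbnode1 lbnode2; do scp /root/.ssh/known_hosts $h:/root/.ssh/; done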
--------
Now we will configure the NTP service on all the nodes. We will make lbnode1 our NTP server.
[root@lbnode1 ~]# rpm -qa | grep ntp
ntpdate-4.2.6p5-1.el6.x86_64
fontpackages-filesystem-1.41-1.1.el6.noarch
ntp-4.2.6p5-1.el6.x86_64
Edit the NTP config file on the NTP server lbnode1 so that it contains the following:
[root@lbnode1 ~]# cat /etc/ntp.conf | egrep -v "(^#.*|^$)"
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
server 127.127.1.0 # local clock
fudge 127.127.1.0 stratum 10
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
Start the ntp server on lbnode1
[root@lbnode1 ~]# service ntpd start && chkconfig ntpd on
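You can verify that ntpd is up by querying its peers; with the local-clock configuration above, a LOCAL(0) pseudo-peer should be listed (optional check):
[root@lbnode1 ~]# ntpq -p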
The NTP server is now set up.
Now we will configure the client side on websrv1. It is the NTP client.
[root@websrv1 ~]# cat /etc/ntp.conf | egrep -v "(^#.*|^$)"
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
server 10.32.29.174
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
#restrict 127.0.0.1
#restrict -6 ::1
#server 0.centos.pool.ntp.org
#server 1.centos.pool.ntp.org
#server 2.centos.pool.ntp.org
#server 127.127.1.0 # local clock
#fudge 127.127.1.0 stratum 10
Now start the NTP service on the client websrv1:
[root@websrv1 ~]# service ntpd start && chkconfig ntpd on
Synchronize the time by specifying the IP address of lbnode1 (the NTP server):
[root@websrv1 ~]# ntpdate -u 10.32.29.174
Copy the same configuration file /etc/ntp.conf to the other 2 nodes, websrv2 and lbnode2.
[root@websrv1 ~]# scp /etc/ntp.conf websrv2:/etc
[root@websrv1 ~]# scp /etc/ntp.conf lbnode2:/etc
After copying, start the NTP service on these nodes:
[root@websrv2 ~]# service ntpd start && chkconfig ntpd on
[root@lbnode2 ~]# service ntpd start && chkconfig ntpd on
Now we will update the time on all the nodes by typing the following commands:
[root@websrv2 ~]# ntpdate -u 10.32.29.174
[root@lbnode2 ~]# ntpdate -u 10.32.29.174
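As a final check (optional), the clocks on all four nodes should now agree to within a second or so:
[root@lbnode1 ~]# for h in lbnode1 lbnode2 websrv1 websrv2; do ssh $h date; done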
--------
Edit the sysctl config file on lbnode1 as below:
[root@lbnode1 ~]# cat /etc/sysctl.conf | egrep -v "(^#.*|^$)"
net.ipv4.ip_forward = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
Copy the same sysctl config file from lbnode1 to lbnode2
[root@lbnode1 ~]# scp /etc/sysctl.conf lbnode2:/etc/
Run sysctl -p on lbnode1
[root@lbnode1 ~]# sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
[root@lbnode1 ~]#
Run sysctl -p on lbnode2
[root@lbnode2 ~]# sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
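The two ARP settings are what make direct routing work safely; as a short gloss of the kernel documentation:
# arp_announce = 2 : when sending ARP requests, always use the best local
#                    address for the target instead of the packet's source IP
# arp_ignore   = 1 : reply to an ARP request only if the target IP is
#                    configured on the interface the request arrived on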
--------
Now we will set up our Linux Virtual Server nodes (lbnode1 & lbnode2) by installing the Piranha package. As noted above, Piranha includes the ipvsadm, nanny, and pulse daemons.
We will use yum to install Piranha on both nodes.
[root@lbnode1 ~]# yum install piranha -y
[root@lbnode2 ~]# yum install piranha -y
Now we will create the Linux Virtual Server configuration file at /etc/sysconfig/ha/lvs.cf
[root@lbnode1 ~]# cat /etc/sysconfig/ha/lvs.cf
serial_no = 12
primary = 10.32.29.174
service = lvs
rsh_command = ssh
backup_active = 1
backup = 10.32.29.178
heartbeat = 1
heartbeat_port = 1050
keepalive = 2
deadtime = 10
network = direct
debug_level = NONE
monitor_links = 1
syncdaemon = 0
virtual server1 {
     active = 1
     address = 10.32.29.140 eth0:1
     port = 80
     send = "GET / HTTP/1.1\r\n\r\n"
     expect = "HTTP"
     use_regex = 0
     load_monitor = none
     scheduler = rr
     protocol = tcp
     timeout = 10
     reentry = 180
     quiesce_server = 0
     server websrv1 {
         address = 10.32.29.185
         active = 1
         weight = 1
     }
     server websrv2 {
         address = 10.32.29.186
         active = 1
         weight = 1
     }
}
Set up the password for Piranha web access. An empty password is used here.
[root@lbnode1 ~]# service httpd start
[root@lbnode1 ~]# /sbin/service piranha-gui start
[root@lbnode1 ~]# piranha-passwd
New Password:
Verify:
Updating password for user piranha
Use a web browser, e.g. Chrome, to log in to the Piranha web configuration tool at http://10.32.29.174:3636/
Login name: piranha
Password: <Empty>
Go to the “Global Settings” page and verify that “Primary server public IP” is 10.32.29.174 (lbnode1).
Go to “Virtual Servers” and verify that a server named “server1” has 10.32.29.140 (our virtual IP address).
Go to “Virtual Servers” and “Virtual Server”, and verify the virtual IP address is 10.32.29.140 and is attached to eth0:1.
Go to “Virtual Servers” and “Real Server”, and verify the real IP addresses are the same as the web servers’.
Now we will copy this configuration file to lbnode2.
[root@lbnode1 ~]# scp /etc/sysconfig/ha/lvs.cf lbnode2:/etc/sysconfig/ha/
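A simple way to confirm that both directors now hold an identical configuration (optional check):
[root@lbnode1 ~]# md5sum /etc/sysconfig/ha/lvs.cf && ssh lbnode2 md5sum /etc/sysconfig/ha/lvs.cf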
--------
Run the arptables script below on each real server. On websrv1:
[root@websrv1 ~]# cd ~
[root@websrv1 ~]# cat arp_arptables.sh
#!/bin/bash
VIP=10.32.29.140
RIP=10.32.29.185
# Flush any existing ARP rules
arptables -F
# Never answer ARP requests for the VIP (only the active director may own it on the wire)
arptables -A IN -d $VIP -j DROP
# Rewrite the source address of outgoing ARP traffic from the VIP to our real IP
arptables -A OUT -s $VIP -j mangle --mangle-ip-s $RIP
# Bring the VIP up on an alias interface so we accept packets addressed to it
/sbin/ifconfig eth0:1 $VIP broadcast $VIP netmask 255.255.255.0 up
/sbin/route add -host $VIP dev eth0:1
[root@websrv1 ~]# . arp_arptables.sh
On websrv2, run the same script with its own real IP:
[root@websrv2 ~]# cd ~
[root@websrv2 ~]# cat arp_arptables.sh
#!/bin/bash
VIP=10.32.29.140
RIP=10.32.29.186
arptables -F
arptables -A IN -d $VIP -j DROP
arptables -A OUT -s $VIP -j mangle --mangle-ip-s $RIP
/sbin/ifconfig eth0:1 $VIP broadcast $VIP netmask 255.255.255.0 up
/sbin/route add -host $VIP dev eth0:1
[root@websrv2 ~]# . arp_arptables.sh
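You can verify the result on each real server: the two ARP rules should be listed and eth0:1 should carry the VIP (optional check):
[root@websrv1 ~]# arptables -L
[root@websrv1 ~]# ifconfig eth0:1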
--------
Optional: there is a high chance that you will not need this section if you used the arptables script above.
Now we will install and configure our web servers and the arptables_jf package for direct routing.
[root@websrv1 ~]# yum install httpd arptables_jf -y
Now we will configure the Ethernet interfaces for virtual IP on first web server node.
[root@websrv1 ~]# ifconfig eth0:1 10.32.29.140 netmask 255.255.255.0 broadcast 10.32.29.255 up
[root@websrv1 ~]# echo "ifconfig eth0:1 10.32.29.140 netmask 255.255.255.0 broadcast 10.32.29.255 up" >> /etc/rc.local
Now we will do it on the second web server node.
[root@websrv2 ~]# yum install httpd arptables_jf -y
Now we will configure the Ethernet interfaces for virtual IP on second web server node.
[root@websrv2 ~]# ifconfig eth0:1 10.32.29.140 netmask 255.255.255.0 broadcast 10.32.29.255 up
[root@websrv2 ~]# echo "ifconfig eth0:1 10.32.29.140 netmask 255.255.255.0 broadcast 10.32.29.255 up" >> /etc/rc.local
Now we will configure arptables on our first web server node; as in the script above, the OUT rule rewrites ARP traffic sourced from the VIP to the server’s own real IP.
[root@websrv1 ~]# arptables -A IN -d 10.32.29.140 -j DROP
[root@websrv1 ~]# arptables -A OUT -s 10.32.29.140 -j mangle --mangle-ip-s 10.32.29.185
[root@websrv1 ~]# service arptables_jf save
[root@websrv1 ~]# chkconfig arptables_jf on
Now we will configure arptables on our second web server node in the same way.
[root@websrv2 ~]# arptables -A IN -d 10.32.29.140 -j DROP
[root@websrv2 ~]# arptables -A OUT -s 10.32.29.140 -j mangle --mangle-ip-s 10.32.29.186
[root@websrv2 ~]# service arptables_jf save
[root@websrv2 ~]# chkconfig arptables_jf on
--------
Install the ipvsadm, piranha, and httpd packages on lbnode1:
[root@lbnode1 ~]# yum install -y ipvsadm piranha httpd
Loaded plugins: refresh-packagekit, security
Setting up Install Process
Package ipvsadm-1.26-4.el6.x86_64 already installed and latest version
Package piranha-0.8.6-4.0.1.el6_5.2.x86_64 already installed and latest version
Package httpd-2.2.15-39.0.1.el6.x86_64 already installed and latest version
Nothing to do
Install the ipvsadm, piranha, and httpd packages on lbnode2:
[root@lbnode2 ~]# yum install -y ipvsadm piranha httpd
Loaded plugins: refresh-packagekit, security
Setting up Install Process
Package ipvsadm-1.26-4.el6.x86_64 already installed and latest version
Package piranha-0.8.6-4.0.1.el6_5.2.x86_64 already installed and latest version
Package httpd-2.2.15-39.0.1.el6.x86_64 already installed and latest version
Nothing to do
We will start httpd on web server 1 and give it a test page.
[root@websrv1 ~]# /etc/init.d/httpd start && chkconfig httpd on
[root@websrv1 ~]# echo "Web Server 1" > /var/www/html/index.html
We will do the same on web server 2.
[root@websrv2 ~]# /etc/init.d/httpd start && chkconfig httpd on
[root@websrv2 ~]# echo "Web Server 2" > /var/www/html/index.html
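Before involving the load balancer, it is worth checking that each real server answers on its real IP (optional check, assuming curl is installed):
[root@lbnode1 ~]# curl http://10.32.29.185/
Web Server 1
[root@lbnode1 ~]# curl http://10.32.29.186/
Web Server 2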
We will start the pulse service on both LVS nodes:
[root@lbnode1 ~]# service pulse start && chkconfig pulse on
[root@lbnode1 ~]# tail -f /var/log/messages
Jan 20 15:30:34 lbnode1 pulse[5084]: STARTING PULSE AS MASTER
Jan 20 15:30:35 lbnode1 pulse[5084]: backup inactive: activating lvs
Jan 20 15:30:35 lbnode1 lvsd[5086]: starting virtual service server1 active: 80
Jan 20 15:30:35 lbnode1 lvsd[5086]: create_monitor for server1/websrv1 running as pid 5090
Jan 20 15:30:35 lbnode1 lvsd[5086]: create_monitor for server1/websrv2 running as pid 5091
Jan 20 15:30:35 lbnode1 nanny[5091]: starting LVS client monitor for 10.32.29.140:80 -> 10.32.29.186:80
Jan 20 15:30:35 lbnode1 nanny[5091]: [ active ] making 10.32.29.186:80 available
Jan 20 15:30:35 lbnode1 nanny[5090]: starting LVS client monitor for 10.32.29.140:80 -> 10.32.29.185:80
Jan 20 15:30:35 lbnode1 nanny[5090]: [ active ] making 10.32.29.185:80 available
Jan 20 15:30:40 lbnode1 pulse[5093]: gratuitous lvs arps finished
[root@lbnode2 ~]# service pulse start && chkconfig pulse on
Starting pulse:
[root@lbnode2 ~]# tail -f /var/log/messages
Jan 20 15:25:09 lbnode2 NetworkManager[1383]: <info> (eth0): device state change: 7 -> 8 (reason 0)
Jan 20 15:25:09 lbnode2 NetworkManager[1383]: <info> Policy set 'System eth0' (eth0) as default for IPv4 routing and DNS.
Jan 20 15:25:09 lbnode2 NetworkManager[1383]: <info> Activation (eth0) successful, device activated.
Jan 20 15:25:09 lbnode2 NetworkManager[1383]: <info> Activation (eth0) Stage 5 of 5 (IP Configure Commit) complete.
Jan 20 15:25:09 lbnode2 ntpd[1610]: Deleting interface #6 eth0:1, 10.32.29.140#123, interface stats: received=0, sent=0, dropped=0, active_time=551 secs
Jan 20 15:25:09 lbnode2 ntpd[1610]: peers refreshed
Jan 20 15:28:56 lbnode2 pulse[2635]: Terminating due to signal 15
Jan 20 15:28:59 lbnode2 pulse[3172]: STARTING PULSE AS BACKUP
Jan 20 15:30:41 lbnode2 pulse[3172]: Terminating due to signal 15
Jan 20 15:30:41 lbnode2 pulse[3211]: STARTING PULSE AS BACKUP
--------
We have set up our LVS and web server nodes; now it is time to test whether everything is working.
[root@lbnode1 ~]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  www.pul.com:http rr
  -> websrv1.pul.com:http         Route   1      0          0
  -> websrv2.pul.com:http         Route   1      0          0
[root@lbnode2 ~]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
[root@lbnode2 ~]# watch ipvsadm
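A quick failover exercise you may want to try (a sketch): stop httpd on one real server and watch nanny take it out of rotation, then bring it back:
[root@websrv1 ~]# service httpd stop
[root@lbnode1 ~]# ipvsadm    # websrv1 should drop out after the configured timeout
[root@websrv1 ~]# service httpd start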
Finally, open a web browser on any machine and go to http://www.pul.com, then keep refreshing the page; we should see the page contents alternate between Web Server 1 and Web Server 2.
--------
While refreshing the client browser (F5), run "ipvsadm --list" on the master load balancer; you will see the InActConn numbers changing.
Alternatively, you can use "ipvsadm -Lnc" and watch the destination IP address alternate between the two real servers.
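If you prefer the command line, a small loop from any client machine shows the round-robin behaviour directly (a sketch, assuming curl is available on the client); the output should alternate between the two pages:
$ for i in $(seq 1 6); do curl -s http://www.pul.com/; done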