Tuesday, January 20, 2015

Installing & Configuring Linux Load Balancer Cluster (Direct Routing Method)

http://easylinuxtutorials.blogspot.com/2012/07/installing-configuring-linux-load.html


In Fedora, CentOS, and Red Hat Enterprise Linux, the IP load balancing solution is provided by a package called 'Piranha'.

Piranha load balances inbound IP network traffic (requests) and distributes it among a farm of real servers. The technique used to load balance the traffic is based on the Linux Virtual Server (LVS) tools.

The high availability Piranha provides is purely software based. Piranha also gives the system administrator a graphical user interface for management.

The Piranha monitoring tool is responsible for the following functions:
  • Heartbeating between the active and backup load balancers.
  • Checking the availability of the services on each of the real servers.
Components of the Piranha cluster software:
  • IPVS kernel module / LVS (the IPVS routing table is managed via the ipvsadm tool; a sketch of the equivalent manual commands follows below)
  • Nanny (monitors servers and services on the real servers in a cluster)
  • Pulse (controls the other daemons and handles failover between the IPVS routing boxes)
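
To make these components concrete, here is a minimal sketch of the kind of ipvsadm commands that Piranha issues under the hood, using the addresses from this setup (illustration only; Piranha maintains the IPVS table for you, so you never run these by hand):
[root@lbnode1 ~]# ipvsadm -A -t 10.32.29.140:80 -s rr                        # add virtual service, round-robin scheduler
[root@lbnode1 ~]# ipvsadm -a -t 10.32.29.140:80 -r 10.32.29.185:80 -g -w 1   # -g = gatewaying (direct routing), weight 1
[root@lbnode1 ~]# ipvsadm -a -t 10.32.29.140:80 -r 10.32.29.186:80 -g -w 1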
We will configure our computers, or nodes, as follows:
Load balancing will be handled by two Linux Virtual Server nodes (routing boxes).
We will install two or more web servers to be load balanced.

First of all, stop all the services that we do not need on the nodes.
[root@websrv1 ~]# service bluetooth stop && chkconfig --level 235 bluetooth off
[root@websrv1 ~]# service sendmail stop && chkconfig --level 235 sendmail off


We will modify the hosts file at /etc/hosts on each of the nodes in our setup.

[root@websrv1 ~]# vim /etc/hosts
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6

##### Web Servers IPs #####
10.32.29.185             websrv1.pul.com  websrv1
10.32.29.186             websrv2.pul.com  websrv2

##### Load Balancing Nodes IPs #####
10.32.29.174                 lbnode1.pul.com  lbnode1
10.32.29.178                 lbnode2.pul.com  lbnode2
##### Virtual IP / Service IP of Web Server #####
10.32.29.140             www.pul.com  www


Copy the /etc/hosts file to all the servers (this step is not required if you have DNS):
[root@websrv1 ~]# scp /etc/hosts websrv2:/etc
[root@websrv1 ~]# scp /etc/hosts lbnode1:/etc
[root@websrv1 ~]# scp /etc/hosts lbnode2:/etc


After copying the hosts file to all the nodes, we need to generate SSH keys.
[root@websrv1 ~]# ssh-keygen -t rsa
[root@websrv1 ~]# ssh-keygen -t dsa
[root@websrv1 ~]# cd /root/.ssh/
[root@websrv1 .ssh]# cat *.pub > authorized_keys
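
SSH is strict about permissions on the key files; if passwordless login fails later, make sure the .ssh directory and authorized_keys file are writable only by root:
[root@websrv1 .ssh]# chmod 700 /root/.ssh
[root@websrv1 .ssh]# chmod 600 /root/.ssh/authorized_keys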


Now copy the SSH keys to all the other nodes for passwordless login, which is required by the pulse daemon.
[root@websrv1 .ssh]# scp -r /root/.ssh/ websrv2:/root/
[root@websrv1 .ssh]# scp -r /root/.ssh/ lbnode1:/root/
[root@websrv1 .ssh]# scp -r /root/.ssh/ lbnode2:/root/


We can build up a global fingerprint list, saved into known_hosts so the first connection does not prompt, as follows:
[root@websrv1 .ssh]# ssh-keyscan -t rsa websrv1 websrv2 lbnode1 lbnode2 >> known_hosts
[root@websrv1 .ssh]# ssh-keyscan -t dsa websrv1 websrv2 lbnode1 lbnode2 >> known_hosts
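
Since the fingerprint list so far exists only on websrv1, a convenient extra step is to copy it out and then verify passwordless login to every node in one loop (assuming the hostnames resolve as configured above):
[root@websrv1 .ssh]# for h in websrv2 lbnode1 lbnode2; do scp known_hosts $h:/root/.ssh/; done
[root@websrv1 .ssh]# for h in websrv2 lbnode1 lbnode2; do ssh $h hostname; done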


Now we will configure the NTP service on all the nodes. We will make lbnode1 our NTP server.
[root@lbnode1 ~]# rpm -qa | grep ntp
ntp-4.3.3p1-9.el5.centos
chkfontpath-1.20.1-1.1


[root@lbnode1 ~]# vim /etc/ntp.conf
###Configuration for NTP server###
restrict 127.0.0.1
server 127.127.1.0 # local clock
fudge 127.127.1.0 stratum 10


[root@lbnode1 ~]# service ntpd start
[root@lbnode1 ~]# chkconfig ntpd on

Now we will set up the client-side configuration on websrv1.
[root@websrv1 ~]# vim /etc/ntp.conf
#restrict 127.0.0.1
#restrict -6 ::1
server 10.32.29.174
#server 0.centos.pool.ntp.org
#server 1.centos.pool.ntp.org
#server 2.centos.pool.ntp.org

#server 127.127.1.0 # local clock
#fudge 127.127.1.0 stratum 10


[root@websrv1 ~]# service ntpd start
[root@websrv1 ~]# chkconfig ntpd on 
[root@websrv1 ~]# ntpdate -u 10.32.29.174

Copy the same configuration file, /etc/ntp.conf, to the other two nodes, websrv2 and lbnode2.
[root@websrv1 ~]# scp /etc/ntp.conf websrv2:/etc
[root@websrv1 ~]# scp /etc/ntp.conf lbnode2:/etc

After copying, start and enable the NTP service on these nodes.
[root@websrv2 ~]# service ntpd start && chkconfig ntpd on
[root@lbnode2 ~]# service ntpd start && chkconfig ntpd on

Now we will sync the time on all the nodes against lbnode1 by typing the following command:
[root@websrv2 ~]# ntpdate -u 10.32.29.174
[root@lbnode2 ~]# ntpdate -u 10.32.29.174
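
We can confirm that each node is actually tracking lbnode1; 10.32.29.174 should appear in the "remote" column of the peer list:
[root@websrv2 ~]# ntpq -p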



Now we will set up our Linux Virtual Server nodes (lbnode1 & lbnode2) by installing the Piranha package. As noted above, Piranha includes the ipvsadm, nanny, and pulse daemons.
We will use yum to install Piranha on both nodes.
[root@lbnode1 ~]# yum install piranha -y
[root@lbnode2 ~]# yum install piranha -y
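
Piranha also ships the web-based management GUI mentioned earlier. If you would rather build lvs.cf through it instead of editing the file by hand, set a password for the GUI and start its service (by default it listens on port 3636):
[root@lbnode1 ~]# piranha-passwd
[root@lbnode1 ~]# service piranha-gui start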

Now we will edit the Linux Virtual Server configuration file at /etc/sysconfig/ha/lvs.cf.
[root@lbnode1 ~]# vim /etc/sysconfig/ha/lvs.cf
serial_no = 1
primary = 10.32.29.174
service = lvs
rsh_command = ssh
backup_active = 1
backup = 10.32.29.178
heartbeat = 1
heartbeat_port = 1050
keepalive = 2
deadtime = 10
network = direct
debug_level = NONE
monitor_links = 1
virtual server1 {
    active = 1
    address = 10.32.29.140 eth0:1
    port = 80
    send = "GET / HTTP/1.1\r\n\r\n"
    expect = "HTTP"
    load_monitor = uptime
    scheduler = rr
    protocol = tcp
    timeout = 10
    reentry = 180
    quiesce_server = 0
    server websrv1 {
        address = 10.32.29.185
        active = 1
        weight = 1
    }
    server websrv2 {
        address = 10.32.29.186
        active = 1
        weight = 1
    }
}

Now we will copy this configuration file to lbnode2.
[root@lbnode1 ~]# scp /etc/sysconfig/ha/lvs.cf lbnode2:/etc/sysconfig/ha/
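
Pulse expects lvs.cf to be identical on both directors; a quick checksum comparison confirms the copy:
[root@lbnode1 ~]# md5sum /etc/sysconfig/ha/lvs.cf
[root@lbnode1 ~]# ssh lbnode2 md5sum /etc/sysconfig/ha/lvs.cf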

Next we will enable packet forwarding on the LVS nodes and tune their ARP behaviour in /etc/sysctl.conf:
[root@lbnode1 ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2

[root@lbnode1 ~]# scp /etc/sysctl.conf lbnode2:/etc/

Run sysctl -p on both nodes to apply the settings:
[root@lbnode1 ~]# sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 4294967295
kernel.shmall = 268435456

[root@lbnode2 ~]# sysctl -p
(output identical to lbnode1)

We will start the pulse service on both LVS nodes and watch the log to confirm the cluster comes up (httpd on the web servers is installed and started in the next step):
[root@lbnode1 ~]# service pulse start
[root@lbnode1 ~]# chkconfig pulse on
[root@lbnode2 ~]# service pulse start
[root@lbnode2 ~]# chkconfig pulse on
[root@lbnode1 ~]# tail -f /var/log/messages

Now we will install and configure our web servers along with the arptables_jf package for direct routing, then start httpd.
[root@websrv1 ~]# yum install httpd arptables_jf -y
[root@websrv1 ~]# echo "Web Server 1" > /var/www/html/index.html
[root@websrv1 ~]# service httpd start && chkconfig httpd on

Now we will configure an Ethernet alias carrying the virtual IP on the first web server node.
[root@websrv1 ~]# ifconfig eth0:1 10.32.29.140 netmask 255.255.255.0 broadcast 10.32.29.255 up
[root@websrv1 ~]# echo "ifconfig eth0:1 10.32.29.140 netmask 255.255.255.0 broadcast 10.32.29.255 up" >> /etc/rc.local

Now we will do the same on the second web server node.
[root@websrv2 ~]# yum install httpd arptables_jf -y
[root@websrv2 ~]# echo "Web Server 2" > /var/www/html/index.html
[root@websrv2 ~]# service httpd start && chkconfig httpd on

Now we will configure the virtual IP alias on the second web server node.
[root@websrv2 ~]# ifconfig eth0:1 10.32.29.140 netmask 255.255.255.0 broadcast 10.32.29.255 up
[root@websrv2 ~]# echo "ifconfig eth0:1 10.32.29.140 netmask 255.255.255.0 broadcast 10.32.29.255 up" >> /etc/rc.local
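
We can confirm the alias is up on each real server before continuing:
[root@websrv2 ~]# ifconfig eth0:1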

Now we will configure arptables on our first web server node: drop incoming ARP requests for the virtual IP, and rewrite the ARP source address of outgoing replies to the node's own real IP (the arptables_jf recipe Red Hat documents for direct routing).
[root@websrv1 ~]# arptables -A IN -d 10.32.29.140 -j DROP
[root@websrv1 ~]# arptables -A OUT -s 10.32.29.140 -j mangle --mangle-ip-s 10.32.29.185
[root@websrv1 ~]# service arptables_jf save
[root@websrv1 ~]# chkconfig arptables_jf on

Now we will do the same on our second web server node, this time mangling to its own real IP.
[root@websrv2 ~]# arptables -A IN -d 10.32.29.140 -j DROP
[root@websrv2 ~]# arptables -A OUT -s 10.32.29.140 -j mangle --mangle-ip-s 10.32.29.186
[root@websrv2 ~]# service arptables_jf save
[root@websrv2 ~]# chkconfig arptables_jf on
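
The saved rules can be listed on each web server to double-check them:
[root@websrv2 ~]# arptables -L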

We have now set up our LVS and web server nodes; it is time to test whether everything is working.
[root@lbnode1 ~]# ipvsadm -L
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  www.pul.com:http rr
  -> websrv1.pul.com:http         Route   1      0          0
  -> websrv2.pul.com:http         Route   1      0          0
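
While clients are hitting the site, we can watch the connection counters update on the active director:
[root@lbnode1 ~]# watch -n1 ipvsadm -L -n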

Finally, open a web browser on any machine that can resolve www.pul.com, browse to http://www.pul.com, and keep refreshing the page; the content should alternate between "Web Server 1" and "Web Server 2", confirming round-robin load balancing.
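
The same check can be scripted from a shell on a client ("client" here is any hypothetical host that resolves www.pul.com, e.g. via the hosts entries above); the responses should alternate between the two pages:
[root@client ~]# for i in 1 2 3 4; do curl -s http://www.pul.com/; done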
