Wednesday, January 28, 2015

Oracle Fact Sheet

NUMBER(p,s): p specifies how many decimal digits the number can hold in total (precision); s specifies how many of those digits fall after the decimal point (scale).
DATE: date data type; it stores century, year, month, day, hour, minute and second.
VARCHAR2(size): variable-length string; size is the upper limit on the string length.

CONSTRAINT_TYPE (from 11gR2 docs)
C - Check constraint on a table
P - Primary key
U - Unique key
R - Referential integrity
V - With check option, on a view
O - With read only, on a view
H - Hash expression
F - Constraint that involves a REF column
S - Supplemental logging
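
As a minimal sketch that ties the two lists above together (the table name DEMO_T and the scott/tiger credentials are placeholders, not part of the original notes), you can create a table using these data types and then look up its constraint codes in USER_CONSTRAINTS:

sqlplus -s scott/tiger <<'EOF'
-- demo_t is a throw-away example table; replace credentials with your own schema
CREATE TABLE demo_t (
  id      NUMBER(10,0),
  created DATE,
  name    VARCHAR2(50) NOT NULL,
  CONSTRAINT demo_t_pk PRIMARY KEY (id)
);
-- the NOT NULL column shows up as a check constraint (type C), the primary key as type P
SELECT constraint_name, constraint_type
FROM   user_constraints
WHERE  table_name = 'DEMO_T';
DROP TABLE demo_t;
EOF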

Tuesday, January 27, 2015

Oracle Database SQL Developer - How to write and call a stored procedure?



Assume you have the stored procedure below:

create or replace PROCEDURE RICMAP_TABLE_CLEANUP(TABLENAME IN VARCHAR2, RETURN_CODE OUT NUMBER, RETURN_MESSAGE OUT VARCHAR2)
AUTHID DEFINER
AS
BEGIN
   EXECUTE IMMEDIATE 'TRUNCATE TABLE '||UPPER(TABLENAME);
 
   FOR CONSTRAINT_LIST IN (SELECT CONSTRAINT_NAME FROM USER_CONSTRAINTS WHERE TABLE_NAME = UPPER(TABLENAME) AND CONSTRAINT_TYPE = 'P')
   LOOP
   EXECUTE IMMEDIATE 'ALTER TABLE '||UPPER(TABLENAME)||' DISABLE CONSTRAINT '||CONSTRAINT_LIST.CONSTRAINT_NAME;
   END LOOP;
 
   FOR INDEX_LIST IN (SELECT INDEX_NAME FROM  USER_INDEXES WHERE TABLE_NAME = UPPER(TABLENAME))
   LOOP
   EXECUTE IMMEDIATE 'DROP INDEX '|| INDEX_LIST.INDEX_NAME;
   END LOOP;
   RETURN_CODE := 0;
   RETURN_MESSAGE := 'SUCCESS';
EXCEPTION
   WHEN OTHERS THEN
      RETURN_CODE := SQLCODE;
      RETURN_MESSAGE := SQLERRM;
END;


You can call the stored procedure as below:

select count (*) from stage_instruments;
var returnCode number;
var returnMessage varchar2(4000);
execute RICMAP_TABLE_CLEANUP('stage_instruments', :returnCode, :returnMessage)
select count (*) from stage_instruments;
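
If you want to run the same call from a shell instead of the SQL Developer worksheet, a rough equivalent is the script below (the scott/tiger credentials are a placeholder); PRINT displays the values of the two OUT parameters:

sqlplus -s scott/tiger <<'EOF'
VARIABLE returnCode NUMBER
VARIABLE returnMessage VARCHAR2(4000)
EXECUTE RICMAP_TABLE_CLEANUP('stage_instruments', :returnCode, :returnMessage)
PRINT returnCode
PRINT returnMessage
SELECT COUNT(*) FROM stage_instruments;
EOF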

Batch file name string concatenation




Inside your for loop you display the value of %attachment% using normal %-expansion, so that value is NOT updated in each iteration; you must use delayed expansion (!attachment!) to see the updated value.
The set of values in the for command is (%directory% *.xml), that is, the value C:\temp plus all files with the .xml extension (which I assume is none in the current directory). After that, you use this value in the %a% %directory%\%%n expression, so the result is -a C:\temp\C:\temp. I don't think that is what you intended.
If you don't want the folder value in the list, just don't insert it, and use the %~NXn modifier on the for replaceable parameter.
Below is the correct code:
copy *.xml C:\FTP
setLocal Enabledelayedexpansion
set "directory=C:\temp"
set "attachment= "
set "a= -a "
for %%n in ("%directory%\*.xml") DO ( 
set "attachment=!attachment! %a% %%~NXn "
echo.!attachment!
)
setlocal disabledelayedexpansion
echo.%attachment%

My Version
setLocal Enabledelayedexpansion
set "directory=E:\data_temp\~Ricky\testscript"
set "attachment= "
set "a= "
for %%n in ("%directory%\*.pcap") DO ( 
set "attachment=!attachment!%a%%%~NXn "
echo.!attachment!
)
setlocal disabledelayedexpansion
echo.%attachment%

PAUSE

Monday, January 26, 2015

Oracle Commands

How to check table space
select TABLESPACE_NAME, STATUS, CONTENTS from USER_TABLESPACES;

Where does the above answer come from?
http://dba.fyicenter.com/faq/oracle/Oracle-Tablespace-Unit-of-Logical-Storage.html

Funny Useful Linux Commands

Find Latest File
ls -altar
ls -Art | tail -n 1

Network port with process
netstat -natup

Output last modified file
cat `ls -t | head -1`
tail -f `ls -t | head -1`

monitor the directory
watch -n 0.1 find .
watch -n 0.1 ls -lR

echo $PATH


Tar
# tar cvzf MyImages-14-09-12.tar.gz /home/MyImages
# tar -zxvf tecmintbackup.tar.gz tecmintbackup.xml


umask
rpm -ivh rpmname.rpm

bzip2 -d man.config.bz2




Remove smf
rpm -e smf-1.2

List all installed rpms
rpm -qa

How to check cpu?
top

Wednesday, January 21, 2015

Building a Load-Balancing Cluster with LVS

http://dak1n1.com/blog/13-load-balancing-lvs


This article is part 3 of a series about building a high-performance web cluster powerful enough to handle 3 million requests per second. For information on finding that perfect load-testing tool, and tuning your web servers, see the previous articles.
So you've tuned your web servers, and their network stacks. You're getting the best raw network performance you've seen using 'iperf' and 'netperf', and Tsung is reporting that your web servers are serving up 500,000+ static web pages per second. That's great!
Now you're ready to begin the cluster install. 
Red Hat has some excellent documentation on this already, so I recommend checking that out if you get lost. Don't worry though; we'll cover every step you need to build the cluster in this tutorial.

LVS Router installation

This single machine will act as a router, balancing TCP traffic evenly across all the nodes in your cluster. Choose a machine to act in this role, and complete the steps below. You can afford to use your weakest machine for this purpose, since routing IP traffic requires very little in the way of system resources.
1. Install LVS on the LVS Router machine.
yum groupinstall "Load Balancer"
chkconfig piranha-gui on
chkconfig pulse on
2. Set a password for the web ui
/usr/sbin/piranha-passwd
3. Allow ports in iptables
vim /etc/sysconfig/iptables
-A INPUT -m state --state NEW -m tcp -p tcp --dport 3636 -j ACCEPT
4. Start the web gui
service piranha-gui start
-> Don't start pulse until Piranha configuration is complete!
5. Turn on packet forwarding.
vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
sysctl -p /etc/sysctl.conf
6. Configure services on the Real Servers (webservers).
[root@webservers ~] service nginx start

Direct Routing Configuration

1. On the LVS Router, log in to the Piranha web ui to begin configuration.
[Screenshot: lvs01]

In the Global Settings section, notice that Direct Routing is the default. This is the option we'll want to use in order to achieve the best performance. It allows our web servers to reply directly to requests sent to the cluster IP address (Virtual IP).
2. Click the VIRTUAL SERVERS tab to create the virtual web server. This "server" is actually your collective web cluster. It allows your nodes to act as one, responding together as if they were a single web server, hence the name "virtual server".
Click ADD, then EDIT.
[Screenshot: lvs02]
3. Editing the Virtual Server. Choose a cluster IP to use for the Virtual IP (not the IP of any real machine). And choose a device to attach that Virtual IP to.
Click ACCEPT when finished. The webpage will not refresh, but your data will be saved.  
Click REAL SERVER to configure the next part.
[Screenshot: lvs03]
 4. Real Server configuration. This page allows you to define the physical machines, or Real Servers, behind the web cluster.
ADD all of your http servers here, then EDIT those Real Servers to insert the details.
Click ACCEPT when finished.
To get back to the previous page, click VIRTUAL SERVER, then REAL SERVER. 
After all nodes are added to the REAL SERVER section, select each one and click (DE)ACTIVATE to activate them.
[Screenshot: lvs06]
 5. Now that all the Real Servers have been added and activated, return to the VIRTUAL SERVERS page.
Click (DE)ACTIVATE to activate the Virtual Server. 
[Screenshot: lvs07]
Router configuration complete! You can now exit the web browser, start up pulse, and continue on to configure the physical nodes.
service pulse start
Check 'ipvsadm' to see the cluster come online.
[root@lvsrouter ~]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
 -> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.122.10:http wlc
 -> 192.168.122.1:http Route 1   0 0
 -> 192.168.122.2:http Route 1   0 0
 -> 192.168.122.3:http Route 1   0 0

Direct Routing - Real Server node configuration

Complete these steps on each of the web servers in the cluster.
1. Create a Virtual IP address on each of the Real Servers.
ip addr add 192.168.122.10 dev eth0:1
Since this IP address needs to be brought up after boot time, add this to /etc/rc.local.
vim /etc/rc.local
ip addr add 192.168.122.10 dev eth0:1
2. Create an arptables entry for each Virtual IP address on each Real Server.
This will cause the Real Servers to ignore all ARP requests for the Virtual IP addresses, and change any outgoing ARP responses which might otherwise contain the Virtual IP, so that they contain the real IP of the server instead. The LVS Router is the only node that should respond to ARP requests for any of the cluster's Virtual IPs (VIPs).
yum -y install arptables_jf
arptables -A IN -d <cluster-ip-address> -j DROP
arptables -A OUT -s <cluster-ip-address> -j mangle --mangle-ip-s <realserver-ip-address>
3. Once this has been completed on each Real Server, save the ARP table entries.
service arptables_jf save 
chkconfig --level 2345 arptables_jf on
4. Test. If arptables is functioning as it should, the LVS Router is the only machine that should respond to pings of the Virtual IP. Make sure pulse is shut off, then ping the Virtual IP from any of the cluster nodes.
If a machine does respond to ping, you can look in your arp table to find the misbehaving node. 
ping 192.168.122.10
arp | grep 192.168.122.10
This will reveal the node's MAC address and allow you to track it down.
Another useful test is to simply request a page from the cluster using 'curl', and then watch for the traffic on the LVS Router with 'ipvsadm'.
[root@lvsrouter ~]# watch ipvsadm
[user@outside ~]$ curl http://192.168.122.10/test.txt 

Cluster load testing with Tsung

Now that the cluster is up and running, you can see just how powerful it is by putting it through a strenuous load test. See this article for information on setting up Tsung to generate the right amount of traffic for your cluster. 
[root@loadnode1 ~] tsung start
Starting Tsung
"Log directory is: /root/.tsung/log/20120421-1004"
Leave this for at least 2 hours. It takes a long time to ramp up all those connections to achieve the peak amount of http requests per second. During that time, you can watch the load of your cluster machines using htop, to see individual core utilization.
Assuming you have EPEL & RPMforge repos installed...
yum -y install htop cluster-ssh
cssh node1 node2 node3 ...
htop
[Screenshot: cluster load at 2 million requests per second]
You'll be able to see that the LVS Router is actually doing very little work, while the http servers are chugging along at top speed, responding to requests as quickly as they can.
Be sure to keep your Load Average slightly less than the number of CPUs in the system. (For example, on my 24-core systems, I try to keep the load at 23 or less.) That will ensure that CPUs are being well-utilized without getting backed up. 
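For reference, a quick way to compare the current load against the number of CPUs on a node:
uptime    # the last three numbers are the 1-, 5- and 15-minute load averages
nproc     # number of CPU cores; keep the load average a little below this figure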
After Tsung has finished, view the report to see the details of your cluster's performance.
 cd /root/.tsung/log/20120421-1004
/usr/lib/tsung/bin/tsung_stats.pl
firefox report.html

Tuesday, January 20, 2015

Installing & Configuring Linux Load Balancer Cluster (Direct Routing Method) Updated




Installing & Configuring Linux Load Balancer Cluster (Direct Routing Method)

In Fedora, CentOS, and Red Hat Enterprise Linux, an IP load-balancing solution is provided by a package called ‘Piranha’.

Piranha offers the facility for load balancing inbound IP network traffic (requests) and distributing this traffic among a farm of server machines. The technique used to load balance IP network traffic is based on the Linux Virtual Server tools.

This high availability is purely software-based, provided by Piranha. Piranha also provides the system administrator with a graphical user interface tool for management.

The Piranha monitoring tool is responsible for the following functions:
Heartbeating between active and backup load balancers.
Checking the availability of the services on each of the real servers.


Components of the Piranha cluster software (a quick runtime check is sketched after this list):
IPVS kernel, LVS (manages the IPVS routing table via the ipvsadm tool)
Nanny (monitors the servers & services on the real servers in a cluster)
Pulse (controls the other daemons and handles failover between the IPVS routing boxes).
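
Once pulse has been started (later in this guide), a quick way to confirm that these components are actually running on the active LVS node is to look for the daemons in the process list; the same names (pulse, lvsd, nanny) appear in the log output shown near the end of this walkthrough:

[root@lbnode1 ~]# ps -ef | egrep 'pulse|lvsd|nanny' | grep -v grep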

We will configure our computers, or nodes, as follows:
Our load balancing will be done using 2 Linux Virtual Server nodes, or routing boxes.

We will install two or more web servers for load balancing.


--------

First of all stop all the services that we don’t need to run on the nodes.

[root@websrv1 ~]# service bluetooth stop && chkconfig --level 235 bluetooth off
[root@websrv1 ~]# service sendmail stop && chkconfig --level 235 sendmail off

We will modify our hosts configuration file at /etc/hosts on each of the nodes in our setup


This load-balancing scheme requires 4 VM instances, each having a single network card on the same subnet.

Make sure all machines have the correct host names. The command prompts should look like the following.

[root@websrv1 ~]#
[root@websrv2 ~]#
[root@lbnode1 ~]#
[root@lbnode2 ~]#

Edit the hosts file using vim

[root@websrv1 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.32.29.140    www.pul.com     www
10.32.29.174    lbnode1.pul.com lbnode1
10.32.29.178    lbnode2.pul.com lbnode2
10.32.29.185    websrv1.pul.com websrv1
10.32.29.186    websrv2.pul.com websrv2

Copy the /etc/hosts file to all the servers (This step is not required if you have DNS)

[root@websrv1 ~]# scp /etc/hosts websrv2:/etc
[root@websrv1 ~]# scp /etc/hosts lbnode1:/etc
[root@websrv1 ~]# scp /etc/hosts lbnode2:/etc


After copying the hosts file to all the nodes, we need to generate SSH keys.

[root@websrv1 ~]# ssh-keygen -t rsa
[root@websrv1 ~]# ssh-keygen -t dsa
[root@websrv1 ~]# cd /root/.ssh/
[root@websrv1 .ssh]# cat *.pub > authorized_keys

Now copy the SSH keys to all other nodes for passwordless login, which is required by the pulse daemon.

[root@websrv1 .ssh]# scp -r /root/.ssh/ websrv2:/root/
[root@websrv1 .ssh]# scp -r /root/.ssh/ lbnode1:/root/
[root@websrv1 .ssh]# scp -r /root/.ssh/ lbnode2:/root/

We can build up a global fingerprint list (saved into known_hosts) as follows:
[root@websrv1 .ssh]# ssh-keyscan -t rsa websrv1 websrv2 lbnode1 lbnode2 >> /root/.ssh/known_hosts
[root@websrv1 .ssh]# ssh-keyscan -t dsa websrv1 websrv2 lbnode1 lbnode2 >> /root/.ssh/known_hosts
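
As a quick sanity check (not part of the original steps), confirm that passwordless SSH now works from websrv1 to every other node; each command should print the remote host name without prompting for a password:

[root@websrv1 .ssh]# for h in websrv2 lbnode1 lbnode2; do ssh $h hostname; done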

--------

Now we will configure the NTP service on all the nodes. We will make LBNODE1 our NTP server.

[root@lbnode1 ~]# rpm -qa | grep ntp
ntpdate-4.2.6p5-1.el6.x86_64
fontpackages-filesystem-1.41-1.1.el6.noarch
ntp-4.2.6p5-1.el6.x86_64

Edit the NTP config file on the NTP server, lbnode1, so that it contains the following:

[root@lbnode1 ~]# cat /etc/ntp.conf | egrep -v "(^#.*|^$)"
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
server 127.127.1.0 # local clock
fudge 127.127.1.0 stratum 10
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys

Start the ntp server on lbnode1

[root@lbnode1 ~]# service ntpd start && chkconfig ntpd on

The NTP server is now set up successfully.

Now we will do the client-side configuration on WEBSRV1, the NTP client.

[root@websrv1 ~]# cat /etc/ntp.conf | egrep -v "(^#.*|^$)"
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
server 10.32.29.174
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
#restrict 127.0.0.1
#restrict -6 ::1
#server 0.centos.pool.ntp.org
#server 1.centos.pool.ntp.org
#server 2.centos.pool.ntp.org
#server 127.127.1.0 # local clock
#fudge 127.127.1.0 stratum 10

Now start the NTP service on the client, websrv1.

[root@websrv1 ~]# service ntpd start && chkconfig ntpd on

Synchronize the time by specifying the IP address of lbnode1 (the NTP server).

[root@websrv1 ~]# ntpdate -u 10.32.29.174

Copy the same configuration file, /etc/ntp.conf, to the other two nodes, websrv2 and lbnode2.
 

[root@websrv1 ~]# scp /etc/ntp.conf websrv2:/etc
[root@websrv1 ~]# scp /etc/ntp.conf lbnode2:/etc

After copying, restart the ntp service on these nodes

[root@websrv2 ~]# service ntpd start && chkconfig ntpd on
[root@lbnode2 ~]# service ntpd start && chkconfig ntpd on

Now we will update the time on all the nodes by typing the following commands:

[root@websrv2 ~]# ntpdate -u 10.32.29.174
[root@lbnode2 ~]# ntpdate -u 10.32.29.174
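
Optionally, verify that each client is actually tracking lbnode1 by listing its NTP peers; 10.32.29.174 should appear in the output:

[root@websrv1 ~]# ntpq -p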

--------


Edit the sysctl config file as below

[root@lbnode1 ~]# cat /etc/sysctl.conf | egrep -v "(^#.*|^$)"
net.ipv4.ip_forward = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296

Copy same sysctl config file from lbnode1 to lbnode2

[root@lbnode1 ~]# scp /etc/sysctl.conf lbnode2:/etc/

Run sysctl -p on lbnode1

[root@lbnode1 ~]# sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
[root@lbnode1 ~]#

Run sysctl -p on lbnode2

[root@lbnode2 ~]# sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296

--------

Now we will set up our Linux Virtual Servers (LBNODE1 & LBNODE2) by installing the Piranha package. We already know that Piranha includes ipvsadm, nanny and the pulse daemon.

We will use yum to install Piranha on both nodes.

[root@lbnode1 ~]# yum install piranha -y
[root@lbnode2 ~]# yum install piranha -y

Now we will configure the Linux Virtual Server configuration file at /etc/sysconfig/ha/lvs.cf

[root@lbnode1 ~]# cat /etc/sysconfig/ha/lvs.cf
serial_no = 12
primary = 10.32.29.174
service = lvs
rsh_command = ssh
backup_active = 1
backup = 10.32.29.178
heartbeat = 1
heartbeat_port = 1050
keepalive = 2
deadtime = 10
network = direct
debug_level = NONE
monitor_links = 1
syncdaemon = 0
virtual server1 {
     active = 1
     address = 10.32.29.140 eth0:1
     port = 80
     send = "GET / HTTP/1.1\r\n\r\n"
     expect = "HTTP"
     use_regex = 0
     load_monitor = none
     scheduler = rr
     protocol = tcp
     timeout = 10
     reentry = 180
     quiesce_server = 0
     server websrv1 {
         address = 10.32.29.185
         active = 1
         weight = 1
     }
     server websrv2 {
         address = 10.32.29.186
         active = 1
         weight = 1
     }
}
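
The send/expect pair above is the health check that nanny runs against each real server: it sends a minimal HTTP request and checks that the reply contains the string "HTTP". Once the web servers are up, you can approximate the same probe by hand with curl; each command should print a status line beginning with HTTP/1.1:

[root@lbnode1 ~]# curl -sI http://10.32.29.185/ | head -1
[root@lbnode1 ~]# curl -sI http://10.32.29.186/ | head -1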

Set up a password for Piranha web access. An empty password is used here.

service httpd start
/sbin/service piranha-gui start


[root@lbnode1 ~]# piranha-passwd
New Password:
Verify:
Updating password for user piranha

Use a web browser (e.g. Chrome) to log in remotely to the Piranha web configuration tool at http://10.32.29.174:3636/
Login name: piranha
Password: <Empty>

Go to the “Global Settings” page and verify that “Primary server public IP” is 10.32.29.174 (lbnode1).
Go to the “Redundancy” page and verify that the Redundant server public IP is 10.32.29.178 (lbnode2).

Go to “Virtual Servers” and verify that a server named “server1” has 10.32.29.140 (our virtual IP address).

Go to “Virtual Servers” and then “Virtual Server”, and verify that the virtual IP address is 10.32.29.140 and is attached to eth0:1.

Go to “Virtual Servers” and then “Real Server”, and verify that the real IP addresses are the same as those of the web servers.



Now we will copy this configuration file to lbnode2.

[root@lbnode1 ~]# scp /etc/sysconfig/ha/lvs.cf lbnode2:/etc/sysconfig/ha/

--------

Run the arptables scripts below on the web servers.

[root@websrv1 ~]# cd ~
[root@websrv1 ~]# cat arp_arptables.sh
#!/bin/bash
VIP=10.32.29.140
RIP=10.32.29.185
arptables -F
arptables -A IN -d $VIP -j DROP
arptables -A OUT -s $VIP -j mangle --mangle-ip-s $RIP
/sbin/ifconfig eth0:1 $VIP broadcast $VIP netmask 255.255.255.0 up
/sbin/route add -host $VIP dev eth0:1
[root@websrv1 ~]# . arp_arptables.sh

[root@websrv2 ~]# cd ~
[root@websrv2 ~]# cat arp_arptables.sh
#!/bin/bash
VIP=10.32.29.140
RIP=10.32.29.186
arptables -F
arptables -A IN -d $VIP -j DROP
arptables -A OUT -s $VIP -j mangle --mangle-ip-s $RIP
/sbin/ifconfig eth0:1 $VIP broadcast $VIP netmask 255.255.255.0 up
/sbin/route add -host $VIP dev eth0:1
[root@websrv2 ~]# . arp_arptables.sh
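
Note that neither the arptables rules nor the eth0:1 alias created by this script survive a reboot. One simple way to make them persistent, in the same spirit as the rc.local entries used elsewhere in this guide, is to run the script from /etc/rc.local on each web server, for example:

[root@websrv1 ~]# chmod +x /root/arp_arptables.sh
[root@websrv1 ~]# echo "/root/arp_arptables.sh" >> /etc/rc.local

and likewise on websrv2.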

--------

Optional. There is a high chance that you will not need this section.

Now we will install and configure our web servers and the arptables_jf package for direct routing.

[root@websrv1 ~]# yum install httpd arptables_jf -y

Now we will configure the Ethernet interfaces for virtual IP on first web server node.

[root@websrv1 ~]# ifconfig eth0:1 10.32.29.140 netmask 255.255.255.0 broadcast 10.32.29.255 up

[root@websrv1 ~]# echo "ifconfig eth0:1 10.32.29.140 netmask 255.255.255.0 broadcast 10.32.29.255 up" >> /etc/rc.local

Now we will do it on the second web server node.

[root@websrv2 ~]# yum install httpd arptables_jf -y

Now we will configure the Ethernet interfaces for virtual IP on second web server node.

[root@websrv2 ~]# ifconfig eth0:1 10.32.29.140 netmask 255.255.255.0 broadcast 10.32.29.255 up

[root@websrv2 ~]# echo "ifconfig eth0:1 10.32.29.140 netmask 255.255.255.0 broadcast 10.32.29.255 up" >> /etc/rc.local


Now we will configure our arptables on our first web server node.

[root@websrv1 ~]# arptables -A IN -d 10.32.29.140 -j DROP
[root@websrv1 ~]# arptables -A OUT -s 10.32.29.140 -j mangle --mangle-ip-s 10.32.29.185
[root@websrv1 ~]# service arptables_jf save
[root@websrv1 ~]# chkconfig arptables_jf on


Now we will configure our arptables on our second web server node.

[root@websrv2 ~]# arptables -A IN -d 10.32.29.140 -j DROP
[root@websrv2 ~]# arptables -A OUT -s 10.32.29.140 -j mangle --mangle-ip-s 10.32.29.186
[root@websrv2 ~]# service arptables_jf save
[root@websrv2 ~]# chkconfig arptables_jf on

--------

Install the ipvsadm, piranha and httpd packages on lbnode1

[root@lbnode1 ~]# yum install -y ipvsadm piranha httpd
Loaded plugins: refresh-packagekit, security
Setting up Install Process
Package ipvsadm-1.26-4.el6.x86_64 already installed and latest version
Package piranha-0.8.6-4.0.1.el6_5.2.x86_64 already installed and latest version
Package httpd-2.2.15-39.0.1.el6.x86_64 already installed and latest version
Nothing to do

Install the ipvsadm, piranha and httpd packages on lbnode2

[root@lbnode2 ~]# yum install -y ipvsadm piranha httpd
Loaded plugins: refresh-packagekit, security
Setting up Install Process
Package ipvsadm-1.26-4.el6.x86_64 already installed and latest version
Package piranha-0.8.6-4.0.1.el6_5.2.x86_64 already installed and latest version
Package httpd-2.2.15-39.0.1.el6.x86_64 already installed and latest version
Nothing to do

We will start httpd on web server 1.

[root@websrv1 ~]# /etc/init.d/httpd start && chkconfig httpd on
[root@websrv1 ~]# echo "Web Server 1" > /var/www/html/index.html

We will start httpd on web server 2.

[root@websrv2 ~]# /etc/init.d/httpd start && chkconfig httpd on
[root@websrv2 ~]# echo "Web Server 2" > /var/www/html/index.html

We will start the pulse service on both LVS nodes:

[root@lbnode1 ~]# service pulse start && chkconfig pulse on
[root@lbnode1 ~]# tail -f /var/log/messages

Jan 20 15:30:34 lbnode1 pulse[5084]: STARTING PULSE AS MASTER
Jan 20 15:30:35 lbnode1 pulse[5084]: backup inactive: activating lvs
Jan 20 15:30:35 lbnode1 lvsd[5086]: starting virtual service server1 active: 80
Jan 20 15:30:35 lbnode1 lvsd[5086]: create_monitor for server1/websrv1 running as pid 5090
Jan 20 15:30:35 lbnode1 lvsd[5086]: create_monitor for server1/websrv2 running as pid 5091
Jan 20 15:30:35 lbnode1 nanny[5091]: starting LVS client monitor for 10.32.29.140:80 -> 10.32.29.186:80
Jan 20 15:30:35 lbnode1 nanny[5091]: [ active ] making 10.32.29.186:80 available
Jan 20 15:30:35 lbnode1 nanny[5090]: starting LVS client monitor for 10.32.29.140:80 -> 10.32.29.185:80
Jan 20 15:30:35 lbnode1 nanny[5090]: [ active ] making 10.32.29.185:80 available
Jan 20 15:30:40 lbnode1 pulse[5093]: gratuitous lvs arps finished

[root@lbnode2 ~]# service pulse start && chkconfig pulse on
Starting pulse:
[root@lbnode2 ~]# tail -f /var/log/messages

Jan 20 15:25:09 lbnode2 NetworkManager[1383]: <info> (eth0): device state change: 7 -> 8 (reason 0)
Jan 20 15:25:09 lbnode2 NetworkManager[1383]: <info> Policy set 'System eth0' (eth0) as default for IPv4 routing and DNS.
Jan 20 15:25:09 lbnode2 NetworkManager[1383]: <info> Activation (eth0) successful, device activated.
Jan 20 15:25:09 lbnode2 NetworkManager[1383]: <info> Activation (eth0) Stage 5 of 5 (IP Configure Commit) complete.
Jan 20 15:25:09 lbnode2 ntpd[1610]: Deleting interface #6 eth0:1, 10.32.29.140#123, interface stats: received=0, sent=0, dropped=0, active_time=551 secs
Jan 20 15:25:09 lbnode2 ntpd[1610]: peers refreshed
Jan 20 15:28:56 lbnode2 pulse[2635]: Terminating due to signal 15
Jan 20 15:28:59 lbnode2 pulse[3172]: STARTING PULSE AS BACKUP
Jan 20 15:30:41 lbnode2 pulse[3172]: Terminating due to signal 15
Jan 20 15:30:41 lbnode2 pulse[3211]: STARTING PULSE AS BACKUP

--------

We have managed to set up our LVS and web server nodes; now it is time to test whether everything is working.

[root@lbnode1 ~]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  www.pul.com:http rr
  -> websrv1.pul.com:http         Route   1      0          0
  -> websrv2.pul.com:http         Route   1      0          0

[root@lbnode2 ~]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

[root@lbnode2 ~]# watch ipvsadm


Finally, open a web browser from any machine and go to http://www.pul.com, then keep refreshing the page; we will get page contents alternately from Web Server 1 and Web Server 2.
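
If you prefer to test from the command line (from any machine whose /etc/hosts contains the www.pul.com entry), a small loop makes the round-robin behaviour obvious; the output should alternate between "Web Server 1" and "Web Server 2":

for i in 1 2 3 4; do curl -s http://www.pul.com/; done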


--------

After refreshing the client browser with the F5 key, you can run "ipvsadm --list" on the master load balancer. You will see the InActConn numbers changing.

Alternatively, you can use "ipvsadm -Lnc" and see the destination IP address changing between the two real servers.




