Tuesday, November 10, 2009

CHROOT-BIND

Before starting
This tutorial is written "by a beginner, for a beginner". NOT FOR PRODUCTION USE.

Why we are using CHROOT-BIND

The idea behind running BIND in a chroot jail is to limit the amount of access any malicious individual could gain by hacking BIND.
It is for the same reason that we run BIND as a non-root user.



CHROOT-BIND configuration
========================================
/var/named/chroot/ will be the root ("/") directory for named

i.e., /etc/named.conf is really /var/named/chroot/etc/named.conf
and /var/named/ is really /var/named/chroot/var/named/

KEEP THIS IN MIND EVERY TIME:
we will not refer to the original location (/var/named/chroot/etc/named.conf);
we will refer to it as /etc/named.conf
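On RHEL/CentOS the bind-chroot package creates this layout for you (yum install bind bind-chroot). If you are building it by hand, a rough sketch of the directories (names assumed from this tutorial) would be:

# mkdir -p /var/named/chroot/etc
# mkdir -p /var/named/chroot/var/named/zones/internal
# mkdir -p /var/named/chroot/var/run/named
# chown -R named:named /var/named/chroot/var/named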

=========================================
/etc/named.conf
=========================================
options {
    directory "/var/named";
};

controls {
    inet 127.0.0.1 allow { localhost; 127.0.0.1; } keys { rndckey; };
};

acl "safe-subnet" { 10.10.40.0/24; };

view "internal" {
    match-clients { localnets; localhost; safe-subnet; };
    match-destinations { localnets; localhost; safe-subnet; };
    recursion yes;

    zone "." IN {
        type hint;
        file "named.ca";
    };

    zone "localhost" IN {
        type master;
        file "localhost.zone";
        allow-update { none; };
    };

    zone "0.0.127.in-addr.arpa" IN {
        type master;
        file "named.local";
        allow-update { none; };
    };

    include "/var/named/zones/internal/internal_zones.conf";
};

include "/etc/rndc.key";
============================================
1. The "options" section contains only one directive, pointing to the
directory where default zone files are stored.

2. The "controls" section is used to administer the named daemon with "rndc".

3. The "acl" defines an access control list (just like in a Squid proxy).

4. The "view" sections are used to separate internal and external clients.
We have only an internal view, so only the internal network can query the name server.

5. The three zone definitions are needed on any working name server:
a) The first zone contains the root hints (the 13 root servers)
b) The forward zone file for localhost
c) The reverse zone file for localhost

6. The first include directive points to our internal zones (internal view).

7. The last include directive is used to administer the named daemon with rndc.
The file contains an algorithm name and an MD5 key used to authenticate rndc
commands, including those sent from a remote server.
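Before moving on, you can sanity-check this file with named-checkconf, which ships with BIND; its -t option makes it read paths relative to the chroot:

# named-checkconf -t /var/named/chroot /etc/named.conf

No output means the syntax is clean.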

===============================================
Our Zone file definitions
===============================================
/var/named/zones/internal/internal_zones.conf
===============================================
zone "40.10.10.in-addr.arpa" IN {
type master;
file "/var/named/zones/internal/10-10-40.zone";
allow-update { none; };
};

zone "ansil.com" IN {
type master;
file "/var/named/zones/internal/ansil.com.zone";
allow-update { none; };
};
==============================================
/var/named/zones/internal/10-10-40.zone
==============================================
$TTL 3D
@       IN      SOA     ns1.ansil.com. root.ansil.com. (
                        200911101       ; serial number
                        8H              ; refresh, seconds
                        2H              ; retry, seconds
                        4W              ; expire, seconds
                        1D )            ; minimum, seconds

        IN      NS      ns1.ansil.com.  ; name server for this zone

211     IN      PTR     www.ansil.com.
=============================================
/var/named/zones/internal/ansil.com.zone
=============================================
$TTL 3D
@       IN      SOA     ns1.ansil.com. root.ansil.com. (
                        200911101       ; serial number
                        3600            ; refresh, seconds
                        3600            ; retry, seconds
                        3600            ; expire, seconds
                        3600 )          ; minimum, seconds

        IN      NS      ns1.ansil.com.  ; name server for this zone
        IN      MX      10 mail         ; primary mail exchanger

localhost       IN      A       127.0.0.1
www             IN      A       10.10.40.211
mail            IN      A       10.10.40.212
ns1             IN      A       10.10.40.213
=============================================
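Both zone files can be checked with named-checkzone (also part of BIND); note that we use the real on-disk paths here, not the chroot-relative ones:

# named-checkzone 40.10.10.in-addr.arpa /var/named/chroot/var/named/zones/internal/10-10-40.zone
# named-checkzone ansil.com /var/named/chroot/var/named/zones/internal/ansil.com.zone

=============================================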
/etc/rndc.key
=============================================
You can create your own rndc key file using
# rndc-confgen > /etc/rndc.key
The file will look like this:

key "rndckey" {
algorithm hmac-md5;
secret "YKPl5gxHe1d2J6kyjDGZFg==";
};

options {
default-key "rndckey";
default-server 127.0.0.1;
default-port 953;
};

We will not use the "options" section here, because we already configured
the same thing in the "controls" section of named.conf. So keep only the "rndckey" entry.
The final /etc/rndc.key will look like this:
============================================
key "rndckey" {
algorithm hmac-md5;
secret "YKPl5gxHe1d2J6kyjDGZFg==";
};
============================================
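Once named is running, rndc can be used to control it, for example:

# rndc status
# rndc reload
# rndc flush

rndc status prints the server status, reload re-reads the zones, and flush clears the cache.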
Finally, you have to copy some sample files from /usr/share/doc/bind-9.3.3/sample/var/named/
to /var/named/ (inside the chroot):

1. named.root
2. named.local
3. localhost.zone

Rename named.root to named.ca

Open /etc/resolv.conf and add
nameserver 127.0.0.1
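Putting those last steps together (the sample directory depends on your BIND version; bind-9.3.3 is assumed here, and the real on-disk chroot path is used):

# cp /usr/share/doc/bind-9.3.3/sample/var/named/named.root /var/named/chroot/var/named/named.ca
# cp /usr/share/doc/bind-9.3.3/sample/var/named/named.local /var/named/chroot/var/named/
# cp /usr/share/doc/bind-9.3.3/sample/var/named/localhost.zone /var/named/chroot/var/named/
# echo "nameserver 127.0.0.1" >> /etc/resolv.conf
# service named restart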
===========================================

Check your configuration using the "host" command:

1. Forward lookup
# host www.ansil.com
www.ansil.com has address 10.10.40.211

2. Reverse lookup
# host 10.10.40.211
211.40.10.10.in-addr.arpa domain name pointer www.ansil.com.

The reverse internal zone file contains only one pointer, mapping 10.10.40.211 back to www.ansil.com.
The forward internal zone file contains several kinds of entries:
1. MX record for the mail exchanger
2. NS record for the name server
3. A records, which give the IP address of each host name
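You can also query those record types individually with host's -t option; given the zone data above, the output should look like:

# host -t mx ansil.com
ansil.com mail is handled by 10 mail.ansil.com.
# host -t ns ansil.com
ansil.com name server ns1.ansil.com.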

Tuesday, November 3, 2009

Clustering Apache Using RHCS (Red Hat Cluster Suite)

How to Install Red Hat Cluster Suite

For a cluster setup you need minimum two machines

1. Node1

2. Node2

In this setup we will install cluster management software on another machine called Manager

The machine names and IPs are

1. node1.ansil.com --- 10.10.40.212

2. node2.ansil.com --- 10.10.40.213

3. manager.ansil.com --- 10.10.40.211

Be sure that you can reach these machines by their fully qualified names (FQDN). Use either a DNS server or flat entries in /etc/hosts.
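If you do not have a DNS server, entries like these in /etc/hosts (on all three machines) will do:

10.10.40.211    manager.ansil.com    manager
10.10.40.212    node1.ansil.com      node1
10.10.40.213    node2.ansil.com      node2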

Management Server

1. Configure YUM

2. Install Cluster management software Luci

3. Initialise Luci cluster manager

4. Set password for Luci admin login

5. Start Luci service

1. Configure YUM

First you have to set up a YUM repository on the management server. You can find the procedure for creating one in an earlier post.

2. Install Cluster management software Luci

# yum install luci*

3. Initialise Luci cluster manager

# luci_admin init

4. Set password for Luci admin login

Enter password for ‘Admin’ user

Enter password:

Reenter password:

5. Start Luci service

# service luci start

After starting service you can access Luci cluster management using

https://10.10.40.211:8084



Enter "admin" as the user name, with the password you set while initializing Luci.

Friday, October 23, 2009

How to Configure YUM



Most Linux beginners face a big problem when deploying a package: dependency failures. To resolve this issue Red Hat introduced a technology called YUM.
YUM is a repository system that holds information about the packages and the files inside every RPM.
Don't worry about the Red Hat repository. Create your own YUM repository on a local system and enjoy.
Here we go.

1. All CDs used for the Red Hat installation
2. A 3 GB partition formatted as ext3
3. The FTP service (vsftpd)
4. The createrepo package

Create a directory in / called yum

# mkdir /yum

Create one 3 GB partition using fdisk.
After creating the partition, reboot the system or use the partprobe command to reread the partition table.

Format partition

# mkfs.ext3 /dev/sdX

Mount the newly created partition to /yum

# mount /dev/sdX /yum

Add the same to /etc/fstab so it is mounted at system startup.
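For example, the fstab entry might look like this (sdX is a placeholder for your real partition):

/dev/sdX    /yum    ext3    defaults    0 0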

Mount your 1st CD to /mnt

# mount /dev/cdrom /mnt

Copy Server, Cluster, ClusterStorage, VT directories to /yum

# cp -ai /mnt/Server /yum
# cp -ai /mnt/Cluster /yum
# cp -ai /mnt/ClusterStorage /yum
# cp -ai /mnt/VT /yum

Unmount the 1st CD and mount the 2nd CD to /mnt

# umount /dev/cdrom

Insert the 2nd CD

# mount /dev/cdrom /mnt

Copy all the contents of each directory on the 2nd CD into the matching directory under /yum:

# cp -ai /mnt/Server/* /yum/Server/
# cp -ai /mnt/Cluster/* /yum/Cluster/
# cp -ai /mnt/ClusterStorage/* /yum/ClusterStorage/
# cp -ai /mnt/VT/* /yum/VT/


Copy the group (comps) XML file from each repodata directory to /tmp:

# cp /yum/Server/repodata/comp-rhel5-server-core.xml /tmp
# cp /yum/Cluster/repodata/comp-rhel5-cluster.xml /tmp
# cp /yum/ClusterStorage/repodata/comp-rhel5-cluster-st.xml /tmp
# cp /yum/VT/repodata/comp-rhel5-vt.xml /tmp


Remove the old repodata directories:

# rm -fr /yum/Server/repodata
# rm -fr /yum/Cluster/repodata
# rm -fr /yum/ClusterStorage/repodata
# rm -fr /yum/VT/repodata

Create your group-aware repositories using createrepo:

# createrepo -g /tmp/comp-rhel5-server-core.xml /yum/Server/
# createrepo -g /tmp/comp-rhel5-cluster.xml /yum/Cluster/
# createrepo -g /tmp/comp-rhel5-cluster-st.xml /yum/ClusterStorage/
# createrepo -g /tmp/comp-rhel5-vt.xml /yum/VT/

Create a repo file in /etc/yum.repos.d/
1. Copy rhel-debuginfo.repo as Server.repo in the same directory
2. Open Server.repo
3. Change it to look like this:
-----------------------------------------------------------
[Server]
name=Ansil's repo
baseurl=ftp://<your-ip>/Server/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
-----------------------------------------------------------

Copy Server.repo as Cluster.repo, ClusterStorage.repo, and VT.repo in the same directory (/etc/yum.repos.d/)

Change the field in "[ ]" and the path in baseurl for each file: Cluster, ClusterStorage, and VT for Cluster.repo, ClusterStorage.repo, and VT.repo respectively, as in the example below.
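For example, Cluster.repo would become:

-----------------------------------------------------------
[Cluster]
name=Ansil's cluster repo
baseurl=ftp://<your-ip>/Cluster/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
-----------------------------------------------------------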

Now your YUM repository is ready


Also, you have to configure the FTP service so the repository can be accessed from other machines.

Open /etc/vsftpd/vsftpd.conf

1. Check line
anonymous_enable=YES
is uncommented

2. Add
anon_root=/yum
at the end of file

Restart Ftp service
# service vsftpd restart
# chkconfig --level 35 vsftpd on

ALL… DONE…!!!!!
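Before using the GUI, you can verify the repositories from the command line:

# yum clean all
# yum repolist

yum repolist should list Server, Cluster, ClusterStorage, and VT with their package counts.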

Go to Applications -> Add/Remove Software
