Network Card Bonding On RedHat

In the following I will use the word bonding because, practically, we bond interfaces into one. Bonding allows you to aggregate multiple ports into a single group, effectively combining the bandwidth into a single connection. Bonding also allows you to create multi-gigabit pipes to transport traffic through the highest-traffic areas of your network. For example, aggregating three ports of equal speed gives you a single trunk with three times the bandwidth of one port.

Where should I use bonding?

You can use it wherever you need redundant links, fault tolerance, or load balancing. It is a simple way to build a high-availability network segment. Bonding is also very useful in combination with 802.1q VLAN support (your network equipment must have the 802.1q protocol implemented).

Diverse modes of bonding:

mode=1 (active-backup)
Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance. The primary option affects the behavior of this mode.

mode=2 (balance-xor)
XOR policy: Transmit based on [(source MAC address XOR'd with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.

mode=3 (broadcast)
Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.

mode=4 (802.3ad)
IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification.
Prerequisites: * Ethtool support in the base drivers for retrieving the speed and duplex of each slave.
* A switch that supports IEEE 802.3ad Dynamic link aggregation. Most switches will require some type of configuration to enable 802.3ad mode.

mode=5 (balance-tlb)
Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
* Prerequisite: Ethtool support in the base drivers for retrieving the speed of each slave.

mode=6 (balance-alb)
Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond, so that different peers use different hardware addresses for the server. You can also use multiple bond interfaces, but to do so you must load the bonding module once for each bond interface you need.

Example:

In the /etc/modprobe.conf file add the following:

alias bond0 bonding
options bond0 miimon=80 mode=5

In the /etc/sysconfig/network-scripts/ directory create ifcfg-bond0:

DEVICE=bond0
IPADDR=(ip address)
NETMASK=
NETWORK=
BROADCAST=
GATEWAY=
ONBOOT=yes
BOOTPROTO=none
USERCTL=no

Change the ifcfg-eth0 to:

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
MASTER=bond0
SLAVE=yes

Change the ifcfg-eth1 to:

DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
MASTER=bond0
SLAVE=yes

That's all! Now your trunk should be up and running!
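
To bring the bond up and verify that the slaves were enslaved correctly, you can restart the network service and inspect the bonding status file (a quick check, assuming the standard RHEL network scripts):

[root@host]# service network restart
[root@host]# cat /proc/net/bonding/bond0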

 

Linux lvm - Logical Volume Manager

Create Partitions

For this Linux LVM example you need an unpartitioned hard disk, /dev/sdb. First you need to create physical volumes. To do this you need partitions or a whole disk. It is possible to run the pvcreate command on /dev/sdb directly, but I prefer to use partitions and create the physical volumes from them later.

[root@host]# fdisk /dev/sdb

Create physical volumes

Use the pvcreate command to create physical volumes.

[root@host]# pvcreate /dev/sdb1
[root@host]# pvcreate /dev/sdb2

The pvdisplay command displays all physical volumes on your system.

[root@host]# pvdisplay

Alternatively, you can display information for a single physical volume:

[root@host]# pvdisplay /dev/sdb1

Create Volume Group

At this stage you need to create a volume group which will serve as a container for your physical volumes. To create a volume group with the name "mynew_vg" which will include the /dev/sdb1 partition, you can issue the following command:

[root@host]# vgcreate mynew_vg /dev/sdb1

To include both partitions at once you can use this command:

[root@host]# vgcreate mynew_vg /dev/sdb1 /dev/sdb2

Feel free to add new physical volumes to a volume group by using the vgextend command.

[root@host]# vgextend mynew_vg /dev/sdb2
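
To verify the volume group and see how much space it contains and has free, you can run vgdisplay:

[root@host]# vgdisplay mynew_vg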

Create Logical Volumes

From your big cake (volume group) you can cut pieces (logical volumes) which will be treated as partitions by your Linux system. To create a logical volume named "vol01" with a size of 400 MB from the volume group "mynew_vg", use the following command:

* create a logical volume of size 400 MB -L 400
* create a logical volume of size 4 GB -L 4G

[root@host]# lvcreate -L 400 -n vol01 mynew_vg

The following command creates a logical volume with a size of 1000 MB (roughly 1 GB) and the name vol02:

[root@host]# lvcreate -L 1000 -n vol02 mynew_vg

Create File system on logical volumes

The logical volume is almost ready to use. All you need to do is create a filesystem:

[root@host]# mkfs.ext3 -m 0 /dev/mynew_vg/vol01

The -m option specifies the percentage of blocks reserved for the super-user; set this to 0 if you do not want to reserve any space (the default is 5%).

Edit /etc/fstab

Add an entry for your newly created logical volume into /etc/fstab

/dev/mynew_vg/vol01 /home/foobar ext3 defaults 0 2

Mount logical volumes

Before you mount do not forget to create a mount point.

[root@host]# mkdir /home/foobar
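
Then mount the logical volume, either directly or via the fstab entry you just added:

[root@host]# mount /dev/mynew_vg/vol01 /home/foobar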

Extend logical volume

The biggest advantage of the logical volume manager is that you can extend your logical volumes any time you are running out of space. To increase the size of a logical volume by another 800 MB you can run this command:

[root@host]# lvextend -L +800 /dev/mynew_vg/vol01

The command above only grows the logical volume itself; to grow the filesystem on it as well, run:

[root@host]# resize2fs /dev/mynew_vg/vol01

Remove logical volume

The lvremove command can be used to remove logical volumes. Before you attempt to remove a logical volume, make sure it does not hold any valuable data and that it is unmounted.

[root@host]# lvdisplay
[root@host]# lvremove /dev/mynew_vg/vol02

 

Centralized logging with syslog-ng over stunnel

Installing syslog-ng and stunnel

Log in to the client and the server, download syslog-ng and stunnel, and install them:

[root@host]# yum install -y openssl-devel glibc gcc glib2
[root@host]# wget http://www.stunnel.org/download/stunnel/src/stunnel-4.26.tar.gz
[root@host]# lynx http://www.balabit.com/downloads/files/syslog-ng/open-source-edition/pkgs/dists/rhel-5/syslog-ng-ose-2.1.3/i386/RPMS.syslog-ng/
[root@host]# mkdir -p /usr/local/var/run/stunnel/
[root@host]# cd /usr/src
[root@host]# tar zxfv stunnel-4.26.tar.gz
[root@host]# cd stunnel-4.26
[root@host]# ./configure
[root@host]# make
[root@host]# make install
[root@host]# cd /usr/src/SYSLOG-NG
[root@host]# rpm -Uvh libdbi8-0.8.2bb2-3.rhel5.i386.rpm libdbi8-dev-0.8.2bb2-3.rhel5.i386.rpm libevtlog0-0.2.8-1.i386.rpm syslog-ng-2.1.3-1.i386.rpm

Creating the certificates

After the installation is complete, log in to your CA server and create the server and the client certificates. If more than one client will log to the server, you have to generate a new client certificate for each one:

[root@host]# cd /etc/pki/tls/certs
[root@host]# make syslog-ng-server.pem
[root@host]# make syslog-ng-client.pem

Place copies of syslog-ng-server.pem on all machines in /etc/stunnel, with one important alteration: the clients only need the certificate section of syslog-ng-server.pem, so remove the private key section from syslog-ng-server.pem on all clients.
Place every client's syslog-ng-client.pem in /etc/stunnel. For the server, create a special syslog-ng-client.pem containing the certificate sections of all clients and place it in /etc/stunnel. In other words, remove the private key sections from all syslog-ng-client.pem files and concatenate what is left to create the server's combined syslog-ng-client.pem.
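
For example, assuming the certificate-only copies of each client's file are named client1-cert.pem and client2-cert.pem (illustrative names, not from the original article), the server's combined file could be built like this:

[root@host]# cat client1-cert.pem client2-cert.pem > /etc/stunnel/syslog-ng-client.pem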

Note: it is very important that you use the server's short name when you are asked for the Common Name!

Creating the configuration files

Create the stunnel.conf configuration file in /etc/stunnel on the client:

[root@host]# vi /etc/stunnel/stunnel.conf

#foreground = yes
#debug = 7
client = yes
cert = /etc/stunnel/syslog-ng-client.pem
CAfile = /etc/stunnel/syslog-ng-server.pem
verify = 3
[5140]
accept = 127.0.0.1:514
connect = server.yourdomain.com:5140

For syslog-ng.conf you can start with:

[root@host]# vi /etc/syslog-ng/syslog-ng.conf

options {long_hostnames(off);
sync(0);};
source src {unix-stream("/dev/log"); pipe("/proc/kmsg"); internal();};
destination dest {file("/var/log/messages");};
destination stunnel {tcp("127.0.0.1" port(514));};
log {source(src);destination(dest);};
log {source(src);destination(stunnel);};

Similarly stunnel.conf on the server can look like this:

[root@host]# vi /etc/stunnel/stunnel.conf

#foreground = yes
debug = 7
cert = /etc/stunnel/syslog-ng-server.pem
CAfile = /etc/stunnel/syslog-ng-client.pem
verify = 3
[5140]
accept = server.yourdomain.com:5140
connect = 127.0.0.1:514

An example of syslog-ng.conf on the server:

[root@host]# vi /etc/syslog-ng/syslog-ng.conf

options { long_hostnames(off); sync(0); keep_hostname(yes); chain_hostnames(no); };
source src {unix-stream("/dev/log"); pipe("/proc/kmsg"); internal();};
source stunnel {tcp(ip("127.0.0.1") port(514) max-connections(500));};
destination remoteclient {file("/var/backup/CentralizedLogging/remoteclients");};
destination dest {file("/var/log/messages");};
log {source(src); destination(dest);};
log {source(stunnel); destination(remoteclient);};

Starting syslog-ng and stunnel

Make sure syslog-ng is not running (it starts automatically once you install it from the RPMs):

[root@host]# killall syslog-ng

Start syslog-ng BEFORE stunnel by running:

[root@host]# syslog-ng -f /etc/syslog-ng/syslog-ng.conf

Make sure it's running by checking the logs:

[root@host]# tail -f /var/log/messages

Start stunnel by running:

[root@host]# stunnel /etc/stunnel/stunnel.conf

Make sure stunnel is running by checking the logs:

[root@host]# tail -f /var/log/messages

If stunnel is not running you can uncomment the debug line in the stunnel.conf file, start stunnel again, and check the logs for a detailed description of the problem.

Final steps

Restart stunnel on the server for it to re-read the certificates file and accept the newly added clients:

[root@host]# killall stunnel
[root@host]# stunnel /etc/stunnel/stunnel.conf

Make sure syslog-ng does not start (on client) through the init process:

[root@host]# chkconfig --level 2345 syslog-ng off

Edit /etc/rc.d/rc.local (on client) and add syslog-ng and stunnel:

[root@host]# vi /etc/rc.d/rc.local

echo "Starting syslog-ng ..."
syslog-ng -f /etc/syslog-ng/syslog-ng.conf
echo "Starting stunnel ..."
stunnel /etc/stunnel/stunnel.conf

To test the remote logging run on the client:

[root@host]# logger "Testing remote logging"

The message should appear on the server in /var/backup/CentralizedLogging/remoteclients.

One alternative to syslog-ng is Splunk. You can always use Splunk alongside syslog-ng for indexing purposes.

 

Configuring sudo: Explanation with an example

sudo is one of my favorite security tools. It really comes in handy when you need to give super-user access to someone other than yourself, such as a client, while still limiting what they can do on your box.


For example:
If your client needs SSH access to restart the web server, it would not be wise to give away your root password; sudo is the best option. You don't need the same 100% trust with sudo as you would with su. After all, if you only want them to be able to restart the web server, what more should they be able to do? Should they be able to modify your Apache config files? Add new users? Restart your mail server? Absolutely not; all they can do is restart the web server.
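
A minimal sketch of what the corresponding sudoers entry could look like (edit the file with visudo; the user name "client1" and the service command are illustrative, not taken from the article):

# Allow client1 to restart Apache, and nothing else
client1 ALL = /sbin/service httpd restart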

 

Installing Xen on RedHat

To install Xen, we simply run:

[root@host]# yum install kernel-xen xen

This installs Xen and a Xen kernel on our CentOS system. Afterwards, we can find our new Xen kernel (vmlinuz-2.6.18-8.1.4.el5xen) and its ramdisk (initrd-2.6.18-8.1.4.el5xen.img) in the /boot directory:

[root@host]# ls -l /boot/

Before we can boot the system with the Xen kernel, we must tell the bootloader GRUB about it. We open /boot/grub/menu.lst:

[root@host]# vi /boot/grub/menu.lst

and add the following stanza above all other kernel stanzas:

[...]
title CentOS (2.6.18-8.1.4.el5xen)
        root (hd0,0)
        kernel /xen.gz-2.6.18-8.1.4.el5
        module /vmlinuz-2.6.18-8.1.4.el5xen ro root=/dev/VolGroup00/LogVol00
        module /initrd-2.6.18-8.1.4.el5xen.img
[...]

Then change the value of default to 0:

[...]
default=0
[...]

The complete /boot/grub/menu.lst should look something like this:

# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,0)
# kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
# initrd /initrd-version.img
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.18-8.1.4.el5xen)
        root (hd0,0)
        kernel /xen.gz-2.6.18-8.1.4.el5
        module /vmlinuz-2.6.18-8.1.4.el5xen ro root=/dev/VolGroup00/LogVol00
        module /initrd-2.6.18-8.1.4.el5xen.img
title CentOS (2.6.18-8.1.1.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-8.1.1.el5 ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.18-8.1.1.el5.img
title CentOS (2.6.18-8.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.18-8.el5.img

Afterwards, we reboot the system:

[root@host]# shutdown -r now

The system should now automatically boot the new Xen kernel. After the system has booted, we can check that by running

[root@host]# uname -r

[root@host]# uname -r
2.6.18-8.1.4.el5xen
[root@host]#

So it's really using the new Xen kernel!

We can now run

[root@host]# xm list

to check if Xen has started. It should list Domain-0 (dom0):

[root@host]# xm list
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 350 1 r----- 94.4
[root@host]#

CentOS comes with a nice tool called virt-install with which we can create virtual machines for Xen. To start it, we simply run

[root@host]# virt-install

The tool asks a few questions before it creates a virtual machine. I want to call my first virtual machine vm01, give it 256 MB RAM and a disk size of 4 GB, and store it in the file /vm/vm01.img:

What is the name of your virtual machine? <-- vm01
How much RAM should be allocated (in megabytes)? <-- 256
What would you like to use as the disk (path)? <-- /vm/vm01.img
How large would you like the disk (/vm/vm01.img) to be (in gigabytes)? <-- 4
Would you like to enable graphics support? (yes or no) <-- no
What is the install location? <-- http://wftp.tu-chemnitz.de/pub/linux/centos/5.0/os/i386

The question about the graphics support refers to the installer, not the virtual machine itself! It is possible to start a graphical installer, but you would have to connect to it via VNC. The text installer is easier and offers the same options, so I choose it.

As install location, you should specify a mirror close to you where the installer can download all files needed for the installation of CentOS 5.0 in our virtual machine. You can find a list of CentOS mirrors here: http://www.centos.org/modules/tinycontent/index.php?id=13

After we have answered all questions, virt-install starts the normal CentOS 5.0 installer (in text mode) in our vm01 virtual machine. You already know the CentOS installer, so it should be no problem for you to finish the CentOS installation in vm01.

After the installation, we stay at the vm01 console. To leave it, type CTRL+] if you are at the console, or CTRL+5 if you're using PuTTY. You will then be back at the dom0 console.

virt-install has created the vm01 configuration file /etc/xen/vm01 for us (in dom0). It should look like this:

[root@host]# cat /etc/xen/vm01

# Automatically generated xen config file
name = "vm01"
memory = "256"
disk = [ 'tap:aio:/vm/vm01.img,xvda,w', ]
vif = [ 'mac=00:16:3e:13:e4:81, bridge=xenbr0', ]

uuid = "5aafecf1-dd66-401d-69cc-151c1cb8ac9e"
bootloader="/usr/bin/pygrub"
vcpus=1
on_reboot = 'restart'
on_crash = 'restart'

Run

[root@host]# xm console vm01

to log in on that virtual machine again (type CTRL+] if you are at the console, or CTRL+5 if you're using PuTTY to go back to dom0), or use an SSH client to connect to it.

To get a list of running virtual machines, type

[root@host]# xm list

The output should look like this:

[root@host]# xm list
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 259 1 r----- 1906.6
vm01 3 255 1 ------ 137.9
[root@server1 xen]#

To shut down vm01, do this:

[root@host]# xm shutdown vm01

To start vm01 again, run

[root@host]# xm create /etc/xen/vm01

If you want vm01 to start automatically at the next boot of the system, then do this:

[root@host]# ln -s /etc/xen/vm01 /etc/xen/auto

Here are the most important Xen commands:

xm create -c /path/to/config - Start a virtual machine.
xm shutdown - Stop a virtual machine.
xm destroy - Stop a virtual machine immediately without shutting it down. It's as if you switch off the power button.
xm list - List all running systems.
xm console - Log in on a virtual machine.
xm help - List of all commands.

If you would like to use kickstart you can use virt-install on the command line like this:

[root@host]# virt-install -n hostname -r 4040 --vcpus=2 -f /domu/hostname \
-s 60 --nographics --os-type=linux --os-variant=centos5 -p -l \
http://hostname.com/centos/5.1/os/x86_64/ -x \
"ks=http://hostname.com/ks/javakickstart.cfg"

If the server has more than one network interface make sure you add them all in the /etc/xen/vm01 file:

name = "pub1-53"
uuid = "d78d5d81-131a-6ec6-fbc3-ac2184a7cba7"
maxmem = 3968
memory = 3968
vcpus = 2
bootloader = "/usr/bin/pygrub"
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"
vfb = [ ]
disk = [ "tap:aio:/domu/pub1-53,xvda,w" ]
vif = [ "mac=00:16:3e:4c:cb:5c,bridge=xenbr0", "mac=00:16:3e:4c:cb:5d,bridge=xenbr1" ]

If you need to resize the file system on an instance shut down the XenU and run:

[root@host]# dd if=/dev/zero bs=1M count=1024 >> filesystem.image
[root@host]# e2fsck -f filesystem.image
[root@host]# resize2fs filesystem.image
[root@host]# e2fsck -f filesystem.image

 

Installing kernel source on CentOS/RedHat

1. Maybe you do not need the full kernel source

If you need to compile a kernel driver module, the chances are you do not really need the full kernel source tree. You might just need the kernel-devel package. (If, however, you are certain that the full source tree is required, please follow the instructions in Section 2.)

In CentOS-5, there are three kernel-devel packages available:

* kernel-devel (both 32- & 64-bit architectures)
* kernel-xen-devel (both 32- & 64-bit architectures)
* kernel-PAE-devel (32-bit architecture only)

In CentOS-4, there are five kernel-devel packages available:

* kernel-devel (both 32- & 64-bit architectures)
* kernel-smp-devel (both 32- & 64-bit architectures)
* kernel-xenU-devel (both 32- & 64-bit architectures)
* kernel-hugemem-devel (32-bit architecture only)
* kernel-largesmp-devel (64-bit architecture only)

If you are running the standard kernel (for example), you can install the kernel-devel package by:

[root@host]# yum install kernel-devel

You can use this command to determine the version of your running kernel:

[root@host]# uname -r

The result will look similar to this:

2.6.18-92.1.18.el5xen

In this case, the xen kernel is installed and the way to install this specific kernel-devel package is:

[root@host]# yum install kernel-xen-devel

For more specific information about the available kernels please see the Release Notes:

* CentOS-5 i386 kernels
* CentOS-5 x86_64 kernels
* CentOS-4 (search for the heading kernel in the section Package-Specific Notes, sub-section Core, for more details.)

If your kernel is not listed by yum because it is in an older tree, you can download it manually from the CentOS Vault. Pick the version of CentOS you are interested in and then, for the arch, look in either the os/arch/CentOS/RPMS/ or the updates/arch/RPMS/ directories for the kernel[-type]-devel-version.arch.rpm

Once you have the proper kernel[-type]-devel-version.arch.rpm installed, try to compile your module. It should work this way. If it does not, please provide feedback to the module's developer as this is the way all new kernel modules should be designed to be built.
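
For example, a minimal out-of-tree module Makefile that builds against the installed kernel-devel tree could look like the following sketch (the module name hello.o is illustrative, and the recipe lines must be indented with a tab):

obj-m += hello.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean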

2. If you really need the full kernel source

If you really must have the kernel source tree, for whatever reason, it is obtainable.

2.1. CentOS 4 and 5

As root, install the packages rpm-build, redhat-rpm-config and unifdef:

[root@host]# yum install rpm-build redhat-rpm-config unifdef

* The last package (unifdef) is only required for 64-bit systems.

As an ordinary user, not root, create a directory tree based on ~/rpmbuild:

[user@host]$ cd
[user@host]$ mkdir -p rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
[user@host]$ echo '%_topdir %(echo $HOME)/rpmbuild' > .rpmmacros

* You are strongly advised against package building as root.

Find the kernel source rpm in:

* http://mirror.centos.org/centos/5/updates/SRPMS/ (Current Updates/SRPMS)
* http://mirror.centos.org/centos/5/os/SRPMS/ (Current OS/SRPMS)

(Replace the "5" with a "4" for CentOS-4 kernels)

* http://vault.centos.org/ (CentOS Vault)

(Pick either (version)/updates/SRPMS or (version)/os/SRPMS)

Once you have located the source file, you can install it by running, for example:

[user@host]$ rpm -ivh http://mirror.centos.org/centos/5/updates/SRPMS/kernel-2.6.18-92.1.18.el5.src.rpm 2> /dev/null (for CentOS 5)

- or -

[user@host]$ rpm -ivh http://mirror.centos.org/centos/4/updates/SRPMS/kernel-2.6.9-78.0.8.EL.src.rpm 2> /dev/null

(for CentOS 4)

note: Make sure you use -i instead of -U so that you don't upgrade an already installed source tree.

Now that the source rpm is installed, unpack and prepare the source files:

[user@host]$ cd ~/rpmbuild/SPECS
[user@host SPECS]$ rpmbuild -bp --target=`uname -m` kernel-2.6.spec 2> prep-err.log | tee prep-out.log

The value of `uname -m` (note: back ticks (grave accents), not single quotation marks (apostrophes)) sets --target to the architecture of your current kernel. Most people will have either i686 or x86_64.

The kernel source tree will now be found in the directory ~/rpmbuild/BUILD/.

 


Centralized authentication with OpenLDAP

Setting up a Certificate Authority

On a separate server, preferably isolated from the network and physically secured, create the Certificate Authority that will generate all the certificates for TLS encryption:

[root@host]# yum install openssl openssl-devel
[root@host]# vi /etc/pki/tls/openssl.cnf
[root@host]# cd /etc/pki/tls/misc
[root@host]# ./CA -newca

note: The Common Name field must be the machine's hostname!

This process does the following:

1. Creates the directory /etc/pki/CA (by default), which contains files necessary for the operation of a certificate authority
2. Creates a public-private key pair for the CA in the file /etc/pki/CA/private/cakey.pem. The private key must be kept private in order to ensure the security of the certificates the CA will later sign.
3. Signs the public key (using the corresponding private key, in a process called self-signing) to create the CA certificate, which is then stored in /etc/pki/CA/cacert.pem.

Creating a certificate for the LDAP server

Change into the CA certificate directory.

[root@host]# cd /etc/pki/tls/certs

Generate a key pair for the LDAP server, ldapserverkey.pem is the private key.

[root@host]# openssl genrsa -out ldapserverkey.pem 2048

Generate a certificate signing request (CSR) for the CA to sign.

[root@host]# openssl req -new -key ldapserverkey.pem -out ldapserver.csr

Sign the ldapserver.csr request, which will produce the server certificate. It will ask for a password; it is the same one that was set when the CA certificate was created.

[root@host]# openssl ca -in ldapserver.csr -out ldapservercert.pem
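
Before deploying the certificate, you can double-check its subject and validity dates with openssl:

[root@host]# openssl x509 -in ldapservercert.pem -noout -subject -dates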

How TLS Communication Works

There is a sequence of events that occur prior to the creation of an LDAP communication session using TLS. These include the following steps:

1. Both the LDAP server and client need to be configured with a shared copy of a CA certificate beforehand.
2. When the TLS LDAP connection is made, the client and server negotiate their SSL encryption scheme.
3. The LDAP server then sends its public encryption key and its server certificate (the certificate contains the public key).
4. The LDAP client inspects the server certificate to make sure that it hasn't expired and takes note of the name and key ID of the CA server that issued it. It then checks this CA information with all the CA certificates in its database to determine whether the server certificate should be trusted.
5. If everything is valid, the LDAP client then creates a random "premaster" secret encryption key that it encrypts with the LDAP server's public key. It then sends the encrypted encryption key to the LDAP server.
6. When public keys are created, a special "private" key is also simultaneously created. Anything encrypted with the public key can only be decrypted with the private key and vice versa. The server then uses its private key to extract the premaster key.
7. The client and server then use the premaster key to generate a master secret that will be the same for both, but will never be transmitted so that a third-party cannot intercept it.
8. The master secret key is then used to create session keys that will be used to encrypt all future communication between client and server for the duration of the TLS session.

Installing the Certificate on the LDAP Server

Create the PKI directory for LDAP certificates if it does not already exist

[root@host]# mkdir /etc/pki/tls/ldap
[root@host]# chown root:root /etc/pki/tls/ldap
[root@host]# chmod 755 /etc/pki/tls/ldap

Copy the private key and the certificate from the CA server

[root@host]# scp -r caserver:/etc/pki/tls/certs/ldapserverkey.pem /etc/pki/tls/ldap/serverkey.pem
[root@host]# scp -r caserver:/etc/pki/tls/certs/ldapservercert.pem /etc/pki/tls/ldap/servercert.pem

Verify the ownership and permissions of these files

[root@host]# chown root:ldap /etc/pki/tls/ldap/serverkey.pem
[root@host]# chown root:ldap /etc/pki/tls/ldap/servercert.pem
[root@host]# chmod 640 /etc/pki/tls/ldap/serverkey.pem
[root@host]# chmod 640 /etc/pki/tls/ldap/servercert.pem

Copy the CA's public certificate from the CA server residing in /etc/pki/CA/cacert.pem to the LDAP server

[root@host]# mkdir /etc/pki/tls/CA
[root@host]# scp -r caserver:/etc/pki/CA/cacert.pem /etc/pki/tls/CA/
[root@host]# chown root:root /etc/pki/tls/CA/cacert.pem
[root@host]# chmod 644 /etc/pki/tls/CA/cacert.pem

To test the TLS connectivity run

[root@host]# openssl s_client -connect cybervirt1:636 -showcerts

Installing the CA's public certificate from the CA server (residing in /etc/pki/CA/cacert.pem) on the LDAP clients

On all clients run

[root@host]# scp -r caserver:/etc/pki/CA/cacert.pem /etc/pki/tls/CA/

Installing OpenLDAP

You can either download the OpenLDAP source and compile it yourself after installing BerkeleyDB:

[root@host]# cd /usr/src
[root@host]# wget http://freshmeat.net/urls/1835e002467534891ad4a4c6158963c7
[root@host]# cd /usr/src/db-4.7.25/build_unix
[root@host]# ../dist/configure
[root@host]# make; make install
[root@host]# cd /usr/src
[root@host]# wget ftp://ftp.openldap.org/pub/OpenLDAP/openldap-stable/openldap-stable-20080813.tgz
[root@host]# tar zxfv openldap-stable-20080813.tgz
[root@host]# cd openldap-2.4.11
[root@host]# CPPFLAGS="-I/usr/local/BerkeleyDB.4.7/include"
[root@host]# export CPPFLAGS
[root@host]# LDFLAGS="-L/usr/local/lib -L/usr/local/BerkeleyDB.4.7/lib -R/usr/local/BerkeleyDB.4.7/lib"
[root@host]# export LDFLAGS
[root@host]# LD_LIBRARY_PATH="/usr/local/BerkeleyDB.4.7/lib"
[root@host]# export LD_LIBRARY_PATH
[root@host]# ./configure; make; make install

Or you can install it with yum

[root@host]# yum install -y openldap openldap-devel openldap-servers openldap-clients

Starting OpenLDAP server

For different versions of ldap (from source or rpm) make sure /usr/local/etc/openldap/ldap.conf is the same as /etc/openldap/ldap.conf, or there will be a CA error.

[root@host]# /usr/local/libexec/slapd -f /usr/local/etc/openldap/slapd.conf -d255 -h 'ldap:/// ldaps:///'

Migrating all user accounts in to OpenLDAP

Install the Perl migration tools and migrate all files (passwd, groups, network, etc.) by changing the domain in the file below to yourdomain.com:

[root@host]# wget http://www.padl.com/download/MigrationTools.tgz
[root@host]# tar zxfv MigrationTools.tgz
[root@host]# vi /usr/share/openldap/migration/migrate_common.ph
[root@host]# /usr/share/openldap/migration/migrate_all_offline.sh

Changing the authentication method

Two files need to be changed for an SSH client to authenticate to OpenLDAP - /etc/pam.d/system-auth-ac and /etc/nsswitch.conf. You can do that manually or by running authconfig:

[root@host]# authconfig --disableldap --enableldapauth --ldapserver=ldap.planetdiscover.com --ldapbasedn="dc=planetdiscover,dc=com" --disableldaptls --update
[root@host]# vi /etc/pam.d/system-auth-ac

auth required pam_env.so
auth sufficient pam_unix.so nullok try_first_pass
auth requisite pam_succeed_if.so uid >= 500 quiet
auth sufficient pam_ldap.so use_first_pass
auth required pam_deny.so

account required pam_unix.so broken_shadow
account sufficient pam_succeed_if.so uid < 500 quiet
account [default=bad success=ok user_unknown=ignore] pam_ldap.so
account required pam_permit.so

password requisite pam_cracklib.so try_first_pass retry=3
password sufficient pam_unix.so md5 shadow nullok try_first_pass use_authtok
password sufficient pam_ldap.so use_authtok
password required pam_deny.so

session optional pam_keyinit.so revoke
session required pam_limits.so
session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session required pam_unix.so
session optional pam_ldap.so

session optional /lib/security/$ISA/pam_ldap.so

[root@host]# vi /etc/nsswitch.conf

passwd: files ldap
shadow: files ldap
group: files ldap
automount: files ldap

Configuring OpenLDAP server and clients

Here is how the /etc/openldap/slapd.conf and /usr/local/etc/openldap/slapd.conf server config files should look:

include /usr/local/etc/openldap/schema/core.schema
include /usr/local/etc/openldap/schema/cosine.schema
include /usr/local/etc/openldap/schema/inetorgperson.schema
include /usr/local/etc/openldap/schema/nis.schema
pidfile /usr/local/var/run/slapd.pid
argsfile /usr/local/var/run/slapd.args
access to attrs=userPassword
by anonymous auth
by self write
by * none
access to attrs=shadowLastChange
by self write
by * read
access to * by * read
database bdb
suffix "dc=planetdiscover,dc=com"
rootdn "cn=Manager,dc=planetdiscover,dc=com"
rootpw {MD5}Tw4es8U1dRL2oLhM58ZBhA==
directory /usr/local/var/openldap-data
index objectClass eq
TLSCACertificateFile /etc/pki/tls/CA/cacert.pem
TLSCertificateFile /etc/pki/tls/ldap/servercert.pem
TLSCertificateKeyFile /etc/pki/tls/ldap/serverkey.pem
security simple_bind=128
loglevel stats2

The client config file in /etc/ldap.conf

base dc=planetdiscover,dc=com
uri ldap://cybervirt1.planetdiscover.com/
timelimit 120
bind_timelimit 120
idle_timelimit 3600
nss_initgroups_ignoreusers root,ldap,named,avahi,haldaemon,dbus,radvd,tomcat,radiusd,news,mailman
pam_password md5
ssl start_tls
tls_checkpeer yes
tls_cacertdir /etc/pki/tls/CA
tls_cacertfile /etc/pki/tls/CA/cacert.pem

The /etc/openldap/ldap.conf and /usr/local/etc/openldap/ldap.conf client config files

BASE dc=planetdiscover, dc=com
URI ldap://cybervirt1.planetdiscover.com
TLS_CACERTDIR /etc/pki/tls/CA
TLS_CACERT /etc/pki/tls/CA/cacert.pem

Various OpenLDAP operations and examples

### Define the top-level organization unit ###

## Build the root node.
dn: dc=planetdiscover,dc=com
dc: planetdiscover
objectClass: dcObject
objectClass: organizationalUnit
ou: planetdiscover Dot Org

## Build the people ou container.
dn: ou=people,dc=planetdiscover,dc=com
ou: people
objectClass: organizationalUnit

## Build the group ou container.
dn: ou=group,dc=planetdiscover,dc=com
ou: group
objectclass: organizationalUnit

## Add the records offline
[root@host]# slapadd -v -l /tmp/top.ldif

## Add a user LDIF entry for Jerry Carter. cn is the mandatory attribute for this objectclass

dn: cn=Jerry Carter,ou=people,dc=planetdiscover,dc=com
cn: Jerry Carter
sn: Carter
mail: (e-mail address)
telephoneNumber: 555-123-1234
objectclass: inetOrgPerson

## Add a user LDIF entry for root. uid is the mandatory attribute in this case

dn: uid=root,ou=People,dc=planetdiscover,dc=com
uid: root
cn: root
objectClass: account
objectClass: posixAccount
objectClass: top
objectClass: shadowAccount
userPassword: {crypt}$1$Kp8hx.m0$Y1Aw37IStTqU8UU5kLgbq.
shadowLastChange: 13692
shadowMax: 99999
shadowWarning: 7
loginShell: /bin/bash
uidNumber: 0
gidNumber: 0
homeDirectory: /root
gecos: root

[root@host]# ldapmodify -D "cn=Manager,dc=planetdiscover,dc=com" -w secret -x -a -f /tmp/users.ldif

## Modify. Add a web page location to Jerry Carter.
dn: cn=Jerry Carter,ou=people,dc=planetdiscover,dc=com
changetype: modify
add: labeledURI
labeledURI: http://www.planetdiscover.org/~jerry/

## Modify. Remove an email address from Gerald W. Carter.
dn: cn=Gerald W. Carter,ou=people,dc=planetdiscover,dc=com
changetype: modify
delete: mail
mail: (e-mail address)

## Modify. Remove the entire entry for Peabody Soup.
dn: cn=Peabody Soup,ou=people,dc=planetdiscover,dc=com
changetype: delete

[root@host]# ldapmodify -D "cn=Manager,dc=planetdiscover,dc=com" -w secret -x -v -f /tmp/update.ldif

## Delete dn root.
[root@host]# ldapdelete -D "cn=Manager,dc=planetdiscover,dc=com" -w secret -x -r -v "uid=root,ou=People,dc=planetdiscover,dc=com"

## Delete the entire dn ou=people subtree.
[root@host]# ldapdelete -D "cn=Manager,dc=planetdiscover,dc=com" -w secret -x -r -v "ou=people,dc=planetdiscover,dc=com"

## Search for uid cybergod record
[root@host]# ldapsearch -x -b "dc=planetdiscover,dc=com" "(uid=cybergod)"
# -b can be omitted; it specifies the base from which to start the search
[root@host]# ldapsearch -x -W -D cn="Manager,dc=planetdiscover,dc=com" "(uid=cybergod)" -Z
# -Z is for using TLS; it goes with -W, which prompts for the Manager password

## Search for all objectclass records
[root@host]# ldapsearch -x -b "dc=planetdiscover,dc=com" "(objectclass=*)"

## Search using SASL DIGEST-MD5
[root@host]# ldapsearch -U (sasl user) -b "dc=planetdiscover,dc=com" "(objectclass=*)" -Y DIGEST-MD5

## Changing users password to "test" online through TLS
[root@host]# ldappasswd -s test -x -W -D cn="Manager,dc=planetdiscover,dc=com" "uid=cybergod,ou=People,dc=planetdiscover,dc=com" -Z

## Show ldap information
[root@host]# ldapsearch -x -s base -b "" "(objectclass=*)" +
[root@host]# ldapsearch -h localhost -p 389 -x -b "" -s base -LLL supportedSASLMechanisms

## Generate ssha password to use in slapd.conf
[root@host]# slappasswd

 

Setting up MySQL Replication

MySQL replication allows you to keep an exact copy of a database from a master server on another server (the slave): all updates to the database on the master server are immediately replicated to the database on the slave server so that both databases stay in sync. This is not a backup policy, because an accidentally issued DELETE command will also be carried out on the slave, but replication can help protect against hardware failures.

Configure The Master

First we have to edit /etc/mysql/my.cnf. We have to enable networking for MySQL, and MySQL should listen on all IP addresses, therefore we comment out these lines (if they exist):

#skip-networking
#bind-address = 127.0.0.1

Furthermore we have to tell MySQL for which database it should write logs (these logs are used by the slave to see what has changed on the master), which log file it should use, and we have to specify that this MySQL server is the master. We want to replicate the database exampledb, so we put the following lines into /etc/mysql/my.cnf:

log-bin = /var/log/mysql/mysql-bin.log
binlog-do-db=exampledb
server-id=1

Then we restart MySQL:

[root@host]# /etc/init.d/mysql restart

Then we log into the MySQL database as root and create a user with replication privileges:

[root@host]# mysql -uroot -p
Enter password:
mysql> GRANT REPLICATION SLAVE ON *.* TO 'slave_user'@'%' IDENTIFIED BY '';
mysql> FLUSH PRIVILEGES;
mysql> USE exampledb; FLUSH TABLES WITH READ LOCK; SHOW MASTER STATUS;

Write down this information, we will need it later on the slave!
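
For illustration, the output of SHOW MASTER STATUS; looks roughly like this (the file name and position on your system will differ; these values match the CHANGE MASTER example used later on the slave):

mysql> SHOW MASTER STATUS;
+---------------+----------+--------------+------------------+
| File          | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+---------------+----------+--------------+------------------+
| mysql-bin.006 | 183      | exampledb    |                  |
+---------------+----------+--------------+------------------+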

Then leave the MySQL shell:

mysql> quit;

There are two possibilities to get the existing tables and data of exampledb from the master to the slave. The first one is to make a database dump; the second one is to use the LOAD DATA FROM MASTER; command on the slave. The latter has the disadvantage that the database on the master will be locked during this operation, so if you have a large database on a high-traffic production system, this is not what you want, and I recommend following the first method in that case. However, the latter method is very fast, so I will describe both here.

If you want to follow the first method, then do this:

[root@host]# mysqldump -u root -p --opt exampledb > exampledb.sql

This will create an SQL dump of exampledb in the file exampledb.sql. Transfer this file to your slave server!

If you want to go the LOAD DATA FROM MASTER; way then there is nothing you must do right now.

Finally we have to unlock the tables in exampledb:

[root@host]# mysql -u root -p
Enter password:
mysql> UNLOCK TABLES;
mysql> quit;

Now the configuration on the master is finished.

Configure The Slave

On the slave we first have to create the database exampledb:

[root@host]# mysql -u root -p
Enter password:
mysql> CREATE DATABASE exampledb;
mysql> quit;

If you have made an SQL dump of exampledb on the master and have transferred it to the slave, then it is time now to import the SQL dump into our newly created exampledb on the slave:

[root@host]# mysql -u root -p exampledb < /path/to/exampledb.sql

If you want to go the LOAD DATA FROM MASTER; way then there is nothing you must do right now.

Now we have to tell MySQL on the slave that it is the slave, that the master is 192.168.0.100, and that the master database to watch is exampledb. Therefore we add the following lines to /etc/mysql/my.cnf:

server-id=2
master-host=192.168.0.100
master-user=slave_user
master-password=secret
master-connect-retry=60
replicate-do-db=exampledb

Then we restart MySQL:

[root@host]# /etc/init.d/mysql restart

If you have not imported the master exampledb with the help of an SQL dump, but want to go the LOAD DATA FROM MASTER; way, then it is time for you now to get the data from the master exampledb:

[root@host]# mysql -u root -p
Enter password:
mysql> LOAD DATA FROM MASTER;
mysql> quit;

If you have phpMyAdmin installed on the slave you can now check if all tables/data from the master exampledb is also available on the slave exampledb.

Finally, we must do this:

[root@host]# mysql -u root -p
Enter password:
mysql> SLAVE STOP;

In the next command (still on the MySQL shell) you have to replace the values appropriately:

mysql> CHANGE MASTER TO MASTER_HOST='192.168.0.100', MASTER_USER='slave_user', MASTER_PASSWORD='', MASTER_LOG_FILE='mysql-bin.006', MASTER_LOG_POS=183;

MASTER_HOST is the IP address or hostname of the master (in this example it is 192.168.0.100).
MASTER_USER is the user we granted replication privileges to on the master.
MASTER_PASSWORD is the password of MASTER_USER on the master.
MASTER_LOG_FILE is the file MySQL gave back when you ran SHOW MASTER STATUS; on the master.
MASTER_LOG_POS is the position MySQL gave back when you ran SHOW MASTER STATUS; on the master.

Now all that is left to do is start the slave. Still on the MySQL shell we run

mysql> START SLAVE;
mysql> quit;

That's it! Now whenever exampledb is updated on the master, all changes will be replicated to exampledb on the slave. Test it!
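
To verify that replication is running you can check the slave status from the MySQL shell; both Slave_IO_Running and Slave_SQL_Running should report Yes:

mysql> SHOW SLAVE STATUS\G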

Here are two examples of the my.cnf file on the master and slave servers:

On the Master:

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
max_allowed_packet=512000000
open-files-limit=5000
table_cache=2000
max_connections=1000
key_buffer_size=2048M
sort_buffer_size=24M
query-cache-type=1
query-cache-size=512M
sort_buffer=24M
read_rnd_buffer_size=3M
read_buffer_size=1M
tmp_table_size=64M
interactive_timeout=288000
log-bin
server-id=83
ft_min_word_len=2
ft_stopword_file=/var/lib/mysql/stopwords.txt
myisam_max_sort_file_size=16G
myisam_max_extra_sort_file_size=16G
myisam_sort_buffer_size=24M
max_binlog_size=256M
log-slow-queries = /var/log/mysql_slow.log
long_query_time = 1

log-slow-queries = /var/log/mysql-slow.log
long_query_time = 1

[mysql.server]
user=mysql

[safe_mysqld]
err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

[myisamchk]
ft_min_word_len=2
ft_stopword_file=/var/lib/mysql/stopwords.txt

On the Slave:

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
max_allowed_packet=512000000
open-files-limit=5000
table_cache=2000
sort_buffer_size=4M
key_buffer_size=2048M
query-cache-type=1
query-cache-size=512M
sort_buffer=4M
read_rnd_buffer_size=3M
tmp_table_size=64M
max_connections=500
interactive_timeout=288000
server-id=84
replicate-wild-ignore-table=%.indexTasks
replicate-wild-ignore-table=%.indexClusterTasks
replicate-wild-ignore-table=%.indexPages
replicate-wild-ignore-table=%.adRequestsRollup
replicate-wild-ignore-table=%.textAdsRollup
replicate-wild-ignore-table=%.%Log%
replicate-wild-ignore-table=%.%Archive%
replicate-wild-ignore-table=%.tmp%
replicate-wild-ignore-table=%.pageContents%
master-host=db10m.int
master-user=replicationuser
master-password=3y9nR16k
ft_min_word_len=2
ft_stopword_file=/var/lib/mysql/stopwords.txt
set-variable = myisam_max_sort_file_size=16G
set-variable = myisam_max_extra_sort_file_size=16G
set-variable = sort_buffer_size=4M
set-variable = myisam_sort_buffer_size=4M
slave-skip-errors=1062
read-only
max_binlog_size=256M
log-slow-queries = /var/log/mysql_slow.log
long_query_time = 1

[mysql.server]
user=mysql

[safe_mysqld]
err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

[myisamchk]
ft_min_word_len=2
ft_stopword_file=/var/lib/mysql/stopwords.txt

 


Portable JumpStart Environment with PXE and Kickstart

Overview

In addition to the basic requirements of DHCP, TFTP, and NFS, you will need to add another component called PXE (Pre-boot Execution Environment). Much like Sun systems use the OpenBoot firmware to allow booting from their network devices, PXE works with your x86 system to provide that same functionality. This means that before you begin, be sure your client is PXE aware. If you have older hardware, you may want to look into Etherboot as an alternative. To enable PXE on your client, simply enter your system's BIOS and turn it on.

With PXE enabled and listed as your primary boot device, your system is ready to boot from the network. Once the request is received by DHCP from your client, the server assigns an IP address and tells PXE where to find its pxelinux.0 file. This binary is then transferred through TFTP with instructions on the location of the netboot image. This file contains the data stating which kernel and initial ramdisk to load. It also gives the necessary information to NFS to mount the install directory. After all of the above is accounted for, your system will begin installing in the same manner as if you installed it from CD-ROM.

Now that you have a basic idea of the differences and similarities of performing a network install with both Solaris and Red Hat, let's put it all together.

Copying Software

Begin by copying the Red Hat software to your laptop. You may want to consider structuring the file system under the same parent directory used for Solaris. This will shorten your exports file and keep you from having to add new entries. Once you have the CD-ROM mounted, you can use dd to create the ISO image. You will need to do this for each CD-ROM:

[root@host]# dd if=/dev/cdrom of=/home/BUILD/RedHat/rhe3/rhe3-disc1.iso bs=32k

The ISO images alone are sufficient to complete the install; you do not need to unpack the software. However, this makes upgrading the software more difficult. To see the contents of the ISO, you can mount it up with a loop-back device. You will need to do this anyway to extract the correct initial ramdisk and kernel. Here is an example:

[root@host]# mount -o loop /home/BUILD/RedHat/rhe3/rhe3upd6-i386-disc1.iso /mnt

Obtaining the Initial Ramdisk and Kernel

After you have mounted up the first ISO image with the above command, you can copy the initial ramdisk and kernel to your /tftpboot directory. The initial ramdisk is called initrd.img and the kernel is vmlinuz. It's a good idea for you to rename both files with specific names related to the version of Red Hat you're installing. This will also allow you to store multiple copies of the kernel and initial ramdisk for different versions of the OS:

[root@host]# cd /mnt/images/pxeboot
[root@host]# cp initrd.img /tftpboot/rhe3-initrd.img
[root@host]# cp vmlinuz /tftpboot/rhe3-vmlinuz

The initrd.img file can be customized with specific modules to fit your needs. Here is how to take a look inside:

[root@host]# cp /tftpboot/rhe3-initrd.img /tmp
[root@host]# cd /tmp
[root@host]# gunzip -dc rhe3-initrd.img > initrd.ext2
[root@host]# mount -o loop /tmp/initrd.ext2 /mnt2
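
If you do change anything inside the ramdisk, unmount it and recompress it before PXE can use it again. A minimal sketch, assuming the same file names as above:

[root@host]# umount /mnt2
[root@host]# gzip -9 -c /tmp/initrd.ext2 > /tftpboot/rhe3-initrd.img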

PXE Configuration

After copying the correct initrd.img and vmlinuz files, you can address the server-side requirements for PXE. As I said previously, PXE is what makes network-booting a PC possible. The first file you will need is called pxelinux.0. There are a couple of ways to obtain this file. If you already have some Red Hat systems in your environment, you can copy it from one of them. Here is how to find it after you are logged into a running system:

[root@host]# locate pxelinux.0
[root@host]# cp /usr/lib/syslinux/pxelinux.0 /tftpboot

If you don't have an existing system, you can download the file from http://syslinux.zytor.com. This site will also help to answer any questions related to PXELINUX.

Creating a Netboot Image

The next file addressed in this process is the netboot image. A netboot image is basically a configuration file for the PXELINUX bootloader that determines whether your client boots from the network or its local hard drive. This file defines things such as the kernel, initial ramdisk, network device, and method used for booting, as well as where to look for the kickstart configuration file. An important note about the append line within this file: it needs to be entirely on one line. Line breaks and continuation slashes will break the boot process. You will need to create the directory /tftpboot/pxelinux.cfg and then create the file. I'm using vi:

[root@host]# mkdir /tftpboot/pxelinux.cfg
[root@host]# vi /tftpboot/pxelinux.cfg/default.netboot-rhe3
default linux
serial 0,38400n8
label linux
kernel vmlinuz
append ksdevice=eth0 ip=dhcp console=tty0 load_ramdisk=1 initrd=initrd.img network ks=nfs:192.168.0.1:/home/BUILD/RedHat/rhe3/ks.cfg

Another important piece of this file is how it is called via TFTP. There are three methods to load this file. The first is a symbolic link named after your client's MAC address, prefixed with 01 (the ARP hardware type for Ethernet):

01-00-0F-1F-AB-39-19 -> default.netboot-rhe3

The next method is similar to how we set up a Sun to load its mini-kernel, and that's with a Hex representation of your client's IP address:

0A0A0A0A -> default.netboot-rhe3

If you're going to use one netboot file for everything, just make a symbolic link called "default":

default -> default.netboot-rhe3
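
If you go with the hexadecimal naming, the file name is each octet of the client's IP address written as two uppercase hex digits. For example, for the 192.168.0.11 client used later in this article, you can compute the name with printf and create the link:

[root@host]# printf '%02X%02X%02X%02X\n' 192 168 0 11
C0A8000B
[root@host]# ln -s default.netboot-rhe3 /tftpboot/pxelinux.cfg/C0A8000B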

Kickstart Configuration

The ks.cfg file is really the guts of your Red Hat configuration. This is where you lay out your partition table, define which services will be turned on or off, configure network settings, and ultimately tell the system which software packages to load. You can also instruct the system to perform any post-install scripts you may have. There are many directives that can be used to customize your Red Hat install. When defining disks, it's important to specify SCSI vs. IDE (sda, hda). Here is a simple configuration to get you started:

# simple ks.cfg
install
nfs --server=192.168.0.1 --dir=/home/BUILD/RedHat/rhe3
lang en_US.UTF-8
langsupport --default en_US.UTF-8 en_US.UTF-8
keyboard us
mouse none
skipx
network --device eth0 --bootproto static --ip=192.168.0.11 --netmask=255.255.255.0 --gateway=192.168.0.1 --nameserver=192.168.0.1 --hostname=node1
rootpw --iscrypted $3$y606grSH$SUzlwxKc73Lhgn82yu1bnF1
firewall --disabled
authconfig --enableshadow --enablemd5
timezone America/New_York
bootloader --location=mbr
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
clearpart --all --initlabel
part /boot --fstype ext3 --size=100 --ondisk=sda
part / --fstype ext3 --size=1024 --grow --ondisk=sda
part swap --size=1000 --grow --maxsize=2000 --ondisk=sda

%packages
@ everything
grub
kernel-smp
kernel

%post
wget http://foo.server/post-install.sh
sh post-install.sh
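
The %post section runs after the packages are installed, so it is a convenient place for site-specific cleanup. The post-install.sh referenced above is whatever you choose to host on your web server; a purely illustrative sketch (the services and hostname shown here are assumptions, not part of the original configuration):

#!/bin/sh
# Example post-install steps -- replace with your own
chkconfig cups off
chkconfig ntpd on
echo "192.168.0.1 installserver" >> /etc/hosts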

Services

Now that I've covered the specific pieces needed to complete a Red Hat install over the network, I will explain the additional configurations that need to be made to your existing services. As you could probably tell from the information on PXE, the service most changed in all of this is the TFTP server. There are several new files you will need to add to its directory structure as well as a new sub-directory. The files that should exist at the top level of the /tftpboot directory are pxelinux.0, rhe3-initrd.img, and rhe3-vmlinuz. Here is an example of what it might look like:

drwxr-xr-x 2 root root 152 Aug 31 2004 pxelinux.cfg
lrwxrwxrwx 1 root root 15 Aug 31 2004 initrd.img -> rhe3-initrd.img
lrwxrwxrwx 1 root root 12 Aug 31 2004 vmlinuz -> rhe3-vmlinuz

The /tftpboot/pxelinux.cfg directory is where you will put the netboot image you have created. It is also where you will need to decide how you will call that file:

lrwxrwxrwx 1 root root 20 Aug 31 2004 default -> default.netboot-rhe3
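
The article assumes a working TFTP server. On Red Hat, that is normally the tftp-server package running under xinetd; if yours is not enabled yet, a minimal /etc/xinetd.d/tftp along these lines will serve the layout shown above (restarting xinetd is covered with the other services below):

service tftp
{
        disable         = no
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /tftpboot
}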

DHCP is the next service where you will need to make changes. In its simplest form, you are basically defining the TFTP server and the bootloader program. Below is a stripped-down version of the dhcpd.conf file I used for testing:

ddns-update-style none; ddns-updates off;

## PXE Stuff

deny unknown-clients;
not authoritative;

option domain-name "example.com";
option domain-name-servers 192.168.0.9, 192.168.0.10;
option subnet-mask 255.255.255.0;

allow bootp; allow booting;

option ip-forwarding false; # No IP forwarding
option mask-supplier false; # Don't respond to ICMP Mask req

subnet 192.168.0.0 netmask 255.255.255.0 {
option routers 192.168.0.1;
}

group {
next-server 192.168.0.1; # name of your TFTP server
filename "pxelinux.0"; # name of the bootloader program

host node1 {
hardware ethernet 00:11:43:d9:46:29;
fixed-address 192.168.0.11;
}
}

Finally, depending on how you structured your file systems, the only other service you may need to adjust is your NFS server. If you have several versions of the OS you want to install, I recommend exporting your data at a higher level so you don't need to keep adding to your exports file. Here is the exports file I used:

/home/BUILD/RedHat *(ro,async,anonuid=0,anongid=0)
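
After editing dhcpd.conf and /etc/exports, have the running services pick up the changes. On a Red Hat style server that looks roughly like this (service names assumed; adjust to your distribution):

[root@host]# exportfs -ra
[root@host]# service nfs restart
[root@host]# service dhcpd restart
[root@host]# service xinetd restart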

In addition to Solaris, you now have a system that is capable of installing the Red Hat operating system over the network.

 