Community Blogs

Improving Debian's nginx init script

Article Source:
Date: April 27th 2009

nginx is a high performance HTTP and mail proxy server written by Igor Sysoev.

I'm not sure what the init scripts do in other distros, but it seems a bit of an oversight to leave out a config-file check when running the init script.
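A minimal sketch of the idea, assuming the stock Debian init-script layout and nginx's standard `-t` config-test flag (the `check_config` function name is illustrative):

```shell
# Illustrative fragment for /etc/init.d/nginx: test the configuration
# before restarting, so a typo never takes the running server down.
# "nginx -t" parses the config and exits non-zero on errors.
check_config() {
    if ! nginx -t -c /etc/nginx/nginx.conf >/dev/null 2>&1; then
        echo "nginx configuration test failed; aborting." >&2
        nginx -t -c /etc/nginx/nginx.conf   # re-run to show the error
        exit 1
    fi
}

# Call check_config at the top of the restart/reload cases so a bad
# config aborts the action instead of killing the running daemon.
```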



Hello World

This is my first blog. I may write often, or I may not write much at all. I may write about cool things, or I may write about boring stuff instead. I make no guarantees, but I think my next post will be pretty cool ;-). 

It took forever but the revamp of is pretty sweet. I think this website will be a great resource and community for Linux users everywhere. Maybe this will redistribute the 64,321 (a nice random number) Linux blogs out there?


What makes a good package manager?

This is a question that has been bothering me for a while, after realising that pacman, while awfully close, isn't actually perfect. Here are my requirements:

Dependency tracking:

pacman - Yes

apt-get - Yes

rpm - Yes

Conflict tracking:

pacman - Yes

apt-get - Yes

rpm - Yes

Access to build (compile) instructions:

pacman - Yes, via ABS

apt-get - Not as far as I know

rpm - As above

Downloads binaries:

pacman - Yes

apt-get - Yes

rpm - Yes

Tracks user-created source compiles:

pacman - Yes, via makepkg

apt-get - Yes

rpm - No

Repos must be a reasonable size (5000+ packages):

pacman (Arch Linux) - Yes

apt-get (Debian) - Definitely

rpm (Fedora) - Yes

Flexible, with various options:

pacman - Yes, up to 5 options for install, remove and upgrade, with more non-specific ones.

apt-get - Yes, though not particularly flexible in my experience

rpm - Yes, though equally inflexible


What I'm trying to say here is that pacman is the best! Actually, what I'm saying is that pacman is good, but we must remember how small its repos are compared to Debian's giant ones. It shows that they all do what you want, just some better than others.
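To put the comparison in concrete terms, here is roughly what the same everyday operations look like under each manager ("foo" is a placeholder package name; for the rpm column I use yum, its usual repository front end, since bare rpm does no repository handling):

```shell
# Install, remove, and full-system upgrade with each manager.
# "foo" is a placeholder package name.

# pacman (Arch Linux)
pacman -S foo          # install from the repos
pacman -R foo          # remove
pacman -Syu            # sync repo databases and upgrade everything

# apt-get (Debian)
apt-get install foo
apt-get remove foo
apt-get update && apt-get upgrade

# yum (Fedora front end to rpm)
yum install foo
yum remove foo
yum update
```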


fred woor's first blog.

Today, I begin writing my blog.

I'm from China; I'm a VoIP developer.

I use MontaVista Linux and MTF for voice programming.


Dual Mouse :D

Ah, this is life: I can use two mice at once, one on my table and one on my knee :D.

This way I don't need to stretch a lot, which keeps my back healthy.



This is my first entry in my blog. The site is really good. Thanks to everyone!

Presto: Speed up your updates and save bandwidth

A little background information

Delta RPMs (DRPMs) are very similar to binary (regular) RPMs. The main difference is that a DRPM contains only the changes between two versions of an RPM package. This lets you do full updates in much less time: instead of downloading a full 10 MB for an update where only 50 KB of content changed, for example, you can now download just that 50 KB of change and apply it to your system.
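Under the hood this is the deltarpm package's applydeltarpm tool; Presto simply drives it through yum. Applied by hand it looks roughly like this (the file names are placeholders):

```shell
# Reconstruct the full new RPM from a delta plus the file contents
# already installed on disk:
applydeltarpm foo-1.1-1.i386.drpm foo-1.1-1.i386.rpm

# Or reconstruct from the old RPM file rather than installed data:
applydeltarpm -r foo-1.0-1.i386.rpm foo-1.1-1.i386.drpm foo-1.1-1.i386.rpm
```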

Presto is a project which brings deltarpm and yum together; in other words, it lets you use yum to apply DRPMs.

Not only will you save bandwidth, since you're only downloading the changes in a package, but you'll also cut down on the time it takes to download and apply the updates.

Installing yum-presto

The first step toward setting up Presto is installing the yum plugin:
yum -y install yum-presto
Configure the Updates repository

Next, we need to configure your updates repository to download deltarpm packages instead of the full ones.

Fedora 8 and newer
In the /etc/yum.repos.d/fedora-updates.repo file you'll find two lines that look like this in the [updates] section:
Change it to:
The added mirrorlist gives yum a list of the Presto-enabled mirrors. Of course, if all the DRPM mirrors fail, yum will always fall back to the original mirror list.

Fedora 8 and 9 users only
2008/09/14: Because of the recent security issue with the Fedora repositories, you must also change a second repository configuration file. In the
/etc/yum.repos.d/fedora-updates-newkey.repo file, comment out the old mirrorlist just like above and add this line:

If you've previously followed this howto (pre-June 2008)
There's been an update by the Presto team, so if you followed this howto before June 2008, undo the changes and then follow the section above.
In the /etc/yum.repos.d/fedora-updates.repo file you'll find two lines that look like this in the [updates] section:
Remove the pound character from the start of the mirrorlist line so that it looks like this:
  • for i386 (32 bit users), remove the line:
  • for x86_64 (64 bit users), remove the line:

Fedora 7
In Fedora 7, the deprecated deltaurl= key is used. This sounds bad, but it actually makes the configuration much easier! Simply add the following line to the /etc/yum.repos.d/fedora-updates.repo file, just below the "mirrorlist=" line in the [updates] section:
  • for i386 (32 bit users), add:
  • for x86_64 (64 bit users), add:

That's it! Now you can use yum or yumex as normal and benefit from the advantages of delta RPMs.

Big Endian or Little Endian?

#include <stdio.h>

/* The most significant byte of w is 0x41 ('A'), the least
   significant byte is 0x42 ('B'). Whichever of the two sits at
   the lowest address reveals the machine's byte order. */
int w = 0x41000042;

int main(void)
{
    if ('A' == *(char *)&w) {
        printf("First char in integer is %c", *(char *)&w);
        printf(", so big endian\n");
    } else {
        printf("First char in integer is %c", *(char *)&w);
        printf(", so little endian\n");
    }
    return 0;
}

Network Card Bonding On RedHat

In the following I will use the word bonding because practically we will bond interfaces into one. Bonding allows you to aggregate multiple ports into a single group, effectively combining the bandwidth into a single connection. Bonding also allows you to create multi-gigabit pipes to transport traffic through the highest-traffic areas of your network. For example, you can aggregate three 1-gigabit ports into a single 3-gigabit trunk port. That is equivalent to having one interface with 3 gigabits of bandwidth.

Where should I use bonding?

You can use it wherever you need redundant links, fault tolerance or load balancing. It is the best way to build a high-availability network segment. A very useful way to use bonding is in connection with 802.1q VLAN support (your network equipment must implement the 802.1q protocol).

Diverse modes of bonding:

mode=1 (active-backup)
Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance. The primary option affects the behavior of this mode.

mode=2 (balance-xor)
XOR policy: Transmit based on [(source MAC address XOR'd with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.

mode=3 (broadcast)
Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.

mode=4 (802.3ad)
IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification.
Prerequisites:
* Ethtool support in the base drivers for retrieving the speed and duplex of each slave.
* A switch that supports IEEE 802.3ad Dynamic link aggregation. Most switches will require some type of configuration to enable 802.3ad mode.

mode=5 (balance-tlb)
Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
* Prerequisite: Ethtool support in the base drivers for retrieving the speed of each slave.

mode=6 (balance-alb)
Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond, so that different peers use different hardware addresses for the server.

You can also use multiple bond interfaces, but to do so you must load the bonding module once for each bond you need.


In the /etc/modprobe.conf file add the following:

alias bond0 bonding
options bond0 miimon=80 mode=5

In the /etc/sysconfig/network-scripts/ directory create ifcfg-bond0:

IPADDR=(ip address)

Change the ifcfg-eth0 to:


Change the ifcfg-eth1 to:
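The actual file contents did not survive in this copy; a typical Red Hat-style setup looks roughly like this (the IP address, netmask and device names are placeholders for your own values):

```shell
# ifcfg-bond0 (illustrative values)
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# ifcfg-eth0 -- repeat for ifcfg-eth1 with DEVICE=eth1
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```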


That's all! Now your trunk should be up and running!


Linux LVM - Logical Volume Manager

Create Partitions

For this Linux LVM example you need an unpartitioned hard disk, /dev/sdb. First you need to create physical volumes. To do this you need partitions or a whole disk. It is possible to run the pvcreate command directly on /dev/sdb, but I prefer to use partitions and create physical volumes from those.

[root@host]# fdisk /dev/sdb

Create physical volumes

Use the pvcreate command to create physical volumes.

[root@host]# pvcreate /dev/sdb1
[root@host]# pvcreate /dev/sdb2

The pvdisplay command displays all physical volumes on your system.

[root@host]# pvdisplay

Alternatively, to display a single physical volume, the following command can be used:

[root@host]# pvdisplay /dev/sdb1

Create Volume Group

At this stage you need to create a volume group, which will serve as a container for your physical volumes. To create a volume group named "mynew_vg" which will include the /dev/sdb1 partition, issue the following command:

[root@host]# vgcreate mynew_vg /dev/sdb1

To include both partitions at once you can use this command:

[root@host]# vgcreate mynew_vg /dev/sdb1 /dev/sdb2

Feel free to add new physical volumes to the volume group by using the vgextend command.

[root@host]# vgextend mynew_vg /dev/sdb2

Create Logical Volumes

From your big cake (the volume group) you can cut pieces (logical volumes), which will be treated as partitions by your Linux system. To create a logical volume named "vol01" with a size of 400 MB from the volume group "mynew_vg", use the following command:

* to create a logical volume of 400 MB, use -L 400
* to create a logical volume of 4 GB, use -L 4G

[root@host]# lvcreate -L 400 -n vol01 mynew_vg

This creates a logical volume of 1000 MB (roughly 1 GB) named vol02:

[root@host]# lvcreate -L 1000 -n vol02 mynew_vg

Create File system on logical volumes

The logical volume is almost ready to use. All you need to do is create a filesystem:

[root@host]# mkfs.ext3 -m 0 /dev/mynew_vg/vol01

The -m option specifies the percentage of blocks reserved for the super-user; set this to 0 if you don't want to waste any space. The default is 5%.

Edit /etc/fstab

Add an entry for your newly created logical volume into /etc/fstab

/dev/mynew_vg/vol01 /home/foobar ext3 defaults 0 2

Mount logical volumes

Before you mount do not forget to create a mount point.

[root@host]# mkdir /home/foobar

Extend logical volume

The biggest advantage of the Logical Volume Manager is that you can extend your logical volumes whenever you run out of space. To increase the size of a logical volume by another 800 MB, run this command:

[root@host]# lvextend -L +800 /dev/mynew_vg/vol01

The command above only grows the logical volume itself; to make the filesystem use the new space, you need to resize it:

[root@host]# resize2fs /dev/mynew_vg/vol01

Remove logical volume

The lvremove command can be used to remove logical volumes. Before you attempt to remove a logical volume, make sure it does not hold any valuable data and that it is unmounted.

[root@host]# lvdisplay
[root@host]# lvremove /dev/mynew_vg/vol02


Centralized logging with syslog-ng over stunnel

Installing syslog-ng and stunnel

Log in to the client and the server, download syslog-ng and stunnel, and install them:

[root@host]# yum install -y openssl-devel glibc gcc glib2
[root@host]# wget
[root@host]# lynx
[root@host]# mkdir -p /usr/local/var/run/stunnel/
[root@host]# cd /usr/src
[root@host]# tar zxfv stunnel-4.26.tar.gz
[root@host]# cd stunnel-4.26
[root@host]# ./configure
[root@host]# make
[root@host]# make install
[root@host]# cd /usr/src/SYSLOG-NG
[root@host]# rpm -Uvh libdbi8-0.8.2bb2-3.rhel5.i386.rpm libdbi8-dev-0.8.2bb2-3.rhel5.i386.rpm libevtlog0-0.2.8-1.i386.rpm syslog-ng-2.1.3-1.i386.rpm

Creating the certificates

After the installation is complete, log in to your CA server and create the server and client certificates. If more than one client will log to the server, you have to generate a new client certificate for each:

[root@host]# cd /etc/pki/tls/certs
[root@host]# make syslog-ng-server.pem
[root@host]# make syslog-ng-client.pem

Place copies of syslog-ng-server.pem on all machines in /etc/stunnel with one important alteration. The clients only need the certificate section of syslog-ng-server.pem. In other words, remove the private key section from syslog-ng-server.pem on all clients.
Place every client's syslog-ng-client.pem in /etc/stunnel. For the server, create a special syslog-ng-client.pem containing the certificate sections of all clients and place it in /etc/stunnel. In other words, remove the private key sections from all syslog-ng-client.pem files and concatenate what is left to create the server's special syslog-ng-client.pem.

Note: it is very important that you enter the server's short name when you're asked for the Common Name!

Creating the configuration files

Create the stunnel.conf configuration file in /etc/stunnel on the client:

[root@host]# vi /etc/stunnel/stunnel.conf

#foreground = yes
#debug = 7
client = yes
cert = /etc/stunnel/syslog-ng-client.pem
CAfile = /etc/stunnel/syslog-ng-server.pem
verify = 3
accept =
connect =

For syslog-ng.conf you can start with:

[root@host]# vi /etc/syslog-ng/syslog-ng.conf

options {long_hostnames(off);};
source src {unix-stream("/dev/log"); pipe("/proc/kmsg"); internal();};
destination dest {file("/var/log/messages");};
destination stunnel {tcp("" port(514));};
log {source(src);destination(dest);};
log {source(src);destination(stunnel);};

Similarly stunnel.conf on the server can look like this:

[root@host]# vi /etc/stunnel/stunnel.conf

#foreground = yes
debug = 7
cert = /etc/stunnel/syslog-ng-server.pem
CAfile = /etc/stunnel/syslog-ng-client.pem
verify = 3
accept =
connect =

An example of syslog-ng.conf on the server:

[root@host]# vi /etc/syslog-ng/syslog-ng.conf

options { long_hostnames(off); sync(0); keep_hostname(yes); chain_hostnames(no); };
source src {unix-stream("/dev/log"); pipe("/proc/kmsg"); internal();};
source stunnel {tcp(ip("") port(514) max-connections(500));};
destination remoteclient {file("/var/backup/CentralizedLogging/remoteclients");};
destination dest {file("/var/log/messages");};
log {source(src); destination(dest);};
log {source(stunnel); destination(remoteclient);};

Starting syslog-ng and stunnel

Make sure syslog-ng is not running (it automatically starts once you install it from the RPMs):

[root@host]# killall syslog-ng

Start syslog-ng BEFORE stunnel by running:

[root@host]# syslog-ng -f /etc/syslog-ng/syslog-ng.conf

Make sure it's running by checking the logs:

[root@host]# tail -f /var/log/messages

Start stunnel by running:

[root@host]# stunnel /etc/stunnel/stunnel.conf

Make sure stunnel is running by checking the logs:

[root@host]# tail -f /var/log/messages

If stunnel is not running you can uncomment the debug line in the stunnel.conf file, start stunnel again and check the logs for detailed description of the problem.

Final steps

Restart stunnel on the server for it to re-read the certificates file and accept the newly added clients:

[root@host]# killall stunnel
[root@host]# stunnel /etc/stunnel/stunnel.conf

Make sure syslog-ng does not start (on client) through the init process:

[root@host]# chkconfig --level 2345 syslog-ng off

Edit /etc/rc.d/rc.local (on client) and add syslog-ng and stunnel:

[root@host]# vi /etc/rc.d/rc.local

echo "Starting syslog-ng ..."
syslog-ng -f /etc/syslog-ng/syslog-ng.conf
echo "Starting stunnel ..."
stunnel /etc/stunnel/stunnel.conf

To test the remote logging run on the client:

[root@host]# logger "Testing remote logging"

The message should appear on the server (bu3) in /var/backup/CentralizedLogging/remoteclients.

One alternative to syslog-ng is Splunk. You can always use Splunk alongside syslog-ng for indexing purposes.

