Installing kernel source on CentOS/RedHat

1. Maybe you do not need the full kernel source

If you need to compile a kernel driver module, the chances are you do not really need the full kernel source tree. You might just need the kernel-devel package. (If, however, you are certain that the full source tree is required, please follow the instructions in Section 2.)

In CentOS-5, there are three kernel-devel packages available:

* kernel-devel (both 32- & 64-bit architectures)
* kernel-xen-devel (both 32- & 64-bit architectures)
* kernel-PAE-devel (32-bit architecture only)

In CentOS-4, there are five kernel-devel packages available:

* kernel-devel (both 32- & 64-bit architectures)
* kernel-smp-devel (both 32- & 64-bit architectures)
* kernel-xenU-devel (both 32- & 64-bit architectures)
* kernel-hugemem-devel (32-bit architecture only)
* kernel-largesmp-devel (64-bit architecture only)

If you are running the standard kernel (for example), you can install the kernel-devel package by running:

[root@host]# yum install kernel-devel

You can use this command to determine the version of your running kernel:

[root@host]# uname -r

The result will look similar to this:

2.6.18-92.1.18.el5xen

In this case, the xen kernel is installed and the way to install this specific kernel-devel package is:

[root@host]# yum install kernel-xen-devel

For more specific information about the available kernels please see the Release Notes:

* CentOS-5 i386 kernels
* CentOS-5 x86_64 kernels
* CentOS-4 (search for the heading kernel in the section Package-Specific Notes, sub-section Core, for more details.)

If your kernel is not listed by yum because it is in an older tree, you can download it manually from the CentOS Vault. Pick the version of CentOS you are interested in and then, for your arch, look in either the os/arch/CentOS/RPMS/ or the updates/arch/RPMS/ directory for the kernel[-type]-devel-version.arch.rpm package.

Once you have the proper kernel[-type]-devel-version.arch.rpm installed, try to compile your module. It should work this way. If it does not, please provide feedback to the module's developer as this is the way all new kernel modules should be designed to be built.
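
As a rough sketch of what that usually looks like for a module shipped with a standard kbuild-style Makefile (the hello-module directory name here is only an example), you build against the headers installed by kernel-devel:

[user@host]$ cd hello-module
[user@host hello-module]$ make -C /lib/modules/$(uname -r)/build M=$(pwd) modules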

2. If you really need the full kernel source

If you really must have the kernel source tree, for whatever reason, it is obtainable.

2.1. CentOS 4 and 5

As root, install the packages rpm-build, redhat-rpm-config and unifdef:

[root@host]# yum install rpm-build redhat-rpm-config unifdef

* The latter package is only required for 64-bit systems.

As an ordinary user, not root, create a directory tree based on ~/rpmbuild:

[user@host]$ cd
[user@host]$ mkdir -p rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
[user@host]$ echo '%_topdir %(echo $HOME)/rpmbuild' > .rpmmacros

* You are strongly advised against package building as root.

Find the kernel source rpm in:

* http://mirror.centos.org/centos/5/updates/SRPMS/ (Current Updates/SRPMS)
* http://mirror.centos.org/centos/5/os/SRPMS/ (Current OS/SRPMS)

(Replace the "5" with a "4" for CentOS-4 kernels)

* http://vault.centos.org/ (CentOS Vault)

(Pick either (version)/updates/SRPMS or (version)/os/SRPMS)

Once you have located the source file, you can install it by running, for example:

[user@host]$ rpm -ivh http://mirror.centos.org/centos/5/updates/SRPMS/kernel-2.6.18-92.1.18.el5.src.rpm 2> /dev/null

(for CentOS 5)

- or -

[user@host]$ rpm -ivh http://mirror.centos.org/centos/4/updates/SRPMS/kernel-2.6.9-78.0.8.EL.src.rpm 2> /dev/null

(for CentOS 4)

note: Make sure you use -i instead of -U so that you don't upgrade an already installed source tree.

Now that the source rpm is installed, unpack and prepare the source files:

[user@host]$ cd ~/rpmbuild/SPECS
[user@host SPECS]$ rpmbuild -bp --target=`uname -m` kernel-2.6.spec 2> prep-err.log | tee prep-out.log

The value of `uname -m` (note: back ticks (grave accents), not single quotation marks (apostrophes)) sets --target to the architecture of your current kernel. This is generally what you want; most people will have either i686 or x86_64.

The kernel source tree will now be found in the directory ~/rpmbuild/BUILD/.

 

Centralized authentication with OpenLDAP

Setting up a Certificate Authority

On a separate server, preferably isolated from the network and physically secured, create the Certificate Authority that will generate all the certificates for TLS encryption:

[root@host]# yum install openssl openssl-devel
[root@host]# vi /etc/pki/tls/openssl.cnf
[root@host]# cd /etc/pki/tls/misc
[root@host]# ./CA -newca

note: The common name field must be the machine's hostname!

This process does the following:

1. Creates the directory /etc/pki/CA (by default), which contains files necessary for the operation of a certificate authority
2. Creates a public-private key pair for the CA in the file /etc/pki/CA/private/cakey.pem. The private key must be kept private in order to ensure the security of the certificates the CA will later sign.
3. Signs the public key (using the corresponding private key, in a process called self-signing) to create the CA certificate, which is then stored in /etc/pki/CA/cacert.pem.
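
As a quick sanity check (not part of the original procedure), you can inspect the certificate the CA script just produced:

[root@host]# openssl x509 -in /etc/pki/CA/cacert.pem -noout -subject -dates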

Creating a certificate for the LDAP server

Change into the CA certificate directory.

[root@host]# cd /etc/pki/tls/certs

Generate a key pair for the LDAP server; ldapserverkey.pem is the private key.

[root@host]# openssl genrsa -out ldapserverkey.pem 2048

Generate a certificate signing request (CSR) for the CA to sign.

[root@host]# openssl req -new -key ldapserverkey.pem -out ldapserver.csr

Sign the ldapserver.csr request, which will produce the server certificate. You will be asked for a password; it is the same one used when the CA certificate was created.

[root@host]# openssl ca -in ldapserver.csr -out ldapservercert.pem
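
To make sure the signed certificate really chains back to your CA, you can verify it against the CA certificate:

[root@host]# openssl verify -CAfile /etc/pki/CA/cacert.pem ldapservercert.pem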

How TLS Communication Works

There is a sequence of events that occur prior to the creation of an LDAP communication session using TLS. These include the following steps:

1. Both the LDAP server and client need to be configured with a shared copy of a CA certificate beforehand.
2. When the TLS LDAP connection is made, the client and server negotiate their SSL encryption scheme.
3. The LDAP server then sends its public encryption key and its server certificate (the certificate contains the public key).
4. The LDAP client inspects the server certificate to make sure that it hasn't expired and takes note of the name and key ID of the CA server that issued it. It then checks this CA information with all the CA certificates in its database to determine whether the server certificate should be trusted.
5. If everything is valid, the LDAP client then creates a random "premaster" secret encryption key that it encrypts with the LDAP server's public key. It then sends the encrypted encryption key to the LDAP server.
6. When public keys are created, a special "private" key is also simultaneously created. Anything encrypted with the public key can only be decrypted with the private key and vice versa. The server then uses its private key to extract the premaster key.
7. The client and server then use the premaster key to generate a master secret that will be the same for both, but will never be transmitted so that a third-party cannot intercept it.
8. The master secret key is then used to create session keys that will be used to encrypt all future communication between client and server for the duration of the TLS session.

Installing the Certificate on the LDAP Server

Create the PKI directory for LDAP certificates if it does not already exist

[root@host]# mkdir /etc/pki/tls/ldap
[root@host]# chown root:root /etc/pki/tls/ldap
[root@host]# chmod 755 /etc/pki/tls/ldap

Copy the private key and the certificate from the CA server

[root@host]# scp -r caserver:/etc/pki/tls/certs/ldapserverkey.pem /etc/pki/tls/ldap/serverkey.pem
[root@host]# scp -r caserver:/etc/pki/tls/certs/ldapservercert.pem /etc/pki/tls/ldap/servercert.pem

Verify the ownership and permissions of these files

[root@host]# chown root:ldap /etc/pki/tls/ldap/serverkey.pem
[root@host]# chown root:ldap /etc/pki/tls/ldap/servercert.pem
[root@host]# chmod 640 /etc/pki/tls/ldap/serverkey.pem
[root@host]# chmod 640 /etc/pki/tls/ldap/servercert.pem

Copy the CA's public certificate from the CA server residing in /etc/pki/CA/cacert.pem to the LDAP server

[root@host]# mkdir /etc/pki/tls/CA
[root@host]# scp -r caserver:/etc/pki/CA/cacert.pem /etc/pki/tls/CA/
[root@host]# chown root:root /etc/pki/tls/CA/cacert.pem
[root@host]# chmod 644 /etc/pki/tls/CA/cacert.pem

To test the TLS connectivity run

[root@host]# openssl s_client -connect cybervirt1:636 -showcerts

Installing the CA's public certificate (residing in /etc/pki/CA/cacert.pem on the CA server) on the LDAP clients

On all clients run

[root@host]# scp -r caserver:/etc/pki/CA/cacert.pem /etc/pki/tls/CA/

Installing OpenLDAP

You can either download the OpenLDAP source and compile it yourself after installing BerkeleyDB:

[root@host]# cd /usr/src
[root@host]# wget http://freshmeat.net/urls/1835e002467534891ad4a4c6158963c7
[root@host]# cd /usr/src/db-4.7.25/build_unix
[root@host]# ../dist/configure
[root@host]# make; make install
[root@host]# cd /usr/src
[root@host]# wget ftp://ftp.openldap.org/pub/OpenLDAP/openldap-stable/openldap-stable-20080813.tgz
[root@host]# tar zxfv openldap-stable-20080813.tgz
[root@host]# cd openldap-2.4.11
[root@host]# CPPFLAGS="-I/usr/local/BerkeleyDB.4.7/include"
[root@host]# export CPPFLAGS
[root@host]# LDFLAGS="-L/usr/local/lib -L/usr/local/BerkeleyDB.4.7/lib -R/usr/local/BerkeleyDB.4.7/lib"
[root@host]# export LDFLAGS
[root@host]# LD_LIBRARY_PATH="/usr/local/BerkeleyDB.4.7/lib"
[root@host]# export LD_LIBRARY_PATH
[root@host]# ./configure; make; make install

Or you can install it with yum

[root@host]# yum install -y openldap openldap-devel openldap-servers openldap-clients

Starting OpenLDAP server

Whichever version of ldap you run (built from source or installed from rpm), make sure /usr/local/etc/openldap/ldap.conf is the same as /etc/openldap/ldap.conf, or you will get CA errors.

[root@host]# /usr/local/libexec/slapd -f /usr/local/etc/openldap/slapd.conf -d255 -h 'ldap:/// ldaps:///'
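
If slapd refuses to start, slaptest can point out syntax problems in the configuration; a quick check, assuming the source-install layout used above (the yum packages put slaptest in /usr/sbin instead):

[root@host]# /usr/local/sbin/slaptest -f /usr/local/etc/openldap/slapd.conf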

Migrating all user accounts in to OpenLDAP

Install the Perl migration tools and migrate all the files (passwd, group, networks, etc.) after changing the domain in the file below to yourdomain.com:

[root@host]# wget http://www.padl.com/download/MigrationTools.tgz
[root@host]# tar zxfv MigrationTools.tgz
[root@host]# vi /usr/share/openldap/migration/migrate_common.ph
[root@host]# /usr/share/openldap/migration/migrate_all_offline.sh
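
To confirm that the migration actually populated the directory, you can dump the database offline with slapcat (shown here without a full path; depending on whether you installed from source or rpm it lives in /usr/local/sbin or /usr/sbin):

[root@host]# slapcat | grep "^dn:" | head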

Changing the authentication method

Two files need to be changed for an ssh client to authenticate against OpenLDAP: /etc/pam.d/system-auth-ac and /etc/nsswitch.conf. You can do that manually or by running authconfig:

[root@host]# authconfig --disableldap --enableldapauth --ldapserver=ldap.planetdiscover.com --ldapbasedn="dc=planetdiscover,dc=com" --disableldaptls --update
[root@host]# vi /etc/pam.d/system-auth-ac

auth required pam_env.so
auth sufficient pam_unix.so nullok try_first_pass
auth requisite pam_succeed_if.so uid >= 500 quiet
auth sufficient pam_ldap.so use_first_pass
auth required pam_deny.so

account required pam_unix.so broken_shadow
account sufficient pam_succeed_if.so uid < 500 quiet
account [default=bad success=ok user_unknown=ignore] pam_ldap.so
account required pam_permit.so

password requisite pam_cracklib.so try_first_pass retry=3
password sufficient pam_unix.so md5 shadow nullok try_first_pass use_authtok
password sufficient pam_ldap.so use_authtok
password required pam_deny.so

session optional pam_keyinit.so revoke
session required pam_limits.so
session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session required pam_unix.so
session optional pam_ldap.so

session optional /lib/security/$ISA/pam_ldap.so

[root@host]# vi /etc/nsswitch.conf

passwd: files ldap
shadow: files ldap
group: files ldap
automount: files ldap
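
Once nsswitch.conf and PAM are updated, a quick way to confirm that NSS is really pulling accounts from LDAP is getent (cybergod is just the example user used elsewhere in this article):

[root@host]# getent passwd cybergod
[root@host]# id cybergod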

Configuring OpenLDAP server and clients

Here is how the /etc/openldap/slapd.conf and /usr/local/etc/openldap/slapd.conf server config files should look:

include /usr/local/etc/openldap/schema/core.schema
include /usr/local/etc/openldap/schema/cosine.schema
include /usr/local/etc/openldap/schema/inetorgperson.schema
include /usr/local/etc/openldap/schema/nis.schema
pidfile /usr/local/var/run/slapd.pid
argsfile /usr/local/var/run/slapd.args
access to attrs=userPassword
        by anonymous auth
        by self write
        by * none
access to attrs=shadowLastChange
        by self write
        by * read
access to * by * read
database bdb
suffix "dc=planetdiscover,dc=com"
rootdn "cn=Manager,dc=planetdiscover,dc=com"
rootpw {MD5}Tw4es8U1dRL2oLhM58ZBhA==
directory /usr/local/var/openldap-data
index objectClass eq
TLSCACertificateFile /etc/pki/tls/CA/cacert.pem
TLSCertificateFile /etc/pki/tls/ldap/servercert.pem
TLSCertificateKeyFile /etc/pki/tls/ldap/serverkey.pem
security simple_bind=128
loglevel stats2

The client config file is /etc/ldap.conf:

base dc=planetdiscover,dc=com
uri ldap://cybervirt1.planetdiscover.com/
timelimit 120
bind_timelimit 120
idle_timelimit 3600
nss_initgroups_ignoreusers root,ldap,named,avahi,haldaemon,dbus,radvd,tomcat,radiusd,news,mailman
pam_password md5
ssl start_tls
tls_checkpeer yes
tls_cacertdir /etc/pki/tls/CA
tls_cacertfile /etc/pki/tls/CA/cacert.pem

The /etc/openldap/ldap.conf and /usr/local/etc/openldap/ldap.conf client config files

BASE dc=planetdiscover, dc=com
URI ldap://cybervirt1.planetdiscover.com
TLS_CACERTDIR /etc/pki/tls/CA
TLS_CACERT /etc/pki/tls/CA/cacert.pem

Various OpenLDAP operations and examples

### Define the top-level organization unit ###

## Build the root node.
dn: dc=planetdiscover,dc=com
dc: planetdiscover
objectClass: dcObject
objectClass: organizationalUnit
ou: planetdiscover Dot Org

## Build the people ou container.
dn: ou=people,dc=planetdiscover,dc=com
ou: people
objectClass: organizationalUnit

## Build the group ou container.
dn: ou=group,dc=planetdiscover,dc=com
ou: group
objectclass: organizationalUnit

## Add the records offline
[root@host]# slapadd -v -l /tmp/top.ldif

## Add a user LDIF entry for Jerry Carter. cn is the mandatory attribute for this objectclass

dn: cn=Jerry Carter,ou=people,dc=planetdiscover,dc=com
cn: Jerry Carter
sn: Carter
mail: <address hidden by the site's spam protection>
telephoneNumber: 555-123-1234
objectclass: inetOrgPerson

## Add a user LDIF entry for root. uid is the mandatory attribute in this case

dn: uid=root,ou=People,dc=planetdiscover,dc=com
uid: root
cn: root
objectClass: account
objectClass: posixAccount
objectClass: top
objectClass: shadowAccount
userPassword: {crypt}$1$Kp8hx.m0$Y1Aw37IStTqU8UU5kLgbq.
shadowLastChange: 13692
shadowMax: 99999
shadowWarning: 7
loginShell: /bin/bash
uidNumber: 0
gidNumber: 0
homeDirectory: /root
gecos: root

[root@host]# ldapmodify -D "cn=Manager,dc=planetdiscover,dc=com" -w secret -x -a -f /tmp/users.ldif

## Modify. Add a web page location to Jerry Carter.
dn: cn=Jerry Carter,ou=people,dc=planetdiscover,dc=com
changetype: modify
add: labeledURI
labeledURI: http://www.planetdiscover.org/~jerry/

## Modify. Remove an email address from Gerald W. Carter.
dn: cn=Gerald W. Carter,ou=people,dc=planetdiscover,dc=com
changetype: modify
delete: mail
mail: <address hidden by the site's spam protection>

## Modify. Remove the entire entry for Peabody Soup.
dn: cn=Peabody Soup,ou=people,dc=planetdiscover,dc=com
changetype: delete

[root@host]# ldapmodify -D "cn=Manager,dc=planetdiscover,dc=com" -w secret -x -v -f /tmp/update.ldif

## Delete dn root.
[root@host]# ldapdelete -D "cn=Manager,dc=planetdiscover,dc=com" -w secret -x -r -v "uid=root,ou=People,dc=planetdiscover,dc=com"

## Delete the entire dn ou=people subtree.
[root@host]# ldapdelete -D "cn=Manager,dc=planetdiscover,dc=com" -w secret -x -r -v "ou=people,dc=planetdiscover,dc=com"

## Search for uid cybergod record
[root@host]# ldapsearch -x -b "dc=planetdiscover,dc=com" "(uid=cybergod)"
# -b specifies the base DN from which to start the search; it can be omitted if a default BASE is set in ldap.conf
[root@host]# ldapsearch -x -W -D "cn=Manager,dc=planetdiscover,dc=com" "(uid=cybergod)" -Z
# -Z requests TLS; -W prompts for the Manager password

## Search for all objectclass records
[root@host]# ldapsearch -x -b "dc=planetdiscover,dc=com" "(objectclass=*)"

## Search using SASL DIGEST-MD5
[root@host]# ldapsearch -U <sasl-user> -b "dc=planetdiscover,dc=com" "(objectclass=*)" -Y DIGEST-MD5

## Changing a user's password to "test" online through TLS
[root@host]# ldappasswd -s test -x -W -D "cn=Manager,dc=planetdiscover,dc=com" "uid=cybergod,ou=People,dc=planetdiscover,dc=com" -Z

## Show ldap information
[root@host]# ldapsearch -x -s base -b "" "(objectclass=*)" +
[root@host]# ldapsearch -h localhost -p 389 -x -b "" -s base -LLL supportedSASLMechanisms

## Generate ssha password to use in slapd.conf
[root@host]# slappasswd

 

Setting up MySQL Replication

MySQL replication allows you to have an exact copy of a database from a master server on another server (the slave): all updates to the database on the master are immediately replicated to the database on the slave so that both databases stay in sync. This is not a backup policy, because an accidentally issued DELETE will also be carried out on the slave, but replication does help protect against hardware failures.

Configure The Master

First we have to edit /etc/mysql/my.cnf. We have to enable networking for MySQL, and MySQL should listen on all IP addresses, so we comment out these lines (if present):

#skip-networking
#bind-address = 127.0.0.1

Furthermore we have to tell MySQL for which database it should write logs (these logs are used by the slave to see what has changed on the master), which log file it should use, and we have to specify that this MySQL server is the master. We want to replicate the database exampledb, so we put the following lines into /etc/mysql/my.cnf:

log-bin = /var/log/mysql/mysql-bin.log
binlog-do-db=exampledb
server-id=1

Then we restart MySQL:

[root@host]# /etc/init.d/mysql restart

Then we log into the MySQL database as root and create a user with replication privileges:

[root@host]# mysql -uroot -p
Enter password:
mysql> GRANT REPLICATION SLAVE ON *.* TO 'slave_user'@'%' IDENTIFIED BY '';
mysql> FLUSH PRIVILEGES;
mysql> USE exampledb; FLUSH TABLES WITH READ LOCK; SHOW MASTER STATUS;
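
The output of SHOW MASTER STATUS; will look something like this (the file name and position are illustrative; yours will differ):

+---------------+----------+--------------+------------------+
| File          | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+---------------+----------+--------------+------------------+
| mysql-bin.006 | 183      | exampledb    |                  |
+---------------+----------+--------------+------------------+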

Write down this information, we will need it later on the slave!

Then leave the MySQL shell:

mysql> quit;

There are two ways to get the existing tables and data of exampledb from the master to the slave. The first is to make a database dump; the second is to use the LOAD DATA FROM MASTER; command on the slave. The latter has the disadvantage that the database on the master will be locked during this operation, so if you have a large database on a high-traffic production system, this is not what you want, and I recommend the first method in that case. However, the latter method is very fast, so I will describe both here.

If you want to follow the first method, then do this:

[root@host]# mysqldump -u root -p --opt exampledb > exampledb.sql

This will create an SQL dump of exampledb in the file exampledb.sql. Transfer this file to your slave server!
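
For example, with scp (the slave's hostname here is just a placeholder):

[root@host]# scp exampledb.sql root@slave.example.com:/tmp/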

If you want to go the LOAD DATA FROM MASTER; way then there is nothing you must do right now.

Finally we have to unlock the tables in exampledb:

[root@host]# mysql -u root -p
Enter password:
mysql> UNLOCK TABLES;
mysql> quit;

Now the configuration on the master is finished.

Configure The Slave

On the slave we first have to create the database exampledb:

[root@host]# mysql -u root -p
Enter password:
mysql> CREATE DATABASE exampledb;
mysql> quit;

If you have made an SQL dump of exampledb on the master and have transferred it to the slave, then it is time now to import the SQL dump into our newly created exampledb on the slave:

[root@host]# mysql -u root -p exampledb < /path/to/exampledb.sql

If you want to go the LOAD DATA FROM MASTER; way then there is nothing you must do right now.

Now we have to tell MySQL on the slave that it is the slave, that the master is 192.168.0.100, and that the master database to watch is exampledb. Therefore we add the following lines to /etc/mysql/my.cnf:

server-id=2
master-host=192.168.0.100
master-user=slave_user
master-password=secret
master-connect-retry=60
replicate-do-db=exampledb

Then we restart MySQL:

[root@host]# /etc/init.d/mysql restart

If you have not imported the master exampledb with the help of an SQL dump, but want to go the LOAD DATA FROM MASTER; way, then it is time for you now to get the data from the master exampledb:

[root@host]# mysql -u root -p
Enter password:
mysql> LOAD DATA FROM MASTER;
mysql> quit;

If you have phpMyAdmin installed on the slave you can now check if all tables/data from the master exampledb is also available on the slave exampledb.

Finally, we must do this:

[root@host]# mysql -u root -p
Enter password:
mysql> SLAVE STOP;

In the next command (still on the MySQL shell) you have to replace the values appropriately:

mysql> CHANGE MASTER TO MASTER_HOST='192.168.0.100', MASTER_USER='slave_user', MASTER_PASSWORD='', MASTER_LOG_FILE='mysql-bin.006', MASTER_LOG_POS=183;

MASTER_HOST is the IP address or hostname of the master (in this example it is 192.168.0.100). MASTER_USER is the user we granted replication privileges on the master.
MASTER_PASSWORD is the password of MASTER_USER on the master.
MASTER_LOG_FILE is the file MySQL gave back when you ran SHOW MASTER STATUS; on the master.
MASTER_LOG_POS is the position MySQL gave back when you ran SHOW MASTER STATUS; on the master.

Now all that is left to do is start the slave. Still on the MySQL shell we run

mysql> START SLAVE;
mysql> quit;

That's it! Now whenever exampledb is updated on the master, all changes will be replicated to exampledb on the slave. Test it!
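
A quick way to check that replication is healthy is to look at the slave's status; both Slave_IO_Running and Slave_SQL_Running should report Yes, and Seconds_Behind_Master should be small:

mysql> SHOW SLAVE STATUS\G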

Here are two examples of the my.cnf file on the master and slave servers:

On the Master:

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
max_allowed_packet=512000000
open-files-limit=5000
table_cache=2000
max_connections=1000
key_buffer_size=2048M
sort_buffer_size=24M
query-cache-type=1
query-cache-size=512M
sort_buffer=24M
read_rnd_buffer_size=3M
read_buffer_size=1M
tmp_table_size=64M
interactive_timeout=288000
log-bin
server-id=83
ft_min_word_len=2
ft_stopword_file=/var/lib/mysql/stopwords.txt
myisam_max_sort_file_size=16G
myisam_max_extra_sort_file_size=16G
myisam_sort_buffer_size=24M
max_binlog_size=256M
log-slow-queries = /var/log/mysql_slow.log
long_query_time = 1

log-slow-queries = /var/log/mysql-slow.log
long_query_time = 1

[mysql.server]
user=mysql

[safe_mysqld]
err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

[myisamchk]
ft_min_word_len=2
ft_stopword_file=/var/lib/mysql/stopwords.txt

On the Slave:

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
max_allowed_packet=512000000
open-files-limit=5000
table_cache=2000
sort_buffer_size=4M
key_buffer_size=2048M
query-cache-type=1
query-cache-size=512M
sort_buffer=4M
read_rnd_buffer_size=3M
tmp_table_size=64M
max_connections=500
interactive_timeout=288000
server-id=84
replicate-wild-ignore-table=%.indexTasks
replicate-wild-ignore-table=%.indexClusterTasks
replicate-wild-ignore-table=%.indexPages
replicate-wild-ignore-table=%.adRequestsRollup
replicate-wild-ignore-table=%.textAdsRollup
replicate-wild-ignore-table=%.%Log%
replicate-wild-ignore-table=%.%Archive%
replicate-wild-ignore-table=%.tmp%
replicate-wild-ignore-table=%.pageContents%
master-host=db10m.int
master-user=replicationuser
master-password=3y9nR16k
ft_min_word_len=2
ft_stopword_file=/var/lib/mysql/stopwords.txt
set-variable = myisam_max_sort_file_size=16G
set-variable = myisam_max_extra_sort_file_size=16G
set-variable = sort_buffer_size=4M
set-variable = myisam_sort_buffer_size=4M
slave-skip-errors=1062
read-only
max_binlog_size=256M
log-slow-queries = /var/log/mysql_slow.log
long_query_time = 1

[mysql.server]
user=mysql

[safe_mysqld]
err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

[myisamchk]
ft_min_word_len=2
ft_stopword_file=/var/lib/mysql/stopwords.txt

 

Portable JumpStart Environment with PXE and Kickstart

Overview

In addition to the basic requirements of DHCP, TFTP, and NFS, you will need to add another component called PXE (Pre-boot Execution Environment). Much like Sun systems use the OpenBoot firmware to allow booting from their network devices, PXE works with your x86 system to provide that same functionality. This means that before you begin, be sure your client is PXE aware. If you have older hardware, you may want to look into Etherboot as an alternative. To enable PXE on your client, simply enter your system's BIOS and turn it on.

With PXE enabled and listed as your primary boot device, your system is ready to boot from the network. Once the request is received by DHCP from your client, the server assigns an IP address and tells PXE where to find its pxelinux.0 file. This binary is then transferred through TFTP with instructions on the location of the netboot image. This file contains the data stating which kernel and initial ramdisk to load. It also gives the necessary information to NFS to mount the install directory. After all of the above is accounted for, your system will begin installing in the same manner as if you installed it from CD-ROM.

Now that you have a basic idea of the differences and similarities of performing a network install with both Solaris and Red Hat, let's put it all together.

Copying Software

Begin by copying the Red Hat software to your laptop. You may want to consider structuring the file system under the same parent directory used for Solaris. This will shorten your exports file and keep you from having to add new entries. Once you have the CD-ROM mounted, you can use dd to create the ISO image. You will need to do this for each CD-ROM:

[root@host]# dd if=/dev/cdrom of=/home/BUILD/RedHat/rhe3/rhe3-disc1.iso bs=32k

The ISO images alone are sufficient to complete the install; you do not need to unpack the software. However, this makes upgrading the software more difficult. To see the contents of the ISO, you can mount it up with a loop-back device. You will need to do this anyway to extract the correct initial ramdisk and kernel. Here is an example:

[root@host]# mount -o loop /home/BUILD/RedHat/rhe3/rhe3upd6-i386-disc1.iso /mnt

Obtaining the Initial Ramdisk and Kernel

After you have mounted up the first ISO image with the above command, you can copy the initial ramdisk and kernel to your /tftpboot directory. The initial ramdisk is called initrd.img and the kernel is vmlinuz. It's a good idea for you to rename both files with specific names related to the version of Red Hat you're installing. This will also allow you to store multiple copies of the kernel and initial ramdisk for different versions of the OS:

[root@host]# cd /mnt/images/pxeboot
[root@host]# cp initrd.img /tftpboot/rhe3-initrd.img
[root@host]# cp vmlinuz /tftpboot/rhe3-vmlinuz

The initrd.img file can be customized with specific modules to fit your needs. Here is how to take a look inside:

[root@host]# cp /tftpboot/rhe3-initrd.img /tmp
[root@host]# cd /tmp
[root@host]# gunzip -dc rhe3-initrd.img > initrd.ext2
[root@host]# mount -o loop /tmp/initrd.ext2 /mnt2
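
If you do modify the contents, a sketch of repacking the image afterwards (assuming /mnt2 is where you mounted it, as above):

[root@host]# umount /mnt2
[root@host]# gzip -9c /tmp/initrd.ext2 > /tftpboot/rhe3-initrd.img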

PXE Configuration

After copying the correct initrd.img and vmlinuz files, you can address the server-side requirements for PXE. As I said previously, PXE is what makes network-booting a PC possible. The first file you will need is called pxelinux.0. There are a couple of ways to obtain this file. If you already have some Red Hat systems in your environment, you can copy it from one of them. Here is how to find it after you are logged into a running system:

[root@host]# locate pxelinux.0
[root@host]# cp /usr/lib/syslinux/pxelinux.0 /tftpboot

If you don't have an existing system, you can download the file from http://syslinux.zytor.com. This site will also help to answer any questions related to PXELINUX.

Creating a Netboot Image

The next file addressed in this process is the netboot image. A netboot image is basically a bootloader that determines whether your client will boot from the network or its hard drive. This file defines things such as kernel, initial ramdisk, network device, and method used for booting, as well as where to look for the kickstart configuration file. An important note about the append line within this file is that it needs to be entirely on one line. Line breaks and continuation slashes will cause problems resulting in failure of the boot process. You will need to create the directory /tftpboot/pxelinux.cfg and then create the file. I'm using vi:

[root@host]# mkdir /tftpboot/pxelinux.cfg
[root@host]# vi default.netboot-rhe3
default linux
serial 0,38400n8
label linux
kernel vmlinuz
append ksdevice=eth0 ip=dhcp console=tty0 load_ramdisk=1 initrd=initrd.img network ks=nfs:192.168.0.1:/home/BUILD/RedHat/rhe3/ks.cfg

Another important piece to this file is how it is called via TFTP. There are three methods to load this file. The first is a symbolic link of your client's MAC address:

01-00-0F-1F-AB-39-19 -> default.netboot-rhe3

The next method is similar to how we set up a Sun to load its mini-kernel, and that's with a Hex representation of your client's IP address:

0A0A0A0A -> default.netboot-rhe3
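
One way to work out that hex file name is to convert each octet of the client's IP address; for the node1 client used later (192.168.0.11) this gives C0A8000B:

[root@host]# printf '%02X' 192 168 0 11; echo
C0A8000B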

If you're going to use one netboot file for everything, just make a symbolic link called "default":

default -> default.netboot-rhe3

Kickstart Configuration

The ks.cfg file is really the guts of your Red Hat configuration. This is where you lay out your partition table, define which services will be turned on or off, configure network settings, and ultimately tell the system which software packages to load. You can also instruct the system to perform any post-install scripts you may have. There are many directives that can be used to customize your Red Hat install. When defining disks, it's important to specify SCSI vs. IDE (sda, hda). Here is a simple configuration to get you started:

# simple ks.cfg
install
nfs --server=192.168.0.1 --dir=/home/BUILD/RedHat/rhe3
lang en_US.UTF-8
langsupport --default en_US.UTF-8 en_US.UTF-8
keyboard us
mouse none
skipx
network --device eth0 --bootproto static --ip=192.168.0.11 --netmask=255.255.255.0 --gateway=192.168.0.1 --nameserver=192.168.0.1 --hostname=node1
rootpw --iscrypted $3$y606grSH$SUzlwxKc73Lhgn82yu1bnF1
firewall --disabled
authconfig --enableshadow --enablemd5
timezone America/New_York
bootloader --location=mbr
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
clearpart --all --initlabel
part /boot --fstype ext3 --size=100 --ondisk=sda
part / --fstype ext3 --size=1024 --grow --ondisk=sda
part swap --size=1000 --grow --maxsize=2000 --ondisk=sda

%packages
@ everything
grub
kernel-smp
kernel

%post
wget http://foo.server/post-install.sh
sh post-install.sh

Services

Now that I've covered the specific pieces needed to complete a Red Hat install over the network, I will explain the additional configurations that need to be made to your existing services. As you could probably tell from the information on PXE, the service most changed in all of this is the TFTP server. There are several new files you will need to add to its directory structure as well as a new sub-directory. The files that should exist at the top level of the /tftpboot directory are pxelinux.0, rhe3-initrd.img, and rhe3-vmlinuz. Here is an example of what it might look like:

drwxr-xr-x 2 root root 152 Aug 31 2004 pxelinux.cfg
lrwxrwxrwx 1 root root 15 Aug 31 2004 initrd.img -> rhe3-initrd.img
lrwxrwxrwx 1 root root 12 Aug 31 2004 vmlinuz -> rhe3-vmlinuz

The /tftpboot/pxelinux.cfg directory is where you will put the netboot image you have created. It is also where you will need to decide how you will call that file:

lrwxrwxrwx 1 root root 20 Aug 31 2004 default -> default.netboot-rhe3
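
If the TFTP service itself is not yet enabled on the build host, something like the following usually does it on a stock Red Hat/CentOS system (assuming the xinetd-based tftp-server package):

[root@host]# yum install tftp-server
[root@host]# chkconfig tftp on
[root@host]# service xinetd restart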

DHCP is the next service where you will need to make changes. In its simplest form, you are basically defining the TFTP server and the bootloader program. Below is a stripped-down version of the dhcpd.conf file I used for testing:

ddns-update-style none; ddns-updates off;

## PXE Stuff

deny unknown-clients;
not authoritative;

option domain-name "example.com";
option domain-name-servers 192.168.0.9, 192.168.0.10;
option subnet-mask 255.255.255.0;

allow bootp; allow booting;

option ip-forwarding false; # No IP forwarding
option mask-supplier false; # Don't respond to ICMP Mask req

subnet 192.168.0.0 netmask 255.255.255.0 {
option routers 192.168.0.1;
}

group {
next-server 192.168.0.1; # name of your TFTP server
filename "pxelinux.0"; # name of the bootloader program

host node1 {
hardware ethernet 00:11:43:d9:46:29;
fixed-address 192.168.0.11;
}
}

Finally, depending on how you structured your file systems, the only other service you may need to adjust is your NFS server. If you have several versions of the OS you want to install, I recommend exporting your data at a higher level so you don't need to keep adding to your exports file. Here is the exports file I used:

/home/BUILD/RedHat *(ro,async,anonuid=0,anongid=0)
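
After editing /etc/exports, re-export the file systems so the change takes effect (a standard step, not specific to this setup):

[root@host]# exportfs -ra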

In addition to Solaris, you now have a system that is capable of installing the Red Hat operating system over the network.

 

Naming Network Interfaces on Linux

Introduction

When the Linux kernel boots, it assigns names (eth0, etc.) to network devices in the order that it finds them. This means that two different versions of the kernel, say 2.4 and 2.6, might find the network interfaces in a different order. When this happens you might have to swap all the cables to get your connections to work the way you want. The proper way to fix this is to name the interfaces with the nameif command (part of net-tools).

You can install net-tools by running:

[root@host]# yum install net-tools

MACTAB and NAMEIF

The nameif command can be driven from the command line; if you want to do that, read its man page. Another way is to set up a /etc/mactab file that relates the MAC addresses of the network cards to the names you want.

Every NIC interface in the (known) universe has a unique MAC (Media Access Control) address, usually expressed as a 12-digit hexadecimal number, colon-separated in pairs for readability.

You will need to find the MAC address of each of your network cards. The easiest way (if you didn't make a note of the MAC label when you installed the card) is to use ifconfig; each configured interface reports its MAC address, e.g.:

[root@host]# /sbin/ifconfig

eth0 Link encap:Ethernet HWaddr 00:60:97:52:9A:94
inet addr:192.168.1.3 Bcast:192.168.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:6043 errors:0 dropped:0 overruns:0 frame:0
TX packets:6039 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:1439604 (1.3 Mb) TX bytes:509857 (497.9 Kb)
Interrupt:10 Base address:0xc800

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:7218 errors:0 dropped:0 overruns:0 frame:0
TX packets:7218 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1085452 (1.0 Mb) TX bytes:1085452 (1.0 Mb)

Take note of the HWaddr; this is the NIC's MAC address.

Now you can decide what you would like each NIC to be called and set up your /etc/mactab. Here's mine as an example:

# Begin /etc/mactab
# This file relates MAC addresses to interface names.
# We need this so that we can force the name we want
# even if the kernel finds the interfaces in the
# wrong order.

# eth0 under 2.4, eth1 under 2.6
cyberint 00:60:97:52:9A:94

# eth1 under 2.4, eth0 under 2.6
newint 00:A0:C9:43:8F:77

# End /etc/mactab

If you run nameif (without parameters) now, you will probably get an error message, since nameif must be run while the interfaces are down.

[root@host]# nameif
cannot change name of eth0 to beannet: Device or resource busy

So, first take the interface down, rename it, and then bring it back up under its new name:

[root@host]# ifconfig eth0 down
[root@host]# nameif
[root@host]# ifconfig cyberint up
[root@host]# ifconfig

cyberint Link encap:Ethernet HWaddr 00:60:97:52:9A:94
inet addr:192.168.1.3 Bcast:192.168.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:6617 errors:0 dropped:0 overruns:0 frame:0
TX packets:6596 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:1748349 (1.6 Mb) TX bytes:598513 (584.4 Kb)
Interrupt:10 Base address:0xc800

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:9097 errors:0 dropped:0 overruns:0 frame:0
TX packets:9097 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1340480 (1.2 Mb) TX bytes:1340480 (1.2 Mb)

Using ifrename as a newer alternative

nameif has been obsoleted by the ifrename command.

To use the ifrename command first create the /etc/iftab file containing the new interface name and the corresponding MAC address like this:

ifname mac 00:16:3E:3B:B0:52

Bring the interface down and run:

[root@host]# ifrename

Then bring the interface up with the new name specified in the config file:

[root@host]# ifconfig ifname up
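
ifrename can also rename a single interface from the command line, without a config file; a quick sketch (the new name here is just an example):

[root@host]# ifconfig eth0 down
[root@host]# ifrename -i eth0 -n cyberint
[root@host]# ifconfig cyberint up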

 

/etc/sudoers

The sudo command gives users access to otherwise inaccessible commands. The /etc/sudoers file makes use of three sets of groups (aliases for users, hosts, and commands) to allow or deny access to commands on the nodes of a network.
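
Here is a minimal sketch of such an entry, edited with visudo (the user, host, and command names are made up for illustration):

User_Alias  ADMINS   = alice, bob
Host_Alias  WEBHOSTS = web1, web2
Cmnd_Alias  SERVICES = /sbin/service, /etc/init.d/httpd

ADMINS  WEBHOSTS = SERVICES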
 

UbuntuOne -- Selling a service? Or themselves?

I recently applied for the UbuntuOne beta program. It seems interesting enough, and sounds like a good idea. But the thing is, why is Canonical trying to hide a storage server and make it seem like it's so much more? What is UbuntuOne, you say? Well, here's a cap:

 

Sync your files, share your work with others or work remotely, all with your Ubuntu computer.

 

Well, that's not very descriptive now, is it? If you go to the plans page, it will give you the option of choosing 2GB (free) or 10GB (paid) of storage. I'm not saying that this is a bad thing. I'm just trying to say that disguising an FTP storage site as a brand new idea isn't very sportsmanlike.

So, am I going to buy an account? Probably. But I still don't like the cover-up. Oh yeah, and one more thing: Linux Mint had this first, with a storage site and their own FTP for access to it. Just some thoughts to chew on.

 

Copy files recursive with folder hierarchy (rsync method)

rsync --include-from=/tmp/include.txt --exclude-from=/tmp/exclude.txt -aRvm ./src /tmp/dest

include.txt:
*.pdf

exclude.txt:
*.*

 

-a, --archive               archive mode; equals -rlptgoD (no -H,-A,-X)

-R, --relative              use relative path names

-m, --prune-empty-dirs      prune empty directory chains from file-list

-v, --verbose               increase verbosity
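
The same result can usually be had without the two filter files by passing the patterns directly on the command line; a sketch of the equivalent one-liner (the '*/' include is what lets rsync descend into subdirectories before -m prunes the empty ones):

rsync -aRvm --include='*/' --include='*.pdf' --exclude='*' ./src /tmp/dest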

 

Stack Growth Direction

#include <stdio.h>

void foo(int *);

int main(int argc, char *argv[])
{
    int i;
    foo(&i);
    return 0;
}

/* Compare the address of a local variable in the callee with the
   address of a local variable in the caller. */
void foo(int *ii)
{
    int j;
    if ( &j < ii )
        printf("I think the stack grows down.\n");
    else if ( &j > ii )
        printf("I think the stack grows up.\n");
    else
        printf("I'm really confused now.\n");
}
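
To try it out, compile and run it (the file name is arbitrary); on a typical x86 Linux box it reports that the stack grows down:

[user@host]$ gcc -o stackdir stackdir.c
[user@host]$ ./stackdir
I think the stack grows down.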
 

Good Luck

Good luck!
 