SETTING UP KVM ON LINUX
VMs have made working with multiple OSs a lot easier. And if you are a tinkerer like me, you can’t wait to get your hands on the latest system, only to find that you have nothing to install it on. Thank goodness for virtualization. I’ve used several VM emulators, such as VMware and VirtualBox, and both offer great advantages. Yet there are others, and not all VMs run well on every system. After doing some exploring and getting help from some very good Linux gurus, I came across KVM.
KVM (Kernel-based Virtual Machine) is a virtualization program that runs within the Linux kernel. Because it runs inside the kernel, it is treated like a normal Linux process. This setup gives KVM priority when it requests services and execution from the CPU, allowing better runtime performance. KVM is like any other kernel module loaded when a Linux system boots.
KVM does not have a GUI, nor does it provide machine emulation. Instead, it responds to calls from a VM manager, like QEMU/AQEMU, for resources so that a VM guest can be created. KVM manages low-level resources like memory, disk space, CPUs, etc. The VM manager (QEMU/AQEMU) takes those resources and creates the guest OS.
KVM has been ported to a number of operating systems, including Mac OS X, OpenSolaris and others. For KVM to run on Linux, the kernel must be version 2.6.20 or later. Whether you run KVM on Intel or AMD hardware, the CPU must support virtualization extensions: Intel CPUs need the Intel VT extension and AMD CPUs need AMD-V. To see if your processor supports KVM, run this command in your terminal:
grep -E 'vmx|svm' /proc/cpuinfo
If you get any output, then your system has support. Look closely at the output in the bottom image. If you see svm, then your AMD CPU has support; likewise, if you see vmx, then your Intel CPU has support.
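The same check can be scripted. The sketch below runs the decision logic against a hypothetical flags line (the sample string is an assumption; on a real machine you would read the flags from /proc/cpuinfo as above):

```shell
# Hypothetical flags line, as it might appear in /proc/cpuinfo on an AMD box
flags="fpu vme de pse tsc msr pae mce sep svm cr8_legacy"

# vmx means Intel VT, svm means AMD-V
case " $flags " in
  *" vmx "*) support="Intel VT-x supported" ;;
  *" svm "*) support="AMD-V supported" ;;
  *)         support="no hardware virtualization flags" ;;
esac
echo "$support"
```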
You must also make sure that virtualization is enabled in your system BIOS. On IBM systems this is enabled by default; on others you have to enable it manually. KVM consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure, and a processor-specific module, kvm-intel.ko or kvm-amd.ko.
I’m using openSUSE 11.3 with LXDE to demonstrate the installation of KVM. openSUSE in particular has a huge selection of system management and configuration options and can be very intimidating to new users. The system openSUSE runs on here supports KVM, and after some tinkering, it wasn’t hard to install.
Again, to use KVM you must have kernel version 2.6.20 or later; recent kernels ship with KVM already. If not, you will have to install it manually through your system’s repositories or build it from source. In openSUSE, I used YaST2 to download KVM from the repositories and install it. YaST2 works like a charm when it comes to package management; arguably, it may even be better than Ubuntu’s Synaptic package manager. Go to the start menu, select Systems, click Administration, and select YaST. The YaST2 control center pops up.
In the software section, search for and click “Software Management”. Once the software manager pops up, go to the search tab, type kvm in the search box and click Search. In the right window, you should see the option to install KVM; select it.
Next you need to install QEMU. Type qemu in the search box. Once you see QEMU, select it. Once you have both packages selected, click “Accept”. A dialog box will pop up showing the packages you installed, any dependencies that must be installed, and packages that will be changed. One very important dependency is virt-utils; this needs to be installed. Go ahead and click Continue to install all packages.
Restart your machine to apply all changes. To confirm KVM is loaded within the kernel, open a terminal and type “lsmod”. Look at the list of modules. If you see kvm, then KVM has been loaded. You should also see the KVM module for your CPU architecture; in my case I have the kvm-amd module for AMD.
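Filtering the lsmod listing makes the check quicker. The sketch below runs the filter against a simulated lsmod listing (the module names and sizes are an assumed sample; on your machine you would simply run `lsmod | grep kvm`):

```shell
# Simulated "lsmod" output for an AMD system (assumed sample)
lsmod_output='kvm_amd 55563 0
kvm 419458 1 kvm_amd'

# Keep only module names that start with "kvm"
loaded=$(printf '%s\n' "$lsmod_output" | awk '$1 ~ /^kvm/ {print $1}')
printf '%s\n' "$loaded"
```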
Remember, to set up a virtual machine using KVM, you have to use QEMU, which is run from the terminal. If you don’t want to use QEMU directly, you can use AQEMU, which provides a GUI for setting up guest OSs using the utilities provided by QEMU. AQEMU is not included in the openSUSE repos; you will have to download the RPM manually.
To download AQEMU, go to http://rpm.pbone.net/index.php3/stat/4/idpl/11188025/dir/opensuse/com/aqemu-0.6-34.1.i386.rpm.html and download the RPM package built for your architecture. Go to the folder containing the RPM and click it to install. The YaST package manager will open, showing all the libraries that will be installed along with AQEMU. All you need to do is click Accept, and the installation will execute.
When AQEMU is run for the first time, it must be configured. The AQEMU settings dialog box will open for you to adjust the program. Select the default settings to continue. You can always go back to the settings if you need to change anything later.
Once the settings have been accepted, the aqemu window will open and you are ready to create your first guest OS.
Now we need to pick an OS to install. You can pick any OS you like; for this tutorial, we will use a particular Linux OS that I have found very useful on low-end machines: Xubuntu, http://distrowatch.com/table.php?distribution=xubuntu. I am using Xubuntu because it is lightweight. I have noticed that the heavier an operating system is, the longer it takes to install under AQEMU, if it installs at all. If you plan to use KVM as your default VM, I recommend a dual-core 64-bit system with at least 2 GB of RAM.
Insert your Linux OS CD/DVD into the drive and wait for the media to mount. Then, in the AQEMU window, go to the menu bar, select File and click “Create a new HDD image”.
This will be a new image file, so leave “Use base HDD image” unselected. In the “New Image File Name” box, click the browse button and navigate to a place where you want to store the image file. I chose the aqemu folder and created a subfolder named VMs, which is where I will store all my VM images. Name your HDD image file anything you want. I previously named mine Zorin OS, but since Zorin crashed KVM, I deleted the VM files pertaining to Zorin and reused the HDD image file I had created previously.
When you are done, click Save, which will return you to the previous AQEMU HDD image window. Click Create and you now have a 10 GB HDD image file. You can leave the default settings or change them; the choice is yours. Next, click the VM menu, go to “New VM”, select “Add new VM” and name your VM.
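If you prefer the terminal, the same kind of disk image can be made by hand. Here is a sketch creating a sparse 10 GB raw image with dd (the path is an example; qemu-img can do the same job):

```shell
# Create a 10 GB sparse raw disk image (writes no actual data blocks);
# /tmp/xubuntu.img is an example path
img=/tmp/xubuntu.img
dd if=/dev/zero of="$img" bs=1M count=0 seek=10240 2>/dev/null

# Confirm the apparent size
ls -lh "$img"
```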
Once you’ve named your VM, aqemu will show you the system resources that will be used with your VM.
Make sure the “Boot priority” is set to CD-ROM and, under the “Storage devices” section, that CD/DVD-ROM is set to use the host’s CD-ROM device and “HDA” is set to use the HDD image file we created. You can change these settings using the tabs above. Click Apply to accept the changes. Then, on the action menu, click the blue arrow button, which will begin the guest OS installation.
Here is the Xubuntu setup screen running in AQEMU, just as you would see it when booting a physical system from the CD-ROM. We will begin the installation by selecting “Install Xubuntu”. Again, this process may take a while depending on the strength of your system.
We are now ready to install Xubuntu. If you have installed Linux before, this should be self-explanatory; go ahead and install the system the way you normally would. Keep in mind to be patient: KVM is a new development and is a little rough around the edges. Plus, we are using the GUI front end for QEMU, which could prove unstable, so take your time with it.
Here is the fully installed Xubuntu system running on KVM. Important note: once the installation completes, you have to change the boot order back to HDD, which will be the HDD image file we created. Admittedly, there is no big performance improvement over the more popular VM emulators like VMware or VirtualBox; however, it is another way to set up virtualization. Although it ranks low on the performance scale on this system, I do like that KVM runs directly within the kernel, which gives the VMs better process scheduling. I only wish I had a dual-core system available to really see the benefits of KVM.
We have been getting questions from users about how to delete a Linux OS from their hard drive, so let's oblige them. Uninstalling an OS is not like uninstalling a program with a simple "click to uninstall"; it's a little trickier than that, though not that difficult. I will show a comprehensive method of removing a Linux OS (removing, rather than uninstalling, is the correct term). These procedures will work on dual-boot or single-OS systems.
First, you cannot delete any partition on a drive while it is mounted; you will have to unmount it first. To unmount the drive that holds the Linux OS:
- insert a liveCD, and boot to the desktop. I recommend using the live ubuntu CD
You are now running your system from the liveCD, so no drive on your system is mounted. From the live desktop, you have two options to delete the partition: a GUI tool like GParted, located in System > Administration, or fdisk from the command line. In the GParted window, there is a drop-down menu that allows you to select any drive on the system.
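You can double-check from the live session that the target partition really is unmounted before touching it. A sketch (the device name /dev/sdz9 is a placeholder for the partition you mean to remove):

```shell
# Check /proc/mounts for the device; a mounted partition appears there
dev=/dev/sdz9
if grep -q "^$dev " /proc/mounts; then
  state=mounted
else
  state=unmounted
fi
echo "$dev is $state"
```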
In the drop-down menu, select the required drive. You will get a listing of all the partitions on your hard drive. Select the required partition and, in the action menu, select Delete. Then select Apply changes. That's all there is to it. This works if you have only one OS on the hard drive.
Another method, which I like, is to use the fdisk command. To access the fdisk command:
- open a terminal
- type fdisk /dev/<drive>
The fdisk program has a lot of advanced features that come in handy when you need to make special changes to your hard drive. When you run fdisk you must specify the drive you want it to access, using the device files located in /dev/. If your drive is a SATA drive, you specify /dev/sda; if it is an IDE drive, you specify /dev/hda. Individual partitions on the drive are numbered /dev/sda1, /dev/sda2, and so on.
Once you are in fdisk, you can list all the commands fdisk supports by pressing the m key. Press the p key, and you will get the information pertaining to your hard drive.
In that information you will see a partition table. If you have only Linux installed, you will see two partitions: a swap partition and an ext partition. If you have two OSs, Windows and Linux, you will also see an NTFS partition. Each partition has a number after the sdaX syntax telling you which partition it is. What you are going to do is delete the ext partition by pressing the d key and selecting the partition's number. The fdisk program will guide you through the operation; all you have to do is follow what it says.
Keep in mind, nothing is changed until you press the w key, which forces fdisk to write the changes to the drive. This is very useful if you accidentally select the wrong changes.
Notice, while reading the partition table, there is a Boot column. The row of the ext partition has a "*" under the Boot column heading, which means that is the partition your system will boot from. You want to move the "*" to the NTFS partition so your system will boot from it. First remove the "*" from the ext partition by pressing the a key and selecting that partition's number. Then add the boot flag to the NTFS partition: press the a key again and select the NTFS partition. Confirm you have made the right changes by pressing the p key to view the partition table. Once confirmed, press the w key to apply the changes.
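Reading the Boot column can also be done with a one-liner. The sketch below parses a sample partition table (the layout is an assumed example of what fdisk's p command prints) to find which partition carries the "*":

```shell
# Sample fdisk "p" output lines (assumed): device, boot flag, start, end,
# blocks, id, system
table='/dev/sda1   *      1   1275  10241406  83  Linux
/dev/sda2        1276   1397    979965  82  Linux swap'

# The bootable row has "*" in the second column
bootpart=$(printf '%s\n' "$table" | awk '$2 == "*" {print $1}')
echo "boot partition: $bootpart"
```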
Remove the liveCD and reboot the system. You should see your Windows OS loading, or you can start a fresh OS installation. Happy operating.
Some time ago I was helping a friend with some kexec problems and wrote some notes on how to use it. Here a CentOS-based server was used, but the process should be pretty similar for other distributions. The main advantage is skipping the BIOS init part, which on servers takes quite some time. I personally use it on my gateway server (it also has other functions, like DNS, DHCP and an OpenVPN server) and for rebooting test servers with minimal downtime. A nice kexec description is in its man page:
kexec is a system call that enables you to load and boot into another kernel from the currently running kernel. kexec performs the function of the boot loader from within the kernel. The primary difference between a standard system boot and a kexec boot is that the hardware initialization normally performed by the BIOS or firmware (depending on architecture) is not performed during a kexec boot. This has the effect of reducing the time required for a reboot.
CentOS, Fedora users can install it using yum:
[root@cent:~]# yum install kexec-tools
To switch between kernels you have to install a new one; here, for example, running ''yum update'' also installed a new kernel - the 2.6.18-194.11.4.el5 version.
[root@cent:~]# yum update
kernel.x86_64 0:2.6.18-194.11.4.el5 kernel-devel.x86_64 0:2.6.18-194.11.4.el5
Current kernel is 2.6.18-194.11.3.el5
[root@cent:~]# uname -r
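A quick way to line up what is running against what is installed (a sketch; the /boot naming scheme is the CentOS-style one used in this post):

```shell
# Show the running kernel version
running=$(uname -r)
echo "running kernel: $running"

# List the kernel images available to kexec into
ls /boot/vmlinuz-* 2>/dev/null || echo "no kernel images visible in /boot"
```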
For kexec, the kernel and initrd paths must be specified; these can be found, for example, in the grub.conf file, which was already updated:
[root@cent:~]# cat /etc/grub.conf
title CentOS (2.6.18-194.11.4.el5)
kernel /vmlinuz-2.6.18-194.11.4.el5 ro root=LABEL=/
title CentOS (2.6.18-194.11.3.el5)
kernel /vmlinuz-2.6.18-194.11.3.el5 ro root=LABEL=/
You also need the arguments passed to the kernel at boot time; you can see your current arguments in the /proc/cmdline file. Later these same arguments will be given to the new kernel.
[root@cent:~]# cat /proc/cmdline
Now to load the new kernel:
[root@cent:~]# kexec -l /boot/vmlinuz-2.6.18-194.11.4.el5 \
    --initrd=/boot/initrd-2.6.18-194.11.4.el5.img \
    --command-line="$( cat /proc/cmdline )"
Start the magic and boot to the new loaded kernel:
[root@cent:~]# kexec -e
I hope this post is helpful and inspires others to do some kexec experiments :)
The most important characteristic of OpenStreetMap, however, is another one: everybody can freely and legally reuse its maps for any purpose, including (unlike Google Maps) commercial activities. This is why, especially when paired with Free Software, maps like those of OSM can help people (even if they are NOT software programmers!) find a job or start their own business!
I'm a generally average Linux user. I'm not a coder/designer, nor do I run any huge servers. I'm an IT major in college, and surely know my way around computers, but I'm not anything particularly special. The way I've always experienced Linux was with a classic Desktop Environment (DE), where basically everything I need is included. I used mostly GUI applications, and used the command line sparingly.
As I've become more and more comfortable with Linux, I've learned of the power the command line holds, and I've learned that the thing I love most about Linux is making it my own. I can make it look, act, and feel however I want. I can have a bloated system with all the fancy UI effects that has everything any user could need, or I can customize it to the point that others barely know how to use my computer, let alone do any harm.
As of late, my old habits changed, and I'm making the shift towards the sleek, customized feel. I've been playing around with Window Managers (WM) which mimic Desktop Environments in many ways, but don't include all those unnecessary programs that I found myself cursing after some time. No, I don't need KAlarm, Koffice, and Kate; in fact, they just get in the way of the programs I want to use.
At first I tried e17, the Enlightenment window manager, which boasts customizability and minimalistic design. It sounded perfect for my transition from full fledged KDE or Gnome desktop environment to sleek, customized window manager. I have to admit, compared to KDE or Gnome, I really liked e17. I customized it to fit my look, and never have I used multiple desktops so efficiently. I had a black theme installed that I loved, and none of the crazy bloat that comes with most desktop environments.
With my Linux palate sufficiently whetted, I decided to try more and more window managers. I went through the basics with Openbox and Fluxbox, but nothing surpassed e17. I then heard about 'tiling window managers', which organize your applications across many desktops and tile them on your screen. At first I was reluctant because it sounded like something meant for hardcore 'power users', but after hearing about some tiling window managers I decided to give 'awesome' a try.
Boy, am I happy I did. There are plenty of tiling window managers out there, but I decided on awesome after hearing some good things. I've now got 7 dedicated desktops (main, www, irc, office, im, media, and files), plus two miscellaneous desktops, which keep me organized. After a quick overview of the keyboard shortcuts to switch between windows and screens, you quickly become accustomed to them and stop needing your mouse for much outside of web browsing. I've also begun using more CLI programs, which use fewer resources and often prove to be more efficient. Where I once used XChat, I now use irssi, and where I once used Amarok, I now use mp3blaster. Of course, I can still use GUI programs like in any other window manager, but I've learned to love the command line.
I think when people first hear about tiling window managers, they worry that their screen isn't big enough (I'm on a 16in laptop, by the way, at 1366x768), or that they're made for true coders and Linux power users. If you get over your fears, try a tiling window manager, and take the time to customize it for yourself, you'll learn to love it. My small screen works just fine, and I can use multiple desktops to have everything I need running.
To tile or not to tile? I say give it a try, and see what you think. You may just be surprised with how easy to use and efficient they can be.
Follow these steps:
1. Download Boost from the Boost website
2. Extract it to a folder, e.g. D:/Program Files/Boost_1_45_0
3. Build the bjam tool
Go to the folder where the files were extracted, e.g. D:/Program Files/Boost_1_45_0, and run:
build.bat mingw (MinGW is assumed to be installed at C:\MinGW)
bjam will be created in: D:/Program Files/Boost_1_45_0/tools/build/v2/engine/src/bin.ntx86
4. Now copy the newly created bjam from D:/Program Files/Boost_1_45_0/tools/build/v2/engine/src/bin.ntx86 to the Boost root directory, D:/Program Files/Boost_1_45_0
5. Now build the Boost libraries. Go to D:/Program Files/Boost_1_45_0 and run one of:
bjam --toolset=gcc OR
bjam --toolset=gcc --build-type=complete stage
This will take time, so please wait.
After all the targets build, the header files are located in D:/Program Files/Boost_1_45_0/boost and the libraries in D:/Program Files/Boost_1_45_0/stage/lib.
Now your C++ code is ready to use boost with g++
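To use the staged libraries, you pass the include and library paths to g++. The block below just prints the compile line you would run in an MSYS/MinGW shell (boost_regex and main.cpp are example names; the paths are the ones from this tutorial):

```shell
# Paths from the tutorial; adjust to wherever you extracted Boost
BOOST_ROOT="D:/Program Files/Boost_1_45_0"

# Example compile line: headers via -I, staged libs via -L, link one library
cmd="g++ -I\"$BOOST_ROOT\" main.cpp -L\"$BOOST_ROOT/stage/lib\" -lboost_regex -o main.exe"
echo "$cmd"
```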
I thought I would share the steps I went through to create my own S3-backed, instance-store custom Debian Squeeze AMI, as I found it a bit more involved than getting Lenny working… apologies for the format, as it’s a bit of a brain dump.
I created a couple of scripts, mainly for quickness, but realise that these could be written with much more elegance; I just wanted a quick proof of concept. If you want to get going, you should be able to just copy and paste the scripts and the two ec2 scripts into whatever machine you are using for AMI creation and AWS maintenance. This tutorial also assumes that you already have the EC2 AMI tools and API tools installed and that you have some experience using them for basic deployment. I ran my maintenance platform from a Vagrant Ubuntu install.
This first script creates a 500 MB empty image, creates an ext3 filesystem on it, and mounts it on loopback to a directory I created called /chroot.
Next I run debootstrap, specifying which release of Debian I want to pull down and the architecture. Then I copy two script files (you can find these by looking in the /etc/init.d directory of a currently running instance) called ec2-get-credentials and ec2-ssh-host-key-gen. These get copied into the image mounted under /chroot. I also copy over the correct kernel modules (2.6.21.7-2.fc8xen), which are publicly available on the EC2 forums or from any instance using those modules (I tarred up and scp’d these down from an existing instance, as this also allowed me to check which AKI and ARI I would need to pass at build time). Lastly I copy the second bash script into the /chroot, then I put myself inside the chroot by running “chroot /chroot”.
Next jump to section 2 where I explain what needs to happen once you are in the chroot….
dd if=/dev/zero of=squeeze-ami count=500 bs=1M
mkfs.ext3 -F squeeze-ami
mount -o loop /home/userhomedir/squeeze-ami /chroot
debootstrap --arch i386 squeeze /chroot/ http://ftp.debian.org/debian
cp ec2-get-credentials /chroot/etc/init.d/
cp ec2-ssh-host-key-gen /chroot/etc/init.d/
cp -r /home/userhomedir/matts_modules/lib/modules/2.6.21.7-2.fc8xen/ /chroot/lib/modules
cp /home/userhomedir/copy_into_chroot.sh /chroot/
echo "now type chroot /chroot"
Ok, once I am in the chroot environment, which hosts the image we are creating to send up to AWS S3, we need to do the following things.
Mount the proc and devpts filesystems, run aptitude update to check we are current, and install locales (if we don’t do this we get nasty errors when we then try to install the makedev package; I chose en_GB.UTF-8 as my locale, then followed the onscreen prompts when running dpkg-reconfigure). Next I removed the /dev/.udev directory, otherwise the makedev install complains that udev is running.
Next, create the symlink to MAKEDEV in /dev, then change directory to /dev and create some basic devices. Then remove /etc/hostname, as the hostname will be set for us by the EC2 platform when the AMI starts up. Next, install ssh and make sure it is stopped, then grab curl, dhcpcd and apache2, and purge some conflicting DHCP client packages.
The next few steps involve echoing new values into config files that will be read on startup: setting sshd_config not to use DNS, building an fstab, and configuring network interfaces, with eth0 set to DHCP.
The Magic Bit!!!!
The next line is the magic line that sorts out the problem of the SSH process not starting properly. If you don’t include it, then when you dump the EC2 console output for the instance you will see a load of error messages saying “PRNG not seeded”, and you will find it impossible to log in to the instance, even though you can get a response from apache, etc. The console log will also show that the SSH keys did not get regenerated. The issue seems to be that, regardless of whether you create the devices /dev/random and /dev/urandom before bundling the image, as the EC2 instance boots you will see messages saying the devices can’t be found (no such file or directory). So I figured I could create them on the fly as the machine image boots; to do this I used “mknod”, then restarted the ssh process, removed the startup references to the hardware clock, and made the two ec2 init scripts run at boot time.
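As a side note, the major/minor numbers used with mknod in the script that follows are the standard Linux ones (character device 1,8 for /dev/random and 1,9 for /dev/urandom); you can verify them on any running Linux host:

```shell
# Print name and hex major:minor for the two random devices
stat -c '%n %t:%T' /dev/random /dev/urandom
```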
mount -t proc none /proc
mount -t devpts none /dev/pts
aptitude update
aptitude install locales
dpkg-reconfigure locales
rm -Rf /dev/.udev
aptitude install makedev
ln -s /sbin/MAKEDEV /dev
cd /dev
for dev in zero null console std generic; do MAKEDEV $dev; done
rm -f /etc/hostname
aptitude install ssh
/etc/init.d/ssh stop
aptitude install curl
aptitude purge isc-dhcp-client isc-dhcp-common dhcp3-client
aptitude install dhcpcd
aptitude install apache2
echo "UseDNS no" >> /etc/ssh/sshd_config
echo '/dev/sda1 / ext3 defaults 1 1
/dev/sda2 /mnt ext3 defaults 0 0
/dev/sda3 swap swap defaults 0 0
none /proc proc defaults 0 0
none /sys sysfs defaults 0 0' > /etc/fstab
echo 'auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp' >> /etc/network/interfaces
echo '#!/bin/sh
mknod -m 644 /dev/random c 1 8
mknod -m 644 /dev/urandom c 1 9
chown root:root /dev/random /dev/urandom
/etc/init.d/ssh start' > /etc/init.d/local
chmod 755 /etc/init.d/local
update-rc.d local start 98 2 3 4 5 .
ln -s /etc/init.d/local /etc/rc.d/rc.local
chmod 755 /etc/init.d/ec2-get-credentials
chmod 755 /etc/init.d/ec2-ssh-host-key-gen
update-rc.d ec2-get-credentials defaults
update-rc.d ec2-ssh-host-key-gen defaults
update-rc.d -f hwclock.sh remove
update-rc.d -f hwclockfirst.sh remove
This next script does the obvious steps of bundling up the image we created and prepares it for uploading and registering. As part of the ec2-register step I also include, with the --kernel flag, the compatible custom AKI to use for Squeeze, which in EU-WEST-1 is "aki-7e0d250a"; the matching ARI is "ari-7d0d2509".
ec2-bundle-image -i squeeze-ami --cert /ec2_creds/cert-.pem --privatekey /ec2_creds/pk-.pem -u AWS-ACCT
ec2-upload-bundle -b squeezebucket -m /tmp/squeeze-ami.manifest.xml -a accesskey -s secretkey --location=EU
ec2-register --private-key=/ec2_creds/pk-.pem --cert=/ec2_creds/cert-.pem --region=EU-WEST-1 squeezebucket/squeeze-ami.manifest.xml -n squeezelabelname -a i386 -d "Matts Debian Squeeze AMI" --kernel="aki-7e0d250a"
After registration is complete you will be given the ami-xxxxx id of your custom AMI, which you will then be able to see under the EC2 tab -> Launch Instances -> My AMIs.
Give it a try, not forgetting to pass the AKI and ARI as described above.
It will probably be helpful to show the two ec2 scripts, so you can see what they do before the instance starts. I include these below; hopefully this will save you some of the time and effort I spent figuring out what the problem was.
### BEGIN INIT INFO
# Provides: ec2-get-credentials
# Required-Start: $remote_fs
# Default-Start: 2 3 4 5
# Short-Description: Retrieve the ssh credentials and add to authorized_keys
### END INIT INFO
logger="logger -t $prog"
# Wait until networking is up on the EC2 instance.
while true; do
    curl --connect-timeout 1 --max-time 2 169.254.169.254:80 > /dev/null 2>&1 && break
    sleep 1
done
# Try to get the ssh public key from instance data.
curl --silent --fail -o $public_key_file $public_key_url
test -d /root/.ssh || mkdir -p -m 700 /root/.ssh
if [ $? -eq 0 -a -e $public_key_file ] ; then
    if ! grep -s -q -f $public_key_file $authorized_keys; then
        cat $public_key_file >> $authorized_keys
        $logger "New ssh key added to $authorized_keys from $public_key_url"
    fi
    chmod 600 $authorized_keys
fi
rm -f $public_key_file
### BEGIN INIT INFO
# Provides: ec2-ssh-host-key-gen
# Required-Start: $remote_fs
# Should-Start: sshd
# Default-Start: 2 3 4 5
# Short-Description: Generate new ssh host keys on first boot
# Description: Re-generates the ssh host keys on every
# new instance (i.e., new AMI). If you want
# to keep the same ssh host keys for rebundled
# AMIs, then disable this before rebundling
# using a command like:
# rm -f /etc/rc?.d/S*ec2-ssh-host-key-gen
### END INIT INFO
curl="curl --retry 3 --silent --show-error --fail"
# Wait until networking is up on the EC2 instance.
while true; do
    curl --connect-timeout 1 --max-time 2 169.254.169.254:80 > /dev/null 2>&1 && break
    sleep 1
done
# Exit if we have already run on this instance (e.g., previous boot).
mkdir -p $(dirname $been_run_file)
if [ -f $been_run_file ]; then
    logger -st $prog < $been_run_file
    exit 0
fi
# Re-generate the ssh host keys
rm -f /etc/ssh/ssh_host_*_key*
ssh-keygen -f /etc/ssh/ssh_host_rsa_key -t rsa -C 'host' -N ''
ssh-keygen -f /etc/ssh/ssh_host_dsa_key -t dsa -C 'host' -N ''
# This allows user to get host keys securely through console log
echo "-----BEGIN SSH HOST KEY FINGERPRINTS-----" | logger -st "ec2"
ssh-keygen -l -f /etc/ssh/ssh_host_key.pub | logger -st "ec2"
ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key.pub | logger -st "ec2"
ssh-keygen -l -f /etc/ssh/ssh_host_dsa_key.pub | logger -st "ec2"
echo "-----END SSH HOST KEY FINGERPRINTS-----" | logger -st "ec2"
# restart ssh with new keys
/etc/init.d/ssh restart
# Don't run again on this instance
echo "$prog has already been run on this instance" > $been_run_file
Some credit to other sites: as a base template I used some of the info from the following sites, and added a couple of bits of my own.
http://harajuku-tech.posterous.com/hypervmxen-starting-openbsd-secure-shell-serv -- This was specific to OpenBSD, but I used it as a basis for testing my mknod theory and the SSH problem described above.
Some parts came from this site: http://gista.blog.root.cz/2010/10/18/creating-debian-amazon-ec2-ebs-ami-using-debootstrap, although some of it is geared towards an EBS-backed Debian install.
What is ifconfig???
Well, the ifconfig command has been used for years to manage network interface cards. It is small and easy but gets the job done.
If ifconfig is used without any parameters, it will show information about all the network interfaces (loopback, eth0, etc.): things like link encapsulation, hardware address and, most importantly, the RX and TX packet counts, since from these you can determine whether there are any errors on the network.
To display information about all network interfaces, simply type:
ifconfig
However, to display information about a specific network interface like eth0, simply type:
ifconfig eth0
ifconfig can be used not only to display network information but also to change it.
For example, if you want to change the current IP address of eth0 from 192.168.1.2 to 192.168.1.10, you can simply type this command in the terminal:
ifconfig eth0 192.168.1.10 up
However, you can pass some extra parameters while changing the IP address, such as the subnet mask, e.g.:
ifconfig eth0 192.168.1.10 netmask 255.255.255.0 up
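The netmask decides which part of the address names the network. A quick sketch computing the network address for the values used above, by bitwise-ANDing address and mask octet by octet:

```shell
IP=192.168.1.10
MASK=255.255.255.0

# Split the dotted quads into octets
oldIFS=$IFS; IFS=.
set -- $IP;   i1=$1 i2=$2 i3=$3 i4=$4
set -- $MASK; m1=$1 m2=$2 m3=$3 m4=$4
IFS=$oldIFS

# AND each address octet with the corresponding mask octet
network="$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"
echo "network address: $network"
```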
You can also bring a specific network interface up (on) or down (off), e.g.:
ifconfig eth0 up
ifconfig eth0 down
Using Virtual IP Addresses with ifconfig
Another rather useful way of using ifconfig is to add virtual IP addresses, which are just secondary IP addresses added to a network card. A network board with virtual IP addresses can listen on two different IP addresses, which is useful if you are configuring services on your server that all need their own IP addresses.
You can use a virtual IP address within the same address range or on a different one. To add a virtual IP address, append :n, where n is a number, to the name of the network interface. For example, ifconfig eth0:0 10.0.0.10 adds the address 10.0.0.10 as a virtual IP address to eth0. The number after the colon must be unique; you can add a second virtual IP address with ifconfig eth0:1 10.0.0.20, and so on.
Often when I talk with fellow computer enthusiasts, usually Microsoft users, I am met with derisive shouts proclaiming GNU/Linux to be devoid of quality games. With that in mind, I decided to voice my opinion on what I consider to be some of the best games available for the Debian GNU/Linux operating system. While most people choose five or ten for their top lists, I decided to go with seven, as it is my lucky number. For my pricing descriptions, I shall borrow from Free Software Foundation founder Richard Stallman's classic quote, "'Free as in Freedom,' not 'Free as in Free Beer,'" to denote titles that are financially free to download.
For Debian-based systems:
# apt-get install armagetronad
Price: "Free as in Free Beer"
An oldie-but-a-goodie, this Free Software title evokes the excitement of the original 1982 Tron film's light-cycle grid battles. Despite its minimalistic control scheme, it continues to thrill gamers like me who enjoy all the twists and turns that come from such a frenetic title.
Price: Pay what you want
I did not originally know what to make of this innocuous puzzle game when I purchased it as part of the second Humble Indie Bundle, as you play a microscopic organism intent on expansion. As the days went by, I found it to be a deceptively captivating release as well as a welcome bit of respite during my lunch hour at work.
5.) World of Goo
Price: Pay what you want
With my mother being a mechanical engineer, I grew up appreciating the logic and care that goes into building sound structures. This game is a problem-solving physics game where you must surmount obstacles of varying degrees of difficulty with exponentially increasing levels of ingenuity. A very whimsical title for gamers of all ages, it was available from the Humble Indie Bundle #2 in December of 2010 as a bonus game for earlier purchasers.
Price: Pay what you want
A delightful game betwixt Super Mario Brothers and Prince of Persia, Braid allows users to rewind time to prevent painful mistakes. This quantum quality underscores the game's story, which showcases the trials of a man named Tim as he attempts to reconcile a past relationship with a princess. With a combination of beautiful art direction and calming musical accompaniment, this was one of main reasons I purchased the Humble Indie Bundle #2 in December of 2010.
3.) Revenge of the Titans
Price: Pay what you want
This past week, I was home sick for several days with nothing to do but rest, work on college classwork and experiment on a Humble Indie Bundle title I purchased called Revenge of the Titans. Save for being sick, it was one of the most enjoyable experiences I've had in years playing tower defense real-time strategy games on PC.
2.) Frozen Bubble
For Debian-based systems:
# apt-get install frozen-bubble
Price: "Free as in Free Beer"
With the crazed tempo of most modern PC and console games, it is a delight to find such an engaging title in the form of Frozen Bubble. With no blood, gore or violence to speak of, it's a puzzle game suitable for any gamer. I have had much jocularity with my siblings over this release, and I intend to keep playing it for years to come.
Price: "Free as in Free Beer"
In contrast to the adorable penguins of Frozen Bubble is the first-person shooter Nexuiz (since forked as Xonotic due to licensing issues). I have had many a goodhearted frag-fest with other fans of this FLOSS title over the years, including a memorable game of "hide-and-seek" involving rocket launchers. As soon as Xonotic sees its first official release, the fork shall be with me.
If you have not already installed these games via terminal or purchased them, I hope you will give them a chance. Until then, GNU yourself a favor and enjoy your favorite games on the only operating system that is free as in free speech and free as in free beer.
My mother is a music teacher. In 2003, looking for educational software, we came across GCompris, and she liked it a lot. At the time she tested it thoroughly and spotted a typo. We submitted a patch, and it was fun seeing it applied upstream.
Recently she asked me how she could "give GCompris to a friend". I gave this problem some thought, and came to the conclusion that the best way might be to use a USB stick with some live distro with GCompris.
I bought a 2 GB USB stick with the latest Knoppix pre-installed for less than 10 Euro. With the included package manager I installed GCompris. I noticed that its optional dependencies (tuxpaint, gnucap and gnuchess) didn't get pulled in, and installed them as well.
Voilà, GCompris on a USB stick. Works perfectly. :)