
How-to Create an S3 Instance-store Custom Debian Squeeze AMI on Amazon AWS EC2


I thought I would share the steps I went through to create my own S3-backed instance-store custom Debian Squeeze AMI, as I found it a bit more involved than getting Lenny working… apologies for the format, as it's a bit of a brain dump :-)

I created a couple of scripts, mainly for speed, and realise that they could be written with much more elegance; I just wanted a quick proof of concept. If you want to get going straight away, you should be able to copy and paste the scripts, plus the two ec2 init scripts, onto whatever machine you are using for AMI creation and AWS maintenance. This tutorial also assumes that you already have the EC2 AMI tools and API tools installed and that you have some experience using them for basic deployment. I ran my maintenance platform from a Vagrant Ubuntu install.

Section 1

This first script creates a 500 MB empty image, makes an ext3 filesystem on it, and mounts it on loopback at a directory I created called /chroot.

Next I run debootstrap, specifying which release of Debian I want to pull down and the architecture. Then I copy in two script files called ec2-get-credentials and ec2-ssh-host-key-gen (you can find these in the /etc/init.d directory of any currently running instance); these get copied into the image mounted under /chroot. I also copy over the correct kernel modules, which are publicly available on the EC2 forums or can be taken from any instance already using them (I tarred these up and scp'd them down from an existing instance, which also let me check which AKI and ARI I would need to pass at build time). Lastly I copy the second bash script into the /chroot, then I put myself inside the chroot by running "chroot /chroot".

Next, jump to Section 2, where I explain what needs to happen once you are in the chroot…
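The post only says the modules were tarred up and scp'd down; this sketch shows roughly how that can be done (the tarball name and the unpack helper are my own illustration, not from the original):

```shell
# On a running instance with the kernel you want:
#   tar czf /tmp/matts_modules.tar.gz -C / lib/modules
# scp the tarball down to the build box, then unpack it so the
# cp -r step in the script below finds lib/modules under it:
unpack_modules() {
    tarball="$1"; dest="$2"
    mkdir -p "$dest"
    tar xzf "$tarball" -C "$dest"
}
# e.g. unpack_modules matts_modules.tar.gz /home/userhomedir/matts_modules
```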

dd if=/dev/zero of=squeeze-ami count=500 bs=1M
mkfs.ext3 -F squeeze-ami
mount -o loop /home/userhomedir/squeeze-ami /chroot
debootstrap --arch i386 squeeze /chroot/
cp ec2-get-credentials /chroot/etc/init.d/
cp ec2-ssh-host-key-gen /chroot/etc/init.d/
cp -r /home/userhomedir/matts_modules/lib/modules/ /chroot/lib/modules
cp /home/userhomedir/ /chroot/
echo "now type chroot /chroot"
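Before typing chroot, a quick sanity check can save a broken build; this is a sketch (the helper name is mine, not from the post) that the debootstrap tree and the copied modules actually landed where expected:

```shell
# Hypothetical helper: verify the mounted image looks like a complete
# debootstrap tree before chrooting into it.
sanity_check_chroot() {
    root="$1"
    [ -e "$root/etc/debian_version" ] || { echo "missing etc/debian_version"; return 1; }
    [ -x "$root/bin/bash" ]           || { echo "missing bin/bash"; return 1; }
    [ -d "$root/lib/modules" ]        || { echo "missing lib/modules"; return 1; }
    echo "chroot tree at $root looks ok"
}
# e.g. sanity_check_chroot /chroot
```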

Section 2

OK, once inside the chroot environment that hosts the image we are going to send up to S3, we need to do the following.

Mount the proc and devpts filesystems, run aptitude update to make sure we are current, then install locales (if we don't do this we get nasty errors when we then try to install the makedev package; I chose en_GB.UTF-8 as my locale and followed the on-screen prompts from dpkg-reconfigure). Next I removed the /dev/.udev directory, otherwise the makedev install complains that udev is running.

Next create the symlink to MAKEDEV in /dev, change directory to /dev and create some basic devices. Then remove /etc/hostname, as this will be set for us by the EC2 platform when the AMI starts up. Next install ssh and make sure it is stopped, then grab curl, dhcpcd and apache2, using aptitude purge along the way to remove the stock DHCP client packages.

The next few steps echo new values into config files that will be read at startup: setting sshd_config not to use DNS, building an fstab, and configuring the network interfaces with eth0 on DHCP.

The Magic Bit!!!!

The next part is the magic bit that sorts out the problem of the SSH process not starting properly. If you don't include it, then when you dump out the EC2 console for the instance you will see a load of error messages saying "PRNG not seeded", and you will find it impossible to log in to the instance, even though apache will respond. The console log will also show that the SSH keys did not get regenerated. The issue seems to be that, regardless of whether you create the devices /dev/random and /dev/urandom before bundling the image, as the EC2 instance boots you will see messages saying those devices can't be found (no such file or directory). So I figured I could create them on the fly as the machine image boots: I used "mknod" in a small init script, then restarted the ssh process, removed the startup references to the hardware clock, and made the two ec2 init scripts run at boot time.

mount -t proc none /proc
mount -t devpts none /dev/pts
aptitude update
aptitude install locales
dpkg-reconfigure locales
rm -Rf /dev/.udev
aptitude install makedev
ln -s /sbin/MAKEDEV /dev
cd /dev
for dev in zero null console std generic; do MAKEDEV $dev; done
rm -f /etc/hostname
aptitude install ssh
/etc/init.d/ssh stop
aptitude install curl
aptitude purge isc-dhcp-client isc-dhcp-common dhcp3-client
aptitude install dhcpcd
aptitude install apache2
aptitude update
echo "UseDNS no" >> /etc/ssh/sshd_config
cat > /etc/fstab <<'EOF'
/dev/sda1 /     ext3  defaults 1 1
/dev/sda2 /mnt  ext3  defaults 0 0
/dev/sda3 swap  swap  defaults 0 0
none      /proc proc  defaults 0 0
none      /sys  sysfs defaults 0 0
EOF
cat >> /etc/network/interfaces <<'EOF'
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
EOF

cat > /etc/init.d/local <<'EOF'
#!/bin/bash
mknod -m 644 /dev/random c 1 8
mknod -m 644 /dev/urandom c 1 9
chown root:root /dev/random /dev/urandom
/etc/init.d/ssh start
EOF

chmod 755 /etc/init.d/local
update-rc.d local start 98 2 3 4 5 .
mkdir /etc/rc.d/
ln -s /etc/init.d/local /etc/rc.d/rc.local

chmod 755 /etc/init.d/ec2-get-credentials
chmod 755 /etc/init.d/ec2-ssh-host-key-gen
update-rc.d ec2-get-credentials defaults
update-rc.d ec2-ssh-host-key-gen defaults
update-rc.d -f hwclock.sh remove
update-rc.d -f hwclockfirst.sh remove
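Before leaving the chroot it is worth confirming that the boot scripts are actually wired in. A small sketch (the helper is mine, not part of the original write-up):

```shell
# Hypothetical check: is an init script executable and linked into runlevel 2?
# Pass "" as root when running inside the chroot itself.
check_boot_script() {
    root="$1"; svc="$2"
    [ -x "$root/etc/init.d/$svc" ] || { echo "$svc: not executable"; return 1; }
    ls "$root"/etc/rc2.d/S*"$svc" > /dev/null 2>&1 || { echo "$svc: no rc2.d link"; return 1; }
    echo "$svc ok"
}
# e.g. for s in local ec2-get-credentials ec2-ssh-host-key-gen; do check_boot_script "" "$s"; done
```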

Section 3

This next script does the obvious steps of bundling up the image we created, uploading it, and registering it. As part of the ec2-register step I pass, via the --kernel flag, the compatible custom AKI for squeeze, which in eu-west-1 is "aki-7e0d250a"; the matching ARI is "ari-7d0d2509".
ec2-bundle-image -i squeeze-ami --cert /ec2_creds/cert-.pem --privatekey /ec2_creds/pk-.pem -u AWS-ACCT
ec2-upload-bundle -b squeezebucket -m /tmp/squeeze-ami.manifest.xml -a accesskey -s secretkey --location EU
ec2-register --private-key /ec2_creds/pk-.pem --cert /ec2_creds/cert-.pem --region eu-west-1 squeezebucket/squeeze-ami.manifest.xml -n squeezelabelname -a i386 -d "Matts Debian Squeeze AMI" --kernel aki-7e0d250a

After registration completes you will be given the ami-xxxxx ID of your custom AMI, which you will then be able to see under the EC2 tab -> Launch Instances -> My AMIs.

Give it a try, not forgetting to pass the AKI and ARI as described above.
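A launch command would look something like this (the AMI ID, keypair and instance type are placeholders; the AKI/ARI are the eu-west-1 IDs given above):

```shell
ec2-run-instances ami-xxxxxxxx --region eu-west-1 -k my-keypair -t m1.small \
    --kernel aki-7e0d250a --ramdisk ari-7d0d2509
```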

It will probably be helpful to show the two ec2 scripts, so you can see what they do before the instance starts. I include these below; hopefully this will save you some of the time and effort I had to spend figuring out what the problem was.


#!/bin/bash
### BEGIN INIT INFO
# Provides:          ec2-get-credentials
# Required-Start:    $remote_fs
# Required-Stop:
# Should-Start:
# Default-Start:     2 3 4 5
# Default-Stop:
# Short-Description: Retrieve the ssh credentials and add to authorized_keys
# Description:
### END INIT INFO

prog=$(basename $0)
logger="logger -t $prog"

# The metadata URL and file paths below were lost in the page migration;
# these values follow the well-known EC2 instance-metadata convention.
public_key_url=http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
public_key_file=/tmp/openssh_id.pub
authorized_keys=/root/.ssh/authorized_keys

# Wait until networking is available.
while true; do
  curl --connect-timeout 1 --max-time 2 http://169.254.169.254/ > /dev/null 2>&1 && break
  sleep 1
done

test -d /root/.ssh || mkdir -p -m 700 /root/.ssh

# Try to get the ssh public key from instance data.
curl --silent --fail -o $public_key_file $public_key_url
if [ $? -eq 0 -a -e $public_key_file ] ; then
  if ! grep -s -q -f $public_key_file $authorized_keys ; then
    cat $public_key_file >> $authorized_keys
    $logger "New ssh key added to $authorized_keys from $public_key_url"
  fi
  chmod 600 $authorized_keys
  rm -f $public_key_file
fi

#!/bin/bash
### BEGIN INIT INFO
# Provides:          ec2-ssh-host-key-gen
# Required-Start:    $remote_fs
# Required-Stop:
# Should-Start:      sshd
# Default-Start:     2 3 4 5
# Default-Stop:
# Short-Description: Generate new ssh host keys on first boot
# Description:       Re-generates the ssh host keys on every
#                    new instance (i.e., new AMI). If you want
#                    to keep the same ssh host keys for rebundled
#                    AMIs, then disable this before rebundling
#                    using a command like:
#                       rm -f /etc/rc?.d/S*ec2-ssh-host-key-gen
### END INIT INFO

prog=$(basename $0)
curl="curl --retry 3 --silent --show-error --fail"
# The metadata URL and marker-file path were lost in the page migration;
# these values follow the well-known EC2 instance-metadata convention.
instance_data_url=http://169.254.169.254/latest

# Wait until networking is available.
while true; do
  curl --connect-timeout 1 --max-time 2 http://169.254.169.254/ > /dev/null 2>&1 && break
  sleep 1
done

# Exit if we have already run on this instance (e.g., previous boot).
ami_id=$($curl $instance_data_url/meta-data/ami-id)
been_run_file=/var/ec2/$prog.$ami_id
mkdir -p $(dirname $been_run_file)
if [ -f $been_run_file ]; then
  logger -st $prog < $been_run_file
  exit 0
fi

# Re-generate the ssh host keys
rm -f /etc/ssh/ssh_host_*_key*
ssh-keygen -f /etc/ssh/ssh_host_rsa_key -t rsa -C 'host' -N ''
ssh-keygen -f /etc/ssh/ssh_host_dsa_key -t dsa -C 'host' -N ''

# This allows the user to get the host keys securely through the console log
echo "-----BEGIN SSH HOST KEY FINGERPRINTS-----" | logger -st "ec2"
ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key.pub  | logger -st "ec2"
ssh-keygen -l -f /etc/ssh/ssh_host_dsa_key.pub  | logger -st "ec2"
echo "-----END SSH HOST KEY FINGERPRINTS-----"   | logger -st "ec2"

# restart ssh with new keys
/etc/init.d/ssh restart

# Don't run again on this instance
echo "$prog has already been run on this instance" > $been_run_file

Some credit to other sites…

As a base template I used some of the info from a couple of other write-ups, and added a few bits of my own. One was specific to OpenBSD, but I used it as the basis for testing my mknod theory and the SSH problem described above. Another covered similar ground, although some of it is geared towards an EBS-backed Debian install.


