Community Blogs

Linux Foundation Gurus get mentioned in FOSS article in UK Linux Format Magazine!

I posted this on my own external blog, but thought I would share the pics of the article in Linux Format that mentions the Linux Foundation!

This year's Top Ten Linux Gurus, as judged by the Linux Foundation, were picked earlier last month. Imagine my surprise to find my name (Matt Palmer) among them when I opened this month's UK June edition of Linux Format!

I thought I would put up a couple of pictures of the article for posterity. Well done to all the other gurus as well, who have done a brilliant job of helping new users to Linux, as well as assisting others with questions, whitepapers and best practices, and inspiring innovation and development within the organisation.

Here were the Top Ten contributors to the Foundation:

1) Matthew Fillpot

2) Aaron Aceves

3) Andrea Benini

5) Istimsak Abdulbasir

6) Marco Fioretti

7) Matt Palmer (Me!)

8) Per Lindholm



Take a look at the Linux Foundation's website for some great articles and topics for debate, and the latest news on the greatest operating system on the planet!



Move over Apple, here comes Compufon!


I thought this was amazing. This nifty phone runs on Android and has the ability to dock into a tablet or act as a slimline PC with a real keyboard.

It's amazing; take a look. It includes a Bluetooth headset that you can use when someone calls while the phone is docked in the tablet.

This could prove to be an interesting competitor to other smartphone vendors when it reaches the production stage, as it is basically the iPhone/iPad and laptop all in one. Because it's Android, it also opens the door for enthusiasts to develop a wider range of applications.


Migrate Bugzilla 3.0 Server to a Bugzilla 4.0 Server


I recently did a migration from Bugzilla 3.0 running on one server to Bugzilla 4.0 running on a new server.  Since I was already writing these steps down as I went, I thought I would share the instructions with the greater Linux community.  You may find that some of them need to be run in a slightly different order, but this is a pretty complete account of what it took to get things going on the new server.  I had to go back to the ./ script over and over throughout the process to check myself and figure out what needed to happen next.  I have tried to arrange the steps in an order that will work, but if you find you have to install the Bugzilla folder first and continually re-run ./, that's pretty much what I had to do.

These instructions are for RedHat 5.5.

Backup Your Current Bugzilla 3.0 Database

These directions all assume that you have a Bugzilla 3.0 database that you've backed up with the following commands:

sudo mkdir /data/backups/mysql-bugzilla-3.0
sudo chown www-data.www-data /data/backups/mysql-bugzilla-3.0
mysqldump -u bugs --password=XXXXXXXXX bugs | gzip -9v > `date '+/data/backups/mysql-bugzilla-3.0/bugs_%Y%m%d.sql.gz'`
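The backticks around the `date` command above mean the dated file name is computed before the dump runs. A minimal sketch to preview the path that pattern generates for today (no database needed; the directory is the one from the commands above):

```shell
# Expand the same date pattern used in the mysqldump command above to see
# the backup file name it will produce for today's date.
backup_path=$(date '+/data/backups/mysql-bugzilla-3.0/bugs_%Y%m%d.sql.gz')
echo "$backup_path"
```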


Setup Apache

Install the Web Server, set the run levels, and start the service:

[root@mybugzilla]# yum groupinstall "Web Server"
[root@mybugzilla]# chkconfig --levels 2345 httpd on
[root@mybugzilla]# service httpd restart

Try the main site to ensure that Apache is set up correctly by browsing to your site name or to http://localhost

You should see the default Apache page, showing that the web server is installed properly.

Modify the httpd.conf file

Open the /etc/httpd/conf/httpd.conf file in your favorite editor.  Add these lines to the end of your httpd.conf file:

PerlSwitches -w -T
PerlConfigRequire /var/www/html/bugz/

Also make sure that you have the line "KeepAlive Off", per the Bugzilla install instructions.

Restart Apache

[root@mybugzilla]# service httpd restart

Setup MySQL

Install MySQL, set the service to start on boot, and set the mysql root password:

[root@mybugzilla]# yum groupinstall "MySQL Database"
[root@mybugzilla]# chkconfig --levels 2345 mysqld on
[root@mybugzilla]# service mysqld restart
[root@mybugzilla]# /usr/bin/mysqladmin -u root password 'XXXX'

Now login to mysql:

[root@mybugzilla]# mysql -pXXXX

Once logged into mysql as root, create the bugs user and the database:

mysql> create user 'bugs'@'localhost' IDENTIFIED BY 'XXXpasswordXXX';
mysql> create database bugs;
mysql> show databases;

Restore data

These instructions assume that you followed the above instructions for exporting your bugzilla database from Bugzilla 3.0.

[root@mybugzilla]# cd /path/to/restore/file
[root@mybugzilla]# gunzip bugs_20110414.sql.gz
[root@mybugzilla]# mysql -u root -pXXXX bugs < bugs_20110414.sql
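Before restoring, it's cheap to verify that the gzip'd dump is intact. A sketch that assumes nothing about your real dump (it creates a stand-in file so the check can be demonstrated anywhere; with a real backup you'd run `gunzip -t` directly on your `bugs_YYYYMMDD.sql.gz`):

```shell
# Create a stand-in dump purely for demonstration purposes.
printf 'CREATE TABLE demo (id INT);\n' > /tmp/demo_dump.sql
gzip -f /tmp/demo_dump.sql

# gunzip -t tests archive integrity without extracting anything.
gunzip -t /tmp/demo_dump.sql.gz && echo "dump OK"
```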

Grant Permissions

Log in to mysql as above and run:

mysql> grant all on bugs.* to 'bugs'@'localhost';


Fix MySQL Defaults in /etc/my.cnf

Edit the /etc/my.cnf file and add these lines under the [mysqld] section:

# Allow packets up to 4MB
max_allowed_packet=4M

# Allow small words in full-text indexes
ft_min_word_len=2

These are the values the Bugzilla install documentation recommends; the first raises the maximum size of attachments uploaded to bugs from the default 1MB to 4MB, and the second lets short words be matched in full-text searches.

Restart MySQL

Restart the service to ensure any changes above are loaded:

service mysqld restart


Setup Bugzilla

Prep Bugzilla

Create the directories and download Bugzilla:

[root@mybugzilla]# cd /var/www
[root@mybugzilla]# mkdir bugz
[root@mybugzilla]# cd bugz
[root@mybugzilla]# wget
[root@mybugzilla]# tar xzvf bugzilla-4.0.tar.gz
[root@mybugzilla]# cd /var/www/html
[root@mybugzilla]# ln -s ../bugz/bugzilla-4.0 bugz
[root@mybugzilla]# cd /var/www/bugz/bugzilla-4.0


Install Bugzilla Dependencies

./ --check-modules

You will be told to run several commands to install all the Perl modules.  For example, I was told to run these:

[root@mybugzilla]# /usr/bin/perl Digest::SHA
[root@mybugzilla]# /usr/bin/perl Date::Format
[root@mybugzilla]# /usr/bin/perl DateTime
[root@mybugzilla]# /usr/bin/perl DateTime::TimeZone
[root@mybugzilla]# /usr/bin/perl Template
[root@mybugzilla]# /usr/bin/perl Email::Send
[root@mybugzilla]# /usr/bin/perl Email::MIME
[root@mybugzilla]# /usr/bin/perl List::MoreUtils

or just simply:

[root@mybugzilla]# /usr/bin/perl --all

DateTime wouldn't build, so I tried this:

[root@mybugzilla]# yum install mysql-devel gd gd-devel perl-DBD-MySQL mod_perl-devel

The perl-DBD-MySQL package is 3.07, which is too old (4.0 is needed), so I had to be sure to run the CPAN install:

[root@mybugzilla]# /usr/bin/perl DBD::mysql

I noticed that DateTime complains that I only have Archive::Tar 1.3901, which it found in the perl-Archive-Tar package.  I upgraded with the following:

[root@mybugzilla]# /usr/bin/perl Archive::Tar

That got me Archive::Tar 1.76, which stopped the complaints from DateTime.  I then installed DateTime by running these:

[root@mybugzilla]# /usr/bin/perl Module::Build
[root@mybugzilla]# /usr/bin/perl DateTime


Install Additional Modules

I installed these extra modules:

[root@mybugzilla]# /usr/bin/perl Net::LDAP
[root@mybugzilla]# /usr/bin/perl GD
[root@mybugzilla]# /usr/bin/perl Chart::Lines
[root@mybugzilla]# /usr/bin/perl Email::Reply
[root@mybugzilla]# /usr/bin/perl Apache2::SizeLimit
[root@mybugzilla]# /usr/bin/perl GD::Graph
[root@mybugzilla]# /usr/bin/perl PatchReader
[root@mybugzilla]# /usr/bin/perl Email::MIME::Attachment::Stripper

Out of the modules above, I know that if you don't have Apache2::SizeLimit there will be a problem when restarting the httpd server, because (per the instructions above) the Apache configuration includes a reference to the script in the Bugzilla folder, which in turn uses Apache2::SizeLimit.  If you have trouble (as I did), make sure you have the mod_perl-devel package installed in order to install Apache2::SizeLimit.

Edit Configuration

Now you'll need to edit the localconfig file to set up the variables.  Make sure you have these variables set; you'll probably have to set $db_pass (and possibly others) manually:

$create_htaccess = 1;
$webservergroup = 'apache';
$use_suexec = 0;
$db_driver = 'mysql';
$db_host = 'localhost';
$db_name = 'bugs';
$db_user = 'bugs';
$db_pass = 'XXXpasswordXXX';
$db_port = 0;
$db_sock = '';
$db_check = 1;
$index_html = 0;
$cvsbin = '/usr/bin/cvs';
$interdiffbin = '/usr/bin/interdiff';
$diffpath = '/usr/bin';


Re-Run CheckSetup

Re-run checksetup now that you've fixed the localconfig file.  This should test the settings in localconfig to ensure the correct database name, user name and password:

./

The script may remind you to perform some actions documented above.  If you've missed anything, go ahead and fix it now before continuing.

Bugzilla Upgrade

Upgrading can cause serious harm to the database.  Make sure you've backed up, or that you have a dump migrated from another server that you can use to restore the 'bugs' database, in case the following steps fail.  There are probably more ways to migrate, but these are the actions that I took.

Ensure the character encoding is correct

Your database must be in UTF-8.  If the previous database used another encoding, you should fix it with the contrib/ script.

[root@mybugzilla]# /usr/bin/perl Encode::Detect
[root@mybugzilla]# cd /var/www/bugz/bugzilla-4.0/
[root@mybugzilla]# contrib/ --dry-run --guess

Review what would change, then run the command for real:

[root@mybugzilla]# contrib/ --guess


Continue with

Try the ./ script again.  You can now safely continue when prompted about the UTF-8 conversion.



Using a Distributed Shell (dsh) to Administer Multiple Workstations


This is something I used to need a lot and dug out of my documentation to share.  These are the steps to using dsh on Ubuntu to administer multiple workstations.  This does not include instructions for setting up public/private key authentication in SSH, but these instructions work best when that has been set up.  If you don't set up SSH keys, you'll end up having to type the root password for each machine you ssh into.



On Ubuntu, run this to install:

sudo aptitude install dsh



Start by creating a new group of workstations.  I have several CentOS workstations, so I create a new group file, ~/.dsh/group/centos-workstations, that lists their hostnames, one per line:

# These are standard workstations conforming to the typical centos configuration

Running Commands

You can run pretty much anything, but here's an example:

dsh -g centos-workstations yum clean all
dsh -g centos-workstations yum -y upgrade

Some Shortcuts

I simplify matters even more by creating an alias:

alias cent='dsh -g centos-workstations'

I add this to my .zshrc file.  This buys me the ability to run things like:

cent yum -y upgrade
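To see what the alias expands to without touching any real machines, you can stub dsh with a shell function. This is purely a demonstration; the dsh function below is a stand-in, not the real binary:

```shell
# Stand-in for the real dsh binary so the expansion can be shown anywhere.
dsh() { echo "dsh $*"; }

# The function equivalent of the alias (functions, unlike aliases, are
# also expanded inside non-interactive scripts).
cent() { dsh -g centos-workstations "$@"; }

cent yum -y upgrade
```

The last line prints the full dsh command line that the real alias would execute.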


I realize these instructions are a bit basic, but I hope they point out new possibilities to those who have never used dsh before.  I have used this pretty extensively with about 50 workstations and public-key authentication, and it works wonderfully.  I found it especially useful a year or more ago when I was trying to modify 50+ workstations to conform to a more standard configuration.  I tried several other solutions back then (parallel-ssh or something like that) and didn't find anything quite as nice as dsh.



Going to the Vatican as a FOSS blogger: which questions and proposals should I bring with me?

I'm one of the 150 people invited to the first "Bloggers in the Vatican" meeting next month. I've put all the details about the goals of the meeting and why I was invited here, and now I have a question:

Which questions and proposals do you think I should bring to a meeting like this, given the chance, both in general and as a FOSS/Open Standards advocate?

Thanks! Any feedback is welcome.


How to automatically replace files when updating WordPress

WordPress is quick and easy to install and update, but the quicker you can make these operations the better, right? If you have shell access to the server where your WordPress copy is installed, you can make some operations much quicker and safer with this shell script.


The importance of education to Linux

Just by looking at the amount of voluntary work users engage in to help each other with their computer problems, I wonder if we could not use this massive source of human intelligence for educational purposes.

The idea is that we first aim high and help each other with entry-level university mathematics. Young adults would then see another real benefit to joining the Linux community.

In my previous blog entry I wrote about the potential benefit of bundling student books with some popular distributions. I believe this is where you need to start: having a couple of good open textbooks for users to engage with. Perhaps it's enough to have them in the repository. The important thing is to get enough users reading the same books.

I wrote the blog entry below explaining how the cooperation could work.

There I focus on explaining textbook theory in multiple fashions. What I forgot to write is that the explanations would need to be approved before publishing, and that you could link each specific theoretical part to whatever is relevant, be it videos, exercises or forum discussions for questions not yet answered.

We need advanced innovative software in all fields. By helping with the educational bit I think we get a lot in return.



Dbmail? A great Open Source email system, especially for LAMP/MySQL administrators

A couple of weeks ago, I was thinking about how I might build an advanced search utility for my own email archive. One way to make complex queries on the archive seemed to be to put it all into a relational database. Since the Dbmail system stores email in that way, I asked its developers and Harald Reindl (an email administrator at The Lounge who already uses Dbmail; I found him in the Postfix mailing list archives) whether Dbmail could be used in that way. The feedback I got made me change my mind about how to rebuild my own email search system, for the reasons explained below. At the same time, how and why Harald and his company use Dbmail seemed really interesting. Here's the full story.

A site that tries to make it easier to claim damages caused by climate change

It is LAMP software that allows any individual to record claims of damages caused by climate change, so that in the future these damage claims may be litigated. The developers hope to spread this idea and encourage other organizations to use the same model to collect other types of environmental damage claims, so soon the software source code will be available for everybody. Read the interview with a developer here.


Creating a custom live distro: choosing the hardware

Brief Intro

If you've followed my previous article, you know what I'm trying to do. This project aims to create a home server; the requirements have already been described. I have a few mods in mind (X10 automation, an SMS gateway and a few more), but basically this is what I'm trying to do. As I've told you before, I've already decided to use an Atom motherboard, but other architectures were involved in the decision process. Here are a few personal considerations, with pros and cons:


The Candidates:


SX (STM) and derived architectures

Pros: really, really cheap; really low power consumption

Cons: limited I/O; not enough I/O ports available, nor "high level" devices that can be linked to them


Atmel ATMega architecture (and Arduino clones as well)

Pros: quite cheap, easy customization, reliable hardware, a good programming fit (for my skills), close to the real world (digital I/O, analog I/O, measurements), RT OSes available for the "bigger" CPUs, really low power consumption

Cons: nice parts for controlling small appliances, but even though they have good processing power for their class, they're not suitable for a full-blown home server with a USB controller. They can connect to USB drives/peripherals, but they're too limited for some of the cool stuff, or if I'd like to run a strong OS on them (read: Linux)

Note: I'll probably use some custom PCBs with Atmel devices on them for controlling little things like lights, sensors, LCD panels and so on


MIPS devices

Pros: low power consumption; "high level" interfacing with buses and peripherals (compared with ATMega/SX/PIC chips), such as a micro-PCI bus and USB. RT OSes (real-time operating systems) are available, as well as compact Linux systems (generally with uClibc and BusyBox); also check OpenWRT. These CPUs turn up in a lot of small hardware devices (access points, DSL routers, a few switches, ...), and you can find them at low prices ($20-50) inside ready-made appliances

Cons: USB is available and you can connect external drives (if you find the proper device), but heavy disk operations may heavily tax this kind of CPU (if you don't have a DSP and a southbridge-like controller). CPU-heavy workloads (P2P software) may be a problem for them. Realizing a custom-made project with these devices may cost you a lot of bucks


ARM devices

Pros: the same benefits as MIPS devices and then some: ARM CPUs perform really well. I can't compare them to a Pentium-class device, but they have enough power for a lot of heavy disk operations. There's a lot of effort in the Linux ARM community, and a lot of patches and features for an autonomous device (uClibc + BusyBox + Linux are a must). There are even some ARM distros with good performance. Ready-made devices can be found in high-end routers, switches, access points and so on; if you like them, remember: OpenWRT could be your best friend. Power consumption is low

Cons: it's not easy to find a ready-made device with a USB controller at an affordable price; multiple USB connections may affect machine performance (without a DSP or a southbridge). I need several USB devices connected to my machine (USB disks, webcams, an audio card, ...), and dealing with all of them may be challenging. Heavy disk operations (file transfers or P2P, for example) may slow the machine down a little. ARM has enough CPU power to solve these problems, but even if I could make or buy a custom motherboard, the cost is the problem: they can cost a lot of money, and it's not easy to stay under $100. ARM chips are reliable, have plenty of CPU power and don't draw much (more than MIPS, by the way), but they're not so cheap on the industrial market. You can find something on the consumer market, but the motherboards are limited (1 Ethernet port, 1 USB and little more)

Note: this was my second choice; unfortunately they're too expensive for me


Geode/Intel Atom/Via C3 devices

As I've already told you in my last article, this is my current choice. Here's why:

Pros: they're not as cheap as an MPU (microprocessor units like Atmel's ATMega, Microchip PICs or STM SXes), but Intel and Via (especially Intel) are cutting prices a lot; in some cases they're even less expensive than MIPS- or ARM-based cards. I can find a CPU + motherboard for $50-70 and use it as a common PC (with limited CPU power compared to a Pentium-class device). You generally get a complete micro motherboard around the CPU, so pains like adding external devices are solved: you have an audio card, a bunch of USB ports (4-8), Ethernet (wireless or wired), RS232, a parallel port, ... You have a southbridge on the motherboard, so no more troubles with heavy USB I/O (for a really small server; not suitable for a medium-sized corporation). If you want an efficient system you can create an optimized OS, use a common Linux distro (if you don't have the skills or time) or install a full-blown MS Windows (if you want to live dangerously). Passive cooling is available on low-end systems

Cons: CPU-intensive tasks and I/O can raise power draw a lot (even 40W or more), but if you tune it a little and turn off what you don't use you can save a lot (I'm now drawing 15W at idle, 20-25W when streaming video). This is my one problem with these devices: I'd like the power draw of an ARM card with this class of CPU. I've found a lot of dual-core Atoms, but they have too much CPU power for my needs and draw a lot, in which case a CPU fan is required; I'd rather stay with a low-end system and passive cooling (just a heatsink)


"Pentium Class" devices (Pentium / Pentium Mobile / Core 1,2,3,4,... / Core i3, i5, i7, ...)

Pros: they're marvelous devices with huge CPU power; you can do everything you want with them... but I don't care. I just need a bare-bones machine and nothing more. Expansion boards are available: PCI, USB, Ethernet, lots of ports everywhere

Cons: active cooling is needed (at least one CPU fan). They draw a huge amount of power, too much for a 24/7 machine (at least as long as I pay my own electricity bill), and even underclocked you can't cut many watts from them (desktop or laptop, it doesn't matter). Their cost is adequate to their processing power, but decidedly too classy compared with an Atom board. You have cheap hardware with moving parts (CPU fan: $10); if it breaks you lose your entire system, so you can't leave it unattended for a long period


Grandma's computer, or something cheap from your attic

Pros: it's cheap because you already have it (but please read the cons as well); you have a ready-made machine at an unbeatable price (free)

Cons: is it reliable? How old is it? What about spare parts? Again, if this device is a Pentium-class machine you should read the cons for Pentium-based devices. Looking at your electricity bill, you might pay for a brand-new Atom board with the same money spent on the extra watts these platforms draw


Consoles, consumer products (TiVo, ...), DVD players

Pros: you may find something good in this category, but please consider these two constraints: power draw and price. You have to stay under 50W and $100. If you find something good please let me know; I'm interested in it as well... :-)

Cons: you can spend time hacking one, but how much does a new, fully loaded Atom device cost in comparison? Again: if you have a sub-50W machine at a $100 price, please let me know



So after these considerations you know why I'm now using Atom devices. I used hacked MIPS/ARM machines before this replacement, but I switched away from them due to CPU requirements. If you have a viable solution with Atom-class CPU power and ARM-class power draw, I'm really interested in it.



The Plan

You've read about my basic requirements and we've just finished the hardware selection; now it's time to look at the software selection. A small intro, a lot of testing, distro selection and customization will follow in the next articles:



Creating a custom live distro for a target appliance

Create a custom distro, choice of the right base

Create a custom distro: building the build machine

Create your first target image

Building a minimal image

Stay tuned



A Few More Shell Basics

In our last lesson here, I scratched the surface of the basics needed to start scripting in BASH. Today, we'll learn a few more things.

Here are some fundamentals that we need to cover real quick. It's nothing too complicated.

chmod - You've seen this command in action before, I'm sure. It's important when it comes to writing shell scripts because it allows you to make a script executable. This is, of course, a necessity if we want the script to actually run and do something.

#! - These two symbols, used in this particular order at the beginning of a script, specify which shell we want to use to run the script. We will primarily be talking about BASH here in these lessons, but let's say we wanted to write a script that used the tcsh shell instead. The first line of the script would be:

#!/bin/tcsh

The operating system would check that first line initially to determine what shell we intended when we wrote the script. It would then call up that shell and run the script. You'll see that a lot when programming in BASH. Now you know what it does.

# - This symbol signifies a comment. You've probably seen this many times already in configuration files like GRUB's menu.lst or your distribution's repository configuration file, maybe. It's not at all uncommon to find this symbol used in many different programming languages to alert the OS (and the user) that the data on the same line following the # is an informational comment of some sort. Comments are GOOD things when programming. Make lots of comments. They will help you and maybe others who use your program or script down the road. Here's an example of a commented snippet of script:

#!/bin/bash
# ZIP-100 mknod script - 07222006 - Bruno
mknod /dev/hdb4 b 3 64
#End script

You can see the initial #! telling the OS what shell to use. In the next line, you can see the # symbol followed by some informational data. This was an actual script that I used to need to run at boot to get my ZIP-100 drive to be recognized by the OS. Newer kernels later on solved the issue by using udev, but that's a whole 'nother subject. I just wanted you to see what a comment looks like in a BASH script.

; and <newline> - These two devices, when used in the shell, denote a transition from one command to another. For example, if I had three commands called c1, c2, and c3, I could run them on the same line using the ; (semi-colon) to separate them. Or I could just put each of them on a new line. Here's what I mean...

$ c1;c2;c3 <ENTER to execute>

$ c1 <ENTER>
$ c2 <ENTER>
$ c3 <ENTER>

Pretty simple so far, huh? OK. Let's continue...

\ - The backslash is used to continue a command from one line to the next. Like this:

$ echo "She sells seashells \

> by the seashore." <ENTER>

She sells seashells by the seashore.

Without the \, the line would have been broken when displayed by the echo command's output. Like this:

$ echo "She sells seashells

> by the seashore." <ENTER>

She sells seashells

by the seashore.

See the difference? Cool! OK, then... just a few more for today. We don't want to get a headache by stuffing too much in there at one time.

| and & - These two are very cool, and you'll see them a lot. The first one is called a "pipe", and that's just what it does: it pipes the output of one command into the input of another. The ampersand (&) symbol tells the shell to run a command in the background. I should briefly diverge here for a moment and mention foreground and background operations so you'll understand what they are.
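Here's a tiny pipe example you can try anywhere (the sample fruit list is arbitrary):

```shell
# sort receives the three lines through the pipe; head keeps only the
# first line of sort's alphabetized output.
printf 'pear\napple\nbanana\n' | sort | head -n 1
```

The output is apple: sort puts the lines in alphabetical order, and head -n 1 keeps just the first one.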

A quick sidebar lesson...

You can run a command in the foreground (actively running before your eyes) or in the background (hidden from view but still going). The way to control this is to use the bg and fg command modifiers at the command line. Let's look at a couple of simple examples. Let's say Mary wants to run some big admin job in the background while she does other stuff in the foreground. It's pretty easy to do. First she starts big_job01 from the command line.

mary@workplace:~$ big_job01


With big_job01 running actively in the foreground, Mary doesn't have a cursor blinking at the command line. She can't do any other work because she has to watch big_job01's output. To change this, Mary will "background" big_job01 to bring back her command-line cursor so she can do other things. She presses:

CTRL+Z

What this combination of keystrokes does for Mary is stop the process big_job01, giving this notification:

[1]+  Stopped                 big_job01

Now Mary will get her cursor back, and she can send big_job01 to the background to get it running again.

mary@workplace:~$ bg 1

She can check to see that it's in the background and running by listing the jobs.

mary@workplace:~$ jobs -l

[1]+  4107 Running                 big_job01 &

Note the trailing & in the above line. That's telling Mary that big_job01 is running in the background. Cool, huh? Now Mary can go on about her other chores in the foreground as she normally would. When big_job01 finishes, the command line will give Mary a notification like this:

[1]+ Done                                         big_job01

OK, there you have it... a bit about backgrounding and foregrounding operations within the shell. Now let's go back to what we were talking about before.

( and ) - Parentheses are used in shell scripting to group commands. The shell will actually create a copy of itself, called a sub-shell, for each group. Groups are seen by the shell as jobs, and process IDs are created for each group. Sub-shells have their own environment, meaning they can have their own variables with values different from other sub-shells or the main shell. A bit confusing, huh? Well, let's see a quick example and then wrap up this lesson so we can all go watch C.S.I. on TV.

$ (c1 ; c2) & c3 &

In the above example, the grouped commands c1 and c2 would execute sequentially in the background while command c3 also executes in the background. Remember that we can background commands by using the ampersand (&) symbol and separate commands by using the semi-colon (;). The prompt would return immediately after hitting ENTER on the line above, because both jobs would be running in the background. The shell would run the grouped commands c1 and c2 as one job and c3 as another job. Clear as chocolate pudding, huh? Don't worry. It'll all come together... I hope.
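You can watch the sub-shell's separate environment at work with a couple of lines (the variable name here is arbitrary):

```shell
# The assignment inside ( ) happens in a sub-shell, so the outer X is
# untouched when the group finishes.
X=outer
( X=inner; echo "inside: $X" )
echo "outside: $X"
```

This prints inside: inner followed by outside: outer, showing that the group's change to X never escaped the sub-shell.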

Until next time....


*This article originally appeared on my Nocturnal Slacker v1.0 site at




