
Community Blogs

How to configure vsftpd to use SSL/TLS (FTPS) on CentOS/Ubuntu

Securing FTP

Vsftpd is a widely used FTP server, and if you are setting it up on your server for transferring files, be aware of the security issues that come along with it. The FTP protocol has weak security inherent to its design: it transfers all data in plain text (unencrypted), which on a public/unsecured network is far too risky. To fix this we have FTPS, which secures FTP communication by encrypting it with SSL/TLS. And this post shows how to set up SSL...
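As a rough sketch of where that setup ends up, the SSL/TLS-related directives in vsftpd.conf look something like the following (the certificate path is a placeholder for one you generate yourself; the full post covers the details):

```
# vsftpd.conf -- enable FTPS (certificate path is a placeholder)
ssl_enable=YES
allow_anon_ssl=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
rsa_cert_file=/etc/ssl/private/vsftpd.pem
rsa_private_key_file=/etc/ssl/private/vsftpd.pem
```

The two force_local_* lines are what make encryption mandatory for logins and data transfers rather than merely available.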
Read more...

New Year's Resolutions for SysAdmins

Ah, a new year, with old systems. If you recently took time off to relax with friends and family and ring in 2014, perhaps you're feeling rejuvenated and ready to break bad old habits and develop good new ones. We asked our friends and followers on Twitter, Facebook, and G+ what system administration resolutions they're making for 2014, and here's what they said. 

Read more: New Year's Resolutions for SysAdmins





Cloud Operating System - what is it really?

A recent article, “Are Cloud Operating Systems the Next Big Thing”, suggests that a Cloud Operating System should simplify the application stack. The idea is that the language runtime is executed directly on the hypervisor, without an Operating System kernel.

Other approaches for cloud operating systems are focussed on optimising Operating System distributions for the cloud with automation in mind. The concepts of IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and SaaS (Software as a Service) remain in the realm of conventional computing paradigms. 

None of these approaches address the core benefits of the cloud. The cloud is a pool of resources, not just another “single” computer. When we think of a computer, it has a processor, persistent storage and memory. A conventional operating system exposes compute resources based on these physical limitations of a single computer. 

There are numerous strategies to create the illusion of a larger compute platform, such as load balancing to a cluster of compute nodes. Load balancing is most commonly performed at a network level with applications or operating systems having limited exposure of the overall compute platform. This means an application cannot determine the available compute resources and scale the cloud accordingly.

To fully embrace the cloud concept, a platform is required that can automatically scale application components with additional cloud compute resources. Amazon and Google both have solutions that provide some of these capabilities; however, internal enterprise solutions are somewhat limited. Many organisations embrace the benefits of a hosted cloud within the mega data centres around the world, yet many companies have a requirement to host applications internally.

As network speeds increase the feasibility of a real “Cloud Operating System” becomes a reality. This is where an application can start a thread that executes not on a separate processor core, but executes somewhere within the cloud. 

A complete paradigm shift is required to comprehend the possibilities of an Operating System providing distributed parallel processing. Virtualisation takes this new cloud paradigm to a different level where the abstraction of the hardware using a virtualisation layer and a platform operating system presents compute resources to a Cloud Operating System.

Just as a conventional operating system determines which CPU core is most appropriate to execute a specific process or thread, a cloud operating system should identify which instance of the cloud execution component is most appropriate to execute a task.

A cloud operating system with multiple execute instances on numerous hosts can schedule tasks based on the available resources of an execute instance. By abstracting task scheduling to a higher layer the underlying operating system is still required to optimise performance  using techniques such as Symmetric Multiprocessing (SMP), processor affinity and thread priorities.

The application developer has for many years been abstracted from the hardware with development environments such as C#, Java and even PHP. Operating systems have not adapted to the Cloud concept of providing compute resources beyond a single computer. 

The most comparable implementation is the route taken by application servers, with solutions such as Java EJB where lookups can occur to find providers. Automatic scalability is, however, limited with these solutions.

Hardware vendors are moving ahead by creating cloud optimised platforms. The concept is that many smaller platforms create optimal compute capacity. HP seem to be leading this sector with their Moonshot solution. The question however remains: How do you make many look like one?  

Enterprises have existing data centres where very little of the overall compute capacity is actually leveraged on an ongoing basis. When one system is busy, numerous others are idle. A cloud compute environment that can automatically scale across a collection of servers would provide actual cost savings: compute capacity would be additive, using existing infrastructure for workloads based on available resources. According to the IDC report on worldwide server shipments, the server market is in excess of $12B per quarter. The major vendors are looking for ways to differentiate their solutions and provide optimal value to customers.

By combining hardware, virtualisation and a Cloud Operating System, organisations will benefit from a reduction in the cost of providing adequate compute capacity to serve business needs.

Gideon Serfontein is a co-founder of the Bongi Cloud Operating System research project. Additional information at


30 Cool Open Source Software I Discovered in 2013

These are full-featured open source software products, free as in beer and speech, that I started using recently. Vivek Gite picks his best open source software of 2013.

#1 Replicant - Fully free Android distribution

Replicant is an entirely free and open source distribution of Android, available for several devices including both phones and tablets. I have installed it on an older Nexus S. You can install apps from the F-Droid store via a GPLv2 client app that comes configured with a repository hosting only free-as-in-freedom applications.

Read more: 30 Cool Open Source Software I Discovered in 2013


See Behind That Shortened URL Using Python

I'm a big user of URL shortening, especially when sending links via email to family, friends, and/or coworkers. But there are occasions when I come across a shortened URL either on a website or from an unknown source.  So, with just a few lines of Python code, I managed to write a script that, given a shortened URL, will reveal the actual URL behind the shortened link.

Copy the following code into a file and make it executable:

#!/usr/bin/env python
# -- simple python script to take a shortened url
# -- and return the true url that it's pointing to
import sys
import urllib2

# get the url from the command-line; bail out
# and show usage if nothing was entered
if len(sys.argv) != 2:
    print '[!] Usage: ' + sys.argv[0] + ' <shortened url>'
    sys.exit(1)

# store the given url
url = sys.argv[1]

# connect to the url and retrieve the real url
# (urlopen follows any redirects for us)
shortenedUrl = urllib2.urlopen(url)
realUrl = shortenedUrl.geturl()

# display result
print '[+] Shortened URL: ' + url
print '[+] Real URL: ' + realUrl

# done

Then execute the script like so:

[+] Shortened URL:
[+] Real URL:

There you have it.  A quick and easy way to show what's behind those shortened URLs.
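The script above uses Python 2's urllib2, which was removed in Python 3. Purely as a sketch of the same idea on Python 3: the module is now urllib.request, urlopen still follows redirects, and geturl() reports where it finally landed.

```python
#!/usr/bin/env python3
# Python 3 sketch of the same shortened-URL revealer.
import sys
import urllib.request


def resolve(url):
    """Open the URL, follow any redirects, and return the final URL."""
    with urllib.request.urlopen(url) as response:
        return response.geturl()


if __name__ == '__main__':
    # bail out and show usage if no url was given
    if len(sys.argv) != 2:
        sys.exit('[!] Usage: %s <shortened url>' % sys.argv[0])
    print('[+] Shortened URL:', sys.argv[1])
    print('[+] Real URL:     ', resolve(sys.argv[1]))
```

Invoked with a shortened URL as its single argument, it prints the same two lines as the Python 2 version.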


UEFI SecureBoot mini-HOWTO available

I have been sorely missing some kind of UEFI SecureBoot HOWTO while figuring out the bits and pieces required to let our users avoid the pain of wrestling with this by themselves; having found only sparse articles, I had to write one myself. It's published in the hope that at least those who choose to follow this path hit a few bumps less and know better what lies ahead right from the start.

Here's ALT Linux English wiki page and here's its static copy as of today just in case.

While at it, the described approach has resulted in a few groups of images being made available that survive with Secure Boot left enabled:


Linux: Keep An Eye On Your System With Glances Monitor

Glances is a free, LGPL-licensed, cross-platform, curses-based monitoring tool that can show you a maximum of information about your CPU, disk I/O, network, nfsd, memory and more in a minimum of space in a terminal. It can also work in a client/server mode for remote monitoring. This utility is written in Python and uses the psutil library to fetch the statistical values from your server.
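Glances itself leans on psutil for its numbers, but to give a feel for the kind of raw data such monitors collect, here is a stdlib-only Python sketch (not Glances code, and Unix-only because of os.getloadavg):

```python
#!/usr/bin/env python3
# Stdlib-only sketch of the raw stats a monitor like Glances reads.
# Unix-only: os.getloadavg() is not available on Windows.
import os
import shutil


def snapshot():
    """Return a small dict of system stats from the standard library."""
    load1, load5, load15 = os.getloadavg()      # 1/5/15-minute load averages
    disk = shutil.disk_usage('/')               # bytes total/used/free on /
    return {
        'load_1min': load1,
        'cpu_count': os.cpu_count(),
        'disk_total_gb': disk.total / 2**30,
        'disk_free_gb': disk.free / 2**30,
    }


if __name__ == '__main__':
    for key, value in snapshot().items():
        print('%-14s %.2f' % (key, value))
```

A real monitor samples values like these on a timer and renders the deltas; psutil adds the per-process, network and memory detail the standard library lacks.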


Read more: Linux: Keep An Eye On Your System With Glances Monitor


How to Install Apache OpenOffice 4.0.1 on CentOS, RHEL and Fedora

Apache OpenOffice 4.0 versions have significant changes to the OpenOffice directory setup which affect your older OpenOffice profile. The Apache OpenOffice 4.0 Release Notes provide an explanation of these changes. However, you should have the opportunity to migrate your old profile settings to the new profile area.

Read the complete article on installing Apache OpenOffice 4.0.1 on CentOS, RHEL and Fedora systems. If you are running LibreOffice or an older version of Apache OpenOffice, we recommend removing it from the system first.


The Tyranny of the Clouds

Or “How I learned to start worrying and never trust the cloud.”

The Clouderati have been derping for some time now about how we’re all going towards the public cloud and “private cloud” will soon become a distant, painful memory, much like electric generators filled the gap before power grids became the norm. They seem far too glib about that prospect, and frankly, they should know better. When the Clouderati see the inevitability of the public cloud, their minds lead to unicorns and rainbows that are sure to follow. When I think of the inevitability of the public cloud, my mind strays to “The Empire Strikes Back” and who’s going to end up as Han Solo. When the Clouderati extol the virtues of public cloud providers, they prove to be very useful idiots advancing service providers’ aims, sort of the Lando Calrissians of the cloud wars. I, on the other hand, see an empire striking back at end users and developers, taking away our hard-fought gains made from the proliferation of free/open source software. That “the empire” is doing this *with* free/open source software just makes it all the more painful an irony to bear.

I wrote previously that It Was Never About Innovation, and that article was set up to lead to this one, which is all about the cloud. I can still recall talking to Nicholas Carr about his new book at the time, “The Big Switch”, all about how we were heading towards a future of utility computing, and what that would portend. Nicholas saw the same trends the Clouderati did, except a few years earlier, and came away with a much different impression. Where the Clouderati are bowled over by Technology! and Innovation!, Nicholas saw a harbinger of potential harm and warned of a potential economic calamity as a result. While I also see a potential calamity, it has less to do with economic stagnation and more to do with the loss of both freedom and equality.

The virtuous cycle I mentioned in the previous article does not exist when it comes to abstracting software over a network, into the cloud, and away from the end user and developer. In the world of cloud computing, there is no level playing field – at least, not at the moment. Customers are at the mercy of service providers and operators, and there are no “four freedoms” to fall back on.

When several of us co-founded the Open Cloud Initiative (OCI), it was with the intent, as Simon Phipps so eloquently put it, of projecting the four freedoms onto the cloud. There have been attempts to mandate additional terms in licensing that would force service providers to participate in a level playing field. See, for example, the great debates over “closing the web services loophole” as we called it then, during the process to create the successor to the GNU General Public License version 2. Unfortunately, while we didn’t yet realize it, we didn’t have the same leverage as we had when software was something that you installed and maintained on a local machine.

The Way to the Open Cloud

Many “open cloud” efforts have come and gone over the years, none of them leading to anything of substance or gaining traction where it matters. Bradley Kuhn helped drive the creation of the Affero GPL version 3, which set out to define what software distribution and conveyance mean in a web-driven world, but the rest of the world has been slow to adopt because, again, service providers have no economic incentive to do so. Where we find ourselves today is a world without a level playing field, which will, in my opinion, stifle creativity and, yes, innovation. It is this desire for “innovation” that drives the service providers to behave as they do, although as you might surmise, I do not think that word means what they think it means. As in many things, service providers want to be the arbiters of said innovation without letting those dreaded freeloaders have much of a say. Worse yet, they create services that push freeloaders into becoming part of the product – not a participant in the process that drives product direction. (I know, I know: yes, users can get together and complain or file bugs, but they cannot mandate anything over the providers)

Most surprising is that the closed cloud is aided and abetted by well-intentioned, but ultimately harmful actors. If you listen to the Clouderati, public cloud providers are the wonderful innovators in the space, along with heaping helpings of concern trolling over OpenStack’s future prospects. And when customers lose because a cloud company shuts its doors, the clouderati can’t be bothered to bring themselves to care: c’est la vie and let them eat cake. The problem is that too many of the clouderati think that Innovation! is a means to its own ends without thinking of ground rules or a “bill of rights” for the cloud. Innovation! and Technology! must rule all, and therefore the most innovative take all, and anything else is counter-productive or hindering the “free market”. This is what happens when the libertarian-minded carry prejudiced notions of what enabled open source success without understanding what made it possible: the establishment and codification of rights and freedoms. None of the Clouderati are evil, freedom-stealing, or greedy, per se, but their actions serve to enable those who are. Because they think solely in terms of Innovation! and Technology!, they set the stage for some companies to dominate the cloud space without any regard for establishing a level playing field.

Let us enumerate the essential items for open innovation:

  1. Set of ground rules by which everyone must abide, eg. the four freedoms
  2. Level playing field where every participant is a stakeholder in a collaborative effort
  3. Economic incentives for participation

These will be vigorously opposed by those who argue that establishing such a list is too restrictive for innovation to happen, because… free market! The irony is that establishing such rules enabled Open Source communities to become the engine that runs the world’s economy. Let us take each and discuss its role in creating the open cloud.

Ground Rules

We have already established the irony that the four freedoms led to the creation of software that was used as the infrastructure for creating proprietary cloud services. What if the four freedoms were tweaked for cloud services? As a reminder, here are the four freedoms:


  • The freedom to run the program, for any purpose (freedom 0).
  • The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1).
  • The freedom to redistribute copies so you can help your neighbor (freedom 2).
  • The freedom to distribute copies of your modified versions to others (freedom 3).


If we rewrote this to apply to cloud services, how much would need to change? I made an attempt at this, and it turns out that only a couple of words need to change:


  • The freedom to run the program or service, for any purpose (freedom 0).
  • The freedom to study how the service works, and change it so it does your computing as you wish (freedom 1).
  • The freedom to implement and redistribute copies so you can help your neighbor (freedom 2).
  • The freedom to implement your modified versions for others (freedom 3).


Freedom 0 adds “or service” to denote that we’re not just talking about a single program, but a set of programs that act in concert to deliver a service.

Freedom 1 allows end users and developers to peek under the hood.

Freedom 2 adds “implement and” to remind us that the software alone is not much use – the data forms a crucial part of any service.

Freedom 3 also changes “distribute copies of” to “implement” because of the fundamental role that data plays in any service. Distributing copies of software in this case doesn’t help anyone without also adding the capability of implementing the modified service, data and all.

Establishing these rules will be met, of course, with howls of rancor from the established players in the market, as it should be.

Level Playing Field

With the establishment of the service-oriented freedoms, above, we have the foundation for a level playing field with actors from all sides having a stake in each other’s success. Each of the enumerated freedoms serves to establish a managed ecosystem, rather than a winner-take-all pillage and plunder system. This will be countered by the argument that if we hinder the development of innovative companies won’t we a.) hinder economic growth in general and b.) socialism!

In the first case, there is a very real threat from a winner-take-all system. In its formative stages, when everyone has the economic incentive to innovate (there’s that word again!), everyone wins. Companies create and disrupt each other, and everyone else wins by utilizing the creations of those companies. But there’s a well known consequence of this activity: each actor will try to build in the ability to retain customers at all costs. We have seen this happen in many markets, such as the creation of proprietary, undocumented data formats in the office productivity market. And we have seen it in the cloud, with the creation of proprietary APIs that lock in customers to a particular service offering. This, too, chokes off economic development and, eventually, innovation.

At first, this lock in happens via the creation of new products and services which usually offer new features that enable customers to be more productive and agile. Over time, however, once the lock-in is established, customers find that their long-term margins are not in their favor, and moving to another platform proves too costly and time-consuming. If all vendors are equal, this may not be so bad, because vendors have an incentive to lure customers away from their existing providers, and the market becomes populated by vendors competing for customers, acting in their interest. Allow one vendor to establish a larger share than others, and this model breaks down.

In a monopoly situation, the incumbent vendor has many levers to lock in their customers, making the transition cost too high to switch to another provider. In cloud computing, this winner-take-all effect is magnified by the massive economies of scale enjoyed by the incumbent providers. Thus, the customer is unable to be as innovative as they could be due to their vendor’s lock-in schemes. If you believe in unfettered Innovation! at all costs, then you must also understand the very real economic consequences of vendor lock-in.
By creating a level playing field through the establishment of ground rules that ensure freedom, a sustainable and innovative market is at least feasible. Without that, an unfettered winner-take-all approach will invariably result in the loss of freedom and, consequently, agility and innovation.

Economic Incentives

This is the hard one. We have already established that open source ecosystems work because all actors have an incentive to participate, but we have not established whether the same incentives apply here. In the open source software world, developers participate because they have to: the price of software is always dropping, and customers enjoy open source software too much to give it up for anything else. One thing that may be in our favor is the distinct lack of profits in the cloud computing space, although that changes once you include services built on cloud computing architectures.

If we focus on infrastructure as a service (IaaS) and platform as a service (PaaS), the primary gateways to creating cloud-based services, then the margins and profits are quite low. This market is, by its nature, open to competition because the race is on to lure as many developers and customers as possible to the respective platform offerings. The danger, however, is that one particular service provider becomes able to offer proprietary services that give it leverage over the others, establishing the lock-in levers needed to pound the competition into oblivion.

In contrast to basic infrastructure, the profit margins of proprietary products built on top of cloud infrastructure have been growing for some time, which incentivizes the IaaS and PaaS vendors to keep stacking proprietary services on top of their basic infrastructure. This results in a situation where increasing numbers of people and businesses have happily donated their most important business processes and workflows to these service providers. If any of them grow unhappy with the service, they cannot easily switch, because no competitor would have access to the same data or implementation of that service. In this case, not only is there a high cost associated with moving to another service, there is the distinct loss of utility (and revenue) that the customer would experience. There is a cost that comes from entrusting so much of your business to single points of failure with no known mechanism for migrating to a competitor.

In this model, there is no incentive for service providers to voluntarily open up their data or services to other service providers. There is, however, an incentive for competing service providers to be more open with their products. One possible solution could be to create an Open Cloud certification that would allow services that abide by the four freedoms in the cloud to differentiate themselves from the rest of the pack. If enough service providers signed on, it would lead to a network effect adding pressure to those providers who don’t abide by the four freedoms. This is similar to the model established by the Free Software Foundation and, although the GNU people would be loath to admit it, the Open Source Initiative. The OCI’s goal was to ultimately create this, but we have not yet been able to follow through on those efforts.


We have a pretty good idea why open source succeeded, but we don’t know if the open cloud will follow the same path. At the moment, end users and developers have little leverage in this game. One possibility would be if end users chose, at massive scale, to use services that adhered to open cloud principles, but we are a long way away from this reality. Ultimately, in order for the open cloud to succeed, there must be economic incentives for all parties involved. Perhaps pricing demands will drive some of the lower rung service providers to adopt more open policies. Perhaps end users will flock to those service providers, starting a new virtuous cycle. We don’t yet know. What we do know is that attempts to create Innovation! will undoubtedly lead to a stacked deck and a lack of leverage for those who rely on these services.

If we are to resolve this problem, it can’t be about innovation for innovation’s sake – it must be, once again, about freedom.

This article originally appeared on the Gluster Community blog.


CentOS / RHEL: See Detailed History Of yum Commands

The yum command has a history option on the latest versions of CentOS / RHEL (v6.x+). The history databases are normally found in the /var/lib/yum/history/ directory. The history option was added to yum at the end of 2009 (or thereabouts). The history command allows an admin to access detailed information on the history of yum transactions that have been run on a system. You can see what has happened in past transactions (assuming the history_record config option is set). You can use various command line options to view what happened, undo/redo/rollback to act on that information, and start a new history file.

Read more: CentOS / RHEL: See Detailed History Of yum Commands


Setting up an ARM Based Cassandra Cluster with Beagle Bone Black

A great project to try on cheap ARM boards such as the BeagleBone Black is to set up a database cluster. For a developer, this is useful both as storage for projects and to gain experience administering NoSQL databases with replication. Using my existing three-node BeagleBone Black cluster, I decided to try the Cassandra database. Cassandra is easy to use for those already familiar with SQL databases and is freely available. All of these steps were done on an Ubuntu install and should work on any ARM board running an Ubuntu-based OS.

To get started, you need the Cassandra binaries. I was unable to find a repository that had an ARM version of Cassandra, so I downloaded it straight from the Apache site and untarred it. You can go to and use wget to get the gzip file from one of the mirrors. The version I downloaded was apache-cassandra-2.0.2-bin.tar.gz. Once you have it on each machine, place it in the directory you want to use for its home, for example /app/cassandra, and then unpack it:

tar -xvzf apache-cassandra-2.0.2-bin.tar.gz

Now you have everything you need to configure and run cassandra. To run in a cluster, we need to set up some basic properties on each machine so they know how to join the cluster. Start by navigating to the conf directory inside the folder you just extracted and open the cassandra.yaml file for editing. First find listen_address and set it to the ip or name of the current machine. For example for the first machine in my cluster:


Then do the same for rpc_address:


Finally we list all of our ips in the seeds section. As I have three nodes in my cluster, the setting on each machine looked like this:

- seeds: ",,"

Additionally if you want to give your cluster a specific name, you can set the cluster_name property. Once all three machines are set up as you like them, you can start up cassandra by going to the bin directory on each and running:

sudo ./cassandra
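Pulled together, the cassandra.yaml edits for the first node might look roughly like this (the cluster name and the 192.168.1.5x addresses are made-up placeholders; the seed_provider block is where the seeds line lives in the 2.0-era file):

```
# conf/cassandra.yaml (excerpt) -- placeholder name and addresses
cluster_name: 'BeagleCluster'
listen_address: 192.168.1.51
rpc_address: 192.168.1.51
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "192.168.1.51,192.168.1.52,192.168.1.53"
```

On the second and third nodes only listen_address and rpc_address change; the seeds list and cluster name stay identical so the nodes discover each other.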

Using Cassandra

Once all three nodes are running, we can test on one of the nodes with cqlsh and create a database. Cqlsh is a command line program for using Cassandra's SQL like language called CQL. I connected to the utility from my first node like this:


The first step is to create a keyspace which acts like a schema in a SQL database like Oracle. Keyspaces store columnsets which act like tables and store data with like columns:

>create keyspace testspace with replication = { 'class': 'SimpleStrategy', 'replication_factor': 3 };
>use testspace;

Now we can create our column set and add some rows to it. It looks just like creating a database table:

>create table machines (id int primary key, name text);
>insert into machines (id, name) values (1, 'beaglebone1');
>insert into machines (id, name) values (2, 'beaglebone2');
>insert into machines (id, name) values (3, 'beaglebone3');
>select * from machines;

Now we have a simple set of columns with some rows of data. We can check that replication is working by logging in from a different node and performing the same selection:

>use testspace;
>select * from machines;

So now we have a working ARM-based database cluster. There is a lot more you can do with Cassandra, and some good documentation can be found here. The biggest issue with using the BeagleBone Black for this is the speed of reads and writes. The SD card is definitely not ideal for real-time applications or anything needing performance, but this tutorial is certainly applicable to faster ARM machines like the Cubieboard and, of course, desktop clusters as well.
