
Community Blogs

Going against the grain

I have been an advocate of Linux since the first day I set eyes on Red Hat 4.2 at the age of 14. I'd had my new PC for less than six months when I decided to wipe Windows 95 from the disk and install Red Hat with its awesome NeXTSTEP-style window manager.

That was the day I first started to argue the point about the difference between Windows and Linux. Sure, Windows had loads of software, but Linux had so much potential. Still, living with my parents meant that if they needed to use a computer they wanted Windows; they could not possibly use Linux.

It continued like this from there on. At school I was the odd one out, the teachers concerned that I might do something which, in their eyes, would be the equivalent of digital armageddon.

For me, Linux really took off when I discovered Usenet, where I found more and more applications, scripts and distributions of Linux; I quickly moved on to Debian.

When I went to college, I found myself on an IT course on which the lecturers and the IT support staff had never used anything other than Windows. They were stuck in their ways, only teaching and supporting Windows and Microsoft technologies.

Despite advocating for Linux for several months, and requesting that I be allowed to use an alternative to Windows, I was always shunned, to the point where I was warned that if I didn't conform I would be kicked off the course.

Being restricted like this wasn't good for me, and I soon dropped out, wanting to find a way to stretch my wings and teach myself what I wanted to learn. I felt that Linux was certainly the way to go.

It was certainly the right thing to do. In every job I have had, I have brought in Linux. Each time it has been the same: I promote it, people see me as being a bit odd, and I suggest time after time that we should use Linux for X, eventually wearing them down to the point that I get to do it. And every time I have managed to deliver, and then some, whether because Linux was more forgiving with some questionable hardware, or because it 'just worked'.

For me, the biggest win was working at an educational establishment where the majority of the infrastructure was Apple-based, with its 'crashproof technology' and its 'it just works' motto. I loved the fact that over six years I moved critical services away from OS X Server to Linux, seeing server uptimes of over a year on Linux, compared to the weekly reboots required by their fruity counterparts.

I found, however, that attitudes changed when I became a developer. Once I started working with people who embrace technology and don't sit on the rigid rails of Microsoft-brand software, it was easy to convince people to use Linux servers, and to show the benefits of using it day to day as a desktop, a staging system, and the basis of every new project I develop. Now I am Senior Developer for a multi-million pound company, one of the fastest growing tech companies in the UK and one of the top 200 growing tech companies in EMEA. I put a lot of faith in the tools I use, and they have never let me down.

Even after 16 years I still get strange looks, my wife still refuses to use Linux, and I can easily empty a room just by showing my passion. Despite all this, I continue on, promoting Linux, promoting open source technology, and always being there if anyone wants help making the same change too.


My Nerd Story: Learn By Doing

My nerd story started in the winter of 2007, when I was 13 years old. And it all started from a simple challenge from a friend to look at some HTML tutorials, which I did. And I was captivated from the first second. I started doing very simple things like writing text and changing the background colour of a page, you know, beginner's HTML stuff, but I found it a lot of fun. For some reason, this stuff was second nature for me from day one.

Collectl is a powerful tool to monitor system resources on Linux

Linux system admins often need to monitor system resources like CPU, memory, disk and network to make sure the system is in good condition, and there are plenty of commands like iotop, top, free, htop and sar to do the task. Today we shall take a look at a tool called collectl that can be used to measure, monitor and analyse system performance on Linux. Collectl is a nifty little program that does a lot more than most other tools. It comes with a...
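As an illustrative sketch (using collectl's standard options: -s selects subsystems, -i the sample interval, -c the sample count), a typical invocation might look like this, assuming collectl is available from your distribution's repositories:

```shell
# Sample CPU (c), disk (d) and network (n) stats: 5 samples, 1 second apart.
# Guarded so the snippet degrades gracefully where collectl is absent.
if command -v collectl >/dev/null 2>&1; then
    collectl -scdn -i1 -c5
else
    echo "collectl is not installed (e.g. apt-get install collectl)"
fi
```

Dropping the -c option keeps collectl sampling until you interrupt it.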


My Nerd Story: Ham Radio, Atari, and UNIX

My geek story started early, probably because my dad and grandfather were into amateur radio (ham radio) in a pretty hard core way from the time I was little. I remember my dad studying for one of the morse code exams when I was maybe 4 or 5 years old, and me being the little sponge that I was, picked it up pretty easily. Nothing like a mouthy toddler shouting the answers to motivate someone to learn.


Nmon – A Nifty Little Tool to Monitor System Resources on Linux

Nmon (Nigel's performance Monitor for Linux) is another very useful command line utility that can display information about various system resources like CPU, memory, disk and network. It was developed at IBM and later released as open source. It is available for most common architectures like x86 and ARM, and platforms like Linux and Unix. It is interactive, and the output is well organised, similar to htop. Using nmon it is possible to view the performance of different system resources on a single screen. The man page describes nmon as a "systems administrator, tuner,...
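As a hedged example of nmon's non-interactive capture mode (using its documented -f, -s and -c flags), assuming nmon is installed:

```shell
# Record snapshots to a .nmon file in the current directory:
# -f = spreadsheet-style file output, -s 30 = one snapshot every 30 seconds,
# -c 120 = 120 snapshots (about an hour), after which nmon exits on its own.
if command -v nmon >/dev/null 2>&1; then
    nmon -f -s 30 -c 120
else
    echo "nmon is not installed (e.g. apt-get install nmon)"
fi
```

Running nmon with no arguments instead starts the interactive, htop-like display.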


Inxi is an amazing tool to check hardware information on Linux

A very common thing Linux users struggle with is finding out what hardware the OS has detected, and how well. Because unless the OS is aware of the hardware, it might not be using it at all. And there is an entire ocean of commands to check hardware information. There are quite a few GUI tools like hardinfo and sysinfo on the desktop, but having a generic command line tool is far more useful, and this is where inxi works well. Inxi is a set of scripts that will detect a...
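As a quick sketch (using inxi's documented -F, -x and -z switches), assuming inxi is available on the system:

```shell
# Full (-F) hardware summary with extra detail (-x), filtering out
# serial numbers and other identifying data (-z).
if command -v inxi >/dev/null 2>&1; then
    inxi -Fxz
else
    echo "inxi is not installed (e.g. apt-get install inxi)"
fi
```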

Munich Transition Documentary

Just in general, if someone has connections with anyone over in Munich, what would be the possibility of assembling a documentary covering their decade-long migration to Linux, so the rest of us could better understand all the trials and tribulations that had to be overcome to bring the complete transition to fruition?


The benefits of a well planned Virtualization

One of the biggest challenges facing IT departments today is keeping the work environment running: maintaining an IT infrastructure able to meet the current demand for services and applications, while also ensuring that in critical situations the company can resume normal activities quickly. And here is where the big problem appears.

Many IT departments are working at their physical, logical and economic limits. Their budget is very small and grows on average 1% a year, while the complexity they manage grows at an exponential rate. IT has been viewed as a pure cost centre rather than as an investment, as I have observed in most of the companies I have passed through.

With this reality, IT professionals have to juggle to maintain a functional structure. For colleagues working in a similar reality, I recommend paying special attention to this topic: virtualization.

Contrary to speculation, virtualization is not an expensive process compared to its benefits, though depending on the scenario it can cost more than many traditional designs. To give you an idea, today over 70% of the IT budget is spent just keeping the existing environment running, while less than 30% is invested in innovation, differentiation and competitiveness. This means that almost all IT investment is dedicated simply to "putting out fires", solving emergency problems, and very little is spent on solving the underlying problem.

I have followed a very common reality in the daily life of large companies, where the IT department is so overwhelmed that it cannot find time to stop and think. In several of them we see two completely different scenarios: before and after virtualization/cloud computing. In the first case, what we see is a drastic bottleneck, with resources at their limit. In the second, a scene of tranquility, with management, safety and scalability guaranteed.

Therefore, consider the proposal of virtualization and discover what it can do for your department and, consequently, for your company.

Within this reasoning, we have two maxims. The first: "Rethinking IT." The second: "Reinventing the business."

The big challenge for organizations is precisely this: to rethink. What does it take to transform technicians into consultants?

Increase efficiency and security

As the structure grows, so does the complexity of managing the environment. It is common to see a data centre with a server dedicated to each single application, because best practices request that each service have a dedicated server. The metric is still valid; without doubt this is the best option to avoid conflicts between applications, performance problems and so on. However, environments like this are becoming increasingly wasteful, as processing capacity and memory are increasingly underutilized. On average, only 15% of processing power is consumed by a server's application; that is, over 80% of the processing power and memory sees no actual use.

Can you imagine the situation? On one hand we have virtually unused servers, while others need more resources; and as applications get ever lighter, the hardware in use only gets more powerful.

Another point that needs careful consideration is the safety of the environment. Imagine a database server with a disk problem. How difficult would that be for your company today? Consider the time your business needs to quote, purchase, receive, replace and configure the failed item. During all this time, what happens to the affected service?

Many companies are based in cities and regions far from the major centres, and therefore cannot afford to dismiss this hypothesis.

With virtualization this does not happen: we leave the traditional scenario, where we have a lot of servers, each hosting its own operating system and applications, and move to a more modern and efficient one.

In the image below, we can see the process of migrating from a physical environment with multiple servers to a virtual environment with fewer physical servers hosting the virtual servers.


By working with this technology, formerly underutilized servers running different applications and services are assigned to the same physical hardware, sharing CPU, memory, disk and network resources. This can bring the average usage of the equipment up to 85%. Moreover, fewer physical servers means less spending on parts, memory and processors, less power and cooling to purchase, and therefore fewer people needed to manage the structure.


At this point you may ask: but what about security? If I now have multiple servers running simultaneously on a single physical server, am I at the mercy of that server? What if the equipment fails?

The new thinking is not only about the technology but about how to implement it in the best way possible. Today VMware, the global leader in virtualization and cloud computing, works with cluster technology, enabling and ensuring high availability of servers. Basically, with two or more hosts working together, in the event of an equipment failure VMware identifies the fault and automatically restores all of its services on another host. This is automatic, without IT staff intervention.

At deployment time, a physical failure is simulated to test the high availability and security of the future environment, and the response time is fairly quick. On average, each server comes back with anywhere from 10 seconds to 2 minutes between restarts. In some scenarios, the entire operating environment can be back up in about 5 minutes.

Bringing new services online quickly

In a virtualized environment, provisioning new services becomes a quick and easy task, since resources are managed by the virtualization tool and are not tied to a single physical machine. This way you can allocate only the resources a virtual server actually needs, avoiding waste. On the other hand, if demand increases rapidly, you can increase the amount of memory allocated to that server from one day to the next. The same reasoning applies to disks and processing.

Remember that you are limited by the amount of hardware present in the cluster: you can only increase the memory of a virtual server if that resource is available in the physical environment. This puts an end to underutilized servers, as you begin to manage your environment intelligently and dynamically, ensuring greater stability.

Integrating resources through the cloud

Cloud computing is a reality, and there is no cloud without virtualization. VMware provides a tool called vCloud with which it is possible to run a private cloud on top of your virtual infrastructure, all managed with a single tool.

Reinventing the Business

After rethinking, now is the time to change and to reap the rewards of an optimized IT organization. When we carry out a well-structured project, with high availability, security and capacity for growth, everything becomes much easier. Among the benefits we can mention the following:

Responding quickly to expand the business

When working in a virtualized environment you can meet the demand for new services professionally: with VMware a new server can be configured in a few clicks, and in five minutes it is ready to use. Today this is crucial, since the lead time for starting a new project keeps shrinking.

Increasing focus on strategic activities

With the environment under control, management is simple and it becomes easier to focus on the business. That is because almost all the information is at hand, and the operational work becomes bringing IT thinking into the business, which is what transforms a technician into a consultant. The team can then be fully focused on technology and strategic decisions, rather than acting as firefighters dedicated to putting out fires.

Aligning IT with business decision making

Virtualization gives IT staff metrics, reports and analysis. With these reports in hand, they have a professional tool that translates the reality of their environment into fairly simple, understandable language. Often this information supports a negotiation with management and, therefore, approval of the budget for the purchase of new equipment.

Well folks, that's all. I tried not to write too much, but it's hard to cover something this important in so few lines. I promise that future articles will discuss VMware and how it works in a little more detail.


How to configure vsftpd to use SSL/TLS (FTPS) on CentOS/Ubuntu

Vsftpd is a widely used FTP server, and if you are setting it up on your server for transferring files, then be aware of the security issues that come along. The FTP protocol has weak security inherent in its design: it transfers all data in plain text (unencrypted), and on a public/unsecured network this is too risky. To fix the issue we have FTPS, which secures FTP communication by encrypting it with SSL/TLS. And this post shows how to set up SSL...
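As a minimal sketch of the idea, assuming a self-signed certificate is acceptable and that the hostname and file paths are illustrative (a real deployment would place the PEM file under /etc/vsftpd, merge the directives into the live vsftpd.conf, and restart the service), the two steps look roughly like this:

```shell
# Create a self-signed certificate and key in a single PEM file
# (the CN and filenames here are placeholders).
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -subj "/CN=ftp.example.com" \
    -keyout vsftpd.pem -out vsftpd.pem

# Directives that enable FTPS; append these to your vsftpd.conf.
cat > vsftpd-ssl.conf <<'EOF'
ssl_enable=YES
allow_anon_ssl=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES
rsa_cert_file=/etc/vsftpd/vsftpd.pem
rsa_private_key_file=/etc/vsftpd/vsftpd.pem
EOF
```

The two force_local_* options make encryption mandatory for local users' logins and data transfers, which is the point of the exercise.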

New Year's Resolutions for SysAdmins

Ah, a new year, with old systems. If you recently took time off to relax with friends and family and ring in 2014, perhaps you're feeling rejuvenated and ready to break bad old habits and develop good new ones. We asked our friends and followers on Twitter, Facebook, and G+ what system administration resolutions they're making for 2014, and here's what they said. 






Cloud Operating System - what is it really?

A recent article, “Are Cloud Operating Systems the Next Big Thing”, suggests that a Cloud Operating System should simplify the application stack. The idea is that the language runtime is executed directly on the hypervisor, without an Operating System kernel.

Other approaches for cloud operating systems are focussed on optimising Operating System distributions for the cloud with automation in mind. The concepts of IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and SaaS (Software as a Service) remain in the realm of conventional computing paradigms. 

None of these approaches address the core benefits of the cloud. The cloud is a pool of resources, not just another “single” computer. When we think of a computer, it has a processor, persistent storage and memory. A conventional operating system exposes compute resources based on these physical limitations of a single computer. 

There are numerous strategies to create the illusion of a larger compute platform, such as load balancing to a cluster of compute nodes. Load balancing is most commonly performed at a network level with applications or operating systems having limited exposure of the overall compute platform. This means an application cannot determine the available compute resources and scale the cloud accordingly.

To fully embrace the cloud concept, a platform is required that can automatically scale application components with additional cloud compute resources. Amazon and Google both have solutions that provide some of these capabilities; however, internal enterprise solutions are somewhat limited. Many organisations embrace the benefits of a hosted cloud within the mega data centres around the world, but many companies have a requirement to host applications internally.

As network speeds increase the feasibility of a real “Cloud Operating System” becomes a reality. This is where an application can start a thread that executes not on a separate processor core, but executes somewhere within the cloud. 

A complete paradigm shift is required to comprehend the possibilities of an Operating System providing distributed parallel processing. Virtualisation takes this new cloud paradigm to a different level where the abstraction of the hardware using a virtualisation layer and a platform operating system presents compute resources to a Cloud Operating System.

The same way as a conventional operating system determines which CPU core is the most appropriate to execute a specific process or thread, a cloud operating system should identify which instance of the cloud execution component is most appropriate to execute a task. 

A cloud operating system with multiple execute instances on numerous hosts can schedule tasks based on the available resources of an execute instance. By abstracting task scheduling to a higher layer the underlying operating system is still required to optimise performance  using techniques such as Symmetric Multiprocessing (SMP), processor affinity and thread priorities.

The application developer has for many years been abstracted from the hardware with development environments such as C#, Java and even PHP. Operating systems have not adapted to the Cloud concept of providing compute resources beyond a single computer. 

The most comparable implementation is the route taken by application servers with solutions such as Java EJB, where lookups can occur to find providers. Automatic scalability is, however, limited with these solutions.

Hardware vendors are moving ahead by creating cloud optimised platforms. The concept is that many smaller platforms create optimal compute capacity. HP seem to be leading this sector with their Moonshot solution. The question however remains: How do you make many look like one?  

Enterprises have existing data centres where very little of the overall compute capacity is actually leveraged on an ongoing basis. When one system is busy, numerous others are idle. A cloud compute environment that can automatically scale across a collection of servers would provide actual cost savings, with compute capacity added from existing infrastructure for workloads based on available resources. According to the IDC report on worldwide server shipments, the server market is in excess of $12B per quarter. The major vendors are looking for ways to differentiate their solutions and provide optimal value to customers.

By combining hardware, virtualisation and a Cloud Operating System, organisations will benefit from a reduction in the cost of providing adequate compute capacity to serve business needs.

Gideon Serfontein is a co-founder of the Bongi Cloud Operating System research project. Additional information at


