
Community Blogs

5 commands to check memory usage on Linux

Memory Usage

On Linux there are commands for almost everything, because the GUI is not always available. When working on servers, only shell access is available and everything has to be done with commands. So today we shall look at the commands that can be used to check memory usage on a Linux system. Memory includes RAM and swap.

It is often important to check memory usage and the memory used per process on servers so that resources do not fall short and users are able to access the server. Take a website, for example: if you are running a web server, the server must have enough memory to serve its visitors. If not, the site will become very slow or even go down when there is a traffic spike, simply because memory falls short. It's just like what happens on your desktop PC.

free command

The free command is the simplest and easiest-to-use command to check memory usage on Linux. Here is a quick example:

$ free -m
             total       used       free     shared    buffers     cached
Mem:          7976       6459       1517          0        865       2248
-/+ buffers/cache:       3344       4631
Swap:         1951          0       1951

The -m option displays all data in MB. The total of 7976 MB is the amount of RAM installed on the system, that is, 8 GB. The used column shows the amount of RAM in use by Linux, in this case around 6.4 GB. The output is pretty self-explanatory. The catch here is the cached and buffers columns. The second line says that 4.6 GB is free. This is the free memory of the first line plus the buffers and cached amounts. Linux has the habit of caching lots of things for faster performance, so that memory can be freed and reused when needed. The last line...
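As a small illustration of the arithmetic above, the "really free" figure can be pulled out with awk. This is a minimal sketch that assumes the classic two-line free output shown above; newer procps versions replace the "-/+ buffers/cache:" line with an "available" column.

```shell
# Print total RAM and the memory that is effectively free
# (free + buffers + cache), using the classic free(1) layout.
free -m | awk '
    /^Mem:/  { printf "Total RAM:        %s MB\n", $2 }
    /^-\/\+/ { printf "Effectively free: %s MB\n", $4 }
    /^Swap:/ { printf "Swap in use:      %s MB\n", $3 }
'
```

The same awk filter works on saved free output, which makes it handy inside monitoring scripts.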


Open http port ( 80 ) in iptables on CentOS

Iptables firewall

I was recently setting up a web server on CentOS with nginx and PHP. The installation of nginx went fine, but the HTTP port of the system was not accessible from outside. This is because CentOS by default has some iptables firewall rules in effect. Only the SSH port (22) was accessible, and remote shell worked. So it is necessary to open up port 80 for a web server like nginx to work. Iptables is the firewall on Linux that can be configured to accept or reject network traffic based on various kinds of packet-level rulesets, so it must be configured to allow connections on the needed network ports.

Iptables rules

There are two ways to configure iptables to open up port 80: first, using the iptables command, and second, by creating a configuration file. First check the existing iptables rules in effect. The command is quite simple. Here is a sample output:

[root@dhcppc2 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere            state RELATED,ESTABLISHED
ACCEPT     icmp --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     tcp  --  anywhere             anywhere            state NEW tcp dpt:ssh
REJECT     all  --  anywhere             anywhere            reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
REJECT     all  --  anywhere             anywhere            reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
[root@dhcppc2 ~]#

As can be seen in the output, there is...
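The excerpt stops short of the actual rule, but a sketch of the command-line approach would insert an ACCEPT rule ahead of the final catch-all REJECT; the rule position (5) and the persistence step below are assumptions based on the default CentOS 6 INPUT chain shown above.

```shell
# Insert an ACCEPT rule for new HTTP connections at position 5, above the
# REJECT rule in the INPUT chain listed above. Run as root.
iptables -I INPUT 5 -p tcp -m state --state NEW --dport 80 -j ACCEPT

# Persist the running rules so they survive a reboot (CentOS).
service iptables save   # writes to /etc/sysconfig/iptables
```

If the rule is appended instead of inserted, it lands after the REJECT rule and never matches, which is a common mistake.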

Monitor disk io on linux server with iotop and cron

Iotop - Disk Input Output usage

Recently my server was giving notifications of disk I/O activity rising above a certain threshold at regular intervals. My first guess was that some cron task was causing it, so I tried checking various cron tasks to find out which task or process was causing the I/O burst. On servers especially, it is always good practice to monitor resource usage to make sure that websites stay fast and responsive. However, searching manually is not easy, and this is where utilities like iotop come in. iotop shows how much disk I/O each running process is doing. It is quite easy to use: just run it from a terminal and you should see output like this:

Total DISK READ: 0.00 B/s | Total DISK WRITE: 106.14 K/s
  TID  PRIO  USER      DISK READ  DISK WRITE  SWAPIN    IO>     COMMAND
  335  be/3  root       0.00 B/s  98.56 K/s   0.00 %   2.03 %   [jbd2/sda6-8]
 4096  be/4  www-data   0.00 B/s   0.00 B/s   0.00 %   0.00 %   apache2 -k start
    1  be/4  root       0.00 B/s   0.00 B/s   0.00 %   0.00 %   init
    2  be/4  root       0.00 B/s   0.00 B/s   0.00 %   0.00 %   [kthreadd]
    3  be/4  root       0.00 B/s   0.00 B/s   0.00 %   0.00 %   [ksoftirqd/0]
 4100  be/4  www-data   0.00 B/s   0.00 B/s   0.00 %   0.00 %   apache2 -k start
    5  be/0  root       0.00 B/s   0.00 B/s   0.00 %   0.00 %   [kworker/0:0H]
 4102  be/4  www-data   0.00 B/s   0.00 B/s   0.00 %   0.00 %   apache2 -k start
    7  be/0  root       0.00 B/s   0.00 B/s   0.00 %   0.00 %   [kworker/u:0H]
    8  rt/4  root       0.00 B/s   0.00 B/s   0.00 %   0.00 %   [migration/0]
    9  be/4  root       0.00 B/s   0.00 B/s   0.00 %   0.00 %   [rcu_bh]
   10  be/4  root       0.00 B/s   0.00 B/s   0.00 %   0.00 %   [rcu_sched]

As we can see, each row shows a process and the amount of data it is reading or writing. This information is instantaneous, so iotop keeps updating the values at a set interval, such as 1 second. Running iotop like this just shows the current I/O usage. What if we want...
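For the cron angle in the title, iotop's batch mode can append samples to a log file; a sketch assuming a reasonably recent iotop (flags as documented in its man page), run as root:

```shell
# Log five one-second samples of processes actually doing I/O:
#   -b  batch (non-interactive)   -o  only processes with active I/O
#   -t  add a timestamp           -qqq suppress all headers
#   -n 5  take five samples, then exit
iotop -botqqq -n 5 >> /var/log/iotop.log

# A hypothetical crontab entry to sample every 10 minutes:
# */10 * * * * root /usr/sbin/iotop -botqqq -n 5 >> /var/log/iotop.log
```

Grepping that log afterwards makes it much easier to line up an I/O burst with the cron task that caused it.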

Install the Pantheon desktop on Ubuntu

Pantheon desktop environment

Pantheon is a new desktop environment that comes with elementary OS. It has a Mac-like look and feel and has been designed very well. It looks superb, especially on laptops, and can give you a cool-looking desktop that you will want to show off. It can be installed on Ubuntu if you want the elementary OS look and feel without changing your OS. In this example we are installing it on Ubuntu 13.04.

Add the elementary PPA to Ubuntu

The elementary PPA provides all the necessary packages to get...
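A sketch of the PPA route on Ubuntu 13.04 follows; the PPA and package names below are assumptions based on elementary's daily-build archive of that period, so check the full article for the exact names.

```shell
# Add the elementary PPA, refresh the package lists, and pull in Pantheon.
sudo add-apt-repository ppa:elementary-os/daily
sudo apt-get update
sudo apt-get install elementary-desktop
```

After installation, Pantheon should appear as a session choice on the login screen.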


A tiny BASH Script to Demonstrate 'IP Aliasing'

"The process in which multiple addresses are created on a single network interface is known as 'IP Aliasing'. IP Aliasing is very useful when you need multiple IP addresses to set up multiple virtual sites on Apache while using only one network adapter. The most important advantage of IP Aliasing is that you do not need one piece of hardware per IP address; rather, you can generate a pool of virtual network interfaces (i.e. aliases) on a single device."
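The idea can be demonstrated in two lines of iproute2; the interface name and addresses below are hypothetical, and the older `ifconfig eth0:0` syntax achieves the same thing.

```shell
# Attach two extra (alias) addresses to the single physical NIC eth0.
ip addr add 192.168.1.101/24 dev eth0 label eth0:0
ip addr add 192.168.1.102/24 dev eth0 label eth0:1

# Verify: all addresses now live on the one device.
ip addr show eth0
```

Apache virtual hosts can then each bind to one of the alias addresses.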

Read More at YourOwnLinux.


PKI Implementation for the Linux Admin


Public-key infrastructure (PKI) is what makes internet encryption and digital signatures work. When you visit your bank's website you are told it is encrypted and verified. If you install software on Windows machines you may notice a popup when Microsoft cannot verify the digital signature of the software. In this article I give my explanation of how PKI works, then a solution for its implementation in a private environment within a Linux shop.

How PKI works


Most people have heard the names Verisign, Network Solutions, and GoDaddy. These companies, along with many more, are in the business of allowing the public to trust an organization's encryption keys, and are referred to as Registration Authorities (RAs).


If an organization, let's say your bank, wants to encrypt the website that you use to manage your accounts, it first needs to generate a private encryption key. The private key is something that should remain just that: private. From the private key one can extract a public key. While a public key can be created from a private key, the reverse should not be possible.
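That one-way relationship is easy to demonstrate with OpenSSL; a quick sketch (2048 bits here only to keep the demo fast, not a deployment recommendation):

```shell
# Generate a private key, then extract the matching public key from it.
# The reverse direction -- recovering private.key from public.key --
# is computationally infeasible by design.
openssl genrsa -out private.key 2048
openssl rsa -in private.key -pubout -out public.key
```

The public.key file is what the bank would later submit for signing; private.key never leaves the server.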



Now the fun part: any message encrypted using this private key can be decrypted with the public key, and vice versa. The bank now decides that it wants a third party to verify the authenticity of its private/public key pair, because without that its users get a warning in their web browser stating that the site shouldn't be trusted. I am sure most of you have seen this warning before.

The bank sends its public key to an RA and pays for them to digitally sign it with their Root Certificate Authority (CA). This CA is nothing more than another public/private key pair. Remember how I said that anything encrypted with a private key can be decrypted with the public key? At the time it may not have sounded like a good idea to have an encrypted message that anyone can decrypt. In this case, the RA is the only one with access to its private key, and everyone has its public key (you literally do have it: every modern operating system maintains a copy of these trusted Root Certificates). The RA adds a message to the bank's public key using the RA's private key, with the goal that your web browser can decrypt and verify that message. If this all succeeds, you do not see the nasty warning page, and you can be assured that the site is using a certificate that the third party has verified.

So you now have your bank's public key. This means you can encrypt a message with it, and only the bank can read that message using its private key. The bank does the same thing to send you messages. When you initiate a connection with a web browser to a site that uses HTTPS, you also send your own public key to your bank. The bank uses it to encrypt the messages it sends back, which you decrypt with your private key. Since you do not need to prove your identity to the website, there is no problem with your public/private keys being generated dynamically.

Lastly, public/private key encryption is wildly inefficient. For it to remain infeasible to derive the private key from the public one, we have to use very large key sizes; at the time of this article I recommend a 4096-bit key size.

So why do we use PKI if it is so inefficient?

This sort of encryption is also referred to as asymmetric encryption. Symmetric encryption is much more efficient, but has one flaw: both sides need to know the same secret key. The purpose of PKI is to give two endpoints a secure way to agree on a symmetric key, usually a 128-bit or 256-bit key, to use for the rest of the communication.
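The trade-off is easy to see with OpenSSL's symmetric mode: fast, but both sides must already share the secret. A sketch with a throwaway password (distributing that password safely is exactly the problem the asymmetric handshake solves):

```shell
# AES-256 round trip: encrypt and decrypt with the same shared password.
echo 'quarterly account statement' > msg.txt
openssl enc -aes-256-cbc -pass pass:sharedsecret -in msg.txt -out msg.enc
openssl enc -aes-256-cbc -d -pass pass:sharedsecret -in msg.enc -out msg.dec
diff msg.txt msg.dec   # no output means the round trip succeeded
```

Anyone who learns "sharedsecret" can read every message, which is why the key itself is negotiated over the asymmetric channel.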

To summarize:

  1. You type your bank's address into your web browser.
  2. Your web browser provides your bank with its public key.
  3. Your bank responds with its public key.
  4. Your web browser checks its list of Root Certificates to try to decrypt the digital signature.
  5. If successful, both sides run through a series of algorithms and exchanges to derive the same symmetric key to be used for the rest of the communication within this session.
  6. At this point, your bank's website finally appears in your browser, and any communication henceforth will be encrypted using the symmetric key.

Security Concerns

So what benefit does a company get when a root certificate authority is used to verify the public key a server presents to a client? The answer is more human than you might think. When a self-signed certificate is used, the user gets the nasty warning I discussed earlier. If this warning is disregarded and the user continues to the site, the communication between the server and client is still encrypted.

However, if we train our users to ignore those security warnings about the inability to prove the authenticity of the server's public key, then our users will always ignore them. In 99.999% of cases this is probably okay, but what can potentially happen is known as a Man-In-The-Middle (MITM) attack.

Using the brief description of PKI above, imagine a malicious user somewhere on the internet, or more likely somewhere local, placed directly between your computer and the server. When the initial key exchange takes place, the attacker can intercept your public key and send you his own instead of the public key of the server you are trying to reach. His public key will most likely not be digitally signed for the same domain name as the site you are visiting, although if this were a targeted attack on a particular server, that could be possible.

Since the attacker likely wants to steal data, he will then proxy a connection to the server you originally wanted to reach, but instead of your public key going to the server, it is the attacker's. This places the attacker's computer directly between you and the server, where it can capture and view any data sent between both machines. Well, that is, if the user has been trained to simply ignore the warning from their web browser.

My Linux PKI Implementation

Most organizations rely on internal sites for their business operations, and purchasing a certificate signed by one of these root authorities for each site can become costly. In addition, if an organization makes use of sub-subdomains, it becomes harder to simply use wildcard certificates for each subdomain. The solution here is for the company to become its own Root Certificate Authority.

In the Windows Server world this is quite easy using the PKI Services Manager. If you are anything like me, you cringe at the thought of Windows Servers! In the Linux world there is TinyCA, but it depends on a graphical environment. I am something of a minimalist, so a desktop GUI on my servers is just not going to work for me. Faced with this dilemma, I decided to use OpenSSL, which has all the necessary functions built in. However, these commands are long and difficult to remember, and I hate having to look up syntax or notes every time I want to perform a task.

Here is where my bash script comes in. Using whiptail to add a decent interface while keeping everything within one script, I included functions to:

  • Manage multiple domains

  • Create a Root Certificate for each domain

  • Support unlimited subdomains

  • Revoke certificates

When you are running your own Root CA, it is critical that your certificate be installed in your company's web browsers and in any other applications that access services encrypted with your certificates. For each domain you create using my shell script, a ca.crt file is created under the certificates directory. This ca.crt is a public key and can be freely distributed and installed within your company. Additionally, there is a certificate revocation list titled ca.crl. This should also be made available, to ensure that a certificate that has been revoked is properly blocked by the configured clients.
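On Debian-flavoured clients, installing the ca.crt system-wide can be scripted; a sketch with a hypothetical domain name (browsers maintain their own certificate stores and need the file imported separately):

```shell
# Copy the domain's public CA certificate into the system trust store
# and rebuild it. "example.com" and the destination file name are
# placeholders for your own domain.
sudo cp /etc/pki/example.com/certs/ca.crt \
        /usr/local/share/ca-certificates/example-com-ca.crt
sudo update-ca-certificates
```

After this, command-line tools such as curl and wget on that client will trust certificates signed by your CA.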

While this script functions well under Ubuntu 12.04 LTS, I have not tested it under other distributions. I also wanted to add a feature to completely rekey an entire domain in the event of a compromised Root CA. Alas, I did not get around to it, which is why I decided to write this article and share my work, allowing for feedback and further development.

Feel free to take this script and modify it as you please. I only ask that if you make improvements, you share them with me so that I can update this script.


The directory and file structure

I have typically kept this under the /etc/pki directory, but it does not really matter.


This file is the main shell script that uses Whiptail and OpenSSL commands for managing the PKI domain. The full script is available at the end of this article.



This file contains a list of all active domains that are managed by the script.


Each domain will have its own folder



Each domain has its own openssl.cnf file. If any configuration changes are desired for a domain, this is the file to edit. The script creates this file, and you can edit the defaults within it.


This file counts the number of revoked certificates by this CA.


This file counts the number of serials signed by this CA.


This file is a database of all serial numbers of certificates signed by this CA.


This folder is where all public keys are placed. They are created with a .crt extension and the ca.crt is also stored in this directory. As you create certificates for subdomains they will be stored here. For example,


This file is the bread and butter for the domain. This file needs to be distributed to all systems that would verify the authenticity of the certificates created for this domain using the script.


This folder contains the ca.crl file


This file needs to be distributed to all systems that would verify whether the certificate being presented has actually been revoked.



This folder contains the Certificate Signing Requests (CSRs). These are generated when a private key is created for a domain. CSR files contain information about the private key that the CA uses when digitally signing the certificate generated from that private key. They are a way to pass information about a private key to the CA without providing the private key itself, since sharing it would defeat its purpose. In our scenario the private key is on the same server as the CA, but its role is more critical when the CA is a third party.
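You can see exactly what a CSR does and does not carry by dumping one (the file name here is hypothetical): the subject fields and the public key are present, while the private key is not.

```shell
# Print the CSR's contents in human-readable form.
openssl req -in www.example.com.csr -noout -text
```

The "Subject:" line in that output is what the CA copies into the signed certificate.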


This folder is a default place for new certificates. It is unused by the script.


This folder is where all private keys are placed and strict access should be enforced. They are created with a .key extension and the ca.key is also stored in this directory. As you create certificates for subdomains their private keys will be stored here. For example,


This file is the private key for the public key; ca.crt. Access to this file should be as limited as possible.


When a private key for a domain has been compromised, you will want to revoke that key and its certificate. When you do, they are placed in this folder.


These folders are where their corresponding file types are placed when a key has been revoked.


Understanding the OpenSSL Commands


Although you may never look at the contents of the script, for those who do, this is a breakdown of the more complicated parts and the overall justification for writing this script: horrifically long OpenSSL commands. For the purpose of demonstration, the domain used will be and a private/public key pair will be generated for and signed with the CA key file for


Creating a CA



openssl req -config /etc/pki/ -new -x509 -extensions v3_ca -keyout /etc/pki/ -out /etc/pki/ -days 365 -passout pass:Somepassword -passin pass:Somepassword -batch \
                -subj "/C=US/ST=CA/L=Los Angeles/ Name"

Hey OpenSSL, I am making a request for a new certificate using the configuration file /etc/pki/ Since this will be a CA, self-sign the certificate (-x509) using the v3_ca extension. Save the private key as /etc/pki/ and the certificate as /etc/pki/ Put an expiration of 365 days on it and set the password to "Somepassword". This is being done via a script, so don't ask me for anything, as I am providing it all here (-batch). Finally, the subject field states that the machine using this certificate is in Los Angeles, CA in the US, belonging to Some Company and using the domain name
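Spelled out with hypothetical file names and subject fields (the article's real domain and per-domain openssl.cnf were omitted above), and relying on OpenSSL's default configuration so it runs standalone, the same request looks roughly like:

```shell
# Create a self-signed root CA certificate and its password-protected key.
# example.com, "Some Company", and the file names are placeholders.
openssl req -new -x509 -keyout ca.key -out ca.crt -days 365 \
    -passout pass:Somepassword -batch \
    -subj "/C=US/ST=CA/L=Los Angeles/O=Some Company/CN=example.com CA"
```

In the script, -config points at the domain's own openssl.cnf and -extensions v3_ca selects the CA extension block defined there.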

Creating a New Private Key and CSR



openssl req -config /etc/pki/ -new -nodes -keyout /etc/pki/ -out /etc/pki/ -days 365 \
                       -subj "/C=US/ST=CA/L=Los Angeles/ Name"

Hey OpenSSL, I am making a request for a new private key for using the configuration file, /etc/pki/

The key can be saved as /etc/pki/ with an expiration date of 365 days. In addition, create a CSR to be used to have the certificate created and signed by a CA.
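A standalone version with hypothetical names follows; -nodes leaves the key unencrypted so services can read it without a passphrase prompt, and -newkey is added here so the key size does not depend on a config file.

```shell
# Generate an unencrypted 2048-bit key and a CSR for a hypothetical host.
openssl req -new -nodes -newkey rsa:2048 \
    -keyout www.example.com.key -out www.example.com.csr \
    -subj "/C=US/ST=CA/L=Los Angeles/O=Some Company/CN=www.example.com"
```

The CSR, not the key, is what goes to the CA in the next step.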

Creating and Signing a New Certificate



openssl ca -batch -config /etc/pki/ -passin pass:Somepassword -policy policy_anything -out /etc/pki/ -infiles /etc/pki/

Hey OpenSSL, I am making a request for a new certificate using the CA set up earlier. Using the openssl.cnf file, it should be known that the CA key file is in /etc/pki/ and the password is being provided within this command. The CSR file to use is in /etc/pki/ Save the final certificate under /etc/pki/
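The `openssl ca` form depends on the per-domain openssl.cnf plus its index and serial files. As a self-contained sketch of the same signing step, `openssl x509 -req` can sign a CSR directly with the CA pair from the earlier examples (file names hypothetical):

```shell
# Sign the CSR with the CA key, producing the server certificate.
# -CAcreateserial generates the serial-number file on first use.
openssl x509 -req -in www.example.com.csr \
    -CA ca.crt -CAkey ca.key -passin pass:Somepassword \
    -CAcreateserial -days 365 -out www.example.com.crt

# The result should verify cleanly against the CA certificate.
openssl verify -CAfile ca.crt www.example.com.crt
```

The script uses `openssl ca` instead because it maintains the index.txt database needed later for revocation.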

Revoke a Certificate



openssl ca -config /etc/pki/ -revoke /etc/pki/ -batch -passin pass:Somepassword

Hey OpenSSL, using the openssl.cnf config for most settings, revoke the certificate. This will update the index.txt file, stating that this certificate has been revoked.

Generate a New CRL File



openssl ca -config /etc/pki/ -gencrl -out /etc/pki/ -batch -passin pass:Somepassword

Hey OpenSSL, using the openssl.cnf config for most settings, generate a new CRL for this CA and save it to /etc/pki/ Just like the ca.crt file, this should be made easily accessible for clients to check the validity of whatever certificate is presented.


Summary and Downloads

The biggest downside of managing your own PKI is being persistent in making clients, whether users or otherwise, check the authenticity of your certificates. In addition, the CRL is an often overlooked and yet superbly critical aspect of PKI. If security is important to your organization, then I suggest the following:

  • Only generate certificates with some sort of signing CA.
  • Put the CA certificate file and the CA revoke list on a web server unencrypted.
  • Use a cronjob to periodically publish the CA revoke list and CA certificate file to the aforementioned web server.
  • On clients, use a cronjob to periodically grab these two files from the web server so they can be used by any scripts or programs.
  • Lastly, with these two files easily accessible from a web browser, it becomes pretty easy to install them in your users' web browsers.
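The two cron suggestions above might look like the following entries; every path, hostname, and schedule here is a placeholder.

```shell
# On the CA server -- publish the CA certificate and CRL hourly
# (e.g. in /etc/cron.d/publish-pki):
#   0 * * * *  root  cp /etc/pki/example.com/certs/ca.crt /etc/pki/example.com/crl/ca.crl /var/www/html/pki/

# On each client -- fetch fresh copies every hour
# (e.g. in /etc/cron.d/fetch-pki):
#   30 * * * * root  wget -q -N -P /etc/pki/trusted http://pki.example.com/pki/ca.crt http://pki.example.com/pki/ca.crl
```

Keeping the client fetch offset from the server publish avoids pulling a half-written file.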

A current copy of the complete script can be found at Place it in /etc/pki/ and edit it as you see fit.




Samba 4.0.10 Released – Fixes Some Major Bugs

Samba is software that uses the TCP/IP protocol to enable interaction between Microsoft Windows and other systems (such as Unix and Linux). It is a popular tool for file transfers across systems. Samba 4.0.10 was released on 8 October 2013. It is the latest stable release and includes fixes for the following major bugs.


Elementary OS 0.2 Luna – now that’s more like it

Elementary OS - Luna

Elementary OS 0.2 Luna is a Linux distro that has become quite popular recently. It is based on Ubuntu and designed to look somewhat like a Mac. There have been many attempts to get a Mac-like feel on the Linux desktop, and Pear OS is the most significant one. However, all of them fall short somewhere or other. Elementary introduces a new desktop environment called Pantheon that achieves a lot. First of all it gives a Mac-like look and feel, but most importantly it makes the desktop remarkably simple and productive.

I think I shall use it on my notebook and on those desktops where I don't code. I have been using KDE for the last decade and never moved away from it. I never liked GNOME, and never needed Xfce or any of those light desktops. But elementary seems irresistible. Not only are the looks elegant and beautiful, the usability is remarkable compared to other more common desktop environments like GNOME or Unity. Elementary has the perfect balance of style and usability, and this will no doubt make it a very popular distro. Now let's take a quick tour of this brand new Linux distro that you will love, if you want to be fashionable and show off. Yeah, really: elementary on a sleek, ultra-thin notebook would make the Macs envious!

Onto the desktop

The desktop is well drawn and painted. The wallpapers are quite pleasant; in fact, it's the first time I have seen such good-looking wallpapers packed by default with a distro. As for the fonts, elementary is smart enough to use the Droid fonts, which are definitely among the best. The wallpaper and font selections make a lot of sense, and there is nothing wrong in what the elementary developers claim on their website:

elementary is crafted by designers and developers who believe that computers can be easy, fun, and gorgeous. By putting design first, we ensure we're not compromising on quality or usability.

On the top left is the menu for accessing applications, which is called the Slingshot. It has two display modes: either only icons, or icons with categories on the left. And at the base you see Plank, the dock for icons. It is configurable and has a very decent appearance. Elementary uses its own window manager called Gala. It is considered quite resource-hungry, so you will need high CPU power or a dedicated graphics unit for elementary to work smoothly; otherwise CPU usage will climb and make the system sluggish. GNOME and KDE both have desktop effects, but they are very raw in nature: they just keep showing the user all the possible desktop effects that could be created. Elementary, on the other hand, has a very balanced mixture of various effects...

A Simple BASH Script to Test Your Internet Connectivity

Most users all over the world use Google's index page to check whether their internet connection is working. Many times it is necessary to check periodically whether the server you are running is connected to the internet, and it is very cumbersome to open the web page every time you wish to check the connection. As an alternative, it definitely makes sense to run scripts in the background, scheduling them periodically using cron.
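One such background check can be sketched in a few lines of bash; the probe host and the suggested schedule are arbitrary choices, not from the article.

```shell
#!/bin/bash
# Probe connectivity with a single ping and log the result.
# Schedule from cron (e.g. every 5 minutes) instead of opening a browser.
HOST=www.google.com
if ping -c 1 -W 3 "$HOST" > /dev/null 2>&1; then
    echo "$(date): internet connection is UP"
else
    echo "$(date): internet connection is DOWN"
fi
```

Redirecting the output to a log file from the crontab entry gives a running history of outages.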

Read More on YourOwnLinux...


How To : Configure Ubuntu as a Router


If you have two network interface cards, or some other component that connects you to the internet alongside a network interface card, installed in your Ubuntu system, it can be transformed into an immensely powerful router. You can set up basic NAT (Network Address Translation), activate port forwarding, run a proxy, and prioritize the traffic passing through your system so that your downloads do not interfere with gaming. This article will explain how to set up your Ubuntu system as a router, which can later be configured as a firewall given prior knowledge of iptables. The resulting setup will help you control traffic over ports and make your system less vulnerable to security breaches.
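The NAT portion of that setup reduces to a handful of commands. A minimal sketch, assuming eth0 faces the internet and eth1 faces the LAN (both names hypothetical); run as root:

```shell
# Let the kernel forward packets between interfaces.
sysctl -w net.ipv4.ip_forward=1

# Rewrite outbound LAN traffic to the router's public address (NAT).
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Allow LAN-to-internet traffic, and return traffic for existing flows.
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
```

To persist across reboots, set net.ipv4.ip_forward in /etc/sysctl.conf and save the iptables rules with your distribution's mechanism.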


Read More at YourOwnLinux


Kernel Crash Seen In case of VLAN Tagged ICMP packets

Hi Friends,

I am using a Wind River customized kernel 3.0, based on the mainline Linux version

I am getting a kernel crash in the case of VLAN-tagged ICMP packets. Please find the stack trace for the crash below.


Kindly help me analyse the trace and identify the main culprit.

[<ffffffff81154108>] warn_on_slowpath+0x58/0x90

[<ffffffff8115bd20>] local_bh_enable+0x88/0xf8

[<ffffffff81344a44>] dev_queue_xmit+0x144/0x688

[<ffffffff81301f04>] bond_dev_queue_xmit+0x44/0x178

[<ffffffff81302408>] bond_xmit_activebackup+0xb0/0xe8

[<ffffffff81344ea4>] dev_queue_xmit+0x5a4/0x688

[<ffffffff813d59b4>] vlan_dev_hwaccel_hard_start_xmit+0x8c/0xa0

[<ffffffff81344ea4>] dev_queue_xmit+0x5a4/0x688

[<ffffffff81377cbc>] ip_push_pending_frames+0x37c/0x4c0

[<ffffffff813a0768>] icmp_reply+0x170/0x290

[<ffffffff813a0a58>] icmp_echo+0x58/0x68

[<ffffffff813a11b4>] icmp_rcv+0x334/0x390

[<ffffffff813721a4>] ip_local_deliver_finish+0x13c/0x2d8

[<ffffffff813718c4>] ip_rcv_finish+0x134/0x510

[<ffffffff81343af4>] netif_receive_skb+0x41c/0x5d8

[<ffffffff81343d58>] process_backlog+0xa8/0x160

[<ffffffff8134186c>] net_rx_action+0x194/0x2e8

[<ffffffff8115b71c>] __do_softirq+0x114/0x288

[<ffffffff8115b910>] do_softirq+0x80/0x98

[<ffffffff8115bb8c>] irq_exit+0x64/0x78

[<ffffffff81100e40>] plat_irq_dispatch+0xd0/0x1d0

[<ffffffff81120c80>] ret_from_irq+0x0/0x4

[<ffffffff81120ea0>] r4k_wait+0x20/0x40

[<ffffffff81123414>] cpu_idle+0x34/0x60


Thanks in Advance


