Community Blogs

A tiny BASH Script to Demonstrate 'IP Aliasing'

"The process in which multiple addresses are created on a single network interface, is known as "IP Aliasing". IP Aliasing will be very much useful when you need multiple IP addresses to set up multiple virtual sites on Apache making the use of only one network adapter. The most important advantage of IP Aliasing is that you do not need one hardware per IP Address, rather you can generate a pool of virtual network interfaces (i.e. Aliases) on a single device."

Read More at YourOwnLinux.


PKI Implementation for the Linux Admin


Public-key infrastructure (PKI) is what makes internet encryption and digital signatures work. When you visit your bank's website, you are told it is encrypted and verified. If you install software on Windows machines, you may notice a popup when Microsoft cannot verify the software's digital signature. In this article I give my explanation of how PKI works, then a solution for its implementation in a private environment within a Linux shop.

How PKI works


Most have heard the names Verisign, Network Solutions, and GoDaddy. These companies, along with many more, are in the business of allowing the public to trust an organization's encryption keys, and are referred to as Registration Authorities (RAs).


If an organization, let's say your bank, wants to encrypt the website you use to manage your accounts, it first needs to generate a private encryption key. The private key is something that should remain just that: private. From the private key one can derive a public key. While a public key can be derived from a private key, the reverse should not be possible.
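As a concrete sketch of generating such a pair with OpenSSL (the filenames are made up for illustration; assumes the openssl CLI is installed):

```shell
# Generate a private key, then derive its public key from it.
openssl genrsa -out demo_private.key 2048
openssl rsa -in demo_private.key -pubout -out demo_public.key

# The public key can be freely inspected and shared; the private key
# cannot be recovered from it.
openssl rsa -pubin -in demo_public.key -noout -text
```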



Now the fun part: any message encrypted with this private key can be decrypted with the public key, and vice versa. The bank now decides that it wants a third party to verify the authenticity of its private/public key pair, because without that, its users get a warning in their web browsers stating that the site shouldn't be trusted. I am sure most of you have seen this error before.

The bank sends its public key to an RA and pays them to digitally sign it with their Root Certificate Authority (CA) key. This CA is nothing more than another public/private key pair. Remember how I said that anything encrypted with a private key can be decrypted with the public key? At the time it may not have sounded like a good idea for a message to be decryptable by anyone. In this case, the RA is the only one with access to its private key, and everyone has its public key (you literally do have it; every modern operating system maintains a copy of these trusted Root Certificates). The RA signs the bank's public key using its own private key, with the goal that your web browser can decrypt and verify that signature. If this all succeeds, you do not see the nasty warning page, and you can be assured that the site is using a certificate that the third party has verified.
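This "sign with the private key, verify with the public key" flow can be demonstrated directly with OpenSSL's digest commands; all filenames here are made up for illustration:

```shell
# The RA's key pair (a stand-in for a real Root CA's keys).
openssl genrsa -out ra_private.key 2048
openssl rsa -in ra_private.key -pubout -out ra_public.key

# Sign a message with the private key...
echo "the bank's public key is genuine" > claim.txt
openssl dgst -sha256 -sign ra_private.key -out claim.sig claim.txt

# ...and anyone holding the public key can verify the signature.
openssl dgst -sha256 -verify ra_public.key -signature claim.sig claim.txt
```

Tampering with claim.txt after signing makes the last command fail, which is exactly what your browser detects when a certificate's signature does not check out.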

So you now have your bank's public key. This means you can encrypt a message with it, and only the bank can read that message, using its private key. The bank does the same thing to send you messages. When you initiate a connection with a web browser to a site that uses HTTPS, you also send your own public key to your bank. The bank uses it to encrypt messages and send them back for you to decrypt with your private key. Since you do not need to prove who you are to the website, there is no problem with your public/private keys being generated dynamically.

Lastly, public/private key encryption is wildly inefficient. In order for the public key to be securely derived from the private key, we have to use very large key sizes. At the time of this article I recommend a 4096-bit key size.

So why do we use PKI if it is so inefficient?

This sort of encryption is also referred to as asymmetric encryption. Symmetric encryption is much more efficient, but it has one flaw: both sides need to know the same key. The purpose of PKI is to give two endpoints a means of securely agreeing on a symmetric key, usually a 128-bit or 256-bit key, with which to continue communication.
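A quick illustration of symmetric encryption with a shared key (the password and filenames are arbitrary; the -pbkdf2 flag assumes OpenSSL 1.1.1 or newer):

```shell
# Both sides use the same password to encrypt and decrypt.
echo "transfer 100 dollars" > message.txt
openssl enc -aes-256-cbc -pbkdf2 -pass pass:SharedSecret -in message.txt -out message.enc
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:SharedSecret -in message.enc -out message.dec
```

The decrypted file matches the original; the hard part, which PKI solves, is getting that shared password to both sides without an eavesdropper learning it.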

To summarize:

  1. You type your bank's address into your web browser.
  2. Your web browser provides your bank with its public key.
  3. Your bank responds with its public key.
  4. Your web browser checks its list of Root Certificates to try to decrypt the digital signature.
  5. If successful, both sides run through a series of algorithms and exchanges to derive the same symmetric key to be used for the rest of the communication within this session.
  6. At this point, your bank's website finally appears in your browser, and any communication henceforth is encrypted using the symmetric key.

Security Concerns

So what benefit does a company get when a root certificate authority is used to verify the public key a server presents to a client? The answer is more human than you would think. When a self-signed certificate is used, a user gets that nasty warning I discussed earlier. If this warning is disregarded and the user continues to the site, the communication between the server and client is still encrypted.

However, if we train our users to ignore those security warnings about the inability to prove the authenticity of the server's public key, then our users will always ignore this error. In 99.999% of cases this is probably okay, but what can happen otherwise is known as a Man-In-The-Middle (MITM) attack.

Using the brief description of PKI above, imagine a malicious user somewhere on the internet, or more likely somewhere local, placed directly between your computer and the server. When the initial key exchange takes place, the attacker can intercept your public key and send you his instead of the server's, the server you are actually trying to talk to. His public key will most likely not be digitally signed for the same domain name as the site you are visiting, although if this were a targeted attack on a particular server, it could be.

Since the attacker likely wants to steal data, he will then proxy a connection to the server you originally wanted to reach, but instead of your public key going to the server, it will be the attacker's. This places the attacker's computer directly between you and the server, able to capture and view any data sent between both machines. Well, that is, if the user has been trained to simply ignore the warning from their web browser.

My Linux PKI Implementation

Most organizations rely on internal sites for their business operations, and purchasing a certificate signed by one of these root authorities for each site can become costly. In addition, if an organization makes use of sub-subdomains, it becomes harder to simply use wildcard certificates for each subdomain. The solution here is for that company to become its own Root Certificate Authority.

In the Windows Server world, this is quite easy using their PKI Services Manager. If you are anything like me, you cringe at the thought of Windows Servers! In the Linux world there is TinyCA, but it depends on a graphical environment. I am sort of a minimalist, so a desktop GUI on my servers is just not going to work for me. Faced with this dilemma, I decided to use OpenSSL, which has all the necessary functions built in. However, these commands are long and difficult to remember, and I hate having to look up syntax or notes every time I want to perform a task.

Here is where my bash script comes in. Using whiptail to add a decent interface while keeping everything within one script, I included functions to:

  • Manage multiple domains

  • Create a Root Certificate for each domain

  • Create unlimited subdomains

  • Revoke certificates

When you are running your own Root CA, it is critical that your certificate be installed in your company's web browsers and in any applications that access services encrypted with your certificates. For each domain you create using my shell script, a ca.crt file is created under the certificates directory. This ca.crt is a public key and can be freely distributed and installed within your company. Additionally, there is a certificate revocation list titled ca.crl. This should also be made available to ensure that a revoked certificate is properly blocked by configured clients.

While this script functions well under Ubuntu 12.04 LTS, I have not tested it under other distributions. I also wanted to add a feature to completely rekey an entire domain in the event of a compromised Root CA. Alas, I did not get around to it, which is why I decided to write this article and share my work, allowing for feedback and further development.

Feel free to take this script and modify it as you please. I only ask that if you make improvements, you share them with me so that I can update this script.


The directory and file structure

I have typically kept this under the /etc/pki directory, but it does not really matter.


This file is the main shell script that uses Whiptail and OpenSSL commands for managing the PKI domain. The full script is available at the end of this article.



This file contains a list of all active domains that are managed by the script.


Each domain will have its own folder



Each domain has its own openssl.cnf file. If any configuration changes are desired for the domain, this is the guy to edit. The script creates this file and you can edit the defaults within that file.


This file counts the number of revoked certificates by this CA.


This file counts the number of serials signed by this CA.


This file is a database of all serial numbers of certificates signed by this CA.


This folder is where all public keys are placed. They are created with a .crt extension and the ca.crt is also stored in this directory. As you create certificates for subdomains they will be stored here. For example,


This file is the bread and butter for the domain. This file needs to be distributed to all systems that would verify the authenticity of the certificates created for this domain using the script.


This folder contains the ca.crl file


This file needs to be distributed to all systems that would verify whether the certificate being presented has actually been revoked.



This folder contains the Certificate Signing Requests (CSRs). These are generated when a private key is created for a domain. CSR files contain information about the private key that the CA uses when digitally signing the certificate generated from that private key. It is a way to pass information about the private key to the CA without providing the private key itself, because sharing it would defeat its purpose. In our scenario the private key is on the same server as the CA, but its role is more critical when the CA is a third party.


This folder is a default place for new certificates. It is unused by the script.


This folder is where all private keys are placed and strict access should be enforced. They are created with a .key extension and the ca.key is also stored in this directory. As you create certificates for subdomains their private keys will be stored here. For example,


This file is the private key for the public key, ca.crt. Access to this file should be as limited as possible.


When a private key for a domain has been compromised, you will want to revoke that key and its certificate. When you do, they are placed in this folder.


These folders are where their corresponding file types are placed when a key has been revoked.


Understanding the OpenSSL Commands


Although you may never look at the contents of the script, for those who do, this is a breakdown of the more complicated parts and the overall justification for writing this script: horrifically long OpenSSL commands. For the purpose of demonstration, a private/public key pair will be generated for a subdomain and signed with the domain's CA key file.


Creating a CA



openssl req -config /etc/pki/ -new -x509 -extensions v3_ca -keyout /etc/pki/ -out /etc/pki/ -days 365 -passout pass:Somepassword -passin pass:Somepassword -batch \
                -subj "/C=US/ST=CA/L=Los Angeles/ Name"

Hey OpenSSL, I am making a request for a new certificate using the configuration file under /etc/pki/. Since this will be a CA, self-sign the certificate (-x509) using the v3_ca extension. Save the private key and the certificate under /etc/pki/. Put an expiration of 365 days on it and set the password to "Somepassword". This is being done via a script, so don't ask me for anything, as I am providing it all here (-batch). Finally, the subject field should state that the machine using this certificate is in Los Angeles, CA, in the US, belonging to Some Company, and using the domain name.
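Since the script's real paths and domain names are site-specific, here is a self-contained equivalent of the CA-creation step with stand-in names:

```shell
# Create a self-signed root certificate and its (password-protected) key.
openssl req -new -x509 -newkey rsa:2048 -keyout demo_ca.key -out demo_ca.crt \
    -days 365 -passout pass:Somepassword -batch \
    -subj "/C=US/ST=CA/L=Los Angeles/O=Some Company/CN=Demo Root CA"

# A root CA is self-signed, so its subject and issuer are identical:
openssl x509 -in demo_ca.crt -noout -subject -issuer
```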

Creating a New Private Key and CSR



openssl req -config /etc/pki/ -new -nodes -keyout /etc/pki/ -out /etc/pki/ -days 365 \
                       -subj "/C=US/ST=CA/L=Los Angeles/ Name"

Hey OpenSSL, I am making a request for a new private key using the configuration file under /etc/pki/.

Save the key under /etc/pki/ with an expiration of 365 days. In addition, create a CSR so that a certificate can be created from it and signed by a CA.
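A self-contained sketch of the same step with stand-in names (the -nodes flag leaves the private key unencrypted, as the script does for server keys):

```shell
# Create a private key and a CSR for it in one step.
openssl req -new -nodes -newkey rsa:2048 -keyout demo_host.key -out demo_host.csr \
    -subj "/C=US/ST=CA/L=Los Angeles/O=Some Company/CN=www.example.com"

# The CSR carries the subject information, not the private key itself:
openssl req -in demo_host.csr -noout -subject
```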

Creating and Signing a New Certificate



openssl ca -batch -config /etc/pki/ -passin pass:Somepassword -policy policy_anything -out /etc/pki/ -infiles /etc/pki/

Hey OpenSSL, I am making a request for a new certificate using the CA set up earlier. From the openssl.cnf file, it knows where the CA key file lives under /etc/pki/, and the password is being provided within this command. The CSR file to use is under /etc/pki/. Save the final certificate under /etc/pki/.
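Because `openssl ca` depends on the per-domain openssl.cnf, index.txt and serial files the script maintains, this sketch shows the same signing idea end-to-end with the simpler `openssl x509 -req` form and stand-in names:

```shell
# A throwaway CA key pair and self-signed root certificate.
openssl req -new -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
    -days 365 -subj "/CN=Demo Root CA"

# A server key and CSR.
openssl req -new -nodes -newkey rsa:2048 -keyout host.key -out host.csr \
    -subj "/CN=www.example.com"

# Sign the CSR with the CA, producing the server certificate.
openssl x509 -req -in host.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -out host.crt -days 365

# Clients holding ca.crt can now verify the signed certificate:
openssl verify -CAfile ca.crt host.crt
```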

Revoke a Certificate



openssl ca -config /etc/pki/ -revoke /etc/pki/ -batch -passin pass:Somepassword

Hey OpenSSL, using the openssl.cnf config for most settings, revoke the certificate. This updates the index.txt file, recording that this certificate has been revoked.

Generate a New CRL File



openssl ca -config /etc/pki/ -gencrl -out /etc/pki/ -batch -passin pass:Somepassword

Hey OpenSSL, using the openssl.cnf config for most settings, generate a new CRL for this CA and save it under /etc/pki/. Just like the ca.crt file, this should be made easily accessible so clients can check the validity of whatever certificate is presented.


Summary and Downloads

The biggest downside of managing your own PKI is the persistence needed to ensure clients, users and others, check the authenticity of your certificates. In addition, the CRL is an often overlooked and yet superbly critical aspect of PKI. If security is important to your organization, then I suggest the following:

  • Only generate certificates signed by some sort of signing CA.
  • Put the CA certificate file and the CA revocation list on a web server, served unencrypted.
  • Use a cron job to periodically publish the CA revocation list and CA certificate file to the aforementioned web server.
  • On clients, use a cron job to periodically grab these two files from the web server so they can be used by any scripts or programs.
  • Lastly, with these two files easily accessible from a web browser, it becomes pretty easy to install them in your users' web browsers.
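As a sketch, the publish and fetch steps above might look like this in cron; every path and the URL are assumptions, shown as a commented config fragment:

```shell
# On the CA server's crontab: publish the CA certificate and CRL hourly.
# 0 * * * *  cp /etc/pki/ca.crt /etc/pki/ca.crl /var/www/html/pki/
#
# On each client's crontab: fetch the current copies once a day.
# 30 0 * * *  wget -q -N -P /etc/ssl/local http://pki.example.com/pki/ca.crt http://pki.example.com/pki/ca.crl
```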

A current copy of the complete script can be found at Place it in /etc/pki/ and edit it as you see fit.




Samba 4.0.10 Released – Fixes Some Major Bugs

Samba is software that uses the TCP/IP protocol to enable interaction between Microsoft Windows and other systems (like Unix and Linux). It is a popular tool for file transfers across systems. Samba 4.0.10 was released on 8 October 2013. It is the latest stable release and includes fixes for several major bugs.


Elementary OS 0.2 Luna – now that’s more like it

Elementary OS 0.2 Luna is a linux distro that has become quite popular recently. It is based on Ubuntu and designed to look somewhat like a Mac. There have been many attempts to get a Mac-like feel on the linux desktop, and Pear OS is the most significant one. However, all of them fall short somewhere or other. Elementary introduces a new desktop environment called Pantheon that achieves a lot. First of all it gives a Mac-like look and feel, but most importantly it makes the desktop remarkably simple and productive.

I think I shall use it on my notebook and on those desktops where I don't code. I have been using KDE for the last decade and never moved out of it. Never liked Gnome, never needed Xfce or any of those light desktops. But Elementary seems irresistible. Not only are the looks elegant and beautiful, the usability is remarkable compared to other, more common desktop environments like Gnome or Unity. Elementary has the perfect balance of style and usability, and this will no doubt make it a very popular distro. Now let's take a quick tour of this brand-new linux distro that you would love, if you want to be fashionable and show off. Yeah, really. Elementary on a sleek, ultra-thin notebook would make the Macs envious!

Onto the desktop

The desktop is well drawn and painted. The wallpapers are quite pleasant. In fact, it's the first time I have seen such good-looking wallpapers packed by default with a distro. As for the fonts, Elementary is smart enough to use the Droid fonts, which are definitely among the best. The wallpaper and font selections make a lot of sense, and there is nothing wrong in what the Elementary developers claim on their website: elementary is crafted by designers and developers who believe that computers can be easy, fun, and gorgeous. By putting design first, we ensure we're not compromising on quality or usability.

On the top left is the menu for accessing applications, which is called Slingshot. It has two display modes: either only icons, or icons with categories on the left. And at the base you see Plank, the dock for icons. It is configurable and has a very decent appearance.

Elementary uses its own window manager, called Gala. It is considered quite resource-consuming, so you need decent CPU power or a dedicated graphics unit for Elementary to work smoothly; otherwise CPU usage rises and makes the system sluggish. Gnome and KDE both have desktop effects, but they are very raw in nature; they just keep showing the user all possible desktop effects that could be created. Elementary, on the other hand, has a very balanced mixture of effects...
Read more...

A Simple BASH Script to Test Your Internet Connectivity

Most users all over the world use Google's index page to check whether their Internet connection is working. Many times it is necessary to check periodically whether the server you are running is connected to the internet. It is very cumbersome to open the web page every time you wish to check the connection. As an alternative, it makes sense to run a script in the background, scheduling it periodically using cron.
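A minimal version of such a background check might look like this (the target address 8.8.8.8 and the 2-second timeout are arbitrary choices, and ICMP must not be blocked on your network):

```shell
#!/bin/bash
# Ping a well-known address once; report whether the connection is up.
if ping -c 1 -W 2 8.8.8.8 > /dev/null 2>&1; then
    echo "Internet connection is up"
else
    echo "Internet connection is down"
fi
```

Scheduled from cron, the echo lines would typically be replaced by logging or an alert.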

Read More on YourOwnLinux...


How To : Configure Ubuntu as a Router


If you have two network interface cards, or some other component that connects you to the internet along with a network interface card, installed in your Ubuntu system, it can be transformed into an immensely powerful router. You can establish basic NAT (Network Address Translation), activate port forwarding, set up a proxy, and prioritize the traffic seen by your system so that your downloads do not interfere with gaming. This article explains how to set up your Ubuntu system as a router, which can later be configured as a firewall given some prior knowledge of iptables. The resulting setup will help you control traffic over ports and make your system less vulnerable to security breaches.
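As a hedged sketch of the basic NAT step (the interface names eth0 for the internet side and eth1 for the LAN are assumptions; these commands need root, so they are shown commented for illustration only):

```shell
# Enable IPv4 forwarding between interfaces:
# sysctl -w net.ipv4.ip_forward=1
#
# Masquerade traffic leaving the internet-facing interface:
# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
#
# Allow forwarding from the LAN out, and established replies back in:
# iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
# iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
```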


Read More at YourOwnLinux


Kernel Crash Seen In case of VLAN Tagged ICMP packets

Hi Friends,

I am using a Wind River customized kernel 3.0 based on the mainline Linux version.

I am getting a kernel crash in the case of VLAN-tagged ICMP packets. Please find the stack trace for the crash below.


Kindly help me in analysing the trace and to identify the main culprit.

[<ffffffff81154108>] warn_on_slowpath+0x58/0x90
[<ffffffff8115bd20>] local_bh_enable+0x88/0xf8
[<ffffffff81344a44>] dev_queue_xmit+0x144/0x688
[<ffffffff81301f04>] bond_dev_queue_xmit+0x44/0x178
[<ffffffff81302408>] bond_xmit_activebackup+0xb0/0xe8
[<ffffffff81344ea4>] dev_queue_xmit+0x5a4/0x688
[<ffffffff813d59b4>] vlan_dev_hwaccel_hard_start_xmit+0x8c/0xa0
[<ffffffff81344ea4>] dev_queue_xmit+0x5a4/0x688
[<ffffffff81377cbc>] ip_push_pending_frames+0x37c/0x4c0
[<ffffffff813a0768>] icmp_reply+0x170/0x290
[<ffffffff813a0a58>] icmp_echo+0x58/0x68
[<ffffffff813a11b4>] icmp_rcv+0x334/0x390
[<ffffffff813721a4>] ip_local_deliver_finish+0x13c/0x2d8
[<ffffffff813718c4>] ip_rcv_finish+0x134/0x510
[<ffffffff81343af4>] netif_receive_skb+0x41c/0x5d8
[<ffffffff81343d58>] process_backlog+0xa8/0x160
[<ffffffff8134186c>] net_rx_action+0x194/0x2e8
[<ffffffff8115b71c>] __do_softirq+0x114/0x288
[<ffffffff8115b910>] do_softirq+0x80/0x98
[<ffffffff8115bb8c>] irq_exit+0x64/0x78
[<ffffffff81100e40>] plat_irq_dispatch+0xd0/0x1d0
[<ffffffff81120c80>] ret_from_irq+0x0/0x4
[<ffffffff81120ea0>] r4k_wait+0x20/0x40
[<ffffffff81123414>] cpu_idle+0x34/0x60


Thanks in Advance



An introduction to Linux Deepin’s way of innovation

Author: Andy Stewart, co-founder and leader of the Linux Deepin team

Note: This article is translated from this page.


When the Linux Deepin team was organized two years ago, we already had a clear idea of what a perfect desktop operating system would be like. Over the last two years, our team has grown from several people to more than 30 members. We've always had a clear-cut goal: to make a Linux operating system with the best interactive user experience.


Our view about interactive experience

In our opinion, the criteria for good interactive experience are as follows:

1. It's not the users' job to work out the details

There are lots of things to learn about Linux. Programmers can examine underlying algorithms. Designers can do visual studies. Experts in other subjects can do research in their fields. However, ordinary users basically just want to listen to music, watch movies, and the like.

Traditionally, Linux users, especially Chinese users, have had to spend days getting fonts, character encodings and codecs working properly. Sometimes they go to extremes to get bleeding-edge versions of underlying libraries. I am a geek myself. I never use a mouse when coding, and I use Emacs to get everything done. I also lived through the days when I was full of enthusiasm and spent days and nights playing with my system. However, as time goes by, I would rather see that things *JUST* work and need no configuration after installation.

So we have put this idea into practice. The arduous and daunting configuration is already done by Deepin. All users need to do is enjoy.

2. Good interactive design is not just about themes.

Some people who work with the command line every day still think of interactive design as good-looking themes. In fact, good themes merely give pleasure to the eyes. Real interactive design comes from deep thought about human behavior. Based on that research, we make design decisions and provide feedback that feels natural and meets users' expectations.

Let's take DSnapshot and DPlayer as examples.

1). Perhaps the best screenshot tool with GUI before DSnapshot was Shutter. What did we do if we wanted to take a screenshot and share it with a friend?

Steps: Take a screenshot -> Save it -> Open the picture and edit it -> Save it again -> Upload to social websites

Users could not edit the picture immediately after the screenshot had been taken. They had to save it, open it for editing, save it again and then open a browser or use other tools to upload it.

Let's see what our users really need:

a). Select the area to take a screenshot as they wish;

b). Edit it immediately if they need to;

c). Share it with friends once the previous preparations are done.

So what we need to do is get rid of the unnecessary steps and only "bother" our users where choices are needed. Taking a screenshot, editing it and sharing it; no extra steps. The simplest path through the user's task is the best interactive design.

2). What does DPlayer do when it is minimized?

Let's analyze why a user who was focusing on a movie would minimize the player. Because he/she has other things to do. And what will he/she do when finished? That's right: continue watching the movie.

So what do WE do? When the user minimizes DPlayer, we pause the movie. When he/she restores the player window, we resume playing it. This is basically what interactive design is like: when the user needs to pause, we help them pause; when they come back, we help them play.

As is shown above, it is the details that we care most about.

3). What do users do when they've finished installing an application in the Software Center?

They'll need to launch it. And no, they don't need to go to the launcher menu to start it. We give them a startup button on the app's page. Users don't have to worry about where the application was installed. They can just click the button and launch it.


Linux Deepin is *NOT* reinventing the wheel. We are creating an excellent interactive design.


Many Chinese Linux fans often ask us, "Why are you reinventing the wheel when there are so many distributions out there?" So I think we need to make our point clear. The powerful tools on Linux are beyond counting, but they rarely strike an ordinary user as easy to use.

It is not the answer to the question that matters. What surprised us is the lovely monomania deeply rooted in the hearts of Linux techies, who work with git, patches, mailing lists, IRC and bugs every day. In China, a misunderstanding about Linux is always around. On one hand, ordinary people tend to think Linux is for experts. On the other hand, the enthusiasm of Linux users has, to some degree, developed into a sort of religion. The techies love to make Linux a symbol of expertise. They don't want to see their lovely toy end up easy to use for newbies. Some even obstruct efforts to make Linux available to average computer users.

We all love Linux. Any effort on Free and Open Source software, be it on underlying algorithms or simply making Linux easy to use, is worth praising. We are all working for a better Linux with more users and a great future.


Linux Deepin has always been leading the Chinese way of Open Source.


Linux Deepin has contributed heavily to the Free and Open Source world. The projects we created in the past two years are shown below:

a). Deepin Software Center

b). DSnapshot

c). DMusic

d). DPlayer

e). Deepin Desktop Environment

We'll bring more innovative design to the world, such as desktop apps, community tools and many other unprecedented creations.


As we move forward faster than ever before, we face considerable challenges. Every week, we receive about 30 to 50 bug reports or suggestions from our users.

For example:

a). Add a launcher icon for DSnapshot;

b). Provide a weather forecast item for Taipei (P.R. China) in the weather widget;

c). Give a switch to users to turn on/off automatic updates in the Software Center.


So how are we going to keep our system and apps stable while dealing with improvement suggestions from our users? If Linux Deepin only focused on new features and wouldn't listen to feedback, our product would be like many other desktop distributions: average, and unbearable in its details, just like those predecessors.

Suggestions from users, no matter how "trivial" the idea may seem, will be accepted as long as we think they will improve user experience.

Therefore, we decided to spend one working day each week dealing with feedback and known issues. We now invest 80 percent of our time in innovation and 20 percent in improvement.

In a word, we are taking 20 percent of our time to improve user experience while we rapidly innovate.


Precise Puppy 5.7.1 review – a small and swift linux distro

Precise Puppy is a Puppy Linux variant that is "based" on Ubuntu 12.04 Precise. It is designed as a small and fast distro that can run on older hardware with low resources. It is intended to be run in live mode rather than installed on the hard drive. The iso file can be burnt to a disc or put on a flash drive, and it boots like any other linux distro. I always wanted to try Puppy Linux, and this time I finally got my hands on it. Version 5.7.1 was recently released.

So what is Puppy Linux? Well, if you don't already know, Puppy Linux is not a distinct distro by itself. It is more of a concept, with lots of distros being built on it. For example, Precise Puppy is a Puppy Linux variant built using packages from Ubuntu Precise. Similarly, there is Slacko Puppy, which is based on Slackware. The term "based on" is not very strict here and should not be mistaken for a trimmed-down version of a large distro. It is more of a compatibility factor, such that packages from a larger distro are used to build the particular Puppy variant. You might be surprised to know how many puppies there are in town. Check this link to find out. Archpup, Attackpup, Macpup, pup .... pup .... pup ... So in this post we are focusing on Precise Puppy 5.7.1.

Download and run

Puppy Linux distros are always small compared to other, larger distros. Most are within 150 MB, and although that is not really small, it doesn't matter. There are distros that are smaller; Damn Small Linux, for example. Navigate to the directory for version 5.7.1 and download the right iso. You will find lots of "retro" builds. Retros are builds with additional software and drivers to support older hardware, which makes them larger in size.

I am trying out Puppy on an old Samsung N110 netbook (not very old). It has a dual-core Intel Atom processor with 1 GB of RAM. It has Lubuntu installed, which works fine until you fire up too many applications or browser tabs, which leads to a clear speed lag. You can try Puppy inside VirtualBox if you want to; VirtualBox lets you set hardware configuration parameters like RAM and CPU so it can be tested in a restricted environment. I used UNetbootin to put the Puppy iso on a flash drive. Easy enough, and it works the same way as with any other distro.

Onto the desktop

Puppy boots right into its JWM desktop, which is very colorful like kid...
Read more...

25+ examples of Linux find command – search files from command line

The Linux find command is a very useful and handy command for searching for files from the command line. It can be used to search for files based on various criteria like permissions, user ownership, modification date/time, size and so on. In this post we shall learn to use the find command along with the various options it supports. The material is broken down into discrete examples, making it easy to learn and comprehend. The find command is available on most linux distros by default, so you do not have to install any package. This is a command you must master if you want to get comfortable with your linux system.

So let's begin with the command. The basic format of the syntax is:

find where-to-look criteria what-to-look-for

Basic examples

1. List all files in current and sub directories

This command lists all the files in the current directory as well as in its subdirectories.

$ find .
./abc.txt
./subdir
./subdir/how.php
./cool.php

The command is the same as the following:

$ find .
$ find . -print

2. Search a specific directory or path

The following command looks for files in the test directory under the current directory, listing all files by default.

$ find ./test
./test
./test/abc.txt
./test/subdir
./test/subdir/how.php
./test/cool.php

The following command searches for files by name:

$ find ./test -name abc.txt
./test/abc.txt

We can also use wildcards:

$ find ./test -name *.php
./test/subdir/how.php
./test/cool.php

Note that all subdirectories are searched recursively, so this is a very powerful way to find all files of a given extension. Trying to search the "/" directory, which is the root, would search the entire file system, including mounted devices and network storage devices. So be careful. Of course you can press Ctrl + C at any time to stop the command.

Ignore the case

It is often useful to ignore the case when searching for file names. To ignore the case, just use the "iname" option instead of the "name" option.

$ find ./test -iname *.Php
./test/subdir/how.php
./test/cool.php

3. Limit depth of directory traversal

The find command by default travels down the entire directory tree recursively, which is time- and resource-consuming. However, the depth of directory traversal can be specified. For example, if we don't want to go more than 2 or 3 levels down into the subdirectories, we use the maxdepth option.

$ find ./test -maxdepth 2 -name *.php
./test/subdir/how.php
./test/cool.php

$ find ./test -maxdepth 1 -name *.php
./test/cool.php

The second example uses a maxdepth of 1, which means it will not go more than 1 level deep; in other words, it searches only the current directory. This is very useful when we want to do a limited search in the current directory, or at most 1 level of subdirectories, and not the entire directory tree, which would take more time. Just like maxdepth there is an option called mindepth, which does what the name suggests: it will go at least N...
Read more...

Reviewing Kali Linux – The Distro for Security Geeks

An introduction to Kali Linux - the distro for security geeks

When it comes to hacking, security, forensics and the like, Linux is the preferred tool. Linux is very hacker-friendly from the ground up. But still, there are distros that are more oriented towards assisting hackers; to name a few: BackTrack, BackBox, Blackbuntu.

BackTrack is the most popular distro when it comes to penetration testing and security work. And now it has taken on a new avatar called Kali Linux. Kali Linux is the new name of BackTrack (version 5 R3 was the last BackTrack release).


Read more at BinaryTides
