Building a Compute Cluster with the BeagleBone Black

As a developer, I've always been interested in learning about and developing for new technologies. Distributed and parallel computing are two topics I'm especially interested in, leading to my interest in creating a home cluster. Home clusters are of course nothing new and can easily be built using old desktops running Linux. But constantly running desktops (and laptops) consume space, use a decent amount of power, cost money to set up and can emit a fair amount of heat. Thankfully, there has been a recent explosion of enthusiast interest in cheap ARM-based computers, the most popular of which is the Raspberry Pi. With a small size, extremely low power consumption and great Linux support, ARM-based boards are great for developer home projects. While the Raspberry Pi is a great little package and enjoys good community support, I decided to go with an alternative, the BeagleBone Black.

Launched in 2008, the original BeagleBoard was developed by Texas Instruments as an open source computer. It featured a 720MHz ARM Cortex-A8 chip and 256MB of memory. The BeagleBoard-xM and BeagleBone were released in subsequent years, leading to the BeagleBone Black as the most recent release. Though its $45 price tag is a little higher than a Raspberry Pi's, it has a faster 1GHz Cortex-A8 chip, 512MB of RAM and extra USB connections. In addition to 2GB of onboard flash storage that comes with a pre-installed Linux distribution, there is a microSD card slot allowing you to load additional versions of Linux and boot to them instead. Thanks to existing support for multiple Linux distributions such as Debian and Ubuntu, the BeagleBone Black looked to me like a great inexpensive starting point for creating my very own home server cluster.

Setting up the Cluster

For my personal cluster I decided to start small and try it out with just three machines. The list of equipment that I bought is as follows:

1x 8-port gigabit switch
3x BeagleBone Blacks
3x Ethernet cables
3x 5V 2A power supplies
3x 4GB microSD cards

To keep it simple, I decided to build a command-line cluster that I would control through my laptop or desktop. The BeagleBone Black supports HDMI output, so you can use each one as a standalone computer, but I figured that would not be necessary for my needs. The simplest way to get the BeagleBone Black running is to use the supplied USB cable to hook it up to an existing computer and SSH to the pre-installed OS. For my project, though, I chose to use the SD card slot and start with a fresh install. To accomplish this, I had to first load a version of Linux onto each of the three SD cards. I used my existing Ubuntu Linux machine with a USB SD card reader to accomplish this task.

Initial searches for BeagleBone-compatible distributions reveal there are a few places to download them. I decided to go with Ubuntu and found a nice pre-built image at http://rcn-ee.net/deb/rootfs/raring/. At the time I searched and downloaded, the most recent image was from August, but there are now more recent builds. Once you have untarred the file, you will see a lot of files and directories inside the newly created folder. Included is a nice utility for loading the OS onto an SD card called setup_sdcard.sh. If you aren't sure what device Linux is reading your SD card as, you can use the following to show your devices:

sudo ./setup_sdcard.sh --probe-mmc

On my machine the SD card was listed as /dev/sdb, with its main partition showing as /dev/sdb1. If you see the partition mounted as I did, you need to unmount it before you can install the image on it.
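A minimal sketch of that step, assuming the same device name (yours may differ, so double-check the probe output before touching anything):

sudo umount /dev/sdb1    # unmount the card's partition so the image can be written to the raw device

With the card unmounted and ready, I ran the following: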

sudo ./setup_sdcard.sh --mmc /dev/sdb --uboot bone

This command took care of the full install of the OS onto the SD card. Once it was finished, I repeated it for the other two SD cards. The default user name for the installed distribution is ubuntu with password temppwd. I inserted the SD cards into the BeagleBones and then connected them to the Ethernet switch.

The last step was to power them up and boot them from the microSD cards. Doing this required holding down the user boot button while connecting the 5V power connector. The button is located on a little raised section near the USB port and tells the device to read from the SD card. Once you see the lights flashing repeatedly, you can release the button. Since each instance will have the same default hostname when initially booting, it is advisable to power them on one at a time and follow the steps below to set the IP and hostname before powering up the next one.

Configuring the BeagleBones

Once the hardware is set up and a machine is connected to the network, PuTTY or any other SSH client can be used to connect to the machines. The default hostname to connect to when using the above image is ubuntu-armhf. My first task was to change the hostname. I chose to name mine beaglebone1, beaglebone2 and beaglebone3. First I used the hostname command:

sudo hostname beaglebone1

Next I edited /etc/hostname and placed the new hostname in the file. The next step was to hard-code the IP address so I could reliably map it in the hosts file. I did this by editing /etc/network/interfaces to use a static IP. In my case I have a local network with a router at 192.168.1.1. I decided to start the IP addresses at 192.168.1.51, so the file on the first node looked like this:

    auto eth0
    iface eth0 inet static
       address 192.168.1.51
       netmask 255.255.255.0
       network 192.168.1.0
       broadcast 192.168.1.255
       gateway 192.168.1.1
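To apply the new static address without a full reboot, bouncing the interface should work on this image (this assumes the standard ifupdown tools, and note it will drop an active SSH session on eth0; a reboot achieves the same):

sudo ifdown eth0 && sudo ifup eth0    # re-read /etc/network/interfaces for eth0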

It is usually a good idea to pick addresses outside the range your router's DHCP server assigns, especially if you are going to have a lot of devices; you can usually configure this range on your router. With this done, the final step was to edit /etc/hosts and list the name and IP address of each node in the cluster. My file ended up looking like this on each of them:

127.0.0.1     localhost
192.168.1.51  beaglebone1
192.168.1.52  beaglebone2
192.168.1.53  beaglebone3

Creating a Compute Cluster With MPI

After setting up all three BeagleBones, I was ready to tackle my first compute project. I figured a good starting point was to set up MPI. MPI is a standardized system for passing messages between machines on a network. It is powerful in that it distributes programs across nodes, so each instance has access to the local memory of its machine, and it is supported by several languages such as C, Python and Java. There are many implementations of MPI available, so I chose MPICH, which I was already familiar with. Installation was simple, consisting of the following three steps:

sudo apt-get update
sudo apt-get install gcc
sudo apt-get install libcr-dev mpich2 mpich2-doc
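As a quick sanity check that the toolchain landed correctly (a hedged aside; exact output varies by build):

mpicc --version     # the MPI compiler wrapper should report the underlying gcc
mpiexec --version   # should report the MPICH/Hydra process manager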

MPI uses SSH to communicate between nodes and a shared folder to share data. The first step in allowing this was to install NFS. I picked beaglebone1 to act as the master node in the MPI cluster and installed the NFS server package on it:

sudo apt-get install nfs-kernel-server

With this done, I installed the client package on the other two nodes:

sudo apt-get install nfs-common

Next I created a user and folder on each node that would be used by MPI. I decided to call mine hpcuser and started with its folder:

sudo mkdir /hpcuser

Once it was created on all the nodes, I exported the folder from the master node by adding it to /etc/exports:

echo "/hpcuser *(rw,sync)" | sudo tee -a /etc/exports

Then I mounted the master's folder on each slave node so they could see any files added to it:

sudo mount beaglebone1:/hpcuser /hpcuser

To make sure this is remounted after a reboot, I edited /etc/fstab on each slave and added the following:

beaglebone1:/hpcuser    /hpcuser    nfs    defaults    0 0
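You can confirm the new entry parses and mounts cleanly without rebooting:

sudo mount -a    # attempts every fstab entry; an error here means the line needs fixing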

Finally I created the hpcuser account and assigned it the shared folder as its home directory:

sudo useradd -d /hpcuser hpcuser
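Since the folder was created by root, handing ownership to the new user avoids permission problems later (an extra step I would suggest; adjust to your own permissions scheme):

sudo chown -R hpcuser /hpcuser    # make hpcuser the owner of the shared home folder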

With network sharing set up across the machines, I installed SSH on all of them so that MPI could communicate between them:

sudo apt-get install openssh-server

The next step was to generate a key to use for the SSH communication. First I switched to the hpcuser and then used ssh-keygen to create the key:

su - hpcuser
ssh-keygen -t rsa

When performing this step, for simplicity you can keep the passphrase blank and accept the default location when asked. If you do want a passphrase, you will need to take extra steps to prevent SSH from prompting you for it; ssh-agent can store the key for you (see the example after the next block). Once the key is generated, you simply store it in your authorized keys collection:

cd ~/.ssh
cat id_rsa.pub >> authorized_keys
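If you did set a passphrase, a minimal sketch of caching it with ssh-agent for the current session (the key path is the ssh-keygen default):

eval $(ssh-agent)        # start the agent and export its environment variables
ssh-add ~/.ssh/id_rsa    # cache the decrypted key so SSH stops prompting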

I then verified that the connections worked using ssh:

ssh hpcuser@beaglebone2

Testing MPI

Once the machines were able to successfully connect to each other, I wrote a simple program on the master node to try things out. While logged in as hpcuser, I created a simple program in its home directory /hpcuser called mpi1.c. MPI needs the program to exist in the shared folder so it can run on each machine. The program below simply displays the index (rank) of the current process, the total number of processes running and the hostname the current process runs on. Finally, the main node receives the sum of all the process indexes from the other nodes and displays it:

#include <mpi.h>
#include <stdio.h>
#include <unistd.h>   /* for gethostname() */

int main(int argc, char* argv[])
{
    int rank, size, total;
    char hostname[1024];
    gethostname(hostname, 1023);
    MPI_Init(&argc, &argv);
    MPI_Comm_rank (MPI_COMM_WORLD, &rank);
    MPI_Comm_size (MPI_COMM_WORLD, &size);
    MPI_Reduce(&rank, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    printf("Testing MPI index %d of %d on hostname %s\n", rank, size, hostname);
    if (rank==0)
    {
        printf("Process sum is %d\n", total);
    }
    MPI_Finalize();
    return 0;
}

Next I created a file called machines.txt in the same directory and placed the names of the nodes in the cluster inside, one per line. This file tells MPI where it should run:

beaglebone1
beaglebone2
beaglebone3

With both files created, I finally compiled the program using mpicc and ran the test:

mpicc mpi1.c -o mpiprogram
mpiexec -n 8 -f machines.txt ./mpiprogram

This resulted in the following output, demonstrating that it ran on all 3 nodes (the reported sum is 0+1+...+7 = 28):

Testing MPI index 4 of 8 on hostname beaglebone2
Testing MPI index 7 of 8 on hostname beaglebone2
Testing MPI index 5 of 8 on hostname beaglebone3
Testing MPI index 6 of 8 on hostname beaglebone1
Testing MPI index 1 of 8 on hostname beaglebone2
Testing MPI index 3 of 8 on hostname beaglebone1
Testing MPI index 2 of 8 on hostname beaglebone3
Testing MPI index 0 of 8 on hostname beaglebone1
Process sum is 28

Additional Projects

While MPI is a fairly straightforward starting point, a lot of people are more familiar with Hadoop. To test out Hadoop compatibility, I downloaded version 1.2 of Hadoop (hadoop-1.2.1.tar.gz) from the Hadoop downloads at http://www.apache.org/dyn/closer.cgi/hadoop/common. After following the basic setup steps I was able to get it running simple jobs on all nodes. Hadoop, however, exposes a major limitation of the BeagleBone: the speed of SD cards. As a result, using HDFS for jobs is especially slow, so you may have mixed luck running anything that is disk-I/O heavy.
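If you want a rough number for that bottleneck, hdparm can time raw reads from the card (this assumes the SD card appears as /dev/mmcblk0, as it typically does on these boards, and that hdparm is installed):

sudo hdparm -t /dev/mmcblk0    # times buffered reads straight from the card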

Another great use of the BeagleBones is as web servers and code repositories. It is very easy to install Git and Apache or Node.js, the same as you would on other Ubuntu servers (see the sketch below). Additionally, you can install Jenkins or Hudson to create your own personal build server. Finally, you can utilize all the hookups of the BeagleBone and install XBMC to turn a BeagleBone into a full media server.
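For instance, a minimal sketch of turning one node into a small Git-plus-web box, using the stock Ubuntu package names:

sudo apt-get install git apache2    # version control plus a basic web server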

The Future

In addition to single-core boards such as the BeagleBone or Raspberry Pi, dual-core boards are starting to appear, such as the PandaBoard and Cubieboard, with likely more on the way. The latter is priced only a little higher than the BeagleBone, supports connecting a 2.5-inch SATA hard disk and features a dual-core chip in its latest version. Steps similar to those performed here can be used to set them up, giving hobbyists like me some really good options for home server building. I encourage anyone with the time to try them out and see what you can create.

Linux Shell Script To Monitor Space Usage and Send Email

Linux shell script to check the used space of /var and send an email if it reaches 80%. It also prints the space usage of each directory inside /var, which is useful for finding out which folders use most of the space. This script really helps system administrators monitor their servers' space usage. Based on their requirements, administrators can change the directories they want to monitor.

#!/bin/bash

# Maximum used space (in %) before we alert
LIMIT='80'

# Directory to monitor
DIR='/var'

# Email address to send the alert to (placeholder; the original address was obscured)
MAILTO='admin@example.com'

# Subject of the email
SUBJECT="$DIR disk usage"

# mailx is the command that will send the email
MAILX='mailx'

# Check that the mailx command exists
which $MAILX > /dev/null 2>&1

# A non-zero exit status means mailx is not installed on this system
if ! [ $? -eq 0 ]
then
    echo "Please install $MAILX"
    exit 1
fi

# To check the real used size, we need to navigate to the folder
cd $DIR

# Get the used space (in %) of the partition we are currently on:
# df reports it in column 5, sed picks the data row and cut strips the % sign
USED=`df . | awk '{print $5}' | sed -ne 2p | cut -d"%" -f1`

# If used space is above LIMIT, mail the per-directory usage of $DIR to MAILTO
if [ $USED -gt $LIMIT ]
then
    du -sh ${DIR}/* | $MAILX -s "$SUBJECT" "$MAILTO"
fi

Sample Output

./check_var.sh

37M     /var/cache
32K     /var/db
8.0K    /var/empty
4.0K    /var/games
70M     /var/lib
4.0K    /var/local
8.0K    /var/lock
38M     /var/log
0       /var/mail
4.0K    /var/nis
4.0K    /var/opt
4.0K    /var/preserve
88K     /var/run
220K    /var/spool
37M     /var/tmp
24M     /var/www
4.0K    /var/yp
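To run such a check periodically, a crontab entry is one option (the script path here is hypothetical; point it at wherever you saved the script):

0 * * * * /usr/local/bin/check_var.sh    # run hourly; mails only when usage exceeds LIMIT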


New and simple desktop for Linux

Based on the venerable ROX-Filer and the Fluxbox window manager, Zappwm is a desktop that consumes only 39MB and offers a very simple and practical drag-and-drop environment.

Currently at version 4.2, it includes 10 wallpapers, rounded screen corners, links to cloud services and three main application entries that resemble smartphone interfaces:

Social Life - shortcuts to social networks and cloud services.
All Apps - shortcuts to all graphical applications.
Home - personal folder.

The desktop configuration is very simple, though limited to the wallpaper. Just click with the right mouse button and choose Backdrop > Show, then drag your wallpaper in and it will be assigned.

This is a stock feature of ROX: drag shortcuts to the panel, to the desktop and between windows. It is also possible to use compositing with xcompmgr, but this sacrifices some of the environment's lightness.

The system requirements are rox-filer, fluxbox, pcmanfm and xdg-utils. If you download the package in Debian format (compatible with Ubuntu 12.x, Debian, Mint, etc.), you can install it via gdebi, or from a root terminal with "dpkg -i zappwm" followed by "apt-get -f install" to correct dependencies.


Eztables: simple yet powerful firewall configuration for Linux

Anyone who ever has a need to set up a firewall on Linux may be interested in Eztables. It doesn't matter if you need to protect a laptop or a server, or want to set up a network firewall: Eztables supports it all. If you're not afraid to touch the command line and edit a text file, you may be quite pleased with Eztables.

Some features:

  • Basic input / output filtering
  • Network address translation (NAT)
  • Port address translation (PAT)
  • Support for VLANs
  • Working with Groups / Objects to aggregate hosts and services
  • Logging to syslog
  • Support for plugins
  • Automatically detects all network interfaces

 

 

How to Install TeamViewer 9 on Linux

TeamViewer is a very useful app for connecting to remote systems with a graphical environment in a few easy steps. Until now most users ran it on Windows, but as desktop users switch to Linux distributions, they will need TeamViewer on the Linux desktop as well.

This article, How to Install TeamViewer 9 on Linux Distributions, provides easy steps to install it.

 

How to Install Zabbix Monitoring Tool on Linux

Zabbix is open source software for network and application monitoring. It provides agents to monitor remote hosts, and it also includes support for monitoring via SNMP, TCP and ICMP checks.
 

Dissection of Android Services Internals

Have you ever wondered how an app gets a handle to system services like the POWER MANAGER, ACTIVITY MANAGER or LOCATION MANAGER, among several others? To find out, I dug into the Android source code and worked out how this is done internally.

Let me start from the application side's Java code.

At the application side we have to call the function getService and pass the ID of the system service (say POWER_SERVICE) to get a handle to the service.

Here is the code for getService, defined in /frameworks/base/core/java/android/os/ServiceManager.java:


/**
 * Returns a reference to a service with the given name.
 *
 * @param name the name of the service to get
 * @return a reference to the service, or <code>null</code> if the service doesn't exist
 */
public static IBinder getService(String name) {
    try {
        IBinder service = sCache.get(name);
        if (service != null) {
            return service;
        } else {
            return getIServiceManager().getService(name);
        }
    } catch (RemoteException e) {
        Log.e(TAG, "error in getService", e);
    }
    return null;
}


Suppose we don't have the service in the cache. Then we need to concentrate on this line:

return getIServiceManager().getService(name);

This call actually gets a handle to the service manager and asks it to return a reference to the service whose name we passed as a parameter.



Now let us see how the getIServiceManager() function returns a handle to the ServiceManager.

Here is the code of getIServiceManager() from /frameworks/base/core/java/android/os/ServiceManager.java:


private static IServiceManager getIServiceManager() {
    if (sServiceManager != null) {
        return sServiceManager;
    }

    // Find the service manager
    sServiceManager = ServiceManagerNative.asInterface(BinderInternal.getContextObject());
    return sServiceManager;
}


ServiceManagerNative.asInterface() looks like the following:


/**
 * Cast a Binder object into a service manager interface, generating
 * a proxy if needed.
 */
static public IServiceManager asInterface(IBinder obj)
{
    if (obj == null) {
        return null;
    }
    IServiceManager in =
        (IServiceManager)obj.queryLocalInterface(descriptor);
    if (in != null) {
        return in;
    }

    return new ServiceManagerProxy(obj);
}



So basically we are getting a handle to the native service manager.

This asInterface function is actually buried inside the two macros DECLARE_META_INTERFACE(ServiceManager) and IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager"), defined in IServiceManager.h and IServiceManager.cpp respectively.

Let's delve into the two macros, defined in /frameworks/base/include/binder/IInterface.h.

The DECLARE_META_INTERFACE(ServiceManager) macro is defined as:


#define DECLARE_META_INTERFACE(INTERFACE)                               \
    static const android::String16 descriptor;                          \
    static android::sp<I##INTERFACE> asInterface(                       \
            const android::sp<android::IBinder>& obj);                  \
    virtual const android::String16& getInterfaceDescriptor() const;    \
    I##INTERFACE();                                                     \
    virtual ~I##INTERFACE();

And IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager") is defined as follows:


#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME)                       \
    const android::String16 I##INTERFACE::descriptor(NAME);             \
    const android::String16&                                            \
            I##INTERFACE::getInterfaceDescriptor() const {              \
        return I##INTERFACE::descriptor;                                \
    }                                                                   \
    android::sp<I##INTERFACE> I##INTERFACE::asInterface(                \
            const android::sp<android::IBinder>& obj)                   \
    {                                                                   \
        android::sp<I##INTERFACE> intr;                                 \
        if (obj != NULL) {                                              \
            intr = static_cast<I##INTERFACE*>(                          \
                obj->queryLocalInterface(                               \
                        I##INTERFACE::descriptor).get());               \
            if (intr == NULL) {                                         \
                intr = new Bp##INTERFACE(obj);                          \
            }                                                           \
        }                                                               \
        return intr;                                                    \
    }                                                                   \
    I##INTERFACE::I##INTERFACE() { }                                    \
    I##INTERFACE::~I##INTERFACE() { }


So if we expand these two macros in IServiceManager.h and IServiceManager.cpp with the appropriate replacement parameters, they look like the following.



In IServiceManager.h:

class IServiceManager : public IInterface
{
public:
    static const android::String16 descriptor;
    static android::sp<IServiceManager> asInterface(
            const android::sp<android::IBinder>& obj);
    virtual const android::String16& getInterfaceDescriptor() const;
    IServiceManager();
    virtual ~IServiceManager();
    ...
};


And in IServiceManager.cpp:

const android::String16 IServiceManager::descriptor("android.os.IServiceManager");

const android::String16&
        IServiceManager::getInterfaceDescriptor() const {
    return IServiceManager::descriptor;
}

android::sp<IServiceManager> IServiceManager::asInterface(
        const android::sp<android::IBinder>& obj)
{
    android::sp<IServiceManager> intr;
    if (obj != NULL) {
        intr = static_cast<IServiceManager*>(
            obj->queryLocalInterface(
                    IServiceManager::descriptor).get());
        if (intr == NULL) {
            intr = new BpServiceManager(obj);
        }
    }
    return intr;
}

IServiceManager::IServiceManager() { }
IServiceManager::~IServiceManager() { }


Look at the queryLocalInterface call: if the Service Manager is up and running (and it should be, because the service manager is started by the init process during Android boot-up), the reference to it is returned and goes all the way up to the Java interface.


Now, once we have the reference to the Service Manager, we next call:


public IBinder getService(String name) throws RemoteException {
    Parcel data = Parcel.obtain();
    Parcel reply = Parcel.obtain();
    data.writeInterfaceToken(IServiceManager.descriptor);
    data.writeString(name);
    mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
    IBinder binder = reply.readStrongBinder();
    reply.recycle();
    data.recycle();
    return binder;
}



from ServiceManagerNative.java. In this function we pass the name of the service that we are looking for.

And the onTransact function for GET_SERVICE_TRANSACTION on the remote stub looks like the following:


public boolean onTransact(int code, Parcel data, Parcel reply, int flags)
{
    try {
        switch (code) {
        case IServiceManager.GET_SERVICE_TRANSACTION: {
            data.enforceInterface(IServiceManager.descriptor);
            String name = data.readString();
            IBinder service = getService(name);
            reply.writeStrongBinder(service);
            return true;
        }

        case IServiceManager.CHECK_SERVICE_TRANSACTION: {
            data.enforceInterface(IServiceManager.descriptor);
            String name = data.readString();
            IBinder service = checkService(name);
            reply.writeStrongBinder(service);
            return true;
        }

        // Rest has been discarded for brevity...



It returns the reference to the needed service through the function getService.

The getService function from /frameworks/base/libs/binder/IServiceManager.cpp looks like the following:

virtual sp<IBinder> getService(const String16& name) const
{
    unsigned n;
    for (n = 0; n < 5; n++) {
        sp<IBinder> svc = checkService(name);
        if (svc != NULL) return svc;
        LOGI("Waiting for service %s...\n", String8(name).string());
        sleep(1);
    }
    return NULL;
}

So it actually checks whether the service is available and then returns a reference to it. Here I would like to add that when we return a reference to an IBinder object, unlike other data types it does not get copied into the client's address space; it is actually the same reference to the IBinder object, shared with the client through a special technique called object mapping in the Binder driver.

 

Transfer Linux data to ios

I have downloaded data files from a Dish DVR to an external HDD. Win7 can't see the HDD. I would like to transfer them to DVD. Is there an easy way, or am I barking up the wrong tree? I understand I can use the DVR to play through and record off the TV, but I was hoping to find an easier way. Does Nero have such an ability? I have Nero and another conversion program, but I didn't find out how within those programs. Thanks

 

Install Nginx + Php FPM + APC on CentOS 6.4

LEMP server

A LEMP server runs the Nginx web server along with PHP and MySQL or MariaDB on a Linux system. Nginx is increasingly popular because of its lightweight structure and its ability to handle large amounts of traffic in an optimal manner. MariaDB is the replacement for MySQL, because MySQL is not very free anymore. In this tutorial we shall be setting up Nginx with PHP-FPM on CentOS; the instructions to install MariaDB shall be covered in another post.

CentOS is a very popular OS for Linux-based web servers. CentOS (Community Enterprise Operating System) is based on RHEL (Red Hat Enterprise Linux) and is 100% binary compatible with it. For us it simply means that it is similar to RHEL in its working and environment, and that we have the handy yum command available to install software easily from the repositories. In this example we shall be working on CentOS 6.4 which, at the time of this post, is a recent version.

Install Nginx

The first step is to install the Nginx web server. Nginx is not available in the default CentOS repositories, but nginx.org provides CentOS-specific repositories for easy use.

Add the nginx repository. We create a repository file in the /etc/yum.repos.d directory:

$ nano /etc/yum.repos.d/nginx.repo

Now open the file and add the following lines. These instructions are provided by Nginx directly.

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=0
enabled=1

Save and close. Now nginx can be installed:

$ yum install nginx

The above will download and install the nginx web server and make it ready to use. After the installation completes, it's time to do some inspection. First use the service command to check the status of nginx:

[root@dhcppc2 ~]# service nginx status
nginx is stopped

The above shows that the nginx service is there but is stopped. Next check the configuration using the -t option of the nginx command:

[root@dhcppc2 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

The above command tells us that the configuration is OK and all set to run. Most importantly, it also tells us the location of the nginx configuration file. For creating virtual hosts/multiple domains, it is important to create separate configuration files for each host. The virtual host configurations are located at /etc/nginx/conf.d/.

OK, now let's start the nginx server:

[root@dhcppc2 conf.d]# service nginx start
Starting nginx:                                            [  OK  ]

Now nginx is up and running. Find the IP address of the nginx server using ifconfig and connect to that IP from a browser to test it out:

http://192.168.1.4/

You might have to open port 80 on the CentOS server if it is not already open; check this tutorial on how to open the http port on CentOS. Once it's open, the IP address of the nginx server should load a page with content like this:

Welcome to nginx!

If you see this page, the nginx web...
 

Install Apache web server on Ubuntu 13.10

Install Apache Web server

A while back I updated my Ubuntu to 13.10 and the Apache PHP installation got messed up, so I had to reinstall it quickly to continue working on my PHP projects. Apache is in the Ubuntu repositories, so it can be installed without much effort. Here is the quick command you need to fire at the terminal:

$ sudo apt-get install apache2

Apache by default configures itself so that you can open it from the browser with the localhost URL:

http://localhost/

The default web root directory is /var/www, so whatever files are put in this directory are accessible from the localhost URL. Later we shall check how to change the default web root directory.

To check what version of apache is installed, use the apache2 command with the -v option:

$ apache2 -v
Server version: Apache/2.4.6 (Ubuntu)
Server built:   Aug  9 2013 14:28:56

Locate configuration files

To get more information about how exactly apache is configured on your system, use the apache2ctl command:

$ apache2ctl -V
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message
Server version: Apache/2.4.6 (Ubuntu)
Server built:   Aug  9 2013 14:28:56
Server's Module Magic Number: 20120211:23
Server loaded:  APR 1.4.8, APR-UTIL 1.5.2
Compiled using: APR 1.4.8, APR-UTIL 1.5.2
Architecture:   64-bit
Server MPM:     prefork
  threaded:     no
    forked:     yes (variable process count)
Server compiled with....
 -D APR_HAS_SENDFILE
 -D APR_HAS_MMAP
 -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
 -D APR_USE_SYSVSEM_SERIALIZE
 -D APR_USE_PTHREAD_SERIALIZE
 -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
 -D APR_HAS_OTHER_CHILD
 -D AP_HAVE_RELIABLE_PIPED_LOGS
 -D DYNAMIC_MODULE_LIMIT=256
 -D HTTPD_ROOT="/etc/apache2"
 -D SUEXEC_BIN="/usr/lib/apache2/suexec"
 -D DEFAULT_PIDLOG="/var/run/apache2.pid"
 -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
 -D DEFAULT_ERRORLOG="logs/error_log"
 -D AP_TYPES_CONFIG_FILE="mime.types"
 -D SERVER_CONFIG_FILE="apache2.conf"

It tells the name of the configuration file, the server MPM being used and lots of other details. These are useful when configuring apache further. The main configuration file is located at /etc/apache2/apache2.conf; just prepend HTTPD_ROOT to SERVER_CONFIG_FILE to get its actual location.

There are lots of configuration files involved with apache. The main configuration file is "apache2.conf" as mentioned above, and it has instructions to load further configuration files. Here are the lines that do it:

# Include generic snippets of statements
IncludeOptional conf-enabled/*.conf

# Include the virtual host configurations:
IncludeOptional sites-enabled/*.conf

Change web root directory

To change the web root, we need to change the setting in the sites-enabled configuration files. As a standard practice, a separate configuration file is created inside the sites-enabled directory for each vhost or virtual host. A virtual host is a domain, so you can have multiple domains served by apache. In this example, however, we just use the default configuration file. There should be a file called 000-default.conf inside the /etc/apache2/sites-enabled directory; if it's not there, copy it from /etc/apache2/sites-available. The file looks like this initially:

<VirtualHost *:80>
    # The ServerName directive sets the...
 

Xubuntu 13.10 review – good as always

It's over a week since Xubuntu 13.10 was released, based on Ubuntu 13.10 Saucy Salamander, and I finally got time to take a look at it. If you are new to this, know that Xubuntu is an Xfce-desktop-based spin/variant of Ubuntu. It is not a derivative, just the same Ubuntu wrapped with the Xfce desktop. So you get the same repositories of software and everything else that is there in the original Ubuntu. Xubuntu gives you all the power of Ubuntu with a lightweight desktop that is not cumbersome to work with. Xfce offers a clean, conventional-style desktop ideal for those who want to focus more on getting things done. Xubuntu packs Xfce with the Greybird theme, which makes the desktop look very pragmatic. It's my favorite theme on both Lubuntu and Xubuntu.

Lightweight Ubuntu alternative

The lightweight nature of Xubuntu makes it a good choice for low-spec PCs and netbooks/notebooks where you don't want to waste resources unnecessarily. I use it on my Samsung N110 netbook without any issues. Xubuntu's simplicity also makes it ideal for work environments where functionality and productivity are most important. However, do not expect it to be super fast, because it's only the desktop environment that is lightweight; the rest of the Ubuntu system (and the Linux beneath it) is full size. Therefore it's not a distro for slow/old machines; for those you need an entirely lightweight Linux distro like Puppy Linux. When using a desktop like Xfce or LXDE with optimum configuration, you can definitely expect to free up some RAM and CPU.

Xubuntu, being a close variant of Ubuntu, sees most changes only in the desktop environment. Here is a list of updates that took place in 13.10: a new version of xfce4-settings has been uploaded,...
 