Community Blogs



How to Install Apache OpenOffice 4.0.1 on CentOS, RHEL and Fedora

Apache OpenOffice 4.0 versions have significant changes to the OpenOffice directory setup which affect your older OpenOffice profile. The Apache OpenOffice 4.0 Release Notes provide an explanation of these changes. However, you should have the opportunity to migrate your old profile settings to the new profile area.

Read the complete article on installing Apache OpenOffice 4.0.1 on CentOS, RHEL and Fedora systems. If you are running LibreOffice or an older version of Apache OpenOffice, we recommend removing it from the system first.
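On a CentOS or RHEL box, that removal step might look something like this (a rough sketch; the wildcard package names are illustrative and may need adjusting for your install):

yum remove openoffice* libreoffice*    # clear out any old OpenOffice or LibreOffice packages before installing 4.0.1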

 

The Tyranny of the Clouds

Or “How I learned to start worrying and never trust the cloud.”

The Clouderati have been derping for some time now about how we’re all going towards the public cloud and “private cloud” will soon become a distant, painful memory, much like electric generators filled the gap before power grids became the norm. They seem far too glib about that prospect, and frankly, they should know better. When the Clouderati see the inevitability of the public cloud, their minds lead to unicorns and rainbows that are sure to follow. When I think of the inevitability of the public cloud, my mind strays to “The Empire Strikes Back” and who’s going to end up as Han Solo. When the Clouderati extol the virtues of public cloud providers, they prove to be very useful idiots advancing service providers’ aims, sort of the Lando Calrissians of the cloud wars. I, on the other hand, see an empire striking back at end users and developers, taking away our hard-fought gains made from the proliferation of free/open source software. That “the empire” is doing this *with* free/open source software just makes it all the more painful an irony to bear.

I wrote previously that It Was Never About Innovation, and that article was set up to lead to this one, which is all about the cloud. I can still recall talking to Nicholas Carr about his new book at the time, “The Big Switch”, all about how we were heading towards a future of utility computing, and what that would portend. Nicholas saw the same trends the Clouderati did, except a few years earlier, and came away with a much different impression. Where the Clouderati are bowled over by Technology! and Innovation!, Nicholas saw a harbinger of potential harm and warned of a potential economic calamity as a result. While I also see a potential calamity, it has less to do with economic stagnation and more to do with the loss of both freedom and equality.

The virtuous cycle I mentioned in the previous article does not exist when it comes to abstracting software over a network, into the cloud, and away from the end user and developer. In the world of cloud computing, there is no level playing field – at least, not at the moment. Customers are at the mercy of service providers and operators, and there are no “four freedoms” to fall back on.

When several of us co-founded the Open Cloud Initiative (OCI), it was with the intent, as Simon Phipps so eloquently put it, of projecting the four freedoms onto the cloud. There have been attempts to mandate additional terms in licensing that would force service providers to participate in a level playing field. See, for example, the great debates over “closing the web services loophole” as we called it then, during the process to create the successor to the GNU General Public License version 2. Unfortunately, while we didn’t yet realize it, we didn’t have the same leverage as we had when software was something that you installed and maintained on a local machine.

The Way to the Open Cloud

Many “open cloud” efforts have come and gone over the years, none of them leading to anything of substance or gaining traction where it matters. Bradley Kuhn helped drive the creation of the Affero GPL version 3, which set out to define what software distribution and conveyance mean in a web-driven world, but the rest of the world has been slow to adopt it because, again, service providers have no economic incentive to do so. Where we find ourselves today is a world without a level playing field, which will, in my opinion, stifle creativity and, yes, innovation. It is this desire for “innovation” that drives the service providers to behave as they do, although as you might surmise, I do not think that word means what they think it means. As in many things, service providers want to be the arbiters of said innovation without letting those dreaded freeloaders have much of a say. Worse yet, they create services that push freeloaders into becoming part of the product – not a participant in the process that drives product direction. (I know, I know: yes, users can get together and complain or file bugs, but they cannot mandate anything over the providers.)

Most surprising is that the closed cloud is aided and abetted by well-intentioned, but ultimately harmful actors. If you listen to the Clouderati, public cloud providers are the wonderful innovators in the space, along with heaping helpings of concern trolling over OpenStack’s future prospects. And when customers lose because a cloud company shuts its doors, the Clouderati can’t bring themselves to care: c’est la vie and let them eat cake. The problem is that too many of the Clouderati think that Innovation! is a means to its own ends without thinking of ground rules or a “bill of rights” for the cloud. Innovation! and Technology! must rule all, and therefore the most innovative take all, and anything else is counter-productive or hindering the “free market”. This is what happens when the libertarian-minded carry prejudiced notions of what enabled open source success without understanding what made it possible: the establishment and codification of rights and freedoms. None of the Clouderati are evil, freedom-stealing, or greedy, per se, but their actions serve to enable those who are. Because they think solely in terms of Innovation! and Technology!, they set the stage for some companies to dominate the cloud space without any regard for establishing a level playing field.

Let us enumerate the essential items for open innovation:

  1. A set of ground rules by which everyone must abide, e.g. the four freedoms
  2. A level playing field where every participant is a stakeholder in a collaborative effort
  3. Economic incentives for participation

These will be vigorously opposed by those who argue that establishing such a list is too restrictive for innovation to happen, because… free market! The irony is that establishing such rules enabled Open Source communities to become the engine that runs the world’s economy. Let us take each and discuss its role in creating the open cloud.

Ground Rules

We have already established the irony that the four freedoms led to the creation of software that was used as the infrastructure for creating proprietary cloud services. What if the four freedoms were tweaked for cloud services? As a reminder, here are the four freedoms:

 

  • The freedom to run the program, for any purpose (freedom 0).
  • The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1).
  • The freedom to redistribute copies so you can help your neighbor (freedom 2).
  • The freedom to distribute copies of your modified versions to others (freedom 3).

 

If we rewrote this to apply to cloud services, how much would need to change? I made an attempt at this, and it turns out that only a couple of words need to change:

 

  • The freedom to run the program or service, for any purpose (freedom 0).
  • The freedom to study how the service works, and change it so it does your computing as you wish (freedom 1).
  • The freedom to implement and redistribute copies so you can help your neighbor (freedom 2).
  • The freedom to implement your modified versions for others (freedom 3).

 

Freedom 0 adds “or service” to denote that we’re not just talking about a single program, but a set of programs that act in concert to deliver a service.

Freedom 1 allows end users and developers to peek under the hood.

Freedom 2 adds “implement and” to remind us that the software alone is not much use – the data forms a crucial part of any service.

Freedom 3 also changes “distribute copies of” to “implement” because of the fundamental role that data plays in any service. Distributing copies of software in this case doesn’t help anyone without also adding the capability of implementing the modified service, data and all.

Establishing these rules will be met, of course, with howls of rancor from the established players in the market, as it should be.

Level Playing Field

With the establishment of the service-oriented freedoms, above, we have the foundation for a level playing field with actors from all sides having a stake in each other’s success. Each of the enumerated freedoms serves to establish a managed ecosystem, rather than a winner-take-all pillage and plunder system. This will be countered by the argument that if we hinder the development of innovative companies, won’t we a) hinder economic growth in general and b) socialism!

In the first case, there is a very real threat from a winner-take-all system. In its formative stages, when everyone has the economic incentive to innovate (there’s that word again!), everyone wins. Companies create and disrupt each other, and everyone else wins by utilizing the creations of those companies. But there’s a well known consequence of this activity: each actor will try to build in the ability to retain customers at all costs. We have seen this happen in many markets, such as the creation of proprietary, undocumented data formats in the office productivity market. And we have seen it in the cloud, with the creation of proprietary APIs that lock in customers to a particular service offering. This, too, chokes off economic development and, eventually, innovation. At first, this lock in happens via the creation of new products and services which usually offer new features that enable customers to be more productive and agile. Over time, however, once the lock-in is established, customers find that their long-term margins are not in their favor, and moving to another platform proves too costly and time-consuming. If all vendors are equal, this may not be so bad, because vendors have an incentive to lure customers away from their existing providers, and the market becomes populated by vendors competing for customers, acting in their interest. Allow one vendor to establish a larger share than others, and this model breaks down. In a monopoly situation, the incumbent vendor has many levers to lock in their customers, making the transition cost too high to switch to another provider. In cloud computing, this winner-take-all effect is magnified by the massive economies of scale enjoyed by the incumbent providers. Thus, the customer is unable to be as innovative as they could be due to their vendor’s lock-in schemes. If you believe in unfettered Innovation! at all costs, then you must also understand the very real economic consequences of vendor lock-in. By creating a level playing field through the establishment of ground rules that ensure freedom, a sustainable and innovative market is at least feasible. Without that, an unfettered winner-take-all approach will invariably result in the loss of freedom and, consequently, agility and innovation.

Economic Incentives

This is the hard one. We have already established that open source ecosystems work because all actors have an incentive to participate, but we have not established whether the same incentives apply here. In the open source software world, developers participate because they have to: the price of software is always dropping, and customers enjoy open source software too much to give it up for anything else. One thing that may be in our favor is the distinct lack of profits in the cloud computing space, although that changes once you include services built on cloud computing architectures.

If we focus on infrastructure as a service (IaaS) and platform as a service (PaaS), the primary gateways to creating cloud-based services, then the margins and profits are quite low. This market is, by its nature, open to competition because the race is on to lure as many developers and customers as possible to the respective platform offerings. However, the danger arises if one particular service provider is able to offer proprietary services that give it leverage over the others, establishing the lock-in levers needed to pound the competition into oblivion.

In contrast to basic infrastructure, the profit margins of proprietary products built on top of cloud infrastructure have been growing for some time, which incentivizes the IaaS and PaaS vendors to keep stacking proprietary services on top of their basic infrastructure. This results in a situation where increasing numbers of people and businesses have happily donated their most important business processes and workflows to these service providers. If any of them grow unhappy with the service, they cannot easily switch, because no competitor would have access to the same data or implementation of that service. In this case, not only is there a high cost associated with moving to another service, there is the distinct loss of utility (and revenue) that the customer would experience. There is a cost that comes from entrusting so much of your business to single points of failure with no known mechanism for migrating to a competitor.

In this model, there is no incentive for service providers to voluntarily open up their data or services to other service providers. There is, however, an incentive for competing service providers to be more open with their products. One possible solution could be to create an Open Cloud certification that would allow services that abide by the four freedoms in the cloud to differentiate themselves from the rest of the pack. If enough service providers signed on, it would lead to a network effect adding pressure to those providers who don’t abide by the four freedoms. This is similar to the model established by the Free Software Foundation and, although the GNU people would be loath to admit it, the Open Source Initiative. The OCI’s goal was to ultimately create this, but we have not yet been able to follow through on those efforts.

Conclusion

We have a pretty good idea why open source succeeded, but we don’t know if the open cloud will follow the same path. At the moment, end users and developers have little leverage in this game. One possibility would be if end users chose, at massive scale, to use services that adhered to open cloud principles, but we are a long way away from this reality. Ultimately, in order for the open cloud to succeed, there must be economic incentives for all parties involved. Perhaps pricing demands will drive some of the lower rung service providers to adopt more open policies. Perhaps end users will flock to those service providers, starting a new virtuous cycle. We don’t yet know. What we do know is that attempts to create Innovation! will undoubtedly lead to a stacked deck and a lack of leverage for those who rely on these services.

If we are to resolve this problem, it can’t be about innovation for innovation’s sake – it must be, once again, about freedom.

This article originally appeared on the Gluster Community blog.

 

CentOS / RHEL: See Detailed History Of yum Commands

The yum command has a history option on the latest versions of CentOS / RHEL v6.x+. The history databases are normally found in the /var/lib/yum/history/ directory. The history option was added to yum at the end of 2009 (or thereabouts). The history command allows an admin to access detailed information on the history of yum transactions that have been run on a system. You can see what has happened in past transactions (assuming the history_record config option is set). You can use various command line options to view what happened, undo/redo/rollback to act on that information, and start a new history file.
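A few typical invocations look like this (a quick sketch; the transaction ID 12 is only an example):

yum history              # list recent transactions with their IDs, dates and actions
yum history info 12      # show the packages and command line used in transaction 12
yum history undo 12      # revert transaction 12, assuming the packages are still available
yum history new          # start a new history file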

Read more: CentOS / RHEL: See Detailed History Of yum Commands

 

Setting up an ARM Based Cassandra Cluster with Beagle Bone Black

A great project to try on cheap ARM boards such as the BeagleBone Black is to set up a database cluster. For a developer, this is useful both as storage for projects and to gain experience administering NoSQL databases with replication. Using my existing three-node BeagleBone Black cluster I decided to try the Cassandra database. Cassandra is easy to use for those already familiar with SQL databases and is freely available. All of these steps were done on an Ubuntu install and should work on any ARM board running an Ubuntu-based OS.

To get started, you need the Cassandra binaries. I was unable to find a repository that had an ARM version of Cassandra, so I downloaded it straight from the Apache site and untarred it. You can go to http://cassandra.apache.org/download/ and use wget to get the gzip file from one of the mirrors. The version I downloaded was apache-cassandra-2.0.2-bin.tar.gz. Once you have it on each machine, place it in a directory you want to use for its home, for example /app/cassandra, and then extract it:

tar -xvzf apache-cassandra-2.0.2-bin.tar.gz

Now you have everything you need to configure and run Cassandra. To run in a cluster, we need to set up some basic properties on each machine so they know how to join the cluster. Start by navigating to the conf directory inside the folder you just extracted and open the cassandra.yaml file for editing. First find listen_address and set it to the IP address or hostname of the current machine. For example, for the first machine in my cluster:

listen_address: 192.168.1.51

Then do the same for rpc_address:

rpc_address: 192.168.1.51

Finally we list all of our IPs in the seeds section. As I have three nodes in my cluster, the setting on each machine looked like this:

- seeds: "192.168.1.51,192.168.1.52,192.168.1.53"

Additionally if you want to give your cluster a specific name, you can set the cluster_name property. Once all three machines are set up as you like them, you can start up cassandra by going to the bin directory on each and running:

sudo ./cassandra
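Once the daemons are up, you can check that all three nodes have joined the ring with nodetool from the same bin directory (a quick sanity check; the address below is just my first node):

./nodetool -h 192.168.1.51 status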

Using Cassandra

Once all three nodes are running, we can test on one of the nodes with cqlsh and create a database. cqlsh is a command line program for using Cassandra's SQL-like language called CQL. I connected to the utility from my first node like this:

./cqlsh 192.168.1.51

The first step is to create a keyspace, which acts like a schema in a SQL database such as Oracle. Keyspaces store column families, which act like tables and hold rows of data with similar columns:

>create keyspace testspace with replication = { 'class': 'SimpleStrategy', 'replication_factor': 3 };
>use testspace;

Now we can create our column family and add some rows to it. It looks just like creating a database table:

>create table machines (id int primary key, name text);
>insert into machines (id, name) values (1, 'beaglebone1');
>insert into machines (id, name) values (2, 'beaglebone2');
>insert into machines (id, name) values (3, 'beaglebone3');
>select * from machines;

Now we have a simple set of columns with some rows of data. We can check that replication is working by logging in from a different node and performing the same selection:

./cqlsh 192.168.1.52
>use testspace;
>select * from machines;

So now we have a working ARM-based database cluster. There is a lot more you can do with Cassandra and some good documentation can be found here. The biggest issue with using the BeagleBone Black for this is the speed of the reads and writes. The SD card is definitely not ideal for real-time applications or anything needing performance, but this tutorial is certainly applicable to faster ARM machines like the Cubieboard and of course desktop clusters as well.

 

On the use of low-thread, high-speed “gaming computers” to solve engineering simulations

   Many of us in the Linux community work only with software that is FOSS, which stands for free and open-source software. This is software that is not only open source but is available without licensing fees. There are many outstanding FOSS products on the market today, from the Firefox browser to most distributions of Linux to the OpenOffice suite. However, there are times when FOSS is not an option; a good example is in my line of work supporting engineering software, especially CAD tools and simulators. This software is not only costly but very restrictive: every aspect of the software is charged for. For example, many of the simulators can run multithreaded, with one piece of software running on up to 16 threads for a single simulation. More threads require more tokens, and we pay per token available. This puts us in a situation where we want to maximize the amount we accomplish with as few threads as we can.

   If, for example, an engineer needs a simulation to finish to prove a design concept and it will take 6 hours to simulate on 1 thread, he or she will want another token in order to use more threads. Using one token may buy a reduction of 3 hours in simulation time, but the cost is that the tokens used for that simulation cannot be used by another engineer. The simple solution would be to keep buying more and more tokens until every engineer has enough to run on the maximum number of threads at all times. If there are 5 engineers who run simulations that can each run on 16 threads for the cost of 5 tokens, then we will need 25 tokens. Of course the simple solution rarely works: the cost of 25 tokens is so high that it could easily bankrupt a medium-sized company.

   Another solution would be to use fewer tokens but implement advanced queuing software. This has the advantage that engineers can submit tasks and the servers running the simulations will run at all times (we hope), using the tokens we do have to the utmost. This strategy works well when deadlines are far away, but as they get close the fight for slots can grow.

  Since the limiting resource here is the number of threads, we tried a different approach. As we are paying per thread we run, we should try to run each thread as fast as possible (increasing per-thread performance) rather than maximizing throughput. To further justify our reasoning we created benchmarks for our tools and compared the amount of time it took to run a simulation against the number of threads we employed for it.

  The conclusion was: independent of the software and the type of simulation we ran, performance increased sharply up to roughly 4.5 threads and then leveled off. A shocking result, given that the tools we used came out in different years and were produced by different vendors.

   Given this information we concluded that if we ran 4 threads 25% faster on machine A (by overclocking), we could achieve better results than on machine B, despite the two sharing the same architecture. This meant that for the near-trivial price (compared to a server’s cost or additional tokens) of a modified desktop computer we could outperform a server with the maximum number of tokens we could purchase.

Our new system specifications:

Newegg #          Price     Item name                       Quantity
N82E16819115095   349.99    Intel Core i7 (socket 1155)     1
N82E16813131837   139.99    Asus motherboard                1
N82E16817171048   149.99    Cooler Master power supply      1
N82E16820231611   139.99    G.Skill DDR3 RAM                2
N82E16835103181    84.99    All-in-one liquid CPU cooler    1
N82E16811119213   164.99    Cooler Master PC case           1
N82E16833106126   131.99    Ethernet server adapter         1
N82E16820167115   204.99    SSD 180GB                       1
Amazon order      349.00    i7 CPU                          1

   The total cost was approximately $1,200 per unit after rebates. Assembly took about 3 hours. Overclocking was stable at 4.7 GHz, with a maximum recorded temperature of 70 °C. The operating system is CentOS with the full desktop installed. The NICs have two connections link-aggregated to our servers.

  To test the overclocking we wrote a simple infinite-loop floating point operation in Perl and launched 8 instances of it while monitoring the results using a FOSS program called i7z. The hard drive only exists to provide a boot drive; all other functions are performed via ssh and NFS exports. The units sit headless in our server room. It has been estimated that we have reduced overall simulation time across the company by 50% with only two units.
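  The stress script itself is not reproduced here; a minimal sketch of that kind of test, wrapped in a shell loop to launch the 8 instances, might look like the following (illustrative only):

for i in $(seq 1 8); do
    perl -e '$x = 1.000001; $x = $x * 1.000001 + 1.1 while 1;' &   # busy floating point loop
done
i7z    # watch per-core frequency and temperature while the loops run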

  The analogy we give is one of transportation. We have servers which function like buses: they can move a great many people at a time, which is great, but buses are slow. We have now constructed high-speed sports cars; these cars can only move a few people at a time, but can move them much faster.


Isiah Schwartz

Teledyne Lecroy

 

 

Several Monitoring Metrics for Virtualized Infrastructure

To build a virtualized cloud infrastructure, four major elements play a vital role in translating a physical design into a virtual one: processors, memory, storage and network. Beyond the design comes the life of the infrastructure, that is to say the daily operations. Should a virtual infrastructure be administered day to day in the same way as a physical environment? Pooling so many resources requires other parameters to be considered, which we will look at here.

Processors performance monitoring

Although the concept of a virtual CPU (vCPU) on a hypervisor (VMware or Microsoft Hyper-V) is close to the notion of a physical core, a virtual processor is much less powerful than a physical processor. The observed CPU load on a virtual server can therefore be noticeably higher than on its physical counterpart. But there is nothing to worry about if the server has been sized to manage peak loads. Keeping the original alert thresholds, in contrast, often makes less sense, and it is wise to adjust them.

By the way, not all hypervisors within a server farm are necessarily identical. The differences may be at the level of the processors (generation, frequency, cache) or other technical characteristics. This is something to consider, since a virtual server can be migrated live from one hypervisor to another (vMotion in VMware, Live Migration in Microsoft Hyper-V). After such a live migration, the processor utilization of the virtual server can then vary.

Should the alert thresholds be defined to take such context changes into account?

The hypervisor's scheduler arbitrates each virtual server's access to the physical processors. A virtual server may not get access immediately: the more virtual servers there are, the more contention and waiting time (latency), which is important to consider. In addition, performance monitoring is not limited solely to identifying how power is used, but also to detecting where it is still available. In extreme cases it is possible to see a virtual server with a low CPU load that still performs poorly because it is waiting for processor time. This per-virtual-server processor latency indicator shows how well the available power is really being used. Do not believe that power increases simply by adding virtual processors to a server; it is actually more complex, and the effect is often the opposite of what was intended. Usually you need to address this problem by analyzing the total number of virtual processors on each hypervisor.

Monitoring the memory utilization

If the memory is shared between the virtual servers running within the same hypervisor, it is important to distinguish used from unused memory. Used memory is usually statically allocated to a virtual server, while unused memory is pooled. For this reason it is possible to run virtual servers whose total memory exceeds the memory capacity of the hypervisor. Memory over-provisioning of virtual servers on a hypervisor is not trivial. It is a kind of risk taking, a bet that all the servers will not use their memory at the same time. There is no absolute “right” or “wrong” here; much depends on the design of the virtual servers. However, monitoring will prevent it from becoming a source of performance degradation.

A first important consequence is that over-provisioning prevents starting all the virtual servers simultaneously. Indeed, a hypervisor verifies that it can allocate the entire memory of a virtual server before starting it. It is possible to start the servers with a delay, in a specific order, so that each one releases its unused memory.

Slower memory access is the second consequence. VMware has implemented a kind of garbage collection (called ballooning) within virtual servers. It occurs when the hypervisor has to provide memory to a virtual server while the available capacity is insufficient: the hypervisor forces other virtual servers to release memory. This freed memory is then distributed to servers as needed. The mechanism is not immediate; there is some latency between the request for memory allocation and its fulfillment. This is why it slows down memory access, and it undermines the good performance of the servers.

Another consequence of ballooning is that it can completely change the memory behavior of a server. Linux systems are known to use available memory and keep it cached when it is released by processes (memory that does not appear to be free). The memory occupancy rate of such a server is normally relatively stable and high. With ballooning, this rate will decrease and will no longer be stable (frequent releases and increases).
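As a quick illustration, this effect can be observed from inside a Linux guest with standard tools (a rough sketch; vmware-toolbox-cmd is only present if VMware Tools is installed):

free -m                           # cached memory shrinks and usage fluctuates as the balloon inflates and deflates
vmware-toolbox-cmd stat balloon   # reports the memory currently claimed by the balloon driver, in MB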

There are also cases where even used memory ends up being shared between virtual servers. This happens when ballooning is not enough. Performance is then so greatly degraded that it is a situation to be avoided.

 

How to Install ImageMagick on CentOS, RHEL and Ubuntu

ImageMagick is a software suite to create, edit, compose, or convert bitmap images. It can read and write images in a variety of formats like GIF, JPEG, PNG, Postscript, and TIFF. We can also use ImageMagick to resize, flip, mirror, rotate, distort, shear and transform images, adjust image colors, apply various special effects, or draw text, lines, polygons, ellipses and Bézier curves.

ImageMagick is typically used from the command line. We can also use it from many programming languages through interfaces such as Magick.NET (.NET), IMagick (PHP), PerlMagick (Perl), etc.
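For example, a couple of typical invocations once the package is installed (a minimal sketch; the file names are illustrative):

convert input.jpg -resize 800x600 output.jpg       # resize an image to fit within 800x600
convert photo.png -rotate 90 photo-rotated.png     # rotate an image 90 degrees clockwise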

Read the complete article at Install ImageMagick on CentOS, RHEL and Ubuntu.

 

Meet the HD Camera Cape for BBB: a $49.99 low-cost camera cape

  • Adding a camera cape to the latest BeagleBone Black, RadiumBoards has increased its “Cape”-ability to provide a portable camera solution. Thousands of designers, makers, hobbyists and engineers are adopting BeagleBone Black and are becoming huge fans because of its unique functionality as a pocket-sized expandable Linux computer that can connect to the Internet. To support them in their work we have designed this HD Camera Cape with the following features and benefits:

    • 720p HD video at 30fps
    • Superior low-light performance
    • Ultra-low-power
    • Progressive scan with Electronic Rolling Shutter
    • No software effort required for OEM
    • Proven, off-the-shelf driver available for quick and easy software integration
    • Minimize development time for system designers
    • Easily Customized
    • Simple Design with Low Cost
  • Priced at just $49.99, this game-changing cape for the latest credit-card-sized BBB can help developers differentiate their product and get to market faster.
  • To learn more about the new and exciting cape, check out www.radiumboards.com

    10 basic examples of Linux ps command

    The ps command on Linux is one of the most basic commands for viewing the processes running on the system. It provides a snapshot of the current processes along with detailed information like user id, CPU usage, memory usage, command name etc. It does not display data in real time like the top or htop commands, but even though it is simpler in features and output it is still an essential...
    Read more...
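    A couple of quick invocations to get started (a minimal sketch; the full article walks through ten of them):

    ps aux                         # every process, BSD style, with CPU and memory usage
    ps -ef                         # every process, full format, including the parent process ID
    ps aux --sort=-%mem | head     # the most memory-hungry processes first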
     

    Setup Apache 2.4 and Php FPM with mod proxy fcgi on Ubuntu 13.10

    The module mod_proxy_fcgi is a new one, and it allows Apache to connect to / forward requests to an external FastCGI process manager like PHP-FPM. This allows for a complete separation between the running of PHP scripts and Apache. Earlier we had to use modules like mod_fcgid and mod_fastcgi, which all had some limitations. mod_fcgid, for example, did not properly utilise the process management capability of php-cgi, whereas mod_fastcgi is a third-party module. With the arrival of mod_proxy_fcgi Apache...
    Read more...
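    The core of such a setup is a single forwarding rule in the Apache 2.4 configuration (a sketch assuming php-fpm listens on 127.0.0.1:9000 and the document root is /var/www/html; adjust both for your install):

    ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9000/var/www/html/$1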
     

    Three Best Network Programming Debugging Tools

    It is always time consuming if we don't use the right network debugging tools when doing socket programming or trying to run a client-server program for the first time.

    When doing network programming, you sometimes want to know why send() from your client or server is failing, why your server program will not restart, whether any other process is already using the port you are planning to use for your server, and so on.

    There are many such tools available on Linux today, but we will look at the three most important ones in this article.

    I. netstat
    =========

    The netstat command displays various network-related information such as network connections, routing tables, interface statistics, masquerade connections, multicast memberships, etc.

    1) Show the list of network interfaces

    OpenSuse12.3#netstat -i
    Kernel Interface table
    Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
    eth0 1500 0 0 0 0 0 0 0 0 0 BMU
    lo 65536 0 45 0 0 0 45 0 0 0 LRU
    wlan0 1500 0 25092 0 0 0 22958 0 0 0 BMRU

    2) List all ports (both listening and non-listening: TCP, UDP, Unix)

    OpenSuse12.3#netstat -a
    Active Internet connections (servers and established)
    Proto Recv-Q Send-Q Local Address Foreign Address State
    tcp 0 0 localhost:smtp *:* LISTEN
    tcp 0 0 192.168.1.5:6688 *:* LISTEN
    tcp 0 0 192.168.1.5:49875 safebrowsing:www-http ESTABLISHED
    tcp 0 1 192.168.1.5:60804 fls.doubleclic:www-http FIN_WAIT1
    tcp 0 0 192.168.1.5:43589 safebrowsing.c:www-http ESTABLISHED
    tcp 0 0 *:33532 *:* LISTEN
    unix 2 [ ACC ] STREAM LISTENING 8645 /var/run/sdp
    unix 2 [ ] DGRAM 12241

    3) List only TCP Port

    OpenSuse12.3#netstat -at
    Active Internet connections (servers and established)
    Proto Recv-Q Send-Q Local Address Foreign Address State
    tcp 0 0 localhost:smtp *:* LISTEN
    tcp 0 0 192.168.1.5:6688 *:* LISTEN
    tcp 0 0 localhost:ipp *:* LISTEN
    tcp 0 0 *:33532 *:* LISTEN

    Similarly for UDP, "netstat -au"

    4) List the Sockets which are in Listening state

    OpenSuse12.3#netstat -l
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address Foreign Address State
    tcp 0 0 localhost:smtp *:* LISTEN
    tcp 0 0 192.168.1.5:6688 *:* LISTEN
    tcp 0 0 *:52980 *:* LISTEN
    tcp 0 0 localhost:ipp *:* LISTEN
    tcp 0 0 *:33532 *:* LISTEN
    Active UNIX domain sockets (only servers)
    Proto RefCnt Flags Type State I-Node Path
    unix 2 [ ACC ] STREAM LISTENING 10227 private/scache
    unix 2 [ ACC ] STREAM LISTENING 11714 @/tmp/dbus-NmyF9Qx2gH

    List only listening TCP Ports using netstat -lt
    List only listening UDP Ports using netstat -lu
    List only the listening UNIX Ports using netstat -lx

    5) Display PID and program names in netstat output using netstat -p

    6) Print netstat information continuously
    netstat -c

    7) Find out on which port a program is running

    OpenSuse12.3#netstat -ap | grep servermine
    tcp 0 0 192.168.1.5:6688 *:* LISTEN 2135/servermine

    II. tcpdump
    ===========
    tcpdump allows us to capture all packets that are received and sent. This helps us see which TCP segments are
    sent and received (SYN, FIN, RST, etc.) and understand the root cause of an issue.

    1) Capture packets from a particular ethernet interface using tcpdump -i
    Below is a tcpdump capture for a simple TCP client & server example, from the SYN to the FIN/ACK with one data packet in between.

    OpenSuse12.3#tcpdump -i lo
    11:05:27.026304 IP 192.168.1.5.34289 > 192.168.1.5.6688: Flags [S], seq 1990318384, win 43690, options [mss 65495,sackOK,TS val 6116309 ecr 0,nop,wscale 7], length 0
    11:05:27.026331 IP 192.168.1.5.6688 > 192.168.1.5.34289: Flags [S.], seq 3856734826, ack 1990318385, win 43690, options [mss 65495,sackOK,TS val 6116309 ecr 6116309,nop,wscale 7], length 0
    11:05:27.026357 IP 192.168.1.5.34289 > 192.168.1.5.6688: Flags [.], ack 1, win 342, options [nop,nop,TS val 6116309 ecr 6116309], length 0
    11:05:27.026689 IP 192.168.1.5.6688 > 192.168.1.5.34289: Flags [P.], seq 1:27, ack 1, win 342, options [nop,nop,TS val 6116310 ecr 6116309], length 26
    11:05:27.026703 IP 192.168.1.5.34289 > 192.168.1.5.6688: Flags [.], ack 27, win 342, options [nop,nop,TS val 6116310 ecr 6116310], length 0
    11:05:27.026839 IP 192.168.1.5.34289 > 192.168.1.5.6688: Flags [F.], seq 1, ack 27, win 342, options [nop,nop,TS val 6116310 ecr 6116310], length 0
    11:05:27.027445 IP 192.168.1.5.6688 > 192.168.1.5.34289: Flags [.], ack 2, win 342, options [nop,nop,TS val 6116311 ecr 6116310], length 0
    11:05:32.026898 IP 192.168.1.5.6688 > 192.168.1.5.34289: Flags [F.], seq 27, ack 2, win 342, options [nop,nop,TS val 6121310 ecr 6116310], length 0
    11:05:32.026920 IP 192.168.1.5.34289 > 192.168.1.5.6688: Flags [.], ack 28, win 342, options [nop,nop,TS val 6121310 ecr 6121310], len

    2) Capture only N number of packets using tcpdump -c
    OpenSuse12.3#tcpdump -c 100 -i lo
    capture only 100 packets

    3) Capture the packets and write into a file using tcpdump -w
    OpenSuse12.3# tcpdump -w myprogamdump.pcap -i lo
    tcpdump: listening on lo, link-type EN10MB (Ethernet), capture size 65535 bytes
    9 packets captured
    18 packets received by filter
    0 packets dropped by kernel

    4) Reading/viewing the packets from a saved file using tcpdump -r
    OpenSuse12.3#tcpdump -tttt -r myprogamdump.pcap
    reading from file myprogamdump.pcap, link-type EN10MB (Ethernet)
    2013-11-30 11:12:55.019872 IP 192.168.1.5.34290 > 192.168.1.5.6688: Flags [S], seq 2718665633, win 43690, options [mss 65495,sackOK,TS val 6564303 ecr 0,nop,wscale 7], length 0
    2013-11-30 11:12:55.019899 IP 192.168.1.5.6688 > 192.168.1.5.34290: Flags [S.], seq 2448605009, ack 2718665634, win 43690, options [mss 65495,sackOK,TS val 6564303 ecr 6564303,nop,wscale 7], length 0
    2013-11-30 11:12:55.019929 IP 192.168.1.5.34290 > 192.168.1.5.6688: Flags [.], ack 1, win 342, options [nop,nop,TS val 6564303 ecr 6564303], length 0
    2013-11-30 11:12:55.020228 IP 192.168.1.5.6688 > 192.168.1.5.34290: Flags [P.], seq 1:27, ack 1, win 342, options [nop,nop,TS val 6564303 ecr 6564303], length 26
    2013-11-30 11:12:55.020243 IP 192.168.1.5.34290 > 192.168.1.5.6688: Flags [.], ack 27, win 342, options [nop,nop,TS val 6564303 ecr 6564303], length 0
    2013-11-30 11:12:55.020346 IP 192.168.1.5.34290 > 192.168.1.5.6688: Flags [F.], seq 1, ack 27, win 342, options [nop,nop,TS val 6564303 ecr 6564303], length 0
    2013-11-30 11:12:55.020442 IP 192.168.1.5.6688 > 192.168.1.5.34290: Flags [.], ack 2, win 342, options [nop,nop,TS val 6564304 ecr 6564303], length 0
    2013-11-30 11:13:00.020477 IP 192.168.1.5.6688 > 192.168.1.5.34290: Flags [F.], seq 27, ack 2, win 342, options [nop,nop,TS val 6569304 ecr 6564303], length 0
    2013-11-30 11:13:00.020506 IP 192.168.1.5.34290 > 192.168.1.5.6688: Flags [.], ack 28, win 342, options [nop,nop,TS val 6569304 ecr 6569304], length 0

    5) Receive only the packets of a specific protocol type like arp, tcp, udp, ip etc

    OpenSuse12.3#tcpdump -i wlan0 ip
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on wlan0, link-type EN10MB (Ethernet), capture size 65535 bytes
    11:18:04.193704 IP 132.213.238.6.http > 192.168.1.5.32991: Flags [.], seq 2723848246:2723849686, ack 3820601748, win 6432, options [nop,nop,TS val 786299612 ecr 6873162], length 1440
    11:18:04.194241 IP 192.168.1.5.50414 > 192.168.1.1.domain: 36798+ PTR? 5.1.168.192.in-addr.arpa. (42)
    11:18:04.196315 IP 132.213.238.6.http > 192.168.1.5.32991: Flags [P.], seq 1440:2880, ack 1, win 6432, options [nop,nop,TS val 786299612 ecr 6873162], length 1440

    6) Receive packet flows on a particular port using tcpdump port
    tcpdump -i eth0 port 4040

    7) Capture packets for a particular destination IP and port
    tcpdump -w mypackets.pcap -i eth0 dst 192.168.1.6 and port 22

    III. lsof
    =========
    lsof, meaning 'LiSt Open Files', is used to find out which files are opened by which process. As we all know, Linux/Unix considers everything a file (pipes, sockets, directories, devices, etc.).
    So by using lsof you can get information about any open file. Here we primarily look at the options related to
    network files.

    1) List all network connections

    OpenSuse12.3#lsof -i
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    systemd 1 root 32u IPv6 6955 0t0 TCP *:ipp (LISTEN)
    avahi-dae 475 avahi 11u IPv4 9245 0t0 UDP *:mdns
    avahi-dae 475 avahi 14u IPv6 9248 0t0 UDP *:46627
    master 766 root 12u IPv4 10100 0t0 TCP localhost:smtp (LISTEN)

    2) List processes which are listening on a particular port

    OpenSuse12.3#lsof -i :6688
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    servermine 3127 prince 3u IPv4 1256979 0t0 TCP 192.168.1.5:6688 (LISTEN)

    3) List all TCP or UDP connections

    OpenSuse12.3#lsof -i tcp; lsof -i udp
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    systemd 1 root 32u IPv6 6955 0t0 TCP *:ipp (LISTEN)
    master 766 root 12u IPv4 10100 0t0 TCP localhost:smtp (LISTEN)
    master 766 root 13u IPv6 10102 0t0 TCP localhost:smtp (LISTEN)
    gnome-ses 800 prince 13u IPv6 11789 0t0 TCP *:33532 (LISTEN)
    gnome-ses 800 prince 14u IPv4 11790 0t0 TCP *:52980 (LISTEN)
    cupsd 1029 root 4u IPv6 6955 0t0 TCP *:ipp (LISTEN)
    cupsd 1029 root 10u IPv4 12739 0t0 TCP localhost:ipp (LISTEN)

    4) List all IPv4 and IPv6 network files

    OpenSuse12.3#lsof -i 4
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    systemd 1 root 33u IPv4 6956 0t0 UDP *:ipp
    avahi-dae 475 avahi 11u IPv4 9245 0t0 UDP *:mdns
    avahi-dae 475 avahi 13u IPv4 9247 0t0 UDP *:37715
    master 766 root 12u IPv4 10100 0t0 TCP localhost:smtp (LISTEN)
    gnome-ses 800 prince 14u IPv4 11790 0t0 TCP *:52980 (LISTEN)
    dhclient 926 root 6u IPv4 12038 0t0 UDP *:bootpc

    OpenSuse12.3#lsof -i 6
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    systemd 1 root 32u IPv6 6955 0t0 TCP *:ipp (LISTEN)
    avahi-dae 475 avahi 12u IPv6 9246 0t0 UDP *:mdns
    avahi-dae 475 avahi 14u IPv6 9248 0t0 UDP *:46627
    master 766 root 13u IPv6 10102 0t0 TCP localhost:smtp (LISTEN)
    gnome-ses 800 prince 13u IPv6 11789 0t0 TCP *:33532 (LISTEN)
    dhclient 926 root 21u IPv6 12022 0t0 UDP *:55332
    cupsd 1029 root 4u IPv6 6955 0t0 TCP *:ipp (LISTEN)

    5) List the open files of running processes for TCP ports in the range 1-1024

    OpenSuse12.3#lsof -i TCP:1-1024
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    systemd 1 root 32u IPv6 6955 0t0 TCP *:ipp (LISTEN)
    master 766 root 12u IPv4 10100 0t0 TCP localhost:smtp (LISTEN)
    master 766 root 13u IPv6 10102 0t0 TCP localhost:smtp (LISTEN)
    cupsd 1029 root 4u IPv6 6955 0t0 TCP *:ipp (LISTEN)
    cupsd 1029 root 10u IPv4 12739 0t0 TCP localhost:ipp (LISTEN)

    6) List all network files in use by a specific process
    OpenSuse12.3#lsof -i -a -p 234

    7) List all open files belonging to all active processes

    OpenSuse12.3#lsof

    COMMAND PID TID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    systemd 1 root cwd DIR 8,6 4096 2 /
    systemd 1 root rtd DIR 8,6 4096 2 /
    systemd 1 root mem REG 8,6 126480 131141 /lib64/libselinux.so.1
    systemd 1 root mem REG 8,6 163493 131128 /lib64/ld-2.17.so
    systemd 1 root 0u CHR 1,3 0t0 2595 /dev/null
    systemd 1 root 6r DIR 0,18 0 3637 /sys/fs/cgroup/systemd/system
    systemd 1 root 16u unix 0xffff88007c0ec100 0t0 3857 socket

     