Last week it was made public that the living-room prototypes of the Linux-based, SteamOS-powered Steam Machines console shipping this calendar year will use Intel CPUs and NVIDIA GPUs. Now there’s a bit more on Valve’s relationship with NVIDIA…
How to Visualize Disk Usage on Linux
They say a picture is worth a thousand words. This age-old saying applies to disk usage as well. As your Linux system ages, chances are you will start to run out of disk space. Visualizing disk usage can help you in this case by showing how the overall disk space is being used, […]
Continue reading…
The post How to Visualize Disk Usage on Linux appeared first on Xmodulo.
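Though the post itself is truncated above, the usual command-line starting points for checking disk usage are worth a quick sketch. These are standard df/du invocations chosen by the editor, not commands quoted from the Xmodulo article, and the paths are examples:

```shell
# Common ways to inspect disk usage from the shell (paths are examples).
df -h /                          # free/used space on the root filesystem
du -sh "$HOME"                   # total size of one directory tree
# The five largest immediate subdirectories, biggest last:
du -h --max-depth=1 "$HOME" 2>/dev/null | sort -h | tail -5
```

Graphical and curses tools such as the ones articles like this typically cover build on the same numbers these commands report.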
OpenDaylight Foundation Aims to Shape the Future of Software Defined Networking (SDN)

Earlier this year, the Linux Foundation announced the founding of the OpenDaylight Project, a new open source framework designed to shape the future of Software Defined Networking (SDN). The project launched with significant industry support and has the goal of “a common and open SDN platform for developers to utilize, contribute to, and build commercial products and technologies.”
Apple iOS Gains on Google Android in Mobile OS Race
Android’s still number one, but with the arrival of Apple’s new iPhone 5c and iPhone 5s, Apple’s iOS narrowed the gap.
A Simple BASH Script to Test Your Internet Connectivity
Many users around the world load Google’s index page to check whether their Internet connection is working. It is also often necessary to check periodically whether a server you are running is connected to the Internet, and opening a web page every time you want to check is cumbersome. As an alternative, it makes sense to run a script in the background, scheduled periodically with cron.
Read More on YourOwnLinux…
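A check of the kind described can be sketched in a few lines of Bash. This is a hypothetical sketch by the editor, not the script from YourOwnLinux; the function name, default host, and timeout values are all illustrative:

```shell
#!/bin/bash
# Hypothetical sketch: report whether a host answers a ping, so the
# result can be logged from a cron job. Host and timeouts are examples.

check_net() {
    local host="${1:-google.com}"
    # -c 1: send a single probe; -W 2: give up after 2 seconds
    if ping -c 1 -W 2 "$host" > /dev/null 2>&1; then
        echo "online"
    else
        echo "offline"
    fi
}

check_net "$@"
```

Scheduled from cron, an entry such as `*/5 * * * * /path/to/check_net.sh >> /var/tmp/net.log` (an illustrative path) would log the result every five minutes.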
Linux 3.12-rc4 Kernel Has Lots Of File-System Commits
As usual, Linus Torvalds released the latest Linux development kernel on Sunday…
Kernel Prepatch 3.12-rc4
The fourth 3.12 prepatch is out for testing. “Hmm. rc4 has more new commits than rc3, which doesn’t make me feel all warm and fuzzy, but nothing major really stands out. More filesystem updates than normal at this stage, perhaps, but I suspect that is just happenstance. We have cifs, xfs, btrfs, fuse and nilfs2 fixes here.”
Valve Details Specs for Linux-Based Steam Machine Prototype Gaming PCs
The 300 prototypes will feature a variety of Intel processors and Nvidia graphics cards inside.
Reality Check: Supercomputers Still Rule… and Linux Still Rules Them
Editor’s Note: This is part of a series by SUSE community marketing manager Brian Proffitt for Linux.com called “Reality Check” that looks at Linux in the real world.
Earlier this summer, The Linux Foundation released the report 20 years of Top500.org, which marked the 20th anniversary of the Top 500 supercomputer ranking, and (quite naturally) highlighted Linux’s dominance on systems within the Top 500 over time.
From the very beginning of my time with Linux, the notion of Linux’s scalability up to high-performance architectures has stuck with me. I used to tell people, when I explained Linux, that it was the one operating system that could run a wristwatch or a supercomputer.
But with the advent of cloud and virtual data center computing, are the days of the supercomputer coming to an end, leaving them as little more than trophies for universities and nations to show off when their top-ranked systems are running?
Supercomputers tend to do well in scenarios where a lot of data-processing has to be done in a very calculation-intensive and iterative way. Modeling weather data, chemical and biological interactions, geological data… these are all standard fare in the typical supercomputer’s diet.
But given the rise of clustered computers, is supercomputing even worth it anymore? After all, supercomputers are not easy to build and tend to need a lot of resources (like power) to operate. Couldn’t cloud computing or even a cluster of Hadoop systems do the same thing for a lot less hassle?
It depends on the problem. For the iterative calculations described above, the input of one step depends on the output of steps that have come before… and there are many, many steps to be taken.
In this case, it can make more sense for the data to be near the supercomputer processors, all in one place on one machine, as opposed to being spread out amongst distributed systems. Moving the data back and forth to the processing machine(s) would take a very long time and therefore would be very inefficient.
With Hadoop, though, the issue of moving data around is basically solved… because the nature of a Hadoop cluster means that the data is scaled out to reside on the machines where processing is going on.
But Hadoop has limitations, not the least of which is that it uses a batch processor called MapReduce to search for and manipulate data. MapReduce tends to line jobs up in serial fashion, which is not good for iterative data processing. It is also, for now, not exactly easy to code, so rigging up the right algorithms for processing data can be very challenging.
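The batch, serial flavor of MapReduce can be loosely illustrated with a shell pipeline (an editor’s analogy, not actual Hadoop code): a “map” stage emits records, a sort plays the role of the shuffle, and a “reduce” stage aggregates, with each stage running to completion before the next begins.

```shell
# A rough word-count analogy to MapReduce in plain shell (not Hadoop):
#   map:     split the input into one word per line
#   shuffle: sort brings identical keys together
#   reduce:  uniq -c counts each group
# Every stage is a batch step that must finish before the next starts;
# an iterative algorithm would have to re-run the whole pipeline for
# each step, which is the drawback described above.
printf 'to be or not to be\n' \
  | tr ' ' '\n' \
  | sort \
  | uniq -c \
  | sort -rn
```

For the sample input, “to” and “be” are each counted twice, with the single-occurrence words below them.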
This, really, is why supercomputing is still useful. Because system designers use a modified Linux kernel as the core of an HPC system, building apps for the platform is a much easier proposition, as is data management.
It’s not a matter of waving a magic wand… supercomputers are not exactly Lego sets to snap together. But the scalability of Linux does make building one of these monsters a more straightforward, albeit expensive, process.
All the better to work on problems that can truly find solutions to make the world a better place.
For more discussion on Linux and supercomputing, visit the supercomputing section of SUSE Conversations.
HP Cloud Selected to Host USPS Authentication Services
HP to provide a virtual private cloud as the underpinning for the secure authentication platform