
How to Install and Configure Cacti on Linux

Any system admin working in a service provider network would certainly agree that Cacti is one of the most widely used tools in network management solutions. It is open source, has built-in user authentication and permission features, and ships with frequently used graph templates such as bandwidth, 95th percentile, hard disk usage, CPU usage, load […]


Read more at Xmodulo

How To Parse Squid Proxy Access.log File using Squid Analyzer

Squid records all user activity that passes through it in its access.log file, and an administrator can parse that file to see what is happening. But access.log is a raw log, and extracting valuable information from it takes careful reading. Because of this, a third-party tool is needed to process it into human-readable reports. Read more about the Squid Analyzer parser tool posted at Linoxide.
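Before reaching for a dedicated analyzer, it helps to see what the raw log looks like. The sketch below is a minimal, illustrative parser (not Squid Analyzer itself) for Squid's default native access.log layout, where each line carries a Unix timestamp, elapsed milliseconds, client address, result-code/status pair, byte count, method, URL, and peer information; the sample line is invented for demonstration.

```python
# Minimal sketch of parsing Squid's default native access.log format.
# Assumed field layout (Squid's built-in "squid" logformat):
# timestamp elapsed client action/status bytes method URL user hierarchy/peer type
from datetime import datetime, timezone

def parse_access_line(line):
    fields = line.split()
    if len(fields) < 10:
        return None  # malformed or truncated line
    action, _, status = fields[3].partition("/")  # e.g. "TCP_MISS/200"
    return {
        "time": datetime.fromtimestamp(float(fields[0]), tz=timezone.utc),
        "elapsed_ms": int(fields[1]),
        "client": fields[2],
        "action": action,
        "status": int(status),
        "bytes": int(fields[4]),
        "method": fields[5],
        "url": fields[6],
    }

# Invented example line in the native format:
sample = ("1384789306.123    145 192.168.1.10 TCP_MISS/200 1024 "
          "GET http://example.com/index.html - DIRECT/93.184.216.34 text/html")
rec = parse_access_line(sample)
print(rec["client"], rec["status"], rec["url"])
```

Tools like Squid Analyzer do essentially this at scale, then aggregate by client, domain, and time window into reports.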

How a Hackathon Can Transform Your Community


What started as an uphill battle in Burlington, Vermont on the National Day of Civic Hacking in June 2013 had, four months later, turned into an understanding between local government, non-profits, the media, and the community. What they came to understand was that we can grow stronger when we work together. When we partner. When we work on stuff that matters.

Robert Coleburn, a Technology Librarian (and systems administrator) at Fletcher Free Library, jumped at the opportunity to partner with Code for Burlington, a Code for America brigade, to help host a hackathon on the last weekend in October called Hack the Stacks. The event drew over 30 people volunteering to improve their community through open source technology.


Read more at OpenSource.com

Canonical Working On Mesa Code Again For Mir

With Canonical’s small X.Org team back to publishing patches on the Mesa mailing list, it looks like they might be trying again soon for pushing forward their Mir EGL back-end…

Read more at Phoronix

KDE Plasma Media Center 1.2 Beta Has New Features

Plasma Media Center, the KDE project to slowly take on the likes of XBMC and provide a nice user-interface for multimedia tasks atop the KDE experience, is up to version 1.2 beta. The 1.2 beta release of Plasma Media Center is packing a number of new features…

Read more at Phoronix

Update: Univention Corporate Server 3.2

Univention Corporate Server 3.2 released

Improved user guidance and compatibility

Version 3.2 of Univention Corporate Server (UCS) is now available. The release stands out for its much-improved user guidance. In addition, compatibility with current hardware and with Microsoft's Windows Server 2012 and Windows 8.1 has been improved, in part through an update to Samba 4.1.

With UCS 3.2, Univention delivers a sophisticated and cost-efficient alternative to Microsoft's server products. Its easy-to-use management system and attractive cost structure also make it appealing to companies without a large IT department.

Detailed information on the UCS 3.2 release can be found in Univention's press release.

Qualcomm Announces Next-Generation Snapdragon 805 “Ultra HD” Processor

The new Snapdragon 805, featuring a Krait 450 quad-core CPU and the new Adreno 420 GPU, brings Ultra HD (4K) video to mobile devices and, through them, to Ultra HD TVs.

Linux-Fueled Networked DVR Adds Second Tuner

Really Simple Software has begun accepting pre-orders for the second generation of its Linux-powered networked DVR. The new model, known as “Simple.TV by SiliconDust” and priced at $250, adds a second TV tuner and is expected to ship by the end of the year, by which time Android and iOS apps for both generations of […]

Read more at LinuxGizmos

New ‘Real-World’ Benchmark Could Shake Up Top500 Supercomputer List

The 42nd edition of the TOP500 list of supercomputers has been released, featuring the most powerful Linux machines in the world. Leading the pack again is Tianhe-2, a supercomputer developed by China’s National University of Defense Technology, with a performance of 33.86 petaflop/s on the Linpack benchmark. Tianhe-2 uses the Kylin Linux operating system (OS).

The newest supercomputer to make the Top 10, Piz Daint, is also the most energy-efficient system in the Top 10, consuming a total of 2.33 MW and delivering 2.7 Gflops/W. The number two computer, Titan, is also one of the most energy-efficient systems, consuming a total of 8.21 MW and delivering 2.143 Gflops/W. (See the top 10 list, below.)
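The efficiency figures above follow directly from dividing sustained performance by power draw; since a Pflop/s is 10^15 flop/s and a MW is 10^6 W, the ratio comes out in Gflops/W with no further scaling. A quick sanity check:

```python
def gflops_per_watt(pflops, megawatts):
    # 1 Pflop/s = 1e15 flop/s and 1 MW = 1e6 W, so the quotient is
    # 1e9 flop/s per watt -- i.e. the result is already in Gflops/W.
    return pflops * 1e15 / (megawatts * 1e6) / 1e9

print(round(gflops_per_watt(6.27, 2.33), 2))   # Piz Daint: → 2.69
print(round(gflops_per_watt(17.59, 8.21), 3))  # Titan:     → 2.143
```

Both values match the figures quoted in the article.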


Supercomputing speed test change-up announced

The next round of supercomputer rankings may shake things up, however, as the Top500 editors have announced a new testing regime. On Nov. 18 the organizers released the High Performance Conjugate Gradient (HPCG) benchmark, which is designed to better predict a supercomputer's real-world usefulness.

In a June paper on the HPCG Benchmark, Top500 list editors Michael Heroux and Jack Dongarra say that the High Performance Linpack (HPL) test is “increasingly unreliable as a true measure of system performance for a growing collection of important science and engineering applications.” The problem, according to Heroux and Dongarra, is that designing for good HPL performance can “lead to design choices that are wrong for the real application mix, or add unnecessary components or complexity to the system.” High performance computing applications governed by differential equations, which tend to need more bandwidth and lower latency and which access data in irregular patterns, are particularly poorly served by the HPL design standards, according to the authors.
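The conjugate gradient method that gives HPCG its name can be sketched in a few lines. The following is an illustrative toy implementation (not the official HPCG code), solving A x = b for a small symmetric positive-definite system; in a real HPCG run the matrix is large and sparse, so each iteration is dominated by memory-bandwidth-bound matrix-vector products rather than the dense floating-point work Linpack rewards.

```python
# Toy conjugate gradient solver for A x = b, with A symmetric
# positive-definite. Each iteration performs one matrix-vector
# product plus a handful of vector updates -- the access pattern
# that HPCG stresses, in contrast to dense-matrix Linpack.

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x (x starts at 0)
    p = r[:]                      # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:          # squared residual norm small enough
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# A small SPD system with a 1D Laplacian-style stencil:
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [1.0, 2.0, 3.0]
x = conjugate_gradient(A, b)
```

In exact arithmetic CG converges in at most n iterations for an n-by-n system, so this 3x3 example finishes almost immediately; at supercomputer scale it is the per-iteration memory traffic that determines throughput.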

The new HPCG test won't change the list rankings quickly, though, as it must first be run widely and accepted by the supercomputing community.

“Once the definition and code for the HPCG is in a stable condition we envision collecting results for it in parallel to the ongoing effort for the HPL benchmark,” said Erich Strohmaier, head of the Future Technologies Group at Lawrence Berkeley National Laboratory and Top500.org editor. “For the foreseeable future the TOP500 will be based on the HPL benchmark test but we would hope to provide additional value and information by collecting and publishing numbers for new benchmark such as HPCG as well.”

Experts: GPU accelerator speed feats fail real-world application

One aspect of supercomputer design that leads to higher data-processing scores under the HPL test is the GPU accelerators found in all of the top 10 supercomputers. These accelerators, such as the just-announced NVIDIA Tesla K40, boost the performance of the top supercomputers, moving workloads around and helping crunch data at incredible speeds in the Linpack tests.

“GPU accelerators have gone mainstream in the HPC and supercomputing industries,” said Sumit Gupta, general manager of Tesla Accelerated Computing products at Nvidia.

In their June paper on HPCG, Dongarra and Heroux point out that the way these accelerators are used in benchmark runs doesn't reflect real-world applications, which would more selectively offload data to the accelerators and rely on CPU processing, resulting in slower computation.

“For example, the Titan system at Oak Ridge National Laboratory has 18,688 nodes, each with a 16-core, 32GB AMD Opteron processor and a 6GB Nvidia K20 GPU. Titan was the top-ranked system in November 2012 using HPL [Linpack]. However, in obtaining the HPL result on Titan, the Opteron processors played only a supporting role in the result. All floating-point computation and all data were resident on the GPUs. In contrast, real applications, when initially ported to Titan, will typically run solely on the CPUs and selectively offload computations to the GPU for acceleration.”

The complete Top500 Supercomputer list for November 2013 is available from Top500.org.

The November 2013 Top 10 supercomputers are:

1. Tianhe-2, developed by China’s National University of Defense Technology – 33.86 Pflop/s – Kylin Linux operating system (OS)

2. Titan, a Cray XK7 system installed at the Department of Energy’s (DOE) Oak Ridge National Laboratory – 17.59 Pflop/s – Cray Linux Environment

3. Sequoia, an IBM BlueGene/Q system installed at DOE’s Lawrence Livermore National Laboratory – 17.17 Pflop/s – Linux

4. K computer, a Fujitsu system installed at the RIKEN Advanced Institute for Computational Science (AICS) in Kobe, Japan – 10.51 Pflop/s – Linux

5. Mira, a BlueGene/Q system installed at DOE’s Argonne National Laboratory – 8.59 Pflop/s – Linux

6. Piz Daint, a Cray XC30 system installed at the Swiss National Supercomputing Centre (CSCS) in Lugano, Switzerland – 6.27 Pflop/s – Cray Linux Environment

7. Stampede, a Dell system at the Texas Advanced Computing Center of the University of Texas, Austin – 5.17 Pflop/s – Linux

8. JUQEEN, a BlueGene/Q system installed at the Forschungszentrum Juelich in Germany – 5.01 Pflop/s – Linux

9. Vulcan, an IBM BlueGene/Q system at Lawrence Livermore National Laboratory – 4.29 Pflop/s – Linux

10. SuperMUC, at Leibniz Rechenzentrum in Germany – 2.90 Pflop/s – Linux

Not Everyone Believes That OpenStack Has Succeeded

Debate continues to swirl over whether OpenStack has emerged as a successful cloud computing platform in terms of actual deployments, or whether it is overhyped and immature.  Earlier this month, we reported on survey results from The OpenStack Foundation that showed that many enterprises are deploying or plan to deploy the platform.

Now, though, Gartner Research Director Alessandro Perilli is out with an essay that paints a much gloomier picture of actual OpenStack deployments. Perilli was at OpenStack Summit, too, where there were numerous promising announcements surrounding the platform.

Read more at Ostatic