
Penguin Puts GPUs In On Demand Offering

Article Source: insideHPC
November 18, 2009, 9:53 pm

Penguin Computing announced this week that it’s added GPU goodness to its Penguin on Demand hosted computing service (announced back in August).

Penguin Computing, experts in high-performance computing solutions, today announced that Tesla GPU compute nodes are available in its Penguin on Demand (POD) system. Tesla-equipped PODs will now provide a pay-as-you-go environment for researchers, scientists and engineers to explore the benefits of GPU computing in a hosted environment.

The POD system makes available, on demand, a computing infrastructure of highly optimized Linux clusters with specialized hardware interconnects and software configurations tuned specifically for HPC workloads. The addition of NVIDIA’s Tesla GPU compute systems to POD now lets users port their applications to CUDA or OpenCL and test the results quickly, without capital costs.
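
For readers new to GPU porting, here is a minimal CUDA sketch of the kind of change involved: a vector-add loop moved into a kernel, plus the host-side allocation, copies and launch. It is purely illustrative, assumes nothing about Penguin's POD setup beyond a standard CUDA toolchain, and all names in it are made up.

#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;               // one million elements
    const size_t bytes = n * sizeof(float);

    // Host buffers
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers and host-to-device copies
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back and spot-check it
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);        // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

The OpenCL route mentioned above follows the same pattern, with the kernel written as an OpenCL C string and launched through the OpenCL runtime instead.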

Not up to speed on POD? More here.

 
