Linux.com


Penguin Puts GPUs In On Demand Offering

Article Source insideHPC
November 18, 2009, 9:53 pm

Penguin Computing announced this week that it has added GPUs to its Penguin on Demand hosted computing service (announced back in August).

Penguin Computing, a specialist in high performance computing solutions, today announced that Tesla GPU compute nodes are available in its Penguin on Demand (POD) system. Tesla-equipped PODs will now provide a pay-as-you-go environment for researchers, scientists and engineers to explore the benefits of GPU computing in a hosted environment.

The POD system provides on-demand access to a computing infrastructure of highly optimized Linux clusters, with specialized hardware interconnects and software configurations tuned specifically for HPC workloads. The addition of NVIDIA's Tesla GPU compute systems to POD now lets users port their applications to CUDA or OpenCL and test their results quickly, without capital costs.
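To give a sense of the porting work the paragraph above refers to, here is a minimal sketch of a CUDA kernel of the kind a researcher might test on POD's Tesla nodes. This example (a simple SAXPY, `y = a*x + y`) is purely illustrative and not taken from Penguin's materials; all names are the author's own.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each GPU thread computes one element of y = a*x + y.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host-side input data.
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Allocate device memory and copy inputs to the GPU.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch one thread per element, 256 threads per block.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    // Copy the result back and check one element: 2*1 + 2 = 4.
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

The same computation could equally be expressed in OpenCL; the CUDA version is shown because Tesla hardware was NVIDIA's flagship compute line at the time.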

Not up to speed on POD? More here.

 



