
Penguin Puts GPUs In On Demand Offering

Article Source: insideHPC
November 18, 2009, 9:53 pm

Penguin Computing announced this week that it has added GPU compute capability to its Penguin on Demand hosted computing service (announced back in August).

Penguin Computing, experts in high-performance computing solutions, today announced that Tesla GPU compute nodes are available in its Penguin on Demand (POD) system. Tesla-equipped PODs will now provide a pay-as-you-go environment for researchers, scientists and engineers to explore the benefits of GPU computing in a hosted environment.

The POD system makes available on demand a computing infrastructure of highly optimized Linux clusters, with specialized hardware interconnects and software configurations tuned specifically for HPC workloads. The addition of NVIDIA's Tesla GPU compute systems to POD now allows users to port their applications to CUDA or OpenCL and test them quickly, without capital costs.
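For readers new to GPU porting, the kind of experiment a hosted Tesla node enables might look like the sketch below: a simple SAXPY loop moved to a CUDA kernel. This is an illustrative example only, not part of Penguin's offering; it uses the explicit host/device memory-copy style typical of Tesla-era CUDA.

```cuda
// Hypothetical SAXPY port: y = a*x + y, one GPU thread per element.
// All names here are illustrative assumptions, not from the article.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];   // each thread handles one element
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers, filled with known values.
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device buffers: allocate, copy in, run the kernel, copy out.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks  = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);   // expect 4.0 = 2*1 + 2

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

The same loop could equally be expressed as an OpenCL kernel; the appeal of a pay-as-you-go node is being able to try both without buying the hardware.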

Not up to speed on POD? More here.


