
New Whitepaper: Accelerating Lossless Data Compression with GPUs

A newly published whitepaper looks at a modification of the Huffman algorithm that permits uncompressed data to be decomposed into independently compressible and decompressible blocks, allowing for concurrent compression and decompression on multiple processors.

One major difficulty in achieving good speedup with few negative side effects is that lossless data compression algorithms, in their unaltered form, can generally not be considered highly parallelizable. Indeed, expressing these algorithms in parallel usually requires a tradeoff between compression efficiency and performance. Nevertheless, we hope to demonstrate that a reasonable middle ground can be reached between coding acceleration and loss of compression efficiency.
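
To make the block-decomposition idea concrete, here is a minimal Python sketch (not the whitepaper's GPU implementation): the input is split into fixed-size blocks, and each block is compressed and decompressed independently on a pool of worker processes. The 64 KB block size and the use of zlib's DEFLATE in place of a pure Huffman coder are assumptions made purely for illustration.

    import zlib
    from multiprocessing import Pool

    # Hypothetical block size: smaller blocks expose more parallelism,
    # but each block is coded in isolation, so the compression ratio suffers.
    BLOCK_SIZE = 64 * 1024

    def compress_block(block: bytes) -> bytes:
        # Blocks share no state, so they can be compressed concurrently.
        return zlib.compress(block)

    def decompress_block(blob: bytes) -> bytes:
        # Each compressed block can likewise be decoded on its own.
        return zlib.decompress(blob)

    def parallel_compress(data: bytes) -> list[bytes]:
        blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
        with Pool() as pool:
            return pool.map(compress_block, blocks)

    def parallel_decompress(blobs: list[bytes]) -> bytes:
        with Pool() as pool:
            return b"".join(pool.map(decompress_block, blobs))

    if __name__ == "__main__":
        payload = b"the quick brown fox jumps over the lazy dog " * 100000
        compressed = parallel_compress(payload)
        assert parallel_decompress(compressed) == payload

Shrinking BLOCK_SIZE increases the number of units that can be processed concurrently while lowering the compression ratio, which is precisely the efficiency-versus-performance tradeoff the authors describe.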

 
Read more at insideHPC
 
