When will mmap be able to use huge pages to map a real file?
Clearly there are filesystem implications for the underlying storage, i.e. you must have contiguous allocation in 2 MB extents (or blocks, if the filesystem is not extent based), but for larger data sets this seems like the natural approach. All the references I have found so far only discuss huge pages for more efficient memory allocation (i.e. fewer TLB entries), which is an excellent use, but the same argument holds for reading through large files (e.g. terabytes' worth). In an age of big data, what is the holdup here? Or does this already work, and if so, which filesystems support a 2 MB minimum block or extent size?