Clearly there are filesystem implications for the storage used: you would need contiguous allocation for 2MB extents (or 2MB blocks, if the filesystem is not extent-based). But for large data sets this seems most appropriate. All the references I have found so far discuss hugepages only for more efficient memory allocation (i.e. fewer TLB entries), which is an excellent use, but the same argument holds for reading through large files (e.g. terabytes' worth). In an age of big data, what is the hold-up here? Or does this already work, and if so, which filesystems support a 2MB minimum block or extent size?