Konrad's explanation is precise.
There are applications that have a process model; if you assume 10,000 processes each attempting to mmap all 6TB of memory available on a server, we are looking at the following:
processes: 10,000
memory:    6TB

pte @ 4k page size: 8 bytes per 4K of memory * #processes = 6TB / 4k * 8 * 10,000 = 1.5G entries * 80,000 = 120,000GB
pmd @ 2M page size: 120,000GB / 512 = ~240GB
pud @ 1G page size: 240GB / 512 = ~480MB

As you can see, with 2M pages this system would use an exorbitant amount of DRAM just to hold the page tables, while 1G pages finally bring it down to a reasonable level.
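The arithmetic above can be reproduced with a short script. The 8-byte entries and 512-entry fan-out per page-table level match x86-64; the process count and memory size are the figures from the thread:

```python
# Page-table overhead for 10,000 processes each mapping 6TB,
# assuming x86-64's 8-byte entries and 512-entry tables per level.
TB = 1 << 40
FANOUT = 512                   # entries per page-table level on x86-64

memory = 6 * TB
processes = 10_000

# 4K pages: one 8-byte PTE per 4K of mapped memory, per process.
pte = memory // 4096 * 8 * processes
# 2M pages: the PTE level disappears; only PMDs and above remain.
pmd = pte // FANOUT
# 1G pages: the PMD level disappears as well; only PUDs and above.
pud = pmd // FANOUT

print(f"pte @ 4k: {pte / TB:.0f} TB")          # ~117 TB (the 120,000GB above)
print(f"pmd @ 2M: {pmd / (1 << 30):.0f} GB")   # ~234 GB
print(f"pud @ 1G: {pud / (1 << 20):.0f} MB")   # ~469 MB
```

Each larger page size simply removes one level of the radix tree, which is why every step is a division by the 512-entry fan-out.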
On Tue, Jan 24, 2017 at 10:26:54AM -0800, Dan Williams wrote:
> On Tue, Jan 24, 2017 at 3:12 AM, Jan Kara <jack@suse.cz> wrote:
> > On Mon 23-01-17 16:47:18, Dave Jiang wrote:
> > > The following series implements support for 1G transparent hugepages
> > > on x86 for device DAX. The bulk of the code was written by Matthew
> > > Wilcox a while back supporting transparent 1G hugepages for fs DAX.
> > > I have forward-ported the relevant bits to 4.10-rc. The current
> > > submission has only the code necessary to support device DAX.
> >
> > Well, you should really explain why we want this functionality... Is
> > anybody going to use it? Why would they want to, and what would they
> > gain by doing so? So far I haven't heard a convincing use case.
>
> The motivation and intended users of this functionality mirror the
> motivation and users of 1GB page support in hugetlbfs. Given the
> expected capacities of persistent memory devices, an in-memory database
> may want to reduce TLB pressure beyond what it can already achieve with
> 2MB mappings of a device-dax file. We have customer feedback to that
> effect, as Willy mentioned in his previous version of these patches [1].
>
> CCing Nilesh, who may be able to shed some more light on this.
>
> [1]: https://lkml.org/lkml/2016/1/31/52
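TLB reach is the other side of the same trade-off mentioned above. As a rough sketch (the 1,536-entry TLB capacity is an assumed figure for a hypothetical CPU, not something from the thread; real CPUs have separate, differently sized TLBs per page size), the span of memory a fully populated TLB can cover grows directly with the page size:

```python
# Assumed TLB capacity, for illustration only.
ENTRIES = 1536

reach_4k = ENTRIES * (4 << 10)   # reach with 4K pages: a few MB
reach_2m = ENTRIES * (2 << 20)   # reach with 2M pages: a few GB
reach_1g = ENTRIES * (1 << 30)   # reach with 1G pages: into the TB range

print(f"{reach_4k >> 20} MB, {reach_2m >> 30} GB, {reach_1g / (1 << 40):.1f} TB")
```

Under this assumption, even a full TLB of 2M entries covers only a few GB, while 1G pages let the same TLB span a meaningful fraction of a multi-TB device-dax mapping without misses.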