From: "David Hildenbrand (Arm)" <david@kernel.org>
To: Kiryl Shutsemau <kas@kernel.org>
Cc: lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org,
	x86@kernel.org, linux-kernel@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
	"Liam R. Howlett" <Liam.Howlett@oracle.com>,
	Mike Rapoport <rppt@kernel.org>,
	Matthew Wilcox <willy@infradead.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Usama Arif <usama.arif@linux.dev>
Subject: Re: [LSF/MM/BPF TOPIC] 64k (or 16k) base page size on x86
Date: Thu, 19 Feb 2026 17:09:20 +0100
Message-ID: <f261995f-a45a-448d-b72d-18d476697d88@kernel.org>
In-Reply-To: <aZcxWsWO7AxQW6JC@thinkstation>

On 2/19/26 16:54, Kiryl Shutsemau wrote:
> On Thu, Feb 19, 2026 at 04:39:34PM +0100, David Hildenbrand (Arm) wrote:
>> On 2/19/26 16:08, Kiryl Shutsemau wrote:
>>> No, there's no new hardware (that I know of). I want to explore what page size
>>> means.
>>>
>>> The kernel uses the same value - PAGE_SIZE - for two things:
>>>
>>>     - the order-0 buddy allocation size;
>>>
>>>     - the granularity of virtual address space mapping;
>>>
>>> I think we can benefit from separating these two meanings and allowing
>>> order-0 allocations to be larger than the virtual address space covered by a
>>> PTE entry.
>>>
>>> The main motivation is scalability. Managing memory on multi-terabyte
>>> machines in 4k is suboptimal, to say the least.
>>>
>>> Potential benefits of the approach (assuming 64k pages):
>>>
>>>     - The larger order-0 page cuts struct page overhead by a factor of 16,
>>>       from ~1.6% of RAM to ~0.1% (see the arithmetic after this list);
>>>
>>>     - TLB wins on machines with TLB coalescing, as long as the mapping is
>>>       naturally aligned;
>>>
>>>     - Order-5 allocation is 2M, resulting in less pressure on the zone lock;
>>>
>>>     - 1G pages come within reach of the buddy allocator as an order-14
>>>       allocation, which can open the road to 1G THPs;
>>>
>>>     - As with THP, fewer pages mean less pressure on the LRU lock;
>>>
>>>     - ...
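>>>
>>> To make the numbers concrete (assuming the usual 64-byte struct page):
>>> 64 / 4096 ~= 1.56% of RAM with 4k order-0 pages versus 64 / 65536 ~=
>>> 0.098% with 64k. The orders above work out the same way: 64k << 5 = 2M
>>> and 64k << 14 = 1G.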
>>>
>>> The trade-off is memory waste (similar to what we have on architectures with
>>> native 64k pages today) and complexity, mostly in the core-MM code.
>>>
>>> == Design considerations ==
>>>
>>> I want to split PAGE_SIZE into two distinct values:
>>>
>>>     - PTE_SIZE defines the virtual address space granularity;
>>>
>>>     - PG_SIZE defines the size of the order-0 buddy allocation;
>>>
>>> PAGE_SIZE is only defined if PTE_SIZE == PG_SIZE. This flags which code
>>> still requires conversion and keeps existing code working while the
>>> conversion is in progress.
>>>
>>> The same split happens for other page-related macros: mask, shift,
>>> alignment helpers, etc.
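>>>
>>> A rough sketch of how the split could look (only PTE_SIZE and PG_SIZE
>>> come from the proposal; the derived names here are illustrative):
>>>
>>> #define PTE_SHIFT	12			/* 4k mapping granularity */
>>> #define PTE_SIZE	(1UL << PTE_SHIFT)
>>> #define PTE_MASK	(~(PTE_SIZE - 1))
>>>
>>> #define PG_SHIFT	16			/* 64k order-0 allocation */
>>> #define PG_SIZE		(1UL << PG_SHIFT)
>>> #define PG_MASK		(~(PG_SIZE - 1))
>>>
>>> #if PTE_SHIFT == PG_SHIFT
>>> /* The legacy names survive only where the two meanings coincide. */
>>> #define PAGE_SHIFT	PTE_SHIFT
>>> #define PAGE_SIZE	PTE_SIZE
>>> #endif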
>>>
>>> PFNs are in PTE_SIZE units.
>>>
>>> The buddy allocator and page cache (as well as all I/O) operate in PG_SIZE
>>> units.
>>>
>>> Userspace mappings are maintained with PTE_SIZE granularity. No ABI changes
>>> for userspace. But we might want to communicate PG_SIZE to userspace to
>>> get optimal results for applications that care.
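>>>
>>> As a userspace-side illustration (the mapping-granularity ABI stays at
>>> 4k; how PG_SIZE would be exposed is an open question, so the value is
>>> hard-coded here):
>>>
>>> #include <stdio.h>
>>> #include <stdlib.h>
>>> #include <unistd.h>
>>>
>>> int main(void)
>>> {
>>> 	long map_gran = sysconf(_SC_PAGESIZE);	/* still 4096: PTE_SIZE */
>>> 	size_t pg_size = 64 * 1024;		/* hypothetical PG_SIZE */
>>> 	void *arena;
>>>
>>> 	/* A PG_SIZE-aligned, PG_SIZE-multiple arena avoids partially
>>> 	 * used pages at the ends of the allocation. */
>>> 	if (posix_memalign(&arena, pg_size, 16 * pg_size))
>>> 		return 1;
>>> 	printf("map granularity %ld, arena %p\n", map_gran, arena);
>>> 	free(arena);
>>> 	return 0;
>>> }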
>>>
>>> PTE_SIZE granularity requires a substantial rework of page fault and VMA
>>> handling:
>>>
>>>     - A struct page pointer and pgprot_t are no longer enough to create a
>>>       PTE entry; we also need the offset within the page we are creating
>>>       the PTE for (see the sketch after this list).
>>>
>>>     - Since the VMA start can be aligned arbitrarily with respect to the
>>>       underlying page, vma->vm_pgoff has to be changed to vma->vm_pteoff,
>>>       which is in PTE_SIZE units.
>>>
>>>     - The page fault handler needs to handle PTE_SIZE < PG_SIZE, including
>>>       misaligned cases;
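>>>
>>> A minimal sketch of such a helper (the name and the extra argument are
>>> hypothetical; it relies on PFNs being in PTE_SIZE units):
>>>
>>> static inline pte_t mk_pte_off(struct page *page, unsigned int idx,
>>> 			       pgprot_t prot)
>>> {
>>> 	/* Point the PTE at one PTE_SIZE slice of the larger page. */
>>> 	return pfn_pte(page_to_pfn(page) + idx, prot);
>>> }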
>>>
>>> Page faults into file mappings are relatively simple to handle, as we
>>> always have the page cache to refer to. So you can map only the part of
>>> the page that fits in the page table, similar to fault-around.
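>>>
>>> For illustration, the fill loop could look roughly like this (fragment:
>>> 'first'/'last' bound the slices of the page that fall inside this page
>>> table, 'ptep' points at the PTE for slice 'first', and mk_pte_off() is
>>> the sketch above):
>>>
>>> 	for (idx = first; idx <= last; idx++, addr += PTE_SIZE) {
>>> 		pte_t entry = mk_pte_off(page, idx, vma->vm_page_prot);
>>>
>>> 		set_pte_at(vma->vm_mm, addr, ptep + (idx - first), entry);
>>> 	}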
>>>
>>> Anonymous and file-CoW faults should also be simple as long as the VMA is
>>> aligned to PG_SIZE in both the virtual address space and with respect to
>>> vm_pgoff. We might waste some memory on the ends of the VMA, but it is
>>> tolerable.
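>>>
>>> Something along these lines could identify the easy, aligned case
>>> (hypothetical helper, using the vm_pteoff field introduced above):
>>>
>>> static inline bool vma_pg_aligned(struct vm_area_struct *vma)
>>> {
>>> 	return IS_ALIGNED(vma->vm_start, PG_SIZE) &&
>>> 	       IS_ALIGNED(vma->vm_pteoff << PTE_SHIFT, PG_SIZE);
>>> }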
>>>
>>> Misaligned anonymous and file-CoW faults are a pain. Specifically, mapping
>>> pages across a page table boundary. In the worst case, a page is mapped across
>>> a PGD entry boundary and PTEs for the page have to be put in two separate
>>> subtrees of page tables.
>>>
>>> A naive implementation would map different pages on different sides of a
>>> page table boundary and accept the waste of one page per page table crossing.
>>> The hope is that misaligned mappings are rare, but this is suboptimal.
>>>
>>> mremap(2) is the ultimate stress test for the design.
>>>
>>> On x86, page tables are allocated from the buddy allocator, and if PG_SIZE
>>> is greater than 4k, we need a way to pack multiple page tables into a
>>> single order-0 page. We could use the slab allocator for this, but it would
>>> require relocating the page-table metadata out of struct page.
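>>>
>>> A sketch of the slab variant, assuming the metadata problem is solved
>>> (all names here are hypothetical):
>>>
>>> #define PTE_TABLE_SIZE	4096	/* one x86 page table: 512 * 8 bytes */
>>>
>>> static struct kmem_cache *pte_table_cache;
>>>
>>> void __init pte_table_cache_init(void)
>>> {
>>> 	/* Sixteen naturally-aligned 4k tables pack into one 64k
>>> 	 * order-0 page. */
>>> 	pte_table_cache = kmem_cache_create("pte_table", PTE_TABLE_SIZE,
>>> 					    PTE_TABLE_SIZE, SLAB_PANIC, NULL);
>>> }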
>>
>> When discussing per-process page sizes with Ryan and Dev, I mentioned that
>> having a larger emulated page size could be interesting for other
>> architectures as well.
>>
>> That is, we would emulate a 64K page size on Intel for user space too,
>> but let the OS keep working with 4K pages internally.
>>
>> We'd only allocate and map large folios into user space and the pagecache,
>> but still keep page tables etc. from wasting memory.
>>
>> So "most" of your allocations in the system would actually be at least 64k,
>> reducing zone lock contention etc.
> 
> I am not convinced emulation would help zone lock contention. I expect
> contention to be higher if the page allocator sees a mix of 4k and 64k
> requests. It sounds like constant splitting/merging under the lock.

If most of your allocations are larger, then there isn't that much
splitting/merging.

There will be some for the < 64k allocations of course, but when all
user space and page cache allocations are >= 64k, the split/merge
activity and zone lock contention should be heavily reduced.

> 
>> It doesn't solve all the problems you wanted to tackle on your list (e.g.,
>> "struct page" overhead, which will be sorted out by memdescs).
> 
> I don't think we can serve 1G pages out of the buddy allocator with 4k
> order-0. And without that, I don't see how to get to viable 1G THPs.

Zi Yan was working on this, and I think we had ideas on how to make
that work in the long run.

-- 
Cheers,

David


