From: Dave Hansen <dave.hansen@intel.com>
To: Kiryl Shutsemau <kas@kernel.org>,
	lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org
Cc: x86@kernel.org, linux-kernel@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	David Hildenbrand <david@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
	"Liam R. Howlett" <Liam.Howlett@oracle.com>,
	Mike Rapoport <rppt@kernel.org>,
	Matthew Wilcox <willy@infradead.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Usama Arif <usama.arif@linux.dev>
Subject: Re: [LSF/MM/BPF TOPIC] 64k (or 16k) base page size on x86
Date: Thu, 19 Feb 2026 09:08:57 -0800
Message-ID: <46817fe5-7166-4734-bad3-3109cc7feb1e@intel.com>
In-Reply-To: <aZcmlIF4bmG0twkp@thinkstation>

On 2/19/26 07:08, Kiryl Shutsemau wrote:
>   - The order-0 page size cuts struct page overhead by a factor of 16. From
>     ~1.6% of RAM to ~0.1%;

First of all, this looks like fun. Nice work! I'm not at all opposed in
concept to cleaning things up and doing the logical separation you
described to split buddy granularity from mapping granularity. That seems
like a worthy endeavor, and some of the union/#define tricks look like a
viable way to do it incrementally.

But I don't think there will be a lot of memory savings in the end.
Maybe this would bring the mem= hyperscalers (the folks who carve their
VM memory out from under the kernel with mem= to dodge 'struct page'
overhead) back into the fold and have them actually start using
'struct page' again for their VM memory. Dunno.

But, let's look at my kernel directory and round the file sizes up to
4k, 16k and 64k:

# round each file size up to the next 4k, 16k, and 64k boundary:
find . -printf '%s\n' | while read size; do echo	\
		$(((size + 0x0fff) & ~0x0fff))	\
		$(((size + 0x3fff) & ~0x3fff))	\
		$(((size + 0xffff) & ~0xffff));
done

... and add them all up:

11,297,648 KB - on disk
11,297,712 KB - in a 4k page cache
12,223,488 KB - in a 16k page cache
16,623,296 KB - in a 64k page cache
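
For reference, here's a one-pass sketch that does the same rounding and
the adding up in awk; nothing fancier than plain awk is assumed, and the
output is in KB:

# round each file size up to 4k/16k/64k and total each column:
find . -printf '%s\n' | awk '
	{ k4  += int(($1 +  4095) /  4096) *  4096
	  k16 += int(($1 + 16383) / 16384) * 16384
	  k64 += int(($1 + 65535) / 65536) * 65536 }
	END { printf "%.0f KB  %.0f KB  %.0f KB\n",
	      k4 / 1024, k16 / 1024, k64 / 1024 }'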

So a 64k page cache eats ~5GB of extra memory for a kernel tree (well,
_my_ kernel tree). In other words, if you are looking for memory savings
on my laptop, you'd need ~300GB of RAM before the 'struct page' savings
overcome the page cache bloat from a single kernel tree.
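
The back-of-the-envelope behind that ~300GB: the bloat for one tree is
the 64k total minus the 4k total above, and the 'struct page' savings
are roughly the ~1.6% - ~0.1% ~= 1.5% of RAM quoted at the top, so:

# break-even RAM = page cache bloat (KB) / struct page savings, in GB
echo "$(( (16623296 - 11297712) * 1000 / 15 / 1024 / 1024 )) GB"

... which comes out around ~340GB, the same ballpark.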

The whole kernel tree obviously isn't in the page cache all at the same
time, and the page cache across the system is obviously different from a
kernel tree, but you get the point.

That's not to diminish how useful something like this might be,
especially for folks that are sensitive to 'struct page' overhead or
allocator performance.

But it will mostly be about getting better performance at the _cost_ of
consuming more RAM, not about saving RAM.

