From: Kiryl Shutsemau <kas@kernel.org>
To: Andrew Morton <akpm@linux-foundation.org>,
Muchun Song <muchun.song@linux.dev>,
David Hildenbrand <david@redhat.com>,
Matthew Wilcox <willy@infradead.org>,
Usama Arif <usamaarif642@gmail.com>,
Frank van der Linden <fvdl@google.com>
Cc: Oscar Salvador <osalvador@suse.de>,
Mike Rapoport <rppt@kernel.org>, Vlastimil Babka <vbabka@suse.cz>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Zi Yan <ziy@nvidia.com>, Baoquan He <bhe@redhat.com>,
Michal Hocko <mhocko@suse.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Jonathan Corbet <corbet@lwn.net>,
Huacai Chen <chenhuacai@kernel.org>,
WANG Xuerui <kernel@xen0n.name>,
Palmer Dabbelt <palmer@dabbelt.com>,
Paul Walmsley <paul.walmsley@sifive.com>,
Albert Ou <aou@eecs.berkeley.edu>,
Alexandre Ghiti <alex@ghiti.fr>,
kernel-team@meta.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
loongarch@lists.linux.dev, linux-riscv@lists.infradead.org,
Kiryl Shutsemau <kas@kernel.org>
Subject: [PATCHv5 00/17] mm: Eliminate fake head pages from vmemmap optimization
Date: Wed, 28 Jan 2026 13:54:41 +0000
Message-ID: <20260128135500.22121-1-kas@kernel.org>

This series removes "fake head pages" from the HugeTLB vmemmap
optimization (HVO) by changing how tail pages encode their relationship
to the head page.
It simplifies compound_head() and page_ref_add_unless(), both of which are
on the hot path.
Background
==========
HVO reduces memory overhead by freeing vmemmap pages for HugeTLB pages
and remapping the freed virtual addresses to a single physical page.
Previously, all tail page vmemmap entries were remapped to the first
vmemmap page (containing the head struct page), creating "fake heads" -
tail pages that appear to have PG_head set when accessed through the
deduplicated vmemmap.
This required special handling in compound_head() to detect and work
around fake heads, adding complexity and overhead to a very hot path.
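For illustration, here is a rough, simplified sketch of the kind of check
fake heads force on compound_head() today (the helper name is made up and
details are abbreviated; this is not the exact kernel code, though the
static key is the one removed in patch 14):

	/*
	 * Rough sketch of today's fake-head handling, not the exact kernel
	 * code.  A page that appears to be a head may really be a tail whose
	 * vmemmap entry was remapped onto the head's vmemmap page, so the
	 * apparent head bit has to be double-checked against the following
	 * struct page.
	 */
	static __always_inline const struct page *fake_head_fixup(const struct page *page)
	{
		if (!static_branch_unlikely(&hugetlb_optimize_vmemmap_key))
			return page;

		if (test_bit(PG_head, &page->flags)) {
			unsigned long head = READ_ONCE(page[1].compound_head);

			/* If page[1] is a tail, @page was a fake head. */
			if (head & 1)
				return (const struct page *)(head - 1);
		}
		return page;
	}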
New Approach
============
For architectures/configs where sizeof(struct page) is a power of 2 (the
common case), this series changes how the position of the head page is
encoded in the tail pages.
Instead of storing a pointer to the head page, the ->compound_info
(renamed from ->compound_head) now stores a mask.
The mask can be applied to any tail page's virtual address to compute
the head page address. The key insight is that all tail pages of the same
order now have identical compound_info values, regardless of which compound
page they belong to. This allows a single page of tail struct pages to be
shared across all huge pages of the same order on a NUMA node.
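As a rough sketch of the encoding (these are assumptions, not the exact code
from patch 7: bit 0 of ->compound_info still marks a tail page and the
remaining bits hold the mask):

	/*
	 * Sketch of the mask-based lookup.  Assumes bit 0 of ->compound_info
	 * marks a tail page and the remaining bits hold the mask; the actual
	 * implementation in the series may differ in detail.
	 */
	static inline struct page *compound_head_sketch(const struct page *page)
	{
		unsigned long info = READ_ONCE(page->compound_info);

		if (info & 1) {
			/*
			 * The mask depends only on the folio order, so every
			 * tail of that order carries the same value and the
			 * page of tail struct pages can be shared.
			 */
			return (struct page *)((unsigned long)page & (info - 1));
		}
		return (struct page *)page;
	}

The branch can in principle be folded away too, e.g. by deriving a mask of
~0UL when bit 0 is clear (mask = (info - tail) | (tail - 1) with
tail = info & 1), so that head == page for non-tail pages; whether patch 15
uses exactly this trick is not shown here.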
Benefits
========
1. Simplified compound_head(): No fake head detection is needed, and it can
be implemented in a branchless manner.
2. Simplified page_ref_add_unless(): RCU protection is removed since there's
no race with fake head remapping.
3. Cleaner architecture: The shared tail pages are truly read-only and
contain valid tail page metadata (see the sketch after this list).
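As mentioned in item 3 above, the shared tail pages are per node. Below is a
hypothetical sketch of how such a page might be looked up and installed
locklessly; the vmemmap_tails[] array and vmemmap_get_tail() names are taken
from the changelog further down, while the field layout, GFP flags and
initialization details are assumptions:

	/*
	 * Hypothetical sketch only: assumes pglist_data gains a
	 * vmemmap_tails[] array of shared tail pages indexed by folio order.
	 * Names are borrowed from the changelog; the real code may differ.
	 */
	static struct page *vmemmap_get_tail(int nid, unsigned int order)
	{
		struct pglist_data *pgdat = NODE_DATA(nid);
		struct page *tail, *page;

		tail = READ_ONCE(pgdat->vmemmap_tails[order]);
		if (tail)
			return tail;

		page = alloc_pages_node(nid, GFP_KERNEL, 0);
		if (!page)
			return NULL;

		/* Initialize each struct page in @page as an order-@order tail. */
		/* ... prep_compound_tail() on every entry ... */

		/* Only one thread installs the shared page; the loser frees its copy. */
		tail = cmpxchg(&pgdat->vmemmap_tails[order], NULL, page);
		if (tail) {
			__free_page(page);
			return tail;
		}
		return page;
	}

The cmpxchg() avoids taking hugetlb_lock on this path, which is the change
noted in the v5 changelog.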
If sizeof(struct page) is not a power of 2, there are no functional changes;
HVO is not supported in that configuration.
I had hoped to see a performance improvement, but my testing so far has
shown either no change or only a slight improvement within the noise.
Series Organization
===================
Patch 1: Preparation - move MAX_FOLIO_ORDER to mmzone.h
Patches 2-4: Refactoring - interface changes, field rename, code movement
Patches 5-6: Arch fixes - align vmemmap for riscv and LoongArch
Patch 7: Core change - new mask-based compound_head() encoding
Patch 8: Correctness fix - page_zonenum() must use head page
Patch 9: Add memmap alignment check for compound_info_has_mask()
Patch 10: Refactor vmemmap_walk for new design
Patch 11: Eliminate fake heads with shared tail pages
Patches 12-15: Cleanup - remove fake head infrastructure
Patch 16: Documentation update
Patch 17: Get rid of open-coded compound_head() in page_slab()
Changes in v5:
==============
- Rebased to mm-everything-2026-01-27-04-35
- Add arch-specific patches to align vmemmap to maximal folio size
for riscv and LoongArch architectures.
- Strengthen the memmap alignment check in mm/sparse.c: use BUG()
for CONFIG_DEBUG_VM, WARN() otherwise. (Muchun)
- Use cmpxchg() instead of hugetlb_lock to update vmemmap_tails
array. (Muchun)
- Update page_slab().
Changes in v4:
==============
- Fix build issues caused by a linux/mmzone.h <-> linux/pgtable.h
dependency loop by not including linux/pgtable.h in
linux/mmzone.h.
- Rework vmemmap_remap_alloc() interface. (Muchun)
- Use &folio->page instead of folio address for optimization
target. (Muchun)
Changes in v3:
==============
- Fixed error recovery path in vmemmap_remap_free() to pass correct start
address for TLB flush. (Muchun)
- Wrapped the mask-based compound_info encoding within CONFIG_SPARSEMEM_VMEMMAP
check via compound_info_has_mask(). For other memory models, alignment
guarantees are harder to verify. (Muchun)
- Updated vmemmap_dedup.rst documentation wording: changed "vmemmap_tail
shared for the struct hstate" to "A single, per-node page frame shared
among all hugepages of the same size". (Muchun)
- Fixed build error with MAX_FOLIO_ORDER expanding to undefined PUD_ORDER
in certain configurations. (kernel test robot)
Changes in v2:
==============
- Handle boot-allocated huge pages correctly. (Frank)
- Changed from per-hstate vmemmap_tail to per-node vmemmap_tails[] array
in pglist_data. (Muchun)
- Added spin_lock(&hugetlb_lock) protection in vmemmap_get_tail() to fix
a race condition where two threads could both allocate tail pages.
The losing thread now properly frees its allocated page. (Usama)
- Add warning if memmap is not aligned to MAX_FOLIO_SIZE, which is
required for the mask approach. (Muchun)
- Make page_zonenum() use head page - correctness fix since shared
tail pages cannot have valid zone information. (Muchun)
- Added 'const' qualifier to head parameter in set_compound_head() and
prep_compound_tail(). (Usama)
- Updated commit messages.
Kiryl Shutsemau (17):
mm: Move MAX_FOLIO_ORDER definition to mmzone.h
mm: Change the interface of prep_compound_tail()
mm: Rename the 'compound_head' field in the 'struct page' to
'compound_info'
mm: Move set/clear_compound_head() next to compound_head()
riscv/mm: Align vmemmap to maximal folio size
LoongArch/mm: Align vmemmap to maximal folio size
mm: Rework compound_head() for power-of-2 sizeof(struct page)
mm: Make page_zonenum() use head page
mm/sparse: Check memmap alignment for compound_info_has_mask()
mm/hugetlb: Refactor code around vmemmap_walk
mm/hugetlb: Remove fake head pages
mm: Drop fake head checks
hugetlb: Remove VMEMMAP_SYNCHRONIZE_RCU
mm/hugetlb: Remove hugetlb_optimize_vmemmap_key static key
mm: Remove the branch from compound_head()
hugetlb: Update vmemmap_dedup.rst
mm/slab: Use compound_head() in page_slab()
.../admin-guide/kdump/vmcoreinfo.rst | 2 +-
Documentation/mm/vmemmap_dedup.rst | 62 ++--
arch/loongarch/include/asm/pgtable.h | 3 +-
arch/riscv/mm/init.c | 3 +-
include/linux/mm.h | 31 --
include/linux/mm_types.h | 20 +-
include/linux/mmzone.h | 46 +++
include/linux/page-flags.h | 167 +++++-----
include/linux/page_ref.h | 8 +-
include/linux/types.h | 2 +-
kernel/vmcore_info.c | 2 +-
mm/hugetlb.c | 8 +-
mm/hugetlb_vmemmap.c | 290 ++++++++----------
mm/internal.h | 12 +-
mm/mm_init.c | 2 +-
mm/page_alloc.c | 4 +-
mm/slab.h | 8 +-
mm/sparse-vmemmap.c | 44 ++-
mm/sparse.c | 13 +
mm/util.c | 16 +-
20 files changed, 371 insertions(+), 372 deletions(-)
--
2.51.2