From: Kiryl Shutsemau <kas@kernel.org>
To: Andrew Morton <akpm@linux-foundation.org>,
Muchun Song <muchun.song@linux.dev>,
David Hildenbrand <david@kernel.org>,
Matthew Wilcox <willy@infradead.org>,
Usama Arif <usamaarif642@gmail.com>,
Frank van der Linden <fvdl@google.com>
Cc: Oscar Salvador <osalvador@suse.de>,
Mike Rapoport <rppt@kernel.org>, Vlastimil Babka <vbabka@suse.cz>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Zi Yan <ziy@nvidia.com>, Baoquan He <bhe@redhat.com>,
Michal Hocko <mhocko@suse.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Jonathan Corbet <corbet@lwn.net>,
kernel-team@meta.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
Kiryl Shutsemau <kas@kernel.org>
Subject: [PATCHv3 06/15] mm: Rework compound_head() for power-of-2 sizeof(struct page)
Date: Thu, 15 Jan 2026 14:45:52 +0000
Message-ID: <20260115144604.822702-7-kas@kernel.org>
In-Reply-To: <20260115144604.822702-1-kas@kernel.org>

For tail pages, the kernel uses the 'compound_info' field to get to the
head page. Bit 0 of the field indicates whether the page is a tail
page, and if set, the remaining bits hold a pointer to the head page.

For cases when sizeof(struct page) is a power of 2, change the encoding
of compound_info to store a mask that can be applied to the virtual
address of the tail page in order to get the head page. This is
possible because the struct page of the head page is naturally aligned
with respect to the order of the page.

The significant impact of this change is that all tail pages of the
same order now have identical 'compound_info', regardless of the
compound page they belong to. This paves the way for eliminating fake
heads.

The HugeTLB Vmemmap Optimization (HVO) creates fake heads, and it is
only applied when sizeof(struct page) is a power of 2. Having identical
tail pages allows the same page to be mapped into the vmemmap of all
compound pages, maintaining the memory savings without fake heads.

If sizeof(struct page) is not a power of 2, there are no functional
changes.

Limit mask usage to SPARSEMEM_VMEMMAP, where it makes a difference
because of HVO. The mask approach would work for any memory model, but
it requires validating that struct pages are naturally aligned for all
orders up to the MAX_FOLIO order, which can be tricky.

Signed-off-by: Kiryl Shutsemau <kas@kernel.org>
Reviewed-by: Muchun Song <muchun.song@linux.dev>
---
include/linux/page-flags.h | 81 ++++++++++++++++++++++++++++++++++----
mm/util.c | 16 ++++++--
2 files changed, 85 insertions(+), 12 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 0de7db7efb00..e16a4bc82856 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -198,6 +198,29 @@ enum pageflags {
#ifndef __GENERATING_BOUNDS_H
+/*
+ * For tail pages, if the size of struct page is a power of 2,
+ * ->compound_info encodes the mask that converts the address of the
+ * tail page to the address of the head page.
+ *
+ * Otherwise, ->compound_info holds a direct pointer to the head page.
+ */
+static __always_inline bool compound_info_has_mask(void)
+{
+ /*
+ * Limit mask usage to SPARSEMEM_VMEMMAP where it makes a difference
+ * because of the HugeTLB vmemmap optimization (HVO).
+ *
+ * The approach with mask would work for any memory model, but it
+ * requires validating that struct pages are naturally aligned for
+ * all orders up to the MAX_FOLIO order, which can be tricky.
+ */
+ if (!IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP))
+ return false;
+
+ return is_power_of_2(sizeof(struct page));
+}
+
#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
DECLARE_STATIC_KEY_FALSE(hugetlb_optimize_vmemmap_key);
@@ -210,6 +233,10 @@ static __always_inline const struct page *page_fixed_fake_head(const struct page
if (!static_branch_unlikely(&hugetlb_optimize_vmemmap_key))
return page;
+ /* Fake heads only exist if compound_info_has_mask() is true */
+ if (!compound_info_has_mask())
+ return page;
+
/*
* Only addresses aligned with PAGE_SIZE of struct page may be fake head
* struct page. The alignment check aims to avoid access the fields (
@@ -223,10 +250,14 @@ static __always_inline const struct page *page_fixed_fake_head(const struct page
* because the @page is a compound page composed with at least
* two contiguous pages.
*/
- unsigned long head = READ_ONCE(page[1].compound_info);
+ unsigned long info = READ_ONCE(page[1].compound_info);
- if (likely(head & 1))
- return (const struct page *)(head - 1);
+ /* See set_compound_head() */
+ if (likely(info & 1)) {
+ unsigned long p = (unsigned long)page;
+
+ return (const struct page *)(p & info);
+ }
}
return page;
}
@@ -281,11 +312,26 @@ static __always_inline int page_is_fake_head(const struct page *page)
static __always_inline unsigned long _compound_head(const struct page *page)
{
- unsigned long head = READ_ONCE(page->compound_info);
+ unsigned long info = READ_ONCE(page->compound_info);
- if (unlikely(head & 1))
- return head - 1;
- return (unsigned long)page_fixed_fake_head(page);
+ /* Bit 0 encodes PageTail() */
+ if (!(info & 1))
+ return (unsigned long)page_fixed_fake_head(page);
+
+ /*
+ * If compound_info_has_mask() is false, the rest of compound_info is
+ * the pointer to the head page.
+ */
+ if (!compound_info_has_mask())
+ return info - 1;
+
+ /*
+ * If compound_info_has_mask() is true, the rest of the info encodes
+ * the mask that converts the address of the tail page to the head page.
+ *
+ * No need to clear bit 0 in the mask as 'page' always has it clear.
+ */
+ return (unsigned long)page & info;
}
#define compound_head(page) ((typeof(page))_compound_head(page))
@@ -294,7 +340,26 @@ static __always_inline void set_compound_head(struct page *page,
const struct page *head,
unsigned int order)
{
- WRITE_ONCE(page->compound_info, (unsigned long)head + 1);
+ unsigned int shift;
+ unsigned long mask;
+
+ if (!compound_info_has_mask()) {
+ WRITE_ONCE(page->compound_info, (unsigned long)head | 1);
+ return;
+ }
+
+ /*
+ * If the size of struct page is a power of 2, bits [shift-1:0] of the
+ * virtual address of compound head are zero.
+ *
+ * Calculate mask that can be applied to the virtual address of
+ * the tail page to get address of the head page.
+ */
+ shift = order + order_base_2(sizeof(struct page));
+ mask = GENMASK(BITS_PER_LONG - 1, shift);
+
+ /* Bit 0 encodes PageTail() */
+ WRITE_ONCE(page->compound_info, mask | 1);
}
static __always_inline void clear_compound_head(struct page *page)
diff --git a/mm/util.c b/mm/util.c
index cbf93cf3223a..f01a9655067f 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -1234,7 +1234,7 @@ static void set_ps_flags(struct page_snapshot *ps, const struct folio *folio,
*/
void snapshot_page(struct page_snapshot *ps, const struct page *page)
{
- unsigned long head, nr_pages = 1;
+ unsigned long info, nr_pages = 1;
struct folio *foliop;
int loops = 5;
@@ -1244,8 +1244,8 @@ void snapshot_page(struct page_snapshot *ps, const struct page *page)
again:
memset(&ps->folio_snapshot, 0, sizeof(struct folio));
memcpy(&ps->page_snapshot, page, sizeof(*page));
- head = ps->page_snapshot.compound_info;
- if ((head & 1) == 0) {
+ info = ps->page_snapshot.compound_info;
+ if ((info & 1) == 0) {
ps->idx = 0;
foliop = (struct folio *)&ps->page_snapshot;
if (!folio_test_large(foliop)) {
@@ -1256,7 +1256,15 @@ void snapshot_page(struct page_snapshot *ps, const struct page *page)
}
foliop = (struct folio *)page;
} else {
- foliop = (struct folio *)(head - 1);
+ /* See compound_head() */
+ if (compound_info_has_mask()) {
+ unsigned long p = (unsigned long)page;
+
+ foliop = (struct folio *)(p & info);
+ } else {
+ foliop = (struct folio *)(info - 1);
+ }
+
ps->idx = folio_page_idx(foliop, page);
}
--
2.51.2