From: Kiryl Shutsemau <kas@kernel.org>
To: Usama Arif <usamaarif642@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Muchun Song <muchun.song@linux.dev>,
David Hildenbrand <david@kernel.org>,
Oscar Salvador <osalvador@suse.de>,
Mike Rapoport <rppt@kernel.org>,
Vlastimil Babka <vbabka@suse.cz>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Matthew Wilcox <willy@infradead.org>, Zi Yan <ziy@nvidia.com>,
Baoquan He <bhe@redhat.com>, Michal Hocko <mhocko@suse.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Jonathan Corbet <corbet@lwn.net>,
kernel-team@meta.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org
Subject: Re: [PATCH 04/11] mm: Rework compound_head() for power-of-2 sizeof(struct page)
Date: Sat, 6 Dec 2025 16:29:44 +0000
Message-ID: <t3z3msqpbtnkgwqs5fxvnd4zsymclxzzr6vcaubv7z5jtqd46i@g5vtuktue54s>
In-Reply-To: <22609798-e84b-46ca-9cb5-649ffba4a2a4@gmail.com>
On Sat, Dec 06, 2025 at 12:25:12AM +0000, Usama Arif wrote:
>
>
> On 05/12/2025 19:43, Kiryl Shutsemau wrote:
> > For tail pages, the kernel uses the 'compound_info' field to get to the
> > head page. The bit 0 of the field indicates whether the page is a
> > tail page, and if set, the remaining bits represent a pointer to the
> > head page.
> >
> > For cases when size of struct page is power-of-2, change the encoding of
> > compound_info to store a mask that can be applied to the virtual address
> > of the tail page in order to access the head page. It is possible
> > because sturct page of the head page is naturally aligned with regards
>
> nit: s/sturct/struct/
Ack.
> > to order of the page.
>
> Might be good to state here that no change is expected if sizeof(struct page)
> is not a power of 2.
Okay.
> >
> > The significant impact of this modification is that all tail pages of
> > the same order will now have identical 'compound_info', regardless of
> > the compound page they are associated with. This paves the way for
> > eliminating fake heads.
> >
> > Signed-off-by: Kiryl Shutsemau <kas@kernel.org>
> > ---
> > include/linux/page-flags.h | 61 +++++++++++++++++++++++++++++++++-----
> > mm/util.c | 15 +++++++---
> > 2 files changed, 64 insertions(+), 12 deletions(-)
> >
> > diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> > index 11d9499e5ced..eef02fbbb40f 100644
> > --- a/include/linux/page-flags.h
> > +++ b/include/linux/page-flags.h
> > @@ -210,6 +210,13 @@ static __always_inline const struct page *page_fixed_fake_head(const struct page
> > if (!static_branch_unlikely(&hugetlb_optimize_vmemmap_key))
> > return page;
> >
> > + /*
> > + * Fake heads only exists if size of struct page is power-of-2.
> > + * See hugetlb_vmemmap_optimizable_size().
> > + */
> > + if (!is_power_of_2(sizeof(struct page)))
> > + return page;
> > +
>
>
> hmm, my understanding from reviewing the series up to this patch is that everything works
> the same as the old code when sizeof(struct page) is not a power of 2. Returning page here means you don't
> fix the page head when sizeof(struct page) is not a power of 2?
There's no change for non-power-of-2 sizeof(struct page) as there are no
fake heads, because there's no HVO in that case.
See hugetlb_vmemmap_optimizable_size(), as I mentioned in the comment.
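(Paraphrasing the relevant guard in that helper, details elided:

	if (!is_power_of_2(sizeof(struct page)))
		return 0;	/* not optimizable, so no fake heads */

i.e. with a non-power-of-2 struct page the optimization is never applied
in the first place.)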
>
> > /*
> > * Only addresses aligned with PAGE_SIZE of struct page may be fake head
> > * struct page. The alignment check aims to avoid access the fields (
> > @@ -223,10 +230,13 @@ static __always_inline const struct page *page_fixed_fake_head(const struct page
> > * because the @page is a compound page composed with at least
> > * two contiguous pages.
> > */
> > - unsigned long head = READ_ONCE(page[1].compound_info);
> > + unsigned long info = READ_ONCE(page[1].compound_info);
> >
> > - if (likely(head & 1))
> > - return (const struct page *)(head - 1);
> > + if (likely(info & 1)) {
> > + unsigned long p = (unsigned long)page;
> > +
> > + return (const struct page *)(p & info);
>
> Would it be worth writing a comment over here similar to what you have in set_compound_head
> to explain why this works? i.e. compound_info contains the mask derived from folio order that
> can be applied to the virtual address to get the head page.
But this code is about to be deleted. Is it really worth it?
> Also, it takes a few minutes to wrap your head around the fact that this works because the struct
> page of the head page is aligned with respect to the order. It might be good to add that as
> a comment somewhere. I don't see it documented in this patch; if it's in a future patch, please ignore
> this comment.
Okay, I will try to explain it better.
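Something along these lines, maybe (a rough sketch, assuming the common
64-byte struct page, so order_base_2(sizeof(struct page)) == 6, and an
order-9 page; 'tail'/'head' are just illustrative names):

	/*
	 * The head page is naturally aligned to its order, so bits
	 * [shift-1:0] of the virtual address of its struct page are
	 * zero: shift = 9 + 6 = 15 here, i.e. the head's struct page
	 * is 32K-aligned in vmemmap.
	 */
	shift = order + order_base_2(sizeof(struct page));
	mask = GENMASK(BITS_PER_LONG - 1, shift);

	/*
	 * All tail struct pages of the compound page live within the
	 * same 32K-aligned block, so masking a tail's virtual address
	 * gives the head's. Bit 0 being set in compound_info is
	 * harmless for the AND since struct page addresses always have
	 * bit 0 clear.
	 */
	head = (struct page *)((unsigned long)tail & (mask | 1));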
>
> > + }
> > }
> > return page;
> > }
> > @@ -281,11 +291,27 @@ static __always_inline int page_is_fake_head(const struct page *page)
> >
> > static __always_inline unsigned long _compound_head(const struct page *page)
> > {
> > - unsigned long head = READ_ONCE(page->compound_info);
> > + unsigned long info = READ_ONCE(page->compound_info);
> >
> > - if (unlikely(head & 1))
> > - return head - 1;
> > - return (unsigned long)page_fixed_fake_head(page);
> > + /* Bit 0 encodes PageTail() */
> > + if (!(info & 1))
> > + return (unsigned long)page_fixed_fake_head(page);
> > +
> > + /*
> > + * If the size of struct page is not power-of-2, the rest if
>
> nit: s/if/of
Ack.
>
> > + * compound_info is the pointer to the head page.
> > + */
> > + if (!is_power_of_2(sizeof(struct page)))
> > + return info - 1;
> > +
> > + /*
> > + * If the size of struct page is power-of-2 it is set the rest of
>
> nit: remove "it is set"
Ack.
>
> > + * the info encodes the mask that converts the address of the tail
> > + * page to the head page.
> > + *
> > + * No need to clear bit 0 in the mask as 'page' always has it clear.
> > + */
> > + return (unsigned long)page & info;
> > }
> >
> > #define compound_head(page) ((typeof(page))_compound_head(page))
> > @@ -294,7 +320,26 @@ static __always_inline void set_compound_head(struct page *page,
> > struct page *head,
> > unsigned int order)
> > {
> > - WRITE_ONCE(page->compound_info, (unsigned long)head + 1);
> > + unsigned int shift;
> > + unsigned long mask;
> > +
> > + if (!is_power_of_2(sizeof(struct page))) {
> > + WRITE_ONCE(page->compound_info, (unsigned long)head | 1);
> > + return;
> > + }
> > +
> > + /*
> > + * If the size of struct page is power-of-2, bits [shift:0] of the
> > + * virtual address of compound head are zero.
> > + *
> > + * Calculate mask that can be applied the virtual address of the
>
> nit: applied to the ..
Ack.
>
> > + * tail page to get address of the head page.
> > + */
> > + shift = order + order_base_2(sizeof(struct page));
> > + mask = GENMASK(BITS_PER_LONG - 1, shift);
> > +
> > + /* Bit 0 encodes PageTail() */
> > + WRITE_ONCE(page->compound_info, mask | 1);
> > }
> >
> > static __always_inline void clear_compound_head(struct page *page)
> > diff --git a/mm/util.c b/mm/util.c
> > index cbf93cf3223a..6723d2bb7f1e 100644
> > --- a/mm/util.c
> > +++ b/mm/util.c
> > @@ -1234,7 +1234,7 @@ static void set_ps_flags(struct page_snapshot *ps, const struct folio *folio,
> > */
> > void snapshot_page(struct page_snapshot *ps, const struct page *page)
> > {
> > - unsigned long head, nr_pages = 1;
> > + unsigned long info, nr_pages = 1;
> > struct folio *foliop;
> > int loops = 5;
> >
> > @@ -1244,8 +1244,8 @@ void snapshot_page(struct page_snapshot *ps, const struct page *page)
> > again:
> > memset(&ps->folio_snapshot, 0, sizeof(struct folio));
> > memcpy(&ps->page_snapshot, page, sizeof(*page));
> > - head = ps->page_snapshot.compound_info;
> > - if ((head & 1) == 0) {
> > + info = ps->page_snapshot.compound_info;
> > + if ((info & 1) == 0) {
> > ps->idx = 0;
> > foliop = (struct folio *)&ps->page_snapshot;
> > if (!folio_test_large(foliop)) {
> > @@ -1256,7 +1256,14 @@ void snapshot_page(struct page_snapshot *ps, const struct page *page)
> > }
> > foliop = (struct folio *)page;
> > } else {
> > - foliop = (struct folio *)(head - 1);
> > + unsigned long p = (unsigned long)page;
> > +
> > + /* See compound_head() */
> > + if (is_power_of_2(sizeof(struct page)))
> > + foliop = (struct folio *)(p & info);
> > + else
> > + foliop = (struct folio *)(info - 1);
> > +
>
> Would it be better to do the below, as you then don't need to declare p if sizeof(struct page) is not
> a power of 2?
>
> if (!is_power_of_2(sizeof(struct page)))
> foliop = (struct folio *)(info - 1);
> else {
> unsigned long p = (unsigned long)page;
> foliop = (struct folio *)(p & info);
> }
Okay.
>
> > ps->idx = folio_page_idx(foliop, page);
> > }
> >
>
--
Kiryl Shutsemau / Kirill A. Shutemov