From: Rik van Riel <riel@surriel.com>
To: Johannes Weiner <hannes@cmpxchg.org>, linux-mm@kvack.org
Cc: Vlastimil Babka <vbabka@suse.cz>, Zi Yan <ziy@nvidia.com>,
David Hildenbrand <david@kernel.org>,
Lorenzo Stoakes <ljs@kernel.org>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
linux-kernel@vger.kernel.org
Subject: Re: [RFC 2/2] mm: page_alloc: per-cpu pageblock buddy allocator
Date: Fri, 03 Apr 2026 21:42:08 -0400
Message-ID: <984aee1a7af2ea4b576a0114a367402537d3deca.camel@surriel.com>
In-Reply-To: <20260403194526.477775-3-hannes@cmpxchg.org>
On Fri, 2026-04-03 at 15:40 -0400, Johannes Weiner wrote:
>
> @@ -755,6 +752,9 @@ struct per_cpu_pages {
> #endif
> short free_count; /* consecutive free count */
>
> + /* Pageblocks owned by this CPU, for fragment recovery */
> + struct list_head owned_blocks;
> +
> /* Lists of pages, one per migrate type stored on the pcp-lists */
> struct list_head lists[NR_PCP_LISTS];
> } ____cacheline_aligned_in_smp;
>
> [...]
>
> + /*
> + * Phase 0: Recover fragments from owned blocks.
> + *
> + * The owned_blocks list tracks blocks that have fragments
> + * sitting in zone buddy (put there by drains). Pull matching
> + * fragments back to PCP with PagePCPBuddy so they participate
> + * in merging, instead of claiming fresh blocks and spreading
> + * fragmentation further.
> + *
> + * Only recover blocks matching the requested migratetype.
> + * After recovery, remove the block from the list -- the drain
> + * path re-adds it if new fragments arrive.
> + */
> + list_for_each_entry_safe(pbd, tmp, &pcp->owned_blocks, cpu_node) {
> + unsigned long base_pfn, pfn;
> + int block_mt;
> +
> + base_pfn = pbd->block_pfn;
> + block_mt = pbd_migratetype(pbd);
> + if (block_mt != migratetype)
> + continue;
Given that you just skip over blocks of the wrong migratetype,
I wonder if it makes sense to have a separate list head for each
migratetype in the per_cpu_pages struct.
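Something along these lines, as an untested sketch (I'm assuming
MIGRATE_PCPTYPES is the right bound here, and that a block's
migratetype can't change while it sits on the list):

	struct per_cpu_pages {
		/* ... */
		short free_count;	/* consecutive free count */

		/* One recovery list per migratetype, for fragment recovery */
		struct list_head owned_blocks[MIGRATE_PCPTYPES];

		struct list_head lists[NR_PCP_LISTS];
	} ____cacheline_aligned_in_smp;

Then the recovery walk indexes straight into the right list and
the block_mt check goes away:

	list_for_each_entry_safe(pbd, tmp,
				 &pcp->owned_blocks[migratetype], cpu_node) {
		unsigned long base_pfn = pbd->block_pfn;

		/* every block on this list already matches migratetype */
		...
	}

That costs a few extra list heads per CPU, but the recovery path
no longer scans blocks it is only going to skip. The drain path
would just need to re-add blocks to the list matching their
current migratetype.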
Not that I should be saying anything that would slow down
the merging of these patches, since making the buddy allocator
more of a slow path is pretty much a prerequisite for the 1GB
allocation stuff I'm working on :)
--
All Rights Reversed.