From: Muhammad Usama Anjum <usama.anjum@arm.com>
To: Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@kernel.org>,
Lorenzo Stoakes <ljs@kernel.org>,
"Liam R . Howlett" <Liam.Howlett@oracle.com>,
Vlastimil Babka <vbabka@kernel.org>,
Mike Rapoport <rppt@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>,
Brendan Jackman <jackmanb@google.com>,
Johannes Weiner <hannes@cmpxchg.org>, Zi Yan <ziy@nvidia.com>,
Uladzislau Rezki <urezki@gmail.com>,
Nick Terrell <terrelln@fb.com>, David Sterba <dsterba@suse.com>,
Vishal Moola <vishal.moola@gmail.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
bpf@vger.kernel.org, Ryan.Roberts@arm.com,
david.hildenbrand@arm.com
Cc: Muhammad Usama Anjum <usama.anjum@arm.com>
Subject: [PATCH v6 3/3] mm/page_alloc: Optimize __free_contig_frozen_range()
Date: Wed, 1 Apr 2026 11:16:21 +0100
Message-ID: <20260401101634.2868165-4-usama.anjum@arm.com>
In-Reply-To: <20260401101634.2868165-1-usama.anjum@arm.com>

Apply the same batch-freeing optimization from free_contig_range() to the
frozen page path. The previous __free_contig_frozen_range() freed each
order-0 page individually via free_frozen_pages(), which is slow for the
same reason the old free_contig_range() was: each page goes to the
order-0 pcp list rather than being coalesced into higher-order blocks.

Rewrite __free_contig_frozen_range() to call free_pages_prepare() for
each order-0 page, then batch the prepared pages into the largest
possible power-of-2 aligned chunks via free_prepared_contig_range().
If free_pages_prepare() fails (e.g. HWPoison or a bad page), the page is
deliberately not freed; it must not be returned to the allocator.

I've tested CMA through debugfs. The test allocates 16384 pages per
allocation over several iterations. There is a 3.5x improvement:

Before: 1406 usec per iteration
After: 402 usec per iteration

Before:
70.89% 0.69% cma [kernel.kallsyms] [.] free_contig_frozen_range
|
|--70.20%--free_contig_frozen_range
| |
| |--46.41%--__free_frozen_pages
| | |
| | --36.18%--free_frozen_page_commit
| | |
| | --29.63%--_raw_spin_unlock_irqrestore
| |
| |--8.76%--_raw_spin_trylock
| |
| |--7.03%--__preempt_count_dec_and_test
| |
| |--4.57%--_raw_spin_unlock
| |
| |--1.96%--__get_pfnblock_flags_mask.isra.0
| |
| --1.15%--free_frozen_page_commit
|
--0.69%--el0t_64_sync

After:
23.57% 0.00% cma [kernel.kallsyms] [.] free_contig_frozen_range
|
---free_contig_frozen_range
|
|--20.45%--__free_contig_frozen_range
| |
| |--17.77%--free_pages_prepare
| |
| --0.72%--free_prepared_contig_range
| |
| --0.55%--__free_frozen_pages
|
--3.12%--free_pages_prepare

Acked-by: David Hildenbrand (Arm) <david@kernel.org>
Acked-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Suggested-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Muhammad Usama Anjum <usama.anjum@arm.com>
---
Changes since v4:
- Use /* */ style comment with boolean variable
Changes since v3:
- Use newly introduced __free_contig_range_common() as the pattern was
very similar to __free_contig_range()
Changes since v2:
- Rework the loop to check for memory sections just like __free_contig_range()
- Didn't add reviewed-by tags because of rework
---
mm/page_alloc.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6568af69b5cdb..dec72cedcf674 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7054,8 +7054,7 @@ static int __alloc_contig_verify_gfp_mask(gfp_t gfp_mask, gfp_t *gfp_cc_mask)
 
 static void __free_contig_frozen_range(unsigned long pfn, unsigned long nr_pages)
 {
-	for (; nr_pages--; pfn++)
-		free_frozen_pages(pfn_to_page(pfn), 0);
+	__free_contig_range_common(pfn, nr_pages, /* is_frozen= */ true);
 }
 
 /**
--
2.47.3

Thread overview: 6+ messages
2026-04-01 10:16 [PATCH v6 0/3] mm: Free contiguous order-0 pages efficiently Muhammad Usama Anjum
2026-04-01 10:16 ` [PATCH v6 1/3] mm/page_alloc: Optimize free_contig_range() Muhammad Usama Anjum
2026-04-01 10:16 ` [PATCH v6 2/3] vmalloc: Optimize vfree with free_pages_bulk() Muhammad Usama Anjum
2026-04-01 10:19 ` David Hildenbrand (Arm)
2026-04-01 15:13 ` Uladzislau Rezki
2026-04-01 10:16 ` Muhammad Usama Anjum [this message]