From: Muhammad Usama Anjum <usama.anjum@arm.com>
To: Zi Yan <ziy@nvidia.com>, David Hildenbrand <david.hildenbrand@arm.com>
Cc: usama.anjum@arm.com, Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@kernel.org>,
Lorenzo Stoakes <ljs@kernel.org>,
"Liam R . Howlett" <Liam.Howlett@oracle.com>,
Vlastimil Babka <vbabka@kernel.org>,
Mike Rapoport <rppt@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>,
Brendan Jackman <jackmanb@google.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Uladzislau Rezki <urezki@gmail.com>,
Nick Terrell <terrelln@fb.com>, David Sterba <dsterba@suse.com>,
Vishal Moola <vishal.moola@gmail.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
bpf@vger.kernel.org, Ryan.Roberts@arm.com
Subject: Re: [PATCH v3 1/3] mm/page_alloc: Optimize free_contig_range()
Date: Wed, 25 Mar 2026 14:06:50 +0000
Message-ID: <a68800f1-ad23-4788-af70-50c93c91aaf3@arm.com>
In-Reply-To: <42C0A333-EB71-42A5-83A2-36831E1F5E50@nvidia.com>

On 24/03/2026 5:14 pm, Zi Yan wrote:
> On 24 Mar 2026, at 11:22, David Hildenbrand wrote:
>
>> On 3/24/26 15:46, Zi Yan wrote:
>>> On 24 Mar 2026, at 9:35, Muhammad Usama Anjum wrote:
>>>
>>>> From: Ryan Roberts <ryan.roberts@arm.com>
>>>>
>>>> Decompose the range of order-0 pages to be freed into the set of largest
>>>> possible power-of-2 sized and aligned chunks and free them to the pcp or
>>>> buddy. This improves on the previous approach, which freed each order-0
>>>> page individually in a loop. Testing shows performance improved by more
>>>> than 10x in some cases.
>>>>
>>>> Since each page is order-0, we must decrement each page's reference
>>>> count individually and only consider the page for freeing as part of a
>>>> high order chunk if the reference count goes to zero. Additionally,
>>>> free_pages_prepare() must be called for each individual order-0 page
>>>> too, so that the struct page state and global accounting state can be
>>>> appropriately managed. But once this is done, the resulting high order
>>>> chunks can be freed as a unit to the pcp or buddy.
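
As an aside, the decomposition described above amounts to something like the
sketch below. This is a hypothetical illustration only, not the patch code:
largest_chunk_order() is a made-up helper name, and the MAX_PAGE_ORDER cap
follows the changelog note about leaving the rest up to
__free_frozen_pages().

	/*
	 * Hypothetical sketch (not the patch code): the largest power-of-2
	 * chunk starting at pfn is bounded both by pfn's alignment
	 * (__ffs(pfn) is the number of trailing zero bits) and by the
	 * number of pages remaining, capped at MAX_PAGE_ORDER.
	 * Assumes nr > 0.
	 */
	static unsigned int largest_chunk_order(unsigned long pfn,
						unsigned long nr)
	{
		unsigned int order = min_t(unsigned int, MAX_PAGE_ORDER,
					   ilog2(nr));

		if (pfn)
			order = min_t(unsigned int, order, __ffs(pfn));
		return order;
	}

A freeing loop would then advance pfn and decrease nr by 1UL << order per
iteration until nr reaches zero.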
>>>>
>>>> This not only significantly speeds up the free operation but also has
>>>> the side benefit that high order blocks are added to the pcp instead of
>>>> each page ending up on the pcp order-0 list; memory remains more readily
>>>> available in high orders.
>>>>
>>>> vmalloc will shortly become a user of this new optimized
>>>> free_contig_range() since it aggressively allocates high order
>>>> non-compound pages, but then calls split_page() to end up with
>>>> contiguous order-0 pages. These can now be freed much more efficiently.
>>>>
>>>> The execution time of the following function was measured on a
>>>> server-class arm64 machine:
>>>>
>>>> static int page_alloc_high_order_test(void)
>>>> {
>>>> 	unsigned int order = HPAGE_PMD_ORDER;
>>>> 	struct page *page;
>>>> 	int i;
>>>>
>>>> 	for (i = 0; i < 100000; i++) {
>>>> 		page = alloc_pages(GFP_KERNEL, order);
>>>> 		if (!page)
>>>> 			return -1;
>>>> 		split_page(page, order);
>>>> 		free_contig_range(page_to_pfn(page), 1UL << order);
>>>> 	}
>>>>
>>>> 	return 0;
>>>> }
>>>>
>>>> Execution time before: 4097358 usec
>>>> Execution time after: 729831 usec
>>>>
>>>> Perf trace before:
>>>>
>>>> 99.63% 0.00% kthreadd [kernel.kallsyms] [.] kthread
>>>> |
>>>> ---kthread
>>>> 0xffffb33c12a26af8
>>>> |
>>>> |--98.13%--0xffffb33c12a26060
>>>> | |
>>>> | |--97.37%--free_contig_range
>>>> | | |
>>>> | | |--94.93%--___free_pages
>>>> | | | |
>>>> | | | |--55.42%--__free_frozen_pages
>>>> | | | | |
>>>> | | | | --43.20%--free_frozen_page_commit
>>>> | | | | |
>>>> | | | | --35.37%--_raw_spin_unlock_irqrestore
>>>> | | | |
>>>> | | | |--11.53%--_raw_spin_trylock
>>>> | | | |
>>>> | | | |--8.19%--__preempt_count_dec_and_test
>>>> | | | |
>>>> | | | |--5.64%--_raw_spin_unlock
>>>> | | | |
>>>> | | | |--2.37%--__get_pfnblock_flags_mask.isra.0
>>>> | | | |
>>>> | | | --1.07%--free_frozen_page_commit
>>>> | | |
>>>> | | --1.54%--__free_frozen_pages
>>>> | |
>>>> | --0.77%--___free_pages
>>>> |
>>>> --0.98%--0xffffb33c12a26078
>>>> alloc_pages_noprof
>>>>
>>>> Perf trace after:
>>>>
>>>> 8.42% 2.90% kthreadd [kernel.kallsyms] [k] __free_contig_range
>>>> |
>>>> |--5.52%--__free_contig_range
>>>> | |
>>>> | |--5.00%--free_prepared_contig_range
>>>> | | |
>>>> | | |--1.43%--__free_frozen_pages
>>>> | | | |
>>>> | | | --0.51%--free_frozen_page_commit
>>>> | | |
>>>> | | |--1.08%--_raw_spin_trylock
>>>> | | |
>>>> | | --0.89%--_raw_spin_unlock
>>>> | |
>>>> | --0.52%--free_pages_prepare
>>>> |
>>>> --2.90%--ret_from_fork
>>>> kthread
>>>> 0xffffae1c12abeaf8
>>>> 0xffffae1c12abe7a0
>>>> |
>>>> --2.69%--vfree
>>>> __free_contig_range
>>>>
>>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>>>> Co-developed-by: Muhammad Usama Anjum <usama.anjum@arm.com>
>>>> Signed-off-by: Muhammad Usama Anjum <usama.anjum@arm.com>
>>>> ---
>>>> Changes since v2:
>>>> - Handle different possible section boundaries in __free_contig_range()
>>>> - Drop the TODO
>>>> - Remove return value from __free_contig_range()
>>>> - Remove non-functional change from __free_pages_ok()
>>>>
>>>> Changes since v1:
>>>> - Rebase on mm-new
>>>> - Move FPI_PREPARED check inside __free_pages_prepare() now that
>>>> fpi_flags are already being passed.
>>>> - Add todo (Zi Yan)
>>>> - Rerun benchmarks
>>>> - Convert VM_BUG_ON_PAGE() to VM_WARN_ON_ONCE()
>>>> - Rework order calculation in free_prepared_contig_range() and use
>>>> MAX_PAGE_ORDER as high limit instead of pageblock_order as it must
>>>> be up to internal __free_frozen_pages() how it frees them
>>>>
>>>> Made-with: Cursor
>>>> ---
>>>> include/linux/gfp.h | 2 +
>>>> mm/page_alloc.c | 97 ++++++++++++++++++++++++++++++++++++++++++++-
>>>> 2 files changed, 97 insertions(+), 2 deletions(-)
>>>>
>>>
>>> <snip>
>>>
>>>> +
>>>> +/**
>>>> + * __free_contig_range - Free contiguous range of order-0 pages.
>>>> + * @pfn: Page frame number of the first page in the range.
>>>> + * @nr_pages: Number of pages to free.
>>>> + *
>>>> + * For each order-0 struct page in the physically contiguous range, put a
>>>> + * reference. Free any page whose reference count falls to zero. The
>>>> + * implementation is functionally equivalent to, but significantly faster
>>>> + * than, calling __free_page() for each struct page in a loop.
>>>> + *
>>>> + * Memory allocated with alloc_pages(order>=1) then subsequently split to
>>>> + * order-0 with split_page() is an example of appropriate contiguous pages that
>>>> + * can be freed with this API.
>>>> + *
>>>> + * Context: May be called in interrupt context or while holding a normal
>>>> + * spinlock, but not in NMI context or while holding a raw spinlock.
>>>> + */
>>>> +void __free_contig_range(unsigned long pfn, unsigned long nr_pages)
>>>> +{
>>>> +	struct page *page = pfn_to_page(pfn);
>>>> +	struct page *start = NULL;
>>>> +	unsigned long start_sec;
>>>> +	unsigned long i;
>>>> +	bool can_free;
>>>> +
>>>> +	/*
>>>> +	 * Chunk the range into contiguous runs of pages for which the refcount
>>>> +	 * went to zero and for which free_pages_prepare() succeeded. If
>>>> +	 * free_pages_prepare() fails we consider the page to have been freed;
>>>> +	 * deliberately leak it.
>>>> +	 *
>>>> +	 * Code assumes contiguous PFNs have contiguous struct pages, but not
>>>> +	 * vice versa. Break batches at section boundaries since pages from
>>>> +	 * different sections must not be coalesced into a single high-order
>>>> +	 * block.
>>>> +	 */
>>>> +	for (i = 0; i < nr_pages; i++, page++) {
>>>> +		VM_WARN_ON_ONCE(PageHead(page));
>>>> +		VM_WARN_ON_ONCE(PageTail(page));
>>>> +
>>>> +		can_free = put_page_testzero(page);
>>>> +		if (can_free && !free_pages_prepare(page, 0))
>>>> +			can_free = false;
>>>> +
>>>> +		if (can_free && start &&
>>>> +		    memdesc_section(page->flags) != start_sec) {
>>>> +			free_prepared_contig_range(start, page - start);
>>>> +			start = page;
>>>> +			start_sec = memdesc_section(page->flags);
>>>> +		} else if (!can_free && start) {
>>>> +			free_prepared_contig_range(start, page - start);
>>>> +			start = NULL;
>>>> +		} else if (can_free && !start) {
>>>> +			start = page;
>>>> +			start_sec = memdesc_section(page->flags);
>>>> +		}
>>>> +	}
>>>
>>> It can be simplified to:
>>>
>>> 	for (i = 0; i < nr_pages; i++, page++) {
>>> 		VM_WARN_ON_ONCE(PageHead(page));
>>> 		VM_WARN_ON_ONCE(PageTail(page));
>>>
>>> 		can_free = put_page_testzero(page) && free_pages_prepare(page, 0);
>>>
>>> 		if (!can_free) {
>>> 			if (start) {
>>> 				free_prepared_contig_range(start, page - start);
>>> 				start = NULL;
>>> 			}
>>> 			continue;
>>> 		}
>>>
>>> 		if (start && memdesc_section(page->flags) != start_sec) {
>>> 			free_prepared_contig_range(start, page - start);
>>> 			start = page;
>>> 			start_sec = memdesc_section(page->flags);
>>> 		} else if (!start) {
>>> 			start = page;
>>> 			start_sec = memdesc_section(page->flags);
>>> 		}
>>> 	}
I'll simplify in the next version. Thanks.
>>>
>>> BTW, memdesc_section() returns 0 for !SECTION_IN_PAGE_FLAGS.
>>> Is pfn_to_section_nr() more robust?
>>
>> That's the whole trick: it's optimized out in that case. Linus proposed
>> that for num_pages_contiguous().
>>
>> The cover letter should likely refer to num_pages_contiguous() :)
I'll refer to num_pages_contiguous() as well.
>
> Oh, I needed to refresh my memory on SPARSEMEM to remember
> !SECTION_IN_PAGE_FLAGS is for SPARSE_VMEMMAP and the contiguous PFNs vs
> contiguous struct page thing.
>
> Now memdesc_section() makes sense to me. Thanks.
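
(For anyone else following along: my understanding is that memdesc_section()
reduces to roughly the sketch below. The field and macro names are assumed
from mm.h and may differ, but the point is that without SECTION_IN_PAGE_FLAGS
the helper is a compile-time constant 0, so the section-boundary comparison
in the loop above is optimized out entirely, as David says.)

	/* Paraphrased sketch of the existing helper (names assumed). */
	static inline unsigned long memdesc_section(memdesc_flags_t mdf)
	{
	#ifdef SECTION_IN_PAGE_FLAGS	/* SPARSEMEM && !SPARSEMEM_VMEMMAP */
		return (mdf.f >> SECTIONS_PGSHIFT) & SECTIONS_MASK;
	#else
		return 0;	/* constant: the comparison compiles away */
	#endif
	}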
>
>
> Best Regards,
> Yan, Zi