From: Zi Yan <ziy@nvidia.com>
To: Muhammad Usama Anjum <usama.anjum@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@kernel.org>,
Lorenzo Stoakes <ljs@kernel.org>,
"Liam R . Howlett" <Liam.Howlett@oracle.com>,
Vlastimil Babka <vbabka@kernel.org>,
Mike Rapoport <rppt@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>,
Brendan Jackman <jackmanb@google.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Uladzislau Rezki <urezki@gmail.com>,
Nick Terrell <terrelln@fb.com>, David Sterba <dsterba@suse.com>,
Vishal Moola <vishal.moola@gmail.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
bpf@vger.kernel.org, Ryan.Roberts@arm.com,
david.hildenbrand@arm.com
Subject: Re: [PATCH v4 1/3] mm/page_alloc: Optimize free_contig_range()
Date: Fri, 27 Mar 2026 11:54:02 -0400
Message-ID: <9A3A4520-76F9-41CF-926E-AE2882814C84@nvidia.com>
In-Reply-To: <20260327125720.2270651-2-usama.anjum@arm.com>

On 27 Mar 2026, at 8:57, Muhammad Usama Anjum wrote:
> From: Ryan Roberts <ryan.roberts@arm.com>
>
> Decompose the range of order-0 pages to be freed into the set of largest
> possible power-of-2 size and aligned chunks and free them to the pcp or
> buddy. This improves on the previous approach which freed each order-0
> page individually in a loop. Testing shows performance to be improved by
> more than 10x in some cases.
>
> Since each page is order-0, we must decrement each page's reference
> count individually and only consider the page for freeing as part of a
> high order chunk if the reference count goes to zero. Additionally
> free_pages_prepare() must be called for each individual order-0 page
> too, so that the struct page state and global accounting state can be
> appropriately managed. But once this is done, the resulting high order
> chunks can be freed as a unit to the pcp or buddy.
>
> This significantly speeds up the free operation but also has the side
> benefit that high order blocks are added to the pcp instead of each page
> ending up on the pcp order-0 list; memory remains more readily available
> in high orders.
>
> vmalloc will shortly become a user of this new optimized
> free_contig_range() since it aggressively allocates high order
> non-compound pages, but then calls split_page() to end up with
> contiguous order-0 pages. These can now be freed much more efficiently.
>
> The execution time of the following function was measured in a server
> class arm64 machine:
>
> static int page_alloc_high_order_test(void)
> {
> 	unsigned int order = HPAGE_PMD_ORDER;
> 	struct page *page;
> 	int i;
>
> 	for (i = 0; i < 100000; i++) {
> 		page = alloc_pages(GFP_KERNEL, order);
> 		if (!page)
> 			return -1;
> 		split_page(page, order);
> 		free_contig_range(page_to_pfn(page), 1UL << order);
> 	}
>
> 	return 0;
> }
>
> Execution time before: 4097358 usec
> Execution time after: 729831 usec
>
> Perf trace before:
>
> 99.63% 0.00% kthreadd [kernel.kallsyms] [.] kthread
> |
> ---kthread
> 0xffffb33c12a26af8
> |
> |--98.13%--0xffffb33c12a26060
> | |
> | |--97.37%--free_contig_range
> | | |
> | | |--94.93%--___free_pages
> | | | |
> | | | |--55.42%--__free_frozen_pages
> | | | | |
> | | | | --43.20%--free_frozen_page_commit
> | | | | |
> | | | | --35.37%--_raw_spin_unlock_irqrestore
> | | | |
> | | | |--11.53%--_raw_spin_trylock
> | | | |
> | | | |--8.19%--__preempt_count_dec_and_test
> | | | |
> | | | |--5.64%--_raw_spin_unlock
> | | | |
> | | | |--2.37%--__get_pfnblock_flags_mask.isra.0
> | | | |
> | | | --1.07%--free_frozen_page_commit
> | | |
> | | --1.54%--__free_frozen_pages
> | |
> | --0.77%--___free_pages
> |
> --0.98%--0xffffb33c12a26078
> alloc_pages_noprof
>
> Perf trace after:
>
> 8.42% 2.90% kthreadd [kernel.kallsyms] [k] __free_contig_range
> |
> |--5.52%--__free_contig_range
> | |
> | |--5.00%--free_prepared_contig_range
> | | |
> | | |--1.43%--__free_frozen_pages
> | | | |
> | | | --0.51%--free_frozen_page_commit
> | | |
> | | |--1.08%--_raw_spin_trylock
> | | |
> | | --0.89%--_raw_spin_unlock
> | |
> | --0.52%--free_pages_prepare
> |
> --2.90%--ret_from_fork
> kthread
> 0xffffae1c12abeaf8
> 0xffffae1c12abe7a0
> |
> --2.69%--vfree
> __free_contig_range
>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> Co-developed-by: Muhammad Usama Anjum <usama.anjum@arm.com>
> Signed-off-by: Muhammad Usama Anjum <usama.anjum@arm.com>
> ---
> Changes since v3:
> - Move __free_contig_range() to a more generic __free_contig_range_common(),
>   which will be used to free frozen pages as well
> - Simplify the loop in __free_contig_range_common()
> - Rewrite the comment
>
> Changes since v2:
> - Handle different possible section boundaries in __free_contig_range()
> - Drop the TODO
> - Remove return value from __free_contig_range()
> - Remove non-functional change from __free_pages_ok()
>
> Changes since v1:
> - Rebase on mm-new
> - Move FPI_PREPARED check inside __free_pages_prepare() now that
> fpi_flags are already being passed.
> - Add todo (Zi Yan)
> - Rerun benchmarks
> - Convert VM_BUG_ON_PAGE() to VM_WARN_ON_ONCE()
> - Rework order calculation in free_prepared_contig_range() and use
> MAX_PAGE_ORDER as high limit instead of pageblock_order as it must
> be up to internal __free_frozen_pages() how it frees them
> ---
> include/linux/gfp.h | 2 +
> mm/page_alloc.c | 103 +++++++++++++++++++++++++++++++++++++++++++-
> 2 files changed, 103 insertions(+), 2 deletions(-)
LGTM, except some nits below.
Reviewed-by: Zi Yan <ziy@nvidia.com>
> +/**
> + * __free_contig_range - Free contiguous range of order-0 pages.
> + * @pfn: Page frame number of the first page in the range.
> + * @nr_pages: Number of pages to free.
> + *
> + * For each order-0 struct page in the physically contiguous range, put a
> + * reference. Free any page who's reference count falls to zero. The
s/who's/whose/
> + * implementation is functionally equivalent to, but significantly faster than
> + * calling __free_page() for each struct page in a loop.
> + *
> + * Memory allocated with alloc_pages(order>=1) then subsequently split to
> + * order-0 with split_page() is an example of appropriate contiguous pages that
> + * can be freed with this API.
> + *
> + * Context: May be called in interrupt context or while holding a normal
> + * spinlock, but not in NMI context or while holding a raw spinlock.
> + */
> +void __free_contig_range(unsigned long pfn, unsigned long nr_pages)
> +{
> + __free_contig_range_common(pfn, nr_pages, false);
__free_contig_range_common(pfn, nr_pages, /* is_frozen= */ false);
is what we usually do for bool arguments, for better readability.
> +}
> +EXPORT_SYMBOL(__free_contig_range);
> +
> #ifdef CONFIG_CONTIG_ALLOC
> /* Usage: See admin-guide/dynamic-debug-howto.rst */
> static void alloc_contig_dump_pages(struct list_head *page_list)
> @@ -7330,8 +7430,7 @@ void free_contig_range(unsigned long pfn, unsigned long nr_pages)
> if (WARN_ON_ONCE(PageHead(pfn_to_page(pfn))))
> return;
>
> - for (; nr_pages--; pfn++)
> - __free_page(pfn_to_page(pfn));
> + __free_contig_range(pfn, nr_pages);
> }
> EXPORT_SYMBOL(free_contig_range);
> #endif /* CONFIG_CONTIG_ALLOC */
> --
> 2.47.3
Best Regards,
Yan, Zi
Thread overview: 25+ messages
2026-03-27 12:57 [PATCH v4 0/3] mm: Free contiguous order-0 pages efficiently Muhammad Usama Anjum
2026-03-27 12:57 ` [PATCH v4 1/3] mm/page_alloc: Optimize free_contig_range() Muhammad Usama Anjum
2026-03-27 15:54 ` Zi Yan [this message]
2026-03-30 14:27 ` Vlastimil Babka (SUSE)
2026-03-31 13:51 ` Muhammad Usama Anjum
2026-03-30 14:30 ` Vlastimil Babka (SUSE)
2026-03-30 16:36 ` Muhammad Usama Anjum
2026-03-30 14:33 ` David Hildenbrand (Arm)
2026-03-31 13:52 ` Muhammad Usama Anjum
2026-03-27 12:57 ` [PATCH v4 2/3] vmalloc: Optimize vfree Muhammad Usama Anjum
2026-03-30 12:30 ` Uladzislau Rezki
2026-03-31 15:08 ` Muhammad Usama Anjum
2026-03-30 14:35 ` Vlastimil Babka (SUSE)
2026-03-31 15:09 ` Muhammad Usama Anjum
2026-03-30 14:38 ` David Hildenbrand (Arm)
2026-03-30 16:15 ` Muhammad Usama Anjum
2026-03-31 10:08 ` David Hildenbrand
2026-03-27 12:57 ` [PATCH v4 3/3] mm/page_alloc: Optimize __free_contig_frozen_range() Muhammad Usama Anjum
2026-03-27 15:54 ` Zi Yan
2026-03-30 14:36 ` Vlastimil Babka (SUSE)
2026-03-30 14:40 ` David Hildenbrand (Arm)
2026-03-30 14:41 ` David Hildenbrand (Arm)
2026-03-27 19:42 ` [PATCH v4 0/3] mm: Free contiguous order-0 pages efficiently Andrew Morton
2026-03-30 11:27 ` Muhammad Usama Anjum
2026-03-30 14:43 ` David Hildenbrand (Arm)