From: Zi Yan <ziy@nvidia.com>
To: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	David Hildenbrand <david@redhat.com>,
	Oscar Salvador <osalvador@suse.de>,
	Muchun Song <muchun.song@linux.dev>,
	linux-mm@kvack.org, sidhartha.kumar@oracle.com,
	jane.chu@oracle.com, Vlastimil Babka <vbabka@suse.cz>,
	Brendan Jackman <jackmanb@google.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Matthew Wilcox <willy@infradead.org>,
	David Hildenbrand <david@kernel.org>
Subject: Re: [PATCH v4 4/6] mm: page_alloc: add alloc_contig_frozen_{range,pages}()
Date: Wed, 17 Dec 2025 14:20:36 -0500	[thread overview]
Message-ID: <AAFDAF36-E02D-41E0-B40A-AA5BBC084F9A@nvidia.com> (raw)
In-Reply-To: <d7275c36-f5bb-4baf-8a94-b8522d4aef21@huawei.com>

On 17 Dec 2025, at 2:17, Kefeng Wang wrote:

> On 2025/12/17 1:20, Zi Yan wrote:
>> On 16 Dec 2025, at 6:48, Kefeng Wang wrote:
>>
>>> In order to allocate a given range of pages, or to allocate compound
>>> pages without incrementing their refcount, add two new helpers,
>>> alloc_contig_frozen_{range,pages}(), which may be beneficial
>>> to some users (e.g. hugetlb).
>>>
>>> The new alloc_contig_{range,pages}() now only accept !__GFP_COMP gfp,
>>> and free_contig_range() is refactored to free only non-compound
>>> pages; the only caller that frees compound pages, cma_free_folio(), is
>>> changed accordingly. free_contig_frozen_range() is provided to match
>>> alloc_contig_frozen_range() and is used to free frozen pages.
>>>
>>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>>> ---
>>>   include/linux/gfp.h |  52 +++++--------
>>>   mm/cma.c            |  15 ++--
>>>   mm/hugetlb.c        |   9 ++-
>>>   mm/internal.h       |  13 ++++
>>>   mm/page_alloc.c     | 183 ++++++++++++++++++++++++++++++++------------
>>>   5 files changed, 184 insertions(+), 88 deletions(-)
>>>
>>
>> <snip>
>>
>>> diff --git a/mm/internal.h b/mm/internal.h
>>> index e430da900430..75f624236ff8 100644
>>> --- a/mm/internal.h
>>> +++ b/mm/internal.h
>>> @@ -513,6 +513,19 @@ static inline void set_page_refcounted(struct page *page)
>>>   	set_page_count(page, 1);
>>>   }
>>>
>>> +static inline void set_pages_refcounted(struct page *page, unsigned long nr_pages)
>>> +{
>>> +	unsigned long pfn = page_to_pfn(page);
>>> +
>>> +	if (PageHead(page)) {
>>> +		set_page_refcounted(page);
>>> +		return;
>>> +	}
>>
>> This looks fragile, since if a tail page is passed, the refcount will be wrong.
>> But I see you remove this part in the next patch. It might be OK as a temporary
>> step.
>
> Yes, this is temporary.
>
>>
>>> +
>>> +	for (; nr_pages--; pfn++)
>>> +		set_page_refcounted(pfn_to_page(pfn));
>>> +}
>>> +
>>>   /*
>>>    * Return true if a folio needs ->release_folio() calling upon it.
>>>    */
>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>> index aa30d4436296..a7fc83bf806f 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>
>> <snip>
>>
>>>
>>> +static void __free_contig_frozen_range(unsigned long pfn, unsigned long nr_pages)
>>> +{
>>> +	for (; nr_pages--; pfn++)
>>> +		free_frozen_pages(pfn_to_page(pfn), 0);
>>> +}
>>> +
>>
>> Is it possible to use pageblock_order to speed this up?
>
> It should be no different since the page order is always zero; maybe I
> didn't get your point.

Something like the following, but taking care of the cases where pfn and
nr_pages are not aligned to pageblock_nr_pages:

for (; nr_pages; nr_pages -= pageblock_nr_pages, pfn += pageblock_nr_pages)
	free_frozen_pages(pfn_to_page(pfn), pageblock_order);

It makes fewer calls.
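
To also cover an unaligned head or tail, the loop could look roughly like
this (untested sketch, just to illustrate the shape; whether freeing these
pages at pageblock_order is actually valid is the part to check):

static void __free_contig_frozen_range(unsigned long pfn, unsigned long nr_pages)
{
	unsigned long end = pfn + nr_pages;

	while (pfn < end) {
		/* Free a whole pageblock when alignment and size allow. */
		if (IS_ALIGNED(pfn, pageblock_nr_pages) &&
		    end - pfn >= pageblock_nr_pages) {
			free_frozen_pages(pfn_to_page(pfn), pageblock_order);
			pfn += pageblock_nr_pages;
		} else {
			/* Unaligned head/tail: fall back to order-0 frees. */
			free_frozen_pages(pfn_to_page(pfn), 0);
			pfn++;
		}
	}
}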


>> And can it be moved before free_contig_frozen_range() for an easier read?
>>
>
> It is also used by alloc_contig_frozen_range(), so putting it here avoids an additional forward declaration.
>

Got it.
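
Right, moving it down next to free_contig_frozen_range() would just trade
the current placement for a forward declaration for the alloc path, e.g.
(purely illustrative):

/* Would be needed near the top if the helper moved below its callers: */
static void __free_contig_frozen_range(unsigned long pfn, unsigned long nr_pages);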

>
>> <snip>
>>
>>> +
>>> +/**
>>> + * free_contig_frozen_range() -- free the contiguous range of frozen pages
>>> + * @pfn:	start PFN to free
>>> + * @nr_pages:	Number of contiguous frozen pages to free
>>> + *
>>> + * This can be used to free the allocated compound/non-compound frozen pages.
>>> + */
>>> +void free_contig_frozen_range(unsigned long pfn, unsigned long nr_pages)
>>> +{
>>> +	struct page *first_page = pfn_to_page(pfn);
>>> +	const unsigned int order = ilog2(nr_pages);
>>
>> Maybe WARN_ON_ONCE(first_page != compound_head(first_page)) and return
>> immediately here to catch a tail page.
>
> Sure, will add this new check here.
>
>>
>>> +
>>> +	if (PageHead(first_page)) {
>>> +		WARN_ON_ONCE(order != compound_order(first_page));
>>> +		free_frozen_pages(first_page, order);
>>>   		return;
>>>   	}
>>>
>>> -	for (; nr_pages--; pfn++) {
>>> -		struct page *page = pfn_to_page(pfn);
>>> +	__free_contig_frozen_range(pfn, nr_pages);
>>> +}
>>> +EXPORT_SYMBOL(free_contig_frozen_range);
>>> +
> Thanks.
>>
>> Best Regards,
>> Yan, Zi
>>
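
For the tail-page check mentioned above, what I have in mind is roughly the
following (untested, just to show where the guard would sit):

void free_contig_frozen_range(unsigned long pfn, unsigned long nr_pages)
{
	struct page *first_page = pfn_to_page(pfn);
	const unsigned int order = ilog2(nr_pages);

	/* A tail page means the caller passed the middle of a compound page. */
	if (WARN_ON_ONCE(first_page != compound_head(first_page)))
		return;

	if (PageHead(first_page)) {
		WARN_ON_ONCE(order != compound_order(first_page));
		free_frozen_pages(first_page, order);
		return;
	}

	__free_contig_frozen_range(pfn, nr_pages);
}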


Best Regards,
Yan, Zi


Thread overview: 33+ messages
2025-12-16 11:48 [PATCH v4 RESEND 0/6] mm: hugetlb: allocate frozen gigantic folio Kefeng Wang
2025-12-16 11:48 ` [PATCH v4 1/6] mm: debug_vm_pgtable: add debug_vm_pgtable_free_huge_page() Kefeng Wang
2025-12-16 16:08   ` Zi Yan
2025-12-17  2:40   ` Muchun Song
2025-12-16 11:48 ` [PATCH v4 2/6] mm: page_alloc: add __split_page() Kefeng Wang
2025-12-16 16:21   ` Zi Yan
2025-12-17  7:01     ` Kefeng Wang
2025-12-17  2:45   ` Muchun Song
2025-12-16 11:48 ` [PATCH v4 3/6] mm: cma: add __cma_release() Kefeng Wang
2025-12-16 16:39   ` Zi Yan
2025-12-17  2:46   ` Muchun Song
2025-12-16 11:48 ` [PATCH v4 4/6] mm: page_alloc: add alloc_contig_frozen_{range,pages}() Kefeng Wang
2025-12-16 17:20   ` Zi Yan
2025-12-17  7:17     ` Kefeng Wang
2025-12-17 19:20       ` Zi Yan [this message]
2025-12-18 12:00         ` Kefeng Wang
2025-12-16 11:48 ` [PATCH v4 5/6] mm: cma: add cma_alloc_frozen{_compound}() Kefeng Wang
2025-12-16 18:40   ` Zi Yan
2025-12-17  8:02     ` Kefeng Wang
2025-12-17 19:38       ` Zi Yan
2025-12-18 12:54         ` Kefeng Wang
2025-12-18 15:52           ` Zi Yan
2025-12-19  4:09             ` Kefeng Wang
2025-12-22  2:30               ` Zi Yan
2025-12-22 13:03                 ` Kefeng Wang
2025-12-20 14:34   ` kernel test robot
2025-12-22  1:46     ` Kefeng Wang
2025-12-16 11:48 ` [PATCH v4 6/6] mm: hugetlb: allocate frozen pages in alloc_gigantic_folio() Kefeng Wang
2025-12-16 18:44   ` Zi Yan
2025-12-17  8:09     ` Kefeng Wang
2025-12-17 19:40       ` Zi Yan
2025-12-18 12:56         ` Kefeng Wang
  -- strict thread matches above, loose matches on Subject: below --
2025-10-23 11:59 [PATCH v4 0/6] mm: hugetlb: allocate frozen gigantic folio Kefeng Wang
2025-10-23 11:59 ` [PATCH v4 4/6] mm: page_alloc: add alloc_contig_frozen_{range,pages}() Kefeng Wang
