From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Zi Yan <ziy@nvidia.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@redhat.com>,
Oscar Salvador <osalvador@suse.de>,
Muchun Song <muchun.song@linux.dev>, <linux-mm@kvack.org>,
<sidhartha.kumar@oracle.com>, <jane.chu@oracle.com>,
Vlastimil Babka <vbabka@suse.cz>,
Brendan Jackman <jackmanb@google.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Matthew Wilcox <willy@infradead.org>,
David Hildenbrand <david@kernel.org>
Subject: Re: [PATCH v4 5/6] mm: cma: add cma_alloc_frozen{_compound}()
Date: Thu, 18 Dec 2025 20:54:19 +0800
Message-ID: <6e7df7a8-aaf4-4960-82be-dea118c5955c@huawei.com>
In-Reply-To: <A276ED1B-89C8-45A0-8DA9-9D5CA1D8E2FF@nvidia.com>
On 2025/12/18 3:38, Zi Yan wrote:
> On 17 Dec 2025, at 3:02, Kefeng Wang wrote:
>
>> On 2025/12/17 2:40, Zi Yan wrote:
>>> On 16 Dec 2025, at 6:48, Kefeng Wang wrote:
>>>
>>>> Introduce the cma_alloc_frozen{_compound}() helpers to allocate pages
>>>> without incrementing their refcount, then convert hugetlb cma to use
>>>> cma_alloc_frozen_compound() and cma_release_frozen() and remove the
>>>> unused cma_{alloc,free}_folio(). Also move cma_validate_zones() into
>>>> mm/internal.h since it has no outside users.
>>>>
>>>> After the above changes, set_pages_refcounted() is only called on
>>>> non-compound pages, so drop its PageHead handling.
>>>>
>>>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>>>> ---
>>>>  include/linux/cma.h | 26 ++++++------------------
>>>>  mm/cma.c            | 48 +++++++++++++++++++++++++--------------------
>>>>  mm/hugetlb_cma.c    | 24 +++++++++++++----------
>>>>  mm/internal.h       | 10 +++++-----
>>>>  4 files changed, 52 insertions(+), 56 deletions(-)
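
[Aside for illustration: the caller-side pattern these helpers enable looks
roughly like the sketch below. The prototypes are not quoted in this hunk, so
the example_* wrapper names and the signatures used here are assumptions
rather than copies from the patch; the point is only that the caller gets
pages with a frozen (zero) refcount and returns them through the matching
frozen release path.]

/*
 * Illustrative sketch only -- signatures assumed, not copied from this
 * series: allocate a frozen (refcount == 0) compound page from a CMA area
 * and give it back without ever touching the refcount.
 */
static struct folio *example_cma_folio_alloc(struct cma *cma, int order)
{
	struct page *page = cma_alloc_frozen_compound(cma, order);

	return page ? page_folio(page) : NULL;
}

static void example_cma_folio_free(struct cma *cma, struct folio *folio)
{
	cma_release_frozen(cma, &folio->page, 1UL << folio_order(folio));
}
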
>>>>
>>
>> ...
>>
>>>>  static bool __cma_release(struct cma *cma, const struct page *pages,
>>>> -			  unsigned long count, bool compound)
>>>> +			  unsigned long count, bool frozen)
>>>>  {
>>>>  	unsigned long pfn, end;
>>>>  	int r;
>>>> @@ -974,8 +982,8 @@ static bool __cma_release(struct cma *cma, const struct page *pages,
>>>>  		return false;
>>>>  	}
>>>>
>>>> -	if (compound)
>>>> -		__free_pages((struct page *)pages, compound_order(pages));
>>>> +	if (frozen)
>>>> +		free_contig_frozen_range(pfn, count);
>>>>  	else
>>>>  		free_contig_range(pfn, count);
>>>
>>> Can we get rid of the free_contig_range() branch by making cma_release() put
>>> each page's reference itself? Then __cma_release() becomes cma_release_frozen()
>>> and the release pattern matches the allocation pattern:
>>> 1. cma_alloc() calls cma_alloc_frozen() and manipulates the page refcounts.
>>> 2. cma_release() manipulates the page refcounts and calls cma_release_frozen().
>>>
>>
>> I have considered something similar before, but we can only manipulate the
>> page refcounts after finding the correct cma memrange from cma/pages, so it
>> does not seem like a big improvement. Any more comments?
>>
>> 1) for cma_release:
>>    a. find the cma memrange
>>    b. manipulate the page refcounts once the cmr is found
>>    c. free the pages and release the cma resources
>> 2) for cma_release_frozen:
>>    a. find the cma memrange
>>    b. free the pages and release the cma resources once the cmr is found
>
> Right, I think it makes code simpler.
>
> Basically add a helper function:
> struct cma_memrange* find_cma_memrange(struct cma *cma,
> const struct page *pages, unsigned long count);
>
> Then
>
> __cma_release_frozen()
> {
> 	free_contig_frozen_range(pfn, count);
> 	cma_clear_bitmap(cma, cmr, pfn, count);
> 	cma_sysfs_account_release_pages(cma, count);
> 	trace_cma_release(cma->name, pfn, pages, count);
> }
>
> cma_release()
> {
> 	cmr = find_cma_memrange();
>
> 	if (!cmr)
> 		return false;
>
> 	for (; count--; pages++)
> 		VM_WARN_ON(!put_page_testzero(pages));
>
> 	__cma_release_frozen();
> }
>
> cma_release_frozen()
> {
> 	cmr = find_cma_memrange();
>
> 	if (!cmr)
> 		return false;
>
> 	__cma_release_frozen();
> }
>
> Let me know your thoughts.

Yes, this is exactly what I described above as what needs to be done, but I
think it will add more code :)

Our goal is to convert all cma_{alloc,release} callers to
cma_{alloc,release}_frozen() and to completely remove free_contig_range() from
cma, so maybe no change is needed here? But if you prefer the way above, I can
also update it.
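
For concreteness, a rough and untested sketch of that shape, with
find_cma_memrange() as the helper you suggest and the rest reused from the
current release path (signatures are only illustrative), would be:

static bool __cma_release_frozen(struct cma *cma, struct cma_memrange *cmr,
				 unsigned long pfn, const struct page *pages,
				 unsigned long count)
{
	free_contig_frozen_range(pfn, count);
	cma_clear_bitmap(cma, cmr, pfn, count);
	cma_sysfs_account_release_pages(cma, count);
	trace_cma_release(cma->name, pfn, pages, count);
	return true;
}

bool cma_release(struct cma *cma, const struct page *pages,
		 unsigned long count)
{
	struct cma_memrange *cmr = find_cma_memrange(cma, pages, count);
	struct page *page = (struct page *)pages;
	unsigned long i;

	if (!cmr)
		return false;

	/* drop the reference that cma_alloc() took on each page */
	for (i = 0; i < count; i++)
		VM_WARN_ON(!put_page_testzero(page + i));

	return __cma_release_frozen(cma, cmr, page_to_pfn(pages), pages, count);
}

bool cma_release_frozen(struct cma *cma, const struct page *pages,
			unsigned long count)
{
	struct cma_memrange *cmr = find_cma_memrange(cma, pages, count);

	if (!cmr)
		return false;

	return __cma_release_frozen(cma, cmr, page_to_pfn(pages), pages, count);
}

That is two wrappers plus the new helper, which is why I expect it to end up
as more code than keeping the single __cma_release().
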
Thanks
Thread overview: 35+ messages
2025-12-16 11:48 [PATCH v4 RESEND 0/6] mm: hugetlb: allocate frozen gigantic folio Kefeng Wang
2025-12-16 11:48 ` [PATCH v4 1/6] mm: debug_vm_pgtable: add debug_vm_pgtable_free_huge_page() Kefeng Wang
2025-12-16 16:08 ` Zi Yan
2025-12-17 2:40 ` Muchun Song
2025-12-16 11:48 ` [PATCH v4 2/6] mm: page_alloc: add __split_page() Kefeng Wang
2025-12-16 16:21 ` Zi Yan
2025-12-17 7:01 ` Kefeng Wang
2025-12-17 2:45 ` Muchun Song
2025-12-16 11:48 ` [PATCH v4 3/6] mm: cma: add __cma_release() Kefeng Wang
2025-12-16 16:39 ` Zi Yan
2025-12-17 2:46 ` Muchun Song
2025-12-16 11:48 ` [PATCH v4 4/6] mm: page_alloc: add alloc_contig_frozen_{range,pages}() Kefeng Wang
2025-12-16 17:20 ` Zi Yan
2025-12-17 7:17 ` Kefeng Wang
2025-12-17 19:20 ` Zi Yan
2025-12-18 12:00 ` Kefeng Wang
2025-12-16 11:48 ` [PATCH v4 5/6] mm: cma: add cma_alloc_frozen{_compound}() Kefeng Wang
2025-12-16 18:40 ` Zi Yan
2025-12-17 8:02 ` Kefeng Wang
2025-12-17 19:38 ` Zi Yan
2025-12-18 12:54 ` Kefeng Wang [this message]
2025-12-18 15:52 ` Zi Yan
2025-12-19 4:09 ` Kefeng Wang
2025-12-22 2:30 ` Zi Yan
2025-12-22 13:03 ` Kefeng Wang
2025-12-20 14:34 ` kernel test robot
2025-12-22 1:46 ` Kefeng Wang
2025-12-16 11:48 ` [PATCH v4 6/6] mm: hugetlb: allocate frozen pages in alloc_gigantic_folio() Kefeng Wang
2025-12-16 18:44 ` Zi Yan
2025-12-17 8:09 ` Kefeng Wang
2025-12-17 19:40 ` Zi Yan
2025-12-18 12:56 ` Kefeng Wang