From: David Hildenbrand <david@redhat.com>
To: Zi Yan <ziy@nvidia.com>
Cc: Juan Yescas <jyescas@google.com>,
akash.tyagi@mediatek.com,
Andrew Morton <akpm@linux-foundation.org>,
angelogioacchino.delregno@collabora.com, hannes@cmpxchg.org,
Brendan Jackman <jackmanb@google.com>,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-mediatek@lists.infradead.org,
Linux Memory Management List <linux-mm@kvack.org>,
matthias.bgg@gmail.com, Michal Hocko <mhocko@suse.com>,
Suren Baghdasaryan <surenb@google.com>,
Vlastimil Babka <vbabka@suse.cz>,
wsd_upstream@mediatek.com, Kalesh Singh <kaleshsingh@google.com>,
"T.J. Mercier" <tjmercier@google.com>,
Isaac Manjarres <isaacmanjarres@google.com>
Subject: Re: [RFC PATCH] mm/page_alloc: Add PCP list for THP CMA
Date: Mon, 4 Aug 2025 21:10:13 +0200 [thread overview]
Message-ID: <5bf65002-8d2f-4b9b-8f22-3ba69124335c@redhat.com> (raw)
In-Reply-To: <8A3D2D44-DCE9-48FC-A684-C43006B3912F@nvidia.com>

On 04.08.25 21:00, Zi Yan wrote:
> On 4 Aug 2025, at 14:49, David Hildenbrand wrote:
>
>> On 04.08.25 20:20, Juan Yescas wrote:
>>> Hi David/Zi,
>>>
>>> Is there any reason why the MIGRATE_CMA pages are not in the PCP lists?
>>>
>>> There are many devices that need fast allocation of MIGRATE_CMA pages,
>>> and they have to get them from the buddy allocator, which is a bit
>>> slower in comparison to the PCP lists.
>>>
>>> We also have cases where the MIGRATE_CMA memory requirements are large.
>>> For example, GPUs need MIGRATE_CMA memory in the range of 30 MiB to 500 MiB.
>>> These cases would benefit if we had THPs for CMA.
>>>
>>> Could we add the support for MIGRATE_CMA pages on the PCP and THP lists?
>>
>> Remember how CMA memory is used:
>>
>> The owner allocates it through cma_alloc() and friends, where the CMA allocator will try allocating *specific physical memory regions* using alloc_contig_range(). It doesn't just go ahead and pick a random CMA page from the buddy (or PCP) lists. Doesn't work (just imagine having different CMA areas etc).
>
> Yeah, unless some code is relying on gfp_to_alloc_flags_cma() to get ALLOC_CMA
> to try to get CMA pages from buddy.
Right, but that's just for internal purposes IIUC, to grab pages from
the CMA lists when serving movable allocations.
>
>>
>> Anybody else is free to use CMA pages for MOVABLE allocations. So we treat them as being MOVABLE on the PCP.
>>
>> Having a separate CMA PCP list doesn't solve or speedup anything, really.
>
> It can even be slower: when small CMA pages sit on PCP lists and a large CMA
> page cannot be allocated, one needs to drain the PCP lists first. This
> assumes the code is trying to get CMA pages from the buddy allocator, which
> is not how CMA memory is designed to be used, as David mentioned above.
Right. And alloc_contig_range_noprof() already does a
drain_all_pages(cc.zone).
--
Cheers,
David / dhildenb
Thread overview: 27+ messages
2025-08-04 18:20 Juan Yescas
2025-08-04 18:49 ` David Hildenbrand
2025-08-04 19:00 ` Zi Yan
2025-08-04 19:10 ` David Hildenbrand [this message]
2025-08-05 1:24 ` Juan Yescas
2025-08-05 1:22 ` Juan Yescas
2025-08-05 9:54 ` Vlastimil Babka
2025-08-05 16:46 ` Juan Yescas
2025-08-05 17:12 ` Juan Yescas
2025-08-05 21:09 ` Vlastimil Babka
2025-08-06 21:54 ` Juan Yescas
2025-08-05 9:58 ` David Hildenbrand
2025-08-05 16:57 ` Juan Yescas
2025-08-05 21:08 ` David Hildenbrand
2025-08-06 21:44 ` Juan Yescas
2025-08-06 21:51 ` Juan Yescas
2025-09-09 20:07 ` Juan Yescas
2025-09-09 20:11 ` Juan Yescas
-- strict thread matches above, loose matches on Subject: below --
2025-07-24 7:53 akash.tyagi
2025-07-24 9:52 ` David Hildenbrand
2025-07-25 5:08 ` akash.tyagi
2025-07-25 7:04 ` David Hildenbrand
2025-07-25 14:27 ` Zi Yan
2025-07-29 12:30 ` akash.tyagi
2025-07-29 12:42 ` David Hildenbrand
2025-07-29 12:50 ` Matthew Wilcox
2025-08-04 18:31 ` Juan Yescas