linux-mm.kvack.org archive mirror
From: Juan Yescas <jyescas@google.com>
To: David Hildenbrand <david@redhat.com>
Cc: akash.tyagi@mediatek.com,
	Andrew Morton <akpm@linux-foundation.org>,
	 angelogioacchino.delregno@collabora.com, hannes@cmpxchg.org,
	 Brendan Jackman <jackmanb@google.com>,
	linux-arm-kernel@lists.infradead.org,
	 linux-kernel@vger.kernel.org,
	linux-mediatek@lists.infradead.org,
	 Linux Memory Management List <linux-mm@kvack.org>,
	matthias.bgg@gmail.com, Michal Hocko <mhocko@suse.com>,
	 Suren Baghdasaryan <surenb@google.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	wsd_upstream@mediatek.com,  Zi Yan <ziy@nvidia.com>,
	Kalesh Singh <kaleshsingh@google.com>,
	 "T.J. Mercier" <tjmercier@google.com>,
	Isaac Manjarres <isaacmanjarres@google.com>
Subject: Re: [RFC PATCH] mm/page_alloc: Add PCP list for THP CMA
Date: Tue, 5 Aug 2025 09:57:39 -0700	[thread overview]
Message-ID: <CAJDx_ri==3SxFcuKXHpNjrtxbp0hLyhM+zXeJ-LQX38rfbUChw@mail.gmail.com> (raw)
In-Reply-To: <486c5773-c7fa-4e19-b429-90823ed2f7d5@redhat.com>

On Tue, Aug 5, 2025 at 2:58 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 05.08.25 03:22, Juan Yescas wrote:
> > On Mon, Aug 4, 2025 at 11:50 AM David Hildenbrand <david@redhat.com> wrote:
> >>
> >> On 04.08.25 20:20, Juan Yescas wrote:
> >>> Hi David/Zi,
> >>>
> >>> Is there any reason why the MIGRATE_CMA pages are not in the PCP lists?
> >>>
> >>> There are many devices that need fast allocation of MIGRATE_CMA pages,
> >>> and they have to get them from the buddy allocator, which is a bit
> >>> slower in comparison to the PCP lists.
> >>>
> >>> We also have cases where the MIGRATE_CMA memory requirements are large.
> >>> For example, GPUs need MIGRATE_CMA memory in the range of 30 MiB to 500 MiB.
> >>> These cases would benefit if we had THPs for CMA.
> >>>
> >>> Could we add support for MIGRATE_CMA pages to the PCP and THP lists?
> >>
> >> Remember how CMA memory is used:
> >>
> >> The owner allocates it through cma_alloc() and friends, where the CMA
> >> allocator will try allocating *specific physical memory regions* using
> >> alloc_contig_range(). It doesn't just go ahead and pick a random CMA
> >> page from the buddy (or PCP) lists. Doesn't work (just imagine having
> >> different CMA areas etc).
> >>
> >> Anybody else is free to use CMA pages for MOVABLE allocations. So we
> >> treat them as being MOVABLE on the PCP.
> >>
> >> Having a separate CMA PCP list doesn't solve or speedup anything, really.
> >>
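
As a side note for anyone reading along, here is a minimal sketch of that
owner-side path (not taken from any real driver; "my_cma" is a hypothetical
struct cma * handle the owner obtained at boot, e.g. via
cma_declare_contiguous()). The point is that cma_alloc() carves pages out of
that specific physical range, it does not pull them from the buddy or PCP
free lists:

#include <linux/cma.h>
#include <linux/mm.h>

/* Hypothetical owner-side helpers around the driver's own CMA area. */
static struct page *my_grab_cma_buffer(struct cma *my_cma, size_t size)
{
	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;

	/*
	 * cma_alloc() works on the owner's specific physical area and may
	 * first migrate away movable pages that currently sit there.
	 */
	return cma_alloc(my_cma, count, 0, false);
}

static void my_drop_cma_buffer(struct cma *my_cma, struct page *page,
			       size_t size)
{
	cma_release(my_cma, page, PAGE_ALIGN(size) >> PAGE_SHIFT);
}
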
> >
> > Thanks David for the quick overview.
> >
> >> I still have no clue what this patch here tried to solve: it doesn't
> >> make any sense.
> >>
> >
> > The story started with this out of tree patch that is part of Android.
> >
> > https://lore.kernel.org/lkml/cover.1604282969.git.cgoldswo@codeaurora.org/T/#u
> >
> > This patch introduced the __GFP_CMA flag, which allows allocations to be
> > satisfied from either MIGRATE_MOVABLE or MIGRATE_CMA. What happened then
> > is that the MIGRATE_MOVABLE pages in the PCP lists were consumed very
> > quickly. To address that, a PCP MIGRATE_CMA list was added, which
> > rmqueue_bulk() fills when it is empty. That is how Android ended up with
> > a PCP MIGRATE_CMA list. In addition, the MIGRATE_MOVABLE THP list was
> > allowed to contain MIGRATE_CMA pages, so THP MIGRATE_CMA pages get used
> > for MIGRATE_MOVABLE THP allocations, which causes later THP MIGRATE_CMA
> > allocations to fail.
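
To make that concrete for anyone who is not carrying the Android tree, here
is a tiny user-space toy model of the scheme (emphatically not the
out-of-tree code): each migratetype gets its own per-CPU cache, refilled in
bulk from the slower buddy path only when it runs dry, so __GFP_CMA-style
allocations stop draining the MOVABLE cache.

#include <stdio.h>

enum mt { MT_MOVABLE, MT_CMA, MT_COUNT };

#define BATCH 8

struct pcp_cache {
	int nr[MT_COUNT];	/* pages cached per migratetype */
};

/* Stand-in for rmqueue_bulk(): pull a batch from the "buddy" allocator. */
static void refill(struct pcp_cache *pcp, enum mt type)
{
	pcp->nr[type] += BATCH;
	printf("bulk refill of %s list\n", type == MT_CMA ? "CMA" : "MOVABLE");
}

static void alloc_page_fast(struct pcp_cache *pcp, enum mt type)
{
	if (pcp->nr[type] == 0)
		refill(pcp, type);	/* slow path only when the cache is dry */
	pcp->nr[type]--;
}

int main(void)
{
	struct pcp_cache pcp = { { 0 } };

	/* CMA-eligible allocations hit their own list and no longer
	 * starve the MOVABLE one. */
	for (int i = 0; i < 10; i++)
		alloc_page_fast(&pcp, MT_CMA);
	for (int i = 0; i < 10; i++)
		alloc_page_fast(&pcp, MT_MOVABLE);
	return 0;
}
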
>
> Okay, so this patch here really is not suitable for the upstream kernel
> as is. It's purely targeted at the OOT Android patch.
>
Right, it is a temporary solution for the pinned MIGRATE_CMA pages.

> >
> > These workarounds exist mainly because we still need to solve this issue
> > upstream:
> >
> > - When devices reserve large blocks of MIGRATE_CMA pages, the
> > underutilized MIGRATE_CMA pageblocks can serve as fallback for
> > MIGRATE_MOVABLE allocations, and those pages can then get pinned, so
> > later allocations that do require MIGRATE_CMA pages might fail.
> >
> > I remember that you presented this problem at LPC. Were you able to
> > make some progress on it?
>
> There is the problem of CMA pages getting allocated by someone for a
> MOVABLE allocation and then being short-term pinned for DMA. Long-term
> pinnings are disallowed (we just recently fixed a bug where we
> accidentally allowed it).
>
Nice, it is great that the issue got caught and fixed upstream :)

> One concern is that a steady stream of short-term pinnings can turn such
> pages unmovable. We discussed ideas on how to handle that, but there is
> no solution upstream yet.
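
To spell out the short-term case for readers following along, here is a
hedged sketch of the pin pattern in question (the helper and its parameters
are made up): without FOLL_LONGTERM the pages are not migrated off
MIGRATE_CMA/ZONE_MOVABLE before being pinned, so a steady stream of such
pins can defeat the migration that cma_alloc() relies on.

#include <linux/errno.h>
#include <linux/mm.h>

/* Hypothetical driver helper doing a short-term pin around one DMA op. */
static int my_short_term_dma(unsigned long uaddr, int npages,
			     struct page **pages)
{
	int pinned;

	/* No FOLL_LONGTERM: the pages stay where they are, CMA or not. */
	pinned = pin_user_pages_fast(uaddr, npages, FOLL_WRITE, pages);
	if (pinned <= 0)
		return pinned ? pinned : -EFAULT;

	/* ... map and run the DMA against 'pages' ... */

	unpin_user_pages(pages, pinned);
	return 0;
}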

Are there any plans to continue the discussion? Is it on the priority
list? Maybe it is a topic we could discuss at LPC in Japan?

Thanks
Juan

>
> --
> Cheers,
>
> David / dhildenb
>

