From: Matthew Brost <matthew.brost@intel.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Francois Dugast <francois.dugast@intel.com>,
<intel-xe@lists.freedesktop.org>,
<dri-devel@lists.freedesktop.org>,
"Andrew Morton" <akpm@linux-foundation.org>,
Balbir Singh <balbirs@nvidia.com>, <linux-mm@kvack.org>
Subject: Re: [PATCH 1/4] mm/migrate: Add migrate_device_split_page
Date: Mon, 5 Jan 2026 18:39:06 -0800 [thread overview]
Message-ID: <aVx1ylEmJOfsrh98@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <aUHRpvsa80wg04r7@lstrano-desk.jf.intel.com>
On Tue, Dec 16, 2025 at 01:39:50PM -0800, Matthew Brost wrote:
> On Tue, Dec 16, 2025 at 08:34:46PM +0000, Matthew Wilcox wrote:
> > On Tue, Dec 16, 2025 at 09:10:11PM +0100, Francois Dugast wrote:
> > > + ret = __split_unmapped_folio(folio, 0, page, NULL, NULL, SPLIT_TYPE_UNIFORM);
> >
> > We're trying to get rid of uniform splits. Why do you need this to be
> > uniform?
I looked into this a bit more - we do want a uniform split here. What we
want is to split the THP into 512 4KB pages.
Per the doc for __split_unmapped_folio:
 * @split_at: in buddy allocator like split, the folio containing @split_at
 *            will be split until its order becomes @new_order.
I read this as implying that some of the resulting folios may remain at a
higher order, which is not the desired behavior for this usage.
Matt
>
> It’s very possible we’re doing this incorrectly due to a lack of core MM
> experience. I believe Zi Yan suggested this approach (use
> __split_unmapped_folio) a while back.
>
> Let me start by explaining what we’re trying to do and see if there’s a
> better suggestion for how to accomplish it.
>
> This covers the case where a GPU device page was allocated as a THP
> (e.g., we call zone_device_folio_init with an order of 9). Later, this
> page is freed/unmapped and then reallocated for a CPU VMA that is
> smaller than a THP (e.g., we’d allocate either 4KB or 64KB based on
> CPU VMA size alignment). At this point, we need to split the device
> folio so we can migrate data into 4KB device pages.
>
> Would SPLIT_TYPE_NON_UNIFORM work here? Or do you have another
> suggestion for splitting the folio aside from __split_unmapped_folio?
>
> Matt
Thread overview: 15+ messages
[not found] <20251216201206.1660899-1-francois.dugast@intel.com>
2025-12-16 20:10 ` Francois Dugast
2025-12-16 20:34 ` Matthew Wilcox
2025-12-16 21:39 ` Matthew Brost
2026-01-06 2:39 ` Matthew Brost [this message]
2026-01-07 20:15 ` Zi Yan
2026-01-07 20:20 ` Zi Yan
2026-01-07 20:38 ` Zi Yan
2026-01-07 21:15 ` Matthew Brost
2026-01-07 22:03 ` Zi Yan
2026-01-08 0:56 ` Balbir Singh
2026-01-08 2:17 ` Matthew Brost
2026-01-08 2:53 ` Zi Yan
2026-01-08 3:14 ` Alistair Popple
2026-01-08 3:42 ` Matthew Brost
2026-01-08 4:47 ` Balbir Singh