From: Wei Yang <richard.weiyang@gmail.com>
To: Balbir Singh <balbirs@nvidia.com>
Cc: "Wei Yang" <richard.weiyang@gmail.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
dri-devel@lists.freedesktop.org,
"Andrew Morton" <akpm@linux-foundation.org>,
"David Hildenbrand" <david@redhat.com>, "Zi Yan" <ziy@nvidia.com>,
"Joshua Hahn" <joshua.hahnjy@gmail.com>,
"Rakie Kim" <rakie.kim@sk.com>,
"Byungchul Park" <byungchul@sk.com>,
"Gregory Price" <gourry@gourry.net>,
"Ying Huang" <ying.huang@linux.alibaba.com>,
"Alistair Popple" <apopple@nvidia.com>,
"Oscar Salvador" <osalvador@suse.de>,
"Lorenzo Stoakes" <lorenzo.stoakes@oracle.com>,
"Baolin Wang" <baolin.wang@linux.alibaba.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
"Nico Pache" <npache@redhat.com>,
"Ryan Roberts" <ryan.roberts@arm.com>,
"Dev Jain" <dev.jain@arm.com>, "Barry Song" <baohua@kernel.org>,
"Lyude Paul" <lyude@redhat.com>,
"Danilo Krummrich" <dakr@kernel.org>,
"David Airlie" <airlied@gmail.com>,
"Simona Vetter" <simona@ffwll.ch>,
"Ralph Campbell" <rcampbell@nvidia.com>,
"Mika Penttilä" <mpenttil@redhat.com>,
"Matthew Brost" <matthew.brost@intel.com>,
"Francois Dugast" <francois.dugast@intel.com>
Subject: Re: [PATCH] mm/huge_memory.c: introduce folio_split_unmapped
Date: Fri, 14 Nov 2025 08:02:32 +0000
Message-ID: <20251114080232.kxms4vjlkiiuxqpl@master>
In-Reply-To: <870151ce-ca90-4cd4-8f21-35f4da329924@nvidia.com>
On Fri, Nov 14, 2025 at 02:30:03PM +1100, Balbir Singh wrote:
>On 11/14/25 14:21, Wei Yang wrote:
>> On Fri, Nov 14, 2025 at 12:22:28PM +1100, Balbir Singh wrote:
>> [...]
>>> @@ -4079,6 +4091,36 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>> return ret;
>>> }
>>>
>>> +/*
>>> + * This function is a helper for splitting folios that have already been
>>> + * unmapped. The use case is that the device or the CPU can refuse to
>>> + * migrate THP pages in the middle of migration, due to allocation issues
>>> + * on either side.
>>> + *
>>> + * The high-level code is copied from __folio_split(). Since the pages are
>>> + * anonymous and already isolated from the LRU, the code has been
>>> + * simplified so that __folio_split() is not burdened with unmapped
>>> + * handling sprinkled into it.
>>> + *
>>> + * None of the split folios are unlocked.
>>> + */
>>> +int folio_split_unmapped(struct folio *folio, unsigned int new_order)
>>> +{
>>> + int extra_pins, ret = 0;
>>> +
>>> + VM_WARN_ON_FOLIO(folio_mapped(folio), folio);
>>> + VM_WARN_ON_ONCE_FOLIO(!folio_test_locked(folio), folio);
>>> + VM_WARN_ON_ONCE_FOLIO(!folio_test_large(folio), folio);
>>> +
>>
>> Compared with the original logic, this helper skips the
>> folio_split_supported() check, which verifies that new_order is
>> supported for the file system.
>>
>> Currently folio_split_unmapped() only passes 0 as new_order, which looks
>> fine. But for a generic helper, it seems reasonable to do the check, IMHO.
>>
>
>This is meant to be used in the middle of a migration where the src/dst do
>not agree on the folio_order() due to allocation issues. When mTHP support
>is added to device migration, order support will be added and checked.
>FYI: this routine supports just anonymous pages ATM.
>
OK, I didn't see these assumptions stated anywhere, so I'm not sure the
helper couldn't be misused. Maybe a comment would help? Or drop new_order
for now; we can add it when it is truly used.
>>> + if (!can_split_folio(folio, 1, &extra_pins))
>>> + return -EAGAIN;
>>> +
>>> + local_irq_disable();
>>> + ret = __folio_freeze_and_split_unmapped(folio, new_order, &folio->page, NULL,
>>> + NULL, false, NULL, SPLIT_TYPE_UNIFORM,
>>> + 0, extra_pins);
>>> + local_irq_enable();
>>> + return ret;
>>> +}
>>> +
>>> /*
>>> * This function splits a large folio into smaller folios of order @new_order.
>>> * @page can point to any page of the large folio to split. The split operation
>>
>>
>
>Balbir
--
Wei Yang
Help you, Help me