linux-mm.kvack.org archive mirror
From: Zi Yan <ziy@nvidia.com>
To: Balbir Singh <balbirs@nvidia.com>
Cc: "Wei Yang" <richard.weiyang@gmail.com>,
	linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
	linux-mm@kvack.org, akpm@linux-foundation.org,
	"David Hildenbrand" <david@redhat.com>,
	"Joshua Hahn" <joshua.hahnjy@gmail.com>,
	"Rakie Kim" <rakie.kim@sk.com>,
	"Byungchul Park" <byungchul@sk.com>,
	"Gregory Price" <gourry@gourry.net>,
	"Ying Huang" <ying.huang@linux.alibaba.com>,
	"Alistair Popple" <apopple@nvidia.com>,
	"Oscar Salvador" <osalvador@suse.de>,
	"Lorenzo Stoakes" <lorenzo.stoakes@oracle.com>,
	"Baolin Wang" <baolin.wang@linux.alibaba.com>,
	"Liam R. Howlett" <Liam.Howlett@oracle.com>,
	"Nico Pache" <npache@redhat.com>,
	"Ryan Roberts" <ryan.roberts@arm.com>,
	"Dev Jain" <dev.jain@arm.com>, "Barry Song" <baohua@kernel.org>,
	"Lyude Paul" <lyude@redhat.com>,
	"Danilo Krummrich" <dakr@kernel.org>,
	"David Airlie" <airlied@gmail.com>,
	"Simona Vetter" <simona@ffwll.ch>,
	"Ralph Campbell" <rcampbell@nvidia.com>,
	"Mika Penttilä" <mpenttil@redhat.com>,
	"Matthew Brost" <matthew.brost@intel.com>,
	"Francois Dugast" <francois.dugast@intel.com>
Subject: Re: [v7 11/16] mm/migrate_device: add THP splitting during migration
Date: Wed, 22 Oct 2025 11:26:07 -0400
Message-ID: <8D530774-F2CE-444F-8453-529821F30E4D@nvidia.com>
In-Reply-To: <3d58b839-a24a-4481-9687-3083541b80a1@nvidia.com>

On 22 Oct 2025, at 3:16, Balbir Singh wrote:

> On 10/22/25 13:59, Zi Yan wrote:
>> On 21 Oct 2025, at 17:34, Balbir Singh wrote:
>>
>>> On 10/20/25 09:59, Zi Yan wrote:
>>>> On 19 Oct 2025, at 18:49, Balbir Singh wrote:
>>>>
>>>>> On 10/19/25 19:19, Wei Yang wrote:
>>>>>> On Wed, Oct 01, 2025 at 04:57:02PM +1000, Balbir Singh wrote:
>>>>>> [...]
>>>>>>> static int __folio_split(struct folio *folio, unsigned int new_order,
>>>>>>> 		struct page *split_at, struct page *lock_at,
>>>>>>> -		struct list_head *list, bool uniform_split)
>>>>>>> +		struct list_head *list, bool uniform_split, bool unmapped)
>>>>>>> {
>>>>>>> 	struct deferred_split *ds_queue = get_deferred_split_queue(folio);
>>>>>>> 	XA_STATE(xas, &folio->mapping->i_pages, folio->index);
>>>>>>> @@ -3765,13 +3757,15 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>>>>>> 		 * is taken to serialise against parallel split or collapse
>>>>>>> 		 * operations.
>>>>>>> 		 */
>>>>>>> -		anon_vma = folio_get_anon_vma(folio);
>>>>>>> -		if (!anon_vma) {
>>>>>>> -			ret = -EBUSY;
>>>>>>> -			goto out;
>>>>>>> +		if (!unmapped) {
>>>>>>> +			anon_vma = folio_get_anon_vma(folio);
>>>>>>> +			if (!anon_vma) {
>>>>>>> +				ret = -EBUSY;
>>>>>>> +				goto out;
>>>>>>> +			}
>>>>>>> +			anon_vma_lock_write(anon_vma);
>>>>>>> 		}
>>>>>>> 		mapping = NULL;
>>>>>>> -		anon_vma_lock_write(anon_vma);
>>>>>>> 	} else {
>>>>>>> 		unsigned int min_order;
>>>>>>> 		gfp_t gfp;
>>>>>>> @@ -3838,7 +3832,8 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>>>>>> 		goto out_unlock;
>>>>>>> 	}
>>>>>>>
>>>>>>> -	unmap_folio(folio);
>>>>>>> +	if (!unmapped)
>>>>>>> +		unmap_folio(folio);
>>>>>>>
>>>>>>> 	/* block interrupt reentry in xa_lock and spinlock */
>>>>>>> 	local_irq_disable();
>>>>>>> @@ -3925,10 +3920,13 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>>>>>>
>>>>>>> 			next = folio_next(new_folio);
>>>>>>>
>>>>>>> +			zone_device_private_split_cb(folio, new_folio);
>>>>>>> +
>>>>>>> 			expected_refs = folio_expected_ref_count(new_folio) + 1;
>>>>>>> 			folio_ref_unfreeze(new_folio, expected_refs);
>>>>>>>
>>>>>>> -			lru_add_split_folio(folio, new_folio, lruvec, list);
>>>>>>> +			if (!unmapped)
>>>>>>> +				lru_add_split_folio(folio, new_folio, lruvec, list);
>>>>>>>
>>>>>>> 			/*
>>>>>>> 			 * Anonymous folio with swap cache.
>>>>>>> @@ -3959,6 +3957,8 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>>>>>> 			__filemap_remove_folio(new_folio, NULL);
>>>>>>> 			folio_put_refs(new_folio, nr_pages);
>>>>>>> 		}
>>>>>>> +
>>>>>>> +		zone_device_private_split_cb(folio, NULL);
>>>>>>> 		/*
>>>>>>> 		 * Unfreeze @folio only after all page cache entries, which
>>>>>>> 		 * used to point to it, have been updated with new folios.
>>>>>>> @@ -3982,6 +3982,9 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>>>>>>
>>>>>>> 	local_irq_enable();
>>>>>>>
>>>>>>> +	if (unmapped)
>>>>>>> +		return ret;
>>>>>>
>>>>>> As the comments of __folio_split() and __split_huge_page_to_list_to_order()
>>>>>> mention:
>>>>>>
>>>>>>   * The large folio must be locked
>>>>>>   * After splitting, the after-split folio containing @lock_at remains locked
>>>>>>
>>>>>> But here we seem to change the prerequisites.
>>>>>>
>>>>>> Hmm.. I am not sure this is correct.
>>>>>>
>>>>>
>>>>> The code is correct, but you are right that the documentation needs to be updated.
>>>>> When "unmapped", we do want to leave the folios locked after the split.
>>>>
>>>> Sigh, this "unmapped" code needs so many special branches and a different locking
>>>> requirement. It should be a separate function to avoid confusion.
>>>>
>>>
>>> Yep, I have a patch for it; I am also waiting on Matthew's feedback. FYI, here is
>>> a WIP patch that can be applied on top of the series:
>>
>> Nice cleanup! Thanks.
>>
>>>
>>> ---
>>>  include/linux/huge_mm.h |   5 +-
>>>  mm/huge_memory.c        | 137 ++++++++++++++++++++++++++++++++++------
>>>  mm/migrate_device.c     |   3 +-
>>>  3 files changed, 120 insertions(+), 25 deletions(-)
>>>
>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>>> index c4a811958cda..86e1cefaf391 100644
>>> --- a/include/linux/huge_mm.h
>>> +++ b/include/linux/huge_mm.h
>>> @@ -366,7 +366,8 @@ unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long add
>>>
>>>  bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins);
>>>  int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
>>> -		unsigned int new_order, bool unmapped);
>>> +		unsigned int new_order);
>>> +int split_unmapped_folio_to_order(struct folio *folio, unsigned int new_order);
>>>  int min_order_for_split(struct folio *folio);
>>>  int split_folio_to_list(struct folio *folio, struct list_head *list);
>>>  bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>>> @@ -379,7 +380,7 @@ int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
>>>  static inline int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
>>>  		unsigned int new_order)
>>>  {
>>> -	return __split_huge_page_to_list_to_order(page, list, new_order, false);
>>> +	return __split_huge_page_to_list_to_order(page, list, new_order);
>>>  }
>>>
>>>  /*
>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>> index 8c82a0ac6e69..e20cbf68d037 100644
>>> --- a/mm/huge_memory.c
>>> +++ b/mm/huge_memory.c
>>> @@ -3711,7 +3711,6 @@ bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>>>   * @lock_at: a page within @folio to be left locked to caller
>>>   * @list: after-split folios will be put on it if non NULL
>>>   * @uniform_split: perform uniform split or not (non-uniform split)
>>> - * @unmapped: The pages are already unmapped, they are migration entries.
>>>   *
>>>   * It calls __split_unmapped_folio() to perform uniform and non-uniform split.
>>>   * It is in charge of checking whether the split is supported or not and
>>> @@ -3727,7 +3726,7 @@ bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>>>   */
>>>  static int __folio_split(struct folio *folio, unsigned int new_order,
>>>  		struct page *split_at, struct page *lock_at,
>>> -		struct list_head *list, bool uniform_split, bool unmapped)
>>> +		struct list_head *list, bool uniform_split)
>>>  {
>>>  	struct deferred_split *ds_queue;
>>>  	XA_STATE(xas, &folio->mapping->i_pages, folio->index);
>>> @@ -3777,14 +3776,12 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>>  		 * is taken to serialise against parallel split or collapse
>>>  		 * operations.
>>>  		 */
>>> -		if (!unmapped) {
>>> -			anon_vma = folio_get_anon_vma(folio);
>>> -			if (!anon_vma) {
>>> -				ret = -EBUSY;
>>> -				goto out;
>>> -			}
>>> -			anon_vma_lock_write(anon_vma);
>>> +		anon_vma = folio_get_anon_vma(folio);
>>> +		if (!anon_vma) {
>>> +			ret = -EBUSY;
>>> +			goto out;
>>>  		}
>>> +		anon_vma_lock_write(anon_vma);
>>>  		mapping = NULL;
>>>  	} else {
>>>  		unsigned int min_order;
>>> @@ -3852,8 +3849,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>>  		goto out_unlock;
>>>  	}
>>>
>>> -	if (!unmapped)
>>> -		unmap_folio(folio);
>>> +	unmap_folio(folio);
>>>
>>>  	/* block interrupt reentry in xa_lock and spinlock */
>>>  	local_irq_disable();
>>> @@ -3954,8 +3950,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>>  			expected_refs = folio_expected_ref_count(new_folio) + 1;
>>>  			folio_ref_unfreeze(new_folio, expected_refs);
>>>
>>> -			if (!unmapped)
>>> -				lru_add_split_folio(folio, new_folio, lruvec, list);
>>> +			lru_add_split_folio(folio, new_folio, lruvec, list);
>>>
>>>  			/*
>>>  			 * Anonymous folio with swap cache.
>>> @@ -4011,9 +4006,6 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>>
>>>  	local_irq_enable();
>>>
>>> -	if (unmapped)
>>> -		return ret;
>>> -
>>>  	if (nr_shmem_dropped)
>>>  		shmem_uncharge(mapping->host, nr_shmem_dropped);
>>>
>>> @@ -4057,6 +4049,111 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>>  	return ret;
>>>  }
>>>
>>> +/*
>>> + * This function is a helper for splitting folios that have already been unmapped.
>>> + * The use case is that the device or the CPU can refuse to migrate THP pages in
>>> + * the middle of migration, due to allocation issues on either side.
>>> + *
>>> + * The high-level code is copied from __folio_split. Since the pages are anonymous
>>> + * and already isolated from the LRU, the code has been simplified so as not to
>>> + * burden __folio_split with "unmapped" checks sprinkled through it.
>>
>> I wonder if it makes sense to remove the CPU-side folio from both the deferred_split
>> queue and the swap cache before migration, to further simplify
>> split_unmapped_folio_to_order(). Basically, require that device-private folios
>> are never on the deferred_split queue or in the swap cache.
>>
>
> This API can be called for non-device-private folios as well. Device-private folios
> are already not on the deferred queue. The use case is:
>
> 1. Migrate a large folio from the CPU to the device
> 2. SRC - the CPU has a THP (a large folio)
> 3. DST - the device cannot allocate a large page, hence split the SRC folio
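
So at step 3 the split runs on an already-unmapped folio, roughly like the
sketch below, where everything except split_unmapped_folio_to_order() from
your WIP patch is a hypothetical placeholder:

	/* dst THP allocation failed: src_folio is locked, unmapped
	 * (migration entries already installed) and off the LRU, so
	 * split it in place and retry the migration at order 0.
	 */
	if (!dst_thp_available) {
		ret = split_unmapped_folio_to_order(src_folio, 0);
		if (ret)
			goto abort_migration;
	}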

Right, that is what I am talking about; sorry I was not clear.
I mean that when migrating a large folio from CPU to device, the CPU large
folio can first be removed from the deferred_split queue and the swap cache
(if it is there) before the migration process begins, so that the CPU large
folio is always off the deferred_split queue and out of the swap cache. As a
result, this split function does not need to handle those two situations.
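
A rough sketch of that pre-migration step (the call site is hypothetical,
and folio_unqueue_deferred_split()/folio_free_swap() are the helper names
in recent trees, so treat the exact names as assumptions):

	/*
	 * Take the CPU large folio off the deferred_split queue and
	 * out of the swap cache up front, with src_folio locked, so
	 * the unmapped-split path never sees either state.
	 */
	if (folio_test_large(src_folio)) {
		folio_unqueue_deferred_split(src_folio);
		if (folio_test_swapcache(src_folio) &&
		    !folio_free_swap(src_folio))
			return -EBUSY;	/* swap entries still referenced */
	}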

--
Best Regards,
Yan, Zi



Thread overview: 75+ messages
2025-10-01  6:56 [v7 00/16] mm: support device-private THP Balbir Singh
2025-10-01  6:56 ` [v7 01/16] mm/zone_device: support large zone device private folios Balbir Singh
2025-10-12  6:10   ` Lance Yang
2025-10-12 22:54     ` Balbir Singh
2025-10-01  6:56 ` [v7 02/16] mm/zone_device: Rename page_free callback to folio_free Balbir Singh
2025-10-01  6:56 ` [v7 03/16] mm/huge_memory: add device-private THP support to PMD operations Balbir Singh
2025-10-12 15:46   ` Lance Yang
2025-10-13  0:01     ` Balbir Singh
2025-10-13  1:48       ` Lance Yang
2025-10-17 14:49   ` linux-next: KVM/s390x regression (was: [v7 03/16] mm/huge_memory: add device-private THP support to PMD operations) Christian Borntraeger
2025-10-17 14:54     ` linux-next: KVM/s390x regression David Hildenbrand
2025-10-17 15:01       ` Christian Borntraeger
2025-10-17 15:07         ` David Hildenbrand
2025-10-17 15:20           ` Christian Borntraeger
2025-10-17 17:07             ` David Hildenbrand
2025-10-17 21:56               ` Balbir Singh
2025-10-17 22:15                 ` David Hildenbrand
2025-10-17 22:41                   ` David Hildenbrand
2025-10-20  7:01                     ` Christian Borntraeger
2025-10-20  7:00                 ` Christian Borntraeger
2025-10-20  8:41                   ` David Hildenbrand
2025-10-20  9:04                     ` Claudio Imbrenda
2025-10-27 16:47                     ` Claudio Imbrenda
2025-10-27 16:59                       ` David Hildenbrand
2025-10-27 17:06                       ` Christian Borntraeger
2025-10-28  9:24                         ` Balbir Singh
2025-10-28 13:01                         ` [PATCH v1 0/1] KVM: s390: Fix missing present bit for gmap puds Claudio Imbrenda
2025-10-28 13:01                           ` [PATCH v1 1/1] " Claudio Imbrenda
2025-10-28 21:23                             ` Balbir Singh
2025-10-29 10:00                             ` David Hildenbrand
2025-10-29 10:20                               ` Claudio Imbrenda
2025-10-28 22:53                           ` [PATCH v1 0/1] " Andrew Morton
2025-10-01  6:56 ` [v7 04/16] mm/rmap: extend rmap and migration support device-private entries Balbir Singh
2025-10-22 11:54   ` Lance Yang
2025-10-01  6:56 ` [v7 05/16] mm/huge_memory: implement device-private THP splitting Balbir Singh
2025-10-01  6:56 ` [v7 06/16] mm/migrate_device: handle partially mapped folios during collection Balbir Singh
2025-10-01  6:56 ` [v7 07/16] mm/migrate_device: implement THP migration of zone device pages Balbir Singh
2025-10-01  6:56 ` [v7 08/16] mm/memory/fault: add THP fault handling for zone device private pages Balbir Singh
2025-10-01  6:57 ` [v7 09/16] lib/test_hmm: add zone device private THP test infrastructure Balbir Singh
2025-10-01  6:57 ` [v7 10/16] mm/memremap: add driver callback support for folio splitting Balbir Singh
2025-10-01  6:57 ` [v7 11/16] mm/migrate_device: add THP splitting during migration Balbir Singh
2025-10-13 21:17   ` Zi Yan
2025-10-13 21:33     ` Balbir Singh
2025-10-13 21:55       ` Zi Yan
2025-10-13 22:50         ` Balbir Singh
2025-10-19  8:19   ` Wei Yang
2025-10-19 22:49     ` Balbir Singh
2025-10-19 22:59       ` Zi Yan
2025-10-21 21:34         ` Balbir Singh
2025-10-22  2:59           ` Zi Yan
2025-10-22  7:16             ` Balbir Singh
2025-10-22 15:26               ` Zi Yan [this message]
2025-10-28  9:32                 ` Balbir Singh
2025-10-01  6:57 ` [v7 12/16] lib/test_hmm: add large page allocation failure testing Balbir Singh
2025-10-01  6:57 ` [v7 13/16] selftests/mm/hmm-tests: new tests for zone device THP migration Balbir Singh
2025-10-01  6:57 ` [v7 14/16] selftests/mm/hmm-tests: partial unmap, mremap and anon_write tests Balbir Singh
2025-10-01  6:57 ` [v7 15/16] selftests/mm/hmm-tests: new throughput tests including THP Balbir Singh
2025-10-01  6:57 ` [v7 16/16] gpu/drm/nouveau: enable THP support for GPU memory migration Balbir Singh
2025-10-09  3:17 ` [v7 00/16] mm: support device-private THP Andrew Morton
2025-10-09  3:26   ` Balbir Singh
2025-10-09 10:33     ` Matthew Brost
2025-10-13 22:51       ` Balbir Singh
2025-11-11 23:43       ` Andrew Morton
2025-11-11 23:52         ` Balbir Singh
2025-11-12  0:24           ` Andrew Morton
2025-11-12  0:36             ` Balbir Singh
2025-11-20  2:40           ` Matthew Brost
2025-11-20  2:50             ` Balbir Singh
2025-11-20  2:59               ` Balbir Singh
2025-11-20  3:15                 ` Matthew Brost
2025-11-20  3:58                   ` Balbir Singh
2025-11-20  5:46                     ` Balbir Singh
2025-11-20  5:53                     ` Matthew Brost
2025-11-20  6:03                       ` Balbir Singh
2025-11-20 17:27                         ` Matthew Brost
