linux-mm.kvack.org archive mirror
From: Felix Kuehling <felix.kuehling@amd.com>
To: Philip Yang <yangp@amd.com>, Philip Yang <Philip.Yang@amd.com>,
	amd-gfx@lists.freedesktop.org, Linux MM <linux-mm@kvack.org>,
	Jason Gunthorpe <jgg@nvidia.com>,
	Leon Romanovsky <leonro@nvidia.com>
Subject: Re: [PATCH] drm/amdkfd: Fix svm_bo and vram page refcount
Date: Fri, 3 Oct 2025 18:16:14 -0400
Message-ID: <17ee1a4d-69cd-4be9-bd6a-2924e8731db8@amd.com>
In-Reply-To: <aa910171-bc96-d8b1-1bee-65f3ef5d1f46@amd.com>

[+Linux MM and HMM maintainers]

Please see my question below about the safety of using zone_device_page_init.

On 2025-10-03 18:02, Philip Yang wrote:
>
> On 2025-10-03 17:46, Felix Kuehling wrote:
>>
>> On 2025-10-03 17:18, Philip Yang wrote:
>>>
>>> On 2025-10-03 17:05, Felix Kuehling wrote:
>>>> On 2025-09-26 17:03, Philip Yang wrote:
>>>>> zone_device_page_init uses set_page_count to set the vram page
>>>>> refcount to 1. There is a race if step 2 happens between steps 1 and 3.
>>>>>
>>>>> 1. CPU page fault handler gets the vram page and migrates the vram page
>>>>> to a system page
>>>>> 2. GPU page fault handler migrates back to the vram page and sets the
>>>>> page refcount to 1
>>>>> 3. CPU page fault handler puts the vram page; the vram page refcount
>>>>> drops to 0 and drops the svm_bo refcount
>>>>> 4. The svm_bo refcount is 1 off because the vram page is still in use.
>>>>>
>>>>> Afterwards, this causes a use-after-free bug and a page refcount warning.
>>>>
>>>> This implies that migrations to RAM and to VRAM of the same range 
>>>> are happening at the same time. Isn't that a bigger problem? It 
>>>> means someone doing a migration is not holding the 
>>>> prange->migrate_mutex.
>>>
>>> Migration holds prange->migrate_mutex, so we don't have migration to 
>>> RAM and to VRAM of the same range at the same time. The issue is in 
>>> step 3: the CPU page fault handler do_swap_page() calls put_page() after 
>>> pgmap->ops->migrate_to_ram() returns, while the migration to VRAM is in progress.
>>
>> That's the part I don't understand. The CPU page fault handler 
>> (svm_migrate_to_ram) is holding prange->migrate_mutex until the very 
>> end. Where do we have a put_page for a zone_device page outside the 
>> prange->migrate_mutex? Do you have a backtrace?
> do_swap_page() {
>    .......
>         } else if (is_device_private_entry(entry)) {
>    ........
>
>             /*
>              * Get a page reference while we know the page can't be
>              * freed.
>              */
>             if (trylock_page(vmf->page)) {
>                 struct dev_pagemap *pgmap;
>
>                 get_page(vmf->page);
>                 pte_unmap_unlock(vmf->pte, vmf->ptl);
>                 pgmap = page_pgmap(vmf->page);
>                 ret = pgmap->ops->migrate_to_ram(vmf);
>                 unlock_page(vmf->page);
>                 put_page(vmf->page);
>
> This put_page reduces the vram page refcount to zero if migrate_to_vram 
> -> svm_migrate_get_vram_page has already called zone_device_page_init, 
> which sets the page refcount to 1.
>
> put_page must come after unlock_page because put_page may free the page, 
> so svm_migrate_get_vram_page can lock the page in this window, but the 
> page refcount then becomes 0.

OK. Then you must have hit the 
WARN_ON_ONCE(!percpu_ref_tryget_live(&page_pgmap(page)->ref)) in that 
function.

It sounds like zone_device_page_init is just unsafe to use in general. 
It assumes that pages have a 0 refcount. But I don't see a good way for 
drivers to guarantee that, because they are not in control of when the 
page refcounts for their zone-device pages get decremented.
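
For reference, zone_device_page_init() is roughly the following (a sketch from 
my reading of mm/memremap.c, so treat the details as approximate):

    void zone_device_page_init(struct page *page)
    {
            /* Drivers shouldn't be allocating pages after memunmap_pages(). */
            WARN_ON_ONCE(!percpu_ref_tryget_live(&page_pgmap(page)->ref));

            /* Assumes nobody else holds a reference to this page. */
            set_page_count(page, 1);
            lock_page(page);
    }

If someone else still holds a reference, as in the do_swap_page window above, 
set_page_count overwrites it, and that holder's later put_page underflows the 
count.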

Regards,
   Felix


>
> Regards,
>
> Philip
>
>>
>> Regards,
>>   Felix
>>
>>
>>>
>>> Regards,
>>>
>>> Philip
>>>
>>>>
>>>> Regards,
>>>>   Felix
>>>>
>>>>
>>>>>
>>>>> zone_device_page_init should not be used in page migration; change to
>>>>> get_page to fix the race bug.
>>>>>
>>>>> Add WARN_ONCE to report this issue early because the refcount bug is
>>>>> hard to investigate.
>>>>>
>>>>> Signed-off-by: Philip Yang <Philip.Yang@amd.com>
>>>>> ---
>>>>>   drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 14 +++++++++++++-
>>>>>   1 file changed, 13 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>>>>> index d10c6673f4de..15ab2db4af1d 100644
>>>>> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>>>>> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>>>>> @@ -217,7 +217,8 @@ svm_migrate_get_vram_page(struct svm_range *prange, unsigned long pfn)
>>>>>       page = pfn_to_page(pfn);
>>>>>       svm_range_bo_ref(prange->svm_bo);
>>>>>       page->zone_device_data = prange->svm_bo;
>>>>> -    zone_device_page_init(page);
>>>>> +    get_page(page);
>>>>> +    lock_page(page);
>>>>>   }
>>>>>     static void
>>>>> @@ -552,6 +553,17 @@ svm_migrate_ram_to_vram(struct svm_range *prange, uint32_t best_loc,
>>>>>       if (mpages) {
>>>>>           prange->actual_loc = best_loc;
>>>>>           prange->vram_pages += mpages;
>>>>> +        /*
>>>>> +         * Guarantee we hold the correct page refcount for all prange
>>>>> +         * vram pages and the svm_bo refcount.
>>>>> +         * After the prange is migrated to VRAM, each vram page refcount
>>>>> +         * holds one svm_bo refcount, and the vram node holds one refcount.
>>>>> +         * After a page is migrated to system memory, its vram page
>>>>> +         * refcount is reduced to 0 and svm_migrate_page_free drops the
>>>>> +         * svm_bo refcount. svm_range_vram_node_free will free the svm_bo.
>>>>> +         */
>>>>> +        WARN_ONCE(prange->vram_pages == kref_read(&prange->svm_bo->kref),
>>>>> +              "svm_bo refcount leaking\n");
>>>>>       } else if (!prange->actual_loc) {
>>>>>           /* if no page migrated and all pages from prange are at
>>>>>            * sys ram drop svm_bo got from svm_range_vram_node_new



Thread overview: 5+ messages
     [not found] <20250926210331.17401-1-Philip.Yang@amd.com>
     [not found] ` <87ae1017-5990-4d6e-b42c-7a15f5663281@amd.com>
     [not found]   ` <f3349a43-446f-f712-ac61-fa867cd74242@amd.com>
     [not found]     ` <674f455e-434d-43d2-8f4f-18f577479ac9@amd.com>
     [not found]       ` <aa910171-bc96-d8b1-1bee-65f3ef5d1f46@amd.com>
2025-10-03 22:16         ` Felix Kuehling [this message]
2025-10-06 12:55           ` Philip Yang
2025-10-06 13:21           ` Jason Gunthorpe
2025-10-06 17:51             ` Felix Kuehling
2025-10-06 18:35               ` Jason Gunthorpe
