linux-mm.kvack.org archive mirror
From: John Hubbard <jhubbard@nvidia.com>
To: Alistair Popple <apopple@nvidia.com>, <linux-mm@kvack.org>,
	Andrew Morton <akpm@linux-foundation.org>
Cc: Ralph Campbell <rcampbell@nvidia.com>,
	<nouveau@lists.freedesktop.org>,
	Matthew Wilcox <willy@infradead.org>, <stable@vger.kernel.org>
Subject: Re: [PATCH v2] mm: Take a page reference when removing device exclusive entries
Date: Wed, 29 Mar 2023 18:44:48 -0700	[thread overview]
Message-ID: <83040531-ce19-0dca-6e73-ef08407a6669@nvidia.com> (raw)
In-Reply-To: <20230330012519.804116-1-apopple@nvidia.com>

On 3/29/23 18:25, Alistair Popple wrote:
> Device exclusive page table entries are used to prevent CPU access to
> a page whilst it is being accessed from a device. Typically this is
> used to implement atomic operations when the underlying bus does not
> support atomic access. When a CPU thread encounters a device exclusive
> entry, it locks the page and restores the original entry after calling
> mmu notifiers to signal drivers that exclusive access is no longer
> available.
> 
> The device exclusive entry holds a reference to the page, making it
> safe to access the struct page whilst the entry is present. However,
> the fault handling code does not hold the PTL when taking the page
> lock. This means that if multiple threads fault concurrently on the
> device exclusive entry, one will remove the entry whilst the others
> wait on the page lock without holding a reference.
> 
> This can lead to threads locking or waiting on a folio with a zero
> refcount. Whilst mmap_lock prevents the pages from being freed via
> munmap(), they may still be freed by migration. This leads to
> warnings such as PAGE_FLAGS_CHECK_AT_FREE, because the page is still
> locked when the refcount drops to zero.
> 
> Fix this by trying to take a reference on the folio before locking
> it. The code already checks the PTE under the PTL and aborts if the
> entry is no longer there. It is also possible that the folio has been
> unmapped, freed and re-allocated, allowing a reference to be taken on
> an unrelated folio. This case is also detected by the PTE check, and
> the folio is unlocked without further changes.
> 
> Signed-off-by: Alistair Popple <apopple@nvidia.com>
> Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
> Reviewed-by: John Hubbard <jhubbard@nvidia.com>
> Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
> Cc: stable@vger.kernel.org
> 
> ---
> 
> Changes for v2:
> 
>   - Rebased to Linus master
>   - Reworded commit message
>   - Switched to using folios (thanks Matthew!)
>   - Added Reviewed-by's

v2 looks correct to me.

thanks,
-- 
John Hubbard
NVIDIA

> ---
>   mm/memory.c | 16 +++++++++++++++-
>   1 file changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index f456f3b5049c..01a23ad48a04 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3563,8 +3563,21 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
>   	struct vm_area_struct *vma = vmf->vma;
>   	struct mmu_notifier_range range;
>   
> -	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags))
> +	/*
> +	 * We need a reference to lock the folio because we don't hold
> +	 * the PTL so a racing thread can remove the device-exclusive
> +	 * entry and unmap it. If the folio is free the entry must
> +	 * have been removed already. If it happens to have already
> +	 * been re-allocated after being freed all we do is lock and
> +	 * unlock it.
> +	 */
> +	if (!folio_try_get(folio))
> +		return 0;
> +
> +	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags)) {
> +		folio_put(folio);
>   		return VM_FAULT_RETRY;
> +	}
>   	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0,
>   				vma->vm_mm, vmf->address & PAGE_MASK,
>   				(vmf->address & PAGE_MASK) + PAGE_SIZE, NULL);
> @@ -3577,6 +3590,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
>   
>   	pte_unmap_unlock(vmf->pte, vmf->ptl);
>   	folio_unlock(folio);
> +	folio_put(folio);
>   
>   	mmu_notifier_invalidate_range_end(&range);
>   	return 0;





Thread overview: 5+ messages
2023-03-30  1:25 Alistair Popple
2023-03-30  1:44 ` John Hubbard [this message]
2023-03-30  2:23 ` Christoph Hellwig
2023-03-30  3:11   ` Alistair Popple
2023-04-03 12:02 ` David Hildenbrand
