From: Ralph Campbell <rcampbell@nvidia.com>
To: "Christoph Hellwig" <hch@lst.de>,
	"Jason Gunthorpe" <jgg@ziepe.ca>,
	"Dan Williams" <dan.j.williams@intel.com>,
	"Bharata B Rao" <bharata@linux.ibm.com>,
	"Christian König" <christian.koenig@amd.com>,
	"Ben Skeggs" <bskeggs@redhat.com>
Cc: Jerome Glisse <jglisse@redhat.com>, <kvm-ppc@vger.kernel.org>,
	<amd-gfx@lists.freedesktop.org>,
	<dri-devel@lists.freedesktop.org>,
	<nouveau@lists.freedesktop.org>, <linux-mm@kvack.org>
Subject: Re: [PATCH 4/4] mm: check the device private page owner in hmm_range_fault
Date: Mon, 16 Mar 2020 16:11:45 -0700	[thread overview]
Message-ID: <a2d0ab4c-2494-297c-3762-af6145a35b05@nvidia.com> (raw)
In-Reply-To: <20200316193216.920734-5-hch@lst.de>


On 3/16/20 12:32 PM, Christoph Hellwig wrote:
> hmm_range_fault() will succeed for any kind of device private memory,
> even if it doesn't belong to the calling entity.  While nouveau
> has some crude checks for that, they are broken because they assume
> nouveau is the only user of device private memory.  Fix this by
> passing in an expected pgmap owner in the hmm_range_fault structure.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Fixes: 4ef589dc9b10 ("mm/hmm/devmem: device memory hotplug using ZONE_DEVICE")

Looks good.
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
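
For anyone following along, this is roughly how I'd expect a driver to opt
in on the hmm_range_fault() side (just an illustrative sketch, not part of
this patch: the notifier, pfn table, and owner names below are placeholders,
and the call matches how nouveau invokes hmm_range_fault() today if I'm
reading the tree right):

	struct hmm_range range = {
		.notifier	= &notifier,		/* driver's mmu_interval_notifier */
		.start		= start,
		.end		= end,
		.pfns		= pfns,
		.flags		= driver_pfn_flags,	/* placeholder per-driver tables */
		.values		= driver_pfn_values,
		.pfn_shift	= PAGE_SHIFT,		/* <= PAGE_SHIFT per the kernel-doc */
		/*
		 * New in this patch: only device private pages whose
		 * pgmap->owner matches this pointer are reported back
		 * as device private PFNs.
		 */
		.dev_private_owner = dmem_owner_cookie,
	};

	ret = hmm_range_fault(&range, 0);

Device private entries owned by someone else no longer pass the new
hmm_is_device_private_entry() check, so a driver never gets handed a PFN
it has no way to interpret.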

> ---
>   drivers/gpu/drm/nouveau/nouveau_dmem.c | 12 ------------
>   include/linux/hmm.h                    |  2 ++
>   mm/hmm.c                               | 10 +++++++++-
>   3 files changed, 11 insertions(+), 13 deletions(-)
> 
> diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
> index edfd0805fba4..ad89e09a0be3 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
> @@ -672,12 +672,6 @@ nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
>   	return ret;
>   }
>   
> -static inline bool
> -nouveau_dmem_page(struct nouveau_drm *drm, struct page *page)
> -{
> -	return is_device_private_page(page) && drm->dmem == page_to_dmem(page);
> -}
> -
>   void
>   nouveau_dmem_convert_pfn(struct nouveau_drm *drm,
>   			 struct hmm_range *range)
> @@ -696,12 +690,6 @@ nouveau_dmem_convert_pfn(struct nouveau_drm *drm,
>   		if (!is_device_private_page(page))
>   			continue;
>   
> -		if (!nouveau_dmem_page(drm, page)) {
> -			WARN(1, "Some unknown device memory !\n");
> -			range->pfns[i] = 0;
> -			continue;
> -		}
> -
>   		addr = nouveau_dmem_page_addr(page);
>   		range->pfns[i] &= ((1UL << range->pfn_shift) - 1);
>   		range->pfns[i] |= (addr >> PAGE_SHIFT) << range->pfn_shift;
> diff --git a/include/linux/hmm.h b/include/linux/hmm.h
> index 5e6034f105c3..bb6be4428633 100644
> --- a/include/linux/hmm.h
> +++ b/include/linux/hmm.h
> @@ -132,6 +132,7 @@ enum hmm_pfn_value_e {
>    * @pfn_flags_mask: allows to mask pfn flags so that only default_flags matter
>    * @pfn_shifts: pfn shift value (should be <= PAGE_SHIFT)
>    * @valid: pfns array did not change since it has been fill by an HMM function
> + * @dev_private_owner: owner of device private pages
>    */
>   struct hmm_range {
>   	struct mmu_interval_notifier *notifier;
> @@ -144,6 +145,7 @@ struct hmm_range {
>   	uint64_t		default_flags;
>   	uint64_t		pfn_flags_mask;
>   	uint8_t			pfn_shift;
> +	void			*dev_private_owner;
>   };
>   
>   /*
> diff --git a/mm/hmm.c b/mm/hmm.c
> index cfad65f6a67b..b75b3750e03d 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -216,6 +216,14 @@ int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
>   		unsigned long end, uint64_t *pfns, pmd_t pmd);
>   #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>   
> +static inline bool hmm_is_device_private_entry(struct hmm_range *range,
> +		swp_entry_t entry)
> +{
> +	return is_device_private_entry(entry) &&
> +		device_private_entry_to_page(entry)->pgmap->owner ==
> +		range->dev_private_owner;
> +}
> +
>   static inline uint64_t pte_to_hmm_pfn_flags(struct hmm_range *range, pte_t pte)
>   {
>   	if (pte_none(pte) || !pte_present(pte) || pte_protnone(pte))
> @@ -254,7 +262,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
>   		 * Never fault in device private pages pages, but just report
>   		 * the PFN even if not present.
>   		 */
> -		if (is_device_private_entry(entry)) {
> +		if (hmm_is_device_private_entry(range, entry)) {
>   			*pfn = hmm_device_entry_from_pfn(range,
>   					    swp_offset(entry));
>   			*pfn |= range->flags[HMM_PFN_VALID];
> 


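For completeness, the pairing on the registration side as I understand it
from patch 1/4 of this series (again a sketch with the resource setup and
error handling omitted, and dmem_owner_cookie made up): the driver stores
the same pointer in pgmap->owner when it registers its device private
memory, and that is what hmm_is_device_private_entry() compares against:

	pgmap->type  = MEMORY_DEVICE_PRIVATE;
	pgmap->owner = dmem_owner_cookie;	/* must match range.dev_private_owner */
	addr = devm_memremap_pages(dev, pgmap);

With an owner on both ends, device private pages from two different drivers
can coexist in the same mm without the kind of "unknown device memory" WARN
nouveau had to resort to.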