From: Christoph Hellwig <hch@lst.de>
To: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Christoph Hellwig <hch@lst.de>,
Jerome Glisse <jglisse@redhat.com>,
Ralph Campbell <rcampbell@nvidia.com>,
Felix.Kuehling@amd.com, linux-mm@kvack.org,
John Hubbard <jhubbard@nvidia.com>,
dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
Philip Yang <Philip.Yang@amd.com>
Subject: Re: [PATCH hmm 2/8] mm/hmm: don't free the cached pgmap while scanning
Date: Mon, 16 Mar 2020 19:13:24 +0100
Message-ID: <20200316181324.GA24533@lst.de>
In-Reply-To: <20200316180713.GI20941@ziepe.ca>
On Mon, Mar 16, 2020 at 03:07:13PM -0300, Jason Gunthorpe wrote:
> I chose this to be simple without having to goto unwind it.
>
> So, instead like this:
As said, and per the previous discussion: I think just removing the
pgmap lookup is the right thing to do here. Something like this patch:
diff --git a/mm/hmm.c b/mm/hmm.c
index 3d10485bf323..9f1049815d44 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -28,7 +28,6 @@
 
 struct hmm_vma_walk {
         struct hmm_range        *range;
-        struct dev_pagemap      *pgmap;
         unsigned long           last;
         unsigned int            flags;
 };
@@ -198,15 +197,8 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
                 return hmm_vma_fault(addr, end, fault, write_fault, walk);
 
         pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
-        for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) {
-                if (pmd_devmap(pmd)) {
-                        hmm_vma_walk->pgmap = get_dev_pagemap(pfn,
-                                              hmm_vma_walk->pgmap);
-                        if (unlikely(!hmm_vma_walk->pgmap))
-                                return -EBUSY;
-                }
+        for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++)
                 pfns[i] = hmm_device_entry_from_pfn(range, pfn) | cpu_flags;
-        }
         hmm_vma_walk->last = end;
         return 0;
 }
@@ -277,15 +269,6 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
         if (fault || write_fault)
                 goto fault;
 
-        if (pte_devmap(pte)) {
-                hmm_vma_walk->pgmap = get_dev_pagemap(pte_pfn(pte),
-                                      hmm_vma_walk->pgmap);
-                if (unlikely(!hmm_vma_walk->pgmap)) {
-                        pte_unmap(ptep);
-                        return -EBUSY;
-                }
-        }
-
         /*
          * Since each architecture defines a struct page for the zero page, just
          * fall through and treat it like a normal page.
@@ -455,12 +438,6 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
 
                 pfn = pud_pfn(pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
                 for (i = 0; i < npages; ++i, ++pfn) {
-                        hmm_vma_walk->pgmap = get_dev_pagemap(pfn,
-                                              hmm_vma_walk->pgmap);
-                        if (unlikely(!hmm_vma_walk->pgmap)) {
-                                ret = -EBUSY;
-                                goto out_unlock;
-                        }
                         pfns[i] = hmm_device_entry_from_pfn(range, pfn) |
                                   cpu_flags;
                 }
@@ -614,15 +591,6 @@ long hmm_range_fault(struct hmm_range *range, unsigned int flags)
                         return -EBUSY;
                 ret = walk_page_range(mm, hmm_vma_walk.last, range->end,
                                       &hmm_walk_ops, &hmm_vma_walk);
-                /*
-                 * A pgmap is kept cached in the hmm_vma_walk to avoid expensive
-                 * searching in the probably common case that the pgmap is the
-                 * same for the entire requested range.
-                 */
-                if (hmm_vma_walk.pgmap) {
-                        put_dev_pagemap(hmm_vma_walk.pgmap);
-                        hmm_vma_walk.pgmap = NULL;
-                }
         } while (ret == -EBUSY);
 
         if (ret)
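
For reference, with the lookup gone the devmap pages are filled in exactly
like normal pages and no pgmap reference is taken anywhere in the walk.
The PMD path then boils down to roughly the following (just a sketch
assembled from the hunks above; locals and the need_fault handling are
elided, and the exact signature may differ from what ends up upstream):

static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
                              unsigned long end, uint64_t *pfns, pmd_t pmd)
{
        /* ... locals and hmm_range_need_fault() checks unchanged ... */

        if (fault || write_fault)
                return hmm_vma_fault(addr, end, fault, write_fault, walk);

        pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
        for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++)
                pfns[i] = hmm_device_entry_from_pfn(range, pfn) | cpu_flags;
        hmm_vma_walk->last = end;
        return 0;
}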