From: Jerome Glisse <jglisse@redhat.com>
To: Ira Weiny <ira.weiny@intel.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Andrew Morton <akpm@linux-foundation.org>,
John Hubbard <jhubbard@nvidia.com>,
Dan Williams <dan.j.williams@intel.com>
Subject: Re: [PATCH v2 05/11] mm/hmm: improve and rename hmm_vma_fault() to hmm_range_fault() v2
Date: Thu, 28 Mar 2019 18:03:32 -0400 [thread overview]
Message-ID: <20190328220332.GD13560@redhat.com> (raw)
In-Reply-To: <20190328134351.GD31324@iweiny-DESK2.sc.intel.com>
On Thu, Mar 28, 2019 at 06:43:51AM -0700, Ira Weiny wrote:
> On Mon, Mar 25, 2019 at 10:40:05AM -0400, Jerome Glisse wrote:
> > From: Jérôme Glisse <jglisse@redhat.com>
> >
> > Rename for consistency between code, comments and documentation. Also
> > improve the comments on all the possible return values. Improve the
> > function by returning the number of populated entries in the pfns array.
> >
> > Changes since v1:
> > - updated documentation
> > - reformatted some comments
> >
> > Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
> > Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: John Hubbard <jhubbard@nvidia.com>
> > Cc: Dan Williams <dan.j.williams@intel.com>
> > ---
> > Documentation/vm/hmm.rst | 8 +---
> > include/linux/hmm.h | 13 +++++-
> > mm/hmm.c | 91 +++++++++++++++++-----------------------
> > 3 files changed, 52 insertions(+), 60 deletions(-)
> >
> > diff --git a/Documentation/vm/hmm.rst b/Documentation/vm/hmm.rst
> > index d9b27bdadd1b..61f073215a8d 100644
> > --- a/Documentation/vm/hmm.rst
> > +++ b/Documentation/vm/hmm.rst
> > @@ -190,13 +190,7 @@ When the device driver wants to populate a range of virtual addresses, it can
> > use either::
> >
> > long hmm_range_snapshot(struct hmm_range *range);
> > - int hmm_vma_fault(struct vm_area_struct *vma,
> > - struct hmm_range *range,
> > - unsigned long start,
> > - unsigned long end,
> > - hmm_pfn_t *pfns,
> > - bool write,
> > - bool block);
> > + long hmm_range_fault(struct hmm_range *range, bool block);
> >
> > The first one (hmm_range_snapshot()) will only fetch present CPU page table
> > entries and will not trigger a page fault on missing or non-present entries.
> > diff --git a/include/linux/hmm.h b/include/linux/hmm.h
> > index 32206b0b1bfd..e9afd23c2eac 100644
> > --- a/include/linux/hmm.h
> > +++ b/include/linux/hmm.h
> > @@ -391,7 +391,18 @@ bool hmm_vma_range_done(struct hmm_range *range);
> > *
> > * See the function description in mm/hmm.c for further documentation.
> > */
> > -int hmm_vma_fault(struct hmm_range *range, bool block);
> > +long hmm_range_fault(struct hmm_range *range, bool block);
> > +
> > +/* This is a temporary helper to avoid merge conflict between trees. */
> > +static inline int hmm_vma_fault(struct hmm_range *range, bool block)
> > +{
> > + long ret = hmm_range_fault(range, block);
> > + if (ret == -EBUSY)
> > + ret = -EAGAIN;
> > + else if (ret == -EAGAIN)
> > + ret = -EBUSY;
> > + return ret < 0 ? ret : 0;
> > +}
> >
> > /* Below are for HMM internal use only! Not to be used by device driver! */
> > void hmm_mm_destroy(struct mm_struct *mm);
> > diff --git a/mm/hmm.c b/mm/hmm.c
> > index 91361aa74b8b..7860e63c3ba7 100644
> > --- a/mm/hmm.c
> > +++ b/mm/hmm.c
> > @@ -336,13 +336,13 @@ static int hmm_vma_do_fault(struct mm_walk *walk, unsigned long addr,
> > flags |= write_fault ? FAULT_FLAG_WRITE : 0;
> > ret = handle_mm_fault(vma, addr, flags);
> > if (ret & VM_FAULT_RETRY)
> > - return -EBUSY;
> > + return -EAGAIN;
> > if (ret & VM_FAULT_ERROR) {
> > *pfn = range->values[HMM_PFN_ERROR];
> > return -EFAULT;
> > }
> >
> > - return -EAGAIN;
> > + return -EBUSY;
> > }
> >
> > static int hmm_pfns_bad(unsigned long addr,
> > @@ -368,7 +368,7 @@ static int hmm_pfns_bad(unsigned long addr,
> > * @fault: should we fault or not ?
> > * @write_fault: write fault ?
> > * @walk: mm_walk structure
> > - * Returns: 0 on success, -EAGAIN after page fault, or page fault error
> > + * Returns: 0 on success, -EBUSY after page fault, or page fault error
> > *
> > * This function will be called whenever pmd_none() or pte_none() returns true,
> > * or whenever there is no page directory covering the virtual address range.
> > @@ -391,12 +391,12 @@ static int hmm_vma_walk_hole_(unsigned long addr, unsigned long end,
> >
> > ret = hmm_vma_do_fault(walk, addr, write_fault,
> > &pfns[i]);
> > - if (ret != -EAGAIN)
> > + if (ret != -EBUSY)
> > return ret;
> > }
> > }
> >
> > - return (fault || write_fault) ? -EAGAIN : 0;
> > + return (fault || write_fault) ? -EBUSY : 0;
> > }
> >
> > static inline void hmm_pte_need_fault(const struct hmm_vma_walk *hmm_vma_walk,
> > @@ -527,11 +527,11 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
> > uint64_t orig_pfn = *pfn;
> >
> > *pfn = range->values[HMM_PFN_NONE];
> > - cpu_flags = pte_to_hmm_pfn_flags(range, pte);
> > - hmm_pte_need_fault(hmm_vma_walk, orig_pfn, cpu_flags,
> > - &fault, &write_fault);
> > + fault = write_fault = false;
> >
> > if (pte_none(pte)) {
> > + hmm_pte_need_fault(hmm_vma_walk, orig_pfn, 0,
> > + &fault, &write_fault);
>
> This really threw me until I applied the patches to a tree. It looks like this
> is just optimizing away a pte_none() check. The only functional change which
> was mentioned was returning the number of populated pfns. So I spent a bit of
> time trying to figure out why hmm_pte_need_fault() needed to move _here_ to do
> that... :-(
>
> It would have been nice to mention this optimization in the commit
> message.
Yes, I should have added that to the commit message; I forgot.
>
> > if (fault || write_fault)
> > goto fault;
> > return 0;
> > @@ -570,7 +570,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
> > hmm_vma_walk->last = addr;
> > migration_entry_wait(vma->vm_mm,
> > pmdp, addr);
> > - return -EAGAIN;
> > + return -EBUSY;
> > }
> > return 0;
> > }
> > @@ -578,6 +578,10 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
> > /* Report error for everything else */
> > *pfn = range->values[HMM_PFN_ERROR];
> > return -EFAULT;
> > + } else {
> > + cpu_flags = pte_to_hmm_pfn_flags(range, pte);
> > + hmm_pte_need_fault(hmm_vma_walk, orig_pfn, cpu_flags,
> > + &fault, &write_fault);
>
> Looks like the same situation as above.
>
> > }
> >
> > if (fault || write_fault)
> > @@ -628,7 +632,7 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
> > if (fault || write_fault) {
> > hmm_vma_walk->last = addr;
> > pmd_migration_entry_wait(vma->vm_mm, pmdp);
> > - return -EAGAIN;
> > + return -EBUSY;
>
> While I am at it: why are we swapping EAGAIN and EBUSY everywhere?
It is part of the API change from hmm_vma_fault() to hmm_range_fault():
the return values are unified with those of the old hmm_vma_get_pfns()
so that all the APIs attach the same meaning to the same return value.
Cheers,
Jérôme