From: Jerome Glisse <jglisse@redhat.com>
To: Ira Weiny <ira.weiny@intel.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Andrew Morton <akpm@linux-foundation.org>,
Dan Williams <dan.j.williams@intel.com>,
John Hubbard <jhubbard@nvidia.com>, Arnd Bergmann <arnd@arndb.de>
Subject: Re: [PATCH v2 09/11] mm/hmm: allow to mirror vma of a file on a DAX backed filesystem v2
Date: Thu, 28 Mar 2019 22:17:49 -0400 [thread overview]
Message-ID: <20190329021748.GH16680@redhat.com> (raw)
In-Reply-To: <20190328180425.GI31324@iweiny-DESK2.sc.intel.com>
On Thu, Mar 28, 2019 at 11:04:26AM -0700, Ira Weiny wrote:
> On Mon, Mar 25, 2019 at 10:40:09AM -0400, Jerome Glisse wrote:
> > From: Jérôme Glisse <jglisse@redhat.com>
> >
> > HMM mirror is a set of device driver helpers for mirroring a range of
> > virtual addresses. It means that jobs running on the device can access
> > the same virtual addresses as the CPU threads of that process. This
> > patch adds support for mirroring mappings of files that are on a DAX
> > block device (ie ranges of virtual addresses that are mmaps of files in
> > a filesystem on a DAX block device). There is no reason not to support
> > such a case when mirroring virtual addresses on a device.
> >
> > Note that unlike the GUP code we do not take page references, hence
> > when we back off there is nothing to undo.
> >
> > Changes since v1:
> > - improved commit message
> > - squashed: Arnd Bergmann: fix unused variable warning in hmm_vma_walk_pud
> >
> > Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
> > Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: Dan Williams <dan.j.williams@intel.com>
> > Cc: John Hubbard <jhubbard@nvidia.com>
> > Cc: Arnd Bergmann <arnd@arndb.de>
> > ---
> > mm/hmm.c | 132 ++++++++++++++++++++++++++++++++++++++++++++++---------
> > 1 file changed, 111 insertions(+), 21 deletions(-)
> >
> > diff --git a/mm/hmm.c b/mm/hmm.c
> > index 64a33770813b..ce33151c6832 100644
> > --- a/mm/hmm.c
> > +++ b/mm/hmm.c
> > @@ -325,6 +325,7 @@ EXPORT_SYMBOL(hmm_mirror_unregister);
> >
> > struct hmm_vma_walk {
> > struct hmm_range *range;
> > + struct dev_pagemap *pgmap;
> > unsigned long last;
> > bool fault;
> > bool block;
> > @@ -499,6 +500,15 @@ static inline uint64_t pmd_to_hmm_pfn_flags(struct hmm_range *range, pmd_t pmd)
> > range->flags[HMM_PFN_VALID];
> > }
> >
> > +static inline uint64_t pud_to_hmm_pfn_flags(struct hmm_range *range, pud_t pud)
> > +{
> > + if (!pud_present(pud))
> > + return 0;
> > + return pud_write(pud) ? range->flags[HMM_PFN_VALID] |
> > + range->flags[HMM_PFN_WRITE] :
> > + range->flags[HMM_PFN_VALID];
> > +}
> > +
> > static int hmm_vma_handle_pmd(struct mm_walk *walk,
> > unsigned long addr,
> > unsigned long end,
> > @@ -520,8 +530,19 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk,
> > return hmm_vma_walk_hole_(addr, end, fault, write_fault, walk);
> >
> > pfn = pmd_pfn(pmd) + pte_index(addr);
> > - for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++)
> > + for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) {
> > + if (pmd_devmap(pmd)) {
> > + hmm_vma_walk->pgmap = get_dev_pagemap(pfn,
> > + hmm_vma_walk->pgmap);
> > + if (unlikely(!hmm_vma_walk->pgmap))
> > + return -EBUSY;
> > + }
> > pfns[i] = hmm_pfn_from_pfn(range, pfn) | cpu_flags;
> > + }
> > + if (hmm_vma_walk->pgmap) {
> > + put_dev_pagemap(hmm_vma_walk->pgmap);
> > + hmm_vma_walk->pgmap = NULL;
> > + }
> > hmm_vma_walk->last = end;
> > return 0;
> > }
> > @@ -608,10 +629,24 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
> > if (fault || write_fault)
> > goto fault;
> >
> > + if (pte_devmap(pte)) {
> > + hmm_vma_walk->pgmap = get_dev_pagemap(pte_pfn(pte),
> > + hmm_vma_walk->pgmap);
> > + if (unlikely(!hmm_vma_walk->pgmap))
> > + return -EBUSY;
> > + } else if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL) && pte_special(pte)) {
> > + *pfn = range->values[HMM_PFN_SPECIAL];
> > + return -EFAULT;
> > + }
> > +
> > *pfn = hmm_pfn_from_pfn(range, pte_pfn(pte)) | cpu_flags;
>
> <tag>
>
> > return 0;
> >
> > fault:
> > + if (hmm_vma_walk->pgmap) {
> > + put_dev_pagemap(hmm_vma_walk->pgmap);
> > + hmm_vma_walk->pgmap = NULL;
> > + }
> > pte_unmap(ptep);
> > /* Fault any virtual address we were asked to fault */
> > return hmm_vma_walk_hole_(addr, end, fault, write_fault, walk);
> > @@ -699,12 +734,83 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
> > return r;
> > }
> > }
> > + if (hmm_vma_walk->pgmap) {
> > + put_dev_pagemap(hmm_vma_walk->pgmap);
> > + hmm_vma_walk->pgmap = NULL;
> > + }
>
>
> Why is this here and not in hmm_vma_handle_pte()? Unless I'm just getting
> tired, this looks like the put corresponding to the get performed when
> hmm_vma_handle_pte() returns 0 at <tag> above.
This is because get_dev_pagemap() optimizes away taking the reference
when we already hold a reference on the correct dev_pagemap. So if we
released the reference within hmm_vma_handle_pte(), we would lose the
get_dev_pagemap() optimization.
Cheers,
Jérôme