From: James Houghton <jthoughton@google.com>
To: Mike Kravetz <mike.kravetz@oracle.com>,
	Muchun Song <songmuchun@bytedance.com>,
	 Peter Xu <peterx@redhat.com>
Cc: David Hildenbrand <david@redhat.com>,
	David Rientjes <rientjes@google.com>,
	 Axel Rasmussen <axelrasmussen@google.com>,
	Mina Almasry <almasrymina@google.com>,
	 "Zach O'Keefe" <zokeefe@google.com>,
	Manish Mishra <manish.mishra@nutanix.com>,
	 Naoya Horiguchi <naoya.horiguchi@nec.com>,
	"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
	 "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	Vlastimil Babka <vbabka@suse.cz>,
	 Baolin Wang <baolin.wang@linux.alibaba.com>,
	Miaohe Lin <linmiaohe@huawei.com>,
	 Yang Shi <shy828301@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org,  linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH v2 24/47] hugetlb: update page_vma_mapped to do high-granularity walks
Date: Thu, 15 Dec 2022 12:49:18 -0500
Message-ID: <CADrL8HU=39e8ZJkmnXNKrMM=f-v1T+SF1yykC9KzwAi6T+MA4w@mail.gmail.com>
In-Reply-To: <20221021163703.3218176-25-jthoughton@google.com>

On Fri, Oct 21, 2022 at 12:37 PM James Houghton <jthoughton@google.com> wrote:
>
> This updates the HugeTLB logic to look a lot more like the PTE-mapped
> THP logic. When a user calls us in a loop, we will update pvmw->address
> to walk to each page table entry that could possibly map the hugepage
> containing pvmw->pfn.
>
> This makes use of the new pte_order so callers know what size PTE
> they're getting.
>
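
For context, the caller pattern this supports is the usual
page_vma_mapped_walk() loop -- a minimal sketch only, where pte_order
is the field this series adds and the other fields are the existing
API:

	struct page_vma_mapped_walk pvmw = {
		.pfn = folio_pfn(folio),
		.nr_pages = folio_nr_pages(folio),
		.vma = vma,
		.address = address,
	};

	while (page_vma_mapped_walk(&pvmw)) {
		/*
		 * pvmw.address has been advanced to one mapping of
		 * the page; PAGE_SIZE << pvmw.pte_order gives the
		 * size of the PTE that maps it.
		 */
		unsigned long sz = PAGE_SIZE << pvmw.pte_order;

		/* ... operate on the entry at pvmw.pte ... */
	}
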
> Signed-off-by: James Houghton <jthoughton@google.com>
> ---
>  include/linux/rmap.h |  4 +++
>  mm/page_vma_mapped.c | 59 ++++++++++++++++++++++++++++++++++++--------
>  mm/rmap.c            | 48 +++++++++++++++++++++--------------
>  3 files changed, 83 insertions(+), 28 deletions(-)
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index e0557ede2951..d7d2d9f65a01 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -13,6 +13,7 @@
>  #include <linux/highmem.h>
>  #include <linux/pagemap.h>
>  #include <linux/memremap.h>
> +#include <linux/hugetlb.h>
>
>  /*
>   * The anon_vma heads a list of private "related" vmas, to scan if
> @@ -409,6 +410,9 @@ static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
>                 pte_unmap(pvmw->pte);
>         if (pvmw->ptl)
>                 spin_unlock(pvmw->ptl);
> +       if (pvmw->pte && is_vm_hugetlb_page(pvmw->vma) &&
> +                       hugetlb_hgm_enabled(pvmw->vma))
> +               hugetlb_vma_unlock_read(pvmw->vma);
>  }
>
>  bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index 395ca4e21c56..1994b3f9a4c2 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -133,7 +133,8 @@ static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
>   *
>   * Returns true if the page is mapped in the vma. @pvmw->pmd and @pvmw->pte point
>   * to relevant page table entries. @pvmw->ptl is locked. @pvmw->address is
> - * adjusted if needed (for PTE-mapped THPs).
> + * adjusted if needed (for PTE-mapped THPs and high-granularity--mapped HugeTLB
> + * pages).
>   *
>   * If @pvmw->pmd is set but @pvmw->pte is not, you have found PMD-mapped page
>   * (usually THP). For PTE-mapped THP, you should run page_vma_mapped_walk() in
> @@ -166,19 +167,57 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>         if (unlikely(is_vm_hugetlb_page(vma))) {
>                 struct hstate *hstate = hstate_vma(vma);
>                 unsigned long size = huge_page_size(hstate);
> -               /* The only possible mapping was handled on last iteration */
> -               if (pvmw->pte)
> -                       return not_found(pvmw);
> +               struct hugetlb_pte hpte;
> +               pte_t *pte;
> +               pte_t pteval;
> +
> +               end = (pvmw->address & huge_page_mask(hstate)) +
> +                       huge_page_size(hstate);
>
>                 /* when pud is not present, pte will be NULL */
> -               pvmw->pte = huge_pte_offset(mm, pvmw->address, size);
> -               if (!pvmw->pte)
> +               pte = huge_pte_offset(mm, pvmw->address, size);
> +               if (!pte)
>                         return false;
>
> -               pvmw->pte_order = huge_page_order(hstate);
> -               pvmw->ptl = huge_pte_lock(hstate, mm, pvmw->pte);
> -               if (!check_pte(pvmw))
> -                       return not_found(pvmw);
> +               do {
> +                       hugetlb_pte_populate(&hpte, pte, huge_page_shift(hstate),
> +                                       hpage_size_to_level(size));
> +
> +                       /*
> +                        * Do a high granularity page table walk. The vma lock
> +                        * is grabbed to prevent the page table from being
> +                        * collapsed mid-walk. It is dropped in
> +                        * page_vma_mapped_walk_done().
> +                        */
> +                       if (pvmw->pte) {
> +                               if (pvmw->ptl)
> +                                       spin_unlock(pvmw->ptl);
> +                               pvmw->ptl = NULL;
> +                               pvmw->address += PAGE_SIZE << pvmw->pte_order;
> +                               if (pvmw->address >= end)
> +                                       return not_found(pvmw);
> +                       } else if (hugetlb_hgm_enabled(vma))
> +                               /* Only grab the lock once. */
> +                               hugetlb_vma_lock_read(vma);

I realize that I can't do this -- we're already holding the
i_mmap_rwsem here, and lock ordering requires taking the VMA lock
first. It seems like we're always holding the i_mmap_rwsem for
writing in this case, so if I make hugetlb_collapse take the
i_mmap_rwsem for reading, this will be safe.

Peter, you looked at this recently [1] -- do you know if we're always
holding the i_mmap_rwsem *for writing* here?

[1] https://lore.kernel.org/linux-mm/20221209170100.973970-10-peterx@redhat.com/
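
To spell out the ordering problem (a rough sketch, not code from this
series; the collapse side shows the change I'm proposing, and the
exact call sites are illustrative):

	/* Established lock order: hugetlb vma_lock -> i_mmap_rwsem. */

	/*
	 * rmap side: page_vma_mapped_walk() runs with i_mmap_rwsem
	 * already held, so blocking on the VMA lock inverts the order.
	 */
	i_mmap_lock_write(mapping);		/* rmap caller */
		...
		hugetlb_vma_lock_read(vma);	/* this patch: inverted */

	/* collapse side, respecting the lock order: */
	hugetlb_vma_lock_write(vma);
	i_mmap_lock_read(mapping);		/* read, not write */
	/* ... collapse high-granularity page tables ... */
	i_mmap_unlock_read(mapping);
	hugetlb_vma_unlock_write(vma);
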

Thanks!

- James

> +
> +retry_walk:
> +                       hugetlb_hgm_walk(mm, vma, &hpte, pvmw->address,
> +                                       PAGE_SIZE, /*stop_at_none=*/true);
> +
> +                       pvmw->pte = hpte.ptep;
> +                       pvmw->pte_order = hpte.shift - PAGE_SHIFT;
> +                       pvmw->ptl = hugetlb_pte_lock(mm, &hpte);
> +                       pteval = huge_ptep_get(hpte.ptep);
> +                       if (pte_present(pteval) && !hugetlb_pte_present_leaf(
> +                                               &hpte, pteval)) {
> +                               /*
> +                                * Someone split from under us, so keep
> +                                * walking.
> +                                */
> +                               spin_unlock(pvmw->ptl);
> +                               goto retry_walk;
> +                       }
> +               } while (!check_pte(pvmw));
>                 return true;
>         }
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 527463c1e936..a8359584467e 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1552,17 +1552,23 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>                         flush_cache_range(vma, range.start, range.end);
>
>                         /*
> -                        * To call huge_pmd_unshare, i_mmap_rwsem must be
> -                        * held in write mode.  Caller needs to explicitly
> -                        * do this outside rmap routines.
> -                        *
> -                        * We also must hold hugetlb vma_lock in write mode.
> -                        * Lock order dictates acquiring vma_lock BEFORE
> -                        * i_mmap_rwsem.  We can only try lock here and fail
> -                        * if unsuccessful.
> +                        * If HGM is enabled, we have already grabbed the VMA
> +                        * lock for reading, and we cannot safely release it.
> +                        * Because HGM-enabled VMAs have already unshared all
> +                        * PMDs, we can safely ignore PMD unsharing here.
>                          */
> -                       if (!anon) {
> +                       if (!anon && !hugetlb_hgm_enabled(vma)) {
>                                 VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
> +                               /*
> +                                * To call huge_pmd_unshare, i_mmap_rwsem must
> +                                * be held in write mode.  Caller needs to
> +                                * explicitly do this outside rmap routines.
> +                                *
> +                                * We also must hold hugetlb vma_lock in write
> +                                * mode. Lock order dictates acquiring vma_lock
> +                                * BEFORE i_mmap_rwsem.  We can only try lock
> +                                * here and fail if unsuccessful.
> +                                */
>                                 if (!hugetlb_vma_trylock_write(vma)) {
>                                         page_vma_mapped_walk_done(&pvmw);
>                                         ret = false;
> @@ -1946,17 +1952,23 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
>                         flush_cache_range(vma, range.start, range.end);
>
>                         /*
> -                        * To call huge_pmd_unshare, i_mmap_rwsem must be
> -                        * held in write mode.  Caller needs to explicitly
> -                        * do this outside rmap routines.
> -                        *
> -                        * We also must hold hugetlb vma_lock in write mode.
> -                        * Lock order dictates acquiring vma_lock BEFORE
> -                        * i_mmap_rwsem.  We can only try lock here and
> -                        * fail if unsuccessful.
> +                        * If HGM is enabled, we have already grabbed the VMA
> +                        * lock for reading, and we cannot safely release it.
> +                        * Because HGM-enabled VMAs have already unshared all
> +                        * PMDs, we can safely ignore PMD unsharing here.
>                          */
> -                       if (!anon) {
> +                       if (!anon && !hugetlb_hgm_enabled(vma)) {
>                                 VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
> +                               /*
> +                                * To call huge_pmd_unshare, i_mmap_rwsem must
> +                                * be held in write mode.  Caller needs to
> +                                * explicitly do this outside rmap routines.
> +                                *
> +                                * We also must hold hugetlb vma_lock in write
> +                                * mode. Lock order dictates acquiring vma_lock
> +                                * BEFORE i_mmap_rwsem.  We can only try lock
> +                                * here and fail if unsuccessful.
> +                                */
>                                 if (!hugetlb_vma_trylock_write(vma)) {
>                                         page_vma_mapped_walk_done(&pvmw);
>                                         ret = false;
> --
> 2.38.0.135.g90850a2211-goog
>

