From: James Houghton <jthoughton@google.com>
To: "manish.mishra" <manish.mishra@nutanix.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>,
Muchun Song <songmuchun@bytedance.com>,
Peter Xu <peterx@redhat.com>,
David Hildenbrand <david@redhat.com>,
David Rientjes <rientjes@google.com>,
Axel Rasmussen <axelrasmussen@google.com>,
Mina Almasry <almasrymina@google.com>,
Jue Wang <juew@google.com>,
"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 17/26] hugetlb: update follow_hugetlb_page to support HGM
Date: Tue, 19 Jul 2022 09:19:29 -0700
Message-ID: <CADrL8HVFWidxaMZ++WfMfYb6pO2pEsDiVghe+8kKzE2kTvO9YA@mail.gmail.com>
In-Reply-To: <673a3024-bf82-3770-b737-4c7e53e70fe5@nutanix.com>
On Tue, Jul 19, 2022 at 3:48 AM manish.mishra <manish.mishra@nutanix.com> wrote:
>
>
> On 24/06/22 11:06 pm, James Houghton wrote:
> > This enables support for GUP, and it is needed for the KVM demand paging
> > self-test to work.
> >
> > One important change here is that, whereas before we never needed to
> > grab the i_mmap_sem, we now grab it for reading when doing
> > high-granularity PT walks, to prevent someone from collapsing the
> > page tables out from under us.
> >
> > Signed-off-by: James Houghton <jthoughton@google.com>
> > ---
> > mm/hugetlb.c | 70 ++++++++++++++++++++++++++++++++++++++++++----------
> > 1 file changed, 57 insertions(+), 13 deletions(-)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index f9c7daa6c090..aadfcee947cf 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -6298,14 +6298,18 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
> >  	unsigned long vaddr = *position;
> >  	unsigned long remainder = *nr_pages;
> >  	struct hstate *h = hstate_vma(vma);
> > +	struct address_space *mapping = vma->vm_file->f_mapping;
> >  	int err = -EFAULT, refs;
> > +	bool has_i_mmap_sem = false;
> >
> >  	while (vaddr < vma->vm_end && remainder) {
> >  		pte_t *pte;
> >  		spinlock_t *ptl = NULL;
> >  		bool unshare = false;
> >  		int absent;
> > +		unsigned long pages_per_hpte;
> >  		struct page *page;
> > +		struct hugetlb_pte hpte;
> >
> >  		/*
> >  		 * If we have a pending SIGKILL, don't keep faulting pages and
> > @@ -6325,9 +6329,23 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
> >  		 */
> >  		pte = huge_pte_offset(mm, vaddr & huge_page_mask(h),
> >  				      huge_page_size(h));
> > -		if (pte)
> > -			ptl = huge_pte_lock(h, mm, pte);
> > -		absent = !pte || huge_pte_none(huge_ptep_get(pte));
> > +		if (pte) {
> > +			hugetlb_pte_populate(&hpte, pte, huge_page_shift(h));
> > +			if (hugetlb_hgm_enabled(vma)) {
> > +				BUG_ON(has_i_mmap_sem);
>
> Just thinking: can we do without i_mmap_lock_read in most cases? Earlier,
> this function managed fine without i_mmap_lock_read while doing almost
> everything that is happening now.
We need something to prevent the page tables from being rearranged
while we're walking them. In this RFC, I used the i_mmap_lock. I'm
going to change it, probably to a per-VMA lock, or maybe a per-hpage
lock; I'm also trying to figure out if a scheme built on the
PTLs/hugetlb_pte_lock could work. :)
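
To make that concrete: condensed from the hunks in this patch, the
walk-side pattern the RFC uses looks like the following (just a sketch
built from this series' helpers, with the HGM-disabled path, the
has_i_mmap_sem bookkeeping, and error handling omitted):

	/*
	 * Hold the mapping lock for reading across the
	 * high-granularity walk so a concurrent collapse can't free
	 * the page tables out from under us. Both locks must be
	 * dropped before doing anything that can sleep, e.g. before
	 * calling hugetlb_fault().
	 */
	hugetlb_pte_populate(&hpte, pte, huge_page_shift(h));
	i_mmap_lock_read(mapping);
	hugetlb_walk_to(mm, &hpte, vaddr, PAGE_SIZE,
			/*stop_at_none=*/true);
	ptl = hugetlb_pte_lock(mm, &hpte);
	/* ... use the hugetlb_pte under the PTL ... */
	spin_unlock(ptl);
	i_mmap_unlock_read(mapping);

Whichever lock ends up replacing the i_mmap_lock would slot into the
same two points: taken before walking below the hstate-level PTE, and
released once we're done with the walk's result.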
>
> > +				i_mmap_lock_read(mapping);
> > +				/*
> > +				 * Need to hold the mapping semaphore for
> > +				 * reading to do a HGM walk.
> > +				 */
> > +				has_i_mmap_sem = true;
> > +				hugetlb_walk_to(mm, &hpte, vaddr, PAGE_SIZE,
> > +						/*stop_at_none=*/true);
> > +			}
> > +			ptl = hugetlb_pte_lock(mm, &hpte);
> > +		}
> > +
> > +		absent = !pte || hugetlb_pte_none(&hpte);
> >
> >  		/*
> >  		 * When coredumping, it suits get_dump_page if we just return
> > @@ -6338,8 +6356,13 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
> >  		 */
> >  		if (absent && (flags & FOLL_DUMP) &&
> >  		    !hugetlbfs_pagecache_present(h, vma, vaddr)) {
> > -			if (pte)
> > +			if (pte) {
> > +				if (has_i_mmap_sem) {
> > +					i_mmap_unlock_read(mapping);
> > +					has_i_mmap_sem = false;
> > +				}
> >  				spin_unlock(ptl);
> > +			}
> >  			remainder = 0;
> >  			break;
> >  		}
> > @@ -6359,8 +6382,13 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
> >  			vm_fault_t ret;
> >  			unsigned int fault_flags = 0;
> >
> > -			if (pte)
> > +			if (pte) {
> > +				if (has_i_mmap_sem) {
> > +					i_mmap_unlock_read(mapping);
> > +					has_i_mmap_sem = false;
> > +				}
> >  				spin_unlock(ptl);
> > +			}
> >  			if (flags & FOLL_WRITE)
> >  				fault_flags |= FAULT_FLAG_WRITE;
> >  			else if (unshare)
> > @@ -6403,8 +6431,11 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
> >  			continue;
> >  		}
> >
> > -		pfn_offset = (vaddr & ~huge_page_mask(h)) >> PAGE_SHIFT;
> > -		page = pte_page(huge_ptep_get(pte));
> > +		pfn_offset = (vaddr & ~hugetlb_pte_mask(&hpte)) >> PAGE_SHIFT;
> > +		page = pte_page(hugetlb_ptep_get(&hpte));
> > +		pages_per_hpte = hugetlb_pte_size(&hpte) / PAGE_SIZE;
> > +		if (hugetlb_hgm_enabled(vma))
> > +			page = compound_head(page);
> >
> >  		VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
> >  			       !PageAnonExclusive(page), page);
> > @@ -6414,17 +6445,21 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
> >  		 * and skip the same_page loop below.
> >  		 */
> >  		if (!pages && !vmas && !pfn_offset &&
> > -		    (vaddr + huge_page_size(h) < vma->vm_end) &&
> > -		    (remainder >= pages_per_huge_page(h))) {
> > -			vaddr += huge_page_size(h);
> > -			remainder -= pages_per_huge_page(h);
> > -			i += pages_per_huge_page(h);
> > +		    (vaddr + hugetlb_pte_size(&hpte) < vma->vm_end) &&
> > +		    (remainder >= pages_per_hpte)) {
> > +			vaddr += hugetlb_pte_size(&hpte);
> > +			remainder -= pages_per_hpte;
> > +			i += pages_per_hpte;
> >  			spin_unlock(ptl);
> > +			if (has_i_mmap_sem) {
> > +				has_i_mmap_sem = false;
> > +				i_mmap_unlock_read(mapping);
> > +			}
> >  			continue;
> >  		}
> >
> >  		/* vaddr may not be aligned to PAGE_SIZE */
> > -		refs = min3(pages_per_huge_page(h) - pfn_offset, remainder,
> > +		refs = min3(pages_per_hpte - pfn_offset, remainder,
> >  			    (vma->vm_end - ALIGN_DOWN(vaddr, PAGE_SIZE)) >> PAGE_SHIFT);
> >
> >  		if (pages || vmas)
> > @@ -6447,6 +6482,10 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
> >  			if (WARN_ON_ONCE(!try_grab_folio(pages[i], refs,
> >  							 flags))) {
> >  				spin_unlock(ptl);
> > +				if (has_i_mmap_sem) {
> > +					has_i_mmap_sem = false;
> > +					i_mmap_unlock_read(mapping);
> > +				}
> >  				remainder = 0;
> >  				err = -ENOMEM;
> >  				break;
> > @@ -6458,8 +6497,13 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
> >  		i += refs;
> >
> >  		spin_unlock(ptl);
> > +		if (has_i_mmap_sem) {
> > +			has_i_mmap_sem = false;
> > +			i_mmap_unlock_read(mapping);
> > +		}
> >  	}
> >  	*nr_pages = remainder;
> > +	BUG_ON(has_i_mmap_sem);
> >  	/*
> >  	 * setting position is actually required only if remainder is
> >  	 * not zero but it's faster not to add a "if (remainder)"
>
> Thanks
>
> Manish Mishra
>