linux-mm.kvack.org archive mirror
From: "Yin, Fengwei" <fengwei.yin@intel.com>
To: "mike.kravetz@oracle.com" <mike.kravetz@oracle.com>
Cc: "david@redhat.com" <david@redhat.com>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"willy@infradead.org" <willy@infradead.org>,
	"naoya.horiguchi@nec.com" <naoya.horiguchi@nec.com>,
	"sidhartha.kumar@oracle.com" <sidhartha.kumar@oracle.com>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	"chu, jane" <jane.chu@oracle.com>
Subject: Re: [PATCH v3 1/5] rmap: move hugetlb try_to_unmap to dedicated function
Date: Thu, 9 Mar 2023 05:13:24 +0000
Message-ID: <2290d48c4e2d01ffd4358bd5580aeed7ffda5152.camel@intel.com>
In-Reply-To: <20230308213827.GB4005@monkey>

On Wed, 2023-03-08 at 13:38 -0800, Mike Kravetz wrote:
> On 03/06/23 17:22, Yin Fengwei wrote:
> > This prepares for the batched rmap update for large folios.
> > There is no need to handle hugetlb inside the loop; handle
> > hugetlb separately and bail out early.
> > 
> > Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
> > ---
> >  mm/rmap.c | 200 +++++++++++++++++++++++++++++++++---------------------
> >  1 file changed, 121 insertions(+), 79 deletions(-)
> 
> Looks good,
> 
> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Thanks a lot for reviewing.

> 
> A few nits below.
> 
> > 
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index ba901c416785..508d141dacc5 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -1441,6 +1441,103 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
> >         munlock_vma_folio(folio, vma, compound);
> >  }
> >  
> > +static bool try_to_unmap_one_hugetlb(struct folio *folio,
> > +               struct vm_area_struct *vma, struct mmu_notifier_range range,
> > +               struct page_vma_mapped_walk pvmw, unsigned long address,
> > +               enum ttu_flags flags)
> > +{
> > +       struct mm_struct *mm = vma->vm_mm;
> > +       pte_t pteval;
> > +       bool ret = true, anon = folio_test_anon(folio);
> > +
> > +       /*
> > +        * The try_to_unmap() is only passed a hugetlb page
> > +        * in the case where the hugetlb page is poisoned.
> > +        */
> > +       VM_BUG_ON_FOLIO(!folio_test_hwpoison(folio), folio);
> > +       /*
> > +        * huge_pmd_unshare may unmap an entire PMD page.
> > +        * There is no way of knowing exactly which PMDs may
> > +        * be cached for this mm, so we must flush them all.
> > +        * start/end were already adjusted above to cover this
> 
> nit, start/end are adjusted in caller (try_to_unmap_one) not above.
Yes. Will update the comment.
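For example, something along these lines (exact wording still to be
finalized in the next version):

	/*
	 * huge_pmd_unshare may unmap an entire PMD page.
	 * There is no way of knowing exactly which PMDs may
	 * be cached for this mm, so we must flush them all.
	 * start/end were already adjusted in the caller
	 * (try_to_unmap_one) to cover this range.
	 */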

> 
> > +        * range.
> > +        */
> > +       flush_cache_range(vma, range.start, range.end);
> > +
> > +       /*
> > +        * To call huge_pmd_unshare, i_mmap_rwsem must be
> > +        * held in write mode.  Caller needs to explicitly
> > +        * do this outside rmap routines.
> > +        *
> > +        * We also must hold hugetlb vma_lock in write mode.
> > +        * Lock order dictates acquiring vma_lock BEFORE
> > +        * i_mmap_rwsem.  We can only try lock here and fail
> > +        * if unsuccessful.
> > +        */
> > +       if (!anon) {
> > +               VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
> > +               if (!hugetlb_vma_trylock_write(vma)) {
> > +                       ret = false;
> > +                       goto out;
> > +               }
> > +               if (huge_pmd_unshare(mm, vma, address, pvmw.pte)) {
> > +                       hugetlb_vma_unlock_write(vma);
> > +                       flush_tlb_range(vma,
> > +                                       range.start, range.end);
> > +                       mmu_notifier_invalidate_range(mm,
> > +                                       range.start, range.end);
> > +                       /*
> > +                        * The ref count of the PMD page was
> > +                        * dropped which is part of the way map
> > +                        * counting is done for shared PMDs.
> > +                        * Return 'true' here.  When there is
> > +                        * no other sharing, huge_pmd_unshare
> > +                        * returns false and we will unmap the
> > +                        * actual page and drop map count
> > +                        * to zero.
> > +                        */
> > +                       goto out;
> > +               }
> > +               hugetlb_vma_unlock_write(vma);
> > +       }
> > +       pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
> > +
> > +       /*
> > +        * Now the pte is cleared. If this pte was uffd-wp armed,
> > +        * we may want to replace a none pte with a marker pte if
> > +        * it's file-backed, so we don't lose the tracking info.
> > +        */
> > +       pte_install_uffd_wp_if_needed(vma, address, pvmw.pte, pteval);
> > +
> > +       /* Set the dirty flag on the folio now the pte is gone. */
> > +       if (pte_dirty(pteval))
> 
> nit, technically, I suppose this should be huge_pte_dirty(), but it
> really is the same as pte_dirty(), which is why it works in the
> current code.
Yes. Will update to use huge_pte_dirty().
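Something along these lines (untested sketch of the change):

	/* Set the dirty flag on the folio now the pte is gone. */
	if (huge_pte_dirty(pteval))
		folio_mark_dirty(folio);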

> 
> > +               folio_mark_dirty(folio);
> > +
> > +       /* Update high watermark before we lower rss */
> > +       update_hiwater_rss(mm);
> > +
> > +       /* Poisoned hugetlb folio with TTU_HWPOISON always cleared in flags */
> > +       pteval = swp_entry_to_pte(make_hwpoison_entry(&folio->page));
> > +       set_huge_pte_at(mm, address, pvmw.pte, pteval);
> > +       hugetlb_count_sub(folio_nr_pages(folio), mm);
> > +
> > +       /*
> > +        * No need to call mmu_notifier_invalidate_range(); it has been
> > +        * done above for all cases requiring it to happen under page
> > +        * table lock before mmu_notifier_invalidate_range_end().
> > +        *
> > +        * See Documentation/mm/mmu_notifier.rst
> > +        */
> > +       page_remove_rmap(&folio->page, vma, folio_test_hugetlb(folio));
> 
> nit, we KNOW folio_test_hugetlb(folio) is true here so can just pass
> 'true'.  In addition, the same call in try_to_unmap_one is now known
> to always be false.  No need to check folio_test_hugetlb(folio)
> there as well.
The "folio_test_hugetlb(folio) -> true" changes was in patch 3. I tried
to apply "one patch for code moving and one patch for code change)" to
make review easy. But this patch already changed the code, I will move
"folio_test_hugetlb(folio)->true" from patch 3 to this one. Thanks.


Regards
Yin, Fengwei

> 


Thread overview: 13+ messages
2023-03-06  9:22 [PATCH v3 0/5] batched remove rmap in try_to_unmap_one() Yin Fengwei
2023-03-06  9:22 ` [PATCH v3 1/5] rmap: move hugetlb try_to_unmap to dedicated function Yin Fengwei
2023-03-08 21:38   ` Mike Kravetz
2023-03-09  5:13     ` Yin, Fengwei [this message]
2023-03-06  9:22 ` [PATCH v3 2/5] rmap: move page unmap operation " Yin Fengwei
2023-03-06  9:22 ` [PATCH v3 3/5] rmap: cleanup exit path of try_to_unmap_one_page() Yin Fengwei
2023-03-06  9:22 ` [PATCH v3 4/5] rmap:addd folio_remove_rmap_range() Yin Fengwei
2023-03-06  9:22 ` [PATCH v3 5/5] try_to_unmap_one: batched remove rmap, update folio refcount Yin Fengwei
2023-03-06 12:39   ` haoxin
2023-03-07  2:45     ` Yin, Fengwei
2023-03-06 21:12 ` [PATCH v3 0/5] batched remove rmap in try_to_unmap_one() Andrew Morton
2023-03-07  2:44   ` Yin, Fengwei
2023-03-09 13:56   ` Yin, Fengwei
