From: Lance Yang <ioworker0@gmail.com>
To: David Hildenbrand <david@redhat.com>
Cc: akpm@linux-foundation.org, willy@infradead.org, sj@kernel.org,
baolin.wang@linux.alibaba.com, maskray@google.com,
ziy@nvidia.com, ryan.roberts@arm.com, 21cnbao@gmail.com,
mhocko@suse.com, fengwei.yin@intel.com, zokeefe@google.com,
shy828301@gmail.com, xiehuan09@gmail.com,
libang.li@antgroup.com, wangkefeng.wang@huawei.com,
songmuchun@bytedance.com, peterx@redhat.com, minchan@kernel.org,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v6 2/3] mm/rmap: integrate PMD-mapped folio splitting into pagewalk loop
Date: Wed, 5 Jun 2024 22:20:39 +0800 [thread overview]
Message-ID: <CAK1f24kwf4gDwK=8X4z1bM9-H6_M9QKy6-ko9pTUZij-W=40wg@mail.gmail.com> (raw)
In-Reply-To: <fd16b219-bc46-484a-8581-a21240440fa6@redhat.com>
Hi David,
On Wed, Jun 5, 2024 at 8:46 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 21.05.24 06:02, Lance Yang wrote:
> > In preparation for supporting try_to_unmap_one() to unmap PMD-mapped
> > folios, start the pagewalk first, then call split_huge_pmd_address() to
> > split the folio.
> >
> > Since TTU_SPLIT_HUGE_PMD will no longer perform immediately, we might
> > encounter a PMD-mapped THP missing the mlock in the VM_LOCKED range during
> > the page walk. It’s probably necessary to mlock this THP to prevent it from
> > being picked up during page reclaim.
> >
> > Suggested-by: David Hildenbrand <david@redhat.com>
> > Suggested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> > Signed-off-by: Lance Yang <ioworker0@gmail.com>
> > ---
>
> [...] again, sorry for the late review.
No worries at all, thanks for taking the time to review!
>
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index ddffa30c79fb..08a93347f283 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -1640,9 +1640,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> > if (flags & TTU_SYNC)
> > pvmw.flags = PVMW_SYNC;
> >
> > - if (flags & TTU_SPLIT_HUGE_PMD)
> > - split_huge_pmd_address(vma, address, false, folio);
> > -
> > /*
> > * For THP, we have to assume the worse case ie pmd for invalidation.
> > * For hugetlb, it could be much worse if we need to do pud
> > @@ -1668,20 +1665,35 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> > mmu_notifier_invalidate_range_start(&range);
> >
> > while (page_vma_mapped_walk(&pvmw)) {
> > - /* Unexpected PMD-mapped THP? */
> > - VM_BUG_ON_FOLIO(!pvmw.pte, folio);
> > -
> > /*
> > * If the folio is in an mlock()d vma, we must not swap it out.
> > */
> > if (!(flags & TTU_IGNORE_MLOCK) &&
> > (vma->vm_flags & VM_LOCKED)) {
> > /* Restore the mlock which got missed */
> > - if (!folio_test_large(folio))
> > + if (!folio_test_large(folio) ||
> > + (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
> > mlock_vma_folio(folio, vma);
>
> Can you elaborate why you think this would be required? If we would have
> performed the split_huge_pmd_address() beforehand, we would still be
> left with a large folio, no?
Yep, there would still be a large folio, but it wouldn't be PMD-mapped.
After Fengwei's series[1], the kernel supports mlock for PTE-mapped large
folios, but there are a few scenarios where we don't mlock a large folio,
such as when it crosses a VM_LOCKED VMA boundary.
- if (!folio_test_large(folio))
+ if (!folio_test_large(folio) ||
+ (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
And this check is just future-proofing and likely unnecessary. If we
encounter a PMD-mapped THP that is missing the mlock for some reason, we
can mlock it to prevent it from being picked up during page reclaim, since
it is fully mapped and doesn't cross the VMA boundary, IIUC.
What do you think?
I would appreciate any suggestions regarding this check ;)
[1] https://lore.kernel.org/all/20230918073318.1181104-3-fengwei.yin@intel.com/T/#mdab40248cf3705581d8bfb64e1ebf2d9cd81c095
>
> > goto walk_done_err;
> > }
> >
> > + if (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)) {
> > + /*
> > + * We temporarily have to drop the PTL and start once
> > + * again from that now-PTE-mapped page table.
> > + */
> > + split_huge_pmd_locked(vma, range.start, pvmw.pmd, false,
> > + folio);
>
> Using range.start here is a bit weird. Wouldn't this be pvmw.address?
> [did not check]
Hmm... we may adjust range.start before the page walk, but pvmw.address
is not adjusted.
At least for now, pvmw.address seems better. I'll adjust it as you suggested.
>
> > + pvmw.pmd = NULL;
> > + spin_unlock(pvmw.ptl);
> > + pvmw.ptl = NULL;
>
>
> Would we want a
>
> page_vma_mapped_walk_restart() that is exactly for that purpose?
Nice, let's add page_vma_mapped_walk_restart() for that purpose :)
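As a rough sketch of what such a helper would have to do — this is a
userspace toy model, not the actual kernel implementation; the struct and
function names are placeholders — the restart boils down to dropping the
PTL and clearing the cached pmd/pte so the next page_vma_mapped_walk()
iteration re-resolves the now-PTE-mapped page table:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for struct page_vma_mapped_walk. */
struct toy_walk {
	void *pmd;     /* stands in for pmd_t *pmd */
	void *pte;     /* stands in for pte_t *pte */
	bool ptl_held; /* stands in for spinlock_t *ptl */
};

/* Sketch of a page_vma_mapped_walk_restart(): forget the current
 * mapping level and release the lock, so the walk starts over from
 * the (freshly split) PTE table on the next iteration. */
static void toy_walk_restart(struct toy_walk *pvmw)
{
	if (pvmw->ptl_held)
		pvmw->ptl_held = false; /* spin_unlock(pvmw->ptl) */
	pvmw->pmd = NULL;
	pvmw->pte = NULL;
}
```

In the loop above, the caller would then clear TTU_SPLIT_HUGE_PMD and
`continue`, exactly as the patch already does by hand.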
Thanks again for your time!
Lance
>
> > + flags &= ~TTU_SPLIT_HUGE_PMD;
> > + continue;
> > + }
> > +
> > + /* Unexpected PMD-mapped THP? */
> > + VM_BUG_ON_FOLIO(!pvmw.pte, folio);
>
> --
> Cheers,
>
> David / dhildenb
>