From: Jann Horn <jannh@google.com>
To: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: david@redhat.com, hughd@google.com, willy@infradead.org,
mgorman@suse.de, muchun.song@linux.dev, vbabka@kernel.org,
akpm@linux-foundation.org, zokeefe@google.com,
rientjes@google.com, peterx@redhat.com, catalin.marinas@arm.com,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
x86@kernel.org
Subject: Re: [PATCH v2 1/7] mm: khugepaged: retract_page_tables() use pte_offset_map_rw_nolock()
Date: Thu, 7 Nov 2024 18:57:16 +0100
Message-ID: <CAG48ez18QoQdJqBXo0FW9qw5CkTUFqKD8iZ195sFud0GPCRywQ@mail.gmail.com>
In-Reply-To: <8c27c1f8-9573-4777-8397-929a15e67f60@bytedance.com>
On Thu, Nov 7, 2024 at 8:54 AM Qi Zheng <zhengqi.arch@bytedance.com> wrote:
> On 2024/11/7 05:48, Jann Horn wrote:
> > On Thu, Oct 31, 2024 at 9:14 AM Qi Zheng <zhengqi.arch@bytedance.com> wrote:
> >> In retract_page_tables(), we may modify the pmd entry after acquiring the
> >> pml and ptl, so we should also check whether the pmd entry is stable.
> >
> > Why does taking the PMD lock not guarantee that the PMD entry is stable?
>
> Because the pmd entry may have changed before we took the pmd lock, we
> need to recheck it after taking the pmd or pte lock.
You mean it could have changed from the value we obtained from
find_pmd_or_thp_or_none(mm, addr, &pmd)? I don't think that matters
though.
> >> Using pte_offset_map_rw_nolock() + pmd_same() to do it, and then we can
> >> also remove the calling of the pte_lockptr().
> >>
> >> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
> >> ---
> >> mm/khugepaged.c | 17 ++++++++++++++++-
> >> 1 file changed, 16 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> >> index 6f8d46d107b4b..6d76dde64f5fb 100644
> >> --- a/mm/khugepaged.c
> >> +++ b/mm/khugepaged.c
> >> @@ -1721,6 +1721,7 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
> >> spinlock_t *pml;
> >> spinlock_t *ptl;
> >> bool skipped_uffd = false;
> >> + pte_t *pte;
> >>
> >> /*
> >> * Check vma->anon_vma to exclude MAP_PRIVATE mappings that
> >> @@ -1756,11 +1757,25 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
> >> addr, addr + HPAGE_PMD_SIZE);
> >> mmu_notifier_invalidate_range_start(&range);
> >>
> >> + pte = pte_offset_map_rw_nolock(mm, pmd, addr, &pgt_pmd, &ptl);
> >> + if (!pte) {
> >> + mmu_notifier_invalidate_range_end(&range);
> >> + continue;
> >> + }
> >> +
> >> pml = pmd_lock(mm, pmd);
> >
> > I don't understand why you're mapping the page table before locking
> > the PMD. Doesn't that just mean we need more error checking
> > afterwards?
>
> The main purpose is to obtain the pmdval. If we don't use
> pte_offset_map_rw_nolock(), we need to be careful to recheck the pmd
> entry before calling pte_lockptr(), like this:
>
> pmdval = pmdp_get_lockless(pmd);
> pmd_lock
> recheck pmdval
> pte_lockptr(mm, pmd)
>
> Otherwise, it may cause the system to crash. Consider the following
> situation:
>
> CPU 0                                CPU 1
>
> zap_pte_range
> --> clear pmd entry
>     free pte page (by RCU)
>
>                                      retract_page_tables
>                                      --> pmd_lock
>                                          pte_lockptr(mm, pmd) <-- BOOM!!
>
> So maybe calling pte_offset_map_rw_nolock() is more convenient.
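
OK, so spelled out, the alternative you're describing is the usual
lock-then-recheck pattern - roughly something like this (an illustrative
sketch only, so we're talking about the same thing; the "out" label is
made up here and doesn't exist in retract_page_tables()):

        pmdval = pmdp_get_lockless(pmd);

        pml = pmd_lock(mm, pmd);
        /* The PTE page may have been freed via RCU since the lockless read. */
        if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pmd)))) {
                spin_unlock(pml);
                goto out;       /* illustrative label only */
        }
        /* Only now is it safe to derive the split PTE lock from the PMD. */
        ptl = pte_lockptr(mm, pmd);
        if (ptl != pml)
                spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);

That works, but it means open-coding the recheck at the call site.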
How about refactoring find_pmd_or_thp_or_none() like this, by moving
the checks of the PMD entry value into a separate helper:
-static int find_pmd_or_thp_or_none(struct mm_struct *mm,
-                                   unsigned long address,
-                                   pmd_t **pmd)
+static int check_pmd_state(pmd_t *pmd)
 {
-        pmd_t pmde;
+        pmd_t pmde = pmdp_get_lockless(*pmd);

-        *pmd = mm_find_pmd(mm, address);
-        if (!*pmd)
-                return SCAN_PMD_NULL;
-
-        pmde = pmdp_get_lockless(*pmd);
         if (pmd_none(pmde))
                 return SCAN_PMD_NONE;
         if (!pmd_present(pmde))
                 return SCAN_PMD_NULL;
         if (pmd_trans_huge(pmde))
                 return SCAN_PMD_MAPPED;
         if (pmd_devmap(pmde))
                 return SCAN_PMD_NULL;
         if (pmd_bad(pmde))
                 return SCAN_PMD_NULL;
         return SCAN_SUCCEED;
 }

+static int find_pmd_or_thp_or_none(struct mm_struct *mm,
+                                   unsigned long address,
+                                   pmd_t **pmd)
+{
+        *pmd = mm_find_pmd(mm, address);
+        if (!*pmd)
+                return SCAN_PMD_NULL;
+        return check_pmd_state(*pmd);
+}
+
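(The point of the split is that check_pmd_state() only looks at the PMD
entry value itself, so the same helper can also be called later, under the
PMD lock, to revalidate an entry we looked up earlier.)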
And simplifying retract_page_tables() a little bit like this:
         i_mmap_lock_read(mapping);
         vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
                 struct mmu_notifier_range range;
                 struct mm_struct *mm;
                 unsigned long addr;
                 pmd_t *pmd, pgt_pmd;
                 spinlock_t *pml;
                 spinlock_t *ptl;
-                bool skipped_uffd = false;
+                bool success = false;

                 /*
                  * Check vma->anon_vma to exclude MAP_PRIVATE mappings that
                  * got written to. These VMAs are likely not worth removing
                  * page tables from, as PMD-mapping is likely to be split later.
                  */
                 if (READ_ONCE(vma->anon_vma))
                         continue;

                 addr = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
@@ -1763,34 +1767,34 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
                 /*
                  * Huge page lock is still held, so normally the page table
                  * must remain empty; and we have already skipped anon_vma
                  * and userfaultfd_wp() vmas. But since the mmap_lock is not
                  * held, it is still possible for a racing userfaultfd_ioctl()
                  * to have inserted ptes or markers. Now that we hold ptlock,
                  * repeating the anon_vma check protects from one category,
                  * and repeating the userfaultfd_wp() check from another.
                  */
-                if (unlikely(vma->anon_vma || userfaultfd_wp(vma))) {
-                        skipped_uffd = true;
-                } else {
+                if (likely(!vma->anon_vma && !userfaultfd_wp(vma))) {
                         pgt_pmd = pmdp_collapse_flush(vma, addr, pmd);
                         pmdp_get_lockless_sync();
+                        success = true;
                 }

                 if (ptl != pml)
                         spin_unlock(ptl);
+drop_pml:
                 spin_unlock(pml);

                 mmu_notifier_invalidate_range_end(&range);

-                if (!skipped_uffd) {
+                if (success) {
                         mm_dec_nr_ptes(mm);
                         page_table_check_pte_clear_range(mm, addr, pgt_pmd);
                         pte_free_defer(mm, pmd_pgtable(pgt_pmd));
                 }
         }
         i_mmap_unlock_read(mapping);
And then instead of your patch, I think you can just do this?
@@ -1754,20 +1754,22 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
                  */
                 if (userfaultfd_wp(vma))
                         continue;

                 /* PTEs were notified when unmapped; but now for the PMD? */
                 mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm,
                                         addr, addr + HPAGE_PMD_SIZE);
                 mmu_notifier_invalidate_range_start(&range);

                 pml = pmd_lock(mm, pmd);
+                if (check_pmd_state(pmd) != SCAN_SUCCEED)
+                        goto drop_pml;
                 ptl = pte_lockptr(mm, pmd);
                 if (ptl != pml)
                         spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);

                 /*
                  * Huge page lock is still held, so normally the page table
                  * must remain empty; and we have already skipped anon_vma
                  * and userfaultfd_wp() vmas. But since the mmap_lock is not
                  * held, it is still possible for a racing userfaultfd_ioctl()
                  * to have inserted ptes or markers. Now that we hold ptlock,
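
To summarize, with both changes combined, the per-VMA locking sequence in
retract_page_tables() would end up looking roughly like this (a paraphrased
sketch of the end result, assembled from the diffs above, not a literal
diff):

        pml = pmd_lock(mm, pmd);
        /*
         * The PMD entry may have been cleared (and its PTE page queued for
         * freeing via RCU) since find_pmd_or_thp_or_none() looked at it, so
         * revalidate it under the PMD lock before touching the PTE page's
         * lock.
         */
        if (check_pmd_state(pmd) != SCAN_SUCCEED)
                goto drop_pml;

        ptl = pte_lockptr(mm, pmd);
        if (ptl != pml)
                spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);

        if (likely(!vma->anon_vma && !userfaultfd_wp(vma))) {
                pgt_pmd = pmdp_collapse_flush(vma, addr, pmd);
                pmdp_get_lockless_sync();
                success = true;
        }

        if (ptl != pml)
                spin_unlock(ptl);
drop_pml:
        spin_unlock(pml);
        mmu_notifier_invalidate_range_end(&range);

        if (success) {
                mm_dec_nr_ptes(mm);
                page_table_check_pte_clear_range(mm, addr, pgt_pmd);
                pte_free_defer(mm, pmd_pgtable(pgt_pmd));
        }

That way the PMD entry is revalidated exactly once, under pml, and the PTE
lock is only looked up after we know the page table is still there.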