From: Qi Zheng <zhengqi.arch@bytedance.com>
To: david@redhat.com, hughd@google.com, willy@infradead.org,
muchun.song@linux.dev, vbabka@kernel.org,
akpm@linux-foundation.org, rppt@kernel.org,
vishal.moola@gmail.com, peterx@redhat.com, ryan.roberts@arm.com,
christophe.leroy2@cs-soprasteria.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
linux-arm-kernel@lists.infradead.org,
linuxppc-dev@lists.ozlabs.org,
Qi Zheng <zhengqi.arch@bytedance.com>
Subject: [PATCH v5 10/13] mm: page_vma_mapped_walk: map_pte() use pte_offset_map_rw_nolock()
Date: Thu, 26 Sep 2024 14:46:23 +0800
Message-ID: <2620a48f34c9f19864ab0169cdbf253d31a8fcaa.1727332572.git.zhengqi.arch@bytedance.com>
In-Reply-To: <cover.1727332572.git.zhengqi.arch@bytedance.com>

In the caller of map_pte(), we may modify pvmw->pte after acquiring
pvmw->ptl, so convert it to using pte_offset_map_rw_nolock(). At that
point the pte_same() check is not performed after pvmw->ptl is held,
so we must record pmdval when mapping the PTE and recheck it with
pmd_same() after taking the lock, to ensure the stability of pvmw->pmd.
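
For illustration, the resulting flow in map_pte() is roughly the
following sketch (a simplified restatement of the hunks below, with
the PVMW_SYNC and !pte_present() handling omitted):

again:
	pvmw->pte = pte_offset_map_rw_nolock(pvmw->vma->vm_mm, pvmw->pmd,
					     pvmw->address, pmdvalp, ptlp);
	if (!pvmw->pte)
		return false;

	/* ... examine the pte without holding the ptl ... */

	spin_lock(*ptlp);
	/*
	 * No pte_same() check is done once the ptl is held, so verify
	 * that the pmd has not changed underneath us; if it has, the
	 * mapped pte may be stale, so unmap, unlock and retry.
	 */
	if (unlikely(!pmd_same(*pmdvalp, pmdp_get_lockless(pvmw->pmd)))) {
		pte_unmap_unlock(pvmw->pte, *ptlp);
		goto again;
	}
	pvmw->ptl = *ptlp;
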
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
mm/page_vma_mapped.c | 24 ++++++++++++++++++------
1 file changed, 18 insertions(+), 6 deletions(-)

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index ae5cc42aa2087..ab1671e71cb2d 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -13,7 +13,8 @@ static inline bool not_found(struct page_vma_mapped_walk *pvmw)
return false;
}
-static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
+static bool map_pte(struct page_vma_mapped_walk *pvmw, pmd_t *pmdvalp,
+ spinlock_t **ptlp)
{
pte_t ptent;
@@ -25,6 +26,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
return !!pvmw->pte;
}
+again:
/*
* It is important to return the ptl corresponding to pte,
* in case *pvmw->pmd changes underneath us; so we need to
@@ -32,8 +34,8 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
* proceeds to loop over next ptes, and finds a match later.
* Though, in most cases, page lock already protects this.
*/
- pvmw->pte = pte_offset_map_nolock(pvmw->vma->vm_mm, pvmw->pmd,
- pvmw->address, ptlp);
+ pvmw->pte = pte_offset_map_rw_nolock(pvmw->vma->vm_mm, pvmw->pmd,
+ pvmw->address, pmdvalp, ptlp);
if (!pvmw->pte)
return false;
@@ -67,8 +69,13 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
} else if (!pte_present(ptent)) {
return false;
}
+ spin_lock(*ptlp);
+ if (unlikely(!pmd_same(*pmdvalp, pmdp_get_lockless(pvmw->pmd)))) {
+ pte_unmap_unlock(pvmw->pte, *ptlp);
+ goto again;
+ }
pvmw->ptl = *ptlp;
- spin_lock(pvmw->ptl);
+
return true;
}
@@ -278,7 +285,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
step_forward(pvmw, PMD_SIZE);
continue;
}
- if (!map_pte(pvmw, &ptl)) {
+ if (!map_pte(pvmw, &pmde, &ptl)) {
if (!pvmw->pte)
goto restart;
goto next_pte;
@@ -305,8 +312,13 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
} while (pte_none(ptep_get(pvmw->pte)));
if (!pvmw->ptl) {
+ spin_lock(ptl);
+ if (unlikely(!pmd_same(pmde, pmdp_get_lockless(pvmw->pmd)))) {
+ pte_unmap_unlock(pvmw->pte, ptl);
+ pvmw->pte = NULL;
+ goto restart;
+ }
pvmw->ptl = ptl;
- spin_lock(pvmw->ptl);
}
goto this_pte;
} while (pvmw->address < end);
--
2.20.1