* [PATCH v2 1/1] mm/mlock: implement folio_mlock_step() using folio_pte_batch()
@ 2024-06-03 14:07 Lance Yang
From: Lance Yang @ 2024-06-03 14:07 UTC
  To: akpm
  Cc: ryan.roberts, david, 21cnbao, baolin.wang, ziy, fengwei.yin,
	ying.huang, libang.li, willy, linux-mm, linux-kernel, Lance Yang

Let's make folio_mlock_step() simply a wrapper around folio_pte_batch(),
which greatly reduces the cost of ptep_get() when scanning a range of
contiguous PTEs (contptes).
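
As an aside, the sketch below is a minimal user-space illustration of the
batching pattern (ignoring details such as the FPB_IGNORE_* flags). Note
that struct fake_pte, batch_ptes() and main() are made-up stand-ins for
illustration only, not the kernel's pte_t or folio_pte_batch(). The sketch
reads each PTE once and stops at the first entry that is not present or
does not map the next consecutive page of the folio, which is the walk
folio_mlock_step() now delegates to folio_pte_batch().

/* Illustrative user-space sketch only -- fake_pte and batch_ptes() are
 * made-up stand-ins, not the kernel's pte_t or folio_pte_batch(). */
#include <stdio.h>

struct fake_pte {		/* stand-in for pte_t */
	int present;		/* models pte_present() */
	unsigned long pfn;	/* models pte_pfn() */
};

/*
 * Count how many consecutive entries, starting at pte[0], are present and
 * map consecutive pages of a folio that starts at folio_pfn and spans
 * folio_nr_pages pages. Each entry is read exactly once.
 */
static unsigned int batch_ptes(const struct fake_pte *pte,
			       unsigned long folio_pfn,
			       unsigned int folio_nr_pages,
			       unsigned int max_nr)
{
	unsigned int i;

	for (i = 0; i < max_nr; i++) {
		if (!pte[i].present)
			break;
		/* Stop once we leave the folio or the PFNs stop advancing. */
		if (i >= folio_nr_pages || pte[i].pfn != folio_pfn + i)
			break;
	}
	return i;
}

int main(void)
{
	/* Three PTEs map the first pages of a 4-page folio at PFN 100;
	 * the fourth entry is not present, so the batch stops at 3. */
	struct fake_pte ptes[] = {
		{ 1, 100 }, { 1, 101 }, { 1, 102 }, { 0, 0 },
	};

	printf("batched %u PTEs\n", batch_ptes(ptes, 100, 4, 4));
	return 0;
}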

Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Suggested-by: Barry Song <21cnbao@gmail.com>
Suggested-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Lance Yang <ioworker0@gmail.com>
---
v1 -> v2:
 - Remove the likely() hint (per Matthew)
 - Keep type declarations at the beginning of the function (per Matthew)
 - Make a minimum change (per Barry)
 - Pick RB from Baolin - thanks!
 - Pick AB from David - thanks!

 mm/mlock.c | 19 ++++---------------
 1 file changed, 4 insertions(+), 15 deletions(-)

diff --git a/mm/mlock.c b/mm/mlock.c
index 30b51cdea89d..52d6e401ad67 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -307,26 +307,15 @@ void munlock_folio(struct folio *folio)
 static inline unsigned int folio_mlock_step(struct folio *folio,
 		pte_t *pte, unsigned long addr, unsigned long end)
 {
-	unsigned int count, i, nr = folio_nr_pages(folio);
-	unsigned long pfn = folio_pfn(folio);
+	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
+	unsigned int count = (end - addr) >> PAGE_SHIFT;
 	pte_t ptent = ptep_get(pte);
 
 	if (!folio_test_large(folio))
 		return 1;
 
-	count = pfn + nr - pte_pfn(ptent);
-	count = min_t(unsigned int, count, (end - addr) >> PAGE_SHIFT);
-
-	for (i = 0; i < count; i++, pte++) {
-		pte_t entry = ptep_get(pte);
-
-		if (!pte_present(entry))
-			break;
-		if (pte_pfn(entry) - pfn >= nr)
-			break;
-	}
-
-	return i;
+	return folio_pte_batch(folio, addr, pte, ptent, count, fpb_flags, NULL,
+			       NULL, NULL);
 }
 
 static inline bool allow_mlock_munlock(struct folio *folio,
-- 
2.33.1



