* [PATCH v3 1/1] mm/rmap: inline folio_test_large_maybe_mapped_shared() into callers
@ 2025-04-24 15:56 Lance Yang
  2025-04-24 20:28 ` David Hildenbrand
  0 siblings, 1 reply; 2+ messages in thread
From: Lance Yang @ 2025-04-24 15:56 UTC (permalink / raw)
  To: akpm; +Cc: mingzhe.yang, david, linux-mm, linux-kernel, Lance Yang

From: Lance Yang <lance.yang@linux.dev>

To prevent the function from being used when CONFIG_MM_ID is disabled,
inline it into its few callers, which also helps keep the code where it
is expected to be.
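
A stand-alone sketch of the idea, with made-up names (FEATURE_ENABLED,
struct obj, obj_maybe_shared; none of these are the kernel's definitions):
an always-visible helper can be called from code that never checks the
config option, while open-coding the bit test behind the guard keeps every
remaining use next to the logic that actually maintains the bit.

#include <stdbool.h>
#include <stdio.h>

#define FEATURE_ENABLED 1	/* stand-in for IS_ENABLED(CONFIG_MM_ID) */
#define OBJ_SHARED_BITNUM 0	/* stand-in for FOLIO_MM_IDS_SHARED_BITNUM */

struct obj {
	unsigned long ids;	/* only kept up to date when FEATURE_ENABLED */
};

/* After the change: the caller tests the bit behind its own guard. */
static bool obj_maybe_shared(const struct obj *obj)
{
	if (!FEATURE_ENABLED)
		return true;	/* conservative answer without the feature */
	return obj->ids & (1UL << OBJ_SHARED_BITNUM);
}

int main(void)
{
	struct obj o = { .ids = 1UL << OBJ_SHARED_BITNUM };

	printf("maybe shared: %d\n", obj_maybe_shared(&o));
	return 0;
}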

Suggested-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Lance Yang <lance.yang@linux.dev>
---
v2 -> v3:
 * Inline the function, suggested by David
 * https://lore.kernel.org/all/20250418152228.20545-1-lance.yang@linux.dev

v1 -> v2:
 * Update the changelog, suggested by Andrew and David
 * https://lore.kernel.org/linux-mm/20250417124908.58543-1-ioworker0@gmail.com

 include/linux/mm.h         | 2 +-
 include/linux/page-flags.h | 4 ----
 include/linux/rmap.h       | 2 +-
 mm/memory.c                | 4 ++--
 4 files changed, 4 insertions(+), 8 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index bf55206935c4..67e3b4f9cdc8 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2303,7 +2303,7 @@ static inline bool folio_maybe_mapped_shared(struct folio *folio)
 	 */
 	if (mapcount <= 1)
 		return false;
-	return folio_test_large_maybe_mapped_shared(folio);
+	return test_bit(FOLIO_MM_IDS_SHARED_BITNUM, &folio->_mm_ids);
 }
 
 #ifndef HAVE_ARCH_MAKE_FOLIO_ACCESSIBLE
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index e6a21b62dcce..8107c2ea43c4 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -1230,10 +1230,6 @@ static inline int folio_has_private(const struct folio *folio)
 	return !!(folio->flags & PAGE_FLAGS_PRIVATE);
 }
 
-static inline bool folio_test_large_maybe_mapped_shared(const struct folio *folio)
-{
-	return test_bit(FOLIO_MM_IDS_SHARED_BITNUM, &folio->_mm_ids);
-}
 #undef PF_ANY
 #undef PF_HEAD
 #undef PF_NO_TAIL
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 6b82b618846e..c4f4903b1088 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -223,7 +223,7 @@ static inline void __folio_large_mapcount_sanity_checks(const struct folio *foli
 	VM_WARN_ON_ONCE(folio_mm_id(folio, 1) != MM_ID_DUMMY &&
 			folio->_mm_id_mapcount[1] < 0);
 	VM_WARN_ON_ONCE(!folio_mapped(folio) &&
-			folio_test_large_maybe_mapped_shared(folio));
+			test_bit(FOLIO_MM_IDS_SHARED_BITNUM, &folio->_mm_ids));
 }
 
 static __always_inline void folio_set_large_mapcount(struct folio *folio,
diff --git a/mm/memory.c b/mm/memory.c
index ba3ea0a82f7f..5e033adf67b1 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3730,7 +3730,7 @@ static bool __wp_can_reuse_large_anon_folio(struct folio *folio,
 	 * If all folio references are from mappings, and all mappings are in
 	 * the page tables of this MM, then this folio is exclusive to this MM.
 	 */
-	if (folio_test_large_maybe_mapped_shared(folio))
+	if (test_bit(FOLIO_MM_IDS_SHARED_BITNUM, &folio->_mm_ids))
 		return false;
 
 	VM_WARN_ON_ONCE(folio_test_ksm(folio));
@@ -3753,7 +3753,7 @@ static bool __wp_can_reuse_large_anon_folio(struct folio *folio,
 	folio_lock_large_mapcount(folio);
 	VM_WARN_ON_ONCE(folio_large_mapcount(folio) < folio_ref_count(folio));
 
-	if (folio_test_large_maybe_mapped_shared(folio))
+	if (test_bit(FOLIO_MM_IDS_SHARED_BITNUM, &folio->_mm_ids))
 		goto unlock;
 	if (folio_large_mapcount(folio) != folio_ref_count(folio))
 		goto unlock;
-- 
2.49.0




* Re: [PATCH v3 1/1] mm/rmap: inline folio_test_large_maybe_mapped_shared() into callers
  2025-04-24 15:56 [PATCH v3 1/1] mm/rmap: inline folio_test_large_maybe_mapped_shared() into callers Lance Yang
@ 2025-04-24 20:28 ` David Hildenbrand
  0 siblings, 0 replies; 2+ messages in thread
From: David Hildenbrand @ 2025-04-24 20:28 UTC (permalink / raw)
  To: Lance Yang, akpm; +Cc: mingzhe.yang, linux-mm, linux-kernel, Lance Yang

On 24.04.25 17:56, Lance Yang wrote:
> From: Lance Yang <lance.yang@linux.dev>
> 
> To prevent the function from being used when CONFIG_MM_ID is disabled,
> inline it into its few callers, which also helps keep the code where it
> is expected to be.
> 
> Suggested-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Lance Yang <lance.yang@linux.dev>
> ---

Yeah, this way it's harder to abuse ... having the test functions is a 
leftover from when I had set/clear functions during development and 
experimented with using a pageflag (which involves atomics even on tail 
pages unfortunately).
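
For anyone curious about that trade-off, a rough stand-alone sketch (C11
atomics standing in for the kernel's set_bit()/test_bit(); every name
below is made up): a bit packed into a shared flags word needs an atomic
read-modify-write on every set or clear, while a dedicated word that is
only written under an existing lock can be updated with plain stores.

#include <stdatomic.h>
#include <stdbool.h>

#define SHARED_BIT (1UL << 0)

/* Pageflag-style: every set/clear is an atomic read-modify-write. */
struct flag_obj {
	_Atomic unsigned long flags;
};

static void flag_obj_set_shared(struct flag_obj *o)
{
	atomic_fetch_or_explicit(&o->flags, SHARED_BIT, memory_order_relaxed);
}

/* Dedicated word serialized by a lock: plain loads and stores suffice. */
struct word_obj {
	unsigned long ids;	/* all writers hold the same lock */
};

static void word_obj_set_shared(struct word_obj *o)
{
	o->ids |= SHARED_BIT;	/* caller holds the lock */
}

static bool word_obj_test_shared(const struct word_obj *o)
{
	return o->ids & SHARED_BIT;
}

int main(void)
{
	struct flag_obj f = { 0 };
	struct word_obj w = { 0 };

	flag_obj_set_shared(&f);
	word_obj_set_shared(&w);
	return !word_obj_test_shared(&w);
}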

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb


