* [PATCH] mm: convert partially_mapped set/clear operations to be atomic
@ 2024-12-12 13:54 Usama Arif
2024-12-12 14:39 ` David Hildenbrand
` (2 more replies)
0 siblings, 3 replies; 4+ messages in thread
From: Usama Arif @ 2024-12-12 13:54 UTC (permalink / raw)
To: david, akpm, linux-mm
Cc: hannes, riel, shakeel.butt, roman.gushchin, yuzhao, npache,
baohua, ryan.roberts, rppt, willy, cerasuolodomenico, ryncsn,
corbet, linux-kernel, kernel-team, Usama Arif
Other page flags in the 2nd page, like PG_hwpoison and
PG_anon_exclusive, can get modified concurrently. Hence, the
partially_mapped flag needs to be changed atomically.
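Roughly speaking, this switches the generated helpers from the
non-atomic __set_bit()/__clear_bit() variants to the atomic
set_bit()/clear_bit() ones. A simplified sketch of what the page-flags
macros expand to (illustrative only, not the literal preprocessor
output):

/* before, generated by __FOLIO_SET_FLAG(partially_mapped, FOLIO_SECOND_PAGE):
 * a plain read-modify-write of the flags word */
static __always_inline void __folio_set_partially_mapped(struct folio *folio)
{
        __set_bit(PG_partially_mapped, folio_flags(folio, FOLIO_SECOND_PAGE));
}

/* after, generated by FOLIO_FLAG(partially_mapped, FOLIO_SECOND_PAGE):
 * an atomic bitop, so concurrent updates to other bits in the same
 * word (PG_hwpoison, PG_anon_exclusive, ...) are not lost */
static __always_inline void folio_set_partially_mapped(struct folio *folio)
{
        set_bit(PG_partially_mapped, folio_flags(folio, FOLIO_SECOND_PAGE));
}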
Fixes: 8422acdc97ed ("mm: introduce a pageflag for partially mapped folios")
Reported-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
include/linux/page-flags.h | 12 ++----------
mm/huge_memory.c | 8 ++++----
2 files changed, 6 insertions(+), 14 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index cf46ac720802..691506bdf2c5 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -862,18 +862,10 @@ static inline void ClearPageCompound(struct page *page)
ClearPageHead(page);
}
FOLIO_FLAG(large_rmappable, FOLIO_SECOND_PAGE)
-FOLIO_TEST_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
-/*
- * PG_partially_mapped is protected by deferred_split split_queue_lock,
- * so its safe to use non-atomic set/clear.
- */
-__FOLIO_SET_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
-__FOLIO_CLEAR_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
+FOLIO_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
#else
FOLIO_FLAG_FALSE(large_rmappable)
-FOLIO_TEST_FLAG_FALSE(partially_mapped)
-__FOLIO_SET_FLAG_NOOP(partially_mapped)
-__FOLIO_CLEAR_FLAG_NOOP(partially_mapped)
+FOLIO_FLAG_FALSE(partially_mapped)
#endif
#define PG_head_mask ((1UL << PG_head))
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2da5520bfe24..120cd2cdc614 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3583,7 +3583,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
!list_empty(&folio->_deferred_list)) {
ds_queue->split_queue_len--;
if (folio_test_partially_mapped(folio)) {
- __folio_clear_partially_mapped(folio);
+ folio_clear_partially_mapped(folio);
mod_mthp_stat(folio_order(folio),
MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
}
@@ -3695,7 +3695,7 @@ bool __folio_unqueue_deferred_split(struct folio *folio)
if (!list_empty(&folio->_deferred_list)) {
ds_queue->split_queue_len--;
if (folio_test_partially_mapped(folio)) {
- __folio_clear_partially_mapped(folio);
+ folio_clear_partially_mapped(folio);
mod_mthp_stat(folio_order(folio),
MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
}
@@ -3739,7 +3739,7 @@ void deferred_split_folio(struct folio *folio, bool partially_mapped)
spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
if (partially_mapped) {
if (!folio_test_partially_mapped(folio)) {
- __folio_set_partially_mapped(folio);
+ folio_set_partially_mapped(folio);
if (folio_test_pmd_mappable(folio))
count_vm_event(THP_DEFERRED_SPLIT_PAGE);
count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
@@ -3832,7 +3832,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
} else {
/* We lost race with folio_put() */
if (folio_test_partially_mapped(folio)) {
- __folio_clear_partially_mapped(folio);
+ folio_clear_partially_mapped(folio);
mod_mthp_stat(folio_order(folio),
MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
}
--
2.43.5
* Re: [PATCH] mm: convert partially_mapped set/clear operations to be atomic
2024-12-12 13:54 [PATCH] mm: convert partially_mapped set/clear operations to be atomic Usama Arif
@ 2024-12-12 14:39 ` David Hildenbrand
2024-12-12 17:22 ` Johannes Weiner
2024-12-12 18:07 ` Roman Gushchin
2 siblings, 0 replies; 4+ messages in thread
From: David Hildenbrand @ 2024-12-12 14:39 UTC (permalink / raw)
To: Usama Arif, akpm, linux-mm
Cc: hannes, riel, shakeel.butt, roman.gushchin, yuzhao, npache,
baohua, ryan.roberts, rppt, willy, cerasuolodomenico, ryncsn,
corbet, linux-kernel, kernel-team
On 12.12.24 14:54, Usama Arif wrote:
> Other page flags in the 2nd page, like PG_hwpoison and
> PG_anon_exclusive, can get modified concurrently. Hence, the
> partially_mapped flag needs to be changed atomically.
>
> Fixes: 8422acdc97ed ("mm: introduce a pageflag for partially mapped folios")
> Reported-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Usama Arif <usamaarif642@gmail.com>
Fortunately we have the test-before-set checks already in place.
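For reference, the test-before-set pattern being referred to, roughly
as deferred_split_folio() reads with this patch applied (a trimmed
sketch, not the full function):

        spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
        if (partially_mapped) {
                if (!folio_test_partially_mapped(folio)) {
                        /* only the first caller sets the (now atomic)
                         * flag and bumps the counters */
                        folio_set_partially_mapped(folio);
                        if (folio_test_pmd_mappable(folio))
                                count_vm_event(THP_DEFERRED_SPLIT_PAGE);
                        count_mthp_stat(folio_order(folio),
                                        MTHP_STAT_SPLIT_DEFERRED);
                }
        }
        /* ... queue handling trimmed ... */
        spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);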
Acked-by: David Hildenbrand <david@redhat.com>
> ---
> include/linux/page-flags.h | 12 ++----------
> mm/huge_memory.c | 8 ++++----
> 2 files changed, 6 insertions(+), 14 deletions(-)
>
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index cf46ac720802..691506bdf2c5 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -862,18 +862,10 @@ static inline void ClearPageCompound(struct page *page)
> ClearPageHead(page);
> }
> FOLIO_FLAG(large_rmappable, FOLIO_SECOND_PAGE)
> -FOLIO_TEST_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
> -/*
> - * PG_partially_mapped is protected by deferred_split split_queue_lock,
> - * so its safe to use non-atomic set/clear.
> - */
> -__FOLIO_SET_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
> -__FOLIO_CLEAR_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
> +FOLIO_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
> #else
> FOLIO_FLAG_FALSE(large_rmappable)
> -FOLIO_TEST_FLAG_FALSE(partially_mapped)
> -__FOLIO_SET_FLAG_NOOP(partially_mapped)
> -__FOLIO_CLEAR_FLAG_NOOP(partially_mapped)
> +FOLIO_FLAG_FALSE(partially_mapped)
> #endif
>
> #define PG_head_mask ((1UL << PG_head))
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 2da5520bfe24..120cd2cdc614 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3583,7 +3583,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
> !list_empty(&folio->_deferred_list)) {
> ds_queue->split_queue_len--;
> if (folio_test_partially_mapped(folio)) {
> - __folio_clear_partially_mapped(folio);
> + folio_clear_partially_mapped(folio);
> mod_mthp_stat(folio_order(folio),
> MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
> }
> @@ -3695,7 +3695,7 @@ bool __folio_unqueue_deferred_split(struct folio *folio)
> if (!list_empty(&folio->_deferred_list)) {
> ds_queue->split_queue_len--;
> if (folio_test_partially_mapped(folio)) {
> - __folio_clear_partially_mapped(folio);
> + folio_clear_partially_mapped(folio);
> mod_mthp_stat(folio_order(folio),
> MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
> }
> @@ -3739,7 +3739,7 @@ void deferred_split_folio(struct folio *folio, bool partially_mapped)
> spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
> if (partially_mapped) {
> if (!folio_test_partially_mapped(folio)) {
> - __folio_set_partially_mapped(folio);
> + folio_set_partially_mapped(folio);
> if (folio_test_pmd_mappable(folio))
> count_vm_event(THP_DEFERRED_SPLIT_PAGE);
> count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
> @@ -3832,7 +3832,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
> } else {
> /* We lost race with folio_put() */
> if (folio_test_partially_mapped(folio)) {
> - __folio_clear_partially_mapped(folio);
> + folio_clear_partially_mapped(folio);
> mod_mthp_stat(folio_order(folio),
> MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
> }
--
Cheers,
David / dhildenb
* Re: [PATCH] mm: convert partially_mapped set/clear operations to be atomic
2024-12-12 13:54 [PATCH] mm: convert partially_mapped set/clear operations to be atomic Usama Arif
2024-12-12 14:39 ` David Hildenbrand
@ 2024-12-12 17:22 ` Johannes Weiner
2024-12-12 18:07 ` Roman Gushchin
2 siblings, 0 replies; 4+ messages in thread
From: Johannes Weiner @ 2024-12-12 17:22 UTC (permalink / raw)
To: Usama Arif
Cc: david, akpm, linux-mm, riel, shakeel.butt, roman.gushchin,
yuzhao, npache, baohua, ryan.roberts, rppt, willy,
cerasuolodomenico, ryncsn, corbet, linux-kernel, kernel-team
On Thu, Dec 12, 2024 at 01:54:47PM +0000, Usama Arif wrote:
> Other page flags in the 2nd page, like PG_hwpoison and
> PG_anon_exclusive, can get modified concurrently. Hence, the
> partially_mapped flag needs to be changed atomically.
Can you please include a description of possible consequences if this
happens? I realize this was spotted through review, but it's still
important to break down how this might impact users.
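Purely as an illustration (not necessarily the exact scenario the patch
has in mind): a plain read-modify-write of the shared flags word can
drop a bit that another CPU sets concurrently with an atomic op. A
compilable userspace sketch of that interleaving, with made-up PG_*
values:

/* userspace sketch only, not kernel code */
#include <stdio.h>

#define PG_partially_mapped  (1UL << 0)
#define PG_hwpoison          (1UL << 1)

static unsigned long page_flags;

int main(void)
{
        unsigned long old = page_flags;          /* "CPU0": load for a non-atomic set */

        /* "CPU1": atomic set of another bit in the same word */
        __atomic_fetch_or(&page_flags, PG_hwpoison, __ATOMIC_RELAXED);

        page_flags = old | PG_partially_mapped;  /* "CPU0": store clobbers PG_hwpoison */
        printf("flags = %#lx (PG_hwpoison lost)\n", page_flags);
        return 0;
}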
> Fixes: 8422acdc97ed ("mm: introduce a pageflag for partially mapped folios")
This is in an already released kernel. Do we need this:
Cc: stable@vger.kernel.org
> Reported-by: David Hildenbrand <david@redhat.com>
Link: https://lore.kernel.org/all/e53b04ad-1827-43a2-a1ab-864c7efecf6e@redhat.com/
> Signed-off-by: Usama Arif <usamaarif642@gmail.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
* Re: [PATCH] mm: convert partially_mapped set/clear operations to be atomic
2024-12-12 13:54 [PATCH] mm: convert partially_mapped set/clear operations to be atomic Usama Arif
2024-12-12 14:39 ` David Hildenbrand
2024-12-12 17:22 ` Johannes Weiner
@ 2024-12-12 18:07 ` Roman Gushchin
2 siblings, 0 replies; 4+ messages in thread
From: Roman Gushchin @ 2024-12-12 18:07 UTC (permalink / raw)
To: Usama Arif
Cc: david, akpm, linux-mm, hannes, riel, shakeel.butt, yuzhao,
npache, baohua, ryan.roberts, rppt, willy, cerasuolodomenico,
ryncsn, corbet, linux-kernel, kernel-team
On Thu, Dec 12, 2024 at 01:54:47PM +0000, Usama Arif wrote:
> Other page flags in the 2nd page, like PG_hwpoison and
> PG_anon_exclusive, can get modified concurrently. Hence, the
> partially_mapped flag needs to be changed atomically.
>
> Fixes: 8422acdc97ed ("mm: introduce a pageflag for partially mapped folios")
> Reported-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Usama Arif <usamaarif642@gmail.com>
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
Thanks!