From: David Hildenbrand <david@redhat.com>
To: Usama Arif <usamaarif642@gmail.com>,
	akpm@linux-foundation.org, linux-mm@kvack.org
Cc: hannes@cmpxchg.org, riel@surriel.com, shakeel.butt@linux.dev,
	roman.gushchin@linux.dev, yuzhao@google.com, npache@redhat.com,
	baohua@kernel.org, ryan.roberts@arm.com, rppt@kernel.org,
	willy@infradead.org, cerasuolodomenico@gmail.com,
	ryncsn@gmail.com, corbet@lwn.net, linux-kernel@vger.kernel.org,
	kernel-team@meta.com
Subject: Re: [PATCH] mm: convert partially_mapped set/clear operations to be atomic
Date: Thu, 12 Dec 2024 15:39:37 +0100	[thread overview]
Message-ID: <a7e4a425-6006-43bb-b311-f1c547606425@redhat.com> (raw)
In-Reply-To: <20241212135447.3530047-1-usamaarif642@gmail.com>

On 12.12.24 14:54, Usama Arif wrote:
> Other page flags in the second page, like PG_hwpoison and
> PG_anon_exclusive, can get modified concurrently. Hence, the
> partially_mapped flag needs to be changed atomically.
> 
> Fixes: 8422acdc97ed ("mm: introduce a pageflag for partially mapped folios")
> Reported-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Usama Arif <usamaarif642@gmail.com>

Fortunately we have the test-before-set checks already in place.

Acked-by: David Hildenbrand <david@redhat.com>

> ---
>   include/linux/page-flags.h | 12 ++----------
>   mm/huge_memory.c           |  8 ++++----
>   2 files changed, 6 insertions(+), 14 deletions(-)
> 
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index cf46ac720802..691506bdf2c5 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -862,18 +862,10 @@ static inline void ClearPageCompound(struct page *page)
>   	ClearPageHead(page);
>   }
>   FOLIO_FLAG(large_rmappable, FOLIO_SECOND_PAGE)
> -FOLIO_TEST_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
> -/*
> - * PG_partially_mapped is protected by deferred_split split_queue_lock,
> - * so its safe to use non-atomic set/clear.
> - */
> -__FOLIO_SET_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
> -__FOLIO_CLEAR_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
> +FOLIO_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
>   #else
>   FOLIO_FLAG_FALSE(large_rmappable)
> -FOLIO_TEST_FLAG_FALSE(partially_mapped)
> -__FOLIO_SET_FLAG_NOOP(partially_mapped)
> -__FOLIO_CLEAR_FLAG_NOOP(partially_mapped)
> +FOLIO_FLAG_FALSE(partially_mapped)
>   #endif
>   
>   #define PG_head_mask ((1UL << PG_head))
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 2da5520bfe24..120cd2cdc614 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3583,7 +3583,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
>   		    !list_empty(&folio->_deferred_list)) {
>   			ds_queue->split_queue_len--;
>   			if (folio_test_partially_mapped(folio)) {
> -				__folio_clear_partially_mapped(folio);
> +				folio_clear_partially_mapped(folio);
>   				mod_mthp_stat(folio_order(folio),
>   					      MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
>   			}
> @@ -3695,7 +3695,7 @@ bool __folio_unqueue_deferred_split(struct folio *folio)
>   	if (!list_empty(&folio->_deferred_list)) {
>   		ds_queue->split_queue_len--;
>   		if (folio_test_partially_mapped(folio)) {
> -			__folio_clear_partially_mapped(folio);
> +			folio_clear_partially_mapped(folio);
>   			mod_mthp_stat(folio_order(folio),
>   				      MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
>   		}
> @@ -3739,7 +3739,7 @@ void deferred_split_folio(struct folio *folio, bool partially_mapped)
>   	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
>   	if (partially_mapped) {
>   		if (!folio_test_partially_mapped(folio)) {
> -			__folio_set_partially_mapped(folio);
> +			folio_set_partially_mapped(folio);
>   			if (folio_test_pmd_mappable(folio))
>   				count_vm_event(THP_DEFERRED_SPLIT_PAGE);
>   			count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
> @@ -3832,7 +3832,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
>   		} else {
>   			/* We lost race with folio_put() */
>   			if (folio_test_partially_mapped(folio)) {
> -				__folio_clear_partially_mapped(folio);
> +				folio_clear_partially_mapped(folio);
>   				mod_mthp_stat(folio_order(folio),
>   					      MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
>   			}


-- 
Cheers,

David / dhildenb



Thread overview: 4+ messages
2024-12-12 13:54 Usama Arif
2024-12-12 14:39 ` David Hildenbrand [this message]
2024-12-12 17:22 ` Johannes Weiner
2024-12-12 18:07 ` Roman Gushchin
