linux-mm.kvack.org archive mirror
From: Gladyshev Ilya <gladyshev.ilya1@h-partners.com>
To: <patchwork@huawei.com>
Cc: <guohanjun@huawei.com>, <wangkefeng.wang@huawei.com>,
	<weiyongjun1@huawei.com>, <yusongping@huawei.com>,
	<leijitang@huawei.com>, <artem.kuzin@huawei.com>,
	<stepanov.anatoly@huawei.com>, <alexander.grubnikov@huawei.com>,
	<gorbunov.ivan@h-partners.com>, <akpm@linux-foundation.org>,
	<david@kernel.org>, <lorenzo.stoakes@oracle.com>,
	<Liam.Howlett@oracle.com>, <vbabka@suse.cz>, <rppt@kernel.org>,
	<surenb@google.com>, <mhocko@suse.com>, <ziy@nvidia.com>,
	<harry.yoo@oracle.com>, <willy@infradead.org>,
	<gladyshev.ilya1@h-partners.com>, <yuzhao@google.com>,
	<baolin.wang@linux.alibaba.com>, <muchun.song@linux.dev>,
	<linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>
Subject: [RFC PATCH 1/2] mm: make ref_unless functions unless_zero only
Date: Fri, 19 Dec 2025 12:46:38 +0000	[thread overview]
Message-ID: <d30cf64b3b48080d21a5d14b79d51a82e0f80e7e.1766145604.git.gladyshev.ilya1@h-partners.com> (raw)
In-Reply-To: <cover.1766145604.git.gladyshev.ilya1@h-partners.com>

There are no users of (folio/page)_ref_add_unless(page, nr, u) with
u != 0 [1], and all remaining callers are internal to the page
refcounting API. This allows us to safely drop the parameter and reduce
the functions' semantics to the "unless zero" case only, which will be
optimized in the following patch.

If needed, variants of these functions for the u != 0 cases can be
trivially reintroduced later using the same atomic_add_unless()
operations as before.

[1]: The last user was dropped in v5.18 kernel, commit 27674ef6c73f
("mm: remove the extra ZONE_DEVICE struct page refcount"). There is no
trace of discussion as to why this cleanup wasn't done earlier.

Co-developed-by: Gorbunov Ivan <gorbunov.ivan@h-partners.com>
Signed-off-by: Gorbunov Ivan <gorbunov.ivan@h-partners.com>
Signed-off-by: Gladyshev Ilya <gladyshev.ilya1@h-partners.com>
---
 include/linux/mm.h         |  2 +-
 include/linux/page-flags.h |  6 +++---
 include/linux/page_ref.h   | 14 +++++++-------
 3 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7c79b3369b82..f652426cc218 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1115,7 +1115,7 @@ static inline int folio_put_testzero(struct folio *folio)
  */
 static inline bool get_page_unless_zero(struct page *page)
 {
-	return page_ref_add_unless(page, 1, 0);
+	return page_ref_add_unless_zero(page, 1);
 }
 
 static inline struct folio *folio_get_nontail_page(struct page *page)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 0091ad1986bf..7c2195baf4c1 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -231,7 +231,7 @@ static __always_inline const struct page *page_fixed_fake_head(const struct page
 	return page;
 }
 
-static __always_inline bool page_count_writable(const struct page *page, int u)
+static __always_inline bool page_count_writable(const struct page *page)
 {
 	if (!static_branch_unlikely(&hugetlb_optimize_vmemmap_key))
 		return true;
@@ -257,7 +257,7 @@ static __always_inline bool page_count_writable(const struct page *page, int u)
 	 * The refcount check also prevents modification attempts to other (r/o)
 	 * tail pages that are not fake heads.
 	 */
-	if (atomic_read_acquire(&page->_refcount) == u)
+	if (!atomic_read_acquire(&page->_refcount))
 		return false;
 
 	return page_fixed_fake_head(page) == page;
@@ -268,7 +268,7 @@ static inline const struct page *page_fixed_fake_head(const struct page *page)
 	return page;
 }
 
-static inline bool page_count_writable(const struct page *page, int u)
+static inline bool page_count_writable(const struct page *page)
 {
 	return true;
 }
diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 544150d1d5fd..b0e3f4a4b4b8 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -228,14 +228,14 @@ static inline int folio_ref_dec_return(struct folio *folio)
 	return page_ref_dec_return(&folio->page);
 }
 
-static inline bool page_ref_add_unless(struct page *page, int nr, int u)
+static inline bool page_ref_add_unless_zero(struct page *page, int nr)
 {
 	bool ret = false;
 
 	rcu_read_lock();
 	/* avoid writing to the vmemmap area being remapped */
-	if (page_count_writable(page, u))
-		ret = atomic_add_unless(&page->_refcount, nr, u);
+	if (page_count_writable(page))
+		ret = atomic_add_unless(&page->_refcount, nr, 0);
 	rcu_read_unlock();
 
 	if (page_ref_tracepoint_active(page_ref_mod_unless))
@@ -243,9 +243,9 @@ static inline bool page_ref_add_unless(struct page *page, int nr, int u)
 	return ret;
 }
 
-static inline bool folio_ref_add_unless(struct folio *folio, int nr, int u)
+static inline bool folio_ref_add_unless_zero(struct folio *folio, int nr)
 {
-	return page_ref_add_unless(&folio->page, nr, u);
+	return page_ref_add_unless_zero(&folio->page, nr);
 }
 
 /**
@@ -261,12 +261,12 @@ static inline bool folio_ref_add_unless(struct folio *folio, int nr, int u)
  */
 static inline bool folio_try_get(struct folio *folio)
 {
-	return folio_ref_add_unless(folio, 1, 0);
+	return folio_ref_add_unless_zero(folio, 1);
 }
 
 static inline bool folio_ref_try_add(struct folio *folio, int count)
 {
-	return folio_ref_add_unless(folio, count, 0);
+	return folio_ref_add_unless_zero(folio, count);
 }
 
 static inline int page_ref_freeze(struct page *page, int count)
-- 
2.43.0




Thread overview: 10+ messages
2025-12-19 12:46 [RFC PATCH 0/2] mm: improve folio refcount scalability Gladyshev Ilya
2025-12-19 12:46 ` Gladyshev Ilya [this message]
2025-12-19 12:46 ` [RFC PATCH 2/2] mm: implement page refcount locking via dedicated bit Gladyshev Ilya
2025-12-19 14:50   ` Kiryl Shutsemau
2025-12-19 16:18     ` Gladyshev Ilya
2025-12-19 17:46       ` Kiryl Shutsemau
2025-12-19 19:08         ` Gladyshev Ilya
2025-12-22 13:33           ` Kiryl Shutsemau
2025-12-19 18:17   ` Gregory Price
2025-12-22 12:42     ` Gladyshev Ilya
