From: Gorbunov Ivan <gorbunov.ivan@h-partners.com>
To: <gorbunov.ivan@h-partners.com>
Cc: <david@kernel.org>, <Liam.Howlett@oracle.com>,
<akpm@linux-foundation.org>, <apopple@nvidia.com>,
<baolin.wang@linux.alibaba.com>, <gladyshev.ilya1@h-partners.com>,
<harry.yoo@oracle.com>, <kirill@shutemov.name>,
<linux-kernel@vger.kernel.org>, <linux-mm@kvack.org>,
<lorenzo.stoakes@oracle.com>, <mhocko@suse.com>,
<muchun.song@linux.dev>, <rppt@kernel.org>, <surenb@google.com>,
<torvalds@linuxfoundation.org>, <vbabka@suse.cz>,
<willy@infradead.org>, <yuzhao@google.com>, <ziy@nvidia.com>,
<artem.kuzin@huawei.com>
Subject: [PATCH v2 2/2] mm: implement page refcount locking via dedicated bit
Date: Mon, 20 Apr 2026 08:01:19 +0000 [thread overview]
Message-ID: <9936cc799ac8b637ee58ae3bf6ec0e5eeb5306e9.1776350895.git.gorbunov.ivan@h-partners.com> (raw)
In-Reply-To: <cover.1776350895.git.gorbunov.ivan@h-partners.com>
From: Gladyshev Ilya <gladyshev.ilya1@h-partners.com>
The current atomic-based page refcount implementation treats a zero
counter as dead and requires a compare-and-swap loop in folio_try_get()
to prevent incrementing a dead refcount. This CAS loop acts as a
serialization point and can become a significant bottleneck during
high-frequency file read operations.
This patch introduces PAGEREF_FROZEN_BIT to distinguish a (temporary)
zero refcount from a locked (dead/frozen) state. Because incrementing
the counter no longer affects its locked/unlocked state,
page_ref_add_unless_zero() can use an optimistic atomic_add_return()
that operates independently of the locked bit. The locked state is
handled after the increment attempt, eliminating the need for the CAS
loop.
If the locked state is detected after the atomic_add(), the refcount is
reset back with a CAS loop, eliminating the theoretical possibility of
overflow.
Co-developed-by: Gorbunov Ivan <gorbunov.ivan@h-partners.com>
Signed-off-by: Gorbunov Ivan <gorbunov.ivan@h-partners.com>
Signed-off-by: Gladyshev Ilya <gladyshev.ilya1@h-partners.com>
Acked-by: Linus Torvalds <torvalds@linuxfoundation.org>
---
include/linux/page-flags.h | 13 +++++++++++++
include/linux/page_ref.h | 28 ++++++++++++++++++++++++----
2 files changed, 37 insertions(+), 4 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 0e03d816e8b9..b3e3da91a90a 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -196,6 +196,19 @@ enum pageflags {
#define PAGEFLAGS_MASK ((1UL << NR_PAGEFLAGS) - 1)
+/* Most significant bit in page refcount */
+#define PAGEREF_FROZEN_BIT BIT(31)
+
+/* A page reference counter can be in three logical states,
+ * described below with their value ranges:
+ * state | value
+ * (1) safe with owners | 1...INT_MAX
+ * (2) safe with no owners | 0
+ * (3) frozen | INT_MIN...-1
+ *
+ * State (2) can only occur temporarily inside *_dec_and_test().
+ */
+
#ifndef __GENERATING_BOUNDS_H
/*
diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index a7a07b61d2ae..32194e953674 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -64,12 +64,17 @@ static inline void __page_ref_unfreeze(struct page *page, int v)
static inline bool __page_count_is_frozen(int count)
{
- return count == 0;
+ return (count & PAGEREF_FROZEN_BIT) != 0;
}
static inline int page_ref_count(const struct page *page)
{
- return atomic_read(&page->_refcount);
+ int val = atomic_read(&page->_refcount);
+
+ if (unlikely(val & PAGEREF_FROZEN_BIT))
+ return 0;
+
+ return val;
}
/**
@@ -191,6 +196,9 @@ static inline int page_ref_sub_and_test(struct page *page, int nr)
{
int ret = atomic_sub_and_test(nr, &page->_refcount);
+ if (ret)
+ ret = !atomic_cmpxchg_relaxed(&page->_refcount, 0, PAGEREF_FROZEN_BIT);
+
if (page_ref_tracepoint_active(page_ref_mod_and_test))
__page_ref_mod_and_test(page, -nr, ret);
return ret;
@@ -220,6 +228,9 @@ static inline int page_ref_dec_and_test(struct page *page)
{
int ret = atomic_dec_and_test(&page->_refcount);
+ if (ret)
+ ret = !atomic_cmpxchg_relaxed(&page->_refcount, 0, PAGEREF_FROZEN_BIT);
+
if (page_ref_tracepoint_active(page_ref_mod_and_test))
__page_ref_mod_and_test(page, -1, ret);
return ret;
@@ -245,9 +256,18 @@ static inline int folio_ref_dec_return(struct folio *folio)
return page_ref_dec_return(&folio->page);
}
+#define _PAGEREF_FROZEN_LIMIT ((1 << 30) | PAGEREF_FROZEN_BIT)
+
static inline bool page_ref_add_unless_zero(struct page *page, int nr)
{
- bool ret = atomic_add_unless(&page->_refcount, nr, 0);
+ bool ret;
+ int val = atomic_add_return(nr, &page->_refcount);
+ /* See the PAGEREF_FROZEN_BIT declaration in page-flags.h for details. */
+ ret = !(val & PAGEREF_FROZEN_BIT);
+
+ /* Undo the atomic_add() if the counter is frozen and dangerously large */
+ while (unlikely((unsigned int)val >= _PAGEREF_FROZEN_LIMIT))
+ val = atomic_cmpxchg_relaxed(&page->_refcount, val, PAGEREF_FROZEN_BIT);
if (page_ref_tracepoint_active(page_ref_mod_unless))
__page_ref_mod_unless(page, nr, ret);
@@ -282,7 +302,7 @@ static inline bool folio_ref_try_add(struct folio *folio, int count)
static inline int page_ref_freeze(struct page *page, int count)
{
- int ret = likely(atomic_cmpxchg(&page->_refcount, count, 0) == count);
+ int ret = likely(atomic_cmpxchg(&page->_refcount, count, PAGEREF_FROZEN_BIT) == count);
if (page_ref_tracepoint_active(page_ref_freeze))
__page_ref_freeze(page, count, ret);
--
2.43.0
Thread overview: 6+ messages
2026-04-20 8:01 [PATCH v2 0/2] mm: improve folio refcount scalability Gorbunov Ivan
2026-04-20 8:01 ` [PATCH v2 1/2] mm: drop page refcount zero state semantics Gorbunov Ivan
2026-04-20 8:01 ` Gorbunov Ivan [this message]
2026-04-20 10:07 ` [syzbot ci] Re: mm: improve folio refcount scalability syzbot ci
2026-04-20 12:29 ` Gorbunov Ivan
2026-04-20 13:21 ` syzbot ci