From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, david@kernel.org,
catalin.marinas@arm.com, will@kernel.org
Cc: lorenzo.stoakes@oracle.com, ryan.roberts@arm.com,
Liam.Howlett@oracle.com, vbabka@suse.cz, rppt@kernel.org,
surenb@google.com, mhocko@suse.com, riel@surriel.com,
harry.yoo@oracle.com, jannh@google.com, willy@infradead.org,
baohua@kernel.org, baolin.wang@linux.alibaba.com,
linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org
Subject: [PATCH 0/2] support batched checks of the references for large folios
Date: Tue, 25 Nov 2025 08:56:49 +0800 [thread overview]
Message-ID: <cover.1763976541.git.baolin.wang@linux.alibaba.com> (raw)
Currently, folio_referenced_one() always checks the young flag of each PTE
sequentially, which is inefficient for large folios. The inefficiency is
especially noticeable when reclaiming clean file-backed large folios, where
folio_referenced() shows up as a significant performance hotspot.
Moreover, on the Arm architecture, which supports contiguous PTEs, there is
already an optimization to clear the young flags for PTEs within a contiguous
range. However, this is not sufficient: we can extend it to perform batched
operations across the entire large folio, which may exceed the contiguous
range (CONT_PTE_SIZE).
By batching the young-flag checks and the TLB flushes, I observed a 33%
performance improvement in my file-backed folio reclaim tests.
BTW, I still noticed a hotspot in try_to_unmap() in my tests. I hope Barry
can resend his optimization patch for try_to_unmap() [1].
[1] https://lore.kernel.org/all/20250513084620.58231-1-21cnbao@gmail.com/
Baolin Wang (2):
arm64: mm: support batch clearing of the young flag for large folios
mm: rmap: support batched checks of the references for large folios
arch/arm64/include/asm/pgtable.h | 23 ++++++++++++-----
arch/arm64/mm/contpte.c | 44 ++++++++++++++++++++++----------
include/linux/mmu_notifier.h | 9 ++++---
include/linux/pgtable.h | 19 ++++++++++++++
mm/rmap.c | 22 ++++++++++++++--
5 files changed, 92 insertions(+), 25 deletions(-)
--
2.47.3
Thread overview: 7+ messages
2025-11-25 0:56 Baolin Wang [this message]
2025-11-25 0:56 ` [PATCH 1/2] arm64: mm: support batch clearing of the young flag " Baolin Wang
2025-11-25 0:56 ` [PATCH 2/2] mm: rmap: support batched checks of the references " Baolin Wang
2025-11-25 9:29 ` [PATCH 0/2] " Barry Song
2025-11-25 17:38 ` Kairui Song
2025-12-01 16:23 ` David Hildenbrand (Red Hat)
2025-12-02 5:37 ` Baolin Wang