From: Barry Song <21cnbao@gmail.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: 21cnbao@gmail.com, baolin.wang@linux.alibaba.com,
chrisl@kernel.org, david@redhat.com, ioworker0@gmail.com,
kasong@tencent.com, linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org,
lorenzo.stoakes@oracle.com, ryan.roberts@arm.com,
v-songbaohua@oppo.com, x86@kernel.org, ying.huang@intel.com,
zhengtangquan@oppo.com
Subject: [PATCH v4 0/4] mm: batched unmap lazyfree large folios during reclamation
Date: Fri, 14 Feb 2025 22:30:11 +1300
Message-ID: <20250214093015.51024-1-21cnbao@gmail.com>
From: Barry Song <v-songbaohua@oppo.com>
Commit 735ecdfaf4e8 ("mm/vmscan: avoid split lazyfree THP during
shrink_folio_list()") prevents the splitting of MADV_FREE'd THP in
madvise.c.
However, those folios are still added to the deferred_split list in
try_to_unmap_one() because we are unmapping PTEs and removing rmap
entries one by one.
Firstly, this has made the following counter quite misleading:
/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/stats/split_deferred
The split_deferred counter was originally designed to track operations
such as partial unmap or madvise of large folios. In practice, however,
as observed by Tangquan, most split_deferred events come from memory
reclamation of aligned lazyfree mTHPs. This discrepancy makes the
split_deferred counter highly misleading.
Secondly, this approach is slow because it iterates over each PTE of a
large folio and removes its rmap one by one. Instead, all PTEs of a
PTE-mapped large folio should be unmapped at once, and the entire folio
removed from the rmap as a whole.
Thirdly, it increases the risk of a race in which lazyfree folios are
incorrectly set back to swapbacked, because the shrinker's callback may
take a speculative folio reference. Once we remove the rmap for the 1st
subpage, the folio is on the split_deferred list, so
deferred_split_scan() may call folio_try_get(folio) while we are still
unmapping the 2nd through nr_pages-th PTEs of this folio in
try_to_unmap_one(). The extra reference makes the
"ref_count == 1 + map_count" check in try_to_unmap_one() false, and the
entire mTHP is transitioned back to swap-backed.
	/*
	 * The only page refs must be one from isolation
	 * plus the rmap(s) (dropped by discard:).
	 */
	if (ref_count == 1 + map_count &&
	    (!folio_test_dirty(folio) ||
	     ...
	     (vma->vm_flags & VM_DROPPABLE))) {
		dec_mm_counter(mm, MM_ANONPAGES);
		goto discard;
	}
This patchset resolves the issue by marking only genuinely dirty folios
as swap-backed, as suggested by David, and by switching to batched
unmapping of entire folios in try_to_unmap_one(). As a result, the
deferred_split count drops to zero, and memory reclamation performance
improves significantly: reclaiming 64KiB lazyfree large folios is now
2.5x faster (the specific data is in the changelog of patch 3/4).
By the way, while the patchset is primarily aimed at PTE-mapped large
folios, Baolin and Lance also found that try_to_unmap_one() handles
lazyfree redirtied PMD-mapped large folios inefficiently: it splits
the PMD into PTEs and iterates over them. This patchset removes the
unnecessary splitting, enabling us to skip redirtied PMD-mapped large
folios 3.5x faster during memory reclamation. (The specific data can
be found in the changelog of patch 4/4.)
-v4:
* collect reviewed-by of Kefeng, Baolin, Lance, thanks!
* rebase on top of David's "mm: fixes for device-exclusive entries
(hmm)" patchset v2:
https://lore.kernel.org/all/20250210193801.781278-1-david@redhat.com/
-v3:
https://lore.kernel.org/all/20250115033808.40641-1-21cnbao@gmail.com/
* collect reviewed-by and acked-by of Baolin, David, Lance and Will.
thanks!
* refine pmd-mapped THP lazyfree code per Baolin and Lance.
* refine tlbbatch deferred flushing range support code per David.
-v2:
https://lore.kernel.org/linux-mm/20250113033901.68951-1-21cnbao@gmail.com/
* describe the background and problems more clearly in the cover letter,
per Lorenzo Stoakes;
* also handle redirtied pmd-mapped large folios per Baolin and Lance;
* handle some corner cases such as HWPOISON and pte_unused;
* fix riscv and x86 build issues.
-v1:
https://lore.kernel.org/linux-mm/20250106031711.82855-1-21cnbao@gmail.com/
Barry Song (4):
mm: Set folio swapbacked iff folios are dirty in try_to_unmap_one
mm: Support tlbbatch flush for a range of PTEs
mm: Support batched unmap for lazyfree large folios during reclamation
mm: Avoid splitting pmd for lazyfree pmd-mapped THP in try_to_unmap
arch/arm64/include/asm/tlbflush.h | 23 +++--
arch/arm64/mm/contpte.c | 2 +-
arch/riscv/include/asm/tlbflush.h | 3 +-
arch/riscv/mm/tlbflush.c | 3 +-
arch/x86/include/asm/tlbflush.h | 3 +-
mm/huge_memory.c | 24 ++++--
mm/rmap.c | 136 ++++++++++++++++++------------
7 files changed, 115 insertions(+), 79 deletions(-)
--
2.39.3 (Apple Git-146)