* [PATCH mm-new v7 0/5] Improve khugepaged scan logic
@ 2026-02-07 8:16 Vernon Yang
2026-02-07 8:16 ` [PATCH mm-new v7 1/5] mm: khugepaged: add trace_mm_khugepaged_scan event Vernon Yang
` (4 more replies)
0 siblings, 5 replies; 22+ messages in thread
From: Vernon Yang @ 2026-02-07 8:16 UTC (permalink / raw)
To: akpm, david
Cc: lorenzo.stoakes, ziy, dev.jain, baohua, lance.yang, linux-mm,
linux-kernel, Vernon Yang
From: Vernon Yang <yanglincheng@kylinos.cn>
Hi all,
This series improves the khugepaged scan logic: reduce CPU consumption and
prioritize scanning tasks that access memory frequently.
The following data was traced by bpftrace[1] on a desktop system. After
the system had been left idle for 10 minutes after booting, a lot of
SCAN_PMD_MAPPED or SCAN_NO_PTE_TABLE results were observed during a full
scan by khugepaged.
@scan_pmd_status[1]: 1 ## SCAN_SUCCEED
@scan_pmd_status[6]: 2 ## SCAN_EXCEED_SHARED_PTE
@scan_pmd_status[3]: 142 ## SCAN_PMD_MAPPED
@scan_pmd_status[2]: 178 ## SCAN_NO_PTE_TABLE
total progress size: 674 MB
Total time : 419 seconds ## includes khugepaged_scan_sleep_millisecs
khugepaged exhibits the following behavior: the khugepaged list is scanned
in a FIFO manner, and as long as a task is not destroyed,
1. a task that no longer has memory that can be collapsed into hugepages
is still scanned, forever.
2. a task at the front of the khugepaged scan list is scanned first even
when it is cold.
3. every pass sleeps for khugepaged_scan_sleep_millisecs (default 10s).
If the above two cases are always scanned first, the valid scans have
to wait a long time.
For the first case, when the memory scans as SCAN_PMD_MAPPED or
SCAN_NO_PTE_TABLE, just skip it.
For the second case, if the user has explicitly informed us via
MADV_FREE that these folios will be freed, simply skip them as well.
Below are some performance test results.
kernbench results (testing on x86_64 machine):
baseline w/o patches test w/ patches
Amean user-32 18522.51 ( 0.00%) 18333.64 * 1.02%*
Amean syst-32 1137.96 ( 0.00%) 1113.79 * 2.12%*
Amean elsp-32 666.04 ( 0.00%) 659.44 * 0.99%*
BAmean-95 user-32 18520.01 ( 0.00%) 18323.57 ( 1.06%)
BAmean-95 syst-32 1137.68 ( 0.00%) 1110.50 ( 2.39%)
BAmean-95 elsp-32 665.92 ( 0.00%) 659.06 ( 1.03%)
BAmean-99 user-32 18520.01 ( 0.00%) 18323.57 ( 1.06%)
BAmean-99 syst-32 1137.68 ( 0.00%) 1110.50 ( 2.39%)
BAmean-99 elsp-32 665.92 ( 0.00%) 659.06 ( 1.03%)
Create three tasks[2]: hot1 -> cold -> hot2. After all three tasks are
created, each allocates 128MB of memory. The hot1/hot2 tasks continuously
access their 128MB of memory, while the cold task only accesses its
memory briefly and then calls madvise(MADV_FREE); a sketch of this
workload follows the tables below. Here are the performance test
results:
(Throughput bigger is better, other smaller is better)
Testing on x86_64 machine:
| task hot2 | without patch | with patch | delta |
|---------------------|---------------|---------------|---------|
| total accesses time | 3.14 sec | 2.93 sec | -6.69% |
| cycles per access | 4.96 | 2.21 | -55.44% |
| Throughput | 104.38 M/sec | 111.89 M/sec | +7.19% |
| dTLB-load-misses | 284814532 | 69597236 | -75.56% |
Testing on qemu-system-x86_64 -enable-kvm:
| task hot2 | without patch | with patch | delta |
|---------------------|---------------|---------------|---------|
| total accesses time | 3.35 sec | 2.96 sec | -11.64% |
| cycles per access | 7.29 | 2.07 | -71.60% |
| Throughput | 97.67 M/sec | 110.77 M/sec | +13.41% |
| dTLB-load-misses | 241600871 | 3216108 | -98.67% |
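For reference, a minimal sketch of this workload (the real test program
is at [2]; error handling is omitted and the helpers here are simplified
stand-ins):

#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define BUF_SIZE (128UL << 20)	/* 128MB per task */

static char *alloc_buf(void)
{
	return mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
}

/* cold task: touch the buffer briefly, then tell the kernel it is free */
static void cold_task(void)
{
	char *buf = alloc_buf();

	memset(buf, 1, BUF_SIZE);		/* brief access */
	madvise(buf, BUF_SIZE, MADV_FREE);	/* folios become lazy-free */
	pause();
}

/* hot1/hot2 task: keep the buffer hot forever */
static void hot_task(void)
{
	char *buf = alloc_buf();

	for (;;)
		memset(buf, 1, BUF_SIZE);
}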
This series is based on mm-new.
Thank you very much for your comments and discussions.
V6 -> V7:
- Use "*cur_progress += 1" at the beginning of the loop in anon case.
- Always "cur_progress" equal to HPAGE_PMD_NR in file case.
- Some cleaning, and pickup Acked-by and Reviewed-by.
V5 -> V6:
- Simplify hpage_collapse_scan_file() [3] and hpage_collapse_scan_pmd().
- Skip lazy-free folios in khugepaged only [4].
- Pick up Reviewed-by.
V4 -> V5:
- Patch #3 squashed into Patch #2.
- File patch uses "xas->xa_index" to fix an issue.
- Rename folio_is_lazyfree() to folio_test_lazyfree().
- Simply skip lazyfree folios.
- Re-test kernbench in performance mode to improve stability.
- Pick up Acked-by and Reviewed-by.
V3 -> V4:
- Rebase on mm-new.
- Make Patch #2 cleaner.
- Fix lazyfree folios continuing to be collapsed after being skipped.
V2 -> V3:
- Refine scan progress number, add folio_is_lazyfree helper
- Fix warnings at SCAN_PTE_MAPPED_HUGEPAGE.
- For MADV_FREE, we will skip the lazy-free folios instead.
- For MADV_COLD, remove the handling.
- Use hpage_collapse_test_exit_or_disable() instead of vma = NULL.
- Pick up Reviewed-by.
V1 -> V2:
- Rename full to full_scan_finished; pick up Acked-by.
- Just skip SCAN_PMD_MAPPED/NO_PTE_TABLE memory, do not remove the mm.
- Set the VM_NOHUGEPAGE flag on MADV_COLD/MADV_FREE to just skip, not
  move the mm.
- Re-test performance on v6.19-rc2.
V6 : https://lore.kernel.org/linux-mm/20260201122554.1470071-1-vernon2gm@gmail.com
V5 : https://lore.kernel.org/linux-mm/20260123082232.16413-1-vernon2gm@gmail.com
V4 : https://lore.kernel.org/linux-mm/20260111121909.8410-1-yanglincheng@kylinos.cn
V3 : https://lore.kernel.org/linux-mm/20260104054112.4541-1-yanglincheng@kylinos.cn
V2 : https://lore.kernel.org/linux-mm/20251229055151.54887-1-yanglincheng@kylinos.cn
V1 : https://lore.kernel.org/linux-mm/20251215090419.174418-1-yanglincheng@kylinos.cn
[1] https://github.com/vernon2gh/app_and_module/blob/main/khugepaged/khugepaged_mm.bt
[2] https://github.com/vernon2gh/app_and_module/blob/main/khugepaged/app.c
[3] https://lore.kernel.org/linux-mm/4c35391e-a944-4e62-9103-4a1c4961f62a@arm.com
[4] https://lore.kernel.org/linux-mm/CACZaFFNY8+UKLzBGnmB3ij9amzBdKJgytcSNtA8fLCake8Ua=A@mail.gmail.com
Vernon Yang (5):
mm: khugepaged: add trace_mm_khugepaged_scan event
mm: khugepaged: refine scan progress number
mm: add folio_test_lazyfree helper
mm: khugepaged: skip lazy-free folios
mm: khugepaged: set to next mm direct when mm has
MMF_DISABLE_THP_COMPLETELY
include/linux/page-flags.h | 5 +++
include/trace/events/huge_memory.h | 26 ++++++++++++++
mm/khugepaged.c | 57 +++++++++++++++++++++++-------
mm/rmap.c | 2 +-
mm/vmscan.c | 5 ++-
5 files changed, 79 insertions(+), 16 deletions(-)
base-commit: a1a876489abcc1e75b03bd3b2f6739ceeaaec8c5
--
2.51.0
* [PATCH mm-new v7 1/5] mm: khugepaged: add trace_mm_khugepaged_scan event
2026-02-07 8:16 [PATCH mm-new v7 0/5] Improve khugepaged scan logic Vernon Yang
@ 2026-02-07 8:16 ` Vernon Yang
2026-02-07 8:16 ` [PATCH mm-new v7 2/5] mm: khugepaged: refine scan progress number Vernon Yang
` (3 subsequent siblings)
4 siblings, 0 replies; 22+ messages in thread
From: Vernon Yang @ 2026-02-07 8:16 UTC (permalink / raw)
To: akpm, david
Cc: lorenzo.stoakes, ziy, dev.jain, baohua, lance.yang, linux-mm,
linux-kernel, Vernon Yang
From: Vernon Yang <yanglincheng@kylinos.cn>
Add the mm_khugepaged_scan event to track the total time of a full scan
and the total number of pages scanned by khugepaged.
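A usage sketch (not part of this patch): enable the new event via tracefs
and stream it; the paths assume tracefs is mounted at /sys/kernel/tracing.

#include <stdio.h>

int main(void)
{
	FILE *f;
	char line[512];

	f = fopen("/sys/kernel/tracing/events/huge_memory"
		  "/mm_khugepaged_scan/enable", "w");
	if (!f)
		return 1;
	fputs("1", f);
	fclose(f);

	/* lines look like: "... mm=... progress=512 full_scan_finished=0" */
	f = fopen("/sys/kernel/tracing/trace_pipe", "r");
	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);
	fclose(f);
	return 0;
}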
Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
Reviewed-by: Barry Song <baohua@kernel.org>
Reviewed-by: Lance Yang <lance.yang@linux.dev>
Reviewed-by: Dev Jain <dev.jain@arm.com>
---
include/trace/events/huge_memory.h | 25 +++++++++++++++++++++++++
mm/khugepaged.c | 2 ++
2 files changed, 27 insertions(+)
diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
index 4e41bff31888..384e29f6bef0 100644
--- a/include/trace/events/huge_memory.h
+++ b/include/trace/events/huge_memory.h
@@ -237,5 +237,30 @@ TRACE_EVENT(mm_khugepaged_collapse_file,
__print_symbolic(__entry->result, SCAN_STATUS))
);
+TRACE_EVENT(mm_khugepaged_scan,
+
+ TP_PROTO(struct mm_struct *mm, unsigned int progress,
+ bool full_scan_finished),
+
+ TP_ARGS(mm, progress, full_scan_finished),
+
+ TP_STRUCT__entry(
+ __field(struct mm_struct *, mm)
+ __field(unsigned int, progress)
+ __field(bool, full_scan_finished)
+ ),
+
+ TP_fast_assign(
+ __entry->mm = mm;
+ __entry->progress = progress;
+ __entry->full_scan_finished = full_scan_finished;
+ ),
+
+ TP_printk("mm=%p, progress=%u, full_scan_finished=%d",
+ __entry->mm,
+ __entry->progress,
+ __entry->full_scan_finished)
+);
+
#endif /* __HUGE_MEMORY_H */
#include <trace/define_trace.h>
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index fa1e57fd2c46..4049234e1c8b 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2536,6 +2536,8 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, enum scan_result
collect_mm_slot(slot);
}
+ trace_mm_khugepaged_scan(mm, progress, khugepaged_scan.mm_slot == NULL);
+
return progress;
}
--
2.51.0
* [PATCH mm-new v7 2/5] mm: khugepaged: refine scan progress number
2026-02-07 8:16 [PATCH mm-new v7 0/5] Improve khugepaged scan logic Vernon Yang
2026-02-07 8:16 ` [PATCH mm-new v7 1/5] mm: khugepaged: add trace_mm_khugepaged_scan event Vernon Yang
@ 2026-02-07 8:16 ` Vernon Yang
2026-02-08 9:17 ` Dev Jain
2026-02-18 3:55 ` Vernon Yang
2026-02-07 8:16 ` [PATCH mm-new v7 3/5] mm: add folio_test_lazyfree helper Vernon Yang
` (2 subsequent siblings)
4 siblings, 2 replies; 22+ messages in thread
From: Vernon Yang @ 2026-02-07 8:16 UTC (permalink / raw)
To: akpm, david
Cc: lorenzo.stoakes, ziy, dev.jain, baohua, lance.yang, linux-mm,
linux-kernel, Vernon Yang
From: Vernon Yang <yanglincheng@kylinos.cn>
Currently, each scan always increases "progress" by HPAGE_PMD_NR, even
if only a single PTE/PMD entry is scanned (a sketch of how this budget
is consumed appears at the end of this message).
- When only scanning a single PTE entry, let me provide a detailed
example:
static int hpage_collapse_scan_pmd()
{
	for (addr = start_addr, _pte = pte; _pte < pte + HPAGE_PMD_NR;
	     _pte++, addr += PAGE_SIZE) {
		pte_t pteval = ptep_get(_pte);
		...
		if (pte_uffd_wp(pteval)) { <-- first scan hit
			result = SCAN_PTE_UFFD_WP;
			goto out_unmap;
		}
	}
}
During the first scan, if pte_uffd_wp(pteval) is true, the loop exits
directly. In practice, only one PTE is scanned before termination.
Here, "progress += 1" reflects the actual number of PTEs scanned,
whereas previously "progress += HPAGE_PMD_NR" was always used.
- When the memory has been collapsed to a PMD, let me provide a detailed
example:
The following data was traced by bpftrace on a desktop system. After
the system had been left idle for 10 minutes after booting, a lot of
SCAN_PMD_MAPPED or SCAN_NO_PTE_TABLE results were observed during a full
scan by khugepaged.
From trace_mm_khugepaged_scan_pmd and trace_mm_khugepaged_scan_file, the
following statuses were observed, with frequency mentioned next to them:
SCAN_SUCCEED : 1
SCAN_EXCEED_SHARED_PTE: 2
SCAN_PMD_MAPPED : 142
SCAN_NO_PTE_TABLE : 178
total progress size : 674 MB
Total time : 419 seconds, including khugepaged_scan_sleep_millisecs
The khugepaged_scan list saves all tasks that support collapsing into
hugepages; as long as a task is not destroyed, khugepaged will not
remove it from the khugepaged_scan list. This leads to a phenomenon
where a task has already collapsed all its memory regions into
hugepages, but khugepaged continues to scan it, which wastes CPU time
for no benefit, and because of khugepaged_scan_sleep_millisecs (default
10s), scanning a large number of such invalid tasks makes scanning the
truly valid tasks come much later.
After applying this patch, when the memory is either SCAN_PMD_MAPPED or
SCAN_NO_PTE_TABLE, it is simply skipped, as follows:
SCAN_EXCEED_SHARED_PTE: 2
SCAN_PMD_MAPPED : 147
SCAN_NO_PTE_TABLE : 173
total progress size : 45 MB
Total time : 20 seconds
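For context, a paraphrased sketch (not verbatim kernel code) of how the
caller consumes "progress" against the scan budget; charging a skipped
PMD the full HPAGE_PMD_NR burns this budget and delays useful work:

	/* "pages" is the khugepaged_pages_to_scan budget, "cc" the
	 * caller's collapse control */
	unsigned int progress = 0;

	while (progress < pages) {
		enum scan_result result = SCAN_FAIL;

		/* returns how much scanning work was actually done */
		progress += khugepaged_scan_mm_slot(pages - progress,
						    &result, cc);
		if (!khugepaged_has_work())
			break;
	}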
Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
---
mm/khugepaged.c | 38 ++++++++++++++++++++++++++++----------
1 file changed, 28 insertions(+), 10 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 4049234e1c8b..8b68ae3bc2c5 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -68,7 +68,10 @@ enum scan_result {
static struct task_struct *khugepaged_thread __read_mostly;
static DEFINE_MUTEX(khugepaged_mutex);
-/* default scan 8*HPAGE_PMD_NR ptes (or vmas) every 10 second */
+/*
+ * default scan 8*HPAGE_PMD_NR ptes, pmd_mapped, no_pte_table or vmas
+ * every 10 second.
+ */
static unsigned int khugepaged_pages_to_scan __read_mostly;
static unsigned int khugepaged_pages_collapsed;
static unsigned int khugepaged_full_scans;
@@ -1240,7 +1243,8 @@ static enum scan_result collapse_huge_page(struct mm_struct *mm, unsigned long a
}
static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
- struct vm_area_struct *vma, unsigned long start_addr, bool *mmap_locked,
+ struct vm_area_struct *vma, unsigned long start_addr,
+ bool *mmap_locked, unsigned int *cur_progress,
struct collapse_control *cc)
{
pmd_t *pmd;
@@ -1256,19 +1260,27 @@ static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
VM_BUG_ON(start_addr & ~HPAGE_PMD_MASK);
result = find_pmd_or_thp_or_none(mm, start_addr, &pmd);
- if (result != SCAN_SUCCEED)
+ if (result != SCAN_SUCCEED) {
+ if (cur_progress)
+ *cur_progress = 1;
goto out;
+ }
memset(cc->node_load, 0, sizeof(cc->node_load));
nodes_clear(cc->alloc_nmask);
pte = pte_offset_map_lock(mm, pmd, start_addr, &ptl);
if (!pte) {
+ if (cur_progress)
+ *cur_progress = 1;
result = SCAN_NO_PTE_TABLE;
goto out;
}
for (addr = start_addr, _pte = pte; _pte < pte + HPAGE_PMD_NR;
_pte++, addr += PAGE_SIZE) {
+ if (cur_progress)
+ *cur_progress += 1;
+
pte_t pteval = ptep_get(_pte);
if (pte_none_or_zero(pteval)) {
++none_or_zero;
@@ -2288,8 +2300,9 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
return result;
}
-static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
- struct file *file, pgoff_t start, struct collapse_control *cc)
+static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
+ unsigned long addr, struct file *file, pgoff_t start,
+ unsigned int *cur_progress, struct collapse_control *cc)
{
struct folio *folio = NULL;
struct address_space *mapping = file->f_mapping;
@@ -2378,6 +2391,8 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm, unsigned
cond_resched_rcu();
}
}
+ if (cur_progress)
+ *cur_progress = HPAGE_PMD_NR;
rcu_read_unlock();
if (result == SCAN_SUCCEED) {
@@ -2457,6 +2472,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, enum scan_result
while (khugepaged_scan.address < hend) {
bool mmap_locked = true;
+ unsigned int cur_progress = 0;
cond_resched();
if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
@@ -2473,7 +2489,8 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, enum scan_result
mmap_read_unlock(mm);
mmap_locked = false;
*result = hpage_collapse_scan_file(mm,
- khugepaged_scan.address, file, pgoff, cc);
+ khugepaged_scan.address, file, pgoff,
+ &cur_progress, cc);
fput(file);
if (*result == SCAN_PTE_MAPPED_HUGEPAGE) {
mmap_read_lock(mm);
@@ -2487,7 +2504,8 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, enum scan_result
}
} else {
*result = hpage_collapse_scan_pmd(mm, vma,
- khugepaged_scan.address, &mmap_locked, cc);
+ khugepaged_scan.address, &mmap_locked,
+ &cur_progress, cc);
}
if (*result == SCAN_SUCCEED)
@@ -2495,7 +2513,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, enum scan_result
/* move to next address */
khugepaged_scan.address += HPAGE_PMD_SIZE;
- progress += HPAGE_PMD_NR;
+ progress += cur_progress;
if (!mmap_locked)
/*
* We released mmap_lock so break loop. Note
@@ -2818,7 +2836,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
mmap_locked = false;
*lock_dropped = true;
result = hpage_collapse_scan_file(mm, addr, file, pgoff,
- cc);
+ NULL, cc);
if (result == SCAN_PAGE_DIRTY_OR_WRITEBACK && !triggered_wb &&
mapping_can_writeback(file->f_mapping)) {
@@ -2833,7 +2851,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
fput(file);
} else {
result = hpage_collapse_scan_pmd(mm, vma, addr,
- &mmap_locked, cc);
+ &mmap_locked, NULL, cc);
}
if (!mmap_locked)
*lock_dropped = true;
--
2.51.0
* [PATCH mm-new v7 3/5] mm: add folio_test_lazyfree helper
2026-02-07 8:16 [PATCH mm-new v7 0/5] Improve khugepaged scan logic Vernon Yang
2026-02-07 8:16 ` [PATCH mm-new v7 1/5] mm: khugepaged: add trace_mm_khugepaged_scan event Vernon Yang
2026-02-07 8:16 ` [PATCH mm-new v7 2/5] mm: khugepaged: refine scan progress number Vernon Yang
@ 2026-02-07 8:16 ` Vernon Yang
2026-02-07 8:16 ` [PATCH mm-new v7 4/5] mm: khugepaged: skip lazy-free folios Vernon Yang
2026-02-07 8:16 ` [PATCH mm-new v7 5/5] mm: khugepaged: set to next mm direct when mm has MMF_DISABLE_THP_COMPLETELY Vernon Yang
4 siblings, 0 replies; 22+ messages in thread
From: Vernon Yang @ 2026-02-07 8:16 UTC (permalink / raw)
To: akpm, david
Cc: lorenzo.stoakes, ziy, dev.jain, baohua, lance.yang, linux-mm,
linux-kernel, Vernon Yang
From: Vernon Yang <yanglincheng@kylinos.cn>
Add a folio_test_lazyfree() helper to identify lazy-free folios and
improve code readability.
Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
Reviewed-by: Lance Yang <lance.yang@linux.dev>
Reviewed-by: Dev Jain <dev.jain@arm.com>
Reviewed-by: Barry Song <baohua@kernel.org>
---
include/linux/page-flags.h | 5 +++++
mm/rmap.c | 2 +-
mm/vmscan.c | 5 ++---
3 files changed, 8 insertions(+), 4 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index f7a0e4af0c73..415e9f2ef616 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -724,6 +724,11 @@ static __always_inline bool folio_test_anon(const struct folio *folio)
return ((unsigned long)folio->mapping & FOLIO_MAPPING_ANON) != 0;
}
+static __always_inline bool folio_test_lazyfree(const struct folio *folio)
+{
+ return folio_test_anon(folio) && !folio_test_swapbacked(folio);
+}
+
static __always_inline bool PageAnonNotKsm(const struct page *page)
{
unsigned long flags = (unsigned long)page_folio(page)->mapping;
diff --git a/mm/rmap.c b/mm/rmap.c
index c67a374bb1a3..e3f799c99057 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2049,7 +2049,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
}
if (!pvmw.pte) {
- if (folio_test_anon(folio) && !folio_test_swapbacked(folio)) {
+ if (folio_test_lazyfree(folio)) {
if (unmap_huge_pmd_locked(vma, pvmw.address, pvmw.pmd, folio))
goto walk_done;
/*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 44e4fcd6463c..02b522b8c66c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -963,8 +963,7 @@ static void folio_check_dirty_writeback(struct folio *folio,
* They could be mistakenly treated as file lru. So further anon
* test is needed.
*/
- if (!folio_is_file_lru(folio) ||
- (folio_test_anon(folio) && !folio_test_swapbacked(folio))) {
+ if (!folio_is_file_lru(folio) || folio_test_lazyfree(folio)) {
*dirty = false;
*writeback = false;
return;
@@ -1508,7 +1507,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
}
}
- if (folio_test_anon(folio) && !folio_test_swapbacked(folio)) {
+ if (folio_test_lazyfree(folio)) {
/* follow __remove_mapping for reference */
if (!folio_ref_freeze(folio, 1))
goto keep_locked;
--
2.51.0
* [PATCH mm-new v7 4/5] mm: khugepaged: skip lazy-free folios
2026-02-07 8:16 [PATCH mm-new v7 0/5] Improve khugepaged scan logic Vernon Yang
` (2 preceding siblings ...)
2026-02-07 8:16 ` [PATCH mm-new v7 3/5] mm: add folio_test_lazyfree helper Vernon Yang
@ 2026-02-07 8:16 ` Vernon Yang
2026-02-07 8:34 ` Barry Song
2026-02-07 8:16 ` [PATCH mm-new v7 5/5] mm: khugepaged: set to next mm direct when mm has MMF_DISABLE_THP_COMPLETELY Vernon Yang
4 siblings, 1 reply; 22+ messages in thread
From: Vernon Yang @ 2026-02-07 8:16 UTC (permalink / raw)
To: akpm, david
Cc: lorenzo.stoakes, ziy, dev.jain, baohua, lance.yang, linux-mm,
linux-kernel, Vernon Yang
From: Vernon Yang <yanglincheng@kylinos.cn>
For example, create three tasks: hot1 -> cold -> hot2. After all three
tasks are created, each allocates 128MB of memory. The hot1/hot2 tasks
continuously access their 128MB of memory, while the cold task only
accesses its memory briefly and then calls madvise(MADV_FREE). However,
khugepaged still prioritizes scanning the cold task and only scans the
hot2 task after completing the scan of the cold task.

And if we collapse a region containing lazyfree pages, their contents
will never become none and the deferred shrinker cannot reclaim them.

So if the user has explicitly informed us via MADV_FREE that this memory
will be freed, it is appropriate for khugepaged to simply skip it,
thereby avoiding unnecessary scan and collapse operations and reducing
CPU wastage.
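To illustrate the userspace contract this relies on, a minimal sketch
(buf and len are hypothetical; error handling omitted):

#include <stddef.h>
#include <sys/mman.h>

/* the caller marks this range cold and discardable */
static void mark_cold(char *buf, size_t len)
{
	/*
	 * Reclaim may now drop these pages without writeback; collapsing
	 * them into a huge page would pin the contents instead.
	 */
	madvise(buf, len, MADV_FREE);
}

static void reuse(char *buf)
{
	buf[0] = 1;	/* a write re-dirties the page, cancelling lazy-free */
}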
Here are the performance test results:
(Throughput bigger is better, other smaller is better)
Testing on x86_64 machine:
| task hot2 | without patch | with patch | delta |
|---------------------|---------------|---------------|---------|
| total accesses time | 3.14 sec | 2.93 sec | -6.69% |
| cycles per access | 4.96 | 2.21 | -55.44% |
| Throughput | 104.38 M/sec | 111.89 M/sec | +7.19% |
| dTLB-load-misses | 284814532 | 69597236 | -75.56% |
Testing on qemu-system-x86_64 -enable-kvm:
| task hot2 | without patch | with patch | delta |
|---------------------|---------------|---------------|---------|
| total accesses time | 3.35 sec | 2.96 sec | -11.64% |
| cycles per access | 7.29 | 2.07 | -71.60% |
| Throughput | 97.67 M/sec | 110.77 M/sec | +13.41% |
| dTLB-load-misses | 241600871 | 3216108 | -98.67% |
Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
Acked-by: David Hildenbrand (arm) <david@kernel.org>
Reviewed-by: Lance Yang <lance.yang@linux.dev>
---
include/trace/events/huge_memory.h | 1 +
mm/khugepaged.c | 13 +++++++++++++
2 files changed, 14 insertions(+)
diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
index 384e29f6bef0..bcdc57eea270 100644
--- a/include/trace/events/huge_memory.h
+++ b/include/trace/events/huge_memory.h
@@ -25,6 +25,7 @@
EM( SCAN_PAGE_LRU, "page_not_in_lru") \
EM( SCAN_PAGE_LOCK, "page_locked") \
EM( SCAN_PAGE_ANON, "page_not_anon") \
+ EM( SCAN_PAGE_LAZYFREE, "page_lazyfree") \
EM( SCAN_PAGE_COMPOUND, "page_compound") \
EM( SCAN_ANY_PROCESS, "no_process_for_page") \
EM( SCAN_VMA_NULL, "vma_null") \
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 8b68ae3bc2c5..0d160e612e16 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -46,6 +46,7 @@ enum scan_result {
SCAN_PAGE_LRU,
SCAN_PAGE_LOCK,
SCAN_PAGE_ANON,
+ SCAN_PAGE_LAZYFREE,
SCAN_PAGE_COMPOUND,
SCAN_ANY_PROCESS,
SCAN_VMA_NULL,
@@ -583,6 +584,12 @@ static enum scan_result __collapse_huge_page_isolate(struct vm_area_struct *vma,
folio = page_folio(page);
VM_BUG_ON_FOLIO(!folio_test_anon(folio), folio);
+ if (cc->is_khugepaged && !pte_dirty(pteval) &&
+ folio_test_lazyfree(folio)) {
+ result = SCAN_PAGE_LAZYFREE;
+ goto out;
+ }
+
/* See hpage_collapse_scan_pmd(). */
if (folio_maybe_mapped_shared(folio)) {
++shared;
@@ -1335,6 +1342,12 @@ static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
}
folio = page_folio(page);
+ if (cc->is_khugepaged && !pte_dirty(pteval) &&
+ folio_test_lazyfree(folio)) {
+ result = SCAN_PAGE_LAZYFREE;
+ goto out_unmap;
+ }
+
if (!folio_test_anon(folio)) {
result = SCAN_PAGE_ANON;
goto out_unmap;
--
2.51.0
* [PATCH mm-new v7 5/5] mm: khugepaged: set to next mm direct when mm has MMF_DISABLE_THP_COMPLETELY
2026-02-07 8:16 [PATCH mm-new v7 0/5] Improve khugepaged scan logic Vernon Yang
` (3 preceding siblings ...)
2026-02-07 8:16 ` [PATCH mm-new v7 4/5] mm: khugepaged: skip lazy-free folios Vernon Yang
@ 2026-02-07 8:16 ` Vernon Yang
4 siblings, 0 replies; 22+ messages in thread
From: Vernon Yang @ 2026-02-07 8:16 UTC (permalink / raw)
To: akpm, david
Cc: lorenzo.stoakes, ziy, dev.jain, baohua, lance.yang, linux-mm,
linux-kernel, Vernon Yang
From: Vernon Yang <yanglincheng@kylinos.cn>
When an mm with the MMF_DISABLE_THP_COMPLETELY flag is detected during
scanning, directly set khugepaged_scan.mm_slot to the next mm_slot,
reducing redundant operations.

Without this patch, khugepaged_scan.mm_slot is only advanced to the
next mm_slot on the next entry into khugepaged_scan_mm_slot().
With this patch, khugepaged_scan.mm_slot is advanced to the next
mm_slot directly.
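For context, one common way an mm gets this flag is via prctl; a sketch,
assuming a kernel where plain PR_SET_THP_DISABLE maps to
MMF_DISABLE_THP_COMPLETELY:

#include <sys/prctl.h>

/* after this call, khugepaged has nothing left to do for this mm */
static int disable_thp_completely(void)
{
	return prctl(PR_SET_THP_DISABLE, 1, 0, 0, 0);
}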
Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
Reviewed-by: Lance Yang <lance.yang@linux.dev>
Reviewed-by: Dev Jain <dev.jain@arm.com>
Reviewed-by: Barry Song <baohua@kernel.org>
---
mm/khugepaged.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 0d160e612e16..b3854d990fd9 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2548,9 +2548,9 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, enum scan_result
VM_BUG_ON(khugepaged_scan.mm_slot != slot);
/*
* Release the current mm_slot if this mm is about to die, or
- * if we scanned all vmas of this mm.
+ * if we scanned all vmas of this mm, or THP got disabled.
*/
- if (hpage_collapse_test_exit(mm) || !vma) {
+ if (hpage_collapse_test_exit_or_disable(mm) || !vma) {
/*
* Make sure that if mm_users is reaching zero while
* khugepaged runs here, khugepaged_exit will find
--
2.51.0
* Re: [PATCH mm-new v7 4/5] mm: khugepaged: skip lazy-free folios
2026-02-07 8:16 ` [PATCH mm-new v7 4/5] mm: khugepaged: skip lazy-free folios Vernon Yang
@ 2026-02-07 8:34 ` Barry Song
2026-02-07 13:51 ` Lance Yang
0 siblings, 1 reply; 22+ messages in thread
From: Barry Song @ 2026-02-07 8:34 UTC (permalink / raw)
To: Vernon Yang
Cc: akpm, david, lorenzo.stoakes, ziy, dev.jain, lance.yang,
linux-mm, linux-kernel, Vernon Yang
On Sat, Feb 7, 2026 at 4:16 PM Vernon Yang <vernon2gm@gmail.com> wrote:
>
> From: Vernon Yang <yanglincheng@kylinos.cn>
>
> For example, create three task: hot1 -> cold -> hot2. After all three
> task are created, each allocate memory 128MB. the hot1/hot2 task
> continuously access 128 MB memory, while the cold task only accesses
> its memory briefly and then call madvise(MADV_FREE). However, khugepaged
> still prioritizes scanning the cold task and only scans the hot2 task
> after completing the scan of the cold task.
>
> And if we collapse with a lazyfree page, that content will never be none
> and the deferred shrinker cannot reclaim them.
>
> So if the user has explicitly informed us via MADV_FREE that this memory
> will be freed, it is appropriate for khugepaged to skip it only, thereby
> avoiding unnecessary scan and collapse operations to reducing CPU
> wastage.
>
> Here are the performance test results:
> (Throughput bigger is better, other smaller is better)
>
> Testing on x86_64 machine:
>
> | task hot2 | without patch | with patch | delta |
> |---------------------|---------------|---------------|---------|
> | total accesses time | 3.14 sec | 2.93 sec | -6.69% |
> | cycles per access | 4.96 | 2.21 | -55.44% |
> | Throughput | 104.38 M/sec | 111.89 M/sec | +7.19% |
> | dTLB-load-misses | 284814532 | 69597236 | -75.56% |
>
> Testing on qemu-system-x86_64 -enable-kvm:
>
> | task hot2 | without patch | with patch | delta |
> |---------------------|---------------|---------------|---------|
> | total accesses time | 3.35 sec | 2.96 sec | -11.64% |
> | cycles per access | 7.29 | 2.07 | -71.60% |
> | Throughput | 97.67 M/sec | 110.77 M/sec | +13.41% |
> | dTLB-load-misses | 241600871 | 3216108 | -98.67% |
>
> Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
> Acked-by: David Hildenbrand (arm) <david@kernel.org>
> Reviewed-by: Lance Yang <lance.yang@linux.dev>
> ---
> include/trace/events/huge_memory.h | 1 +
> mm/khugepaged.c | 13 +++++++++++++
> 2 files changed, 14 insertions(+)
>
> diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
> index 384e29f6bef0..bcdc57eea270 100644
> --- a/include/trace/events/huge_memory.h
> +++ b/include/trace/events/huge_memory.h
> @@ -25,6 +25,7 @@
> EM( SCAN_PAGE_LRU, "page_not_in_lru") \
> EM( SCAN_PAGE_LOCK, "page_locked") \
> EM( SCAN_PAGE_ANON, "page_not_anon") \
> + EM( SCAN_PAGE_LAZYFREE, "page_lazyfree") \
> EM( SCAN_PAGE_COMPOUND, "page_compound") \
> EM( SCAN_ANY_PROCESS, "no_process_for_page") \
> EM( SCAN_VMA_NULL, "vma_null") \
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 8b68ae3bc2c5..0d160e612e16 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -46,6 +46,7 @@ enum scan_result {
> SCAN_PAGE_LRU,
> SCAN_PAGE_LOCK,
> SCAN_PAGE_ANON,
> + SCAN_PAGE_LAZYFREE,
> SCAN_PAGE_COMPOUND,
> SCAN_ANY_PROCESS,
> SCAN_VMA_NULL,
> @@ -583,6 +584,12 @@ static enum scan_result __collapse_huge_page_isolate(struct vm_area_struct *vma,
> folio = page_folio(page);
> VM_BUG_ON_FOLIO(!folio_test_anon(folio), folio);
>
> + if (cc->is_khugepaged && !pte_dirty(pteval) &&
> + folio_test_lazyfree(folio)) {
We have two corner cases here:
1. Even if a lazyfree folio is dirty, if the VMA has the VM_DROPPABLE flag,
a lazyfree folio may still be dropped, even when its PTE is dirty.
2. GUP operation can cause a folio to become dirty.
I see the corner cases from try_to_unmap_one():
if (folio_test_dirty(folio) &&
    !(vma->vm_flags & VM_DROPPABLE)) {
	/*
	 * redirtied either using the
	 * page table or a previously
	 * obtained GUP reference.
	 */
	set_ptes(mm, address,
		 pvmw.pte, pteval, nr_pages);
	folio_set_swapbacked(folio);
	goto walk_abort;
}
Should we take these two corner cases into account?
> + result = SCAN_PAGE_LAZYFREE;
> + goto out;
> + }
> +
> /* See hpage_collapse_scan_pmd(). */
> if (folio_maybe_mapped_shared(folio)) {
> ++shared;
> @@ -1335,6 +1342,12 @@ static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
> }
> folio = page_folio(page);
>
> + if (cc->is_khugepaged && !pte_dirty(pteval) &&
> + folio_test_lazyfree(folio)) {
> + result = SCAN_PAGE_LAZYFREE;
> + goto out_unmap;
> + }
> +
> if (!folio_test_anon(folio)) {
> result = SCAN_PAGE_ANON;
> goto out_unmap;
Thanks
Barry
* Re: [PATCH mm-new v7 4/5] mm: khugepaged: skip lazy-free folios
2026-02-07 8:34 ` Barry Song
@ 2026-02-07 13:51 ` Lance Yang
2026-02-07 21:38 ` David Hildenbrand (Arm)
0 siblings, 1 reply; 22+ messages in thread
From: Lance Yang @ 2026-02-07 13:51 UTC (permalink / raw)
To: Barry Song, Vernon Yang
Cc: akpm, david, lorenzo.stoakes, ziy, dev.jain, linux-mm,
linux-kernel, Vernon Yang
On 2026/2/7 16:34, Barry Song wrote:
> On Sat, Feb 7, 2026 at 4:16 PM Vernon Yang <vernon2gm@gmail.com> wrote:
>>
>> From: Vernon Yang <yanglincheng@kylinos.cn>
>>
>> For example, create three task: hot1 -> cold -> hot2. After all three
>> task are created, each allocate memory 128MB. the hot1/hot2 task
>> continuously access 128 MB memory, while the cold task only accesses
>> its memory briefly and then call madvise(MADV_FREE). However, khugepaged
>> still prioritizes scanning the cold task and only scans the hot2 task
>> after completing the scan of the cold task.
>>
>> And if we collapse with a lazyfree page, that content will never be none
>> and the deferred shrinker cannot reclaim them.
>>
>> So if the user has explicitly informed us via MADV_FREE that this memory
>> will be freed, it is appropriate for khugepaged to skip it only, thereby
>> avoiding unnecessary scan and collapse operations to reducing CPU
>> wastage.
>>
>> Here are the performance test results:
>> (Throughput bigger is better, other smaller is better)
>>
>> Testing on x86_64 machine:
>>
>> | task hot2 | without patch | with patch | delta |
>> |---------------------|---------------|---------------|---------|
>> | total accesses time | 3.14 sec | 2.93 sec | -6.69% |
>> | cycles per access | 4.96 | 2.21 | -55.44% |
>> | Throughput | 104.38 M/sec | 111.89 M/sec | +7.19% |
>> | dTLB-load-misses | 284814532 | 69597236 | -75.56% |
>>
>> Testing on qemu-system-x86_64 -enable-kvm:
>>
>> | task hot2 | without patch | with patch | delta |
>> |---------------------|---------------|---------------|---------|
>> | total accesses time | 3.35 sec | 2.96 sec | -11.64% |
>> | cycles per access | 7.29 | 2.07 | -71.60% |
>> | Throughput | 97.67 M/sec | 110.77 M/sec | +13.41% |
>> | dTLB-load-misses | 241600871 | 3216108 | -98.67% |
>>
>> Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
>> Acked-by: David Hildenbrand (arm) <david@kernel.org>
>> Reviewed-by: Lance Yang <lance.yang@linux.dev>
>> ---
>> include/trace/events/huge_memory.h | 1 +
>> mm/khugepaged.c | 13 +++++++++++++
>> 2 files changed, 14 insertions(+)
>>
>> diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
>> index 384e29f6bef0..bcdc57eea270 100644
>> --- a/include/trace/events/huge_memory.h
>> +++ b/include/trace/events/huge_memory.h
>> @@ -25,6 +25,7 @@
>> EM( SCAN_PAGE_LRU, "page_not_in_lru") \
>> EM( SCAN_PAGE_LOCK, "page_locked") \
>> EM( SCAN_PAGE_ANON, "page_not_anon") \
>> + EM( SCAN_PAGE_LAZYFREE, "page_lazyfree") \
>> EM( SCAN_PAGE_COMPOUND, "page_compound") \
>> EM( SCAN_ANY_PROCESS, "no_process_for_page") \
>> EM( SCAN_VMA_NULL, "vma_null") \
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index 8b68ae3bc2c5..0d160e612e16 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -46,6 +46,7 @@ enum scan_result {
>> SCAN_PAGE_LRU,
>> SCAN_PAGE_LOCK,
>> SCAN_PAGE_ANON,
>> + SCAN_PAGE_LAZYFREE,
>> SCAN_PAGE_COMPOUND,
>> SCAN_ANY_PROCESS,
>> SCAN_VMA_NULL,
>> @@ -583,6 +584,12 @@ static enum scan_result __collapse_huge_page_isolate(struct vm_area_struct *vma,
>> folio = page_folio(page);
>> VM_BUG_ON_FOLIO(!folio_test_anon(folio), folio);
>>
>> + if (cc->is_khugepaged && !pte_dirty(pteval) &&
>> + folio_test_lazyfree(folio)) {
>
> We have two corner cases here:
Good catch!
>
> 1. Even if a lazyfree folio is dirty, if the VMA has the VM_DROPPABLE flag,
> a lazyfree folio may still be dropped, even when its PTE is dirty.
Right. When the VMA has VM_DROPPABLE, we would drop the lazyfree folio
regardless of whether it (or the PTE) is dirty in try_to_unmap_one().
So, IMHO, we could go with:
cc->is_khugepaged && folio_test_lazyfree(folio) &&
(!pte_dirty(pteval) || (vma->vm_flags & VM_DROPPABLE))
>
> 2. GUP operation can cause a folio to become dirty.
Emm... I don't think we need to do anything special for GUP here :)
IIUC, if the range is pinned, MADV_COLLAPSE/khugepaged already fails;
we hit the refcount check in hpage_collapse_scan_pmd() (expected vs
actual refcount) and return -EAGAIN.
```
/*
* Check if the page has any GUP (or other external) pins.
*
* Here the check may be racy:
* it may see folio_mapcount() > folio_ref_count().
* But such case is ephemeral we could always retry collapse
* later. However it may report false positive if the page
* has excessive GUP pins (i.e. 512). Anyway the same check
* will be done again later the risk seems low.
*/
if (folio_expected_ref_count(folio) != folio_ref_count(folio)) {
result = SCAN_PAGE_COUNT;
goto out_unmap;
}
```
Cheers,
Lance
>
> I see the corner cases from try_to_unmap_one():
>
> if (folio_test_dirty(folio) &&
>     !(vma->vm_flags & VM_DROPPABLE)) {
> 	/*
> 	 * redirtied either using the
> 	 * page table or a previously
> 	 * obtained GUP reference.
> 	 */
> 	set_ptes(mm, address,
> 		 pvmw.pte, pteval, nr_pages);
> 	folio_set_swapbacked(folio);
> 	goto walk_abort;
> }
>
> Should we take these two corner cases into account?
>
>
>> + result = SCAN_PAGE_LAZYFREE;
>> + goto out;
>> + }
>> +
>> /* See hpage_collapse_scan_pmd(). */
>> if (folio_maybe_mapped_shared(folio)) {
>> ++shared;
>> @@ -1335,6 +1342,12 @@ static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
>> }
>> folio = page_folio(page);
>>
>> + if (cc->is_khugepaged && !pte_dirty(pteval) &&
>> + folio_test_lazyfree(folio)) {
>> + result = SCAN_PAGE_LAZYFREE;
>> + goto out_unmap;
>> + }
>> +
>> if (!folio_test_anon(folio)) {
>> result = SCAN_PAGE_ANON;
>> goto out_unmap;
>
> Thanks
> Barry
* Re: [PATCH mm-new v7 4/5] mm: khugepaged: skip lazy-free folios
2026-02-07 13:51 ` Lance Yang
@ 2026-02-07 21:38 ` David Hildenbrand (Arm)
2026-02-07 22:01 ` Barry Song
2026-02-08 4:06 ` Lance Yang
0 siblings, 2 replies; 22+ messages in thread
From: David Hildenbrand (Arm) @ 2026-02-07 21:38 UTC (permalink / raw)
To: Lance Yang, Barry Song, Vernon Yang
Cc: akpm, lorenzo.stoakes, ziy, dev.jain, linux-mm, linux-kernel,
Vernon Yang
On 2/7/26 14:51, Lance Yang wrote:
>
>
> On 2026/2/7 16:34, Barry Song wrote:
>> On Sat, Feb 7, 2026 at 4:16 PM Vernon Yang <vernon2gm@gmail.com> wrote:
>>>
>>> From: Vernon Yang <yanglincheng@kylinos.cn>
>>>
>>> For example, create three task: hot1 -> cold -> hot2. After all three
>>> task are created, each allocate memory 128MB. the hot1/hot2 task
>>> continuously access 128 MB memory, while the cold task only accesses
>>> its memory briefly and then call madvise(MADV_FREE). However, khugepaged
>>> still prioritizes scanning the cold task and only scans the hot2 task
>>> after completing the scan of the cold task.
>>>
>>> And if we collapse with a lazyfree page, that content will never be none
>>> and the deferred shrinker cannot reclaim them.
>>>
>>> So if the user has explicitly informed us via MADV_FREE that this memory
>>> will be freed, it is appropriate for khugepaged to skip it only, thereby
>>> avoiding unnecessary scan and collapse operations to reducing CPU
>>> wastage.
>>>
>>> Here are the performance test results:
>>> (Throughput bigger is better, other smaller is better)
>>>
>>> Testing on x86_64 machine:
>>>
>>> | task hot2 | without patch | with patch | delta |
>>> |---------------------|---------------|---------------|---------|
>>> | total accesses time | 3.14 sec | 2.93 sec | -6.69% |
>>> | cycles per access | 4.96 | 2.21 | -55.44% |
>>> | Throughput | 104.38 M/sec | 111.89 M/sec | +7.19% |
>>> | dTLB-load-misses | 284814532 | 69597236 | -75.56% |
>>>
>>> Testing on qemu-system-x86_64 -enable-kvm:
>>>
>>> | task hot2 | without patch | with patch | delta |
>>> |---------------------|---------------|---------------|---------|
>>> | total accesses time | 3.35 sec | 2.96 sec | -11.64% |
>>> | cycles per access | 7.29 | 2.07 | -71.60% |
>>> | Throughput | 97.67 M/sec | 110.77 M/sec | +13.41% |
>>> | dTLB-load-misses | 241600871 | 3216108 | -98.67% |
>>>
>>> Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
>>> Acked-by: David Hildenbrand (arm) <david@kernel.org>
>>> Reviewed-by: Lance Yang <lance.yang@linux.dev>
>>> ---
>>> include/trace/events/huge_memory.h | 1 +
>>> mm/khugepaged.c | 13 +++++++++++++
>>> 2 files changed, 14 insertions(+)
>>>
>>> diff --git a/include/trace/events/huge_memory.h b/include/trace/
>>> events/huge_memory.h
>>> index 384e29f6bef0..bcdc57eea270 100644
>>> --- a/include/trace/events/huge_memory.h
>>> +++ b/include/trace/events/huge_memory.h
>>> @@ -25,6 +25,7 @@
>>> EM( SCAN_PAGE_LRU,
>>> "page_not_in_lru") \
>>> EM( SCAN_PAGE_LOCK,
>>> "page_locked") \
>>> EM( SCAN_PAGE_ANON,
>>> "page_not_anon") \
>>> + EM( SCAN_PAGE_LAZYFREE,
>>> "page_lazyfree") \
>>> EM( SCAN_PAGE_COMPOUND,
>>> "page_compound") \
>>> EM( SCAN_ANY_PROCESS,
>>> "no_process_for_page") \
>>> EM( SCAN_VMA_NULL,
>>> "vma_null") \
>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>> index 8b68ae3bc2c5..0d160e612e16 100644
>>> --- a/mm/khugepaged.c
>>> +++ b/mm/khugepaged.c
>>> @@ -46,6 +46,7 @@ enum scan_result {
>>> SCAN_PAGE_LRU,
>>> SCAN_PAGE_LOCK,
>>> SCAN_PAGE_ANON,
>>> + SCAN_PAGE_LAZYFREE,
>>> SCAN_PAGE_COMPOUND,
>>> SCAN_ANY_PROCESS,
>>> SCAN_VMA_NULL,
>>> @@ -583,6 +584,12 @@ static enum scan_result
>>> __collapse_huge_page_isolate(struct vm_area_struct *vma,
>>> folio = page_folio(page);
>>> VM_BUG_ON_FOLIO(!folio_test_anon(folio), folio);
>>>
>>> + if (cc->is_khugepaged && !pte_dirty(pteval) &&
>>> + folio_test_lazyfree(folio)) {
>>
>> We have two corner cases here:
>
> Good catch!
>
>>
>> 1. Even if a lazyfree folio is dirty, if the VMA has the VM_DROPPABLE
>> flag,
>> a lazyfree folio may still be dropped, even when its PTE is dirty.
Good point!
>
> Right. When the VMA has VM_DROPPABLE, we would drop the lazyfree folio
> regardless of whether it (or the PTE) is dirty in try_to_unmap_one().
>
> So, IMHO, we could go with:
>
> cc->is_khugepaged && folio_test_lazyfree(folio) &&
> (!pte_dirty(pteval) || (vma->vm_flags & VM_DROPPABLE))
Hm. In a VM_DROPPABLE mapping all folios should be marked as lazy-free
(see folio_add_new_anon_rmap()).
The new (collapse) folio will also be marked lazy-free (due to
folio_add_new_anon_rmap()) and can just get dropped any time.
So likely we should just not skip collapse for lazyfree folios in
VM_DROPPABLE mappings?
if (cc->is_khugepaged && !(vma->vm_flags & VM_DROPPABLE) &&
folio_test_lazyfree(folio) && !pte_dirty(pteval)) {
...
}
--
Cheers,
David
* Re: [PATCH mm-new v7 4/5] mm: khugepaged: skip lazy-free folios
2026-02-07 21:38 ` David Hildenbrand (Arm)
@ 2026-02-07 22:01 ` Barry Song
2026-02-07 22:05 ` David Hildenbrand (Arm)
2026-02-08 4:06 ` Lance Yang
1 sibling, 1 reply; 22+ messages in thread
From: Barry Song @ 2026-02-07 22:01 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: Lance Yang, Vernon Yang, akpm, lorenzo.stoakes, ziy, dev.jain,
linux-mm, linux-kernel, Vernon Yang
On Sun, Feb 8, 2026 at 5:38 AM David Hildenbrand (Arm) <david@kernel.org> wrote:
>
> On 2/7/26 14:51, Lance Yang wrote:
> >
> >
> > On 2026/2/7 16:34, Barry Song wrote:
> >> On Sat, Feb 7, 2026 at 4:16 PM Vernon Yang <vernon2gm@gmail.com> wrote:
> >>>
> >>> From: Vernon Yang <yanglincheng@kylinos.cn>
> >>>
> >>> For example, create three task: hot1 -> cold -> hot2. After all three
> >>> task are created, each allocate memory 128MB. the hot1/hot2 task
> >>> continuously access 128 MB memory, while the cold task only accesses
> >>> its memory briefly and then call madvise(MADV_FREE). However, khugepaged
> >>> still prioritizes scanning the cold task and only scans the hot2 task
> >>> after completing the scan of the cold task.
> >>>
> >>> And if we collapse with a lazyfree page, that content will never be none
> >>> and the deferred shrinker cannot reclaim them.
> >>>
> >>> So if the user has explicitly informed us via MADV_FREE that this memory
> >>> will be freed, it is appropriate for khugepaged to skip it only, thereby
> >>> avoiding unnecessary scan and collapse operations to reducing CPU
> >>> wastage.
> >>>
> >>> Here are the performance test results:
> >>> (Throughput bigger is better, other smaller is better)
> >>>
> >>> Testing on x86_64 machine:
> >>>
> >>> | task hot2 | without patch | with patch | delta |
> >>> |---------------------|---------------|---------------|---------|
> >>> | total accesses time | 3.14 sec | 2.93 sec | -6.69% |
> >>> | cycles per access | 4.96 | 2.21 | -55.44% |
> >>> | Throughput | 104.38 M/sec | 111.89 M/sec | +7.19% |
> >>> | dTLB-load-misses | 284814532 | 69597236 | -75.56% |
> >>>
> >>> Testing on qemu-system-x86_64 -enable-kvm:
> >>>
> >>> | task hot2 | without patch | with patch | delta |
> >>> |---------------------|---------------|---------------|---------|
> >>> | total accesses time | 3.35 sec | 2.96 sec | -11.64% |
> >>> | cycles per access | 7.29 | 2.07 | -71.60% |
> >>> | Throughput | 97.67 M/sec | 110.77 M/sec | +13.41% |
> >>> | dTLB-load-misses | 241600871 | 3216108 | -98.67% |
> >>>
> >>> Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
> >>> Acked-by: David Hildenbrand (arm) <david@kernel.org>
> >>> Reviewed-by: Lance Yang <lance.yang@linux.dev>
> >>> ---
> >>> include/trace/events/huge_memory.h | 1 +
> >>> mm/khugepaged.c | 13 +++++++++++++
> >>> 2 files changed, 14 insertions(+)
> >>>
> >>> diff --git a/include/trace/events/huge_memory.h b/include/trace/
> >>> events/huge_memory.h
> >>> index 384e29f6bef0..bcdc57eea270 100644
> >>> --- a/include/trace/events/huge_memory.h
> >>> +++ b/include/trace/events/huge_memory.h
> >>> @@ -25,6 +25,7 @@
> >>> EM( SCAN_PAGE_LRU,
> >>> "page_not_in_lru") \
> >>> EM( SCAN_PAGE_LOCK,
> >>> "page_locked") \
> >>> EM( SCAN_PAGE_ANON,
> >>> "page_not_anon") \
> >>> + EM( SCAN_PAGE_LAZYFREE,
> >>> "page_lazyfree") \
> >>> EM( SCAN_PAGE_COMPOUND,
> >>> "page_compound") \
> >>> EM( SCAN_ANY_PROCESS,
> >>> "no_process_for_page") \
> >>> EM( SCAN_VMA_NULL,
> >>> "vma_null") \
> >>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> >>> index 8b68ae3bc2c5..0d160e612e16 100644
> >>> --- a/mm/khugepaged.c
> >>> +++ b/mm/khugepaged.c
> >>> @@ -46,6 +46,7 @@ enum scan_result {
> >>> SCAN_PAGE_LRU,
> >>> SCAN_PAGE_LOCK,
> >>> SCAN_PAGE_ANON,
> >>> + SCAN_PAGE_LAZYFREE,
> >>> SCAN_PAGE_COMPOUND,
> >>> SCAN_ANY_PROCESS,
> >>> SCAN_VMA_NULL,
> >>> @@ -583,6 +584,12 @@ static enum scan_result
> >>> __collapse_huge_page_isolate(struct vm_area_struct *vma,
> >>> folio = page_folio(page);
> >>> VM_BUG_ON_FOLIO(!folio_test_anon(folio), folio);
> >>>
> >>> + if (cc->is_khugepaged && !pte_dirty(pteval) &&
> >>> + folio_test_lazyfree(folio)) {
> >>
> >> We have two corner cases here:
> >
> > Good catch!
> >
> >>
> >> 1. Even if a lazyfree folio is dirty, if the VMA has the VM_DROPPABLE
> >> flag,
> >> a lazyfree folio may still be dropped, even when its PTE is dirty.
>
> Good point!
>
> >
> > Right. When the VMA has VM_DROPPABLE, we would drop the lazyfree folio
> > regardless of whether it (or the PTE) is dirty in try_to_unmap_one().
> >
> > So, IMHO, we could go with:
> >
> > cc->is_khugepaged && folio_test_lazyfree(folio) &&
> > (!pte_dirty(pteval) || (vma->vm_flags & VM_DROPPABLE))
>
> Hm. In a VM_DROPPABLE mapping all folios should be marked as lazy-free
> (see folio_add_new_anon_rmap()).
>
> The new (collapse) folio will also be marked lazy (due to
> folio_add_new_anon_rmap()) free and can just get dropped any time.
>
> So likely we should just not skip collapse for lazyfree folios in
> VM_DROPPABLE mappings?
Maybe change “just not skip” to “just skip”?
If the goal is to avoid the collapse overhead for folios that are
about to be dropped, we might consider skipping collapse for the
entire VMA?
>
> if (cc->is_khugepaged && !(vma->vm_flags & VM_DROPPABLE) &&
> folio_test_lazyfree(folio) && !pte_dirty(pteval)) {
> ...
> }
Thanks
Barry
* Re: [PATCH mm-new v7 4/5] mm: khugepaged: skip lazy-free folios
2026-02-07 22:01 ` Barry Song
@ 2026-02-07 22:05 ` David Hildenbrand (Arm)
2026-02-07 22:17 ` Barry Song
0 siblings, 1 reply; 22+ messages in thread
From: David Hildenbrand (Arm) @ 2026-02-07 22:05 UTC (permalink / raw)
To: Barry Song
Cc: Lance Yang, Vernon Yang, akpm, lorenzo.stoakes, ziy, dev.jain,
linux-mm, linux-kernel, Vernon Yang
On 2/7/26 23:01, Barry Song wrote:
> On Sun, Feb 8, 2026 at 5:38 AM David Hildenbrand (Arm) <david@kernel.org> wrote:
>>
>> On 2/7/26 14:51, Lance Yang wrote:
>>>
>>>
>>>
>>> Good catch!
>>>
>>
>> Good point!
>>
>>>
>>> Right. When the VMA has VM_DROPPABLE, we would drop the lazyfree folio
>>> regardless of whether it (or the PTE) is dirty in try_to_unmap_one().
>>>
>>> So, IMHO, we could go with:
>>>
>>> cc->is_khugepaged && folio_test_lazyfree(folio) &&
>>> (!pte_dirty(pteval) || (vma->vm_flags & VM_DROPPABLE))
>>
>> Hm. In a VM_DROPPABLE mapping all folios should be marked as lazy-free
>> (see folio_add_new_anon_rmap()).
>>
>> The new (collapse) folio will also be marked lazy (due to
>> folio_add_new_anon_rmap()) free and can just get dropped any time.
>>
>> So likely we should just not skip collapse for lazyfree folios in
>> VM_DROPPABLE mappings?
>
> Maybe change “just not skip” to “just skip”?
>
> If the goal is to avoid the collapse overhead for folios that are
> about to be dropped, we might consider skipping collapse for the
> entire VMA?
If there is no memory pressure in the system, why wouldn't you just want
to collapse in a VM_DROPPABLE region?
"about to be dropped" only applies once there is actual memory pressure.
If not, these pages stick around forever.
--
Cheers,
David
* Re: [PATCH mm-new v7 4/5] mm: khugepaged: skip lazy-free folios
2026-02-07 22:05 ` David Hildenbrand (Arm)
@ 2026-02-07 22:17 ` Barry Song
2026-02-07 22:25 ` David Hildenbrand (Arm)
0 siblings, 1 reply; 22+ messages in thread
From: Barry Song @ 2026-02-07 22:17 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: Lance Yang, Vernon Yang, akpm, lorenzo.stoakes, ziy, dev.jain,
linux-mm, linux-kernel, Vernon Yang
On Sun, Feb 8, 2026 at 6:05 AM David Hildenbrand (Arm) <david@kernel.org> wrote:
>
> On 2/7/26 23:01, Barry Song wrote:
> > On Sun, Feb 8, 2026 at 5:38 AM David Hildenbrand (Arm) <david@kernel.org> wrote:
> >>
> >> On 2/7/26 14:51, Lance Yang wrote:
> >>>
> >>>
> >>>
> >>> Good catch!
> >>>
> >>
> >> Good point!
> >>
> >>>
> >>> Right. When the VMA has VM_DROPPABLE, we would drop the lazyfree folio
> >>> regardless of whether it (or the PTE) is dirty in try_to_unmap_one().
> >>>
> >>> So, IMHO, we could go with:
> >>>
> >>> cc->is_khugepaged && folio_test_lazyfree(folio) &&
> >>> (!pte_dirty(pteval) || (vma->vm_flags & VM_DROPPABLE))
> >>
> >> Hm. In a VM_DROPPABLE mapping all folios should be marked as lazy-free
> >> (see folio_add_new_anon_rmap()).
> >>
> >> The new (collapse) folio will also be marked lazy (due to
> >> folio_add_new_anon_rmap()) free and can just get dropped any time.
> >>
> >> So likely we should just not skip collapse for lazyfree folios in
> >> VM_DROPPABLE mappings?
> >
> > Maybe change “just not skip” to “just skip”?
> >
> > If the goal is to avoid the collapse overhead for folios that are
> > about to be dropped, we might consider skipping collapse for the
> > entire VMA?
> If there is no memory pressure in the system, why wouldn't you just want
> to collapse in a VM_DROPPABLE region?
>
> "about to be dropped" only applies once there is actual memory pressure.
> If not, these pages stick around forever.
agree. But this brings us back to the philosophy of the original patch.
If there is no memory pressure, lazyfree folios won’t be dropped, so
collapsing them might also be reasonable.
Just collapsing fully lazyfree folios with VM_DROPPABLE while
skipping partially lazyfree VMAs seems a bit confusing to me :-)
Thanks
Barry
* Re: [PATCH mm-new v7 4/5] mm: khugepaged: skip lazy-free folios
2026-02-07 22:17 ` Barry Song
@ 2026-02-07 22:25 ` David Hildenbrand (Arm)
2026-02-07 22:31 ` Barry Song
2026-02-08 13:26 ` Vernon Yang
0 siblings, 2 replies; 22+ messages in thread
From: David Hildenbrand (Arm) @ 2026-02-07 22:25 UTC (permalink / raw)
To: Barry Song
Cc: Lance Yang, Vernon Yang, akpm, lorenzo.stoakes, ziy, dev.jain,
linux-mm, linux-kernel, Vernon Yang
On 2/7/26 23:17, Barry Song wrote:
> On Sun, Feb 8, 2026 at 6:05 AM David Hildenbrand (Arm) <david@kernel.org> wrote:
>>
>> On 2/7/26 23:01, Barry Song wrote:
>>>
>>> Maybe change “just not skip” to “just skip”?
>>>
>>> If the goal is to avoid the collapse overhead for folios that are
>>> about to be dropped, we might consider skipping collapse for the
>>> entire VMA?
>> If there is no memory pressure in the system, why wouldn't you just want
>> to collapse in a VM_DROPPABLE region?
>>
>> "about to be dropped" only applies once there is actual memory pressure.
>> If not, these pages stick around forever.
>
> agree. But this brings us back to the philosophy of the original patch.
> If there is no memory pressure, lazyfree folios won’t be dropped, so
> collapsing them might also be reasonable.
It's about memory pressure in the future.
>
> Just collapsing fully lazyfree folios with VM_DROPPABLE while
> skipping partially lazyfree VMAs seems a bit confusing to me :-)
Think of it like this:
All folios in VM_DROPPABLE are lazyfree. Collapsing maintains that
property. So you can just collapse and memory pressure in the future
will free it up.
In contrast, collapsing in !VM_DROPPABLE does not maintain that
property. The collapsed folio will not be lazyfree and memory pressure
in the future will not be able to free it up.
--
Cheers,
David
* Re: [PATCH mm-new v7 4/5] mm: khugepaged: skip lazy-free folios
2026-02-07 22:25 ` David Hildenbrand (Arm)
@ 2026-02-07 22:31 ` Barry Song
2026-02-08 13:26 ` Vernon Yang
1 sibling, 0 replies; 22+ messages in thread
From: Barry Song @ 2026-02-07 22:31 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: Lance Yang, Vernon Yang, akpm, lorenzo.stoakes, ziy, dev.jain,
linux-mm, linux-kernel, Vernon Yang
On Sun, Feb 8, 2026 at 6:25 AM David Hildenbrand (Arm) <david@kernel.org> wrote:
>
> On 2/7/26 23:17, Barry Song wrote:
> > On Sun, Feb 8, 2026 at 6:05 AM David Hildenbrand (Arm) <david@kernel.org> wrote:
> >>
> >> On 2/7/26 23:01, Barry Song wrote:
> >>>
> >>> Maybe change “just not skip” to “just skip”?
> >>>
> >>> If the goal is to avoid the collapse overhead for folios that are
> >>> about to be dropped, we might consider skipping collapse for the
> >>> entire VMA?
> >> If there is no memory pressure in the system, why wouldn't you just want
> >> to collapse in a VM_DROPPABLE region?
> >>
> >> "about to be dropped" only applies once there is actual memory pressure.
> >> If not, these pages stick around forever.
> >
> > agree. But this brings us back to the philosophy of the original patch.
> > If there is no memory pressure, lazyfree folios won’t be dropped, so
> > collapsing them might also be reasonable.
>
> It's about memory pressure in the future.
>
> >
> > Just collapsing fully lazyfree folios with VM_DROPPABLE while
> > skipping partially lazyfree VMAs seems a bit confusing to me :-)
>
> Think of it like this:
>
> All folios in VM_DROPPABLE are lazyfree. Collapsing maintains that
> property. So you can just collapse and memory pressure in the future
> will free it up.
>
> In contrast, collapsing in !VM_DROPPABLE does not maintain that
> property. The collapsed folio will not be lazyfree and memory pressure
> in the future will not be able to free it up.
Thanks for the clarification. I agree with your point — whether lazyfree
folios are carried over to the new folios changes the whole story.
Best Regards
Barry
* Re: [PATCH mm-new v7 4/5] mm: khugepaged: skip lazy-free folios
2026-02-07 21:38 ` David Hildenbrand (Arm)
2026-02-07 22:01 ` Barry Song
@ 2026-02-08 4:06 ` Lance Yang
1 sibling, 0 replies; 22+ messages in thread
From: Lance Yang @ 2026-02-08 4:06 UTC (permalink / raw)
To: David Hildenbrand (Arm), Barry Song, Vernon Yang
Cc: akpm, lorenzo.stoakes, ziy, dev.jain, linux-mm, linux-kernel,
Vernon Yang
On 2026/2/8 05:38, David Hildenbrand (Arm) wrote:
> On 2/7/26 14:51, Lance Yang wrote:
>>
>>
>> On 2026/2/7 16:34, Barry Song wrote:
>>> On Sat, Feb 7, 2026 at 4:16 PM Vernon Yang <vernon2gm@gmail.com> wrote:
>>>>
>>>> From: Vernon Yang <yanglincheng@kylinos.cn>
>>>>
>>>> For example, create three task: hot1 -> cold -> hot2. After all three
>>>> task are created, each allocate memory 128MB. the hot1/hot2 task
>>>> continuously access 128 MB memory, while the cold task only accesses
>>>> its memory briefly and then call madvise(MADV_FREE). However,
>>>> khugepaged
>>>> still prioritizes scanning the cold task and only scans the hot2 task
>>>> after completing the scan of the cold task.
>>>>
>>>> And if we collapse with a lazyfree page, that content will never be
>>>> none
>>>> and the deferred shrinker cannot reclaim them.
>>>>
>>>> So if the user has explicitly informed us via MADV_FREE that this
>>>> memory
>>>> will be freed, it is appropriate for khugepaged to skip it only,
>>>> thereby
>>>> avoiding unnecessary scan and collapse operations to reducing CPU
>>>> wastage.
>>>>
>>>> Here are the performance test results:
>>>> (Throughput bigger is better, other smaller is better)
>>>>
>>>> Testing on x86_64 machine:
>>>>
>>>> | task hot2 | without patch | with patch | delta |
>>>> |---------------------|---------------|---------------|---------|
>>>> | total accesses time | 3.14 sec | 2.93 sec | -6.69% |
>>>> | cycles per access | 4.96 | 2.21 | -55.44% |
>>>> | Throughput | 104.38 M/sec | 111.89 M/sec | +7.19% |
>>>> | dTLB-load-misses | 284814532 | 69597236 | -75.56% |
>>>>
>>>> Testing on qemu-system-x86_64 -enable-kvm:
>>>>
>>>> | task hot2 | without patch | with patch | delta |
>>>> |---------------------|---------------|---------------|---------|
>>>> | total accesses time | 3.35 sec | 2.96 sec | -11.64% |
>>>> | cycles per access | 7.29 | 2.07 | -71.60% |
>>>> | Throughput | 97.67 M/sec | 110.77 M/sec | +13.41% |
>>>> | dTLB-load-misses | 241600871 | 3216108 | -98.67% |
>>>>
>>>> Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
>>>> Acked-by: David Hildenbrand (arm) <david@kernel.org>
>>>> Reviewed-by: Lance Yang <lance.yang@linux.dev>
>>>> ---
>>>> include/trace/events/huge_memory.h | 1 +
>>>> mm/khugepaged.c | 13 +++++++++++++
>>>> 2 files changed, 14 insertions(+)
>>>>
>>>> diff --git a/include/trace/events/huge_memory.h b/include/trace/
>>>> events/huge_memory.h
>>>> index 384e29f6bef0..bcdc57eea270 100644
>>>> --- a/include/trace/events/huge_memory.h
>>>> +++ b/include/trace/events/huge_memory.h
>>>> @@ -25,6 +25,7 @@
>>>> EM( SCAN_PAGE_LRU, "page_not_in_lru") \
>>>> EM( SCAN_PAGE_LOCK, "page_locked") \
>>>> EM( SCAN_PAGE_ANON, "page_not_anon") \
>>>> + EM( SCAN_PAGE_LAZYFREE, "page_lazyfree") \
>>>> EM( SCAN_PAGE_COMPOUND, "page_compound") \
>>>> EM( SCAN_ANY_PROCESS, "no_process_for_page") \
>>>> EM( SCAN_VMA_NULL, "vma_null") \
>>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>>> index 8b68ae3bc2c5..0d160e612e16 100644
>>>> --- a/mm/khugepaged.c
>>>> +++ b/mm/khugepaged.c
>>>> @@ -46,6 +46,7 @@ enum scan_result {
>>>> SCAN_PAGE_LRU,
>>>> SCAN_PAGE_LOCK,
>>>> SCAN_PAGE_ANON,
>>>> + SCAN_PAGE_LAZYFREE,
>>>> SCAN_PAGE_COMPOUND,
>>>> SCAN_ANY_PROCESS,
>>>> SCAN_VMA_NULL,
>>>> @@ -583,6 +584,12 @@ static enum scan_result
>>>> __collapse_huge_page_isolate(struct vm_area_struct *vma,
>>>> folio = page_folio(page);
>>>> VM_BUG_ON_FOLIO(!folio_test_anon(folio), folio);
>>>>
>>>> + if (cc->is_khugepaged && !pte_dirty(pteval) &&
>>>> + folio_test_lazyfree(folio)) {
>>>
>>> We have two corner cases here:
>>
>> Good catch!
>>
>>>
>>> 1. If the VMA has the VM_DROPPABLE flag, a lazyfree folio may still
>>> be dropped, even when its PTE is dirty.
>
> Good point!
>
>>
>> Right. When the VMA has VM_DROPPABLE, we would drop the lazyfree folio
>> regardless of whether it (or the PTE) is dirty in try_to_unmap_one().
>>
>> So, IMHO, we could go with:
>>
>> cc->is_khugepaged && folio_test_lazyfree(folio) &&
>> (!pte_dirty(pteval) || (vma->vm_flags & VM_DROPPABLE))
>
> Hm. In a VM_DROPPABLE mapping all folios should be marked as lazy-free
> (see folio_add_new_anon_rmap()).
Ah, I missed that apparently :)
> The new (collapsed) folio will also be marked lazyfree (due to
> folio_add_new_anon_rmap()) and can just get dropped at any time.
>
> So likely we should just not skip collapse for lazyfree folios in
> VM_DROPPABLE mappings?
>
> if (cc->is_khugepaged && !(vma->vm_flags & VM_DROPPABLE) &&
> folio_test_lazyfree(folio) && !pte_dirty(pteval)) {
> ...
> }
Yep. That should do the trick. Thanks!
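Putting this sub-thread together, the corrected check in
__collapse_huge_page_isolate() would plausibly end up as the sketch
below; the SCAN_PAGE_LAZYFREE body is an assumption, since the v7 hunk
above truncates it:

	/*
	 * Sketch only, not the final patch: khugepaged skips lazyfree
	 * folios, except in VM_DROPPABLE mappings, where the collapsed
	 * folio stays lazyfree and memory pressure can still drop it.
	 */
	if (cc->is_khugepaged && !(vma->vm_flags & VM_DROPPABLE) &&
	    folio_test_lazyfree(folio) && !pte_dirty(pteval)) {
		result = SCAN_PAGE_LAZYFREE;
		goto out;
	}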
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH mm-new v7 2/5] mm: khugepaged: refine scan progress number
2026-02-07 8:16 ` [PATCH mm-new v7 2/5] mm: khugepaged: refine scan progress number Vernon Yang
@ 2026-02-08 9:17 ` Dev Jain
2026-02-08 13:25 ` Vernon Yang
2026-02-18 3:55 ` Vernon Yang
1 sibling, 1 reply; 22+ messages in thread
From: Dev Jain @ 2026-02-08 9:17 UTC (permalink / raw)
To: Vernon Yang, akpm, david
Cc: lorenzo.stoakes, ziy, baohua, lance.yang, linux-mm, linux-kernel,
Vernon Yang
On 07/02/26 1:46 pm, Vernon Yang wrote:
> From: Vernon Yang <yanglincheng@kylinos.cn>
>
> Currently, each scan always increases "progress" by HPAGE_PMD_NR,
> even if only scanning a single PTE/PMD entry.
>
> - When only scanning a single PTE entry, let me provide a detailed
> example:
>
> static int hpage_collapse_scan_pmd()
> {
> for (addr = start_addr, _pte = pte; _pte < pte + HPAGE_PMD_NR;
> _pte++, addr += PAGE_SIZE) {
> pte_t pteval = ptep_get(_pte);
> ...
> if (pte_uffd_wp(pteval)) { <-- first scan hit
> result = SCAN_PTE_UFFD_WP;
> goto out_unmap;
> }
> }
> }
>
> During the first scan, if pte_uffd_wp(pteval) is true, the loop exits
> directly. In practice, only one PTE is scanned before termination.
> Here, "progress += 1" reflects the actual number of PTEs scanned, but
> previously "progress += HPAGE_PMD_NR" always.
>
> - When the memory has been collapsed to PMD, let me provide a detailed
> example:
>
> The following data is traced by bpftrace on a desktop system. After
> the system has been left idle for 10 minutes upon booting, a lot of
> SCAN_PMD_MAPPED or SCAN_NO_PTE_TABLE are observed during a full scan
> by khugepaged.
>
> From trace_mm_khugepaged_scan_pmd and trace_mm_khugepaged_scan_file, the
> following statuses were observed, with frequency mentioned next to them:
>
> SCAN_SUCCEED : 1
> SCAN_EXCEED_SHARED_PTE: 2
> SCAN_PMD_MAPPED : 142
> SCAN_NO_PTE_TABLE : 178
> total progress size : 674 MB
> Total time : 419 seconds, includes khugepaged_scan_sleep_millisecs
>
> The khugepaged_scan list saves all tasks whose memory may be collapsed
> into hugepages; as long as a task is not destroyed, khugepaged will not
> remove it from the khugepaged_scan list. This leads to a situation
> where a task has already collapsed all of its memory regions into
> hugepages, but khugepaged keeps scanning it, wasting CPU time on
> invalid scans. Because of khugepaged_scan_sleep_millisecs (default
> 10s), scanning a large number of such invalid tasks causes a long wait,
> so genuinely valid tasks are scanned much later.
>
> After applying this patch, when the memory is either SCAN_PMD_MAPPED or
> SCAN_NO_PTE_TABLE, it is simply skipped, as follows:
>
> SCAN_EXCEED_SHARED_PTE: 2
> SCAN_PMD_MAPPED : 147
> SCAN_NO_PTE_TABLE : 173
> total progress size : 45 MB
> Total time : 20 seconds
>
> Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
> ---
> mm/khugepaged.c | 38 ++++++++++++++++++++++++++++----------
> 1 file changed, 28 insertions(+), 10 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 4049234e1c8b..8b68ae3bc2c5 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -68,7 +68,10 @@ enum scan_result {
> static struct task_struct *khugepaged_thread __read_mostly;
> static DEFINE_MUTEX(khugepaged_mutex);
>
> -/* default scan 8*HPAGE_PMD_NR ptes (or vmas) every 10 second */
> +/*
> + * default scan 8*HPAGE_PMD_NR ptes, pmd_mapped, no_pte_table or vmas
> + * every 10 seconds.
> + */
> static unsigned int khugepaged_pages_to_scan __read_mostly;
> static unsigned int khugepaged_pages_collapsed;
> static unsigned int khugepaged_full_scans;
> @@ -1240,7 +1243,8 @@ static enum scan_result collapse_huge_page(struct mm_struct *mm, unsigned long a
> }
>
> static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
> - struct vm_area_struct *vma, unsigned long start_addr, bool *mmap_locked,
> + struct vm_area_struct *vma, unsigned long start_addr,
> + bool *mmap_locked, unsigned int *cur_progress,
> struct collapse_control *cc)
> {
> pmd_t *pmd;
> @@ -1256,19 +1260,27 @@ static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
> VM_BUG_ON(start_addr & ~HPAGE_PMD_MASK);
>
> result = find_pmd_or_thp_or_none(mm, start_addr, &pmd);
> - if (result != SCAN_SUCCEED)
> + if (result != SCAN_SUCCEED) {
> + if (cur_progress)
> + *cur_progress = 1;
> goto out;
> + }
>
> memset(cc->node_load, 0, sizeof(cc->node_load));
> nodes_clear(cc->alloc_nmask);
> pte = pte_offset_map_lock(mm, pmd, start_addr, &ptl);
> if (!pte) {
> + if (cur_progress)
> + *cur_progress = 1;
> result = SCAN_NO_PTE_TABLE;
> goto out;
> }
>
> for (addr = start_addr, _pte = pte; _pte < pte + HPAGE_PMD_NR;
> _pte++, addr += PAGE_SIZE) {
> + if (cur_progress)
> + *cur_progress += 1;
> +
> pte_t pteval = ptep_get(_pte);
> if (pte_none_or_zero(pteval)) {
> ++none_or_zero;
> @@ -2288,8 +2300,9 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
> return result;
> }
>
> -static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
> - struct file *file, pgoff_t start, struct collapse_control *cc)
> +static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
> + unsigned long addr, struct file *file, pgoff_t start,
> + unsigned int *cur_progress, struct collapse_control *cc)
> {
> struct folio *folio = NULL;
> struct address_space *mapping = file->f_mapping;
> @@ -2378,6 +2391,8 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm, unsigned
> cond_resched_rcu();
> }
> }
> + if (cur_progress)
> + *cur_progress = HPAGE_PMD_NR;
> rcu_read_unlock();
>
>
Nit: could move this to the end of the function. It looks odd before
the rcu_read_unlock().
Reviewed-by: Dev Jain <dev.jain@arm.com>
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH mm-new v7 2/5] mm: khugepaged: refine scan progress number
2026-02-08 9:17 ` Dev Jain
@ 2026-02-08 13:25 ` Vernon Yang
0 siblings, 0 replies; 22+ messages in thread
From: Vernon Yang @ 2026-02-08 13:25 UTC (permalink / raw)
To: Dev Jain
Cc: akpm, david, lorenzo.stoakes, ziy, baohua, lance.yang, linux-mm,
linux-kernel, Vernon Yang
On Sun, Feb 8, 2026 at 5:17 PM Dev Jain <dev.jain@arm.com> wrote:
>
> On 07/02/26 1:46 pm, Vernon Yang wrote:
> > From: Vernon Yang <yanglincheng@kylinos.cn>
> >
> > Currently, each scan always increases "progress" by HPAGE_PMD_NR,
> > even if only scanning a single PTE/PMD entry.
> >
> > - When only scanning a single PTE entry, let me provide a detailed
> > example:
> >
> > static int hpage_collapse_scan_pmd()
> > {
> > for (addr = start_addr, _pte = pte; _pte < pte + HPAGE_PMD_NR;
> > _pte++, addr += PAGE_SIZE) {
> > pte_t pteval = ptep_get(_pte);
> > ...
> > if (pte_uffd_wp(pteval)) { <-- first scan hit
> > result = SCAN_PTE_UFFD_WP;
> > goto out_unmap;
> > }
> > }
> > }
> >
> > During the first scan, if pte_uffd_wp(pteval) is true, the loop exits
> > directly. In practice, only one PTE is scanned before termination.
> > Here, "progress += 1" reflects the actual number of PTEs scanned, but
> > previously "progress += HPAGE_PMD_NR" always.
> >
> > - When the memory has been collapsed to PMD, let me provide a detailed
> > example:
> >
> > The following data is traced by bpftrace on a desktop system. After
> > the system has been left idle for 10 minutes upon booting, a lot of
> > SCAN_PMD_MAPPED or SCAN_NO_PTE_TABLE are observed during a full scan
> > by khugepaged.
> >
> > From trace_mm_khugepaged_scan_pmd and trace_mm_khugepaged_scan_file, the
> > following statuses were observed, with frequency mentioned next to them:
> >
> > SCAN_SUCCEED : 1
> > SCAN_EXCEED_SHARED_PTE: 2
> > SCAN_PMD_MAPPED : 142
> > SCAN_NO_PTE_TABLE : 178
> > total progress size : 674 MB
> > Total time : 419 seconds, includes khugepaged_scan_sleep_millisecs
> >
> > The khugepaged_scan list saves all tasks whose memory may be collapsed
> > into hugepages; as long as a task is not destroyed, khugepaged will not
> > remove it from the khugepaged_scan list. This leads to a situation
> > where a task has already collapsed all of its memory regions into
> > hugepages, but khugepaged keeps scanning it, wasting CPU time on
> > invalid scans. Because of khugepaged_scan_sleep_millisecs (default
> > 10s), scanning a large number of such invalid tasks causes a long wait,
> > so genuinely valid tasks are scanned much later.
> >
> > After applying this patch, when the memory is either SCAN_PMD_MAPPED or
> > SCAN_NO_PTE_TABLE, it is simply skipped, as follows:
> >
> > SCAN_EXCEED_SHARED_PTE: 2
> > SCAN_PMD_MAPPED : 147
> > SCAN_NO_PTE_TABLE : 173
> > total progress size : 45 MB
> > Total time : 20 seconds
> >
> > Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
> > ---
> > mm/khugepaged.c | 38 ++++++++++++++++++++++++++++----------
> > 1 file changed, 28 insertions(+), 10 deletions(-)
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index 4049234e1c8b..8b68ae3bc2c5 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -68,7 +68,10 @@ enum scan_result {
> > static struct task_struct *khugepaged_thread __read_mostly;
> > static DEFINE_MUTEX(khugepaged_mutex);
> >
> > -/* default scan 8*HPAGE_PMD_NR ptes (or vmas) every 10 second */
> > +/*
> > + * default scan 8*HPAGE_PMD_NR ptes, pmd_mapped, no_pte_table or vmas
> > + * every 10 seconds.
> > + */
> > static unsigned int khugepaged_pages_to_scan __read_mostly;
> > static unsigned int khugepaged_pages_collapsed;
> > static unsigned int khugepaged_full_scans;
> > @@ -1240,7 +1243,8 @@ static enum scan_result collapse_huge_page(struct mm_struct *mm, unsigned long a
> > }
> >
> > static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
> > - struct vm_area_struct *vma, unsigned long start_addr, bool *mmap_locked,
> > + struct vm_area_struct *vma, unsigned long start_addr,
> > + bool *mmap_locked, unsigned int *cur_progress,
> > struct collapse_control *cc)
> > {
> > pmd_t *pmd;
> > @@ -1256,19 +1260,27 @@ static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
> > VM_BUG_ON(start_addr & ~HPAGE_PMD_MASK);
> >
> > result = find_pmd_or_thp_or_none(mm, start_addr, &pmd);
> > - if (result != SCAN_SUCCEED)
> > + if (result != SCAN_SUCCEED) {
> > + if (cur_progress)
> > + *cur_progress = 1;
> > goto out;
> > + }
> >
> > memset(cc->node_load, 0, sizeof(cc->node_load));
> > nodes_clear(cc->alloc_nmask);
> > pte = pte_offset_map_lock(mm, pmd, start_addr, &ptl);
> > if (!pte) {
> > + if (cur_progress)
> > + *cur_progress = 1;
> > result = SCAN_NO_PTE_TABLE;
> > goto out;
> > }
> >
> > for (addr = start_addr, _pte = pte; _pte < pte + HPAGE_PMD_NR;
> > _pte++, addr += PAGE_SIZE) {
> > + if (cur_progress)
> > + *cur_progress += 1;
> > +
> > pte_t pteval = ptep_get(_pte);
> > if (pte_none_or_zero(pteval)) {
> > ++none_or_zero;
> > @@ -2288,8 +2300,9 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
> > return result;
> > }
> >
> > -static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
> > - struct file *file, pgoff_t start, struct collapse_control *cc)
> > +static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
> > + unsigned long addr, struct file *file, pgoff_t start,
> > + unsigned int *cur_progress, struct collapse_control *cc)
> > {
> > struct folio *folio = NULL;
> > struct address_space *mapping = file->f_mapping;
> > @@ -2378,6 +2391,8 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm, unsigned
> > cond_resched_rcu();
> > }
> > }
> > + if (cur_progress)
> > + *cur_progress = HPAGE_PMD_NR;
> > rcu_read_unlock();
> >
> >
>
> Nit: could move this to the end of the function. It looks odd before
> the rcu_read_unlock().
I placed it on the line right after rcu_read_unlock(), because keeping
it close to the loop reads more clearly.
> Reviewed-by: Dev Jain <dev.jain@arm.com>
Thank you for the review.
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH mm-new v7 4/5] mm: khugepaged: skip lazy-free folios
2026-02-07 22:25 ` David Hildenbrand (Arm)
2026-02-07 22:31 ` Barry Song
@ 2026-02-08 13:26 ` Vernon Yang
1 sibling, 0 replies; 22+ messages in thread
From: Vernon Yang @ 2026-02-08 13:26 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: Barry Song, Lance Yang, akpm, lorenzo.stoakes, ziy, dev.jain,
linux-mm, linux-kernel, Vernon Yang
On Sun, Feb 8, 2026 at 6:25 AM David Hildenbrand (Arm) <david@kernel.org> wrote:
>
> On 2/7/26 23:17, Barry Song wrote:
> > On Sun, Feb 8, 2026 at 6:05 AM David Hildenbrand (Arm) <david@kernel.org> wrote:
> >>
> >> On 2/7/26 23:01, Barry Song wrote:
> >>>
> >>> Maybe change “just not skip” to “just skip”?
> >>>
> >>> If the goal is to avoid the collapse overhead for folios that are
> >>> about to be dropped, we might consider skipping collapse for the
> >>> entire VMA?
> >> If there is no memory pressure in the system, why wouldn't you just want
> >> to collapse in a VM_DROPPABLE region?
> >>
> >> "about to be dropped" only applies once there is actual memory pressure.
> >> If not, these pages stick around forever.
> >
> > agree. But this brings us back to the philosophy of the original patch.
> > If there is no memory pressure, lazyfree folios won’t be dropped, so
> > collapsing them might also be reasonable.
>
> It's about memory pressure in the future.
>
> >
> > Just collapsing fully lazyfree folios with VM_DROPPABLE while
> > skipping partially lazyfree VMAs seems a bit confusing to me :-)
>
> Think of it like this:
>
> All folios in VM_DROPPABLE are lazyfree. Collapsing maintains that
> property. So you can just collapse and memory pressure in the future
> will free it up.
>
> In contrast, collapsing in !VM_DROPPABLE does not maintain that
> property. The collapsed folio will not be lazyfree and memory pressure
> in the future will not be able to free it up.
Thank you Barry for pointing out this corner case, and thank you David
for the suggestions and explanations.
LGTM, I will fix it in the next version.
---
Thanks,
Vernon
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH mm-new v7 2/5] mm: khugepaged: refine scan progress number
2026-02-07 8:16 ` [PATCH mm-new v7 2/5] mm: khugepaged: refine scan progress number Vernon Yang
2026-02-08 9:17 ` Dev Jain
@ 2026-02-18 3:55 ` Vernon Yang
2026-02-18 8:05 ` David Hildenbrand (Arm)
1 sibling, 1 reply; 22+ messages in thread
From: Vernon Yang @ 2026-02-18 3:55 UTC (permalink / raw)
To: akpm, david
Cc: lorenzo.stoakes, ziy, dev.jain, baohua, lance.yang, linux-mm,
linux-kernel, Vernon Yang
On Sat, Feb 07, 2026 at 04:16:10PM +0800, Vernon Yang wrote:
> From: Vernon Yang <yanglincheng@kylinos.cn>
>
> Currently, each scan always increases "progress" by HPAGE_PMD_NR,
> even if only scanning a single PTE/PMD entry.
>
> - When only scanning a single PTE entry, let me provide a detailed
> example:
>
> static int hpage_collapse_scan_pmd()
> {
> for (addr = start_addr, _pte = pte; _pte < pte + HPAGE_PMD_NR;
> _pte++, addr += PAGE_SIZE) {
> pte_t pteval = ptep_get(_pte);
> ...
> if (pte_uffd_wp(pteval)) { <-- first scan hit
> result = SCAN_PTE_UFFD_WP;
> goto out_unmap;
> }
> }
> }
>
> During the first scan, if pte_uffd_wp(pteval) is true, the loop exits
> directly. In practice, only one PTE is scanned before termination.
> Here, "progress += 1" reflects the actual number of PTEs scanned, but
> previously "progress += HPAGE_PMD_NR" always.
>
> - When the memory has been collapsed to PMD, let me provide a detailed
> example:
>
> The following data is traced by bpftrace on a desktop system. After
> the system has been left idle for 10 minutes upon booting, a lot of
> SCAN_PMD_MAPPED or SCAN_NO_PTE_TABLE are observed during a full scan
> by khugepaged.
>
> From trace_mm_khugepaged_scan_pmd and trace_mm_khugepaged_scan_file, the
> following statuses were observed, with frequency mentioned next to them:
>
> SCAN_SUCCEED : 1
> SCAN_EXCEED_SHARED_PTE: 2
> SCAN_PMD_MAPPED : 142
> SCAN_NO_PTE_TABLE : 178
> total progress size : 674 MB
> Total time : 419 seconds, includes khugepaged_scan_sleep_millisecs
>
> The khugepaged_scan list saves all tasks whose memory may be collapsed
> into hugepages; as long as a task is not destroyed, khugepaged will not
> remove it from the khugepaged_scan list. This leads to a situation
> where a task has already collapsed all of its memory regions into
> hugepages, but khugepaged keeps scanning it, wasting CPU time on
> invalid scans. Because of khugepaged_scan_sleep_millisecs (default
> 10s), scanning a large number of such invalid tasks causes a long wait,
> so genuinely valid tasks are scanned much later.
>
> After applying this patch, when the memory is either SCAN_PMD_MAPPED or
> SCAN_NO_PTE_TABLE, it is simply skipped, as follows:
>
> SCAN_EXCEED_SHARED_PTE: 2
> SCAN_PMD_MAPPED : 147
> SCAN_NO_PTE_TABLE : 173
> total progress size : 45 MB
> Total time : 20 seconds
>
> Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
> ---
> mm/khugepaged.c | 38 ++++++++++++++++++++++++++++----------
> 1 file changed, 28 insertions(+), 10 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 4049234e1c8b..8b68ae3bc2c5 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -68,7 +68,10 @@ enum scan_result {
> static struct task_struct *khugepaged_thread __read_mostly;
> static DEFINE_MUTEX(khugepaged_mutex);
>
> -/* default scan 8*HPAGE_PMD_NR ptes (or vmas) every 10 second */
> +/*
> + * default scan 8*HPAGE_PMD_NR ptes, pmd_mapped, no_pte_table or vmas
> + * every 10 seconds.
> + */
> static unsigned int khugepaged_pages_to_scan __read_mostly;
> static unsigned int khugepaged_pages_collapsed;
> static unsigned int khugepaged_full_scans;
> @@ -1240,7 +1243,8 @@ static enum scan_result collapse_huge_page(struct mm_struct *mm, unsigned long a
> }
>
> static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
> - struct vm_area_struct *vma, unsigned long start_addr, bool *mmap_locked,
> + struct vm_area_struct *vma, unsigned long start_addr,
> + bool *mmap_locked, unsigned int *cur_progress,
> struct collapse_control *cc)
> {
> pmd_t *pmd;
> @@ -1256,19 +1260,27 @@ static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
> VM_BUG_ON(start_addr & ~HPAGE_PMD_MASK);
>
> result = find_pmd_or_thp_or_none(mm, start_addr, &pmd);
> - if (result != SCAN_SUCCEED)
> + if (result != SCAN_SUCCEED) {
> + if (cur_progress)
> + *cur_progress = 1;
> goto out;
> + }
>
> memset(cc->node_load, 0, sizeof(cc->node_load));
> nodes_clear(cc->alloc_nmask);
> pte = pte_offset_map_lock(mm, pmd, start_addr, &ptl);
> if (!pte) {
> + if (cur_progress)
> + *cur_progress = 1;
> result = SCAN_NO_PTE_TABLE;
> goto out;
> }
>
> for (addr = start_addr, _pte = pte; _pte < pte + HPAGE_PMD_NR;
> _pte++, addr += PAGE_SIZE) {
> + if (cur_progress)
> + *cur_progress += 1;
> +
> pte_t pteval = ptep_get(_pte);
> if (pte_none_or_zero(pteval)) {
> ++none_or_zero;
> @@ -2288,8 +2300,9 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
> return result;
> }
>
> -static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
> - struct file *file, pgoff_t start, struct collapse_control *cc)
> +static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
> + unsigned long addr, struct file *file, pgoff_t start,
> + unsigned int *cur_progress, struct collapse_control *cc)
> {
> struct folio *folio = NULL;
> struct address_space *mapping = file->f_mapping;
> @@ -2378,6 +2391,8 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm, unsigned
> cond_resched_rcu();
> }
> }
> + if (cur_progress)
> + *cur_progress = HPAGE_PMD_NR;
Hi David,
When using Fedora Server, I found a lot of SCAN_PTE_MAPPED_HUGEPAGE
results.
The following data is traced by bpftrace[1] on Fedora Server. After
the system has been left idle for 10 minutes upon booting, a lot of
SCAN_PTE_MAPPED_HUGEPAGE results are observed during a full scan by
khugepaged, as shown below:
SCAN_SUCCEED : 1
SCAN_PMD_MAPPED : 22
SCAN_EXCEED_NONE_PTE : 67
SCAN_PTE_MAPPED_HUGEPAGE: 919
I simply handled SCAN_PTE_MAPPED_HUGEPAGE by setting *cur_progress to 1,
as follows:
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 437783cf2873..7f301bebfb11 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2405,8 +2405,12 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
}
}
rcu_read_unlock();
- if (cur_progress)
- *cur_progress = HPAGE_PMD_NR;
+ if (cur_progress) {
+ if (result == SCAN_PTE_MAPPED_HUGEPAGE)
+ *cur_progress = 1;
+ else
+ *cur_progress = HPAGE_PMD_NR;
+ }
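(For context, the caller is assumed to consume cur_progress roughly as
in the sketch below; this is based only on the new function signatures
in this series, not on the actual khugepaged_scan_mm_slot() hunk, and
the variable names are illustrative.)

	unsigned int cur_progress = 0;

	result = hpage_collapse_scan_file(mm, addr, file, pgoff,
					  &cur_progress, cc);
	/* advance by what was actually examined, not a fixed HPAGE_PMD_NR */
	progress += cur_progress;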
Below are some performance test results.
kernbench results (testing on x86_64 machine):
baseline w/o patches test w/ patches
Amean user-32 18633.85 ( 0.00%) 18346.30 * 1.54%*
Amean syst-32 1138.82 ( 0.00%) 1109.68 * 2.56%*
Amean elsp-32 669.60 ( 0.00%) 659.79 * 1.47%*
BAmean-95 user-32 18631.33 ( 0.00%) 18340.10 ( 1.56%)
BAmean-95 syst-32 1138.36 ( 0.00%) 1108.05 ( 2.66%)
BAmean-95 elsp-32 669.55 ( 0.00%) 659.61 ( 1.48%)
BAmean-99 user-32 18631.33 ( 0.00%) 18340.10 ( 1.56%)
BAmean-99 syst-32 1138.36 ( 0.00%) 1108.05 ( 2.66%)
BAmean-99 elsp-32 669.55 ( 0.00%) 659.61 ( 1.48%)
Kernbench system time improved by 2.56%, so this issue is truly worth
addressing. I will fix it in the next version.
If I missed something, please let me know. Thanks!
[1] https://github.com/vernon2gh/app_and_module/blob/main/khugepaged/khugepaged_mm.bt
--
Cheers,
Vernon
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH mm-new v7 2/5] mm: khugepaged: refine scan progress number
2026-02-18 3:55 ` Vernon Yang
@ 2026-02-18 8:05 ` David Hildenbrand (Arm)
0 siblings, 0 replies; 22+ messages in thread
From: David Hildenbrand (Arm) @ 2026-02-18 8:05 UTC (permalink / raw)
To: Vernon Yang, akpm
Cc: lorenzo.stoakes, ziy, dev.jain, baohua, lance.yang, linux-mm,
linux-kernel, Vernon Yang
On 2/18/26 04:55, Vernon Yang wrote:
> On Sat, Feb 07, 2026 at 04:16:10PM +0800, Vernon Yang wrote:
>> From: Vernon Yang <yanglincheng@kylinos.cn>
>>
>> Currently, each scan always increases "progress" by HPAGE_PMD_NR,
>> even if only scanning a single PTE/PMD entry.
>>
>> - When only scanning a single PTE entry, let me provide a detailed
>> example:
>>
>> static int hpage_collapse_scan_pmd()
>> {
>> for (addr = start_addr, _pte = pte; _pte < pte + HPAGE_PMD_NR;
>> _pte++, addr += PAGE_SIZE) {
>> pte_t pteval = ptep_get(_pte);
>> ...
>> if (pte_uffd_wp(pteval)) { <-- first scan hit
>> result = SCAN_PTE_UFFD_WP;
>> goto out_unmap;
>> }
>> }
>> }
>>
>> During the first scan, if pte_uffd_wp(pteval) is true, the loop exits
>> directly. In practice, only one PTE is scanned before termination.
>> Here, "progress += 1" reflects the actual number of PTEs scanned, but
>> previously "progress += HPAGE_PMD_NR" always.
>>
>> - When the memory has been collapsed to PMD, let me provide a detailed
>> example:
>>
>> The following data is traced by bpftrace on a desktop system. After
>> the system has been left idle for 10 minutes upon booting, a lot of
>> SCAN_PMD_MAPPED or SCAN_NO_PTE_TABLE are observed during a full scan
>> by khugepaged.
>>
>> From trace_mm_khugepaged_scan_pmd and trace_mm_khugepaged_scan_file, the
>> following statuses were observed, with frequency mentioned next to them:
>>
>> SCAN_SUCCEED : 1
>> SCAN_EXCEED_SHARED_PTE: 2
>> SCAN_PMD_MAPPED : 142
>> SCAN_NO_PTE_TABLE : 178
>> total progress size : 674 MB
>> Total time : 419 seconds, includes khugepaged_scan_sleep_millisecs
>>
>> The khugepaged_scan list saves all tasks whose memory may be collapsed
>> into hugepages; as long as a task is not destroyed, khugepaged will not
>> remove it from the khugepaged_scan list. This leads to a situation
>> where a task has already collapsed all of its memory regions into
>> hugepages, but khugepaged keeps scanning it, wasting CPU time on
>> invalid scans. Because of khugepaged_scan_sleep_millisecs (default
>> 10s), scanning a large number of such invalid tasks causes a long wait,
>> so genuinely valid tasks are scanned much later.
>>
>> After applying this patch, when the memory is either SCAN_PMD_MAPPED or
>> SCAN_NO_PTE_TABLE, it is simply skipped, as follows:
>>
>> SCAN_EXCEED_SHARED_PTE: 2
>> SCAN_PMD_MAPPED : 147
>> SCAN_NO_PTE_TABLE : 173
>> total progress size : 45 MB
>> Total time : 20 seconds
>>
>> Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
>> ---
>> mm/khugepaged.c | 38 ++++++++++++++++++++++++++++----------
>> 1 file changed, 28 insertions(+), 10 deletions(-)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index 4049234e1c8b..8b68ae3bc2c5 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -68,7 +68,10 @@ enum scan_result {
>> static struct task_struct *khugepaged_thread __read_mostly;
>> static DEFINE_MUTEX(khugepaged_mutex);
>>
>> -/* default scan 8*HPAGE_PMD_NR ptes (or vmas) every 10 second */
>> +/*
>> + * default scan 8*HPAGE_PMD_NR ptes, pmd_mapped, no_pte_table or vmas
>> + * every 10 seconds.
>> + */
>> static unsigned int khugepaged_pages_to_scan __read_mostly;
>> static unsigned int khugepaged_pages_collapsed;
>> static unsigned int khugepaged_full_scans;
>> @@ -1240,7 +1243,8 @@ static enum scan_result collapse_huge_page(struct mm_struct *mm, unsigned long a
>> }
>>
>> static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
>> - struct vm_area_struct *vma, unsigned long start_addr, bool *mmap_locked,
>> + struct vm_area_struct *vma, unsigned long start_addr,
>> + bool *mmap_locked, unsigned int *cur_progress,
>> struct collapse_control *cc)
>> {
>> pmd_t *pmd;
>> @@ -1256,19 +1260,27 @@ static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
>> VM_BUG_ON(start_addr & ~HPAGE_PMD_MASK);
>>
>> result = find_pmd_or_thp_or_none(mm, start_addr, &pmd);
>> - if (result != SCAN_SUCCEED)
>> + if (result != SCAN_SUCCEED) {
>> + if (cur_progress)
>> + *cur_progress = 1;
>> goto out;
>> + }
>>
>> memset(cc->node_load, 0, sizeof(cc->node_load));
>> nodes_clear(cc->alloc_nmask);
>> pte = pte_offset_map_lock(mm, pmd, start_addr, &ptl);
>> if (!pte) {
>> + if (cur_progress)
>> + *cur_progress = 1;
>> result = SCAN_NO_PTE_TABLE;
>> goto out;
>> }
>>
>> for (addr = start_addr, _pte = pte; _pte < pte + HPAGE_PMD_NR;
>> _pte++, addr += PAGE_SIZE) {
>> + if (cur_progress)
>> + *cur_progress += 1;
>> +
>> pte_t pteval = ptep_get(_pte);
>> if (pte_none_or_zero(pteval)) {
>> ++none_or_zero;
>> @@ -2288,8 +2300,9 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
>> return result;
>> }
>>
>> -static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
>> - struct file *file, pgoff_t start, struct collapse_control *cc)
>> +static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
>> + unsigned long addr, struct file *file, pgoff_t start,
>> + unsigned int *cur_progress, struct collapse_control *cc)
>> {
>> struct folio *folio = NULL;
>> struct address_space *mapping = file->f_mapping;
>> @@ -2378,6 +2391,8 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm, unsigned
>> cond_resched_rcu();
>> }
>> }
>> + if (cur_progress)
>> + *cur_progress = HPAGE_PMD_NR;
>
> Hi David,
>
> When using Fedora Server, I found a lot of SCAN_PTE_MAPPED_HUGEPAGE results.
>
> The following data is traced by bpftrace[1] on Fedora Server. After
> the system has been left idle for 10 minutes upon booting, a lot of
> SCAN_PTE_MAPPED_HUGEPAGE results are observed during a full scan by khugepaged,
> as shown below:
>
> SCAN_SUCCEED : 1
> SCAN_PMD_MAPPED : 22
> SCAN_EXCEED_NONE_PTE : 67
> SCAN_PTE_MAPPED_HUGEPAGE: 919
>
> I simply handled SCAN_PTE_MAPPED_HUGEPAGE by setting *cur_progress to 1,
> as follows:
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 437783cf2873..7f301bebfb11 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -2405,8 +2405,12 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
> }
> }
> rcu_read_unlock();
> - if (cur_progress)
> - *cur_progress = HPAGE_PMD_NR;
> + if (cur_progress) {
> + if (result == SCAN_PTE_MAPPED_HUGEPAGE)
> + *cur_progress = 1;
> + else
> + *cur_progress = HPAGE_PMD_NR;
> + }
That makes sense to me.
--
Cheers,
David
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH mm-new v7 0/5] Improve khugepaged scan logic
2026-02-07 8:11 [PATCH mm-new v7 0/5] Improve khugepaged scan logic Vernon Yang
@ 2026-02-07 8:24 ` Vernon Yang
0 siblings, 0 replies; 22+ messages in thread
From: Vernon Yang @ 2026-02-07 8:24 UTC (permalink / raw)
To: akpm, david
Cc: lorenzo.stoakes, ziy, dev.jain, baohua, lance.yang, linux-mm,
linux-kernel, Vernon Yang
Sorry, the network was interrupted while I was sending the patchset, so
I am resending it.
The earlier posting https://lore.kernel.org/linux-mm/20260207081144.588545-1-vernon2gm@gmail.com/T/#t
is BAD.
The new posting https://lore.kernel.org/linux-mm/20260207081613.588598-1-vernon2gm@gmail.com/
is GOOD.
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH mm-new v7 0/5] Improve khugepaged scan logic
@ 2026-02-07 8:11 Vernon Yang
2026-02-07 8:24 ` Vernon Yang
0 siblings, 1 reply; 22+ messages in thread
From: Vernon Yang @ 2026-02-07 8:11 UTC (permalink / raw)
To: akpm, david
Cc: lorenzo.stoakes, ziy, dev.jain, baohua, lance.yang, linux-mm,
linux-kernel, Vernon Yang
From: Vernon Yang <yanglincheng@kylinos.cn>
Hi all,
This series improves the khugepaged scan logic, reduces CPU consumption,
and prioritizes scanning tasks that access memory frequently.
The following data is traced by bpftrace[1] on a desktop system. After
the system has been left idle for 10 minutes upon booting, a lot of
SCAN_PMD_MAPPED or SCAN_NO_PTE_TABLE are observed during a full scan by
khugepaged.
@scan_pmd_status[1]: 1 ## SCAN_SUCCEED
@scan_pmd_status[6]: 2 ## SCAN_EXCEED_SHARED_PTE
@scan_pmd_status[3]: 142 ## SCAN_PMD_MAPPED
@scan_pmd_status[2]: 178 ## SCAN_NO_PTE_TABLE
total progress size: 674 MB
Total time : 419 seconds ## includes khugepaged_scan_sleep_millisecs
khugepaged shows the following behavior: the khugepaged list is scanned
in a FIFO manner, and as long as a task is not destroyed,
1. a task that no longer has memory that can be collapsed into
hugepages is still scanned, forever.
2. a cold task at the front of the khugepaged scan list is still
scanned first.
3. every scan pass is separated by khugepaged_scan_sleep_millisecs
(default 10s). If the above two cases are always scanned first, the
genuinely useful scans have to wait for a long time.
For the first case, when the memory is either SCAN_PMD_MAPPED or
SCAN_NO_PTE_TABLE, just skip it.
For the second case, if the user has explicitly informed us via
MADV_FREE that these folios will be freed, simply skip them.
Below are some performance test results.
kernbench results (testing on x86_64 machine):
baseline w/o patches test w/ patches
Amean user-32 18522.51 ( 0.00%) 18333.64 * 1.02%*
Amean syst-32 1137.96 ( 0.00%) 1113.79 * 2.12%*
Amean elsp-32 666.04 ( 0.00%) 659.44 * 0.99%*
BAmean-95 user-32 18520.01 ( 0.00%) 18323.57 ( 1.06%)
BAmean-95 syst-32 1137.68 ( 0.00%) 1110.50 ( 2.39%)
BAmean-95 elsp-32 665.92 ( 0.00%) 659.06 ( 1.03%)
BAmean-99 user-32 18520.01 ( 0.00%) 18323.57 ( 1.06%)
BAmean-99 syst-32 1137.68 ( 0.00%) 1110.50 ( 2.39%)
BAmean-99 elsp-32 665.92 ( 0.00%) 659.06 ( 1.03%)
Create three tasks[2]: hot1 -> cold -> hot2. After all three tasks are
created, each allocates 128MB of memory. The hot1/hot2 tasks
continuously access their 128MB of memory, while the cold task only
accesses its memory briefly and then calls madvise(MADV_FREE). Here are
the performance test results:
(Throughput bigger is better, other smaller is better)
Testing on x86_64 machine:
| task hot2 | without patch | with patch | delta |
|---------------------|---------------|---------------|---------|
| total accesses time | 3.14 sec | 2.93 sec | -6.69% |
| cycles per access | 4.96 | 2.21 | -55.44% |
| Throughput | 104.38 M/sec | 111.89 M/sec | +7.19% |
| dTLB-load-misses | 284814532 | 69597236 | -75.56% |
Testing on qemu-system-x86_64 -enable-kvm:
| task hot2 | without patch | with patch | delta |
|---------------------|---------------|---------------|---------|
| total accesses time | 3.35 sec | 2.96 sec | -11.64% |
| cycles per access | 7.29 | 2.07 | -71.60% |
| Throughput | 97.67 M/sec | 110.77 M/sec | +13.41% |
| dTLB-load-misses | 241600871 | 3216108 | -98.67% |
This series is based on mm-new.
Thank you very much for your comments and discussions.
V6 -> V7:
- Use "*cur_progress += 1" at the beginning of the loop in anon case.
- Always "cur_progress" equal to HPAGE_PMD_NR in file case.
- Some cleaning, and pickup Acked-by and Reviewed-by.
V5 -> V6:
- Simplify hpage_collapse_scan_file() [3] and hpage_collapse_scan_pmd().
- Skip lazy-free folios in khugepaged only [4].
- Pick up Reviewed-by.
V4 -> V5:
- Patch #3 are squashed to Patch #2
- The file patch utilizes "xas->xa_index" to fix the issue.
- folio_is_lazyfree() to folio_test_lazyfree()
- Just skip lazyfree folio simply.
- Re-test kernbench in performance mode to improve stability.
- Pick up Acked-by and Reviewed-by.
V3 -> V4:
- Rebase on mm-new.
- Make Patch #2 cleaner
- Fix lazyfree folios continuing to be collapsed after being skipped.
V2 -> V3:
- Refine scan progress number, add folio_is_lazyfree helper
- Fix warnings at SCAN_PTE_MAPPED_HUGEPAGE.
- For MADV_FREE, we will skip the lazy-free folios instead.
- For MADV_COLD, remove it.
- Used hpage_collapse_test_exit_or_disable() instead of vma = NULL.
- Pick up Reviewed-by.
V1 -> V2:
- Rename full to full_scan_finished, pick up Acked-by.
- Just skip SCAN_PMD_MAPPED/NO_PTE_TABLE memory, not remove mm.
- Set VM_NOHUGEPAGE flag when MADV_COLD/MADV_FREE to just skip, not move mm.
- Re-test performance on v6.19-rc2.
V6 : https://lore.kernel.org/linux-mm/20260201122554.1470071-1-vernon2gm@gmail.com
V5 : https://lore.kernel.org/linux-mm/20260123082232.16413-1-vernon2gm@gmail.com
V4 : https://lore.kernel.org/linux-mm/20260111121909.8410-1-yanglincheng@kylinos.cn
V3 : https://lore.kernel.org/linux-mm/20260104054112.4541-1-yanglincheng@kylinos.cn
V2 : https://lore.kernel.org/linux-mm/20251229055151.54887-1-yanglincheng@kylinos.cn
V1 : https://lore.kernel.org/linux-mm/20251215090419.174418-1-yanglincheng@kylinos.cn
[1] https://github.com/vernon2gh/app_and_module/blob/main/khugepaged/khugepaged_mm.bt
[2] https://github.com/vernon2gh/app_and_module/blob/main/khugepaged/app.c
[3] https://lore.kernel.org/linux-mm/4c35391e-a944-4e62-9103-4a1c4961f62a@arm.com
[4] https://lore.kernel.org/linux-mm/CACZaFFNY8+UKLzBGnmB3ij9amzBdKJgytcSNtA8fLCake8Ua=A@mail.gmail.com
Vernon Yang (5):
mm: khugepaged: add trace_mm_khugepaged_scan event
mm: khugepaged: refine scan progress number
mm: add folio_test_lazyfree helper
mm: khugepaged: skip lazy-free folios
mm: khugepaged: set to next mm direct when mm has
MMF_DISABLE_THP_COMPLETELY
include/linux/page-flags.h | 5 +++
include/trace/events/huge_memory.h | 26 ++++++++++++++
mm/khugepaged.c | 57 +++++++++++++++++++++++-------
mm/rmap.c | 2 +-
mm/vmscan.c | 5 ++-
5 files changed, 79 insertions(+), 16 deletions(-)
base-commit: a1a876489abcc1e75b03bd3b2f6739ceeaaec8c5
--
2.51.0
^ permalink raw reply [flat|nested] 22+ messages in thread
end of thread, other threads:[~2026-02-18 8:05 UTC | newest]
Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-07 8:16 [PATCH mm-new v7 0/5] Improve khugepaged scan logic Vernon Yang
2026-02-07 8:16 ` [PATCH mm-new v7 1/5] mm: khugepaged: add trace_mm_khugepaged_scan event Vernon Yang
2026-02-07 8:16 ` [PATCH mm-new v7 2/5] mm: khugepaged: refine scan progress number Vernon Yang
2026-02-08 9:17 ` Dev Jain
2026-02-08 13:25 ` Vernon Yang
2026-02-18 3:55 ` Vernon Yang
2026-02-18 8:05 ` David Hildenbrand (Arm)
2026-02-07 8:16 ` [PATCH mm-new v7 3/5] mm: add folio_test_lazyfree helper Vernon Yang
2026-02-07 8:16 ` [PATCH mm-new v7 4/5] mm: khugepaged: skip lazy-free folios Vernon Yang
2026-02-07 8:34 ` Barry Song
2026-02-07 13:51 ` Lance Yang
2026-02-07 21:38 ` David Hildenbrand (Arm)
2026-02-07 22:01 ` Barry Song
2026-02-07 22:05 ` David Hildenbrand (Arm)
2026-02-07 22:17 ` Barry Song
2026-02-07 22:25 ` David Hildenbrand (Arm)
2026-02-07 22:31 ` Barry Song
2026-02-08 13:26 ` Vernon Yang
2026-02-08 4:06 ` Lance Yang
2026-02-07 8:16 ` [PATCH mm-new v7 5/5] mm: khugepaged: set to next mm direct when mm has MMF_DISABLE_THP_COMPLETELY Vernon Yang
-- strict thread matches above, loose matches on Subject: below --
2026-02-07 8:11 [PATCH mm-new v7 0/5] Improve khugepaged scan logic Vernon Yang
2026-02-07 8:24 ` Vernon Yang
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox