* [PATCH mm-new v4 0/6] Improve khugepaged scan logic
From: Vernon Yang @ 2026-01-11 12:19 UTC
To: akpm, david
Cc: lorenzo.stoakes, ziy, dev.jain, baohua, lance.yang, linux-mm,
linux-kernel, Vernon Yang
Hi all,
This series improves the khugepaged scan logic to reduce CPU consumption
and to prioritize scanning tasks that access memory frequently.
The following data was traced with bpftrace[1] on a desktop system.
After the system had been left idle for 10 minutes after booting, many
SCAN_PMD_MAPPED or SCAN_NO_PTE_TABLE results were observed during a
full scan by khugepaged.
@scan_pmd_status[1]: 1 ## SCAN_SUCCEED
@scan_pmd_status[6]: 2 ## SCAN_EXCEED_SHARED_PTE
@scan_pmd_status[3]: 142 ## SCAN_PMD_MAPPED
@scan_pmd_status[2]: 178 ## SCAN_NO_PTE_TABLE
total progress size: 674 MB
Total time : 419 seconds ## includes khugepaged_scan_sleep_millisecs
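For reference: assuming 4K base pages and 2M PMDs, the default
khugepaged_pages_to_scan is 8 * HPAGE_PMD_NR = 4096 pages (16 MB) per
wakeup, so a full scan of 674 MB needs roughly 674 / 16 ~ 42 wakeups
spaced khugepaged_scan_sleep_millisecs (10s) apart, i.e. about 420
seconds, which matches the 419 seconds observed above.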
khugepaged currently behaves as follows: the khugepaged list is scanned
in FIFO order, and as long as a task is not destroyed,
1. a task that no longer has any memory that can be collapsed into
   hugepages is still scanned on every pass.
2. a cold task at the front of the khugepaged scan list is still
   scanned first.
3. every scan pass is separated by khugepaged_scan_sleep_millisecs
   (default 10s). If the two cases above are always scanned first, the
   useful scans have to wait a long time.
For the first case, when the memory is either SCAN_PMD_MAPPED or
SCAN_NO_PTE_TABLE, simply skip it.
For the second case, if the user has explicitly informed us via
MADV_FREE that these folios will be freed, simply skip them as well.
Below are some performance test results.
kernbench results (testing on x86_64 machine):

                     baseline w/o patches    test w/ patches
Amean     user-32    18586.99 (   0.00%)    18562.36 *   0.13%*
Amean     syst-32     1133.61 (   0.00%)     1126.02 *   0.67%*
Amean     elsp-32      668.05 (   0.00%)      667.13 *   0.14%*
BAmean-95 user-32    18585.23 (   0.00%)    18559.71 (   0.14%)
BAmean-95 syst-32     1133.22 (   0.00%)     1125.49 (   0.68%)
BAmean-95 elsp-32      667.94 (   0.00%)      667.08 (   0.13%)
BAmean-99 user-32    18585.23 (   0.00%)    18559.71 (   0.14%)
BAmean-99 syst-32     1133.22 (   0.00%)     1125.49 (   0.68%)
BAmean-99 elsp-32      667.94 (   0.00%)      667.08 (   0.13%)
Create three tasks[2]: hot1 -> cold -> hot2. After all three tasks are
created, each allocates 128 MB of memory. The hot1/hot2 tasks
continuously access their 128 MB of memory, while the cold task only
accesses its memory briefly and then calls madvise(MADV_FREE); a
minimal sketch of the cold task follows.
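This is a rough sketch of what the cold task in app.c[2] is assumed to
do (illustrative only, not the exact test source):

	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#define SZ (128UL << 20)	/* 128 MB */

	static void cold_task(void)
	{
		/* Error handling omitted for brevity. */
		char *buf = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		memset(buf, 1, SZ);		/* brief access */
		madvise(buf, SZ, MADV_FREE);	/* anon folios become lazy-free */

		for (;;)
			pause();		/* stay alive but idle */
	}

Here are the performance test results: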
(For throughput, higher is better; for all other metrics, lower is better.)
Testing on x86_64 machine:
| task hot2 | without patch | with patch | delta |
|---------------------|---------------|---------------|---------|
| total accesses time | 3.14 sec | 2.93 sec | -6.69% |
| cycles per access | 4.96 | 2.21 | -55.44% |
| Throughput | 104.38 M/sec | 111.89 M/sec | +7.19% |
| dTLB-load-misses | 284814532 | 69597236 | -75.56% |
Testing on qemu-system-x86_64 -enable-kvm:
| task hot2 | without patch | with patch | delta |
|---------------------|---------------|---------------|---------|
| total accesses time | 3.35 sec | 2.96 sec | -11.64% |
| cycles per access | 7.29 | 2.07 | -71.60% |
| Throughput | 97.67 M/sec | 110.77 M/sec | +13.41% |
| dTLB-load-misses | 241600871 | 3216108 | -98.67% |
This series is based on mm-new.
Thank you very much for your comments and discussions.
V3 -> V4:
- Rebase on mm-new.
- Make patch #2 cleaner.
- Fix lazy-free folios continuing to be collapsed when skipped ahead.
V2 -> V3:
- Refine the scan progress number; add a folio_is_lazyfree helper.
- Fix warnings at SCAN_PTE_MAPPED_HUGEPAGE.
- For MADV_FREE, skip the lazy-free folios instead.
- Drop the MADV_COLD handling.
- Use hpage_collapse_test_exit_or_disable() instead of vma = NULL.
- Pick up Reviewed-by.
V1 -> V2:
- Rename full to full_scan_finished; pick up Acked-by.
- Just skip SCAN_PMD_MAPPED/NO_PTE_TABLE memory, not remove mm.
- Set the VM_NOHUGEPAGE flag on MADV_COLD/MADV_FREE to just skip, not move the mm.
- Re-test performance on v6.19-rc2.
V3 : https://lore.kernel.org/linux-mm/20260104054112.4541-1-yanglincheng@kylinos.cn
V2 : https://lore.kernel.org/linux-mm/20251229055151.54887-1-yanglincheng@kylinos.cn
V1 : https://lore.kernel.org/linux-mm/20251215090419.174418-1-yanglincheng@kylinos.cn
[1] https://github.com/vernon2gh/app_and_module/blob/main/khugepaged/khugepaged_mm.bt
[2] https://github.com/vernon2gh/app_and_module/blob/main/khugepaged/app.c
Vernon Yang (6):
mm: khugepaged: add trace_mm_khugepaged_scan event
mm: khugepaged: refine scan progress number
mm: khugepaged: just skip when the memory has been collapsed
mm: add folio_is_lazyfree helper
mm: khugepaged: skip lazy-free folios at scanning
mm: khugepaged: set to next mm direct when mm has
MMF_DISABLE_THP_COMPLETELY
include/linux/mm_inline.h | 5 +++
include/trace/events/huge_memory.h | 25 +++++++++++++
mm/khugepaged.c | 56 +++++++++++++++++++++++++-----
mm/rmap.c | 2 +-
mm/vmscan.c | 5 ++-
5 files changed, 81 insertions(+), 12 deletions(-)
--
2.51.0
* [PATCH mm-new v4 1/6] mm: khugepaged: add trace_mm_khugepaged_scan event
From: Vernon Yang @ 2026-01-11 12:19 UTC
To: akpm, david
Cc: lorenzo.stoakes, ziy, dev.jain, baohua, lance.yang, linux-mm,
linux-kernel, Vernon Yang
Add an mm_khugepaged_scan event to track the total time of a full scan
and the total number of pages scanned by khugepaged.
Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
Reviewed-by: Barry Song <baohua@kernel.org>
---
include/trace/events/huge_memory.h | 24 ++++++++++++++++++++++++
mm/khugepaged.c | 2 ++
2 files changed, 26 insertions(+)
diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
index 4e41bff31888..3d1069c3f0c5 100644
--- a/include/trace/events/huge_memory.h
+++ b/include/trace/events/huge_memory.h
@@ -237,5 +237,29 @@ TRACE_EVENT(mm_khugepaged_collapse_file,
__print_symbolic(__entry->result, SCAN_STATUS))
);
+TRACE_EVENT(mm_khugepaged_scan,
+
+ TP_PROTO(struct mm_struct *mm, int progress, bool full_scan_finished),
+
+ TP_ARGS(mm, progress, full_scan_finished),
+
+ TP_STRUCT__entry(
+ __field(struct mm_struct *, mm)
+ __field(int, progress)
+ __field(bool, full_scan_finished)
+ ),
+
+ TP_fast_assign(
+ __entry->mm = mm;
+ __entry->progress = progress;
+ __entry->full_scan_finished = full_scan_finished;
+ ),
+
+ TP_printk("mm=%p, progress=%d, full_scan_finished=%d",
+ __entry->mm,
+ __entry->progress,
+ __entry->full_scan_finished)
+);
+
#endif /* __HUGE_MEMORY_H */
#include <trace/define_trace.h>
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 9f790ec34400..2e570f83778c 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2545,6 +2545,8 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, enum scan_result
collect_mm_slot(slot);
}
+ trace_mm_khugepaged_scan(mm, progress, khugepaged_scan.mm_slot == NULL);
+
return progress;
}
--
2.51.0
* [PATCH mm-new v4 2/6] mm: khugepaged: refine scan progress number
From: Vernon Yang @ 2026-01-11 12:19 UTC
To: akpm, david
Cc: lorenzo.stoakes, ziy, dev.jain, baohua, lance.yang, linux-mm,
linux-kernel, Vernon Yang
Currently, each scan always increases "progress" by HPAGE_PMD_NR,
even if only a single PTE is scanned.
This patch does not change the original semantics of "progress"; it
simply replaces the fixed HPAGE_PMD_NR with the exact number of PTEs
counted.
Let me provide a detailed example:
static enum scan_result hpage_collapse_scan_pmd()
{
for (addr = start_addr, _pte = pte; _pte < pte + HPAGE_PMD_NR;
_pte++, addr += PAGE_SIZE) {
pte_t pteval = ptep_get(_pte);
...
if (pte_uffd_wp(pteval)) { <-- first scan hit
result = SCAN_PTE_UFFD_WP;
goto out_unmap;
}
}
}
During the first scan, if pte_uffd_wp(pteval) is true, the loop exits
immediately; in practice, only one PTE is scanned before termination.
Here "progress += 1" reflects the actual number of PTEs scanned,
whereas previously "progress" was always increased by HPAGE_PMD_NR
(512 on x86_64 with 2M PMDs).
Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
---
mm/khugepaged.c | 28 ++++++++++++++++++++++------
1 file changed, 22 insertions(+), 6 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 2e570f83778c..5c6015ac7b5e 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1249,6 +1249,7 @@ static enum scan_result collapse_huge_page(struct mm_struct *mm, unsigned long a
static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
struct vm_area_struct *vma,
unsigned long start_addr, bool *mmap_locked,
+ int *cur_progress,
struct collapse_control *cc)
{
pmd_t *pmd;
@@ -1264,19 +1265,27 @@ static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
VM_BUG_ON(start_addr & ~HPAGE_PMD_MASK);
result = find_pmd_or_thp_or_none(mm, start_addr, &pmd);
- if (result != SCAN_SUCCEED)
+ if (result != SCAN_SUCCEED) {
+ if (cur_progress)
+ *cur_progress = HPAGE_PMD_NR;
goto out;
+ }
memset(cc->node_load, 0, sizeof(cc->node_load));
nodes_clear(cc->alloc_nmask);
pte = pte_offset_map_lock(mm, pmd, start_addr, &ptl);
if (!pte) {
+ if (cur_progress)
+ *cur_progress = HPAGE_PMD_NR;
result = SCAN_NO_PTE_TABLE;
goto out;
}
for (addr = start_addr, _pte = pte; _pte < pte + HPAGE_PMD_NR;
_pte++, addr += PAGE_SIZE) {
+ if (cur_progress)
+ *cur_progress += 1;
+
pte_t pteval = ptep_get(_pte);
if (pte_none_or_zero(pteval)) {
++none_or_zero;
@@ -2297,6 +2306,7 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
struct file *file, pgoff_t start,
+ int *cur_progress,
struct collapse_control *cc)
{
struct folio *folio = NULL;
@@ -2337,6 +2347,9 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm, unsigned
continue;
}
+ if (cur_progress)
+ *cur_progress += folio_nr_pages(folio);
+
if (folio_order(folio) == HPAGE_PMD_ORDER &&
folio->index == start) {
/* Maybe PMD-mapped */
@@ -2466,6 +2479,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, enum scan_result
while (khugepaged_scan.address < hend) {
bool mmap_locked = true;
+ int cur_progress = 0;
cond_resched();
if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
@@ -2482,7 +2496,8 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, enum scan_result
mmap_read_unlock(mm);
mmap_locked = false;
*result = hpage_collapse_scan_file(mm,
- khugepaged_scan.address, file, pgoff, cc);
+ khugepaged_scan.address, file, pgoff,
+ &cur_progress, cc);
fput(file);
if (*result == SCAN_PTE_MAPPED_HUGEPAGE) {
mmap_read_lock(mm);
@@ -2496,7 +2511,8 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, enum scan_result
}
} else {
*result = hpage_collapse_scan_pmd(mm, vma,
- khugepaged_scan.address, &mmap_locked, cc);
+ khugepaged_scan.address, &mmap_locked,
+ &cur_progress, cc);
}
if (*result == SCAN_SUCCEED)
@@ -2504,7 +2520,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, enum scan_result
/* move to next address */
khugepaged_scan.address += HPAGE_PMD_SIZE;
- progress += HPAGE_PMD_NR;
+ progress += cur_progress;
if (!mmap_locked)
/*
* We released mmap_lock so break loop. Note
@@ -2826,11 +2842,11 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
mmap_read_unlock(mm);
mmap_locked = false;
result = hpage_collapse_scan_file(mm, addr, file, pgoff,
- cc);
+ NULL, cc);
fput(file);
} else {
result = hpage_collapse_scan_pmd(mm, vma, addr,
- &mmap_locked, cc);
+ &mmap_locked, NULL, cc);
}
if (!mmap_locked)
*lock_dropped = true;
--
2.51.0
* [PATCH mm-new v4 3/6] mm: khugepaged: just skip when the memory has been collapsed
From: Vernon Yang @ 2026-01-11 12:19 UTC
To: akpm, david
Cc: lorenzo.stoakes, ziy, dev.jain, baohua, lance.yang, linux-mm,
linux-kernel, Vernon Yang
The following data was traced with bpftrace on a desktop system. After
the system had been left idle for 10 minutes after booting, many
SCAN_PMD_MAPPED or SCAN_NO_PTE_TABLE results were observed during a
full scan by khugepaged.
@scan_pmd_status[1]: 1 ## SCAN_SUCCEED
@scan_pmd_status[6]: 2 ## SCAN_EXCEED_SHARED_PTE
@scan_pmd_status[3]: 142 ## SCAN_PMD_MAPPED
@scan_pmd_status[2]: 178 ## SCAN_NO_PTE_TABLE
total progress size: 674 MB
Total time : 419 seconds ## includes khugepaged_scan_sleep_millisecs
The khugepaged_scan list holds all tasks whose memory may be collapsed
into hugepages; as long as a task is not destroyed, khugepaged will not
remove it from the khugepaged_scan list. As a result, a task that has
already collapsed all of its memory regions into hugepages continues to
be scanned, wasting CPU time to no effect, and because of
khugepaged_scan_sleep_millisecs (default 10s), scanning a large number
of such invalid tasks makes the genuinely useful scans come much later.
After applying this patch, when the memory is either SCAN_PMD_MAPPED or
SCAN_NO_PTE_TABLE, it is simply skipped, as follows:
@scan_pmd_status[6]: 2
@scan_pmd_status[3]: 147
@scan_pmd_status[2]: 173
total progress size: 45 MB
Total time : 20 seconds
Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
---
mm/khugepaged.c | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 5c6015ac7b5e..6df2857d94c6 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -68,7 +68,10 @@ enum scan_result {
static struct task_struct *khugepaged_thread __read_mostly;
static DEFINE_MUTEX(khugepaged_mutex);
-/* default scan 8*HPAGE_PMD_NR ptes (or vmas) every 10 second */
+/*
+ * default scan 8*HPAGE_PMD_NR ptes, pmd_mapped, no_pte_table or vmas
+ * every 10 seconds.
+ */
static unsigned int khugepaged_pages_to_scan __read_mostly;
static unsigned int khugepaged_pages_collapsed;
static unsigned int khugepaged_full_scans;
@@ -1267,7 +1270,7 @@ static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
result = find_pmd_or_thp_or_none(mm, start_addr, &pmd);
if (result != SCAN_SUCCEED) {
if (cur_progress)
- *cur_progress = HPAGE_PMD_NR;
+ *cur_progress = 1;
goto out;
}
@@ -1276,7 +1279,7 @@ static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
pte = pte_offset_map_lock(mm, pmd, start_addr, &ptl);
if (!pte) {
if (cur_progress)
- *cur_progress = HPAGE_PMD_NR;
+ *cur_progress = 1;
result = SCAN_NO_PTE_TABLE;
goto out;
}
@@ -2347,9 +2350,6 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm, unsigned
continue;
}
- if (cur_progress)
- *cur_progress += folio_nr_pages(folio);
-
if (folio_order(folio) == HPAGE_PMD_ORDER &&
folio->index == start) {
/* Maybe PMD-mapped */
@@ -2361,9 +2361,14 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm, unsigned
* returning.
*/
folio_put(folio);
+ if (cur_progress)
+ *cur_progress += 1;
break;
}
+ if (cur_progress)
+ *cur_progress += folio_nr_pages(folio);
+
node = folio_nid(folio);
if (hpage_collapse_scan_abort(node, cc)) {
result = SCAN_SCAN_ABORT;
--
2.51.0
* [PATCH mm-new v4 4/6] mm: add folio_is_lazyfree helper
From: Vernon Yang @ 2026-01-11 12:19 UTC
To: akpm, david
Cc: lorenzo.stoakes, ziy, dev.jain, baohua, lance.yang, linux-mm,
linux-kernel, Vernon Yang
Add a folio_is_lazyfree() helper to identify lazy-free folios
(anonymous folios that are no longer swap-backed after MADV_FREE) and
improve code readability.
Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
---
include/linux/mm_inline.h | 5 +++++
mm/rmap.c | 2 +-
mm/vmscan.c | 5 ++---
3 files changed, 8 insertions(+), 4 deletions(-)
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index fa2d6ba811b5..65a4ae52d915 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -35,6 +35,11 @@ static inline int page_is_file_lru(struct page *page)
return folio_is_file_lru(page_folio(page));
}
+static inline int folio_is_lazyfree(const struct folio *folio)
+{
+ return folio_test_anon(folio) && !folio_test_swapbacked(folio);
+}
+
static __always_inline void __update_lru_size(struct lruvec *lruvec,
enum lru_list lru, enum zone_type zid,
long nr_pages)
diff --git a/mm/rmap.c b/mm/rmap.c
index 336b27e00238..fd335b171ea7 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2042,7 +2042,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
}
if (!pvmw.pte) {
- if (folio_test_anon(folio) && !folio_test_swapbacked(folio)) {
+ if (folio_is_lazyfree(folio)) {
if (unmap_huge_pmd_locked(vma, pvmw.address, pvmw.pmd, folio))
goto walk_done;
/*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 81828fa625ed..ad3516ff1381 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -964,8 +964,7 @@ static void folio_check_dirty_writeback(struct folio *folio,
* They could be mistakenly treated as file lru. So further anon
* test is needed.
*/
- if (!folio_is_file_lru(folio) ||
- (folio_test_anon(folio) && !folio_test_swapbacked(folio))) {
+ if (!folio_is_file_lru(folio) || folio_is_lazyfree(folio)) {
*dirty = false;
*writeback = false;
return;
@@ -1506,7 +1505,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
}
}
- if (folio_test_anon(folio) && !folio_test_swapbacked(folio)) {
+ if (folio_is_lazyfree(folio)) {
/* follow __remove_mapping for reference */
if (!folio_ref_freeze(folio, 1))
goto keep_locked;
--
2.51.0
* [PATCH mm-new v4 5/6] mm: khugepaged: skip lazy-free folios at scanning
From: Vernon Yang @ 2026-01-11 12:19 UTC
To: akpm, david
Cc: lorenzo.stoakes, ziy, dev.jain, baohua, lance.yang, linux-mm,
linux-kernel, Vernon Yang
For example, create three tasks: hot1 -> cold -> hot2. After all three
tasks are created, each allocates 128 MB of memory. The hot1/hot2 tasks
continuously access their 128 MB of memory, while the cold task only
accesses its memory briefly and then calls madvise(MADV_FREE). However,
khugepaged still prioritizes scanning the cold task and only scans the
hot2 task after completing the scan of the cold task.
So if the user has explicitly informed us via MADV_FREE that this memory
will be freed, it is appropriate for khugepaged to simply skip it,
thereby avoiding unnecessary scan and collapse operations and reducing
wasted CPU time.
Here are the performance test results:
(For throughput, higher is better; for all other metrics, lower is better.)
Testing on x86_64 machine:
| task hot2 | without patch | with patch | delta |
|---------------------|---------------|---------------|---------|
| total accesses time | 3.14 sec | 2.93 sec | -6.69% |
| cycles per access | 4.96 | 2.21 | -55.44% |
| Throughput | 104.38 M/sec | 111.89 M/sec | +7.19% |
| dTLB-load-misses | 284814532 | 69597236 | -75.56% |
Testing on qemu-system-x86_64 -enable-kvm:
| task hot2 | without patch | with patch | delta |
|---------------------|---------------|---------------|---------|
| total accesses time | 3.35 sec | 2.96 sec | -11.64% |
| cycles per access | 7.29 | 2.07 | -71.60% |
| Throughput | 97.67 M/sec | 110.77 M/sec | +13.41% |
| dTLB-load-misses | 241600871 | 3216108 | -98.67% |
Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
---
include/trace/events/huge_memory.h | 1 +
mm/khugepaged.c | 17 +++++++++++++++++
2 files changed, 18 insertions(+)
diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
index 3d1069c3f0c5..e3856f8ab9eb 100644
--- a/include/trace/events/huge_memory.h
+++ b/include/trace/events/huge_memory.h
@@ -25,6 +25,7 @@
EM( SCAN_PAGE_LRU, "page_not_in_lru") \
EM( SCAN_PAGE_LOCK, "page_locked") \
EM( SCAN_PAGE_ANON, "page_not_anon") \
+ EM( SCAN_PAGE_LAZYFREE, "page_lazyfree") \
EM( SCAN_PAGE_COMPOUND, "page_compound") \
EM( SCAN_ANY_PROCESS, "no_process_for_page") \
EM( SCAN_VMA_NULL, "vma_null") \
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 6df2857d94c6..8a7008760566 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -46,6 +46,7 @@ enum scan_result {
SCAN_PAGE_LRU,
SCAN_PAGE_LOCK,
SCAN_PAGE_ANON,
+ SCAN_PAGE_LAZYFREE,
SCAN_PAGE_COMPOUND,
SCAN_ANY_PROCESS,
SCAN_VMA_NULL,
@@ -1258,6 +1259,7 @@ static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
pmd_t *pmd;
pte_t *pte, *_pte;
int none_or_zero = 0, shared = 0, referenced = 0;
+ int lazyfree = 0;
enum scan_result result = SCAN_FAIL;
struct page *page = NULL;
struct folio *folio = NULL;
@@ -1343,6 +1345,21 @@ static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
}
folio = page_folio(page);
+ if (cc->is_khugepaged && !pte_dirty(pteval) &&
+ folio_is_lazyfree(folio)) {
+ ++lazyfree;
+
+ /*
+ * Lazy-free folios are reclaimed and become pte_none.
+ * Ensure they cannot still be collapsed after being
+ * skipped here.
+ */
+ if ((lazyfree + none_or_zero) > khugepaged_max_ptes_none) {
+ result = SCAN_PAGE_LAZYFREE;
+ goto out_unmap;
+ }
+ }
+
if (!folio_test_anon(folio)) {
result = SCAN_PAGE_ANON;
goto out_unmap;
--
2.51.0
* [PATCH mm-new v4 6/6] mm: khugepaged: set to next mm direct when mm has MMF_DISABLE_THP_COMPLETELY
From: Vernon Yang @ 2026-01-11 12:19 UTC
To: akpm, david
Cc: lorenzo.stoakes, ziy, dev.jain, baohua, lance.yang, linux-mm,
linux-kernel, Vernon Yang
When an mm with the MMF_DISABLE_THP_COMPLETELY flag is detected during
scanning, directly set khugepaged_scan.mm_slot to the next mm_slot,
reducing redundant operations.
Without this patch, khugepaged_scan.mm_slot is only advanced to the
next mm_slot on the next entry into khugepaged_scan_mm_slot().
With this patch, khugepaged_scan.mm_slot is advanced to the next
mm_slot directly.
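For context, the sketch below shows one way a task typically ends up
with MMF_DISABLE_THP_COMPLETELY; it assumes the prctl(PR_SET_THP_DISABLE)
path still maps to that flag in mm-new, and is not taken from this
series itself:

	#include <sys/prctl.h>

	/*
	 * Opt the whole process out of THP; with the change below,
	 * khugepaged then advances past this mm directly.
	 */
	static int disable_thp_for_current(void)
	{
		return prctl(PR_SET_THP_DISABLE, 1, 0, 0, 0);
	}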
Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
---
mm/khugepaged.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 8a7008760566..4c055d6c2717 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2566,7 +2566,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, enum scan_result
* Release the current mm_slot if this mm is about to die, or
* if we scanned all vmas of this mm.
*/
- if (hpage_collapse_test_exit(mm) || !vma) {
+ if (hpage_collapse_test_exit_or_disable(mm) || !vma) {
/*
* Make sure that if mm_users is reaching zero while
* khugepaged runs here, khugepaged_exit will find
--
2.51.0
* Re: [PATCH mm-new v4 4/6] mm: add folio_is_lazyfree helper
From: Lance Yang @ 2026-01-11 13:41 UTC
To: Vernon Yang
Cc: lorenzo.stoakes, ziy, akpm, dev.jain, baohua, linux-mm,
linux-kernel, Vernon Yang, david
On 2026/1/11 20:19, Vernon Yang wrote:
> Add folio_is_lazyfree() function to identify lazy-free folios to improve
> code readability.
>
> Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
> ---
> include/linux/mm_inline.h | 5 +++++
> mm/rmap.c | 2 +-
> mm/vmscan.c | 5 ++---
> 3 files changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
> index fa2d6ba811b5..65a4ae52d915 100644
> --- a/include/linux/mm_inline.h
> +++ b/include/linux/mm_inline.h
> @@ -35,6 +35,11 @@ static inline int page_is_file_lru(struct page *page)
> return folio_is_file_lru(page_folio(page));
> }
>
> +static inline int folio_is_lazyfree(const struct folio *folio)
> +{
It's 2026, could we use bool instead of int?
Yeah, I see folio_is_file_lru() uses int but that's legacy ...
> + return folio_test_anon(folio) && !folio_test_swapbacked(folio);
> +}
> +
> static __always_inline void __update_lru_size(struct lruvec *lruvec,
> enum lru_list lru, enum zone_type zid,
> long nr_pages)
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 336b27e00238..fd335b171ea7 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -2042,7 +2042,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> }
>
> if (!pvmw.pte) {
> - if (folio_test_anon(folio) && !folio_test_swapbacked(folio)) {
> + if (folio_is_lazyfree(folio)) {
> if (unmap_huge_pmd_locked(vma, pvmw.address, pvmw.pmd, folio))
> goto walk_done;
> /*
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 81828fa625ed..ad3516ff1381 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -964,8 +964,7 @@ static void folio_check_dirty_writeback(struct folio *folio,
> * They could be mistakenly treated as file lru. So further anon
> * test is needed.
> */
> - if (!folio_is_file_lru(folio) ||
> - (folio_test_anon(folio) && !folio_test_swapbacked(folio))) {
> + if (!folio_is_file_lru(folio) || folio_is_lazyfree(folio)) {
> *dirty = false;
> *writeback = false;
> return;
> @@ -1506,7 +1505,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
> }
> }
>
> - if (folio_test_anon(folio) && !folio_test_swapbacked(folio)) {
> + if (folio_is_lazyfree(folio)) {
> /* follow __remove_mapping for reference */
> if (!folio_ref_freeze(folio, 1))
> goto keep_locked;
Otherwise, LGTM.
Reviewed-by: Lance Yang <lance.yang@linux.dev>
* Re: [PATCH mm-new v4 6/6] mm: khugepaged: set to next mm direct when mm has MMF_DISABLE_THP_COMPLETELY
From: Lance Yang @ 2026-01-11 13:44 UTC
To: Vernon Yang
Cc: lorenzo.stoakes, ziy, dev.jain, baohua, linux-mm, david,
linux-kernel, Vernon Yang, akpm
On 2026/1/11 20:19, Vernon Yang wrote:
> When an mm with the MMF_DISABLE_THP_COMPLETELY flag is detected during
> scanning, directly set khugepaged_scan.mm_slot to the next mm_slot,
> reduce redundant operation.
>
> Without this patch, entering khugepaged_scan_mm_slot() next time, we
> will set khugepaged_scan.mm_slot to the next mm_slot.
>
> With this patch, we will directly set khugepaged_scan.mm_slot to the
> next mm_slot.
>
> Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
> ---
Thanks! LGTM, so
Reviewed-by: Lance Yang <lance.yang@linux.dev>
> mm/khugepaged.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 8a7008760566..4c055d6c2717 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -2566,7 +2566,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, enum scan_result
> * Release the current mm_slot if this mm is about to die, or
> * if we scanned all vmas of this mm.
> */
> - if (hpage_collapse_test_exit(mm) || !vma) {
> + if (hpage_collapse_test_exit_or_disable(mm) || !vma) {
> /*
> * Make sure that if mm_users is reaching zero while
> * khugepaged runs here, khugepaged_exit will find
* Re: [PATCH mm-new v4 1/6] mm: khugepaged: add trace_mm_khugepaged_scan event
From: Lance Yang @ 2026-01-11 13:49 UTC
To: Vernon Yang
Cc: lorenzo.stoakes, ziy, akpm, dev.jain, baohua, linux-mm,
linux-kernel, Vernon Yang, david
On 2026/1/11 20:19, Vernon Yang wrote:
> Add mm_khugepaged_scan event to track the total time for full scan
> and the total number of pages scanned of khugepaged.
>
> Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
> Reviewed-by: Barry Song <baohua@kernel.org>
> ---
> include/trace/events/huge_memory.h | 24 ++++++++++++++++++++++++
> mm/khugepaged.c | 2 ++
> 2 files changed, 26 insertions(+)
>
> diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
> index 4e41bff31888..3d1069c3f0c5 100644
> --- a/include/trace/events/huge_memory.h
> +++ b/include/trace/events/huge_memory.h
> @@ -237,5 +237,29 @@ TRACE_EVENT(mm_khugepaged_collapse_file,
> __print_symbolic(__entry->result, SCAN_STATUS))
> );
>
> +TRACE_EVENT(mm_khugepaged_scan,
> +
> + TP_PROTO(struct mm_struct *mm, int progress, bool full_scan_finished),
> +
> + TP_ARGS(mm, progress, full_scan_finished),
> +
> + TP_STRUCT__entry(
> + __field(struct mm_struct *, mm)
> + __field(int, progress)
Nit: progress should be unsigned int here, not int :)
Otherwise, LGTM.
Reviewed-by: Lance Yang <lance.yang@linux.dev>
> + __field(bool, full_scan_finished)
> + ),
> +
> + TP_fast_assign(
> + __entry->mm = mm;
> + __entry->progress = progress;
> + __entry->full_scan_finished = full_scan_finished;
> + ),
> +
> + TP_printk("mm=%p, progress=%d, full_scan_finished=%d",
> + __entry->mm,
> + __entry->progress,
> + __entry->full_scan_finished)
> +);
> +
> #endif /* __HUGE_MEMORY_H */
> #include <trace/define_trace.h>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 9f790ec34400..2e570f83778c 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -2545,6 +2545,8 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, enum scan_result
> collect_mm_slot(slot);
> }
>
> + trace_mm_khugepaged_scan(mm, progress, khugepaged_scan.mm_slot == NULL);
> +
> return progress;
> }
>