From: Vernon Yang <vernon2gm@gmail.com>
To: akpm@linux-foundation.org, david@kernel.org
Cc: lorenzo.stoakes@oracle.com, ziy@nvidia.com, dev.jain@arm.com,
	baohua@kernel.org, lance.yang@linux.dev,
	richard.weiyang@gmail.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org,
	Vernon Yang <yanglincheng@kylinos.cn>
Subject: [PATCH v3 0/6] Improve khugepaged scan logic
Date: Sun,  4 Jan 2026 13:41:06 +0800
Message-ID: <20260104054112.4541-1-yanglincheng@kylinos.cn>

Hi all,

This series improves the khugepaged scan logic: it reduces CPU consumption
and prioritizes scanning tasks that access memory frequently.

The following data was collected with bpftrace[1] on a desktop system.
After the system had been left idle for 10 minutes after booting, a lot of
SCAN_PMD_MAPPED or SCAN_NO_PTE_TABLE results were observed during a full
scan by khugepaged.

@scan_pmd_status[1]: 1           ## SCAN_SUCCEED
@scan_pmd_status[6]: 2           ## SCAN_EXCEED_SHARED_PTE
@scan_pmd_status[3]: 142         ## SCAN_PMD_MAPPED
@scan_pmd_status[2]: 178         ## SCAN_NO_PTE_TABLE
total progress size: 674 MB
Total time         : 419 seconds ## includes khugepaged_scan_sleep_millisecs

khugepaged currently behaves as follows: the khugepaged list is scanned in
a FIFO manner, and as long as a task is not destroyed,
1. a task that no longer has any memory that can be collapsed into
   hugepages keeps being scanned forever.
2. a task at the front of the khugepaged scan list keeps being scanned
   first even when it is cold.
3. every scan round sleeps for khugepaged_scan_sleep_millisecs (default
   10s). If the two cases above are always scanned first, the useful scans
   have to wait for a long time.

For the first case, when the memory is either SCAN_PMD_MAPPED or
SCAN_NO_PTE_TABLE, just skip it.
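
To make the intent concrete, a hypothetical fragment is shown below; this
is not the actual diff, and result, progress and the loop they sit in are
placeholders. The idea is simply to stop revisiting such ranges on every
pass:

	switch (result) {
	case SCAN_PMD_MAPPED:
	case SCAN_NO_PTE_TABLE:
		/* Nothing left to collapse here; count it as progress
		 * and move on instead of retrying on every full scan.
		 */
		progress += HPAGE_PMD_NR;
		continue;
	default:
		break;
	}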

For the second case, if the user has explicitly informed us via MADV_FREE
that these folios will be freed, just skip those folios.
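
For reference, such a lazy-free folio can be recognized roughly as below.
The helper name follows patch 4/6, but the body is only an assumption
based on how MADV_FREE folios are identified elsewhere in mm (anonymous
and no longer swap-backed), so read it as a sketch rather than the actual
patch:

	/* Sketch: MADV_FREE clears the swap-backed state of anonymous
	 * folios, so anon + !swapbacked identifies a lazy-free folio.
	 */
	static inline bool folio_is_lazyfree(struct folio *folio)
	{
		return folio_test_anon(folio) && !folio_test_swapbacked(folio);
	}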

Below are some performance test results.

kernbench results (testing on x86_64 machine):

                     baseline w/o patches   test w/ patches
Amean     user-32    18586.99 (   0.00%)    18562.36 *   0.13%*
Amean     syst-32     1133.61 (   0.00%)     1126.02 *   0.67%*
Amean     elsp-32      668.05 (   0.00%)      667.13 *   0.14%*
BAmean-95 user-32    18585.23 (   0.00%)    18559.71 (   0.14%)
BAmean-95 syst-32     1133.22 (   0.00%)     1125.49 (   0.68%)
BAmean-95 elsp-32      667.94 (   0.00%)      667.08 (   0.13%)
BAmean-99 user-32    18585.23 (   0.00%)    18559.71 (   0.14%)
BAmean-99 syst-32     1133.22 (   0.00%)     1125.49 (   0.68%)
BAmean-99 elsp-32      667.94 (   0.00%)      667.08 (   0.13%)

Create three tasks[2]: hot1 -> cold -> hot2. After all three tasks are
created, each allocates 128 MB of memory. The hot1/hot2 tasks continuously
access their 128 MB of memory, while the cold task only accesses its
memory briefly and then calls madvise(MADV_FREE). Here are the performance
test results:
(Throughput: bigger is better; all other metrics: smaller is better)

Testing on x86_64 machine:

| task hot2           | without patch | with patch    |  delta  |
|---------------------|---------------|---------------|---------|
| total accesses time |  3.14 sec     |  2.93 sec     | -6.69%  |
| cycles per access   |  4.96         |  2.21         | -55.44% |
| Throughput          |  104.38 M/sec |  111.89 M/sec | +7.19%  |
| dTLB-load-misses    |  284814532    |  69597236     | -75.56% |

Testing on qemu-system-x86_64 -enable-kvm:

| task hot2           | without patch | with patch    |  delta  |
|---------------------|---------------|---------------|---------|
| total accesses time |  3.35 sec     |  2.96 sec     | -11.64% |
| cycles per access   |  7.29         |  2.07         | -71.60% |
| Throughput          |  97.67 M/sec  |  110.77 M/sec | +13.41% |
| dTLB-load-misses    |  241600871    |  3216108      | -98.67% |
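
For readers who do not want to open [2], the cold task above behaves
roughly like the minimal standalone sketch below; the 128 MB size and the
MADV_FREE call come from the description above, while the allocation
method and lifetime handling are only illustrative:

	#include <string.h>
	#include <unistd.h>
	#include <sys/mman.h>

	#define BUF_SIZE	(128UL << 20)	/* 128 MB, as in the test */

	int main(void)
	{
		char *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (buf == MAP_FAILED)
			return 1;

		/* Touch the memory briefly so it is actually populated... */
		memset(buf, 1, BUF_SIZE);

		/* ...then mark it lazy-free so reclaim (and, with this
		 * series, khugepaged) can ignore it.
		 */
		madvise(buf, BUF_SIZE, MADV_FREE);

		/* Stay alive so the mm remains on the khugepaged list. */
		pause();
		return 0;
	}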

This series is based on Linux v6.19-rc3.

Thank you very much for your comments and discussions :)


V2 -> V3:
- Refine the scan progress number; add the folio_is_lazyfree helper.
- Fix warnings at SCAN_PTE_MAPPED_HUGEPAGE.
- For MADV_FREE, skip the lazy-free folios instead.
- Drop the MADV_COLD handling.
- Use hpage_collapse_test_exit_or_disable() instead of vma = NULL.
- Pick up Reviewed-by tags.

V1 -> V2:
- Rename full to full_scan_finished; pick up Acked-by.
- Just skip SCAN_PMD_MAPPED/NO_PTE_TABLE memory instead of removing the mm.
- Set the VM_NOHUGEPAGE flag on MADV_COLD/MADV_FREE to just skip, instead
  of moving the mm.
- Re-test performance on v6.19-rc2.

V1 : https://lore.kernel.org/linux-mm/20251215090419.174418-1-yanglincheng@kylinos.cn
V2 : https://lore.kernel.org/linux-mm/20251229055151.54887-1-yanglincheng@kylinos.cn

[1] https://github.com/vernon2gh/app_and_module/blob/main/khugepaged/khugepaged_mm.bt
[2] https://github.com/vernon2gh/app_and_module/blob/main/khugepaged/app.c

Vernon Yang (6):
  mm: khugepaged: add trace_mm_khugepaged_scan event
  mm: khugepaged: refine scan progress number
  mm: khugepaged: just skip when the memory has been collapsed
  mm: add folio_is_lazyfree helper
  mm: khugepaged: skip lazy-free folios at scanning
  mm: khugepaged: set to next mm direct when mm has
    MMF_DISABLE_THP_COMPLETELY

 include/linux/mm_inline.h          |  5 ++++
 include/trace/events/huge_memory.h | 25 ++++++++++++++++
 mm/khugepaged.c                    | 47 +++++++++++++++++++++++-------
 mm/rmap.c                          |  4 +--
 mm/vmscan.c                        |  5 ++--
 5 files changed, 71 insertions(+), 15 deletions(-)

--
2.51.0



Thread overview: 24+ messages
2026-01-04  5:41 Vernon Yang [this message]
2026-01-04  5:41 ` [PATCH v3 1/6] mm: khugepaged: add trace_mm_khugepaged_scan event Vernon Yang
2026-01-04  5:41 ` [PATCH v3 2/6] mm: khugepaged: refine scan progress number Vernon Yang
2026-01-05 16:49   ` David Hildenbrand (Red Hat)
2026-01-06  5:55     ` Vernon Yang
2026-01-04  5:41 ` [PATCH v3 3/6] mm: khugepaged: just skip when the memory has been collapsed Vernon Yang
2026-01-04  5:41 ` [PATCH v3 4/6] mm: add folio_is_lazyfree helper Vernon Yang
2026-01-04 11:42   ` Lance Yang
2026-01-05  2:09     ` Vernon Yang
2026-01-04  5:41 ` [PATCH v3 5/6] mm: khugepaged: skip lazy-free folios at scanning Vernon Yang
2026-01-04 12:10   ` Lance Yang
2026-01-05  1:48     ` Vernon Yang
2026-01-05  2:51       ` Lance Yang
2026-01-05  3:12         ` Vernon Yang
2026-01-05  3:35           ` Lance Yang
2026-01-05 12:30             ` Vernon Yang
2026-01-06 10:33               ` Barry Song
2026-01-07  8:36                 ` Vernon Yang
2026-01-04  5:41 ` [PATCH v3 6/6] mm: khugepaged: set to next mm direct when mm has MMF_DISABLE_THP_COMPLETELY Vernon Yang
2026-01-04 12:20   ` Lance Yang
2026-01-05  0:31     ` Wei Yang
2026-01-05  2:09       ` Lance Yang
2026-01-05  2:06     ` Vernon Yang
2026-01-05  2:20       ` Lance Yang
