From: Vernon Yang <vernon2gm@gmail.com>
To: "David Hildenbrand (Red Hat)" <david@kernel.org>
Cc: akpm@linux-foundation.org, lorenzo.stoakes@oracle.com,
ziy@nvidia.com, dev.jain@arm.com, baohua@kernel.org,
lance.yang@linux.dev, richard.weiyang@gmail.com,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Vernon Yang <yanglincheng@kylinos.cn>
Subject: Re: [PATCH v3 2/6] mm: khugepaged: refine scan progress number
Date: Tue, 6 Jan 2026 13:55:44 +0800
Message-ID: <zc4yspu5rq4he64std2f6l7mnyua36tudnwj7fgqmk4aioevzw@ungu66emhlgf>
In-Reply-To: <69c9ac59-4fb2-42fc-a8ae-32f583e47de4@kernel.org>
On Mon, Jan 05, 2026 at 05:49:22PM +0100, David Hildenbrand (Red Hat) wrote:
> On 1/4/26 06:41, Vernon Yang wrote:
> > Currently, each PMD scan always increases `progress` by HPAGE_PMD_NR,
> > even if only scanning a single page. By counting the actual number of
>
> "... a single pmd" ?
>
> > pages scanned, the `progress` is tracked accurately.
>
> "page table entries / pages scanned" ?
Here "a single page" means a single PTE-mapped 4KB page. This patch does not
change the original semantics of "progress"; it simply replaces the fixed
HPAGE_PMD_NR with the exact number of PTEs actually counted during the scan.
Let me provide a detailed example:
static int hpage_collapse_scan_pmd()
{
	for (addr = start_addr, _pte = pte; _pte < pte + HPAGE_PMD_NR;
	     _pte++, addr += PAGE_SIZE) {
		_progress++;
		pte_t pteval = ptep_get(_pte);
		...
		if (pte_uffd_wp(pteval)) {	/* <-- first scan hits here */
			result = SCAN_PTE_UFFD_WP;
			goto out_unmap;
		}
	}
}
If pte_uffd_wp(pteval) is true on the very first iteration, the loop exits
immediately, so in practice only one PTE has been scanned before termination.
With this patch, "progress += 1" reflects the actual number of PTEs scanned,
whereas previously it was always "progress += HPAGE_PMD_NR".
As discussed previously, simply skipping SCAN_PMD_MAPPED or SCAN_NO_PTE_TABLE
is handled in patch #3, not in this patch #2.
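To make the difference concrete, here is a toy user-space sketch (not kernel
code; toy_scan and bad_index are made-up names for illustration) of how the
reported progress changes when the loop bails out on the first entry:

#include <stdio.h>

#define HPAGE_PMD_NR 512	/* 4K base pages per 2M PMD on x86-64 */

/*
 * Toy model of the scan loop: it stops as soon as it hits a "bad" entry
 * and reports how many entries it actually looked at via *cur_progress.
 */
static int toy_scan(int bad_index, int *cur_progress)
{
	for (int i = 0; i < HPAGE_PMD_NR; i++) {
		(*cur_progress)++;
		if (i == bad_index)
			return -1;	/* e.g. SCAN_PTE_UFFD_WP */
	}
	return 0;
}

int main(void)
{
	int cur_progress = 0;

	toy_scan(0, &cur_progress);	/* bails out on the very first entry */
	printf("old accounting: progress += %d\n", HPAGE_PMD_NR);
	printf("new accounting: progress += %d\n", cur_progress);	/* 1 */
	return 0;
}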
> >
> > Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
> > ---
> > mm/khugepaged.c | 31 +++++++++++++++++++++++--------
> > 1 file changed, 23 insertions(+), 8 deletions(-)
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index 9f99f61689f8..4b124e854e2e 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -1247,7 +1247,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> > static int hpage_collapse_scan_pmd(struct mm_struct *mm,
> > struct vm_area_struct *vma,
> > unsigned long start_addr, bool *mmap_locked,
> > - struct collapse_control *cc)
> > + int *progress, struct collapse_control *cc)
> > {
> > pmd_t *pmd;
> > pte_t *pte, *_pte;
> > @@ -1258,23 +1258,28 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
> > unsigned long addr;
> > spinlock_t *ptl;
> > int node = NUMA_NO_NODE, unmapped = 0;
> > + int _progress = 0;
>
> "cur_progress" ?
Yes.
> > VM_BUG_ON(start_addr & ~HPAGE_PMD_MASK);
> > result = find_pmd_or_thp_or_none(mm, start_addr, &pmd);
> > - if (result != SCAN_SUCCEED)
> > + if (result != SCAN_SUCCEED) {
> > + _progress = HPAGE_PMD_NR;
> > goto out;
> > + }
> > memset(cc->node_load, 0, sizeof(cc->node_load));
> > nodes_clear(cc->alloc_nmask);
> > pte = pte_offset_map_lock(mm, pmd, start_addr, &ptl);
> > if (!pte) {
> > + _progress = HPAGE_PMD_NR;
> > result = SCAN_NO_PTE_TABLE;
> > goto out;
> > }
> > for (addr = start_addr, _pte = pte; _pte < pte + HPAGE_PMD_NR;
> > _pte++, addr += PAGE_SIZE) {
> > + _progress++;
> > pte_t pteval = ptep_get(_pte);
> > if (pte_none_or_zero(pteval)) {
> > ++none_or_zero;
> > @@ -1410,6 +1415,9 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
> > *mmap_locked = false;
> > }
> > out:
> > + if (progress)
> > + *progress += _progress;
> > +
> > trace_mm_khugepaged_scan_pmd(mm, folio, referenced,
> > none_or_zero, result, unmapped);
> > return result;
> > @@ -2287,7 +2295,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
> > static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
> > struct file *file, pgoff_t start,
> > - struct collapse_control *cc)
> > + int *progress, struct collapse_control *cc)
> > {
> > struct folio *folio = NULL;
> > struct address_space *mapping = file->f_mapping;
> > @@ -2295,6 +2303,7 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
> > int present, swap;
> > int node = NUMA_NO_NODE;
> > int result = SCAN_SUCCEED;
> > + int _progress = 0;
>
> Same here.
>
>
> Not sure if it would be cleaner to just let the parent increment its counter
> and returning instead the "cur_progress" from the function.
Both approaches are fine with me. I have implemented one version as follows;
please see whether it looks cleaner to you.
--
Thanks,
Vernon
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 9f99f61689f8..4cf24553c2bd 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1247,6 +1247,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
static int hpage_collapse_scan_pmd(struct mm_struct *mm,
struct vm_area_struct *vma,
unsigned long start_addr, bool *mmap_locked,
+ int *cur_progress,
struct collapse_control *cc)
{
pmd_t *pmd;
@@ -1262,19 +1263,27 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
VM_BUG_ON(start_addr & ~HPAGE_PMD_MASK);
result = find_pmd_or_thp_or_none(mm, start_addr, &pmd);
- if (result != SCAN_SUCCEED)
+ if (result != SCAN_SUCCEED) {
+ if (cur_progress)
+ *cur_progress = HPAGE_PMD_NR;
goto out;
+ }
memset(cc->node_load, 0, sizeof(cc->node_load));
nodes_clear(cc->alloc_nmask);
pte = pte_offset_map_lock(mm, pmd, start_addr, &ptl);
if (!pte) {
+ if (cur_progress)
+ *cur_progress = HPAGE_PMD_NR;
result = SCAN_NO_PTE_TABLE;
goto out;
}
for (addr = start_addr, _pte = pte; _pte < pte + HPAGE_PMD_NR;
_pte++, addr += PAGE_SIZE) {
+ if (cur_progress)
+ *cur_progress += 1;
+
pte_t pteval = ptep_get(_pte);
if (pte_none_or_zero(pteval)) {
++none_or_zero;
@@ -2287,6 +2296,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
struct file *file, pgoff_t start,
+ int *cur_progress,
struct collapse_control *cc)
{
struct folio *folio = NULL;
@@ -2327,6 +2337,9 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
continue;
}
+ if (cur_progress)
+ *cur_progress += folio_nr_pages(folio);
+
if (folio_order(folio) == HPAGE_PMD_ORDER &&
folio->index == start) {
/* Maybe PMD-mapped */
@@ -2454,6 +2467,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
while (khugepaged_scan.address < hend) {
bool mmap_locked = true;
+ int cur_progress = 0;
cond_resched();
if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
@@ -2470,7 +2484,8 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
mmap_read_unlock(mm);
mmap_locked = false;
*result = hpage_collapse_scan_file(mm,
- khugepaged_scan.address, file, pgoff, cc);
+ khugepaged_scan.address, file, pgoff,
+ &cur_progress, cc);
fput(file);
if (*result == SCAN_PTE_MAPPED_HUGEPAGE) {
mmap_read_lock(mm);
@@ -2484,7 +2499,8 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
}
} else {
*result = hpage_collapse_scan_pmd(mm, vma,
- khugepaged_scan.address, &mmap_locked, cc);
+ khugepaged_scan.address, &mmap_locked,
+ &cur_progress, cc);
}
if (*result == SCAN_SUCCEED)
@@ -2492,7 +2508,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
/* move to next address */
khugepaged_scan.address += HPAGE_PMD_SIZE;
- progress += HPAGE_PMD_NR;
+ progress += cur_progress;
if (!mmap_locked)
/*
* We released mmap_lock so break loop. Note
@@ -2810,11 +2826,11 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
mmap_read_unlock(mm);
mmap_locked = false;
result = hpage_collapse_scan_file(mm, addr, file, pgoff,
- cc);
+ NULL, cc);
fput(file);
} else {
result = hpage_collapse_scan_pmd(mm, vma, addr,
- &mmap_locked, cc);
+ &mmap_locked, NULL, cc);
}
if (!mmap_locked)
*lock_dropped = true;
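
For comparison, the return-value shape you mentioned could look roughly like
the toy sketch below (again user-space only, not the patch above; the scan
result would then have to come back through an out-parameter):

#include <stdio.h>

#define HPAGE_PMD_NR 512

/*
 * Toy sketch of the return-value shape: the scan function returns how many
 * entries it looked at and the caller accumulates; the scan result is
 * passed back through *result instead of being the return value.
 */
static int toy_scan_pmd(int bad_index, int *result)
{
	int scanned = 0;

	for (int i = 0; i < HPAGE_PMD_NR; i++) {
		scanned++;
		if (i == bad_index) {
			*result = -1;	/* e.g. SCAN_PTE_UFFD_WP */
			return scanned;
		}
	}
	*result = 0;	/* SCAN_SUCCEED */
	return scanned;
}

int main(void)
{
	int result, progress = 0;

	progress += toy_scan_pmd(0, &result);	/* caller accumulates */
	printf("progress = %d, result = %d\n", progress, result);
	return 0;
}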