From: Nico Pache <npache@redhat.com>
To: "David Hildenbrand (Arm)" <david@kernel.org>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
aarcange@redhat.com, akpm@linux-foundation.org,
anshuman.khandual@arm.com, apopple@nvidia.com,
baohua@kernel.org, baolin.wang@linux.alibaba.com,
byungchul@sk.com, catalin.marinas@arm.com, cl@gentwo.org,
corbet@lwn.net, dave.hansen@linux.intel.com, dev.jain@arm.com,
gourry@gourry.net, hannes@cmpxchg.org, hughd@google.com,
jackmanb@google.com, jack@suse.cz, jannh@google.com,
jglisse@google.com, joshua.hahnjy@gmail.com, kas@kernel.org,
lance.yang@linux.dev, Liam.Howlett@oracle.com,
lorenzo.stoakes@oracle.com, mathieu.desnoyers@efficios.com,
matthew.brost@intel.com, mhiramat@kernel.org, mhocko@suse.com,
peterx@redhat.com, pfalcato@suse.de, rakie.kim@sk.com,
raquini@redhat.com, rdunlap@infradead.org,
richard.weiyang@gmail.com, rientjes@google.com,
rostedt@goodmis.org, rppt@kernel.org, ryan.roberts@arm.com,
shivankg@amd.com, sunnanyong@huawei.com, surenb@google.com,
thomas.hellstrom@linux.intel.com, tiwai@suse.de,
usamaarif642@gmail.com, vbabka@suse.cz, vishal.moola@gmail.com,
wangkefeng.wang@huawei.com, will@kernel.org,
willy@infradead.org, yang@os.amperecomputing.com,
ying.huang@linux.alibaba.com, ziy@nvidia.com,
zokeefe@google.com
Subject: Re: [PATCH mm-unstable v2 5/5] mm/khugepaged: unify khugepaged and madv_collapse with collapse_single_pmd()
Date: Thu, 26 Feb 2026 13:27:45 -0700 [thread overview]
Message-ID: <CAA1CXcDyt1gMWetfqrqGQOpB2B7n=+H_WYbY+pJyuGoDFf4u+A@mail.gmail.com> (raw)
In-Reply-To: <81ff9caa-50f2-4951-8d82-2c8dcdf3db91@kernel.org>
On Thu, Feb 26, 2026 at 2:41 AM David Hildenbrand (Arm)
<david@kernel.org> wrote:
>
> On 2/26/26 02:29, Nico Pache wrote:
> > The khugepaged daemon and madvise_collapse have two different
> > implementations that do almost the same thing.
> >
> > Create collapse_single_pmd to increase code reuse and create an entry
> > point to these two users.
> >
> > Refactor madvise_collapse and collapse_scan_mm_slot to use the new
> > collapse_single_pmd function. This introduces a minor behavioral change
> > that fixes what is most likely an undiscovered bug: the current
> > implementation of khugepaged tests collapse_test_exit_or_disable before
> > calling collapse_pte_mapped_thp, but madvise_collapse was not doing so.
> > By unifying the two callers, madvise_collapse now also performs this
> > check. We also modify the return value to SCAN_ANY_PROCESS, which
> > properly indicates that the process is no longer valid to operate on.
> >
> > We also guard the khugepaged_pages_collapsed variable to ensure it is
> > only incremented for khugepaged.
> >
> > Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>
> Probably best to drop Lorenzo's RB after bigger changes.
>
> > Signed-off-by: Nico Pache <npache@redhat.com>
> > ---
> > mm/khugepaged.c | 128 ++++++++++++++++++++++++++----------------------
> > 1 file changed, 69 insertions(+), 59 deletions(-)
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index 64086488ca01..0058970d4579 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -2417,6 +2417,70 @@ static enum scan_result collapse_scan_file(struct mm_struct *mm, unsigned long a
> > return result;
> > }
> >
> > +/*
> > + * Try to collapse a single PMD starting at a PMD aligned addr, and return
> > + * the results.
> > + */
> > +static enum scan_result collapse_single_pmd(unsigned long addr,
> > + struct vm_area_struct *vma, bool *mmap_locked,
> > + unsigned int *cur_progress, struct collapse_control *cc)
> > +{
> > + struct mm_struct *mm = vma->vm_mm;
> > + bool triggered_wb = false;
> > + enum scan_result result;
> > + struct file *file;
> > + pgoff_t pgoff;
> > +
> > + if (vma_is_anonymous(vma)) {
> > + result = collapse_scan_pmd(mm, vma, addr, mmap_locked, cur_progress, cc);
> > + goto end;
> > + }
> > +
> > + file = get_file(vma->vm_file);
> > + pgoff = linear_page_index(vma, addr);
> > +
> > + mmap_read_unlock(mm);
> > + *mmap_locked = false;
> > +retry:
> > + result = collapse_scan_file(mm, addr, file, pgoff, cur_progress, cc);
> > +
> > + /*
> > + * For MADV_COLLAPSE, when encountering dirty pages, try to writeback,
> > + * then retry the collapse one time.
> > + */
> > + if (!cc->is_khugepaged && result == SCAN_PAGE_DIRTY_OR_WRITEBACK &&
> > + triggered_wb && mapping_can_writeback(file->f_mapping)) {
>
> !triggered_wb, right?
>
>
> > + const loff_t lstart = (loff_t)pgoff << PAGE_SHIFT;
> > + const loff_t lend = lstart + HPAGE_PMD_SIZE - 1;
> > +
> > + filemap_write_and_wait_range(file->f_mapping, lstart, lend);
> > + triggered_wb = true;
> > + goto retry;
> > + }
> > + fput(file);
> > +
> > + if (result != SCAN_PTE_MAPPED_HUGEPAGE)
> > + goto end;
> > +
> > + mmap_read_lock(mm);
> > + *mmap_locked = true;
>
> On all paths below, you set "*mmap_locked = false". Why even bother about setting the variable?
Yeah, I believe someone (Lorenzo?) pointed that out during the last
review cycle, and I forgot to look into it :<
As you state, I believe we can drop the repetitive mmap_locked updates
(IIRC they were introduced in an earlier version, before
`lock_dropped`) and move the handling into collapse_single_pmd().
>
> > + if (collapse_test_exit_or_disable(mm)) {
> > + mmap_read_unlock(mm);
> > + *mmap_locked = false;
> > + return SCAN_ANY_PROCESS;
> > + }
> > + result = try_collapse_pte_mapped_thp(mm, addr, !cc->is_khugepaged);
> > + if (result == SCAN_PMD_MAPPED)
> > + result = SCAN_SUCCEED;
> > + mmap_read_unlock(mm);
> > + *mmap_locked = false;
>
> This might all read nicer without the goto and without the early return.
I'll see what I can do!
>
> 	/* If we have a THP in the pagecache, try to retract the pagetable. */
> 	if (result == SCAN_PTE_MAPPED_HUGEPAGE) {
> 		mmap_read_lock(mm);
> 		if (collapse_test_exit_or_disable(mm))
> 			result = SCAN_ANY_PROCESS;
> 		else
> 			result = try_collapse_pte_mapped_thp(mm, addr, !cc->is_khugepaged);
> 		if (result == SCAN_PMD_MAPPED)
> 			result = SCAN_SUCCEED;
> 		mmap_read_unlock(mm);
> 	}
Oh thanks! I'll try this
>
> > +
> > +end:
> > + if (cc->is_khugepaged && result == SCAN_SUCCEED)
> > + ++khugepaged_pages_collapsed;
> > + return result;
> > +}
> > +
> > static unsigned int collapse_scan_mm_slot(unsigned int pages, enum scan_result *result,
> > struct collapse_control *cc)
> > __releases(&khugepaged_mm_lock)
> > @@ -2489,36 +2553,9 @@ static unsigned int collapse_scan_mm_slot(unsigned int pages, enum scan_result *
> > VM_BUG_ON(khugepaged_scan.address < hstart ||
> > khugepaged_scan.address + HPAGE_PMD_SIZE >
> > hend);
> > - if (!vma_is_anonymous(vma)) {
> > - struct file *file = get_file(vma->vm_file);
> > - pgoff_t pgoff = linear_page_index(vma,
> > - khugepaged_scan.address);
> > -
> > - mmap_read_unlock(mm);
> > - mmap_locked = false;
> > - *result = collapse_scan_file(mm,
> > - khugepaged_scan.address, file, pgoff,
> > - &cur_progress, cc);
> > - fput(file);
> > - if (*result == SCAN_PTE_MAPPED_HUGEPAGE) {
> > - mmap_read_lock(mm);
> > - if (collapse_test_exit_or_disable(mm))
> > - goto breakouterloop;
> > - *result = try_collapse_pte_mapped_thp(mm,
> > - khugepaged_scan.address, false);
> > - if (*result == SCAN_PMD_MAPPED)
> > - *result = SCAN_SUCCEED;
> > - mmap_read_unlock(mm);
> > - }
> > - } else {
> > - *result = collapse_scan_pmd(mm, vma,
> > - khugepaged_scan.address, &mmap_locked,
> > - &cur_progress, cc);
> > - }
> > -
> > - if (*result == SCAN_SUCCEED)
> > - ++khugepaged_pages_collapsed;
> >
> > + *result = collapse_single_pmd(khugepaged_scan.address,
> > + vma, &mmap_locked, &cur_progress, cc);
> > /* move to next address */
> > khugepaged_scan.address += HPAGE_PMD_SIZE;
> > progress += cur_progress;
> > @@ -2819,13 +2856,12 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
> >
> > for (addr = hstart; addr < hend; addr += HPAGE_PMD_SIZE) {
> > enum scan_result result = SCAN_FAIL;
> > - bool triggered_wb = false;
> >
> > -retry:
> > if (!mmap_locked) {
> > cond_resched();
> > mmap_read_lock(mm);
> > mmap_locked = true;
> > + *lock_dropped = true;
>
> Hm, is this change here required at all? Shouldn't we instead need to know from
> collapse_single_pmd() whether it dropped the lock?
I'll verify all of this locking and post a fixup! The 'lock_dropped'
feature was introduced mid-series, and I think it makes mmap_locked
redundant. I verified this once before but have forgotten most of the
details ATM.
Cheers,
-- Nico
>
>
> --
> Cheers,
>
> David
>
Thread overview: 15+ messages
2026-02-26 1:29 [PATCH mm-unstable v2 0/5] mm: khugepaged cleanups and mTHP prerequisites Nico Pache
2026-02-26 1:29 ` [PATCH mm-unstable v2 1/5] mm: consolidate anonymous folio PTE mapping into helpers Nico Pache
2026-02-26 9:27 ` David Hildenbrand (Arm)
2026-02-26 1:29 ` [PATCH mm-unstable v2 2/5] mm: introduce is_pmd_order helper Nico Pache
2026-02-26 8:55 ` Baolin Wang
2026-02-26 1:29 ` [PATCH mm-unstable v2 3/5] mm/khugepaged: define COLLAPSE_MAX_PTES_LIMIT as HPAGE_PMD_NR - 1 Nico Pache
2026-02-26 8:56 ` Baolin Wang
2026-02-26 9:28 ` David Hildenbrand (Arm)
2026-02-26 20:17 ` Nico Pache
2026-02-26 1:29 ` [PATCH mm-unstable v2 4/5] mm/khugepaged: rename hpage_collapse_* to collapse_* Nico Pache
2026-02-26 1:29 ` [PATCH mm-unstable v2 5/5] mm/khugepaged: unify khugepaged and madv_collapse with collapse_single_pmd() Nico Pache
2026-02-26 9:23 ` Baolin Wang
2026-02-26 20:20 ` Nico Pache
2026-02-26 9:40 ` David Hildenbrand (Arm)
2026-02-26 20:27 ` Nico Pache [this message]