From: Chris Mason <clm@meta.com>
To: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Chris Mason <clm@meta.com>,
Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@kernel.org>,
"Liam R . Howlett" <Liam.Howlett@oracle.com>,
Vlastimil Babka <vbabka@suse.cz>, Mike Rapoport <rppt@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>, Jann Horn <jannh@google.com>,
<linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v2 2/2] mm/madvise: allow guard page install/remove under VMA lock
Date: Tue, 13 Jan 2026 14:08:51 -0800
Message-ID: <20260113220856.2358195-1-clm@meta.com>
In-Reply-To: <cca1edbd99cd1386ad20556d08ebdb356c45ef91.1762795245.git.lorenzo.stoakes@oracle.com>
On Mon, 10 Nov 2025 17:22:58 +0000 Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote:
> We only need to keep the page table stable so we can perform this operation
> under the VMA lock. PTE installation is stabilised via the PTE lock.
>
> One caveat is that, if we prepare vma->anon_vma we must hold the mmap read
> lock. We can account for this by adapting the VMA locking logic to
> explicitly check for this case and prevent a VMA lock from being acquired
> should it be the case.
>
> This check is safe, as while we might be raced on anon_vma installation,
> this would simply make the check conservative, there's no way for us to see
> an anon_vma and then for it to be cleared, as doing so requires the
> mmap/VMA write lock.
>
> We abstract the VMA lock validity logic to is_vma_lock_sufficient() for
> this purpose, and add prepares_anon_vma() to abstract the anon_vma logic.
>
> In order to do this we need to have a way of installing page tables
> explicitly for an identified VMA, so we export walk_page_range_vma() in an
> unsafe variant - walk_page_range_vma_unsafe() and use this should the VMA
> read lock be taken.
>
> We additionally update the comments in madvise_guard_install() to more
> accurately reflect the cases in which the logic may be reattempted,
> specifically THP huge pages being present.
>
Hi Lorenzo, one more flagged by AI:
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 7ed5bedb8f8ef..2a165e9beb5bb 100644
[ ... ]
> @@ -1173,8 +1180,13 @@ static long madvise_guard_install(struct madvise_behavior *madv_behavior)
> unsigned long nr_pages = 0;
>
> /* Returns < 0 on error, == 0 if success, > 0 if zap needed. */
> - err = walk_page_range_mm_unsafe(vma->vm_mm, range->start,
> - range->end, &guard_install_walk_ops, &nr_pages);
> + if (madv_behavior->lock_mode == MADVISE_VMA_READ_LOCK)
> + err = walk_page_range_vma_unsafe(madv_behavior->vma,
> + range->start, range->end, &walk_ops,
> + &nr_pages);
> + else
> + err = walk_page_range_mm_unsafe(vma->vm_mm, range->start,
> + range->end, &walk_ops, &nr_pages);
> if (err < 0)
> return err;
>
> @@ -1195,8 +1207,7 @@ static long madvise_guard_install(struct madvise_behavior *madv_behavior)
> }
When err > 0 (zap needed), zap_page_range_single() is called below
regardless of lock_mode. If lock_mode == MADVISE_VMA_READ_LOCK (so only the
VMA lock is held) and the zap range contains a PUD-mapped THP that has to be
split, the call chain:
madvise_guard_install()
  -> zap_page_range_single()
    -> zap_page_range_single_batched()
      -> unmap_single_vma()
        -> unmap_page_range()
          -> zap_p4d_range()
            -> zap_pud_range()
will hit the mmap_assert_locked() in zap_pud_range() (mm/memory.c) that is
reached when pud_trans_huge() is true and a partial PUD split is required.
The assertion triggers a WARN_ON because only the VMA lock is held there,
not the mmap lock.
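For reference, the check in question looks roughly like this (paraphrased
from zap_pud_range() in mm/memory.c; the exact condition varies a bit
between trees):

	/* zap_pud_range(), paraphrased: */
	if (pud_trans_huge(*pud)) {
		if (next - addr != HPAGE_PUD_SIZE) {
			/* partial PUD zap: splitting needs the mmap lock */
			mmap_assert_locked(tlb->mm);
			split_huge_pud(vma, pud, addr);
		} else if (zap_huge_pud(tlb, vma, pud, addr))
			goto next;
		/* fall through */
	}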
Is there an additional check needed before calling zap_page_range_single()
when in VMA lock mode, or should this path fall back to the mmap lock when a
THP mapping is possible?
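Something like the below is one way to express the first option. This is a
completely untested sketch against the quoted diff, and the -EAGAIN is only
a placeholder for whatever "retry under the mmap lock" convention the series
uses:

	/*
	 * Untested sketch: refuse to zap while only the VMA read lock is
	 * held and have the caller redo the operation under the mmap read
	 * lock instead.
	 */
	if (err > 0 && madv_behavior->lock_mode == MADVISE_VMA_READ_LOCK)
		return -EAGAIN;	/* placeholder for a retry indication */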
> @@ -1715,9 +1763,9 @@ static enum madvise_lock_mode get_lock_mode(struct madvise_behavior *madv_behavi
> case MADV_POPULATE_READ:
> case MADV_POPULATE_WRITE:
> case MADV_COLLAPSE:
> + return MADVISE_MMAP_READ_LOCK;
> case MADV_GUARD_INSTALL:
> case MADV_GUARD_REMOVE:
> - return MADVISE_MMAP_READ_LOCK;
> case MADV_DONTNEED:
> case MADV_DONTNEED_LOCKED:
> case MADV_FREE:
This change moves MADV_GUARD_INSTALL (together with MADV_GUARD_REMOVE) down
into the MADVISE_VMA_READ_LOCK group, but zap_page_range_single(), called
later from madvise_guard_install(), may still require the mmap lock for THP
PUD splitting, as noted above.
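If the zap path can't easily be made safe under the VMA read lock, the
minimal alternative (untested, and assuming guard remove really never needs
to zap) would be to keep only MADV_GUARD_INSTALL on the mmap read lock:

	case MADV_POPULATE_READ:
	case MADV_POPULATE_WRITE:
	case MADV_COLLAPSE:
	case MADV_GUARD_INSTALL:
		return MADVISE_MMAP_READ_LOCK;
	case MADV_GUARD_REMOVE:
	case MADV_DONTNEED:
	case MADV_DONTNEED_LOCKED:
	case MADV_FREE:
	/* rest of the switch unchanged */

That would keep the existing behaviour for install while still getting the
VMA-lock benefit for guard removal.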