linux-mm.kvack.org archive mirror
* [PATCH] mm/rmap: fix incorrect pte restoration for lazyfree folios
@ 2026-02-24 11:09 Dev Jain
  2026-02-24 11:31 ` Lorenzo Stoakes
  0 siblings, 1 reply; 13+ messages in thread
From: Dev Jain @ 2026-02-24 11:09 UTC (permalink / raw)
  To: akpm, david, lorenzo.stoakes
  Cc: riel, Liam.Howlett, vbabka, harry.yoo, jannh, baohua, linux-mm,
	linux-kernel, Dev Jain, stable

We batch the unmapping of anonymous lazyfree folios via folio_unmap_pte_batch().
If the batch contains a mix of writable and non-writable ptes, we may end up
setting the entire batch writable, since pte restoration applies the
permissions of the first pte in the batch to all of them. Fix this by
write-protecting the ptes during pte restoration in the failure path.

I was able to write the reproducer below and crash the kernel.
Explanation of the reproducer (set 64K mTHP to always):

Fault in a 64K large folio. Split the VMA at the mid-point with MADV_DONTFORK.
fork() - the parent now points to the folio with 8 writable ptes and 8
non-writable ptes. Merge the VMAs with MADV_DOFORK so that
folio_unmap_pte_batch() can treat all 16 ptes as a single batch. Do MADV_FREE
on the range to mark the folio as lazyfree. Write to the memory to dirty a
pte; eventually rmap will dirty the folio. Then trigger reclaim: we hit the
pte restoration path, and the kernel crashes with the following trace:

[   21.134473] kernel BUG at mm/page_table_check.c:118!
[   21.134497] Internal error: Oops - BUG: 00000000f2000800 [#1]  SMP
[   21.135917] Modules linked in:
[   21.136085] CPU: 1 UID: 0 PID: 1735 Comm: dup-lazyfree Not tainted 7.0.0-rc1-00116-g018018a17770 #1028 PREEMPT
[   21.136858] Hardware name: linux,dummy-virt (DT)
[   21.137019] pstate: 21400005 (nzCv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--)
[   21.137308] pc : page_table_check_set+0x28c/0x2a8
[   21.137607] lr : page_table_check_set+0x134/0x2a8
[   21.137885] sp : ffff80008a3b3340
[   21.138124] x29: ffff80008a3b3340 x28: fffffdffc3d14400 x27: ffffd1a55e03d000
[   21.138623] x26: 0040000000000040 x25: ffffd1a55f7dd000 x24: 0000000000000001
[   21.139045] x23: 0000000000000001 x22: 0000000000000001 x21: ffffd1a55f217f30
[   21.139629] x20: 0000000000134521 x19: 0000000000134519 x18: 005c43e000040000
[   21.140027] x17: 0001400000000000 x16: 0001700000000000 x15: 000000000000ffff
[   21.140578] x14: 000000000000000c x13: 005c006000000000 x12: 0000000000000020
[   21.140828] x11: 0000000000000000 x10: 005c000000000000 x9 : ffffd1a55c079ee0
[   21.141077] x8 : 0000000000000001 x7 : 005c03e000040000 x6 : 000000004000ffff
[   21.141490] x5 : ffff00017fffce00 x4 : 0000000000000001 x3 : 0000000000000002
[   21.141741] x2 : 0000000000134510 x1 : 0000000000000000 x0 : ffff0000c08228c0
[   21.141991] Call trace:
[   21.142093]  page_table_check_set+0x28c/0x2a8 (P)
[   21.142265]  __page_table_check_ptes_set+0x144/0x1e8
[   21.142441]  __set_ptes_anysz.constprop.0+0x160/0x1a8
[   21.142766]  contpte_set_ptes+0xe8/0x140
[   21.142907]  try_to_unmap_one+0x10c4/0x10d0
[   21.143177]  rmap_walk_anon+0x100/0x250
[   21.143315]  try_to_unmap+0xa0/0xc8
[   21.143441]  shrink_folio_list+0x59c/0x18a8
[   21.143759]  shrink_lruvec+0x664/0xbf0
[   21.144043]  shrink_node+0x218/0x878
[   21.144285]  __node_reclaim.constprop.0+0x98/0x338
[   21.144763]  user_proactive_reclaim+0x2a4/0x340
[   21.145056]  reclaim_store+0x3c/0x60
[   21.145216]  dev_attr_store+0x20/0x40
[   21.145585]  sysfs_kf_write+0x84/0xa8
[   21.145835]  kernfs_fop_write_iter+0x130/0x1c8
[   21.145994]  vfs_write+0x2b8/0x368
[   21.146119]  ksys_write+0x70/0x110
[   21.146240]  __arm64_sys_write+0x24/0x38
[   21.146380]  invoke_syscall+0x50/0x120
[   21.146513]  el0_svc_common.constprop.0+0x48/0xf8
[   21.146679]  do_el0_svc+0x28/0x40
[   21.146798]  el0_svc+0x34/0x110
[   21.146926]  el0t_64_sync_handler+0xa0/0xe8
[   21.147074]  el0t_64_sync+0x198/0x1a0
[   21.147225] Code: f9400441 b4fff241 17ffff94 d4210000 (d4210000)
[   21.147440] ---[ end trace 0000000000000000 ]---


#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <string.h>
#include <sys/wait.h>
#include <sched.h>
#include <fcntl.h>

void write_to_reclaim() {
    const char *path = "/sys/devices/system/node/node0/reclaim";
    const char *value = "409600000000";
    int fd = open(path, O_WRONLY);
    if (fd == -1) {
        perror("open");
        exit(EXIT_FAILURE);
    }

    if (write(fd, value, sizeof("409600000000") - 1) == -1) {
        perror("write");
        close(fd);
        exit(EXIT_FAILURE);
    }

    printf("Successfully wrote %s to %s\n", value, path);
    close(fd);
}

int main()
{
	char *ptr = mmap((void *)(1UL << 30), 1UL << 16, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if ((unsigned long)ptr != (1UL << 30)) {
		perror("mmap");
		return 1;
	}
	
	/* a 64K folio gets faulted in */
	memset(ptr, 0, 1UL << 16);

	/* 32K half will not be shared into child */
	if (madvise(ptr, 1UL << 15, MADV_DONTFORK)) {
		perror("madvise madv dontfork");
		return 1;
	}

	pid_t pid = fork();

	if (pid < 0) {
		perror("fork");
		return 1;
	} else if (pid == 0) {
		sleep(15);
	} else {
		/* merge VMAs. now first half of the 16 ptes are writable, the other half not. */
		if (madvise(ptr, 1UL << 15, MADV_DOFORK)) {
			perror("madvise madv fork");
			return 1;
		}
		if (madvise(ptr, (1UL << 16), MADV_FREE)) {
			perror("madvise madv free");
			return 1;
		}

		/* dirty the large folio */
		(*ptr) += 10;

		write_to_reclaim();
		// sleep(10);
		waitpid(pid, NULL, 0);

	}
}

Fixes: 354dffd29575 ("mm: support batched unmap for lazyfree large folios during reclamation")
Cc: stable <stable@kernel.org>
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
Applies on mm-new (commit 018018a17770).

 mm/rmap.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/mm/rmap.c b/mm/rmap.c
index bff8f222004e4..501519844f290 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2235,6 +2235,16 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 				smp_rmb();
 
 				if (folio_test_dirty(folio) && !(vma->vm_flags & VM_DROPPABLE)) {
+					/*
+					 * The pte batch may have a mix of writable and non-writable
+					 * ptes. If the first pte of the batch was writable, we may
+					 * end up restoring the ptes incorrectly by setting the
+					 * entire batch writable. Avoid this by setting the batch
+					 * non-writable; this is not optimal, but improbable to
+					 * reach by virtue of being a failure path.
+					 */
+					pteval = pte_wrprotect(pteval);
+
 					/*
 					 * redirtied either using the page table or a previously
 					 * obtained GUP reference.
@@ -2243,6 +2253,9 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 					folio_set_swapbacked(folio);
 					goto walk_abort;
 				} else if (ref_count != 1 + map_count) {
+					/* See comment above */
+					pteval = pte_wrprotect(pteval);
+
 					/*
 					 * Additional reference. Could be a GUP reference or any
 					 * speculative reference. GUP users must mark the folio
-- 
2.34.1



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH] mm/rmap: fix incorrect pte restoration for lazyfree folios
  2026-02-24 11:09 [PATCH] mm/rmap: fix incorrect pte restoration for lazyfree folios Dev Jain
@ 2026-02-24 11:31 ` Lorenzo Stoakes
  2026-02-24 11:43   ` Lorenzo Stoakes
  0 siblings, 1 reply; 13+ messages in thread
From: Lorenzo Stoakes @ 2026-02-24 11:31 UTC (permalink / raw)
  To: Dev Jain
  Cc: akpm, david, riel, Liam.Howlett, vbabka, harry.yoo, jannh,
	baohua, linux-mm, linux-kernel, stable

Thanks Dev.

Andrew - why was commit 354dffd29575 ("mm: support batched unmap for lazyfree
large folios during reclamation") merged?

It had enormous amounts of review commentary at
https://lore.kernel.org/all/146b4cb1-aa1e-4519-9e03-f98cfb1135d2@redhat.com/ and
no tags; that should be a signal to wait for a respin _at least_, and landing
late in the cycle really suggests it should wait a cycle.

I've said that going forward I'm going to check THP series for tags and NAK
them if they hit mm-stable without any; I guess I'll extend that to rmap also.

It'd be easier for all concerned if we could yank stuff earlier
though. Waiting for the next cycle isn't a bad thing and avoids this kind
of bug.

Dev - I wonder if we shouldn't just revert 354dffd29575. I don't like how
the original patch piles more mess into an already HUGE function and it's
clearly adding risk here.

On Tue, Feb 24, 2026 at 04:39:34PM +0530, Dev Jain wrote:
> We batch unmapping of anonymous lazyfree folios by folio_unmap_pte_batch.
> If the batch has a mix of writable and non-writable bits, we may end up
> setting the entire batch writable. Fix this by write-protecting the ptes
> during pte restoration in the failure path.
>
> I was able to write the below reproducer and crash the kernel.
> Explanation of reproducer (set 64K mTHP to always):
>
> Fault in a 64K large folio. Split the VMA at mid-point with MADV_DONTFORK.
> fork() - parent points to the folio with 8 writable ptes and 8 non-writable
> ptes. Merge the VMAs with MADV_DOFORK so that folio_unmap_pte_batch() can
> determine all the 16 ptes as a batch. Do MADV_FREE on the range to mark
> the folio as lazyfree. Write to the memory to dirty the pte, eventually
> rmap will dirty the folio. Then trigger reclaim, we will hit the pte
> restoration path, and the kernel will crash with the following trace:
>
> [   21.134473] kernel BUG at mm/page_table_check.c:118!
> [   21.134497] Internal error: Oops - BUG: 00000000f2000800 [#1]  SMP
> [   21.135917] Modules linked in:
> [   21.136085] CPU: 1 UID: 0 PID: 1735 Comm: dup-lazyfree Not tainted 7.0.0-rc1-00116-g018018a17770 #1028 PREEMPT
> [   21.136858] Hardware name: linux,dummy-virt (DT)
> [   21.137019] pstate: 21400005 (nzCv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--)
> [   21.137308] pc : page_table_check_set+0x28c/0x2a8
> [   21.137607] lr : page_table_check_set+0x134/0x2a8
> [   21.137885] sp : ffff80008a3b3340
> [   21.138124] x29: ffff80008a3b3340 x28: fffffdffc3d14400 x27: ffffd1a55e03d000
> [   21.138623] x26: 0040000000000040 x25: ffffd1a55f7dd000 x24: 0000000000000001
> [   21.139045] x23: 0000000000000001 x22: 0000000000000001 x21: ffffd1a55f217f30
> [   21.139629] x20: 0000000000134521 x19: 0000000000134519 x18: 005c43e000040000
> [   21.140027] x17: 0001400000000000 x16: 0001700000000000 x15: 000000000000ffff
> [   21.140578] x14: 000000000000000c x13: 005c006000000000 x12: 0000000000000020
> [   21.140828] x11: 0000000000000000 x10: 005c000000000000 x9 : ffffd1a55c079ee0
> [   21.141077] x8 : 0000000000000001 x7 : 005c03e000040000 x6 : 000000004000ffff
> [   21.141490] x5 : ffff00017fffce00 x4 : 0000000000000001 x3 : 0000000000000002
> [   21.141741] x2 : 0000000000134510 x1 : 0000000000000000 x0 : ffff0000c08228c0
> [   21.141991] Call trace:
> [   21.142093]  page_table_check_set+0x28c/0x2a8 (P)
> [   21.142265]  __page_table_check_ptes_set+0x144/0x1e8
> [   21.142441]  __set_ptes_anysz.constprop.0+0x160/0x1a8
> [   21.142766]  contpte_set_ptes+0xe8/0x140
> [   21.142907]  try_to_unmap_one+0x10c4/0x10d0
> [   21.143177]  rmap_walk_anon+0x100/0x250
> [   21.143315]  try_to_unmap+0xa0/0xc8
> [   21.143441]  shrink_folio_list+0x59c/0x18a8
> [   21.143759]  shrink_lruvec+0x664/0xbf0
> [   21.144043]  shrink_node+0x218/0x878
> [   21.144285]  __node_reclaim.constprop.0+0x98/0x338
> [   21.144763]  user_proactive_reclaim+0x2a4/0x340
> [   21.145056]  reclaim_store+0x3c/0x60
> [   21.145216]  dev_attr_store+0x20/0x40
> [   21.145585]  sysfs_kf_write+0x84/0xa8
> [   21.145835]  kernfs_fop_write_iter+0x130/0x1c8
> [   21.145994]  vfs_write+0x2b8/0x368
> [   21.146119]  ksys_write+0x70/0x110
> [   21.146240]  __arm64_sys_write+0x24/0x38
> [   21.146380]  invoke_syscall+0x50/0x120
> [   21.146513]  el0_svc_common.constprop.0+0x48/0xf8
> [   21.146679]  do_el0_svc+0x28/0x40
> [   21.146798]  el0_svc+0x34/0x110
> [   21.146926]  el0t_64_sync_handler+0xa0/0xe8
> [   21.147074]  el0t_64_sync+0x198/0x1a0
> [   21.147225] Code: f9400441 b4fff241 17ffff94 d4210000 (d4210000)
> [   21.147440] ---[ end trace 0000000000000000 ]---
>
>
> #define _GNU_SOURCE
> #include <stdio.h>
> #include <unistd.h>
> #include <stdlib.h>
> #include <sys/mman.h>
> #include <string.h>
> #include <sys/wait.h>
> #include <sched.h>
> #include <fcntl.h>
>
> void write_to_reclaim() {
>     const char *path = "/sys/devices/system/node/node0/reclaim";
>     const char *value = "409600000000";
>     int fd = open(path, O_WRONLY);
>     if (fd == -1) {
>         perror("open");
>         exit(EXIT_FAILURE);
>     }
>
>     if (write(fd, value, sizeof("409600000000") - 1) == -1) {
>         perror("write");
>         close(fd);
>         exit(EXIT_FAILURE);
>     }
>
>     printf("Successfully wrote %s to %s\n", value, path);
>     close(fd);
> }
>
> int main()
> {
> 	char *ptr = mmap((void *)(1UL << 30), 1UL << 16, PROT_READ | PROT_WRITE,
> 			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> 	if ((unsigned long)ptr != (1UL << 30)) {
> 		perror("mmap");
> 		return 1;
> 	}
>
> 	/* a 64K folio gets faulted in */
> 	memset(ptr, 0, 1UL << 16);
>
> 	/* 32K half will not be shared into child */
> 	if (madvise(ptr, 1UL << 15, MADV_DONTFORK)) {
> 		perror("madvise madv dontfork");
> 		return 1;
> 	}
>
> 	pid_t pid = fork();
>
> 	if (pid < 0) {
> 		perror("fork");
> 		return 1;
> 	} else if (pid == 0) {
> 		sleep(15);
> 	} else {
> 		/* merge VMAs. now first half of the 16 ptes are writable, the other half not. */
> 		if (madvise(ptr, 1UL << 15, MADV_DOFORK)) {
> 			perror("madvise madv fork");
> 			return 1;
> 		}
> 		if (madvise(ptr, (1UL << 16), MADV_FREE)) {
> 			perror("madvise madv free");
> 			return 1;
> 		}
>
> 		/* dirty the large folio */
> 		(*ptr) += 10;
>
> 		write_to_reclaim();
> 		// sleep(10);
> 		waitpid(pid, NULL, 0);
>
> 	}
> }
>
> Fixes: 354dffd29575 ("mm: support batched unmap for lazyfree large folios during reclamation")
> Cc: stable <stable@kernel.org>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---
> Applies on mm-new (commit 018018a17770).

Thanks, but please base on mm-unstable, as mm-new is for now considered a
testing base only (yes we will endure merge conflict pain but I think
worthwhile).

>
>  mm/rmap.c | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index bff8f222004e4..501519844f290 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -2235,6 +2235,16 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>  				smp_rmb();
>
>  				if (folio_test_dirty(folio) && !(vma->vm_flags & VM_DROPPABLE)) {
> +					/*
> +					 * The pte batch may have a mix of writable and non-writable
> +					 * ptes. If the first pte of the batch was writable, we may
> +					 * end up restoring the ptes incorrectly by setting the
> +					 * entire batch writable. Avoid this by setting the batch
> +					 * non-writable; this is not optimal, but improbable to
> +					 * reach by virtue of being a failure path.
> +					 */
> +					pteval = pte_wrprotect(pteval);

Is this really a good long-term solution?

This feels like a hack.

> +
>  					/*
>  					 * redirtied either using the page table or a previously
>  					 * obtained GUP reference.
> @@ -2243,6 +2253,9 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>  					folio_set_swapbacked(folio);
>  					goto walk_abort;
>  				} else if (ref_count != 1 + map_count) {
> +					/* See comment above */
> +					pteval = pte_wrprotect(pteval);
> +

Again, feels like a hack.

>  					/*
>  					 * Additional reference. Could be a GUP reference or any
>  					 * speculative reference. GUP users must mark the folio
> --
> 2.34.1
>

So maybe a revert + a rethink?

David - what do you think?

Thanks, Lorenzo



* Re: [PATCH] mm/rmap: fix incorrect pte restoration for lazyfree folios
  2026-02-24 11:31 ` Lorenzo Stoakes
@ 2026-02-24 11:43   ` Lorenzo Stoakes
  2026-02-24 16:01     ` David Hildenbrand (Arm)
  0 siblings, 1 reply; 13+ messages in thread
From: Lorenzo Stoakes @ 2026-02-24 11:43 UTC (permalink / raw)
  To: Dev Jain
  Cc: akpm, david, riel, Liam.Howlett, vbabka, harry.yoo, jannh,
	baohua, linux-mm, linux-kernel, stable

On Tue, Feb 24, 2026 at 11:31:24AM +0000, Lorenzo Stoakes wrote:
> Thanks Dev.
>
> Andrew - why was commit 354dffd29575 ("mm: support batched unmap for lazyfree
> large folios during reclamation") merged?
>
> It had enormous amounts of review commentary at
> https://lore.kernel.org/all/146b4cb1-aa1e-4519-9e03-f98cfb1135d2@redhat.com/ and
> no tags, this should be a signal to wait for a respin _at least_, and really if
> late in cycle suggests it should wait a cycle.
>
> I've said going forward I'm going to check THP series for tags and if not
> present NAK if they hit mm-stable, I guess I'll extend that to rmap also.

Sorry - rushing through, I misread the original mail; this commit is old... so
this is less pressing than I thought (for some reason I thought it was merged
last cycle...!), but it's a good example of how stuff can go unnoticed for a
while.

In that case maybe a revert is a bit much and we just want the simplest possible
fix for backporting.

But is the proposed 'just assume wrprotect' sensible? David?

Thanks, Lorenzo



* Re: [PATCH] mm/rmap: fix incorrect pte restoration for lazyfree folios
  2026-02-24 11:43   ` Lorenzo Stoakes
@ 2026-02-24 16:01     ` David Hildenbrand (Arm)
  2026-02-25  5:11       ` Dev Jain
                         ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: David Hildenbrand (Arm) @ 2026-02-24 16:01 UTC (permalink / raw)
  To: Lorenzo Stoakes, Dev Jain
  Cc: akpm, riel, Liam.Howlett, vbabka, harry.yoo, jannh, baohua,
	linux-mm, linux-kernel, stable

On 2/24/26 12:43, Lorenzo Stoakes wrote:
> On Tue, Feb 24, 2026 at 11:31:24AM +0000, Lorenzo Stoakes wrote:
>> Thanks Dev.
>>
>> Andrew - why was commit 354dffd29575 ("mm: support batched unmap for lazyfree
>> large folios during reclamation") merged?
>>
>> It had enormous amounts of review commentary at
>> https://lore.kernel.org/all/146b4cb1-aa1e-4519-9e03-f98cfb1135d2@redhat.com/ and
>> no tags, this should be a signal to wait for a respin _at least_, and really if
>> late in cycle suggests it should wait a cycle.
>>
>> I've said going forward I'm going to check THP series for tags and if not
>> present NAK if they hit mm-stable, I guess I'll extend that to rmap also.
> 
> Sorry I misread the original mail rushing through this is old... so this is less
> pressing than I thought (for some reason I thought it was merged last cycle...!)
> but it's a good example of how stuff can go unnoticed for a while.
> 
> In that case maybe a revert is a bit much and we just want the simplest possible
> fix for backporting.

Dev volunteered to un-messify some of the stuff here. In particular, to
extend batching to all cases, not just some hand-selected ones.

Support for file folios is on the way.

> 
> But is the proposed 'just assume wrprotect' sensible? David?

In general, I think so. If PTEs were writable, they certainly have
PAE set. The write-fault handler can fully recover from that (as PAE is
set). If it's ever a performance problem (doubt), we can revisit.

I'm wondering whether we should just perform the wrprotect earlier:

diff --git a/mm/rmap.c b/mm/rmap.c
index 0f00570d1b9e..19b875ee3fad 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2150,6 +2150,16 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 
                        /* Nuke the page table entry. */
                        pteval = get_and_clear_ptes(mm, address, pvmw.pte, nr_pages);
+
+                       /*
+                        * Our batch might include writable and read-only
+                        * PTEs. When we have to restore the mapping, just
+                        * assume read-only to not accidentally upgrade
+                        * write permissions for PTEs that must not be
+                        * writable.
+                        */
+                       pteval = pte_wrprotect(pteval);
+
                        /*
                         * We clear the PTE but do not flush so potentially
                         * a remote CPU could still be writing to the folio


Given that nobody asks for writability (pte_write()) later.

Or does someone care?

Staring at set_tlb_ubc_flush_pending()->pte_accessible() I am
not 100% sure. Could pte_wrprotect() turn a PTE inaccessible on some
architecture (write-only)? I don't think so.


We have the following options:

1) pte_wrprotect(): fake that all was read-only.

Either we do it like Dev suggests, or we do it as above early.

The downside is that any code that might later want to know "was
this possibly writable" would not get that information. Well, it wouldn't
get that information reliably *today* already (and that sounds a bit shaky).

2) Tell batching logic to honor pte_write()

Sounds suboptimal for some cases that really don't care in the future.

3) Tell batching logic to tell us if any pte was writable: FPB_MERGE_WRITE

... then we know for sure whether any PTE was writable and we could

(a) Pass it around, as we did before, to all checks, like pte_accessible().

(b) Have an explicit restore-PTE path where we play it safe.


I raised with Dev in private that softdirty handling is also shaky, as we
batch over it - meaning that we could lose or gain softdirty PTE bits within
a batch.

For lazyfree and file folios it doesn't really matter I guess. But it will
matter once we unlock it for all anon folios.


1) sounds simplest, 3) sounds cleanest long-term.

-- 
Cheers,

David



* Re: [PATCH] mm/rmap: fix incorrect pte restoration for lazyfree folios
  2026-02-24 16:01     ` David Hildenbrand (Arm)
@ 2026-02-25  5:11       ` Dev Jain
  2026-02-26 10:21         ` David Hildenbrand (Arm)
  2026-02-26  7:01       ` Barry Song
  2026-02-26  7:09       ` Lance Yang
  2 siblings, 1 reply; 13+ messages in thread
From: Dev Jain @ 2026-02-25  5:11 UTC (permalink / raw)
  To: David Hildenbrand (Arm), Lorenzo Stoakes
  Cc: akpm, riel, Liam.Howlett, vbabka, harry.yoo, jannh, baohua,
	linux-mm, linux-kernel, stable



On 24/02/26 9:31 pm, David Hildenbrand (Arm) wrote:
> On 2/24/26 12:43, Lorenzo Stoakes wrote:
>> On Tue, Feb 24, 2026 at 11:31:24AM +0000, Lorenzo Stoakes wrote:
>>> Thanks Dev.
>>>
>>> Andrew - why was commit 354dffd29575 ("mm: support batched unmap for lazyfree
>>> large folios during reclamation") merged?
>>>
>>> It had enormous amounts of review commentary at
>>> https://lore.kernel.org/all/146b4cb1-aa1e-4519-9e03-f98cfb1135d2@redhat.com/ and
>>> no tags, this should be a signal to wait for a respin _at least_, and really if
>>> late in cycle suggests it should wait a cycle.
>>>
>>> I've said going forward I'm going to check THP series for tags and if not
>>> present NAK if they hit mm-stable, I guess I'll extend that to rmap also.
>>
>> Sorry I misread the original mail rushing through this is old... so this is less
>> pressing than I thought (for some reason I thought it was merged last cycle...!)
>> but it's a good example of how stuff can go unnoticed for a while.
>>
>> In that case maybe a revert is a bit much and we just want the simplest possible
>> fix for backporting.
> 
> Dev volunteered to un-messify some of the stuff here. In particular, to
> extend batching to all cases, not just some hand-selected ones.
> 
> Support for file folios is on the way.

Typo - anonymous non-lazyfree folios : )

> 
>>
>> But is the proposed 'just assume wrprotect' sensible? David?
> 
> In general, I think so. If PTEs were writable, they certainly have
> PAE set. The write-fault handler can fully recover from that (as PAE is
> set). If it's ever a performance problem (doubt), we can revisit.
> 
> I'm wondering whether we should just perform the wrprotect earlier:
> 
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 0f00570d1b9e..19b875ee3fad 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -2150,6 +2150,16 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>  
>                         /* Nuke the page table entry. */
>                         pteval = get_and_clear_ptes(mm, address, pvmw.pte, nr_pages);
> +
> +                       /*
> +                        * Our batch might include writable and read-only
> +                        * PTEs. When we have to restore the mapping, just
> +                        * assume read-only to not accidentally upgrade
> +                        * write permissions for PTEs that must not be
> +                        * writable.
> +                        */
> +                       pteval = pte_wrprotect(pteval);
> +
>                         /*
>                          * We clear the PTE but do not flush so potentially
>                          * a remote CPU could still be writing to the folio
> 
> 
> Given that nobody asks for writability (pte_write()) later.
> 
> Or does someone care?
> 
> Staring at set_tlb_ubc_flush_pending()->pte_accessible() I am
> not 100% sure. Could pte_wrprotect() turn a PTE inaccessible on some
> architecture (write-only)? I don't think so.
> 
> 
> We have the following options:
> 
> 1) pte_wrprotect(): fake that all was read-only.
> 
> Either we do it like Dev suggests, or we do it as above early.
> 
> The downside is that any code that might later want to know "was
> this possibly writable" would get that information. Well, it wouldn't
> get that information reliably *today* already (and that sounds a bit shaky).

I would vote for this, since if we were to follow the current patch, the
extension to anon folios would make it worse (pte_wrprotect at 5 places -
the 3 additional places being in the if conditions containing
folio_dup_swap, arch_unmap_one and folio_try_share_anon_rmap_pte).
The downside is that if we fail in this rmap path, the ptes are all
write-protected. But then the page is already there - the fault is going
to be processed fast.

> 
> 2) Tell batching logic to honor pte_write()
> 
> Sounds suboptimal for some cases that really don't care in the future.
> 
> 3) Tell batching logic to tell us if any pte was writable: FPB_MERGE_WRITE
> 
> ... then we know for sure whether any PTE was writable and we could

Well, we don't need this? The problem here is that we are making a decision
on the basis of the writability of the *first* pte of the batch - only if
the first pte is writable do we have the problem we have been
talking about.

We could have had an FPB_MERGE_WRPROTECT (which, I know, is totally
incompatible with FPB_MERGE_WRITE) - that would tell us whether at least one
pte in the batch was non-writable, letting us skip write-protecting the
entire batch on restoration when all the ptes were writable (which I am
assuming is the common case). But of course this is not possible with the
current shape of folio_pte_batch_flags(). We would have to revert the
FPB_MERGE_* stuff to just collect the "at least one is writable, at least
one is dirty, at least one is young, at least one is non-writable" etc.
information from the function and let the caller handle it. That would kill
all the work you did in simplifying that function :)


> 
> (a) Pass it as we did before around to all checks, like pte_accessible().
> 
> (b) Have an explicit restore PTE where we play save.
> 
> 
> I raised to Dev in private that softdirty handling is also shaky, as we
> batch over that. Meaning that we could lose or gain softdirty PTE bits in
> a batch.
> 
> For lazyfree and file folios it doesn't really matter I guess. But it will
> matter once we unlock it for all anon folios.
> 
> 
> 1) sounds simplest, 3) sounds cleanest long-term.
> 




* Re: [PATCH] mm/rmap: fix incorrect pte restoration for lazyfree folios
  2026-02-24 16:01     ` David Hildenbrand (Arm)
  2026-02-25  5:11       ` Dev Jain
@ 2026-02-26  7:01       ` Barry Song
  2026-02-26 10:09         ` David Hildenbrand (Arm)
  2026-02-26  7:09       ` Lance Yang
  2 siblings, 1 reply; 13+ messages in thread
From: Barry Song @ 2026-02-26  7:01 UTC (permalink / raw)
  To: David Hildenbrand (Arm)
  Cc: Lorenzo Stoakes, Dev Jain, akpm, riel, Liam.Howlett, vbabka,
	harry.yoo, jannh, linux-mm, linux-kernel, stable

On Wed, Feb 25, 2026 at 12:01 AM David Hildenbrand (Arm)
<david@kernel.org> wrote:
>
> On 2/24/26 12:43, Lorenzo Stoakes wrote:
> > On Tue, Feb 24, 2026 at 11:31:24AM +0000, Lorenzo Stoakes wrote:
> >> Thanks Dev.
> >>
> >> Andrew - why was commit 354dffd29575 ("mm: support batched unmap for lazyfree
> >> large folios during reclamation") merged?
> >>
> >> It had enormous amounts of review commentary at
> >> https://lore.kernel.org/all/146b4cb1-aa1e-4519-9e03-f98cfb1135d2@redhat.com/ and
> >> no tags, this should be a signal to wait for a respin _at least_, and really if
> >> late in cycle suggests it should wait a cycle.
> >>
> >> I've said going forward I'm going to check THP series for tags and if not
> >> present NAK if they hit mm-stable, I guess I'll extend that to rmap also.
> >
> > Sorry I misread the original mail rushing through this is old... so this is less
> > pressing than I thought (for some reason I thought it was merged last cycle...!)
> > but it's a good example of how stuff can go unnoticed for a while.
> >
> > In that case maybe a revert is a bit much and we just want the simplest possible
> > fix for backporting.

Apologies for the mess I caused, and thanks to Dev for catching this bug.

>
> Dev volunteered to un-messify some of the stuff here. In particular, to
> extend batching to all cases, not just some hand-selected ones.
>
> Support for file folios is on the way.
>
> >
> > But is the proposed 'just assume wrprotect' sensible? David?
>
> In general, I think so. If PTEs were writable, they certainly have
> PAE set. The write-fault handler can fully recover from that (as PAE is
> set). If it's ever a performance problem (doubt), we can revisit.
>
> I'm wondering whether we should just perform the wrprotect earlier:
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 0f00570d1b9e..19b875ee3fad 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -2150,6 +2150,16 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>
>                         /* Nuke the page table entry. */
>                         pteval = get_and_clear_ptes(mm, address, pvmw.pte, nr_pages);
> +
> +                       /*
> +                        * Our batch might include writable and read-only
> +                        * PTEs. When we have to restore the mapping, just
> +                        * assume read-only to not accidentally upgrade
> +                        * write permissions for PTEs that must not be
> +                        * writable.
> +                        */
> +                       pteval = pte_wrprotect(pteval);
> +
>                         /*
>                          * We clear the PTE but do not flush so potentially
>                          * a remote CPU could still be writing to the folio
>
>
> Given that nobody asks for writability (pte_write()) later.
>
> Or does someone care?
>
> Staring at set_tlb_ubc_flush_pending()->pte_accessible() I am
> not 100% sure. Could pte_wrprotect() turn a PTE inaccessible on some
> architecture (write-only)? I don't think so.
>
>
> We have the following options:
>
> 1) pte_wrprotect(): fake that all was read-only.
>
> Either we do it like Dev suggests, or we do it as above early.
>
> The downside is that any code that might later want to know "was
> this possibly writable" would get that information. Well, it wouldn't
> get that information reliably *today* already (and that sounds a bit shaky).
>
> 2) Tell batching logic to honor pte_write()
>
> Sounds suboptimal for some cases that really don't care in the future.

I'm still curious what the downside would be to applying the
simple fix instead of introducing more "hacks". I assume
cases where a folio has both writable and non-writable PTEs
are not common?

diff --git a/mm/rmap.c b/mm/rmap.c
index bff8f222004e..48ad3435593a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1955,7 +1955,7 @@ static inline unsigned int folio_unmap_pte_batch(struct folio *folio,
        if (userfaultfd_wp(vma))
                return 1;

-       return folio_pte_batch(folio, pvmw->pte, pte, max_nr);
+       return folio_pte_batch_flags(folio, NULL, pvmw->pte, &pte, max_nr, FPB_RESPECT_WRITE);
 }

>
> 3) Tell batching logic to tell us if any pte was writable: FPB_MERGE_WRITE
>
> ... then we know for sure whether any PTE was writable and we could
>
> (a) Pass it as we did before around to all checks, like pte_accessible().
>
> (b) Have an explicit restore PTE where we play safe.
>
>
> I raised to Dev in private that softdirty handling is also shaky, as we
> batch over that. Meaning that we could lose or gain softdirty PTE bits in
> a batch.
>
> For lazyfree and file folios it doesn't really matter I guess. But it will
> matter once we unlock it for all anon folios.
>
>
> 1) sounds simplest, 3) sounds cleanest long-term.

Thanks
Barry


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH] mm/rmap: fix incorrect pte restoration for lazyfree folios
  2026-02-24 16:01     ` David Hildenbrand (Arm)
  2026-02-25  5:11       ` Dev Jain
  2026-02-26  7:01       ` Barry Song
@ 2026-02-26  7:09       ` Lance Yang
  2026-02-26 10:06         ` David Hildenbrand (Arm)
  2 siblings, 1 reply; 13+ messages in thread
From: Lance Yang @ 2026-02-26  7:09 UTC (permalink / raw)
  To: david
  Cc: Liam.Howlett, akpm, baohua, dev.jain, harry.yoo, jannh,
	linux-kernel, linux-mm, lorenzo.stoakes, riel, stable, vbabka,
	Lance Yang


On Tue, Feb 24, 2026 at 05:01:50PM +0100, David Hildenbrand (Arm) wrote:
>On 2/24/26 12:43, Lorenzo Stoakes wrote:
>> On Tue, Feb 24, 2026 at 11:31:24AM +0000, Lorenzo Stoakes wrote:
>>> Thanks Dev.
>>>
>>> Andrew - why was commit 354dffd29575 ("mm: support batched unmap for lazyfree
>>> large folios during reclamation") merged?
>>>
>>> It had enormous amounts of review commentary at
>>> https://lore.kernel.org/all/146b4cb1-aa1e-4519-9e03-f98cfb1135d2@redhat.com/ and
>>> no tags, this should be a signal to wait for a respin _at least_, and really if
>>> late in cycle suggests it should wait a cycle.
>>>
>>> I've said going forward I'm going to check THP series for tags and if not
>>> present NAK if they hit mm-stable, I guess I'll extend that to rmap also.
>> 
>> Sorry I misread the original mail rushing through this is old... so this is less
>> pressing than I thought (for some reason I thought it was merged last cycle...!)
>> but it's a good example of how stuff can go unnoticed for a while.
>> 
>> In that case maybe a revert is a bit much and we just want the simplest possible
>> fix for backporting.
>
>Dev volunteered to un-messify some of the stuff here. In particular, to
>extend batching to all cases, not just some hand-selected ones.
>
>Support for file folios is on the way.
>
>> 
>> But is the proposed 'just assume wrprotect' sensible? David?
>
>In general, I think so. If PTEs were writable, they certainly have
>PAE set. The write-fault handler can fully recover from that (as PAE is
>set). If it's ever a performance problem (doubt), we can revisit.
>
>I'm wondering whether we should just perform the wrprotect earlier:
>
>diff --git a/mm/rmap.c b/mm/rmap.c
>index 0f00570d1b9e..19b875ee3fad 100644
>--- a/mm/rmap.c
>+++ b/mm/rmap.c
>@@ -2150,6 +2150,16 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> 
>                        /* Nuke the page table entry. */
>                        pteval = get_and_clear_ptes(mm, address, pvmw.pte, nr_pages);
>+
>+                       /*
>+                        * Our batch might include writable and read-only
>+                        * PTEs. When we have to restore the mapping, just
>+                        * assume read-only to not accidentally upgrade
>+                        * write permissions for PTEs that must not be
>+                        * writable.
>+                        */
>+                       pteval = pte_wrprotect(pteval);
>+
>                        /*
>                         * We clear the PTE but do not flush so potentially
>                         * a remote CPU could still be writing to the folio
>
>
>Given that nobody asks for writability (pte_write()) later.
>
>Or does someone care?
>
>Staring at set_tlb_ubc_flush_pending()->pte_accessible() I am
>not 100% sure. Could pte_wrprotect() turn a PTE inaccessible on some
>architecture (write-only)? I don't think so.
>
>
>We have the following options:
>
>1) pte_wrprotect(): fake that all was read-only.
>
>Either we do it like Dev suggests, or we do it as above early.
>
>The downside is that any code that might later want to know "was
>this possibly writable" would get that information. Well, it wouldn't
>get that information reliably *today* already (and that sounds a bit shaky).

Makes sense to me :)

>2) Tell batching logic to honor pte_write()
>
>Sounds suboptimal for some cases that really don't care in the future.
>
>3) Tell batching logic to tell us if any pte was writable: FPB_MERGE_WRITE
>
>... then we know for sure whether any PTE was writable and we could
>
>(a) Pass it as we did before around to all checks, like pte_accessible().
>
>(b) Have an explicit restore PTE where we play safe.
>
>
>I raised to Dev in private that softdirty handling is also shaky, as we
>batch over that. Meaning that we could lose or gain softdirty PTE bits in
>a batch.

I guess we won't lose soft_dirty bits - only gain them (false positive):

1) get_and_clear_ptes() merges dirty bits from all PTEs via pte_mkdirty()
2) pte_mkdirty() atomically sets both _PAGE_DIRTY and _PAGE_SOFT_DIRTY on
all architectures that support soft_dirty (x86, s390, powerpc, riscv)
3) set_ptes() uses pte_advance_pfn() which keeps all flags intact

So if any PTE in the batch was dirty, all PTEs become soft_dirty after
restore.

>For lazyfree and file folios it doesn't really matter I guess. But it will
>matter once we unlock it for all anon folios.
>
>
>1) sounds simplest, 3) sounds cleanest long-term.
>


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH] mm/rmap: fix incorrect pte restoration for lazyfree folios
  2026-02-26  7:09       ` Lance Yang
@ 2026-02-26 10:06         ` David Hildenbrand (Arm)
  2026-02-26 10:28           ` Lance Yang
  0 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand (Arm) @ 2026-02-26 10:06 UTC (permalink / raw)
  To: Lance Yang
  Cc: Liam.Howlett, akpm, baohua, dev.jain, harry.yoo, jannh,
	linux-kernel, linux-mm, lorenzo.stoakes, riel, stable, vbabka

On 2/26/26 08:09, Lance Yang wrote:
> 
> On Tue, Feb 24, 2026 at 05:01:50PM +0100, David Hildenbrand (Arm) wrote:
>> On 2/24/26 12:43, Lorenzo Stoakes wrote:
>>>
>>> Sorry I misread the original mail rushing through this is old... so this is less
>>> pressing than I thought (for some reason I thought it was merged last cycle...!)
>>> but it's a good example of how stuff can go unnoticed for a while.
>>>
>>> In that case maybe a revert is a bit much and we just want the simplest possible
>>> fix for backporting.
>>
>> Dev volunteered to un-messify some of the stuff here. In particular, to
>> extend batching to all cases, not just some hand-selected ones.
>>
>> Support for file folios is on the way.
>>
>>>
>>> But is the proposed 'just assume wrprotect' sensible? David?
>>
>> In general, I think so. If PTEs were writable, they certainly have
>> PAE set. The write-fault handler can fully recover from that (as PAE is
>> set). If it's ever a performance problem (doubt), we can revisit.
>>
>> I'm wondering whether we should just perform the wrprotect earlier:
>>
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 0f00570d1b9e..19b875ee3fad 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -2150,6 +2150,16 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>
>>                        /* Nuke the page table entry. */
>>                        pteval = get_and_clear_ptes(mm, address, pvmw.pte, nr_pages);
>> +
>> +                       /*
>> +                        * Our batch might include writable and read-only
>> +                        * PTEs. When we have to restore the mapping, just
>> +                        * assume read-only to not accidentally upgrade
>> +                        * write permissions for PTEs that must not be
>> +                        * writable.
>> +                        */
>> +                       pteval = pte_wrprotect(pteval);
>> +
>>                        /*
>>                         * We clear the PTE but do not flush so potentially
>>                         * a remote CPU could still be writing to the folio
>>
>>
>> Given that nobody asks for writability (pte_write()) later.
>>
>> Or does someone care?
>>
>> Staring at set_tlb_ubc_flush_pending()->pte_accessible() I am
>> not 100% sure. Could pte_wrprotect() turn a PTE inaccessible on some
>> architecture (write-only)? I don't think so.
>>
>>
>> We have the following options:
>>
>> 1) pte_wrprotect(): fake that all was read-only.
>>
>> Either we do it like Dev suggests, or we do it as above early.
>>
>> The downside is that any code that might later want to know "was
>> this possibly writable" would get that information. Well, it wouldn't
>> get that information reliably *today* already (and that sounds a bit shaky).
> 
> Makes sense to me :)
> 
>> 2) Tell batching logic to honor pte_write()
>>
>> Sounds suboptimal for some cases that really don't care in the future.
>>
>> 3) Tell batching logic to tell us if any pte was writable: FPB_MERGE_WRITE
>>
>> ... then we know for sure whether any PTE was writable and we could
>>
>> (a) Pass it as we did before around to all checks, like pte_accessible().
>>
>> (b) Have an explicit restore PTE where we play safe.
>>
>>
>> I raised to Dev in private that softdirty handling is also shaky, as we
>> batch over that. Meaning that we could lose or gain softdirty PTE bits in
>> a batch.
> 
> I guess we won't lose soft_dirty bits - only gain them (false positive):
> 
> 1) get_and_clear_ptes() merges dirty bits from all PTEs via pte_mkdirty()
> 2) pte_mkdirty() atomically sets both _PAGE_DIRTY and _PAGE_SOFT_DIRTY on
> all architectures that support soft_dirty (x86, s390, powerpc, riscv)
> 3) set_ptes() uses pte_advance_pfn() which keeps all flags intact
> 
> So if any PTE in the batch was dirty, all PTEs become soft_dirty after
> restore.

PTEs can be softdirty without being dirty. That over-complicates the
situation.

-- 
Cheers,

David


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH] mm/rmap: fix incorrect pte restoration for lazyfree folios
  2026-02-26  7:01       ` Barry Song
@ 2026-02-26 10:09         ` David Hildenbrand (Arm)
  2026-02-26 10:20           ` Barry Song
  0 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand (Arm) @ 2026-02-26 10:09 UTC (permalink / raw)
  To: Barry Song
  Cc: Lorenzo Stoakes, Dev Jain, akpm, riel, Liam.Howlett, vbabka,
	harry.yoo, jannh, linux-mm, linux-kernel, stable

On 2/26/26 08:01, Barry Song wrote:
> On Wed, Feb 25, 2026 at 12:01 AM David Hildenbrand (Arm)
> <david@kernel.org> wrote:
>>
>> On 2/24/26 12:43, Lorenzo Stoakes wrote:
>>>
>>> Sorry I misread the original mail rushing through this is old... so this is less
>>> pressing than I thought (for some reason I thought it was merged last cycle...!)
>>> but it's a good example of how stuff can go unnoticed for a while.
>>>
>>> In that case maybe a revert is a bit much and we just want the simplest possible
>>> fix for backporting.
> 
> Apologies for the mess I caused, and thanks to Dev for catching this bug.
> 
>>
>> Dev volunteered to un-messify some of the stuff here. In particular, to
>> extend batching to all cases, not just some hand-selected ones.
>>
>> Support for file folios is on the way.
>>
>>>
>>> But is the proposed 'just assume wrprotect' sensible? David?
>>
>> In general, I think so. If PTEs were writable, they certainly have
>> PAE set. The write-fault handler can fully recover from that (as PAE is
>> set). If it's ever a performance problem (doubt), we can revisit.
>>
>> I'm wondering whether we should just perform the wrprotect earlier:
>>
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 0f00570d1b9e..19b875ee3fad 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -2150,6 +2150,16 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>
>>                         /* Nuke the page table entry. */
>>                         pteval = get_and_clear_ptes(mm, address, pvmw.pte, nr_pages);
>> +
>> +                       /*
>> +                        * Our batch might include writable and read-only
>> +                        * PTEs. When we have to restore the mapping, just
>> +                        * assume read-only to not accidentally upgrade
>> +                        * write permissions for PTEs that must not be
>> +                        * writable.
>> +                        */
>> +                       pteval = pte_wrprotect(pteval);
>> +
>>                         /*
>>                          * We clear the PTE but do not flush so potentially
>>                          * a remote CPU could still be writing to the folio
>>
>>
>> Given that nobody asks for writability (pte_write()) later.
>>
>> Or does someone care?
>>
>> Staring at set_tlb_ubc_flush_pending()->pte_accessible() I am
>> not 100% sure. Could pte_wrprotect() turn a PTE inaccessible on some
>> architecture (write-only)? I don't think so.
>>
>>
>> We have the following options:
>>
>> 1) pte_wrprotect(): fake that all was read-only.
>>
>> Either we do it like Dev suggests, or we do it as above early.
>>
>> The downside is that any code that might later want to know "was
>> this possibly writable" would get that information. Well, it wouldn't
>> get that information reliably *today* already (and that sounds a bit shaky).
>>
>> 2) Tell batching logic to honor pte_write()
>>
>> Sounds suboptimal for some cases that really don't care in the future.
> 
> I'm still curious what the downside would be to applying the
> simple fix instead of introducing more "hacks". I assume
> cases where a folio has both writable and non-writable PTEs
> are not common?

With "in the future" I thought about file folios, where I'd assume it
could happen more often.

For lazyfree, I agree.

In the end, batching as much as possible is nice, but obviously, once it
gets too shaky in corner cases we might not care that much.

> 
> diff --git a/mm/rmap.c b/mm/rmap.c
> index bff8f222004e..48ad3435593a 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1955,7 +1955,7 @@ static inline unsigned int folio_unmap_pte_batch(struct folio *folio,
>         if (userfaultfd_wp(vma))
>                 return 1;
> 
> -       return folio_pte_batch(folio, pvmw->pte, pte, max_nr);
> +       return folio_pte_batch_flags(folio, NULL, pvmw->pte, &pte, max_nr, FPB_RESPECT_WRITE);
>  }

If we already go for this approach, I assume we should then just set
FPB_RESPECT_SOFT_DIRTY as well and have it all handled properly.

-- 
Cheers,

David


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH] mm/rmap: fix incorrect pte restoration for lazyfree folios
  2026-02-26 10:09         ` David Hildenbrand (Arm)
@ 2026-02-26 10:20           ` Barry Song
  0 siblings, 0 replies; 13+ messages in thread
From: Barry Song @ 2026-02-26 10:20 UTC (permalink / raw)
  To: David Hildenbrand (Arm)
  Cc: Lorenzo Stoakes, Dev Jain, akpm, riel, Liam.Howlett, vbabka,
	harry.yoo, jannh, linux-mm, linux-kernel, stable

On Thu, Feb 26, 2026 at 6:09 PM David Hildenbrand (Arm)
<david@kernel.org> wrote:
>
> On 2/26/26 08:01, Barry Song wrote:
> > On Wed, Feb 25, 2026 at 12:01 AM David Hildenbrand (Arm)
> > <david@kernel.org> wrote:
> >>
> >> On 2/24/26 12:43, Lorenzo Stoakes wrote:
> >>>
> >>> Sorry I misread the original mail rushing through this is old... so this is less
> >>> pressing than I thought (for some reason I thought it was merged last cycle...!)
> >>> but it's a good example of how stuff can go unnoticed for a while.
> >>>
> >>> In that case maybe a revert is a bit much and we just want the simplest possible
> >>> fix for backporting.
> >
> > Apologies for the mess I caused, and thanks to Dev for catching this bug.
> >
> >>
> >> Dev volunteered to un-messify some of the stuff here. In particular, to
> >> extend batching to all cases, not just some hand-selected ones.
> >>
> >> Support for file folios is on the way.
> >>
> >>>
> >>> But is the proposed 'just assume wrprotect' sensible? David?
> >>
> >> In general, I think so. If PTEs were writable, they certainly have
> >> PAE set. The write-fault handler can fully recover from that (as PAE is
> >> set). If it's ever a performance problem (doubt), we can revisit.
> >>
> >> I'm wondering whether we should just perform the wrprotect earlier:
> >>
> >> diff --git a/mm/rmap.c b/mm/rmap.c
> >> index 0f00570d1b9e..19b875ee3fad 100644
> >> --- a/mm/rmap.c
> >> +++ b/mm/rmap.c
> >> @@ -2150,6 +2150,16 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >>
> >>                         /* Nuke the page table entry. */
> >>                         pteval = get_and_clear_ptes(mm, address, pvmw.pte, nr_pages);
> >> +
> >> +                       /*
> >> +                        * Our batch might include writable and read-only
> >> +                        * PTEs. When we have to restore the mapping, just
> >> +                        * assume read-only to not accidentally upgrade
> >> +                        * write permissions for PTEs that must not be
> >> +                        * writable.
> >> +                        */
> >> +                       pteval = pte_wrprotect(pteval);
> >> +
> >>                         /*
> >>                          * We clear the PTE but do not flush so potentially
> >>                          * a remote CPU could still be writing to the folio
> >>
> >>
> >> Given that nobody asks for writability (pte_write()) later.
> >>
> >> Or does someone care?
> >>
> >> Staring at set_tlb_ubc_flush_pending()->pte_accessible() I am
> >> not 100% sure. Could pte_wrprotect() turn a PTE inaccessible on some
> >> architecture (write-only)? I don't think so.
> >>
> >>
> >> We have the following options:
> >>
> >> 1) pte_wrprotect(): fake that all was read-only.
> >>
> >> Either we do it like Dev suggests, or we do it as above early.
> >>
> >> The downside is that any code that might later want to know "was
> >> this possibly writable" would get that information. Well, it wouldn't
> >> get that information reliably *today* already (and that sounds a bit shaky).
> >>
> >> 2) Tell batching logic to honor pte_write()
> >>
> >> Sounds suboptimal for some cases that really don't care in the future.
> >
> > I'm still curious what the downside would be to applying the
> > simple fix instead of introducing more "hacks". I assume
> > cases where a folio has both writable and non-writable PTEs
> > are not common?
>
> With "in the future" I thought about file folios, where I'd assume it
> could happen more often.
>
> For lazyfree, I agree.
>
> In the end, batching as much as possible is nice, but obviously, once it
> gets too shaky in corner cases we might not care that much.

Assuming 90% of folios have consistent PTEs, perhaps we don’t
need to worry too much about the remaining 10% of inconsistent
folios. We’ve already gained performance benefits for the
consistent 90%, and while the remaining 10% may not receive the
full batch, they are still getting some batching.

I don’t have the exact number, but it’s likely 90% or higher :-)

>
> >
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index bff8f222004e..48ad3435593a 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -1955,7 +1955,7 @@ static inline unsigned int folio_unmap_pte_batch(struct folio *folio,
> >         if (userfaultfd_wp(vma))
> >                 return 1;
> >
> > -       return folio_pte_batch(folio, pvmw->pte, pte, max_nr);
> > +       return folio_pte_batch_flags(folio, NULL, pvmw->pte, &pte, max_nr, FPB_RESPECT_WRITE);
> >  }
>
> If we already go for this approach, I assume we should then just set
> FPB_RESPECT_SOFT_DIRTY as well and have it all handled properly.

I would vote for this, as supporting those inconsistent PTE
cases could become an endless and painful task :-)

Thanks
Barry


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH] mm/rmap: fix incorrect pte restoration for lazyfree folios
  2026-02-25  5:11       ` Dev Jain
@ 2026-02-26 10:21         ` David Hildenbrand (Arm)
  2026-02-26 10:27           ` Dev Jain
  0 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand (Arm) @ 2026-02-26 10:21 UTC (permalink / raw)
  To: Dev Jain, Lorenzo Stoakes
  Cc: akpm, riel, Liam.Howlett, vbabka, harry.yoo, jannh, baohua,
	linux-mm, linux-kernel, stable

On 2/25/26 06:11, Dev Jain wrote:
> 
> 
> On 24/02/26 9:31 pm, David Hildenbrand (Arm) wrote:
>> On 2/24/26 12:43, Lorenzo Stoakes wrote:
>>>
>>> Sorry I misread the original mail rushing through this is old... so this is less
>>> pressing than I thought (for some reason I thought it was merged last cycle...!)
>>> but it's a good example of how stuff can go unnoticed for a while.
>>>
>>> In that case maybe a revert is a bit much and we just want the simplest possible
>>> fix for backporting.
>>
>> Dev volunteered to un-messify some of the stuff here. In particular, to
>> extend batching to all cases, not just some hand-selected ones.
>>
>> Support for file folios is on the way.
> 
> Typo - anonymous non-lazyfree folios : )

Heh, no, not what I meant. We do have file folio support on the way (see
the other patch set).

> 
>>
>>>
>>> But is the proposed 'just assume wrprotect' sensible? David?
>>
>> In general, I think so. If PTEs were writable, they certainly have
>> PAE set. The write-fault handler can fully recover from that (as PAE is
>> set). If it's ever a performance problem (doubt), we can revisit.
>>
>> I'm wondering whether we should just perform the wrprotect earlier:
>>
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 0f00570d1b9e..19b875ee3fad 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -2150,6 +2150,16 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>  
>>                         /* Nuke the page table entry. */
>>                         pteval = get_and_clear_ptes(mm, address, pvmw.pte, nr_pages);
>> +
>> +                       /*
>> +                        * Our batch might include writable and read-only
>> +                        * PTEs. When we have to restore the mapping, just
>> +                        * assume read-only to not accidentally upgrade
>> +                        * write permissions for PTEs that must not be
>> +                        * writable.
>> +                        */
>> +                       pteval = pte_wrprotect(pteval);
>> +
>>                         /*
>>                          * We clear the PTE but do not flush so potentially
>>                          * a remote CPU could still be writing to the folio
>>
>>
>> Given that nobody asks for writability (pte_write()) later.
>>
>> Or does someone care?
>>
>> Staring at set_tlb_ubc_flush_pending()->pte_accessible() I am
>> not 100% sure. Could pte_wrprotect() turn a PTE inaccessible on some
>> architecture (write-only)? I don't think so.
>>
>>
>> We have the following options:
>>
>> 1) pte_wrprotect(): fake that all was read-only.
>>
>> Either we do it like Dev suggests, or we do it as above early.
>>
>> The downside is that any code that might later want to know "was
>> this possibly writable" would get that information. Well, it wouldn't
>> get that information reliably *today* already (and that sounds a bit shaky).
> 
> I would vote for this, since if we were to follow the current patch,
> the extension to anon folios will make it worse (pte_wrprotect at 5 places
> - the 3 additional places being in the if conditions consisting of
> folio_dup_swap, arch_unmap_one, folio_try_share_anon_rmap_pte).
> The downside being that if we fail in this rmap path, the ptes are all
> write-protected. But then the page is already there - the fault is going
> to be processed fast.

Right, we should only have a single "revert pte", and not have to redo
that from multiple locations.

> 
>>
>> 2) Tell batching logic to honor pte_write()
>>
>> Sounds suboptimal for some cases that really don't care in the future.

As per discussion with Barry, we might just want to do that now as an
easy and obviously correct fix.

It's a shame we stop being able to use folio_pte_batch() and have to
create an inlined version.

>>
>> 3) Tell batching logic to tell us if any pte was writable: FPB_MERGE_WRITE
>>
>> ... then we know for sure whether any PTE was writable and we could
> 
> Well, we don't need this? The problem here is that we are making a decision
> on the basis of the writability of the *first* pte of the batch - so if
> the first pte is writable, only then we have the problem we have been
> talking about.

That's what I was referring above as "being shaky".

Some code has to be taught that "there is something writable here, so
assume it was accessible in a certain way", other code has to be taught
that "there is something read-only here, so make sure you don't
accidentally make something writable".

One way to handle it is to say that "the resulting pte is writable, so
assume it was accessible", to then say "but just assume it is read-only
as we are not sure whether everything is writable".

> 
> We could have had a FPB_MERGE_WRPROTECT (which I know, is totally
> incompatible with FPB_MERGE_WRITE) - that would tell whether at least one
> pte in the batch was non-writable, in which case we will be able to avoid
> the restoration of the entire batch to writeprotected if all the ptes
> were writable (which I am assuming is the common case). But of course this
> is not possible to do with the current shape of folio_pte_batch_flags. We
> will have to revert the FPB_MERGE_* stuff to just collect the "at least one
> is writable, at least one is dirty, at least one is young, at least one is
> non-writable" etc information from the function and let the caller handle
> it. That will kill all the work you did in simplifying that function :)

Yeah, let's not go down that path. :)

To fix what we currently have in the tree, probably we should really
just set FPB_RESPECT_WRITE|FPB_RESPECT_SOFT_DIRTY, saying that this is
"obviously correct", and revisit it once we expect more cases where
batching over these PTEs would provide more value.

For lazyfree, likely it doesn't make a difference.

-- 
Cheers,

David


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH] mm/rmap: fix incorrect pte restoration for lazyfree folios
  2026-02-26 10:21         ` David Hildenbrand (Arm)
@ 2026-02-26 10:27           ` Dev Jain
  0 siblings, 0 replies; 13+ messages in thread
From: Dev Jain @ 2026-02-26 10:27 UTC (permalink / raw)
  To: David Hildenbrand (Arm), Lorenzo Stoakes
  Cc: akpm, riel, Liam.Howlett, vbabka, harry.yoo, jannh, baohua,
	linux-mm, linux-kernel, stable



On 26/02/26 3:51 pm, David Hildenbrand (Arm) wrote:
> On 2/25/26 06:11, Dev Jain wrote:
>>
>>
>> On 24/02/26 9:31 pm, David Hildenbrand (Arm) wrote:
>>> On 2/24/26 12:43, Lorenzo Stoakes wrote:
>>>>
>>>> Sorry I misread the original mail rushing through this is old... so this is less
>>>> pressing than I thought (for some reason I thought it was merged last cycle...!)
>>>> but it's a good example of how stuff can go unnoticed for a while.
>>>>
>>>> In that case maybe a revert is a bit much and we just want the simplest possible
>>>> fix for backporting.
>>>
>>> Dev volunteered to un-messify some of the stuff here. In particular, to
>>> extend batching to all cases, not just some hand-selected ones.
>>>
>>> Support for file folios is on the way.
>>
>> Typo - anonymous non-lazyfree folios : )
> 
> Heh, no, not what I meant. We do have file folio support on the way (see
> the other patch set).

Ah I thought that got merged already : )

> 
>>
>>>
>>>>
>>>> But is the proposed 'just assume wrprotect' sensible? David?
>>>
>>> In general, I think so. If PTEs were writable, they certainly have
>>> PAE set. The write-fault handler can fully recover from that (as PAE is
>>> set). If it's ever a performance problem (doubt), we can revisit.
>>>
>>> I'm wondering whether we should just perform the wrprotect earlier:
>>>
>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>> index 0f00570d1b9e..19b875ee3fad 100644
>>> --- a/mm/rmap.c
>>> +++ b/mm/rmap.c
>>> @@ -2150,6 +2150,16 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>>  
>>>                         /* Nuke the page table entry. */
>>>                         pteval = get_and_clear_ptes(mm, address, pvmw.pte, nr_pages);
>>> +
>>> +                       /*
>>> +                        * Our batch might include writable and read-only
>>> +                        * PTEs. When we have to restore the mapping, just
>>> +                        * assume read-only to not accidentally upgrade
>>> +                        * write permissions for PTEs that must not be
>>> +                        * writable.
>>> +                        */
>>> +                       pteval = pte_wrprotect(pteval);
>>> +
>>>                         /*
>>>                          * We clear the PTE but do not flush so potentially
>>>                          * a remote CPU could still be writing to the folio
>>>
>>>
>>> Given that nobody asks for writability (pte_write()) later.
>>>
>>> Or does someone care?
>>>
>>> Staring at set_tlb_ubc_flush_pending()->pte_accessible() I am
>>> not 100% sure. Could pte_wrprotect() turn a PTE inaccessible on some
>>> architecture (write-only)? I don't think so.
>>>
>>>
>>> We have the following options:
>>>
>>> 1) pte_wrprotect(): fake that all was read-only.
>>>
>>> Either we do it like Dev suggests, or we do it as above early.
>>>
>>> The downside is that any code that might later want to know "was
>>> this possibly writable" would get that information. Well, it wouldn't
>>> get that information reliably *today* already (and that sounds a bit shaky).
>>
>> I would vote for this, since if we were to follow the current patch,
>> the extension to anon folios will make it worse (pte_wrprotect at 5 places
>> - the 3 additional places being in the if conditions consisting of
>> folio_dup_swap, arch_unmap_one, folio_try_share_anon_rmap_pte).
>> The downside being that if we fail in this rmap path, the ptes are all
>> write-protected. But then the page is already there - the fault is going
>> to be processed fast.
> 
> Right, we should only have a single "revert pte", and not have to redo
> that from multiple locations.
> 
>>
>>>
>>> 2) Tell batching logic to honor pte_write()
>>>
>>> Sounds suboptimal for some cases that really don't care in the future.
> 
> As per discussion with Barry, we might just want to do that now as an
> easy and obviously correct fix.
> 
> It's a shame we stop being able to use folio_pte_batch() and have to
> create an inlined version.
> 
>>>
>>> 3) Tell batching logic to tell us if any pte was writable: FPB_MERGE_WRITE
>>>
>>> ... then we know for sure whether any PTE was writable and we could
>>
>> Well, we don't need this? The problem here is that we are making a decision
>> on the basis of the writability of the *first* pte of the batch - so only
>> if the first pte is writable do we have the problem we have been
>> talking about.
> 
> That's what I was referring above as "being shaky".
> 
> Some code has to be taught that "there is something writable here, so
> assume it was accessible in a certain way", other code has to be taught
> that "there is something read-only here, so make sure you don't
> accidentally make something writable".
> 
> One way to handle it is to say that "the resulting pte is writable, so
> assume it was accessible", to then say "but just assume it is read-only
> as we are not sure whether everything is writable".
> 
>>
>> We could have had a FPB_MERGE_WRPROTECT (which, I know, is totally
>> incompatible with FPB_MERGE_WRITE) - that would tell us whether at least one
>> pte in the batch was non-writable, in which case we would be able to avoid
>> restoring the entire batch as write-protected when all the ptes
>> were writable (which I am assuming is the common case). But of course this
>> is not possible with the current shape of folio_pte_batch_flags. We
>> would have to revert the FPB_MERGE_* stuff to just collect the "at least one
>> is writable, at least one is dirty, at least one is young, at least one is
>> non-writable" etc. information from the function and let the caller handle
>> it. That would kill all the work you did in simplifying that function :)
> Yeah, let's not go down that path. :)
> 
> To fix what we currently have in the tree, probably we should really
> just set FPB_RESPECT_WRITE|FPB_RESPECT_SOFT_DIRTY, saying that this is
> "obviously correct", and revisit it once we expect more cases where
> batching over these PTEs would provide more value.

Yup makes sense, I'll do this.
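
[The FPB_RESPECT_WRITE-style fix can be sketched as a user-space toy
model - not kernel code; the "pte" here is just a flag word and all
names are illustrative. The point is that a batch which stops at the
first write-bit mismatch can never mix writable and read-only entries,
so restoring the whole batch from the first pte's value is safe by
construction.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy "pte" flag word; names are illustrative, not kernel types. */
#define TOY_PTE_PRESENT 0x1
#define TOY_PTE_WRITE   0x2

/*
 * Count how many consecutive present ptes can be batched. When
 * respect_write is set, the batch stops at the first pte whose write
 * bit differs from the first pte's, so a later "restore whole batch
 * from ptes[0]" can never upgrade a read-only entry.
 */
static size_t toy_pte_batch(const unsigned int *ptes, size_t max,
			    bool respect_write)
{
	size_t nr = 1;

	while (nr < max && (ptes[nr] & TOY_PTE_PRESENT)) {
		if (respect_write &&
		    (ptes[nr] & TOY_PTE_WRITE) != (ptes[0] & TOY_PTE_WRITE))
			break;
		nr++;
	}
	return nr;
}
```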

> 
> For lazyfree, likely it doesn't make a difference.
> 



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH] mm/rmap: fix incorrect pte restoration for lazyfree folios
  2026-02-26 10:06         ` David Hildenbrand (Arm)
@ 2026-02-26 10:28           ` Lance Yang
  0 siblings, 0 replies; 13+ messages in thread
From: Lance Yang @ 2026-02-26 10:28 UTC (permalink / raw)
  To: David Hildenbrand (Arm)
  Cc: Liam.Howlett, akpm, baohua, dev.jain, harry.yoo, jannh,
	linux-kernel, linux-mm, lorenzo.stoakes, riel, stable, vbabka



On 2026/2/26 18:06, David Hildenbrand (Arm) wrote:
> On 2/26/26 08:09, Lance Yang wrote:
>>
>> On Tue, Feb 24, 2026 at 05:01:50PM +0100, David Hildenbrand (Arm) wrote:
>>> On 2/24/26 12:43, Lorenzo Stoakes wrote:
>>>>
>>>> Sorry, I misread the original mail rushing through - this is old... so this is less
>>>> pressing than I thought (for some reason I thought it was merged last cycle...!),
>>>> but it's a good example of how stuff can go unnoticed for a while.
>>>>
>>>> In that case maybe a revert is a bit much and we just want the simplest possible
>>>> fix for backporting.
>>>
>>> Dev volunteered to un-messify some of the stuff here. In particular, to
>>> extend batching to all cases, not just some hand-selected ones.
>>>
>>> Support for file folios is on the way.
>>>
>>>>
>>>> But is the proposed 'just assume wrprotect' sensible? David?
>>>
>>> In general, I think so. If PTEs were writable, they certainly have
>>> PAE set. The write-fault handler can fully recover from that (as PAE is
>>> set). If it's ever a performance problem (doubt), we can revisit.
>>>
>>> I'm wondering whether we should just perform the wrprotect earlier:
>>>
>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>> index 0f00570d1b9e..19b875ee3fad 100644
>>> --- a/mm/rmap.c
>>> +++ b/mm/rmap.c
>>> @@ -2150,6 +2150,16 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>>
>>>                         /* Nuke the page table entry. */
>>>                         pteval = get_and_clear_ptes(mm, address, pvmw.pte, nr_pages);
>>> +
>>> +                       /*
>>> +                        * Our batch might include writable and read-only
>>> +                        * PTEs. When we have to restore the mapping, just
>>> +                        * assume read-only to not accidentally upgrade
>>> +                        * write permissions for PTEs that must not be
>>> +                        * writable.
>>> +                        */
>>> +                       pteval = pte_wrprotect(pteval);
>>> +
>>>                         /*
>>>                          * We clear the PTE but do not flush so potentially
>>>                          * a remote CPU could still be writing to the folio
>>>
>>>
>>> Given that nobody asks for writability (pte_write()) later.
>>>
>>> Or does someone care?
>>>
>>> Staring at set_tlb_ubc_flush_pending()->pte_accessible() I am
>>> not 100% sure. Could pte_wrprotect() turn a PTE inaccessible on some
>>> architecture (write-only)? I don't think so.
>>>
>>>
>>> We have the following options:
>>>
>>> 1) pte_wrprotect(): fake that all was read-only.
>>>
>>> Either we do it like Dev suggests, or we do it as above early.
>>>
>>> The downside is that any code that might later want to know "was
>>> this possibly writable" would not get that information. Well, it wouldn't
>>> get that information reliably *today* already (and that sounds a bit shaky).
>>
>> Makes sense to me :)
>>
>>> 2) Tell batching logic to honor pte_write()
>>>
>>> Sounds suboptimal for some cases that really don't care in the future.
>>>
>>> 3) Tell batching logic to tell us if any pte was writable: FPB_MERGE_WRITE
>>>
>>> ... then we know for sure whether any PTE was writable and we could
>>>
>>> (a) Pass it as we did before around to all checks, like pte_accessible().
>>>
>>> (b) Have an explicit restore PTE where we play safe.
>>>
>>>
>>> I raised to Dev in private that softdirty handling is also shaky, as we
>>> batch over that. Meaning that we could lose or gain softdirty PTE bits in
>>> a batch.
>>
>> I guess we won't lose soft_dirty bits - only gain them (false positive):
>>
>> 1) get_and_clear_ptes() merges dirty bits from all PTEs via pte_mkdirty()
>> 2) pte_mkdirty() atomically sets both _PAGE_DIRTY and _PAGE_SOFT_DIRTY on
>> all architectures that support soft_dirty (x86, s390, powerpc, riscv)
>> 3) set_ptes() uses pte_advance_pfn() which keeps all flags intact
>>
>> So if any PTE in the batch was dirty, all PTEs become soft_dirty after
>> restore.
> 
> PTEs can be softdirty without being dirty. That over-complicates the
> situation.

Ah, it's even trickier then :D
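
[The softdirty hazard above can also be sketched as a user-space toy
model - not kernel code; the "pte" here is just a flag word and all
names are illustrative. Since a pte can be softdirty without being
dirty, restoring a batch from one template value can both drop a
softdirty bit a pte used to have and set it on ptes that never had it.]

```c
#include <assert.h>
#include <stddef.h>

/* Toy "pte" flag word; names are illustrative, not kernel types. */
#define TOY_PTE_PRESENT    0x1
#define TOY_PTE_DIRTY      0x2
#define TOY_PTE_SOFT_DIRTY 0x4

/*
 * Model of restoring a batch from a single template value: every pte
 * inherits the template's softdirty state, losing any per-pte
 * softdirty information the batch collapsed over.
 */
static void toy_restore_batch(unsigned int *ptes, size_t nr,
			      unsigned int pteval)
{
	for (size_t i = 0; i < nr; i++)
		ptes[i] = pteval;
}
```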



end of thread, other threads:[~2026-02-26 10:28 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-24 11:09 [PATCH] mm/rmap: fix incorrect pte restoration for lazyfree folios Dev Jain
2026-02-24 11:31 ` Lorenzo Stoakes
2026-02-24 11:43   ` Lorenzo Stoakes
2026-02-24 16:01     ` David Hildenbrand (Arm)
2026-02-25  5:11       ` Dev Jain
2026-02-26 10:21         ` David Hildenbrand (Arm)
2026-02-26 10:27           ` Dev Jain
2026-02-26  7:01       ` Barry Song
2026-02-26 10:09         ` David Hildenbrand (Arm)
2026-02-26 10:20           ` Barry Song
2026-02-26  7:09       ` Lance Yang
2026-02-26 10:06         ` David Hildenbrand (Arm)
2026-02-26 10:28           ` Lance Yang
