linux-mm.kvack.org archive mirror
* [PATCH] mm/vmscan: restore allowed mask in alloc_demote_folio()
@ 2026-03-02  7:03 Bing Jiao
  2026-03-02  8:00 ` David Hildenbrand (Arm)
  0 siblings, 1 reply; 2+ messages in thread
From: Bing Jiao @ 2026-03-02  7:03 UTC (permalink / raw)
  To: linux-mm
  Cc: Bing Jiao, Andrew Morton, Johannes Weiner, David Hildenbrand,
	Michal Hocko, Qi Zheng, Shakeel Butt, Lorenzo Stoakes,
	Axel Rasmussen, Yuanchu Xie, Wei Xu, linux-kernel

In alloc_demote_folio(), mtc->nmask is set to NULL for the first
allocation. If that succeeds, it returns without restoring mtc->nmask
to allowed_mask. For subsequent allocations from the migrate_pages()
batch, mtc->nmask will be NULL. If the target node then becomes full,
the fallback allocation will use nmask = NULL, allocating from any
node allowed by the task cpuset, which for kswapd is all nodes.

Fix this by restoring mtc->nmask to the original allowed nodemask
immediately after the first allocation attempt, so later calls in the
batch see the expected mask.

Signed-off-by: Bing Jiao <bingjiao@google.com>
---
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index cbffc0a27824..b42abd17aee7 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -985,11 +985,11 @@ static struct folio *alloc_demote_folio(struct folio *src,
 	mtc->nmask = NULL;
 	mtc->gfp_mask |= __GFP_THISNODE;
 	dst = alloc_migration_target(src, (unsigned long)mtc);
+	mtc->nmask = allowed_mask;
 	if (dst)
 		return dst;

 	mtc->gfp_mask &= ~__GFP_THISNODE;
-	mtc->nmask = allowed_mask;

 	return alloc_migration_target(src, (unsigned long)mtc);
 }
--
2.53.0.473.g4a7958ca14-goog




* Re: [PATCH] mm/vmscan: restore allowed mask in alloc_demote_folio()
  2026-03-02  7:03 [PATCH] mm/vmscan: restore allowed mask in alloc_demote_folio() Bing Jiao
@ 2026-03-02  8:00 ` David Hildenbrand (Arm)
  0 siblings, 0 replies; 2+ messages in thread
From: David Hildenbrand (Arm) @ 2026-03-02  8:00 UTC (permalink / raw)
  To: Bing Jiao, linux-mm
  Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Qi Zheng,
	Shakeel Butt, Lorenzo Stoakes, Axel Rasmussen, Yuanchu Xie,
	Wei Xu, linux-kernel

On 3/2/26 08:03, Bing Jiao wrote:
> In alloc_demote_folio(), mtc->nmask is set to NULL for the first
> allocation. If that succeeds, it returns without restoring mtc->nmask
> to allowed_mask. For subsequent allocations from the migrate_pages()
> batch, mtc->nmask will be NULL. If the target node then becomes full,
> the fallback allocation will use nmask = NULL, allocating from any
> node allowed by the task cpuset, which for kswapd is all nodes.
> 
> Fix this by restoring mtc->nmask to the original allowed nodemask
> immediately after the first allocation attempt, so later calls in the
> batch see the expected mask.
> 

That would be

Fixes: 320080272892 ("mm/demotion: demote pages according to allocation fallback order")

?

> Signed-off-by: Bing Jiao <bingjiao@google.com>
> ---
>  mm/vmscan.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index cbffc0a27824..b42abd17aee7 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -985,11 +985,11 @@ static struct folio *alloc_demote_folio(struct folio *src,
>  	mtc->nmask = NULL;
>  	mtc->gfp_mask |= __GFP_THISNODE;
>  	dst = alloc_migration_target(src, (unsigned long)mtc);
> +	mtc->nmask = allowed_mask;
>  	if (dst)
>  		return dst;
> 
>  	mtc->gfp_mask &= ~__GFP_THISNODE;
> -	mtc->nmask = allowed_mask;
> 
>  	return alloc_migration_target(src, (unsigned long)mtc);
>  }
> --
> 2.53.0.473.g4a7958ca14-goog
> 

Maybe we should just not touch the original mtc?

diff --git a/mm/vmscan.c b/mm/vmscan.c
index de62225b381a..f07716e5389e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -985,9 +985,9 @@ static void folio_check_dirty_writeback(struct folio *folio,
 static struct folio *alloc_demote_folio(struct folio *src,
                unsigned long private)
 {
+       struct migration_target_control *mtc, target_nid_mtc;
        struct folio *dst;
        nodemask_t *allowed_mask;
-       struct migration_target_control *mtc;
 
        mtc = (struct migration_target_control *)private;
 
@@ -1001,15 +1001,12 @@ static struct folio *alloc_demote_folio(struct folio *src,
         * a demotion of cold pages from the target memtier. This can result
         * in the kernel placing hot pages in slower(lower) memory tiers.
         */
-       mtc->nmask = NULL;
-       mtc->gfp_mask |= __GFP_THISNODE;
-       dst = alloc_migration_target(src, (unsigned long)mtc);
+       target_nid_mtc = *mtc;
+       target_nid_mtc.nmask = NULL;
+       target_nid_mtc.gfp_mask |= __GFP_THISNODE;
+       dst = alloc_migration_target(src, (unsigned long)&target_nid_mtc);
        if (dst)
                return dst;
-
-       mtc->gfp_mask &= ~__GFP_THISNODE;
-       mtc->nmask = allowed_mask;
-
        return alloc_migration_target(src, (unsigned long)mtc);
 }
 


-- 
Cheers,

David


