From: Andrew Morton <akpm@linux-foundation.org>
To: Chanwon Park <flyinrm@gmail.com>
Cc: vbabka@suse.cz, surenb@google.com, mhocko@suse.com,
jackmanb@google.com, hannes@cmpxchg.org, ziy@nvidia.com,
david@redhat.com, zhengqi.arch@bytedance.com,
shakeel.butt@linux.dev, lorenzo.stoakes@oracle.com,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: re-enable kswapd when memory pressure subsides or demotion is toggled
Date: Mon, 8 Sep 2025 17:06:50 -0700 [thread overview]
Message-ID: <20250908170650.8ede03581f38392a34d0d1f7@linux-foundation.org> (raw)
In-Reply-To: <aL6qGi69jWXfPc4D@pcw-MS-7D22>
On Mon, 8 Sep 2025 19:04:10 +0900 Chanwon Park <flyinrm@gmail.com> wrote:
> If kswapd fails to reclaim pages from a node MAX_RECLAIM_RETRIES times in
> a row, kswapd on that node gets disabled. That is, the system won't wake
> kswapd for that node until page reclamation is observed at least once.
> That reclamation is mostly done by direct reclaim, which in turn
> re-enables kswapd.
>
> However, on systems with CXL memory nodes, workloads with high anon page
> usage can disable kswapd indefinitely, without triggering direct
> reclaim. This can be reproduced with following steps:
>
> numa node 0 (32GB memory, 48 CPUs)
> numa node 2~5 (512GB CXL memory, 128GB each)
> (numa node 1 is disabled)
> swap space 8GB
>
> 1) Set /sys/kernel/mm/demotion_enabled to 0.
> 2) Set /proc/sys/kernel/numa_balancing to 0.
> 3) Run a process that allocates and randomly accesses 500GB of anon
> pages.
> 4) Let the process exit normally.
hm, OK, I guess this is longstanding misbehavior?
>
> Since a reset of kswapd_failures could otherwise be lost to a racing ++
> operation, the field is changed from int to atomic_t.
Possibly this should have been a separate (earlier) patch. But I
assume the need for this conversion was introduced by this patch, so
it's debatable.
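The race the conversion guards against is the classic lost update: a plain `int` increment is a read-modify-write, so a reset to 0 that lands between the read and the write is silently overwritten by the stale `old + 1`. A userspace sketch of the new shape, using C11 atomics in place of the kernel's atomic_t API (the kernel patch itself uses atomic_read()/atomic_set(), and presumably atomic_inc() at the increment site):

```c
#include <stdatomic.h>

static atomic_int kswapd_failures = 0;

static void record_failure(void)
{
	/* Indivisible read-modify-write: a concurrent reset either lands
	 * wholly before this (result 1) or wholly after it (result 0);
	 * it can no longer be overwritten by a stale old + 1. */
	atomic_fetch_add(&kswapd_failures, 1);	/* kernel: atomic_inc() */
}

static void reset_failures(void)
{
	atomic_store(&kswapd_failures, 0);	/* kernel: atomic_set(..., 0) */
}

static int failures(void)
{
	return atomic_load(&kswapd_failures);	/* kernel: atomic_read() */
}
```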
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -1411,7 +1411,7 @@ typedef struct pglist_data {
>  	int kswapd_order;
>  	enum zone_type kswapd_highest_zoneidx;
>  
> -	int kswapd_failures; /* Number of 'reclaimed == 0' runs */
> +	atomic_t kswapd_failures; /* Number of 'reclaimed == 0' runs */
This caused a number of 80-column horrors! I had a fiddle, what do you
think?
--- a/mm/page_alloc.c~mm-re-enable-kswapd-when-memory-pressure-subsides-or-demotion-is-toggled-fix
+++ a/mm/page_alloc.c
@@ -2860,29 +2860,29 @@ static void free_frozen_page_commit(stru
 		 */
 		return;
 	}
+
 	high = nr_pcp_high(pcp, zone, batch, free_high);
-	if (pcp->count >= high) {
-		free_pcppages_bulk(zone, nr_pcp_free(pcp, batch, high, free_high),
-				   pcp, pindex);
-		if (test_bit(ZONE_BELOW_HIGH, &zone->flags) &&
-		    zone_watermark_ok(zone, 0, high_wmark_pages(zone),
-				      ZONE_MOVABLE, 0)) {
-			struct pglist_data *pgdat = zone->zone_pgdat;
-			clear_bit(ZONE_BELOW_HIGH, &zone->flags);
+	if (pcp->count < high)
+		return;
-			/*
-			 * Assume that memory pressure on this node is gone
-			 * and may be in a reclaimable state. If a memory
-			 * fallback node exists, direct reclaim may not have
-			 * been triggered, leaving 'hopeless node' stay in
-			 * that state for a while. Let kswapd work again by
-			 * resetting kswapd_failures.
-			 */
-			if (atomic_read(&pgdat->kswapd_failures)
-			    >= MAX_RECLAIM_RETRIES &&
-			    next_memory_node(pgdat->node_id) < MAX_NUMNODES)
-				atomic_set(&pgdat->kswapd_failures, 0);
-		}
+	free_pcppages_bulk(zone, nr_pcp_free(pcp, batch, high, free_high),
+			   pcp, pindex);
+	if (test_bit(ZONE_BELOW_HIGH, &zone->flags) &&
+	    zone_watermark_ok(zone, 0, high_wmark_pages(zone),
+			      ZONE_MOVABLE, 0)) {
+		struct pglist_data *pgdat = zone->zone_pgdat;
+		clear_bit(ZONE_BELOW_HIGH, &zone->flags);
+
+		/*
+		 * Assume that memory pressure on this node is gone and may be
+		 * in a reclaimable state. If a memory fallback node exists,
+		 * direct reclaim may not have been triggered, causing a
+		 * 'hopeless node' to stay in that state for a while. Let
+		 * kswapd work again by resetting kswapd_failures.
+		 */
+		if (atomic_read(&pgdat->kswapd_failures) >= MAX_RECLAIM_RETRIES &&
+		    next_memory_node(pgdat->node_id) < MAX_NUMNODES)
+			atomic_set(&pgdat->kswapd_failures, 0);
 	}
 }
--- a/mm/show_mem.c~mm-re-enable-kswapd-when-memory-pressure-subsides-or-demotion-is-toggled-fix
+++ a/mm/show_mem.c
@@ -278,8 +278,8 @@ static void show_free_areas(unsigned int
 #endif
 			K(node_page_state(pgdat, NR_PAGETABLE)),
 			K(node_page_state(pgdat, NR_SECONDARY_PAGETABLE)),
-			str_yes_no(atomic_read(&pgdat->kswapd_failures)
-				   >= MAX_RECLAIM_RETRIES),
+			str_yes_no(atomic_read(&pgdat->kswapd_failures) >=
+				   MAX_RECLAIM_RETRIES),
 			K(node_page_state(pgdat, NR_BALLOON_PAGES)));
_
Thread overview: 4+ messages (as of 2025-09-09 0:06 UTC)
2025-09-08 10:04 Chanwon Park
2025-09-09 0:06 ` Andrew Morton [this message]
2025-09-09 5:57 ` Chanwon Park
2025-09-30 7:43 ` Vlastimil Babka