From: "Jiayuan Chen" <jiayuan.chen@linux.dev>
To: "Shakeel Butt" <shakeel.butt@linux.dev>
Cc: linux-mm@kvack.org, "Jiayuan Chen" <jiayuan.chen@shopee.com>,
"Andrew Morton" <akpm@linux-foundation.org>,
"Johannes Weiner" <hannes@cmpxchg.org>,
"David Hildenbrand" <david@kernel.org>,
"Michal Hocko" <mhocko@kernel.org>,
"Qi Zheng" <zhengqi.arch@bytedance.com>,
"Lorenzo Stoakes" <lorenzo.stoakes@oracle.com>,
"Axel Rasmussen" <axelrasmussen@google.com>,
"Yuanchu Xie" <yuanchu@google.com>, "Wei Xu" <weixugc@google.com>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v1] mm/vmscan: mitigate spurious kswapd_failures reset from direct reclaim
Date: Tue, 23 Dec 2025 01:42:37 +0000
Message-ID: <2e574085ed3d7775c3b83bb80d302ce45415ac42@linux.dev>
In-Reply-To: <4owaeb7bmkfgfzqd4ztdsi4tefc36cnmpju4yrknsgjm4y32ez@qsgn6lnv3cxb>
December 23, 2025 at 05:15, "Shakeel Butt" <shakeel.butt@linux.dev> wrote:
>
> On Mon, Dec 22, 2025 at 08:20:21PM +0800, Jiayuan Chen wrote:
>
> >
> > From: Jiayuan Chen <jiayuan.chen@shopee.com>
> >
> > When kswapd fails to reclaim memory, kswapd_failures is incremented.
> > Once it reaches MAX_RECLAIM_RETRIES, kswapd stops running to avoid
> > futile reclaim attempts. However, any successful direct reclaim
> > unconditionally resets kswapd_failures to 0, which can cause problems.
> >
> > We observed an issue in production on a multi-NUMA system where a
> > process allocated large amounts of anonymous pages on a single NUMA
> > node, pushing the node's free memory below the high watermark and
> > evicting most of its file pages:
> >
> > $ numastat -m
> > Per-node system memory usage (in MBs):
> >                          Node 0          Node 1           Total
> >                 --------------- --------------- ---------------
> > MemTotal              128222.19       127983.91       256206.11
> > MemFree                 1414.48         1432.80         2847.29
> > MemUsed               126807.71       126551.11       252358.82
> > SwapCached                 0.00            0.00            0.00
> > Active                 29017.91        25554.57        54572.48
> > Inactive               92749.06        95377.00       188126.06
> > Active(anon)           28998.96        23356.47        52355.43
> > Inactive(anon)         92685.27        87466.11       180151.39
> > Active(file)              18.95         2198.10         2217.05
> > Inactive(file)            63.79         7910.89         7974.68
> >
> > With swap disabled, only file pages can be reclaimed. When kswapd is
> > woken (e.g., via wake_all_kswapds()), it runs continuously but cannot
> > raise free memory above the high watermark since reclaimable file pages
> > are insufficient. Normally, kswapd would eventually stop after
> > kswapd_failures reaches MAX_RECLAIM_RETRIES.
> >
> > However, pods on this machine have memory.high set in their cgroup.
> > Business processes continuously trigger the high limit, causing frequent
> > direct reclaim that keeps resetting kswapd_failures to 0. This prevents
> > kswapd from ever stopping.
> >
> > The result is that kswapd runs endlessly, repeatedly evicting the few
> > remaining file pages which are actually hot. These pages constantly
> > refault, generating sustained heavy IO READ pressure.
> >
> I don't think kswapd is an issue here. The system is out of memory and
> most of the memory is unreclaimable. Either change the workload to use
> less memory or enable swap (or zswap) to have more reclaimable memory.
Hi,
Thanks for looking into this.
Sorry, I didn't describe the scenario clearly enough in the original patch. Let me clarify:
This is a multi-NUMA system where the memory pressure is not global but
node-local. The key observations (see the code sketch below):

- Node 0 is under memory pressure, and most of its memory is anonymous
  (unreclaimable without swap).
- Node 1 has plenty of reclaimable memory (~60GB of file cache out of
  ~125GB total).
- Node 0's kswapd runs continuously but cannot reclaim anything.
- Direct reclaim succeeds by reclaiming from Node 1.
- Direct reclaim resets kswapd_failures, preventing Node 0's kswapd from
  stopping.
- The few file pages on Node 0 are hot and keep refaulting, causing heavy
  I/O.
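
For reference, a simplified sketch of the three mm/vmscan.c sites that
interact here (paraphrased from memory; exact code varies by kernel
version):

    /* balance_pgdat(): a kswapd pass that reclaims nothing is a failure */
    if (!sc.nr_reclaimed)
        pgdat->kswapd_failures++;

    /* prepare_kswapd_sleep(): after too many failures, kswapd gives up */
    if (pgdat->kswapd_failures >= MAX_RECLAIM_RETRIES)
        return true;

    /* shrink_node(): any reclaim progress revives a dormant kswapd */
    if (reclaimable)
        pgdat->kswapd_failures = 0;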
From a per-node perspective, Node 0 is truly out of reclaimable memory and
its kswapd should stop. But the globally successful direct reclaim (whose
progress actually comes from Node 1) incorrectly keeps Node 0's kswapd
alive.
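
To make the feedback loop concrete, here is a minimal userspace toy model
(not kernel code; it hard-codes the cross-node reset exactly as observed
above, and borrows MAX_RECLAIM_RETRIES = 16 from mm/internal.h):

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_RECLAIM_RETRIES 16  /* as in mm/internal.h */

    struct node {
        int kswapd_failures;
        bool reclaimable;   /* node still has file cache to give back */
    };

    /* One kswapd pass: reclaiming nothing counts as a failure. */
    static void kswapd_pass(struct node *n)
    {
        if (n->reclaimable)
            n->kswapd_failures = 0;
        else
            n->kswapd_failures++;
    }

    /*
     * One memory.high-triggered direct reclaim: it makes progress on
     * node 1 and, per the behavior described above, ends up resetting
     * node 0's failure counter as well.
     */
    static void direct_reclaim(struct node *n0, struct node *n1)
    {
        if (n1->reclaimable) {
            n0->kswapd_failures = 0;
            n1->kswapd_failures = 0;
        }
    }

    int main(void)
    {
        struct node n0 = { .reclaimable = false };  /* anon-only node */
        struct node n1 = { .reclaimable = true };   /* has file cache */

        for (int i = 0; i < 100; i++) {
            kswapd_pass(&n0);         /* futile pass, counter goes up */
            direct_reclaim(&n0, &n1); /* ...and is immediately reset */
        }
        printf("node 0 failures: %d, kswapd may sleep: %s\n",
               n0.kswapd_failures,
               n0.kswapd_failures >= MAX_RECLAIM_RETRIES ? "yes" : "no");
        return 0;
    }

With the cross-node reset in place this always prints "failures: 0,
kswapd may sleep: no"; drop the n0 reset in direct_reclaim() and node 0
reaches MAX_RECLAIM_RETRIES after 16 passes, which is the behavior the
patch aims to restore.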
Thanks.
> Other than that, we can discuss whether memcg reclaim resetting the
> kswapd failure count should be changed, but that is a separate
> discussion.
>
Thread overview: 6+ messages
[not found] <20251222122022.254268-1-jiayuan.chen@linux.dev>
2025-12-22 18:29 ` Andrew Morton
2025-12-23 1:51 ` Jiayuan Chen
2025-12-22 21:15 ` Shakeel Butt
2025-12-23 1:42 ` Jiayuan Chen [this message]
2025-12-23 6:11 ` Shakeel Butt
2025-12-23 8:22 ` Jiayuan Chen