From: Michal Hocko <mhocko@suse.com>
To: Liu Shixin <liushixin2@huawei.com>
Cc: Yu Zhao <yuzhao@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Yosry Ahmed <yosryahmed@google.com>,
	Huang Ying <ying.huang@intel.com>,
	Sachin Sant <sachinp@linux.ibm.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Kefeng Wang <wangkefeng.wang@huawei.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v10] mm: vmscan: try to reclaim swapcache pages if no swap space
Date: Tue, 21 Nov 2023 14:00:21 +0100	[thread overview]
Message-ID: <ZVyp5eETLTT0PCYj@tiehlicka> (raw)
In-Reply-To: <20231121090624.1814733-1-liushixin2@huawei.com>

On Tue 21-11-23 17:06:24, Liu Shixin wrote:
> When the swap space of all swap devices is exhausted, only file pages can be
> reclaimed.  But there may still be some swapcache pages on the anon LRU list.
> This can lead to a premature out-of-memory condition.
> 
> The problem can be reproduced with the following steps:
> 
>  First, set up a 9MB disk swap space, then create a cgroup with a 10MB
>  memory limit, then run a program that allocates about 15MB of memory.
> 
> The problem occurs only occasionally; it may take about 100 runs to reproduce [1].
> 
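For illustration only, here is a minimal sketch of the kind of allocator program
described above. It is not the original test program; the 9MB swap device and the
10MB cgroup memory limit are assumed to be configured separately (e.g. via swapon
and the cgroup's memory.max), and the file name and constants are made up:

/* alloc15.c: illustrative reproducer sketch, not the original test program. */
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define ALLOC_SIZE (15UL * 1024 * 1024)	/* ~15MB, exceeding the 10MB limit */

int main(void)
{
	char *buf = malloc(ALLOC_SIZE);

	if (!buf)
		return 1;

	/* Touch every byte so the memory is actually charged to the cgroup. */
	memset(buf, 0x5a, ALLOC_SIZE);

	/* Hold the allocation for a while so reclaim has to push anon pages to swap. */
	sleep(30);

	free(buf);
	return 0;
}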
> Fix it by checking the number of swapcache pages in can_reclaim_anon_pages().
> If the number is not zero, return true and set swapcache_only to 1.
> When scanning the anon LRU list in swapcache_only mode, non-swapcache pages
> are skipped during isolation in order to improve reclaim efficiency.
> 
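As a rough sketch of the check described above (simplified, not the literal patch;
the swapcache_only field comes from the description, and total_swapcache_pages() is
used here as a stand-in for whatever swapcache counter the patch actually reads):

	/* Inside can_reclaim_anon_pages(), global (non-memcg) reclaim case: */
	if (get_nr_swap_pages() > 0)
		return true;

	/*
	 * No free swap space left, but pages already in the swapcache can
	 * still be reclaimed; restrict this scan to those pages only.
	 */
	if (total_swapcache_pages() > 0) {
		sc->swapcache_only = 1;
		return true;
	}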
> However, in swapcache_only mode the scan count is still incremented when
> non-swapcache pages are scanned, because in this mode there are typically
> many non-swapcache pages and only a few swapcache pages; if skipped
> non-swapcache pages were not counted, the page scan in isolate_lru_folios()
> could run long enough to trigger a hung task, just as Sachin reported [2].
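
Roughly, the accounting point being made here looks like the following inside the
isolation loop (a simplified sketch in the spirit of isolate_lru_folios(), not the
literal patch): a skipped non-swapcache folio still contributes to the scan count,
which is what keeps the loop bounded.

	for (scan = 0; scan < nr_to_scan && !list_empty(src); ) {
		struct folio *folio = lru_to_folio(src);
		unsigned long nr_pages = folio_nr_pages(folio);

		/* Counted as scanned even if it is skipped below. */
		scan += nr_pages;

		if (sc->swapcache_only && !folio_test_swapcache(folio)) {
			/* Not reclaimable without swap space; keep it on the LRU. */
			list_move(&folio->lru, src);
			continue;
		}

		/* ... normal isolation path ... */
	}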

I find this paragraph really confusing! I guess what you meant to say is
that a real swapcache_only is problematic because it can end up not
making any progress, correct? 

AFAIU you have addressed that problem by making swapcache_only anon LRU
specific, right? That would certainly be more robust, as you can still
reclaim from file LRUs. I cannot say I like it, though, because swapcache_only
is a bit confusing and I do not think we want to grow more special-purpose
reclaim types. Would it be possible/reasonable to put swapcache pages on
the file LRU instead?
-- 
Michal Hocko
SUSE Labs



Thread overview: 44+ messages
2023-11-21  9:06 Liu Shixin
2023-11-21 13:00 ` Michal Hocko [this message]
2023-11-22  6:41   ` Liu Shixin
2023-11-22  6:44     ` Yosry Ahmed
2023-11-22  6:57       ` Huang, Ying
2023-11-22  8:55         ` Michal Hocko
2023-11-22  8:52       ` Michal Hocko
2023-11-22 10:09         ` Michal Hocko
2023-11-22 10:39           ` Yosry Ahmed
2023-11-22 13:19             ` Michal Hocko
2023-11-22 20:13               ` Yosry Ahmed
2023-11-23  6:15               ` Huang, Ying
2023-11-24 16:30                 ` Michal Hocko
2023-11-27  2:34                   ` Huang, Ying
2023-11-27  7:42                     ` Chris Li
2023-11-27  8:11                       ` Huang, Ying
2023-11-27  8:22                         ` Chris Li
2023-11-27 21:31                           ` Minchan Kim
2023-11-27 21:56                             ` Yosry Ahmed
2023-11-28  3:19                               ` Huang, Ying
2023-11-28  3:27                                 ` Yosry Ahmed
2023-11-28  4:03                                   ` Huang, Ying
2023-11-28  4:13                                     ` Yosry Ahmed
2023-11-28  5:37                                       ` Huang, Ying
2023-11-28  5:41                                         ` Yosry Ahmed
2023-11-28  5:52                                           ` Huang, Ying
2023-11-28 22:37                                 ` Minchan Kim
2023-11-29  3:12                                   ` Huang, Ying
2023-11-29 10:22                                 ` Michal Hocko
2023-11-30  8:07                                   ` Huang, Ying
2023-11-28 23:45                               ` Chris Li
2023-11-27  9:10                     ` Michal Hocko
2023-11-28  1:31                       ` Huang, Ying
2023-11-28 10:16                         ` Michal Hocko
2023-11-28 22:45                           ` Minchan Kim
2023-11-28 23:05                             ` Yosry Ahmed
2023-11-28 23:15                               ` Minchan Kim
2023-11-29 10:17                                 ` Michal Hocko
2023-12-13 23:13                                   ` Andrew Morton
2023-12-15  5:05                                     ` Huang, Ying
2023-12-15 19:24                                       ` Andrew Morton
2023-11-23 17:30   ` Chris Li
2023-11-23 17:19 ` Chris Li
2023-11-28  1:59   ` Liu Shixin
