linux-mm.kvack.org archive mirror
From: Michal Hocko <mhocko@suse.com>
To: Hui Wang <hui.wang@canonical.com>
Cc: Gao Xiang <hsiangkao@linux.alibaba.com>,
	linux-mm@kvack.org, akpm@linux-foundation.org, surenb@google.com,
	colin.i.king@gmail.com, shy828301@gmail.com, hannes@cmpxchg.org,
	vbabka@suse.cz, hch@infradead.org, mgorman@suse.de,
	Phillip Lougher <phillip@squashfs.org.uk>
Subject: Re: [PATCH 1/1] mm/oom_kill: trigger the oom killer if oom occurs without __GFP_FS
Date: Wed, 3 May 2023 14:20:31 +0200	[thread overview]
Message-ID: <ZFJRj3Kt0DHTNh1L@dhcp22.suse.cz> (raw)
In-Reply-To: <b2f74e0d-8f5a-5d08-061b-0417bfc60110@canonical.com>

On Wed 03-05-23 19:49:19, Hui Wang wrote:
> 
> On 4/29/23 03:53, Michal Hocko wrote:
> > On Thu 27-04-23 11:47:10, Hui Wang wrote:
> > [...]
> > > So Michal,
> > > 
> > > Don't know if you read the "[PATCH 0/1] mm/oom_kill: system enters a state
> > > something like hang when running stress-ng", do you know why out_of_memory()
> > > will return immediately if there is no __GFP_FS, could we drop these lines
> > > directly:
> > > 
> > >      /*
> > >       * The OOM killer does not compensate for IO-less reclaim.
> > >       * pagefault_out_of_memory lost its gfp context so we have to
> > >       * make sure exclude 0 mask - all other users should have at least
> > >       * ___GFP_DIRECT_RECLAIM to get here. But mem_cgroup_oom() has to
> > >       * invoke the OOM killer even if it is a GFP_NOFS allocation.
> > >       */
> > >      if (oc->gfp_mask && !(oc->gfp_mask & __GFP_FS) && !is_memcg_oom(oc))
> > >          return true;
> > The comment is rather hard to grasp without an intimate knowledge of the
> > memory reclaim. The primary reason is that the allocation context
> > without __GFP_FS (and also __GFP_IO) cannot perform a full memory
> > reclaim because fs or the storage subsystem might be holding locks
> > required for the memory reclaim. This means that a large amount of
> > reclaimable memory is out of sight of the specific direct reclaim
> > context. If we allowed oom killer to trigger we could invoke the oom
> > killer while there is a lot of otherwise reclaimable memory. As you can
> > imagine not something many users would appreciate as the oom kill is a
> > very disruptive operation. In this case we rely on kswapd or other
> > GFP_KERNEL-like allocation context to make forward progress instead. If there is
> > really nothing reclaimable then the oom killer would eventually hit from
> > elsewhere.
> > 
> > HTH
> Hi Michal,
> 
> Understood. Thanks for the explanation. So we can't remove those two lines
> of code.
> 
> Here in my patch, letting a kthread allocate a page with GFP_KERNEL, it
> could possibly trigger the reclaim and, if nothing is reclaimable, trigger
> the oom killer. Do you think it is a safe workaround for the issue we are
> facing currently?

I have to say I really dislike this workaround. Allocating memory just
to release it and potentially hit the oom killer is not a very mindful
approach to the problem. It is not a reliable way either because
you depend on the WQ context which might be clogged for the very same
lack of memory. This issue simply doesn't have a simple and neat
solution unfortunately.

I would prefer if the fs could be less demanding from NOFS context if
that is possible at all.
-- 
Michal Hocko
SUSE Labs



Thread overview: 25+ messages
2023-04-26  5:10 [PATCH 0/1] mm/oom_kill: system enters a state something like hang when running stress-ng Hui Wang
2023-04-26  5:10 ` [PATCH 1/1] mm/oom_kill: trigger the oom killer if oom occurs without __GFP_FS Hui Wang
2023-04-26  8:33   ` Michal Hocko
2023-04-26 11:07     ` Hui Wang
2023-04-26 16:44       ` Phillip Lougher
2023-04-26 17:38         ` Phillip Lougher
2023-04-26 18:26           ` Yang Shi
2023-04-26 19:06             ` Phillip Lougher
2023-04-26 19:34               ` Phillip Lougher
2023-04-27  0:42                 ` Hui Wang
2023-04-27  1:37                   ` Phillip Lougher
2023-04-27  5:22                     ` Hui Wang
2023-04-27  1:18       ` Gao Xiang
2023-04-27  3:47         ` Hui Wang
2023-04-27  4:17           ` Gao Xiang
2023-04-27  7:03           ` Colin King (gmail)
2023-04-27  7:49             ` Hui Wang
2023-04-28 19:53           ` Michal Hocko
2023-05-03 11:49             ` Hui Wang
2023-05-03 12:20               ` Michal Hocko [this message]
2023-05-03 18:41                 ` Phillip Lougher
2023-05-03 19:10               ` Phillip Lougher
2023-05-03 19:38                 ` Hui Wang
2023-05-07 21:07                 ` Phillip Lougher
2023-05-08 10:05                   ` Hui Wang
