From: David Rientjes <rientjes@google.com>
To: Vlastimil Babka <vbabka@suse.cz>,
Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Mel Gorman <mgorman@techsingularity.net>
Subject: Re: [patch] mm, oom: stop reclaiming if GFP_ATOMIC will start failing soon
Date: Tue, 28 Apr 2020 14:48:25 -0700 (PDT)
Message-ID: <alpine.DEB.2.22.394.2004281436280.131129@chino.kir.corp.google.com>
In-Reply-To: <28e35a8b-400e-9320-5a97-accfccf4b9a8@suse.cz>
On Tue, 28 Apr 2020, Vlastimil Babka wrote:
> > I took a look at doing a quick-fix for the
> > direct-reclaimers-get-their-stuff-stolen issue about a million years
> > ago. I don't recall where it ended up. It's pretty trivial for the
> > direct reclaimer to free pages into current->reclaimed_pages and to
> > take a look in there on the allocation path, etc. But it's only
> > practical for order-0 pages.
>
> FWIW there's already such an approach added to compaction by Mel some time ago,
> so order>0 allocations are covered to some extent. But in this case I imagine
> that compaction won't even start because order-0 watermarks are too low.
>
> The order-0 reclaim capture might work though - as a result the GFP_ATOMIC
> allocations would more likely fail and defer to their fallback context.
>
Yes, order-0 reclaim capture is interesting, since the issue being reported
here is userspace going out to lunch because it loops for an unbounded
amount of time trying to get back above the watermark at which it is
allowed to allocate, while other consumers keep depleting that resource.
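(As a toy model of that dynamic only -- nothing below is kernel code and
all of the names and numbers are made up: the direct reclaimer only makes
forward progress if it can free pages faster than concurrent GFP_ATOMIC
users consume them, otherwise the retry loop has no bound.)

/* toy_watermark_loop.c -- illustrative only, not mm/page_alloc.c */
#include <stdio.h>

#define LOW_WMARK          1024  /* pages needed before the allocation succeeds */
#define RECLAIMED_PER_PASS 32    /* pages one reclaim pass returns to the pool  */
#define ATOMIC_DRAIN       40    /* pages atomic allocators take in the meantime */

int main(void)
{
        long free_pages = 256;   /* below the watermark, so we direct reclaim */
        long passes = 0;

        while (free_pages < LOW_WMARK && passes < 1000000) {
                free_pages += RECLAIMED_PER_PASS;  /* freed into the shared pool  */
                free_pages -= ATOMIC_DRAIN;        /* ...and taken before we retry */
                if (free_pages < 0)
                        free_pages = 0;
                passes++;
        }

        if (free_pages >= LOW_WMARK)
                printf("allocated after %ld reclaim passes\n", passes);
        else
                printf("gave up after %ld passes: reclaim never gets ahead\n", passes);
        return 0;
}

With RECLAIMED_PER_PASS >= ATOMIC_DRAIN the loop terminates quickly; flip
the two constants and it never does, which is the situation the watermark
retry loop in the allocator can end up in.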
We actually prefer to oom kill earlier rather than be put into a perpetual
state of aggressive reclaim that affects all allocators; the unbounded
nature of those allocation attempts leads to very poor results for
everybody.
I'm happy to scope this solely to order-0 reclaim capture. I'm not clear
on whether this has been worked on before and whether patches exist from a
past attempt?
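To make sure we're talking about the same thing, here's a minimal
userspace sketch of the capture idea (the reclaimed_pages name comes from
Andrew's mail upthread; every other name here is invented for illustration
and is not existing kernel API): pages freed by the direct reclaimer go
onto a per-task list that the allocation path consumes before touching the
shared free pool, so concurrent allocators can't steal them.

/* toy_reclaim_capture.c -- illustrative only, names are made up */
#include <stdio.h>
#include <stdlib.h>

struct page {
        struct page *next;
};

/* stand-in for the current->reclaimed_pages field Andrew suggested */
struct task {
        struct page *reclaimed_pages;
};

static struct page *shared_pool;        /* stand-in for the zone free lists */

/* direct reclaim: capture freed order-0 pages for the reclaiming task... */
static void reclaim_one_page(struct task *current_task)
{
        struct page *page = malloc(sizeof(*page));

        page->next = current_task->reclaimed_pages;
        current_task->reclaimed_pages = page;
}

/* ...so the allocation path uses them before falling back to the pool */
static struct page *alloc_one_page(struct task *current_task)
{
        struct page *page = current_task->reclaimed_pages;

        if (page) {
                current_task->reclaimed_pages = page->next;
                return page;            /* cannot be stolen by other allocators */
        }
        page = shared_pool;             /* atomic allocators compete here */
        if (page)
                shared_pool = page->next;
        return page;
}

int main(void)
{
        struct task t = { .reclaimed_pages = NULL };
        struct page *page;

        reclaim_one_page(&t);
        page = alloc_one_page(&t);
        printf("allocation %s\n",
               page ? "satisfied from per-task capture" : "fell back / failed");
        free(page);
        return 0;
}

Obviously the real thing would have to live in the reclaim and allocation
slowpaths and deal with per-cpu lists, accounting and so on; the sketch is
only meant to pin down the intended data flow.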
Somewhat related to what I described in the changelog: we lost the "page
allocation stalls" artifacts in the kernel log in 4.15. The commit
description references an asynchronous mechanism for getting this
information; I don't know where that mechanism currently lives.