From: Johannes Weiner <hannes@cmpxchg.org>
To: David Rientjes <rientjes@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Mel Gorman <mgorman@suse.de>, Rik van Riel <riel@redhat.com>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [patch] mm, vmscan: abort futile reclaim if we've been oom killed
Date: Wed, 13 Nov 2013 10:24:12 -0500
Message-ID: <20131113152412.GH707@cmpxchg.org>
In-Reply-To: <alpine.DEB.2.02.1311121801200.18803@chino.kir.corp.google.com>

On Tue, Nov 12, 2013 at 06:02:18PM -0800, David Rientjes wrote:
> The oom killer is only invoked when reclaim has already failed, and it
> only kills a process if killing it would actually free memory that the
> allocating task can use.  In other words, the oom killer does not select
> victims when a process tries to allocate from a disjoint cpuset or to
> allocate DMA memory, for example.
> 
> Therefore, it's pointless for an oom-killed process to continue
> attempting to reclaim memory in a loop when it has been granted access to
> memory reserves.  It can simply return to the page allocator and allocate
> memory.

On the other hand, finishing reclaim of 32 pages should not be a
problem.
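
For reference, the bail-out being proposed boils down to something like
this in do_try_to_free_pages() (a sketch of the idea, not the literal
hunk from the patch):

	/*
	 * Sketch only: an oom-killed task already has access to
	 * memory reserves, so looping in reclaim is futile.  Claim
	 * progress and return to the page allocator right away.
	 */
	if (test_thread_flag(TIF_MEMDIE))
		return 1;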

> If there is a very large number of processes trying to reclaim memory,
> the cond_resched() in shrink_slab() becomes troublesome since it always
> forces a reschedule, yielding to other processes that are also trying to
> reclaim memory.  Compounded by many reclaim loops, it is possible for a
> process to sit in do_try_to_free_pages() for a very long time when
> reclaim is pointless and it could allocate if it just returned to the
> page allocator.

"Very large number of processes"

"sit in do_try_to_free_pages() for a very long time"

Can you quantify this a bit more?

And how common are OOM kills on your setups that you need to optimize
them on this level?

It sounds like your problem could be solved by having cond_resched()
not schedule away from TIF_MEMDIE processes, which would be much
preferable to oom-killed checks in random places.
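
Something along these lines, say (a hypothetical helper, not existing
kernel API):

	/*
	 * Hypothetical sketch: don't voluntarily schedule away from a
	 * task that has been oom killed, so it gets back to the page
	 * allocator (and ultimately exits, releasing its memory) as
	 * quickly as possible.
	 */
	static inline int cond_resched_unless_memdie(void)
	{
		if (unlikely(test_thread_flag(TIF_MEMDIE)))
			return 0;
		return cond_resched();
	}

Reclaim paths like shrink_slab() could then use this instead of a bare
cond_resched(), without oom-killed checks sprinkled all over.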
