From: David Rientjes <rientjes@google.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>, Rik van Riel <riel@redhat.com>,
Johannes Weiner <hannes@cmpxchg.org>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [patch] mm, vmscan: abort futile reclaim if we've been oom killed
Date: Tue, 12 Nov 2013 18:02:18 -0800 (PST)
Message-ID: <alpine.DEB.2.02.1311121801200.18803@chino.kir.corp.google.com>
The oom killer is invoked only after reclaim has already failed, and it
only kills a process if the victim's memory can actually relieve the
caller's oom condition. In other words, the oom killer does not select
victims when a process tries to allocate from a disjoint cpuset or tries
to allocate DMA memory, for example.
It is therefore pointless for an oom-killed process to keep looping in
reclaim once it has been granted access to memory reserves: it can simply
return to the page allocator and satisfy its allocation from those
reserves.
If a very large number of processes are trying to reclaim memory, the
cond_resched() in shrink_slab() becomes troublesome, since it always
forces a schedule away to other processes that are also reclaiming.
Compounded over many reclaim loops, a process can sit in
do_try_to_free_pages() for a very long time even though reclaim is
pointless and it could allocate immediately if it just returned to the
page allocator.
This patch checks whether current has been oom killed and, if so, aborts
the futile reclaim immediately. We are not concerned about completely
depleting memory reserves, since there is nothing else that can be done.
Signed-off-by: David Rientjes <rientjes@google.com>
---
mm/vmscan.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/mm/vmscan.c b/mm/vmscan.c
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2428,6 +2428,14 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
goto out;
/*
+ * If we've been oom killed, reclaim has already failed. We've
+ * been given access to memory reserves so that we can allocate
+ * and quickly die, so just abort futile efforts.
+ */
+ if (unlikely(test_thread_flag(TIF_MEMDIE)))
+ aborted_reclaim = true;
+
+ /*
* If we're getting trouble reclaiming, start doing
* writepage even in laptop mode.
*/
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>
Thread overview: 13+ messages
2013-11-13 2:02 David Rientjes [this message]
2013-11-13 15:24 ` Johannes Weiner
2013-11-13 22:16 ` David Rientjes
2013-11-14 0:00 ` Johannes Weiner
2013-11-14 0:48 ` David Rientjes
2013-11-18 16:41 ` Johannes Weiner
2013-11-19 1:17 ` David Rientjes
2013-11-20 16:07 ` Johannes Weiner
2013-11-21 3:08 ` David Rientjes
2013-11-21 14:51 ` Johannes Weiner
2013-11-21 16:40 ` Johannes Weiner
2013-11-27 0:47 ` David Rientjes
2013-11-27 16:09 ` Johannes Weiner