linux-mm.kvack.org archive mirror
From: Michal Hocko <mhocko@kernel.org>
To: Hugh Dickins <hughd@google.com>
Cc: Mel Gorman <mgorman@techsingularity.net>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tejun Heo <tj@kernel.org>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: Is it safe for kthreadd to drain_all_pages?
Date: Thu, 6 Apr 2017 10:55:30 +0200
Message-ID: <20170406085529.GF5497@dhcp22.suse.cz>
In-Reply-To: <alpine.LSU.2.11.1704051331420.4288@eggly.anvils>

On Wed 05-04-17 13:59:49, Hugh Dickins wrote:
> Hi Mel,
> 
> I suspect that it's not safe for kthreadd to drain_all_pages();
> but I haven't studied flush_work() etc, so don't really know what
> I'm talking about: hoping that you will jump to a realization.
> 
> 4.11-rc has been giving me hangs after hours of swapping load.  At
> first they looked like memory leaks ("fork: Cannot allocate memory");
> but for no good reason I happened to do "cat /proc/sys/vm/stat_refresh"
> before looking at /proc/meminfo one time, and the stat_refresh stuck
> in D state, waiting for completion of flush_work like many kworkers.
> kthreadd waiting for completion of flush_work in drain_all_pages().
> 
> But I only noticed that pattern later: originally tried to bisect
> rc1 before rc2 came out, but underestimated how long to wait before
> deciding a stage good - I thought 12 hours, but would now say 2 days.
> Too late for bisection, I suspect your drain_all_pages() changes.
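
The hang described above comes from the queue-and-flush pattern that
drain_all_pages() uses in 4.11-rc: it queues one drain work item per CPU
and then flush_work()s each of them.  A minimal sketch of that pattern
(reconstructed from the discussion, not the exact upstream source; the
names pcpu_drain and drain_local_pages_wq and the use of the default
system workqueue are assumptions here):

#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/percpu.h>
#include <linux/workqueue.h>

/* One drain work item per CPU (illustrative). */
static DEFINE_PER_CPU(struct work_struct, pcpu_drain);

static void drain_local_pages_wq(struct work_struct *work)
{
	/* Return this CPU's per-cpu pages to the buddy allocator. */
	drain_local_pages(NULL);
}

static void drain_all_pages_sketch(void)
{
	int cpu;

	/* Queue one drain work item on each online CPU... */
	for_each_online_cpu(cpu) {
		struct work_struct *work = per_cpu_ptr(&pcpu_drain, cpu);

		INIT_WORK(work, drain_local_pages_wq);
		queue_work_on(cpu, system_wq, work);	/* no rescuer */
	}

	/*
	 * ...then wait for all of them.  Running these items may require
	 * forking new kworkers, which only kthreadd can do.  If the task
	 * blocked here is kthreadd itself, the flush may never complete,
	 * which matches the D-state pile-up described above.
	 */
	for_each_online_cpu(cpu)
		flush_work(per_cpu_ptr(&pcpu_drain, cpu));
}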

Yes, this is fallout from Mel's changes. I was about to say that my
follow-up fix, which moved this flushing to a single WQ with a rescuer,
had already addressed it, but it seems that
http://www.ozlabs.org/~akpm/mmotm/broken-out/mm-move-pcp-and-lru-pcp-drainging-into-single-wq.patch
didn't make it into the Linus tree. Could you re-test with this one?
While your change is obviously correct, I think the above should address
it as well, and it is more generic. If it works, I will ask Andrew to
send the above to Linus (along with its follow-up,
mm-move-pcp-and-lru-pcp-drainging-into-single-wq-fix.patch).
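
For reference, the core of that patch's approach is to stop relying on
the default system workqueue for this draining and instead queue the
per-CPU work on one dedicated workqueue created with WQ_MEM_RECLAIM,
whose pre-forked rescuer thread can process the items even when no new
kworkers can be created.  A minimal sketch of that idea (the workqueue
name mm_percpu_wq and the initcall placement are assumptions, not a
quote of the patch):

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/workqueue.h>

/* Shared by the pcp and lru-pcp draining paths (illustrative). */
static struct workqueue_struct *mm_percpu_wq;

static int __init mm_percpu_wq_init(void)
{
	/*
	 * WQ_MEM_RECLAIM attaches a rescuer thread that is created up
	 * front, so queued drain work can still run when kworkers cannot
	 * be forked, for example when kthreadd is the caller stuck in
	 * flush_work().
	 */
	mm_percpu_wq = alloc_workqueue("mm_percpu_wq", WQ_MEM_RECLAIM, 0);
	if (!mm_percpu_wq)
		return -ENOMEM;
	return 0;
}
early_initcall(mm_percpu_wq_init);

With such a workqueue in place, drain_all_pages() and lru_add_drain_all()
would queue their per-CPU work items with
queue_work_on(cpu, mm_percpu_wq, work) rather than through the system
workqueue, so the flush no longer depends on forking new workers.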
-- 
Michal Hocko
SUSE Labs



Thread overview: 12+ messages
2017-04-05 20:59 Hugh Dickins
2017-04-06  8:55 ` Michal Hocko [this message]
2017-04-06 13:06 ` Mel Gorman
2017-04-06 18:52   ` Hugh Dickins
2017-04-07 16:25     ` Hugh Dickins
2017-04-07 16:39       ` Michal Hocko
2017-04-07 16:58         ` Hugh Dickins
2017-04-07 17:29           ` Michal Hocko
2017-04-07 18:46             ` Hugh Dickins
2017-04-08 17:04               ` Hugh Dickins
2017-04-08 18:09                 ` Mel Gorman
2017-04-13 17:56                   ` Andrea Arcangeli

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20170406085529.GF5497@dhcp22.suse.cz \
    --to=mhocko@kernel.org \
    --cc=akpm@linux-foundation.org \
    --cc=hughd@google.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=mgorman@techsingularity.net \
    --cc=tj@kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.