From: Roman Gushchin <guro@fb.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
Johannes Weiner <hannes@cmpxchg.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Kernel Team <Kernel-team@fb.com>
Subject: Re: [PATCH] mm: memcontrol: switch to rcu protection in drain_all_stock()
Date: Fri, 2 Aug 2019 17:36:42 +0000
Message-ID: <20190802173638.GC28431@tower.DHCP.thefacebook.com>
In-Reply-To: <20190802171451.GN6461@dhcp22.suse.cz>
On Fri, Aug 02, 2019 at 07:14:51PM +0200, Michal Hocko wrote:
> On Fri 02-08-19 17:00:34, Roman Gushchin wrote:
> > On Fri, Aug 02, 2019 at 10:59:47AM +0200, Michal Hocko wrote:
> > > On Fri 02-08-19 10:04:22, Michal Hocko wrote:
> > > > On Thu 01-08-19 16:35:13, Roman Gushchin wrote:
> > > > > Commit 72f0184c8a00 ("mm, memcg: remove hotplug locking from try_charge")
> > > > > introduced css_tryget()/css_put() calls in drain_all_stock(),
> > > > > which are supposed to protect the target memory cgroup from being
> > > > > released during the mem_cgroup_is_descendant() call.
> > > > >
> > > > > However, it's not completely safe. In theory, the memcg can go away
> > > > > between reading the stock->cached pointer and calling css_tryget().
> > > >
> > > > I have to remember how this whole thing is supposed to work; it's been
> > > > some time since I've looked into it.
> > >
> > > OK, I guess I remember now and I do not see how the race is possible.
> > > The stock cache keeps its memcg alive because it elevates the reference
> > > count for each cached charge. And that should keep the whole chain up
> > > to the root (of draining) alive, no? Or am I missing something? Could you
> > > generate a sequence of events that would lead to a use-after-free?
> >
> > Right, but that's true when you're reading the local percpu stock.
> > Here we read a remote stock->cached pointer, which can be cleared
> > by a concurrent drain_local_stock() running on that remote cpu.
>
> OK, I can see how refill_stock can race with drain_all_stock. I am not
> sure I see the drain_local_stock race, because that should be triggered
> only from drain_all_stock, and only one cpu is allowed to do that at a
> time. Maybe we still have a work scheduled from a previous run?
Exactly. A drain_local_stock() work scheduled on a remote cpu by a
previously executed drain_all_stock() races with the stock->cached memcg
check in the current drain_all_stock().
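To make the window explicit, here is a simplified sketch of the two sides
of the race (trimmed from the current drain_all_stock()/drain_stock(), so
the details are approximate, not a verbatim copy):

	/* drain_all_stock(), inspecting a remote cpu's stock */
	memcg = stock->cached;			/* (1) read the remote cached memcg */
	if (!memcg || !stock->nr_pages ||
	    !css_tryget(&memcg->css))		/* (3) css may already be freed */
		continue;

	/*
	 * Meanwhile, on that remote cpu, a work scheduled by a previous
	 * drain_all_stock() run executes drain_local_stock() -> drain_stock():
	 */
	struct mem_cgroup *old = stock->cached;

	if (stock->nr_pages) {
		page_counter_uncharge(&old->memory, stock->nr_pages);
		css_put_many(&old->css, stock->nr_pages); /* (2) may drop the last refs */
		stock->nr_pages = 0;
	}
	stock->cached = NULL;

If (2) happens between (1) and (3), nothing keeps the memcg alive anymore,
so css_tryget() at (3) can be called on an already freed css.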
>
> In any case, please document the race in the changelog. This code is
> indeed tricky and a comment would help as well.
Sure, will send out v2 soon.
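Roughly, the idea for v2 is to drop css_tryget()/css_put() here and do the
check under rcu_read_lock() instead; a sketch of the loop (the final
version may differ in details):

	for_each_online_cpu(cpu) {
		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
		struct mem_cgroup *memcg;
		bool flush = false;

		rcu_read_lock();
		memcg = stock->cached;
		if (memcg && stock->nr_pages &&
		    mem_cgroup_is_descendant(memcg, root_memcg))
			flush = true;
		rcu_read_unlock();

		if (flush &&
		    !test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
			if (cpu == curcpu)
				drain_local_stock(&stock->work);
			else
				schedule_work_on(cpu, &stock->work);
		}
	}

The rcu read section should be enough here: as far as I can tell, the
mem_cgroup struct is freed only after an rcu grace period (css release
goes through call_rcu()), so the pointer can't be freed under us while we
do the descendant check, and we don't need to hold a real reference just
for that.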
Thanks!