From: Johannes Weiner <hannes@cmpxchg.org>
To: Yosry Ahmed <yosryahmed@google.com>
Cc: Yu Zhao <yuzhao@google.com>, Nhat Pham <nphamcs@gmail.com>,
akpm@linux-foundation.org, kernel-team@meta.com,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
stable@vger.kernel.org
Subject: Re: [PATCH v2] workingset: ensure memcg is valid for recency check
Date: Fri, 18 Aug 2023 14:35:38 -0400
Message-ID: <20230818183538.GA142974@cmpxchg.org>
In-Reply-To: <CAJD7tkZ3i-NoqSi+BkCY7nR-2z==243F1FKrh42toQwsgv5eKQ@mail.gmail.com>
On Fri, Aug 18, 2023 at 10:45:56AM -0700, Yosry Ahmed wrote:
> On Fri, Aug 18, 2023 at 10:35 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
> > On Fri, Aug 18, 2023 at 07:56:37AM -0700, Yosry Ahmed wrote:
> > > If so, it seems possible for this to happen:
> > >
> > > cpu #1                             cpu #2
> > > css_put()
> > > /* css_free_rwork_fn is queued */
> > >                                    rcu_read_lock()
> > >                                    mem_cgroup_from_id()
> > > mem_cgroup_id_remove()
> > >                                    /* access memcg */
> >
> > I don't quite see how that'd be possible. IDR uses rcu_assign_pointer()
> > during deletion, which inserts the necessary barriering. My
> > understanding is that this should always be safe:
> >
> > rcu_read_lock()                    (writer serialization, in this case ref count == 0)
> > foo = idr_find(x)                  idr_remove(x)
> > if (foo)                           kfree_rcu(foo)
> >         LOAD(foo->bar)
> > rcu_read_unlock()
>
> How does a barrier inside IDR removal protect against the memcg being
> freed here though?
>
> If css_put() is executed out-of-order before mem_cgroup_id_remove(),
> the memcg can be freed even before mem_cgroup_id_remove() is called,
> right?
css_put() can start earlier, but the RCU callback that does the freeing
is not allowed to be reordered before the rcu_assign_pointer() in
idr_remove(). This is what RCU and its access primitives guarantee:
once the pointer has been "unpublished", the memory is freed only after
all concurrent RCU-protected accesses to the object have finished.
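
To spell out the pattern in the schematic above (a minimal kernel-style
sketch; foo, foo_idr and do_something() are hypothetical names for
illustration, not the actual memcg code):

	#include <linux/idr.h>		/* idr_find(), idr_remove() */
	#include <linux/rcupdate.h>	/* rcu_read_lock(), kfree_rcu() */
	#include <linux/slab.h>

	struct foo {
		int bar;
		struct rcu_head rcu;
	};

	static DEFINE_IDR(foo_idr);	/* hypothetical IDR instance */

	/* Reader side: */
	void reader(int id)
	{
		struct foo *foo;

		rcu_read_lock();
		/* pairs with the rcu_assign_pointer() inside idr_remove() */
		foo = idr_find(&foo_idr, id);
		if (foo)
			/* foo cannot be freed before rcu_read_unlock() */
			do_something(READ_ONCE(foo->bar));
		rcu_read_unlock();
	}

	/* Writer side, serialized here by the refcount having hit zero: */
	void writer(struct foo *foo, int id)
	{
		/* unpublish: subsequent idr_find() returns NULL */
		idr_remove(&foo_idr, id);
		/* free only after all pre-existing readers are done */
		kfree_rcu(foo, rcu);
	}

A reader that finds the object before idr_remove() holds off the free
for the duration of its read-side critical section; a reader that looks
it up afterwards sees NULL. Either way, the load of foo->bar cannot
race with the free.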