From: Johannes Weiner <hannes@cmpxchg.org>
To: Muchun Song <songmuchun@bytedance.com>
Cc: mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com,
cgroups@vger.kernel.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com,
longman@redhat.com
Subject: Re: [PATCH v4 03/11] mm: memcontrol: make lruvec lock safe when LRU pages are reparented
Date: Wed, 25 May 2022 08:30:15 -0400 [thread overview]
Message-ID: <Yo4hVw7B+bUlMzLX@cmpxchg.org> (raw)
In-Reply-To: <Yo38mlkMBFz2h+yP@FVFYT0MHHV2J.googleapis.com>
On Wed, May 25, 2022 at 05:53:30PM +0800, Muchun Song wrote:
> On Tue, May 24, 2022 at 03:27:20PM -0400, Johannes Weiner wrote:
> > On Tue, May 24, 2022 at 02:05:43PM +0800, Muchun Song wrote:
> > > The diagram below shows how to make the folio lruvec lock safe when LRU
> > > pages are reparented.
> > >
> > >     folio_lruvec_lock(folio)
> > >         retry:
> > >         lruvec = folio_lruvec(folio);
> > >
> > >         // The folio is reparented at this time.
> > >         spin_lock(&lruvec->lru_lock);
> > >
> > >         if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio)))
> > >             // Acquired the wrong lruvec lock and need to retry.
> > >             // Because this folio is on the parent memcg lruvec list.
> > >             goto retry;
> > >
> > >         // If we reach here, it means that folio_memcg(folio) is stable.
> > >
> > >     memcg_reparent_objcgs(memcg)
> > >         // lruvec belongs to memcg and lruvec_parent belongs to parent memcg.
> > >         spin_lock(&lruvec->lru_lock);
> > >         spin_lock(&lruvec_parent->lru_lock);
> > >
> > >         // Move all the pages from the lruvec list to the parent lruvec list.
> > >
> > >         spin_unlock(&lruvec_parent->lru_lock);
> > >         spin_unlock(&lruvec->lru_lock);
> > >
> > > After we acquire the lruvec lock, we need to check whether the folio has
> > > been reparented. If so, we need to reacquire the new lruvec lock. The
> > > LRU page reparenting path will also acquire the lruvec lock (this will
> > > be implemented in a later patch), so folio_memcg() cannot change while
> > > we hold the lruvec lock.
> > >
> > > Since lruvec_memcg(lruvec) is always equal to folio_memcg(folio) once we
> > > hold the lruvec lock, the lruvec_memcg_debug() check is pointless, so
> > > remove it.
> > >
> > > This is a preparation for reparenting the LRU pages.
> > >
> > > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
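
To make the reparenting side of the diagram concrete, the lock ordering
boils down to something like the sketch below. This is only an
illustration, not the actual code of the series (that comes in a later
patch); reparent_lru_pages() and splice_lru_lists() are made-up names
for the sketch:

/*
 * Mirrors the memcg_reparent_objcgs() half of the diagram above:
 * take the child's lru_lock, then the parent's, then move the pages.
 * A racing folio_lruvec_lock() will see lruvec_memcg(lruvec) !=
 * folio_memcg(folio) after the move and retry against the parent.
 */
static void reparent_lru_pages(struct lruvec *lruvec,
                               struct lruvec *lruvec_parent)
{
        spin_lock(&lruvec->lru_lock);
        spin_lock(&lruvec_parent->lru_lock);

        /* Move all pages from the child LRU lists to the parent's. */
        splice_lru_lists(lruvec, lruvec_parent);

        spin_unlock(&lruvec_parent->lru_lock);
        spin_unlock(&lruvec->lru_lock);
}

(Taking two locks of the same lock class back to back like this would
presumably need a nested-locking annotation to keep lockdep quiet; that
detail is omitted here.)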
> >
> > This looks good to me. Just one question:
> >
> > > @@ -1230,10 +1213,23 @@ void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
> > >   */
> > >  struct lruvec *folio_lruvec_lock(struct folio *folio)
> > >  {
> > > -        struct lruvec *lruvec = folio_lruvec(folio);
> > > +        struct lruvec *lruvec;
> > >
> > > +        rcu_read_lock();
> > > +retry:
> > > +        lruvec = folio_lruvec(folio);
> > >          spin_lock(&lruvec->lru_lock);
> > > -        lruvec_memcg_debug(lruvec, folio);
> > > +
> > > +        if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
> > > +                spin_unlock(&lruvec->lru_lock);
> > > +                goto retry;
> > > +        }
> > > +
> > > +        /*
> > > +         * Preemption is disabled in the internal of spin_lock, which can serve
> > > +         * as RCU read-side critical sections.
> > > +         */
> > > +        rcu_read_unlock();
> >
> > The code looks right to me, but I don't understand the comment: why do
> > we care that the rcu read-side continues? With the lru_lock held,
> > reparenting is on hold and the lruvec cannot be rcu-freed anyway, no?
> >
>
> Right. We could hold the rcu read lock until the end of reparenting. So
> you mean we should do rcu_read_unlock() in folio_lruvec_lock()?
The comment seems to suggest that disabling preemption is what keeps
the lruvec alive. But it's the lru_lock that keeps it alive. The
cgroup destruction path tries to take the lru_lock long before it even
gets to synchronize_rcu(). Once you hold the lru_lock, having an
implied read-side critical section as well doesn't seem to matter.
Should the comment be deleted?
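
IOW, something like the below, which is just the retry loop from your
hunk with the preemption comment dropped (an untested sketch, not meant
as a replacement patch):

struct lruvec *folio_lruvec_lock(struct folio *folio)
{
        struct lruvec *lruvec;

        /* RCU only needs to cover the folio_lruvec() lookup. */
        rcu_read_lock();
retry:
        lruvec = folio_lruvec(folio);
        spin_lock(&lruvec->lru_lock);

        if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
                spin_unlock(&lruvec->lru_lock);
                goto retry;
        }

        /*
         * Reparenting takes lru_lock, so with the lock held the
         * lruvec cannot go away and the read-side section can end.
         */
        rcu_read_unlock();

        return lruvec;
}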