From: Muchun Song <muchun.song@linux.dev>
To: Nhat Pham <nphamcs@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Johannes Weiner <hannes@cmpxchg.org>,
cerasuolodomenico@gmail.com, Yosry Ahmed <yosryahmed@google.com>,
sjenning@redhat.com, ddstreet@ieee.org, vitaly.wool@konsulko.com,
Michal Hocko <mhocko@kernel.org>,
Roman Gushchin <roman.gushchin@linux.dev>,
Shakeel Butt <shakeelb@google.com>, Chris Li <chrisl@kernel.org>,
Linux-MM <linux-mm@kvack.org>,
kernel-team@meta.com, LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v4 2/5] zswap: make shrinking memcg-aware
Date: Thu, 2 Nov 2023 10:17:03 +0800
Message-ID: <DB299182-3165-4BFD-8717-D6B88D1C9BCB@linux.dev>
In-Reply-To: <CAKEwX=PmLSKpmv3zpGhka-JaJoTk7Se4bo6D8r5s6HhPmkpEng@mail.gmail.com>
> On Nov 2, 2023, at 01:44, Nhat Pham <nphamcs@gmail.com> wrote:
>
> On Tue, Oct 31, 2023 at 8:07 PM Muchun Song <muchun.song@linux.dev> wrote:
>>
>>
>>
>>> On Nov 1, 2023, at 09:26, Nhat Pham <nphamcs@gmail.com> wrote:
>>>
>>> cc-ing Johannes, Roman, Shakeel, Muchun since you all know much more
>>> about memory controller + list_lru reparenting logic than me.
>>>
>>> There seems to be a race between memcg offlining and zswap’s
>>> cgroup-aware LRU implementation:
>>>
>>> CPU0                               CPU1
>>> zswap_lru_add()                    mem_cgroup_css_offline()
>>> get_mem_cgroup_from_objcg()
>>>                                    memcg_offline_kmem()
>>>                                      memcg_reparent_objcgs()
>>>                                      memcg_reparent_list_lrus()
>>>                                        memcg_reparent_list_lru()
>>>                                          memcg_reparent_list_lru_node()
>>> list_lru_add()
>>>                                          memcg_list_lru_free()
>>>
>>>
>>> Essentially: on CPU0, zswap gets the memcg from the entry's objcg
>>> (before the objcgs are reparented). Then it performs list_lru_add()
>>> after the list_lru entry reparenting step (memcg_reparent_list_lru_node()).
>>> If the list_lru of the memcg being offlined has not been freed yet
>>> (i.e., before the memcg_list_lru_free() call), then the list_lru_add()
>>> call would succeed - but the list will be freed soon after. The new
>>
>> No worries. list_lru_add() will add the object to the lru list of
>> the parent of the memcg being offlined, because the ->kmemcg_id of
>> the memcg being offlined will be changed to its parent's ->kmemcg_id
>> before memcg_reparent_list_lru().
>>
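For reference, this is a heavily simplified sketch of the ordering in
memcg_reparent_list_lrus() (a paraphrase of mm/list_lru.c, not the
exact code - the real function also updates every descendant's
->kmemcg_id):

void memcg_reparent_list_lrus(struct mem_cgroup *memcg,
                              struct mem_cgroup *parent)
{
        int src_idx = memcg->kmemcg_id;
        struct list_lru *lru;

        /* Step 1: redirect future per-memcg list lookups to the parent. */
        WRITE_ONCE(memcg->kmemcg_id, parent->kmemcg_id);

        /*
         * Step 2: only then splice the child's lists into the parent's
         * and free them (memcg_reparent_list_lru() takes the per-node
         * lru locks). An add that already saw the parent's kmemcg_id
         * goes straight to the parent's list; an earlier add that hit
         * the child's list is simply spliced over here. Nothing is lost.
         */
        list_for_each_entry(lru, &memcg_list_lrus, list)
                memcg_reparent_list_lru(lru, src_idx, parent);
}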
>
> Ohhh that is subtle. Thanks for pointing this out, Muchun!
>
> In that case, I think Yosry is right after all! We don't even need to get
> a reference to the memcg:
>
> rcu_read_lock();
> memcg = obj_cgroup_memcg(objcg);
> list_lru_add();
> rcu_read_unlock();
>
> As long as we're inside this rcu section, we're guaranteed to get
> an un-freed memcg. Now it could be offlined etc., but as Muchun has
> pointed out, the list_lru_add() call will still do the right thing - it will
> either add the new entry to the parent list if this happens after the
> kmemcg_id update, or to the child list before the list_lru reparenting
> action. Both of these scenarios are fine.

Right.

Thanks.
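To spell the conclusion out in code, something along these lines should
be enough. This is a sketch only - zswap_lru_add(), entry_to_nid(),
entry->objcg and the nid/memcg-taking list_lru_add() signature are
assumed from patches 1-2 of this series, not final code:

static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
{
        int nid = entry_to_nid(entry);
        struct mem_cgroup *memcg;

        /*
         * No reference is taken on the memcg: the RCU read side only
         * guarantees it is not freed. If it is concurrently being
         * offlined, list_lru_add() still targets a live list - the
         * child's before the kmemcg_id switch, the parent's after it.
         */
        rcu_read_lock();
        memcg = obj_cgroup_memcg(entry->objcg);
        list_lru_add(list_lru, &entry->lru, nid, memcg);
        rcu_read_unlock();
}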
>
>> Muchun,
>> Thanks
>>
>>> zswap entry as a result will not be subjected to future reclaim
>>> attempt. IOW, this list_lru_add() call is effectively swallowed. And
>>> worse, there might be a crash when we invalidate the zswap_entry in the
>>> future (which will perform a list_lru removal).
>>>
>>> Within get_mem_cgroup_from_objcg(), none of the following seem
>>> sufficient to prevent this race:
>>>
>>> 1. Perform the objcg-to-memcg lookup inside a rcu_read_lock()
>>> section.
>>> 2. Checking if the memcg is freed yet (with css_tryget()) (what
>>> we're currently doing in this patch series).
>>> 3. Checking if the memcg is still online (with css_tryget_online()).
>>> The memcg can still be offlined down the line.
>>>
>>>
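For context, the css_tryget() approach in item 2 above corresponds
roughly to the existing get_mem_cgroup_from_objcg() pattern in
mm/memcontrol.c (sketched from memory, details may differ):

struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
{
        struct mem_cgroup *memcg;

        rcu_read_lock();
        do {
                /* objcg->memcg may be reparented under us, ... */
                memcg = obj_cgroup_memcg(objcg);
                /* ... so retry until we pin a memcg that is not freed. */
        } while (!css_tryget(&memcg->css));
        rcu_read_unlock();

        return memcg;
}

The reference pins the memcg against being freed, but it does not stop
the memcg from being offlined (and its list_lrus reparented) right
after this returns.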
>>> I've discussed this privately with Johannes, and it seems like the
>>> cleanest solution here is to move the reparenting logic down to the
>>> release stage. That way, when get_mem_cgroup_from_objcg() returns,
>>> zswap_lru_add() is given a memcg that is reparenting-safe (until we
>>> drop the obtained reference).
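For readers following along, "moving the reparenting logic down to the
release stage" would roughly mean something like the sketch below. This
is an illustration only, not an actual patch, and reusing
memcg_offline_kmem() as-is for the reparenting work is an assumption:

static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
{
        /*
         * The rest of the offline work stays here, but the
         * objcg/list_lru reparenting no longer happens at this point.
         */
}

static void mem_cgroup_css_released(struct cgroup_subsys_state *css)
{
        struct mem_cgroup *memcg = mem_cgroup_from_css(css);

        /*
         * ->css_released only runs once the last css reference is gone,
         * so a memcg obtained via get_mem_cgroup_from_objcg() can no
         * longer race with the reparenting below.
         */
        memcg_offline_kmem(memcg);      /* reparents objcgs and list_lrus */
}

Given the discussion above, this heavier change should not be needed.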
Thread overview: 15+ messages
2023-10-24 20:32 [PATCH v4 0/5] workload-specific and memory pressure-driven zswap writeback Nhat Pham
2023-10-24 20:32 ` [PATCH v4 1/5] list_lru: allows explicit memcg and NUMA node selection Nhat Pham
2023-10-24 20:32 ` [PATCH v4 2/5] zswap: make shrinking memcg-aware Nhat Pham
2023-10-25 3:16 ` Yosry Ahmed
2023-10-27 21:10 ` Nhat Pham
2023-10-29 1:26 ` Nhat Pham
2023-10-30 18:16 ` Yosry Ahmed
2023-11-01 1:26 ` Nhat Pham
2023-11-01 1:32 ` Yosry Ahmed
2023-11-01 1:41 ` Nhat Pham
2023-11-01 3:06 ` Muchun Song
2023-11-01 17:44 ` Nhat Pham
2023-11-02 2:17 ` Muchun Song [this message]
2023-10-24 20:33 ` [PATCH v4 3/5] mm: memcg: add per-memcg zswap writeback stat Nhat Pham
2023-10-24 20:33 ` [PATCH v4 5/5] zswap: shrinks zswap pool based on memory pressure Nhat Pham