From: Yosry Ahmed <yosryahmed@google.com>
To: Nhat Pham <nphamcs@gmail.com>
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org,
cerasuolodomenico@gmail.com, sjenning@redhat.com,
ddstreet@ieee.org, vitaly.wool@konsulko.com, mhocko@kernel.org,
roman.gushchin@linux.dev, shakeelb@google.com,
muchun.song@linux.dev, chrisl@kernel.org, linux-mm@kvack.org,
kernel-team@meta.com, linux-kernel@vger.kernel.org,
cgroups@vger.kernel.org, linux-doc@vger.kernel.org,
linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: Re: [PATCH v8 3/6] zswap: make shrinking memcg-aware
Date: Tue, 5 Dec 2023 10:59:41 -0800
Message-ID: <CAJD7tkYPfHP-=vYdfjvAfYbhJi0kqJF13R5QjayzpSCGvF0qrw@mail.gmail.com>
In-Reply-To: <CAKEwX=NXzpDbonY2K7O-bWJm60OE_FUGvyArpqyK9dLxhyvWAQ@mail.gmail.com>
[..]
> > > static void shrink_worker(struct work_struct *w)
> > > {
> > > struct zswap_pool *pool = container_of(w, typeof(*pool),
> > > shrink_work);
> > > + struct mem_cgroup *memcg;
> > > int ret, failures = 0;
> > >
> > > + /* global reclaim will select cgroup in a round-robin fashion. */
> > > do {
> > > - ret = zswap_reclaim_entry(pool);
> > > - if (ret) {
> > > - zswap_reject_reclaim_fail++;
> > > - if (ret != -EAGAIN)
> > > + spin_lock(&zswap_pools_lock);
> > > + pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
> > > + memcg = pool->next_shrink;
> > > +
> > > + /*
> > > + * We need to retry if we have gone through a full round trip, or if we
> > > + * got an offline memcg (or else we risk undoing the effect of the
> > > + * zswap memcg offlining cleanup callback). This is not catastrophic
> > > + * per se, but it will keep the now offlined memcg hostage for a while.
> > > + *
> > > + * Note that if we got an online memcg, we will keep the extra
> > > + * reference in case the original reference obtained by mem_cgroup_iter
> > > + * is dropped by the zswap memcg offlining callback, ensuring that the
> > > + * memcg is not killed when we are reclaiming.
> > > + */
> > > + if (!memcg) {
> > > + spin_unlock(&zswap_pools_lock);
> > > + if (++failures == MAX_RECLAIM_RETRIES)
> > > break;
> > > +
> > > + goto resched;
> > > + }
> > > +
> > > + if (!mem_cgroup_online(memcg)) {
> > > + /* drop the reference from mem_cgroup_iter() */
> > > + mem_cgroup_put(memcg);
> >
> > Probably better to use mem_cgroup_iter_break() here?
>
> mem_cgroup_iter_break(NULL, memcg) seems to perform the same thing, right?
Yes, but it's better to break the iteration with the documented API,
in case mem_cgroup_iter_break() is ever changed to do extra work.
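
In concrete terms, something like this is what I mean (untested sketch,
keeping the rest of your hunk as-is):

		if (!mem_cgroup_online(memcg)) {
			/* break the iteration and drop the ref from mem_cgroup_iter() */
			mem_cgroup_iter_break(NULL, memcg);
			pool->next_shrink = NULL;
			spin_unlock(&zswap_pools_lock);

			if (++failures == MAX_RECLAIM_RETRIES)
				break;

			goto resched;
		}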
>
> >
> > Also, I don't see mem_cgroup_tryget_online() being used here (where I
> > expected it to be used), did I miss it?
>
> Oh shoot yeah that was a typo - it should be
> mem_cgroup_tryget_online(). Let me send a fix to that.
>
> >
> > > + pool->next_shrink = NULL;
> > > + spin_unlock(&zswap_pools_lock);
> > > +
> > > if (++failures == MAX_RECLAIM_RETRIES)
> > > break;
> > > +
> > > + goto resched;
> > > }
> > > + spin_unlock(&zswap_pools_lock);
> > > +
> > > + ret = shrink_memcg(memcg);
> >
> > We just checked for online-ness above, and then shrink_memcg() checks
> > it again. Is this intentional?
>
> Hmm these two checks are for two different purposes. The check above
> is mainly to prevent accidentally undoing the offline cleanup callback
> during the memcg selection step. Inside shrink_memcg(), we check
> onlineness again to prevent reclaiming from offlined memcgs - which
> would in effect trigger reclaim of the parent memcg.
Right, but two checks in such close proximity don't buy us much,
especially since the memcg's online-ness can change right after the
check inside shrink_memcg() anyway, so it's a best-effort thing.
Anyway, it shouldn't matter much. We can leave it.
>
> >
> > > + /* drop the extra reference */
> >
> > Where does the extra reference come from?
>
> The extra reference is from mem_cgroup_tryget_online(). We get two
> references in the dance above - one from mem_cgroup_iter() (which can
> be dropped) and one extra from mem_cgroup_tryget_online(). I kept the
> second one in case the first one was dropped by the zswap memcg
> offlining callback, but after reclaiming it is safe to just drop it.
Right. I was confused by the missing mem_cgroup_tryget_online().
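
To spell out the refcounting as I understand it (just a sketch with the
tryget fix applied, not the actual fixlet):

		spin_lock(&zswap_pools_lock);
		/* ref #1: taken by mem_cgroup_iter(), held via pool->next_shrink */
		pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
		memcg = pool->next_shrink;

		/* ref #2: only taken if the memcg is still online */
		if (!memcg || !mem_cgroup_tryget_online(memcg)) {
			/* NULL / offline handling as in the patch */
			...
		}
		spin_unlock(&zswap_pools_lock);

		ret = shrink_memcg(memcg);

		/*
		 * Drop ref #2. Ref #1 may have already been dropped by the
		 * zswap offlining callback while we were reclaiming, which is
		 * why ref #2 was needed to keep the memcg alive.
		 */
		mem_cgroup_put(memcg);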
>
> >
> > > + mem_cgroup_put(memcg);
> > > +
> > > + if (ret == -EINVAL)
> > > + break;
> > > + if (ret && ++failures == MAX_RECLAIM_RETRIES)
> > > + break;
> > > +
> > > +resched:
> > > cond_resched();
> > > } while (!zswap_can_accept());
> > > - zswap_pool_put(pool);
> > > }
> > >
> > > static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
[..]
> > > @@ -1240,15 +1395,15 @@ bool zswap_store(struct folio *folio)
> > > zswap_invalidate_entry(tree, dupentry);
> > > }
> > > spin_unlock(&tree->lock);
> > > -
> > > - /*
> > > - * XXX: zswap reclaim does not work with cgroups yet. Without a
> > > - * cgroup-aware entry LRU, we will push out entries system-wide based on
> > > - * local cgroup limits.
> > > - */
> > > objcg = get_obj_cgroup_from_folio(folio);
> > > - if (objcg && !obj_cgroup_may_zswap(objcg))
> > > - goto reject;
> > > + if (objcg && !obj_cgroup_may_zswap(objcg)) {
> > > + memcg = get_mem_cgroup_from_objcg(objcg);
> >
> > Do we need a reference here? IIUC, this is folio_memcg() and the folio
> > is locked, so folio_memcg() should remain stable, no?
>
> Hmmm obj_cgroup_may_zswap() also holds a reference to the objcg's
> memcg, so I just followed the patterns to be safe.
The folio lock guarantee is perhaps less obvious inside
obj_cgroup_may_zswap(). We could actually pass the folio to
obj_cgroup_may_zswap(), add a debug check that the folio is locked, and
avoid getting the ref there as well. That can be done separately.
Perhaps Johannes can shed some light on this, in case there's a
different reason why getting a ref there is needed.
For this change, I think the refcount manipulation is unnecessary.
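
Something like this is what I had in mind (untested sketch, assuming the
folio lock is indeed what keeps folio_memcg() stable here):

	objcg = get_obj_cgroup_from_folio(folio);
	if (objcg && !obj_cgroup_may_zswap(objcg)) {
		/* folio is locked, so folio_memcg() cannot change under us */
		memcg = folio_memcg(folio);
		if (shrink_memcg(memcg))
			goto reject;
	}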
>
>
> >
> > Same for the call below.
> >
> > > + if (shrink_memcg(memcg)) {
> > > + mem_cgroup_put(memcg);
> > > + goto reject;
> > > + }
> > > + mem_cgroup_put(memcg);
> > > + }
> > >
> > > /* reclaim space if needed */
> > > if (zswap_is_full()) {
[..]