From: Nhat Pham <nphamcs@gmail.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: akpm@linux-foundation.org, cerasuolodomenico@gmail.com,
	yosryahmed@google.com, sjenning@redhat.com, ddstreet@ieee.org,
	vitaly.wool@konsulko.com, mhocko@kernel.org,
	roman.gushchin@linux.dev, shakeelb@google.com,
	muchun.song@linux.dev, chrisl@kernel.org, linux-mm@kvack.org,
	kernel-team@meta.com, linux-kernel@vger.kernel.org,
	cgroups@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kselftest@vger.kernel.org, shuah@kernel.org
Subject: Re: [PATCH v7 3/6] zswap: make shrinking memcg-aware
Date: Wed, 29 Nov 2023 17:17:10 -0800	[thread overview]
Message-ID: <CAKEwX=MPkFfVp6vupje0cjePa9Uxh3orPubiDkrtewtj3N=RXA@mail.gmail.com> (raw)
In-Reply-To: <CAKEwX=M=iFGS6PQyF7FiV2JDhN0uLzSiJ3TK30nGiV1mM1wZ+A@mail.gmail.com>

On Wed, Nov 29, 2023 at 4:21 PM Nhat Pham <nphamcs@gmail.com> wrote:
>
> On Wed, Nov 29, 2023 at 7:17 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
> >
> > On Mon, Nov 27, 2023 at 03:45:57PM -0800, Nhat Pham wrote:
> > >  static void shrink_worker(struct work_struct *w)
> > >  {
> > >       struct zswap_pool *pool = container_of(w, typeof(*pool),
> > >                                               shrink_work);
> > > +     struct mem_cgroup *memcg;
> > >       int ret, failures = 0;
> > >
> > > +     /* global reclaim will select cgroup in a round-robin fashion. */
> > >       do {
> > > -             ret = zswap_reclaim_entry(pool);
> > > -             if (ret) {
> > > -                     zswap_reject_reclaim_fail++;
> > > -                     if (ret != -EAGAIN)
> > > -                             break;
> > > +             spin_lock(&zswap_pools_lock);
> > > +             memcg = pool->next_shrink =
> > > +                     mem_cgroup_iter_online(NULL, pool->next_shrink, NULL, true);
> > > +
> > > +             /* full round trip */
> > > +             if (!memcg) {
> > > +                     spin_unlock(&zswap_pools_lock);
> > >                       if (++failures == MAX_RECLAIM_RETRIES)
> > >                               break;
> > > +
> > > +                     goto resched;
> > >               }
> > > +
> > > +             /*
> > > +              * Acquire an extra reference to the iterated memcg in case the
> > > +              * original reference is dropped by the zswap offlining callback.
> > > +              */
> > > +             css_get(&memcg->css);
> >
> > struct mem_cgroup isn't defined when !CONFIG_MEMCG. This needs a
> > mem_cgroup_get() wrapper and a dummy function for no-memcg builds.
>
> I ran into this exact same issue a couple of versions ago, but it was
> hidden behind another helper function that could be implemented as a
> no-op for !CONFIG_MEMCG, so I forgot about it until now. It has always
> struck me as a bit odd that we have mem_cgroup_put() but no equivalent
> get - let me correct that.

Actually, I'll implement mem_cgroup_tryget_online() instead, since we
have to check whether the cgroup is online anyway. If it is online, we
keep the extra reference - all good. If it is not, we drop the original
reference before releasing the lock.
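
For concreteness, here is roughly what I have in mind (just a sketch to
illustrate the idea, not the final patch - the call-site details in
particular may shift; it assumes the helper wraps css_tryget_online()
and gets a return-false stub for !CONFIG_MEMCG builds):

/* include/linux/memcontrol.h */
#ifdef CONFIG_MEMCG
static inline bool mem_cgroup_tryget_online(struct mem_cgroup *memcg)
{
	/* Take a reference only if the memcg is still online. */
	return memcg && css_tryget_online(&memcg->css);
}
#else
static inline bool mem_cgroup_tryget_online(struct mem_cgroup *memcg)
{
	return false;
}
#endif

/* mm/zswap.c, shrink_worker(), replacing the unconditional css_get(): */
	if (!mem_cgroup_tryget_online(memcg)) {
		/*
		 * The memcg went offline under us: drop the iterator's
		 * original reference before releasing the lock, then move
		 * on (illustrative - the exact way that reference gets
		 * dropped may end up looking different).
		 */
		mem_cgroup_iter_break(NULL, memcg);
		pool->next_shrink = NULL;
		spin_unlock(&zswap_pools_lock);
		goto resched;
	}
	spin_unlock(&zswap_pools_lock);

css_tryget_online() already combines the refcount bump with the online
check, so the helper stays a thin wrapper, and the zswap offlining
callback can keep dropping pool->next_shrink's reference as it does
now.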


>
> >
> > With that fixed, though, everything else looks good to me:
> >
> > Acked-by: Johannes Weiner <hannes@cmpxchg.org>
>
> Thanks for the review, Johannes!


Thread overview: 18+ messages
2023-11-27 23:45 [PATCH v7 0/6] workload-specific and memory pressure-driven zswap writeback Nhat Pham
2023-11-27 23:45 ` [PATCH v7 1/6] list_lru: allows explicit memcg and NUMA node selection Nhat Pham
2023-11-29 15:02   ` Johannes Weiner
2023-11-27 23:45 ` [PATCH v7 2/6] memcontrol: add a new function to traverse online-only memcg hierarchy Nhat Pham
2023-11-29 15:04   ` Johannes Weiner
2023-11-29 17:00     ` Johannes Weiner
2023-11-27 23:45 ` [PATCH v7 3/6] zswap: make shrinking memcg-aware Nhat Pham
2023-11-29 15:17   ` Johannes Weiner
2023-11-30  0:21     ` Nhat Pham
2023-11-30  1:17       ` Nhat Pham [this message]
2023-11-27 23:45 ` [PATCH v7 4/6] mm: memcg: add per-memcg zswap writeback stat Nhat Pham
2023-11-29 15:25   ` Johannes Weiner
2023-11-30  1:26     ` Nhat Pham
2023-11-27 23:45 ` [PATCH v7 5/6] selftests: cgroup: update per-memcg zswap writeback selftest Nhat Pham
2023-11-27 23:46 ` [PATCH v7 6/6] zswap: shrinks zswap pool based on memory pressure Nhat Pham
2023-11-29 16:21   ` Johannes Weiner
2023-11-29 23:44     ` Nhat Pham
2023-12-02  4:44 ` [PATCH v7 0/6] workload-specific and memory pressure-driven zswap writeback Bagas Sanjaya
