From: Nhat Pham <nphamcs@gmail.com>
To: Takero Funaki <flintglass@gmail.com>
Cc: Yosry Ahmed <yosryahmed@google.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	 Chengming Zhou <chengming.zhou@linux.dev>,
	Jonathan Corbet <corbet@lwn.net>,
	 Andrew Morton <akpm@linux-foundation.org>,
	 Domenico Cerasuolo <cerasuolodomenico@gmail.com>,
	linux-mm@kvack.org, linux-doc@vger.kernel.org,
	 linux-kernel@vger.kernel.org
Subject: Re: [PATCH v1 2/3] mm: zswap: fix global shrinker error handling logic
Date: Tue, 11 Jun 2024 08:51:02 -0700	[thread overview]
Message-ID: <CAKEwX=NSaRAjiKjGtYxPwh9ByBZ_DK+h3T6LS5-eNpxS4s4zPA@mail.gmail.com> (raw)
In-Reply-To: <CAPpoddf0ysCG=s5ixbOZkXjmcB0t_eqLOs9xhdqZHiWnYY4_Wg@mail.gmail.com>

On Tue, Jun 11, 2024 at 8:21 AM Takero Funaki <flintglass@gmail.com> wrote:
>
>
> Since shrink_worker evicts only one page per tree walk when there is
> only one memcg using zswap, I believe this is the intended behavior.

I don't think this is the intended behavior :) It's a holdover from
the old zswap reclaiming behaviors.

1. In the past, we used to shrink one object per shrink worker call.
This is crazy.

2. We then moved the LRU from the allocator level to the zswap level,
and shrank one object at a time until the pool could accept new pages
(i.e., until we were under the acceptance threshold).

3. When we separated the LRU into per-(memcg, node) lists, we kept the
shrink-one-at-a-time part, but did it round-robin style across the
(memcg, node) combinations.

It's time to optimize this. 4th time's the charm!

> Even if we choose to break the loop more aggressively, it would only
> be postponing the problem because pool_limit_hit will trigger the
> worker again.
>
> I agree the existing approach is inefficient. It might be better to
> change the 1 page in a round-robin strategy.

We can play with a bigger batch.

1. The most straightforward idea is to just use a bigger constant batch
size (32? 64? 128?).

2. We can try to shrink each memcg until the pool can accept new pages,
hoping that the round-robin selection maintains fairness in the long
run - but this can be a bad idea in the short run for the selected
memcg. At the very least, this should try to respect the protected
area for each lruvec. This might still come into conflict with the
zswap shrinker, though (since the protection is best-effort).

3. Proportional reclaim - a variant of what we're doing in
get_scan_count() for page reclaim?

scan = lruvec_size - lruvec_size * protection / (cgroup_size + 1);

protection is derived from memory.min or memory.low of the cgroup, and
cgroup_size is the memory usage of the cgroup. For lruvec_size, maybe
we can substitute the number of (reclaimable/unprotected?) zswap
objects on the (node, memcg) lru? A standalone sketch of the
arithmetic is included below.
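
To make the arithmetic concrete, here is a minimal userspace sketch of
that calculation. It is not the actual get_scan_count() or zswap code;
the proportional_scan() helper and the object/memory figures are made
up purely for illustration:

/*
 * Rough userspace sketch of the proportional scan calculation above.
 * Not kernel code: the helper name and all numbers are hypothetical,
 * chosen only to show the shape of the math.
 */
#include <stdio.h>

static unsigned long proportional_scan(unsigned long lruvec_size,
                                       unsigned long protection,
                                       unsigned long cgroup_size)
{
        /*
         * The protected share of the lruvec, scaled by
         * protection / cgroup_size, is subtracted from the scan target.
         */
        return lruvec_size - lruvec_size * protection / (cgroup_size + 1);
}

int main(void)
{
        /*
         * Hypothetical (node, memcg) lru with 10000 zswap objects, a
         * cgroup using 512 MiB, and memory.low protecting 128 MiB.
         */
        unsigned long lruvec_size = 10000;
        unsigned long cgroup_size = 512UL << 20;
        unsigned long protection = 128UL << 20;

        printf("scan %lu of %lu objects\n",
               proportional_scan(lruvec_size, protection, cgroup_size),
               lruvec_size);
        return 0;
}

With a quarter of the cgroup protected, this scans roughly 3/4 of the
objects on the lru; a fully protected cgroup (protection equal to
cgroup_size) ends up scanning close to zero.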


Thread overview: 32+ messages
2024-06-08 15:53 [PATCH v1 0/3] mm: zswap: global shrinker fix and proactive shrink Takero Funaki
2024-06-08 15:53 ` [PATCH v1 1/3] mm: zswap: fix global shrinker memcg iteration Takero Funaki
2024-06-10 19:16   ` Yosry Ahmed
2024-06-11 14:50     ` Takero Funaki
2024-06-11 18:26   ` Nhat Pham
2024-06-11 23:03     ` Shakeel Butt
2024-06-12 18:16     ` Takero Funaki
2024-06-12 18:28       ` Yosry Ahmed
2024-06-13  2:13         ` Takero Funaki
2024-06-13  2:18           ` Yosry Ahmed
2024-06-13  2:35             ` Takero Funaki
2024-06-13  2:57               ` Yosry Ahmed
2024-06-13 15:04                 ` Nhat Pham
2024-06-13 16:49                   ` Shakeel Butt
2024-06-14  4:39                     ` Takero Funaki
2024-06-13 16:08   ` Nhat Pham
2024-06-13 16:09     ` Nhat Pham
2024-06-08 15:53 ` [PATCH v1 2/3] mm: zswap: fix global shrinker error handling logic Takero Funaki
2024-06-10 20:27   ` Yosry Ahmed
2024-06-11 15:21     ` Takero Funaki
2024-06-11 15:51       ` Nhat Pham [this message]
2024-06-11 18:15     ` Nhat Pham
2024-06-08 15:53 ` [PATCH v1 3/3] mm: zswap: proactive shrinking before pool size limit is hit Takero Funaki
2024-06-13 15:13   ` Nhat Pham
2024-06-11 18:10 ` [PATCH v1 0/3] mm: zswap: global shrinker fix and proactive shrink Nhat Pham
2024-06-13 15:22 ` Nhat Pham
2024-06-14  4:09   ` Takero Funaki
2024-06-14 22:34     ` Nhat Pham
2024-06-14 22:48     ` Nhat Pham
2024-06-15  0:19     ` Yosry Ahmed
2024-06-20  1:03       ` Takero Funaki
2024-06-20 22:45         ` Nhat Pham
