From: Zhongkun He <hezhongkun.hzk@bytedance.com>
To: Nhat Pham <nphamcs@gmail.com>
Cc: Yosry Ahmed <yosryahmed@google.com>,
	akpm@linux-foundation.org, hannes@cmpxchg.org,
	 sjenning@redhat.com, ddstreet@ieee.org,
	vitaly.wool@konsulko.com,  linux-mm@kvack.org,
	linux-kernel@vger.kernel.org,  Chris Li <chrisl@kernel.org>
Subject: Re: [External] Re: [PATCH] mm: zswap: fix the lack of page lru flag in zswap_writeback_entry
Date: Thu, 11 Jan 2024 11:48:59 +0800
Message-ID: <CACSyD1Pp8gkxwTXZuinm6wiZs0e5U2B5oND4rj29dzmRApFjhQ@mail.gmail.com>
In-Reply-To: <CAKEwX=Na3dg+KZwvtQi-Nj79Am-1tttDw50_qStkobmYGUC6NA@mail.gmail.com>

>
> This sounds dangerous. This is going to introduce a rather large
> unexpected side effect - we're changing the readahead behavior in a
> seemingly small zswap optimization. In fact, I'd argue that if we do
> this, the readahead behavior change will be the "main effect", and the
> zswap-side change would be a "happy consequence". We should run a lot
> of benchmarking and document the change extensively if we pursue this
> route.
>

I agree about the unexpected side effect, so let me clarify the
original intention of this patch. Please see the memory offloading
steps below.


memory    zswap (after reclaim)    memory + swap (during writeback)
1G        0.5G                     1G (tmp memory) + 1G (swap)

If the decompressed pages cannot be released in time, zswap writeback
has a significant side effect (the pages are mostly cold). On the one
hand, memory usage is not reduced but actually grows (from 0.5G to 1G).
On the other hand, the pages are not placed at the tail of the LRU, so
when memory is tight, warmer pages get squeezed out and reclaimed
prematurely instead. With this patch, we can put these temporary pages
at the tail of the LRU and reclaim them first when memory is
insufficient or reclaim is triggered. So I think this patch makes
sense, and I hope the problem can be fixed with a suitable approach.
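
To make the idea concrete: the helper this patch needs could mirror
folio_add_lru() in mm/swap.c, just queueing the folio for the tail of
the inactive list. A rough sketch (the lru_add_tail per-CPU batch and
the lru_add_tail_fn() move callback do not exist yet; the names are
only illustrative):

/* Sketch only: like folio_add_lru(), but queue the folio so it ends
 * up at the tail of the inactive LRU and is reclaimed first. */
void folio_add_lru_tail(struct folio *folio)
{
	struct folio_batch *fbatch;

	/* writeback pages are cold; they should never be active here */
	VM_BUG_ON_FOLIO(folio_test_active(folio), folio);
	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);

	folio_get(folio);
	local_lock(&cpu_fbatches.lock);
	/* a new per-CPU batch, drained by lru_add_drain() like the rest */
	fbatch = this_cpu_ptr(&cpu_fbatches.lru_add_tail);
	/* lru_add_tail_fn() would move folios via lruvec_add_folio_tail() */
	folio_batch_add_and_move(fbatch, folio, lru_add_tail_fn);
	local_unlock(&cpu_fbatches.lock);
}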

>
> Unless some page flag/readahead expert can confirm that the first
> option is safe, my vote is on this option. I mean, it's fairly minimal
> codewise, no? Just a bunch of plumbing. We can also keep the other
> call sites intact if we just rename the old versions - something along
> the line of:
>
> __read_swap_cache_async_head(..., bool add_to_lru_head)
> {
> ...
> if (add_to_lru_head)
>   folio_add_lru(folio);
> else
>   folio_add_lru_tail(folio);
> }
>
> __read_swap_cache_async(...)
> {
>    return __read_swap_cache_async_head(..., true);
> }
>
> A bit boilerplate? Sure. But this seems safer, and I doubt it's *that*
> much more work.
>

Yes, I agree. I will try that approach.
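
A rough sketch of the caller side I have in mind (argument lists
abbreviated with "..." as in the snippet above; only zswap writeback
passes false, every other caller keeps today's behavior through the
wrapper):

/* mm/swap_state.c: keep the old entry point as a thin wrapper */
__read_swap_cache_async(...)
{
	return __read_swap_cache_async_head(..., /* add_to_lru_head */ true);
}

/* mm/zswap.c: writeback is the only caller that wants the tail */
zswap_writeback_entry(...)
{
	...
	page = __read_swap_cache_async_head(..., /* add_to_lru_head */ false);
	...
}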

> > >
> > > I have the same idea and also find it intrusive. I think the first solution
> > > is very good and I will try it. If it works, I will send the next version.
> >
> > One way to avoid introducing folio_add_lru_tail() and plumbing a
> > boolean from zswap is to have a per-task context (similar to
> > memalloc_nofs_save()), that makes folio_add_lru() automatically add
> > folios to the tail of the LRU. I am not sure if this is an acceptable
> > approach though in terms of per-task flags and such.
>
> This seems a bit hacky and obscure, but maybe it could work.
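
For comparison, the per-task variant would follow the
memalloc_nofs_save() pattern: set a bit in current->flags around the
writeback call and let folio_add_lru() check it. A sketch (the
PF_LRU_ADD_TAIL bit is made up; a free PF_* bit would have to be
found):

static inline unsigned int lru_add_tail_save(void)
{
	unsigned int flags = current->flags & PF_LRU_ADD_TAIL;

	current->flags |= PF_LRU_ADD_TAIL;
	return flags;
}

static inline void lru_add_tail_restore(unsigned int flags)
{
	current->flags = (current->flags & ~PF_LRU_ADD_TAIL) | flags;
}

/* in folio_add_lru(), pick the move callback based on the task flag: */
	if (current->flags & PF_LRU_ADD_TAIL)
		folio_batch_add_and_move(fbatch, folio, lru_add_tail_fn);
	else
		folio_batch_add_and_move(fbatch, folio, lru_add_fn);

It would avoid threading a boolean through the swap cache API, but it
is indeed obscure: anything the task adds to the LRU inside such a
section would land at the tail, not just the writeback page.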

