linux-mm.kvack.org archive mirror
From: Baoquan He <bhe@redhat.com>
To: Kairui Song <ryncsn@gmail.com>
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
	Chris Li <chrisl@kernel.org>, Barry Song <v-songbaohua@oppo.com>,
	Hugh Dickins <hughd@google.com>,
	Yosry Ahmed <yosryahmed@google.com>,
	"Huang, Ying" <ying.huang@linux.alibaba.com>,
	Nhat Pham <nphamcs@gmail.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Kalesh Singh <kaleshsingh@google.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 5/7] mm, swap: use percpu cluster as allocation fast path
Date: Thu, 20 Feb 2025 11:24:30 +0800	[thread overview]
Message-ID: <Z7agbvvnshLwt0k7@MiWiFi-R3L-srv> (raw)
In-Reply-To: <CAMgjq7DoV=ZdHeREeMq1=hKzD_O40NkfHCym1Wo9m=J=cBnUvw@mail.gmail.com>

On 02/20/25 at 10:48am, Kairui Song wrote:
> On Thu, Feb 20, 2025 at 10:35 AM Baoquan He <bhe@redhat.com> wrote:
> >
> > On 02/19/25 at 07:12pm, Kairui Song wrote:
> > >
> > > > In reality it may be very difficult to achieve the 'each 2M space has been consumed for each order',
> > >
> > > Very true, but notice that for order >= 1, the slot cache never worked
> > > before. And for order == 0, it's very likely that a cluster will have
> > > more than 64 usable slots. The test result I posted should be a good
> > > example: the device is very full during the test, and performance is
> > > basically identical to before. My only concern was about the device
> >
> > My worry is that the global percpu cluster may impact performance among
> > multiple swap devices. Before, the per-si percpu cluster would cache the
> > valid offset in one cluster for each order. For multiple swap devices,
> > this consumes a little more percpu memory. Meanwhile the new global
> > percpu cluster can easily be switched to a different swap device when
> > only one order is available, and then the whole array becomes invalid.
> > That looks a little drastic compared with before.
> 
> Ah, now I got what you mean. That seems like it could be a problem indeed.
> 
> I think I can change the
> 
> +struct percpu_swap_cluster {
> +       struct swap_info_struct *si;
> 
> to
> 
> +struct percpu_swap_cluster {
> +       struct swap_info_struct *si[SWAP_NR_ORDERS];
> 
> Or embed the swp type in the offset; this way each order won't affect
> the others. What do you think?

Yes, this looks much better. You may need to store both si and offset;
the struct percpu_swap_cluster demonstrated above lacks the offset,
which seems not good.

> 
> Previously, high-order allocations would bypass the slot cache, so
> allocations could already happen on different same-priority devices.
> So the behaviour of each order using a different device should be
> acceptable.
> 
> >
> > Yeah, the example you showed looks good. I wonder how many swap devices
> > are simulated in your example.
> >
> > > rotating: as the slot cache never worked for order >= 1, the device
> > > rotated very frequently. But it still seems no one really cared about
> > > it; mthp swapout is a new thing, and the previous rotation rule seems
> > > even more confusing than this new idea.
> >
> > I have never dealt with a real production environment with multiple
> > tiers and many swap devices. In reality, to my limited knowledge,
> > usually only one swap device is deployed. If that's true most of the
> > time, either the old code or the new code is fine; otherwise, it seems
> > we may need to consider the impact.
> 




Thread overview: 30+ messages
2025-02-14 17:57 [PATCH 0/7] mm, swap: remove swap slot cache Kairui Song
2025-02-14 17:57 ` [PATCH 1/7] mm, swap: avoid reclaiming irrelevant swap cache Kairui Song
2025-02-19  2:11   ` Baoquan He
2025-02-14 17:57 ` [PATCH 2/7] mm, swap: drop the flag TTRS_DIRECT Kairui Song
2025-02-19  2:42   ` Baoquan He
2025-02-14 17:57 ` [PATCH 3/7] mm, swap: avoid redundant swap device pinning Kairui Song
2025-02-19  3:35   ` Baoquan He
2025-02-14 17:57 ` [PATCH 4/7] mm, swap: don't update the counter up-front Kairui Song
2025-02-14 17:57 ` [PATCH 5/7] mm, swap: use percpu cluster as allocation fast path Kairui Song
2025-02-19  7:53   ` Baoquan He
2025-02-19  8:34     ` Kairui Song
2025-02-19  9:26       ` Baoquan He
2025-02-19 10:55       ` Baoquan He
2025-02-19 11:12         ` Kairui Song
2025-02-20  2:35           ` Baoquan He
2025-02-20  2:48             ` Kairui Song
2025-02-20  3:24               ` Baoquan He [this message]
2025-02-14 17:57 ` [PATCH 6/7] mm, swap: remove swap slot cache Kairui Song
2025-02-15 16:23   ` kernel test robot
2025-02-20  7:55   ` Baoquan He
2025-02-24  3:16     ` Kairui Song
2025-02-14 17:57 ` [PATCH 7/7] mm, swap: simplify folio swap allocation Kairui Song
2025-02-14 20:13   ` Matthew Wilcox
2025-02-15  6:40     ` Kairui Song
2025-02-15 16:43   ` kernel test robot
2025-02-15 16:54   ` kernel test robot
2025-02-20 10:41   ` Baoquan He
2025-02-15 10:27 ` [PATCH 0/7] mm, swap: remove swap slot cache Baoquan He
2025-02-15 13:34   ` Kairui Song
2025-02-15 15:07     ` Baoquan He
