From: Kairui Song <ryncsn@gmail.com>
To: Nhat Pham <nphamcs@gmail.com>
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
	 Kemeng Shi <shikemeng@huaweicloud.com>,
	Chris Li <chrisl@kernel.org>,  Baoquan He <bhe@redhat.com>,
	Barry Song <baohua@kernel.org>,
	 "Huang, Ying" <ying.huang@linux.alibaba.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/2] mm, swap: prefer nonfull over free clusters
Date: Wed, 6 Aug 2025 11:38:25 +0800	[thread overview]
Message-ID: <CAMgjq7CSRrjLYF=7ckieNsAhDX_Fqp0OkHxrGHB0gQG7=_RdOA@mail.gmail.com> (raw)
In-Reply-To: <CAKEwX=PkJdz3Um9j4m2bPahN9NbQpn7QnOvEAxDdWUHTqSvchg@mail.gmail.com>

On Wed, Aug 6, 2025 at 8:06 AM Nhat Pham <nphamcs@gmail.com> wrote:
>
> On Mon, Aug 4, 2025 at 10:24 AM Kairui Song <ryncsn@gmail.com> wrote:
> >
> > From: Kairui Song <kasong@tencent.com>
> >
> > We prefer a free cluster over a nonfull cluster whenever a CPU local
> > cluster is drained, to respect the SSD discard behavior [1]. This is
> > not ideal for non-discarding devices, and it causes a higher
> > fragmentation rate.
> >
> > So for a non-discarding device, prefer nonfull over free clusters. This
> > reduces the fragmentation issue by a lot.
> >
> > Testing with make -j96, defconfig, using 64k mTHP, 8G ZRAM:
> >
> > Before: sys time: 6121.0s  64kB/swpout: 1638155  64kB/swpout_fallback: 189562
> > After:  sys time: 6145.3s  64kB/swpout: 1761110  64kB/swpout_fallback: 66071
> >
> > Testing with make -j96, defconfig, using 64k mTHP, 10G ZRAM:
> >
> > Before: sys time 5527.9s  64kB/swpout: 1789358  64kB/swpout_fallback: 17813
> > After:  sys time 5538.3s  64kB/swpout: 1813133  64kB/swpout_fallback: 0
> >
> > Performance is basically unchanged, and the large allocation failure rate
> > is lower. Enabling all mTHP sizes shows an even more significant improvement:
> >
> > Using the same test setup with 10G ZRAM and enabling all mTHP sizes:
> >
> > 128kB swap failure rate:
> > Before: swpout:449548 swpout_fallback:55894
> > After:  swpout:497519 swpout_fallback:3204
> >
> > 256kB swap failure rate:
> > Before: swpout:63938  swpout_fallback:2154
> > After:  swpout:65698  swpout_fallback:324
> >
> > 512kB swap failure rate:
> > Before: swpout:11971  swpout_fallback:2218
> > After:  swpout:14606  swpout_fallback:4
> >
> > 2M swap failure rate:
> > Before: swpout:12     swpout_fallback:1578
> > After:  swpout:1253   swpout_fallback:15
> >
> > The success rate of large allocations is much higher.
> >
> > Link: https://lore.kernel.org/linux-mm/87v8242vng.fsf@yhuang6-desk2.ccr.corp.intel.com/ [1]
> > Signed-off-by: Kairui Song <kasong@tencent.com>
>
> Nice! I agree with Chris' analysis too. It's less of a problem for
> vswap (because there's no physical/SSD implication there), but
> this patch makes sense in the context of the swapfile allocator.
>
> FWIW:
> Reviewed-by: Nhat Pham <nphamcs@gmail.com>

Thanks!

>
> > ---
> >  mm/swapfile.c | 38 ++++++++++++++++++++++++++++----------
> >  1 file changed, 28 insertions(+), 10 deletions(-)
> >
> > diff --git a/mm/swapfile.c b/mm/swapfile.c
> > index 5fdb3cb2b8b7..4a0cf4fb348d 100644
> > --- a/mm/swapfile.c
> > +++ b/mm/swapfile.c
> > @@ -908,18 +908,20 @@ static unsigned long cluster_alloc_swap_entry(struct swap_info_struct *si, int o
> >         }
> >
> >  new_cluster:
> > -       ci = isolate_lock_cluster(si, &si->free_clusters);
> > -       if (ci) {
> > -               found = alloc_swap_scan_cluster(si, ci, cluster_offset(si, ci),
> > -                                               order, usage);
> > -               if (found)
> > -                       goto done;
> > +       /*
> > +        * If the device needs discard, prefer a new cluster over nonfull
> > +        * to spread out the writes.
> > +        */
> > +       if (si->flags & SWP_PAGE_DISCARD) {
> > +               ci = isolate_lock_cluster(si, &si->free_clusters);
> > +               if (ci) {
> > +                       found = alloc_swap_scan_cluster(si, ci, cluster_offset(si, ci),
> > +                                                       order, usage);
> > +                       if (found)
> > +                               goto done;
> > +               }
> >         }
> >
> > -       /* Try reclaim from full clusters if free clusters list is drained */
> > -       if (vm_swap_full())
> > -               swap_reclaim_full_clusters(si, false);
> > -
> >         if (order < PMD_ORDER) {
> >                 while ((ci = isolate_lock_cluster(si, &si->nonfull_clusters[order]))) {
> >                         found = alloc_swap_scan_cluster(si, ci, cluster_offset(si, ci),
> > @@ -927,7 +929,23 @@ static unsigned long cluster_alloc_swap_entry(struct swap_info_struct *si, int o
> >                         if (found)
> >                                 goto done;
> >                 }
> > +       }
> >
> > +       if (!(si->flags & SWP_PAGE_DISCARD)) {
> > +               ci = isolate_lock_cluster(si, &si->free_clusters);
> > +               if (ci) {
> > +                       found = alloc_swap_scan_cluster(si, ci, cluster_offset(si, ci),
> > +                                                       order, usage);
> > +                       if (found)
> > +                               goto done;
> > +               }
> > +       }
>
> Seems like this pattern is repeated in a couple of places -
> isolate_lock_cluster from one of the lists, and if successful, then
> try to allocate (alloc_swap_scan_cluster) from it.

Indeed, I've been thinking about it, but there are some other issues
that need to be cleaned up before this one.
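
For reference, the repeated pattern could eventually be folded into a
small helper roughly like the one below. This is just a sketch to
illustrate the idea; the helper name and exact types are guesses and
not part of this series:

static unsigned long alloc_swap_from_list(struct swap_info_struct *si,
					  struct list_head *list,
					  int order, unsigned char usage)
{
	struct swap_cluster_info *ci;

	/* Isolate and lock a cluster from the given list, if any. */
	ci = isolate_lock_cluster(si, list);
	if (!ci)
		return 0;

	/* Scan the isolated cluster for a slot of the requested order. */
	return alloc_swap_scan_cluster(si, ci, cluster_offset(si, ci),
				       order, usage);
}

Then the call sites shown above (e.g. the free_clusters and
nonfull_clusters[order] paths) could all go through the same helper.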



Thread overview: 10+ messages
2025-08-04 17:24 [PATCH 0/2] mm, swap: improve cluster scan strategy Kairui Song
2025-08-04 17:24 ` [PATCH 1/2] mm, swap: don't scan every fragment cluster Kairui Song
2025-08-05 23:30   ` Chris Li
2025-08-06  3:02     ` Kairui Song
2025-08-04 17:24 ` [PATCH 2/2] mm, swap: prefer nonfull over free clusters Kairui Song
2025-08-05 23:35   ` Chris Li
2025-08-06  0:03   ` Nhat Pham
2025-08-06  0:30     ` Chris Li
2025-08-06  3:38     ` Kairui Song [this message]
2025-08-05 23:26 ` [PATCH 0/2] mm, swap: improve cluster scan strategy Chris Li
