linux-mm.kvack.org archive mirror

From: Yosry Ahmed <yosryahmed@google.com>
To: Yu Zhao <yuzhao@google.com>
Cc: Erhard Furtner <erhard_f@mailbox.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	 linuxppc-dev@lists.ozlabs.org,
	Johannes Weiner <hannes@cmpxchg.org>,
	 Nhat Pham <nphamcs@gmail.com>,
	Chengming Zhou <chengming.zhou@linux.dev>,
	 Sergey Senozhatsky <senozhatsky@chromium.org>,
	Minchan Kim <minchan@kernel.org>,
	 "Vlastimil Babka (SUSE)" <vbabka@kernel.org>
Subject: Re: kswapd0: page allocation failure: order:0, mode:0x820(GFP_ATOMIC), nodemask=(null),cpuset=/,mems_allowed=0 (Kernel v6.5.9, 32bit ppc)
Date: Wed, 5 Jun 2024 16:58:11 -0700	[thread overview]
Message-ID: <CAJD7tkai+e39hFDJkQRZ_Zg_Yp8OWx2uQfawT28ZZTD=Jvh9EQ@mail.gmail.com>
In-Reply-To: <CAOUHufZ8BTTx1LoXHjHGnzJE9dzyv8EnvhpXMUm0NOt=P5KHVg@mail.gmail.com>

[-- Attachment #1: Type: text/plain, Size: 4373 bytes --]

On Wed, Jun 5, 2024 at 4:53 PM Yu Zhao <yuzhao@google.com> wrote:
>
> On Wed, Jun 5, 2024 at 5:42 PM Yosry Ahmed <yosryahmed@google.com> wrote:
> >
> > On Wed, Jun 5, 2024 at 4:04 PM Erhard Furtner <erhard_f@mailbox.org> wrote:
> > >
> > > On Tue, 4 Jun 2024 20:03:27 -0700
> > > Yosry Ahmed <yosryahmed@google.com> wrote:
> > >
> > > > Could you check if the attached patch helps? It basically changes the
> > > > number of zpools from 32 to min(32, nr_cpus).
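
(Roughly, the idea was something like the sketch below; this is not the
actual attachment, and the helper name is only illustrative, assuming
the usual mm/zswap.c includes:

static unsigned int zswap_nr_zpools(void)
{
	/* Cap the number of zpools by the number of possible CPUs, so
	 * e.g. a 2-CPU machine ends up with 2 zpools instead of 32.
	 */
	return min_t(unsigned int, ZSWAP_NR_ZPOOLS, num_possible_cpus());
}
)
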
> > >
> > > Thanks! The patch does not fix the issue but it helps.
> > >
> > > That means I still see the 'kswapd0: page allocation failure' in dmesg, a 'stress-ng-vm: page allocation failure' later on, another kswapd0 error after that, and so on, _but_ the machine keeps running the workload, stays usable via VNC, and no longer hard-crashes.
> > >
> > > Without the patch I got the kswapd0 error and a hard crash (requiring a power-cycle) in under 3 minutes. With the patch I see several kswapd0 errors, but the machine has now been running for 2 hrs. I double-checked this to be sure.
> >
> > Thanks for trying this out. This is interesting, so even two zpools
> > cause too much fragmentation for your use case.
>
> Now I'm a little bit skeptical that the problem is due to fragmentation.
>
> > I think there are multiple ways to go forward here:
> > (a) Make the number of zpools a config option, leave the default at
> > 32, but allow special use cases to set it to 1 or similar. This is
> > probably not ideal because it is not clear to users how to set it,
> > but the idea is that no one would have to set it except for special
> > use cases such as Erhard's (who would want to set it to 1 here).
> >
> > (b) Make the number of zpools scale linearly with the number of CPUs.
> > Maybe something like nr_cpus/4 or nr_cpus/8. The problem with this
> > approach is that with a large number of CPUs, adding more zpools
> > starts to have diminishing returns: fragmentation keeps increasing,
> > while the scalability/concurrency gains diminish.
> >
> > (c) Make the number of zpools scale logarithmically with the number of
> > CPUs. Maybe something like 4 * log2(nr_cpus). This will keep the number
> > of zpools from increasing too much and close to the status quo. The
> > problem is that with a small number of CPUs (e.g. 2), 4 * log2(nr_cpus)
> > actually gives nr_zpools > nr_cpus, so we would need a fancier magic
> > equation (e.g. 4 * log2(nr_cpus/4)); a rough sketch follows below.
> >
> > (d) Make the number of zpools scale linearly with memory. This arguably
> > makes more sense than scaling with CPUs: since increasing the number of
> > zpools increases fragmentation, it is natural to bound it by the
> > available memory. This is also more consistent with other magic
> > numbers we have (e.g. SWAP_ADDRESS_SPACE_SHIFT).
> >
> > The problem is that unlike the zswap trees, the zswap pool is not
> > tied to the swapfile size, so we have no indication of how much
> > memory will end up in the zswap pool. We could scale the number of
> > zpools with the total memory on the machine during boot, but that
> > seems hard to get right, and it would not account for memory
> > hotplug or for changes to the zswap global limit.
> >
> > (e) A creative mix of the above.
> >
> > (f) Something else (probably simpler).
> >
> > I am personally leaning toward (c), but I want to hear the opinions of
> > other people here. Yu, Vlastimil, Johannes, Nhat? Anyone else?
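
For (c), one possible shape is sketched below (purely illustrative; the
helper name is made up, and the clamp is only there so small machines
never end up with more zpools than CPUs):

static unsigned int zswap_nr_zpools(void)
{
	unsigned int cpus = num_possible_cpus();

	/* ~4 * log2(nr_cpus), clamped to [1, nr_cpus]: e.g. 2 CPUs -> 2,
	 * 8 CPUs -> 8 (clamped down from 12), 64 CPUs -> 24, 256 -> 32.
	 */
	return clamp_t(unsigned int, 4 * ilog2(cpus), 1, cpus);
}
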
>
> I double-checked that commit and didn't find anything wrong. If we are
> all in the mood to get to the bottom of this, can we try using only 1
> zpool while there are 2 available? I.e.,

Erhard, do you mind checking whether Yu's diff below, which uses a single
zpool, fixes the problem completely? There is also an attached patch that
does the same thing, if that is easier for you to apply.

>
> static struct zpool *zswap_find_zpool(struct zswap_entry *entry)
> {
>  - return entry->pool->zpools[hash_ptr(entry, ilog2(ZSWAP_NR_ZPOOLS))];
>  + return entry->pool->zpools[0];
> }
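
(For context: with ZSWAP_NR_ZPOOLS = 32, ilog2(32) = 5, so hash_ptr()
returns a 5-bit value and entries are spread across zpools[0..31];
hardcoding zpools[0] sends every entry to a single zpool, which is the
same effect the attached patch gets by setting ZSWAP_NR_ZPOOLS to 1.)
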
>
> > In the long term, I think we may want to address the lock contention
> > in zsmalloc itself instead of having zswap spawn multiple zpools.
> >
> > >
> > > The patch did not apply cleanly on v6.9.3, so I applied it on v6.10-rc2. The dmesg of the current v6.10-rc2 run is attached.
> > >
> > > Regards,
> > > Erhard

[-- Attachment #2: 0001-mm-zswap-set-ZSWAP_NR_ZPOOLS-to-1.patch --]
[-- Type: application/octet-stream, Size: 824 bytes --]

From c6c477dae9cb8bcdefaf1c1a0e8869efa8bfe3f9 Mon Sep 17 00:00:00 2001
From: Yosry Ahmed <yosryahmed@google.com>
Date: Wed, 5 Jun 2024 23:56:15 +0000
Subject: [PATCH] mm: zswap: set ZSWAP_NR_ZPOOLS to 1

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
---
 mm/zswap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index a50e2986cd2fa..2bfa91518d405 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -124,7 +124,7 @@ module_param_named(accept_threshold_percent, zswap_accept_thr_percent,
 		   uint, 0644);
 
 /* Number of zpools in zswap_pool (empirically determined for scalability) */
-#define ZSWAP_NR_ZPOOLS 32
+#define ZSWAP_NR_ZPOOLS 1
 
 /* Enable/disable memory pressure-based shrinker. */
 static bool zswap_shrinker_enabled = IS_ENABLED(
-- 
2.45.1.467.gbab1589fc0-goog



Thread overview: 45+ messages
2024-05-08 18:21 Erhard Furtner
2024-05-15 20:45 ` Erhard Furtner
2024-05-15 22:06   ` Yu Zhao
2024-06-01  6:01     ` Yu Zhao
2024-06-01 15:37       ` David Hildenbrand
2024-06-06  3:11         ` Michael Ellerman
2024-06-06  3:38           ` Yu Zhao
2024-06-06 12:08             ` Michael Ellerman
2024-06-06 16:05               ` Erhard Furtner
2024-06-02 18:03       ` Erhard Furtner
2024-06-02 20:38         ` Yu Zhao
2024-06-02 21:36           ` Erhard Furtner
2024-06-03 22:13         ` Erhard Furtner
2024-06-03 23:24           ` Yosry Ahmed
     [not found]             ` <20240604134458.3ae4396a@yea>
2024-06-04 16:11               ` Yosry Ahmed
2024-06-04 17:18                 ` Yu Zhao
2024-06-04 17:34                   ` Yosry Ahmed
2024-06-04 17:53                     ` Yu Zhao
2024-06-04 18:01                       ` Yosry Ahmed
2024-06-04 21:00                         ` Vlastimil Babka (SUSE)
2024-06-04 21:10                         ` Erhard Furtner
2024-06-05  3:03                           ` Yosry Ahmed
2024-06-05 23:04                             ` Erhard Furtner
2024-06-05 23:41                               ` Yosry Ahmed
2024-06-05 23:52                                 ` Yu Zhao
2024-06-05 23:58                                   ` Yosry Ahmed [this message]
2024-06-06 13:28                                     ` Erhard Furtner
2024-06-06 16:42                                       ` Yosry Ahmed
2024-06-06  2:49                                 ` Chengming Zhou
2024-06-06  4:31                                   ` Sergey Senozhatsky
2024-06-06  4:46                                     ` Chengming Zhou
2024-06-06  5:43                                       ` Sergey Senozhatsky
2024-06-06  5:55                                         ` Chengming Zhou
2024-06-07  9:40                                         ` Nhat Pham
2024-06-07 11:20                                           ` Sergey Senozhatsky
2024-06-06  7:24                                 ` Vlastimil Babka (SUSE)
2024-06-06 13:32                                   ` Erhard Furtner
2024-06-06 16:53                                     ` Vlastimil Babka (SUSE)
2024-06-06 17:14                                 ` Takero Funaki
2024-06-06 17:41                                   ` Yosry Ahmed
2024-06-06 17:55                                     ` Yu Zhao
2024-06-06 18:03                                       ` Yosry Ahmed
2024-06-04 22:17                   ` Erhard Furtner
2024-06-04 20:52             ` Vlastimil Babka (SUSE)
2024-06-04 20:55               ` Yosry Ahmed
