From: Yosry Ahmed <yosryahmed@google.com>
Date: Wed, 5 Jun 2024 16:58:11 -0700
Subject: Re: kswapd0: page allocation failure: order:0, mode:0x820(GFP_ATOMIC), nodemask=(null),cpuset=/,mems_allowed=0 (Kernel v6.5.9, 32bit ppc)
To: Yu Zhao
Cc: Erhard Furtner, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Johannes Weiner, Nhat Pham, Chengming Zhou, Sergey Senozhatsky, Minchan Kim, "Vlastimil Babka (SUSE)"
On Wed, Jun 5, 2024 at 4:53 PM Yu Zhao wrote:
>
> On Wed, Jun 5, 2024 at 5:42 PM Yosry Ahmed wrote:
> >
> > On Wed, Jun 5, 2024 at 4:04 PM Erhard Furtner wrote:
> > >
> > > On Tue, 4 Jun 2024 20:03:27 -0700
> > > Yosry Ahmed wrote:
> > >
> > > > Could you check if the attached patch helps? It basically changes
> > > > the number of zpools from 32 to min(32, nr_cpus).
> > >
> > > Thanks! The patch does not fix the issue, but it helps.
> > >
> > > That means I still see the 'kswapd0: page allocation failure' in
> > > dmesg, a 'stress-ng-vm: page allocation failure' later on, another
> > > kswapd0 error after that, etc., _but_ the machine keeps running the
> > > workload, stays usable via VNC, and I no longer get a hard crash.
> > >
> > > Without the patch: kswapd0 error and hard crash (power-cycle
> > > required) in under 3 minutes. With the patch: several kswapd0
> > > errors, but it has been running for 2 hours now. I double-checked
> > > this to be sure.
> >
> > Thanks for trying this out. This is interesting: so even two zpools
> > is too much fragmentation for your use case.
>
> Now I'm a little bit skeptical that the problem is due to fragmentation.
>
> > I think there are multiple ways to go forward here:
> >
> > (a) Make the number of zpools a config option, leaving the default
> > at 32 but allowing special use cases to set it to 1 or similar. This
> > is probably not preferable because it is not clear to users how to
> > set it, but the idea is that no one would have to set it except
> > special use cases such as Erhard's (who would want to set it to 1
> > here).
> >
> > (b) Make the number of zpools scale linearly with the number of
> > CPUs, e.g. nr_cpus/4 or nr_cpus/8. The problem with this approach is
> > that with a large number of CPUs, adding zpools has diminishing
> > returns: fragmentation keeps increasing while the
> > scalability/concurrency gains taper off.
> >
> > (c) Make the number of zpools scale logarithmically with the number
> > of CPUs, e.g. 4*log2(nr_cpus). This keeps the number of zpools from
> > growing too much and stays close to the status quo. The problem is
> > that at a small number of CPUs (e.g. 2), 4*log2(nr_cpus) actually
> > gives nr_zpools > nr_cpus, so we would need a fancier magic equation
> > (e.g. 4*log2(nr_cpus/4)). A numeric sketch of this option follows at
> > the end of this mail.
> >
> > (d) Make the number of zpools scale linearly with memory. This makes
> > more sense than scaling with CPUs, because increasing the number of
> > zpools increases fragmentation, so it makes sense to bound it by the
> > available memory. It is also more consistent with other magic
> > numbers we have (e.g. SWAP_ADDRESS_SPACE_SHIFT).
> >
> > The problem is that, unlike the zswap trees, the zswap pool is not
> > tied to the swapfile size, so we have no indication of how much
> > memory will end up in the zswap pool. We could scale the number of
> > zpools with the machine's total memory at boot, but that seems hard
> > to get right, and it would not account for memory hotplug or for
> > changes to the zswap global limit.
> >
> > (e) A creative mix of the above.
> >
> > (f) Something else (probably simpler).
> >
> > I am personally leaning toward (c), but I want to hear the opinions
> > of other people here. Yu, Vlastimil, Johannes, Nhat? Anyone else?
>
> I double-checked that commit and didn't find anything wrong. If we
> are all in the mood for getting to the bottom of this, can we try
> using only 1 zpool while there are 2 available? I.e.,

Erhard, do you mind checking whether Yu's diff below, which switches to
a single zpool, fixes the problem completely? There is also an attached
patch that does the same thing, in case that is easier for you to apply.
(A toy model of the selection logic this diff bypasses follows after
the patch.)

>  static struct zpool *zswap_find_zpool(struct zswap_entry *entry)
>  {
> -	return entry->pool->zpools[hash_ptr(entry, ilog2(ZSWAP_NR_ZPOOLS))];
> +	return entry->pool->zpools[0];
>  }

> > In the long term, I think we may want to address the lock contention
> > in zsmalloc itself instead of having zswap spawn multiple zpools.
> >
> > > The patch did not apply cleanly on v6.9.3, so I applied it on
> > > v6.10-rc2. dmesg of the current v6.10-rc2 run attached.
> > >
> > > Regards,
> > > Erhard

[Attachment: 0001-mm-zswap-set-ZSWAP_NR_ZPOOLS-to-1.patch, base64
decoded below]

From c6c477dae9cb8bcdefaf1c1a0e8869efa8bfe3f9 Mon Sep 17 00:00:00 2001
From: Yosry Ahmed <yosryahmed@google.com>
Date: Wed, 5 Jun 2024 23:56:15 +0000
Subject: [PATCH] mm: zswap: set ZSWAP_NR_ZPOOLS to 1

Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
---
 mm/zswap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index a50e2986cd2fa..2bfa91518d405 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -124,7 +124,7 @@ module_param_named(accept_threshold_percent, zswap_accept_thr_percent,
 		   uint, 0644);
 
 /* Number of zpools in zswap_pool (empirically determined for scalability) */
-#define ZSWAP_NR_ZPOOLS 32
+#define ZSWAP_NR_ZPOOLS 1
 
 /* Enable/disable memory pressure-based shrinker. */
 static bool zswap_shrinker_enabled = IS_ENABLED(
-- 
2.45.1.467.gbab1589fc0-goog
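
A note for readers on why forcing a single zpool is such a small change:
zswap_find_zpool() above spreads entries across ZSWAP_NR_ZPOOLS zpools
by hashing the entry pointer into ilog2(ZSWAP_NR_ZPOOLS) bits, so with
one zpool there is nothing left to hash and Yu's diff simply indexes
zpools[0]. Here is a minimal userspace model of that selection, only to
visualize the spread; the pointer hash below is a stand-in invented for
this sketch, not the kernel's hash_ptr() implementation.

/* Toy model of zswap's per-entry zpool selection (not kernel code). */
#include <stdio.h>
#include <stdint.h>

static unsigned int ilog2_u(unsigned int n)
{
	unsigned int log = 0;

	while (n >>= 1)		/* floor(log2(n)), like the kernel's ilog2() */
		log++;
	return log;
}

/* Stand-in pointer hash: golden-ratio multiply, keep 'bits' bits. */
static unsigned int hash_ptr_model(const void *ptr, unsigned int bits)
{
	uint64_t v = (uint64_t)(uintptr_t)ptr * 0x9e3779b97f4a7c15ull;

	return (unsigned int)(v >> 48) & ((1u << bits) - 1u);
}

int main(void)
{
	static int entries[8];			/* stand-ins for zswap entries */
	unsigned int nr_zpools[] = { 32, 1 };	/* before / after the patch */

	for (unsigned int c = 0; c < 2; c++) {
		unsigned int bits = ilog2_u(nr_zpools[c]);

		printf("ZSWAP_NR_ZPOOLS=%2u ->", nr_zpools[c]);
		for (int i = 0; i < 8; i++)
			printf(" %2u", hash_ptr_model(&entries[i], bits));
		printf("\n");
	}
	return 0;
}

With 32 zpools the eight model entries scatter across indices 0 to 31;
with 1 zpool every entry maps to index 0, which is exactly the behavior
the one-line diff hardcodes.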
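
Since the thread is leaning toward option (c), here is a quick numeric
sketch of how a 4*log2(nr_cpus) rule would behave. The clamp to the
range [1, nr_cpus] is an assumption added for this illustration, to
sidestep the nr_zpools > nr_cpus problem noted above; it is not
something decided in the thread.

/* Illustration of option (c): nr_zpools = 4*log2(nr_cpus), clamped. */
#include <stdio.h>

static unsigned int ilog2_u(unsigned int n)
{
	unsigned int log = 0;

	while (n >>= 1)
		log++;
	return log;
}

static unsigned int nr_zpools_log(unsigned int nr_cpus)
{
	unsigned int n = 4 * ilog2_u(nr_cpus);

	if (n < 1)
		n = 1;		/* always at least one zpool */
	if (n > nr_cpus)
		n = nr_cpus;	/* assumed clamp: never exceed nr_cpus */
	return n;
}

int main(void)
{
	unsigned int cpus[] = { 1, 2, 4, 8, 16, 64, 256 };

	for (unsigned int i = 0; i < sizeof(cpus) / sizeof(cpus[0]); i++)
		printf("nr_cpus=%3u -> nr_zpools=%2u\n",
		       cpus[i], nr_zpools_log(cpus[i]));
	return 0;
}

This prints 1, 2, 4, 8, 16, 24, and 32 zpools for 1, 2, 4, 8, 16, 64,
and 256 CPUs respectively: small machines like Erhard's get one or two
zpools, while the status-quo count of 32 is only reached at 256 CPUs.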