From: Yu Zhao <yuzhao@google.com>
Date: Wed, 5 Jun 2024 17:52:34 -0600
Subject: Re: kswapd0: page allocation failure: order:0, mode:0x820(GFP_ATOMIC), nodemask=(null),cpuset=/,mems_allowed=0 (Kernel v6.5.9, 32bit ppc)
To: Yosry Ahmed
Cc: Erhard Furtner, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Johannes Weiner, Nhat Pham, Chengming Zhou, Sergey Senozhatsky, Minchan Kim, "Vlastimil Babka (SUSE)"
On Wed, Jun 5, 2024 at 5:42 PM Yosry Ahmed wrote:
>
> On Wed, Jun 5, 2024 at 4:04 PM Erhard Furtner wrote:
> >
> > On Tue, 4 Jun 2024 20:03:27 -0700
> > Yosry Ahmed wrote:
> >
> > > Could you check if the attached patch helps? It basically changes the
> > > number of zpools from 32 to min(32, nr_cpus).
> >
> > Thanks! The patch does not fix the issue, but it helps.
> >
> > That means I still see the 'kswapd0: page allocation failure' in dmesg,
> > a 'stress-ng-vm: page allocation failure' later on, another kswapd0
> > error after that, etc., _but_ the machine keeps running the workload,
> > stays usable via VNC, and I no longer get a hard crash.
> >
> > Without the patch I got the kswapd0 error and a hard crash (requiring
> > a power-cycle) in under 3 minutes. With the patch I get several kswapd0
> > errors, but the machine has been running for 2 hours now. I
> > double-checked this to be sure.
>
> Thanks for trying this out. This is interesting, so even two zpools is
> too much fragmentation for your use case.

Now I'm a little bit skeptical that the problem is due to fragmentation.

> I think there are multiple ways to go forward here:
>
> (a) Make the number of zpools a config option, leave the default at 32,
> but allow special use cases to set it to 1 or similar. This is probably
> not preferable because it is not clear to users how to set it, but the
> idea is that no one would have to set it except special use cases such
> as Erhard's (who would want to set it to 1 here).
>
> (b) Make the number of zpools scale linearly with the number of CPUs,
> e.g. nr_cpus/4 or nr_cpus/8. The problem with this approach is that with
> a large number of CPUs, additional zpools have diminishing returns:
> fragmentation keeps increasing while the scalability/concurrency gains
> taper off.
>
> (c) Make the number of zpools scale logarithmically with the number of
> CPUs, e.g. 4*log2(nr_cpus). This keeps the number of zpools from growing
> too much and stays close to the status quo. The problem is that with a
> small number of CPUs (e.g. 2), 4*log2(nr_cpus) actually gives
> nr_zpools > nr_cpus, so we would need a fancier magic equation, e.g.
> 4*log2(nr_cpus/4) (see the sketch just after this list).
>
> (d) Make the number of zpools scale linearly with memory. This makes
> more sense than scaling with CPUs because increasing the number of
> zpools increases fragmentation, so it makes sense to limit it by the
> available memory. It is also more consistent with other magic numbers we
> have (e.g. SWAP_ADDRESS_SPACE_SHIFT).
>
> The problem is that unlike the zswap trees, the zswap pool is not tied
> to the swapfile size, so we have no indication of how much memory will
> end up in the zswap pool. We could scale the number of zpools with the
> total memory on the machine at boot, but that would be difficult to get
> right and would not account for memory hotplug or changes to the zswap
> global limit.
>
> (e) A creative mix of the above.
>
> (f) Something else (probably simpler).
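To make the magic equations concrete, here is a rough user-space sketch
of how (b) and (c) would size the zpool array. Illustrative only, not
proposed kernel code; ilog2() below is a local stand-in for the kernel
helper, and the floor/cap clamps are assumptions:

/* Candidate zpool-count formulas, assuming a floor of 1 and the
 * existing cap of 32. Build: cc -o nr_zpools nr_zpools.c */
#include <stdio.h>

static unsigned int ilog2(unsigned int v)	/* floor(log2(v)), v >= 1 */
{
	unsigned int r = 0;

	while (v >>= 1)
		r++;
	return r;
}

/* (b) linear: nr_cpus/4, clamped to [1, 32] */
static unsigned int option_b(unsigned int nr_cpus)
{
	unsigned int z = nr_cpus / 4;

	return z < 1 ? 1 : (z > 32 ? 32 : z);
}

/* (c) logarithmic: 4*log2(nr_cpus/4), clamped to [1, nr_cpus] */
static unsigned int option_c(unsigned int nr_cpus)
{
	unsigned int z = nr_cpus >= 4 ? 4 * ilog2(nr_cpus / 4) : 0;

	if (z < 1)
		z = 1;
	return z > nr_cpus ? nr_cpus : z;
}

int main(void)
{
	for (unsigned int cpus = 1; cpus <= 256; cpus *= 2)
		printf("cpus=%3u  (b)=%2u  (c)=%2u\n",
		       cpus, option_b(cpus), option_c(cpus));
	return 0;
}

With these clamps, a 2-CPU machine like Erhard's gets a single zpool
under either formula, while a 256-CPU machine gets 32 under (b) and 24
under (c).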
> I am personally leaning toward (c), but I want to hear the opinions of
> other people here. Yu, Vlastimil, Johannes, Nhat? Anyone else?

I double-checked that commit and didn't find anything wrong. If we are
all in the mood to get to the bottom of this, can we try using only 1
zpool while there are 2 available? I.e.:

static struct zpool *zswap_find_zpool(struct zswap_entry *entry)
{
-	return entry->pool->zpools[hash_ptr(entry, ilog2(ZSWAP_NR_ZPOOLS))];
+	return entry->pool->zpools[0];
}

> In the long term, I think we may want to address the lock contention in
> zsmalloc itself instead of having zswap spawn multiple zpools.

> > The patch did not apply cleanly on v6.9.3, so I applied it on
> > v6.10-rc2. The dmesg of the current v6.10-rc2 run is attached.
> >
> > Regards,
> > Erhard
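As an aside for anyone reproducing the experiment, here is a rough
user-space illustration of the pointer hashing that the one-liner above
bypasses. Not kernel code: hash_ptr32() is a simplified stand-in for
the kernel's hash_ptr() (on a 32-bit machine like this one the kernel
also uses a multiplicative hash with the same constant), and the entry
addresses are made up:

#include <stdio.h>
#include <stdint.h>

/* Same multiplicative constant as the kernel's GOLDEN_RATIO_32. */
#define GOLDEN_RATIO_32 0x61C88647u

/* Simplified stand-in for the kernel's hash_ptr(). */
static unsigned int hash_ptr32(const void *p, unsigned int bits)
{
	return ((uint32_t)(uintptr_t)p * GOLDEN_RATIO_32) >> (32 - bits);
}

int main(void)
{
	/* Two zpools (min(32, nr_cpus) on a 2-CPU box): ilog2(2) == 1 bit. */
	for (int i = 0; i < 8; i++) {
		/* Made-up entry addresses, 64 bytes apart like slab objects. */
		const void *entry = (const void *)(uintptr_t)(0x1000 + 64 * i);

		printf("entry %p -> zpool %u\n", entry, hash_ptr32(entry, 1));
	}
	return 0;
}

Forcing zpools[0] removes this spreading, so all compressed entries
share one zpool; comparing that against the two-zpool behaviour is what
isolates the fragmentation question.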