From: Takero Funaki <flintglass@gmail.com>
Date: Fri, 7 Jun 2024 02:14:07 +0900
Subject: Re: kswapd0: page allocation failure: order:0, mode:0x820(GFP_ATOMIC), nodemask=(null),cpuset=/,mems_allowed=0 (Kernel v6.5.9, 32bit ppc)
To: Yosry Ahmed
Cc: Erhard Furtner, Yu Zhao, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Johannes Weiner, Nhat Pham, Chengming Zhou, Sergey Senozhatsky, Minchan Kim, "Vlastimil Babka (SUSE)"
On Thu, 6 Jun 2024 at 8:42, Yosry Ahmed wrote:

> I think there are multiple ways to go forward here:
>
> (a) Make the number of zpools a config option, leave the default as
> 32, but allow special use cases to set it to 1 or similar.
> This is
> probably not preferable because it is not clear to users how to set
> it, but the idea is that no one will have to set it except special use
> cases such as Erhard's (who will want to set it to 1 in this case).
>
> (b) Make the number of zpools scale linearly with the number of CPUs.
> Maybe something like nr_cpus/4 or nr_cpus/8. The problem with this
> approach is that with a large number of CPUs, too many zpools will
> start having diminishing returns. Fragmentation will keep increasing,
> while the scalability/concurrency gains will diminish.
>
> (c) Make the number of zpools scale logarithmically with the number of
> CPUs. Maybe something like 4log2(nr_cpus). This will keep the number
> of zpools from increasing too much and close to the status quo. The
> problem is that at a small number of CPUs (e.g. 2), 4log2(nr_cpus)
> will actually give a nr_zpools > nr_cpus. So we will need to come up
> with a more fancy magic equation (e.g. 4log2(nr_cpus/4)).

I just posted a patch to limit the number of zpools, with some
theoretical background explained in the code comments. I believe that
scaling linearly as 2 * nr_cpus is sufficient to reduce contention, but
the scale could be reduced further; it is unlikely that all CPUs are
allocating/freeing in zswap at the same time.

How many concurrent accesses were the original 32 zpools supposed to
handle? I think it was for 16 CPUs or more. Or would nr_cpus/4 be
enough?