From: "Sridhar, Kanchana P" <kanchana.p.sridhar@intel.com>
To: Yosry Ahmed
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org, nphamcs@gmail.com, chengming.zhou@linux.dev, usamaarif642@gmail.com, ryan.roberts@arm.com, 21cnbao@gmail.com, ying.huang@linux.alibaba.com, akpm@linux-foundation.org, linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au, davem@davemloft.net, clabbe@baylibre.com, ardb@kernel.org, ebiggers@google.com, surenb@google.com, "Accardi, Kristen C", "Feghali, Wajdi K", "Gopal, Vinodh"
Subject: RE: [PATCH v8 12/14] mm: zswap: Simplify acomp_ctx resource allocation/deletion and mutex lock usage.
Date: Tue, 18 Mar 2025 17:38:49 +0000
References: <20250303084724.6490-1-kanchana.p.sridhar@intel.com> <20250303084724.6490-13-kanchana.p.sridhar@intel.com>
> -----Original Message-----
> From: Yosry Ahmed
> Sent: Tuesday, March 18, 2025 7:24 AM
> To: Sridhar, Kanchana P
> Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; hannes@cmpxchg.org;
> nphamcs@gmail.com; chengming.zhou@linux.dev; usamaarif642@gmail.com;
> ryan.roberts@arm.com; 21cnbao@gmail.com; ying.huang@linux.alibaba.com;
> akpm@linux-foundation.org; linux-crypto@vger.kernel.org;
> herbert@gondor.apana.org.au; davem@davemloft.net; clabbe@baylibre.com;
> ardb@kernel.org; ebiggers@google.com; surenb@google.com; Accardi, Kristen C;
> Feghali, Wajdi K; Gopal, Vinodh
> Subject: Re: [PATCH v8 12/14] mm: zswap: Simplify acomp_ctx resource
> allocation/deletion and mutex lock usage.
>
> On Mon, Mar 17, 2025 at 09:15:09PM +0000, Sridhar, Kanchana P wrote:
> >
> > > -----Original Message-----
> > > From: Yosry Ahmed
> > > Sent: Monday, March 10, 2025 10:31 AM
> > > To: Sridhar, Kanchana P
> > > Cc: linux-kernel@vger.kernel.org; linux-mm@kvack.org; hannes@cmpxchg.org;
> > > nphamcs@gmail.com; chengming.zhou@linux.dev; usamaarif642@gmail.com;
> > > ryan.roberts@arm.com; 21cnbao@gmail.com; ying.huang@linux.alibaba.com;
> > > akpm@linux-foundation.org; linux-crypto@vger.kernel.org;
> > > herbert@gondor.apana.org.au; davem@davemloft.net; clabbe@baylibre.com;
> > > ardb@kernel.org; ebiggers@google.com; surenb@google.com; Accardi, Kristen C;
> > > Feghali, Wajdi K; Gopal, Vinodh
> > > Subject: Re: [PATCH v8 12/14] mm: zswap: Simplify acomp_ctx resource
> > > allocation/deletion and mutex lock usage.
> > >
> > > On Sat, Mar 08, 2025 at 02:47:15AM +0000, Sridhar, Kanchana P wrote:
> > > > > > > [..]
> > > > > > > >  	u8 *buffer;
> > > > > > > > +	u8 nr_reqs;
> > > > > > > > +	struct crypto_wait wait;
> > > > > > > >  	struct mutex mutex;
> > > > > > > >  	bool is_sleepable;
> > > > > > > > +	bool __online;
> > > > > > >
> > > > > > > I don't believe we need this.
> > > > > > >
> > > > > > > If we are not freeing resources during CPU offlining, then we do not
> > > > > > > need a CPU offline callback and acomp_ctx->__online serves no purpose.
> > > > > > >
> > > > > > > The whole point of synchronizing between offlining and
> > > > > > > compress/decompress operations is to avoid UAF. If offlining does not
> > > > > > > free resources, then we can hold the mutex directly in the
> > > > > > > compress/decompress path and drop the hotunplug callback completely.
> > > > > > >
> > > > > > > I also believe nr_reqs can be dropped from this patch, as it seems like
> > > > > > > it's only used to know when to set __online.
> > > > > >
> > > > > > All great points!
 In fact, that was the original solution I had
> > > > > > implemented (not having an offline callback). But then, I spent some
> > > > > > time understanding the v6.13 hotfix for synchronizing freeing of
> > > > > > resources, and this comment in zswap_cpu_comp_prepare():
> > > > > >
> > > > > > 	/*
> > > > > > 	 * Only hold the mutex after completing allocations, otherwise we may
> > > > > > 	 * recurse into zswap through reclaim and attempt to hold the mutex
> > > > > > 	 * again resulting in a deadlock.
> > > > > > 	 */
> > > > > >
> > > > > > Hence, I figured the constraint of "recurse into zswap through reclaim"
> > > > > > was something to comprehend in the simplification (even though I had a
> > > > > > tough time imagining how this could happen).
> > > > >
> > > > > The constraint here is about zswap_cpu_comp_prepare() holding the mutex,
> > > > > making an allocation which internally triggers reclaim, then recursing
> > > > > into zswap and trying to hold the same mutex again causing a deadlock.
> > > > >
> > > > > If zswap_cpu_comp_prepare() does not need to hold the mutex to begin
> > > > > with, the constraint naturally goes away.
> > > >
> > > > Actually, if it is possible for the allocations in zswap_cpu_comp_prepare()
> > > > to trigger reclaim, then I believe we need some way for reclaim to know if
> > > > the acomp_ctx resources are available. Hence, this seems like a potential
> > > > for deadlock regardless of the mutex.
> > >
> > > I took a closer look and I believe my hotfix was actually unnecessary. I
> > > sent it out in response to a syzbot report, but upon closer look it
> > > seems like it was not an actual problem. Sorry if my patch confused you.
> > >
> > > Looking at enum cpuhp_state in include/linux/cpuhotplug.h, it seems like
> > > CPUHP_MM_ZSWP_POOL_PREPARE is in the PREPARE section.
 The comment above
> > > says:
> > >
> > > * PREPARE: The callbacks are invoked on a control CPU before the
> > > * hotplugged CPU is started up or after the hotplugged CPU has died.
> > >
> > > So even if we go into reclaim during zswap_cpu_comp_prepare(), it will
> > > never be on the CPU that we are allocating resources for.
> > >
> > > The other case where zswap_cpu_comp_prepare() could race with
> > > compression/decompression is when a pool is being created. In this case,
> > > reclaim from zswap_cpu_comp_prepare() can recurse into zswap on the same
> > > CPU AFAICT. However, because the pool is still under creation, it will
> > > not be used (i.e. zswap_pool_current_get() won't find it).
> > >
> > > So I think we don't need to worry about zswap_cpu_comp_prepare() racing
> > > with compression or decompression for the same pool and CPU.
> >
> > Thanks Yosry, for this observation! You are right, when considered purely
> > from a CPU hotplug perspective, zswap_cpu_comp_prepare() and
> > zswap_cpu_comp_dead() in fact run on a control CPU, because the state is
> > registered in the PREPARE section of "enum cpuhp_state" in cpuhotplug.h.
> >
> > The problem however is that, in the current architecture, CPU onlining/
> > zswap_pool creation, and CPU offlining/zswap_pool deletion have the
> > same semantics as far as these resources are concerned. Hence, although
> > zswap_cpu_comp_prepare() is run on a control CPU, the CPU for which
> > the "hotplug" code is called is in fact online. It is possible for the memory
> > allocation calls in zswap_cpu_comp_prepare() to recurse into
> > zswap_compress(), which now needs to be handled by the current pool,
> > since the new pool has not yet been added to the zswap_pools, as you
> > pointed out.
> >
> > The ref on the current pool has not yet been dropped.
 Could there be
> > a potential for a deadlock at pool transition time: the new pool is blocked
> > from allocating acomp_ctx resources, triggering reclaim, which the old
> > pool needs to handle?
>
> I am not sure how this could lead to a deadlock. The compression will be
> happening in a different pool with a different acomp_ctx.

I was thinking about this from the perspective of comparing the trade-offs
between these two approaches:

a) Allocating acomp_ctx resources for a pool when a CPU is functional, vs.
b) Allocating acomp_ctx resources once upfront.

With (a), when the user switches zswap to use a new compressor, it is possible
that the system is already in a low memory situation and the CPU could be doing
a lot of swapouts. It occurred to me that, in theory, the call to switch the
compressor through the sysfs interface could never return if the acomp_ctx
allocations trigger direct reclaim in this scenario. This was in the context of
exploring whether a better design is possible, while acknowledging that this
could still happen today.

With (b), this situation is avoided by design, and we can switch to a new pool
without triggering additional reclaim. Sorry, I should have articulated this
better.

>
> > I see other places in the kernel that use CPU hotplug for resource allocation,
> > outside of the context of CPU onlining. IIUC, it is difficult to guarantee that
> > the startup/teardown callbacks are modifying acomp_ctx resources for a
> > dysfunctional CPU.
>
> IIUC, outside the context of CPU onlining, CPU hotplug callbacks get
> called when they are added. In this case, only the newly added callbacks
> will be executed. IOW, zswap's hotplug callback should not be randomly
> getting called when irrelevant code adds hotplug callbacks. It should
> only happen during zswap pool initialization or CPU onlining.
>
> Please correct me if I am wrong.

Yes, this is my understanding as well.
>
> > Now that I think about it, the only real constraint is that the acomp_ctx
> > resources are guaranteed to exist for a functional CPU which can run zswap
> > compress/decompress.
>
> I believe this is already the case as I previously described, because
> the hotplug callback can only be called in two scenarios:
> - Zswap pool initialization, in which case compress/decompress
>   operations cannot run on the pool we are initializing.
> - CPU onlining, in which case compress/decompress operations cannot run
>   on the CPU we are onlining.
>
> Please correct me if I am wrong.

Agreed, this is my understanding too.

>
> > I think we can simplify this as follows, and would welcome suggestions
> > to improve the proposed solution:
> >
> > 1) We dis-associate the acomp_ctx from the pool, and instead, have this
> >    be a global percpu zswap resource that gets allocated once in
> >    zswap_setup(), just like the zswap_entry_cache.
> > 2) The acomp_ctx resources will get allocated during zswap_setup(), using
> >    the cpuhp_setup_state_multi() callback in zswap_setup(), that registers
> >    zswap_cpu_comp_prepare(), but no teardown callback.
> > 3) We call cpuhp_state_add_instance() for_each_possible_cpu(cpu) in
> >    zswap_setup().
> > 4) The acomp_ctx resources persist through subsequent "real CPU
> >    offline/online state transitions".
> > 5) zswap_[de]compress() can go ahead and lock the mutex, and use the
> >    reqs/buffers without worrying about whether these resources are
> >    initialized or still exist/are being deleted.
> > 6) "struct zswap_pool" is now de-coupled from this global percpu zswap
> >    acomp_ctx.
> > 7) To address the issue of how many reqs/buffers to allocate, there could
> >    potentially be a memory cost for non-batching compressors, if we want
> >    to always allocate ZSWAP_MAX_BATCH_SIZE acomp_reqs and buffers.
> >    This would allow the acomp_ctx to seamlessly handle batching
> >    compressors, non-batching compressors, and transitions among the
> >    two compressor types in a pretty general manner, that relies only on
> >    the ZSWAP_MAX_BATCH_SIZE, which we define anyway.
> >
> >    I believe we can maximize the chances of success for the allocation of
> >    the acomp_ctx resources if this is done in zswap_setup(), but please
> >    correct me if I am wrong.
> >
> >    The added memory cost for platforms without IAA would be
> >    ~57 KB per cpu, on x86. Would this be acceptable?
>
> I think that's a lot of memory to waste per-CPU, and I don't see a good
> reason for it.

Yes, it appears so. Towards trying to see if a better design is possible by
de-coupling the acomp_ctx from zswap_pool: would this cost be acceptable if it
is incurred based on a build config option, say
CONFIG_ALLOC_ZSWAP_BATCHING_RESOURCES (default OFF)? If this is set, we go
ahead and allocate ZSWAP_MAX_BATCH_SIZE acomp_ctx resources once, during
zswap_setup(). If not, we allocate only one req/buffer in the global percpu
acomp_ctx.

Since we are comprehending that other compressors may want to do batching in
future, I thought a more general config option name would be more appropriate.

>
> > If not, I don't believe this simplification would be worth it, because
> > allocating one req/buffer, then dynamically adding more resources
> > if a newly selected compressor requires more resources, would run
> > into the same race conditions and added checks as in
> > acomp_ctx_get_cpu_lock(), which, I believe, seem to be necessary because
> > CPU onlining/zswap_pool creation and CPU offlining/zswap_pool
> > deletion have the same semantics for these resources.
>
> Agree that using a single acomp_ctx per-CPU but making the resources
> resizable is not a win.

Yes, this makes sense: resizing is not the way to go.
>
> > The only other fallback solution in lieu of the proposed simplification that
> > I can think of is to keep the lifespan of these resources from pool creation
> > to deletion, using the CPU hotplug code. Although it is not totally clear
> > to me if there is potential for deadlock during pool transitions, as noted
> > above.
>
> I am not sure what's the deadlock scenario you're worried about, please
> clarify.

My apologies: I was referring to triggering direct reclaim during pool
creation, which could, in theory, run into a scenario where the pool switching
would have to wait for reclaim to free up enough memory for the acomp_ctx
resources allocation: this could cause the system to hang, but not a deadlock.
This can happen even today, hence trying to see if a better design is possible.

Thanks,
Kanchana