From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 21 Jan 2026 01:30:07 +0000
From: Yosry Ahmed
To: Sergey Senozhatsky
Cc: Andrew Morton, Minchan Kim, Nhat Pham, Johannes Weiner, Brian Geffon,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [RFC PATCH] zsmalloc: make common caches global
References: <20260116044841.334821-1-senozhatsky@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Sat, Jan
17, 2026 at 11:24:01AM +0900, Sergey Senozhatsky wrote:
> On (26/01/16 20:49), Yosry Ahmed wrote:
> > On Fri, Jan 16, 2026 at 01:48:41PM +0900, Sergey Senozhatsky wrote:
> > > Currently, zsmalloc creates kmem_cache of handles and zspages
> > > for each pool, which may be suboptimal from the memory usage
> > > point of view (extra internal fragmentation per pool). Systems
> > > that create multiple zsmalloc pools may benefit from shared
> > > common zsmalloc caches.
> >
> > I had a similar patch internally when we had 32 zsmalloc pools with
> > zswap.
>
> Oh, nice.
>
> > You can calculate the savings by using /proc/slabinfo. The unused memory
> > is (num_objs-active_objs)*objsize. You can sum this across all caches
> > when you have multiple pools, and compare it to the unused memory with a
> > single cache.
>
> Right. Just curious, do you recall any numbers?

I have the exact numbers actually, from /proc/slabinfo while running a
zswap (internal) test:

*** Before:
# name ..
zs_handle 35637 35760 16 ...
zs_handle 35577 35760 16 ...
zs_handle 35638 35760 16 ...
zs_handle 35700 35760 16 ...
zs_handle 35937 36240 16 ...
zs_handle 35518 35760 16 ...
zs_handle 35700 36000 16 ...
zs_handle 35517 35760 16 ...
zs_handle 35818 36000 16 ...
zs_handle 35698 35760 16 ...
zs_handle 35536 35760 16 ...
zs_handle 35877 36240 16 ...
zs_handle 35757 36000 16 ...
zs_handle 35760 36000 16 ...
zs_handle 35820 36000 16 ...
zs_handle 35999 36000 16 ...
zs_handle 35700 36000 16 ...
zs_handle 35817 36000 16 ...
zs_handle 35698 36000 16 ...
zs_handle 35699 36000 16 ...
zs_handle 35580 35760 16 ...
zs_handle 35578 35760 16 ...
zs_handle 35820 36000 16 ...
zs_handle 35517 35760 16 ...
zs_handle 35700 36000 16 ...
zs_handle 35640 35760 16 ...
zs_handle 35820 36000 16 ...
zs_handle 35578 35760 16 ...
zs_handle 35578 35760 16 ...
zs_handle 35817 36000 16 ...
zs_handle 35518 35760 16 ...
zs_handle 35940 36240 16 ...
zspage 991 1079 48 ...
zspage 936 996 48 ...
zspage 940 996 48 ...
zspage 1050 1079 48 ...
zspage 973 1079 48 ...
zspage 942 996 48 ...
zspage 1065 1162 48 ...
zspage 885 996 48 ...
zspage 887 913 48 ...
zspage 1053 1079 48 ...
zspage 983 996 48 ...
zspage 966 996 48 ...
zspage 970 1079 48 ...
zspage 880 913 48 ...
zspage 1006 1079 48 ...
zspage 998 1079 48 ...
zspage 1129 1162 48 ...
zspage 903 913 48 ...
zspage 833 996 48 ...
zspage 861 913 48 ...
zspage 764 913 48 ...
zspage 898 913 48 ...
zspage 973 1079 48 ...
zspage 945 996 48 ...
zspage 943 1079 48 ...
zspage 1024 1079 48 ...
zspage 820 913 48 ...
zspage 702 830 48 ...
zspage 1049 1079 48 ...
zspage 990 1162 48 ...
zspage 988 1079 48 ...
zspage 932 996 48 ...

Unused memory = $(awk '{s += $4*($3-$2)} END {print s}') = 218416 bytes

*** After:
# name ..
zs_handle 1054440 1054800 16 ...
zspage 5720 5810 48 ...

Unused memory = (1054800-1054440)*16 + (5810-5720)*48 = 10080 bytes

That was roughly a 20x reduction in waste when using 32 pools with zswap.
I suspect we wouldn't be using that many pools with zram.

> > [..]
> > Hmm instead of the repeated kmem_cache_destroy() calls, can we do sth
> > like this:
>
> Sure.
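[Editor's note: the waste formula discussed above, (num_objs - active_objs) * objsize, can be computed directly against /proc/slabinfo. A minimal sketch, assuming the slabinfo 2.x column order (name, active_objs, num_objs, objsize) and the zs_handle/zspage cache names used in this thread:]

```shell
# Sum unused bytes across all zs_handle and zspage caches:
# per cache line, unused = (num_objs - active_objs) * objsize.
# slabinfo 2.x columns: name  active_objs  num_objs  objsize ...
awk '$1 == "zs_handle" || $1 == "zspage" { waste += ($3 - $2) * $4 }
     END { print waste " bytes unused" }' /proc/slabinfo
```

[Applied to the "Before" dump above, this sums to the quoted 218416 bytes.]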