From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 11 Mar 2024 22:34:11 -0400
From: Johannes Weiner <hannes@cmpxchg.org>
To: Yosry Ahmed
Cc: Andrew Morton, Nhat Pham, Chengming Zhou,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/2] mm: zswap: optimize zswap pool size tracking
Message-ID: <20240312023411.GA22705@cmpxchg.org>
References: <20240311161214.1145168-1-hannes@cmpxchg.org>
In-Reply-To:

On Mon, Mar 11, 2024 at 10:09:35PM +0000, Yosry Ahmed wrote:
> On Mon, Mar 11, 2024 at 12:12:13PM -0400, Johannes Weiner wrote:
> > Profiling the munmap() of a zswapped memory region shows 50%(!) of the
> > total cycles currently going into updating the zswap_pool_total_size.
> 
> Yikes. I have always hated that size update scheme FWIW.
> 
> I have also wondered whether it makes sense to just maintain the number
> of pages in zswap as an atomic, like zswap_stored_pages. I guess your
> proposed scheme is even cheaper for the load/invalidate paths because we
> do nothing at all. It could be an option if the aggregation in other
> paths ever becomes a problem, but we would need to make sure it doesn't
> regress the load/invalidate paths. Just sharing some thoughts.

Agree with you there. I actually tried doing it that way at first, but
noticed zram uses zs_get_total_pages() and actually wants a per-pool
count. I didn't want the backend to have to update two atomics, so I
settled for this version.

> > There are three consumers of this counter:
> > - store, to enforce the globally configured pool limit
> > - meminfo & debugfs, to report the size to the user
> > - shrink, to determine the batch size for each cycle
> >
> > Instead of aggregating every time an entry enters or exits the zswap
> > pool, aggregate the value from the zpools on-demand:
> >
> > - Stores aggregate the counter anyway upon success. Aggregating to
> >   check the limit instead is the same amount of work.
> >
> > - Meminfo & debugfs might benefit somewhat from a pre-aggregated
> >   counter, but aren't exactly hotpaths.
> >
> > - Shrinking can aggregate once for every cycle instead of doing it
> >   for every freed entry. As the shrinker might work on tens or
> >   hundreds of objects per scan cycle, this is a large reduction in
> >   aggregations.
> >
> > The paths that benefit dramatically are swapin, swapoff, and unmaps.
> > There could be millions of pages being processed until somebody asks
> > for the pool size again. This eliminates the pool size updates from
> > those paths entirely.
> 
> This looks like a big win, thanks! I wonder if you have any numbers or
> perf profiles to share. That would be nice to have, but I think the
> benefit is clear regardless.

I deleted the perf files already, but can re-run it tomorrow.

> I also like the implicit cleanup when we switch to maintaining the
> number of pages rather than bytes. The code looks much better with all
> the shifts and divisions gone :)
> 
> I have a couple of comments below. With them addressed, feel free to
> add:
> Acked-by: Yosry Ahmed

Thanks!
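For the record, the on-demand side is nothing fancy, just a walk of the
pool list. Roughly like this (a sketch only; ZSWAP_NR_ZPOOLS, the
pool->zpools[] array and the byte-based zpool_get_total_size() accessor
are assumptions about the surrounding code, not taken from the patch):

/*
 * Sketch: sum up the backing zpools on demand, so the store/load/
 * invalidate hotpaths have no counter to maintain at all.
 */
unsigned long zswap_total_pages(void)
{
	struct zswap_pool *pool;
	unsigned long total = 0;

	rcu_read_lock();
	list_for_each_entry_rcu(pool, &zswap_pools, list) {
		int i;

		/* Each pool fans out over several zpools; sum them all */
		for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
			total += zpool_get_total_size(pool->zpools[i]);
	}
	rcu_read_unlock();

	/* Backends report bytes; callers want pages */
	return total >> PAGE_SHIFT;
}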
> > @@ -1385,6 +1365,10 @@ static void shrink_worker(struct work_struct *w)
> >  {
> >  	struct mem_cgroup *memcg;
> >  	int ret, failures = 0;
> > +	unsigned long thr;
> > +
> > +	/* Reclaim down to the accept threshold */
> > +	thr = zswap_max_pages() * zswap_accept_thr_percent / 100;
> 
> This calculation is repeated twice, so I'd rather keep a helper for it
> as an alternative to zswap_can_accept(). Perhaps zswap_threshold_page()
> or zswap_acceptance_pages()?

Sounds good. I went with zswap_accept_thr_pages().

> > @@ -1711,6 +1700,13 @@ void zswap_swapoff(int type)
> >  
> >  static struct dentry *zswap_debugfs_root;
> >  
> > +static int debugfs_get_total_size(void *data, u64 *val)
> > +{
> > +	*val = zswap_total_pages() * PAGE_SIZE;
> > +	return 0;
> > +}
> > +DEFINE_DEBUGFS_ATTRIBUTE(total_size_fops, debugfs_get_total_size, NULL, "%llu");
> 
> I think we are missing a newline here to maintain the current format
> (i.e. "%llu\n").

Oops, good catch! I had verified the debugfs file (along with the
others) with 'grep . *', which hides that this is missing. Fixed up.

Thanks for taking a look. The incremental diff is below. I'll run the
tests and recapture the numbers tomorrow, then send v2.

diff --git a/mm/zswap.c b/mm/zswap.c
index 7c39327a7cc2..1a5cc7298306 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -504,6 +504,11 @@ static unsigned long zswap_max_pages(void)
 	return totalram_pages() * zswap_max_pool_percent / 100;
 }
 
+static unsigned long zswap_accept_thr_pages(void)
+{
+	return zswap_max_pages() * zswap_accept_thr_percent / 100;
+}
+
 unsigned long zswap_total_pages(void)
 {
 	struct zswap_pool *pool;
@@ -1368,7 +1373,7 @@ static void shrink_worker(struct work_struct *w)
 	unsigned long thr;
 
 	/* Reclaim down to the accept threshold */
-	thr = zswap_max_pages() * zswap_accept_thr_percent / 100;
+	thr = zswap_accept_thr_pages();
 
 	/* global reclaim will select cgroup in a round-robin fashion. */
 	do {
@@ -1493,9 +1498,7 @@ bool zswap_store(struct folio *folio)
 	}
 
 	if (zswap_pool_reached_full) {
-		unsigned long thr = max_pages * zswap_accept_thr_percent / 100;
-
-		if (cur_pages > thr)
+		if (cur_pages > zswap_accept_thr_pages())
 			goto shrink;
 		else
 			zswap_pool_reached_full = false;
@@ -1705,7 +1708,7 @@ static int debugfs_get_total_size(void *data, u64 *val)
 	*val = zswap_total_pages() * PAGE_SIZE;
 	return 0;
 }
-DEFINE_DEBUGFS_ATTRIBUTE(total_size_fops, debugfs_get_total_size, NULL, "%llu");
+DEFINE_DEBUGFS_ATTRIBUTE(total_size_fops, debugfs_get_total_size, NULL, "%llu\n");
 
 static int zswap_debugfs_init(void)
 {
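P.S. To spell out how the pieces fit together in the store path: with
the helper, the limit check boils down to something like the fragment
below. This is pieced together from the description and the hunks above,
not a verbatim quote of the patch; the cur_pages/max_pages locals and
the zswap_pool_limit_hit counter are assumptions about the surrounding
code.

	/* Aggregate once per store instead of on every entry update */
	cur_pages = zswap_total_pages();
	max_pages = zswap_max_pages();

	if (cur_pages >= max_pages) {
		zswap_pool_limit_hit++;
		zswap_pool_reached_full = true;
		goto shrink;
	}

	if (zswap_pool_reached_full) {
		/* Only accept new entries again once below the threshold */
		if (cur_pages > zswap_accept_thr_pages())
			goto shrink;
		else
			zswap_pool_reached_full = false;
	}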