Date: Wed, 12 Feb 2025 16:19:07 +0000
From: Yosry Ahmed <yosry.ahmed@linux.dev>
To: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Andrew Morton, Kairui Song, Minchan Kim,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v5 10/18] zsmalloc: factor out pool locking helpers
References: <20250212063153.179231-1-senozhatsky@chromium.org>
	<20250212063153.179231-11-senozhatsky@chromium.org>

On Wed, Feb 12, 2025 at 04:18:14PM +0000, Yosry Ahmed wrote:
> On Wed, Feb 12, 2025 at 03:27:08PM +0900, Sergey Senozhatsky wrote:
> > We currently have a mix of migrate_{read,write}_lock() helpers
> > that lock zspages, but it's zs_pool that actually has a ->migrate_lock,
> > access to which is open-coded. Factor out pool migrate locking
> > into helpers; the zspage migration locking API will be renamed to
> > reduce confusion.
> >
> > It's worth mentioning that zsmalloc locks sync not only migration,
> > but also compaction.
> >
> > Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
>
> FWIW I don't see a lot of value in the helpers (renaming the lock is
> useful tho). We open-code other locks like the class lock anyway, and
> the helpers obscure the underlying lock type without adding much value
> in terms of readability/conciseness.

We use helpers for the class lock in the following change, but my point
stands for that too.
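To make the "obscure the underlying lock type" point concrete, here is an
illustrative sketch (mine, not code from the patch): with the open-coded
form the rwlock is visible at the call site, while the helper hides it,
so the lock's type (and with it the sleeping/nesting rules readers infer
from it) could later change with no visible difference where it is taken:

	/* Open-coded: clearly an rwlock at the call site. */
	read_lock(&pool->lock);
	/* ... lookup that must not race with migration ... */
	read_unlock(&pool->lock);

	/*
	 * Helper: these call sites would compile unchanged even if
	 * pool->lock later became, say, a spinlock or a mutex.
	 */
	pool_read_lock(pool);
	/* ... lookup that must not race with migration ... */
	pool_read_unlock(pool);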
>
> > ---
> >  mm/zsmalloc.c | 63 +++++++++++++++++++++++++++++++++++----------------
> >  1 file changed, 44 insertions(+), 19 deletions(-)
> >
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index 6d0e47f7ae33..47c638df47c5 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -18,7 +18,7 @@
> >  /*
> >   * lock ordering:
> >   *	page_lock
> > - *	pool->migrate_lock
> > + *	pool->lock
> >   *	class->lock
> >   *	zspage->lock
> >   */
> > @@ -224,10 +224,35 @@ struct zs_pool {
> >  	struct work_struct free_work;
> >  #endif
> >  	/* protect page/zspage migration */
> > -	rwlock_t migrate_lock;
> > +	rwlock_t lock;
> >  	atomic_t compaction_in_progress;
> >  };
> >
> > +static void pool_write_unlock(struct zs_pool *pool)
> > +{
> > +	write_unlock(&pool->lock);
> > +}
> > +
> > +static void pool_write_lock(struct zs_pool *pool)
> > +{
> > +	write_lock(&pool->lock);
> > +}
> > +
> > +static void pool_read_unlock(struct zs_pool *pool)
> > +{
> > +	read_unlock(&pool->lock);
> > +}
> > +
> > +static void pool_read_lock(struct zs_pool *pool)
> > +{
> > +	read_lock(&pool->lock);
> > +}
> > +
> > +static bool pool_lock_is_contended(struct zs_pool *pool)
> > +{
> > +	return rwlock_is_contended(&pool->lock);
> > +}
> > +
> >  static inline void zpdesc_set_first(struct zpdesc *zpdesc)
> >  {
> >  	SetPagePrivate(zpdesc_page(zpdesc));
> > @@ -1206,7 +1231,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
> >  	BUG_ON(in_interrupt());
> >
> >  	/* It guarantees it can get zspage from handle safely */
> > -	read_lock(&pool->migrate_lock);
> > +	pool_read_lock(pool);
> >  	obj = handle_to_obj(handle);
> >  	obj_to_location(obj, &zpdesc, &obj_idx);
> >  	zspage = get_zspage(zpdesc);
> > @@ -1218,7 +1243,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
> >  	 * which is smaller granularity.
> >  	 */
> >  	migrate_read_lock(zspage);
> > -	read_unlock(&pool->migrate_lock);
> > +	pool_read_unlock(pool);
> >
> >  	class = zspage_class(pool, zspage);
> >  	off = offset_in_page(class->size * obj_idx);
> > @@ -1450,16 +1475,16 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
> >  		return;
> >
> >  	/*
> > -	 * The pool->migrate_lock protects the race with zpage's migration
> > +	 * The pool->lock protects the race with zpage's migration
> >  	 * so it's safe to get the page from handle.
> >  	 */
> > -	read_lock(&pool->migrate_lock);
> > +	pool_read_lock(pool);
> >  	obj = handle_to_obj(handle);
> >  	obj_to_zpdesc(obj, &f_zpdesc);
> >  	zspage = get_zspage(f_zpdesc);
> >  	class = zspage_class(pool, zspage);
> >  	spin_lock(&class->lock);
> > -	read_unlock(&pool->migrate_lock);
> > +	pool_read_unlock(pool);
> >
> >  	class_stat_sub(class, ZS_OBJS_INUSE, 1);
> >  	obj_free(class->size, obj);
> > @@ -1793,10 +1818,10 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
> >  	pool = zspage->pool;
> >
> >  	/*
> > -	 * The pool migrate_lock protects the race between zpage migration
> > +	 * The pool lock protects the race between zpage migration
> >  	 * and zs_free.
> >  	 */
> > -	write_lock(&pool->migrate_lock);
> > +	pool_write_lock(pool);
> >  	class = zspage_class(pool, zspage);
> >
> >  	/*
> > @@ -1833,7 +1858,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
> >  	 * Since we complete the data copy and set up new zspage structure,
> >  	 * it's okay to release migration_lock.
> >  	 */
> > -	write_unlock(&pool->migrate_lock);
> > +	pool_write_unlock(pool);
> >  	spin_unlock(&class->lock);
> >  	migrate_write_unlock(zspage);
> >
> > @@ -1956,7 +1981,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
> >  	 * protect the race between zpage migration and zs_free
> >  	 * as well as zpage allocation/free
> >  	 */
> > -	write_lock(&pool->migrate_lock);
> > +	pool_write_lock(pool);
> >  	spin_lock(&class->lock);
> >  	while (zs_can_compact(class)) {
> >  		int fg;
> > @@ -1983,14 +2008,14 @@ static unsigned long __zs_compact(struct zs_pool *pool,
> >  		src_zspage = NULL;
> >
> >  		if (get_fullness_group(class, dst_zspage) == ZS_INUSE_RATIO_100
> > -		    || rwlock_is_contended(&pool->migrate_lock)) {
> > +		    || pool_lock_is_contended(pool)) {
> >  			putback_zspage(class, dst_zspage);
> >  			dst_zspage = NULL;
> >
> >  			spin_unlock(&class->lock);
> > -			write_unlock(&pool->migrate_lock);
> > +			pool_write_unlock(pool);
> >  			cond_resched();
> > -			write_lock(&pool->migrate_lock);
> > +			pool_write_lock(pool);
> >  			spin_lock(&class->lock);
> >  		}
> >  	}
> > @@ -2002,7 +2027,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
> >  		putback_zspage(class, dst_zspage);
> >
> >  	spin_unlock(&class->lock);
> > -	write_unlock(&pool->migrate_lock);
> > +	pool_write_unlock(pool);
> >
> >  	return pages_freed;
> >  }
> > @@ -2014,10 +2039,10 @@ unsigned long zs_compact(struct zs_pool *pool)
> >  	unsigned long pages_freed = 0;
> >
> >  	/*
> > -	 * Pool compaction is performed under pool->migrate_lock so it is basically
> > +	 * Pool compaction is performed under pool->lock so it is basically
> >  	 * single-threaded. Having more than one thread in __zs_compact()
> > -	 * will increase pool->migrate_lock contention, which will impact other
> > -	 * zsmalloc operations that need pool->migrate_lock.
> > +	 * will increase pool->lock contention, which will impact other
> > +	 * zsmalloc operations that need pool->lock.
> >  	 */
> >  	if (atomic_xchg(&pool->compaction_in_progress, 1))
> >  		return 0;
> > @@ -2139,7 +2164,7 @@ struct zs_pool *zs_create_pool(const char *name)
> >  		return NULL;
> >
> >  	init_deferred_free(pool);
> > -	rwlock_init(&pool->migrate_lock);
> > +	rwlock_init(&pool->lock);
> >  	atomic_set(&pool->compaction_in_progress, 0);
> >
> >  	pool->name = kstrdup(name, GFP_KERNEL);
> > --
> > 2.48.1.502.g6dc24dfdaf-goog
> >
>
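As an aside for readers skimming the quoted zs_compact() hunk: the
compaction_in_progress check is a trylock-style gate, which is what makes
compaction "basically single-threaded" per the quoted comment.
atomic_xchg() returns the previous value, so only the first caller
observes 0 and proceeds; concurrent callers return immediately instead of
queueing up on pool->lock. A minimal sketch of the pattern (illustrative
only, mirroring the quoted code):

	/* Old value 0 means we won the race and may compact. */
	if (atomic_xchg(&pool->compaction_in_progress, 1))
		return 0;

	/* ... compact under pool->lock via __zs_compact() ... */

	/* Let the next caller in. */
	atomic_set(&pool->compaction_in_progress, 0);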