Date: Wed, 12 Feb 2025 16:18:14 +0000
From: Yosry Ahmed <yosry.ahmed@linux.dev>
To: Sergey Senozhatsky
Cc: Andrew Morton, Kairui Song, Minchan Kim, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v5 10/18] zsmalloc: factor out pool locking helpers
References: <20250212063153.179231-1-senozhatsky@chromium.org>
	<20250212063153.179231-11-senozhatsky@chromium.org>
In-Reply-To: <20250212063153.179231-11-senozhatsky@chromium.org>

On Wed, Feb 12, 2025 at 03:27:08PM +0900, Sergey Senozhatsky wrote:
> We currently have a mix of migrate_{read,write}_lock() helpers
> that lock zspages, but it's zs_pool that actually has a ->migrate_lock,
> access to which is open-coded. Factor out pool migrate locking
> into helpers; the zspage migration locking API will be renamed to
> reduce confusion.
>
> It's worth mentioning that zsmalloc locks sync not only migration,
> but also compaction.
>
> Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>

FWIW I don't see a lot of value in the helpers (renaming the lock is
useful, though). We open-code other locks like the class lock anyway, and
the helpers obscure the underlying lock type without adding much value in
terms of readability/conciseness.
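Just to illustrate what I mean (a rough sketch using the names from this
patch, not a suggested change), at a call site the two styles end up
looking like:

	/* open-coded: the rwlock semantics are visible at the call site */
	read_lock(&pool->lock);
	obj = handle_to_obj(handle);
	read_unlock(&pool->lock);

	/* with the helpers: same behavior, but the lock type is hidden */
	pool_read_lock(pool);
	obj = handle_to_obj(handle);
	pool_read_unlock(pool);

so the wrappers mostly hide information rather than shorten anything.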
> ---
>  mm/zsmalloc.c | 63 +++++++++++++++++++++++++++++++++++----------------
>  1 file changed, 44 insertions(+), 19 deletions(-)
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 6d0e47f7ae33..47c638df47c5 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -18,7 +18,7 @@
>  /*
>   * lock ordering:
>   *	page_lock
> - *	pool->migrate_lock
> + *	pool->lock
>   *	class->lock
>   *	zspage->lock
>   */
> @@ -224,10 +224,35 @@ struct zs_pool {
>  	struct work_struct free_work;
>  #endif
>  	/* protect page/zspage migration */
> -	rwlock_t migrate_lock;
> +	rwlock_t lock;
>  	atomic_t compaction_in_progress;
>  };
>
> +static void pool_write_unlock(struct zs_pool *pool)
> +{
> +	write_unlock(&pool->lock);
> +}
> +
> +static void pool_write_lock(struct zs_pool *pool)
> +{
> +	write_lock(&pool->lock);
> +}
> +
> +static void pool_read_unlock(struct zs_pool *pool)
> +{
> +	read_unlock(&pool->lock);
> +}
> +
> +static void pool_read_lock(struct zs_pool *pool)
> +{
> +	read_lock(&pool->lock);
> +}
> +
> +static bool pool_lock_is_contended(struct zs_pool *pool)
> +{
> +	return rwlock_is_contended(&pool->lock);
> +}
> +
>  static inline void zpdesc_set_first(struct zpdesc *zpdesc)
>  {
>  	SetPagePrivate(zpdesc_page(zpdesc));
> @@ -1206,7 +1231,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
>  	BUG_ON(in_interrupt());
>
>  	/* It guarantees it can get zspage from handle safely */
> -	read_lock(&pool->migrate_lock);
> +	pool_read_lock(pool);
>  	obj = handle_to_obj(handle);
>  	obj_to_location(obj, &zpdesc, &obj_idx);
>  	zspage = get_zspage(zpdesc);
> @@ -1218,7 +1243,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
>  	 * which is smaller granularity.
>  	 */
>  	migrate_read_lock(zspage);
> -	read_unlock(&pool->migrate_lock);
> +	pool_read_unlock(pool);
>
>  	class = zspage_class(pool, zspage);
>  	off = offset_in_page(class->size * obj_idx);
> @@ -1450,16 +1475,16 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
>  		return;
>
>  	/*
> -	 * The pool->migrate_lock protects the race with zpage's migration
> +	 * The pool->lock protects the race with zpage's migration
>  	 * so it's safe to get the page from handle.
>  	 */
> -	read_lock(&pool->migrate_lock);
> +	pool_read_lock(pool);
>  	obj = handle_to_obj(handle);
>  	obj_to_zpdesc(obj, &f_zpdesc);
>  	zspage = get_zspage(f_zpdesc);
>  	class = zspage_class(pool, zspage);
>  	spin_lock(&class->lock);
> -	read_unlock(&pool->migrate_lock);
> +	pool_read_unlock(pool);
>
>  	class_stat_sub(class, ZS_OBJS_INUSE, 1);
>  	obj_free(class->size, obj);
> @@ -1793,10 +1818,10 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
>  	pool = zspage->pool;
>
>  	/*
> -	 * The pool migrate_lock protects the race between zpage migration
> +	 * The pool lock protects the race between zpage migration
>  	 * and zs_free.
>  	 */
> -	write_lock(&pool->migrate_lock);
> +	pool_write_lock(pool);
>  	class = zspage_class(pool, zspage);
>
>  	/*
> @@ -1833,7 +1858,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
>  	 * Since we complete the data copy and set up new zspage structure,
>  	 * it's okay to release migration_lock.
>  	 */
> -	write_unlock(&pool->migrate_lock);
> +	pool_write_unlock(pool);
>  	spin_unlock(&class->lock);
>  	migrate_write_unlock(zspage);
>
> @@ -1956,7 +1981,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>  	 * protect the race between zpage migration and zs_free
>  	 * as well as zpage allocation/free
>  	 */
> -	write_lock(&pool->migrate_lock);
> +	pool_write_lock(pool);
>  	spin_lock(&class->lock);
>  	while (zs_can_compact(class)) {
>  		int fg;
> @@ -1983,14 +2008,14 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>  		src_zspage = NULL;
>
>  		if (get_fullness_group(class, dst_zspage) == ZS_INUSE_RATIO_100
> -		    || rwlock_is_contended(&pool->migrate_lock)) {
> +		    || pool_lock_is_contended(pool)) {
>  			putback_zspage(class, dst_zspage);
>  			dst_zspage = NULL;
>
>  			spin_unlock(&class->lock);
> -			write_unlock(&pool->migrate_lock);
> +			pool_write_unlock(pool);
>  			cond_resched();
> -			write_lock(&pool->migrate_lock);
> +			pool_write_lock(pool);
>  			spin_lock(&class->lock);
>  		}
>  	}
> @@ -2002,7 +2027,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>  		putback_zspage(class, dst_zspage);
>
>  	spin_unlock(&class->lock);
> -	write_unlock(&pool->migrate_lock);
> +	pool_write_unlock(pool);
>
>  	return pages_freed;
>  }
> @@ -2014,10 +2039,10 @@ unsigned long zs_compact(struct zs_pool *pool)
>  	unsigned long pages_freed = 0;
>
>  	/*
> -	 * Pool compaction is performed under pool->migrate_lock so it is basically
> +	 * Pool compaction is performed under pool->lock so it is basically
>  	 * single-threaded. Having more than one thread in __zs_compact()
> -	 * will increase pool->migrate_lock contention, which will impact other
> -	 * zsmalloc operations that need pool->migrate_lock.
> +	 * will increase pool->lock contention, which will impact other
> +	 * zsmalloc operations that need pool->lock.
>  	 */
>  	if (atomic_xchg(&pool->compaction_in_progress, 1))
>  		return 0;
> @@ -2139,7 +2164,7 @@ struct zs_pool *zs_create_pool(const char *name)
>  		return NULL;
>
>  	init_deferred_free(pool);
> -	rwlock_init(&pool->migrate_lock);
> +	rwlock_init(&pool->lock);
>  	atomic_set(&pool->compaction_in_progress, 0);
>
>  	pool->name = kstrdup(name, GFP_KERNEL);
> --
> 2.48.1.502.g6dc24dfdaf-goog
>
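On the "not only migration, but also compaction" note: agreed, and that is
visible in the __zs_compact() backoff above, which (roughly, per this
patch) does

	spin_unlock(&class->lock);
	pool_write_unlock(pool);
	cond_resched();
	pool_write_lock(pool);
	spin_lock(&class->lock);

when the pool lock is contended, so dropping "migrate" from the name does
make the comments and the lock ordering easier to follow.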