From: Yosry Ahmed <yosry.ahmed@linux.dev>
To: Sergey Senozhatsky
Cc: Andrew Morton, Minchan Kim, Johannes Weiner, Nhat Pham, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCHv1 1/6] zsmalloc: factor out pool locking helpers
Date: Wed, 29 Jan 2025 16:59:31 +0000
References: <20250129064853.2210753-1-senozhatsky@chromium.org> <20250129064853.2210753-2-senozhatsky@chromium.org>
In-Reply-To: <20250129064853.2210753-2-senozhatsky@chromium.org>

On Wed, Jan 29, 2025 at 03:43:47PM +0900, Sergey Senozhatsky wrote:
> We currently have a mix of migrate_{read,write}_lock() helpers
> that lock zspages, but it's zs_pool that actually has a ->migrate_lock,
> access to which is open-coded. Factor out pool migrate locking
> into helpers; the zspage migration locking API will be renamed to
> reduce confusion.

But why did we remove "migrate" from the helpers' names if this is the
real migrate lock? It seems like the real problem is how the zspage
lock helpers are named, which is addressed later.

> 
> Signed-off-by: Sergey Senozhatsky
> ---
>  mm/zsmalloc.c | 56 +++++++++++++++++++++++++++++++++++++--------------
>  1 file changed, 41 insertions(+), 15 deletions(-)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 817626a351f8..2f8a2b139919 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -204,7 +204,8 @@ struct link_free {
>  };
>  
>  struct zs_pool {
> -	const char *name;
> +	/* protect page/zspage migration */
> +	rwlock_t migrate_lock;
>  
>  	struct size_class *size_class[ZS_SIZE_CLASSES];
>  	struct kmem_cache *handle_cachep;
> @@ -213,6 +214,7 @@ struct zs_pool {
>  	atomic_long_t pages_allocated;
>  
>  	struct zs_pool_stats stats;
> +	atomic_t compaction_in_progress;
>  
>  	/* Compact classes */
>  	struct shrinker *shrinker;
> @@ -223,11 +225,35 @@ struct zs_pool {
>  #ifdef CONFIG_COMPACTION
>  	struct work_struct free_work;
>  #endif
> -	/* protect page/zspage migration */
> -	rwlock_t migrate_lock;
> -	atomic_t compaction_in_progress;
> +
> +	const char *name;

Are the struct moves relevant to the patch or just to improve
readability? I am generally scared to move hot members around
unnecessarily due to cache-line sharing problems -- although I don't
know if these are really hot.
>  };
>  
> +static void pool_write_unlock(struct zs_pool *pool)
> +{
> +	write_unlock(&pool->migrate_lock);
> +}
> +
> +static void pool_write_lock(struct zs_pool *pool)
> +{
> +	write_lock(&pool->migrate_lock);
> +}
> +
> +static void pool_read_unlock(struct zs_pool *pool)
> +{
> +	read_unlock(&pool->migrate_lock);
> +}
> +
> +static void pool_read_lock(struct zs_pool *pool)
> +{
> +	read_lock(&pool->migrate_lock);
> +}
> +
> +static bool pool_lock_is_contended(struct zs_pool *pool)
> +{
> +	return rwlock_is_contended(&pool->migrate_lock);
> +}
> +
>  static inline void zpdesc_set_first(struct zpdesc *zpdesc)
>  {
>  	SetPagePrivate(zpdesc_page(zpdesc));
> @@ -1206,7 +1232,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
>  	BUG_ON(in_interrupt());
>  
>  	/* It guarantees it can get zspage from handle safely */
> -	read_lock(&pool->migrate_lock);
> +	pool_read_lock(pool);
>  	obj = handle_to_obj(handle);
>  	obj_to_location(obj, &zpdesc, &obj_idx);
>  	zspage = get_zspage(zpdesc);
> @@ -1218,7 +1244,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
>  	 * which is smaller granularity.
>  	 */
>  	migrate_read_lock(zspage);
> -	read_unlock(&pool->migrate_lock);
> +	pool_read_unlock(pool);
>  
>  	class = zspage_class(pool, zspage);
>  	off = offset_in_page(class->size * obj_idx);
> @@ -1453,13 +1479,13 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
>  	 * The pool->migrate_lock protects the race with zpage's migration
>  	 * so it's safe to get the page from handle.
>  	 */
> -	read_lock(&pool->migrate_lock);
> +	pool_read_lock(pool);
>  	obj = handle_to_obj(handle);
>  	obj_to_zpdesc(obj, &f_zpdesc);
>  	zspage = get_zspage(f_zpdesc);
>  	class = zspage_class(pool, zspage);
>  	spin_lock(&class->lock);
> -	read_unlock(&pool->migrate_lock);
> +	pool_read_unlock(pool);
>  
>  	class_stat_sub(class, ZS_OBJS_INUSE, 1);
>  	obj_free(class->size, obj);
> @@ -1796,7 +1822,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
>  	 * The pool migrate_lock protects the race between zpage migration
>  	 * and zs_free.
>  	 */
> -	write_lock(&pool->migrate_lock);
> +	pool_write_lock(pool);
>  	class = zspage_class(pool, zspage);
>  
>  	/*
> @@ -1833,7 +1859,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
>  	 * Since we complete the data copy and set up new zspage structure,
>  	 * it's okay to release migration_lock.
>  	 */
> -	write_unlock(&pool->migrate_lock);
> +	pool_write_unlock(pool);
>  	spin_unlock(&class->lock);
>  	migrate_write_unlock(zspage);
>  
> @@ -1956,7 +1982,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>  	 * protect the race between zpage migration and zs_free
>  	 * as well as zpage allocation/free
>  	 */
> -	write_lock(&pool->migrate_lock);
> +	pool_write_lock(pool);
>  	spin_lock(&class->lock);
>  	while (zs_can_compact(class)) {
>  		int fg;
> @@ -1983,14 +2009,14 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>  		src_zspage = NULL;
>  
>  		if (get_fullness_group(class, dst_zspage) == ZS_INUSE_RATIO_100
> -		    || rwlock_is_contended(&pool->migrate_lock)) {
> +		    || pool_lock_is_contended(pool)) {
>  			putback_zspage(class, dst_zspage);
>  			dst_zspage = NULL;
>  
>  			spin_unlock(&class->lock);
> -			write_unlock(&pool->migrate_lock);
> +			pool_write_unlock(pool);
>  			cond_resched();
> -			write_lock(&pool->migrate_lock);
> +			pool_write_lock(pool);
>  			spin_lock(&class->lock);
>  		}
>  	}
> @@ -2002,7 +2028,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>  		putback_zspage(class, dst_zspage);
>  
>  	spin_unlock(&class->lock);
> -	write_unlock(&pool->migrate_lock);
> +	pool_write_unlock(pool);
>  
>  	return pages_freed;
>  }
> -- 
> 2.48.1.262.g85cc9f2d1e-goog
> 