From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 29 Jan 2025 17:01:27 +0000
From: Yosry Ahmed <yosry.ahmed@linux.dev>
To: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Andrew Morton, Minchan Kim, Johannes Weiner, Nhat Pham,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCHv1 2/6] zsmalloc: factor out size-class locking helpers
References: <20250129064853.2210753-1-senozhatsky@chromium.org>
 <20250129064853.2210753-3-senozhatsky@chromium.org>
In-Reply-To: <20250129064853.2210753-3-senozhatsky@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
On Wed, Jan 29, 2025 at 03:43:48PM +0900, Sergey Senozhatsky wrote:
> Move open-coded size-class locking to dedicated helpers.
> 
> Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>

Reviewed-by: Yosry Ahmed <yosry.ahmed@linux.dev>

> ---
>  mm/zsmalloc.c | 47 ++++++++++++++++++++++++++++-------------------
>  1 file changed, 28 insertions(+), 19 deletions(-)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 2f8a2b139919..0f575307675d 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -254,6 +254,16 @@ static bool pool_lock_is_contended(struct zs_pool *pool)
>  	return rwlock_is_contended(&pool->migrate_lock);
>  }
>  
> +static void size_class_lock(struct size_class *class)
> +{
> +	spin_lock(&class->lock);
> +}
> +
> +static void size_class_unlock(struct size_class *class)
> +{
> +	spin_unlock(&class->lock);
> +}
> +
>  static inline void zpdesc_set_first(struct zpdesc *zpdesc)
>  {
>  	SetPagePrivate(zpdesc_page(zpdesc));
> @@ -614,8 +624,7 @@ static int zs_stats_size_show(struct seq_file *s, void *v)
>  		if (class->index != i)
>  			continue;
>  
> -		spin_lock(&class->lock);
> -
> +		size_class_lock(class);
>  		seq_printf(s, " %5u %5u ", i, class->size);
>  		for (fg = ZS_INUSE_RATIO_10; fg < NR_FULLNESS_GROUPS; fg++) {
>  			inuse_totals[fg] += class_stat_read(class, fg);
> @@ -625,7 +634,7 @@ static int zs_stats_size_show(struct seq_file *s, void *v)
>  		obj_allocated = class_stat_read(class, ZS_OBJS_ALLOCATED);
>  		obj_used = class_stat_read(class, ZS_OBJS_INUSE);
>  		freeable = zs_can_compact(class);
> -		spin_unlock(&class->lock);
> +		size_class_unlock(class);
>  
>  		objs_per_zspage = class->objs_per_zspage;
>  		pages_used = obj_allocated / objs_per_zspage *
> @@ -1400,7 +1409,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
>  	class = pool->size_class[get_size_class_index(size)];
>  
>  	/* class->lock effectively protects the zpage migration */
> -	spin_lock(&class->lock);
> +	size_class_lock(class);
>  	zspage = find_get_zspage(class);
>  	if (likely(zspage)) {
>  		obj_malloc(pool, zspage, handle);
> @@ -1411,7 +1420,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
>  		goto out;
>  	}
>  
> -	spin_unlock(&class->lock);
> +	size_class_unlock(class);
>  
>  	zspage = alloc_zspage(pool, class, gfp);
>  	if (!zspage) {
> @@ -1419,7 +1428,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
>  		return (unsigned long)ERR_PTR(-ENOMEM);
>  	}
>  
> -	spin_lock(&class->lock);
> +	size_class_lock(class);
>  	obj_malloc(pool, zspage, handle);
>  	newfg = get_fullness_group(class, zspage);
>  	insert_zspage(class, zspage, newfg);
> @@ -1430,7 +1439,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
>  	/* We completely set up zspage so mark them as movable */
>  	SetZsPageMovable(pool, zspage);
>  out:
> -	spin_unlock(&class->lock);
> +	size_class_unlock(class);
>  
>  	return handle;
>  }
> @@ -1484,7 +1493,7 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
>  	obj_to_zpdesc(obj, &f_zpdesc);
>  	zspage = get_zspage(f_zpdesc);
>  	class = zspage_class(pool, zspage);
> -	spin_lock(&class->lock);
> +	size_class_lock(class);
>  	pool_read_unlock(pool);
>  
>  	class_stat_sub(class, ZS_OBJS_INUSE, 1);
> @@ -1494,7 +1503,7 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
>  	if (fullness == ZS_INUSE_RATIO_0)
>  		free_zspage(pool, class, zspage);
>  
> -	spin_unlock(&class->lock);
> +	size_class_unlock(class);
>  	cache_free_handle(pool, handle);
>  }
>  EXPORT_SYMBOL_GPL(zs_free);
> @@ -1828,7 +1837,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
>  	/*
>  	 * the class lock protects zpage alloc/free in the zspage.
>  	 */
> -	spin_lock(&class->lock);
> +	size_class_lock(class);
>  	/* the migrate_write_lock protects zpage access via zs_map_object */
>  	migrate_write_lock(zspage);
>  
> @@ -1860,7 +1869,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
>  	 * it's okay to release migration_lock.
>  	 */
>  	pool_write_unlock(pool);
> -	spin_unlock(&class->lock);
> +	size_class_unlock(class);
>  	migrate_write_unlock(zspage);
>  
>  	zpdesc_get(newzpdesc);
> @@ -1904,10 +1913,10 @@ static void async_free_zspage(struct work_struct *work)
>  		if (class->index != i)
>  			continue;
>  
> -		spin_lock(&class->lock);
> +		size_class_lock(class);
>  		list_splice_init(&class->fullness_list[ZS_INUSE_RATIO_0],
>  				 &free_pages);
> -		spin_unlock(&class->lock);
> +		size_class_unlock(class);
>  	}
>  
>  	list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
> @@ -1915,10 +1924,10 @@ static void async_free_zspage(struct work_struct *work)
>  		lock_zspage(zspage);
>  
>  		class = zspage_class(pool, zspage);
> -		spin_lock(&class->lock);
> +		size_class_lock(class);
>  		class_stat_sub(class, ZS_INUSE_RATIO_0, 1);
>  		__free_zspage(pool, class, zspage);
> -		spin_unlock(&class->lock);
> +		size_class_unlock(class);
>  	}
> };
> 
> @@ -1983,7 +1992,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>  	 * as well as zpage allocation/free
>  	 */
>  	pool_write_lock(pool);
> -	spin_lock(&class->lock);
> +	size_class_lock(class);
>  	while (zs_can_compact(class)) {
>  		int fg;
>  
> @@ -2013,11 +2022,11 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>  			putback_zspage(class, dst_zspage);
>  			dst_zspage = NULL;
>  
> -			spin_unlock(&class->lock);
> +			size_class_unlock(class);
>  			pool_write_unlock(pool);
>  			cond_resched();
>  			pool_write_lock(pool);
> -			spin_lock(&class->lock);
> +			size_class_lock(class);
>  		}
>  	}
>  
> @@ -2027,7 +2036,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>  	if (dst_zspage)
>  		putback_zspage(class, dst_zspage);
>  
> -	spin_unlock(&class->lock);
> +	size_class_unlock(class);
>  	pool_write_unlock(pool);
>  
>  	return pages_freed;
> -- 
> 2.48.1.262.g85cc9f2d1e-goog
> 