From: Yosry Ahmed <yosry.ahmed@linux.dev>
To: Sergey Senozhatsky
Cc: Andrew Morton, Minchan Kim, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCHv4 12/17] zsmalloc: factor out pool locking helpers
Date: Fri, 31 Jan 2025 15:46:56 +0000
References: <20250131090658.3386285-1-senozhatsky@chromium.org> <20250131090658.3386285-13-senozhatsky@chromium.org>
In-Reply-To: <20250131090658.3386285-13-senozhatsky@chromium.org>

On Fri, Jan 31, 2025 at 06:06:11PM +0900, Sergey Senozhatsky wrote:
> We currently have a mix of migrate_{read,write}_lock() helpers
> that lock zspages, but it's zs_pool that actually has a ->migrate_lock,
> access to which is open-coded. Factor out pool migrate locking
> into helpers; the zspage migration locking API will be renamed to
> reduce confusion.
>
> It's worth mentioning that the zsmalloc locks synchronize not only
> migration, but also compaction.
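(A quick illustration for readers skimming the diff below: the call-site
conversion itself is mechanical. Here is a minimal before/after sketch
of the pattern, using the read side of zs_map_object() as the example;
all names are the ones the patch itself introduces or removes.)

	/* before: the pool rwlock is open-coded at each call site */
	read_lock(&pool->migrate_lock);
	obj = handle_to_obj(handle);
	/* ... */
	read_unlock(&pool->migrate_lock);

	/* after: pool locking goes through named helpers */
	pool_read_lock(pool);
	obj = handle_to_obj(handle);
	/* ... */
	pool_read_unlock(pool);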
>
> Signed-off-by: Sergey Senozhatsky
> Cc: Yosry Ahmed
> ---
>  mm/zsmalloc.c | 69 +++++++++++++++++++++++++++++++++++----------------
>  1 file changed, 47 insertions(+), 22 deletions(-)
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 817626a351f8..c129596ab960 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -18,7 +18,7 @@
>  /*
>   * lock ordering:
>   *	page_lock
> - *	pool->migrate_lock
> + *	pool->lock
>   *	class->lock
>   *	zspage->lock
>   */
> @@ -224,10 +224,35 @@ struct zs_pool {
>  	struct work_struct free_work;
>  #endif
>  	/* protect page/zspage migration */
> -	rwlock_t migrate_lock;
> +	rwlock_t lock;
>  	atomic_t compaction_in_progress;
>  };
>
> +static void pool_write_unlock(struct zs_pool *pool)
> +{
> +	write_unlock(&pool->lock);
> +}
> +
> +static void pool_write_lock(struct zs_pool *pool)
> +{
> +	write_lock(&pool->lock);
> +}
> +
> +static void pool_read_unlock(struct zs_pool *pool)
> +{
> +	read_unlock(&pool->lock);
> +}
> +
> +static void pool_read_lock(struct zs_pool *pool)
> +{
> +	read_lock(&pool->lock);
> +}
> +
> +static bool pool_lock_is_contended(struct zs_pool *pool)
> +{
> +	return rwlock_is_contended(&pool->lock);
> +}
> +
>  static inline void zpdesc_set_first(struct zpdesc *zpdesc)
>  {
>  	SetPagePrivate(zpdesc_page(zpdesc));
> @@ -290,7 +315,7 @@ static bool ZsHugePage(struct zspage *zspage)
>  	return zspage->huge;
>  }
>
> -static void migrate_lock_init(struct zspage *zspage);
> +static void lock_init(struct zspage *zspage);

Seems like this change slipped in here, with a s/migrate_lock/lock
replacement if I have to make a guess :P

>  static void migrate_read_lock(struct zspage *zspage);
>  static void migrate_read_unlock(struct zspage *zspage);
>  static void migrate_write_lock(struct zspage *zspage);
> @@ -992,7 +1017,7 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
>  		return NULL;
>
>  	zspage->magic = ZSPAGE_MAGIC;
> -	migrate_lock_init(zspage);
> +	lock_init(zspage);
>
>  	for (i = 0; i < class->pages_per_zspage; i++) {
>  		struct zpdesc *zpdesc;
> @@ -1206,7 +1231,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
>  	BUG_ON(in_interrupt());
>
>  	/* It guarantees it can get zspage from handle safely */
> -	read_lock(&pool->migrate_lock);
> +	pool_read_lock(pool);
>  	obj = handle_to_obj(handle);
>  	obj_to_location(obj, &zpdesc, &obj_idx);
>  	zspage = get_zspage(zpdesc);
> @@ -1218,7 +1243,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
>  	 * which is smaller granularity.
>  	 */
>  	migrate_read_lock(zspage);
> -	read_unlock(&pool->migrate_lock);
> +	pool_read_unlock(pool);
>
>  	class = zspage_class(pool, zspage);
>  	off = offset_in_page(class->size * obj_idx);
> @@ -1450,16 +1475,16 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
>  		return;
>
>  	/*
> -	 * The pool->migrate_lock protects the race with zpage's migration
> +	 * The pool->lock protects the race with zpage's migration
>  	 * so it's safe to get the page from handle.
>  	 */
> -	read_lock(&pool->migrate_lock);
> +	pool_read_lock(pool);
>  	obj = handle_to_obj(handle);
>  	obj_to_zpdesc(obj, &f_zpdesc);
>  	zspage = get_zspage(f_zpdesc);
>  	class = zspage_class(pool, zspage);
>  	spin_lock(&class->lock);
> -	read_unlock(&pool->migrate_lock);
> +	pool_read_unlock(pool);
>
>  	class_stat_sub(class, ZS_OBJS_INUSE, 1);
>  	obj_free(class->size, obj);
> @@ -1703,7 +1728,7 @@ static void lock_zspage(struct zspage *zspage)
>  }
>  #endif /* CONFIG_COMPACTION */
>
> -static void migrate_lock_init(struct zspage *zspage)
> +static void lock_init(struct zspage *zspage)
>  {
>  	rwlock_init(&zspage->lock);
>  }
> @@ -1793,10 +1818,10 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
>  	pool = zspage->pool;
>
>  	/*
> -	 * The pool migrate_lock protects the race between zpage migration
> +	 * The pool lock protects the race between zpage migration
>  	 * and zs_free.
>  	 */
> -	write_lock(&pool->migrate_lock);
> +	pool_write_lock(pool);
>  	class = zspage_class(pool, zspage);
>
>  	/*
> @@ -1833,7 +1858,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
>  	 * Since we complete the data copy and set up new zspage structure,
>  	 * it's okay to release migration_lock.
>  	 */
> -	write_unlock(&pool->migrate_lock);
> +	pool_write_unlock(pool);
>  	spin_unlock(&class->lock);
>  	migrate_write_unlock(zspage);
>
> @@ -1956,7 +1981,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>  	 * protect the race between zpage migration and zs_free
>  	 * as well as zpage allocation/free
>  	 */
> -	write_lock(&pool->migrate_lock);
> +	pool_write_lock(pool);
>  	spin_lock(&class->lock);
>  	while (zs_can_compact(class)) {
>  		int fg;
> @@ -1983,14 +2008,14 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>  		src_zspage = NULL;
>
>  		if (get_fullness_group(class, dst_zspage) == ZS_INUSE_RATIO_100
> -		    || rwlock_is_contended(&pool->migrate_lock)) {
> +		    || pool_lock_is_contended(pool)) {
>  			putback_zspage(class, dst_zspage);
>  			dst_zspage = NULL;
>
>  			spin_unlock(&class->lock);
> -			write_unlock(&pool->migrate_lock);
> +			pool_write_unlock(pool);
>  			cond_resched();
> -			write_lock(&pool->migrate_lock);
> +			pool_write_lock(pool);
>  			spin_lock(&class->lock);
>  		}
>  	}
> @@ -2002,7 +2027,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>  		putback_zspage(class, dst_zspage);
>
>  	spin_unlock(&class->lock);
> -	write_unlock(&pool->migrate_lock);
> +	pool_write_unlock(pool);
>
>  	return pages_freed;
>  }
> @@ -2014,10 +2039,10 @@ unsigned long zs_compact(struct zs_pool *pool)
>  	unsigned long pages_freed = 0;
>
>  	/*
> -	 * Pool compaction is performed under pool->migrate_lock so it is basically
> +	 * Pool compaction is performed under pool->lock so it is basically
>  	 * single-threaded. Having more than one thread in __zs_compact()
> -	 * will increase pool->migrate_lock contention, which will impact other
> -	 * zsmalloc operations that need pool->migrate_lock.
> +	 * will increase pool->lock contention, which will impact other
> +	 * zsmalloc operations that need pool->lock.
>  	 */
>  	if (atomic_xchg(&pool->compaction_in_progress, 1))
>  		return 0;
> @@ -2139,7 +2164,7 @@ struct zs_pool *zs_create_pool(const char *name)
>  		return NULL;
>
>  	init_deferred_free(pool);
> -	rwlock_init(&pool->migrate_lock);
> +	rwlock_init(&pool->lock);
>  	atomic_set(&pool->compaction_in_progress, 0);
>
>  	pool->name = kstrdup(name, GFP_KERNEL);
> --
> 2.48.1.362.g079036d154-goog
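(Side note, since the locking flow can be hard to follow in diff form:
the compaction backoff above boils down to the sketch below. This is a
condensed illustration only, not the full function; the real
__zs_compact() also re-checks the destination zspage's fullness and
manages the src/dst zspages.)

	pool_write_lock(pool);
	spin_lock(&class->lock);
	while (zs_can_compact(class)) {
		/* ... migrate objects from src_zspage to dst_zspage ... */

		if (pool_lock_is_contended(pool)) {
			/* drop both locks so other pool users can make progress */
			spin_unlock(&class->lock);
			pool_write_unlock(pool);
			cond_resched();
			pool_write_lock(pool);
			spin_lock(&class->lock);
		}
	}
	spin_unlock(&class->lock);
	pool_write_unlock(pool);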