From: Yosry Ahmed <yosry.ahmed@linux.dev>
To: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Kairui Song <ryncsn@gmail.com>, Minchan Kim <minchan@kernel.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v5 10/18] zsmalloc: factor out pool locking helpers
Date: Wed, 12 Feb 2025 16:19:07 +0000
Message-ID: <Z6zJ-3exkKoj5n2D@google.com>
In-Reply-To: <Z6zJxvLbRQ6pKtue@google.com>

On Wed, Feb 12, 2025 at 04:18:14PM +0000, Yosry Ahmed wrote:
> On Wed, Feb 12, 2025 at 03:27:08PM +0900, Sergey Senozhatsky wrote:
> > We currently have a mix of migrate_{read,write}_lock() helpers
> > that lock zspages, but it's zs_pool that actually has a ->migrate_lock,
> > access to which is open-coded.  Factor out pool migrate locking
> > into helpers; the zspage migration locking API will be renamed
> > to reduce confusion.
> > 
> > It's worth mentioning that zsmalloc locks synchronize not only
> > migration, but also compaction.
> > 
> > Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
> 
> FWIW I don't see a lot of value in the helpers (renaming the lock is
> useful, though). We open-code other locks, like the class lock, anyway,
> and the helpers obscure the underlying lock type without adding much
> value in terms of readability/conciseness.

We use helpers for the class lock in the following change, but my point
stands for that too.
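
Concretely, the comparison is between the open-coded form and the
wrapper (a sketch using the names from the patch, not a verbatim
excerpt):

	/* open-coded: the rwlock type is visible at the call site */
	read_lock(&pool->lock);
	...
	read_unlock(&pool->lock);

	/* wrapped: more compact, but the lock flavor is hidden */
	pool_read_lock(pool);
	...
	pool_read_unlock(pool);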

> 
> > ---
> >  mm/zsmalloc.c | 63 +++++++++++++++++++++++++++++++++++----------------
> >  1 file changed, 44 insertions(+), 19 deletions(-)
> > 
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index 6d0e47f7ae33..47c638df47c5 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -18,7 +18,7 @@
> >  /*
> >   * lock ordering:
> >   *	page_lock
> > - *	pool->migrate_lock
> > + *	pool->lock
> >   *	class->lock
> >   *	zspage->lock
> >   */
> > @@ -224,10 +224,35 @@ struct zs_pool {
> >  	struct work_struct free_work;
> >  #endif
> >  	/* protect page/zspage migration */
> > -	rwlock_t migrate_lock;
> > +	rwlock_t lock;
> >  	atomic_t compaction_in_progress;
> >  };
> >  
> > +static void pool_write_unlock(struct zs_pool *pool)
> > +{
> > +	write_unlock(&pool->lock);
> > +}
> > +
> > +static void pool_write_lock(struct zs_pool *pool)
> > +{
> > +	write_lock(&pool->lock);
> > +}
> > +
> > +static void pool_read_unlock(struct zs_pool *pool)
> > +{
> > +	read_unlock(&pool->lock);
> > +}
> > +
> > +static void pool_read_lock(struct zs_pool *pool)
> > +{
> > +	read_lock(&pool->lock);
> > +}
> > +
> > +static bool pool_lock_is_contended(struct zs_pool *pool)
> > +{
> > +	return rwlock_is_contended(&pool->lock);
> > +}
> > +
> >  static inline void zpdesc_set_first(struct zpdesc *zpdesc)
> >  {
> >  	SetPagePrivate(zpdesc_page(zpdesc));
> > @@ -1206,7 +1231,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
> >  	BUG_ON(in_interrupt());
> >  
> >  	/* It guarantees it can get zspage from handle safely */
> > -	read_lock(&pool->migrate_lock);
> > +	pool_read_lock(pool);
> >  	obj = handle_to_obj(handle);
> >  	obj_to_location(obj, &zpdesc, &obj_idx);
> >  	zspage = get_zspage(zpdesc);
> > @@ -1218,7 +1243,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
> >  	 * which is smaller granularity.
> >  	 */
> >  	migrate_read_lock(zspage);
> > -	read_unlock(&pool->migrate_lock);
> > +	pool_read_unlock(pool);
> >  
> >  	class = zspage_class(pool, zspage);
> >  	off = offset_in_page(class->size * obj_idx);
> > @@ -1450,16 +1475,16 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
> >  		return;
> >  
> >  	/*
> > -	 * The pool->migrate_lock protects the race with zpage's migration
> > +	 * The pool->lock protects the race with zpage's migration
> >  	 * so it's safe to get the page from handle.
> >  	 */
> > -	read_lock(&pool->migrate_lock);
> > +	pool_read_lock(pool);
> >  	obj = handle_to_obj(handle);
> >  	obj_to_zpdesc(obj, &f_zpdesc);
> >  	zspage = get_zspage(f_zpdesc);
> >  	class = zspage_class(pool, zspage);
> >  	spin_lock(&class->lock);
> > -	read_unlock(&pool->migrate_lock);
> > +	pool_read_unlock(pool);
> >  
> >  	class_stat_sub(class, ZS_OBJS_INUSE, 1);
> >  	obj_free(class->size, obj);
> > @@ -1793,10 +1818,10 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
> >  	pool = zspage->pool;
> >  
> >  	/*
> > -	 * The pool migrate_lock protects the race between zpage migration
> > +	 * The pool lock protects the race between zpage migration
> >  	 * and zs_free.
> >  	 */
> > -	write_lock(&pool->migrate_lock);
> > +	pool_write_lock(pool);
> >  	class = zspage_class(pool, zspage);
> >  
> >  	/*
> > @@ -1833,7 +1858,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
> >  	 * Since we complete the data copy and set up new zspage structure,
> >  	 * it's okay to release migration_lock.
> >  	 */
> > -	write_unlock(&pool->migrate_lock);
> > +	pool_write_unlock(pool);
> >  	spin_unlock(&class->lock);
> >  	migrate_write_unlock(zspage);
> >  
> > @@ -1956,7 +1981,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
> >  	 * protect the race between zpage migration and zs_free
> >  	 * as well as zpage allocation/free
> >  	 */
> > -	write_lock(&pool->migrate_lock);
> > +	pool_write_lock(pool);
> >  	spin_lock(&class->lock);
> >  	while (zs_can_compact(class)) {
> >  		int fg;
> > @@ -1983,14 +2008,14 @@ static unsigned long __zs_compact(struct zs_pool *pool,
> >  		src_zspage = NULL;
> >  
> >  		if (get_fullness_group(class, dst_zspage) == ZS_INUSE_RATIO_100
> > -		    || rwlock_is_contended(&pool->migrate_lock)) {
> > +		    || pool_lock_is_contended(pool)) {
> >  			putback_zspage(class, dst_zspage);
> >  			dst_zspage = NULL;
> >  
> >  			spin_unlock(&class->lock);
> > -			write_unlock(&pool->migrate_lock);
> > +			pool_write_unlock(pool);
> >  			cond_resched();
> > -			write_lock(&pool->migrate_lock);
> > +			pool_write_lock(pool);
> >  			spin_lock(&class->lock);
> >  		}
> >  	}
> > @@ -2002,7 +2027,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
> >  		putback_zspage(class, dst_zspage);
> >  
> >  	spin_unlock(&class->lock);
> > -	write_unlock(&pool->migrate_lock);
> > +	pool_write_unlock(pool);
> >  
> >  	return pages_freed;
> >  }
> > @@ -2014,10 +2039,10 @@ unsigned long zs_compact(struct zs_pool *pool)
> >  	unsigned long pages_freed = 0;
> >  
> >  	/*
> > -	 * Pool compaction is performed under pool->migrate_lock so it is basically
> > +	 * Pool compaction is performed under pool->lock so it is basically
> >  	 * single-threaded. Having more than one thread in __zs_compact()
> > -	 * will increase pool->migrate_lock contention, which will impact other
> > -	 * zsmalloc operations that need pool->migrate_lock.
> > +	 * will increase pool->lock contention, which will impact other
> > +	 * zsmalloc operations that need pool->lock.
> >  	 */
> >  	if (atomic_xchg(&pool->compaction_in_progress, 1))
> >  		return 0;
> > @@ -2139,7 +2164,7 @@ struct zs_pool *zs_create_pool(const char *name)
> >  		return NULL;
> >  
> >  	init_deferred_free(pool);
> > -	rwlock_init(&pool->migrate_lock);
> > +	rwlock_init(&pool->lock);
> >  	atomic_set(&pool->compaction_in_progress, 0);
> >  
> >  	pool->name = kstrdup(name, GFP_KERNEL);
> > -- 
> > 2.48.1.502.g6dc24dfdaf-goog
> > 
> 
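
For reference, the lock handoff this patch keeps in zs_map_object() and
zs_free() looks like this with the new helpers (a minimal sketch built
from the hunks above; error handling and the map/unmap details are
omitted):

	pool_read_lock(pool);		/* handle -> zspage lookup is stable */
	obj = handle_to_obj(handle);
	obj_to_location(obj, &zpdesc, &obj_idx);
	zspage = get_zspage(zpdesc);
	migrate_read_lock(zspage);	/* take the finer-grained zspage lock */
	pool_read_unlock(pool);		/* the zspage can no longer migrate */

	/* ... access the object under the zspage lock ... */

	migrate_read_unlock(zspage);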

