linux-mm.kvack.org archive mirror
From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Hillf Danton <hdanton@sina.com>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>,
	 Yosry Ahmed <yosry.ahmed@linux.dev>,
	Kairui Song <ryncsn@gmail.com>, Minchan Kim <minchan@kernel.org>,
	 Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v5 12/18] zsmalloc: make zspage lock preemptible
Date: Thu, 13 Feb 2025 21:29:01 +0900	[thread overview]
Message-ID: <uuejsxdilozxhallkev5tspm6kfpe47lgkoonlubnauwj4ckvm@iui2n2g56cat> (raw)
In-Reply-To: <20250213113248.2225-1-hdanton@sina.com>

On (25/02/13 19:32), Hillf Danton wrote:
[..]
> > +static void zspage_read_lock(struct zspage *zspage)
> > +{
> > +	atomic_t *lock = &zspage->lock;
> > +	int old = atomic_read_acquire(lock);
> > +
> > +#ifdef CONFIG_DEBUG_LOCK_ALLOC
> > +	rwsem_acquire_read(&zspage->lockdep_map, 0, 0, _RET_IP_);
> > +#endif
> > +
> > +	do {
> > +		if (old == ZS_PAGE_WRLOCKED) {
> > +			cpu_relax();
> > +			old = atomic_read_acquire(lock);
> > +			continue;
> > +		}
> > +	} while (!atomic_try_cmpxchg_acquire(lock, &old, old + 1));
> 
> Given mcs_spinlock, inventing spinlock in 2025 sounds no good.
> See below for the spinlock version.

I should have sent this series in 2024, when inventing a spinlock
sounded good :)

> struct zspage_lock {
> 	spinlock_t	lock;
> 	int		cnt;
> 	struct lockdep_map lockdep_map;
> };
> 
> static __must_check bool zspage_write_trylock(struct zspage_lock *zl)
> {
> 	spin_lock(&zl->lock);
> 	if (zl->cnt == ZS_PAGE_UNLOCKED) {
> 		// zl->cnt = ZS_PAGE_WRLOCKED;
> 		rwsem_acquire(&zl->lockdep_map, 0, 1, _RET_IP_);
> 		return true;
> 	}
> 	spin_unlock(&zl->lock);
> 	return false;
> }
> 
> static void zspage_write_unlock(struct zspage_lock *zl)
> {
> 	rwsem_release(&zl->lockdep_map, _RET_IP_);
> 	spin_unlock(&zl->lock);
> }
> 
> static void zspage_read_lock(struct zspage_lock *zl)
> {
> 	rwsem_acquire_read(&zl->lockdep_map, 0, 0, _RET_IP_);
> 
> 	spin_lock(&zl->lock);
> 	zl->cnt++;
> 	spin_unlock(&zl->lock);
> }
> 
> static void zspage_read_unlock(struct zspage_lock *zl)
> {
> 	rwsem_release(&zl->lockdep_map, _RET_IP_);
> 
> 	spin_lock(&zl->lock);
> 	zl->cnt--;
> 	spin_unlock(&zl->lock);
> }

I see, yeah, I can pick it up, thanks.  A couple of *minor* things I can
think of.  First, in the current implementation I also track LOCK_STAT
(lock-contended/lock-acquired), something like

static inline void __read_lock(struct zspage *zspage)
{
        atomic_t *lock = &zspage->lock;
        int old = atomic_read_acquire(lock);

        rwsem_acquire_read(&zspage->dep_map, 0, 0, _RET_IP_);

        do {
                if (old == ZS_PAGE_WRLOCKED) {
                        lock_contended(&zspage->dep_map, _RET_IP_);
                        cpu_relax();
                        old = atomic_read_acquire(lock);
                        continue;
                }
        } while (!atomic_try_cmpxchg_acquire(lock, &old, old + 1));

        lock_acquired(&zspage->dep_map, _RET_IP_);
}


I'll add lock-stat to zsl, but it's worth mentioning that zsl "splits"
the stats into zsl spin-lock's dep_map and zsl's own dep_map:

                              class name    con-bounces    contentions   waittime-min   waittime-max waittime-total   waittime-avg    acq-bounces   acquisitions   holdtime-min   holdtime-max holdtime-total   holdtime-avg
                          zspage->lock-R:             0              0           0.00           0.00           0.00           0.00              1              2           6.19          11.61          17.80           8.90
                       &zspage->zsl.lock:             0              0           0.00           0.00           0.00           0.00           5457        1330106           0.10         118.53      174917.46           0.13

That is, quite likely, fine.  One can just add the numbers, I assume.
Second, we'll be carrying around two dep_maps per zsl in lockdep builds
now, but, again, that is likely not a problem, as sizeof(struct lockdep_map)
isn't too large (around 48 bytes).



Thread overview: 40+ messages
2025-02-12  6:26 [PATCH v5 00/18] zsmalloc/zram: there be preemption Sergey Senozhatsky
2025-02-12  6:26 ` [PATCH v5 01/18] zram: sleepable entry locking Sergey Senozhatsky
2025-02-13  0:08   ` Andrew Morton
2025-02-13  0:52     ` Sergey Senozhatsky
2025-02-13  1:42       ` Sergey Senozhatsky
2025-02-13  8:49         ` Sergey Senozhatsky
2025-02-12  6:27 ` [PATCH v5 02/18] zram: permit preemption with active compression stream Sergey Senozhatsky
2025-02-12 16:01   ` Yosry Ahmed
2025-02-13  1:04     ` Sergey Senozhatsky
2025-02-12  6:27 ` [PATCH v5 03/18] zram: remove crypto include Sergey Senozhatsky
2025-02-12 16:13   ` Yosry Ahmed
2025-02-13  0:53     ` Sergey Senozhatsky
2025-02-12  6:27 ` [PATCH v5 04/18] zram: remove max_comp_streams device attr Sergey Senozhatsky
2025-02-12  6:27 ` [PATCH v5 05/18] zram: remove two-staged handle allocation Sergey Senozhatsky
2025-02-12  6:27 ` [PATCH v5 06/18] zram: remove writestall zram_stats member Sergey Senozhatsky
2025-02-12  6:27 ` [PATCH v5 07/18] zram: limit max recompress prio to num_active_comps Sergey Senozhatsky
2025-02-12  6:27 ` [PATCH v5 08/18] zram: filter out recomp targets based on priority Sergey Senozhatsky
2025-02-12  6:27 ` [PATCH v5 09/18] zram: rework recompression loop Sergey Senozhatsky
2025-02-12  6:27 ` [PATCH v5 10/18] zsmalloc: factor out pool locking helpers Sergey Senozhatsky
2025-02-12 16:18   ` Yosry Ahmed
2025-02-12 16:19     ` Yosry Ahmed
2025-02-13  0:57     ` Sergey Senozhatsky
2025-02-13  1:12       ` Yosry Ahmed
2025-02-13  2:54         ` Sergey Senozhatsky
2025-02-12  6:27 ` [PATCH v5 11/18] zsmalloc: factor out size-class " Sergey Senozhatsky
2025-02-12  6:27 ` [PATCH v5 12/18] zsmalloc: make zspage lock preemptible Sergey Senozhatsky
2025-02-12 17:14   ` Yosry Ahmed
2025-02-13  1:20     ` Sergey Senozhatsky
2025-02-13  1:31       ` Yosry Ahmed
2025-02-13  1:53         ` Sergey Senozhatsky
2025-02-13 11:32   ` Hillf Danton
2025-02-13 12:29     ` Sergey Senozhatsky [this message]
2025-02-12  6:27 ` [PATCH v5 13/18] zsmalloc: introduce new object mapping API Sergey Senozhatsky
2025-02-12  6:27 ` [PATCH v5 14/18] zram: switch to new zsmalloc " Sergey Senozhatsky
2025-02-12  6:27 ` [PATCH v5 15/18] zram: permit reclaim in zstd custom allocator Sergey Senozhatsky
2025-02-12  6:27 ` [PATCH v5 16/18] zram: do not leak page on recompress_store error path Sergey Senozhatsky
2025-02-12  6:27 ` [PATCH v5 17/18] zram: do not leak page on writeback_store " Sergey Senozhatsky
2025-02-12  6:27 ` [PATCH v5 18/18] zram: add might_sleep to zcomp API Sergey Senozhatsky
2025-02-13  0:09 ` [PATCH v5 00/18] zsmalloc/zram: there be preemption Andrew Morton
2025-02-13  0:51   ` Sergey Senozhatsky
