From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Yosry Ahmed <yosry.ahmed@linux.dev>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>,
Kairui Song <ryncsn@gmail.com>,
Andrew Morton <akpm@linux-foundation.org>,
Minchan Kim <minchan@kernel.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCHv4 02/17] zram: do not use per-CPU compression streams
Date: Fri, 7 Feb 2025 11:56:01 +0900
Message-ID: <n3k4c6qurl7vuqenxhmtxtitkeszohrcpauobwxxc4cpgbfujg@jxrinduvof6p>
In-Reply-To: <Z6TgSaYIf1DnOszP@google.com>
On (25/02/06 16:16), Yosry Ahmed wrote:
> > So for spin-lock contention - yes, but that lock really should not
> > be so visible. Other than that we limit the number of compression
> > streams to the number of the CPUs and permit preemption, so it should
> > be the same as the "preemptible per-CPU" streams, roughly.
>
> I think one other problem is that with a pool of streams guarded by a
> single lock all CPUs have to be serialized on that lock, even if there's
> enough streams for all CPUs in theory.
Yes, and at the same time it only guards a list_first_entry(), which
is not exceptionally expensive.  Yet, somehow, it still showed up on
Kairui's radar.
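
For the record, the serialized path is roughly this shape (a
paraphrase with made-up field names, not the exact zram code):

static struct zcomp_strm *zcomp_strm_get_pooled(struct zcomp *comp)
{
	struct zcomp_strm *zstrm;

	spin_lock(&comp->strm_lock);
	while (list_empty(&comp->idle_strm)) {
		spin_unlock(&comp->strm_lock);
		/* sleep until somebody puts a stream back */
		wait_event(comp->strm_wait,
			   !list_empty(&comp->idle_strm));
		spin_lock(&comp->strm_lock);
	}
	zstrm = list_first_entry(&comp->idle_strm,
				 struct zcomp_strm, entry);
	list_del(&zstrm->entry);
	spin_unlock(&comp->strm_lock);
	return zstrm;
}

Every stream-get goes through that one spin-lock, even when there are
enough idle streams for every CPU.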
I think there was also a problem with how on-demand streams were
allocated - GFP_KERNEL allocations from the reclaim path, which can
recurse into reclaim and deadlock.
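
That is, something of this shape (an illustrative sketch, not the
actual zram code - the function name is made up and the real stream
also carries per-algorithm context).  It runs at the bottom of the
swap-out path, so it has to use reclaim-safe flags:

/*
 * Illustrative only.  This can be reached from direct reclaim, so
 * a plain GFP_KERNEL allocation here may itself enter reclaim and
 * wait on the very zram device we are writing to.
 */
static struct zcomp_strm *zcomp_strm_alloc_ondemand(void)
{
	struct zcomp_strm *zstrm;

	zstrm = kzalloc(sizeof(*zstrm), GFP_NOIO | __GFP_NOWARN);
	if (!zstrm)
		return NULL;

	/* 2 pages: worst case for incompressible data */
	zstrm->buffer = __vmalloc(2 * PAGE_SIZE, GFP_NOIO | __GFP_ZERO);
	if (!zstrm->buffer) {
		kfree(zstrm);
		return NULL;
	}
	return zstrm;
}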
> > The difference, perhaps, is that we don't pre-allocate streams, but
> > allocate only as needed. This has two sides: one side is that later
> > allocations can fail, but the other side is that we don't allocate
> > streams that we don't use. Especially secondary streams (priority 1
> > and 2, which are used for recompression). I didn't know it was possible
> > to use per-CPU data and still have preemption enabled at the same time.
> > So I'm not opposed to the idea of still having per-CPU streams and
> > doing what the zswap folks did.
>
> Note that it's not a free lunch. If preemption is allowed, there is
> nothing keeping the CPU whose data you're using online, and it can
> be offlined. I see that zcomp_cpu_dead() would free the compression
> stream from under its user in this case.
Yes, I took the same approach as you did in zswap - we hold the mutex
that cpu-dead blocks on for as long as the stream is being used:
struct zcomp_strm *zcomp_stream_get(struct zcomp *comp)
{
	for (;;) {
		struct zcomp_strm *zstrm = raw_cpu_ptr(comp->stream);

		/*
		 * Inspired by zswap
		 *
		 * stream is returned with ->mutex locked which prevents
		 * cpu_dead() from releasing this stream under us, however
		 * there is still a race window between raw_cpu_ptr() and
		 * mutex_lock(), during which we could have been migrated
		 * to a CPU that has already destroyed its stream.  If so
		 * then unlock and re-try on the current CPU.
		 */
		mutex_lock(&zstrm->lock);
		if (likely(zstrm->buffer))
			return zstrm;
		mutex_unlock(&zstrm->lock);
	}
}

void zcomp_stream_put(struct zcomp_strm *zstrm)
{
	mutex_unlock(&zstrm->lock);
}

int zcomp_cpu_dead(unsigned int cpu, struct hlist_node *node)
{
	struct zcomp *comp = hlist_entry(node, struct zcomp, node);
	struct zcomp_strm *zstrm = per_cpu_ptr(comp->stream, cpu);

	mutex_lock(&zstrm->lock);
	zcomp_strm_free(comp, zstrm);
	mutex_unlock(&zstrm->lock);
	return 0;
}
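
For completeness, the caller side then looks roughly like this
(simplified; the exact zcomp_compress() signature may differ):

	zstrm = zcomp_stream_get(zram->comps[prio]);
	ret = zcomp_compress(zram->comps[prio], zstrm, src, &comp_len);
	/* ... store the compressed object ... */
	zcomp_stream_put(zstrm);

As long as the caller holds the stream, zcomp_cpu_dead() for that CPU
is blocked on zstrm->lock and cannot free the buffer from under it.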
> We had a similar problem recently in zswap and it took me a couple of
> iterations to properly fix it. In short, you need to synchronize the CPU
> hotplug callbacks with the users of the compression stream to make sure
> the stream is not freed under the user.
Agreed.