From: "Sridhar, Kanchana P" <kanchana.p.sridhar@intel.com>
To: Yosry Ahmed <yosry.ahmed@linux.dev>
Cc: Herbert Xu <herbert@gondor.apana.org.au>,
SeongJae Park <sj@kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"hannes@cmpxchg.org" <hannes@cmpxchg.org>,
"nphamcs@gmail.com" <nphamcs@gmail.com>,
"chengming.zhou@linux.dev" <chengming.zhou@linux.dev>,
"usamaarif642@gmail.com" <usamaarif642@gmail.com>,
"ryan.roberts@arm.com" <ryan.roberts@arm.com>,
"21cnbao@gmail.com" <21cnbao@gmail.com>,
"ying.huang@linux.alibaba.com" <ying.huang@linux.alibaba.com>,
"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
"senozhatsky@chromium.org" <senozhatsky@chromium.org>,
"kasong@tencent.com" <kasong@tencent.com>,
"linux-crypto@vger.kernel.org" <linux-crypto@vger.kernel.org>,
"davem@davemloft.net" <davem@davemloft.net>,
"clabbe@baylibre.com" <clabbe@baylibre.com>,
"ardb@kernel.org" <ardb@kernel.org>,
"ebiggers@google.com" <ebiggers@google.com>,
"surenb@google.com" <surenb@google.com>,
"Accardi, Kristen C" <kristen.c.accardi@intel.com>,
"Gomes, Vinicius" <vinicius.gomes@intel.com>,
"Feghali, Wajdi K" <wajdi.k.feghali@intel.com>,
"Gopal, Vinodh" <vinodh.gopal@intel.com>,
"Sridhar, Kanchana P" <kanchana.p.sridhar@intel.com>
Subject: RE: [PATCH v13 22/22] mm: zswap: Batched zswap_compress() with compress batching of large folios.
Date: Wed, 10 Dec 2025 18:47:16 +0000
Message-ID: <SJ2PR11MB8472317EAEE27FE54D71C858C9A0A@SJ2PR11MB8472.namprd11.prod.outlook.com>
In-Reply-To: <yhecgcnt52hnsyf23p576mz2mlnffqrluikwzv6tdn3bnmzumc@thpyltdpxtjq>
> -----Original Message-----
> From: Yosry Ahmed <yosry.ahmed@linux.dev>
> Sent: Wednesday, December 10, 2025 8:02 AM
> To: Sridhar, Kanchana P <kanchana.p.sridhar@intel.com>
> Subject: Re: [PATCH v13 22/22] mm: zswap: Batched zswap_compress() with
> compress batching of large folios.
>
> On Tue, Dec 09, 2025 at 07:38:20PM +0000, Sridhar, Kanchana P wrote:
> >
> > > -----Original Message-----
> > > From: Yosry Ahmed <yosry.ahmed@linux.dev>
> > > Sent: Tuesday, December 9, 2025 9:32 AM
> > > To: Sridhar, Kanchana P <kanchana.p.sridhar@intel.com>
> > > Subject: Re: [PATCH v13 22/22] mm: zswap: Batched zswap_compress()
> > > with compress batching of large folios.
> > >
> > > On Tue, Dec 09, 2025 at 05:21:06PM +0000, Sridhar, Kanchana P wrote:
> > > >
> > > > > -----Original Message-----
> > > > > From: Yosry Ahmed <yosry.ahmed@linux.dev>
> > > > > Sent: Tuesday, December 9, 2025 8:55 AM
> > > > > To: Herbert Xu <herbert@gondor.apana.org.au>
> > > > > Subject: Re: [PATCH v13 22/22] mm: zswap: Batched zswap_compress()
> > > > > with compress batching of large folios.
> > > > >
> > > > > On Tue, Dec 09, 2025 at 10:32:20AM +0800, Herbert Xu wrote:
> > > > > > On Tue, Dec 09, 2025 at 01:15:02AM +0000, Yosry Ahmed wrote:
> > > > > > >
> > > > > > > Just to clarify, does this mean that zswap can pass a batch of
> > > > > > > (eight) pages to the acomp API, and get the results for the batch
> > > > > > > uniformly whether or not the underlying compressor supports batching?
> > > > > >
> > > > > > Correct. In fact I'd like to remove the batch size exposure to zswap
> > > > > > altogether. zswap should just pass along whatever maximum number
> > > > > > of pages that is convenient to itself.
> > > > >
> > > > > I think exposing the batch size is still useful as a hint for zswap. In
> > > > > the current series, zswap allocates as many per-CPU buffers as the
> > > > > compressor's batch size, so no extra buffers for non-batching
> > > > > compressors (including SW compressors).
> > > > >
> > > > > If we use the same batch size regardless, we'll have to always allocate
> > > > > 8 (or N) per-CPU buffers, for little to no benefit on non-batching
> > > > > compressors.
> > > > >
> > > > > So we still want the batch size on the zswap side, but we want the
> > > > > crypto API to be uniform whether or not the compressor supports
> > > > > batching.
> > > >
> > > > Thanks Yosry, you bring up a good point. I currently have the outer for
> > > > loop in zswap_compress() due to the above constraint. For non-batching
> > > > compressors, we allocate only one per-CPU buffer. Hence, we need to
> > > > call crypto_acomp_compress() and write the compressed data to the
> > > > zs_pool for each page in the batch. Wouldn't we need to allocate
> > > > 8 per-CPU buffers for non-batching compressors if we want zswap to
> > > > send a batch of 8 pages uniformly to the crypto API, so that
> > > > zswap_compress() can store the 8 pages in zs_pool after the crypto
> > > > API returns?
> > >
> > > Ugh, yes.. I don't think we want to burn 7 extra pages per-CPU for SW
> > > compressors.
> > >
> > > I think the cleanest way to handle this would be to:
> > > - Rename zswap_compress() to __zswap_compress(), and make it handle a
> > > given batch size (which would be 1 or 8).
> > > - Introduce zswap_compress() as a wrapper that breaks down the folio
> > > into batches and loops over them, passing them to __zswap_compress().
> > > - __zswap_compress() has a single unified path (e.g. for compressed
> > > length and error handling), regardless of the batch size.
> > >
> > > Can this be done with the current acomp API? I think all we really need
> > > is to be able to pass in a batch of size N (which can be 1), and read
> > > the error and compressed length in a single way. This is my main problem
> > > with the current patch.
> >
> > Once Herbert gives us the crypto_acomp modification for non-batching
> > compressors to set the acomp_req->dst->length to the
> > compressed length/error value, I think the same could be accomplished
> > with the current patch, since I will be able to delete the "errp". IOW, I think
> > a simplification is possible without introducing __zswap_compress(). The
> > code will look seamless for non-batching and batching compressors, and the
> > distinction will be made apparent by the outer for loop that iterates over
> > the batch based on the pool->compr_batch_size in the current patch.
>
> I think moving the outer loop outside to a wrapper could make the
> function digestible without nested loops.
Sure. We would still iterate over the output SG lists in __zswap_compress(),
but yes, there wouldn't be nested loops.
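
For concreteness, the shape I have in mind is roughly the following. This is
only a sketch to make sure we're talking about the same structure; the
signatures, and the names other than zswap_compress(), __zswap_compress() and
pool->compr_batch_size, are placeholders rather than the actual code:

static bool zswap_compress(struct page **pages, unsigned int nr_pages,
			   struct zswap_entry **entries,
			   struct zswap_pool *pool, int node_id)
{
	/* 1 for non-batching (e.g. SW) compressors, up to 8 for batching ones. */
	unsigned int step = pool->compr_batch_size;
	unsigned int i;

	for (i = 0; i < nr_pages; i += step) {
		unsigned int nr = min(step, nr_pages - i);

		/* One crypto request per call, whether nr is 1 or 8. */
		if (!__zswap_compress(&pages[i], nr, &entries[i], pool, node_id))
			return false;
	}

	return true;
}
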
>
> >
> > Alternatively, we could introduce a __zswap_compress() that abstracts
> > one single iteration through the outer for loop: it compresses 1 or 8 pages
> > as a "batch". However, the distinction would still need to be made for
> > non-batching vs. batching compressors in the zswap_compress() wrapper:
> > both for sending the pool->compr_batch_size # of pages to
> > __zswap_compress() and for iterating over the single/multiple dst buffers
> > to write to zs_pool (the latter could be done within __zswap_compress(),
> > but the point remains: we would need to distinguish in one or the other).
>
> Not sure what you mean by the latter. IIUC, for all compressors
> __zswap_compress() would iterate over the dst buffers and write to
> zs_pool, whether the number of dst buffers is 1 or 8. So there wouldn't
> be any different handling in __zswap_compress(), right?
Yes, this is correct.
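
In other words, inside __zswap_compress() the post-compression path would look
roughly the same regardless of nr. A sketch of what I mean (only a sketch, with
placeholder names: get_dst_len() and zswap_store_compressed() stand in for
however we end up reading the per-page length/error from the request and
writing to zs_pool):

	for (i = 0; i < nr; i++) {
		/* Placeholder: compressed length, or negative error, for page i. */
		int dlen = get_dst_len(acomp_ctx->req, i);

		if (dlen <= 0)
			return false;

		/* Placeholder: write buffer i into zs_pool for entries[i]. */
		if (!zswap_store_compressed(pool, entries[i],
					    acomp_ctx->buffers[i], dlen))
			return false;
	}
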
>
> That's my whole motivation for introducing a wrapper that abstracts away
> the batching size.
Yes, you're right.
>
> >
> > It could be argued that keeping the seamlessness of the calls to crypto
> > based on pool->compr_batch_size, and the logical distinctions this imposes
> > when iterating over the output SG lists/buffers, self-contained in
> > zswap_compress() would be cleaner. We already have a
> > zswap_store_pages() that processes the folio in batches. Maybe minimizing
> > the functions that do batch processing could be cleaner?
>
> Yeah it's not great that we'll end up with zswap_store_pages() splitting
> the folio into batches of 8, then zswap_compress() further splitting
> them into compression batches -- but we'll have that anyway. Whether
> it's inside zswap_compress() or a wrapper doesn't make things much
> different imo.
>
> Also, splitting the folio differently at different levels makes semantic
> sense. zswap_store_pages() splits it into batches of 8, because this is
> what zswap handles (mainly to avoid dynamically allocating things like
> entries). zswap_compress() will split it further if the underlying
> compressor prefers that, to avoid allocating many buffer pages. So I
> think it kinda makes sense.
Agreed.
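If I'm reading the split correctly: a 64-page folio, for example, would be
handled by zswap_store_pages() as eight store batches of eight pages; with a
software compressor (compr_batch_size == 1) each store batch then becomes
eight single-page compress calls, while with IAA (compr_batch_size == 8) each
store batch maps onto a single eight-page compress call, so only the batching
case ever needs the eight per-CPU buffers.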
>
> In the future, we can revisit the split in zswap_compress() if we have a
> good case for batching compression for SW (e.g. compress every 8 pages
> as a single unit), or if we can optimize the per-CPU buffers somehow.
Yes. Let me see how best the __zswap_compress() API can support this.
Thanks!
Kanchana
>
> >
> > In any case, let me know which would be preferable.
> >
> > Thanks,
> > Kanchana
> >
> > >
> > > In the future, if it's beneficial for some SW compressors to batch
> > > compressions, we can look into optimizations for the per-CPU buffers to
> > > avoid allocating 8 pages per-CPU (e.g. shared page pool), or make this
> > > opt-in for certain SW compressors that justify the cost.
> > >
> > > >
> > > > Thanks,
> > > > Kanchana
> > > >
> > > > >
> > > > > >
> > > > > > Cheers,
> > > > > > --
> > > > > > Email: Herbert Xu <herbert@gondor.apana.org.au>
> > > > > > Home Page: http://gondor.apana.org.au/~herbert/
> > > > > > PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt