From: "Qun-wei Lin (林群崴)" <Qun-wei.Lin@mediatek.com>
To: "21cnbao@gmail.com" <21cnbao@gmail.com>,
	"senozhatsky@chromium.org" <senozhatsky@chromium.org>
Cc: "Chinwen Chang (張錦文)" <chinwen.chang@mediatek.com>,
	"Andrew Yang (楊智強)" <Andrew.Yang@mediatek.com>,
	"Casper Li (李中榮)" <casper.li@mediatek.com>,
	"nphamcs@gmail.com" <nphamcs@gmail.com>,
	"chrisl@kernel.org" <chrisl@kernel.org>,
	"James Hsu (徐慶薰)" <James.Hsu@mediatek.com>,
	"AngeloGioacchino Del Regno"
	<angelogioacchino.delregno@collabora.com>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-mediatek@lists.infradead.org"
	<linux-mediatek@lists.infradead.org>,
	"ira.weiny@intel.com" <ira.weiny@intel.com>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"dave.jiang@intel.com" <dave.jiang@intel.com>,
	"vishal.l.verma@intel.com" <vishal.l.verma@intel.com>,
	"schatzberg.dan@gmail.com" <schatzberg.dan@gmail.com>,
	"viro@zeniv.linux.org.uk" <viro@zeniv.linux.org.uk>,
	"ryan.roberts@arm.com" <ryan.roberts@arm.com>,
	"minchan@kernel.org" <minchan@kernel.org>,
	"axboe@kernel.dk" <axboe@kernel.dk>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"kasong@tencent.com" <kasong@tencent.com>,
	"nvdimm@lists.linux.dev" <nvdimm@lists.linux.dev>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>,
	"matthias.bgg@gmail.com" <matthias.bgg@gmail.com>,
	"ying.huang@intel.com" <ying.huang@intel.com>,
	"dan.j.williams@intel.com" <dan.j.williams@intel.com>
Subject: Re: [PATCH 0/2] Improve Zram by separating compression context from kswapd
Date: Tue, 11 Mar 2025 14:12:55 +0000
Message-ID: <32d951629ab18bcb2cb59b0c0baab65de915dbea.camel@mediatek.com>
In-Reply-To: <CAGsJ_4wbgEGKDdUqa8Kpw952qiM_H5V-3X+BH6SboJMh8k2sRg@mail.gmail.com>

On Tue, 2025-03-11 at 22:33 +1300, Barry Song wrote:
> On Tue, Mar 11, 2025 at 5:58 PM Sergey Senozhatsky
> <senozhatsky@chromium.org> wrote:
> > 
> > On (25/03/08 18:41), Barry Song wrote:
> > > On Sat, Mar 8, 2025 at 12:03 PM Nhat Pham <nphamcs@gmail.com>
> > > wrote:
> > > > 
> > > > On Fri, Mar 7, 2025 at 4:02 AM Qun-Wei Lin
> > > > <qun-wei.lin@mediatek.com> wrote:
> > > > > 
> > > > > This patch series introduces a new mechanism called
> > > > > kcompressd to
> > > > > improve the efficiency of memory reclaiming in the operating
> > > > > system. The
> > > > > main goal is to separate the tasks of page scanning and page
> > > > > compression
> > > > > into distinct processes or threads, thereby reducing the load
> > > > > on the
> > > > > kswapd thread and enhancing overall system performance under
> > > > > high memory
> > > > > pressure conditions.
> > > > 
> > > > Please excuse my ignorance, but from your cover letter I still
> > > > don't quite get what the problem is here. And how would
> > > > decoupling compression and scanning help?
> > > 
> > > My understanding is as follows:
> > > 
> > > When kswapd attempts to reclaim M anonymous folios and N file
> > > folios,
> > > the process involves the following steps:
> > > 
> > > * t1: Time to scan and unmap anonymous folios
> > > * t2: Time to compress anonymous folios
> > > * t3: Time to reclaim file folios
> > > 
> > > Currently, these steps are executed sequentially, meaning the
> > > total time
> > > required to reclaim M + N folios is t1 + t2 + t3.
> > > 
> > > However, Qun-Wei's patch enables t1 + t3 and t2 to run in
> > > parallel,
> > > reducing the total time to max(t1 + t3, t2). This likely improves
> > > the
> > > reclamation speed, potentially reducing allocation stalls.
> > 
> > Only if the compression kthreads can run (have CPUs to be
> > scheduled on), that is.  This looks a bit like a bottleneck.  Is
> > there anything that guarantees forward progress?  Also, if
> > compression kthreads constantly preempt kswapd, then it might not
> > be worth it to have compression kthreads, I assume?
> 
> Thanks for your critical insights, all of which are valuable.
> 
> Qun-Wei is likely working on an Android case where the CPU is
> relatively idle in many scenarios (though there are certainly cases
> where all CPUs are busy), but free memory is quite limited.
> We may soon see benefits for these types of use cases. I expect
> Android might have the opportunity to adopt it before it's fully
> ready upstream.
> 
> If the workload keeps all CPUs busy, I suppose this async thread
> won't help, but at least we might find a way to mitigate any
> regression.
> 
> We likely need to collect more data on various scenarios—when
> CPUs are relatively idle and when all CPUs are busy—and
> determine the proper approach based on the data, which we
> currently lack :-)
> 

Thanks for the explanation!
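
To make Barry's timing model above concrete with made-up numbers: if
t1 = 20 ms, t2 = 50 ms, and t3 = 30 ms, the sequential path costs
t1 + t2 + t3 = 100 ms, while running compression in parallel costs
max(t1 + t3, t2) = max(50, 50) = 50 ms. The benefit shrinks as t2
gets small relative to t1 + t3, and disappears when no idle CPU is
available to run the compression thread.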

> > 
> > If we have a pagefault and need to map a page that is still in
> > the compression queue (not compressed and stored in zram yet, e.g.
> > due to scheduling latency + slow compression algorithm) then what
> > happens?
> 
> This is happening now even without the patch? Right now we have
> 4 steps:
> 1. add_to_swap: The folio is added to the swapcache.
> 2. try_to_unmap: PTEs are converted to swap entries.
> 3. pageout: The folio is written back.
> 4. Swapcache is cleared.
> 
> If a swap-in occurs between 2 and 4, doesn't that mean
> we've already encountered the case where we hit
> the swapcache for a folio undergoing compression?
> 
> It seems we might have an opportunity to terminate
> compression if the request is still in the queue and
> compression hasn't started for a folio yet? That seems
> quite difficult to do, though?

As Barry explained, the folios being compressed remain in the
swapcache. If a refault occurs during compression, correctness is
already guaranteed by the swap subsystem, just as with other
asynchronous swap devices (see the sketch below).
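
To illustrate, here is a rough sketch of the existing swap-in path
(not code from the patch; the two helpers are hypothetical stand-ins
and the swap_cache_get_folio() signature is approximate):

/*
 * Sketch of why a refault during compression is safe: the folio has
 * not been deleted from the swapcache yet, so the fault path finds
 * it there and maps it directly, never touching zram.
 */
static vm_fault_t sketch_swapin(struct vm_fault *vmf, swp_entry_t entry)
{
	struct folio *folio;

	/* Still queued for compression => still in the swapcache. */
	folio = swap_cache_get_folio(entry, vmf->vma, vmf->address);
	if (folio)
		return map_swapcache_folio(vmf, folio);	/* hypothetical */

	/* Otherwise read it back from the (zram) swap device. */
	return swapin_readahead_and_map(vmf, entry);	/* hypothetical */
}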

Indeed, terminating the compression of a folio that is already
queued is a challenging task. Will this require some modifications
to the current architecture of the swap subsystem?
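
Just as a thought experiment, the cancellation itself might look
something like the minimal sketch below, assuming a hypothetical
per-node pending list protected by a spinlock (all of the names here
are made up, nothing is from the patch):

#include <linux/list.h>
#include <linux/mm.h>
#include <linux/spinlock.h>

/* Hypothetical queue of folios waiting for kcompressd. */
struct kcompress_queue {
	spinlock_t lock;
	struct list_head pending;	/* folios linked via folio->lru */
};

/*
 * On refault, try to steal the folio back before kcompressd dequeues
 * it.  Returns true if we won the race, in which case the caller can
 * skip waiting for the compression entirely.
 */
static bool kcompress_try_cancel(struct kcompress_queue *q,
				 struct folio *folio)
{
	bool cancelled = false;
	struct folio *pos, *tmp;

	spin_lock(&q->lock);
	list_for_each_entry_safe(pos, tmp, &q->pending, lru) {
		if (pos == folio) {
			list_del_init(&folio->lru);
			cancelled = true;
			break;
		}
	}
	spin_unlock(&q->lock);

	return cancelled;
}

The hard part is everything around such a dequeue: the folio's
writeback and swapcache state would have to be unwound as if
pageout() had never been called, which is presumably where the swap
subsystem changes would be needed.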

> 
> Thanks
> Barry

Best Regards,
Qun-wei

Thread overview: 39+ messages
2025-03-07 12:01 Qun-Wei Lin
2025-03-07 12:01 ` [PATCH 1/2] mm: Split BLK_FEAT_SYNCHRONOUS and SWP_SYNCHRONOUS_IO into separate read and write flags Qun-Wei Lin
2025-03-07 12:01 ` [PATCH 2/2] kcompressd: Add Kcompressd for accelerated zram compression Qun-Wei Lin
2025-03-07 19:41   ` Barry Song
2025-03-07 23:13     ` Nhat Pham
2025-03-07 23:14       ` Nhat Pham
2025-03-10 13:26         ` Qun-wei Lin (林群崴)
     [not found]       ` <20250309010541.3152-1-hdanton@sina.com>
2025-03-09 19:56         ` Nhat Pham
2025-03-09 20:44           ` Barry Song
2025-03-09 22:20             ` Nhat Pham
2025-03-10 13:23               ` Qun-wei Lin (林群崴)
     [not found]             ` <20250310103427.3216-1-hdanton@sina.com>
2025-03-10 17:44               ` Barry Song
     [not found]                 ` <20250310230902.3282-1-hdanton@sina.com>
2025-03-11  3:57                   ` Barry Song
2025-03-11  6:36                     ` Greg KH
2025-03-11  5:02       ` Sergey Senozhatsky
2025-03-10 13:26     ` Qun-wei Lin (林群崴)
2025-03-11  7:05       ` Barry Song
2025-03-11  7:25         ` Barry Song
2025-03-11 14:33         ` Qun-wei Lin (林群崴)
2025-03-07 19:34 ` [PATCH 0/2] Improve Zram by separating compression context from kswapd Barry Song
2025-03-10 13:21   ` Qun-wei Lin (林群崴)
2025-03-07 23:03 ` Nhat Pham
2025-03-08  5:41   ` Barry Song
2025-03-10 13:22     ` Qun-wei Lin (林群崴)
2025-03-10 16:58       ` Nhat Pham
2025-03-10 17:30         ` Nhat Pham
2025-03-11  4:58     ` Sergey Senozhatsky
2025-03-11  9:33       ` Barry Song
2025-03-11 14:12         ` Qun-wei Lin (林群崴) [this message]
2025-03-12  5:19           ` Sergey Senozhatsky
2025-03-12 18:11 ` Minchan Kim
2025-03-13  3:09   ` Sergey Senozhatsky
2025-03-13  3:45     ` Barry Song
2025-03-13 16:07       ` Minchan Kim
2025-03-13 16:58         ` Barry Song
2025-03-13 17:33           ` Minchan Kim
2025-03-13 20:37             ` Barry Song
2025-03-13  3:52     ` Barry Song
2025-03-13  9:30       ` Barry Song
