From: Hsin-Yi Wang <hsinyi@chromium.org>
To: Matthew Wilcox <willy@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	 William Kucharski <william.kucharski@oracle.com>,
	Christoph Hellwig <hch@lst.de>,
	linux-mm@kvack.org,  linux-fsdevel@vger.kernel.org,
	Phillip Lougher <phillip@squashfs.org.uk>
Subject: Re: Readahead regressed with c1f6925e1091("mm: put readahead pages in cache earlier") on multicore arm64 platforms
Date: Fri, 8 Oct 2021 12:11:05 +0800	[thread overview]
Message-ID: <CAJMQK-h24shNo3eKGaj0sVn8vH+oHht4g_R9yQbwUKSVCaUT-Q@mail.gmail.com> (raw)
In-Reply-To: <YV76Dg+C4BT47ABN@casper.infradead.org>

On Thu, Oct 7, 2021 at 9:46 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Thu, Oct 07, 2021 at 03:08:38PM +0800, Hsin-Yi Wang wrote:
> > This calls into squashfs_readpage().
>
> Aha!  I hadn't looked at squashfs before, and now that I do, I can
> see why this commit causes problems for squashfs.  (It would be
> helpful if your report included more detail about which paths inside
> squashfs were taken, but I think I can guess):
>
> squashfs_readpage()
>   squashfs_readpage_block()
>     squashfs_copy_cache()
>       grab_cache_page_nowait()
>
Right. Before the patch push_page won't be NULL, but after the patch
grab_cache_page_nowait() fails because readahead has already added the
pages to the page cache.
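
For reference, the relevant loop is in squashfs_copy_cache()
(fs/squashfs/file.c); this is a simplified paraphrase, not the verbatim
source:

        for (i = start_index; i <= end_index && bytes > 0; i++,
                        bytes -= PAGE_SIZE, offset += PAGE_SIZE) {
                struct page *push_page;

                /* Use the page ->readpage was called on directly; for
                 * every other page covered by this block, try to add a
                 * new page to the page cache without blocking. */
                push_page = (i == page->index) ? page :
                        grab_cache_page_nowait(page->mapping, i);

                if (!push_page)
                        continue;

                /* ... copy the decompressed data into push_page ... */
        }

So after the patch, push_page is NULL for every page except the one
->readpage was called on.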


> Before this patch, readahead of 1MB would allocate 256x4kB pages,
> then add each one to the page cache and call ->readpage on it:
>
>         for (page_idx = 0; page_idx < readahead_count(rac); page_idx++) {
>                 struct page *page = lru_to_page(pages);
>                 list_del(&page->lru);
>                 if (!add_to_page_cache_lru(page, rac->mapping, page->index,
>                                gfp))
>                         aops->readpage(rac->file, page);
>                 put_page(page);
>         }
>
> When Squashfs sees it has more than 4kB of data, it calls
> grab_cache_page_nowait(), which allocates more memory (ignoring the
> other 255 pages which have been allocated, because they're not in the
> page cache yet).  Then this loop frees the pages that readahead
> allocated.
>
> After this patch, the pages are already in the page cache when
> ->readpage is called the first time.  So the call to
> grab_cache_page_nowait() fails and squashfs redoes the decompression for
> each page.
>
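(For comparison, after the patch the corresponding loop in read_pages()
is roughly the following, from memory of the commit, so details may
differ:

        while ((page = readahead_page(rac))) {
                aops->readpage(rac->file, page);
                put_page(page);
        }

readahead_page() hands out pages that readahead already added to the
page cache, locked, so the nowait lookup in squashfs finds them but
cannot lock them and returns NULL.)
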
> Neither of these approaches is efficient.  Squashfs needs to implement
> ->readahead.  Working on it now ...
>
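
For anyone following along, here is a minimal skeleton of what such a
->readahead could look like. This is a hypothetical sketch against the
readahead_control API (readahead_page() etc.), not the actual patch;
the real squashfs_readahead() will differ:

        /* Hypothetical sketch only.  Pages arrive already locked and
         * in the page cache; the idea is to decompress each squashfs
         * block once and copy it into all of the pages it covers,
         * instead of decompressing again for every page. */
        static void squashfs_readahead(struct readahead_control *ractl)
        {
                struct page *page;

                while ((page = readahead_page(ractl)) != NULL) {
                        /* ... fill this page from the (cached)
                         * decompressed block ... */
                        flush_dcache_page(page);
                        SetPageUptodate(page);
                        unlock_page(page);
                        put_page(page);
                }
        }

wired up via .readahead = squashfs_readahead in squashfs_aops, next to
the existing .readpage.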

