From: Hsin-Yi Wang <hsinyi@chromium.org>
To: Phillip Lougher <phillip@squashfs.org.uk>,
	Matthew Wilcox <willy@infradead.org>,
	 Xiongwei Song <Xiongwei.Song@windriver.com>
Cc: Zheng Liang <zhengliang6@huawei.com>,
	Zhang Yi <yi.zhang@huawei.com>,  Hou Tao <houtao1@huawei.com>,
	Miao Xie <miaoxie@huawei.com>,
	 Andrew Morton <akpm@linux-foundation.org>,
	"linux-mm @ kvack . org" <linux-mm@kvack.org>,
	 "squashfs-devel @ lists . sourceforge . net"
	<squashfs-devel@lists.sourceforge.net>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/2] squashfs: implement readahead
Date: Mon, 16 May 2022 19:04:08 +0800	[thread overview]
Message-ID: <CAJMQK-jjNRuZFOJrd2knQHwPERaJ64pyys+14cCVyfdsP1GZSQ@mail.gmail.com> (raw)
In-Reply-To: <20220516105100.1412740-3-hsinyi@chromium.org>

On Mon, May 16, 2022 at 6:51 PM Hsin-Yi Wang <hsinyi@chromium.org> wrote:
>
> Implement the readahead callback for squashfs. It reads the datablocks
> that cover the pages in the readahead request. In a few cases it will
> not mark the pages as uptodate, including:
> - the file end is 0.
> - zero-filled blocks.
> - the current batch of pages doesn't lie within a single datablock, or
>   doesn't fill a whole datablock.
> Otherwise the pages are marked as uptodate. Pages that are skipped here
> will be read later by readpage.
>
Suggested-by: Matthew Wilcox <willy@infradead.org>
> Signed-off-by: Hsin-Yi Wang <hsinyi@chromium.org>
> ---
> Note that this patch was not formally sent to the list before. It's
> attached to the email thread for discussion as it's still under development.
>
> - v1:
> The patch outline was suggested by Matthew. It went through a few
> reviews by Matthew offline.
>
> - v2:
> https://lore.kernel.org/linux-mm/Yn5Yij9pRPCzDozt@casper.infradead.org/t/#m442435c149d411c5c9d8019cff5915419b04bf10
> This is a resend of v1.
>
> - v3:
> https://lore.kernel.org/linux-mm/Yn5Yij9pRPCzDozt@casper.infradead.org/t/#m55a709e6ba5ec59fe95323a67a7f3d6b1953e470
> Fix page actor size to avoid a crash from squashfs_decompress().
> Suggested by Phillip Lougher[1]
> [1] https://lore.kernel.org/linux-mm/Yn5Yij9pRPCzDozt@casper.infradead.org/t/#m687f82debb7667ff31982a05aef3eba081eb5039
>
> - v4:
> https://lore.kernel.org/linux-mm/Yn5Yij9pRPCzDozt@casper.infradead.org/t/#mf93267690ec2e841dade6a494fe72c84b61328d9
> Fix to free pages after use. Suggested by Xiongwei Song[2]
> Refactor the skip-page logic to possibly improve performance.
> Suggested by Phillip Lougher[3]
> [2] https://lore.kernel.org/linux-mm/Yn5Yij9pRPCzDozt@casper.infradead.org/t/#m0e7b33d167b1ef0eb39b9f41c32ed3f80dfced18
> [3] https://lore.kernel.org/linux-mm/Yn5Yij9pRPCzDozt@casper.infradead.org/t/#m1e0a8f8e4a98d79d14c81b66e197b6dc0a3b77a1
> ---
>  fs/squashfs/file.c | 79 +++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 78 insertions(+), 1 deletion(-)
>
> diff --git a/fs/squashfs/file.c b/fs/squashfs/file.c
> index a8e495d8eb86..91dfec792f4c 100644
> --- a/fs/squashfs/file.c
> +++ b/fs/squashfs/file.c
> @@ -39,6 +39,7 @@
>  #include "squashfs_fs_sb.h"
>  #include "squashfs_fs_i.h"
>  #include "squashfs.h"
> +#include "page_actor.h"
>
>  /*
>   * Locate cache slot in range [offset, index] for specified inode.  If
> @@ -495,7 +496,83 @@ static int squashfs_read_folio(struct file *file, struct folio *folio)
>         return 0;
>  }
>
> +static void squashfs_readahead(struct readahead_control *ractl)
> +{
> +       struct inode *inode = ractl->mapping->host;
> +       struct squashfs_sb_info *msblk = inode->i_sb->s_fs_info;
> +       size_t mask = (1UL << msblk->block_log) - 1;
> +       size_t shift = msblk->block_log - PAGE_SHIFT;
> +       loff_t req_end = readahead_pos(ractl) + readahead_length(ractl);
> +       loff_t start = readahead_pos(ractl) &~ mask;
> +       size_t len = readahead_length(ractl) + readahead_pos(ractl) - start;
> +       struct squashfs_page_actor *actor;
> +       unsigned int nr_pages = 0;
> +       struct page **pages;
> +       u64 block = 0;
> +       int bsize, res, i, index;
> +       int file_end = i_size_read(inode) >> msblk->block_log;
> +       unsigned int max_pages = 1UL << shift;
> +
> +       readahead_expand(ractl, start, (len | mask) + 1);
> +
> +       if (readahead_pos(ractl) + readahead_length(ractl) < req_end ||
> +           file_end == 0)
> +               return;
> +
> +       pages = kmalloc_array(max_pages, sizeof(void *), GFP_KERNEL);
> +       if (!pages)
> +               return;
> +
> +       actor = squashfs_page_actor_init_special(pages, max_pages, 0);
> +       if (!actor)
> +               goto out;
> +
> +       for (;;) {
> +               nr_pages = __readahead_batch(ractl, pages, max_pages);
> +               if (!nr_pages)
> +                       break;
> +
> +               if (readahead_pos(ractl) >= i_size_read(inode) ||
> +                   nr_pages < max_pages)
> +                       goto skip_pages;
> +
> +               index = pages[0]->index >> shift;
> +               if ((pages[nr_pages - 1]->index >> shift) != index)
> +                       goto skip_pages;
> +
> +               bsize = read_blocklist(inode, index, &block);
> +               if (bsize == 0)
> +                       goto skip_pages;
> +
> +               res = squashfs_read_data(inode->i_sb, block, bsize, NULL,
> +                                        actor);
> +
> +               if (res >= 0)
> +                       for (i = 0; i < nr_pages; i++)
> +                               SetPageUptodate(pages[i]);
> +
> +               for (i = 0; i < nr_pages; i++) {
> +                       unlock_page(pages[i]);
> +                       put_page(pages[i]);
> +               }
> +       }
> +
> +       kfree(actor);
> +       kfree(pages);
> +       return;
> +
> +skip_pages:
> +       for (i = 0; i < nr_pages; i++) {
> +               unlock_page(pages[i]);
> +               put_page(pages[i]);
> +       }
> +
> +       kfree(actor);
> +out:
> +       kfree(pages);
> +}
>
>  const struct address_space_operations squashfs_aops = {
> -       .read_folio = squashfs_read_folio
> +       .read_folio = squashfs_read_folio,
> +       .readahead = squashfs_readahead
>  };
> --
> 2.36.0.550.gb090851708-goog
>
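
As a side note for anyone following the arithmetic at the top of
squashfs_readahead(): below is a minimal, stand-alone sketch of the
block-alignment math. The values (128K datablocks, 4K pages, and the
request offsets) are illustrative assumptions for the example, not
anything taken from the patch or from a real filesystem.

/* Illustration only -- mirrors the mask/shift/start/len computation above. */
#include <stdio.h>

int main(void)
{
	unsigned int block_log = 17;		/* assumed: 128K datablocks */
	unsigned int page_shift = 12;		/* assumed: 4K pages */
	unsigned long mask = (1UL << block_log) - 1;
	unsigned int max_pages = 1U << (block_log - page_shift);

	/* assumed request: 12K of readahead starting at file offset 20K */
	unsigned long pos = 20 * 1024, length = 12 * 1024;

	unsigned long start = pos & ~mask;	   /* round start down to a block boundary */
	unsigned long len = length + pos - start;  /* bytes from block start to request end */
	unsigned long expanded = (len | mask) + 1; /* round up to a whole datablock */

	printf("expand window to [%lu, %lu), %u pages per datablock\n",
	       start, start + expanded, max_pages);
	return 0;
}

With those values the window becomes [0, 131072), i.e. one full 128K
datablock (32 pages), which is what readahead_expand() is asked to cover
before the pages are batched and handed to the page actor.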



Thread overview: 14+ messages
2022-05-16 10:50 [PATCH 0/2] Implement readahead for squashfs Hsin-Yi Wang
2022-05-16 10:51 ` [PATCH 1/2] Revert "squashfs: provide backing_dev_info in order to disable read-ahead" Hsin-Yi Wang
2022-05-16 10:51 ` [PATCH 2/2] squashfs: implement readahead Hsin-Yi Wang
2022-05-16 11:04   ` Hsin-Yi Wang [this message]
2022-05-16 12:36     ` Matthew Wilcox
2022-05-16 12:47       ` Hsin-Yi Wang
2022-05-16 12:55         ` Matthew Wilcox
2022-05-16 12:57           ` Hsin-Yi Wang
2022-05-16 14:21   ` kernel test robot
2022-05-17  3:40     ` Phillip Lougher
2022-05-16 16:01   ` kernel test robot
2022-05-17  3:41     ` Phillip Lougher
2022-05-17  3:35 ` [PATCH 3/2] squashfs: always build "file direct" version of page actor Phillip Lougher
2022-05-17  7:51   ` Hsin-Yi Wang
