From: Hsin-Yi Wang <hsinyi@chromium.org>
Date: Sun, 12 Jun 2022 19:51:59 +0800
Subject: Re: [PATCH v5 3/3] squashfs: implement readahead
To: Phillip Lougher <phillip@squashfs.org.uk>
Cc: Andrew Morton, Hou Tao, Marek Szyprowski, Matthew Wilcox, Miao Xie,
 Xiongwei Song, Zhang Yi, Zheng Liang, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, squashfs-devel@lists.sourceforge.net

On Sat, Jun 11, 2022 at 1:23 PM Phillip Lougher <phillip@squashfs.org.uk> wrote:
>
> On 06/06/2022 16:03, Hsin-Yi Wang wrote:
> > Implement readahead callback for squashfs. It will read datablocks
> > which cover pages in readahead request. For a few cases it will
> > not mark page as uptodate, including:
> > - file end is 0.
> > - zero filled blocks.
> > - current batch of pages isn't in the same datablock.
> > - decompressor error.
> > Otherwise pages will be marked as uptodate. The unhandled pages will be
> > updated by readpage later.
> >
>
> Hi Hsin-Yi,
>
> I have reviewed, tested and instrumented the following patch.
>
> There are a number of problems with the patch including
> performance, unhandled issues, and bugs.
>
> In this email I'll concentrate on the performance aspects.
>
> The major change between this V5 patch and the previous patches
> (V4 etc), is that it now handles the case where
>
> + nr_pages = __readahead_batch(ractl, pages, max_pages);
>
> returns an "nr_pages" less than "max_pages".
>
> What this means is that the readahead code has returned a set
> of page cache pages which does not fully map the datablock to
> be decompressed.
>
> If this is passed to squashfs_read_data() using the current
> "page actor" code, the decompression will fail on the missing
> pages.
>
> In recognition of that fact, your V5 patch falls back to using
> the earlier intermediate buffer method, with
> squashfs_get_datablock() returning a buffer, which is then memcopied
> into the page cache pages.
>
> This is currently what is also done in the existing
> squashfs_readpage_block() function if the entire set of pages cannot
> be obtained.
>

hi Phillip,

I think there's still one difference between the fallback to .read_folio
(v4) and v5:

If the remaining pages (nr_pages < max_pages) fall back to .read_folio,
each page is handled individually by squashfs_readpage_block(). In the
path that handles a single page, the for loop in squashfs_readpage_block()
fills the other 31 page slots with NULL. Later, squashfs_read_cache() also
loops over all 32 slots just to fill that one page (the other 31 are NULL).

In v5, we only need to run the loop once over the remaining pages, so
compared to v4 we save a constant factor (32) of iterating over NULL
pages. But the impact is probably still small compared to the cost of
using the intermediate buffer.
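
To make the loop-count difference concrete, here is a small userspace
model (illustrative only; PAGES_PER_BLOCK, v4_style_readpage() and
v5_style_batch() are made-up names for this sketch, not the kernel
functions, and the per-page cost is simplified to one slot visit):

/*
 * Illustrative only -- NOT the kernel code.  It just counts loop
 * iterations to show the constant-factor difference described above.
 */
#include <stdio.h>

#define PAGES_PER_BLOCK 32	/* 128K datablock / 4K pages */

/* v4-style fallback: each leftover page goes through a per-page read
 * that still walks all 32 slots of the block (31 of them NULL). */
static int v4_style_readpage(void)
{
	int iterations = 0, slot;

	for (slot = 0; slot < PAGES_PER_BLOCK; slot++)
		iterations++;		/* visit slot (fill it or skip a NULL) */
	return iterations;
}

/* v5-style: the leftover pages of the batch are handled in one pass. */
static int v5_style_batch(int remaining)
{
	int iterations = 0, i;

	for (i = 0; i < remaining; i++)
		iterations++;		/* fill page i directly */
	return iterations;
}

int main(void)
{
	int remaining = 5;		/* e.g. readahead returned 5 of 32 pages */
	int v4 = 0, i;

	for (i = 0; i < remaining; i++)
		v4 += v4_style_readpage();

	printf("v4-style fallback: %d slot visits\n", v4);
	printf("v5-style batch:    %d slot visits\n", v5_style_batch(remaining));
	return 0;
}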
I'll rebase this series on your series. Also thanks for providing the diff.

Hsin-Yi

> The problem with this fallback intermediate buffer is it is slow, both
> due to the additional memcopies, but, more importantly because it
> introduces contention on a single shared buffer.
>
> I have long had the intention to fix this performance issue in
> squashfs_readpage_block(), but, due it being a rare issue there, the
> additional work has seemed to be nice but not essential.
>
> The problem is we don't want the readahead code to be using this
> slow method, because the scenario will probably happen much more
> often, and for a performance improvement patch, falling back to
> an old slow method isn't very useful.
>
> So I have finally done the work to make the "page actor" code handle
> missing pages.
>
> This I have sent out in the following patch-set updating the
> squashfs_readpage_block() function to use it.
>
> https://lore.kernel.org/lkml/20220611032133.5743-1-phillip@squashfs.org.uk/
>
> You can use this updated "page actor" code to eliminate the
> "nr_pages < max_pages" special case in your patch.  With the benefit
> that decompression is done directly into the page cache.
>
> I have updated your patch to use the new functionality.  The diff
> including a bug fix I have appended to this email.
>
> Phillip
>
> diff --git a/fs/squashfs/file.c b/fs/squashfs/file.c
> index b86b2f9d9ae6..721d35ecfca9 100644
> --- a/fs/squashfs/file.c
> +++ b/fs/squashfs/file.c
> @@ -519,10 +519,6 @@ static void squashfs_readahead(struct readahead_control *ractl)
>  	if (!pages)
>  		return;
>  
> -	actor = squashfs_page_actor_init_special(pages, max_pages, 0);
> -	if (!actor)
> -		goto out;
> -
>  	for (;;) {
>  		pgoff_t index;
>  		int res, bsize;
> @@ -548,41 +544,21 @@ static void squashfs_readahead(struct readahead_control *ractl)
>  		if (bsize == 0)
>  			goto skip_pages;
>  
> -		if (nr_pages < max_pages) {
> -			struct squashfs_cache_entry *buffer;
> -			unsigned int block_mask = max_pages - 1;
> -			int offset = pages[0]->index - (pages[0]->index & ~block_mask);
> -
> -			buffer = squashfs_get_datablock(inode->i_sb, block,
> -							bsize);
> -			if (buffer->error) {
> -				squashfs_cache_put(buffer);
> -				goto skip_pages;
> -			}
> -
> -			expected -= offset * PAGE_SIZE;
> -			for (i = 0; i < nr_pages && expected > 0; i++,
> -					expected -= PAGE_SIZE, offset++) {
> -				int avail = min_t(int, expected, PAGE_SIZE);
> -
> -				squashfs_fill_page(pages[i], buffer,
> -						offset * PAGE_SIZE, avail);
> -				unlock_page(pages[i]);
> -			}
> -
> -			squashfs_cache_put(buffer);
> -			continue;
> -		}
> +		actor = squashfs_page_actor_init_special(msblk, pages, nr_pages, expected);
> +		if (!actor)
> +			goto out;
>  
>  		res = squashfs_read_data(inode->i_sb, block, bsize, NULL,
>  					 actor);
>  
> +		kfree(actor);
> +
>  		if (res == expected) {
>  			int bytes;
>  
> -			/* Last page may have trailing bytes not filled */
> +			/* Last page (if present) may have trailing bytes not filled */
>  			bytes = res % PAGE_SIZE;
> -			if (bytes) {
> +			if (pages[nr_pages - 1]->index == file_end && bytes) {
>  				void *pageaddr;
>  
>  				pageaddr = kmap_atomic(pages[nr_pages - 1]);
> @@ -602,7 +578,6 @@ static void squashfs_readahead(struct readahead_control *ractl)
>  		}
>  	}
>  
> -	kfree(actor);
>  	kfree(pages);
>  	return;
>  
> @@ -612,7 +587,6 @@ static void squashfs_readahead(struct readahead_control *ractl)
>  		put_page(pages[i]);
>  	}
>  
> -	kfree(actor);
>  out:
>  	kfree(pages);
>  }
> --
> 2.34.1
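
As an aside for anyone following along: below is a rough userspace model
of the "page actor that handles missing pages" idea described above. It
is only a conceptual sketch based on that description, not Phillip's
actual implementation; struct toy_actor, toy_actor_next_page() and the
TOY_* constants are made-up names.

/*
 * Conceptual sketch only -- NOT the kernel page actor.  The idea: the
 * decompressor asks the actor for the destination of each PAGE_SIZE
 * chunk of the block; if the page cache page for that chunk is missing,
 * the actor hands back a scratch page instead, so decompression can
 * still run directly into the pages that do exist.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define TOY_PAGE_SIZE 4096
#define TOY_PAGES_PER_BLOCK 8	/* small number for the example */

struct toy_actor {
	char *pages[TOY_PAGES_PER_BLOCK];	/* NULL => missing page */
	char *scratch;				/* stand-in for missing pages */
	int next;
};

/* Return the buffer the decompressor should write the next chunk into. */
static char *toy_actor_next_page(struct toy_actor *a)
{
	char *dest;

	if (a->next >= TOY_PAGES_PER_BLOCK)
		return NULL;
	dest = a->pages[a->next] ? a->pages[a->next] : a->scratch;
	a->next++;
	return dest;
}

int main(void)
{
	struct toy_actor a = { .scratch = malloc(TOY_PAGE_SIZE), .next = 0 };
	int i;

	/* Pretend readahead only gave us the first 3 of 8 pages. */
	for (i = 0; i < 3; i++)
		a.pages[i] = malloc(TOY_PAGE_SIZE);

	/* "Decompress" the block: fill each chunk destination in turn. */
	for (i = 0; i < TOY_PAGES_PER_BLOCK; i++) {
		char *dest = toy_actor_next_page(&a);

		memset(dest, 'A' + i, TOY_PAGE_SIZE);
		printf("chunk %d -> %s\n", i,
		       a.pages[i] ? "page cache page" : "scratch page");
	}

	for (i = 0; i < 3; i++)
		free(a.pages[i]);
	free(a.scratch);
	return 0;
}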