From: Xiongwei Song
Date: Fri, 29 Jul 2022 13:22:29 +0800
Subject: Re: [PATCH v5 3/3] squashfs: implement readahead
To: Phillip Lougher
Cc: Hsin-Yi Wang, Matthew Wilcox, Xiongwei Song, Marek Szyprowski,
    Andrew Morton, Zheng Liang, Zhang Yi, Hou Tao, Miao Xie,
    linux-mm@kvack.org, squashfs-devel@lists.sourceforge.net,
    Linux Kernel Mailing List, xiaohong.qi@windriver.com
References: <20220606150305.1883410-1-hsinyi@chromium.org>
            <20220606150305.1883410-4-hsinyi@chromium.org>
net" , Linux Kernel Mailing List , xiaohong.qi@windriver.com Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1659072177; a=rsa-sha256; cv=none; b=sUSYidS5saF133q7Ei4ENksGWaDVI82Tx4Xsq2ciQfYiZXYgc9z9QzuFvaWHQr3msOAmgQ +kJ1+SkNqOMk/hWBh6DyF5HowVuLV+MUI+8A7nVLxRm2yQW1BEQoiAhXZ7UuPUb1Va579Q bAi0F5S9zyJVbfMgEwMvdNy5VcXrRgs= ARC-Authentication-Results: i=1; imf10.hostedemail.com; dkim=pass header.d=gmail.com header.s=20210112 header.b=GX1+9NrU; spf=pass (imf10.hostedemail.com: domain of sxwjean@gmail.com designates 209.85.216.45 as permitted sender) smtp.mailfrom=sxwjean@gmail.com; dmarc=pass (policy=none) header.from=gmail.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1659072177; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=ab+lr+au9UriMl9Fbf9rU9WLixGYuvF4NaEm7BgFrMM=; b=aymKloIPqRyDghC/3HsfiZVUiQO73TUIQRCkdvTo85Pyva06A61JRNh7cEqoH3J++tK65j owjKekTusk2pOKWLSFMQeskMZdD6x80qdbmy8EVh6DKnUd+Z1ExM85N/F4yQe8OeuHo0jf RigkY/N1wOFpRwod6zYjXAyHQNzkFS4= X-Rspamd-Queue-Id: D655BC00B9 X-Rspam-User: X-Stat-Signature: inriy8fxn9u487n97hyrsa9oabopfqmi Authentication-Results: imf10.hostedemail.com; dkim=pass header.d=gmail.com header.s=20210112 header.b=GX1+9NrU; spf=pass (imf10.hostedemail.com: domain of sxwjean@gmail.com designates 209.85.216.45 as permitted sender) smtp.mailfrom=sxwjean@gmail.com; dmarc=pass (policy=none) header.from=gmail.com X-Rspamd-Server: rspam08 X-HE-Tag: 1659072176-551424 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Hi Phillip, Gentle ping. Regards, Xiongwei On Fri, Jul 15, 2022 at 9:45 AM Xiongwei Song wrote: > > Hi Phillip, > > Sorry for providing my test info so late. > > On Fri, Jun 10, 2022 at 3:42 PM Phillip Lougher = wrote: > > > > On 09/06/2022 15:46, Xiongwei Song wrote: > > > This version is bad for my test. I ran the test below > > > "for cnt in $(seq 0 9); do echo 3 > /proc/sys/vm/drop_caches; echo > > > "Loop ${cnt}:"; time -v find /software/test[0-9][0-9] | xargs -P 24 -= i > > > cat {} > /dev/null 2>/dev/null; echo ""; done" > > > in 90 partitions. > > > > > > With 9eec1d897139 reverted: > > > 1:06.18 (1m + 6.18s) > > > 1:05.65 > > > 1:06.34 > > > 1:06.88 > > > 1:06.52 > > > 1:06.78 > > > 1:06.61 > > > 1:06.99 > > > 1:06.60 > > > 1:06.79 > > > > > > With this version: > > > 2:36.85 (2m + 36.85s) > > > 2:28.89 > > > 1:43.46 > > > 1:41.50 > > > 1:42.75 > > > 1:43.46 > > > 1:43.67 > > > 1:44.41 > > > 1:44.91 > > > 1:45.44 > > > > > > Any thoughts? > > > > Thank-you for your latest test results, and they tend to > > imply that the latest version of the patch hasn't improved > > performance in your use-case. > > > > One thing which is becoming clear here is that the devil is in > > the detail, and your results being summaries are not capturing > > enough detail to understand what is happening. They show > > something is wrong, but, don't give any guidance as to what > > is happening. > > > > I think it will be difficult to capture more details from > > your test case. But, detail can be captured from summaries, by > > varying the input and extrapolating from the results. 
> >
> > By that I mean have you tried changing anything, and observed any
> > changed results?
> >
> > For instance have you tried any of the following:
> >
> > 1. Changing the parallelism of your test from 24 read threads.
> >    Does 1, 2, 4 etc parallel read threads change the observed
> >    behaviour? In other words, is the slow-down observed across
> >    all degrees of parallelism, or is there a critical point?
>
> Please see the test results below, which are from my colleague Xiaohong Qi:
>
> I tested file sizes from 256KB to 5120KB with thread counts of
> 1, 2, 4, 8, 16, 24 and 32 (each run ten times, taking the average). The
> read performance is shown below. The difference in read performance
> between the 4.18 kernel and 5.10 (with the squashfs_readahead() patch v7)
> seems to be caused by the files whose size is less than 256KB.
>
>                    T1         T2         T4         T8        T16        T24        T32
> All file sizes
> 4.18         136.8642    100.479    96.5523    96.1569     96.204    96.0587    96.0519
> 5.10-v7       138.474   103.1351    99.9192    99.7091    99.7894   100.2034   100.4447
> Delta          1.6098     2.6561     3.3669     3.5522     3.5854     4.1447     4.3928
>
> Fsize < 256KB
> 4.18          21.7949    14.6959     11.639    10.5154      10.14    10.1092    10.1425
> 5.10-v7       23.8629    16.2483    13.1475    12.3697    12.1985    12.8799    13.3292
> Delta           2.068     1.5524     1.5085     1.8543     2.0585     2.7707     3.1867
>
> 256KB < Fsize < 512KB
> 4.18          11.8042     7.9228     7.6891     7.7924     7.8181     7.8548     7.8496
> 5.10-v7       12.0505     8.2506     8.1557      8.156       8.16     8.1577     8.1611
> Delta          0.2463     0.3278     0.4666     0.3636     0.3419     0.3029     0.3115
>
> 512KB < Fsize < 1024KB
> 4.18           7.7806     5.5496      5.496     5.4912     5.4897     5.4883     5.6602
> 5.10-v7        8.1283     5.8784     5.8486     5.8505     5.8523     5.8511      5.856
> Delta          0.3477     0.3288     0.3526     0.3593     0.3626     0.3628     0.1958
>
> 1024KB < Fsize < 1536KB
> 4.18          10.2686     7.5294     7.5012     7.4902     7.4855     7.4858     7.4851
> 5.10-v7       10.5289     7.8486     7.8502     7.8477      7.849     7.8482     7.8542
> Delta          0.2603     0.3192      0.349     0.3575     0.3635     0.3624     0.3691
>
> 1536KB < Fsize < 2048KB
> 4.18           5.6439     4.0588     3.9974     3.9946     3.9949     3.9942     3.9925
> 5.10-v7        6.2263     4.6009     4.6062     4.6069     4.6078     4.6074     4.6099
> Delta          0.5824     0.5421     0.6088     0.6123     0.6129     0.6132     0.6174
>
> 2048KB < Fsize < 5120KB
> 4.18          34.9166    28.7944    28.7355    28.7192    28.7046    28.6976      28.69
> 5.10-v7       33.8689    27.9726    27.9747    27.9801    27.9849    27.9855    27.9915
> Delta         -1.0477    -0.8218    -0.7608    -0.7391    -0.7197    -0.7121    -0.6985
>
> Fsize > 5120KB
> 4.18          45.6575    33.8609    33.7512    33.7349    33.7196    33.7166     33.708
> 5.10-v7       45.3494    34.0473    34.0443    34.0692    34.0635    34.0622    34.0599
> Delta         -0.3081     0.1864     0.2931     0.3343     0.3439     0.3456     0.3519
>
> (T1 means tested with 1 thread; file size unit: KB; time unit: seconds;
> 5.10-v7 means we backported the squashfs_readahead() v7 patchset onto Linux 5.10.)
>
> The commands used to test are like:
> echo 3 > /proc/sys/vm/drop_caches; sleep 3; time -v find /test/ -type f -size -256k | xargs -P 32 -i cat {} > /dev/null 2>/dev/null
> echo 3 > /proc/sys/vm/drop_caches; sleep 3; time -v find /test/ -type f -size +256k -size -512k | xargs -P 32 -i cat {} > /dev/null 2>/dev/null
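>
> (For reference, a sweep over thread counts and size buckets along the
> lines of the commands above could be scripted roughly as follows; the
> mount point, size buckets and thread counts are illustrative only. The
> pipeline is wrapped in sh -c so that GNU time measures the whole
> find | xargs run rather than just find.)
>
> for t in 1 2 4 8 16 24 32; do
>     for range in "-size -256k" "-size +256k -size -512k" "-size +512k -size -1024k"; do
>         # start cold: drop the page cache before every run
>         echo 3 > /proc/sys/vm/drop_caches; sleep 3
>         # read every matching file with $t parallel cat processes
>         /usr/bin/time -v sh -c "find /test/ -type f $range | xargs -P $t -i cat {} > /dev/null 2>/dev/null"
>     done
> done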
> >
> > 2. Does the Squashfs parallelism options in the kernel configuration
> >    change the behaviour? Knowing if the number of "decompressors"
> >    available changes the difference in performance could be important.
>
> In our environment, CONFIG_SQUASHFS_DECOMP_MULTI_PERCPU is enabled. There
> are 12 CPUs on our board. We tried enabling CONFIG_SQUASHFS_DECOMP_MULTI
> and reading files with 2/4/6/8/12/16/24/32 threads; the performance was
> not improved and was even a bit worse.
>
> >
> > 3. Are your Squashfs filesystems built using fragments, or without
> >    fragments? Rebuilding the filesystems without fragments, and
> >    observing any different performance, would help to pinpoint
> >    where the issue lies.
>
> We didn't use the "-no-fragments" option when building the squashfs image.
> The steps to build the squashfs partition are:
> a. mksquashfs /lib64/ test.squash
> b. lvcreate -L 24M /dev/vg0 -n test -y
> c. dd if=/root/test.squash of=/dev/vg0/test
> d. mount -t squashfs /dev/vg0/test xxx
>
> When using "-no-fragments", the performance is much worse than with
> fragments. As you can see, the test files are from /lib64; most of
> them are small files.
>
> >
> > 4. What is the block size used in your Squashfs filesystems? Have
> >    you tried changing the block size, and seen what effect
> >    it has on the difference in performance between the patches?
>
> We configured CONFIG_SQUASHFS_4K_DEVBLK_SIZE to "y", so the block size
> should be 4k. We didn't try other block sizes because we have identical
> squashfs configs on 4.18 and 5.10.
>
> >
> > 5. You don't mention where your Squashfs filesystems are stored.
> >    Is this slow media or fast media?
>
> Please see the disk info we are testing on:
> """
> $ hdparm -I /dev/sda1
>
> /dev/sda1:
> SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 0a 00 00
> 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>
> ATA device, with non-removable media
> Standards:
>         Likely used: 1
> Configuration:
>         Logical         max     current
>         cylinders       0       0
>         heads           0       0
>         sectors/track   0       0
>         --
>         Logical/Physical Sector size:   512 bytes
>         device size with M = 1024*1024: 0 MBytes
>         device size with M = 1000*1000: 0 MBytes
>         cache/buffer size = unknown
> Capabilities:
>         IORDY not likely
>         Cannot perform double-word IO
>         R/W multiple sector transfer: not supported
>         DMA: not supported
>         PIO: pio0
> """
>
> > Have you tried moving
> > the Squashfs filesystems onto different media and observed
> > any difference in performance between the patches?
>
> Sorry, I still didn't get a chance to test on other media.
>
> >
> > The fact of the matter is there are many over-lapping factors
> > which affect the performance of squashfs filesystems (like any
> > reasonably complex code), which may be elsewhere. It can only
> > take a small change somewhere to have a dramatic effect on
> > performance.
>
> We found the performance is improved when running our test after remaking
> the partitions with my steps in item 3 above. The following data shows the
> elapsed times of squashfs_readahead() when reading files before (that is,
> after the test command had already been run many times) and after remaking
> the partitions. I captured the data below with ftrace:
>
> For a 14k file:
>   Before partitions remade    After partitions remade
>   4352.306 us                 3943.846 us
>   4321.176 us                 3929.255 us
>
> For a 1.8M file:
>   Before partitions remade    After partitions remade
>   17446.73 us                 16506.58 us
>   17446.73 us                 16201.32 us
>   18465.38 us                 17548.96 us
>   12269.78 us                 11939.09 us
>   9627.990 us                  9167.052 us
>
> As you can see, the elapsed times of squashfs_readahead() were significantly
> reduced with freshly made partitions. We hit the same problem on Linux 4.18.
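>
> (As a rough sketch, per-call timings like the ones above can be collected
> with ftrace's function_graph tracer along the lines below; the tracefs
> mount point may instead be /sys/kernel/tracing on newer kernels, and the
> workload in the middle is whatever read test is being measured.)
>
> cd /sys/kernel/debug/tracing
> echo 0 > tracing_on
> echo function_graph > current_tracer
> echo squashfs_readahead > set_graph_function
> echo 1 > max_graph_depth     # report only the top-level call duration
> echo 1 > tracing_on
> # ... run the read workload against the squashfs mount ...
> echo 0 > tracing_on
> cat trace                    # each squashfs_readahead() entry is annotated with its duration in us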
>
> By the way, I think the test results I sent out earlier in the v5 thread
> are related to whether the partitions had been remade:
> https://lore.kernel.org/lkml/20220606150305.1883410-1-hsinyi@chromium.org/T/#m5f3f8386eb8b72a1f63b60be37ea2cc6d03c5f84
>
> > This is particularly the case with embedded systems, which
> > may be short on CPU performance, short on RAM, and have low
> > performance media, and be effectively operating on the "edge".
> > It can only take a small change, an update for instance, to
> > change from performing well to badly.
>
> I checked CPU usage; it's not over 11%. The RAM is also enough:
>               total        used        free      shared  buff/cache   available
> Mem:       15837684      531420    11051344      262080     4254920    14858224
> Swap:             0
>
> Regards,
> Xiongwei
>
> > I speak from experience, having spent over ten years in embedded
> > Linux as a senior engineer and then as a consultant. I have
> > my own horror tales as a consultant, dealing with systems pushed
> > beyond the edge (with hacks), and the customer insisting they
> > didn't do anything to cause the system to finally break.
> >
> > Maybe it is off topic here. But I remember one instance where
> > a customer had a system out in the field, which "inexplicably"
> > started to lock up every 6 months or so. This system had regular
> > updates "over the air", and I discovered the "lock up" only
> > started happening after the latest update. It turns out the new version
> > of the application had grown a new feature which needed more
> > RAM than normal. This feature wasn't used very often, but,
> > if it coincided with an infrequent "house-keeping" background task,
> > the system ran out of memory and locked up (they had disabled the OOM
> > killer). This was so rare it might only coincide after six months. No
> > bug, but a slow growth in working-set RAM over a number of versions.
> >
> > In other words, we may be looking at a knock-on side effect of
> > readahead, which is either caused by issues elsewhere or is
> > causing issues elsewhere.
> >
> > Dealing with it in isolation, as a bug in the readahead code, is going
> > to get us nowhere, looking for something that isn't there.
> >
> > I'm not saying that this is the case here. But the more detail
> > and the more test variants you can provide, the easier it will be
> > to determine what the problem is.
> >
> > Thanks
> >
> > Phillip
> >
> > >
> > > Regards,
> > > Xiongwei
> > >
> > > On Mon, Jun 6, 2022 at 11:03 PM Hsin-Yi Wang wrote:
> > >>
> > >> Implement readahead callback for squashfs. It will read datablocks
> > >> which cover pages in the readahead request. For a few cases it will
> > >> not mark the page as uptodate, including:
> > >> - file end is 0.
> > >> - zero filled blocks.
> > >> - current batch of pages isn't in the same datablock.
> > >> - decompressor error.
> > >> Otherwise pages will be marked as uptodate. The unhandled pages will be
> > >> updated by readpage later.
> > >>
> > >> Suggested-by: Matthew Wilcox
> > >> Signed-off-by: Hsin-Yi Wang
> > >> Reported-by: Matthew Wilcox
> > >> Reported-by: Phillip Lougher
> > >> Reported-by: Xiongwei Song
> > >> Reported-by: Marek Szyprowski
> > >> Reported-by: Andrew Morton
> > >> ---
> > >> v4->v5:
> > >> - Handle short file cases reported by Marek and Matthew.
> > >> - Fix checkpatch error reported by Andrew.
> > >>
> > >> v4: https://lore.kernel.org/lkml/20220601103922.1338320-4-hsinyi@chromium.org/
> > >> v3: https://lore.kernel.org/lkml/20220523065909.883444-4-hsinyi@chromium.org/
> > >> v2: https://lore.kernel.org/lkml/20220517082650.2005840-4-hsinyi@chromium.org/
> > >> v1: https://lore.kernel.org/lkml/20220516105100.1412740-3-hsinyi@chromium.org/
> > >> ---
> > >>  fs/squashfs/file.c | 124 ++++++++++++++++++++++++++++++++++++++++++++++-
> > >>  1 file changed, 123 insertions(+), 1 deletion(-)
> > >>
> > >> diff --git a/fs/squashfs/file.c b/fs/squashfs/file.c
> > >> index a8e495d8eb86..fbd096cd15f4 100644
> > >> --- a/fs/squashfs/file.c
> > >> +++ b/fs/squashfs/file.c
> > >> @@ -39,6 +39,7 @@
> > >>  #include "squashfs_fs_sb.h"
> > >>  #include "squashfs_fs_i.h"
> > >>  #include "squashfs.h"
> > >> +#include "page_actor.h"
> > >>
> > >>  /*
> > >>   * Locate cache slot in range [offset, index] for specified inode. If
> > >> @@ -495,7 +496,128 @@ static int squashfs_read_folio(struct file *file, struct folio *folio)
> > >>          return 0;
> > >>  }
> > >>
> > >> +static void squashfs_readahead(struct readahead_control *ractl)
> > >> +{
> > >> +        struct inode *inode = ractl->mapping->host;
> > >> +        struct squashfs_sb_info *msblk = inode->i_sb->s_fs_info;
> > >> +        size_t mask = (1UL << msblk->block_log) - 1;
> > >> +        unsigned short shift = msblk->block_log - PAGE_SHIFT;
> > >> +        loff_t start = readahead_pos(ractl) & ~mask;
> > >> +        size_t len = readahead_length(ractl) + readahead_pos(ractl) - start;
> > >> +        struct squashfs_page_actor *actor;
> > >> +        unsigned int nr_pages = 0;
> > >> +        struct page **pages;
> > >> +        int i, file_end = i_size_read(inode) >> msblk->block_log;
> > >> +        unsigned int max_pages = 1UL << shift;
> > >> +
> > >> +        readahead_expand(ractl, start, (len | mask) + 1);
> > >> +
> > >> +        if (file_end == 0)
> > >> +                return;
> > >> +
> > >> +        pages = kmalloc_array(max_pages, sizeof(void *), GFP_KERNEL);
> > >> +        if (!pages)
> > >> +                return;
> > >> +
> > >> +        actor = squashfs_page_actor_init_special(pages, max_pages, 0);
> > >> +        if (!actor)
> > >> +                goto out;
> > >> +
> > >> +        for (;;) {
> > >> +                pgoff_t index;
> > >> +                int res, bsize;
> > >> +                u64 block = 0;
> > >> +                unsigned int expected;
> > >> +
> > >> +                nr_pages = __readahead_batch(ractl, pages, max_pages);
> > >> +                if (!nr_pages)
> > >> +                        break;
> > >> +
> > >> +                if (readahead_pos(ractl) >= i_size_read(inode))
> > >> +                        goto skip_pages;
> > >> +
> > >> +                index = pages[0]->index >> shift;
> > >> +                if ((pages[nr_pages - 1]->index >> shift) != index)
> > >> +                        goto skip_pages;
> > >> +
> > >> +                expected = index == file_end ?
> > >> +                        (i_size_read(inode) & (msblk->block_size - 1)) :
> > >> +                        msblk->block_size;
> > >> +
> > >> +                bsize = read_blocklist(inode, index, &block);
> > >> +                if (bsize == 0)
> > >> +                        goto skip_pages;
> > >> +
> > >> +                if (nr_pages < max_pages) {
> > >> +                        struct squashfs_cache_entry *buffer;
> > >> +                        unsigned int block_mask = max_pages - 1;
> > >> +                        int offset = pages[0]->index - (pages[0]->index & ~block_mask);
> > >> +
> > >> +                        buffer = squashfs_get_datablock(inode->i_sb, block,
> > >> +                                                        bsize);
> > >> +                        if (buffer->error) {
> > >> +                                squashfs_cache_put(buffer);
> > >> +                                goto skip_pages;
> > >> +                        }
> > >> +
> > >> +                        expected -= offset * PAGE_SIZE;
> > >> +                        for (i = 0; i < nr_pages && expected > 0; i++,
> > >> +                                        expected -= PAGE_SIZE, offset++) {
> > >> +                                int avail = min_t(int, expected, PAGE_SIZE);
> > >> +
> > >> +                                squashfs_fill_page(pages[i], buffer,
> > >> +                                                offset * PAGE_SIZE, avail);
> > >> +                                unlock_page(pages[i]);
> > >> +                        }
> > >> +
> > >> +                        squashfs_cache_put(buffer);
> > >> +                        continue;
> > >> +                }
> > >> +
> > >> +                res = squashfs_read_data(inode->i_sb, block, bsize, NULL,
> > >> +                                         actor);
> > >> +
> > >> +                if (res == expected) {
> > >> +                        int bytes;
> > >> +
> > >> +                        /* Last page may have trailing bytes not filled */
> > >> +                        bytes = res % PAGE_SIZE;
> > >> +                        if (bytes) {
> > >> +                                void *pageaddr;
> > >> +
> > >> +                                pageaddr = kmap_atomic(pages[nr_pages - 1]);
> > >> +                                memset(pageaddr + bytes, 0, PAGE_SIZE - bytes);
> > >> +                                kunmap_atomic(pageaddr);
> > >> +                        }
> > >> +
> > >> +                        for (i = 0; i < nr_pages; i++) {
> > >> +                                flush_dcache_page(pages[i]);
> > >> +                                SetPageUptodate(pages[i]);
> > >> +                        }
> > >> +                }
> > >> +
> > >> +                for (i = 0; i < nr_pages; i++) {
> > >> +                        unlock_page(pages[i]);
> > >> +                        put_page(pages[i]);
> > >> +                }
> > >> +        }
> > >> +
> > >> +        kfree(actor);
> > >> +        kfree(pages);
> > >> +        return;
> > >> +
> > >> +skip_pages:
> > >> +        for (i = 0; i < nr_pages; i++) {
> > >> +                unlock_page(pages[i]);
> > >> +                put_page(pages[i]);
> > >> +        }
> > >> +
> > >> +        kfree(actor);
> > >> +out:
> > >> +        kfree(pages);
> > >> +}
> > >>
> > >>  const struct address_space_operations squashfs_aops = {
> > >> -        .read_folio = squashfs_read_folio
> > >> +        .read_folio = squashfs_read_folio,
> > >> +        .readahead = squashfs_readahead
> > >>  };
> > >> --
> > >> 2.36.1.255.ge46751e96f-goog
> > >>
> >