Subject: Re: [PATCH v4 3/3] squashfs: implement readahead
From: Phillip Lougher
To: Marek Szyprowski, Matthew Wilcox, Hsin-Yi Wang
Cc: Xiongwei Song, Zheng Liang, Zhang Yi, Hou Tao, Miao Xie,
 Andrew Morton, linux-mm@kvack.org,
 squashfs-devel@lists.sourceforge.net, linux-kernel@vger.kernel.org
Date: Mon, 6 Jun 2022 04:54:24 +0100
Message-ID: <0e84fe64-c993-7f43-ca52-8fee735b0372@squashfs.org.uk>
References: <20220601103922.1338320-1-hsinyi@chromium.org>
 <20220601103922.1338320-4-hsinyi@chromium.org>
 <90b228ea-1b0e-d2e8-62be-9ad5802dcce7@samsung.com>

On 03/06/2022 16:58, Marek Szyprowski wrote:
> Hi Matthew,
>
> On 03.06.2022 17:29, Matthew Wilcox wrote:
>> On Fri, Jun 03, 2022 at 10:55:01PM
+0800, Hsin-Yi Wang wrote:
>>> On Fri, Jun 3, 2022 at 10:10 PM Marek Szyprowski wrote:
>>>> Hi Matthew,
>>>>
>>>> On 03.06.2022 14:59, Matthew Wilcox wrote:
>>>>> On Fri, Jun 03, 2022 at 02:54:21PM +0200, Marek Szyprowski wrote:
>>>>>> On 01.06.2022 12:39, Hsin-Yi Wang wrote:
>>>>>>> Implement readahead callback for squashfs. It will read datablocks
>>>>>>> which cover pages in readahead request. For a few cases it will
>>>>>>> not mark page as uptodate, including:
>>>>>>> - file end is 0.
>>>>>>> - zero filled blocks.
>>>>>>> - current batch of pages isn't in the same datablock or not enough
>>>>>>>   in a datablock.
>>>>>>> - decompressor error.
>>>>>>> Otherwise pages will be marked as uptodate. The unhandled pages will
>>>>>>> be updated by readpage later.
>>>>>>>
>>>>>>> Suggested-by: Matthew Wilcox
>>>>>>> Signed-off-by: Hsin-Yi Wang
>>>>>>> Reported-by: Matthew Wilcox
>>>>>>> Reported-by: Phillip Lougher
>>>>>>> Reported-by: Xiongwei Song
>>>>>>> ---
>>>>>> This patch landed recently in linux-next as commit 95f7a26191de
>>>>>> ("squashfs: implement readahead"). I've noticed that it causes
>>>>>> serious issues on my test systems (various ARM 32bit and 64bit based
>>>>>> boards). The easiest way to observe is udev timeout 'waiting for
>>>>>> /dev to be fully populated' and prolonged booting time. I'm using
>>>>>> squashfs for deploying kernel modules via initrd. Reverting
>>>>>> aeefca9dfae7 & 95f7a26191de on top of next-20220603 fixes the issue.
>>>>> How large are these files? Just a few kilobytes?
>>>> Yes, they are small, most of them are smaller than 16KB, some about
>>>> 128KB and a few about 256KB. I've sent a detailed list in private mail.
>>>>
>>> Hi Marek,
>>>
>>> Are there any obvious squashfs errors in dmesg? Did you enable
>>> CONFIG_SQUASHFS_FILE_DIRECT or CONFIG_SQUASHFS_FILE_CACHE?
>> I don't think it's an error problem. I think it's a short file problem.
>>
>> As I understand the current code (and apologies for not keeping up
>> to date with how the patch is progressing), if the file is less than
>> msblk->block_size bytes, we'll leave all the pages as !uptodate, leaving
>> them to be brought uptodate by squashfs_read_folio(). So Marek is hitting
>> the worst case scenario where we re-read the entire block for each page
>> in it. I think we have to handle this tail case in ->readahead().
>
> I'm not sure if this is related to reading of small files. There are
> only 50 modules being loaded from the squashfs volume. I did a quick
> test of reading the files.
>
> Simple file read with this patch:
>
> root@target:~# time find /initrd/ -type f | while read f; do cat $f > /dev/null; done
>
> real    0m5.865s
> user    0m2.362s
> sys     0m3.844s
>
> Without:
>
> root@target:~# time find /initrd/ -type f | while read f; do cat $f > /dev/null; done
>
> real    0m6.619s
> user    0m2.112s
> sys     0m4.827s
>

It has been a four-day holiday in the UK (the Queen's Platinum Jubilee),
hence the delay in responding.

The read use-case above is sequential (only one thread/process), whereas
the use-case where the slow-down is observed may be parallel (multiple
threads/processes entering Squashfs).

If the small files are held in fragments, the sequential use-case above
will exhibit caching behaviour that ameliorates the case where the same
block is repeatedly re-read for each page in it. Each time Squashfs is
re-entered to handle a single page, the decompressed block will be found
in the fragment cache, eliminating a block decompression for each page.

In a parallel use-case the decompressed fragment block may be evicted
from the cache (by other reading processes), forcing the block to be
repeatedly decompressed. Hence the slow-down will be much more
noticeable in a parallel use-case than in a sequential one.
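A quick way to check this hypothesis would be to repeat the timing test
above with several concurrent readers, so that decompressed fragment
blocks from different files compete for the fragment cache. A minimal
sketch (the helper name, job count and batch size are illustrative, not
taken from this thread):

```shell
# Hypothetical helper: read every regular file under $1 using $2
# concurrent reader processes (default 8). With enough readers, a
# decompressed fragment block can be evicted by the other readers
# before the next page of the same file is requested.
parallel_read_all() {
    dir=$1
    jobs=${2:-8}
    find "$dir" -type f -print0 |
        xargs -0 -n 16 -P "$jobs" sh -c 'for f; do cat "$f" >/dev/null; done' sh
}
```

Comparing `time parallel_read_all /initrd 8` against
`time parallel_read_all /initrd 1`, with and without the patch, should
show the slow-down growing with the degree of parallelism if fragment
cache eviction is the cause.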
This may also be why it slipped through testing, if the test cases are
purely sequential in nature.

So Matthew's previous comment is still the most likely explanation for
the slow-down.

Phillip

> Best regards