From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <01ff76e3-87fd-0105-c363-44eecff12b57@linux.alibaba.com>
Date: Wed, 1 Mar 2023 13:51:28 +0800
Subject: Re: [LSF/MM/BPF TOPIC] Cloud storage optimizations
From: Gao Xiang <hsiangkao@linux.alibaba.com>
To: Matthew Wilcox
Cc: Theodore Ts'o, lsf-pc@lists.linux-foundation.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-block@vger.kernel.org
References: <49b6d3de-e5c7-73fc-fa43-5c068426619b@linux.alibaba.com>
On 2023/3/1 13:42, Matthew Wilcox wrote:
> On Wed, Mar 01, 2023 at 01:09:34PM +0800, Gao Xiang wrote:
>> On 2023/3/1 13:01, Matthew Wilcox wrote:
>>> On Wed, Mar 01, 2023 at 12:49:10PM +0800, Gao Xiang wrote:
>>>>> The only problem is that the readahead code doesn't tell the filesystem
>>>>> whether the request is sync or async.  This should be a simple matter
>>>>> of adding a new 'bool async' to the readahead_control and then setting
>>>>> REQ_RAHEAD based on that, rather than on whether the request came in
>>>>> through readahead() or read_folio() (eg see mpage_readahead()).
>>>>
>>>> Great!
>>>> In addition to that, just (somewhat) off topic: if we have a
>>>> "bool async" now, I think it will immediately have some users (such as
>>>> EROFS), since we'd like to do post-processing (such as decompression)
>>>> immediately in the same context for sync readahead (due to missing
>>>> pages) and leave it to another kworker for async readahead (I think
>>>> it's almost the same for decryption and verification).
>>>>
>>>> So "bool async" would be quite useful on my side if it could be
>>>> passed to the fs side.  I'd like to raise my hand for it.
>>>
>>> That's a really interesting use-case; thanks for bringing it up.
>>>
>>> Ideally, we'd have the waiting task do the
>>> decompression/decryption/verification for proper accounting of CPU.
>>> Unfortunately, if the folio isn't uptodate, the task doesn't even hold
>>> a reference to the folio while it waits, so there's no way to wake the
>>> task and let it know that it has work to do.  At least not at the moment
>>> ... let me think about that a bit (and if you see a way to do it, feel
>>> free to propose it)
>>
>> Honestly, I'd like to hold the folio lock until all post-processing is
>> done, then mark the folio uptodate and unlock it, so that all we need
>> is to pass the locked-folio requests to kworkers for the async path,
>> or handle them synchronously in the original context.
>>
>> If we unlocked these folios in advance without marking them uptodate,
>> we would have to lock them again (which could cause more lock
>> contention), and we'd also need a way to track folios whose I/O has
>> completed but which haven't been post-processed yet, in addition to
>> those with no I/O done at all.
>
> Right, look at how it's handled right now ...
>
> sys_read() ends up in filemap_get_pages() which (assuming no folio in
> cache) calls page_cache_sync_readahead().  That creates locked, !uptodate
> folios and asks the filesystem to fill them.  Unless that completes
> incredibly quickly, filemap_get_pages() ends up in filemap_update_page()
> which calls folio_put_wait_locked().
>
> If the filesystem BIO completion routine could identify if there was
> a task waiting and select one of them, it could wake up the waiter and
> pass it a description of what work it needed to do (with the folio still
> locked), rather than do the postprocessing itself and unlock the folio

Currently, EROFS sync decompression waits in .readahead() with the page
cache folios locked, using one "completion" kept together with the BIO
descriptor (via bi_private) in the original context.  The filesystem BIO
completion routine then only needs to complete that completion and wake
up the original context (due to the missing pages, the original context
needs the page data immediately anyway), which then continues
.readahead() and unlocks the folios.

Does this approach have some flaw?  Or am I missing something?

Thanks,
Gao Xiang

>
> But that all seems _very_ hard to do with 100% reliability.  Note the
> comment in folio_wait_bit_common() which points out that the waiters
> bit may be set even when there are no waiters.  The wake_up code
> doesn't seem to support this kind of thing (all waiters are
> non-exclusive, but only wake up one of them).