From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 18 Sep 2023 11:42:15 -0700
From: Luis Chamberlain
To: Matthew Wilcox
Cc: Daniel Gomez, minchan@kernel.org, senozhatsky@chromium.org, axboe@kernel.dk, djwong@kernel.org, hughd@google.com, akpm@linux-foundation.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, gost.dev@samsung.com, Pankaj Raghav
Subject: Re: [PATCH 1/6] filemap: make the folio order calculation shareable
References: <20230915095042.1320180-1-da.gomez@samsung.com> <20230915095042.1320180-2-da.gomez@samsung.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
On Mon, Sep 18, 2023 at 07:24:55PM +0100, Matthew Wilcox wrote:
> On Mon, Sep 18, 2023 at 11:09:00AM -0700, Luis Chamberlain wrote:
> > On Fri, Sep 15, 2023 at 02:40:07PM +0100, Matthew Wilcox wrote:
> > > On Fri, Sep 15, 2023 at 09:51:23AM +0000, Daniel Gomez wrote:
> > > > To make the code that clamps the folio order in the __filemap_get_folio
> > > > routine reusable by others, move and merge it into the new
> > > > fgf_set_order() subroutine (mapping_size_order), so that when mapping
> > > > the size at a given index, the calculated order is already valid and
> > > > ready to be used when the order is retrieved from fgp_flags with
> > > > FGF_GET_ORDER.
> > > >
> > > > Signed-off-by: Daniel Gomez
> > > > ---
> > > >  fs/iomap/buffered-io.c  |  6 ++++--
> > > >  include/linux/pagemap.h | 42 ++++++++++++++++++++++++++++++++++++-----
> > > >  mm/filemap.c            |  8 --------
> > > >  3 files changed, 41 insertions(+), 15 deletions(-)
> > >
> > > That seems like a lot of extra code to add in order to avoid copying
> > > six lines of code and one comment into the shmem code.
> > >
> > > It's not wrong, but it seems like a bad tradeoff to me.
> >
> > The suggestion to merge came from me, mostly based on later observations
> > that in the future we may want to extend this with a min order to ensure
> > the index is aligned to the order. This check would only be useful for
> > buffered IO for iomap, and for readahead.
> > It has me wondering: once buffer-heads support for large-order folios
> > comes around, would we need a similar check there?
> >
> > So Willy, you would know better if and when a shared piece of code would
> > be best with all these things in mind.
>
> In my mind, this is fundamentally code which belongs in the page cache
> rather than in individual filesystems. The fly in the ointment is that
> shmem has forked the page cache in order to do its own slightly
> specialised thing.

Do we make any effort *now* to not make that situation worse? This is
just one example.

> I don't see the buffer_head connection;

I haven't reviewed Hannes' patches yet, but I was wondering whether for
buffer-heads there was also a loop to target a high order and retry
until you get the minimum order allowed.

> shmem is
> an extremely special case, and we shouldn't mess around with other
> filesystems to avoid changing shmem.
>
> Ideally, we'd reunify (parts of) shmem and the regular page cache, but
> that's a lot of work, probably involving the swap layer changing.

Well, swap will indeed be the next effort, so as we add more code it
would be good to know whether we should be avoiding making the
situation worse.

  Luis