Date: Sat, 4 Mar 2023 16:39:02 +0000
From: Matthew Wilcox <willy@infradead.org>
To: James Bottomley
Cc: Keith Busch, Luis Chamberlain, Theodore Ts'o,
    lsf-pc@lists.linux-foundation.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-block@vger.kernel.org
Subject: Re: [LSF/MM/BPF TOPIC] Cloud storage optimizations

On Sat, Mar 04, 2023 at 08:41:04AM -0500, James Bottomley wrote:
> On Sat, 2023-03-04 at 07:34 +0000, Matthew Wilcox wrote:
> > On Fri, Mar 03, 2023 at 08:11:47AM -0500, James Bottomley wrote:
> > > On Fri, 2023-03-03 at 03:49 +0000, Matthew Wilcox wrote:
> > > > On Thu, Mar 02, 2023 at 06:58:58PM -0700, Keith Busch wrote:
> > > > > That said, I was hoping you were going to suggest supporting
> > > > > 16k logical block sizes.  Not a problem on some arches, but
> > > > > still problematic when PAGE_SIZE is 4k. :)
> > > >
> > > > I was hoping Luis was going to propose a session on LBA size >
> > > > PAGE_SIZE.  Funnily, while the pressure is coming from the
> > > > storage vendors, I don't think there's any work to be done in
> > > > the storage layers.  It's purely a FS+MM problem.
> > >
> > > Heh, I can do the fools-rush-in bit, especially if what we're
> > > interested in is the minimum it would take to support this ...
> > >
> > > The FS problem could be solved simply by saying FS block size
> > > must equal device block size; then it becomes purely a MM issue.
> >
> > Spoken like somebody who's never converted a filesystem to
> > supporting large folios.  There are a number of issues:
> >
> > 1. The obvious: use of PAGE_SIZE and/or PAGE_SHIFT.
>
> Well, yes, a filesystem has to be aware it's using a block size
> larger than page size.
>
> > 2. Use of the kmap family to access, e.g., directories.  You can't
> >    kmap an entire folio, only one page at a time.  And if a dentry
> >    is split across a page boundary ...
>
> Is kmap relevant?  It's only used for reading user pages in the
> kernel, and I can't see why a filesystem would use it unless it wants
> to pack inodes into pages that also contain user data, which is an
> optimization, not a fundamental issue (although I grant that as the
> block size grows it becomes more useful), so it doesn't have to be
> part of the minimum viable prototype.

Filesystems often choose to store their metadata in HIGHMEM.  That
wasn't an entirely crazy idea back in, say, 2005, when you might be
running an ext2 filesystem on a machine with 32GB of RAM and only
800MB of address space for it.  Now it's silly.  Buy a real computer.
I'm getting more and more comfortable with the idea that "Linux
doesn't support block sizes > PAGE_SIZE on 32-bit machines" is an
acceptable answer.
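To make the kmap pain concrete, a directory walk over a large folio
whose pages may live in HIGHMEM has to look roughly like the sketch
below.  kmap_local_folio(), kunmap_local() and folio_size() are the
real interfaces; the walker itself is invented for illustration:

#include <linux/highmem.h>
#include <linux/mm.h>

/*
 * Sketch only, not code from any real filesystem.  kmap_local_folio()
 * maps exactly one page of the folio, so the walk must remap at every
 * page boundary, and a dentry that straddles the boundary has to be
 * stitched together across two mappings.
 */
static void sketch_walk_dir_folio(struct folio *folio)
{
	size_t offset;

	for (offset = 0; offset < folio_size(folio); offset += PAGE_SIZE) {
		char *kaddr = kmap_local_folio(folio, offset);

		/*
		 * Parse at most PAGE_SIZE bytes of directory entries
		 * here; state for a partially-parsed dentry must be
		 * carried by hand into the next page's mapping.
		 */

		kunmap_local(kaddr);
	}
}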
> > 3. buffer_heads do not currently support large folios.  Working
> >    on it.
>
> Yes, I always forget filesystems still use the buffer cache.  But
> fundamentally the buffer_head structure can cope with buffers that
> span pages, so most of the logic changes would be around
> grow_dev_page().  It seems somewhat messy but not too hard.

I forgot one particularly nasty case: we have filesystems (including
the mpage code used by a number of filesystems) which put an array of
block numbers on the stack.  That's not a big deal when it's 8 entries
(4kB/512 * 8 bytes = 64 bytes), but it starts to get noticeable at
64kB PAGE_SIZE (1kB is a little large for a stack allocation) and
downright unreasonable if you try to do something to a 2MB allocation
(32kB).

> > Probably a few other things I forget.  But look through the recent
> > patches to AFS, CIFS, NFS, XFS, iomap that do folio conversions.
> > A lot of it is pretty mechanical, but some of it takes hard
> > thought.  And if you have ideas about how to handle ext2
> > directories, I'm all ears.
>
> OK, so I can see you were waiting for someone to touch a nerve, but
> if I can go back to the stated goal, I never really thought *every*
> filesystem would be suitable for block size > page size, so simply
> getting a few of the modern ones working would be good enough for
> the minimum viable prototype.

XFS already works with arbitrary-order folios.  The only needed piece
is specifying to the VFS that there's a minimum folio order for this
particular inode, and having the VFS honour that everywhere (a sketch
of what such an interface might look like is at the end of this
mail).

What "touches a nerve" is people who clearly haven't been paying
attention to the problem making sweeping assertions about what the
easy and hard parts are.

> I fully understand that eventually we'll need to get a single large
> buffer to span discontiguous pages ... I noted that in the bit you
> cut, but I don't see why the prototype shouldn't start with
> contiguous pages.

I disagree that spanning discontiguous pages is a desirable goal.  To
solve the scalability issues we have in the VFS, we need to manage
memory in larger chunks than PAGE_SIZE.  That makes the concerns
expressed in previous years moot.
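For concreteness, the minimum-order piece mentioned above might look
something like the sketch below.  Only mapping_set_large_folios()
exists today; the min-order helper and its wiring are invented for
illustration:

#include <linux/fs.h>
#include <linux/pagemap.h>

/*
 * Invented-for-illustration: let a filesystem declare a minimum folio
 * order for an inode's page cache, e.g. so a 16k block size works on
 * a machine with 4k PAGE_SIZE.  A real implementation would encode
 * the order in mapping->flags and teach filemap_alloc_folio(),
 * readahead and truncation to round up to it.
 */
static inline void mapping_set_folio_min_order(struct address_space *mapping,
					       unsigned int min_order)
{
	/*
	 * Record min_order; every page-cache folio allocated for this
	 * mapping must then be at least PAGE_SIZE << min_order bytes.
	 */
}

/* Filesystem inode setup, e.g. 16k blocks on a 4k PAGE_SIZE machine: */
static void sketch_fs_init_mapping(struct inode *inode)
{
	mapping_set_large_folios(inode->i_mapping);	  /* exists today */
	mapping_set_folio_min_order(inode->i_mapping, 2); /* invented */
}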