From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 29 Jul 2024 09:41:59 -0700
From: "Darrick J. Wong" <djwong@kernel.org>
To: "Pankaj Raghav (Samsung)"
Cc: david@fromorbit.com, willy@infradead.org, chandan.babu@oracle.com,
	brauner@kernel.org, akpm@linux-foundation.org,
	yang@os.amperecomputing.com, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, john.g.garry@oracle.com,
	linux-fsdevel@vger.kernel.org, hare@suse.de, p.raghav@samsung.com,
	mcgrof@kernel.org, gost.dev@samsung.com, cl@os.amperecomputing.com,
	linux-xfs@vger.kernel.org, ryan.roberts@arm.com, hch@lst.de, Zi Yan
Subject: Re: [PATCH v11 10/10] xfs: enable block size larger than page size support
Message-ID: <20240729164159.GC6352@frogsfrogsfrogs>
References: <20240726115956.643538-1-kernel@pankajraghav.com>
 <20240726115956.643538-11-kernel@pankajraghav.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20240726115956.643538-11-kernel@pankajraghav.com>

On Fri, Jul 26, 2024 at 01:59:56PM +0200, Pankaj Raghav (Samsung) wrote:
> From: Pankaj Raghav
> 
> Page cache now has the ability to have a minimum order when allocating
> a folio which is a prerequisite to add support for block size > page
> size.
> 
> Signed-off-by: Pankaj Raghav
> Signed-off-by: Luis Chamberlain
> Reviewed-by: Darrick J. Wong
> ---
>  fs/xfs/libxfs/xfs_ialloc.c |  5 +++++
>  fs/xfs/libxfs/xfs_shared.h |  3 +++
>  fs/xfs/xfs_icache.c        |  6 ++++--
>  fs/xfs/xfs_mount.c         |  1 -
>  fs/xfs/xfs_super.c         | 28 ++++++++++++++++++++--------
>  include/linux/pagemap.h    | 13 +++++++++++++
>  6 files changed, 45 insertions(+), 11 deletions(-)
> 
> diff --git a/fs/xfs/libxfs/xfs_ialloc.c b/fs/xfs/libxfs/xfs_ialloc.c
> index 0af5b7a33d055..1921b689888b8 100644
> --- a/fs/xfs/libxfs/xfs_ialloc.c
> +++ b/fs/xfs/libxfs/xfs_ialloc.c
> @@ -3033,6 +3033,11 @@ xfs_ialloc_setup_geometry(
>  		igeo->ialloc_align = mp->m_dalign;
>  	else
>  		igeo->ialloc_align = 0;
> +
> +	if (mp->m_sb.sb_blocksize > PAGE_SIZE)
> +		igeo->min_folio_order = mp->m_sb.sb_blocklog - PAGE_SHIFT;
> +	else
> +		igeo->min_folio_order = 0;
>  }
>  
>  /* Compute the location of the root directory inode that is laid out by mkfs. */
> diff --git a/fs/xfs/libxfs/xfs_shared.h b/fs/xfs/libxfs/xfs_shared.h
> index 2f7413afbf46c..33b84a3a83ff6 100644
> --- a/fs/xfs/libxfs/xfs_shared.h
> +++ b/fs/xfs/libxfs/xfs_shared.h
> @@ -224,6 +224,9 @@ struct xfs_ino_geometry {
>  	/* precomputed value for di_flags2 */
>  	uint64_t new_diflags2;
>  
> +	/* minimum folio order of a page cache allocation */
> +	unsigned int min_folio_order;
> +
>  };
>  
>  #endif /* __XFS_SHARED_H__ */
> diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
> index cf629302d48e7..0fcf235e50235 100644
> --- a/fs/xfs/xfs_icache.c
> +++ b/fs/xfs/xfs_icache.c
> @@ -88,7 +88,8 @@ xfs_inode_alloc(
>  
>  	/* VFS doesn't initialise i_mode! */
>  	VFS_I(ip)->i_mode = 0;
> -	mapping_set_large_folios(VFS_I(ip)->i_mapping);
> +	mapping_set_folio_min_order(VFS_I(ip)->i_mapping,
> +			M_IGEO(mp)->min_folio_order);
>  
>  	XFS_STATS_INC(mp, vn_active);
>  	ASSERT(atomic_read(&ip->i_pincount) == 0);
> @@ -325,7 +326,8 @@ xfs_reinit_inode(
>  	inode->i_uid = uid;
>  	inode->i_gid = gid;
>  	inode->i_state = state;
> -	mapping_set_large_folios(inode->i_mapping);
> +	mapping_set_folio_min_order(inode->i_mapping,
> +			M_IGEO(mp)->min_folio_order);
>  	return error;
>  }
>  
> diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
> index 3949f720b5354..c6933440f8066 100644
> --- a/fs/xfs/xfs_mount.c
> +++ b/fs/xfs/xfs_mount.c
> @@ -134,7 +134,6 @@ xfs_sb_validate_fsb_count(
>  {
>  	uint64_t max_bytes;
>  
> -	ASSERT(PAGE_SHIFT >= sbp->sb_blocklog);
>  	ASSERT(sbp->sb_blocklog >= BBSHIFT);
>  
>  	if (check_shl_overflow(nblocks, sbp->sb_blocklog, &max_bytes))
> diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
> index 27e9f749c4c7f..b2f5a1706c59d 100644
> --- a/fs/xfs/xfs_super.c
> +++ b/fs/xfs/xfs_super.c
> @@ -1638,16 +1638,28 @@ xfs_fs_fill_super(
>  		goto out_free_sb;
>  	}
>  
> -	/*
> -	 * Until this is fixed only page-sized or smaller data blocks work.
> -	 */
>  	if (mp->m_sb.sb_blocksize > PAGE_SIZE) {
> -		xfs_warn(mp,
> -		"File system with blocksize %d bytes. "
> -		"Only pagesize (%ld) or less will currently work.",
> +		size_t max_folio_size = mapping_max_folio_size_supported();
> +
> +		if (!xfs_has_crc(mp)) {
> +			xfs_warn(mp,
> +"V4 Filesystem with blocksize %d bytes. Only pagesize (%ld) or less is supported.",
>  				mp->m_sb.sb_blocksize, PAGE_SIZE);
> -		error = -ENOSYS;
> -		goto out_free_sb;
> +			error = -ENOSYS;
> +			goto out_free_sb;
> +		}
> +
> +		if (mp->m_sb.sb_blocksize > max_folio_size) {
> +			xfs_warn(mp,
> +"block size (%u bytes) not supported; Only block size (%ld) or less is supported",
> +			mp->m_sb.sb_blocksize, max_folio_size);

Dumb nit: Please indent ^^^ this second line so that it doesn't start on
the same column as the separate statement below it.

--D

> +			error = -ENOSYS;
> +			goto out_free_sb;
> +		}
> +
> +		xfs_warn(mp,
> +"EXPERIMENTAL: V5 Filesystem with Large Block Size (%d bytes) enabled.",
> +			mp->m_sb.sb_blocksize);
>  	}
>  
>  	/* Ensure this filesystem fits in the page cache limits */
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index 3a876d6801a90..61a7649d86e57 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -373,6 +373,19 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
>  #define MAX_XAS_ORDER		(XA_CHUNK_SHIFT * 2 - 1)
>  #define MAX_PAGECACHE_ORDER	min(MAX_XAS_ORDER, PREFERRED_MAX_PAGECACHE_ORDER)
>  
> +/*
> + * mapping_max_folio_size_supported() - Check the max folio size supported
> + *
> + * The filesystem should call this function at mount time if there is a
> + * requirement on the folio mapping size in the page cache.
> + */
> +static inline size_t mapping_max_folio_size_supported(void)
> +{
> +	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
> +		return 1U << (PAGE_SHIFT + MAX_PAGECACHE_ORDER);
> +	return PAGE_SIZE;
> +}
> +
> /*
>  * mapping_set_folio_order_range() - Set the orders supported by a file.
>  * @mapping:	The address space of the file.
> -- 
> 2.44.1
> 
> 