Date: Thu, 4 Jul 2024 21:28:30 +0000
From: "Pankaj Raghav (Samsung)" <kernel@pankajraghav.com>
To: Matthew Wilcox
Cc: Ryan Roberts, david@fromorbit.com, chandan.babu@oracle.com, djwong@kernel.org,
	brauner@kernel.org, akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
	yang@os.amperecomputing.com, linux-mm@kvack.org, john.g.garry@oracle.com,
	linux-fsdevel@vger.kernel.org, hare@suse.de, p.raghav@samsung.com,
	mcgrof@kernel.org, gost.dev@samsung.com, cl@os.amperecomputing.com,
	linux-xfs@vger.kernel.org, hch@lst.de, Zi Yan
Subject: Re: [PATCH v8 01/10] fs: Allow fine-grained control of folio sizes
Message-ID: <20240704212830.xtakuw57wonas42u@quentin>
References: <20240625114420.719014-1-kernel@pankajraghav.com>
 <20240625114420.719014-2-kernel@pankajraghav.com>

On Thu, Jul 04, 2024 at 04:20:13PM +0100, Matthew Wilcox wrote:
> On Thu, Jul 04, 2024 at 01:23:20PM +0100, Ryan Roberts wrote:
> > > -	AS_LARGE_FOLIO_SUPPORT = 6,
> >
> > nit: this removed enum is still referenced in a comment further down the file.

Good catch.

> Thanks. Pankaj, let me know if you want me to send you a patch or if
> you'll do it directly.

Yes, I will fold the changes.
> > > +static inline void mapping_set_folio_order_range(struct address_space *mapping,
> > > +						unsigned int min,
> > > +						unsigned int max)
> > > +{
> > > +	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
> > > +		return;
> > > +
> > > +	if (min > MAX_PAGECACHE_ORDER)
> > > +		min = MAX_PAGECACHE_ORDER;
> > > +	if (max > MAX_PAGECACHE_ORDER)
> > > +		max = MAX_PAGECACHE_ORDER;
> > > +	if (max < min)
> > > +		max = min;
> >
> > It seems strange to silently clamp these? Presumably for the bs>ps usecase,
> > whatever values are passed in are a hard requirement? So wouldn't want them to
> > be silently reduced. (Especially given the recent change to reduce the size of
> > MAX_PAGECACHE_ORDER to less then PMD size in some cases).
>
> Hm, yes.  We should probably make this return an errno.  Including
> returning an errno for !IS_ENABLED() and min > 0.
>

Something like this? (I also need to change the xfs_icache.c to use this
return value in the last patch)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 14e1415f7dcf..04916720f807 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -390,28 +390,27 @@ static inline void mapping_set_gfp_mask(struct address_space *m, gfp_t mask)
  * Context: This should not be called while the inode is active as it
  * is non-atomic.
  */
-static inline void mapping_set_folio_order_range(struct address_space *mapping,
+static inline int mapping_set_folio_order_range(struct address_space *mapping,
 						unsigned int min,
 						unsigned int max)
 {
 	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
-		return;
+		return -EINVAL;
 
-	if (min > MAX_PAGECACHE_ORDER)
-		min = MAX_PAGECACHE_ORDER;
-	if (max > MAX_PAGECACHE_ORDER)
-		max = MAX_PAGECACHE_ORDER;
+	if (min > MAX_PAGECACHE_ORDER || max > MAX_PAGECACHE_ORDER)
+		return -EINVAL;
 
 	if (max < min)
 		max = min;
 
 	mapping->flags = (mapping->flags & ~AS_FOLIO_ORDER_MASK) |
 		(min << AS_FOLIO_ORDER_MIN) | (max << AS_FOLIO_ORDER_MAX);
+	return 0;
 }
 
-static inline void mapping_set_folio_min_order(struct address_space *mapping,
+static inline int mapping_set_folio_min_order(struct address_space *mapping,
 					       unsigned int min)
 {
-	mapping_set_folio_order_range(mapping, min, MAX_PAGECACHE_ORDER);
+	return mapping_set_folio_order_range(mapping, min, MAX_PAGECACHE_ORDER);
 }
 
@@ -428,6 +427,10 @@ static inline void mapping_set_folio_min_order(struct address_space *mapping,
  */
 static inline void mapping_set_large_folios(struct address_space *mapping)
 {
+	/*
+	 * The return value can be safely ignored because this range
+	 * will always be supported by the page cache.
+	 */
 	mapping_set_folio_order_range(mapping, 0, MAX_PAGECACHE_ORDER);
 }

-- 
Pankaj
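
For illustration, a minimal sketch of how a caller could consume the new return
value, assuming the errno-returning mapping_set_folio_min_order() from the diff
above. The function example_setup_mapping() and its blkbits parameter are
hypothetical placeholders, not the actual xfs_icache.c change referenced in the
last patch of the series:

/*
 * Hypothetical caller sketch: a filesystem with block size > page size
 * derives the minimum folio order from its block size and propagates the
 * error instead of relying on silent clamping.
 */
static int example_setup_mapping(struct address_space *mapping,
				 unsigned int blkbits)
{
	unsigned int min_order = 0;
	int error;

	/* One folio must span at least one filesystem block (bs > ps case). */
	if (blkbits > PAGE_SHIFT)
		min_order = blkbits - PAGE_SHIFT;

	/* Fails with -EINVAL if THP is disabled or the order is unsupported. */
	error = mapping_set_folio_min_order(mapping, min_order);
	if (error)
		return error;

	return 0;
}

With the errno-returning variant, a configuration where MAX_PAGECACHE_ORDER is
smaller than the required order (or CONFIG_TRANSPARENT_HUGEPAGE is disabled)
produces an explicit failure rather than silently giving the caller a smaller
minimum folio order than it asked for.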