From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 29 Apr 2025 20:44:27 +0300
From: Ville Syrjälä <ville.syrjala@linux.intel.com>
To: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: akpm@linux-foundation.org, hughd@google.com, willy@infradead.org,
	david@redhat.com, wangkefeng.wang@huawei.com, 21cnbao@gmail.com,
	ryan.roberts@arm.com, ioworker0@gmail.com, da.gomez@samsung.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	regressions@lists.linux.dev, intel-gfx@lists.freedesktop.org,
	Eero Tamminen
Subject: [REGRESSION] Re: [PATCH v3 3/6] mm: shmem: add large folio support for tmpfs
References: <035bf55fbdebeff65f5cb2cdb9907b7d632c3228.1732779148.git.baolin.wang@linux.alibaba.com>
In-Reply-To: <035bf55fbdebeff65f5cb2cdb9907b7d632c3228.1732779148.git.baolin.wang@linux.alibaba.com>

On Thu, Nov 28, 2024 at 03:40:41PM +0800, Baolin Wang wrote:
> Add large folio support for tmpfs write and fallocate paths matching the
> same high order preference mechanism used in the iomap buffered IO path
> as used in __filemap_get_folio().
> 
> Add shmem_mapping_size_orders() to get a hint for the orders of the folio
> based on the file size, which takes care of the mapping requirements.
> 
> Traditionally, tmpfs only supported PMD-sized large folios. However, nowadays,
> with other file systems supporting large folios of any size and anonymous
> memory extended to support mTHP, we should not restrict tmpfs to allocating
> only PMD-sized large folios, making it more special. Instead, we should allow
> tmpfs to allocate large folios of any size.
> 
> Considering that tmpfs already has the 'huge=' option to control PMD-sized
> large folio allocation, we can extend the 'huge=' option to allow large
> folios of any size. The semantics of the 'huge=' mount option are:
> 
> huge=never: no large folios of any size
> huge=always: large folios of any size
> huge=within_size: like 'always', but respect the i_size
> huge=advise: like 'always', if requested with madvise()
> 
> Note: for tmpfs mmap() faults, due to the lack of a write size hint, we still
> allocate PMD-sized huge folios if huge=always/within_size/advise is set.
> 
> Moreover, the 'deny' and 'force' testing options controlled by
> '/sys/kernel/mm/transparent_hugepage/shmem_enabled' retain the same
> semantics: 'deny' disables large folios of any size for tmpfs, while 'force'
> enables PMD-sized large folios for tmpfs.
> 
> Co-developed-by: Daniel Gomez <da.gomez@samsung.com>
> Signed-off-by: Daniel Gomez <da.gomez@samsung.com>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>

Hi,

This causes a huge regression in Intel iGPU texturing performance.
I haven't had time to look at this in detail, but presumably the
problem is that we're no longer getting huge pages from our private
tmpfs mount (done in i915_gemfs_init()).

Some more details at
https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13845
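
To make the order math in the quoted patch below concrete, here is a minimal
userspace sketch of the mask computation that shmem_mapping_size_orders()
performs. PAGE_SHIFT and MAX_PAGECACHE_ORDER are hardcoded to typical x86-64
values, and size_to_order() is a simplified stand-in for the kernel's
filemap_get_order(), so treat this as an illustration of the logic, not the
kernel code itself. Note how a zero write_end (no write-size hint, as on read
and fault paths) yields an empty order mask, consistent with the suspicion
above that such paths no longer get huge pages:

#include <stdio.h>

#define PAGE_SHIFT		12	/* assumption: 4KiB pages */
#define MAX_PAGECACHE_ORDER	9	/* assumption: PMD order on x86-64 */

/* Largest order not exceeding size; simplified filemap_get_order(). */
static unsigned int size_to_order(size_t size)
{
	unsigned int order = 0;

	while (size >> (order + 1 + PAGE_SHIFT))
		order++;
	return order;
}

/* Mirrors shmem_mapping_size_orders(): bitmask of allowable folio orders. */
static unsigned int size_orders(unsigned long index, unsigned long long write_end)
{
	unsigned int order;

	if (!write_end)			/* no write-size hint: order 0 only */
		return 0;

	order = size_to_order(write_end - (index << PAGE_SHIFT));
	if (!order)
		return 0;
	if (index & ((1UL << order) - 1))	/* misaligned index */
		order = __builtin_ctzl(index);	/* stand-in for __ffs() */
	if (order > MAX_PAGECACHE_ORDER)
		order = MAX_PAGECACHE_ORDER;
	return order > 0 ? (1U << (order + 1)) - 1 : 0;
}

int main(void)
{
	printf("%#x\n", size_orders(0, 2ULL << 20));	/* 2MiB write: 0x3ff (orders 0..9) */
	printf("%#x\n", size_orders(0, 0));		/* no hint: 0x0 */
	return 0;
}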
> ---
>  mm/shmem.c | 99 ++++++++++++++++++++++++++++++++++++++++++++----------
>  1 file changed, 81 insertions(+), 18 deletions(-)
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 7595c3db4c1c..54eaa724c153 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -554,34 +554,100 @@ static bool shmem_confirm_swap(struct address_space *mapping,
> 
>  static int shmem_huge __read_mostly = SHMEM_HUGE_NEVER;
> 
> +/**
> + * shmem_mapping_size_orders - Get allowable folio orders for the given file size.
> + * @mapping: Target address_space.
> + * @index: The page index.
> + * @write_end: end of a write, could extend inode size.
> + *
> + * This returns huge orders for folios (when supported) based on the file size
> + * which the mapping currently allows at the given index. The index is relevant
> + * due to alignment considerations the mapping might have. The returned order
> + * may be less than the size passed.
> + *
> + * Return: The orders.
> + */
> +static inline unsigned int
> +shmem_mapping_size_orders(struct address_space *mapping, pgoff_t index, loff_t write_end)
> +{
> +	unsigned int order;
> +	size_t size;
> +
> +	if (!mapping_large_folio_support(mapping) || !write_end)
> +		return 0;
> +
> +	/* Calculate the write size based on the write_end */
> +	size = write_end - (index << PAGE_SHIFT);
> +	order = filemap_get_order(size);
> +	if (!order)
> +		return 0;
> +
> +	/* If we're not aligned, allocate a smaller folio */
> +	if (index & ((1UL << order) - 1))
> +		order = __ffs(index);
> +
> +	order = min_t(size_t, order, MAX_PAGECACHE_ORDER);
> +	return order > 0 ? BIT(order + 1) - 1 : 0;
> +}
> +
>  static unsigned int shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
>  					      loff_t write_end, bool shmem_huge_force,
> +					      struct vm_area_struct *vma,
>  					      unsigned long vm_flags)
>  {
> +	unsigned int maybe_pmd_order = HPAGE_PMD_ORDER > MAX_PAGECACHE_ORDER ?
> +		0 : BIT(HPAGE_PMD_ORDER);
> +	unsigned long within_size_orders;
> +	unsigned int order;
> +	pgoff_t aligned_index;
>  	loff_t i_size;
> 
> -	if (HPAGE_PMD_ORDER > MAX_PAGECACHE_ORDER)
> -		return 0;
>  	if (!S_ISREG(inode->i_mode))
>  		return 0;
>  	if (shmem_huge == SHMEM_HUGE_DENY)
>  		return 0;
>  	if (shmem_huge_force || shmem_huge == SHMEM_HUGE_FORCE)
> -		return BIT(HPAGE_PMD_ORDER);
> +		return maybe_pmd_order;
> 
> +	/*
> +	 * The huge order allocation for anon shmem is controlled through
> +	 * the mTHP interface, so we still use PMD-sized huge order to
> +	 * check whether global control is enabled.
> +	 *
> +	 * For tmpfs mmap()'s huge order, we still use PMD-sized order to
> +	 * allocate huge pages due to lack of a write size hint.
> +	 *
> +	 * Otherwise, tmpfs will allow getting a highest order hint based on
> +	 * the size of write and fallocate paths, then will try each allowable
> +	 * huge orders.
> +	 */
>  	switch (SHMEM_SB(inode->i_sb)->huge) {
>  	case SHMEM_HUGE_ALWAYS:
> -		return BIT(HPAGE_PMD_ORDER);
> +		if (vma)
> +			return maybe_pmd_order;
> +
> +		return shmem_mapping_size_orders(inode->i_mapping, index, write_end);
>  	case SHMEM_HUGE_WITHIN_SIZE:
> -		index = round_up(index + 1, HPAGE_PMD_NR);
> -		i_size = max(write_end, i_size_read(inode));
> -		i_size = round_up(i_size, PAGE_SIZE);
> -		if (i_size >> PAGE_SHIFT >= index)
> -			return BIT(HPAGE_PMD_ORDER);
> +		if (vma)
> +			within_size_orders = maybe_pmd_order;
> +		else
> +			within_size_orders = shmem_mapping_size_orders(inode->i_mapping,
> +								       index, write_end);
> +
> +		order = highest_order(within_size_orders);
> +		while (within_size_orders) {
> +			aligned_index = round_up(index + 1, 1 << order);
> +			i_size = max(write_end, i_size_read(inode));
> +			i_size = round_up(i_size, PAGE_SIZE);
> +			if (i_size >> PAGE_SHIFT >= aligned_index)
> +				return within_size_orders;
> +
> +			order = next_order(&within_size_orders, order);
> +		}
>  		fallthrough;
>  	case SHMEM_HUGE_ADVISE:
>  		if (vm_flags & VM_HUGEPAGE)
> -			return BIT(HPAGE_PMD_ORDER);
> +			return maybe_pmd_order;
>  		fallthrough;
>  	default:
>  		return 0;
> @@ -781,6 +847,7 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
> 
>  static unsigned int shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
>  					      loff_t write_end, bool shmem_huge_force,
> +					      struct vm_area_struct *vma,
>  					      unsigned long vm_flags)
>  {
>  	return 0;
> @@ -1176,7 +1243,7 @@ static int shmem_getattr(struct mnt_idmap *idmap,
>  			 STATX_ATTR_NODUMP);
>  	generic_fillattr(idmap, request_mask, inode, stat);
> 
> -	if (shmem_huge_global_enabled(inode, 0, 0, false, 0))
> +	if (shmem_huge_global_enabled(inode, 0, 0, false, NULL, 0))
>  		stat->blksize = HPAGE_PMD_SIZE;
> 
>  	if (request_mask & STATX_BTIME) {
> @@ -1693,14 +1760,10 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>  		return 0;
> 
>  	global_orders = shmem_huge_global_enabled(inode, index, write_end,
> -						  shmem_huge_force, vm_flags);
> -	if (!vma || !vma_is_anon_shmem(vma)) {
> -		/*
> -		 * For tmpfs, we now only support PMD sized THP if huge page
> -		 * is enabled, otherwise fallback to order 0.
> -		 */
> +						  shmem_huge_force, vma, vm_flags);
> +	/* Tmpfs huge pages allocation */
> +	if (!vma || !vma_is_anon_shmem(vma))
>  		return global_orders;
> -	}
> 
>  	/*
>  	 * Following the 'deny' semantics of the top level, force the huge
> -- 
> 2.39.3
> 
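As an aside on the 'huge=' semantics described in the commit message: they are
configured per tmpfs mount, which is also how the private i915 mount mentioned
above gets them (in-kernel, via i915_gemfs_init()). A minimal userspace sketch,
with a made-up mount point and size, looks like this:

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	/*
	 * '/mnt/test' and 'size=1G' are illustrative. huge=within_size uses
	 * large folios only while i_size covers them; huge=always drops that
	 * restriction; huge=never disables large folios entirely.
	 */
	if (mount("tmpfs", "/mnt/test", "tmpfs", 0,
		  "size=1G,huge=within_size") != 0) {
		perror("mount");
		return 1;
	}

	/* PMD-sized usage shows up as ShmemHugePages/ShmemPmdMapped in /proc/meminfo. */
	return 0;
}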
-- 
Ville Syrjälä
Intel