[PATCH] mm: shmem: allow fallback to smaller large orders for tmpfs mmap() access
From: Baolin Wang @ 2025-11-14 0:46 UTC
To: akpm, hughd
Cc: david, lorenzo.stoakes, willy, baolin.wang, linux-mm, linux-kernel
After commit 69e0a3b49003 ("mm: shmem: fix the strategy for the tmpfs 'huge='
options"), the large order allocation strategy for tmpfs always tries
PMD-sized large folios first and, if that fails, falls back to smaller large
folios. Large folio allocation for tmpfs via mmap() should follow the same
strategy, so unify the large order allocation strategy for tmpfs.
There is no functional change for large folio allocation of anonymous shmem.
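
For illustration only (not part of the patch): a minimal, self-contained C
sketch of the fallback idea described above, where the allocator is given a
bitmask of allowed folio orders and walks it from the largest (PMD-sized)
order down until an allocation succeeds. The names try_alloc_folio_order()
and alloc_with_fallback() are hypothetical stand-ins, not kernel APIs, and
the sketch assumes a GCC/Clang toolchain for __builtin_clzl().

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for a folio allocation attempt at a given order. */
static bool try_alloc_folio_order(unsigned int order)
{
	/* Pretend only order-4 (64KB with 4KB pages) can be satisfied now. */
	return order == 4;
}

/*
 * Walk the allowed-order bitmask from the highest set bit (PMD-sized,
 * e.g. order 9 on x86-64 with 4KB pages) downwards, mirroring the
 * "try PMD-sized first, then fall back to smaller large folios" strategy.
 */
static int alloc_with_fallback(unsigned long orders)
{
	while (orders) {
		unsigned int order = 8 * sizeof(orders) - 1 -
				     __builtin_clzl(orders);

		if (try_alloc_folio_order(order))
			return order;
		orders &= ~(1UL << order);	/* drop this order, try smaller */
	}
	return -1;
}

int main(void)
{
	/* Hypothetical allowed mask: orders 9 (PMD), 6, 4 and 2. */
	unsigned long orders = (1UL << 9) | (1UL << 6) | (1UL << 4) | (1UL << 2);

	printf("allocated order %d\n", alloc_with_fallback(orders));
	return 0;
}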
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
mm/shmem.c | 17 +++--------------
1 file changed, 3 insertions(+), 14 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 395ca58ac4a5..fc835b3e4914 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -645,34 +645,23 @@ static unsigned int shmem_huge_global_enabled(struct inode *inode, pgoff_t index
* the mTHP interface, so we still use PMD-sized huge order to
* check whether global control is enabled.
*
- * For tmpfs mmap()'s huge order, we still use PMD-sized order to
- * allocate huge pages due to lack of a write size hint.
- *
* For tmpfs with 'huge=always' or 'huge=within_size' mount option,
* we will always try PMD-sized order first. If that failed, it will
* fall back to small large folios.
*/
switch (SHMEM_SB(inode->i_sb)->huge) {
case SHMEM_HUGE_ALWAYS:
- if (vma)
- return maybe_pmd_order;
-
return THP_ORDERS_ALL_FILE_DEFAULT;
case SHMEM_HUGE_WITHIN_SIZE:
- if (vma)
- within_size_orders = maybe_pmd_order;
- else
- within_size_orders = THP_ORDERS_ALL_FILE_DEFAULT;
-
- within_size_orders = shmem_get_orders_within_size(inode, within_size_orders,
- index, write_end);
+ within_size_orders = shmem_get_orders_within_size(inode,
+ THP_ORDERS_ALL_FILE_DEFAULT, index, write_end);
if (within_size_orders > 0)
return within_size_orders;
fallthrough;
case SHMEM_HUGE_ADVISE:
if (vm_flags & VM_HUGEPAGE)
- return maybe_pmd_order;
+ return THP_ORDERS_ALL_FILE_DEFAULT;
fallthrough;
default:
return 0;
--
2.43.7