From: Luiz Capitulino <luizcap@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
david@kernel.org, baolin.wang@linux.alibaba.com
Cc: ryan.roberts@arm.com, akpm@linux-foundation.org,
lorenzo.stoakes@oracle.com
Subject: [PATCH v2 10/11] mm: thp: always enable mTHP support
Date: Mon, 9 Feb 2026 17:14:32 -0500 [thread overview]
Message-ID: <29e8dfc2772af4b6e0db24134ca3563ec422b91a.1770675272.git.luizcap@redhat.com> (raw)
In-Reply-To: <cover.1770675272.git.luizcap@redhat.com>

If PMD-sized pages are not supported on an architecture (i.e. the
arch implements arch_has_pmd_leaves() and it returns false), the
current code disables all THP support, including mTHP.

Fix this by always enabling mTHP for all architectures. When
PMD-sized pages are not supported, the PMD-size sysfs entry is not
created and PMD mappings are disallowed at page-fault time.

Similarly, this commit makes the following changes for shmem:

- In shmem_allowable_huge_orders(): drop the pgtable_has_pmd_leaves()
  check so that mTHP sizes are considered
- In shmem_alloc_and_add_folio(): don't consider PMD and PUD orders
  when PMD-sized pages are not supported by the CPU
Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
mm/huge_memory.c | 11 +++++++----
mm/shmem.c | 4 +++-
2 files changed, 10 insertions(+), 5 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 1e5ea2e47f79..882331592928 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -115,6 +115,9 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
else
supported_orders = THP_ORDERS_ALL_FILE_DEFAULT;
+ if (!pgtable_has_pmd_leaves())
+ supported_orders &= ~(BIT(PMD_ORDER) | BIT(PUD_ORDER));
+
orders &= supported_orders;
if (!orders)
return 0;
@@ -122,7 +125,7 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
if (!vma->vm_mm) /* vdso */
return 0;
- if (!pgtable_has_pmd_leaves() || vma_thp_disabled(vma, vm_flags, forced_collapse))
+ if (vma_thp_disabled(vma, vm_flags, forced_collapse))
return 0;
/* khugepaged doesn't collapse DAX vma, but page fault is fine. */
@@ -806,6 +809,9 @@ static int __init hugepage_init_sysfs(struct kobject **hugepage_kobj)
}
orders = THP_ORDERS_ALL_ANON | THP_ORDERS_ALL_FILE_DEFAULT;
+ if (!pgtable_has_pmd_leaves())
+ orders &= ~(BIT(PMD_ORDER) | BIT(PUD_ORDER));
+
order = highest_order(orders);
while (orders) {
thpsize = thpsize_create(order, *hugepage_kobj);
@@ -905,9 +911,6 @@ static int __init hugepage_init(void)
int err;
struct kobject *hugepage_kobj;
- if (!pgtable_has_pmd_leaves())
- return -EINVAL;
-
/*
* hugepages can't be allocated by the buddy allocator
*/
diff --git a/mm/shmem.c b/mm/shmem.c
index 1c98e84667a4..cb325d1e2d1e 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1827,7 +1827,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
vm_flags_t vm_flags = vma ? vma->vm_flags : 0;
unsigned int global_orders;
- if (!pgtable_has_pmd_leaves() || (vma && vma_thp_disabled(vma, vm_flags, shmem_huge_force)))
+ if (vma && vma_thp_disabled(vma, vm_flags, shmem_huge_force))
return 0;
global_orders = shmem_huge_global_enabled(inode, index, write_end,
@@ -1935,6 +1935,8 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
orders = 0;
+ else if (!pgtable_has_pmd_leaves())
+ orders &= ~(BIT(PMD_ORDER) | BIT(PUD_ORDER));
if (orders > 0) {
suitable_orders = shmem_suitable_orders(inode, vmf,
--
2.53.0
Thread overview: 18+ messages
2026-02-09 22:14 [PATCH v2 00/11] " Luiz Capitulino
2026-02-09 22:14 ` [PATCH v2 01/11] docs: tmpfs: remove implementation detail reference Luiz Capitulino
2026-02-09 22:14 ` [PATCH v2 02/11] mm: introduce pgtable_has_pmd_leaves() Luiz Capitulino
2026-02-10 7:45 ` kernel test robot
2026-02-10 8:05 ` kernel test robot
2026-02-09 22:14 ` [PATCH v2 03/11] drivers: dax: use pgtable_has_pmd_leaves() Luiz Capitulino
2026-02-09 22:14 ` [PATCH v2 04/11] drivers: i915 selftest: " Luiz Capitulino
2026-02-09 22:14 ` [PATCH v2 05/11] drivers: nvdimm: " Luiz Capitulino
2026-02-09 22:14 ` [PATCH v2 06/11] mm: debug_vm_pgtable: " Luiz Capitulino
2026-02-09 22:14 ` [PATCH v2 07/11] mm: shmem: drop has_transparent_hugepage() usage Luiz Capitulino
2026-02-10 9:20 ` Baolin Wang
2026-02-09 22:14 ` [PATCH v2 08/11] treewide: rename has_transparent_hugepage() to arch_has_pmd_leaves() Luiz Capitulino
2026-02-09 22:14 ` [PATCH v2 09/11] mm: replace thp_disabled_by_hw() with pgtable_has_pmd_leaves() Luiz Capitulino
2026-02-09 22:14 ` Luiz Capitulino [this message]
2026-02-10 9:56 ` [PATCH v2 10/11] mm: thp: always enable mTHP support Baolin Wang
2026-02-10 13:28 ` Luiz Capitulino
2026-02-11 1:12 ` Baolin Wang
2026-02-09 22:14 ` [PATCH v2 11/11] mm: thp: x86: cleanup PSE feature bit usage Luiz Capitulino