linux-mm.kvack.org archive mirror
* [PATCH v2 00/11] mm: thp: always enable mTHP support
@ 2026-02-09 22:14 Luiz Capitulino
  2026-02-09 22:14 ` [PATCH v2 01/11] docs: tmpfs: remove implementation detail reference Luiz Capitulino
                   ` (10 more replies)
  0 siblings, 11 replies; 18+ messages in thread
From: Luiz Capitulino @ 2026-02-09 22:14 UTC (permalink / raw)
  To: linux-kernel, linux-mm, david, baolin.wang
  Cc: ryan.roberts, akpm, lorenzo.stoakes

Today, if an architecture implements has_transparent_hugepage() and the CPU
lacks support for PMD-sized pages, the THP code disables all THP, including
mTHP. In addition, the kernel lacks a well-defined API for checking
PMD-sized page support: it currently relies on has_transparent_hugepage()
and thp_disabled_by_hw(), but neither has clear semantics and both are
tied to THP support.

This series addresses both issues by introducing a new, well-defined API
to query PMD-sized page support: pgtable_has_pmd_leaves(). Using this
new helper, we ensure that mTHP remains enabled even when the
architecture or CPU doesn't support PMD-sized pages.
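
For illustration, a caller that wants a PMD-sized mapping can now gate it
on the new helper (hypothetical sketch only, not code from the patches):

	/* Does the CPU support PMD leaves (e.g. 2M pages on x86-64)? */
	if (pgtable_has_pmd_leaves())
		order = HPAGE_PMD_ORDER;	/* PMD-sized THP is usable */
	else
		order = HPAGE_PMD_ORDER - 1;	/* fall back to a smaller mTHP order */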

Thanks to David Hildenbrand for suggesting this improvement and for
providing guidance (all bugs and misconceptions are mine).

This applies to v6.19, but I tested it on v6.19-rc8+.

v2
--
- Added support for always enabling mTHPs for shmem (Baolin)
- Improved commit changelogs & added Reviewed-by tags

v1
--
- Call init_arch_has_pmd_leaves() from start_kernel()
- Keep pgtable_has_pmd_leaves() calls tied to CONFIG_TRANSPARENT_HUGEPAGE (David)
- Clear PUD_ORDER when clearing PMD_ORDER (David)
- Small changelog improvements (David)
- Rebased on top of latest mm-new

Luiz Capitulino (11):
  docs: tmpfs: remove implementation detail reference
  mm: introduce pgtable_has_pmd_leaves()
  drivers: dax: use pgtable_has_pmd_leaves()
  drivers: i915 selftest: use pgtable_has_pmd_leaves()
  drivers: nvdimm: use pgtable_has_pmd_leaves()
  mm: debug_vm_pgtable: use pgtable_has_pmd_leaves()
  mm: shmem: drop has_transparent_hugepage() usage
  treewide: rename has_transparent_hugepage() to arch_has_pmd_leaves()
  mm: replace thp_disabled_by_hw() with pgtable_has_pmd_leaves()
  mm: thp: always enable mTHP support
  mm: thp: x86: cleanup PSE feature bit usage

 Documentation/filesystems/tmpfs.rst           |  5 ++---
 arch/mips/include/asm/pgtable.h               |  4 ++--
 arch/mips/mm/tlb-r4k.c                        |  4 ++--
 arch/powerpc/include/asm/book3s/64/hash-4k.h  |  2 +-
 arch/powerpc/include/asm/book3s/64/hash-64k.h |  2 +-
 arch/powerpc/include/asm/book3s/64/pgtable.h  | 10 +++++-----
 arch/powerpc/include/asm/book3s/64/radix.h    |  2 +-
 arch/powerpc/mm/book3s64/hash_pgtable.c       |  4 ++--
 arch/s390/include/asm/pgtable.h               |  4 ++--
 arch/x86/include/asm/pgtable.h                |  6 ------
 arch/x86/include/asm/pgtable_32.h             |  6 ++++++
 drivers/dax/dax-private.h                     |  2 +-
 .../gpu/drm/i915/gem/selftests/huge_pages.c   |  4 +++-
 drivers/nvdimm/pfn_devs.c                     |  6 ++++--
 include/linux/huge_mm.h                       |  7 -------
 include/linux/pgtable.h                       | 11 ++++++++--
 init/main.c                                   |  1 +
 mm/debug_vm_pgtable.c                         | 20 +++++++++----------
 mm/huge_memory.c                              | 13 ++++++------
 mm/memory.c                                   | 10 +++++++++-
 mm/shmem.c                                    | 11 +++++-----
 21 files changed, 74 insertions(+), 60 deletions(-)

-- 
2.53.0




* [PATCH v2 01/11] docs: tmpfs: remove implementation detail reference
  2026-02-09 22:14 [PATCH v2 00/11] mm: thp: always enable mTHP support Luiz Capitulino
@ 2026-02-09 22:14 ` Luiz Capitulino
  2026-02-09 22:14 ` [PATCH v2 02/11] mm: introduce pgtable_has_pmd_leaves() Luiz Capitulino
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: Luiz Capitulino @ 2026-02-09 22:14 UTC (permalink / raw)
  To: linux-kernel, linux-mm, david, baolin.wang
  Cc: ryan.roberts, akpm, lorenzo.stoakes

The tmpfs.rst doc references the has_transparent_hugepage() helper, which
is a kernel implementation detail and not relevant to users wishing to
configure THP support for tmpfs. Remove the reference.

Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
 Documentation/filesystems/tmpfs.rst | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/Documentation/filesystems/tmpfs.rst b/Documentation/filesystems/tmpfs.rst
index d677e0428c3f..46fc986c3388 100644
--- a/Documentation/filesystems/tmpfs.rst
+++ b/Documentation/filesystems/tmpfs.rst
@@ -109,9 +109,8 @@ noswap  Disables swap. Remounts must respect the original settings.
 ======  ===========================================================
 
 tmpfs also supports Transparent Huge Pages which requires a kernel
-configured with CONFIG_TRANSPARENT_HUGEPAGE and with huge supported for
-your system (has_transparent_hugepage(), which is architecture specific).
-The mount options for this are:
+configured with CONFIG_TRANSPARENT_HUGEPAGE and with huge pages
+supported for your system. The mount options for this are:
 
 ================ ==============================================================
 huge=never       Do not allocate huge pages.  This is the default.
-- 
2.53.0




* [PATCH v2 02/11] mm: introduce pgtable_has_pmd_leaves()
  2026-02-09 22:14 [PATCH v2 00/11] mm: thp: always enable mTHP support Luiz Capitulino
  2026-02-09 22:14 ` [PATCH v2 01/11] docs: tmpfs: remove implementation detail reference Luiz Capitulino
@ 2026-02-09 22:14 ` Luiz Capitulino
  2026-02-10  7:45   ` kernel test robot
  2026-02-10  8:05   ` kernel test robot
  2026-02-09 22:14 ` [PATCH v2 03/11] drivers: dax: use pgtable_has_pmd_leaves() Luiz Capitulino
                   ` (8 subsequent siblings)
  10 siblings, 2 replies; 18+ messages in thread
From: Luiz Capitulino @ 2026-02-09 22:14 UTC (permalink / raw)
  To: linux-kernel, linux-mm, david, baolin.wang
  Cc: ryan.roberts, akpm, lorenzo.stoakes

Currently, we have two helpers that check for PMD-sized pages but have
different names and slightly different semantics:

- has_transparent_hugepage(): the name suggests it checks if THP is
  enabled, but when CONFIG_TRANSPARENT_HUGEPAGE=y and the architecture
  implements this helper, it actually checks if the CPU supports
  PMD-sized pages

- thp_disabled_by_hw(): the name suggests it checks if THP is disabled
  by the hardware, but it just returns a cached value acquired with
  has_transparent_hugepage(). This helper is used in fast paths

This commit introduces a new helper called pgtable_has_pmd_leaves()
which is intended to replace both has_transparent_hugepage() and
thp_disabled_by_hw(). pgtable_has_pmd_leaves() has very clear semantics:
it returns true if the CPU supports PMD-sized pages and false otherwise.
It always returns a cached value, so it can be used in fast paths.

The new helper requires an initialization step, which is performed by
init_arch_has_pmd_leaves(). We call init_arch_has_pmd_leaves() very
early during boot in start_kernel() so that pgtable_has_pmd_leaves()
can already be used from __setup() handlers.
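
For example, a hypothetical __setup() handler can rely on the helper
because the cached value is initialized before kernel parameters are
parsed (sketch only, not part of this series):

	static int __init thp_example_setup(char *str)
	{
		/* safe: init_arch_has_pmd_leaves() already ran in start_kernel() */
		if (!pgtable_has_pmd_leaves())
			pr_info("PMD-sized pages unsupported, using mTHP only\n");
		return 1;
	}
	__setup("thp_example=", thp_example_setup);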

The next commits will convert users of both has_transparent_hugepage()
and thp_disabled_by_hw() to pgtable_has_pmd_leaves().

Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
 include/linux/pgtable.h | 7 +++++++
 init/main.c             | 1 +
 mm/memory.c             | 8 ++++++++
 3 files changed, 16 insertions(+)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 652f287c1ef6..6733f90a1da4 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -2017,6 +2017,13 @@ static inline const char *pgtable_level_to_str(enum pgtable_level level)
 	}
 }
 
+extern bool __arch_has_pmd_leaves;
+static inline bool pgtable_has_pmd_leaves(void)
+{
+	return __arch_has_pmd_leaves;
+}
+void __init init_arch_has_pmd_leaves(void);
+
 #endif /* !__ASSEMBLY__ */
 
 #if !defined(MAX_POSSIBLE_PHYSMEM_BITS) && !defined(CONFIG_64BIT)
diff --git a/init/main.c b/init/main.c
index b84818ad9685..ad1209fffcde 100644
--- a/init/main.c
+++ b/init/main.c
@@ -1036,6 +1036,7 @@ void start_kernel(void)
 	smp_prepare_boot_cpu();	/* arch-specific boot-cpu hooks */
 	early_numa_node_init();
 	boot_cpu_hotplug_init();
+	init_arch_has_pmd_leaves();
 
 	print_kernel_cmdline(saved_command_line);
 	/* parameters may set static keys */
diff --git a/mm/memory.c b/mm/memory.c
index da360a6eb8a4..4c25a3c453c6 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -177,6 +177,14 @@ static int __init init_zero_pfn(void)
 }
 early_initcall(init_zero_pfn);
 
+bool __arch_has_pmd_leaves __read_mostly;
+EXPORT_SYMBOL(__arch_has_pmd_leaves);
+
+void __init init_arch_has_pmd_leaves(void)
+{
+	__arch_has_pmd_leaves = has_transparent_hugepage();
+}
+
 void mm_trace_rss_stat(struct mm_struct *mm, int member)
 {
 	trace_rss_stat(mm, member);
-- 
2.53.0




* [PATCH v2 03/11] drivers: dax: use pgtable_has_pmd_leaves()
  2026-02-09 22:14 [PATCH v2 00/11] mm: thp: always enable mTHP support Luiz Capitulino
  2026-02-09 22:14 ` [PATCH v2 01/11] docs: tmpfs: remove implementation detail reference Luiz Capitulino
  2026-02-09 22:14 ` [PATCH v2 02/11] mm: introduce pgtable_has_pmd_leaves() Luiz Capitulino
@ 2026-02-09 22:14 ` Luiz Capitulino
  2026-02-09 22:14 ` [PATCH v2 04/11] drivers: i915 selftest: " Luiz Capitulino
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: Luiz Capitulino @ 2026-02-09 22:14 UTC (permalink / raw)
  To: linux-kernel, linux-mm, david, baolin.wang
  Cc: ryan.roberts, akpm, lorenzo.stoakes

dax_align_valid() uses has_transparent_hugepage() to check if PMD-sized
pages are supported; use pgtable_has_pmd_leaves() instead.

Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
 drivers/dax/dax-private.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/dax/dax-private.h b/drivers/dax/dax-private.h
index c6ae27c982f4..97b577f4107b 100644
--- a/drivers/dax/dax-private.h
+++ b/drivers/dax/dax-private.h
@@ -119,7 +119,7 @@ static inline bool dax_align_valid(unsigned long align)
 {
 	if (align == PUD_SIZE && IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD))
 		return true;
-	if (align == PMD_SIZE && has_transparent_hugepage())
+	if (align == PMD_SIZE && pgtable_has_pmd_leaves())
 		return true;
 	if (align == PAGE_SIZE)
 		return true;
-- 
2.53.0




* [PATCH v2 04/11] drivers: i915 selftest: use pgtable_has_pmd_leaves()
  2026-02-09 22:14 [PATCH v2 00/11] mm: thp: always enable mTHP support Luiz Capitulino
                   ` (2 preceding siblings ...)
  2026-02-09 22:14 ` [PATCH v2 03/11] drivers: dax: use pgtable_has_pmd_leaves() Luiz Capitulino
@ 2026-02-09 22:14 ` Luiz Capitulino
  2026-02-09 22:14 ` [PATCH v2 05/11] drivers: nvdimm: " Luiz Capitulino
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: Luiz Capitulino @ 2026-02-09 22:14 UTC (permalink / raw)
  To: linux-kernel, linux-mm, david, baolin.wang
  Cc: ryan.roberts, akpm, lorenzo.stoakes

igt_can_allocate_thp() uses has_transparent_hugepage() to check if THP
is supported with PMD-sized pages. However, has_transparent_hugepage()
is being replaced by an API that has clearer semantics. Thus, replace
has_transparent_hugepage() with pgtable_has_pmd_leaves() and a check for
IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE); this makes the intent of the
check explicit.

Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
 drivers/gpu/drm/i915/gem/selftests/huge_pages.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
index bd08605a1611..dcd1f1141513 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
@@ -1316,7 +1316,9 @@ typedef struct drm_i915_gem_object *
 
 static inline bool igt_can_allocate_thp(struct drm_i915_private *i915)
 {
-	return i915->mm.gemfs && has_transparent_hugepage();
+	return i915->mm.gemfs &&
+		IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
+		pgtable_has_pmd_leaves();
 }
 
 static struct drm_i915_gem_object *
-- 
2.53.0




* [PATCH v2 05/11] drivers: nvdimm: use pgtable_has_pmd_leaves()
  2026-02-09 22:14 [PATCH v2 00/11] mm: thp: always enable mTHP support Luiz Capitulino
                   ` (3 preceding siblings ...)
  2026-02-09 22:14 ` [PATCH v2 04/11] drivers: i915 selftest: " Luiz Capitulino
@ 2026-02-09 22:14 ` Luiz Capitulino
  2026-02-09 22:14 ` [PATCH v2 06/11] mm: debug_vm_pgtable: " Luiz Capitulino
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: Luiz Capitulino @ 2026-02-09 22:14 UTC (permalink / raw)
  To: linux-kernel, linux-mm, david, baolin.wang
  Cc: ryan.roberts, akpm, lorenzo.stoakes

nd_pfn_supported_alignments() and nd_pfn_default_alignment() use
has_transparent_hugepage() to check if THP is supported with PMD-sized
pages. However, has_transparent_hugepage() is being replaced by an API
that has clearer semantics. Thus, replace has_transparent_hugepage()
usage in both functions with pgtable_has_pmd_leaves() and a check for
IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE); this makes the intent of the
check explicit.

Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
 drivers/nvdimm/pfn_devs.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c
index 42b172fc5576..7ee8ec50e72d 100644
--- a/drivers/nvdimm/pfn_devs.c
+++ b/drivers/nvdimm/pfn_devs.c
@@ -94,7 +94,8 @@ static unsigned long *nd_pfn_supported_alignments(unsigned long *alignments)
 
 	alignments[0] = PAGE_SIZE;
 
-	if (has_transparent_hugepage()) {
+	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
+	    pgtable_has_pmd_leaves()) {
 		alignments[1] = HPAGE_PMD_SIZE;
 		if (has_transparent_pud_hugepage())
 			alignments[2] = HPAGE_PUD_SIZE;
@@ -109,7 +110,8 @@ static unsigned long *nd_pfn_supported_alignments(unsigned long *alignments)
 static unsigned long nd_pfn_default_alignment(void)
 {
 
-	if (has_transparent_hugepage())
+	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
+	    pgtable_has_pmd_leaves())
 		return HPAGE_PMD_SIZE;
 	return PAGE_SIZE;
 }
-- 
2.53.0




* [PATCH v2 06/11] mm: debug_vm_pgtable: use pgtable_has_pmd_leaves()
  2026-02-09 22:14 [PATCH v2 00/11] mm: thp: always enable mTHP support Luiz Capitulino
                   ` (4 preceding siblings ...)
  2026-02-09 22:14 ` [PATCH v2 05/11] drivers: nvdimm: " Luiz Capitulino
@ 2026-02-09 22:14 ` Luiz Capitulino
  2026-02-09 22:14 ` [PATCH v2 07/11] mm: shmem: drop has_transparent_hugepage() usage Luiz Capitulino
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: Luiz Capitulino @ 2026-02-09 22:14 UTC (permalink / raw)
  To: linux-kernel, linux-mm, david, baolin.wang
  Cc: ryan.roberts, akpm, lorenzo.stoakes

debug_vm_pgtable calls has_transparent_hugepage() in multiple places to
check if PMD-sized pages are supported; use pgtable_has_pmd_leaves()
instead.

Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
 mm/debug_vm_pgtable.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index ae9b9310d96f..ec02bafd9d45 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -177,7 +177,7 @@ static void __init pmd_basic_tests(struct pgtable_debug_args *args, int idx)
 	unsigned long val = idx, *ptr = &val;
 	pmd_t pmd;
 
-	if (!has_transparent_hugepage())
+	if (!pgtable_has_pmd_leaves())
 		return;
 
 	pr_debug("Validating PMD basic (%pGv)\n", ptr);
@@ -222,7 +222,7 @@ static void __init pmd_advanced_tests(struct pgtable_debug_args *args)
 	pmd_t pmd;
 	unsigned long vaddr = args->vaddr;
 
-	if (!has_transparent_hugepage())
+	if (!pgtable_has_pmd_leaves())
 		return;
 
 	page = (args->pmd_pfn != ULONG_MAX) ? pfn_to_page(args->pmd_pfn) : NULL;
@@ -283,7 +283,7 @@ static void __init pmd_leaf_tests(struct pgtable_debug_args *args)
 {
 	pmd_t pmd;
 
-	if (!has_transparent_hugepage())
+	if (!pgtable_has_pmd_leaves())
 		return;
 
 	pr_debug("Validating PMD leaf\n");
@@ -688,7 +688,7 @@ static void __init pmd_protnone_tests(struct pgtable_debug_args *args)
 	if (!IS_ENABLED(CONFIG_NUMA_BALANCING))
 		return;
 
-	if (!has_transparent_hugepage())
+	if (!pgtable_has_pmd_leaves())
 		return;
 
 	pr_debug("Validating PMD protnone\n");
@@ -737,7 +737,7 @@ static void __init pmd_soft_dirty_tests(struct pgtable_debug_args *args)
 	if (!pgtable_supports_soft_dirty())
 		return;
 
-	if (!has_transparent_hugepage())
+	if (!pgtable_has_pmd_leaves())
 		return;
 
 	pr_debug("Validating PMD soft dirty\n");
@@ -754,7 +754,7 @@ static void __init pmd_leaf_soft_dirty_tests(struct pgtable_debug_args *args)
 	    !IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION))
 		return;
 
-	if (!has_transparent_hugepage())
+	if (!pgtable_has_pmd_leaves())
 		return;
 
 	pr_debug("Validating PMD swap soft dirty\n");
@@ -825,7 +825,7 @@ static void __init pmd_softleaf_tests(struct pgtable_debug_args *args)
 	swp_entry_t arch_entry;
 	pmd_t pmd1, pmd2;
 
-	if (!has_transparent_hugepage())
+	if (!pgtable_has_pmd_leaves())
 		return;
 
 	pr_debug("Validating PMD swap\n");
@@ -906,7 +906,7 @@ static void __init pmd_thp_tests(struct pgtable_debug_args *args)
 {
 	pmd_t pmd;
 
-	if (!has_transparent_hugepage())
+	if (!pgtable_has_pmd_leaves())
 		return;
 
 	pr_debug("Validating PMD based THP\n");
@@ -993,7 +993,7 @@ static void __init destroy_args(struct pgtable_debug_args *args)
 	}
 
 	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
-	    has_transparent_hugepage() &&
+	    pgtable_has_pmd_leaves() &&
 	    args->pmd_pfn != ULONG_MAX) {
 		if (args->is_contiguous_page) {
 			free_contig_range(args->pmd_pfn, (1 << HPAGE_PMD_ORDER));
@@ -1253,7 +1253,7 @@ static int __init init_args(struct pgtable_debug_args *args)
 	}
 
 	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
-	    has_transparent_hugepage()) {
+	    pgtable_has_pmd_leaves()) {
 		page = debug_vm_pgtable_alloc_huge_page(args, HPAGE_PMD_ORDER);
 		if (page) {
 			args->pmd_pfn = page_to_pfn(page);
-- 
2.53.0




* [PATCH v2 07/11] mm: shmem: drop has_transparent_hugepage() usage
  2026-02-09 22:14 [PATCH v2 00/11] mm: thp: always enable mTHP support Luiz Capitulino
                   ` (5 preceding siblings ...)
  2026-02-09 22:14 ` [PATCH v2 06/11] mm: debug_vm_pgtable: " Luiz Capitulino
@ 2026-02-09 22:14 ` Luiz Capitulino
  2026-02-10  9:20   ` Baolin Wang
  2026-02-09 22:14 ` [PATCH v2 08/11] treewide: rename has_transparent_hugepage() to arch_has_pmd_leaves() Luiz Capitulino
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 18+ messages in thread
From: Luiz Capitulino @ 2026-02-09 22:14 UTC (permalink / raw)
  To: linux-kernel, linux-mm, david, baolin.wang
  Cc: ryan.roberts, akpm, lorenzo.stoakes

Shmem uses has_transparent_hugepage() in two ways:

1. shmem_parse_one() and shmem_init(): since the calls to
   has_transparent_hugepage() are protected by #ifdef
   CONFIG_TRANSPARENT_HUGEPAGE, they effectively check whether the CPU
   supports PMD-sized pages. This is irrelevant for shmem, which
   supports mTHP

2. shmem_parse_huge(): this checks whether THP is enabled and, on
   architectures that implement has_transparent_hugepage(), also whether
   the CPU supports PMD-sized pages

While it's necessary to check if CONFIG_TRANSPARENT_HUGEPAGE is enabled,
shmem can determine mTHP size support at folio allocation time.
Therefore, drop has_transparent_hugepage() usage while keeping the
CONFIG_TRANSPARENT_HUGEPAGE checks.

Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
 mm/shmem.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 79af5f9f8b90..32529586cd78 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -689,7 +689,7 @@ static int shmem_parse_huge(const char *str)
 	else
 		return -EINVAL;
 
-	if (!has_transparent_hugepage() &&
+	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
 	    huge != SHMEM_HUGE_NEVER && huge != SHMEM_HUGE_DENY)
 		return -EINVAL;
 
@@ -4678,8 +4678,7 @@ static int shmem_parse_one(struct fs_context *fc, struct fs_parameter *param)
 	case Opt_huge:
 		ctx->huge = result.uint_32;
 		if (ctx->huge != SHMEM_HUGE_NEVER &&
-		    !(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
-		      has_transparent_hugepage()))
+		    !IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
 			goto unsupported_parameter;
 		ctx->seen |= SHMEM_SEEN_HUGE;
 		break;
@@ -5463,7 +5462,7 @@ void __init shmem_init(void)
 #endif
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	if (has_transparent_hugepage() && shmem_huge > SHMEM_HUGE_DENY)
+	if (shmem_huge > SHMEM_HUGE_DENY)
 		SHMEM_SB(shm_mnt->mnt_sb)->huge = shmem_huge;
 	else
 		shmem_huge = SHMEM_HUGE_NEVER; /* just in case it was patched */
-- 
2.53.0




* [PATCH v2 08/11] treewide: rename has_transparent_hugepage() to arch_has_pmd_leaves()
  2026-02-09 22:14 [PATCH v2 00/11] mm: thp: always enable mTHP support Luiz Capitulino
                   ` (6 preceding siblings ...)
  2026-02-09 22:14 ` [PATCH v2 07/11] mm: shmem: drop has_transparent_hugepage() usage Luiz Capitulino
@ 2026-02-09 22:14 ` Luiz Capitulino
  2026-02-09 22:14 ` [PATCH v2 09/11] mm: replace thp_disabled_by_hw() with pgtable_has_pmd_leaves() Luiz Capitulino
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: Luiz Capitulino @ 2026-02-09 22:14 UTC (permalink / raw)
  To: linux-kernel, linux-mm, david, baolin.wang
  Cc: ryan.roberts, akpm, lorenzo.stoakes

Now that all has_transparent_hugepage() callers have been converted to
pgtable_has_pmd_leaves(), rename has_transparent_hugepage() to
arch_has_pmd_leaves() since that's what the helper checks for.

arch_has_pmd_leaves() is supposed to be called only by
init_arch_has_pmd_leaves(). The only temporary exception is
hugepage_init(), which will be converted in a later commit.

Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
 arch/mips/include/asm/pgtable.h               |  4 ++--
 arch/mips/mm/tlb-r4k.c                        |  4 ++--
 arch/powerpc/include/asm/book3s/64/hash-4k.h  |  2 +-
 arch/powerpc/include/asm/book3s/64/hash-64k.h |  2 +-
 arch/powerpc/include/asm/book3s/64/pgtable.h  | 10 +++++-----
 arch/powerpc/include/asm/book3s/64/radix.h    |  2 +-
 arch/powerpc/mm/book3s64/hash_pgtable.c       |  4 ++--
 arch/s390/include/asm/pgtable.h               |  4 ++--
 arch/x86/include/asm/pgtable.h                |  4 ++--
 include/linux/pgtable.h                       |  4 ++--
 mm/huge_memory.c                              |  2 +-
 mm/memory.c                                   |  2 +-
 12 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
index 9c06a612d33a..0080724a7df5 100644
--- a/arch/mips/include/asm/pgtable.h
+++ b/arch/mips/include/asm/pgtable.h
@@ -615,8 +615,8 @@ unsigned long io_remap_pfn_range_pfn(unsigned long pfn, unsigned long size);
 /* We don't have hardware dirty/accessed bits, generic_pmdp_establish is fine.*/
 #define pmdp_establish generic_pmdp_establish
 
-#define has_transparent_hugepage has_transparent_hugepage
-extern int has_transparent_hugepage(void);
+#define arch_has_pmd_leaves arch_has_pmd_leaves
+extern int arch_has_pmd_leaves(void);
 
 static inline int pmd_trans_huge(pmd_t pmd)
 {
diff --git a/arch/mips/mm/tlb-r4k.c b/arch/mips/mm/tlb-r4k.c
index 44a662536148..4fcc8195a130 100644
--- a/arch/mips/mm/tlb-r4k.c
+++ b/arch/mips/mm/tlb-r4k.c
@@ -432,7 +432,7 @@ void add_wired_entry(unsigned long entrylo0, unsigned long entrylo1,
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 
-int has_transparent_hugepage(void)
+int arch_has_pmd_leaves(void)
 {
 	static unsigned int mask = -1;
 
@@ -448,7 +448,7 @@ int has_transparent_hugepage(void)
 	}
 	return mask == PM_HUGE_MASK;
 }
-EXPORT_SYMBOL(has_transparent_hugepage);
+EXPORT_SYMBOL(arch_has_pmd_leaves);
 
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE  */
 
diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h
index 8e5bd9902bed..6744c2287199 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-4k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h
@@ -165,7 +165,7 @@ extern void hash__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
 extern pgtable_t hash__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
 extern pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm,
 				       unsigned long addr, pmd_t *pmdp);
-extern int hash__has_transparent_hugepage(void);
+extern int hash__arch_has_pmd_leaves(void);
 #endif
 
 #endif /* !__ASSEMBLER__ */
diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h b/arch/powerpc/include/asm/book3s/64/hash-64k.h
index 7deb3a66890b..9392aba5e5dc 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-64k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h
@@ -278,7 +278,7 @@ extern void hash__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
 extern pgtable_t hash__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
 extern pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm,
 				       unsigned long addr, pmd_t *pmdp);
-extern int hash__has_transparent_hugepage(void);
+extern int hash__arch_has_pmd_leaves(void);
 #endif /*  CONFIG_TRANSPARENT_HUGEPAGE */
 
 #endif	/* __ASSEMBLER__ */
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index aac8ce30cd3b..6ed036b3d3c2 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -1094,14 +1094,14 @@ static inline void update_mmu_cache_pud(struct vm_area_struct *vma,
 {
 }
 
-extern int hash__has_transparent_hugepage(void);
-static inline int has_transparent_hugepage(void)
+extern int hash__arch_has_pmd_leaves(void);
+static inline int arch_has_pmd_leaves(void)
 {
 	if (radix_enabled())
-		return radix__has_transparent_hugepage();
-	return hash__has_transparent_hugepage();
+		return radix__arch_has_pmd_leaves();
+	return hash__arch_has_pmd_leaves();
 }
-#define has_transparent_hugepage has_transparent_hugepage
+#define arch_has_pmd_leaves arch_has_pmd_leaves
 
 static inline int has_transparent_pud_hugepage(void)
 {
diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h
index da954e779744..c884a119cbd9 100644
--- a/arch/powerpc/include/asm/book3s/64/radix.h
+++ b/arch/powerpc/include/asm/book3s/64/radix.h
@@ -298,7 +298,7 @@ extern pmd_t radix__pmdp_huge_get_and_clear(struct mm_struct *mm,
 pud_t radix__pudp_huge_get_and_clear(struct mm_struct *mm,
 				     unsigned long addr, pud_t *pudp);
 
-static inline int radix__has_transparent_hugepage(void)
+static inline int radix__arch_has_pmd_leaves(void)
 {
 	/* For radix 2M at PMD level means thp */
 	if (mmu_psize_defs[MMU_PAGE_2M].shift == PMD_SHIFT)
diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c
index 82d31177630b..1dec64bf0c75 100644
--- a/arch/powerpc/mm/book3s64/hash_pgtable.c
+++ b/arch/powerpc/mm/book3s64/hash_pgtable.c
@@ -366,7 +366,7 @@ pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm,
 	return old_pmd;
 }
 
-int hash__has_transparent_hugepage(void)
+int hash__arch_has_pmd_leaves(void)
 {
 
 	if (!mmu_has_feature(MMU_FTR_16M_PAGE))
@@ -395,7 +395,7 @@ int hash__has_transparent_hugepage(void)
 
 	return 1;
 }
-EXPORT_SYMBOL_GPL(hash__has_transparent_hugepage);
+EXPORT_SYMBOL_GPL(hash__arch_has_pmd_leaves);
 
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index bca9b29778c3..4398855d558e 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -1866,8 +1866,8 @@ static inline int pmd_trans_huge(pmd_t pmd)
 	return pmd_leaf(pmd);
 }
 
-#define has_transparent_hugepage has_transparent_hugepage
-static inline int has_transparent_hugepage(void)
+#define arch_has_pmd_leaves arch_has_pmd_leaves
+static inline int arch_has_pmd_leaves(void)
 {
 	return cpu_has_edat1() ? 1 : 0;
 }
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index e33df3da6980..08d109280e36 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -313,8 +313,8 @@ static inline int pud_trans_huge(pud_t pud)
 }
 #endif
 
-#define has_transparent_hugepage has_transparent_hugepage
-static inline int has_transparent_hugepage(void)
+#define arch_has_pmd_leaves arch_has_pmd_leaves
+static inline int arch_has_pmd_leaves(void)
 {
 	return boot_cpu_has(X86_FEATURE_PSE);
 }
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 6733f90a1da4..b4d10ea9e45a 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -2039,8 +2039,8 @@ void __init init_arch_has_pmd_leaves(void);
 #endif
 #endif
 
-#ifndef has_transparent_hugepage
-#define has_transparent_hugepage() IS_BUILTIN(CONFIG_TRANSPARENT_HUGEPAGE)
+#ifndef arch_has_pmd_leaves
+#define arch_has_pmd_leaves() IS_BUILTIN(CONFIG_TRANSPARENT_HUGEPAGE)
 #endif
 
 #ifndef has_transparent_pud_hugepage
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 40cf59301c21..b80a897b9b6f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -905,7 +905,7 @@ static int __init hugepage_init(void)
 	int err;
 	struct kobject *hugepage_kobj;
 
-	if (!has_transparent_hugepage()) {
+	if (!arch_has_pmd_leaves()) {
 		transparent_hugepage_flags = 1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED;
 		return -EINVAL;
 	}
diff --git a/mm/memory.c b/mm/memory.c
index 4c25a3c453c6..9794429df015 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -182,7 +182,7 @@ EXPORT_SYMBOL(__arch_has_pmd_leaves);
 
 void __init init_arch_has_pmd_leaves(void)
 {
-	__arch_has_pmd_leaves = has_transparent_hugepage();
+	__arch_has_pmd_leaves = arch_has_pmd_leaves();
 }
 
 void mm_trace_rss_stat(struct mm_struct *mm, int member)
-- 
2.53.0




* [PATCH v2 09/11] mm: replace thp_disabled_by_hw() with pgtable_has_pmd_leaves()
  2026-02-09 22:14 [PATCH v2 00/11] mm: thp: always enable mTHP support Luiz Capitulino
                   ` (7 preceding siblings ...)
  2026-02-09 22:14 ` [PATCH v2 08/11] treewide: rename has_transparent_hugepage() to arch_has_pmd_leaves() Luiz Capitulino
@ 2026-02-09 22:14 ` Luiz Capitulino
  2026-02-09 22:14 ` [PATCH v2 10/11] mm: thp: always enable mTHP support Luiz Capitulino
  2026-02-09 22:14 ` [PATCH v2 11/11] mm: thp: x86: cleanup PSE feature bit usage Luiz Capitulino
  10 siblings, 0 replies; 18+ messages in thread
From: Luiz Capitulino @ 2026-02-09 22:14 UTC (permalink / raw)
  To: linux-kernel, linux-mm, david, baolin.wang
  Cc: ryan.roberts, akpm, lorenzo.stoakes

Despite its name, thp_disabled_by_hw() merely reports whether the
architecture lacks PMD-sized page support: it returns true when
TRANSPARENT_HUGEPAGE_UNSUPPORTED is set in transparent_hugepage_flags,
which only happens if the architecture implements arch_has_pmd_leaves()
and that function returns false.

Since pgtable_has_pmd_leaves() provides the same information (with
inverted polarity), use it instead.

Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
 include/linux/huge_mm.h | 7 -------
 mm/huge_memory.c        | 6 ++----
 mm/memory.c             | 2 +-
 mm/shmem.c              | 2 +-
 4 files changed, 4 insertions(+), 13 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index a4d9f964dfde..e291a650b10f 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -47,7 +47,6 @@ vm_fault_t vmf_insert_folio_pud(struct vm_fault *vmf, struct folio *folio,
 				bool write);
 
 enum transparent_hugepage_flag {
-	TRANSPARENT_HUGEPAGE_UNSUPPORTED,
 	TRANSPARENT_HUGEPAGE_FLAG,
 	TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
 	TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG,
@@ -352,12 +351,6 @@ static inline bool vma_thp_disabled(struct vm_area_struct *vma,
 	return mm_flags_test(MMF_DISABLE_THP_EXCEPT_ADVISED, vma->vm_mm);
 }
 
-static inline bool thp_disabled_by_hw(void)
-{
-	/* If the hardware/firmware marked hugepage support disabled. */
-	return transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED);
-}
-
 unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags);
 unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b80a897b9b6f..1e5ea2e47f79 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -122,7 +122,7 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
 	if (!vma->vm_mm)		/* vdso */
 		return 0;
 
-	if (thp_disabled_by_hw() || vma_thp_disabled(vma, vm_flags, forced_collapse))
+	if (!pgtable_has_pmd_leaves() || vma_thp_disabled(vma, vm_flags, forced_collapse))
 		return 0;
 
 	/* khugepaged doesn't collapse DAX vma, but page fault is fine. */
@@ -905,10 +905,8 @@ static int __init hugepage_init(void)
 	int err;
 	struct kobject *hugepage_kobj;
 
-	if (!arch_has_pmd_leaves()) {
-		transparent_hugepage_flags = 1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED;
+	if (!pgtable_has_pmd_leaves())
 		return -EINVAL;
-	}
 
 	/*
 	 * hugepages can't be allocated by the buddy allocator
diff --git a/mm/memory.c b/mm/memory.c
index 9794429df015..1e9dbec7e6eb 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5386,7 +5386,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct folio *folio, struct page *pa
 	 * PMD mappings if THPs are disabled. As we already have a THP,
 	 * behave as if we are forcing a collapse.
 	 */
-	if (thp_disabled_by_hw() || vma_thp_disabled(vma, vma->vm_flags,
+	if (!pgtable_has_pmd_leaves() || vma_thp_disabled(vma, vma->vm_flags,
 						     /* forced_collapse=*/ true))
 		return ret;
 
diff --git a/mm/shmem.c b/mm/shmem.c
index 32529586cd78..1c98e84667a4 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1827,7 +1827,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
 	vm_flags_t vm_flags = vma ? vma->vm_flags : 0;
 	unsigned int global_orders;
 
-	if (thp_disabled_by_hw() || (vma && vma_thp_disabled(vma, vm_flags, shmem_huge_force)))
+	if (!pgtable_has_pmd_leaves() || (vma && vma_thp_disabled(vma, vm_flags, shmem_huge_force)))
 		return 0;
 
 	global_orders = shmem_huge_global_enabled(inode, index, write_end,
-- 
2.53.0




* [PATCH v2 10/11] mm: thp: always enable mTHP support
  2026-02-09 22:14 [PATCH v2 00/11] mm: thp: always enable mTHP support Luiz Capitulino
                   ` (8 preceding siblings ...)
  2026-02-09 22:14 ` [PATCH v2 09/11] mm: replace thp_disabled_by_hw() with pgtable_has_pmd_leaves() Luiz Capitulino
@ 2026-02-09 22:14 ` Luiz Capitulino
  2026-02-10  9:56   ` Baolin Wang
  2026-02-09 22:14 ` [PATCH v2 11/11] mm: thp: x86: cleanup PSE feature bit usage Luiz Capitulino
  10 siblings, 1 reply; 18+ messages in thread
From: Luiz Capitulino @ 2026-02-09 22:14 UTC (permalink / raw)
  To: linux-kernel, linux-mm, david, baolin.wang
  Cc: ryan.roberts, akpm, lorenzo.stoakes

If PMD-sized pages are not supported on an architecture (i.e. the
architecture implements arch_has_pmd_leaves() and it returns false),
then the current code disables all THP, including mTHP.

This commit fixes this by always allowing mTHP to be enabled on all
architectures. When PMD-sized pages are not supported, their sysfs entry
is not created and PMD-sized mappings are disallowed at page-fault time.

Similarly, this commit implements the following changes for shmem:

 - In shmem_allowable_huge_orders(): drop the pgtable_has_pmd_leaves()
   check so that mTHP sizes are considered
 - In shmem_alloc_and_add_folio(): don't consider PMD and PUD orders
   when PMD-sized pages are not supported by the CPU

Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
 mm/huge_memory.c | 11 +++++++----
 mm/shmem.c       |  4 +++-
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 1e5ea2e47f79..882331592928 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -115,6 +115,9 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
 	else
 		supported_orders = THP_ORDERS_ALL_FILE_DEFAULT;
 
+	if (!pgtable_has_pmd_leaves())
+		supported_orders &= ~(BIT(PMD_ORDER) | BIT(PUD_ORDER));
+
 	orders &= supported_orders;
 	if (!orders)
 		return 0;
@@ -122,7 +125,7 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
 	if (!vma->vm_mm)		/* vdso */
 		return 0;
 
-	if (!pgtable_has_pmd_leaves() || vma_thp_disabled(vma, vm_flags, forced_collapse))
+	if (vma_thp_disabled(vma, vm_flags, forced_collapse))
 		return 0;
 
 	/* khugepaged doesn't collapse DAX vma, but page fault is fine. */
@@ -806,6 +809,9 @@ static int __init hugepage_init_sysfs(struct kobject **hugepage_kobj)
 	}
 
 	orders = THP_ORDERS_ALL_ANON | THP_ORDERS_ALL_FILE_DEFAULT;
+	if (!pgtable_has_pmd_leaves())
+		orders &= ~(BIT(PMD_ORDER) | BIT(PUD_ORDER));
+
 	order = highest_order(orders);
 	while (orders) {
 		thpsize = thpsize_create(order, *hugepage_kobj);
@@ -905,9 +911,6 @@ static int __init hugepage_init(void)
 	int err;
 	struct kobject *hugepage_kobj;
 
-	if (!pgtable_has_pmd_leaves())
-		return -EINVAL;
-
 	/*
 	 * hugepages can't be allocated by the buddy allocator
 	 */
diff --git a/mm/shmem.c b/mm/shmem.c
index 1c98e84667a4..cb325d1e2d1e 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1827,7 +1827,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
 	vm_flags_t vm_flags = vma ? vma->vm_flags : 0;
 	unsigned int global_orders;
 
-	if (!pgtable_has_pmd_leaves() || (vma && vma_thp_disabled(vma, vm_flags, shmem_huge_force)))
+	if (vma && vma_thp_disabled(vma, vm_flags, shmem_huge_force))
 		return 0;
 
 	global_orders = shmem_huge_global_enabled(inode, index, write_end,
@@ -1935,6 +1935,8 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
 
 	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
 		orders = 0;
+	else if (!pgtable_has_pmd_leaves())
+		orders &= ~(BIT(PMD_ORDER) | BIT(PUD_ORDER));
 
 	if (orders > 0) {
 		suitable_orders = shmem_suitable_orders(inode, vmf,
-- 
2.53.0




* [PATCH v2 11/11] mm: thp: x86: cleanup PSE feature bit usage
  2026-02-09 22:14 [PATCH v2 00/11] mm: thp: always enable mTHP support Luiz Capitulino
                   ` (9 preceding siblings ...)
  2026-02-09 22:14 ` [PATCH v2 10/11] mm: thp: always enable mTHP support Luiz Capitulino
@ 2026-02-09 22:14 ` Luiz Capitulino
  10 siblings, 0 replies; 18+ messages in thread
From: Luiz Capitulino @ 2026-02-09 22:14 UTC (permalink / raw)
  To: linux-kernel, linux-mm, david, baolin.wang
  Cc: ryan.roberts, akpm, lorenzo.stoakes

Historically, THP support on x86 checked the PSE feature bit to enable
THP. On 64-bit, this check is redundant since PSE is always enabled by
default for compatibility. On 32-bit, PSE can enable 2 MiB or 4 MiB
page sizes, so it must be checked. To clean this up, this commit:

1. Drops arch_has_pmd_leaves() from common x86 code. For 64-bit,
   we assume PMD-sized pages are always supported

2. Checks for PSE only on 32-bit by implementing arch_has_pmd_leaves()

Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
 arch/x86/include/asm/pgtable.h    | 6 ------
 arch/x86/include/asm/pgtable_32.h | 6 ++++++
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 08d109280e36..55b88de5178f 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -313,12 +313,6 @@ static inline int pud_trans_huge(pud_t pud)
 }
 #endif
 
-#define arch_has_pmd_leaves arch_has_pmd_leaves
-static inline int arch_has_pmd_leaves(void)
-{
-	return boot_cpu_has(X86_FEATURE_PSE);
-}
-
 #ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
 static inline bool pmd_special(pmd_t pmd)
 {
diff --git a/arch/x86/include/asm/pgtable_32.h b/arch/x86/include/asm/pgtable_32.h
index b612cc57a4d3..3bd51cfa431e 100644
--- a/arch/x86/include/asm/pgtable_32.h
+++ b/arch/x86/include/asm/pgtable_32.h
@@ -45,6 +45,12 @@ do {						\
 	flush_tlb_one_kernel((vaddr));		\
 } while (0)
 
+#define arch_has_pmd_leaves arch_has_pmd_leaves
+static inline int arch_has_pmd_leaves(void)
+{
+	return boot_cpu_has(X86_FEATURE_PSE);
+}
+
 #endif /* !__ASSEMBLER__ */
 
 /*
-- 
2.53.0




* Re: [PATCH v2 02/11] mm: introduce pgtable_has_pmd_leaves()
  2026-02-09 22:14 ` [PATCH v2 02/11] mm: introduce pgtable_has_pmd_leaves() Luiz Capitulino
@ 2026-02-10  7:45   ` kernel test robot
  2026-02-10  8:05   ` kernel test robot
  1 sibling, 0 replies; 18+ messages in thread
From: kernel test robot @ 2026-02-10  7:45 UTC (permalink / raw)
  To: Luiz Capitulino, linux-kernel, linux-mm, david, baolin.wang
  Cc: llvm, oe-kbuild-all, ryan.roberts, akpm, lorenzo.stoakes

Hi Luiz,

kernel test robot noticed the following build errors:

[auto build test ERROR on powerpc/next]
[also build test ERROR on powerpc/fixes tip/x86/core linus/master v6.19 next-20260209]
[cannot apply to akpm-mm/mm-everything]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Luiz-Capitulino/docs-tmpfs-remove-implementation-detail-reference/20260210-061806
base:   https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
patch link:    https://lore.kernel.org/r/f55ca6204336f52f1651d02dc370bc3b771df3d0.1770675272.git.luizcap%40redhat.com
patch subject: [PATCH v2 02/11] mm: introduce pgtable_has_pmd_leaves()
config: arm-allnoconfig (https://download.01.org/0day-ci/archive/20260210/202602101556.msl493OX-lkp@intel.com/config)
compiler: clang version 22.0.0git (https://github.com/llvm/llvm-project 9b8addffa70cee5b2acc5454712d9cf78ce45710)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260210/202602101556.msl493OX-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602101556.msl493OX-lkp@intel.com/

All errors (new ones prefixed by >>):

>> ld.lld: error: undefined symbol: init_arch_has_pmd_leaves
   >>> referenced by main.c
   >>>               init/main.o:(start_kernel) in archive vmlinux.a

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki



* Re: [PATCH v2 02/11] mm: introduce pgtable_has_pmd_leaves()
  2026-02-09 22:14 ` [PATCH v2 02/11] mm: introduce pgtable_has_pmd_leaves() Luiz Capitulino
  2026-02-10  7:45   ` kernel test robot
@ 2026-02-10  8:05   ` kernel test robot
  1 sibling, 0 replies; 18+ messages in thread
From: kernel test robot @ 2026-02-10  8:05 UTC (permalink / raw)
  To: Luiz Capitulino, linux-kernel, linux-mm, david, baolin.wang
  Cc: oe-kbuild-all, ryan.roberts, akpm, lorenzo.stoakes

Hi Luiz,

kernel test robot noticed the following build errors:

[auto build test ERROR on powerpc/next]
[also build test ERROR on powerpc/fixes tip/x86/core linus/master v6.19 next-20260209]
[cannot apply to akpm-mm/mm-everything]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Luiz-Capitulino/docs-tmpfs-remove-implementation-detail-reference/20260210-061806
base:   https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
patch link:    https://lore.kernel.org/r/f55ca6204336f52f1651d02dc370bc3b771df3d0.1770675272.git.luizcap%40redhat.com
patch subject: [PATCH v2 02/11] mm: introduce pgtable_has_pmd_leaves()
config: m68k-allnoconfig (https://download.01.org/0day-ci/archive/20260210/202602101515.tnpQ1Idh-lkp@intel.com/config)
compiler: m68k-linux-gcc (GCC) 15.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260210/202602101515.tnpQ1Idh-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602101515.tnpQ1Idh-lkp@intel.com/

All errors (new ones prefixed by >>):

   m68k-linux-ld: init/main.o: in function `start_kernel':
>> main.c:(.init.text+0x698): undefined reference to `init_arch_has_pmd_leaves'

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki



* Re: [PATCH v2 07/11] mm: shmem: drop has_transparent_hugepage() usage
  2026-02-09 22:14 ` [PATCH v2 07/11] mm: shmem: drop has_transparent_hugepage() usage Luiz Capitulino
@ 2026-02-10  9:20   ` Baolin Wang
  0 siblings, 0 replies; 18+ messages in thread
From: Baolin Wang @ 2026-02-10  9:20 UTC (permalink / raw)
  To: Luiz Capitulino, linux-kernel, linux-mm, david
  Cc: ryan.roberts, akpm, lorenzo.stoakes



On 2/10/26 6:14 AM, Luiz Capitulino wrote:
> Shmem performs two kinds of has_transparent_hugepage() usage:
> 
> 1. shmem_parse_one() and shmem_init(): since the calls to
>     has_transparent_hugepage() are protected by #ifdef
>     CONFIG_TRANSPARENT_HUGEPAGE, this actually checks if the CPU supports
>     PMD-sized pages. This is irrelevant for shmem as it supports mTHP
> 
> 2. shmem_parse_huge(): This is checking if THP is enabled and on
>     architectures that implement has_transparent_hugepage(), this also
>     checks if the CPU supports PMD-sized pages
> 
> While it's necessary to check if CONFIG_TRANSPARENT_HUGEPAGE is enabled,
> shmem can determine mTHP size support at folio allocation time.
> Therefore, drop has_transparent_hugepage() usage while keeping the
> CONFIG_TRANSPARENT_HUGEPAGE checks.
> 
> Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
> ---

Looks reasonable to me. Thanks.
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>

>   mm/shmem.c | 7 +++----
>   1 file changed, 3 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 79af5f9f8b90..32529586cd78 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -689,7 +689,7 @@ static int shmem_parse_huge(const char *str)
>   	else
>   		return -EINVAL;
>   
> -	if (!has_transparent_hugepage() &&
> +	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
>   	    huge != SHMEM_HUGE_NEVER && huge != SHMEM_HUGE_DENY)
>   		return -EINVAL;
>   
> @@ -4678,8 +4678,7 @@ static int shmem_parse_one(struct fs_context *fc, struct fs_parameter *param)
>   	case Opt_huge:
>   		ctx->huge = result.uint_32;
>   		if (ctx->huge != SHMEM_HUGE_NEVER &&
> -		    !(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
> -		      has_transparent_hugepage()))
> +		    !IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
>   			goto unsupported_parameter;
>   		ctx->seen |= SHMEM_SEEN_HUGE;
>   		break;
> @@ -5463,7 +5462,7 @@ void __init shmem_init(void)
>   #endif
>   
>   #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -	if (has_transparent_hugepage() && shmem_huge > SHMEM_HUGE_DENY)
> +	if (shmem_huge > SHMEM_HUGE_DENY)
>   		SHMEM_SB(shm_mnt->mnt_sb)->huge = shmem_huge;
>   	else
>   		shmem_huge = SHMEM_HUGE_NEVER; /* just in case it was patched */




* Re: [PATCH v2 10/11] mm: thp: always enable mTHP support
  2026-02-09 22:14 ` [PATCH v2 10/11] mm: thp: always enable mTHP support Luiz Capitulino
@ 2026-02-10  9:56   ` Baolin Wang
  2026-02-10 13:28     ` Luiz Capitulino
  0 siblings, 1 reply; 18+ messages in thread
From: Baolin Wang @ 2026-02-10  9:56 UTC (permalink / raw)
  To: Luiz Capitulino, linux-kernel, linux-mm, david
  Cc: ryan.roberts, akpm, lorenzo.stoakes



On 2/10/26 6:14 AM, Luiz Capitulino wrote:
> If PMD-sized pages are not supported on an architecture (ie. the
> arch implements arch_has_pmd_leaves() and it returns false) then the
> current code disables all THP, including mTHP.
> 
> This commit fixes this by allowing mTHP to be always enabled for all
> archs. When PMD-sized pages are not supported, its sysfs entry won't be
> created and their mapping will be disallowed at page-fault time.
> 
> Similarly, this commit implements the following changes for shmem:
> 
>   - In shmem_allowable_huge_orders(): drop the pgtable_has_pmd_leaves()
>     check so that mTHP sizes are considered
>   - In shmem_alloc_and_add_folio(): don't consider PMD and PUD orders
>     when PMD-sized pages are not supported by the CPU
> 
> Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
> ---
>   mm/huge_memory.c | 11 +++++++----
>   mm/shmem.c       |  4 +++-
>   2 files changed, 10 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 1e5ea2e47f79..882331592928 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -115,6 +115,9 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
>   	else
>   		supported_orders = THP_ORDERS_ALL_FILE_DEFAULT;
>   
> +	if (!pgtable_has_pmd_leaves())
> +		supported_orders &= ~(BIT(PMD_ORDER) | BIT(PUD_ORDER));
> +
>   	orders &= supported_orders;
>   	if (!orders)
>   		return 0;
> @@ -122,7 +125,7 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
>   	if (!vma->vm_mm)		/* vdso */
>   		return 0;
>   
> -	if (!pgtable_has_pmd_leaves() || vma_thp_disabled(vma, vm_flags, forced_collapse))
> +	if (vma_thp_disabled(vma, vm_flags, forced_collapse))
>   		return 0;
>   
>   	/* khugepaged doesn't collapse DAX vma, but page fault is fine. */
> @@ -806,6 +809,9 @@ static int __init hugepage_init_sysfs(struct kobject **hugepage_kobj)
>   	}
>   
>   	orders = THP_ORDERS_ALL_ANON | THP_ORDERS_ALL_FILE_DEFAULT;
> +	if (!pgtable_has_pmd_leaves())
> +		orders &= ~(BIT(PMD_ORDER) | BIT(PUD_ORDER));

I think you should also handle the 'huge_anon_orders_inherit' setting in 
this function if pgtable_has_pmd_leaves() returns false. Shmem as well.

if (!anon_orders_configured)
	huge_anon_orders_inherit = BIT(PMD_ORDER);

> +
>   	order = highest_order(orders);
>   	while (orders) {
>   		thpsize = thpsize_create(order, *hugepage_kobj);
> @@ -905,9 +911,6 @@ static int __init hugepage_init(void)
>   	int err;
>   	struct kobject *hugepage_kobj;
>   
> -	if (!pgtable_has_pmd_leaves())
> -		return -EINVAL;
> -
>   	/*
>   	 * hugepages can't be allocated by the buddy allocator
>   	 */
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 1c98e84667a4..cb325d1e2d1e 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1827,7 +1827,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>   	vm_flags_t vm_flags = vma ? vma->vm_flags : 0;
>   	unsigned int global_orders;
>   
> -	if (!pgtable_has_pmd_leaves() || (vma && vma_thp_disabled(vma, vm_flags, shmem_huge_force)))
> +	if (vma && vma_thp_disabled(vma, vm_flags, shmem_huge_force))
>   		return 0;
>   
>   	global_orders = shmem_huge_global_enabled(inode, index, write_end,
> @@ -1935,6 +1935,8 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
>   
>   	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
>   		orders = 0;
> +	else if (!pgtable_has_pmd_leaves())
> +		orders &= ~(BIT(PMD_ORDER) | BIT(PUD_ORDER));

Moving this check into shmem_allowable_huge_orders() would be more 
appropriate.

>   
>   	if (orders > 0) {
>   		suitable_orders = shmem_suitable_orders(inode, vmf,




* Re: [PATCH v2 10/11] mm: thp: always enable mTHP support
  2026-02-10  9:56   ` Baolin Wang
@ 2026-02-10 13:28     ` Luiz Capitulino
  2026-02-11  1:12       ` Baolin Wang
  0 siblings, 1 reply; 18+ messages in thread
From: Luiz Capitulino @ 2026-02-10 13:28 UTC (permalink / raw)
  To: Baolin Wang, linux-kernel, linux-mm, david
  Cc: ryan.roberts, akpm, lorenzo.stoakes

On 2026-02-10 04:56, Baolin Wang wrote:
> 
> 
> On 2/10/26 6:14 AM, Luiz Capitulino wrote:
>> If PMD-sized pages are not supported on an architecture (ie. the
>> arch implements arch_has_pmd_leaves() and it returns false) then the
>> current code disables all THP, including mTHP.
>>
>> This commit fixes this by allowing mTHP to be always enabled for all
>> archs. When PMD-sized pages are not supported, its sysfs entry won't be
>> created and their mapping will be disallowed at page-fault time.
>>
>> Similarly, this commit implements the following changes for shmem:
>>
>>   - In shmem_allowable_huge_orders(): drop the pgtable_has_pmd_leaves()
>>     check so that mTHP sizes are considered
>>   - In shmem_alloc_and_add_folio(): don't consider PMD and PUD orders
>>     when PMD-sized pages are not supported by the CPU
>>
>> Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
>> ---
>>   mm/huge_memory.c | 11 +++++++----
>>   mm/shmem.c       |  4 +++-
>>   2 files changed, 10 insertions(+), 5 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 1e5ea2e47f79..882331592928 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -115,6 +115,9 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
>>       else
>>           supported_orders = THP_ORDERS_ALL_FILE_DEFAULT;
>> +    if (!pgtable_has_pmd_leaves())
>> +        supported_orders &= ~(BIT(PMD_ORDER) | BIT(PUD_ORDER));
>> +
>>       orders &= supported_orders;
>>       if (!orders)
>>           return 0;
>> @@ -122,7 +125,7 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
>>       if (!vma->vm_mm)        /* vdso */
>>           return 0;
>> -    if (!pgtable_has_pmd_leaves() || vma_thp_disabled(vma, vm_flags, forced_collapse))
>> +    if (vma_thp_disabled(vma, vm_flags, forced_collapse))
>>           return 0;
>>       /* khugepaged doesn't collapse DAX vma, but page fault is fine. */
>> @@ -806,6 +809,9 @@ static int __init hugepage_init_sysfs(struct kobject **hugepage_kobj)
>>       }
>>       orders = THP_ORDERS_ALL_ANON | THP_ORDERS_ALL_FILE_DEFAULT;
>> +    if (!pgtable_has_pmd_leaves())
>> +        orders &= ~(BIT(PMD_ORDER) | BIT(PUD_ORDER));
> 
> I think you should also handle the 'huge_anon_orders_inherit' setting in this function if pgtable_has_pmd_leaves() returns false. Shmem as well.
> 
> if (!anon_orders_configured)
>      huge_anon_orders_inherit = BIT(PMD_ORDER);

Good catch. So, would you agree that we should set it to BIT(PMD_ORDER - 1)
in this case?

> 
>> +
>>       order = highest_order(orders);
>>       while (orders) {
>>           thpsize = thpsize_create(order, *hugepage_kobj);
>> @@ -905,9 +911,6 @@ static int __init hugepage_init(void)
>>       int err;
>>       struct kobject *hugepage_kobj;
>> -    if (!pgtable_has_pmd_leaves())
>> -        return -EINVAL;
>> -
>>       /*
>>        * hugepages can't be allocated by the buddy allocator
>>        */
>> diff --git a/mm/shmem.c b/mm/shmem.c
>> index 1c98e84667a4..cb325d1e2d1e 100644
>> --- a/mm/shmem.c
>> +++ b/mm/shmem.c
>> @@ -1827,7 +1827,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>>       vm_flags_t vm_flags = vma ? vma->vm_flags : 0;
>>       unsigned int global_orders;
>> -    if (!pgtable_has_pmd_leaves() || (vma && vma_thp_disabled(vma, vm_flags, shmem_huge_force)))
>> +    if (vma && vma_thp_disabled(vma, vm_flags, shmem_huge_force))
>>           return 0;
>>       global_orders = shmem_huge_global_enabled(inode, index, write_end,
>> @@ -1935,6 +1935,8 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
>>       if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
>>           orders = 0;
>> +    else if (!pgtable_has_pmd_leaves())
>> +        orders &= ~(BIT(PMD_ORDER) | BIT(PUD_ORDER));
> 
> Moving this check into shmem_allowable_huge_orders() would be more appropriate.

Will do.

Thanks a lot for the very fast review.

> 
>>       if (orders > 0) {
>>           suitable_orders = shmem_suitable_orders(inode, vmf,
> 




* Re: [PATCH v2 10/11] mm: thp: always enable mTHP support
  2026-02-10 13:28     ` Luiz Capitulino
@ 2026-02-11  1:12       ` Baolin Wang
  0 siblings, 0 replies; 18+ messages in thread
From: Baolin Wang @ 2026-02-11  1:12 UTC (permalink / raw)
  To: Luiz Capitulino, linux-kernel, linux-mm, david
  Cc: ryan.roberts, akpm, lorenzo.stoakes



On 2/10/26 9:28 PM, Luiz Capitulino wrote:
> On 2026-02-10 04:56, Baolin Wang wrote:
>>
>>
>> On 2/10/26 6:14 AM, Luiz Capitulino wrote:
>>> If PMD-sized pages are not supported on an architecture (ie. the
>>> arch implements arch_has_pmd_leaves() and it returns false) then the
>>> current code disables all THP, including mTHP.
>>>
>>> This commit fixes this by allowing mTHP to be always enabled for all
>>> archs. When PMD-sized pages are not supported, its sysfs entry won't be
>>> created and their mapping will be disallowed at page-fault time.
>>>
>>> Similarly, this commit implements the following changes for shmem:
>>>
>>>   - In shmem_allowable_huge_orders(): drop the pgtable_has_pmd_leaves()
>>>     check so that mTHP sizes are considered
>>>   - In shmem_alloc_and_add_folio(): don't consider PMD and PUD orders
>>>     when PMD-sized pages are not supported by the CPU
>>>
>>> Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
>>> ---
>>>   mm/huge_memory.c | 11 +++++++----
>>>   mm/shmem.c       |  4 +++-
>>>   2 files changed, 10 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>> index 1e5ea2e47f79..882331592928 100644
>>> --- a/mm/huge_memory.c
>>> +++ b/mm/huge_memory.c
>>> @@ -115,6 +115,9 @@ unsigned long __thp_vma_allowable_orders(struct 
>>> vm_area_struct *vma,
>>>       else
>>>           supported_orders = THP_ORDERS_ALL_FILE_DEFAULT;
>>> +    if (!pgtable_has_pmd_leaves())
>>> +        supported_orders &= ~(BIT(PMD_ORDER) | BIT(PUD_ORDER));
>>> +
>>>       orders &= supported_orders;
>>>       if (!orders)
>>>           return 0;
>>> @@ -122,7 +125,7 @@ unsigned long __thp_vma_allowable_orders(struct 
>>> vm_area_struct *vma,
>>>       if (!vma->vm_mm)        /* vdso */
>>>           return 0;
>>> -    if (!pgtable_has_pmd_leaves() || vma_thp_disabled(vma, vm_flags, 
>>> forced_collapse))
>>> +    if (vma_thp_disabled(vma, vm_flags, forced_collapse))
>>>           return 0;
>>>       /* khugepaged doesn't collapse DAX vma, but page fault is fine. */
>>> @@ -806,6 +809,9 @@ static int __init hugepage_init_sysfs(struct 
>>> kobject **hugepage_kobj)
>>>       }
>>>       orders = THP_ORDERS_ALL_ANON | THP_ORDERS_ALL_FILE_DEFAULT;
>>> +    if (!pgtable_has_pmd_leaves())
>>> +        orders &= ~(BIT(PMD_ORDER) | BIT(PUD_ORDER));
>>
>> I think you should also handle the 'huge_anon_orders_inherit' setting 
>> in this function if pgtable_has_pmd_leaves() returns false. Shmem as 
>> well.
>>
>> if (!anon_orders_configured)
>>      huge_anon_orders_inherit = BIT(PMD_ORDER);
> 
> Good catch. So, would you agree that should set it to BIT(PMD_ORDER - 1)
> in this case?

From the documentation:
"
By default, PMD-sized hugepages have enabled="inherit" and all other
hugepage sizes have enabled="never".
"

So if pgtable_has_pmd_leaves() returns false, IMO, we should just skip 
setting the PMD-sized order for huge_anon_orders_inherit. What I mean is:

if (!anon_orders_configured && pgtable_has_pmd_leaves())
	huge_anon_orders_inherit = BIT(PMD_ORDER);



Thread overview: 18+ messages
2026-02-09 22:14 [PATCH v2 00/11] mm: thp: always enable mTHP support Luiz Capitulino
2026-02-09 22:14 ` [PATCH v2 01/11] docs: tmpfs: remove implementation detail reference Luiz Capitulino
2026-02-09 22:14 ` [PATCH v2 02/11] mm: introduce pgtable_has_pmd_leaves() Luiz Capitulino
2026-02-10  7:45   ` kernel test robot
2026-02-10  8:05   ` kernel test robot
2026-02-09 22:14 ` [PATCH v2 03/11] drivers: dax: use pgtable_has_pmd_leaves() Luiz Capitulino
2026-02-09 22:14 ` [PATCH v2 04/11] drivers: i915 selftest: " Luiz Capitulino
2026-02-09 22:14 ` [PATCH v2 05/11] drivers: nvdimm: " Luiz Capitulino
2026-02-09 22:14 ` [PATCH v2 06/11] mm: debug_vm_pgtable: " Luiz Capitulino
2026-02-09 22:14 ` [PATCH v2 07/11] mm: shmem: drop has_transparent_hugepage() usage Luiz Capitulino
2026-02-10  9:20   ` Baolin Wang
2026-02-09 22:14 ` [PATCH v2 08/11] treewide: rename has_transparent_hugepage() to arch_has_pmd_leaves() Luiz Capitulino
2026-02-09 22:14 ` [PATCH v2 09/11] mm: replace thp_disabled_by_hw() with pgtable_has_pmd_leaves() Luiz Capitulino
2026-02-09 22:14 ` [PATCH v2 10/11] mm: thp: always enable mTHP support Luiz Capitulino
2026-02-10  9:56   ` Baolin Wang
2026-02-10 13:28     ` Luiz Capitulino
2026-02-11  1:12       ` Baolin Wang
2026-02-09 22:14 ` [PATCH v2 11/11] mm: thp: x86: cleanup PSE feature bit usage Luiz Capitulino
