* [RFC 00/12] mm: PUD (1GB) THP implementation
@ 2026-02-02 0:50 Usama Arif
2026-02-02 0:50 ` [RFC 01/12] mm: add PUD THP ptdesc and rmap support Usama Arif
` (15 more replies)
0 siblings, 16 replies; 49+ messages in thread
From: Usama Arif @ 2026-02-02 0:50 UTC (permalink / raw)
To: ziy, Andrew Morton, David Hildenbrand, lorenzo.stoakes, linux-mm
Cc: hannes, riel, shakeel.butt, kas, baohua, dev.jain, baolin.wang,
npache, Liam.Howlett, ryan.roberts, vbabka, lance.yang,
linux-kernel, kernel-team, Usama Arif
This is an RFC series to implement 1GB PUD-level THPs, allowing
applications to benefit from reduced TLB pressure without requiring
hugetlbfs. The patches are based on top of
f9b74c13b773b7c7e4920d7bc214ea3d5f37b422 from mm-stable (6.19-rc6).
Motivation: Why 1GB THP over hugetlbfs?
=======================================
While hugetlbfs provides 1GB huge pages today, it has significant limitations
that make it unsuitable for many workloads:
1. Static Reservation: hugetlbfs requires pre-allocating huge pages at boot
or runtime, taking that memory away from the general pool. This requires
capacity planning and administrative overhead, and makes workload
orchestration much more complex, especially when colocating with workloads
that don't use hugetlbfs.
2. No Fallback: If a 1GB huge page cannot be allocated, hugetlbfs fails
rather than falling back to smaller pages. This makes it fragile under
memory pressure.
3. No Splitting: hugetlbfs pages cannot be split when only partial access
is needed, leading to memory waste and preventing partial reclaim.
4. Memory Accounting: hugetlbfs memory is accounted separately and cannot
be easily shared with regular memory pools.
PUD THP solves these limitations by integrating 1GB pages into the existing
THP infrastructure.
Performance Results
===================
Benchmark results of these patches on an Intel Xeon Platinum 8321HC:
Test: True Random Memory Access [1] test of a 4GB memory region with a
pointer-chasing workload (4M random pointer dereferences through memory):
| Metric | PUD THP (1GB) | PMD THP (2MB) | Change |
|-------------------|---------------|---------------|--------------|
| Memory access | 88 ms | 134 ms | 34% faster |
| Page fault time | 898 ms | 331 ms | 2.7x slower |
Page faulting 1G pages is 2.7x slower (Allocating 1G pages is hard :)).
For long-running workloads this will be a one-off cost, and the 34%
improvement in access latency provides significant benefit.
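For context on why the TLB win is plausible (simple arithmetic, not new
measurements): covering the same 4GB region takes 4 mappings with 1GB PUD
THPs, 2048 mappings with 2MB PMD THPs, and 1,048,576 mappings with 4K base
pages, so the working set fits in far fewer TLB entries with PUD THPs.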
ARM with 64K PAGE_SIZE supports 512M PMD THPs. At Meta, we have a CPU-bound
workload running on a large number of ARM servers (256G). I enabled the 512M
THP setting to always on 100 servers in production (without really having
high expectations :)). The average memory used by the workload increased from
217G to 233G. The amount of memory backed by 512M pages was 68G! dTLB misses
went down by 26% and the PID multiplier increased by 5.9% (a very significant
improvement in workload performance). A significant number of these THPs were
faulted in at application start and were present across different VMAs. Of
course, getting these 512M pages is easier on ARM due to the bigger PAGE_SIZE
and pageblock order.
I am hoping that these patches for 1G THP can be used to provide similar
benefits for x86. I expect workloads to fault them in at start time when there
is plenty of free memory available.
Previous attempt by Zi Yan
==========================
Zi Yan attempted 1G THPs [2] on kernel version 5.11. There have been
significant changes in the kernel since then, including the folio conversion,
the mTHP framework, ptdesc, rmap changes, etc. I found it easier to use the
current PMD code as a reference for making 1G PUD THP work. I am hoping Zi
can provide guidance on these patches!
Major Design Decisions
======================
1. No shared 1G zero page: The memory cost would be quite significant!
2. Page Table Pre-deposit Strategy
PMD THP deposits a single PTE page table. PUD THP deposits 512 PTE
page tables (one for each potential PMD entry after split).
We allocate a PMD page table and use its pmd_huge_pte list to store
the deposited PTE tables. This ensures split operations don't fail due
to page table allocation failures, at the cost of 2M per PUD THP (see
the sketch after this list).
3. Split to Base Pages
When a PUD THP must be split (COW, partial unmap, mprotect), we split
directly to base pages (262,144 PTEs). Ideally we would split to 2M pages
first and then to 4K pages if needed, but this would require significant
rmap and mapcount tracking changes.
4. COW and fork handling via split
Copy-on-write and fork for PUD THP trigger a split to base pages, then
use the existing PTE-level COW infrastructure. Getting another 1G region
is hard and could fail, and if only a single 4K page is written, copying
1G is a waste. Probably this should only be done on COW and not on fork?
5. Migration via split
Split the PUD to PTEs and migrate the individual pages. It is going to be
difficult to find a contiguous 1G region to migrate to. Maybe it's better
to not allow migration of PUDs at all? I am more tempted to not allow
migration, but have kept splitting in this RFC.
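A rough sketch of the pre-deposit layout referred to in design decision 2
(field and helper names follow the later patches in this series):

	/*
	 * One PMD table is deposited at the PUD, and the 512 PTE tables are
	 * chained off that PMD table's pmd_huge_pte:
	 *
	 *   pud ptdesc->pud_huge_pmd --> PMD table (4K)
	 *                                  ptdesc->pmd_huge_pte --> PTE table #0
	 *                                                       --> PTE table #1
	 *                                                           ...
	 *                                                       --> PTE table #511
	 *
	 * Cost per PUD THP: (1 + 512) * 4K page tables ~= 2M.
	 */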
Reviewer's guide
================
Most of the code is written by adapting the existing PMD code. For example,
the PUD page fault path is very similar to the PMD one; the differences are
that there is no shared zero page and the page table deposit strategy. I
think the easiest way to review this series is to compare it with the PMD
code.
Test results
============
1..7
# Starting 7 tests from 1 test cases.
# RUN pud_thp.basic_allocation ...
# pud_thp_test.c:169:basic_allocation:PUD THP allocated (anon_fault_alloc: 0 -> 1)
# OK pud_thp.basic_allocation
ok 1 pud_thp.basic_allocation
# RUN pud_thp.read_write_access ...
# OK pud_thp.read_write_access
ok 2 pud_thp.read_write_access
# RUN pud_thp.fork_cow ...
# pud_thp_test.c:236:fork_cow:Fork COW completed (thp_split_pud: 0 -> 1)
# OK pud_thp.fork_cow
ok 3 pud_thp.fork_cow
# RUN pud_thp.partial_munmap ...
# pud_thp_test.c:267:partial_munmap:Partial munmap completed (thp_split_pud: 1 -> 2)
# OK pud_thp.partial_munmap
ok 4 pud_thp.partial_munmap
# RUN pud_thp.mprotect_split ...
# pud_thp_test.c:293:mprotect_split:mprotect split completed (thp_split_pud: 2 -> 3)
# OK pud_thp.mprotect_split
ok 5 pud_thp.mprotect_split
# RUN pud_thp.reclaim_pageout ...
# pud_thp_test.c:322:reclaim_pageout:Reclaim completed (thp_split_pud: 3 -> 4)
# OK pud_thp.reclaim_pageout
ok 6 pud_thp.reclaim_pageout
# RUN pud_thp.migration_mbind ...
# pud_thp_test.c:356:migration_mbind:Migration completed (thp_split_pud: 4 -> 5)
# OK pud_thp.migration_mbind
ok 7 pud_thp.migration_mbind
# PASSED: 7 / 7 tests passed.
# Totals: pass:7 fail:0 xfail:0 xpass:0 skip:0 error:0
[1] https://gist.github.com/uarif1/bf279b2a01a536cda945ff9f40196a26
[2] https://lore.kernel.org/linux-mm/20210224223536.803765-1-zi.yan@sent.com/
Signed-off-by: Usama Arif <usamaarif642@gmail.com>
Usama Arif (12):
mm: add PUD THP ptdesc and rmap support
mm/thp: add mTHP stats infrastructure for PUD THP
mm: thp: add PUD THP allocation and fault handling
mm: thp: implement PUD THP split to PTE level
mm: thp: add reclaim and migration support for PUD THP
selftests/mm: add PUD THP basic allocation test
selftests/mm: add PUD THP read/write access test
selftests/mm: add PUD THP fork COW test
selftests/mm: add PUD THP partial munmap test
selftests/mm: add PUD THP mprotect split test
selftests/mm: add PUD THP reclaim test
selftests/mm: add PUD THP migration test
include/linux/huge_mm.h | 60 ++-
include/linux/mm.h | 19 +
include/linux/mm_types.h | 5 +-
include/linux/pgtable.h | 8 +
include/linux/rmap.h | 7 +-
mm/huge_memory.c | 535 +++++++++++++++++++++-
mm/internal.h | 3 +
mm/memory.c | 8 +-
mm/migrate.c | 17 +
mm/page_vma_mapped.c | 35 ++
mm/pgtable-generic.c | 83 ++++
mm/rmap.c | 96 +++-
mm/vmscan.c | 2 +
tools/testing/selftests/mm/Makefile | 1 +
tools/testing/selftests/mm/pud_thp_test.c | 360 +++++++++++++++
15 files changed, 1197 insertions(+), 42 deletions(-)
create mode 100644 tools/testing/selftests/mm/pud_thp_test.c
--
2.47.3
* [RFC 01/12] mm: add PUD THP ptdesc and rmap support
2026-02-02 0:50 [RFC 00/12] mm: PUD (1GB) THP implementation Usama Arif
@ 2026-02-02 0:50 ` Usama Arif
2026-02-02 10:44 ` Kiryl Shutsemau
2026-02-02 12:15 ` Lorenzo Stoakes
2026-02-02 0:50 ` [RFC 02/12] mm/thp: add mTHP stats infrastructure for PUD THP Usama Arif
` (14 subsequent siblings)
15 siblings, 2 replies; 49+ messages in thread
From: Usama Arif @ 2026-02-02 0:50 UTC (permalink / raw)
To: ziy, Andrew Morton, David Hildenbrand, lorenzo.stoakes, linux-mm
Cc: hannes, riel, shakeel.butt, kas, baohua, dev.jain, baolin.wang,
npache, Liam.Howlett, ryan.roberts, vbabka, lance.yang,
linux-kernel, kernel-team, Usama Arif
For page table management, PUD THPs need to pre-deposit page tables
that will be used when the huge page is later split. When a PUD THP
is allocated, we cannot know in advance when or why it might need to
be split (COW, partial unmap, reclaim), but we need page tables ready
for that eventuality. Similar to how PMD THPs deposit a single PTE
table, PUD THPs deposit a PMD table which itself contains deposited
PTE tables - a two-level deposit. This commit adds the deposit/withdraw
infrastructure and a new pud_huge_pmd field in ptdesc to store the
deposited PMD.
The deposited PMD tables are stored as a singly-linked stack using only
page->lru.next as the link pointer. A doubly-linked list using the
standard list_head mechanism would cause memory corruption: list_del()
poisons both lru.next (offset 8) and lru.prev (offset 16), but lru.prev
overlaps with ptdesc->pmd_huge_pte at offset 16. Since deposited PMD
tables have their own deposited PTE tables stored in pmd_huge_pte,
poisoning lru.prev would corrupt the PTE table list and cause crashes
when withdrawing PTE tables during split. PMD THPs don't have this
problem because their deposited PTE tables don't have sub-deposits.
Using only lru.next avoids the overlap entirely.
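To visualize the overlap (offsets as described above, for the 64-bit
struct page / struct ptdesc layout):

	/*
	 *   struct page          struct ptdesc (overlays struct page)
	 *   0:  flags            0:  __page_flags
	 *   8:  lru.next   <-->  8:  _pt_pad_1 / pt_list.next
	 *   16: lru.prev   <-->  16: pmd_huge_pte / pt_list.prev
	 *
	 * list_del() would write LIST_POISON2 into lru.prev, i.e. into the
	 * deposited PMD table's pmd_huge_pte, so only lru.next is used as the
	 * single link pointer for the PUD-level deposit stack.
	 */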
For reverse mapping, PUD THPs need the same rmap support that PMD THPs
have. The page_vma_mapped_walk() function is extended to recognize and
handle PUD-mapped folios during rmap traversal. A new TTU_SPLIT_HUGE_PUD
flag tells the unmap path to split PUD THPs before proceeding, since
there is no PUD-level migration entry format - the split converts the
single PUD mapping into individual PTE mappings that can be migrated
or swapped normally.
Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
include/linux/huge_mm.h | 5 +++
include/linux/mm.h | 19 ++++++++
include/linux/mm_types.h | 5 ++-
include/linux/pgtable.h | 8 ++++
include/linux/rmap.h | 7 ++-
mm/huge_memory.c | 8 ++++
mm/internal.h | 3 ++
mm/page_vma_mapped.c | 35 +++++++++++++++
mm/pgtable-generic.c | 83 ++++++++++++++++++++++++++++++++++
mm/rmap.c | 96 +++++++++++++++++++++++++++++++++++++---
10 files changed, 260 insertions(+), 9 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index a4d9f964dfdea..e672e45bb9cc7 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -463,10 +463,15 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
unsigned long address);
#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+void split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
+ unsigned long address);
int change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
pud_t *pudp, unsigned long addr, pgprot_t newprot,
unsigned long cp_flags);
#else
+static inline void
+split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
+ unsigned long address) {}
static inline int
change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
pud_t *pudp, unsigned long addr, pgprot_t newprot,
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ab2e7e30aef96..a15e18df0f771 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3455,6 +3455,22 @@ static inline bool pagetable_pmd_ctor(struct mm_struct *mm,
* considered ready to switch to split PUD locks yet; there may be places
* which need to be converted from page_table_lock.
*/
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+static inline struct page *pud_pgtable_page(pud_t *pud)
+{
+ unsigned long mask = ~(PTRS_PER_PUD * sizeof(pud_t) - 1);
+
+ return virt_to_page((void *)((unsigned long)pud & mask));
+}
+
+static inline struct ptdesc *pud_ptdesc(pud_t *pud)
+{
+ return page_ptdesc(pud_pgtable_page(pud));
+}
+
+#define pud_huge_pmd(pud) (pud_ptdesc(pud)->pud_huge_pmd)
+#endif
+
static inline spinlock_t *pud_lockptr(struct mm_struct *mm, pud_t *pud)
{
return &mm->page_table_lock;
@@ -3471,6 +3487,9 @@ static inline spinlock_t *pud_lock(struct mm_struct *mm, pud_t *pud)
static inline void pagetable_pud_ctor(struct ptdesc *ptdesc)
{
__pagetable_ctor(ptdesc);
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+ ptdesc->pud_huge_pmd = NULL;
+#endif
}
static inline void pagetable_p4d_ctor(struct ptdesc *ptdesc)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 78950eb8926dc..26a38490ae2e1 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -577,7 +577,10 @@ struct ptdesc {
struct list_head pt_list;
struct {
unsigned long _pt_pad_1;
- pgtable_t pmd_huge_pte;
+ union {
+ pgtable_t pmd_huge_pte; /* For PMD tables: deposited PTE */
+ pgtable_t pud_huge_pmd; /* For PUD tables: deposited PMD list */
+ };
};
};
unsigned long __page_mapping;
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 2f0dd3a4ace1a..3ce733c1d71a2 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1168,6 +1168,14 @@ extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
#define arch_needs_pgtable_deposit() (false)
#endif
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+extern void pgtable_trans_huge_pud_deposit(struct mm_struct *mm, pud_t *pudp,
+ pmd_t *pmd_table);
+extern pmd_t *pgtable_trans_huge_pud_withdraw(struct mm_struct *mm, pud_t *pudp);
+extern void pud_deposit_pte(pmd_t *pmd_table, pgtable_t pgtable);
+extern pgtable_t pud_withdraw_pte(pmd_t *pmd_table);
+#endif
+
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
/*
* This is an implementation of pmdp_establish() that is only suitable for an
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index daa92a58585d9..08cd0a0eb8763 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -101,6 +101,7 @@ enum ttu_flags {
* do a final flush if necessary */
TTU_RMAP_LOCKED = 0x80, /* do not grab rmap lock:
* caller holds it */
+ TTU_SPLIT_HUGE_PUD = 0x100, /* split huge PUD if any */
};
#ifdef CONFIG_MMU
@@ -473,6 +474,8 @@ void folio_add_anon_rmap_ptes(struct folio *, struct page *, int nr_pages,
folio_add_anon_rmap_ptes(folio, page, 1, vma, address, flags)
void folio_add_anon_rmap_pmd(struct folio *, struct page *,
struct vm_area_struct *, unsigned long address, rmap_t flags);
+void folio_add_anon_rmap_pud(struct folio *, struct page *,
+ struct vm_area_struct *, unsigned long address, rmap_t flags);
void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
unsigned long address, rmap_t flags);
void folio_add_file_rmap_ptes(struct folio *, struct page *, int nr_pages,
@@ -933,6 +936,7 @@ struct page_vma_mapped_walk {
pgoff_t pgoff;
struct vm_area_struct *vma;
unsigned long address;
+ pud_t *pud;
pmd_t *pmd;
pte_t *pte;
spinlock_t *ptl;
@@ -970,7 +974,7 @@ static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
static inline void
page_vma_mapped_walk_restart(struct page_vma_mapped_walk *pvmw)
{
- WARN_ON_ONCE(!pvmw->pmd && !pvmw->pte);
+ WARN_ON_ONCE(!pvmw->pud && !pvmw->pmd && !pvmw->pte);
if (likely(pvmw->ptl))
spin_unlock(pvmw->ptl);
@@ -978,6 +982,7 @@ page_vma_mapped_walk_restart(struct page_vma_mapped_walk *pvmw)
WARN_ON_ONCE(1);
pvmw->ptl = NULL;
+ pvmw->pud = NULL;
pvmw->pmd = NULL;
pvmw->pte = NULL;
}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 40cf59301c21a..3128b3beedb0a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2933,6 +2933,14 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
spin_unlock(ptl);
mmu_notifier_invalidate_range_end(&range);
}
+
+void split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
+ unsigned long address)
+{
+ VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PUD_SIZE));
+ if (pud_trans_huge(*pud))
+ __split_huge_pud_locked(vma, pud, address);
+}
#else
void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
unsigned long address)
diff --git a/mm/internal.h b/mm/internal.h
index 9ee336aa03656..21d5c00f638dc 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -545,6 +545,9 @@ int user_proactive_reclaim(char *buf,
* in mm/rmap.c:
*/
pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address);
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+pud_t *mm_find_pud(struct mm_struct *mm, unsigned long address);
+#endif
/*
* in mm/page_alloc.c
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index b38a1d00c971b..d31eafba38041 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -146,6 +146,18 @@ static bool check_pmd(unsigned long pfn, struct page_vma_mapped_walk *pvmw)
return true;
}
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+/* Returns true if the two ranges overlap. Careful to not overflow. */
+static bool check_pud(unsigned long pfn, struct page_vma_mapped_walk *pvmw)
+{
+ if ((pfn + HPAGE_PUD_NR - 1) < pvmw->pfn)
+ return false;
+ if (pfn > pvmw->pfn + pvmw->nr_pages - 1)
+ return false;
+ return true;
+}
+#endif
+
static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
{
pvmw->address = (pvmw->address + size) & ~(size - 1);
@@ -188,6 +200,10 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
pud_t *pud;
pmd_t pmde;
+ /* The only possible pud mapping has been handled on last iteration */
+ if (pvmw->pud && !pvmw->pmd)
+ return not_found(pvmw);
+
/* The only possible pmd mapping has been handled on last iteration */
if (pvmw->pmd && !pvmw->pte)
return not_found(pvmw);
@@ -234,6 +250,25 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
continue;
}
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+ /* Check for PUD-mapped THP */
+ if (pud_trans_huge(*pud)) {
+ pvmw->pud = pud;
+ pvmw->ptl = pud_lock(mm, pud);
+ if (likely(pud_trans_huge(*pud))) {
+ if (pvmw->flags & PVMW_MIGRATION)
+ return not_found(pvmw);
+ if (!check_pud(pud_pfn(*pud), pvmw))
+ return not_found(pvmw);
+ return true;
+ }
+ /* PUD was split under us, retry at PMD level */
+ spin_unlock(pvmw->ptl);
+ pvmw->ptl = NULL;
+ pvmw->pud = NULL;
+ }
+#endif
+
pvmw->pmd = pmd_offset(pud, pvmw->address);
/*
* Make sure the pmd value isn't cached in a register by the
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index d3aec7a9926ad..2047558ddcd79 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -195,6 +195,89 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
}
#endif
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+/*
+ * Deposit page tables for PUD THP.
+ * Called with PUD lock held. Stores PMD tables in a singly-linked stack
+ * via pud_huge_pmd, using only pmd_page->lru.next as the link pointer.
+ *
+ * IMPORTANT: We use only lru.next (offset 8) for linking, NOT the full
+ * list_head. This is because lru.prev (offset 16) overlaps with
+ * ptdesc->pmd_huge_pte, which stores the PMD table's deposited PTE tables.
+ * Using list_del() would corrupt pmd_huge_pte with LIST_POISON2.
+ *
+ * PTE tables should be deposited into the PMD using pud_deposit_pte().
+ */
+void pgtable_trans_huge_pud_deposit(struct mm_struct *mm, pud_t *pudp,
+ pmd_t *pmd_table)
+{
+ pgtable_t pmd_page = virt_to_page(pmd_table);
+
+ assert_spin_locked(pud_lockptr(mm, pudp));
+
+ /* Push onto stack using only lru.next as the link */
+ pmd_page->lru.next = (struct list_head *)pud_huge_pmd(pudp);
+ pud_huge_pmd(pudp) = pmd_page;
+}
+
+/*
+ * Withdraw the deposited PMD table for PUD THP split or zap.
+ * Called with PUD lock held.
+ * Returns NULL if no more PMD tables are deposited.
+ */
+pmd_t *pgtable_trans_huge_pud_withdraw(struct mm_struct *mm, pud_t *pudp)
+{
+ pgtable_t pmd_page;
+
+ assert_spin_locked(pud_lockptr(mm, pudp));
+
+ pmd_page = pud_huge_pmd(pudp);
+ if (!pmd_page)
+ return NULL;
+
+ /* Pop from stack - lru.next points to next PMD page (or NULL) */
+ pud_huge_pmd(pudp) = (pgtable_t)pmd_page->lru.next;
+
+ return page_address(pmd_page);
+}
+
+/*
+ * Deposit a PTE table into a standalone PMD table (not yet in page table hierarchy).
+ * Used for PUD THP pre-deposit. The PMD table's pmd_huge_pte stores a linked list.
+ * No lock assertion since the PMD isn't visible yet.
+ */
+void pud_deposit_pte(pmd_t *pmd_table, pgtable_t pgtable)
+{
+ struct ptdesc *ptdesc = virt_to_ptdesc(pmd_table);
+
+ /* FIFO - add to front of list */
+ if (!ptdesc->pmd_huge_pte)
+ INIT_LIST_HEAD(&pgtable->lru);
+ else
+ list_add(&pgtable->lru, &ptdesc->pmd_huge_pte->lru);
+ ptdesc->pmd_huge_pte = pgtable;
+}
+
+/*
+ * Withdraw a PTE table from a standalone PMD table.
+ * Returns NULL if no more PTE tables are deposited.
+ */
+pgtable_t pud_withdraw_pte(pmd_t *pmd_table)
+{
+ struct ptdesc *ptdesc = virt_to_ptdesc(pmd_table);
+ pgtable_t pgtable;
+
+ pgtable = ptdesc->pmd_huge_pte;
+ if (!pgtable)
+ return NULL;
+ ptdesc->pmd_huge_pte = list_first_entry_or_null(&pgtable->lru,
+ struct page, lru);
+ if (ptdesc->pmd_huge_pte)
+ list_del(&pgtable->lru);
+ return pgtable;
+}
+#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+
#ifndef __HAVE_ARCH_PMDP_INVALIDATE
pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
pmd_t *pmdp)
diff --git a/mm/rmap.c b/mm/rmap.c
index 7b9879ef442d9..69acabd763da4 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -811,6 +811,32 @@ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address)
return pmd;
}
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+/*
+ * Returns the actual pud_t* where we expect 'address' to be mapped from, or
+ * NULL if it doesn't exist. No guarantees / checks on what the pud_t*
+ * represents.
+ */
+pud_t *mm_find_pud(struct mm_struct *mm, unsigned long address)
+{
+ pgd_t *pgd;
+ p4d_t *p4d;
+ pud_t *pud = NULL;
+
+ pgd = pgd_offset(mm, address);
+ if (!pgd_present(*pgd))
+ goto out;
+
+ p4d = p4d_offset(pgd, address);
+ if (!p4d_present(*p4d))
+ goto out;
+
+ pud = pud_offset(p4d, address);
+out:
+ return pud;
+}
+#endif
+
struct folio_referenced_arg {
int mapcount;
int referenced;
@@ -1415,11 +1441,7 @@ static __always_inline void __folio_add_anon_rmap(struct folio *folio,
SetPageAnonExclusive(page);
break;
case PGTABLE_LEVEL_PUD:
- /*
- * Keep the compiler happy, we don't support anonymous
- * PUD mappings.
- */
- WARN_ON_ONCE(1);
+ SetPageAnonExclusive(page);
break;
default:
BUILD_BUG();
@@ -1503,6 +1525,31 @@ void folio_add_anon_rmap_pmd(struct folio *folio, struct page *page,
#endif
}
+/**
+ * folio_add_anon_rmap_pud - add a PUD mapping to a page range of an anon folio
+ * @folio: The folio to add the mapping to
+ * @page: The first page to add
+ * @vma: The vm area in which the mapping is added
+ * @address: The user virtual address of the first page to map
+ * @flags: The rmap flags
+ *
+ * The page range of folio is defined by [first_page, first_page + HPAGE_PUD_NR)
+ *
+ * The caller needs to hold the page table lock, and the page must be locked in
+ * the anon_vma case: to serialize mapping,index checking after setting.
+ */
+void folio_add_anon_rmap_pud(struct folio *folio, struct page *page,
+ struct vm_area_struct *vma, unsigned long address, rmap_t flags)
+{
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && \
+ defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
+ __folio_add_anon_rmap(folio, page, HPAGE_PUD_NR, vma, address, flags,
+ PGTABLE_LEVEL_PUD);
+#else
+ WARN_ON_ONCE(true);
+#endif
+}
+
/**
* folio_add_new_anon_rmap - Add mapping to a new anonymous folio.
* @folio: The folio to add the mapping to.
@@ -1934,6 +1981,20 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
}
if (!pvmw.pte) {
+ /*
+ * Check for PUD-mapped THP first.
+ * If we have a PUD mapping and TTU_SPLIT_HUGE_PUD is set,
+ * split the PUD to PMD level and restart the walk.
+ */
+ if (pvmw.pud && pud_trans_huge(*pvmw.pud)) {
+ if (flags & TTU_SPLIT_HUGE_PUD) {
+ split_huge_pud_locked(vma, pvmw.pud, pvmw.address);
+ flags &= ~TTU_SPLIT_HUGE_PUD;
+ page_vma_mapped_walk_restart(&pvmw);
+ continue;
+ }
+ }
+
if (folio_test_anon(folio) && !folio_test_swapbacked(folio)) {
if (unmap_huge_pmd_locked(vma, pvmw.address, pvmw.pmd, folio))
goto walk_done;
@@ -2325,6 +2386,27 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
mmu_notifier_invalidate_range_start(&range);
while (page_vma_mapped_walk(&pvmw)) {
+ /* Handle PUD-mapped THP first */
+ if (!pvmw.pte && !pvmw.pmd) {
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+ /*
+ * PUD-mapped THP: skip migration to preserve the huge
+ * page. Splitting would defeat the purpose of PUD THPs.
+ * Return false to indicate migration failure, which
+ * will cause alloc_contig_range() to try a different
+ * memory region.
+ */
+ if (pvmw.pud && pud_trans_huge(*pvmw.pud)) {
+ page_vma_mapped_walk_done(&pvmw);
+ ret = false;
+ break;
+ }
+#endif
+ /* Unexpected state: !pte && !pmd but not a PUD THP */
+ page_vma_mapped_walk_done(&pvmw);
+ break;
+ }
+
/* PMD-mapped THP migration entry */
if (!pvmw.pte) {
__maybe_unused unsigned long pfn;
@@ -2607,10 +2689,10 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
/*
* Migration always ignores mlock and only supports TTU_RMAP_LOCKED and
- * TTU_SPLIT_HUGE_PMD, TTU_SYNC, and TTU_BATCH_FLUSH flags.
+ * TTU_SPLIT_HUGE_PMD, TTU_SPLIT_HUGE_PUD, TTU_SYNC, and TTU_BATCH_FLUSH flags.
*/
if (WARN_ON_ONCE(flags & ~(TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
- TTU_SYNC | TTU_BATCH_FLUSH)))
+ TTU_SPLIT_HUGE_PUD | TTU_SYNC | TTU_BATCH_FLUSH)))
return;
if (folio_is_zone_device(folio) &&
--
2.47.3
* [RFC 02/12] mm/thp: add mTHP stats infrastructure for PUD THP
2026-02-02 0:50 [RFC 00/12] mm: PUD (1GB) THP implementation Usama Arif
2026-02-02 0:50 ` [RFC 01/12] mm: add PUD THP ptdesc and rmap support Usama Arif
@ 2026-02-02 0:50 ` Usama Arif
2026-02-02 11:56 ` Lorenzo Stoakes
2026-02-02 0:50 ` [RFC 03/12] mm: thp: add PUD THP allocation and fault handling Usama Arif
` (13 subsequent siblings)
15 siblings, 1 reply; 49+ messages in thread
From: Usama Arif @ 2026-02-02 0:50 UTC (permalink / raw)
To: ziy, Andrew Morton, David Hildenbrand, lorenzo.stoakes, linux-mm
Cc: hannes, riel, shakeel.butt, kas, baohua, dev.jain, baolin.wang,
npache, Liam.Howlett, ryan.roberts, vbabka, lance.yang,
linux-kernel, kernel-team, Usama Arif
Extend the mTHP (multi-size THP) statistics infrastructure to support
PUD-sized transparent huge pages.
The mTHP framework tracks statistics for each supported THP size through
per-order counters exposed via sysfs. To add PUD THP support, PUD_ORDER
must be included in the set of tracked orders.
With this change, PUD THP events (allocations, faults, splits, swaps)
are tracked and exposed through the existing sysfs interface at
/sys/kernel/mm/transparent_hugepage/hugepages-1048576kB/stats/. This
provides visibility into PUD THP behavior for debugging and performance
analysis.
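As a concrete example of the index mapping (assuming x86-64 with 4K pages,
where PMD_ORDER = 9 and PUD_ORDER = 18):

	MTHP_STAT_COUNT     = PMD_ORDER + 2 = 11  /* was ilog2(MAX_PTRS_PER_PTE) + 1 = 10 */
	MTHP_STAT_PUD_INDEX = PMD_ORDER + 1 = 10
	mthp_stat_order_to_index(9)  == 9         /* PMD THP keeps its slot */
	mthp_stat_order_to_index(18) == 10        /* PUD THP uses the last slot */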
Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
include/linux/huge_mm.h | 42 +++++++++++++++++++++++++++++++++++++----
mm/huge_memory.c | 3 ++-
2 files changed, 40 insertions(+), 5 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index e672e45bb9cc7..5509ba8555b6e 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -76,7 +76,13 @@ extern struct kobj_attribute thpsize_shmem_enabled_attr;
* and including PMD_ORDER, except order-0 (which is not "huge") and order-1
* (which is a limitation of the THP implementation).
*/
-#define THP_ORDERS_ALL_ANON ((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1)))
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+#define THP_ORDERS_ALL_ANON_PUD BIT(PUD_ORDER)
+#else
+#define THP_ORDERS_ALL_ANON_PUD 0
+#endif
+#define THP_ORDERS_ALL_ANON (((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1))) | \
+ THP_ORDERS_ALL_ANON_PUD)
/*
* Mask of all large folio orders supported for file THP. Folios in a DAX
@@ -146,18 +152,46 @@ enum mthp_stat_item {
};
#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && defined(CONFIG_SYSFS)
+
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+#define MTHP_STAT_COUNT (PMD_ORDER + 2)
+#define MTHP_STAT_PUD_INDEX (PMD_ORDER + 1) /* PUD uses last index */
+#else
+#define MTHP_STAT_COUNT (PMD_ORDER + 1)
+#endif
+
struct mthp_stat {
- unsigned long stats[ilog2(MAX_PTRS_PER_PTE) + 1][__MTHP_STAT_COUNT];
+ unsigned long stats[MTHP_STAT_COUNT][__MTHP_STAT_COUNT];
};
DECLARE_PER_CPU(struct mthp_stat, mthp_stats);
+static inline int mthp_stat_order_to_index(int order)
+{
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+ if (order == PUD_ORDER)
+ return MTHP_STAT_PUD_INDEX;
+#endif
+ return order;
+}
+
static inline void mod_mthp_stat(int order, enum mthp_stat_item item, int delta)
{
- if (order <= 0 || order > PMD_ORDER)
+ int index;
+
+ if (order <= 0)
+ return;
+
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+ if (order != PUD_ORDER && order > PMD_ORDER)
return;
+#else
+ if (order > PMD_ORDER)
+ return;
+#endif
- this_cpu_add(mthp_stats.stats[order][item], delta);
+ index = mthp_stat_order_to_index(order);
+ this_cpu_add(mthp_stats.stats[index][item], delta);
}
static inline void count_mthp_stat(int order, enum mthp_stat_item item)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3128b3beedb0a..d033624d7e1f2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -598,11 +598,12 @@ static unsigned long sum_mthp_stat(int order, enum mthp_stat_item item)
{
unsigned long sum = 0;
int cpu;
+ int index = mthp_stat_order_to_index(order);
for_each_possible_cpu(cpu) {
struct mthp_stat *this = &per_cpu(mthp_stats, cpu);
- sum += this->stats[order][item];
+ sum += this->stats[index][item];
}
return sum;
--
2.47.3
* [RFC 03/12] mm: thp: add PUD THP allocation and fault handling
2026-02-02 0:50 [RFC 00/12] mm: PUD (1GB) THP implementation Usama Arif
2026-02-02 0:50 ` [RFC 01/12] mm: add PUD THP ptdesc and rmap support Usama Arif
2026-02-02 0:50 ` [RFC 02/12] mm/thp: add mTHP stats infrastructure for PUD THP Usama Arif
@ 2026-02-02 0:50 ` Usama Arif
2026-02-02 0:50 ` [RFC 04/12] mm: thp: implement PUD THP split to PTE level Usama Arif
` (12 subsequent siblings)
15 siblings, 0 replies; 49+ messages in thread
From: Usama Arif @ 2026-02-02 0:50 UTC (permalink / raw)
To: ziy, Andrew Morton, David Hildenbrand, lorenzo.stoakes, linux-mm
Cc: hannes, riel, shakeel.butt, kas, baohua, dev.jain, baolin.wang,
npache, Liam.Howlett, ryan.roberts, vbabka, lance.yang,
linux-kernel, kernel-team, Usama Arif
Add the page fault handling path for anonymous PUD THPs, following the
same design as the existing PMD THP fault handlers.
When a process accesses memory in an anonymous VMA that is PUD-aligned
and large enough, the fault handler checks if PUD THP is enabled and
attempts to allocate a 1GB folio. The allocation uses folio_alloc_gigantic.
If allocation succeeds, the folio is mapped at the faulting PUD entry.
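A minimal userspace sketch of how such a fault is typically triggered
(assumptions: x86-64 with 4K pages so PUD size is 1G, and PUD THP enabled
for the mapping; this is only an illustration, not the selftest added later
in the series):

	#include <stdint.h>
	#include <string.h>
	#include <sys/mman.h>

	#define PUD_SIZE (1UL << 30)	/* 1G, assuming 4K base pages */

	int main(void)
	{
		/* Over-map so a PUD-aligned 1G window can be carved out. */
		size_t len = 2 * PUD_SIZE;
		char *raw = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
		if (raw == MAP_FAILED)
			return 1;

		char *huge = (char *)(((uintptr_t)raw + PUD_SIZE - 1) & ~(PUD_SIZE - 1));
		madvise(huge, PUD_SIZE, MADV_HUGEPAGE);

		/* The first write faults in the whole 1G PUD THP (or falls back). */
		memset(huge, 0x5a, PUD_SIZE);

		munmap(raw, len);
		return 0;
	}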
Before installing the PUD mapping, page tables are pre-deposited for
future use. A PUD THP will eventually need to be split - whether due
to copy-on-write after fork, partial munmap, mprotect on a subregion,
or memory reclaim. At split time, we need 512 PTE tables (one for each
PMD entry) plus the PMD table itself. Allocating 513 page tables during
split could fail, leaving the system unable to proceed. By depositing
them at fault time when memory pressure is typically lower, we guarantee
the split will always succeed.
The write-protect fault handler triggers when a process tries to write
to a PUD THP that is mapped read-only (typically after fork). Rather
than implementing PUD-level COW which would require copying 1GB of data,
the handler splits the PUD to PTE level and retries the fault. The
retry then handles COW at PTE level, copying only the single 4KB page
being written.
Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
include/linux/huge_mm.h | 2 +
mm/huge_memory.c | 260 ++++++++++++++++++++++++++++++++++++++--
mm/memory.c | 8 +-
3 files changed, 258 insertions(+), 12 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 5509ba8555b6e..a292035c0270f 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -8,6 +8,7 @@
#include <linux/kobject.h>
vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf);
+vm_fault_t do_huge_pud_anonymous_page(struct vm_fault *vmf);
int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
@@ -25,6 +26,7 @@ static inline void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud)
#endif
vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf);
+vm_fault_t do_huge_pud_wp_page(struct vm_fault *vmf);
bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
pmd_t *pmd, unsigned long addr, unsigned long next);
int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, pmd_t *pmd,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d033624d7e1f2..7613caf1e7c30 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1294,6 +1294,70 @@ static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
return folio;
}
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+static struct folio *vma_alloc_anon_folio_pud(struct vm_area_struct *vma,
+ unsigned long addr)
+{
+ gfp_t gfp = vma_thp_gfp_mask(vma);
+ const int order = HPAGE_PUD_ORDER;
+ struct folio *folio = NULL;
+ /*
+ * Contiguous allocation via alloc_contig_range() migrates existing
+ * pages out of the target range. __GFP_NOMEMALLOC would allow using
+ * memory reserves for migration destination pages, but THP is an
+ * optional performance optimization and should not deplete reserves
+ * that may be needed for critical allocations. Remove it.
+ * alloc_contig_range_noprof (__alloc_contig_verify_gfp_mask) will
+ * cause this to fail without it.
+ */
+ gfp_t contig_gfp = gfp & ~__GFP_NOMEMALLOC;
+
+ folio = folio_alloc_gigantic(order, contig_gfp, numa_node_id(), NULL);
+
+ if (unlikely(!folio)) {
+ count_vm_event(THP_FAULT_FALLBACK);
+ count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
+ return NULL;
+ }
+
+ VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
+ if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
+ folio_put(folio);
+ count_vm_event(THP_FAULT_FALLBACK);
+ count_vm_event(THP_FAULT_FALLBACK_CHARGE);
+ count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
+ count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
+ return NULL;
+ }
+ folio_throttle_swaprate(folio, gfp);
+
+ /*
+ * When a folio is not zeroed during allocation (__GFP_ZERO not used)
+ * or user folios require special handling, folio_zero_user() is used to
+ * make sure that the page corresponding to the faulting address will be
+ * hot in the cache after zeroing.
+ */
+ if (user_alloc_needs_zeroing())
+ folio_zero_user(folio, addr);
+ /*
+ * The memory barrier inside __folio_mark_uptodate makes sure that
+ * folio_zero_user writes become visible before the set_pud_at()
+ * write.
+ */
+ __folio_mark_uptodate(folio);
+
+ /*
+ * Set the large_rmappable flag so that the folio can be properly
+ * removed from the deferred_split list when freed.
+ * folio_alloc_gigantic() doesn't set this flag (unlike __folio_alloc),
+ * so we must set it explicitly.
+ */
+ folio_set_large_rmappable(folio);
+
+ return folio;
+}
+#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+
void map_anon_folio_pmd_nopf(struct folio *folio, pmd_t *pmd,
struct vm_area_struct *vma, unsigned long haddr)
{
@@ -1318,6 +1382,40 @@ static void map_anon_folio_pmd_pf(struct folio *folio, pmd_t *pmd,
count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
}
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+static pud_t maybe_pud_mkwrite(pud_t pud, struct vm_area_struct *vma)
+{
+ if (likely(vma->vm_flags & VM_WRITE))
+ pud = pud_mkwrite(pud);
+ return pud;
+}
+
+static void map_anon_folio_pud_nopf(struct folio *folio, pud_t *pud,
+ struct vm_area_struct *vma, unsigned long haddr)
+{
+ pud_t entry;
+
+ entry = folio_mk_pud(folio, vma->vm_page_prot);
+ entry = maybe_pud_mkwrite(pud_mkdirty(entry), vma);
+ folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
+ folio_add_lru_vma(folio, vma);
+ set_pud_at(vma->vm_mm, haddr, pud, entry);
+ update_mmu_cache_pud(vma, haddr, pud);
+ deferred_split_folio(folio, false);
+}
+
+
+static void map_anon_folio_pud_pf(struct folio *folio, pud_t *pud,
+ struct vm_area_struct *vma, unsigned long haddr)
+{
+ map_anon_folio_pud_nopf(folio, pud, vma, haddr);
+ add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PUD_NR);
+ count_vm_event(THP_FAULT_ALLOC);
+ count_mthp_stat(HPAGE_PUD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
+ count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
+}
+#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+
static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
{
unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
@@ -1513,6 +1611,161 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
return __do_huge_pmd_anonymous_page(vmf);
}
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+/* Number of PTE tables needed for PUD THP split: 512 */
+#define NR_PTE_TABLES_FOR_PUD (HPAGE_PUD_NR / HPAGE_PMD_NR)
+
+/*
+ * Allocate page tables for PUD THP pre-deposit.
+ */
+static bool alloc_pud_predeposit_ptables(struct mm_struct *mm,
+ unsigned long haddr,
+ pmd_t **pmd_table_out,
+ int *nr_pte_deposited)
+{
+ pmd_t *pmd_table;
+ pgtable_t pte_table;
+ struct ptdesc *pmd_ptdesc;
+ int i;
+
+ *pmd_table_out = NULL;
+ *nr_pte_deposited = 0;
+
+ pmd_table = pmd_alloc_one(mm, haddr);
+ if (!pmd_table)
+ return false;
+
+ /* Initialize the pmd_huge_pte field for PTE table storage */
+ pmd_ptdesc = virt_to_ptdesc(pmd_table);
+ pmd_ptdesc->pmd_huge_pte = NULL;
+
+ /* Allocate and deposit 512 PTE tables into the PMD table */
+ for (i = 0; i < NR_PTE_TABLES_FOR_PUD; i++) {
+ pte_table = pte_alloc_one(mm);
+ if (!pte_table)
+ goto fail;
+ pud_deposit_pte(pmd_table, pte_table);
+ (*nr_pte_deposited)++;
+ }
+
+ *pmd_table_out = pmd_table;
+ return true;
+
+fail:
+ /* Free any PTE tables we deposited */
+ while ((pte_table = pud_withdraw_pte(pmd_table)) != NULL)
+ pte_free(mm, pte_table);
+ pmd_free(mm, pmd_table);
+ return false;
+}
+
+/*
+ * Free pre-allocated page tables if the PUD THP fault fails.
+ */
+static void free_pud_predeposit_ptables(struct mm_struct *mm,
+ pmd_t *pmd_table)
+{
+ pgtable_t pte_table;
+
+ if (!pmd_table)
+ return;
+
+ while ((pte_table = pud_withdraw_pte(pmd_table)) != NULL)
+ pte_free(mm, pte_table);
+ pmd_free(mm, pmd_table);
+}
+
+vm_fault_t do_huge_pud_anonymous_page(struct vm_fault *vmf)
+{
+ struct vm_area_struct *vma = vmf->vma;
+ unsigned long haddr = vmf->address & HPAGE_PUD_MASK;
+ struct folio *folio;
+ pmd_t *pmd_table = NULL;
+ int nr_pte_deposited = 0;
+ vm_fault_t ret = 0;
+ int i;
+
+ /* Check VMA bounds and alignment */
+ if (!thp_vma_suitable_order(vma, haddr, PUD_ORDER))
+ return VM_FAULT_FALLBACK;
+
+ ret = vmf_anon_prepare(vmf);
+ if (ret)
+ return ret;
+
+ folio = vma_alloc_anon_folio_pud(vma, vmf->address);
+ if (unlikely(!folio))
+ return VM_FAULT_FALLBACK;
+
+ /*
+ * Pre-allocate page tables for future PUD split.
+ * We need 1 PMD table and 512 PTE tables.
+ */
+ if (!alloc_pud_predeposit_ptables(vma->vm_mm, haddr,
+ &pmd_table, &nr_pte_deposited)) {
+ folio_put(folio);
+ return VM_FAULT_FALLBACK;
+ }
+
+ vmf->ptl = pud_lock(vma->vm_mm, vmf->pud);
+ if (unlikely(!pud_none(*vmf->pud)))
+ goto release;
+
+ ret = check_stable_address_space(vma->vm_mm);
+ if (ret)
+ goto release;
+
+ /* Deliver the page fault to userland */
+ if (userfaultfd_missing(vma)) {
+ spin_unlock(vmf->ptl);
+ folio_put(folio);
+ free_pud_predeposit_ptables(vma->vm_mm, pmd_table);
+ ret = handle_userfault(vmf, VM_UFFD_MISSING);
+ VM_BUG_ON(ret & VM_FAULT_FALLBACK);
+ return ret;
+ }
+
+ /* Deposit page tables for future PUD split */
+ pgtable_trans_huge_pud_deposit(vma->vm_mm, vmf->pud, pmd_table);
+ map_anon_folio_pud_pf(folio, vmf->pud, vma, haddr);
+ mm_inc_nr_pmds(vma->vm_mm);
+ for (i = 0; i < nr_pte_deposited; i++)
+ mm_inc_nr_ptes(vma->vm_mm);
+ spin_unlock(vmf->ptl);
+
+ return 0;
+release:
+ spin_unlock(vmf->ptl);
+ folio_put(folio);
+ free_pud_predeposit_ptables(vma->vm_mm, pmd_table);
+ return ret;
+}
+#else
+vm_fault_t do_huge_pud_anonymous_page(struct vm_fault *vmf)
+{
+ return VM_FAULT_FALLBACK;
+}
+#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+vm_fault_t do_huge_pud_wp_page(struct vm_fault *vmf)
+{
+ struct vm_area_struct *vma = vmf->vma;
+
+ /*
+ * For now, split PUD to PTE level on write fault.
+ * This is the simplest approach for COW handling.
+ */
+ __split_huge_pud(vma, vmf->pud, vmf->address);
+ return VM_FAULT_FALLBACK;
+}
+#else
+vm_fault_t do_huge_pud_wp_page(struct vm_fault *vmf)
+{
+ return VM_FAULT_FALLBACK;
+}
+#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+
struct folio_or_pfn {
union {
struct folio *folio;
@@ -1646,13 +1899,6 @@ vm_fault_t vmf_insert_folio_pmd(struct vm_fault *vmf, struct folio *folio,
EXPORT_SYMBOL_GPL(vmf_insert_folio_pmd);
#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
-static pud_t maybe_pud_mkwrite(pud_t pud, struct vm_area_struct *vma)
-{
- if (likely(vma->vm_flags & VM_WRITE))
- pud = pud_mkwrite(pud);
- return pud;
-}
-
static vm_fault_t insert_pud(struct vm_area_struct *vma, unsigned long addr,
pud_t *pud, struct folio_or_pfn fop, pgprot_t prot, bool write)
{
diff --git a/mm/memory.c b/mm/memory.c
index 87cf4e1a6f866..e5f86c1d2aded 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6142,9 +6142,9 @@ static vm_fault_t create_huge_pud(struct vm_fault *vmf)
#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && \
defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
struct vm_area_struct *vma = vmf->vma;
- /* No support for anonymous transparent PUD pages yet */
+
if (vma_is_anonymous(vma))
- return VM_FAULT_FALLBACK;
+ return do_huge_pud_anonymous_page(vmf);
if (vma->vm_ops->huge_fault)
return vma->vm_ops->huge_fault(vmf, PUD_ORDER);
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
@@ -6158,9 +6158,8 @@ static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud)
struct vm_area_struct *vma = vmf->vma;
vm_fault_t ret;
- /* No support for anonymous transparent PUD pages yet */
if (vma_is_anonymous(vma))
- goto split;
+ return do_huge_pud_wp_page(vmf);
if (vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) {
if (vma->vm_ops->huge_fault) {
ret = vma->vm_ops->huge_fault(vmf, PUD_ORDER);
@@ -6168,7 +6167,6 @@ static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud)
return ret;
}
}
-split:
/* COW or write-notify not handled on PUD level: split pud.*/
__split_huge_pud(vma, vmf->pud, vmf->address);
#endif /* CONFIG_TRANSPARENT_HUGEPAGE && CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
--
2.47.3
* [RFC 04/12] mm: thp: implement PUD THP split to PTE level
2026-02-02 0:50 [RFC 00/12] mm: PUD (1GB) THP implementation Usama Arif
` (2 preceding siblings ...)
2026-02-02 0:50 ` [RFC 03/12] mm: thp: add PUD THP allocation and fault handling Usama Arif
@ 2026-02-02 0:50 ` Usama Arif
2026-02-02 0:50 ` [RFC 05/12] mm: thp: add reclaim and migration support for PUD THP Usama Arif
` (11 subsequent siblings)
15 siblings, 0 replies; 49+ messages in thread
From: Usama Arif @ 2026-02-02 0:50 UTC (permalink / raw)
To: ziy, Andrew Morton, David Hildenbrand, lorenzo.stoakes, linux-mm
Cc: hannes, riel, shakeel.butt, kas, baohua, dev.jain, baolin.wang,
npache, Liam.Howlett, ryan.roberts, vbabka, lance.yang,
linux-kernel, kernel-team, Usama Arif
Implement the split operation that converts a PUD THP mapping into
individual PTE mappings.
A PUD THP maps 1GB of memory with a single page table entry. When the
mapping needs to be broken - for COW, partial unmap, permission changes,
or reclaim - it must be split into smaller mappings. Unlike PMD THPs
which split into 512 PTEs in a single level, PUD THPs require a two-level
split: the single PUD entry becomes 512 PMD entries, each pointing to a
PTE table containing 512 PTEs, for a total of 262144 page table entries.
The split uses page tables that were pre-deposited when the PUD THP was
first allocated. This guarantees the split cannot fail due to memory
allocation failure, which is critical since splits often happen under
memory pressure during reclaim. The deposited PMD table is installed in
the PUD entry, and each PMD slot receives one of the 512 deposited PTE
tables.
Each PTE is populated to map one 4KB page of the original 1GB folio.
Page flags from the original PUD entry (dirty, accessed, writable,
soft-dirty) are propagated to each PTE so that no information is lost.
The rmap is updated to remove the single PUD-level mapping entry and
add 262144 PTE-level mapping entries.
The split goes directly to PTE level rather than stopping at PMD level.
This is because the kernel's rmap infrastructure assumes that PMD-level
mappings are for PMD-sized folios. If we mapped a PUD-sized folio at
PMD level (512 PMD entries for one folio), the rmap accounting would
break - it would see 512 "large" mappings for a folio that should have
far more. Going to PTE level avoids this problem entirely.
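Roughly, the page table shape before and after the split (x86-64, 4K pages):

	Before:  pud                  -> 1G folio (single huge entry)

	After:   pud -> PMD table (deposited at fault time)
	                  pmd[0]   -> PTE table #0   -> folio pages 0 .. 511
	                  pmd[1]   -> PTE table #1   -> folio pages 512 .. 1023
	                  ...
	                  pmd[511] -> PTE table #511 -> folio pages 261632 .. 262143

	512 PMD entries * 512 PTEs = 262144 PTE mappings of the same 1G folio.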
Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
mm/huge_memory.c | 181 ++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 173 insertions(+), 8 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 7613caf1e7c30..39b8212b5abd4 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3129,12 +3129,82 @@ int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
return 1;
}
+/*
+ * Structure to hold page tables for PUD split.
+ * Tables are withdrawn from the pre-deposit made at fault time.
+ */
+struct pud_split_ptables {
+ pmd_t *pmd_table;
+ pgtable_t *pte_tables; /* Array of 512 PTE tables */
+ int nr_pte_tables; /* Number of PTE tables in array */
+};
+
+/*
+ * Withdraw pre-deposited page tables from PUD THP.
+ * Tables are always deposited at fault time in do_huge_pud_anonymous_page().
+ * Returns true if successful, false if no tables deposited.
+ */
+static bool withdraw_pud_split_ptables(struct mm_struct *mm, pud_t *pud,
+ struct pud_split_ptables *tables)
+{
+ pmd_t *pmd_table;
+ pgtable_t pte_table;
+ int i;
+
+ tables->pmd_table = NULL;
+ tables->pte_tables = NULL;
+ tables->nr_pte_tables = 0;
+
+ /* Try to withdraw the deposited PMD table */
+ pmd_table = pgtable_trans_huge_pud_withdraw(mm, pud);
+ if (!pmd_table)
+ return false;
+
+ tables->pmd_table = pmd_table;
+
+ /* Allocate array to hold PTE table pointers */
+ tables->pte_tables = kmalloc_array(NR_PTE_TABLES_FOR_PUD,
+ sizeof(pgtable_t), GFP_ATOMIC);
+ if (!tables->pte_tables)
+ goto fail;
+
+ /* Withdraw PTE tables from the PMD table */
+ for (i = 0; i < NR_PTE_TABLES_FOR_PUD; i++) {
+ pte_table = pud_withdraw_pte(pmd_table);
+ if (!pte_table)
+ goto fail;
+ tables->pte_tables[i] = pte_table;
+ tables->nr_pte_tables++;
+ }
+
+ return true;
+
+fail:
+ /* Put back any tables we withdrew */
+ for (i = 0; i < tables->nr_pte_tables; i++)
+ pud_deposit_pte(pmd_table, tables->pte_tables[i]);
+ kfree(tables->pte_tables);
+ pgtable_trans_huge_pud_deposit(mm, pud, pmd_table);
+ tables->pmd_table = NULL;
+ tables->pte_tables = NULL;
+ tables->nr_pte_tables = 0;
+ return false;
+}
+
static void __split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
unsigned long haddr)
{
+ bool dirty = false, young = false, write = false;
+ struct pud_split_ptables tables = { 0 };
+ struct mm_struct *mm = vma->vm_mm;
+ rmap_t rmap_flags = RMAP_NONE;
+ bool anon_exclusive = false;
+ bool soft_dirty = false;
struct folio *folio;
+ unsigned long addr;
struct page *page;
pud_t old_pud;
+ int i, j;
VM_BUG_ON(haddr & ~HPAGE_PUD_MASK);
VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
@@ -3145,20 +3215,115 @@ static void __split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
old_pud = pudp_huge_clear_flush(vma, haddr, pud);
- if (!vma_is_dax(vma))
+ if (!vma_is_anonymous(vma)) {
+ if (!vma_is_dax(vma))
+ return;
+
+ page = pud_page(old_pud);
+ folio = page_folio(page);
+
+ if (!folio_test_dirty(folio) && pud_dirty(old_pud))
+ folio_mark_dirty(folio);
+ if (!folio_test_referenced(folio) && pud_young(old_pud))
+ folio_set_referenced(folio);
+ folio_remove_rmap_pud(folio, page, vma);
+ folio_put(folio);
+ add_mm_counter(mm, mm_counter_file(folio), -HPAGE_PUD_NR);
return;
+ }
+
+ /*
+ * Anonymous PUD split: split directly to PTE level.
+ *
+ * We cannot create PMD huge entries pointing to portions of a larger
+ * folio because the kernel's rmap infrastructure assumes PMD mappings
+ * are for PMD-sized folios only (see __folio_rmap_sanity_checks).
+ * Instead, we create a PMD table with 512 entries, each pointing to
+ * a PTE table with 512 PTEs.
+ *
+ * Tables are always deposited at fault time in do_huge_pud_anonymous_page().
+ */
+ if (!withdraw_pud_split_ptables(mm, pud, &tables)) {
+ WARN_ON_ONCE(1);
+ return;
+ }
page = pud_page(old_pud);
folio = page_folio(page);
- if (!folio_test_dirty(folio) && pud_dirty(old_pud))
- folio_mark_dirty(folio);
- if (!folio_test_referenced(folio) && pud_young(old_pud))
- folio_set_referenced(folio);
+ dirty = pud_dirty(old_pud);
+ write = pud_write(old_pud);
+ young = pud_young(old_pud);
+ soft_dirty = pud_soft_dirty(old_pud);
+ anon_exclusive = PageAnonExclusive(page);
+
+ if (dirty)
+ folio_set_dirty(folio);
+
+ /*
+ * Add references for each page that will have its own PTE.
+ * Original folio has 1 reference. After split, each of 262144 PTEs
+ * will eventually be unmapped, each calling folio_put().
+ */
+ folio_ref_add(folio, HPAGE_PUD_NR - 1);
+
+ /*
+ * Add PTE-level rmap for all pages at once.
+ */
+ if (anon_exclusive)
+ rmap_flags |= RMAP_EXCLUSIVE;
+ folio_add_anon_rmap_ptes(folio, page, HPAGE_PUD_NR,
+ vma, haddr, rmap_flags);
+
+ /* Remove PUD-level rmap */
folio_remove_rmap_pud(folio, page, vma);
- folio_put(folio);
- add_mm_counter(vma->vm_mm, mm_counter_file(folio),
- -HPAGE_PUD_NR);
+
+ /*
+ * Create 512 PMD entries, each pointing to a PTE table.
+ * Each PTE table has 512 PTEs pointing to individual pages.
+ */
+ addr = haddr;
+ for (i = 0; i < (HPAGE_PUD_NR / HPAGE_PMD_NR); i++) {
+ pmd_t *pmd_entry = tables.pmd_table + i;
+ pgtable_t pte_table = tables.pte_tables[i];
+ pte_t *pte;
+ struct page *subpage_base = page + i * HPAGE_PMD_NR;
+
+ /* Populate the PTE table */
+ pte = page_address(pte_table);
+ for (j = 0; j < HPAGE_PMD_NR; j++) {
+ struct page *subpage = subpage_base + j;
+ pte_t entry;
+
+ entry = mk_pte(subpage, vma->vm_page_prot);
+ if (write)
+ entry = pte_mkwrite(entry, vma);
+ if (dirty)
+ entry = pte_mkdirty(entry);
+ if (young)
+ entry = pte_mkyoung(entry);
+ if (soft_dirty)
+ entry = pte_mksoft_dirty(entry);
+
+ set_pte_at(mm, addr + j * PAGE_SIZE, pte + j, entry);
+ }
+
+ /* Set PMD to point to PTE table */
+ pmd_populate(mm, pmd_entry, pte_table);
+ addr += HPAGE_PMD_SIZE;
+ }
+
+ /*
+ * Memory barrier ensures all PMD entries are visible before
+ * installing the PMD table in the PUD.
+ */
+ smp_wmb();
+
+ /* Install the PMD table in the PUD */
+ pud_populate(mm, pud, tables.pmd_table);
+
+ /* Free the temporary array holding PTE table pointers */
+ kfree(tables.pte_tables);
}
void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
--
2.47.3
* [RFC 05/12] mm: thp: add reclaim and migration support for PUD THP
2026-02-02 0:50 [RFC 00/12] mm: PUD (1GB) THP implementation Usama Arif
` (3 preceding siblings ...)
2026-02-02 0:50 ` [RFC 04/12] mm: thp: implement PUD THP split to PTE level Usama Arif
@ 2026-02-02 0:50 ` Usama Arif
2026-02-02 0:50 ` [RFC 06/12] selftests/mm: add PUD THP basic allocation test Usama Arif
` (10 subsequent siblings)
15 siblings, 0 replies; 49+ messages in thread
From: Usama Arif @ 2026-02-02 0:50 UTC (permalink / raw)
To: ziy, Andrew Morton, David Hildenbrand, lorenzo.stoakes, linux-mm
Cc: hannes, riel, shakeel.butt, kas, baohua, dev.jain, baolin.wang,
npache, Liam.Howlett, ryan.roberts, vbabka, lance.yang,
linux-kernel, kernel-team, Usama Arif
Enable the memory reclaim and migration paths to handle PUD THPs
correctly by splitting them before proceeding.
Memory reclaim needs to unmap pages before they can be reclaimed. For
PUD THPs, the unmap path now passes TTU_SPLIT_HUGE_PUD when unmapping
PUD-sized folios. This triggers the PUD split during the unmap phase,
converting the single PUD mapping into 262144 PTE mappings. Reclaim
then proceeds normally with the individual pages. This follows the same
pattern used for PMD THPs with TTU_SPLIT_HUGE_PMD.
When migration encounters a PUD-sized folio, it now splits the folio
first using the standard folio split mechanism. The resulting smaller
folios (or individual pages) can then be migrated normally. This matches
how PMD THPs are handled when PMD migration is not supported on a given
architecture.
The split-before-migrate approach means PUD THPs will be broken up
during NUMA balancing or memory compaction. While this loses the TLB
benefit of the large mapping, it allows these memory management
operations to proceed. Future work could add PUD-level migration
entries to preserve the mapping through migration.
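For scale (illustrative numbers, not new code): splitting a single 1G PUD
THP on x86-64 with 4K pages leaves 262,144 individually reclaimable or
migratable base pages on the LRU, and every such split bumps the existing
thp_split_pud event (visible in /proc/vmstat), which is what the
thp_split_pud counts in the selftest output in the cover letter correspond
to.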
Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
include/linux/huge_mm.h | 11 ++++++
mm/huge_memory.c | 83 +++++++++++++++++++++++++++++++++++++----
mm/migrate.c | 17 +++++++++
mm/vmscan.c | 2 +
4 files changed, 105 insertions(+), 8 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index a292035c0270f..8b2bffda4b4f3 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -559,6 +559,17 @@ static inline bool folio_test_pmd_mappable(struct folio *folio)
return folio_order(folio) >= HPAGE_PMD_ORDER;
}
+/**
+ * folio_test_pud_mappable - Can we map this folio with a PUD?
+ * @folio: The folio to test
+ *
+ * Return: true - @folio can be PUD-mapped, false - @folio cannot be PUD-mapped.
+ */
+static inline bool folio_test_pud_mappable(struct folio *folio)
+{
+ return folio_order(folio) >= HPAGE_PUD_ORDER;
+}
+
vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);
vm_fault_t do_huge_pmd_device_private(struct vm_fault *vmf);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 39b8212b5abd4..87b2c21df4a49 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2228,9 +2228,17 @@ int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
goto out_unlock;
/*
- * TODO: once we support anonymous pages, use
- * folio_try_dup_anon_rmap_*() and split if duplicating fails.
+ * For anonymous pages, split to PTE level.
+ * This simplifies fork handling - we don't need to duplicate
+ * the complex anon rmap at PUD level.
*/
+ if (vma_is_anonymous(vma)) {
+ spin_unlock(src_ptl);
+ spin_unlock(dst_ptl);
+ __split_huge_pud(vma, src_pud, addr);
+ return -EAGAIN;
+ }
+
if (is_cow_mapping(vma->vm_flags) && pud_write(pud)) {
pudp_set_wrprotect(src_mm, addr, src_pud);
pud = pud_wrprotect(pud);
@@ -3099,11 +3107,29 @@ int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
{
spinlock_t *ptl;
pud_t orig_pud;
+ pmd_t *pmd_table;
+ pgtable_t pte_table;
+ int nr_pte_tables = 0;
ptl = __pud_trans_huge_lock(pud, vma);
if (!ptl)
return 0;
+ /*
+ * Withdraw any deposited page tables before clearing the PUD.
+ * These need to be freed and their counters decremented.
+ */
+ pmd_table = pgtable_trans_huge_pud_withdraw(tlb->mm, pud);
+ if (pmd_table) {
+ while ((pte_table = pud_withdraw_pte(pmd_table)) != NULL) {
+ pte_free(tlb->mm, pte_table);
+ mm_dec_nr_ptes(tlb->mm);
+ nr_pte_tables++;
+ }
+ pmd_free(tlb->mm, pmd_table);
+ mm_dec_nr_pmds(tlb->mm);
+ }
+
orig_pud = pudp_huge_get_and_clear_full(vma, addr, pud, tlb->fullmm);
arch_check_zapped_pud(vma, orig_pud);
tlb_remove_pud_tlb_entry(tlb, pud, addr);
@@ -3114,14 +3140,15 @@ int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
struct page *page = NULL;
struct folio *folio;
- /* No support for anonymous PUD pages or migration yet */
- VM_WARN_ON_ONCE(vma_is_anonymous(vma) ||
- !pud_present(orig_pud));
+ VM_WARN_ON_ONCE(!pud_present(orig_pud));
page = pud_page(orig_pud);
folio = page_folio(page);
folio_remove_rmap_pud(folio, page, vma);
- add_mm_counter(tlb->mm, mm_counter_file(folio), -HPAGE_PUD_NR);
+ if (vma_is_anonymous(vma))
+ add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PUD_NR);
+ else
+ add_mm_counter(tlb->mm, mm_counter_file(folio), -HPAGE_PUD_NR);
spin_unlock(ptl);
tlb_remove_page_size(tlb, page, HPAGE_PUD_SIZE);
@@ -3729,15 +3756,53 @@ static inline void split_huge_pmd_if_needed(struct vm_area_struct *vma, unsigned
split_huge_pmd_address(vma, address, false);
}
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+static void split_huge_pud_address(struct vm_area_struct *vma, unsigned long address)
+{
+ pud_t *pud = mm_find_pud(vma->vm_mm, address);
+
+ if (!pud)
+ return;
+
+ __split_huge_pud(vma, pud, address);
+}
+
+static inline void split_huge_pud_if_needed(struct vm_area_struct *vma, unsigned long address)
+{
+ /*
+ * If the new address isn't PUD-aligned and it could previously
+ * contain a PUD huge page: check if we need to split it.
+ */
+ if (!IS_ALIGNED(address, HPAGE_PUD_SIZE) &&
+ range_in_vma(vma, ALIGN_DOWN(address, HPAGE_PUD_SIZE),
+ ALIGN(address, HPAGE_PUD_SIZE)))
+ split_huge_pud_address(vma, address);
+}
+#else
+static inline void split_huge_pud_if_needed(struct vm_area_struct *vma, unsigned long address)
+{
+}
+#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+
void vma_adjust_trans_huge(struct vm_area_struct *vma,
unsigned long start,
unsigned long end,
struct vm_area_struct *next)
{
- /* Check if we need to split start first. */
+ /* Check if we need to split PUD THP at start first. */
+ split_huge_pud_if_needed(vma, start);
+
+ /* Check if we need to split PUD THP at end. */
+ split_huge_pud_if_needed(vma, end);
+
+ /* If we're incrementing next->vm_start, we might need to split it. */
+ if (next)
+ split_huge_pud_if_needed(next, end);
+
+ /* Check if we need to split PMD THP at start. */
split_huge_pmd_if_needed(vma, start);
- /* Check if we need to split end next. */
+ /* Check if we need to split PMD THP at end. */
split_huge_pmd_if_needed(vma, end);
/* If we're incrementing next->vm_start, we might need to split it. */
@@ -3752,6 +3817,8 @@ static void unmap_folio(struct folio *folio)
VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
+ if (folio_test_pud_mappable(folio))
+ ttu_flags |= TTU_SPLIT_HUGE_PUD;
if (folio_test_pmd_mappable(folio))
ttu_flags |= TTU_SPLIT_HUGE_PMD;
diff --git a/mm/migrate.c b/mm/migrate.c
index 4688b9e38cd2f..2d3d2f5585d14 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1859,6 +1859,23 @@ static int migrate_pages_batch(struct list_head *from,
* we will migrate them after the rest of the
* list is processed.
*/
+ /*
+ * PUD-sized folios cannot be migrated directly,
+ * but can be split. Try to split them first and
+ * migrate the resulting smaller folios.
+ */
+ if (folio_test_pud_mappable(folio)) {
+ nr_failed++;
+ stats->nr_thp_failed++;
+ if (!try_split_folio(folio, split_folios, mode)) {
+ stats->nr_thp_split++;
+ stats->nr_split++;
+ continue;
+ }
+ stats->nr_failed_pages += nr_pages;
+ list_move_tail(&folio->lru, ret_folios);
+ continue;
+ }
if (!thp_migration_supported() && is_thp) {
nr_failed++;
stats->nr_thp_failed++;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 619691aa43938..868514a770bf2 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1348,6 +1348,8 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
enum ttu_flags flags = TTU_BATCH_FLUSH;
bool was_swapbacked = folio_test_swapbacked(folio);
+ if (folio_test_pud_mappable(folio))
+ flags |= TTU_SPLIT_HUGE_PUD;
if (folio_test_pmd_mappable(folio))
flags |= TTU_SPLIT_HUGE_PMD;
/*
--
2.47.3
^ permalink raw reply [flat|nested] 49+ messages in thread
* [RFC 06/12] selftests/mm: add PUD THP basic allocation test
2026-02-02 0:50 [RFC 00/12] mm: PUD (1GB) THP implementation Usama Arif
` (4 preceding siblings ...)
2026-02-02 0:50 ` [RFC 05/12] mm: thp: add reclaim and migration support for PUD THP Usama Arif
@ 2026-02-02 0:50 ` Usama Arif
2026-02-02 0:50 ` [RFC 07/12] selftests/mm: add PUD THP read/write access test Usama Arif
` (9 subsequent siblings)
15 siblings, 0 replies; 49+ messages in thread
From: Usama Arif @ 2026-02-02 0:50 UTC (permalink / raw)
To: ziy, Andrew Morton, David Hildenbrand, lorenzo.stoakes, linux-mm
Cc: hannes, riel, shakeel.butt, kas, baohua, dev.jain, baolin.wang,
npache, Liam.Howlett, ryan.roberts, vbabka, lance.yang,
linux-kernel, kernel-team, Usama Arif
Add a selftest for PUD-level THPs (1GB THPs) with test infrastructure
and a basic allocation test.
The test uses the kselftest harness FIXTURE/TEST_F framework. A shared
fixture allocates a 2GB anonymous mapping and computes a PUD-aligned
address within it. Helper functions read THP counters from /proc/vmstat
and mTHP statistics from sysfs.
The basic allocation test verifies the fundamental PUD THP allocation
path by touching a PUD-aligned region and checking that the mTHP
anon_fault_alloc counter increments, confirming a 1GB folio was
allocated.
Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
tools/testing/selftests/mm/Makefile | 1 +
tools/testing/selftests/mm/pud_thp_test.c | 161 ++++++++++++++++++++++
2 files changed, 162 insertions(+)
create mode 100644 tools/testing/selftests/mm/pud_thp_test.c
diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
index eaf9312097f7b..ab79f1693941a 100644
--- a/tools/testing/selftests/mm/Makefile
+++ b/tools/testing/selftests/mm/Makefile
@@ -88,6 +88,7 @@ TEST_GEN_FILES += pagemap_ioctl
TEST_GEN_FILES += pfnmap
TEST_GEN_FILES += process_madv
TEST_GEN_FILES += prctl_thp_disable
+TEST_GEN_FILES += pud_thp_test
TEST_GEN_FILES += thuge-gen
TEST_GEN_FILES += transhuge-stress
TEST_GEN_FILES += uffd-stress
diff --git a/tools/testing/selftests/mm/pud_thp_test.c b/tools/testing/selftests/mm/pud_thp_test.c
new file mode 100644
index 0000000000000..6f0c02c6afd3a
--- /dev/null
+++ b/tools/testing/selftests/mm/pud_thp_test.c
@@ -0,0 +1,161 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Test program for PUD-level Transparent Huge Pages (1GB anonymous THP)
+ *
+ * Prerequisites:
+ * - Kernel with PUD THP support (CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
+ * - THP enabled: echo always > /sys/kernel/mm/transparent_hugepage/enabled
+ * - PUD THP enabled: echo always > /sys/kernel/mm/transparent_hugepage/hugepages-1048576kB/enabled
+ */
+
+#define _GNU_SOURCE
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <sys/mman.h>
+#include <sys/wait.h>
+#include <fcntl.h>
+#include <errno.h>
+#include <stdint.h>
+#include <sys/syscall.h>
+
+#include "kselftest_harness.h"
+
+#define PUD_SIZE (1UL << 30) /* 1GB */
+#define PMD_SIZE (1UL << 21) /* 2MB */
+#define PAGE_SIZE (1UL << 12) /* 4KB */
+
+#define TEST_REGION_SIZE (2 * PUD_SIZE) /* 2GB to ensure PUD alignment */
+
+/* Get PUD-aligned address within a region */
+static inline void *pud_align(void *addr)
+{
+ return (void *)(((unsigned long)addr + PUD_SIZE - 1) & ~(PUD_SIZE - 1));
+}
+
+/* Read vmstat counter */
+static unsigned long read_vmstat(const char *name)
+{
+ FILE *fp;
+ char line[256];
+ unsigned long value = 0;
+
+ fp = fopen("/proc/vmstat", "r");
+ if (!fp)
+ return 0;
+
+ while (fgets(line, sizeof(line), fp)) {
+ if (strncmp(line, name, strlen(name)) == 0 &&
+ line[strlen(name)] == ' ') {
+ sscanf(line + strlen(name), " %lu", &value);
+ break;
+ }
+ }
+ fclose(fp);
+ return value;
+}
+
+/* Read mTHP stats for PUD order (1GB = 1048576kB) */
+static unsigned long read_mthp_stat(const char *stat_name)
+{
+ char path[256];
+ char buf[64];
+ int fd;
+ ssize_t ret;
+ unsigned long value = 0;
+
+ snprintf(path, sizeof(path),
+ "/sys/kernel/mm/transparent_hugepage/hugepages-1048576kB/stats/%s",
+ stat_name);
+ fd = open(path, O_RDONLY);
+ if (fd < 0)
+ return 0;
+ ret = read(fd, buf, sizeof(buf) - 1);
+ close(fd);
+ if (ret <= 0)
+ return 0;
+ buf[ret] = '\0';
+ sscanf(buf, "%lu", &value);
+ return value;
+}
+
+/* Check if PUD THP is enabled */
+static int pud_thp_enabled(void)
+{
+ char buf[64];
+ int fd;
+ ssize_t ret;
+
+ fd = open("/sys/kernel/mm/transparent_hugepage/hugepages-1048576kB/enabled", O_RDONLY);
+ if (fd < 0)
+ return 0;
+ ret = read(fd, buf, sizeof(buf) - 1);
+ close(fd);
+ if (ret <= 0)
+ return 0;
+ buf[ret] = '\0';
+
+ /* Check if [always] or [madvise] is set */
+ if (strstr(buf, "[always]") || strstr(buf, "[madvise]"))
+ return 1;
+ return 0;
+}
+
+/*
+ * Main fixture for PUD THP tests
+ * Allocates a 2GB region and provides a PUD-aligned pointer within it
+ */
+FIXTURE(pud_thp)
+{
+ void *mem; /* Base mmap allocation */
+ void *aligned; /* PUD-aligned pointer within mem */
+ unsigned long mthp_alloc_before;
+ unsigned long split_before;
+};
+
+FIXTURE_SETUP(pud_thp)
+{
+ if (!pud_thp_enabled())
+ SKIP(return, "PUD THP not enabled in sysfs");
+
+ self->mthp_alloc_before = read_mthp_stat("anon_fault_alloc");
+ self->split_before = read_vmstat("thp_split_pud");
+
+ self->mem = mmap(NULL, TEST_REGION_SIZE, PROT_READ | PROT_WRITE,
+ MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+ ASSERT_NE(self->mem, MAP_FAILED);
+
+ self->aligned = pud_align(self->mem);
+}
+
+FIXTURE_TEARDOWN(pud_thp)
+{
+ if (self->mem && self->mem != MAP_FAILED)
+ munmap(self->mem, TEST_REGION_SIZE);
+}
+
+/*
+ * Test: Basic PUD THP allocation
+ * Verifies that touching a PUD-aligned region allocates a PUD THP
+ */
+TEST_F(pud_thp, basic_allocation)
+{
+ unsigned long mthp_alloc_after;
+
+ /* Touch memory to trigger page fault and PUD THP allocation */
+ memset(self->aligned, 0xAB, PUD_SIZE);
+
+ mthp_alloc_after = read_mthp_stat("anon_fault_alloc");
+
+ /*
+ * If mTHP allocation counter increased, a PUD THP was allocated.
+ */
+ if (mthp_alloc_after <= self->mthp_alloc_before)
+ SKIP(return, "PUD THP not allocated");
+
+ TH_LOG("PUD THP allocated (anon_fault_alloc: %lu -> %lu)",
+ self->mthp_alloc_before, mthp_alloc_after);
+}
+
+TEST_HARNESS_MAIN
--
2.47.3
^ permalink raw reply [flat|nested] 49+ messages in thread
* [RFC 07/12] selftests/mm: add PUD THP read/write access test
2026-02-02 0:50 [RFC 00/12] mm: PUD (1GB) THP implementation Usama Arif
` (5 preceding siblings ...)
2026-02-02 0:50 ` [RFC 06/12] selftests/mm: add PUD THP basic allocation test Usama Arif
@ 2026-02-02 0:50 ` Usama Arif
2026-02-02 0:50 ` [RFC 08/12] selftests/mm: add PUD THP fork COW test Usama Arif
` (8 subsequent siblings)
15 siblings, 0 replies; 49+ messages in thread
From: Usama Arif @ 2026-02-02 0:50 UTC (permalink / raw)
To: ziy, Andrew Morton, David Hildenbrand, lorenzo.stoakes, linux-mm
Cc: hannes, riel, shakeel.butt, kas, baohua, dev.jain, baolin.wang,
npache, Liam.Howlett, ryan.roberts, vbabka, lance.yang,
linux-kernel, kernel-team, Usama Arif
Add a test that verifies data integrity across a 1GB PUD THP region
by writing patterns at page boundaries and reading them back.
Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
tools/testing/selftests/mm/pud_thp_test.c | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)
diff --git a/tools/testing/selftests/mm/pud_thp_test.c b/tools/testing/selftests/mm/pud_thp_test.c
index 6f0c02c6afd3a..7a1f0b0f81468 100644
--- a/tools/testing/selftests/mm/pud_thp_test.c
+++ b/tools/testing/selftests/mm/pud_thp_test.c
@@ -158,4 +158,27 @@ TEST_F(pud_thp, basic_allocation)
self->mthp_alloc_before, mthp_alloc_after);
}
+/*
+ * Test: Read/write access patterns
+ * Verifies data integrity across the entire 1GB region
+ */
+TEST_F(pud_thp, read_write_access)
+{
+ unsigned long *ptr = (unsigned long *)self->aligned;
+ size_t i;
+ int errors = 0;
+
+ /* Write pattern - sample every page to reduce test time */
+ for (i = 0; i < PUD_SIZE / sizeof(unsigned long); i += PAGE_SIZE / sizeof(unsigned long))
+ ptr[i] = i ^ 0xDEADBEEFUL;
+
+ /* Verify pattern */
+ for (i = 0; i < PUD_SIZE / sizeof(unsigned long); i += PAGE_SIZE / sizeof(unsigned long)) {
+ if (ptr[i] != (i ^ 0xDEADBEEFUL))
+ errors++;
+ }
+
+ ASSERT_EQ(errors, 0);
+}
+
TEST_HARNESS_MAIN
--
2.47.3
^ permalink raw reply [flat|nested] 49+ messages in thread
* [RFC 08/12] selftests/mm: add PUD THP fork COW test
2026-02-02 0:50 [RFC 00/12] mm: PUD (1GB) THP implementation Usama Arif
` (6 preceding siblings ...)
2026-02-02 0:50 ` [RFC 07/12] selftests/mm: add PUD THP read/write access test Usama Arif
@ 2026-02-02 0:50 ` Usama Arif
2026-02-02 0:50 ` [RFC 09/12] selftests/mm: add PUD THP partial munmap test Usama Arif
` (7 subsequent siblings)
15 siblings, 0 replies; 49+ messages in thread
From: Usama Arif @ 2026-02-02 0:50 UTC (permalink / raw)
To: ziy, Andrew Morton, David Hildenbrand, lorenzo.stoakes, linux-mm
Cc: hannes, riel, shakeel.butt, kas, baohua, dev.jain, baolin.wang,
npache, Liam.Howlett, ryan.roberts, vbabka, lance.yang,
linux-kernel, kernel-team, Usama Arif
Add a test that allocates a PUD THP, forks a child process, and has
the child write to the shared memory. This triggers the copy-on-write
path which must split the PUD THP. The test verifies that both parent
and child see correct data after the split.
Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
tools/testing/selftests/mm/pud_thp_test.c | 44 +++++++++++++++++++++++
1 file changed, 44 insertions(+)
diff --git a/tools/testing/selftests/mm/pud_thp_test.c b/tools/testing/selftests/mm/pud_thp_test.c
index 7a1f0b0f81468..27a509cd477d5 100644
--- a/tools/testing/selftests/mm/pud_thp_test.c
+++ b/tools/testing/selftests/mm/pud_thp_test.c
@@ -181,4 +181,48 @@ TEST_F(pud_thp, read_write_access)
ASSERT_EQ(errors, 0);
}
+/*
+ * Test: Fork and copy-on-write
+ * Verifies that COW correctly splits the PUD THP and isolates parent/child
+ */
+TEST_F(pud_thp, fork_cow)
+{
+ unsigned long *ptr = (unsigned long *)self->aligned;
+ unsigned char *bytes = (unsigned char *)self->aligned;
+ pid_t pid;
+ int status;
+ unsigned long split_after;
+
+ /* Initialize memory with known pattern */
+ memset(self->aligned, 0xCC, PUD_SIZE);
+
+ pid = fork();
+ ASSERT_GE(pid, 0);
+
+ if (pid == 0) {
+ /* Child: write to trigger COW */
+ ptr[0] = 0x12345678UL;
+
+ /* Verify write succeeded and rest of memory unchanged */
+ if (ptr[0] != 0x12345678UL)
+ _exit(1);
+ if (bytes[PAGE_SIZE] != 0xCC)
+ _exit(2);
+
+ _exit(0);
+ }
+
+ /* Parent: wait for child */
+ waitpid(pid, &status, 0);
+ ASSERT_TRUE(WIFEXITED(status));
+ ASSERT_EQ(WEXITSTATUS(status), 0);
+
+ /* Verify parent memory unchanged (COW should have given child a copy) */
+ ASSERT_EQ(bytes[0], 0xCC);
+
+ split_after = read_vmstat("thp_split_pud");
+ TH_LOG("Fork COW completed (thp_split_pud: %lu -> %lu)",
+ self->split_before, split_after);
+}
+
TEST_HARNESS_MAIN
--
2.47.3
^ permalink raw reply [flat|nested] 49+ messages in thread
* [RFC 09/12] selftests/mm: add PUD THP partial munmap test
2026-02-02 0:50 [RFC 00/12] mm: PUD (1GB) THP implementation Usama Arif
` (7 preceding siblings ...)
2026-02-02 0:50 ` [RFC 08/12] selftests/mm: add PUD THP fork COW test Usama Arif
@ 2026-02-02 0:50 ` Usama Arif
2026-02-02 0:50 ` [RFC 10/12] selftests/mm: add PUD THP mprotect split test Usama Arif
` (6 subsequent siblings)
15 siblings, 0 replies; 49+ messages in thread
From: Usama Arif @ 2026-02-02 0:50 UTC (permalink / raw)
To: ziy, Andrew Morton, David Hildenbrand, lorenzo.stoakes, linux-mm
Cc: hannes, riel, shakeel.butt, kas, baohua, dev.jain, baolin.wang,
npache, Liam.Howlett, ryan.roberts, vbabka, lance.yang,
linux-kernel, kernel-team, Usama Arif
Add a test that allocates a PUD THP and unmaps a 2MB region from the
middle. Since the PUD can no longer cover the entire region, it must
be split. The test verifies that memory before and after the hole
remains accessible with correct data.
Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
tools/testing/selftests/mm/pud_thp_test.c | 31 +++++++++++++++++++++++
1 file changed, 31 insertions(+)
diff --git a/tools/testing/selftests/mm/pud_thp_test.c b/tools/testing/selftests/mm/pud_thp_test.c
index 27a509cd477d5..8d4cb0e60f7f7 100644
--- a/tools/testing/selftests/mm/pud_thp_test.c
+++ b/tools/testing/selftests/mm/pud_thp_test.c
@@ -225,4 +225,35 @@ TEST_F(pud_thp, fork_cow)
self->split_before, split_after);
}
+/*
+ * Test: Partial munmap triggers split
+ * Verifies that unmapping part of a PUD THP splits it correctly
+ */
+TEST_F(pud_thp, partial_munmap)
+{
+ unsigned long *ptr = (unsigned long *)self->aligned;
+ unsigned long *after_hole;
+ unsigned long split_after;
+ int ret;
+
+ /* Touch memory to allocate PUD THP */
+ memset(self->aligned, 0xDD, PUD_SIZE);
+
+ /* Unmap a 2MB region in the middle - should trigger PUD split */
+ ret = munmap((char *)self->aligned + PUD_SIZE / 2, PMD_SIZE);
+ ASSERT_EQ(ret, 0);
+
+ split_after = read_vmstat("thp_split_pud");
+
+ /* Verify memory before the hole is still accessible and correct */
+ ASSERT_EQ(ptr[0], 0xDDDDDDDDDDDDDDDDUL);
+
+ /* Verify memory after the hole is still accessible and correct */
+ after_hole = (unsigned long *)((char *)self->aligned + PUD_SIZE / 2 + PMD_SIZE);
+ ASSERT_EQ(*after_hole, 0xDDDDDDDDDDDDDDDDUL);
+
+ TH_LOG("Partial munmap completed (thp_split_pud: %lu -> %lu)",
+ self->split_before, split_after);
+}
+
TEST_HARNESS_MAIN
--
2.47.3
^ permalink raw reply [flat|nested] 49+ messages in thread
* [RFC 10/12] selftests/mm: add PUD THP mprotect split test
2026-02-02 0:50 [RFC 00/12] mm: PUD (1GB) THP implementation Usama Arif
` (8 preceding siblings ...)
2026-02-02 0:50 ` [RFC 09/12] selftests/mm: add PUD THP partial munmap test Usama Arif
@ 2026-02-02 0:50 ` Usama Arif
2026-02-02 0:50 ` [RFC 11/12] selftests/mm: add PUD THP reclaim test Usama Arif
` (5 subsequent siblings)
15 siblings, 0 replies; 49+ messages in thread
From: Usama Arif @ 2026-02-02 0:50 UTC (permalink / raw)
To: ziy, Andrew Morton, David Hildenbrand, lorenzo.stoakes, linux-mm
Cc: hannes, riel, shakeel.butt, kas, baohua, dev.jain, baolin.wang,
npache, Liam.Howlett, ryan.roberts, vbabka, lance.yang,
linux-kernel, kernel-team, Usama Arif
Add a test that changes permissions on a portion of a PUD THP using
mprotect. Since different parts now have different permissions, the
PUD must be split. The test verifies correct behavior after the
permission change.
Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
tools/testing/selftests/mm/pud_thp_test.c | 26 +++++++++++++++++++++++
1 file changed, 26 insertions(+)
diff --git a/tools/testing/selftests/mm/pud_thp_test.c b/tools/testing/selftests/mm/pud_thp_test.c
index 8d4cb0e60f7f7..b59eb470adbba 100644
--- a/tools/testing/selftests/mm/pud_thp_test.c
+++ b/tools/testing/selftests/mm/pud_thp_test.c
@@ -256,4 +256,30 @@ TEST_F(pud_thp, partial_munmap)
self->split_before, split_after);
}
+/*
+ * Test: mprotect triggers split
+ * Verifies that changing protection on part of a PUD THP splits it
+ */
+TEST_F(pud_thp, mprotect_split)
+{
+ volatile unsigned char *p = (unsigned char *)self->aligned;
+ unsigned long split_after;
+ int ret;
+
+ /* Touch memory to allocate PUD THP */
+ memset(self->aligned, 0xEE, PUD_SIZE);
+
+ /* Change protection on a 2MB region - should trigger PUD split */
+ ret = mprotect((char *)self->aligned + PMD_SIZE, PMD_SIZE, PROT_READ);
+ ASSERT_EQ(ret, 0);
+
+ split_after = read_vmstat("thp_split_pud");
+
+ /* Verify memory still readable */
+ ASSERT_EQ(*p, 0xEE);
+
+ TH_LOG("mprotect split completed (thp_split_pud: %lu -> %lu)",
+ self->split_before, split_after);
+}
+
TEST_HARNESS_MAIN
--
2.47.3
^ permalink raw reply [flat|nested] 49+ messages in thread
* [RFC 11/12] selftests/mm: add PUD THP reclaim test
2026-02-02 0:50 [RFC 00/12] mm: PUD (1GB) THP implementation Usama Arif
` (9 preceding siblings ...)
2026-02-02 0:50 ` [RFC 10/12] selftests/mm: add PUD THP mprotect split test Usama Arif
@ 2026-02-02 0:50 ` Usama Arif
2026-02-02 0:50 ` [RFC 12/12] selftests/mm: add PUD THP migration test Usama Arif
` (4 subsequent siblings)
15 siblings, 0 replies; 49+ messages in thread
From: Usama Arif @ 2026-02-02 0:50 UTC (permalink / raw)
To: ziy, Andrew Morton, David Hildenbrand, lorenzo.stoakes, linux-mm
Cc: hannes, riel, shakeel.butt, kas, baohua, dev.jain, baolin.wang,
npache, Liam.Howlett, ryan.roberts, vbabka, lance.yang,
linux-kernel, kernel-team, Usama Arif
Add a test that uses MADV_PAGEOUT to advise the kernel to page out
the PUD THP memory. This exercises the reclaim path which must split
the PUD THP before reclaiming the individual pages.
Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
tools/testing/selftests/mm/pud_thp_test.c | 33 +++++++++++++++++++++++
1 file changed, 33 insertions(+)
diff --git a/tools/testing/selftests/mm/pud_thp_test.c b/tools/testing/selftests/mm/pud_thp_test.c
index b59eb470adbba..961fdc489d8a2 100644
--- a/tools/testing/selftests/mm/pud_thp_test.c
+++ b/tools/testing/selftests/mm/pud_thp_test.c
@@ -28,6 +28,10 @@
#define TEST_REGION_SIZE (2 * PUD_SIZE) /* 2GB to ensure PUD alignment */
+#ifndef MADV_PAGEOUT
+#define MADV_PAGEOUT 21
+#endif
+
/* Get PUD-aligned address within a region */
static inline void *pud_align(void *addr)
{
@@ -282,4 +286,33 @@ TEST_F(pud_thp, mprotect_split)
self->split_before, split_after);
}
+/*
+ * Test: Reclaim via MADV_PAGEOUT
+ * Verifies that reclaim path correctly handles PUD THPs
+ */
+TEST_F(pud_thp, reclaim_pageout)
+{
+ volatile unsigned char *p;
+ unsigned long split_after;
+ int ret;
+
+ /* Touch memory to allocate PUD THP */
+ memset(self->aligned, 0xAA, PUD_SIZE);
+
+ /* Try to reclaim the pages */
+ ret = madvise(self->aligned, PUD_SIZE, MADV_PAGEOUT);
+ if (ret < 0 && errno == EINVAL)
+ SKIP(return, "MADV_PAGEOUT not supported");
+ ASSERT_EQ(ret, 0);
+
+ split_after = read_vmstat("thp_split_pud");
+
+ /* Touch memory again to verify it's still accessible */
+ p = (unsigned char *)self->aligned;
+ (void)*p; /* Read to bring pages back if swapped */
+
+ TH_LOG("Reclaim completed (thp_split_pud: %lu -> %lu)",
+ self->split_before, split_after);
+}
+
TEST_HARNESS_MAIN
--
2.47.3
^ permalink raw reply [flat|nested] 49+ messages in thread
* [RFC 12/12] selftests/mm: add PUD THP migration test
2026-02-02 0:50 [RFC 00/12] mm: PUD (1GB) THP implementation Usama Arif
` (10 preceding siblings ...)
2026-02-02 0:50 ` [RFC 11/12] selftests/mm: add PUD THP reclaim test Usama Arif
@ 2026-02-02 0:50 ` Usama Arif
2026-02-02 2:44 ` [RFC 00/12] mm: PUD (1GB) THP implementation Rik van Riel
` (3 subsequent siblings)
15 siblings, 0 replies; 49+ messages in thread
From: Usama Arif @ 2026-02-02 0:50 UTC (permalink / raw)
To: ziy, Andrew Morton, David Hildenbrand, lorenzo.stoakes, linux-mm
Cc: hannes, riel, shakeel.butt, kas, baohua, dev.jain, baolin.wang,
npache, Liam.Howlett, ryan.roberts, vbabka, lance.yang,
linux-kernel, kernel-team, Usama Arif
Add a test that uses mbind() to change the NUMA memory policy, which
triggers migration. The kernel must split PUD THPs before migration
since there is no PUD-level migration entry support. The test verifies
data integrity after the migration attempt.
Signed-off-by: Usama Arif <usamaarif642@gmail.com>
---
tools/testing/selftests/mm/pud_thp_test.c | 42 +++++++++++++++++++++++
1 file changed, 42 insertions(+)
diff --git a/tools/testing/selftests/mm/pud_thp_test.c b/tools/testing/selftests/mm/pud_thp_test.c
index 961fdc489d8a2..7e227f29e69fb 100644
--- a/tools/testing/selftests/mm/pud_thp_test.c
+++ b/tools/testing/selftests/mm/pud_thp_test.c
@@ -32,6 +32,14 @@
#define MADV_PAGEOUT 21
#endif
+#ifndef MPOL_BIND
+#define MPOL_BIND 2
+#endif
+
+#ifndef MPOL_MF_MOVE
+#define MPOL_MF_MOVE (1 << 1)
+#endif
+
/* Get PUD-aligned address within a region */
static inline void *pud_align(void *addr)
{
@@ -315,4 +323,38 @@ TEST_F(pud_thp, reclaim_pageout)
self->split_before, split_after);
}
+/*
+ * Test: Migration via mbind
+ * Verifies that migration path correctly handles PUD THPs by splitting
+ */
+TEST_F(pud_thp, migration_mbind)
+{
+ unsigned char *bytes = (unsigned char *)self->aligned;
+ unsigned long nodemask = 1UL; /* Node 0 */
+ unsigned long split_after;
+ int ret;
+
+ /* Touch memory to allocate PUD THP */
+ memset(self->aligned, 0xBB, PUD_SIZE);
+
+ /* Try to migrate by changing NUMA policy */
+ ret = syscall(__NR_mbind, self->aligned, PUD_SIZE, MPOL_BIND, &nodemask,
+ sizeof(nodemask) * 8, MPOL_MF_MOVE);
+ /*
+ * mbind may fail with EINVAL (single node) or EIO (migration failed),
+ * which is acceptable - we just want to exercise the migration path.
+ */
+ if (ret < 0 && errno != EINVAL && errno != EIO)
+ TH_LOG("mbind returned unexpected error: %s", strerror(errno));
+
+ split_after = read_vmstat("thp_split_pud");
+
+ /* Verify data integrity */
+ ASSERT_EQ(bytes[0], 0xBB);
+ ASSERT_EQ(bytes[PUD_SIZE - 1], 0xBB);
+
+ TH_LOG("Migration completed (thp_split_pud: %lu -> %lu)",
+ self->split_before, split_after);
+}
+
TEST_HARNESS_MAIN
--
2.47.3
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 00/12] mm: PUD (1GB) THP implementation
2026-02-02 0:50 [RFC 00/12] mm: PUD (1GB) THP implementation Usama Arif
` (11 preceding siblings ...)
2026-02-02 0:50 ` [RFC 12/12] selftests/mm: add PUD THP migration test Usama Arif
@ 2026-02-02 2:44 ` Rik van Riel
2026-02-02 11:30 ` Lorenzo Stoakes
2026-02-02 4:00 ` Matthew Wilcox
` (2 subsequent siblings)
15 siblings, 1 reply; 49+ messages in thread
From: Rik van Riel @ 2026-02-02 2:44 UTC (permalink / raw)
To: Usama Arif, ziy, Andrew Morton, David Hildenbrand,
lorenzo.stoakes, linux-mm
Cc: hannes, shakeel.butt, kas, baohua, dev.jain, baolin.wang, npache,
Liam.Howlett, ryan.roberts, vbabka, lance.yang, linux-kernel,
kernel-team, Frank van der Linden
On Sun, 2026-02-01 at 16:50 -0800, Usama Arif wrote:
>
> 1. Static Reservation: hugetlbfs requires pre-allocating huge pages
> at boot
> or runtime, taking memory away. This requires capacity planning,
> administrative overhead, and makes workload orchastration much
> much more
> complex, especially colocating with workloads that don't use
> hugetlbfs.
>
To address the obvious objection "but how could we
possibly allocate 1GB huge pages while the workload
is running?", I am planning to pick up the CMA balancing
patch series (thank you, Frank) and get that in an
upstream ready shape soon.
https://lkml.org/2025/9/15/1735
That patch set looks like another case where no
amount of internal testing will find every single
corner case, and we'll probably just want to
merge it upstream, deploy it experimentally, and
aggressively deal with anything that might pop up.
With CMA balancing, it would be possible to just
have half (or even more) of system memory for
movable allocations only, which would make it possible
to allocate 1GB huge pages dynamically.
--
All Rights Reversed.
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 00/12] mm: PUD (1GB) THP implementation
2026-02-02 0:50 [RFC 00/12] mm: PUD (1GB) THP implementation Usama Arif
` (12 preceding siblings ...)
2026-02-02 2:44 ` [RFC 00/12] mm: PUD (1GB) THP implementation Rik van Riel
@ 2026-02-02 4:00 ` Matthew Wilcox
2026-02-02 9:06 ` David Hildenbrand (arm)
2026-02-02 11:20 ` Lorenzo Stoakes
2026-02-02 16:24 ` Zi Yan
15 siblings, 1 reply; 49+ messages in thread
From: Matthew Wilcox @ 2026-02-02 4:00 UTC (permalink / raw)
To: Usama Arif
Cc: ziy, Andrew Morton, David Hildenbrand, lorenzo.stoakes, linux-mm,
hannes, riel, shakeel.butt, kas, baohua, dev.jain, baolin.wang,
npache, Liam.Howlett, ryan.roberts, vbabka, lance.yang,
linux-kernel, kernel-team
On Sun, Feb 01, 2026 at 04:50:17PM -0800, Usama Arif wrote:
> This is an RFC series to implement 1GB PUD-level THPs, allowing
> applications to benefit from reduced TLB pressure without requiring
> hugetlbfs. The patches are based on top of
> f9b74c13b773b7c7e4920d7bc214ea3d5f37b422 from mm-stable (6.19-rc6).
I suggest this has not had enough testing. There are dozens of places
in the MM which assume that if a folio is at least PMD size then it is
exactly PMD size. Everywhere that calls folio_test_pmd_mappable() needs
to be audited to make sure that it will work properly if the folio is
larger than PMD size.
zap_pmd_range() for example. Or finish_fault():
page = vmf->page;
(can be any page within the folio)
folio = page_folio(page);
if (pmd_none(*vmf->pmd)) {
if (!needs_fallback && folio_test_pmd_mappable(folio)) {
ret = do_set_pmd(vmf, folio, page);
then do_set_pmd() does:
if (folio_order(folio) != HPAGE_PMD_ORDER)
return ret;
page = &folio->page;
so that check needs to be changed, and then we need to select the
appropriate page within the folio rather than just the first page
of the folio. And then after the call:
entry = folio_mk_pmd(folio, vma->vm_page_prot);
we need to adjust entry to point to the appropriate PMD-sized range
within the folio.
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 00/12] mm: PUD (1GB) THP implementation
2026-02-02 4:00 ` Matthew Wilcox
@ 2026-02-02 9:06 ` David Hildenbrand (arm)
2026-02-03 21:11 ` Usama Arif
0 siblings, 1 reply; 49+ messages in thread
From: David Hildenbrand (arm) @ 2026-02-02 9:06 UTC (permalink / raw)
To: Matthew Wilcox, Usama Arif
Cc: ziy, Andrew Morton, lorenzo.stoakes, linux-mm, hannes, riel,
shakeel.butt, kas, baohua, dev.jain, baolin.wang, npache,
Liam.Howlett, ryan.roberts, vbabka, lance.yang, linux-kernel,
kernel-team
On 2/2/26 05:00, Matthew Wilcox wrote:
> On Sun, Feb 01, 2026 at 04:50:17PM -0800, Usama Arif wrote:
>> This is an RFC series to implement 1GB PUD-level THPs, allowing
>> applications to benefit from reduced TLB pressure without requiring
>> hugetlbfs. The patches are based on top of
>> f9b74c13b773b7c7e4920d7bc214ea3d5f37b422 from mm-stable (6.19-rc6).
>
> I suggest this has not had enough testing. There are dozens of places
> in the MM which assume that if a folio is at leaast PMD size then it is
> exactly PMD size. Everywhere that calls folio_test_pmd_mappable() needs
> to be audited to make sure that it will work properly if the folio is
> larger than PMD size.
I think the hack (ehm trick) in this patch set is to do it just like dax
PUDs: only map through a PUD or through PTEs, not through PMDs.
That also avoids dealing with mapcounts until I've sorted that out.
--
Cheers
David
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 01/12] mm: add PUD THP ptdesc and rmap support
2026-02-02 0:50 ` [RFC 01/12] mm: add PUD THP ptdesc and rmap support Usama Arif
@ 2026-02-02 10:44 ` Kiryl Shutsemau
2026-02-02 16:01 ` Zi Yan
2026-02-02 12:15 ` Lorenzo Stoakes
1 sibling, 1 reply; 49+ messages in thread
From: Kiryl Shutsemau @ 2026-02-02 10:44 UTC (permalink / raw)
To: Usama Arif
Cc: ziy, Andrew Morton, David Hildenbrand, lorenzo.stoakes, linux-mm,
hannes, riel, shakeel.butt, baohua, dev.jain, baolin.wang,
npache, Liam.Howlett, ryan.roberts, vbabka, lance.yang,
linux-kernel, kernel-team
On Sun, Feb 01, 2026 at 04:50:18PM -0800, Usama Arif wrote:
> For page table management, PUD THPs need to pre-deposit page tables
> that will be used when the huge page is later split. When a PUD THP
> is allocated, we cannot know in advance when or why it might need to
> be split (COW, partial unmap, reclaim), but we need page tables ready
> for that eventuality. Similar to how PMD THPs deposit a single PTE
> table, PUD THPs deposit a PMD table which itself contains deposited
> PTE tables - a two-level deposit. This commit adds the deposit/withdraw
> infrastructure and a new pud_huge_pmd field in ptdesc to store the
> deposited PMD.
>
> The deposited PMD tables are stored as a singly-linked stack using only
> page->lru.next as the link pointer. A doubly-linked list using the
> standard list_head mechanism would cause memory corruption: list_del()
> poisons both lru.next (offset 8) and lru.prev (offset 16), but lru.prev
> overlaps with ptdesc->pmd_huge_pte at offset 16. Since deposited PMD
> tables have their own deposited PTE tables stored in pmd_huge_pte,
> poisoning lru.prev would corrupt the PTE table list and cause crashes
> when withdrawing PTE tables during split. PMD THPs don't have this
> problem because their deposited PTE tables don't have sub-deposits.
> Using only lru.next avoids the overlap entirely.
>
> For reverse mapping, PUD THPs need the same rmap support that PMD THPs
> have. The page_vma_mapped_walk() function is extended to recognize and
> handle PUD-mapped folios during rmap traversal. A new TTU_SPLIT_HUGE_PUD
> flag tells the unmap path to split PUD THPs before proceeding, since
> there is no PUD-level migration entry format - the split converts the
> single PUD mapping into individual PTE mappings that can be migrated
> or swapped normally.
>
> Signed-off-by: Usama Arif <usamaarif642@gmail.com>
> ---
> include/linux/huge_mm.h | 5 +++
> include/linux/mm.h | 19 ++++++++
> include/linux/mm_types.h | 5 ++-
> include/linux/pgtable.h | 8 ++++
> include/linux/rmap.h | 7 ++-
> mm/huge_memory.c | 8 ++++
> mm/internal.h | 3 ++
> mm/page_vma_mapped.c | 35 +++++++++++++++
> mm/pgtable-generic.c | 83 ++++++++++++++++++++++++++++++++++
> mm/rmap.c | 96 +++++++++++++++++++++++++++++++++++++---
> 10 files changed, 260 insertions(+), 9 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index a4d9f964dfdea..e672e45bb9cc7 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -463,10 +463,15 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
> unsigned long address);
>
> #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> +void split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
> + unsigned long address);
> int change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
> pud_t *pudp, unsigned long addr, pgprot_t newprot,
> unsigned long cp_flags);
> #else
> +static inline void
> +split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
> + unsigned long address) {}
> static inline int
> change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
> pud_t *pudp, unsigned long addr, pgprot_t newprot,
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index ab2e7e30aef96..a15e18df0f771 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -3455,6 +3455,22 @@ static inline bool pagetable_pmd_ctor(struct mm_struct *mm,
> * considered ready to switch to split PUD locks yet; there may be places
> * which need to be converted from page_table_lock.
> */
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> +static inline struct page *pud_pgtable_page(pud_t *pud)
> +{
> + unsigned long mask = ~(PTRS_PER_PUD * sizeof(pud_t) - 1);
> +
> + return virt_to_page((void *)((unsigned long)pud & mask));
> +}
> +
> +static inline struct ptdesc *pud_ptdesc(pud_t *pud)
> +{
> + return page_ptdesc(pud_pgtable_page(pud));
> +}
> +
> +#define pud_huge_pmd(pud) (pud_ptdesc(pud)->pud_huge_pmd)
> +#endif
> +
> static inline spinlock_t *pud_lockptr(struct mm_struct *mm, pud_t *pud)
> {
> return &mm->page_table_lock;
> @@ -3471,6 +3487,9 @@ static inline spinlock_t *pud_lock(struct mm_struct *mm, pud_t *pud)
> static inline void pagetable_pud_ctor(struct ptdesc *ptdesc)
> {
> __pagetable_ctor(ptdesc);
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> + ptdesc->pud_huge_pmd = NULL;
> +#endif
> }
>
> static inline void pagetable_p4d_ctor(struct ptdesc *ptdesc)
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 78950eb8926dc..26a38490ae2e1 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -577,7 +577,10 @@ struct ptdesc {
> struct list_head pt_list;
> struct {
> unsigned long _pt_pad_1;
> - pgtable_t pmd_huge_pte;
> + union {
> + pgtable_t pmd_huge_pte; /* For PMD tables: deposited PTE */
> + pgtable_t pud_huge_pmd; /* For PUD tables: deposited PMD list */
> + };
> };
> };
> unsigned long __page_mapping;
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 2f0dd3a4ace1a..3ce733c1d71a2 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -1168,6 +1168,14 @@ extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
> #define arch_needs_pgtable_deposit() (false)
> #endif
>
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> +extern void pgtable_trans_huge_pud_deposit(struct mm_struct *mm, pud_t *pudp,
> + pmd_t *pmd_table);
> +extern pmd_t *pgtable_trans_huge_pud_withdraw(struct mm_struct *mm, pud_t *pudp);
> +extern void pud_deposit_pte(pmd_t *pmd_table, pgtable_t pgtable);
> +extern pgtable_t pud_withdraw_pte(pmd_t *pmd_table);
> +#endif
> +
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> /*
> * This is an implementation of pmdp_establish() that is only suitable for an
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index daa92a58585d9..08cd0a0eb8763 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -101,6 +101,7 @@ enum ttu_flags {
> * do a final flush if necessary */
> TTU_RMAP_LOCKED = 0x80, /* do not grab rmap lock:
> * caller holds it */
> + TTU_SPLIT_HUGE_PUD = 0x100, /* split huge PUD if any */
> };
>
> #ifdef CONFIG_MMU
> @@ -473,6 +474,8 @@ void folio_add_anon_rmap_ptes(struct folio *, struct page *, int nr_pages,
> folio_add_anon_rmap_ptes(folio, page, 1, vma, address, flags)
> void folio_add_anon_rmap_pmd(struct folio *, struct page *,
> struct vm_area_struct *, unsigned long address, rmap_t flags);
> +void folio_add_anon_rmap_pud(struct folio *, struct page *,
> + struct vm_area_struct *, unsigned long address, rmap_t flags);
> void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
> unsigned long address, rmap_t flags);
> void folio_add_file_rmap_ptes(struct folio *, struct page *, int nr_pages,
> @@ -933,6 +936,7 @@ struct page_vma_mapped_walk {
> pgoff_t pgoff;
> struct vm_area_struct *vma;
> unsigned long address;
> + pud_t *pud;
> pmd_t *pmd;
> pte_t *pte;
> spinlock_t *ptl;
> @@ -970,7 +974,7 @@ static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
> static inline void
> page_vma_mapped_walk_restart(struct page_vma_mapped_walk *pvmw)
> {
> - WARN_ON_ONCE(!pvmw->pmd && !pvmw->pte);
> + WARN_ON_ONCE(!pvmw->pud && !pvmw->pmd && !pvmw->pte);
>
> if (likely(pvmw->ptl))
> spin_unlock(pvmw->ptl);
> @@ -978,6 +982,7 @@ page_vma_mapped_walk_restart(struct page_vma_mapped_walk *pvmw)
> WARN_ON_ONCE(1);
>
> pvmw->ptl = NULL;
> + pvmw->pud = NULL;
> pvmw->pmd = NULL;
> pvmw->pte = NULL;
> }
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 40cf59301c21a..3128b3beedb0a 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2933,6 +2933,14 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
> spin_unlock(ptl);
> mmu_notifier_invalidate_range_end(&range);
> }
> +
> +void split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
> + unsigned long address)
> +{
> + VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PUD_SIZE));
> + if (pud_trans_huge(*pud))
> + __split_huge_pud_locked(vma, pud, address);
> +}
> #else
> void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
> unsigned long address)
> diff --git a/mm/internal.h b/mm/internal.h
> index 9ee336aa03656..21d5c00f638dc 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -545,6 +545,9 @@ int user_proactive_reclaim(char *buf,
> * in mm/rmap.c:
> */
> pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address);
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> +pud_t *mm_find_pud(struct mm_struct *mm, unsigned long address);
> +#endif
>
> /*
> * in mm/page_alloc.c
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index b38a1d00c971b..d31eafba38041 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -146,6 +146,18 @@ static bool check_pmd(unsigned long pfn, struct page_vma_mapped_walk *pvmw)
> return true;
> }
>
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> +/* Returns true if the two ranges overlap. Careful to not overflow. */
> +static bool check_pud(unsigned long pfn, struct page_vma_mapped_walk *pvmw)
> +{
> + if ((pfn + HPAGE_PUD_NR - 1) < pvmw->pfn)
> + return false;
> + if (pfn > pvmw->pfn + pvmw->nr_pages - 1)
> + return false;
> + return true;
> +}
> +#endif
> +
> static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
> {
> pvmw->address = (pvmw->address + size) & ~(size - 1);
> @@ -188,6 +200,10 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
> pud_t *pud;
> pmd_t pmde;
>
> + /* The only possible pud mapping has been handled on last iteration */
> + if (pvmw->pud && !pvmw->pmd)
> + return not_found(pvmw);
> +
> /* The only possible pmd mapping has been handled on last iteration */
> if (pvmw->pmd && !pvmw->pte)
> return not_found(pvmw);
> @@ -234,6 +250,25 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
> continue;
> }
>
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> + /* Check for PUD-mapped THP */
> + if (pud_trans_huge(*pud)) {
> + pvmw->pud = pud;
> + pvmw->ptl = pud_lock(mm, pud);
> + if (likely(pud_trans_huge(*pud))) {
> + if (pvmw->flags & PVMW_MIGRATION)
> + return not_found(pvmw);
> + if (!check_pud(pud_pfn(*pud), pvmw))
> + return not_found(pvmw);
> + return true;
> + }
> + /* PUD was split under us, retry at PMD level */
> + spin_unlock(pvmw->ptl);
> + pvmw->ptl = NULL;
> + pvmw->pud = NULL;
> + }
> +#endif
> +
> pvmw->pmd = pmd_offset(pud, pvmw->address);
> /*
> * Make sure the pmd value isn't cached in a register by the
> diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
> index d3aec7a9926ad..2047558ddcd79 100644
> --- a/mm/pgtable-generic.c
> +++ b/mm/pgtable-generic.c
> @@ -195,6 +195,89 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
> }
> #endif
>
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> +/*
> + * Deposit page tables for PUD THP.
> + * Called with PUD lock held. Stores PMD tables in a singly-linked stack
> + * via pud_huge_pmd, using only pmd_page->lru.next as the link pointer.
> + *
> + * IMPORTANT: We use only lru.next (offset 8) for linking, NOT the full
> + * list_head. This is because lru.prev (offset 16) overlaps with
> + * ptdesc->pmd_huge_pte, which stores the PMD table's deposited PTE tables.
> + * Using list_del() would corrupt pmd_huge_pte with LIST_POISON2.
This is ugly.
Sounds like you want to use llist_node/head instead of list_head for this.
You might be able to avoid taking the lock in some cases. Note that
pud_lockptr() is mm->page_table_lock as of now.
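For illustration, something along these lines (untested, and the two
field names below are made up for the example):

	/* on the PUD table's ptdesc: stack of deposited PMD tables */
	struct llist_head pud_deposited_pmds;
	/* on each deposited PMD table's ptdesc: link in that stack */
	struct llist_node pud_deposit_node;

	/* deposit */
	llist_add(&pmd_ptdesc->pud_deposit_node,
		  &pud_ptdesc(pudp)->pud_deposited_pmds);

	/* withdraw, returns NULL when the stack is empty */
	node = llist_del_first(&pud_ptdesc(pudp)->pud_deposited_pmds);

That keeps pmd_huge_pte on the deposited PMD tables untouched, so the
lru.next trick isn't needed at all.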
> + *
> + * PTE tables should be deposited into the PMD using pud_deposit_pte().
> + */
> +void pgtable_trans_huge_pud_deposit(struct mm_struct *mm, pud_t *pudp,
> + pmd_t *pmd_table)
> +{
> + pgtable_t pmd_page = virt_to_page(pmd_table);
> +
> + assert_spin_locked(pud_lockptr(mm, pudp));
> +
> + /* Push onto stack using only lru.next as the link */
> + pmd_page->lru.next = (struct list_head *)pud_huge_pmd(pudp);
> + pud_huge_pmd(pudp) = pmd_page;
> +}
> +
> +/*
> + * Withdraw the deposited PMD table for PUD THP split or zap.
> + * Called with PUD lock held.
> + * Returns NULL if no more PMD tables are deposited.
> + */
> +pmd_t *pgtable_trans_huge_pud_withdraw(struct mm_struct *mm, pud_t *pudp)
> +{
> + pgtable_t pmd_page;
> +
> + assert_spin_locked(pud_lockptr(mm, pudp));
> +
> + pmd_page = pud_huge_pmd(pudp);
> + if (!pmd_page)
> + return NULL;
> +
> + /* Pop from stack - lru.next points to next PMD page (or NULL) */
> + pud_huge_pmd(pudp) = (pgtable_t)pmd_page->lru.next;
> +
> + return page_address(pmd_page);
> +}
> +
> +/*
> + * Deposit a PTE table into a standalone PMD table (not yet in page table hierarchy).
> + * Used for PUD THP pre-deposit. The PMD table's pmd_huge_pte stores a linked list.
> + * No lock assertion since the PMD isn't visible yet.
> + */
> +void pud_deposit_pte(pmd_t *pmd_table, pgtable_t pgtable)
> +{
> + struct ptdesc *ptdesc = virt_to_ptdesc(pmd_table);
> +
> + /* FIFO - add to front of list */
> + if (!ptdesc->pmd_huge_pte)
> + INIT_LIST_HEAD(&pgtable->lru);
> + else
> + list_add(&pgtable->lru, &ptdesc->pmd_huge_pte->lru);
> + ptdesc->pmd_huge_pte = pgtable;
> +}
> +
> +/*
> + * Withdraw a PTE table from a standalone PMD table.
> + * Returns NULL if no more PTE tables are deposited.
> + */
> +pgtable_t pud_withdraw_pte(pmd_t *pmd_table)
> +{
> + struct ptdesc *ptdesc = virt_to_ptdesc(pmd_table);
> + pgtable_t pgtable;
> +
> + pgtable = ptdesc->pmd_huge_pte;
> + if (!pgtable)
> + return NULL;
> + ptdesc->pmd_huge_pte = list_first_entry_or_null(&pgtable->lru,
> + struct page, lru);
> + if (ptdesc->pmd_huge_pte)
> + list_del(&pgtable->lru);
> + return pgtable;
> +}
> +#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
> +
> #ifndef __HAVE_ARCH_PMDP_INVALIDATE
> pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
> pmd_t *pmdp)
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 7b9879ef442d9..69acabd763da4 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -811,6 +811,32 @@ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address)
> return pmd;
> }
>
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> +/*
> + * Returns the actual pud_t* where we expect 'address' to be mapped from, or
> + * NULL if it doesn't exist. No guarantees / checks on what the pud_t*
> + * represents.
> + */
> +pud_t *mm_find_pud(struct mm_struct *mm, unsigned long address)
Remove the ifdef and make mm_find_pmd() call it.
And in general, try to avoid ifdeffery where possible.
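i.e. roughly (untested):

	pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address)
	{
		pud_t *pud = mm_find_pud(mm, address);

		if (!pud || !pud_present(*pud))
			return NULL;

		return pmd_offset(pud, address);
	}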
> +{
> + pgd_t *pgd;
> + p4d_t *p4d;
> + pud_t *pud = NULL;
> +
> + pgd = pgd_offset(mm, address);
> + if (!pgd_present(*pgd))
> + goto out;
> +
> + p4d = p4d_offset(pgd, address);
> + if (!p4d_present(*p4d))
> + goto out;
> +
> + pud = pud_offset(p4d, address);
> +out:
> + return pud;
> +}
> +#endif
> +
> struct folio_referenced_arg {
> int mapcount;
> int referenced;
> @@ -1415,11 +1441,7 @@ static __always_inline void __folio_add_anon_rmap(struct folio *folio,
> SetPageAnonExclusive(page);
> break;
> case PGTABLE_LEVEL_PUD:
> - /*
> - * Keep the compiler happy, we don't support anonymous
> - * PUD mappings.
> - */
> - WARN_ON_ONCE(1);
> + SetPageAnonExclusive(page);
> break;
> default:
> BUILD_BUG();
> @@ -1503,6 +1525,31 @@ void folio_add_anon_rmap_pmd(struct folio *folio, struct page *page,
> #endif
> }
>
> +/**
> + * folio_add_anon_rmap_pud - add a PUD mapping to a page range of an anon folio
> + * @folio: The folio to add the mapping to
> + * @page: The first page to add
> + * @vma: The vm area in which the mapping is added
> + * @address: The user virtual address of the first page to map
> + * @flags: The rmap flags
> + *
> + * The page range of folio is defined by [first_page, first_page + HPAGE_PUD_NR)
> + *
> + * The caller needs to hold the page table lock, and the page must be locked in
> + * the anon_vma case: to serialize mapping,index checking after setting.
> + */
> +void folio_add_anon_rmap_pud(struct folio *folio, struct page *page,
> + struct vm_area_struct *vma, unsigned long address, rmap_t flags)
> +{
> +#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && \
> + defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
> + __folio_add_anon_rmap(folio, page, HPAGE_PUD_NR, vma, address, flags,
> + PGTABLE_LEVEL_PUD);
> +#else
> + WARN_ON_ONCE(true);
> +#endif
> +}
> +
> /**
> * folio_add_new_anon_rmap - Add mapping to a new anonymous folio.
> * @folio: The folio to add the mapping to.
> @@ -1934,6 +1981,20 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> }
>
> if (!pvmw.pte) {
> + /*
> + * Check for PUD-mapped THP first.
> + * If we have a PUD mapping and TTU_SPLIT_HUGE_PUD is set,
> + * split the PUD to PMD level and restart the walk.
> + */
> + if (pvmw.pud && pud_trans_huge(*pvmw.pud)) {
> + if (flags & TTU_SPLIT_HUGE_PUD) {
> + split_huge_pud_locked(vma, pvmw.pud, pvmw.address);
> + flags &= ~TTU_SPLIT_HUGE_PUD;
> + page_vma_mapped_walk_restart(&pvmw);
> + continue;
> + }
> + }
> +
> if (folio_test_anon(folio) && !folio_test_swapbacked(folio)) {
> if (unmap_huge_pmd_locked(vma, pvmw.address, pvmw.pmd, folio))
> goto walk_done;
> @@ -2325,6 +2386,27 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
> mmu_notifier_invalidate_range_start(&range);
>
> while (page_vma_mapped_walk(&pvmw)) {
> + /* Handle PUD-mapped THP first */
> + if (!pvmw.pte && !pvmw.pmd) {
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> + /*
> + * PUD-mapped THP: skip migration to preserve the huge
> + * page. Splitting would defeat the purpose of PUD THPs.
> + * Return false to indicate migration failure, which
> + * will cause alloc_contig_range() to try a different
> + * memory region.
> + */
> + if (pvmw.pud && pud_trans_huge(*pvmw.pud)) {
> + page_vma_mapped_walk_done(&pvmw);
> + ret = false;
> + break;
> + }
> +#endif
> + /* Unexpected state: !pte && !pmd but not a PUD THP */
> + page_vma_mapped_walk_done(&pvmw);
> + break;
> + }
> +
> /* PMD-mapped THP migration entry */
> if (!pvmw.pte) {
> __maybe_unused unsigned long pfn;
> @@ -2607,10 +2689,10 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
>
> /*
> * Migration always ignores mlock and only supports TTU_RMAP_LOCKED and
> - * TTU_SPLIT_HUGE_PMD, TTU_SYNC, and TTU_BATCH_FLUSH flags.
> + * TTU_SPLIT_HUGE_PMD, TTU_SPLIT_HUGE_PUD, TTU_SYNC, and TTU_BATCH_FLUSH flags.
> */
> if (WARN_ON_ONCE(flags & ~(TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
> - TTU_SYNC | TTU_BATCH_FLUSH)))
> + TTU_SPLIT_HUGE_PUD | TTU_SYNC | TTU_BATCH_FLUSH)))
> return;
>
> if (folio_is_zone_device(folio) &&
> --
> 2.47.3
>
--
Kiryl Shutsemau / Kirill A. Shutemov
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 00/12] mm: PUD (1GB) THP implementation
2026-02-02 0:50 [RFC 00/12] mm: PUD (1GB) THP implementation Usama Arif
` (13 preceding siblings ...)
2026-02-02 4:00 ` Matthew Wilcox
@ 2026-02-02 11:20 ` Lorenzo Stoakes
2026-02-04 1:00 ` Usama Arif
2026-02-02 16:24 ` Zi Yan
15 siblings, 1 reply; 49+ messages in thread
From: Lorenzo Stoakes @ 2026-02-02 11:20 UTC (permalink / raw)
To: Usama Arif
Cc: ziy, Andrew Morton, David Hildenbrand, linux-mm, hannes, riel,
shakeel.butt, kas, baohua, dev.jain, baolin.wang, npache,
Liam.Howlett, ryan.roberts, vbabka, lance.yang, linux-kernel,
kernel-team
OK so this is somewhat unexpected :)
It would have been nice to raise this in the THP cabal or at a conference
etc. so we could discuss approaches ahead of time. Communication is important,
especially with major changes like this.
And PUD THP is especially problematic in that it requires pages that the page
allocator can't give us; presumably you're doing something with CMA, and... it's
a whole kettle of fish.
It's also complicated by the fact we _already_ support it in the DAX, VFIO cases
but it's kinda a weird sorta special case that we need to keep supporting.
There's questions about how this will interact with khugepaged, MADV_COLLAPSE,
mTHP (and really I want to see Nico's series land before we really consider
this).
So overall, I want to be very cautious and SLOW here. So let's please not drop
the RFC tag until David and I are ok with that?
Also the THP code base is in _dire_ need of rework, and I don't really want to
add major new features without us paying down some technical debt, to be honest.
So let's proceed with caution, and treat this as a very early bit of
experimental code.
Thanks, Lorenzo
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 00/12] mm: PUD (1GB) THP implementation
2026-02-02 2:44 ` [RFC 00/12] mm: PUD (1GB) THP implementation Rik van Riel
@ 2026-02-02 11:30 ` Lorenzo Stoakes
2026-02-02 15:50 ` Zi Yan
0 siblings, 1 reply; 49+ messages in thread
From: Lorenzo Stoakes @ 2026-02-02 11:30 UTC (permalink / raw)
To: Rik van Riel
Cc: Usama Arif, ziy, Andrew Morton, David Hildenbrand, linux-mm,
hannes, shakeel.butt, kas, baohua, dev.jain, baolin.wang, npache,
Liam.Howlett, ryan.roberts, vbabka, lance.yang, linux-kernel,
kernel-team, Frank van der Linden
On Sun, Feb 01, 2026 at 09:44:12PM -0500, Rik van Riel wrote:
> On Sun, 2026-02-01 at 16:50 -0800, Usama Arif wrote:
> >
> > 1. Static Reservation: hugetlbfs requires pre-allocating huge pages
> > at boot
> > or runtime, taking memory away. This requires capacity planning,
> > administrative overhead, and makes workload orchastration much
> > much more
> > complex, especially colocating with workloads that don't use
> > hugetlbfs.
> >
> To address the obvious objection "but how could we
> possibly allocate 1GB huge pages while the workload
> is running?", I am planning to pick up the CMA balancing
> patch series (thank you, Frank) and get that in an
> upstream ready shape soon.
>
> https://lkml.org/2025/9/15/1735
That link doesn't work?
Did a quick search for CMA balancing on lore, couldn't find anything, could you
provide a lore link?
>
> That patch set looks like another case where no
> amount of internal testing will find every single
> corner case, and we'll probably just want to
> merge it upstream, deploy it experimentally, and
> aggressively deal with anything that might pop up.
I'm not really in favour of this kind of approach. There are plenty of things that
were considered 'temporary' upstream that became rather permanent :)
Maybe we can't cover all corner-cases, but we need to make sure whatever we do
send upstream is maintainable, conceptually sensible and doesn't paint us into
any corners, etc.
>
> With CMA balancing, it would be possibly to just
> have half (or even more) of system memory for
> movable allocations only, which would make it possible
> to allocate 1GB huge pages dynamically.
Could you expand on that?
>
> --
> All Rights Reversed.
Thanks, Lorenzo
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 02/12] mm/thp: add mTHP stats infrastructure for PUD THP
2026-02-02 0:50 ` [RFC 02/12] mm/thp: add mTHP stats infrastructure for PUD THP Usama Arif
@ 2026-02-02 11:56 ` Lorenzo Stoakes
2026-02-05 5:53 ` Usama Arif
0 siblings, 1 reply; 49+ messages in thread
From: Lorenzo Stoakes @ 2026-02-02 11:56 UTC (permalink / raw)
To: Usama Arif
Cc: ziy, Andrew Morton, David Hildenbrand, linux-mm, hannes, riel,
shakeel.butt, kas, baohua, dev.jain, baolin.wang, npache,
Liam.Howlett, ryan.roberts, vbabka, lance.yang, linux-kernel,
kernel-team
On Sun, Feb 01, 2026 at 04:50:19PM -0800, Usama Arif wrote:
> Extend the mTHP (multi-size THP) statistics infrastructure to support
> PUD-sized transparent huge pages.
>
> The mTHP framework tracks statistics for each supported THP size through
> per-order counters exposed via sysfs. To add PUD THP support, PUD_ORDER
> must be included in the set of tracked orders.
>
> With this change, PUD THP events (allocations, faults, splits, swaps)
> are tracked and exposed through the existing sysfs interface at
> /sys/kernel/mm/transparent_hugepage/hugepages-1048576kB/stats/. This
> provides visibility into PUD THP behavior for debugging and performance
> analysis.
>
> Signed-off-by: Usama Arif <usamaarif642@gmail.com>
Yeah we really need to be basing this on mm-unstable once Nico's series is
landed.
I think it's quite important as well for you to check that khugepaged mTHP works
with all of this.
> ---
> include/linux/huge_mm.h | 42 +++++++++++++++++++++++++++++++++++++----
> mm/huge_memory.c | 3 ++-
> 2 files changed, 40 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index e672e45bb9cc7..5509ba8555b6e 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -76,7 +76,13 @@ extern struct kobj_attribute thpsize_shmem_enabled_attr;
> * and including PMD_ORDER, except order-0 (which is not "huge") and order-1
> * (which is a limitation of the THP implementation).
> */
> -#define THP_ORDERS_ALL_ANON ((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1)))
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> +#define THP_ORDERS_ALL_ANON_PUD BIT(PUD_ORDER)
> +#else
> +#define THP_ORDERS_ALL_ANON_PUD 0
> +#endif
> +#define THP_ORDERS_ALL_ANON (((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1))) | \
> + THP_ORDERS_ALL_ANON_PUD)
Err what is this change doing in a 'stats' change? This quietly updates
__thp_vma_allowable_orders() to also support PUD order for anon memory... Can we
put this in the right place?
>
> /*
> * Mask of all large folio orders supported for file THP. Folios in a DAX
> @@ -146,18 +152,46 @@ enum mthp_stat_item {
> };
>
> #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && defined(CONFIG_SYSFS)
> +
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
By the way I'm not a fan of us treating an 'arch has' as a 'will use'.
> +#define MTHP_STAT_COUNT (PMD_ORDER + 2)
Yeah I hate this. This is just 'one more thing to remember'.
> +#define MTHP_STAT_PUD_INDEX (PMD_ORDER + 1) /* PUD uses last index */
> +#else
> +#define MTHP_STAT_COUNT (PMD_ORDER + 1)
> +#endif
> +
> struct mthp_stat {
> - unsigned long stats[ilog2(MAX_PTRS_PER_PTE) + 1][__MTHP_STAT_COUNT];
> + unsigned long stats[MTHP_STAT_COUNT][__MTHP_STAT_COUNT];
> };
>
> DECLARE_PER_CPU(struct mthp_stat, mthp_stats);
>
> +static inline int mthp_stat_order_to_index(int order)
> +{
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> + if (order == PUD_ORDER)
> + return MTHP_STAT_PUD_INDEX;
This seems like a hack again.
> +#endif
> + return order;
> +}
> +
> static inline void mod_mthp_stat(int order, enum mthp_stat_item item, int delta)
> {
> - if (order <= 0 || order > PMD_ORDER)
> + int index;
> +
> + if (order <= 0)
> + return;
> +
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> + if (order != PUD_ORDER && order > PMD_ORDER)
> return;
> +#else
> + if (order > PMD_ORDER)
> + return;
> +#endif
Or we could actually define a max order... except now the hack contorts this
code.
Is it really that bad to just take up memory for the order between PMD_ORDER and
PUD_ORDER? ~72 bytes * cores and we avoid having to do this silly dance.
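Something like the below is all I mean (totally untested sketch, reusing the names from your patch and assuming PUD_ORDER is usable here unconditionally) - the unused rows between PMD_ORDER and PUD_ORDER are the only cost, and there's no index remapping to remember:

struct mthp_stat {
        unsigned long stats[PUD_ORDER + 1][__MTHP_STAT_COUNT];
};

static inline void mod_mthp_stat(int order, enum mthp_stat_item item, int delta)
{
        /* Single bounds check against the largest order we could ever track. */
        if (order <= 0 || order > PUD_ORDER)
                return;

        this_cpu_add(mthp_stats.stats[order][item], delta);
}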
>
> - this_cpu_add(mthp_stats.stats[order][item], delta);
> + index = mthp_stat_order_to_index(order);
> + this_cpu_add(mthp_stats.stats[index][item], delta);
> }
>
> static inline void count_mthp_stat(int order, enum mthp_stat_item item)
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 3128b3beedb0a..d033624d7e1f2 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -598,11 +598,12 @@ static unsigned long sum_mthp_stat(int order, enum mthp_stat_item item)
> {
> unsigned long sum = 0;
> int cpu;
> + int index = mthp_stat_order_to_index(order);
>
> for_each_possible_cpu(cpu) {
> struct mthp_stat *this = &per_cpu(mthp_stats, cpu);
>
> - sum += this->stats[order][item];
> + sum += this->stats[index][item];
> }
>
> return sum;
> --
> 2.47.3
>
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 01/12] mm: add PUD THP ptdesc and rmap support
2026-02-02 0:50 ` [RFC 01/12] mm: add PUD THP ptdesc and rmap support Usama Arif
2026-02-02 10:44 ` Kiryl Shutsemau
@ 2026-02-02 12:15 ` Lorenzo Stoakes
2026-02-04 7:38 ` Usama Arif
1 sibling, 1 reply; 49+ messages in thread
From: Lorenzo Stoakes @ 2026-02-02 12:15 UTC (permalink / raw)
To: Usama Arif
Cc: ziy, Andrew Morton, David Hildenbrand, linux-mm, hannes, riel,
shakeel.butt, kas, baohua, dev.jain, baolin.wang, npache,
Liam.Howlett, ryan.roberts, vbabka, lance.yang, linux-kernel,
kernel-team
I think I'm going to have to do several passes on this, so this is just a
first one :)
On Sun, Feb 01, 2026 at 04:50:18PM -0800, Usama Arif wrote:
> For page table management, PUD THPs need to pre-deposit page tables
> that will be used when the huge page is later split. When a PUD THP
> is allocated, we cannot know in advance when or why it might need to
> be split (COW, partial unmap, reclaim), but we need page tables ready
> for that eventuality. Similar to how PMD THPs deposit a single PTE
> table, PUD THPs deposit a PMD table which itself contains deposited
> PTE tables - a two-level deposit. This commit adds the deposit/withdraw
> infrastructure and a new pud_huge_pmd field in ptdesc to store the
> deposited PMD.
This feels like you're hacking this support in, honestly. The list_head
abuse only adds to that feeling.
And are we now not required to store rather a lot of memory to keep all of
this coherent?
>
> The deposited PMD tables are stored as a singly-linked stack using only
> page->lru.next as the link pointer. A doubly-linked list using the
> standard list_head mechanism would cause memory corruption: list_del()
> poisons both lru.next (offset 8) and lru.prev (offset 16), but lru.prev
> overlaps with ptdesc->pmd_huge_pte at offset 16. Since deposited PMD
> tables have their own deposited PTE tables stored in pmd_huge_pte,
> poisoning lru.prev would corrupt the PTE table list and cause crashes
> when withdrawing PTE tables during split. PMD THPs don't have this
> problem because their deposited PTE tables don't have sub-deposits.
> Using only lru.next avoids the overlap entirely.
Yeah this is horrendous and a hack, I don't consider this at all
upstreamable.
You need to completely rework this.
>
> For reverse mapping, PUD THPs need the same rmap support that PMD THPs
> have. The page_vma_mapped_walk() function is extended to recognize and
> handle PUD-mapped folios during rmap traversal. A new TTU_SPLIT_HUGE_PUD
> flag tells the unmap path to split PUD THPs before proceeding, since
> there is no PUD-level migration entry format - the split converts the
> single PUD mapping into individual PTE mappings that can be migrated
> or swapped normally.
Individual PTE... mappings? You need to be a lot clearer here, page tables
are naturally confusing with entries vs. tables.
Let's be VERY specific here. Do you mean you have 1 PMD table and 512 PTE
tables reserved, spanning 1 PUD entry and 262,144 PTE entries?
>
> Signed-off-by: Usama Arif <usamaarif642@gmail.com>
How does this change interact with existing DAX/VFIO code, which now it
seems will be subject to the mechanisms you introduce here?
Right now a DAX/VFIO PUD mapping is only obtainable via a specially
THP-aligned get_unmapped_area(), and then only at fault time.
Is that the intent here also?
What is your intent - that khugepaged do this, or on alloc? How does it
interact with MADV_COLLAPSE?
I noted this on the 2nd patch, but you're changing THP_ORDERS_ALL_ANON, which
alters __thp_vma_allowable_orders() behaviour - that change belongs here...
> ---
> include/linux/huge_mm.h | 5 +++
> include/linux/mm.h | 19 ++++++++
> include/linux/mm_types.h | 5 ++-
> include/linux/pgtable.h | 8 ++++
> include/linux/rmap.h | 7 ++-
> mm/huge_memory.c | 8 ++++
> mm/internal.h | 3 ++
> mm/page_vma_mapped.c | 35 +++++++++++++++
> mm/pgtable-generic.c | 83 ++++++++++++++++++++++++++++++++++
> mm/rmap.c | 96 +++++++++++++++++++++++++++++++++++++---
> 10 files changed, 260 insertions(+), 9 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index a4d9f964dfdea..e672e45bb9cc7 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -463,10 +463,15 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
> unsigned long address);
>
> #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> +void split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
> + unsigned long address);
> int change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
> pud_t *pudp, unsigned long addr, pgprot_t newprot,
> unsigned long cp_flags);
> #else
> +static inline void
> +split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
> + unsigned long address) {}
> static inline int
> change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
> pud_t *pudp, unsigned long addr, pgprot_t newprot,
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index ab2e7e30aef96..a15e18df0f771 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -3455,6 +3455,22 @@ static inline bool pagetable_pmd_ctor(struct mm_struct *mm,
> * considered ready to switch to split PUD locks yet; there may be places
> * which need to be converted from page_table_lock.
> */
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> +static inline struct page *pud_pgtable_page(pud_t *pud)
> +{
> + unsigned long mask = ~(PTRS_PER_PUD * sizeof(pud_t) - 1);
> +
> + return virt_to_page((void *)((unsigned long)pud & mask));
> +}
> +
> +static inline struct ptdesc *pud_ptdesc(pud_t *pud)
> +{
> + return page_ptdesc(pud_pgtable_page(pud));
> +}
> +
> +#define pud_huge_pmd(pud) (pud_ptdesc(pud)->pud_huge_pmd)
> +#endif
> +
> static inline spinlock_t *pud_lockptr(struct mm_struct *mm, pud_t *pud)
> {
> return &mm->page_table_lock;
> @@ -3471,6 +3487,9 @@ static inline spinlock_t *pud_lock(struct mm_struct *mm, pud_t *pud)
> static inline void pagetable_pud_ctor(struct ptdesc *ptdesc)
> {
> __pagetable_ctor(ptdesc);
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> + ptdesc->pud_huge_pmd = NULL;
> +#endif
> }
>
> static inline void pagetable_p4d_ctor(struct ptdesc *ptdesc)
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 78950eb8926dc..26a38490ae2e1 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -577,7 +577,10 @@ struct ptdesc {
> struct list_head pt_list;
> struct {
> unsigned long _pt_pad_1;
> - pgtable_t pmd_huge_pte;
> + union {
> + pgtable_t pmd_huge_pte; /* For PMD tables: deposited PTE */
> + pgtable_t pud_huge_pmd; /* For PUD tables: deposited PMD list */
> + };
> };
> };
> unsigned long __page_mapping;
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 2f0dd3a4ace1a..3ce733c1d71a2 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -1168,6 +1168,14 @@ extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
> #define arch_needs_pgtable_deposit() (false)
> #endif
>
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> +extern void pgtable_trans_huge_pud_deposit(struct mm_struct *mm, pud_t *pudp,
> + pmd_t *pmd_table);
> +extern pmd_t *pgtable_trans_huge_pud_withdraw(struct mm_struct *mm, pud_t *pudp);
> +extern void pud_deposit_pte(pmd_t *pmd_table, pgtable_t pgtable);
> +extern pgtable_t pud_withdraw_pte(pmd_t *pmd_table);
These are useless externs.
> +#endif
> +
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> /*
> * This is an implementation of pmdp_establish() that is only suitable for an
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index daa92a58585d9..08cd0a0eb8763 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -101,6 +101,7 @@ enum ttu_flags {
> * do a final flush if necessary */
> TTU_RMAP_LOCKED = 0x80, /* do not grab rmap lock:
> * caller holds it */
> + TTU_SPLIT_HUGE_PUD = 0x100, /* split huge PUD if any */
> };
>
> #ifdef CONFIG_MMU
> @@ -473,6 +474,8 @@ void folio_add_anon_rmap_ptes(struct folio *, struct page *, int nr_pages,
> folio_add_anon_rmap_ptes(folio, page, 1, vma, address, flags)
> void folio_add_anon_rmap_pmd(struct folio *, struct page *,
> struct vm_area_struct *, unsigned long address, rmap_t flags);
> +void folio_add_anon_rmap_pud(struct folio *, struct page *,
> + struct vm_area_struct *, unsigned long address, rmap_t flags);
> void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
> unsigned long address, rmap_t flags);
> void folio_add_file_rmap_ptes(struct folio *, struct page *, int nr_pages,
> @@ -933,6 +936,7 @@ struct page_vma_mapped_walk {
> pgoff_t pgoff;
> struct vm_area_struct *vma;
> unsigned long address;
> + pud_t *pud;
> pmd_t *pmd;
> pte_t *pte;
> spinlock_t *ptl;
> @@ -970,7 +974,7 @@ static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
> static inline void
> page_vma_mapped_walk_restart(struct page_vma_mapped_walk *pvmw)
> {
> - WARN_ON_ONCE(!pvmw->pmd && !pvmw->pte);
> + WARN_ON_ONCE(!pvmw->pud && !pvmw->pmd && !pvmw->pte);
>
> if (likely(pvmw->ptl))
> spin_unlock(pvmw->ptl);
> @@ -978,6 +982,7 @@ page_vma_mapped_walk_restart(struct page_vma_mapped_walk *pvmw)
> WARN_ON_ONCE(1);
>
> pvmw->ptl = NULL;
> + pvmw->pud = NULL;
> pvmw->pmd = NULL;
> pvmw->pte = NULL;
> }
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 40cf59301c21a..3128b3beedb0a 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2933,6 +2933,14 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
> spin_unlock(ptl);
> mmu_notifier_invalidate_range_end(&range);
> }
> +
> +void split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
> + unsigned long address)
> +{
> + VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PUD_SIZE));
> + if (pud_trans_huge(*pud))
> + __split_huge_pud_locked(vma, pud, address);
> +}
> #else
> void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
> unsigned long address)
> diff --git a/mm/internal.h b/mm/internal.h
> index 9ee336aa03656..21d5c00f638dc 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -545,6 +545,9 @@ int user_proactive_reclaim(char *buf,
> * in mm/rmap.c:
> */
> pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address);
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> +pud_t *mm_find_pud(struct mm_struct *mm, unsigned long address);
> +#endif
>
> /*
> * in mm/page_alloc.c
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index b38a1d00c971b..d31eafba38041 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -146,6 +146,18 @@ static bool check_pmd(unsigned long pfn, struct page_vma_mapped_walk *pvmw)
> return true;
> }
>
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> +/* Returns true if the two ranges overlap. Careful to not overflow. */
> +static bool check_pud(unsigned long pfn, struct page_vma_mapped_walk *pvmw)
> +{
> + if ((pfn + HPAGE_PUD_NR - 1) < pvmw->pfn)
> + return false;
> + if (pfn > pvmw->pfn + pvmw->nr_pages - 1)
> + return false;
> + return true;
> +}
> +#endif
> +
> static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
> {
> pvmw->address = (pvmw->address + size) & ~(size - 1);
> @@ -188,6 +200,10 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
> pud_t *pud;
> pmd_t pmde;
>
> + /* The only possible pud mapping has been handled on last iteration */
> + if (pvmw->pud && !pvmw->pmd)
> + return not_found(pvmw);
> +
> /* The only possible pmd mapping has been handled on last iteration */
> if (pvmw->pmd && !pvmw->pte)
> return not_found(pvmw);
> @@ -234,6 +250,25 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
> continue;
> }
>
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
Said it elsewhere, but it's really weird to treat an arch having the
ability to do something as a go-ahead for actually doing it.
> + /* Check for PUD-mapped THP */
> + if (pud_trans_huge(*pud)) {
> + pvmw->pud = pud;
> + pvmw->ptl = pud_lock(mm, pud);
> + if (likely(pud_trans_huge(*pud))) {
> + if (pvmw->flags & PVMW_MIGRATION)
> + return not_found(pvmw);
> + if (!check_pud(pud_pfn(*pud), pvmw))
> + return not_found(pvmw);
> + return true;
> + }
> + /* PUD was split under us, retry at PMD level */
> + spin_unlock(pvmw->ptl);
> + pvmw->ptl = NULL;
> + pvmw->pud = NULL;
> + }
> +#endif
> +
Yeah, as I said elsewhere, we've got to be refactoring, not copy/pasting with
modifications :)
> pvmw->pmd = pmd_offset(pud, pvmw->address);
> /*
> * Make sure the pmd value isn't cached in a register by the
> diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
> index d3aec7a9926ad..2047558ddcd79 100644
> --- a/mm/pgtable-generic.c
> +++ b/mm/pgtable-generic.c
> @@ -195,6 +195,89 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
> }
> #endif
>
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> +/*
> + * Deposit page tables for PUD THP.
> + * Called with PUD lock held. Stores PMD tables in a singly-linked stack
> + * via pud_huge_pmd, using only pmd_page->lru.next as the link pointer.
> + *
> + * IMPORTANT: We use only lru.next (offset 8) for linking, NOT the full
> + * list_head. This is because lru.prev (offset 16) overlaps with
> + * ptdesc->pmd_huge_pte, which stores the PMD table's deposited PTE tables.
> + * Using list_del() would corrupt pmd_huge_pte with LIST_POISON2.
This is horrible and feels like a hack? Treating a doubly-linked list as a
singly-linked one like this is not upstreamable.
> + *
> + * PTE tables should be deposited into the PMD using pud_deposit_pte().
> + */
> +void pgtable_trans_huge_pud_deposit(struct mm_struct *mm, pud_t *pudp,
> + pmd_t *pmd_table)
This is horrid - you're depositing the PMD using the... questionable
list_head abuse, but then also have pud_deposit_pte()... But here we're
depositing a PMD - shouldn't the name reflect that?
> +{
> + pgtable_t pmd_page = virt_to_page(pmd_table);
> +
> + assert_spin_locked(pud_lockptr(mm, pudp));
> +
> + /* Push onto stack using only lru.next as the link */
> + pmd_page->lru.next = (struct list_head *)pud_huge_pmd(pudp);
Yikes...
> + pud_huge_pmd(pudp) = pmd_page;
> +}
> +
> +/*
> + * Withdraw the deposited PMD table for PUD THP split or zap.
> + * Called with PUD lock held.
> + * Returns NULL if no more PMD tables are deposited.
> + */
> +pmd_t *pgtable_trans_huge_pud_withdraw(struct mm_struct *mm, pud_t *pudp)
> +{
> + pgtable_t pmd_page;
> +
> + assert_spin_locked(pud_lockptr(mm, pudp));
> +
> + pmd_page = pud_huge_pmd(pudp);
> + if (!pmd_page)
> + return NULL;
> +
> + /* Pop from stack - lru.next points to next PMD page (or NULL) */
> + pud_huge_pmd(pudp) = (pgtable_t)pmd_page->lru.next;
Where's the popping? You're just assigning here.
> +
> + return page_address(pmd_page);
> +}
> +
> +/*
> + * Deposit a PTE table into a standalone PMD table (not yet in page table hierarchy).
> + * Used for PUD THP pre-deposit. The PMD table's pmd_huge_pte stores a linked list.
> + * No lock assertion since the PMD isn't visible yet.
> + */
> +void pud_deposit_pte(pmd_t *pmd_table, pgtable_t pgtable)
> +{
> + struct ptdesc *ptdesc = virt_to_ptdesc(pmd_table);
> +
> + /* FIFO - add to front of list */
> + if (!ptdesc->pmd_huge_pte)
> + INIT_LIST_HEAD(&pgtable->lru);
> + else
> + list_add(&pgtable->lru, &ptdesc->pmd_huge_pte->lru);
> + ptdesc->pmd_huge_pte = pgtable;
> +}
> +
> +/*
> + * Withdraw a PTE table from a standalone PMD table.
> + * Returns NULL if no more PTE tables are deposited.
> + */
> +pgtable_t pud_withdraw_pte(pmd_t *pmd_table)
> +{
> + struct ptdesc *ptdesc = virt_to_ptdesc(pmd_table);
> + pgtable_t pgtable;
> +
> + pgtable = ptdesc->pmd_huge_pte;
> + if (!pgtable)
> + return NULL;
> + ptdesc->pmd_huge_pte = list_first_entry_or_null(&pgtable->lru,
> + struct page, lru);
> + if (ptdesc->pmd_huge_pte)
> + list_del(&pgtable->lru);
> + return pgtable;
> +}
> +#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
> +
> #ifndef __HAVE_ARCH_PMDP_INVALIDATE
> pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
> pmd_t *pmdp)
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 7b9879ef442d9..69acabd763da4 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -811,6 +811,32 @@ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address)
> return pmd;
> }
>
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> +/*
> + * Returns the actual pud_t* where we expect 'address' to be mapped from, or
> + * NULL if it doesn't exist. No guarantees / checks on what the pud_t*
> + * represents.
> + */
> +pud_t *mm_find_pud(struct mm_struct *mm, unsigned long address)
This series seems to be full of copy/paste.
It's just not acceptable given the state of THP code as I said in reply to
the cover letter - you need to _refactor_ the code.
The code is bug-prone and difficult to maintain as-is; your series has to
reduce the technical debt, not add to it.
> +{
> + pgd_t *pgd;
> + p4d_t *p4d;
> + pud_t *pud = NULL;
> +
> + pgd = pgd_offset(mm, address);
> + if (!pgd_present(*pgd))
> + goto out;
> +
> + p4d = p4d_offset(pgd, address);
> + if (!p4d_present(*p4d))
> + goto out;
> +
> + pud = pud_offset(p4d, address);
> +out:
> + return pud;
> +}
> +#endif
> +
> struct folio_referenced_arg {
> int mapcount;
> int referenced;
> @@ -1415,11 +1441,7 @@ static __always_inline void __folio_add_anon_rmap(struct folio *folio,
> SetPageAnonExclusive(page);
> break;
> case PGTABLE_LEVEL_PUD:
> - /*
> - * Keep the compiler happy, we don't support anonymous
> - * PUD mappings.
> - */
> - WARN_ON_ONCE(1);
> + SetPageAnonExclusive(page);
> break;
> default:
> BUILD_BUG();
> @@ -1503,6 +1525,31 @@ void folio_add_anon_rmap_pmd(struct folio *folio, struct page *page,
> #endif
> }
>
> +/**
> + * folio_add_anon_rmap_pud - add a PUD mapping to a page range of an anon folio
> + * @folio: The folio to add the mapping to
> + * @page: The first page to add
> + * @vma: The vm area in which the mapping is added
> + * @address: The user virtual address of the first page to map
> + * @flags: The rmap flags
> + *
> + * The page range of folio is defined by [first_page, first_page + HPAGE_PUD_NR)
> + *
> + * The caller needs to hold the page table lock, and the page must be locked in
> + * the anon_vma case: to serialize mapping,index checking after setting.
> + */
> +void folio_add_anon_rmap_pud(struct folio *folio, struct page *page,
> + struct vm_area_struct *vma, unsigned long address, rmap_t flags)
> +{
> +#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && \
> + defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
> + __folio_add_anon_rmap(folio, page, HPAGE_PUD_NR, vma, address, flags,
> + PGTABLE_LEVEL_PUD);
> +#else
> + WARN_ON_ONCE(true);
> +#endif
> +}
More copy/paste... Maybe unavoidable in this case, but it would be good to try.
> +
> /**
> * folio_add_new_anon_rmap - Add mapping to a new anonymous folio.
> * @folio: The folio to add the mapping to.
> @@ -1934,6 +1981,20 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> }
>
> if (!pvmw.pte) {
> + /*
> + * Check for PUD-mapped THP first.
> + * If we have a PUD mapping and TTU_SPLIT_HUGE_PUD is set,
> + * split the PUD to PMD level and restart the walk.
> + */
This is literally describing the code below, it's not useful.
> + if (pvmw.pud && pud_trans_huge(*pvmw.pud)) {
> + if (flags & TTU_SPLIT_HUGE_PUD) {
> + split_huge_pud_locked(vma, pvmw.pud, pvmw.address);
> + flags &= ~TTU_SPLIT_HUGE_PUD;
> + page_vma_mapped_walk_restart(&pvmw);
> + continue;
> + }
> + }
> +
> if (folio_test_anon(folio) && !folio_test_swapbacked(folio)) {
> if (unmap_huge_pmd_locked(vma, pvmw.address, pvmw.pmd, folio))
> goto walk_done;
> @@ -2325,6 +2386,27 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
> mmu_notifier_invalidate_range_start(&range);
>
> while (page_vma_mapped_walk(&pvmw)) {
> + /* Handle PUD-mapped THP first */
How did/will this interact with DAX, VFIO PUD THP?
> + if (!pvmw.pte && !pvmw.pmd) {
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
Won't pud_trans_huge() imply this...
> + /*
> + * PUD-mapped THP: skip migration to preserve the huge
> + * page. Splitting would defeat the purpose of PUD THPs.
> + * Return false to indicate migration failure, which
> + * will cause alloc_contig_range() to try a different
> + * memory region.
> + */
> + if (pvmw.pud && pud_trans_huge(*pvmw.pud)) {
> + page_vma_mapped_walk_done(&pvmw);
> + ret = false;
> + break;
> + }
> +#endif
> + /* Unexpected state: !pte && !pmd but not a PUD THP */
> + page_vma_mapped_walk_done(&pvmw);
> + break;
> + }
> +
> /* PMD-mapped THP migration entry */
> if (!pvmw.pte) {
> __maybe_unused unsigned long pfn;
> @@ -2607,10 +2689,10 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
>
> /*
> * Migration always ignores mlock and only supports TTU_RMAP_LOCKED and
> - * TTU_SPLIT_HUGE_PMD, TTU_SYNC, and TTU_BATCH_FLUSH flags.
> + * TTU_SPLIT_HUGE_PMD, TTU_SPLIT_HUGE_PUD, TTU_SYNC, and TTU_BATCH_FLUSH flags.
> */
> if (WARN_ON_ONCE(flags & ~(TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
> - TTU_SYNC | TTU_BATCH_FLUSH)))
> + TTU_SPLIT_HUGE_PUD | TTU_SYNC | TTU_BATCH_FLUSH)))
> return;
>
> if (folio_is_zone_device(folio) &&
> --
> 2.47.3
>
This isn't a final review, I'll have to look more thoroughly through here
over time and you're going to have to be patient in general :)
Cheers, Lorenzo
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 00/12] mm: PUD (1GB) THP implementation
2026-02-02 11:30 ` Lorenzo Stoakes
@ 2026-02-02 15:50 ` Zi Yan
2026-02-04 10:56 ` Lorenzo Stoakes
2026-02-05 11:22 ` David Hildenbrand (arm)
0 siblings, 2 replies; 49+ messages in thread
From: Zi Yan @ 2026-02-02 15:50 UTC (permalink / raw)
To: Lorenzo Stoakes, David Hildenbrand
Cc: Rik van Riel, Usama Arif, Andrew Morton, linux-mm, hannes,
shakeel.butt, kas, baohua, dev.jain, baolin.wang, npache,
Liam.Howlett, ryan.roberts, vbabka, lance.yang, linux-kernel,
kernel-team, Frank van der Linden
On 2 Feb 2026, at 6:30, Lorenzo Stoakes wrote:
> On Sun, Feb 01, 2026 at 09:44:12PM -0500, Rik van Riel wrote:
>> On Sun, 2026-02-01 at 16:50 -0800, Usama Arif wrote:
>>>
>>> 1. Static Reservation: hugetlbfs requires pre-allocating huge pages
>>> at boot
>>> or runtime, taking memory away. This requires capacity planning,
>>> administrative overhead, and makes workload orchastration much
>>> much more
>>> complex, especially colocating with workloads that don't use
>>> hugetlbfs.
>>>
>> To address the obvious objection "but how could we
>> possibly allocate 1GB huge pages while the workload
>> is running?", I am planning to pick up the CMA balancing
>> patch series (thank you, Frank) and get that in an
>> upstream ready shape soon.
>>
>> https://lkml.org/2025/9/15/1735
>
> That link doesn't work?
>
> Did a quick search for CMA balancing on lore, couldn't find anything, could you
> provide a lore link?
https://lwn.net/Articles/1038263/
>
>>
>> That patch set looks like another case where no
>> amount of internal testing will find every single
>> corner case, and we'll probably just want to
>> merge it upstream, deploy it experimentally, and
>> aggressively deal with anything that might pop up.
>
> I'm not really in favour of this kind of approach. There's plenty of things that
> were considered 'temporary' upstream that became rather permanent :)
>
> Maybe we can't cover all corner-cases, but we need to make sure whatever we do
> send upstream is maintainable, conceptually sensible and doesn't paint us into
> any corners, etc.
>
>>
>> With CMA balancing, it would be possibly to just
>> have half (or even more) of system memory for
>> movable allocations only, which would make it possible
>> to allocate 1GB huge pages dynamically.
>
> Could you expand on that?
I also would like to hear David’s opinion on using CMA for 1GB THP.
He did not like it[1] when I posted my patch back in 2020, but it has
been more than 5 years. :)
The other direction I explored is to get 1GB THP from buddy allocator.
That means we need to:
1. bump MAX_PAGE_ORDER to 18 (a 1GB folio is 2^30 / 2^12 = 2^18 base pages, so
the buddy allocator must serve order-18 allocations) or make it a runtime
variable so that only 1GB THP users need to bump it,
2. handle cross memory section PFN merge in buddy allocator,
3. improve anti-fragmentation mechanism for 1GB range compaction.
1 is easier-ish[2]. I have not looked into 2 and 3 much yet.
[1] https://lore.kernel.org/all/52bc2d5d-eb8a-83de-1c93-abd329132d58@redhat.com/
[2] https://lore.kernel.org/all/20210805190253.2795604-1-zi.yan@sent.com/
Best Regards,
Yan, Zi
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 01/12] mm: add PUD THP ptdesc and rmap support
2026-02-02 10:44 ` Kiryl Shutsemau
@ 2026-02-02 16:01 ` Zi Yan
2026-02-03 22:07 ` Usama Arif
0 siblings, 1 reply; 49+ messages in thread
From: Zi Yan @ 2026-02-02 16:01 UTC (permalink / raw)
To: Usama Arif, Kiryl Shutsemau
Cc: Andrew Morton, David Hildenbrand, lorenzo.stoakes, linux-mm,
hannes, riel, shakeel.butt, baohua, dev.jain, baolin.wang,
npache, Liam.Howlett, ryan.roberts, vbabka, lance.yang,
linux-kernel, kernel-team
On 2 Feb 2026, at 5:44, Kiryl Shutsemau wrote:
> On Sun, Feb 01, 2026 at 04:50:18PM -0800, Usama Arif wrote:
>> For page table management, PUD THPs need to pre-deposit page tables
>> that will be used when the huge page is later split. When a PUD THP
>> is allocated, we cannot know in advance when or why it might need to
>> be split (COW, partial unmap, reclaim), but we need page tables ready
>> for that eventuality. Similar to how PMD THPs deposit a single PTE
>> table, PUD THPs deposit a PMD table which itself contains deposited
>> PTE tables - a two-level deposit. This commit adds the deposit/withdraw
>> infrastructure and a new pud_huge_pmd field in ptdesc to store the
>> deposited PMD.
>>
>> The deposited PMD tables are stored as a singly-linked stack using only
>> page->lru.next as the link pointer. A doubly-linked list using the
>> standard list_head mechanism would cause memory corruption: list_del()
>> poisons both lru.next (offset 8) and lru.prev (offset 16), but lru.prev
>> overlaps with ptdesc->pmd_huge_pte at offset 16. Since deposited PMD
>> tables have their own deposited PTE tables stored in pmd_huge_pte,
>> poisoning lru.prev would corrupt the PTE table list and cause crashes
>> when withdrawing PTE tables during split. PMD THPs don't have this
>> problem because their deposited PTE tables don't have sub-deposits.
>> Using only lru.next avoids the overlap entirely.
>>
>> For reverse mapping, PUD THPs need the same rmap support that PMD THPs
>> have. The page_vma_mapped_walk() function is extended to recognize and
>> handle PUD-mapped folios during rmap traversal. A new TTU_SPLIT_HUGE_PUD
>> flag tells the unmap path to split PUD THPs before proceeding, since
>> there is no PUD-level migration entry format - the split converts the
>> single PUD mapping into individual PTE mappings that can be migrated
>> or swapped normally.
>>
>> Signed-off-by: Usama Arif <usamaarif642@gmail.com>
>> ---
>> include/linux/huge_mm.h | 5 +++
>> include/linux/mm.h | 19 ++++++++
>> include/linux/mm_types.h | 5 ++-
>> include/linux/pgtable.h | 8 ++++
>> include/linux/rmap.h | 7 ++-
>> mm/huge_memory.c | 8 ++++
>> mm/internal.h | 3 ++
>> mm/page_vma_mapped.c | 35 +++++++++++++++
>> mm/pgtable-generic.c | 83 ++++++++++++++++++++++++++++++++++
>> mm/rmap.c | 96 +++++++++++++++++++++++++++++++++++++---
>> 10 files changed, 260 insertions(+), 9 deletions(-)
>>
<snip>
>> diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
>> index d3aec7a9926ad..2047558ddcd79 100644
>> --- a/mm/pgtable-generic.c
>> +++ b/mm/pgtable-generic.c
>> @@ -195,6 +195,89 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
>> }
>> #endif
>>
>> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>> +/*
>> + * Deposit page tables for PUD THP.
>> + * Called with PUD lock held. Stores PMD tables in a singly-linked stack
>> + * via pud_huge_pmd, using only pmd_page->lru.next as the link pointer.
>> + *
>> + * IMPORTANT: We use only lru.next (offset 8) for linking, NOT the full
>> + * list_head. This is because lru.prev (offset 16) overlaps with
>> + * ptdesc->pmd_huge_pte, which stores the PMD table's deposited PTE tables.
>> + * Using list_del() would corrupt pmd_huge_pte with LIST_POISON2.
>
> This is ugly.
>
> Sounds like you want to use llist_node/head instead of list_head for this.
>
> You might able to avoid taking the lock in some cases. Note that
> pud_lockptr() is mm->page_table_lock as of now.
I agree. I used llist_node/head in my implementation[1] and it works.
I have an illustration at[2] to show the concept. Feel free to reuse the code.
[1] https://lore.kernel.org/all/20200928193428.GB30994@casper.infradead.org/
[2] https://normal.zone/blog/2021-01-04-linux-1gb-thp-2/#new-mechanism
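The shape is roughly the below (sketch only - 'deposit_node' is a made-up
llist_node field in the deposited PMD table's struct page, and the list head
would live where pud_huge_pmd is now):

static void pud_deposit_pmd(struct llist_head *head, struct page *pmd_page)
{
        llist_add(&pmd_page->deposit_node, head);
}

static struct page *pud_withdraw_pmd(struct llist_head *head)
{
        struct llist_node *node = llist_del_first(head);

        return node ? llist_entry(node, struct page, deposit_node) : NULL;
}

Since an llist_node is a single pointer, nothing ever touches the word that
overlaps pmd_huge_pte.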
Best Regards,
Yan, Zi
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 00/12] mm: PUD (1GB) THP implementation
2026-02-02 0:50 [RFC 00/12] mm: PUD (1GB) THP implementation Usama Arif
` (14 preceding siblings ...)
2026-02-02 11:20 ` Lorenzo Stoakes
@ 2026-02-02 16:24 ` Zi Yan
2026-02-03 23:29 ` Usama Arif
15 siblings, 1 reply; 49+ messages in thread
From: Zi Yan @ 2026-02-02 16:24 UTC (permalink / raw)
To: Usama Arif
Cc: Andrew Morton, David Hildenbrand, lorenzo.stoakes, linux-mm,
hannes, riel, shakeel.butt, kas, baohua, dev.jain, baolin.wang,
npache, Liam.Howlett, ryan.roberts, vbabka, lance.yang,
linux-kernel, kernel-team
On 1 Feb 2026, at 19:50, Usama Arif wrote:
> This is an RFC series to implement 1GB PUD-level THPs, allowing
> applications to benefit from reduced TLB pressure without requiring
> hugetlbfs. The patches are based on top of
> f9b74c13b773b7c7e4920d7bc214ea3d5f37b422 from mm-stable (6.19-rc6).
It is nice to see you are working on 1GB THP.
>
> Motivation: Why 1GB THP over hugetlbfs?
> =======================================
>
> While hugetlbfs provides 1GB huge pages today, it has significant limitations
> that make it unsuitable for many workloads:
>
> 1. Static Reservation: hugetlbfs requires pre-allocating huge pages at boot
> or runtime, taking memory away. This requires capacity planning,
> administrative overhead, and makes workload orchastration much much more
> complex, especially colocating with workloads that don't use hugetlbfs.
But you are using CMA, the same allocation mechanism as hugetlb_cma. What
is the difference?
>
> 4. No Fallback: If a 1GB huge page cannot be allocated, hugetlbfs fails
> rather than falling back to smaller pages. This makes it fragile under
> memory pressure.
True.
>
> 4. No Splitting: hugetlbfs pages cannot be split when only partial access
> is needed, leading to memory waste and preventing partial reclaim.
Since you have a PUD THP implementation, have you run any workload on it?
How often do you see a PUD THP split?
Oh, you actually ran 512MB THP on ARM64 (I saw it below) - do you have
any split stats to show the necessity of THP split?
>
> 5. Memory Accounting: hugetlbfs memory is accounted separately and cannot
> be easily shared with regular memory pools.
True.
>
> PUD THP solves these limitations by integrating 1GB pages into the existing
> THP infrastructure.
The main advantage of PUD THP over hugetlb is that it can be split and mapped
at sub-folio level. Do you have any data to support the necessity of those
operations?
I wonder if it would be easier to just support 1GB folios in core-mm first,
and we can add 1GB THP split and sub-folio mapping later. With that, we
can move hugetlb users to 1GB folios.
BTW, without split support you can apply HVO to a 1GB folio to save memory;
not being able to do that is a disadvantage of PUD THP. Have you taken that
into consideration? Basically, switching from hugetlb to PUD THP, you will
lose memory to vmemmap usage.
>
> Performance Results
> ===================
>
> Benchmark results of these patches on Intel Xeon Platinum 8321HC:
>
> Test: True Random Memory Access [1] test of 4GB memory region with pointer
> chasing workload (4M random pointer dereferences through memory):
>
> | Metric | PUD THP (1GB) | PMD THP (2MB) | Change |
> |-------------------|---------------|---------------|--------------|
> | Memory access | 88 ms | 134 ms | 34% faster |
> | Page fault time | 898 ms | 331 ms | 2.7x slower |
>
> Page faulting 1G pages is 2.7x slower (Allocating 1G pages is hard :)).
> For long-running workloads this will be a one-off cost, and the 34%
> improvement in access latency provides significant benefit.
>
> ARM with 64K PAGE_SZIE supports 512M PMD THPs. In meta, we have a CPU
> bound workload running on a large number of ARM servers (256G). I enabled
> the 512M THP settings to always for a 100 servers in production (didn't
> really have high expectations :)). The average memory used for the workload
> increased from 217G to 233G. The amount of memory backed by 512M pages was
> 68G! The dTLB misses went down by 26% and the PID multiplier increased input
> by 5.9% (This is a very significant improvment in workload performance).
> A significant number of these THPs were faulted in at application start when
> were present across different VMAs. Ofcourse getting these 512M pages is
> easier on ARM due to bigger PAGE_SIZE and pageblock order.
>
> I am hoping that these patches for 1G THP can be used to provide similar
> benefits for x86. I expect workloads to fault them in at start time when there
> is plenty of free memory available.
>
>
> Previous attempt by Zi Yan
> ==========================
>
> Zi Yan attempted 1G THPs [2] in kernel version 5.11. There have been
> significant changes in kernel since then, including folio conversion, mTHP
> framework, ptdesc, rmap changes, etc. I found it easier to use the current PMD
> code as reference for making 1G PUD THP work. I am hoping Zi can provide
> guidance on these patches!
I am more than happy to help you. :)
>
> Major Design Decisions
> ======================
>
> 1. No shared 1G zero page: The memory cost would be quite significant!
>
> 2. Page Table Pre-deposit Strategy
> PMD THP deposits a single PTE page table. PUD THP deposits 512 PTE
> page tables (one for each potential PMD entry after split).
> We allocate a PMD page table and use its pmd_huge_pte list to store
> the deposited PTE tables. This ensures split operations don't fail due
> to page table allocation failures (at the cost of 2M per PUD THP)
>
> 3. Split to Base Pages
> When a PUD THP must be split (COW, partial unmap, mprotect), we split
> directly to base pages (262,144 PTEs). The ideal thing would be to split
> to 2M pages and then to 4K pages if needed. However, this would require
> significant rmap and mapcount tracking changes.
>
> 4. COW and fork handling via split
> Copy-on-write and fork for PUD THP triggers a split to base pages, then
> uses existing PTE-level COW infrastructure. Getting another 1G region is
> hard and could fail. If only a 4K is written, copying 1G is a waste.
> Probably this should only be done on CoW and not fork?
>
> 5. Migration via split
> Split PUD to PTEs and migrate individual pages. It is going to be difficult
> to find a 1G continguous memory to migrate to. Maybe its better to not
> allow migration of PUDs at all? I am more tempted to not allow migration,
> but have kept splitting in this RFC.
Without migration, PUD THP loses its flexibility and transparency. But with
its 1GB size, I also wonder what the purpose of PUD THP migration can be.
It does not create memory fragmentation, since it is the largest folio size
we have and is contiguous. NUMA balancing a 1GB THP seems like too much work.
BTW, I posted many questions, but that does not mean I object to the patchset.
I just want to understand your use case better, reduce unnecessary
code changes, and hopefully get it upstreamed this time. :)
Thank you for the work.
>
>
> Reviewers guide
> ===============
>
> Most of the code is written by adapting from PMD code. For e.g. the PUD page
> fault path is very similar to PMD. The difference is no shared zero page and
> the page table deposit strategy. I think the easiest way to review this series
> is to compare with PMD code.
>
> Test results
> ============
>
> 1..7
> # Starting 7 tests from 1 test cases.
> # RUN pud_thp.basic_allocation ...
> # pud_thp_test.c:169:basic_allocation:PUD THP allocated (anon_fault_alloc: 0 -> 1)
> # OK pud_thp.basic_allocation
> ok 1 pud_thp.basic_allocation
> # RUN pud_thp.read_write_access ...
> # OK pud_thp.read_write_access
> ok 2 pud_thp.read_write_access
> # RUN pud_thp.fork_cow ...
> # pud_thp_test.c:236:fork_cow:Fork COW completed (thp_split_pud: 0 -> 1)
> # OK pud_thp.fork_cow
> ok 3 pud_thp.fork_cow
> # RUN pud_thp.partial_munmap ...
> # pud_thp_test.c:267:partial_munmap:Partial munmap completed (thp_split_pud: 1 -> 2)
> # OK pud_thp.partial_munmap
> ok 4 pud_thp.partial_munmap
> # RUN pud_thp.mprotect_split ...
> # pud_thp_test.c:293:mprotect_split:mprotect split completed (thp_split_pud: 2 -> 3)
> # OK pud_thp.mprotect_split
> ok 5 pud_thp.mprotect_split
> # RUN pud_thp.reclaim_pageout ...
> # pud_thp_test.c:322:reclaim_pageout:Reclaim completed (thp_split_pud: 3 -> 4)
> # OK pud_thp.reclaim_pageout
> ok 6 pud_thp.reclaim_pageout
> # RUN pud_thp.migration_mbind ...
> # pud_thp_test.c:356:migration_mbind:Migration completed (thp_split_pud: 4 -> 5)
> # OK pud_thp.migration_mbind
> ok 7 pud_thp.migration_mbind
> # PASSED: 7 / 7 tests passed.
> # Totals: pass:7 fail:0 xfail:0 xpass:0 skip:0 error:0
>
> [1] https://gist.github.com/uarif1/bf279b2a01a536cda945ff9f40196a26
> [2] https://lore.kernel.org/linux-mm/20210224223536.803765-1-zi.yan@sent.com/
>
> Signed-off-by: Usama Arif <usamaarif642@gmail.com>
>
> Usama Arif (12):
> mm: add PUD THP ptdesc and rmap support
> mm/thp: add mTHP stats infrastructure for PUD THP
> mm: thp: add PUD THP allocation and fault handling
> mm: thp: implement PUD THP split to PTE level
> mm: thp: add reclaim and migration support for PUD THP
> selftests/mm: add PUD THP basic allocation test
> selftests/mm: add PUD THP read/write access test
> selftests/mm: add PUD THP fork COW test
> selftests/mm: add PUD THP partial munmap test
> selftests/mm: add PUD THP mprotect split test
> selftests/mm: add PUD THP reclaim test
> selftests/mm: add PUD THP migration test
>
> include/linux/huge_mm.h | 60 ++-
> include/linux/mm.h | 19 +
> include/linux/mm_types.h | 5 +-
> include/linux/pgtable.h | 8 +
> include/linux/rmap.h | 7 +-
> mm/huge_memory.c | 535 +++++++++++++++++++++-
> mm/internal.h | 3 +
> mm/memory.c | 8 +-
> mm/migrate.c | 17 +
> mm/page_vma_mapped.c | 35 ++
> mm/pgtable-generic.c | 83 ++++
> mm/rmap.c | 96 +++-
> mm/vmscan.c | 2 +
> tools/testing/selftests/mm/Makefile | 1 +
> tools/testing/selftests/mm/pud_thp_test.c | 360 +++++++++++++++
> 15 files changed, 1197 insertions(+), 42 deletions(-)
> create mode 100644 tools/testing/selftests/mm/pud_thp_test.c
>
> --
> 2.47.3
Best Regards,
Yan, Zi
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 00/12] mm: PUD (1GB) THP implementation
2026-02-02 9:06 ` David Hildenbrand (arm)
@ 2026-02-03 21:11 ` Usama Arif
0 siblings, 0 replies; 49+ messages in thread
From: Usama Arif @ 2026-02-03 21:11 UTC (permalink / raw)
To: David Hildenbrand (arm), Matthew Wilcox
Cc: ziy, Andrew Morton, lorenzo.stoakes, linux-mm, hannes, riel,
shakeel.butt, kas, baohua, dev.jain, baolin.wang, npache,
Liam.Howlett, ryan.roberts, vbabka, lance.yang, linux-kernel,
kernel-team
On 02/02/2026 01:06, David Hildenbrand (arm) wrote:
> On 2/2/26 05:00, Matthew Wilcox wrote:
>> On Sun, Feb 01, 2026 at 04:50:17PM -0800, Usama Arif wrote:
>>> This is an RFC series to implement 1GB PUD-level THPs, allowing
>>> applications to benefit from reduced TLB pressure without requiring
>>> hugetlbfs. The patches are based on top of
>>> f9b74c13b773b7c7e4920d7bc214ea3d5f37b422 from mm-stable (6.19-rc6).
>>
>> I suggest this has not had enough testing. There are dozens of places
>> in the MM which assume that if a folio is at leaast PMD size then it is
>> exactly PMD size. Everywhere that calls folio_test_pmd_mappable() needs
>> to be audited to make sure that it will work properly if the folio is
>> larger than PMD size.
>
> I think the hack (ehm trick) in this patch set is to do it just like dax PUDs: only map through a PUD or through PTEs, not through PMDs.
>
> That also avoids dealing with mapcounts until I sorted that out.
>
Hello!
Thanks for the review! So it's as David said: currently, for the PUD THP case,
we won't run into those paths.
The PUD is split via TTU_SPLIT_HUGE_PUD, which calls __split_huge_pud_locked().
This splits the PUD directly to PTEs (not PMDs), so we never have a PUD folio
going through do_set_pmd(). The anonymous fault path uses
do_huge_pud_anonymous_page(), so we won't go through finish_fault().
When I started working on this, I was really hoping that we could split PUDs to PMDs,
but very quickly realised that's a separate and much more complicated mapcount problem
(which is probably why David is dealing with it, as he mentioned in his reply :P)
and should not be dealt with in this series.
In terms of testing, I would definitely like to add more.
I have added selftests for allocation, memory integrity, fork, partial munmap, mprotect,
reclaim and migration, and am running them with DEBUG_VM to make sure we don't hit VM
bugs/warnings, but I am sure I am missing paths. I will try to think of more,
but please let me know if there are more cases we can come up with.
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 01/12] mm: add PUD THP ptdesc and rmap support
2026-02-02 16:01 ` Zi Yan
@ 2026-02-03 22:07 ` Usama Arif
2026-02-05 4:17 ` Matthew Wilcox
0 siblings, 1 reply; 49+ messages in thread
From: Usama Arif @ 2026-02-03 22:07 UTC (permalink / raw)
To: Zi Yan, Kiryl Shutsemau, lorenzo.stoakes
Cc: Andrew Morton, David Hildenbrand, linux-mm, hannes, riel,
shakeel.butt, baohua, dev.jain, baolin.wang, npache,
Liam.Howlett, ryan.roberts, vbabka, lance.yang, linux-kernel,
kernel-team
On 02/02/2026 08:01, Zi Yan wrote:
> On 2 Feb 2026, at 5:44, Kiryl Shutsemau wrote:
>
>> On Sun, Feb 01, 2026 at 04:50:18PM -0800, Usama Arif wrote:
>>> For page table management, PUD THPs need to pre-deposit page tables
>>> that will be used when the huge page is later split. When a PUD THP
>>> is allocated, we cannot know in advance when or why it might need to
>>> be split (COW, partial unmap, reclaim), but we need page tables ready
>>> for that eventuality. Similar to how PMD THPs deposit a single PTE
>>> table, PUD THPs deposit a PMD table which itself contains deposited
>>> PTE tables - a two-level deposit. This commit adds the deposit/withdraw
>>> infrastructure and a new pud_huge_pmd field in ptdesc to store the
>>> deposited PMD.
>>>
>>> The deposited PMD tables are stored as a singly-linked stack using only
>>> page->lru.next as the link pointer. A doubly-linked list using the
>>> standard list_head mechanism would cause memory corruption: list_del()
>>> poisons both lru.next (offset 8) and lru.prev (offset 16), but lru.prev
>>> overlaps with ptdesc->pmd_huge_pte at offset 16. Since deposited PMD
>>> tables have their own deposited PTE tables stored in pmd_huge_pte,
>>> poisoning lru.prev would corrupt the PTE table list and cause crashes
>>> when withdrawing PTE tables during split. PMD THPs don't have this
>>> problem because their deposited PTE tables don't have sub-deposits.
>>> Using only lru.next avoids the overlap entirely.
>>>
>>> For reverse mapping, PUD THPs need the same rmap support that PMD THPs
>>> have. The page_vma_mapped_walk() function is extended to recognize and
>>> handle PUD-mapped folios during rmap traversal. A new TTU_SPLIT_HUGE_PUD
>>> flag tells the unmap path to split PUD THPs before proceeding, since
>>> there is no PUD-level migration entry format - the split converts the
>>> single PUD mapping into individual PTE mappings that can be migrated
>>> or swapped normally.
>>>
>>> Signed-off-by: Usama Arif <usamaarif642@gmail.com>
>>> ---
>>> include/linux/huge_mm.h | 5 +++
>>> include/linux/mm.h | 19 ++++++++
>>> include/linux/mm_types.h | 5 ++-
>>> include/linux/pgtable.h | 8 ++++
>>> include/linux/rmap.h | 7 ++-
>>> mm/huge_memory.c | 8 ++++
>>> mm/internal.h | 3 ++
>>> mm/page_vma_mapped.c | 35 +++++++++++++++
>>> mm/pgtable-generic.c | 83 ++++++++++++++++++++++++++++++++++
>>> mm/rmap.c | 96 +++++++++++++++++++++++++++++++++++++---
>>> 10 files changed, 260 insertions(+), 9 deletions(-)
>>>
>
> <snip>
>
>>> diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
>>> index d3aec7a9926ad..2047558ddcd79 100644
>>> --- a/mm/pgtable-generic.c
>>> +++ b/mm/pgtable-generic.c
>>> @@ -195,6 +195,89 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
>>> }
>>> #endif
>>>
>>> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>>> +/*
>>> + * Deposit page tables for PUD THP.
>>> + * Called with PUD lock held. Stores PMD tables in a singly-linked stack
>>> + * via pud_huge_pmd, using only pmd_page->lru.next as the link pointer.
>>> + *
>>> + * IMPORTANT: We use only lru.next (offset 8) for linking, NOT the full
>>> + * list_head. This is because lru.prev (offset 16) overlaps with
>>> + * ptdesc->pmd_huge_pte, which stores the PMD table's deposited PTE tables.
>>> + * Using list_del() would corrupt pmd_huge_pte with LIST_POISON2.
>>
>> This is ugly.
>>
>> Sounds like you want to use llist_node/head instead of list_head for this.
>>
>> You might able to avoid taking the lock in some cases. Note that
>> pud_lockptr() is mm->page_table_lock as of now.
>
> I agree. I used llist_node/head in my implementation[1] and it works.
> I have an illustration at[2] to show the concept. Feel free to reuse the code.
>
>
> [1] https://lore.kernel.org/all/20200928193428.GB30994@casper.infradead.org/
> [2] https://normal.zone/blog/2021-01-04-linux-1gb-thp-2/#new-mechanism
>
> Best Regards,
> Yan, Zi
Ah, I should have looked at your patches more! I started out by just using lru
with list_add/list_del, which was of course corrupting the list and took me
way more time than I would like to admit to debug! The diagrams
in your 2nd link are really useful - I ended up drawing those by hand to debug
the corruption issue. I will point to that link in the next series :)
How about something like the below diff on top of this patch? (I haven't included
the comment changes that I will make everywhere.)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 26a38490ae2e1..3653e24ce97d7 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -99,6 +99,9 @@ struct page {
struct list_head buddy_list;
struct list_head pcp_list;
struct llist_node pcp_llist;
+
+ /* PMD pagetable deposit head */
+ struct llist_node pgtable_deposit_head;
};
struct address_space *mapping;
union {
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 2047558ddcd79..764f14d0afcbb 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -215,9 +215,7 @@ void pgtable_trans_huge_pud_deposit(struct mm_struct *mm, pud_t *pudp,
assert_spin_locked(pud_lockptr(mm, pudp));
- /* Push onto stack using only lru.next as the link */
- pmd_page->lru.next = (struct list_head *)pud_huge_pmd(pudp);
- pud_huge_pmd(pudp) = pmd_page;
+ llist_add(&pmd_page->pgtable_deposit_head, (struct llist_head *)&pud_huge_pmd(pudp));
}
/*
@@ -227,16 +225,16 @@ void pgtable_trans_huge_pud_deposit(struct mm_struct *mm, pud_t *pudp,
*/
pmd_t *pgtable_trans_huge_pud_withdraw(struct mm_struct *mm, pud_t *pudp)
{
+ struct llist_node *node;
pgtable_t pmd_page;
assert_spin_locked(pud_lockptr(mm, pudp));
- pmd_page = pud_huge_pmd(pudp);
- if (!pmd_page)
+ node = llist_del_first((struct llist_head *)&pud_huge_pmd(pudp));
+ if (!node)
return NULL;
- /* Pop from stack - lru.next points to next PMD page (or NULL) */
- pud_huge_pmd(pudp) = (pgtable_t)pmd_page->lru.next;
+ pmd_page = llist_entry(node, struct page, pgtable_deposit_head);
return page_address(pmd_page);
}
Also, Zi, is it ok if I add your Co-developed-by on this patch in future revisions?
I didn't want to do that without your explicit approval.
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 00/12] mm: PUD (1GB) THP implementation
2026-02-02 16:24 ` Zi Yan
@ 2026-02-03 23:29 ` Usama Arif
2026-02-04 0:08 ` Frank van der Linden
2026-02-05 18:07 ` Zi Yan
0 siblings, 2 replies; 49+ messages in thread
From: Usama Arif @ 2026-02-03 23:29 UTC (permalink / raw)
To: Zi Yan
Cc: Andrew Morton, David Hildenbrand, lorenzo.stoakes, linux-mm,
hannes, riel, shakeel.butt, kas, baohua, dev.jain, baolin.wang,
npache, Liam.Howlett, ryan.roberts, vbabka, lance.yang,
linux-kernel, kernel-team
On 02/02/2026 08:24, Zi Yan wrote:
> On 1 Feb 2026, at 19:50, Usama Arif wrote:
>
>> This is an RFC series to implement 1GB PUD-level THPs, allowing
>> applications to benefit from reduced TLB pressure without requiring
>> hugetlbfs. The patches are based on top of
>> f9b74c13b773b7c7e4920d7bc214ea3d5f37b422 from mm-stable (6.19-rc6).
>
> It is nice to see you are working on 1GB THP.
>
>>
>> Motivation: Why 1GB THP over hugetlbfs?
>> =======================================
>>
>> While hugetlbfs provides 1GB huge pages today, it has significant limitations
>> that make it unsuitable for many workloads:
>>
>> 1. Static Reservation: hugetlbfs requires pre-allocating huge pages at boot
>> or runtime, taking memory away. This requires capacity planning,
>> administrative overhead, and makes workload orchastration much much more
>> complex, especially colocating with workloads that don't use hugetlbfs.
>
> But you are using CMA, the same allocation mechanism as hugetlb_cma. What
> is the difference?
>
So we don't really need to use CMA. CMA can help a lot of course, but we don't *need* it.
For example, I can run the very simple case [1] of trying to get 1G pages on the upstream
kernel without CMA on my server and it works. The server has been up for more than a week
(so pretty fragmented), is running a bunch of stuff in the background, uses 0 CMA memory,
and I tried to get 20 x 1G pages on it and it worked.
It uses folio_alloc_gigantic(), which is exactly what this series uses:
$ uptime -p
up 1 week, 3 days, 5 hours, 7 minutes
$ cat /proc/meminfo | grep -i cma
CmaTotal: 0 kB
CmaFree: 0 kB
$ echo 20 | sudo tee /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
20
$ cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
20
$ free -h
total used free shared buff/cache available
Mem: 1.0Ti 142Gi 292Gi 143Mi 583Gi 868Gi
Swap: 129Gi 3.5Gi 126Gi
$ ./map_1g_hugepages
Mapping 20 x 1GB huge pages (20 GB total)
Mapped at 0x7f43c0000000
Touched page 0 at 0x7f43c0000000
Touched page 1 at 0x7f4400000000
Touched page 2 at 0x7f4440000000
Touched page 3 at 0x7f4480000000
Touched page 4 at 0x7f44c0000000
Touched page 5 at 0x7f4500000000
Touched page 6 at 0x7f4540000000
Touched page 7 at 0x7f4580000000
Touched page 8 at 0x7f45c0000000
Touched page 9 at 0x7f4600000000
Touched page 10 at 0x7f4640000000
Touched page 11 at 0x7f4680000000
Touched page 12 at 0x7f46c0000000
Touched page 13 at 0x7f4700000000
Touched page 14 at 0x7f4740000000
Touched page 15 at 0x7f4780000000
Touched page 16 at 0x7f47c0000000
Touched page 17 at 0x7f4800000000
Touched page 18 at 0x7f4840000000
Touched page 19 at 0x7f4880000000
Unmapped successfully
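(For reference, [1] below is essentially just the following - a simplified
sketch, not the exact code from the gist:)

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MAP_HUGE_1GB
#define MAP_HUGE_1GB	(30 << 26)	/* 30 == log2(1GB), 26 == MAP_HUGE_SHIFT */
#endif

#define NR_PAGES	20
#define SZ_1G		(1UL << 30)

int main(void)
{
	size_t len = NR_PAGES * SZ_1G;
	char *p;
	int i;

	printf("Mapping %d x 1GB huge pages (%zu GB total)\n", NR_PAGES, len >> 30);
	p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_1GB, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	printf("Mapped at %p\n", (void *)p);

	for (i = 0; i < NR_PAGES; i++) {
		/* Fault in each 1GB huge page by touching its first byte. */
		memset(p + i * SZ_1G, 0xaa, 1);
		printf("Touched page %d at %p\n", i, (void *)(p + i * SZ_1G));
	}

	munmap(p, len);
	printf("Unmapped successfully\n");
	return 0;
}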
>>
>> 4. No Fallback: If a 1GB huge page cannot be allocated, hugetlbfs fails
>> rather than falling back to smaller pages. This makes it fragile under
>> memory pressure.
>
> True.
>
>>
>> 4. No Splitting: hugetlbfs pages cannot be split when only partial access
>> is needed, leading to memory waste and preventing partial reclaim.
>
> Since you have PUD THP implementation, have you run any workload on it?
> How often you see a PUD THP split?
>
Ah, so running non-upstream kernels in production is a bit more difficult
(and also risky). I was trying to use the 512M experiment on ARM as a comparison,
although I know it's not the same thing given the different PAGE_SIZE and pageblock order.
I can try some other upstream benchmarks if it helps, although I will need to find
ones that create VMAs > 1G.
> Oh, you actually ran 512MB THP on ARM64 (I saw it below), do you have
> any split stats to show the necessity of THP split?
>
>>
>> 5. Memory Accounting: hugetlbfs memory is accounted separately and cannot
>> be easily shared with regular memory pools.
>
> True.
>
>>
>> PUD THP solves these limitations by integrating 1GB pages into the existing
>> THP infrastructure.
>
> The main advantage of PUD THP over hugetlb is that it can be split and mapped
> at sub-folio level. Do you have any data to support the necessity of them?
> I wonder if it would be easier to just support 1GB folio in core-mm first
> and we can add 1GB THP split and sub-folio mapping later. With that, we
> can move hugetlb users to 1GB folio.
>
I would say it's not the main advantage, but it's definitely one of them.
The 2 main areas where split would be helpful are partial munmap and
reclaim (MADV_PAGEOUT). For example, jemalloc/tcmalloc can now start
taking advantage of 1G pages. My knowledge is not that great when it comes
to memory allocators, but I believe they track how long certain areas
have been cold and can trigger reclaim on them. That is where split will be useful.
Having memory allocators use hugetlb is probably going to be a no?
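Just to make that concrete, the kind of call I have in mind is something like
this (hypothetical sketch, not from the series - the offsets/sizes are made up):

#include <sys/mman.h>

#ifndef MADV_PAGEOUT
#define MADV_PAGEOUT	21
#endif

/*
 * Reclaim a cold 2MB sub-range inside a 1GB THP-backed arena. With this
 * series the madvise should trigger a PUD split (thp_split_pud) and only
 * the cold range gets paged out, while the rest of the 1GB stays resident.
 * hugetlb-backed memory cannot be partially reclaimed like this at all.
 */
static int pageout_cold_range(char *arena_1g, size_t offset, size_t len)
{
	return madvise(arena_1g + offset, len, MADV_PAGEOUT);
}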
> BTW, without split support, you can apply HVO to 1GB folio to save memory.
> That is a disadvantage of PUD THP. Have you taken that into consideration?
> Basically, switching from hugetlb to PUD THP, you will lose memory due
> to vmemmap usage.
>
Yeah, so HVO saves ~16M per 1G (a 1G folio needs 262,144 struct pages x 64 bytes
= 16M of vmemmap), and the page deposit mechanism adds ~2M per 1G.
We have HVO enabled in the Meta fleet. I think we should not only think of PUD THP
as a replacement for hugetlb, but also as enabling further use cases where hugetlb
would not be feasible.
After the basic infrastructure for 1G is there, we can work on optimizing; I think
there is a lot of interesting work we can do. HVO for 1G THP would be one
of them?
>>
>> Performance Results
>> ===================
>>
>> Benchmark results of these patches on Intel Xeon Platinum 8321HC:
>>
>> Test: True Random Memory Access [1] test of 4GB memory region with pointer
>> chasing workload (4M random pointer dereferences through memory):
>>
>> | Metric | PUD THP (1GB) | PMD THP (2MB) | Change |
>> |-------------------|---------------|---------------|--------------|
>> | Memory access | 88 ms | 134 ms | 34% faster |
>> | Page fault time | 898 ms | 331 ms | 2.7x slower |
>>
>> Page faulting 1G pages is 2.7x slower (Allocating 1G pages is hard :)).
>> For long-running workloads this will be a one-off cost, and the 34%
>> improvement in access latency provides significant benefit.
>>
>> ARM with 64K PAGE_SZIE supports 512M PMD THPs. In meta, we have a CPU
>> bound workload running on a large number of ARM servers (256G). I enabled
>> the 512M THP settings to always for a 100 servers in production (didn't
>> really have high expectations :)). The average memory used for the workload
>> increased from 217G to 233G. The amount of memory backed by 512M pages was
>> 68G! The dTLB misses went down by 26% and the PID multiplier increased input
>> by 5.9% (This is a very significant improvment in workload performance).
>> A significant number of these THPs were faulted in at application start when
>> were present across different VMAs. Ofcourse getting these 512M pages is
>> easier on ARM due to bigger PAGE_SIZE and pageblock order.
>>
>> I am hoping that these patches for 1G THP can be used to provide similar
>> benefits for x86. I expect workloads to fault them in at start time when there
>> is plenty of free memory available.
>>
>>
>> Previous attempt by Zi Yan
>> ==========================
>>
>> Zi Yan attempted 1G THPs [2] in kernel version 5.11. There have been
>> significant changes in kernel since then, including folio conversion, mTHP
>> framework, ptdesc, rmap changes, etc. I found it easier to use the current PMD
>> code as reference for making 1G PUD THP work. I am hoping Zi can provide
>> guidance on these patches!
>
> I am more than happy to help you. :)
>
Thanks!!!
>>
>> Major Design Decisions
>> ======================
>>
>> 1. No shared 1G zero page: The memory cost would be quite significant!
>>
>> 2. Page Table Pre-deposit Strategy
>> PMD THP deposits a single PTE page table. PUD THP deposits 512 PTE
>> page tables (one for each potential PMD entry after split).
>> We allocate a PMD page table and use its pmd_huge_pte list to store
>> the deposited PTE tables. This ensures split operations don't fail due
>> to page table allocation failures (at the cost of 2M per PUD THP)
>>
>> 3. Split to Base Pages
>> When a PUD THP must be split (COW, partial unmap, mprotect), we split
>> directly to base pages (262,144 PTEs). The ideal thing would be to split
>> to 2M pages and then to 4K pages if needed. However, this would require
>> significant rmap and mapcount tracking changes.
>>
>> 4. COW and fork handling via split
>> Copy-on-write and fork for PUD THP triggers a split to base pages, then
>> uses existing PTE-level COW infrastructure. Getting another 1G region is
>> hard and could fail. If only a 4K is written, copying 1G is a waste.
>> Probably this should only be done on CoW and not fork?
>>
>> 5. Migration via split
>> Split PUD to PTEs and migrate individual pages. It is going to be difficult
>> to find a 1G continguous memory to migrate to. Maybe its better to not
>> allow migration of PUDs at all? I am more tempted to not allow migration,
>> but have kept splitting in this RFC.
>
> Without migration, PUD THP loses its flexibility and transparency. But with
> its 1GB size, I also wonder what the purpose of PUD THP migration can be.
> It does not create memory fragmentation, since it is the largest folio size
> we have and contiguous. NUMA balancing 1GB THP seems too much work.
Yeah this is exactly what I was thinking as well. It is going to be expensive
and difficult to migrate 1G pages, and I am not sure if what we get out of it
is worth it? I kept the splitting code in this RFC as I wanted to show that
it's possible to split and migrate, and the code for rejecting migration is a
lot simpler.
>
> BTW, I posted many questions, but that does not mean I object the patchset.
> I just want to understand your use case better, reduce unnecessary
> code changes, and hopefully get it upstreamed this time. :)
>
> Thank you for the work.
>
Ah no, this is awesome! Thanks for the questions! It's basically the discussion I
wanted to start with the RFC.
[1] https://gist.github.com/uarif1/35dcd63f9d76048b07eb5c16ace85991
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 00/12] mm: PUD (1GB) THP implementation
2026-02-03 23:29 ` Usama Arif
@ 2026-02-04 0:08 ` Frank van der Linden
2026-02-05 5:46 ` Usama Arif
2026-02-05 18:07 ` Zi Yan
1 sibling, 1 reply; 49+ messages in thread
From: Frank van der Linden @ 2026-02-04 0:08 UTC (permalink / raw)
To: Usama Arif
Cc: Zi Yan, Andrew Morton, David Hildenbrand, lorenzo.stoakes,
linux-mm, hannes, riel, shakeel.butt, kas, baohua, dev.jain,
baolin.wang, npache, Liam.Howlett, ryan.roberts, vbabka,
lance.yang, linux-kernel, kernel-team
On Tue, Feb 3, 2026 at 3:29 PM Usama Arif <usamaarif642@gmail.com> wrote:
>
>
>
> On 02/02/2026 08:24, Zi Yan wrote:
> > On 1 Feb 2026, at 19:50, Usama Arif wrote:
> >
> >> This is an RFC series to implement 1GB PUD-level THPs, allowing
> >> applications to benefit from reduced TLB pressure without requiring
> >> hugetlbfs. The patches are based on top of
> >> f9b74c13b773b7c7e4920d7bc214ea3d5f37b422 from mm-stable (6.19-rc6).
> >
> > It is nice to see you are working on 1GB THP.
> >
> >>
> >> Motivation: Why 1GB THP over hugetlbfs?
> >> =======================================
> >>
> >> While hugetlbfs provides 1GB huge pages today, it has significant limitations
> >> that make it unsuitable for many workloads:
> >>
> >> 1. Static Reservation: hugetlbfs requires pre-allocating huge pages at boot
> >> or runtime, taking memory away. This requires capacity planning,
> >> administrative overhead, and makes workload orchastration much much more
> >> complex, especially colocating with workloads that don't use hugetlbfs.
> >
> > But you are using CMA, the same allocation mechanism as hugetlb_cma. What
> > is the difference?
> >
>
> So we dont really need to use CMA. CMA can help a lot ofcourse, but we dont *need* it.
> For e.g. I can run the very simple case [1] of trying to get 1G pages in the upstream
> kernel without CMA on my server and it works. The server has been up for more than a week
> (so pretty fragmented), is running a bunch of stuff in the background, uses 0 CMA memory,
> and I tried to get 20x1G pages on it and it worked.
> It uses folio_alloc_gigantic, which is exactly what this series uses:
>
> $ uptime -p
> up 1 week, 3 days, 5 hours, 7 minutes
> $ cat /proc/meminfo | grep -i cma
> CmaTotal: 0 kB
> CmaFree: 0 kB
> $ echo 20 | sudo tee /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
> 20
> $ cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
> 20
> $ free -h
> total used free shared buff/cache available
> Mem: 1.0Ti 142Gi 292Gi 143Mi 583Gi 868Gi
> Swap: 129Gi 3.5Gi 126Gi
> $ ./map_1g_hugepages
> Mapping 20 x 1GB huge pages (20 GB total)
> Mapped at 0x7f43c0000000
> Touched page 0 at 0x7f43c0000000
> Touched page 1 at 0x7f4400000000
> Touched page 2 at 0x7f4440000000
> Touched page 3 at 0x7f4480000000
> Touched page 4 at 0x7f44c0000000
> Touched page 5 at 0x7f4500000000
> Touched page 6 at 0x7f4540000000
> Touched page 7 at 0x7f4580000000
> Touched page 8 at 0x7f45c0000000
> Touched page 9 at 0x7f4600000000
> Touched page 10 at 0x7f4640000000
> Touched page 11 at 0x7f4680000000
> Touched page 12 at 0x7f46c0000000
> Touched page 13 at 0x7f4700000000
> Touched page 14 at 0x7f4740000000
> Touched page 15 at 0x7f4780000000
> Touched page 16 at 0x7f47c0000000
> Touched page 17 at 0x7f4800000000
> Touched page 18 at 0x7f4840000000
> Touched page 19 at 0x7f4880000000
> Unmapped successfully
>
>
>
>
> >>
> >> 4. No Fallback: If a 1GB huge page cannot be allocated, hugetlbfs fails
> >> rather than falling back to smaller pages. This makes it fragile under
> >> memory pressure.
> >
> > True.
> >
> >>
> >> 4. No Splitting: hugetlbfs pages cannot be split when only partial access
> >> is needed, leading to memory waste and preventing partial reclaim.
> >
> > Since you have PUD THP implementation, have you run any workload on it?
> > How often you see a PUD THP split?
> >
>
> Ah so running non upstream kernels in production is a bit more difficult
> (and also risky). I was trying to use the 512M experiment on arm as a comparison,
> although I know its not the same thing with PAGE_SIZE and pageblock order.
>
> I can try some other upstream benchmarks if it helps? Although will need to find
> ones that create VMA > 1G.
>
> > Oh, you actually ran 512MB THP on ARM64 (I saw it below), do you have
> > any split stats to show the necessity of THP split?
> >
> >>
> >> 5. Memory Accounting: hugetlbfs memory is accounted separately and cannot
> >> be easily shared with regular memory pools.
> >
> > True.
> >
> >>
> >> PUD THP solves these limitations by integrating 1GB pages into the existing
> >> THP infrastructure.
> >
> > The main advantage of PUD THP over hugetlb is that it can be split and mapped
> > at sub-folio level. Do you have any data to support the necessity of them?
> > I wonder if it would be easier to just support 1GB folio in core-mm first
> > and we can add 1GB THP split and sub-folio mapping later. With that, we
> > can move hugetlb users to 1GB folio.
> >
>
> I would say its not the main advantage? But its definitely one of them.
> The 2 main areas where split would be helpful is munmap partial
> range and reclaim (MADV_PAGEOUT). For e.g. jemalloc/tcmalloc can now start
> taking advantge of 1G pages. My knowledge is not that great when it comes
> to memory allocators, but I believe they track for how long certain areas
> have been cold and can trigger reclaim as an example. Then split will be useful.
> Having memory allocators use hugetlb is probably going to be a no?
>
>
> > BTW, without split support, you can apply HVO to 1GB folio to save memory.
> > That is a disadvantage of PUD THP. Have you taken that into consideration?
> > Basically, switching from hugetlb to PUD THP, you will lose memory due
> > to vmemmap usage.
> >
>
> Yeah so HVO saves 16M per 1G, and the page depost mechanism adds ~2M as per 1G.
> We have HVO enabled in the meta fleet. I think we should not only think of PUD THP
> as a replacement for hugetlb, but to also enable further usescases where hugetlb
> would not be feasible.
>
> Ater the basic infrastructure for 1G is there, we can work on optimizing, I think
> there would be a a lot of interesting work we can do. HVO for 1G THP would be one
> of them?
>
> >>
> >> Performance Results
> >> ===================
> >>
> >> Benchmark results of these patches on Intel Xeon Platinum 8321HC:
> >>
> >> Test: True Random Memory Access [1] test of 4GB memory region with pointer
> >> chasing workload (4M random pointer dereferences through memory):
> >>
> >> | Metric | PUD THP (1GB) | PMD THP (2MB) | Change |
> >> |-------------------|---------------|---------------|--------------|
> >> | Memory access | 88 ms | 134 ms | 34% faster |
> >> | Page fault time | 898 ms | 331 ms | 2.7x slower |
> >>
> >> Page faulting 1G pages is 2.7x slower (Allocating 1G pages is hard :)).
> >> For long-running workloads this will be a one-off cost, and the 34%
> >> improvement in access latency provides significant benefit.
> >>
> >> ARM with 64K PAGE_SZIE supports 512M PMD THPs. In meta, we have a CPU
> >> bound workload running on a large number of ARM servers (256G). I enabled
> >> the 512M THP settings to always for a 100 servers in production (didn't
> >> really have high expectations :)). The average memory used for the workload
> >> increased from 217G to 233G. The amount of memory backed by 512M pages was
> >> 68G! The dTLB misses went down by 26% and the PID multiplier increased input
> >> by 5.9% (This is a very significant improvment in workload performance).
> >> A significant number of these THPs were faulted in at application start when
> >> were present across different VMAs. Ofcourse getting these 512M pages is
> >> easier on ARM due to bigger PAGE_SIZE and pageblock order.
> >>
> >> I am hoping that these patches for 1G THP can be used to provide similar
> >> benefits for x86. I expect workloads to fault them in at start time when there
> >> is plenty of free memory available.
> >>
> >>
> >> Previous attempt by Zi Yan
> >> ==========================
> >>
> >> Zi Yan attempted 1G THPs [2] in kernel version 5.11. There have been
> >> significant changes in kernel since then, including folio conversion, mTHP
> >> framework, ptdesc, rmap changes, etc. I found it easier to use the current PMD
> >> code as reference for making 1G PUD THP work. I am hoping Zi can provide
> >> guidance on these patches!
> >
> > I am more than happy to help you. :)
> >
>
> Thanks!!!
>
> >>
> >> Major Design Decisions
> >> ======================
> >>
> >> 1. No shared 1G zero page: The memory cost would be quite significant!
> >>
> >> 2. Page Table Pre-deposit Strategy
> >> PMD THP deposits a single PTE page table. PUD THP deposits 512 PTE
> >> page tables (one for each potential PMD entry after split).
> >> We allocate a PMD page table and use its pmd_huge_pte list to store
> >> the deposited PTE tables. This ensures split operations don't fail due
> >> to page table allocation failures (at the cost of 2M per PUD THP)
> >>
> >> 3. Split to Base Pages
> >> When a PUD THP must be split (COW, partial unmap, mprotect), we split
> >> directly to base pages (262,144 PTEs). The ideal thing would be to split
> >> to 2M pages and then to 4K pages if needed. However, this would require
> >> significant rmap and mapcount tracking changes.
> >>
> >> 4. COW and fork handling via split
> >> Copy-on-write and fork for PUD THP triggers a split to base pages, then
> >> uses existing PTE-level COW infrastructure. Getting another 1G region is
> >> hard and could fail. If only a 4K is written, copying 1G is a waste.
> >> Probably this should only be done on CoW and not fork?
> >>
> >> 5. Migration via split
> >> Split PUD to PTEs and migrate individual pages. It is going to be difficult
> >> to find a 1G continguous memory to migrate to. Maybe its better to not
> >> allow migration of PUDs at all? I am more tempted to not allow migration,
> >> but have kept splitting in this RFC.
> >
> > Without migration, PUD THP loses its flexibility and transparency. But with
> > its 1GB size, I also wonder what the purpose of PUD THP migration can be.
> > It does not create memory fragmentation, since it is the largest folio size
> > we have and contiguous. NUMA balancing 1GB THP seems too much work.
>
> Yeah this is exactly what I was thinking as well. It is going to be expensive
> and difficult to migrate 1G pages, and I am not sure if what we get out of it
> is worth it? I kept the splitting code in this RFC as I wanted to show that
> its possible to split and migrate and the rejecting migration code is a lot easier.
>
> >
> > BTW, I posted many questions, but that does not mean I object the patchset.
> > I just want to understand your use case better, reduce unnecessary
> > code changes, and hopefully get it upstreamed this time. :)
> >
> > Thank you for the work.
> >
>
> Ah no this is awesome! Thanks for the questions! Its basically the discussion I
> wanted to start with the RFC.
>
>
> [1] https://gist.github.com/uarif1/35dcd63f9d76048b07eb5c16ace85991
>
>
It looks like the scenario you're going for is an application that
allocates a sizeable chunk of memory upfront, and would like it to be
1G pages as much as possible, right?
You can do that with 1G THPs, the advantage being that any failures to
get 1G pages are not explicit, so you're not left with having to grow
the number of hugetlb pages yourself, and see how many you can use.
1G THPs seem useful for that. I don't recall all of the discussion
here, but I assume that hooking 1G THP support into khugepaged is
quite something else - the potential churn to get a 1G page could
well cause more system interference than you'd like.
The CMA scenario Rik was talking about is similar: you set
hugetlb_cma=NG, and then, when you need 1G pages, you grow the hugetlb
pool and use them. Disadvantage: you have to do it explicitly.
However, hugetlb_cma does give you a much larger chance of getting
those 1G pages. The example you give, 20 1G pages on a 1T system where
there is 292G free, isn't much of a problem in my experience. You
should have no problem getting that many 1G pages. Things get
more difficult when most of your memory is taken - hugetlb_cma really
helps there. E.g. we have systems that have 90% hugetlb_cma, and there
is a pretty good success rate converting back and forth between
hugetlb and normal page allocator pages with hugetlb_cma, while
operating close to that 90% hugetlb coverage. Without CMA, the success
rate drops quite a bit at that level.
CMA balancing is a related issue, for hugetlb. It fixes a problem that
has been known for years: the more memory you set aside for movable
only allocations (e.g. hugetlb_cma), the less breathing room you have
for unmovable allocations. So you risk the 'false OOM' scenario, where
the kernel can't make an unmovable allocation, even though there is
enough memory available, even outside of CMA. It's just that those
MOVABLE pageblocks were used for movable allocations. So ideally, you
would migrate those movable allocations to CMA under those
circumstances. Which is what CMA balancing does. It's worked out very
well for us in the scenario I list above (most memory being
hugetlb_cma).
Anyway, I'm rambling on a bit. Let's see if I got this right:
1G THP
- advantage: transparent interface
- disadvantage: no HVO, lower success rate under higher memory
  pressure than hugetlb_cma
hugetlb_cma
- disadvantage: explicit interface, for higher values needs 'false
  OOM' avoidance
- advantage: better success rate under pressure
I think 1G THPs are a good solution for "nice to have" scenarios, but
there will still be use cases where a higher success rate is needed
and hugetlb remains preferable.
Lastly, there's also the ZONE_MOVABLE story. I think 1G THPs and
ZONE_MOVABLE could work well together, improving the success rate. But
then the issue of pinning raises its head again, and whether that
should be allowed or configurable per zone...
- Frank
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 00/12] mm: PUD (1GB) THP implementation
2026-02-02 11:20 ` Lorenzo Stoakes
@ 2026-02-04 1:00 ` Usama Arif
2026-02-04 11:08 ` Lorenzo Stoakes
0 siblings, 1 reply; 49+ messages in thread
From: Usama Arif @ 2026-02-04 1:00 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: ziy, Andrew Morton, David Hildenbrand, linux-mm, hannes, riel,
shakeel.butt, kas, baohua, dev.jain, baolin.wang, npache,
Liam.Howlett, ryan.roberts, vbabka, lance.yang, linux-kernel,
kernel-team
On 02/02/2026 03:20, Lorenzo Stoakes wrote:
> OK so this is somewhat unexpected :)
>
> It would have been nice to discuss it in the THP cabal or at a conference
> etc. so we could discuss approaches ahead of time. Communication is important,
> especially with major changes like this.
Makes sense!
>
> And PUD THP is especially problematic in that it requires pages that the page
> allocator can't give us, presumably you're doing something with CMA and... it's
> a whole kettle of fish.
So we don't need CMA. It helps of course, but we don't *need* it.
It's summarized in the first reply I gave to Zi in [1].
>
> It's also complicated by the fact we _already_ support it in the DAX, VFIO cases
> but it's kinda a weird sorta special case that we need to keep supporting.
>
> There's questions about how this will interact with khugepaged, MADV_COLLAPSE,
> mTHP (and really I want to see Nico's series land before we really consider
> this).
So I have numbers and experiments for page faults, which are in the cover letter,
but not for khugepaged. I would be very surprised (although pleasantly :)) if
khugepaged by some magic finds 262,144 pages that meet all the khugepaged
requirements to collapse the page. In the basic infrastructure support that this
series adds, I want to keep khugepaged collapse disabled for 1G pages. This is also
the initial approach that was taken for other mTHP sizes. We should go slow with 1G THPs.
>
> So overall, I want to be very cautious and SLOW here. So let's please not drop
> the RFC tag until David and I are ok with that?
>
> Also the THP code base is in _dire_ need of rework, and I don't really want to
> add major new features without us paying down some technical debt, to be honest.
>
> So let's proceed with caution, and treat this as a very early bit of
> experimental code.
>
> Thanks, Lorenzo
Ack, yeah so this is mainly an RFC to discuss what the major design choices will be.
I got a kernel with selftests for allocation, memory integrity, fork, partial munmap,
mprotect, reclaim and migration passing, and am running them with DEBUG_VM to make sure
we don't get VM bugs/warnings and the numbers are good, so I just wanted to share it
upstream and get your opinions! Basically, to try and trigger a discussion similar to
what Zi asked in [2], and also to see if someone can point out something fundamental
we are missing in this series.
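(For reference, the fork/COW check is roughly of this shape; this is a hypothetical
sketch, not the actual selftest from the series. With this RFC, fork and/or the
child's write splits the PUD THP and COW then happens at PTE level:)

#include <assert.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define PUD_SZ (1UL << 30)

int main(void)
{
	char *raw = mmap(NULL, 2 * PUD_SZ, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	char *p;

	if (raw == MAP_FAILED)
		return 1;
	p = (char *)(((unsigned long)raw + PUD_SZ - 1) & ~(PUD_SZ - 1));

	madvise(p, PUD_SZ, MADV_HUGEPAGE);
	memset(p, 0xab, PUD_SZ);	/* fault in, ideally as one PUD THP */

	if (fork() == 0) {
		p[123] = 0x55;		/* COW write in the child */
		_exit(p[124] == (char)0xab ? 0 : 1);
	}
	wait(NULL);
	assert(p[123] == (char)0xab);	/* parent must still see its own data */
	return 0;
}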
Thanks for the reviews! Really do appreciate it!
[1] https://lore.kernel.org/all/20f92576-e932-435f-bb7b-de49eb84b012@gmail.com/#t
[2] https://lore.kernel.org/all/3561FD10-664D-42AA-8351-DE7D8D49D42E@nvidia.com/
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 01/12] mm: add PUD THP ptdesc and rmap support
2026-02-02 12:15 ` Lorenzo Stoakes
@ 2026-02-04 7:38 ` Usama Arif
2026-02-04 12:55 ` Lorenzo Stoakes
0 siblings, 1 reply; 49+ messages in thread
From: Usama Arif @ 2026-02-04 7:38 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: ziy, Andrew Morton, David Hildenbrand, linux-mm, hannes, riel,
shakeel.butt, kas, baohua, dev.jain, baolin.wang, npache,
Liam.Howlett, ryan.roberts, vbabka, lance.yang, linux-kernel,
kernel-team
On 02/02/2026 04:15, Lorenzo Stoakes wrote:
> I think I'm going to have to do several passes on this, so this is just a
> first one :)
>
Thanks! Really appreciate the reviews!
One thing to settle here is the higher-level design decision on migration
of 1G pages. As Zi said in [1]:
"I also wonder what the purpose of PUD THP migration can be.
It does not create memory fragmentation, since it is the largest folio size
we have and contiguous. NUMA balancing 1GB THP seems too much work."
> On Sun, Feb 01, 2026 at 04:50:18PM -0800, Usama Arif wrote:
>> For page table management, PUD THPs need to pre-deposit page tables
>> that will be used when the huge page is later split. When a PUD THP
>> is allocated, we cannot know in advance when or why it might need to
>> be split (COW, partial unmap, reclaim), but we need page tables ready
>> for that eventuality. Similar to how PMD THPs deposit a single PTE
>> table, PUD THPs deposit a PMD table which itself contains deposited
>> PTE tables - a two-level deposit. This commit adds the deposit/withdraw
>> infrastructure and a new pud_huge_pmd field in ptdesc to store the
>> deposited PMD.
>
> This feels like you're hacking this support in, honestly. The list_head
> abuse only adds to that feeling.
>
Yeah, so I hope turning it into something like [2] is the way forward.
> And are we now not required to store rather a lot of memory to keep all of
> this coherent?
PMD THP allocates one 4K PTE table (pte_alloc_one) at fault time so that a split
doesn't fail.
For PUD we allocate 2M worth of PTE page tables and one 4K PMD table at fault
time so that a split doesn't fail due to there not being enough memory.
It's not great, but it's not terrible either.
The alternative is to allocate these at split time, so we are not
pre-reserving them. Then there is a chance that the allocation, and therefore
the split, fails, so the trade-off is some memory vs reliability. This patch
favours reliability.
Let's say a user gets 100x1G THPs. They would end up using ~200M for it.
I think that is okay-ish. If the user has 100G, 200M might not be an issue
for them :)
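(Where the ~200M comes from, assuming 4 KiB page tables on x86-64: 512 PTE tables
+ 1 PMD table = 513 * 4 KiB, roughly 2 MiB deposited per PUD THP, so 100 x 1G THPs
deposit about 200 MiB.)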
>
>>
>> The deposited PMD tables are stored as a singly-linked stack using only
>> page->lru.next as the link pointer. A doubly-linked list using the
>> standard list_head mechanism would cause memory corruption: list_del()
>> poisons both lru.next (offset 8) and lru.prev (offset 16), but lru.prev
>> overlaps with ptdesc->pmd_huge_pte at offset 16. Since deposited PMD
>> tables have their own deposited PTE tables stored in pmd_huge_pte,
>> poisoning lru.prev would corrupt the PTE table list and cause crashes
>> when withdrawing PTE tables during split. PMD THPs don't have this
>> problem because their deposited PTE tables don't have sub-deposits.
>> Using only lru.next avoids the overlap entirely.
>
> Yeah this is horrendous and a hack, I don't consider this at all
> upstreamable.
>
> You need to completely rework this.
Hopefully [2] is the path forward!
>
>>
>> For reverse mapping, PUD THPs need the same rmap support that PMD THPs
>> have. The page_vma_mapped_walk() function is extended to recognize and
>> handle PUD-mapped folios during rmap traversal. A new TTU_SPLIT_HUGE_PUD
>> flag tells the unmap path to split PUD THPs before proceeding, since
>> there is no PUD-level migration entry format - the split converts the
>> single PUD mapping into individual PTE mappings that can be migrated
>> or swapped normally.
>
> Individual PTE... mappings? You need to be a lot clearer here, page tables
> are naturally confusing with entries vs. tables.
>
> Let's be VERY specific here. Do you mean you have 1 PMD table and 512 PTE
> tables reserved, spanning 1 PUD entry and 262,144 PTE entries?
>
Yes, that is correct, thanks! I will change the commit message in the next revision
to what you have written: 1 PMD table and 512 PTE tables reserved, spanning
1 PUD entry and 262,144 PTE entries.
>>
>> Signed-off-by: Usama Arif <usamaarif642@gmail.com>
>
> How does this change interact with existing DAX/VFIO code, which now it
> seems will be subject to the mechanisms you introduce here?
I think what you mean here is the change in try_to_migrate_one?
So one
>
> Right now DAX/VFIO is only obtainable via a specially THP-aligned
> get_unmapped_area() + then can only be obtained at fault time.
> > Is that the intent here also?
>
Ah, thanks for pointing this out. This is something the series is missing.
What I did in the selftest and benchmark was fault on an address that was already aligned,
i.e. basically call the below function before faulting in.
static inline void *pud_align(void *addr)
{
return (void *)(((unsigned long)addr + PUD_SIZE - 1) & ~(PUD_SIZE - 1));
}
What I think you are suggesting this series is missing is the below diff? (It's untested.)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 87b2c21df4a49..461158a0840db 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1236,6 +1236,12 @@ unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long add
unsigned long ret;
loff_t off = (loff_t)pgoff << PAGE_SHIFT;
+ if (IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) && len >= PUD_SIZE) {
+ ret = __thp_get_unmapped_area(filp, addr, len, off, flags, PUD_SIZE, vm_flags);
+ if (ret)
+ return ret;
+ }
+
> What is your intent - that khugepaged do this, or on alloc? How does it
> interact with MADV_COLLAPSE?
>
Ah, basically what I mentioned in [3]: we want to go slow. Only enable PUD THP
page faults at the start. If there is data supporting that khugepaged will work,
then we can enable it, but for now we keep it disabled.
> I noted on the 2nd patch, but you're changing THP_ORDERS_ALL_ANON which
> alters __thp_vma_allowable_orders() behaviour, that change belongs here...
>
>
Thanks for this! I only tried to split this code into logical commits
after the whole thing was working. Some things are tightly coupled
and I would need to move them to the right commit.
>> ---
>> include/linux/huge_mm.h | 5 +++
>> include/linux/mm.h | 19 ++++++++
>> include/linux/mm_types.h | 5 ++-
>> include/linux/pgtable.h | 8 ++++
>> include/linux/rmap.h | 7 ++-
>> mm/huge_memory.c | 8 ++++
>> mm/internal.h | 3 ++
>> mm/page_vma_mapped.c | 35 +++++++++++++++
>> mm/pgtable-generic.c | 83 ++++++++++++++++++++++++++++++++++
>> mm/rmap.c | 96 +++++++++++++++++++++++++++++++++++++---
>> 10 files changed, 260 insertions(+), 9 deletions(-)
>>
>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>> index a4d9f964dfdea..e672e45bb9cc7 100644
>> --- a/include/linux/huge_mm.h
>> +++ b/include/linux/huge_mm.h
>> @@ -463,10 +463,15 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
>> unsigned long address);
>>
>> #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>> +void split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
>> + unsigned long address);
>> int change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
>> pud_t *pudp, unsigned long addr, pgprot_t newprot,
>> unsigned long cp_flags);
>> #else
>> +static inline void
>> +split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
>> + unsigned long address) {}
>> static inline int
>> change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
>> pud_t *pudp, unsigned long addr, pgprot_t newprot,
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index ab2e7e30aef96..a15e18df0f771 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -3455,6 +3455,22 @@ static inline bool pagetable_pmd_ctor(struct mm_struct *mm,
>> * considered ready to switch to split PUD locks yet; there may be places
>> * which need to be converted from page_table_lock.
>> */
>> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>> +static inline struct page *pud_pgtable_page(pud_t *pud)
>> +{
>> + unsigned long mask = ~(PTRS_PER_PUD * sizeof(pud_t) - 1);
>> +
>> + return virt_to_page((void *)((unsigned long)pud & mask));
>> +}
>> +
>> +static inline struct ptdesc *pud_ptdesc(pud_t *pud)
>> +{
>> + return page_ptdesc(pud_pgtable_page(pud));
>> +}
>> +
>> +#define pud_huge_pmd(pud) (pud_ptdesc(pud)->pud_huge_pmd)
>> +#endif
>> +
>> static inline spinlock_t *pud_lockptr(struct mm_struct *mm, pud_t *pud)
>> {
>> return &mm->page_table_lock;
>> @@ -3471,6 +3487,9 @@ static inline spinlock_t *pud_lock(struct mm_struct *mm, pud_t *pud)
>> static inline void pagetable_pud_ctor(struct ptdesc *ptdesc)
>> {
>> __pagetable_ctor(ptdesc);
>> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>> + ptdesc->pud_huge_pmd = NULL;
>> +#endif
>> }
>>
>> static inline void pagetable_p4d_ctor(struct ptdesc *ptdesc)
>> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
>> index 78950eb8926dc..26a38490ae2e1 100644
>> --- a/include/linux/mm_types.h
>> +++ b/include/linux/mm_types.h
>> @@ -577,7 +577,10 @@ struct ptdesc {
>> struct list_head pt_list;
>> struct {
>> unsigned long _pt_pad_1;
>> - pgtable_t pmd_huge_pte;
>> + union {
>> + pgtable_t pmd_huge_pte; /* For PMD tables: deposited PTE */
>> + pgtable_t pud_huge_pmd; /* For PUD tables: deposited PMD list */
>> + };
>> };
>> };
>> unsigned long __page_mapping;
>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>> index 2f0dd3a4ace1a..3ce733c1d71a2 100644
>> --- a/include/linux/pgtable.h
>> +++ b/include/linux/pgtable.h
>> @@ -1168,6 +1168,14 @@ extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
>> #define arch_needs_pgtable_deposit() (false)
>> #endif
>>
>> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>> +extern void pgtable_trans_huge_pud_deposit(struct mm_struct *mm, pud_t *pudp,
>> + pmd_t *pmd_table);
>> +extern pmd_t *pgtable_trans_huge_pud_withdraw(struct mm_struct *mm, pud_t *pudp);
>> +extern void pud_deposit_pte(pmd_t *pmd_table, pgtable_t pgtable);
>> +extern pgtable_t pud_withdraw_pte(pmd_t *pmd_table);
>
> These are useless extern's.
>
Ack.
These follow the existing functions in the same file:
extern void pgtable_trans_huge_deposit
extern pgtable_t pgtable_trans_huge_withdraw
I think the externs can be removed from these as well? We can
fix those in a separate patch.
>> +#endif
>> +
>> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> /*
>> * This is an implementation of pmdp_establish() that is only suitable for an
>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
>> index daa92a58585d9..08cd0a0eb8763 100644
>> --- a/include/linux/rmap.h
>> +++ b/include/linux/rmap.h
>> @@ -101,6 +101,7 @@ enum ttu_flags {
>> * do a final flush if necessary */
>> TTU_RMAP_LOCKED = 0x80, /* do not grab rmap lock:
>> * caller holds it */
>> + TTU_SPLIT_HUGE_PUD = 0x100, /* split huge PUD if any */
>> };
>>
>> #ifdef CONFIG_MMU
>> @@ -473,6 +474,8 @@ void folio_add_anon_rmap_ptes(struct folio *, struct page *, int nr_pages,
>> folio_add_anon_rmap_ptes(folio, page, 1, vma, address, flags)
>> void folio_add_anon_rmap_pmd(struct folio *, struct page *,
>> struct vm_area_struct *, unsigned long address, rmap_t flags);
>> +void folio_add_anon_rmap_pud(struct folio *, struct page *,
>> + struct vm_area_struct *, unsigned long address, rmap_t flags);
>> void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
>> unsigned long address, rmap_t flags);
>> void folio_add_file_rmap_ptes(struct folio *, struct page *, int nr_pages,
>> @@ -933,6 +936,7 @@ struct page_vma_mapped_walk {
>> pgoff_t pgoff;
>> struct vm_area_struct *vma;
>> unsigned long address;
>> + pud_t *pud;
>> pmd_t *pmd;
>> pte_t *pte;
>> spinlock_t *ptl;
>> @@ -970,7 +974,7 @@ static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
>> static inline void
>> page_vma_mapped_walk_restart(struct page_vma_mapped_walk *pvmw)
>> {
>> - WARN_ON_ONCE(!pvmw->pmd && !pvmw->pte);
>> + WARN_ON_ONCE(!pvmw->pud && !pvmw->pmd && !pvmw->pte);
>>
>> if (likely(pvmw->ptl))
>> spin_unlock(pvmw->ptl);
>> @@ -978,6 +982,7 @@ page_vma_mapped_walk_restart(struct page_vma_mapped_walk *pvmw)
>> WARN_ON_ONCE(1);
>>
>> pvmw->ptl = NULL;
>> + pvmw->pud = NULL;
>> pvmw->pmd = NULL;
>> pvmw->pte = NULL;
>> }
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 40cf59301c21a..3128b3beedb0a 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -2933,6 +2933,14 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
>> spin_unlock(ptl);
>> mmu_notifier_invalidate_range_end(&range);
>> }
>> +
>> +void split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
>> + unsigned long address)
>> +{
>> + VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PUD_SIZE));
>> + if (pud_trans_huge(*pud))
>> + __split_huge_pud_locked(vma, pud, address);
>> +}
>> #else
>> void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
>> unsigned long address)
>> diff --git a/mm/internal.h b/mm/internal.h
>> index 9ee336aa03656..21d5c00f638dc 100644
>> --- a/mm/internal.h
>> +++ b/mm/internal.h
>> @@ -545,6 +545,9 @@ int user_proactive_reclaim(char *buf,
>> * in mm/rmap.c:
>> */
>> pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address);
>> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>> +pud_t *mm_find_pud(struct mm_struct *mm, unsigned long address);
>> +#endif
>>
>> /*
>> * in mm/page_alloc.c
>> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
>> index b38a1d00c971b..d31eafba38041 100644
>> --- a/mm/page_vma_mapped.c
>> +++ b/mm/page_vma_mapped.c
>> @@ -146,6 +146,18 @@ static bool check_pmd(unsigned long pfn, struct page_vma_mapped_walk *pvmw)
>> return true;
>> }
>>
>> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>> +/* Returns true if the two ranges overlap. Careful to not overflow. */
>> +static bool check_pud(unsigned long pfn, struct page_vma_mapped_walk *pvmw)
>> +{
>> + if ((pfn + HPAGE_PUD_NR - 1) < pvmw->pfn)
>> + return false;
>> + if (pfn > pvmw->pfn + pvmw->nr_pages - 1)
>> + return false;
>> + return true;
>> +}
>> +#endif
>> +
>> static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
>> {
>> pvmw->address = (pvmw->address + size) & ~(size - 1);
>> @@ -188,6 +200,10 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>> pud_t *pud;
>> pmd_t pmde;
>>
>> + /* The only possible pud mapping has been handled on last iteration */
>> + if (pvmw->pud && !pvmw->pmd)
>> + return not_found(pvmw);
>> +
>> /* The only possible pmd mapping has been handled on last iteration */
>> if (pvmw->pmd && !pvmw->pte)
>> return not_found(pvmw);
>> @@ -234,6 +250,25 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>> continue;
>> }
>>
>> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>
> Said it elsewhere, but it's really weird to treat an arch having the
> ability to do something as a go ahead for doing it.
>
>> + /* Check for PUD-mapped THP */
>> + if (pud_trans_huge(*pud)) {
>> + pvmw->pud = pud;
>> + pvmw->ptl = pud_lock(mm, pud);
>> + if (likely(pud_trans_huge(*pud))) {
>> + if (pvmw->flags & PVMW_MIGRATION)
>> + return not_found(pvmw);
>> + if (!check_pud(pud_pfn(*pud), pvmw))
>> + return not_found(pvmw);
>> + return true;
>> + }
>> + /* PUD was split under us, retry at PMD level */
>> + spin_unlock(pvmw->ptl);
>> + pvmw->ptl = NULL;
>> + pvmw->pud = NULL;
>> + }
>> +#endif
>> +
>
> Yeah, as I said elsewhere, we got to be refactoring not copy/pasting with
> modifications :)
>
Yeah, there is repeated code in multiple places, where all I did was redo
what was done for PMD at the PUD level. In a lot of places it's actually
difficult not to repeat the code (unless we want function macros, which IMO
is much worse).
>
>> pvmw->pmd = pmd_offset(pud, pvmw->address);
>> /*
>> * Make sure the pmd value isn't cached in a register by the
>> diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
>> index d3aec7a9926ad..2047558ddcd79 100644
>> --- a/mm/pgtable-generic.c
>> +++ b/mm/pgtable-generic.c
>> @@ -195,6 +195,89 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
>> }
>> #endif
>>
>> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>> +/*
>> + * Deposit page tables for PUD THP.
>> + * Called with PUD lock held. Stores PMD tables in a singly-linked stack
>> + * via pud_huge_pmd, using only pmd_page->lru.next as the link pointer.
>> + *
>> + * IMPORTANT: We use only lru.next (offset 8) for linking, NOT the full
>> + * list_head. This is because lru.prev (offset 16) overlaps with
>> + * ptdesc->pmd_huge_pte, which stores the PMD table's deposited PTE tables.
>> + * Using list_del() would corrupt pmd_huge_pte with LIST_POISON2.
>
> This is horrible and feels like a hack? Treating a doubly-linked list as a
> singly-linked one like this is not upstreamable.
>
>> + *
>> + * PTE tables should be deposited into the PMD using pud_deposit_pte().
>> + */
>> +void pgtable_trans_huge_pud_deposit(struct mm_struct *mm, pud_t *pudp,
>> + pmd_t *pmd_table)
>
> This is a horrid, you're depositing the PMD using the... questionable
> list_head abuse, but then also have pud_deposit_pte()... But here we're
> depositing a PMD shouldn't the name reflect that?
>
>> +{
>> + pgtable_t pmd_page = virt_to_page(pmd_table);
>> +
>> + assert_spin_locked(pud_lockptr(mm, pudp));
>> +
>> + /* Push onto stack using only lru.next as the link */
>> + pmd_page->lru.next = (struct list_head *)pud_huge_pmd(pudp);
>
> Yikes...
>
>> + pud_huge_pmd(pudp) = pmd_page;
>> +}
>> +
>> +/*
>> + * Withdraw the deposited PMD table for PUD THP split or zap.
>> + * Called with PUD lock held.
>> + * Returns NULL if no more PMD tables are deposited.
>> + */
>> +pmd_t *pgtable_trans_huge_pud_withdraw(struct mm_struct *mm, pud_t *pudp)
>> +{
>> + pgtable_t pmd_page;
>> +
>> + assert_spin_locked(pud_lockptr(mm, pudp));
>> +
>> + pmd_page = pud_huge_pmd(pudp);
>> + if (!pmd_page)
>> + return NULL;
>> +
>> + /* Pop from stack - lru.next points to next PMD page (or NULL) */
>> + pud_huge_pmd(pudp) = (pgtable_t)pmd_page->lru.next;
>
> Where's the popping? You're just assigning here.
Ack on all of the above. Hopefully [1] is better.
>
>> +
>> + return page_address(pmd_page);
>> +}
>> +
>> +/*
>> + * Deposit a PTE table into a standalone PMD table (not yet in page table hierarchy).
>> + * Used for PUD THP pre-deposit. The PMD table's pmd_huge_pte stores a linked list.
>> + * No lock assertion since the PMD isn't visible yet.
>> + */
>> +void pud_deposit_pte(pmd_t *pmd_table, pgtable_t pgtable)
>> +{
>> + struct ptdesc *ptdesc = virt_to_ptdesc(pmd_table);
>> +
>> + /* FIFO - add to front of list */
>> + if (!ptdesc->pmd_huge_pte)
>> + INIT_LIST_HEAD(&pgtable->lru);
>> + else
>> + list_add(&pgtable->lru, &ptdesc->pmd_huge_pte->lru);
>> + ptdesc->pmd_huge_pte = pgtable;
>> +}
>> +
>> +/*
>> + * Withdraw a PTE table from a standalone PMD table.
>> + * Returns NULL if no more PTE tables are deposited.
>> + */
>> +pgtable_t pud_withdraw_pte(pmd_t *pmd_table)
>> +{
>> + struct ptdesc *ptdesc = virt_to_ptdesc(pmd_table);
>> + pgtable_t pgtable;
>> +
>> + pgtable = ptdesc->pmd_huge_pte;
>> + if (!pgtable)
>> + return NULL;
>> + ptdesc->pmd_huge_pte = list_first_entry_or_null(&pgtable->lru,
>> + struct page, lru);
>> + if (ptdesc->pmd_huge_pte)
>> + list_del(&pgtable->lru);
>> + return pgtable;
>> +}
>> +#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
>> +
>> #ifndef __HAVE_ARCH_PMDP_INVALIDATE
>> pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
>> pmd_t *pmdp)
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 7b9879ef442d9..69acabd763da4 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -811,6 +811,32 @@ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address)
>> return pmd;
>> }
>>
>> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>> +/*
>> + * Returns the actual pud_t* where we expect 'address' to be mapped from, or
>> + * NULL if it doesn't exist. No guarantees / checks on what the pud_t*
>> + * represents.
>> + */
>> +pud_t *mm_find_pud(struct mm_struct *mm, unsigned long address)
>
> This series seems to be full of copy/paste.
>
> It's just not acceptable given the state of THP code as I said in reply to
> the cover letter - you need to _refactor_ the code.
>
> The code is bug-prone and difficult to maintain as-is, your series has to
> improve the technical debt, not add to it.
>
In some cases we might not be able to avoid the copy, but this is definitely
a place where we don't need to. I will change it here. Thanks!
>> +{
>> + pgd_t *pgd;
>> + p4d_t *p4d;
>> + pud_t *pud = NULL;
>> +
>> + pgd = pgd_offset(mm, address);
>> + if (!pgd_present(*pgd))
>> + goto out;
>> +
>> + p4d = p4d_offset(pgd, address);
>> + if (!p4d_present(*p4d))
>> + goto out;
>> +
>> + pud = pud_offset(p4d, address);
>> +out:
>> + return pud;
>> +}
>> +#endif
>> +
>> struct folio_referenced_arg {
>> int mapcount;
>> int referenced;
>> @@ -1415,11 +1441,7 @@ static __always_inline void __folio_add_anon_rmap(struct folio *folio,
>> SetPageAnonExclusive(page);
>> break;
>> case PGTABLE_LEVEL_PUD:
>> - /*
>> - * Keep the compiler happy, we don't support anonymous
>> - * PUD mappings.
>> - */
>> - WARN_ON_ONCE(1);
>> + SetPageAnonExclusive(page);
>> break;
>> default:
>> BUILD_BUG();
>> @@ -1503,6 +1525,31 @@ void folio_add_anon_rmap_pmd(struct folio *folio, struct page *page,
>> #endif
>> }
>>
>> +/**
>> + * folio_add_anon_rmap_pud - add a PUD mapping to a page range of an anon folio
>> + * @folio: The folio to add the mapping to
>> + * @page: The first page to add
>> + * @vma: The vm area in which the mapping is added
>> + * @address: The user virtual address of the first page to map
>> + * @flags: The rmap flags
>> + *
>> + * The page range of folio is defined by [first_page, first_page + HPAGE_PUD_NR)
>> + *
>> + * The caller needs to hold the page table lock, and the page must be locked in
>> + * the anon_vma case: to serialize mapping,index checking after setting.
>> + */
>> +void folio_add_anon_rmap_pud(struct folio *folio, struct page *page,
>> + struct vm_area_struct *vma, unsigned long address, rmap_t flags)
>> +{
>> +#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && \
>> + defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
>> + __folio_add_anon_rmap(folio, page, HPAGE_PUD_NR, vma, address, flags,
>> + PGTABLE_LEVEL_PUD);
>> +#else
>> + WARN_ON_ONCE(true);
>> +#endif
>> +}
>
> More copy/paste... Maybe unavoidable in this case, but be good to try.
>
>> +
>> /**
>> * folio_add_new_anon_rmap - Add mapping to a new anonymous folio.
>> * @folio: The folio to add the mapping to.
>> @@ -1934,6 +1981,20 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>> }
>>
>> if (!pvmw.pte) {
>> + /*
>> + * Check for PUD-mapped THP first.
>> + * If we have a PUD mapping and TTU_SPLIT_HUGE_PUD is set,
>> + * split the PUD to PMD level and restart the walk.
>> + */
>
> This is literally describing the code below, it's not useful.
Ack, will remove this comment, thanks!
>
>> + if (pvmw.pud && pud_trans_huge(*pvmw.pud)) {
>> + if (flags & TTU_SPLIT_HUGE_PUD) {
>> + split_huge_pud_locked(vma, pvmw.pud, pvmw.address);
>> + flags &= ~TTU_SPLIT_HUGE_PUD;
>> + page_vma_mapped_walk_restart(&pvmw);
>> + continue;
>> + }
>> + }
>> +
>> if (folio_test_anon(folio) && !folio_test_swapbacked(folio)) {
>> if (unmap_huge_pmd_locked(vma, pvmw.address, pvmw.pmd, folio))
>> goto walk_done;
>> @@ -2325,6 +2386,27 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
>> mmu_notifier_invalidate_range_start(&range);
>>
>> while (page_vma_mapped_walk(&pvmw)) {
>> + /* Handle PUD-mapped THP first */
>
> How did/will this interact with DAX, VFIO PUD THP?
It won't interact with DAX. try_to_migrate does the below and just returns:
if (folio_is_zone_device(folio) &&
(!folio_is_device_private(folio) && !folio_is_device_coherent(folio)))
return;
so DAX would never reach here.
I think VFIO pages are pinned and therefore can't be migrated? (I have
not looked at the VFIO code, I will try to get a better understanding tomorrow,
but please let me know if that sounds wrong.)
>
>> + if (!pvmw.pte && !pvmw.pmd) {
>> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>
> Won't pud_trans_huge() imply this...
>
Agreed, I think it should cover it.
>> + /*
>> + * PUD-mapped THP: skip migration to preserve the huge
>> + * page. Splitting would defeat the purpose of PUD THPs.
>> + * Return false to indicate migration failure, which
>> + * will cause alloc_contig_range() to try a different
>> + * memory region.
>> + */
>> + if (pvmw.pud && pud_trans_huge(*pvmw.pud)) {
>> + page_vma_mapped_walk_done(&pvmw);
>> + ret = false;
>> + break;
>> + }
>> +#endif
>> + /* Unexpected state: !pte && !pmd but not a PUD THP */
>> + page_vma_mapped_walk_done(&pvmw);
>> + break;
>> + }
>> +
>> /* PMD-mapped THP migration entry */
>> if (!pvmw.pte) {
>> __maybe_unused unsigned long pfn;
>> @@ -2607,10 +2689,10 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
>>
>> /*
>> * Migration always ignores mlock and only supports TTU_RMAP_LOCKED and
>> - * TTU_SPLIT_HUGE_PMD, TTU_SYNC, and TTU_BATCH_FLUSH flags.
>> + * TTU_SPLIT_HUGE_PMD, TTU_SPLIT_HUGE_PUD, TTU_SYNC, and TTU_BATCH_FLUSH flags.
>> */
>> if (WARN_ON_ONCE(flags & ~(TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
>> - TTU_SYNC | TTU_BATCH_FLUSH)))
>> + TTU_SPLIT_HUGE_PUD | TTU_SYNC | TTU_BATCH_FLUSH)))
>> return;
>>
>> if (folio_is_zone_device(folio) &&
>> --
>> 2.47.3
>>
>
> This isn't a final review, I'll have to look more thoroughly through here
> over time and you're going to have to be patient in general :)
>
> Cheers, Lorenzo
Thanks for the review, this is awesome!
[1] https://lore.kernel.org/all/20f92576-e932-435f-bb7b-de49eb84b012@gmail.com/
[2] https://lore.kernel.org/all/05d5918f-b61b-4091-b8c6-20eebfffc3c4@gmail.com/
[3] https://lore.kernel.org/all/2efaa5ed-bd09-41f0-9c07-5cd6cccc4595@gmail.com/
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 00/12] mm: PUD (1GB) THP implementation
2026-02-02 15:50 ` Zi Yan
@ 2026-02-04 10:56 ` Lorenzo Stoakes
2026-02-05 11:29 ` David Hildenbrand (arm)
2026-02-05 11:22 ` David Hildenbrand (arm)
1 sibling, 1 reply; 49+ messages in thread
From: Lorenzo Stoakes @ 2026-02-04 10:56 UTC (permalink / raw)
To: Zi Yan
Cc: David Hildenbrand, Rik van Riel, Usama Arif, Andrew Morton,
linux-mm, hannes, shakeel.butt, kas, baohua, dev.jain,
baolin.wang, npache, Liam.Howlett, ryan.roberts, vbabka,
lance.yang, linux-kernel, kernel-team, Frank van der Linden
On Mon, Feb 02, 2026 at 10:50:35AM -0500, Zi Yan wrote:
> On 2 Feb 2026, at 6:30, Lorenzo Stoakes wrote:
>
> > On Sun, Feb 01, 2026 at 09:44:12PM -0500, Rik van Riel wrote:
> >> On Sun, 2026-02-01 at 16:50 -0800, Usama Arif wrote:
> >>>
> >>> 1. Static Reservation: hugetlbfs requires pre-allocating huge pages
> >>> at boot
> >>> or runtime, taking memory away. This requires capacity planning,
> >>> administrative overhead, and makes workload orchastration much
> >>> much more
> >>> complex, especially colocating with workloads that don't use
> >>> hugetlbfs.
> >>>
> >> To address the obvious objection "but how could we
> >> possibly allocate 1GB huge pages while the workload
> >> is running?", I am planning to pick up the CMA balancing
> >> patch series (thank you, Frank) and get that in an
> >> upstream ready shape soon.
> >>
> >> https://lkml.org/2025/9/15/1735
> >
> > That link doesn't work?
> >
> > Did a quick search for CMA balancing on lore, couldn't find anything, could you
> > provide a lore link?
>
> https://lwn.net/Articles/1038263/
>
> >
> >>
> >> That patch set looks like another case where no
> >> amount of internal testing will find every single
> >> corner case, and we'll probably just want to
> >> merge it upstream, deploy it experimentally, and
> >> aggressively deal with anything that might pop up.
> >
> > I'm not really in favour of this kind of approach. There's plenty of things that
> > were considered 'temporary' upstream that became rather permanent :)
> >
> > Maybe we can't cover all corner-cases, but we need to make sure whatever we do
> > send upstream is maintainable, conceptually sensible and doesn't paint us into
> > any corners, etc.
> >
> >>
> >> With CMA balancing, it would be possibly to just
> >> have half (or even more) of system memory for
> >> movable allocations only, which would make it possible
> >> to allocate 1GB huge pages dynamically.
> >
> > Could you expand on that?
>
> I also would like to hear David’s opinion on using CMA for 1GB THP.
> He did not like it[1] when I posted my patch back in 2020, but it has
> been more than 5 years. :)
Yes please David :)
I find the idea of using the CMA for this a bit gross. And I fear we're
essentially expanding the hacks for DAX to everyone.
Again I really feel that we should be tackling technical debt here, rather
than adding features on shaky foundations and just making things worse.
We are inundated with series-after-series for THP trying to add features
but really not very many that are tackling this debt, and I think it's time
to get firmer about that.
>
> The other direction I explored is to get 1GB THP from buddy allocator.
> That means we need to:
> 1. bump MAX_PAGE_ORDER to 18 or make it a runtime variable so that only 1GB
> THP users need to bump it,
Would we need to bump the pageblock size too to stand more of a chance of
avoiding fragmentation?
Doing that though would result in reserves being way higher and thus more
memory used and we'd be in the territory of the unresolved issues with 64
KB page size kernels :)
> 2. handle cross memory section PFN merge in buddy allocator,
Ugh god...
> 3. improve anti-fragmentation mechanism for 1GB range compaction.
I think we'd really need something like this. Obviously there's the series
Rik refers to.
I mean CMA itself feels like a hack, though efforts are being made to at
least make it more robust (series mentioned, also the guaranteed CMA stuff
from Suren).
>
> 1 is easier-ish[2]. I have not looked into 2 and 3 much yet.
>
> [1] https://lore.kernel.org/all/52bc2d5d-eb8a-83de-1c93-abd329132d58@redhat.com/
> [2] https://lore.kernel.org/all/20210805190253.2795604-1-zi.yan@sent.com/
>
>
> Best Regards,
> Yan, Zi
Cheers, Lorenzo
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 00/12] mm: PUD (1GB) THP implementation
2026-02-04 1:00 ` Usama Arif
@ 2026-02-04 11:08 ` Lorenzo Stoakes
2026-02-04 11:50 ` Dev Jain
2026-02-05 6:08 ` Usama Arif
0 siblings, 2 replies; 49+ messages in thread
From: Lorenzo Stoakes @ 2026-02-04 11:08 UTC (permalink / raw)
To: Usama Arif
Cc: ziy, Andrew Morton, David Hildenbrand, linux-mm, hannes, riel,
shakeel.butt, kas, baohua, dev.jain, baolin.wang, npache,
Liam.Howlett, ryan.roberts, vbabka, lance.yang, linux-kernel,
kernel-team
On Tue, Feb 03, 2026 at 05:00:10PM -0800, Usama Arif wrote:
>
>
> On 02/02/2026 03:20, Lorenzo Stoakes wrote:
> > OK so this is somewhat unexpected :)
> >
> > It would have been nice to discuss it in the THP cabal or at a conference
> > etc. so we could discuss approaches ahead of time. Communication is important,
> > especially with major changes like this.
>
> Makes sense!
>
> >
> > And PUD THP is especially problematic in that it requires pages that the page
> > allocator can't give us, presumably you're doing something with CMA and... it's
> > a whole kettle of fish.
>
> So we dont need CMA. It helps ofcourse, but we don't *need* it.
> Its summarized in the first reply I gave to Zi in [1]:
>
> >
> > It's also complicated by the fact we _already_ support it in the DAX, VFIO cases
> > but it's kinda a weird sorta special case that we need to keep supporting.
> >
> > There's questions about how this will interact with khugepaged, MADV_COLLAPSE,
> > mTHP (and really I want to see Nico's series land before we really consider
> > this).
>
>
> So I have numbers and experiments for page faults which are in the cover letter,
> but not for khugepaged. I would be very surprised (although pleasently :)) if
> khugepaged by some magic finds 262144 pages that meets all the khugepaged requirements
> to collapse the page. In the basic infrastructure support which this series is adding,
> I want to keep khugepaged collapse disabled for 1G pages. This is also the initial
> approach that was taken in other mTHP sizes. We should go slow with 1G THPs.
Yes we definitely want to limit to page faults for now.
But keep in mind that for that to be viable you'd surely need to update who gets
appropriate alignment in __get_unmapped_area()... I've not read through the series
far enough to see, so I'm not sure whether you update that though!
I guess that'd be the sanest place to start: if an allocation _size_ is a multiple
of 1 GB, then align the unmapped area _address_ to 1 GB for the maximum chance of a
1 GB fault-in.
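For reference, until __get_unmapped_area() hands out 1 GB-aligned addresses for
such requests, userspace can force the alignment itself; a minimal sketch of the
usual over-allocate-and-trim trick (the helper name here is just for illustration,
not something in the series):

#include <sys/mman.h>

/* Over-allocate and trim so the returned range starts on a 1 GB boundary. */
static void *mmap_pud_aligned(size_t len)
{
	const size_t pud = 1UL << 30;
	char *raw = mmap(NULL, len + pud, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	char *p;

	if (raw == MAP_FAILED)
		return NULL;
	p = (char *)(((unsigned long)raw + pud - 1) & ~(pud - 1));
	if (p != raw)
		munmap(raw, p - raw);		/* drop the unaligned head */
	munmap(p + len, raw + pud - p);		/* drop the tail slack */
	return p;
}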
Oh by the way I made some rough THP notes at
https://publish.obsidian.md/mm/Transparent+Huge+Pages+(THP) which are helpful
for reminding me about what does what where, useful for a top-down view of how
things are now.
>
> >
> > So overall, I want to be very cautious and SLOW here. So let's please not drop
> > the RFC tag until David and I are ok with that?
> >
> > Also the THP code base is in _dire_ need of rework, and I don't really want to
> > add major new features without us paying down some technical debt, to be honest.
> >
> > So let's proceed with caution, and treat this as a very early bit of
> > experimental code.
> >
> > Thanks, Lorenzo
>
> Ack, yeah so this is mainly an RFC to discuss what the major design choices will be.
> I got a kernel with selftests for allocation, memory integrity, fork, partial munmap,
> mprotect, reclaim and migration passing and am running them with DEBUG_VM to make sure
> we dont get the VM bugs/warnings and the numbers are good, so just wanted to share it
> upstream and get your opinions! Basically try and trigger a discussion similar to what
> Zi asked in [2]! And also if someone could point out if there is something fundamental
> we are missing in this series.
Well that's fair enough :)
But do come to a THP cabal so we can chat, face-to-face (ok, digital face to
digital face ;). It's usually a force-multiplier I find, esp. if multiple people
have input which I think is the case here. We're friendly :)
In any case, conversations are already kicking off so that's definitely positive!
I think we will definitely get there with this at _some point_ but I would urge
patience and also I really want to underline my desire for us in THP to start
paying down some of this technical debt.
I know people are already making efforts (Vernon, Luiz), and sorry that I've not
been great at review recently (should be gradually increasing over time), but I
feel that for large features to be added like this now we really do require some
refactoring work before we take it.
We definitely need to rebase this once Nico's series lands (should do next
cycle) and think about how it plays with this, I'm not sure if arm64 supports
mTHP between PMD and PUD size (Dev? Do you know?) so maybe that one is moot, but
in general want to make sure it plays nice.
>
> Thanks for the reviews! Really do apprecaite it!
No worries! :)
>
> [1] https://lore.kernel.org/all/20f92576-e932-435f-bb7b-de49eb84b012@gmail.com/#t
> [2] https://lore.kernel.org/all/3561FD10-664D-42AA-8351-DE7D8D49D42E@nvidia.com/
Cheers, Lorenzo
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 00/12] mm: PUD (1GB) THP implementation
2026-02-04 11:08 ` Lorenzo Stoakes
@ 2026-02-04 11:50 ` Dev Jain
2026-02-04 12:01 ` Dev Jain
2026-02-05 6:08 ` Usama Arif
1 sibling, 1 reply; 49+ messages in thread
From: Dev Jain @ 2026-02-04 11:50 UTC (permalink / raw)
To: Lorenzo Stoakes, Usama Arif
Cc: ziy, Andrew Morton, David Hildenbrand, linux-mm, hannes, riel,
shakeel.butt, kas, baohua, baolin.wang, npache, Liam.Howlett,
ryan.roberts, vbabka, lance.yang, linux-kernel, kernel-team
On 04/02/26 4:38 pm, Lorenzo Stoakes wrote:
> On Tue, Feb 03, 2026 at 05:00:10PM -0800, Usama Arif wrote:
>>
>> On 02/02/2026 03:20, Lorenzo Stoakes wrote:
>>> OK so this is somewhat unexpected :)
>>>
>>> It would have been nice to discuss it in the THP cabal or at a conference
>>> etc. so we could discuss approaches ahead of time. Communication is important,
>>> especially with major changes like this.
>> Makes sense!
>>
>>> And PUD THP is especially problematic in that it requires pages that the page
>>> allocator can't give us, presumably you're doing something with CMA and... it's
>>> a whole kettle of fish.
>> So we dont need CMA. It helps ofcourse, but we don't *need* it.
>> Its summarized in the first reply I gave to Zi in [1]:
>>
>>> It's also complicated by the fact we _already_ support it in the DAX, VFIO cases
>>> but it's kinda a weird sorta special case that we need to keep supporting.
>>>
>>> There's questions about how this will interact with khugepaged, MADV_COLLAPSE,
>>> mTHP (and really I want to see Nico's series land before we really consider
>>> this).
>>
>> So I have numbers and experiments for page faults which are in the cover letter,
>> but not for khugepaged. I would be very surprised (although pleasantly :)) if
>> khugepaged by some magic finds 262144 pages that meet all the khugepaged requirements
>> to collapse the page. In the basic infrastructure support which this series is adding,
>> I want to keep khugepaged collapse disabled for 1G pages. This is also the initial
>> approach that was taken in other mTHP sizes. We should go slow with 1G THPs.
> Yes we definitely want to limit to page faults for now.
>
> But keep in mind for that to be viable you'd surely need to update who gets
> appropriate alignment in __get_unmapped_area()... not read through series far
> enough to see so not sure if you update that though!
>
> I guess that'd be the sanest place to start, if an allocation _size_ is aligned
> 1 GB, then align the unmapped area _address_ to 1 GB for maximum chance of 1 GB
> fault-in.
>
> Oh by the way I made some rough THP notes at
> https://publish.obsidian.md/mm/Transparent+Huge+Pages+(THP) which are helpful
> for reminding me about what does what where, useful for a top-down view of how
> things are now.
>
>>> So overall, I want to be very cautious and SLOW here. So let's please not drop
>>> the RFC tag until David and I are ok with that?
>>>
>>> Also the THP code base is in _dire_ need of rework, and I don't really want to
>>> add major new features without us paying down some technical debt, to be honest.
>>>
>>> So let's proceed with caution, and treat this as a very early bit of
>>> experimental code.
>>>
>>> Thanks, Lorenzo
>> Ack, yeah so this is mainly an RFC to discuss what the major design choices will be.
>> I got a kernel with selftests for allocation, memory integrity, fork, partial munmap,
>> mprotect, reclaim and migration passing and am running them with DEBUG_VM to make sure
>> we dont get the VM bugs/warnings and the numbers are good, so just wanted to share it
>> upstream and get your opinions! Basically try and trigger a discussion similar to what
>> Zi asked in [2]! And also if someone could point out if there is something fundamental
>> we are missing in this series.
> Well that's fair enough :)
>
> But do come to a THP cabal so we can chat, face-to-face (ok, digital face to
> digital face ;). It's usually a force-multiplier I find, esp. if multiple people
> have input which I think is the case here. We're friendly :)
>
> In any case, conversations are already kicking off so that's definitely positive!
>
> I think we will definitely get there with this at _some point_ but I would urge
> patience and also I really want to underline my desire for us in THP to start
> paying down some of this technical debt.
>
> I know people are already making efforts (Vernon, Luiz), and sorry that I've not
> been great at review recently (should be gradually increasing over time), but I
> feel that for large features to be added like this now we really do require some
> refactoring work before we take it.
>
> We definitely need to rebase this once Nico's series lands (should do next
> cycle) and think about how it plays with this, I'm not sure if arm64 supports
> mTHP between PMD and PUD size (Dev? Do you know?) so maybe that one is moot, but
arm64 does support cont mappings at the PMD level. Currently, they are supported
for kernel pagetables and hugetlb pages. You may search around for "CONT_PMD" in
the codebase. So it only supports cont PMD in the "static" case; there is
no dynamic folding/unfolding of the cont bit at the PMD level, which mTHP requires.
I see that this patchset splits PUD all the way down to PTEs. If we were to split
it down to PMD, and add arm64 support for dynamic cont mappings at the PMD level,
it would be nicer. But I guess there is some mapcount/rmap stuff involved
here stopping us from doing that :(
> in general want to make sure it plays nice.
>
>> Thanks for the reviews! Really do appreciate it!
> No worries! :)
>
>> [1] https://lore.kernel.org/all/20f92576-e932-435f-bb7b-de49eb84b012@gmail.com/#t
>> [2] https://lore.kernel.org/all/3561FD10-664D-42AA-8351-DE7D8D49D42E@nvidia.com/
> Cheers, Lorenzo
>
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 00/12] mm: PUD (1GB) THP implementation
2026-02-04 11:50 ` Dev Jain
@ 2026-02-04 12:01 ` Dev Jain
0 siblings, 0 replies; 49+ messages in thread
From: Dev Jain @ 2026-02-04 12:01 UTC (permalink / raw)
To: Lorenzo Stoakes, Usama Arif
Cc: ziy, Andrew Morton, David Hildenbrand, linux-mm, hannes, riel,
shakeel.butt, kas, baohua, baolin.wang, npache, Liam.Howlett,
ryan.roberts, vbabka, lance.yang, linux-kernel, kernel-team
On 04/02/26 5:20 pm, Dev Jain wrote:
> On 04/02/26 4:38 pm, Lorenzo Stoakes wrote:
>> On Tue, Feb 03, 2026 at 05:00:10PM -0800, Usama Arif wrote:
>>> On 02/02/2026 03:20, Lorenzo Stoakes wrote:
>>>> OK so this is somewhat unexpected :)
>>>>
>>>> It would have been nice to discuss it in the THP cabal or at a conference
>>>> etc. so we could discuss approaches ahead of time. Communication is important,
>>>> especially with major changes like this.
>>> Makes sense!
>>>
>>>> And PUD THP is especially problematic in that it requires pages that the page
>>>> allocator can't give us, presumably you're doing something with CMA and... it's
>>>> a whole kettle of fish.
>>> So we don't need CMA. It helps of course, but we don't *need* it.
>>> It's summarized in the first reply I gave to Zi in [1]:
>>>
>>>> It's also complicated by the fact we _already_ support it in the DAX, VFIO cases
>>>> but it's kinda a weird sorta special case that we need to keep supporting.
>>>>
>>>> There's questions about how this will interact with khugepaged, MADV_COLLAPSE,
>>>> mTHP (and really I want to see Nico's series land before we really consider
>>>> this).
>>> So I have numbers and experiments for page faults which are in the cover letter,
>>> but not for khugepaged. I would be very surprised (although pleasantly :)) if
>>> khugepaged by some magic finds 262144 pages that meet all the khugepaged requirements
>>> to collapse the page. In the basic infrastructure support which this series is adding,
>>> I want to keep khugepaged collapse disabled for 1G pages. This is also the initial
>>> approach that was taken in other mTHP sizes. We should go slow with 1G THPs.
>> Yes we definitely want to limit to page faults for now.
>>
>> But keep in mind for that to be viable you'd surely need to update who gets
>> appropriate alignment in __get_unmapped_area()... not read through series far
>> enough to see so not sure if you update that though!
>>
>> I guess that'd be the sanest place to start, if an allocation _size_ is aligned
>> 1 GB, then align the unmapped area _address_ to 1 GB for maximum chance of 1 GB
>> fault-in.
>>
>> Oh by the way I made some rough THP notes at
>> https://publish.obsidian.md/mm/Transparent+Huge+Pages+(THP) which are helpful
>> for reminding me about what does what where, useful for a top-down view of how
>> things are now.
>>
>>>> So overall, I want to be very cautious and SLOW here. So let's please not drop
>>>> the RFC tag until David and I are ok with that?
>>>>
>>>> Also the THP code base is in _dire_ need of rework, and I don't really want to
>>>> add major new features without us paying down some technical debt, to be honest.
>>>>
>>>> So let's proceed with caution, and treat this as a very early bit of
>>>> experimental code.
>>>>
>>>> Thanks, Lorenzo
>>> Ack, yeah so this is mainly an RFC to discuss what the major design choices will be.
>>> I got a kernel with selftests for allocation, memory integrity, fork, partial munmap,
>>> mprotect, reclaim and migration passing and am running them with DEBUG_VM to make sure
>>> we dont get the VM bugs/warnings and the numbers are good, so just wanted to share it
>>> upstream and get your opinions! Basically try and trigger a discussion similar to what
>>> Zi asked in [2]! And also if someone could point out if there is something fundamental
>>> we are missing in this series.
>> Well that's fair enough :)
>>
>> But do come to a THP cabal so we can chat, face-to-face (ok, digital face to
>> digital face ;). It's usually a force-multiplier I find, esp. if multiple people
>> have input which I think is the case here. We're friendly :)
>>
>> In any case, conversations are already kicking off so that's definitely positive!
>>
>> I think we will definitely get there with this at _some point_ but I would urge
>> patience and also I really want to underline my desire for us in THP to start
>> paying down some of this technical debt.
>>
>> I know people are already making efforts (Vernon, Luiz), and sorry that I've not
>> been great at review recently (should be gradually increasing over time), but I
>> feel that for large features to be added like this now we really do require some
>> refactoring work before we take it.
>>
>> We definitely need to rebase this once Nico's series lands (should do next
>> cycle) and think about how it plays with this, I'm not sure if arm64 supports
>> mTHP between PMD and PUD size (Dev? Do you know?) so maybe that one is moot, but
> arm64 does support cont mappings at the PMD level. Currently, they are supported
> for kernel pagetables, and hugetlbpages. You may search around for "CONT_PMD" in
> the codebase. Hence it only supports cont PMD in the "static" case, there is
> no dynamic folding/unfolding of the cont bit at the PMD level, which mTHP requires.
>
> I see that this patchset splits PUD all the way down to PTEs. If we were to split
> it down to PMD, and add arm64 support for dynamic cont mappings at the PMD level,
> it will be nicer. But I guess there is some mapcount/rmap stuff involved
> here stopping us from doing that :(
Hmm, this won't make a difference w.r.t. cont PMD. If we were to split a PUD folio
down to PMD folios, we won't get cont PMD. But yes, in general PMD mappings
are nicer.
>
>> in general want to make sure it plays nice.
>>
>>> Thanks for the reviews! Really do appreciate it!
>> No worries! :)
>>
>>> [1] https://lore.kernel.org/all/20f92576-e932-435f-bb7b-de49eb84b012@gmail.com/#t
>>> [2] https://lore.kernel.org/all/3561FD10-664D-42AA-8351-DE7D8D49D42E@nvidia.com/
>> Cheers, Lorenzo
>>
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 01/12] mm: add PUD THP ptdesc and rmap support
2026-02-04 7:38 ` Usama Arif
@ 2026-02-04 12:55 ` Lorenzo Stoakes
2026-02-05 6:40 ` Usama Arif
0 siblings, 1 reply; 49+ messages in thread
From: Lorenzo Stoakes @ 2026-02-04 12:55 UTC (permalink / raw)
To: Usama Arif
Cc: ziy, Andrew Morton, David Hildenbrand, linux-mm, hannes, riel,
shakeel.butt, kas, baohua, dev.jain, baolin.wang, npache,
Liam.Howlett, ryan.roberts, vbabka, lance.yang, linux-kernel,
kernel-team
On Tue, Feb 03, 2026 at 11:38:02PM -0800, Usama Arif wrote:
>
>
> On 02/02/2026 04:15, Lorenzo Stoakes wrote:
> > I think I'm going to have to do several passes on this, so this is just a
> > first one :)
> >
>
> Thanks! Really appreciate the reviews!
No worries!
>
> One thing over here is the higher level design decision when it comes to migration
> of 1G pages. As Zi said in [1]:
> "I also wonder what the purpose of PUD THP migration can be.
> It does not create memory fragmentation, since it is the largest folio size
> we have and contiguous. NUMA balancing 1GB THP seems too much work."
>
> > On Sun, Feb 01, 2026 at 04:50:18PM -0800, Usama Arif wrote:
> >> For page table management, PUD THPs need to pre-deposit page tables
> >> that will be used when the huge page is later split. When a PUD THP
> >> is allocated, we cannot know in advance when or why it might need to
> >> be split (COW, partial unmap, reclaim), but we need page tables ready
> >> for that eventuality. Similar to how PMD THPs deposit a single PTE
> >> table, PUD THPs deposit a PMD table which itself contains deposited
> >> PTE tables - a two-level deposit. This commit adds the deposit/withdraw
> >> infrastructure and a new pud_huge_pmd field in ptdesc to store the
> >> deposited PMD.
> >
> > This feels like you're hacking this support in, honestly. The list_head
> > abuse only adds to that feeling.
> >
>
> Yeah so I hope turning it to something like [2] is the way forward.
Right, that's one option, though David suggested avoiding this altogether by
only pre-allocating PTEs?
>
> > And are we now not required to store rather a lot of memory to keep all of
> > this coherent?
>
> PMD THP allocates 1 4K page (pte_alloc_one) at fault time so that a split
> doesn't fail.
>
> For PUD we allocate 2M worth of PTE page tables and 1 4K PMD table at fault
> time so that a split doesn't fail due to there not being enough memory.
> It's not great, but it's not bad either.
> The alternative is to allocate this at split time and so we are not
> pre-reserving them. Now there is a chance that allocation and therefore split
> fails, so the tradeoff is some memory vs reliability. This patch favours
> reliability.
That's a significant amount of unmovable, unreclaimable memory though. Going
from 4K to 2M is a pretty huge uptick.
>
> Let's say a user gets 100x1G THPs. They would end up using ~200M for it.
> I think that is OK-ish. If the user has 100G, 200M might not be an issue
> for them :)
But there's more than one user on boxes big enough for this, so this makes me
think we want this to be somehow opt-in right?
And that means we're incurring an unmovable memory penalty, the kind which we're
trying to avoid in general elsewhere in the kernel.
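(For reference, the pre-deposit cost being discussed works out roughly as
follows, assuming 4K base pages:

    512 PTE tables x 4 KiB = 2 MiB    (one PTE table per potential PMD entry)
  +   1 PMD table  x 4 KiB = 4 KiB
  ---------------------------------
  ~2 MiB of pre-allocated, unmovable page tables per 1G THP, i.e. ~200 MiB for
  100 x 1G THPs, versus a single deposited 4 KiB PTE table per 2M PMD THP.)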
>
> >
> >>
> >> The deposited PMD tables are stored as a singly-linked stack using only
> >> page->lru.next as the link pointer. A doubly-linked list using the
> >> standard list_head mechanism would cause memory corruption: list_del()
> >> poisons both lru.next (offset 8) and lru.prev (offset 16), but lru.prev
> >> overlaps with ptdesc->pmd_huge_pte at offset 16. Since deposited PMD
> >> tables have their own deposited PTE tables stored in pmd_huge_pte,
> >> poisoning lru.prev would corrupt the PTE table list and cause crashes
> >> when withdrawing PTE tables during split. PMD THPs don't have this
> >> problem because their deposited PTE tables don't have sub-deposits.
> >> Using only lru.next avoids the overlap entirely.
> >
> > Yeah this is horrendous and a hack, I don't consider this at all
> > upstreamable.
> >
> > You need to completely rework this.
>
> Hopefully [2] is the path forward!
Ack
> >
> >>
> >> For reverse mapping, PUD THPs need the same rmap support that PMD THPs
> >> have. The page_vma_mapped_walk() function is extended to recognize and
> >> handle PUD-mapped folios during rmap traversal. A new TTU_SPLIT_HUGE_PUD
> >> flag tells the unmap path to split PUD THPs before proceeding, since
> >> there is no PUD-level migration entry format - the split converts the
> >> single PUD mapping into individual PTE mappings that can be migrated
> >> or swapped normally.
> >
> > Individual PTE... mappings? You need to be a lot clearer here, page tables
> > are naturally confusing with entries vs. tables.
> >
> > Let's be VERY specific here. Do you mean you have 1 PMD table and 512 PTE
> > tables reserved, spanning 1 PUD entry and 262,144 PTE entries?
> >
>
> Yes that is correct, Thanks! I will change the commit message in the next revision
> to what you have written: 1 PMD table and 512 PTE tables reserved, spanning
> 1 PUD entry and 262,144 PTE entries.
Yeah :) my concerns remain :)
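(For clarity, those counts follow directly from the x86-64 page-table geometry,
assuming 4K base pages and 512 entries per table:

    1 GiB / 4 KiB        = 262,144 base pages, i.e. 262,144 PTE entries
    262,144 PTEs / 512   = 512 PTE tables, one per PMD entry
    512 PMD entries      = 1 PMD table, mapped by a single PUD entry)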
>
> >>
> >> Signed-off-by: Usama Arif <usamaarif642@gmail.com>
> >
> > How does this change interact with existing DAX/VFIO code, which now it
> > seems will be subject to the mechanisms you introduce here?
>
> I think what you mean here is the change in try_to_migrate_one?
>
>
> So one
Unfinished sentence? :P
No I mean currently we support 1G THP for DAX/VFIO right? So how does this
interplay with how that currently works? Does that change how DAX/VFIO works?
Will that impact existing users?
Or are we extending the existing mechanism?
>
> >
> > Right now DAX/VFIO is only obtainable via a specially THP-aligned
> > get_unmapped_area() + then can only be obtained at fault time.
> > > Is that the intent here also?
> >
>
> Ah thanks for pointing this out. This is something the series is missing.
>
> What I did in the selftest and benchmark was fault on an address that was already aligned.
> i.e. basically call the below function before faulting in.
>
> static inline void *pud_align(void *addr)
> {
> return (void *)(((unsigned long)addr + PUD_SIZE - 1) & ~(PUD_SIZE - 1));
> }
Right yeah :)
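Purely for illustration, here is a minimal sketch of that test pattern (not
taken from the series); it assumes x86-64 with 4K base pages, where PUD_SIZE is
1 GiB, and over-allocates so that a 1 GiB-aligned window is guaranteed to fit
inside the mapping:

#include <string.h>
#include <sys/mman.h>

#define PUD_SIZE (1UL << 30)	/* 1 GiB on x86-64 with 4K base pages */

static inline void *pud_align(void *addr)
{
	return (void *)(((unsigned long)addr + PUD_SIZE - 1) & ~(PUD_SIZE - 1));
}

int main(void)
{
	size_t len = PUD_SIZE;
	/* Over-allocate by PUD_SIZE so a 1 GiB-aligned window always fits. */
	void *buf = mmap(NULL, len + PUD_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;
	/* The first touch at the aligned address is where a PUD-sized
	 * fault would be attempted. */
	memset(pud_align(buf), 0, len);
	return 0;
}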
>
>
> What I think you are suggesting this series is missing is the below diff? (it's untested).
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 87b2c21df4a49..461158a0840db 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1236,6 +1236,12 @@ unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long add
> unsigned long ret;
> loff_t off = (loff_t)pgoff << PAGE_SHIFT;
>
> + if (IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) && len >= PUD_SIZE) {
> + ret = __thp_get_unmapped_area(filp, addr, len, off, flags, PUD_SIZE, vm_flags);
> + if (ret)
> + return ret;
> + }
No not that, that's going to cause issues, see commit d4148aeab4 for details as
to why this can go wrong.
In __get_unmapped_area(), where the current 'if PMD size aligned then align area'
logic lives - do it like that.
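Purely as an illustrative sketch of that idea (thp_desired_alignment() is a
made-up name, not a function in the series or in mainline): the
get_unmapped_area path would derive the alignment to request from the mapping
length, so a PUD-sized request gets a PUD-aligned address and smaller requests
keep the existing PMD behaviour:

static unsigned long thp_desired_alignment(unsigned long len)
{
	/* Prefer 1 GiB alignment when the request could hold a PUD THP... */
	if (IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) &&
	    len >= PUD_SIZE)
		return PUD_SIZE;
	/* ...otherwise fall back to the existing PMD-sized alignment. */
	if (len >= PMD_SIZE)
		return PMD_SIZE;
	return PAGE_SIZE;
}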
> +
>
>
> > What is your intent - that khugepaged do this, or on alloc? How does it
> > interact with MADV_COLLAPSE?
> >
>
> Ah basically what I mentioned in [3], we want to go slow. Only enable PUD THP
> page faults at the start. If there is data supporting that khugepaged will work
> then we can enable it, but for now we keep it disabled.
Yes I think khugepaged is probably never going to be all that good an idea with
this.
>
> > I noted on the 2nd patch, but you're changing THP_ORDERS_ALL_ANON which
> > alters __thp_vma_allowable_orders() behaviour, that change belongs here...
> >
> >
>
> Thanks for this! I only tried to split this code into logical commits
> after the whole thing was working. Some things are tightly coupled
> and I would need to move them to the right commit.
Yes there's a bunch of things that need tweaking here, to reiterate let's try to
pay down technical debt here and avoid copy/pasting :>)
>
> >> ---
> >> include/linux/huge_mm.h | 5 +++
> >> include/linux/mm.h | 19 ++++++++
> >> include/linux/mm_types.h | 5 ++-
> >> include/linux/pgtable.h | 8 ++++
> >> include/linux/rmap.h | 7 ++-
> >> mm/huge_memory.c | 8 ++++
> >> mm/internal.h | 3 ++
> >> mm/page_vma_mapped.c | 35 +++++++++++++++
> >> mm/pgtable-generic.c | 83 ++++++++++++++++++++++++++++++++++
> >> mm/rmap.c | 96 +++++++++++++++++++++++++++++++++++++---
> >> 10 files changed, 260 insertions(+), 9 deletions(-)
> >>
> >> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> >> index a4d9f964dfdea..e672e45bb9cc7 100644
> >> --- a/include/linux/huge_mm.h
> >> +++ b/include/linux/huge_mm.h
> >> @@ -463,10 +463,15 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
> >> unsigned long address);
> >>
> >> #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> >> +void split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
> >> + unsigned long address);
> >> int change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
> >> pud_t *pudp, unsigned long addr, pgprot_t newprot,
> >> unsigned long cp_flags);
> >> #else
> >> +static inline void
> >> +split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
> >> + unsigned long address) {}
> >> static inline int
> >> change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
> >> pud_t *pudp, unsigned long addr, pgprot_t newprot,
> >> diff --git a/include/linux/mm.h b/include/linux/mm.h
> >> index ab2e7e30aef96..a15e18df0f771 100644
> >> --- a/include/linux/mm.h
> >> +++ b/include/linux/mm.h
> >> @@ -3455,6 +3455,22 @@ static inline bool pagetable_pmd_ctor(struct mm_struct *mm,
> >> * considered ready to switch to split PUD locks yet; there may be places
> >> * which need to be converted from page_table_lock.
> >> */
> >> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> >> +static inline struct page *pud_pgtable_page(pud_t *pud)
> >> +{
> >> + unsigned long mask = ~(PTRS_PER_PUD * sizeof(pud_t) - 1);
> >> +
> >> + return virt_to_page((void *)((unsigned long)pud & mask));
> >> +}
> >> +
> >> +static inline struct ptdesc *pud_ptdesc(pud_t *pud)
> >> +{
> >> + return page_ptdesc(pud_pgtable_page(pud));
> >> +}
> >> +
> >> +#define pud_huge_pmd(pud) (pud_ptdesc(pud)->pud_huge_pmd)
> >> +#endif
> >> +
> >> static inline spinlock_t *pud_lockptr(struct mm_struct *mm, pud_t *pud)
> >> {
> >> return &mm->page_table_lock;
> >> @@ -3471,6 +3487,9 @@ static inline spinlock_t *pud_lock(struct mm_struct *mm, pud_t *pud)
> >> static inline void pagetable_pud_ctor(struct ptdesc *ptdesc)
> >> {
> >> __pagetable_ctor(ptdesc);
> >> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> >> + ptdesc->pud_huge_pmd = NULL;
> >> +#endif
> >> }
> >>
> >> static inline void pagetable_p4d_ctor(struct ptdesc *ptdesc)
> >> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> >> index 78950eb8926dc..26a38490ae2e1 100644
> >> --- a/include/linux/mm_types.h
> >> +++ b/include/linux/mm_types.h
> >> @@ -577,7 +577,10 @@ struct ptdesc {
> >> struct list_head pt_list;
> >> struct {
> >> unsigned long _pt_pad_1;
> >> - pgtable_t pmd_huge_pte;
> >> + union {
> >> + pgtable_t pmd_huge_pte; /* For PMD tables: deposited PTE */
> >> + pgtable_t pud_huge_pmd; /* For PUD tables: deposited PMD list */
> >> + };
> >> };
> >> };
> >> unsigned long __page_mapping;
> >> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> >> index 2f0dd3a4ace1a..3ce733c1d71a2 100644
> >> --- a/include/linux/pgtable.h
> >> +++ b/include/linux/pgtable.h
> >> @@ -1168,6 +1168,14 @@ extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
> >> #define arch_needs_pgtable_deposit() (false)
> >> #endif
> >>
> >> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> >> +extern void pgtable_trans_huge_pud_deposit(struct mm_struct *mm, pud_t *pudp,
> >> + pmd_t *pmd_table);
> >> +extern pmd_t *pgtable_trans_huge_pud_withdraw(struct mm_struct *mm, pud_t *pudp);
> >> +extern void pud_deposit_pte(pmd_t *pmd_table, pgtable_t pgtable);
> >> +extern pgtable_t pud_withdraw_pte(pmd_t *pmd_table);
> >
> > These are useless extern's.
> >
>
>
> ack
>
> These are coming from the existing functions from the file:
> extern void pgtable_trans_huge_deposit
> extern pgtable_t pgtable_trans_huge_withdraw
>
> I think the externs can be removed from these as well? We can
> fix those in a separate patch.
Generally the approach is to remove externs when adding/changing new stuff as
otherwise we get completely useless churn on that and annoying git history
changes.
>
>
> >> +#endif
> >> +
> >> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> >> /*
> >> * This is an implementation of pmdp_establish() that is only suitable for an
> >> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> >> index daa92a58585d9..08cd0a0eb8763 100644
> >> --- a/include/linux/rmap.h
> >> +++ b/include/linux/rmap.h
> >> @@ -101,6 +101,7 @@ enum ttu_flags {
> >> * do a final flush if necessary */
> >> TTU_RMAP_LOCKED = 0x80, /* do not grab rmap lock:
> >> * caller holds it */
> >> + TTU_SPLIT_HUGE_PUD = 0x100, /* split huge PUD if any */
> >> };
> >>
> >> #ifdef CONFIG_MMU
> >> @@ -473,6 +474,8 @@ void folio_add_anon_rmap_ptes(struct folio *, struct page *, int nr_pages,
> >> folio_add_anon_rmap_ptes(folio, page, 1, vma, address, flags)
> >> void folio_add_anon_rmap_pmd(struct folio *, struct page *,
> >> struct vm_area_struct *, unsigned long address, rmap_t flags);
> >> +void folio_add_anon_rmap_pud(struct folio *, struct page *,
> >> + struct vm_area_struct *, unsigned long address, rmap_t flags);
> >> void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
> >> unsigned long address, rmap_t flags);
> >> void folio_add_file_rmap_ptes(struct folio *, struct page *, int nr_pages,
> >> @@ -933,6 +936,7 @@ struct page_vma_mapped_walk {
> >> pgoff_t pgoff;
> >> struct vm_area_struct *vma;
> >> unsigned long address;
> >> + pud_t *pud;
> >> pmd_t *pmd;
> >> pte_t *pte;
> >> spinlock_t *ptl;
> >> @@ -970,7 +974,7 @@ static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
> >> static inline void
> >> page_vma_mapped_walk_restart(struct page_vma_mapped_walk *pvmw)
> >> {
> >> - WARN_ON_ONCE(!pvmw->pmd && !pvmw->pte);
> >> + WARN_ON_ONCE(!pvmw->pud && !pvmw->pmd && !pvmw->pte);
> >>
> >> if (likely(pvmw->ptl))
> >> spin_unlock(pvmw->ptl);
> >> @@ -978,6 +982,7 @@ page_vma_mapped_walk_restart(struct page_vma_mapped_walk *pvmw)
> >> WARN_ON_ONCE(1);
> >>
> >> pvmw->ptl = NULL;
> >> + pvmw->pud = NULL;
> >> pvmw->pmd = NULL;
> >> pvmw->pte = NULL;
> >> }
> >> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> >> index 40cf59301c21a..3128b3beedb0a 100644
> >> --- a/mm/huge_memory.c
> >> +++ b/mm/huge_memory.c
> >> @@ -2933,6 +2933,14 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
> >> spin_unlock(ptl);
> >> mmu_notifier_invalidate_range_end(&range);
> >> }
> >> +
> >> +void split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
> >> + unsigned long address)
> >> +{
> >> + VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PUD_SIZE));
> >> + if (pud_trans_huge(*pud))
> >> + __split_huge_pud_locked(vma, pud, address);
> >> +}
> >> #else
> >> void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
> >> unsigned long address)
> >> diff --git a/mm/internal.h b/mm/internal.h
> >> index 9ee336aa03656..21d5c00f638dc 100644
> >> --- a/mm/internal.h
> >> +++ b/mm/internal.h
> >> @@ -545,6 +545,9 @@ int user_proactive_reclaim(char *buf,
> >> * in mm/rmap.c:
> >> */
> >> pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address);
> >> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> >> +pud_t *mm_find_pud(struct mm_struct *mm, unsigned long address);
> >> +#endif
> >>
> >> /*
> >> * in mm/page_alloc.c
> >> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> >> index b38a1d00c971b..d31eafba38041 100644
> >> --- a/mm/page_vma_mapped.c
> >> +++ b/mm/page_vma_mapped.c
> >> @@ -146,6 +146,18 @@ static bool check_pmd(unsigned long pfn, struct page_vma_mapped_walk *pvmw)
> >> return true;
> >> }
> >>
> >> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> >> +/* Returns true if the two ranges overlap. Careful to not overflow. */
> >> +static bool check_pud(unsigned long pfn, struct page_vma_mapped_walk *pvmw)
> >> +{
> >> + if ((pfn + HPAGE_PUD_NR - 1) < pvmw->pfn)
> >> + return false;
> >> + if (pfn > pvmw->pfn + pvmw->nr_pages - 1)
> >> + return false;
> >> + return true;
> >> +}
> >> +#endif
> >> +
> >> static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
> >> {
> >> pvmw->address = (pvmw->address + size) & ~(size - 1);
> >> @@ -188,6 +200,10 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
> >> pud_t *pud;
> >> pmd_t pmde;
> >>
> >> + /* The only possible pud mapping has been handled on last iteration */
> >> + if (pvmw->pud && !pvmw->pmd)
> >> + return not_found(pvmw);
> >> +
> >> /* The only possible pmd mapping has been handled on last iteration */
> >> if (pvmw->pmd && !pvmw->pte)
> >> return not_found(pvmw);
> >> @@ -234,6 +250,25 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
> >> continue;
> >> }
> >>
> >> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> >
> > Said it elsewhere, but it's really weird to treat an arch having the
> > ability to do something as a go ahead for doing it.
> >
> >> + /* Check for PUD-mapped THP */
> >> + if (pud_trans_huge(*pud)) {
> >> + pvmw->pud = pud;
> >> + pvmw->ptl = pud_lock(mm, pud);
> >> + if (likely(pud_trans_huge(*pud))) {
> >> + if (pvmw->flags & PVMW_MIGRATION)
> >> + return not_found(pvmw);
> >> + if (!check_pud(pud_pfn(*pud), pvmw))
> >> + return not_found(pvmw);
> >> + return true;
> >> + }
> >> + /* PUD was split under us, retry at PMD level */
> >> + spin_unlock(pvmw->ptl);
> >> + pvmw->ptl = NULL;
> >> + pvmw->pud = NULL;
> >> + }
> >> +#endif
> >> +
> >
> > Yeah, as I said elsewhere, we got to be refactoring not copy/pasting with
> > modifications :)
> >
>
> Yeah there is repeated code in multiple places, where all I did was adapt
> what was done for PMD to PUD. In a lot of places, it's actually difficult
> not to repeat the code (unless we want function macros, which is much worse
> IMO).
Not if we actually refactor the existing code :)
When I wanted to make functional changes to mremap I took a lot of time to
refactor the code into something sane before even starting that.
Because I _could_ have added the features there as-is, but it would have been
hellish to do so and it would have added more confusion etc.
So yeah, I think a similar mentality has to be had with this change.
>
> >
> >> pvmw->pmd = pmd_offset(pud, pvmw->address);
> >> /*
> >> * Make sure the pmd value isn't cached in a register by the
> >> diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
> >> index d3aec7a9926ad..2047558ddcd79 100644
> >> --- a/mm/pgtable-generic.c
> >> +++ b/mm/pgtable-generic.c
> >> @@ -195,6 +195,89 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
> >> }
> >> #endif
> >>
> >> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> >> +/*
> >> + * Deposit page tables for PUD THP.
> >> + * Called with PUD lock held. Stores PMD tables in a singly-linked stack
> >> + * via pud_huge_pmd, using only pmd_page->lru.next as the link pointer.
> >> + *
> >> + * IMPORTANT: We use only lru.next (offset 8) for linking, NOT the full
> >> + * list_head. This is because lru.prev (offset 16) overlaps with
> >> + * ptdesc->pmd_huge_pte, which stores the PMD table's deposited PTE tables.
> >> + * Using list_del() would corrupt pmd_huge_pte with LIST_POISON2.
> >
> > This is horrible and feels like a hack? Treating a doubly-linked list as a
> > singly-linked one like this is not upstreamable.
> >
> >> + *
> >> + * PTE tables should be deposited into the PMD using pud_deposit_pte().
> >> + */
> >> +void pgtable_trans_huge_pud_deposit(struct mm_struct *mm, pud_t *pudp,
> >> + pmd_t *pmd_table)
> >
> > This is a horrid, you're depositing the PMD using the... questionable
> > list_head abuse, but then also have pud_deposit_pte()... But here we're
> > depositing a PMD shouldn't the name reflect that?
> >
> >> +{
> >> + pgtable_t pmd_page = virt_to_page(pmd_table);
> >> +
> >> + assert_spin_locked(pud_lockptr(mm, pudp));
> >> +
> >> + /* Push onto stack using only lru.next as the link */
> >> + pmd_page->lru.next = (struct list_head *)pud_huge_pmd(pudp);
> >
> > Yikes...
> >
> >> + pud_huge_pmd(pudp) = pmd_page;
> >> +}
> >> +
> >> +/*
> >> + * Withdraw the deposited PMD table for PUD THP split or zap.
> >> + * Called with PUD lock held.
> >> + * Returns NULL if no more PMD tables are deposited.
> >> + */
> >> +pmd_t *pgtable_trans_huge_pud_withdraw(struct mm_struct *mm, pud_t *pudp)
> >> +{
> >> + pgtable_t pmd_page;
> >> +
> >> + assert_spin_locked(pud_lockptr(mm, pudp));
> >> +
> >> + pmd_page = pud_huge_pmd(pudp);
> >> + if (!pmd_page)
> >> + return NULL;
> >> +
> >> + /* Pop from stack - lru.next points to next PMD page (or NULL) */
> >> + pud_huge_pmd(pudp) = (pgtable_t)pmd_page->lru.next;
> >
> > Where's the popping? You're just assigning here.
>
>
> Ack on all of the above. Hopefully [1] is better.
Thanks!
> >
> >> +
> >> + return page_address(pmd_page);
> >> +}
> >> +
> >> +/*
> >> + * Deposit a PTE table into a standalone PMD table (not yet in page table hierarchy).
> >> + * Used for PUD THP pre-deposit. The PMD table's pmd_huge_pte stores a linked list.
> >> + * No lock assertion since the PMD isn't visible yet.
> >> + */
> >> +void pud_deposit_pte(pmd_t *pmd_table, pgtable_t pgtable)
> >> +{
> >> + struct ptdesc *ptdesc = virt_to_ptdesc(pmd_table);
> >> +
> >> + /* FIFO - add to front of list */
> >> + if (!ptdesc->pmd_huge_pte)
> >> + INIT_LIST_HEAD(&pgtable->lru);
> >> + else
> >> + list_add(&pgtable->lru, &ptdesc->pmd_huge_pte->lru);
> >> + ptdesc->pmd_huge_pte = pgtable;
> >> +}
> >> +
> >> +/*
> >> + * Withdraw a PTE table from a standalone PMD table.
> >> + * Returns NULL if no more PTE tables are deposited.
> >> + */
> >> +pgtable_t pud_withdraw_pte(pmd_t *pmd_table)
> >> +{
> >> + struct ptdesc *ptdesc = virt_to_ptdesc(pmd_table);
> >> + pgtable_t pgtable;
> >> +
> >> + pgtable = ptdesc->pmd_huge_pte;
> >> + if (!pgtable)
> >> + return NULL;
> >> + ptdesc->pmd_huge_pte = list_first_entry_or_null(&pgtable->lru,
> >> + struct page, lru);
> >> + if (ptdesc->pmd_huge_pte)
> >> + list_del(&pgtable->lru);
> >> + return pgtable;
> >> +}
> >> +#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
> >> +
> >> #ifndef __HAVE_ARCH_PMDP_INVALIDATE
> >> pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
> >> pmd_t *pmdp)
> >> diff --git a/mm/rmap.c b/mm/rmap.c
> >> index 7b9879ef442d9..69acabd763da4 100644
> >> --- a/mm/rmap.c
> >> +++ b/mm/rmap.c
> >> @@ -811,6 +811,32 @@ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address)
> >> return pmd;
> >> }
> >>
> >> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> >> +/*
> >> + * Returns the actual pud_t* where we expect 'address' to be mapped from, or
> >> + * NULL if it doesn't exist. No guarantees / checks on what the pud_t*
> >> + * represents.
> >> + */
> >> +pud_t *mm_find_pud(struct mm_struct *mm, unsigned long address)
> >
> > This series seems to be full of copy/paste.
> >
> > It's just not acceptable given the state of THP code as I said in reply to
> > the cover letter - you need to _refactor_ the code.
> >
> > The code is bug-prone and difficult to maintain as-is, your series has to
> > improve the technical debt, not add to it.
> >
>
> In some cases we might not be able to avoid the copy, but this is definitely
> a place where we don't need to. I will change it here. Thanks!
I disagree, see above :) But thanks on this one
>
> >> +{
> >> + pgd_t *pgd;
> >> + p4d_t *p4d;
> >> + pud_t *pud = NULL;
> >> +
> >> + pgd = pgd_offset(mm, address);
> >> + if (!pgd_present(*pgd))
> >> + goto out;
> >> +
> >> + p4d = p4d_offset(pgd, address);
> >> + if (!p4d_present(*p4d))
> >> + goto out;
> >> +
> >> + pud = pud_offset(p4d, address);
> >> +out:
> >> + return pud;
> >> +}
> >> +#endif
> >> +
> >> struct folio_referenced_arg {
> >> int mapcount;
> >> int referenced;
> >> @@ -1415,11 +1441,7 @@ static __always_inline void __folio_add_anon_rmap(struct folio *folio,
> >> SetPageAnonExclusive(page);
> >> break;
> >> case PGTABLE_LEVEL_PUD:
> >> - /*
> >> - * Keep the compiler happy, we don't support anonymous
> >> - * PUD mappings.
> >> - */
> >> - WARN_ON_ONCE(1);
> >> + SetPageAnonExclusive(page);
> >> break;
> >> default:
> >> BUILD_BUG();
> >> @@ -1503,6 +1525,31 @@ void folio_add_anon_rmap_pmd(struct folio *folio, struct page *page,
> >> #endif
> >> }
> >>
> >> +/**
> >> + * folio_add_anon_rmap_pud - add a PUD mapping to a page range of an anon folio
> >> + * @folio: The folio to add the mapping to
> >> + * @page: The first page to add
> >> + * @vma: The vm area in which the mapping is added
> >> + * @address: The user virtual address of the first page to map
> >> + * @flags: The rmap flags
> >> + *
> >> + * The page range of folio is defined by [first_page, first_page + HPAGE_PUD_NR)
> >> + *
> >> + * The caller needs to hold the page table lock, and the page must be locked in
> >> + * the anon_vma case: to serialize mapping,index checking after setting.
> >> + */
> >> +void folio_add_anon_rmap_pud(struct folio *folio, struct page *page,
> >> + struct vm_area_struct *vma, unsigned long address, rmap_t flags)
> >> +{
> >> +#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && \
> >> + defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
> >> + __folio_add_anon_rmap(folio, page, HPAGE_PUD_NR, vma, address, flags,
> >> + PGTABLE_LEVEL_PUD);
> >> +#else
> >> + WARN_ON_ONCE(true);
> >> +#endif
> >> +}
> >
> > More copy/paste... Maybe unavoidable in this case, but be good to try.
> >
> >> +
> >> /**
> >> * folio_add_new_anon_rmap - Add mapping to a new anonymous folio.
> >> * @folio: The folio to add the mapping to.
> >> @@ -1934,6 +1981,20 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >> }
> >>
> >> if (!pvmw.pte) {
> >> + /*
> >> + * Check for PUD-mapped THP first.
> >> + * If we have a PUD mapping and TTU_SPLIT_HUGE_PUD is set,
> >> + * split the PUD to PMD level and restart the walk.
> >> + */
> >
> > This is literally describing the code below, it's not useful.
>
> Ack, Will remove this comment, Thanks!
Thanks
> >
> >> + if (pvmw.pud && pud_trans_huge(*pvmw.pud)) {
> >> + if (flags & TTU_SPLIT_HUGE_PUD) {
> >> + split_huge_pud_locked(vma, pvmw.pud, pvmw.address);
> >> + flags &= ~TTU_SPLIT_HUGE_PUD;
> >> + page_vma_mapped_walk_restart(&pvmw);
> >> + continue;
> >> + }
> >> + }
> >> +
> >> if (folio_test_anon(folio) && !folio_test_swapbacked(folio)) {
> >> if (unmap_huge_pmd_locked(vma, pvmw.address, pvmw.pmd, folio))
> >> goto walk_done;
> >> @@ -2325,6 +2386,27 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
> >> mmu_notifier_invalidate_range_start(&range);
> >>
> >> while (page_vma_mapped_walk(&pvmw)) {
> >> + /* Handle PUD-mapped THP first */
> >
> > How did/will this interact with DAX, VFIO PUD THP?
>
> It won't interact with DAX. try_to_migrate does the below and just returns:
>
> if (folio_is_zone_device(folio) &&
> (!folio_is_device_private(folio) && !folio_is_device_coherent(folio)))
> return;
>
> so DAX would never reach here.
Hmm folio_is_zone_device() always returns true for DAX?
Also that's just one rmap call right?
>
> I think vfio pages are pinned and therefore can't be migrated? (I have
> not looked at vfio code, I will try to get a better understanding tomorrow,
> but please let me know if that sounds wrong.)
OK I've not dug into this either, so please do check, and it'd be good really to test
this code vs. actual DAX/VFIO scenarios if you can find a way to do that, thanks!
>
>
> >
> >> + if (!pvmw.pte && !pvmw.pmd) {
> >> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> >
> > Won't pud_trans_huge() imply this...
> >
>
> Agreed, I think it should cover it.
Thanks!
>
> >> + /*
> >> + * PUD-mapped THP: skip migration to preserve the huge
> >> + * page. Splitting would defeat the purpose of PUD THPs.
> >> + * Return false to indicate migration failure, which
> >> + * will cause alloc_contig_range() to try a different
> >> + * memory region.
> >> + */
> >> + if (pvmw.pud && pud_trans_huge(*pvmw.pud)) {
> >> + page_vma_mapped_walk_done(&pvmw);
> >> + ret = false;
> >> + break;
> >> + }
> >> +#endif
> >> + /* Unexpected state: !pte && !pmd but not a PUD THP */
> >> + page_vma_mapped_walk_done(&pvmw);
> >> + break;
> >> + }
> >> +
> >> /* PMD-mapped THP migration entry */
> >> if (!pvmw.pte) {
> >> __maybe_unused unsigned long pfn;
> >> @@ -2607,10 +2689,10 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
> >>
> >> /*
> >> * Migration always ignores mlock and only supports TTU_RMAP_LOCKED and
> >> - * TTU_SPLIT_HUGE_PMD, TTU_SYNC, and TTU_BATCH_FLUSH flags.
> >> + * TTU_SPLIT_HUGE_PMD, TTU_SPLIT_HUGE_PUD, TTU_SYNC, and TTU_BATCH_FLUSH flags.
> >> */
> >> if (WARN_ON_ONCE(flags & ~(TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
> >> - TTU_SYNC | TTU_BATCH_FLUSH)))
> >> + TTU_SPLIT_HUGE_PUD | TTU_SYNC | TTU_BATCH_FLUSH)))
> >> return;
> >>
> >> if (folio_is_zone_device(folio) &&
> >> --
> >> 2.47.3
> >>
> >
> > This isn't a final review, I'll have to look more thoroughly through here
> > over time and you're going to have to be patient in general :)
> >
> > Cheers, Lorenzo
>
>
> Thanks for the review, this is awesome!
Ack, will do more when I have time, and obviously you're getting a lot of input
from others too.
Be good to get a summary at next THP cabal ;)
>
>
> [1] https://lore.kernel.org/all/20f92576-e932-435f-bb7b-de49eb84b012@gmail.com/
> [2] https://lore.kernel.org/all/05d5918f-b61b-4091-b8c6-20eebfffc3c4@gmail.com/
> [3] https://lore.kernel.org/all/2efaa5ed-bd09-41f0-9c07-5cd6cccc4595@gmail.com/
>
>
>
cheers, Lorenzo
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 01/12] mm: add PUD THP ptdesc and rmap support
2026-02-03 22:07 ` Usama Arif
@ 2026-02-05 4:17 ` Matthew Wilcox
2026-02-05 4:21 ` Matthew Wilcox
0 siblings, 1 reply; 49+ messages in thread
From: Matthew Wilcox @ 2026-02-05 4:17 UTC (permalink / raw)
To: Usama Arif
Cc: Zi Yan, Kiryl Shutsemau, lorenzo.stoakes, Andrew Morton,
David Hildenbrand, linux-mm, hannes, riel, shakeel.butt, baohua,
dev.jain, baolin.wang, npache, Liam.Howlett, ryan.roberts,
vbabka, lance.yang, linux-kernel, kernel-team
On Tue, Feb 03, 2026 at 02:07:25PM -0800, Usama Arif wrote:
> Ah I should have looked at your patches more! I started working by just using lru
> and was using list_add/list_del which was of course corrupting the list and took me
> way more time than I would like to admit to debug what was going on! The diagrams
> in your 2nd link are really useful. I ended up drawing those by hand to debug
> the corruption issue. I will point to that link in the next series :)
>
> How about something like the below diff over this patch? (Not included the comment
> changes that I will make everywhere)
Why are you even talking about "the next series"? The approach is
wrong. You need to put this POC aside and solve the problems that
you've bypassed to create this POC.
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 01/12] mm: add PUD THP ptdesc and rmap support
2026-02-05 4:17 ` Matthew Wilcox
@ 2026-02-05 4:21 ` Matthew Wilcox
2026-02-05 5:13 ` Usama Arif
0 siblings, 1 reply; 49+ messages in thread
From: Matthew Wilcox @ 2026-02-05 4:21 UTC (permalink / raw)
To: Usama Arif
Cc: Zi Yan, Kiryl Shutsemau, lorenzo.stoakes, Andrew Morton,
David Hildenbrand, linux-mm, hannes, riel, shakeel.butt, baohua,
dev.jain, baolin.wang, npache, Liam.Howlett, ryan.roberts,
vbabka, lance.yang, linux-kernel, kernel-team
On Thu, Feb 05, 2026 at 04:17:19AM +0000, Matthew Wilcox wrote:
> Why are you even talking about "the next series"? The approach is
> wrong. You need to put this POC aside and solve the problems that
> you've bypassed to create this POC.
... and gmail is rejecting this email as being spam. You need to stop
using gmail for kernel development work.
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 01/12] mm: add PUD THP ptdesc and rmap support
2026-02-05 4:21 ` Matthew Wilcox
@ 2026-02-05 5:13 ` Usama Arif
2026-02-05 17:40 ` David Hildenbrand (Arm)
0 siblings, 1 reply; 49+ messages in thread
From: Usama Arif @ 2026-02-05 5:13 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Zi Yan, Kiryl Shutsemau, lorenzo.stoakes, Andrew Morton,
David Hildenbrand, linux-mm, hannes, riel, shakeel.butt, baohua,
dev.jain, baolin.wang, npache, Liam.Howlett, ryan.roberts,
vbabka, lance.yang, linux-kernel, kernel-team
On 04/02/2026 20:21, Matthew Wilcox wrote:
> On Thu, Feb 05, 2026 at 04:17:19AM +0000, Matthew Wilcox wrote:
>> Why are you even talking about "the next series"? The approach is
>> wrong. You need to put this POC aside and solve the problems that
>> you've bypassed to create this POC.
Ah is the issue the code duplication that Lorenzo has raised (of course I
completely agree that there is quite a bit), the lru.next patch I did
which hopefully [1] makes better, or investigating if it might be
interfering with DAX/VFIO that Lorenzo pointed out (will of course
investigate before sending the next revision)? The mapcount work
(I think David is working on this?) that is needed to allow splitting
PUDs to PMDs is a completely separate issue and can be tackled in parallel
to this.
>
> ... and gmail is rejecting this email as being spam. You need to stop
> using gmail for kernel development work.
I asked a couple of folks now and it seems they got it without any issue.
I have used it for a long time. I will try and see if something has changed.
[1] https://lore.kernel.org/all/05d5918f-b61b-4091-b8c6-20eebfffc3c4@gmail.com/
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 00/12] mm: PUD (1GB) THP implementation
2026-02-04 0:08 ` Frank van der Linden
@ 2026-02-05 5:46 ` Usama Arif
0 siblings, 0 replies; 49+ messages in thread
From: Usama Arif @ 2026-02-05 5:46 UTC (permalink / raw)
To: Frank van der Linden
Cc: Zi Yan, Andrew Morton, David Hildenbrand, lorenzo.stoakes,
linux-mm, hannes, riel, shakeel.butt, kas, baohua, dev.jain,
baolin.wang, npache, Liam.Howlett, ryan.roberts, vbabka,
lance.yang, linux-kernel, kernel-team
On 03/02/2026 16:08, Frank van der Linden wrote:
> On Tue, Feb 3, 2026 at 3:29 PM Usama Arif <usamaarif642@gmail.com> wrote:
>>
>>
>>
>> On 02/02/2026 08:24, Zi Yan wrote:
>>> On 1 Feb 2026, at 19:50, Usama Arif wrote:
>>>
>>>> This is an RFC series to implement 1GB PUD-level THPs, allowing
>>>> applications to benefit from reduced TLB pressure without requiring
>>>> hugetlbfs. The patches are based on top of
>>>> f9b74c13b773b7c7e4920d7bc214ea3d5f37b422 from mm-stable (6.19-rc6).
>>>
>>> It is nice to see you are working on 1GB THP.
>>>
>>>>
>>>> Motivation: Why 1GB THP over hugetlbfs?
>>>> =======================================
>>>>
>>>> While hugetlbfs provides 1GB huge pages today, it has significant limitations
>>>> that make it unsuitable for many workloads:
>>>>
>>>> 1. Static Reservation: hugetlbfs requires pre-allocating huge pages at boot
>>>> or runtime, taking memory away. This requires capacity planning,
>>>> administrative overhead, and makes workload orchastration much much more
>>>> complex, especially colocating with workloads that don't use hugetlbfs.
>>>
>>> But you are using CMA, the same allocation mechanism as hugetlb_cma. What
>>> is the difference?
>>>
>>
>> So we don't really need to use CMA. CMA can help a lot of course, but we don't *need* it.
>> E.g. I can run the very simple case [1] of trying to get 1G pages in the upstream
>> kernel without CMA on my server and it works. The server has been up for more than a week
>> (so pretty fragmented), is running a bunch of stuff in the background, uses 0 CMA memory,
>> and I tried to get 20x1G pages on it and it worked.
>> It uses folio_alloc_gigantic, which is exactly what this series uses:
>>
>> $ uptime -p
>> up 1 week, 3 days, 5 hours, 7 minutes
>> $ cat /proc/meminfo | grep -i cma
>> CmaTotal: 0 kB
>> CmaFree: 0 kB
>> $ echo 20 | sudo tee /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
>> 20
>> $ cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
>> 20
>> $ free -h
>> total used free shared buff/cache available
>> Mem: 1.0Ti 142Gi 292Gi 143Mi 583Gi 868Gi
>> Swap: 129Gi 3.5Gi 126Gi
>> $ ./map_1g_hugepages
>> Mapping 20 x 1GB huge pages (20 GB total)
>> Mapped at 0x7f43c0000000
>> Touched page 0 at 0x7f43c0000000
>> Touched page 1 at 0x7f4400000000
>> Touched page 2 at 0x7f4440000000
>> Touched page 3 at 0x7f4480000000
>> Touched page 4 at 0x7f44c0000000
>> Touched page 5 at 0x7f4500000000
>> Touched page 6 at 0x7f4540000000
>> Touched page 7 at 0x7f4580000000
>> Touched page 8 at 0x7f45c0000000
>> Touched page 9 at 0x7f4600000000
>> Touched page 10 at 0x7f4640000000
>> Touched page 11 at 0x7f4680000000
>> Touched page 12 at 0x7f46c0000000
>> Touched page 13 at 0x7f4700000000
>> Touched page 14 at 0x7f4740000000
>> Touched page 15 at 0x7f4780000000
>> Touched page 16 at 0x7f47c0000000
>> Touched page 17 at 0x7f4800000000
>> Touched page 18 at 0x7f4840000000
>> Touched page 19 at 0x7f4880000000
>> Unmapped successfully
>>
>>
>>
>>
>>>>
>>>> 4. No Fallback: If a 1GB huge page cannot be allocated, hugetlbfs fails
>>>> rather than falling back to smaller pages. This makes it fragile under
>>>> memory pressure.
>>>
>>> True.
>>>
>>>>
>>>> 4. No Splitting: hugetlbfs pages cannot be split when only partial access
>>>> is needed, leading to memory waste and preventing partial reclaim.
>>>
>>> Since you have PUD THP implementation, have you run any workload on it?
>>> How often you see a PUD THP split?
>>>
>>
>> Ah so running non-upstream kernels in production is a bit more difficult
>> (and also risky). I was trying to use the 512M experiment on arm as a comparison,
>> although I know it's not the same thing with PAGE_SIZE and pageblock order.
>>
>> I can try some other upstream benchmarks if it helps? Although I will need to find
>> ones that create VMAs > 1G.
>>
>>> Oh, you actually ran 512MB THP on ARM64 (I saw it below), do you have
>>> any split stats to show the necessity of THP split?
>>>
>>>>
>>>> 5. Memory Accounting: hugetlbfs memory is accounted separately and cannot
>>>> be easily shared with regular memory pools.
>>>
>>> True.
>>>
>>>>
>>>> PUD THP solves these limitations by integrating 1GB pages into the existing
>>>> THP infrastructure.
>>>
>>> The main advantage of PUD THP over hugetlb is that it can be split and mapped
>>> at sub-folio level. Do you have any data to support the necessity of them?
>>> I wonder if it would be easier to just support 1GB folio in core-mm first
>>> and we can add 1GB THP split and sub-folio mapping later. With that, we
>>> can move hugetlb users to 1GB folio.
>>>
>>
>> I would say it's not the main advantage? But it's definitely one of them.
>> The 2 main areas where split would be helpful are partial munmap and
>> reclaim (MADV_PAGEOUT). E.g. jemalloc/tcmalloc can now start
>> taking advantage of 1G pages. My knowledge is not that great when it comes
>> to memory allocators, but I believe they track how long certain areas
>> have been cold and can trigger reclaim as an example. Then split will be useful.
>> Having memory allocators use hugetlb is probably going to be a no?
>>
>>
>>> BTW, without split support, you can apply HVO to 1GB folio to save memory.
>>> That is a disadvantage of PUD THP. Have you taken that into consideration?
>>> Basically, switching from hugetlb to PUD THP, you will lose memory due
>>> to vmemmap usage.
>>>
>>
>> Yeah so HVO saves 16M per 1G, and the page deposit mechanism adds ~2M per 1G.
>> We have HVO enabled in the Meta fleet. I think we should not only think of PUD THP
>> as a replacement for hugetlb, but also as enabling further use cases where hugetlb
>> would not be feasible.
>>
>> After the basic infrastructure for 1G is there, we can work on optimizing it; I think
>> there would be a lot of interesting work we can do. HVO for 1G THP would be one
>> of them?
>>
>>>>
>>>> Performance Results
>>>> ===================
>>>>
>>>> Benchmark results of these patches on Intel Xeon Platinum 8321HC:
>>>>
>>>> Test: True Random Memory Access [1] test of 4GB memory region with pointer
>>>> chasing workload (4M random pointer dereferences through memory):
>>>>
>>>> | Metric | PUD THP (1GB) | PMD THP (2MB) | Change |
>>>> |-------------------|---------------|---------------|--------------|
>>>> | Memory access | 88 ms | 134 ms | 34% faster |
>>>> | Page fault time | 898 ms | 331 ms | 2.7x slower |
>>>>
>>>> Page faulting 1G pages is 2.7x slower (Allocating 1G pages is hard :)).
>>>> For long-running workloads this will be a one-off cost, and the 34%
>>>> improvement in access latency provides significant benefit.
>>>>
>>>> ARM with 64K PAGE_SZIE supports 512M PMD THPs. In meta, we have a CPU
>>>> bound workload running on a large number of ARM servers (256G). I enabled
>>>> the 512M THP settings to always for a 100 servers in production (didn't
>>>> really have high expectations :)). The average memory used for the workload
>>>> increased from 217G to 233G. The amount of memory backed by 512M pages was
>>>> 68G! The dTLB misses went down by 26% and the PID multiplier increased input
>>>> by 5.9% (This is a very significant improvment in workload performance).
>>>> A significant number of these THPs were faulted in at application start when
>>>> were present across different VMAs. Ofcourse getting these 512M pages is
>>>> easier on ARM due to bigger PAGE_SIZE and pageblock order.
>>>>
>>>> I am hoping that these patches for 1G THP can be used to provide similar
>>>> benefits for x86. I expect workloads to fault them in at start time when there
>>>> is plenty of free memory available.
>>>>
>>>>
>>>> Previous attempt by Zi Yan
>>>> ==========================
>>>>
>>>> Zi Yan attempted 1G THPs [2] in kernel version 5.11. There have been
>>>> significant changes in kernel since then, including folio conversion, mTHP
>>>> framework, ptdesc, rmap changes, etc. I found it easier to use the current PMD
>>>> code as reference for making 1G PUD THP work. I am hoping Zi can provide
>>>> guidance on these patches!
>>>
>>> I am more than happy to help you. :)
>>>
>>
>> Thanks!!!
>>
>>>>
>>>> Major Design Decisions
>>>> ======================
>>>>
>>>> 1. No shared 1G zero page: The memory cost would be quite significant!
>>>>
>>>> 2. Page Table Pre-deposit Strategy
>>>> PMD THP deposits a single PTE page table. PUD THP deposits 512 PTE
>>>> page tables (one for each potential PMD entry after split).
>>>> We allocate a PMD page table and use its pmd_huge_pte list to store
>>>> the deposited PTE tables. This ensures split operations don't fail due
>>>> to page table allocation failures (at the cost of 2M per PUD THP)
>>>>
>>>> 3. Split to Base Pages
>>>> When a PUD THP must be split (COW, partial unmap, mprotect), we split
>>>> directly to base pages (262,144 PTEs). The ideal thing would be to split
>>>> to 2M pages and then to 4K pages if needed. However, this would require
>>>> significant rmap and mapcount tracking changes.
>>>>
>>>> 4. COW and fork handling via split
>>>> Copy-on-write and fork for PUD THP triggers a split to base pages, then
>>>> uses existing PTE-level COW infrastructure. Getting another 1G region is
>>>> hard and could fail. If only a 4K is written, copying 1G is a waste.
>>>> Probably this should only be done on CoW and not fork?
>>>>
>>>> 5. Migration via split
>>>> Split PUD to PTEs and migrate individual pages. It is going to be difficult
>>>> to find a 1G continguous memory to migrate to. Maybe its better to not
>>>> allow migration of PUDs at all? I am more tempted to not allow migration,
>>>> but have kept splitting in this RFC.
>>>
>>> Without migration, PUD THP loses its flexibility and transparency. But with
>>> its 1GB size, I also wonder what the purpose of PUD THP migration can be.
>>> It does not create memory fragmentation, since it is the largest folio size
>>> we have and contiguous. NUMA balancing 1GB THP seems too much work.
>>
>> Yeah this is exactly what I was thinking as well. It is going to be expensive
>> and difficult to migrate 1G pages, and I am not sure if what we get out of it
>> is worth it? I kept the splitting code in this RFC as I wanted to show that
>> it's possible to split and migrate; the code to simply reject migration is a lot easier.
>>
>>>
>>> BTW, I posted many questions, but that does not mean I object the patchset.
>>> I just want to understand your use case better, reduce unnecessary
>>> code changes, and hopefully get it upstreamed this time. :)
>>>
>>> Thank you for the work.
>>>
>>
>> Ah no this is awesome! Thanks for the questions! It's basically the discussion I
>> wanted to start with the RFC.
>>
>>
>> [1] https://gist.github.com/uarif1/35dcd63f9d76048b07eb5c16ace85991
>>
>>
>
> It looks like the scenario you're going for is an application that
> allocates a sizeable chunk of memory upfront, and would like it to be
> 1G pages as much as possible, right?
>
Hello!
Yes. But also it doesn't need to be a single chunk (VMA).
> You can do that with 1G THPs, the advantage being that any failures to
> get 1G pages are not explicit, so you're not left with having to grow
> the number of hugetlb pages yourself, and see how many you can use.
>
> 1G THPs seem useful for that. I don't recall all of the discussion
> here, but I assume that hooking 1G THP support into khugepaged is
> quite something else - the potential churn to get a 1G page could
> well cause more system interference than you'd like.
>
Yes completely agree.
> The CMA scenario Rik was talking about is similar: you set
> hugetlb_cma=NG, and then, when you need 1G pages, you grow the hugetlb
> pool and use them. Disadvantage: you have to do it explicitly.
>
> However, hugetlb_cma does give you a much larger chance of getting
> those 1G pages. The example you give, 20 1G pages on a 1T system where
> there is 292G free, isn't much of a problem in my experience. You
> should have no problem getting that amount of 1G pages. Things get
> more difficult when most of your memory is taken - hugetlb_cma really
> helps there. E.g. we have systems that have 90% hugetlb_cma, and there
> is a pretty good success rate converting back and forth between
> hugetlb and normal page allocator pages with hugetlb_cma, while
> operating close to that 90% hugetlb coverage. Without CMA, the success
> rate drops quite a bit at that level.
Yes agreed.
>
> CMA balancing is a related issue, for hugetlb. It fixes a problem that
> has been known for years: the more memory you set aside for movable
> only allocations (e.g. hugetlb_cma), the less breathing room you have
> for unmovable allocations. So you risk the 'false OOM' scenario, where
> the kernel can't make an unmovable allocation, even though there is
> enough memory available, even outside of CMA. It's just that those
> MOVABLE pageblocks were used for movable allocations. So ideally, you
> would migrate those movable allocations to CMA under those
> circumstances. Which is what CMA balancing does. It's worked out very
> well for us in the scenario I list above (most memory being
> hugetlb_cma).
>
> Anyway, I'm rambling on a bit. Let's see if I got this right:
>
> 1G THP
> - advantages: transparent interface
> - disadvantage: no HVO, lower success rate under higher memory
> pressure than hugetlb_cma
>
Yes! But also, I think the problem of having no HVO for THPs can be worked
on once the support for it is there. The lower success rate is a much more
difficult problem to solve.
> hugetlb_cma
> - disadvantage: explicit interface, for higher values needs 'false
> OOM' avoidance
> - advantage: better success rate under pressure.
>
> I think 1G THPs are a good solution for "nice to have" scenarios, but
> there will still be use cases where a higher success rate is preferred
> and HugeTLB is preferred.
>
Agreed. I don't think 1G THPs can completely replace hugetlb. Maybe after
several years of work to optimize them there might be a path to that,
but not at the very start.
> Lastly, there's also the ZONE_MOVABLE story. I think 1G THPs and
> ZONE_MOVABLE could work well together, improving the success rate. But
> then the issue of pinning raises its head again, and whether that
> should be allowed or configurable per zone...
>
Ack
> - Frank
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 02/12] mm/thp: add mTHP stats infrastructure for PUD THP
2026-02-02 11:56 ` Lorenzo Stoakes
@ 2026-02-05 5:53 ` Usama Arif
0 siblings, 0 replies; 49+ messages in thread
From: Usama Arif @ 2026-02-05 5:53 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: ziy, Andrew Morton, David Hildenbrand, linux-mm, hannes, riel,
shakeel.butt, kas, baohua, dev.jain, baolin.wang, npache,
Liam.Howlett, ryan.roberts, vbabka, lance.yang, linux-kernel,
kernel-team
On 02/02/2026 03:56, Lorenzo Stoakes wrote:
> On Sun, Feb 01, 2026 at 04:50:19PM -0800, Usama Arif wrote:
>> Extend the mTHP (multi-size THP) statistics infrastructure to support
>> PUD-sized transparent huge pages.
>>
>> The mTHP framework tracks statistics for each supported THP size through
>> per-order counters exposed via sysfs. To add PUD THP support, PUD_ORDER
>> must be included in the set of tracked orders.
>>
>> With this change, PUD THP events (allocations, faults, splits, swaps)
>> are tracked and exposed through the existing sysfs interface at
>> /sys/kernel/mm/transparent_hugepage/hugepages-1048576kB/stats/. This
>> provides visibility into PUD THP behavior for debugging and performance
>> analysis.
>>
>> Signed-off-by: Usama Arif <usamaarif642@gmail.com>
>
> Yeah we really need to be basing this on mm-unstable once Nico's series is
> landed.
>
> I think it's quite important as well for you to check that khugepaged mTHP works
> with all of this.
>
>> ---
>> include/linux/huge_mm.h | 42 +++++++++++++++++++++++++++++++++++++----
>> mm/huge_memory.c | 3 ++-
>> 2 files changed, 40 insertions(+), 5 deletions(-)
>>
>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>> index e672e45bb9cc7..5509ba8555b6e 100644
>> --- a/include/linux/huge_mm.h
>> +++ b/include/linux/huge_mm.h
>> @@ -76,7 +76,13 @@ extern struct kobj_attribute thpsize_shmem_enabled_attr;
>> * and including PMD_ORDER, except order-0 (which is not "huge") and order-1
>> * (which is a limitation of the THP implementation).
>> */
>> -#define THP_ORDERS_ALL_ANON ((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1)))
>> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>> +#define THP_ORDERS_ALL_ANON_PUD BIT(PUD_ORDER)
>> +#else
>> +#define THP_ORDERS_ALL_ANON_PUD 0
>> +#endif
>> +#define THP_ORDERS_ALL_ANON (((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1))) | \
>> + THP_ORDERS_ALL_ANON_PUD)
>
> Err what is this change doing in a 'stats' change? This quietly updates
> __thp_vma_allowable_orders() to also support PUD order for anon memory... Can we
> put this in the right place?
>
Yeah, I didn't put it in the right place. Thanks!
>>
>> /*
>> * Mask of all large folio orders supported for file THP. Folios in a DAX
>> @@ -146,18 +152,46 @@ enum mthp_stat_item {
>> };
>>
>> #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && defined(CONFIG_SYSFS)
>> +
>> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>
> By the way I'm not a fan of us treating an 'arch has' as a 'will use'.
>
>> +#define MTHP_STAT_COUNT (PMD_ORDER + 2)
>
> Yeah I hate this. This is just 'one more thing to remember'.
>
>> +#define MTHP_STAT_PUD_INDEX (PMD_ORDER + 1) /* PUD uses last index */
>> +#else
>> +#define MTHP_STAT_COUNT (PMD_ORDER + 1)
>> +#endif
>> +
>> struct mthp_stat {
>> - unsigned long stats[ilog2(MAX_PTRS_PER_PTE) + 1][__MTHP_STAT_COUNT];
>> + unsigned long stats[MTHP_STAT_COUNT][__MTHP_STAT_COUNT];
>> };
>>
>> DECLARE_PER_CPU(struct mthp_stat, mthp_stats);
>>
>> +static inline int mthp_stat_order_to_index(int order)
>> +{
>> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>> + if (order == PUD_ORDER)
>> + return MTHP_STAT_PUD_INDEX;
>
> This seems like a hack again.
>
>> +#endif
>> + return order;
>> +}
>> +
>> static inline void mod_mthp_stat(int order, enum mthp_stat_item item, int delta)
>> {
>> - if (order <= 0 || order > PMD_ORDER)
>> + int index;
>> +
>> + if (order <= 0)
>> + return;
>> +
>> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>> + if (order != PUD_ORDER && order > PMD_ORDER)
>> return;
>> +#else
>> + if (order > PMD_ORDER)
>> + return;
>> +#endif
>
> Or we could actually define a max order... except now the hack contorts this
> code.
>
> Is it really that bad to just take up memory for the order between PMD_ORDER and
> PUD_ORDER? ~72 bytes * cores and we avoid having to do this silly dance.
So up until a few hours before I sent the series, what you are saying is exactly what
I was doing, i.e. allocating all orders up to PUD order. It's not a lot of memory wastage,
but it is there, and I saw this patch as an easy solution to it. For a server
with 512 cores, this is 36KB. That's not a lot, because a server with 512 cores will
also have several hundred GBs or TBs of memory.
I know it's not elegant, but I do like the approach in this patch more. If there is a
very strong preference to switch to having all orders up to PUD because it would make the
code more elegant, then I can switch to it.
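To make the comparison concrete, the two per-CPU layouts being discussed are roughly the
following (a sketch only; the actual PMD_ORDER/PUD_ORDER values depend on the architecture):

	/* This patch: compact array, with PUD order remapped to the last slot. */
	unsigned long stats[PMD_ORDER + 2][__MTHP_STAT_COUNT];

	/* Alternative suggested above: index directly by order, keeping the
	 * (unused) rows for the orders between PMD_ORDER and PUD_ORDER. */
	unsigned long stats[PUD_ORDER + 1][__MTHP_STAT_COUNT];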
>
>>
>> - this_cpu_add(mthp_stats.stats[order][item], delta);
>> + index = mthp_stat_order_to_index(order);
>> + this_cpu_add(mthp_stats.stats[index][item], delta);
>> }
>>
>> static inline void count_mthp_stat(int order, enum mthp_stat_item item)
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 3128b3beedb0a..d033624d7e1f2 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -598,11 +598,12 @@ static unsigned long sum_mthp_stat(int order, enum mthp_stat_item item)
>> {
>> unsigned long sum = 0;
>> int cpu;
>> + int index = mthp_stat_order_to_index(order);
>>
>> for_each_possible_cpu(cpu) {
>> struct mthp_stat *this = &per_cpu(mthp_stats, cpu);
>>
>> - sum += this->stats[order][item];
>> + sum += this->stats[index][item];
>> }
>>
>> return sum;
>> --
>> 2.47.3
>>
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 00/12] mm: PUD (1GB) THP implementation
2026-02-04 11:08 ` Lorenzo Stoakes
2026-02-04 11:50 ` Dev Jain
@ 2026-02-05 6:08 ` Usama Arif
1 sibling, 0 replies; 49+ messages in thread
From: Usama Arif @ 2026-02-05 6:08 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: ziy, Andrew Morton, David Hildenbrand, linux-mm, hannes, riel,
shakeel.butt, kas, baohua, dev.jain, baolin.wang, npache,
Liam.Howlett, ryan.roberts, vbabka, lance.yang, linux-kernel,
kernel-team
On 04/02/2026 03:08, Lorenzo Stoakes wrote:
> On Tue, Feb 03, 2026 at 05:00:10PM -0800, Usama Arif wrote:
>>
>>
>> On 02/02/2026 03:20, Lorenzo Stoakes wrote:
>>> OK so this is somewhat unexpected :)
>>>
>>> It would have been nice to discuss it in the THP cabal or at a conference
>>> etc. so we could discuss approaches ahead of time. Communication is important,
>>> especially with major changes like this.
>>
>> Makes sense!
>>
>>>
>>> And PUD THP is especially problematic in that it requires pages that the page
>>> allocator can't give us, presumably you're doing something with CMA and... it's
>>> a whole kettle of fish.
>>
>> So we don't need CMA. It helps of course, but we don't *need* it.
>> Its summarized in the first reply I gave to Zi in [1]:
>>
>>>
>>> It's also complicated by the fact we _already_ support it in the DAX, VFIO cases
>>> but it's kinda a weird sorta special case that we need to keep supporting.
>>>
>>> There's questions about how this will interact with khugepaged, MADV_COLLAPSE,
>>> mTHP (and really I want to see Nico's series land before we really consider
>>> this).
>>
>>
>> So I have numbers and experiments for page faults which are in the cover letter,
>> but not for khugepaged. I would be very surprised (although pleasently :)) if
>> khugepaged by some magic finds 262144 pages that meets all the khugepaged requirements
>> to collapse the page. In the basic infrastructure support which this series is adding,
>> I want to keep khugepaged collapse disabled for 1G pages. This is also the initial
>> approach that was taken in other mTHP sizes. We should go slow with 1G THPs.
>
> Yes we definitely want to limit to page faults for now.
>
> But keep in mind for that to be viable you'd surely need to update who gets
> appropriate alignment in __get_unmapped_area()... not read through series far
> enough to see so not sure if you update that though!
>
> I guess that'd be the sanest place to start, if an allocation _size_ is aligned
> 1 GB, then align the unmapped area _address_ to 1 GB for maximum chance of 1 GB
> fault-in.
Yeah, this was definitely missing. I was manually aligning the fault address in the selftests
and benchmarks with the trick used in other selftests:
(((unsigned long)addr + PUD_SIZE - 1) & ~(PUD_SIZE - 1))
Thanks for pointing this out! This is basically what I wanted with the RFC, to find out
what I am missing and not testing. Will look into VFIO and DAX as you mentioned as well.
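For reference, a rough sketch of how the selftests/benchmarks currently fault in a
PUD-aligned region (illustrative only; map_pud_aligned() is a made-up helper and
PUD_SIZE is assumed to be 1G, as on x86-64 with 4K pages):

#include <stdint.h>
#include <string.h>
#include <sys/mman.h>

#define PUD_SIZE (1UL << 30)

static void *map_pud_aligned(size_t len)
{
	/* Over-allocate by PUD_SIZE so a PUD-aligned start always fits,
	 * then round the start up with the same trick as above. */
	char *raw = mmap(NULL, len + PUD_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (raw == MAP_FAILED)
		return NULL;
	return (void *)(((uintptr_t)raw + PUD_SIZE - 1) & ~(PUD_SIZE - 1));
}

int main(void)
{
	char *buf = map_pud_aligned(PUD_SIZE);

	if (buf)
		memset(buf, 0, PUD_SIZE);	/* first touch gives the 1G fault path a chance */
	return 0;
}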
>
> Oh by the way I made some rough THP notes at
> https://publish.obsidian.md/mm/Transparent+Huge+Pages+(THP) which are helpful
> for reminding me about what does what where, useful for a top-down view of how
> things are now.
>
Thanks!
>>
>>>
>>> So overall, I want to be very cautious and SLOW here. So let's please not drop
>>> the RFC tag until David and I are ok with that?
>>>
>>> Also the THP code base is in _dire_ need of rework, and I don't really want to
>>> add major new features without us paying down some technical debt, to be honest.
>>>
>>> So let's proceed with caution, and treat this as a very early bit of
>>> experimental code.
>>>
>>> Thanks, Lorenzo
>>
>> Ack, yeah so this is mainly an RFC to discuss what the major design choices will be.
>> I got a kernel with selftests for allocation, memory integrity, fork, partial munmap,
>> mprotect, reclaim and migration passing and am running them with DEBUG_VM to make sure
>> we don't get VM bugs/warnings and the numbers are good, so just wanted to share it
>> upstream and get your opinions! Basically try and trigger a discussion similar to what
>> Zi asked in [2]! And also if someone could point out if there is something fundamental
>> we are missing in this series.
>
> Well that's fair enough :)
>
> But do come to a THP cabal so we can chat, face-to-face (ok, digital face to
> digital face ;). It's usually a force-multiplier I find, esp. if multiple people
> have input which I think is the case here. We're friendly :)
Yes, thanks for this! It would be really helpful to discuss this in a call. I didn't
know there was a meeting but have requested details (date/time) in another thread.
>
> In any case, conversations are already kicking off so that's definitely positive!
>
> I think we will definitely get there with this at _some point_ but I would urge
> patience and also I really want to underline my desire for us in THP to start
> paying down some of this technical debt.
>
> I know people are already making efforts (Vernon, Luiz), and sorry that I've not
> been great at review recently (should be gradually increasing over time), but I
> feel that for large features to be added like this now we really do require some
> refactoring work before we take it.
>
Yes, agreed! I will definitely need your and others' guidance on what needs to be
properly refactored so that this fits well with the current code.
> We definitely need to rebase this once Nico's series lands (should do next
> cycle) and think about how it plays with this, I'm not sure if arm64 supports
> mTHP between PMD and PUD size (Dev? Do you know?) so maybe that one is moot, but
> in general want to make sure it plays nice.
>
Will do!
>>
>> Thanks for the reviews! Really do appreciate it!
>
> No worries! :)
>
>>
>> [1] https://lore.kernel.org/all/20f92576-e932-435f-bb7b-de49eb84b012@gmail.com/#t
>> [2] https://lore.kernel.org/all/3561FD10-664D-42AA-8351-DE7D8D49D42E@nvidia.com/
>
> Cheers, Lorenzo
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 01/12] mm: add PUD THP ptdesc and rmap support
2026-02-04 12:55 ` Lorenzo Stoakes
@ 2026-02-05 6:40 ` Usama Arif
0 siblings, 0 replies; 49+ messages in thread
From: Usama Arif @ 2026-02-05 6:40 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: ziy, Andrew Morton, David Hildenbrand, linux-mm, hannes, riel,
shakeel.butt, kas, baohua, dev.jain, baolin.wang, npache,
Liam.Howlett, ryan.roberts, vbabka, lance.yang, linux-kernel,
kernel-team
On 04/02/2026 04:55, Lorenzo Stoakes wrote:
> On Tue, Feb 03, 2026 at 11:38:02PM -0800, Usama Arif wrote:
>>
>>
>> On 02/02/2026 04:15, Lorenzo Stoakes wrote:
>>> I think I'm going to have to do several passes on this, so this is just a
>>> first one :)
>>>
>>
>> Thanks! Really appreciate the reviews!
>
> No worries!
>
>>
>> One thing over here is the higher level design decision when it comes to migration
>> of 1G pages. As Zi said in [1]:
>> "I also wonder what the purpose of PUD THP migration can be.
>> It does not create memory fragmentation, since it is the largest folio size
>> we have and contiguous. NUMA balancing 1GB THP seems too much work."
>>
>>> On Sun, Feb 01, 2026 at 04:50:18PM -0800, Usama Arif wrote:
>>>> For page table management, PUD THPs need to pre-deposit page tables
>>>> that will be used when the huge page is later split. When a PUD THP
>>>> is allocated, we cannot know in advance when or why it might need to
>>>> be split (COW, partial unmap, reclaim), but we need page tables ready
>>>> for that eventuality. Similar to how PMD THPs deposit a single PTE
>>>> table, PUD THPs deposit a PMD table which itself contains deposited
>>>> PTE tables - a two-level deposit. This commit adds the deposit/withdraw
>>>> infrastructure and a new pud_huge_pmd field in ptdesc to store the
>>>> deposited PMD.
>>>
>>> This feels like you're hacking this support in, honestly. The list_head
>>> abuse only adds to that feeling.
>>>
>>
>> Yeah, so I hope turning it into something like [2] is the way forward.
>
> Right, that's one option, though David suggested avoiding this altogether by
> only pre-allocating PTEs?
Maybe I don't understand it properly, but that won't work, right?
You need 1 PMD table and 512 PTE tables to split a PUD. You can't just have PTE tables, right?
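To spell out the arithmetic behind that (using the figures already quoted in this
thread, i.e. 4K base pages):

1 PUD entry covers 512 PMD entries = 512 * 512 = 262,144 PTE entries
split cost = 1 PMD table (4 KB) + 512 PTE tables (512 * 4 KB = 2 MB)
           ~= 2 MB of pre-deposited page tables per PUD THP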
>
>>
>>> And are we now not required to store rather a lot of memory to keep all of
>>> this coherent?
>>
>> PMD THP allocates one 4K page (pte_alloc_one) at fault time so that a split
>> doesn't fail.
>>
>> For PUD we allocate 2M worth of PTE page tables and one 4K PMD table at fault
>> time so that a split doesn't fail due to there not being enough memory.
>> It's not great, but it's not terrible either.
>> The alternative is to allocate this at split time and so we are not
>> pre-reserving them. Now there is a chance that allocation and therefore split
>> fails, so the tradeoff is some memory vs reliability. This patch favours
>> reliability.
>
> That's a significant amount of unmovable, unreclaimable memory though. Going
> from 4K to 2M is a pretty huge uptick.
>
Yeah, I don't like it either, but it's the only way to make sure a split doesn't fail.
I think there will always be a tradeoff between reliability and memory.
>>
>> Let's say a user gets 100x1G THPs. They would end up using ~200M of deposited
>> page tables for them. I think that is okay-ish. If the user has 100G, 200M might
>> not be an issue for them :)
>
> But there's more than one user on boxes big enough for this, so this makes me
> think we want this to be somehow opt-in right?
>
Do you mean madvise?
Also an idea (although probably a very bad one :)) is to have MADV_HUGEPAGE_1G.
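For reference, the existing per-range opt-in would look like the below (a minimal
sketch; MADV_HUGEPAGE_1G is purely hypothetical and does not exist today):

#include <stddef.h>
#include <sys/mman.h>

static int opt_in_hugepages(void *addr, size_t len)
{
	/* Existing opt-in: mark [addr, addr + len) as a THP candidate.
	 * A hypothetical MADV_HUGEPAGE_1G flag would be used the same way. */
	return madvise(addr, len, MADV_HUGEPAGE);
}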
> And that means we're incurring an unmovable memory penalty, the kind which we're
> trying to avoid in general elsewhere in the kernel.
>
ack.
>>
>>>
>>>>
>>>> The deposited PMD tables are stored as a singly-linked stack using only
>>>> page->lru.next as the link pointer. A doubly-linked list using the
>>>> standard list_head mechanism would cause memory corruption: list_del()
>>>> poisons both lru.next (offset 8) and lru.prev (offset 16), but lru.prev
>>>> overlaps with ptdesc->pmd_huge_pte at offset 16. Since deposited PMD
>>>> tables have their own deposited PTE tables stored in pmd_huge_pte,
>>>> poisoning lru.prev would corrupt the PTE table list and cause crashes
>>>> when withdrawing PTE tables during split. PMD THPs don't have this
>>>> problem because their deposited PTE tables don't have sub-deposits.
>>>> Using only lru.next avoids the overlap entirely.
>>>
>>> Yeah this is horrendous and a hack, I don't consider this at all
>>> upstreamable.
>>>
>>> You need to completely rework this.
>>
>> Hopefully [2] is the path forward!
>
> Ack
>
>>>
>>>>
>>>> For reverse mapping, PUD THPs need the same rmap support that PMD THPs
>>>> have. The page_vma_mapped_walk() function is extended to recognize and
>>>> handle PUD-mapped folios during rmap traversal. A new TTU_SPLIT_HUGE_PUD
>>>> flag tells the unmap path to split PUD THPs before proceeding, since
>>>> there is no PUD-level migration entry format - the split converts the
>>>> single PUD mapping into individual PTE mappings that can be migrated
>>>> or swapped normally.
>>>
>>> Individual PTE... mappings? You need to be a lot clearer here, page tables
>>> are naturally confusing with entries vs. tables.
>>>
>>> Let's be VERY specific here. Do you mean you have 1 PMD table and 512 PTE
>>> tables reserved, spanning 1 PUD entry and 262,144 PTE entries?
>>>
>>
>> Yes that is correct, Thanks! I will change the commit message in the next revision
>> to what you have written: 1 PMD table and 512 PTE tables reserved, spanning
>> 1 PUD entry and 262,144 PTE entries.
>
> Yeah :) my concerns remain :)
>
>>
>>>>
>>>> Signed-off-by: Usama Arif <usamaarif642@gmail.com>
>>>
>>> How does this change interact with existing DAX/VFIO code, which now it
>>> seems will be subject to the mechanisms you introduce here?
>>
>> I think what you mean here is the change in try_to_migrate_one?
>>
>>
>> So one
>
> Unfinished sentence? :P
>
> No I mean currently we support 1G THP for DAX/VFIO right? So how does this
> interplay with how that currently works? Does that change how DAX/VFIO works?
> Will that impact existing users?
>
> Or are we extending the existing mechanism?
>
A lot of the stuff like copy_huge_pud, zap_huge_pud, __split_huge_pud_locked,
create_huge_pud, wp_huge_pud, ... is protected by a vma_is_anonymous() check. I will
try to do a better audit of DAX and VFIO.
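A minimal sketch of the guard pattern I mean (illustrative only, not a quote from the
series; both helper names below are made up):

	/* The new anonymous PUD THP paths are gated on vma_is_anonymous(),
	 * so the existing DAX/VFIO huge-PUD handling should stay untouched. */
	if (!vma_is_anonymous(vma))
		return handle_existing_pud_fault(vmf);	/* hypothetical: DAX/VFIO path */
	return create_anon_huge_pud(vmf);		/* hypothetical: new anon path */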
>>
>>>
>>> Right now DAX/VFIO is only obtainable via a specially THP-aligned
>>> get_unmapped_area() + then can only be obtained at fault time.
>>>> Is that the intent here also?
>>>
>>
>> Ah thanks for pointing this out. This is something the series is missing.
>>
>> What I did in the selftest and benchmark was fault on an address that was already aligned.
>> i.e. basically call the below function before faulting in.
>>
>> static inline void *pud_align(void *addr)
>> {
>> return (void *)(((unsigned long)addr + PUD_SIZE - 1) & ~(PUD_SIZE - 1));
>> }
>
> Right yeah :)
>
>>
>>
>> What I think you are suggesting this series is missing is the below diff? (its untested).
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 87b2c21df4a49..461158a0840db 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -1236,6 +1236,12 @@ unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long add
>> unsigned long ret;
>> loff_t off = (loff_t)pgoff << PAGE_SHIFT;
>>
>> + if (IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) && len >= PUD_SIZE) {
>> + ret = __thp_get_unmapped_area(filp, addr, len, off, flags, PUD_SIZE, vm_flags);
>> + if (ret)
>> + return ret;
>> + }
>
> No not that, that's going to cause issues, see commit d4148aeab4 for details as
> to why this can go wrong.
>
> In __get_unmapped_area() where the current 'if PMD size aligned then align area'
> logic, like that.
Ack, thanks for pointing to this. I will also add another selftest to check that we actually
get this from __get_unmapped_area() when we don't do the pud_align trick I currently have in
the selftests.
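Something like the below is what I have in mind for that selftest (a rough sketch,
assuming PUD_SIZE is 1G; not part of the series yet):

#include <assert.h>
#include <stdio.h>
#include <sys/mman.h>

#define PUD_SIZE (1UL << 30)

int main(void)
{
	/* Request a PUD-sized anonymous mapping with no manual alignment and
	 * check whether the kernel handed back a PUD-aligned address. */
	void *addr = mmap(NULL, PUD_SIZE, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	assert(addr != MAP_FAILED);
	printf("addr %p %s PUD-aligned\n", addr,
	       ((unsigned long)addr & (PUD_SIZE - 1)) ? "is NOT" : "is");
	munmap(addr, PUD_SIZE);
	return 0;
}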
>
>> +
>>
>>
>>> What is your intent - that khugepaged do this, or on alloc? How does it
>>> interact with MADV_COLLAPSE?
>>>
>>
>> Ah, basically what I mentioned in [3]: we want to go slow. Only enable PUD THP
>> page faults at the start. If there is data supporting that khugepaged will work
>> then we do it, but until then we keep it disabled.
>
> Yes I think khugepaged is probably never going to be all that a good idea with
> this.
>
>>
>>> I noted on the 2nd patch, but you're changing THP_ORDERS_ALL_ANON which
>>> alters __thp_vma_allowable_orders() behaviour, that change belongs here...
>>>
>>>
>>
>> Thanks for this! I only tried to split this code into logical commits
>> after the whole thing was working. Some things are tightly coupled
>> and I would need to move them to the right commit.
>
> Yes there's a bunch of things that need tweaking here, to reiterate let's try to
> pay down technical debt here and avoid copy/pasting :>)
Yes, will spend a lot more time thinking about how to avoid copy/paste.
>
>>
>>>> ---
>>>> include/linux/huge_mm.h | 5 +++
>>>> include/linux/mm.h | 19 ++++++++
>>>> include/linux/mm_types.h | 5 ++-
>>>> include/linux/pgtable.h | 8 ++++
>>>> include/linux/rmap.h | 7 ++-
>>>> mm/huge_memory.c | 8 ++++
>>>> mm/internal.h | 3 ++
>>>> mm/page_vma_mapped.c | 35 +++++++++++++++
>>>> mm/pgtable-generic.c | 83 ++++++++++++++++++++++++++++++++++
>>>> mm/rmap.c | 96 +++++++++++++++++++++++++++++++++++++---
>>>> 10 files changed, 260 insertions(+), 9 deletions(-)
>>>>
>>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>>>> index a4d9f964dfdea..e672e45bb9cc7 100644
>>>> --- a/include/linux/huge_mm.h
>>>> +++ b/include/linux/huge_mm.h
>>>> @@ -463,10 +463,15 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
>>>> unsigned long address);
>>>>
>>>> #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>>>> +void split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
>>>> + unsigned long address);
>>>> int change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
>>>> pud_t *pudp, unsigned long addr, pgprot_t newprot,
>>>> unsigned long cp_flags);
>>>> #else
>>>> +static inline void
>>>> +split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
>>>> + unsigned long address) {}
>>>> static inline int
>>>> change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
>>>> pud_t *pudp, unsigned long addr, pgprot_t newprot,
>>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>>> index ab2e7e30aef96..a15e18df0f771 100644
>>>> --- a/include/linux/mm.h
>>>> +++ b/include/linux/mm.h
>>>> @@ -3455,6 +3455,22 @@ static inline bool pagetable_pmd_ctor(struct mm_struct *mm,
>>>> * considered ready to switch to split PUD locks yet; there may be places
>>>> * which need to be converted from page_table_lock.
>>>> */
>>>> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>>>> +static inline struct page *pud_pgtable_page(pud_t *pud)
>>>> +{
>>>> + unsigned long mask = ~(PTRS_PER_PUD * sizeof(pud_t) - 1);
>>>> +
>>>> + return virt_to_page((void *)((unsigned long)pud & mask));
>>>> +}
>>>> +
>>>> +static inline struct ptdesc *pud_ptdesc(pud_t *pud)
>>>> +{
>>>> + return page_ptdesc(pud_pgtable_page(pud));
>>>> +}
>>>> +
>>>> +#define pud_huge_pmd(pud) (pud_ptdesc(pud)->pud_huge_pmd)
>>>> +#endif
>>>> +
>>>> static inline spinlock_t *pud_lockptr(struct mm_struct *mm, pud_t *pud)
>>>> {
>>>> return &mm->page_table_lock;
>>>> @@ -3471,6 +3487,9 @@ static inline spinlock_t *pud_lock(struct mm_struct *mm, pud_t *pud)
>>>> static inline void pagetable_pud_ctor(struct ptdesc *ptdesc)
>>>> {
>>>> __pagetable_ctor(ptdesc);
>>>> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>>>> + ptdesc->pud_huge_pmd = NULL;
>>>> +#endif
>>>> }
>>>>
>>>> static inline void pagetable_p4d_ctor(struct ptdesc *ptdesc)
>>>> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
>>>> index 78950eb8926dc..26a38490ae2e1 100644
>>>> --- a/include/linux/mm_types.h
>>>> +++ b/include/linux/mm_types.h
>>>> @@ -577,7 +577,10 @@ struct ptdesc {
>>>> struct list_head pt_list;
>>>> struct {
>>>> unsigned long _pt_pad_1;
>>>> - pgtable_t pmd_huge_pte;
>>>> + union {
>>>> + pgtable_t pmd_huge_pte; /* For PMD tables: deposited PTE */
>>>> + pgtable_t pud_huge_pmd; /* For PUD tables: deposited PMD list */
>>>> + };
>>>> };
>>>> };
>>>> unsigned long __page_mapping;
>>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>>> index 2f0dd3a4ace1a..3ce733c1d71a2 100644
>>>> --- a/include/linux/pgtable.h
>>>> +++ b/include/linux/pgtable.h
>>>> @@ -1168,6 +1168,14 @@ extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
>>>> #define arch_needs_pgtable_deposit() (false)
>>>> #endif
>>>>
>>>> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>>>> +extern void pgtable_trans_huge_pud_deposit(struct mm_struct *mm, pud_t *pudp,
>>>> + pmd_t *pmd_table);
>>>> +extern pmd_t *pgtable_trans_huge_pud_withdraw(struct mm_struct *mm, pud_t *pudp);
>>>> +extern void pud_deposit_pte(pmd_t *pmd_table, pgtable_t pgtable);
>>>> +extern pgtable_t pud_withdraw_pte(pmd_t *pmd_table);
>>>
>>> These are useless extern's.
>>>
>>
>>
>> ack
>>
>> These are coming from the existing functions from the file:
>> extern void pgtable_trans_huge_deposit
>> extern pgtable_t pgtable_trans_huge_withdraw
>>
>> I think the externs can be removed from these as well? We can
>> fix those in a separate patch.
>
> Generally the approach is to remove externs when adding/changing new stuff as
> otherwise we get completely useless churn on that and annoying git history
> changes.
Ack
>>
>>
>>>> +#endif
>>>> +
>>>> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>>> /*
>>>> * This is an implementation of pmdp_establish() that is only suitable for an
>>>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
>>>> index daa92a58585d9..08cd0a0eb8763 100644
>>>> --- a/include/linux/rmap.h
>>>> +++ b/include/linux/rmap.h
>>>> @@ -101,6 +101,7 @@ enum ttu_flags {
>>>> * do a final flush if necessary */
>>>> TTU_RMAP_LOCKED = 0x80, /* do not grab rmap lock:
>>>> * caller holds it */
>>>> + TTU_SPLIT_HUGE_PUD = 0x100, /* split huge PUD if any */
>>>> };
>>>>
>>>> #ifdef CONFIG_MMU
>>>> @@ -473,6 +474,8 @@ void folio_add_anon_rmap_ptes(struct folio *, struct page *, int nr_pages,
>>>> folio_add_anon_rmap_ptes(folio, page, 1, vma, address, flags)
>>>> void folio_add_anon_rmap_pmd(struct folio *, struct page *,
>>>> struct vm_area_struct *, unsigned long address, rmap_t flags);
>>>> +void folio_add_anon_rmap_pud(struct folio *, struct page *,
>>>> + struct vm_area_struct *, unsigned long address, rmap_t flags);
>>>> void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
>>>> unsigned long address, rmap_t flags);
>>>> void folio_add_file_rmap_ptes(struct folio *, struct page *, int nr_pages,
>>>> @@ -933,6 +936,7 @@ struct page_vma_mapped_walk {
>>>> pgoff_t pgoff;
>>>> struct vm_area_struct *vma;
>>>> unsigned long address;
>>>> + pud_t *pud;
>>>> pmd_t *pmd;
>>>> pte_t *pte;
>>>> spinlock_t *ptl;
>>>> @@ -970,7 +974,7 @@ static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
>>>> static inline void
>>>> page_vma_mapped_walk_restart(struct page_vma_mapped_walk *pvmw)
>>>> {
>>>> - WARN_ON_ONCE(!pvmw->pmd && !pvmw->pte);
>>>> + WARN_ON_ONCE(!pvmw->pud && !pvmw->pmd && !pvmw->pte);
>>>>
>>>> if (likely(pvmw->ptl))
>>>> spin_unlock(pvmw->ptl);
>>>> @@ -978,6 +982,7 @@ page_vma_mapped_walk_restart(struct page_vma_mapped_walk *pvmw)
>>>> WARN_ON_ONCE(1);
>>>>
>>>> pvmw->ptl = NULL;
>>>> + pvmw->pud = NULL;
>>>> pvmw->pmd = NULL;
>>>> pvmw->pte = NULL;
>>>> }
>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>>> index 40cf59301c21a..3128b3beedb0a 100644
>>>> --- a/mm/huge_memory.c
>>>> +++ b/mm/huge_memory.c
>>>> @@ -2933,6 +2933,14 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
>>>> spin_unlock(ptl);
>>>> mmu_notifier_invalidate_range_end(&range);
>>>> }
>>>> +
>>>> +void split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
>>>> + unsigned long address)
>>>> +{
>>>> + VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PUD_SIZE));
>>>> + if (pud_trans_huge(*pud))
>>>> + __split_huge_pud_locked(vma, pud, address);
>>>> +}
>>>> #else
>>>> void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
>>>> unsigned long address)
>>>> diff --git a/mm/internal.h b/mm/internal.h
>>>> index 9ee336aa03656..21d5c00f638dc 100644
>>>> --- a/mm/internal.h
>>>> +++ b/mm/internal.h
>>>> @@ -545,6 +545,9 @@ int user_proactive_reclaim(char *buf,
>>>> * in mm/rmap.c:
>>>> */
>>>> pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address);
>>>> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>>>> +pud_t *mm_find_pud(struct mm_struct *mm, unsigned long address);
>>>> +#endif
>>>>
>>>> /*
>>>> * in mm/page_alloc.c
>>>> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
>>>> index b38a1d00c971b..d31eafba38041 100644
>>>> --- a/mm/page_vma_mapped.c
>>>> +++ b/mm/page_vma_mapped.c
>>>> @@ -146,6 +146,18 @@ static bool check_pmd(unsigned long pfn, struct page_vma_mapped_walk *pvmw)
>>>> return true;
>>>> }
>>>>
>>>> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>>>> +/* Returns true if the two ranges overlap. Careful to not overflow. */
>>>> +static bool check_pud(unsigned long pfn, struct page_vma_mapped_walk *pvmw)
>>>> +{
>>>> + if ((pfn + HPAGE_PUD_NR - 1) < pvmw->pfn)
>>>> + return false;
>>>> + if (pfn > pvmw->pfn + pvmw->nr_pages - 1)
>>>> + return false;
>>>> + return true;
>>>> +}
>>>> +#endif
>>>> +
>>>> static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
>>>> {
>>>> pvmw->address = (pvmw->address + size) & ~(size - 1);
>>>> @@ -188,6 +200,10 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>>>> pud_t *pud;
>>>> pmd_t pmde;
>>>>
>>>> + /* The only possible pud mapping has been handled on last iteration */
>>>> + if (pvmw->pud && !pvmw->pmd)
>>>> + return not_found(pvmw);
>>>> +
>>>> /* The only possible pmd mapping has been handled on last iteration */
>>>> if (pvmw->pmd && !pvmw->pte)
>>>> return not_found(pvmw);
>>>> @@ -234,6 +250,25 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>>>> continue;
>>>> }
>>>>
>>>> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>>>
>>> Said it elsewhere, but it's really weird to treat an arch having the
>>> ability to do something as a go ahead for doing it.
>>>
>>>> + /* Check for PUD-mapped THP */
>>>> + if (pud_trans_huge(*pud)) {
>>>> + pvmw->pud = pud;
>>>> + pvmw->ptl = pud_lock(mm, pud);
>>>> + if (likely(pud_trans_huge(*pud))) {
>>>> + if (pvmw->flags & PVMW_MIGRATION)
>>>> + return not_found(pvmw);
>>>> + if (!check_pud(pud_pfn(*pud), pvmw))
>>>> + return not_found(pvmw);
>>>> + return true;
>>>> + }
>>>> + /* PUD was split under us, retry at PMD level */
>>>> + spin_unlock(pvmw->ptl);
>>>> + pvmw->ptl = NULL;
>>>> + pvmw->pud = NULL;
>>>> + }
>>>> +#endif
>>>> +
>>>
>>> Yeah, as I said elsewhere, we got to be refactoring not copy/pasting with
>>> modifications :)
>>>
>>
>> Yeah, there is repeated code in multiple places, where all I did was redo
>> for PUD what was done for PMD. In a lot of places, it's actually difficult
>> to not repeat the code (unless we want function-like macros, which is much worse
>> IMO).
>
> Not if we actually refactor the existing code :)
>
> When I wanted to make functional changes to mremap I took a lot of time to
> refactor the code into something sane before even starting that.
>
> Because I _could_ have added the features there as-is, but it would have been
> hellish to do so as-is and added more confusion etc.
>
> So yeah, I think a similar mentality has to be had with this change.
Ack, I will spend a lot more time thinking about the refactoring.
>
>>
>>>
>>>> pvmw->pmd = pmd_offset(pud, pvmw->address);
>>>> /*
>>>> * Make sure the pmd value isn't cached in a register by the
>>>> diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
>>>> index d3aec7a9926ad..2047558ddcd79 100644
>>>> --- a/mm/pgtable-generic.c
>>>> +++ b/mm/pgtable-generic.c
>>>> @@ -195,6 +195,89 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
>>>> }
>>>> #endif
>>>>
>>>> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>>>> +/*
>>>> + * Deposit page tables for PUD THP.
>>>> + * Called with PUD lock held. Stores PMD tables in a singly-linked stack
>>>> + * via pud_huge_pmd, using only pmd_page->lru.next as the link pointer.
>>>> + *
>>>> + * IMPORTANT: We use only lru.next (offset 8) for linking, NOT the full
>>>> + * list_head. This is because lru.prev (offset 16) overlaps with
>>>> + * ptdesc->pmd_huge_pte, which stores the PMD table's deposited PTE tables.
>>>> + * Using list_del() would corrupt pmd_huge_pte with LIST_POISON2.
>>>
>>> This is horrible and feels like a hack? Treating a doubly-linked list as a
>>> singly-linked one like this is not upstreamable.
>>>
>>>> + *
>>>> + * PTE tables should be deposited into the PMD using pud_deposit_pte().
>>>> + */
>>>> +void pgtable_trans_huge_pud_deposit(struct mm_struct *mm, pud_t *pudp,
>>>> + pmd_t *pmd_table)
>>>
>>> This is a horrid, you're depositing the PMD using the... questionable
>>> list_head abuse, but then also have pud_deposit_pte()... But here we're
>>> depositing a PMD shouldn't the name reflect that?
>>>
>>>> +{
>>>> + pgtable_t pmd_page = virt_to_page(pmd_table);
>>>> +
>>>> + assert_spin_locked(pud_lockptr(mm, pudp));
>>>> +
>>>> + /* Push onto stack using only lru.next as the link */
>>>> + pmd_page->lru.next = (struct list_head *)pud_huge_pmd(pudp);
>>>
>>> Yikes...
>>>
>>>> + pud_huge_pmd(pudp) = pmd_page;
>>>> +}
>>>> +
>>>> +/*
>>>> + * Withdraw the deposited PMD table for PUD THP split or zap.
>>>> + * Called with PUD lock held.
>>>> + * Returns NULL if no more PMD tables are deposited.
>>>> + */
>>>> +pmd_t *pgtable_trans_huge_pud_withdraw(struct mm_struct *mm, pud_t *pudp)
>>>> +{
>>>> + pgtable_t pmd_page;
>>>> +
>>>> + assert_spin_locked(pud_lockptr(mm, pudp));
>>>> +
>>>> + pmd_page = pud_huge_pmd(pudp);
>>>> + if (!pmd_page)
>>>> + return NULL;
>>>> +
>>>> + /* Pop from stack - lru.next points to next PMD page (or NULL) */
>>>> + pud_huge_pmd(pudp) = (pgtable_t)pmd_page->lru.next;
>>>
>>> Where's the popping? You're just assigning here.
>>
>>
>> Ack on all of the above. Hopefully [1] is better.
>
> Thanks!
>
>>>
>>>> +
>>>> + return page_address(pmd_page);
>>>> +}
>>>> +
>>>> +/*
>>>> + * Deposit a PTE table into a standalone PMD table (not yet in page table hierarchy).
>>>> + * Used for PUD THP pre-deposit. The PMD table's pmd_huge_pte stores a linked list.
>>>> + * No lock assertion since the PMD isn't visible yet.
>>>> + */
>>>> +void pud_deposit_pte(pmd_t *pmd_table, pgtable_t pgtable)
>>>> +{
>>>> + struct ptdesc *ptdesc = virt_to_ptdesc(pmd_table);
>>>> +
>>>> + /* FIFO - add to front of list */
>>>> + if (!ptdesc->pmd_huge_pte)
>>>> + INIT_LIST_HEAD(&pgtable->lru);
>>>> + else
>>>> + list_add(&pgtable->lru, &ptdesc->pmd_huge_pte->lru);
>>>> + ptdesc->pmd_huge_pte = pgtable;
>>>> +}
>>>> +
>>>> +/*
>>>> + * Withdraw a PTE table from a standalone PMD table.
>>>> + * Returns NULL if no more PTE tables are deposited.
>>>> + */
>>>> +pgtable_t pud_withdraw_pte(pmd_t *pmd_table)
>>>> +{
>>>> + struct ptdesc *ptdesc = virt_to_ptdesc(pmd_table);
>>>> + pgtable_t pgtable;
>>>> +
>>>> + pgtable = ptdesc->pmd_huge_pte;
>>>> + if (!pgtable)
>>>> + return NULL;
>>>> + ptdesc->pmd_huge_pte = list_first_entry_or_null(&pgtable->lru,
>>>> + struct page, lru);
>>>> + if (ptdesc->pmd_huge_pte)
>>>> + list_del(&pgtable->lru);
>>>> + return pgtable;
>>>> +}
>>>> +#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
>>>> +
>>>> #ifndef __HAVE_ARCH_PMDP_INVALIDATE
>>>> pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
>>>> pmd_t *pmdp)
>>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>>> index 7b9879ef442d9..69acabd763da4 100644
>>>> --- a/mm/rmap.c
>>>> +++ b/mm/rmap.c
>>>> @@ -811,6 +811,32 @@ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address)
>>>> return pmd;
>>>> }
>>>>
>>>> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>>>> +/*
>>>> + * Returns the actual pud_t* where we expect 'address' to be mapped from, or
>>>> + * NULL if it doesn't exist. No guarantees / checks on what the pud_t*
>>>> + * represents.
>>>> + */
>>>> +pud_t *mm_find_pud(struct mm_struct *mm, unsigned long address)
>>>
>>> This series seems to be full of copy/paste.
>>>
>>> It's just not acceptable given the state of THP code as I said in reply to
>>> the cover letter - you need to _refactor_ the code.
>>>
>>> The code is bug-prone and difficult to maintain as-is, your series has to
>>> improve the technical debt, not add to it.
>>>
>>
>> In some cases we might not be able to avoid the copy, but this is definitely
>> a place where we don't need to. I will change it here. Thanks!
>
> I disagree, see above :) But thanks on this one
>
>>
>>>> +{
>>>> + pgd_t *pgd;
>>>> + p4d_t *p4d;
>>>> + pud_t *pud = NULL;
>>>> +
>>>> + pgd = pgd_offset(mm, address);
>>>> + if (!pgd_present(*pgd))
>>>> + goto out;
>>>> +
>>>> + p4d = p4d_offset(pgd, address);
>>>> + if (!p4d_present(*p4d))
>>>> + goto out;
>>>> +
>>>> + pud = pud_offset(p4d, address);
>>>> +out:
>>>> + return pud;
>>>> +}
>>>> +#endif
>>>> +
>>>> struct folio_referenced_arg {
>>>> int mapcount;
>>>> int referenced;
>>>> @@ -1415,11 +1441,7 @@ static __always_inline void __folio_add_anon_rmap(struct folio *folio,
>>>> SetPageAnonExclusive(page);
>>>> break;
>>>> case PGTABLE_LEVEL_PUD:
>>>> - /*
>>>> - * Keep the compiler happy, we don't support anonymous
>>>> - * PUD mappings.
>>>> - */
>>>> - WARN_ON_ONCE(1);
>>>> + SetPageAnonExclusive(page);
>>>> break;
>>>> default:
>>>> BUILD_BUG();
>>>> @@ -1503,6 +1525,31 @@ void folio_add_anon_rmap_pmd(struct folio *folio, struct page *page,
>>>> #endif
>>>> }
>>>>
>>>> +/**
>>>> + * folio_add_anon_rmap_pud - add a PUD mapping to a page range of an anon folio
>>>> + * @folio: The folio to add the mapping to
>>>> + * @page: The first page to add
>>>> + * @vma: The vm area in which the mapping is added
>>>> + * @address: The user virtual address of the first page to map
>>>> + * @flags: The rmap flags
>>>> + *
>>>> + * The page range of folio is defined by [first_page, first_page + HPAGE_PUD_NR)
>>>> + *
>>>> + * The caller needs to hold the page table lock, and the page must be locked in
>>>> + * the anon_vma case: to serialize mapping,index checking after setting.
>>>> + */
>>>> +void folio_add_anon_rmap_pud(struct folio *folio, struct page *page,
>>>> + struct vm_area_struct *vma, unsigned long address, rmap_t flags)
>>>> +{
>>>> +#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && \
>>>> + defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
>>>> + __folio_add_anon_rmap(folio, page, HPAGE_PUD_NR, vma, address, flags,
>>>> + PGTABLE_LEVEL_PUD);
>>>> +#else
>>>> + WARN_ON_ONCE(true);
>>>> +#endif
>>>> +}
>>>
>>> More copy/paste... Maybe unavoidable in this case, but be good to try.
>>>
>>>> +
>>>> /**
>>>> * folio_add_new_anon_rmap - Add mapping to a new anonymous folio.
>>>> * @folio: The folio to add the mapping to.
>>>> @@ -1934,6 +1981,20 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>>> }
>>>>
>>>> if (!pvmw.pte) {
>>>> + /*
>>>> + * Check for PUD-mapped THP first.
>>>> + * If we have a PUD mapping and TTU_SPLIT_HUGE_PUD is set,
>>>> + * split the PUD to PMD level and restart the walk.
>>>> + */
>>>
>>> This is literally describing the code below, it's not useful.
>>
>> Ack, Will remove this comment, Thanks!
>
> Thanks
>
>>>
>>>> + if (pvmw.pud && pud_trans_huge(*pvmw.pud)) {
>>>> + if (flags & TTU_SPLIT_HUGE_PUD) {
>>>> + split_huge_pud_locked(vma, pvmw.pud, pvmw.address);
>>>> + flags &= ~TTU_SPLIT_HUGE_PUD;
>>>> + page_vma_mapped_walk_restart(&pvmw);
>>>> + continue;
>>>> + }
>>>> + }
>>>> +
>>>> if (folio_test_anon(folio) && !folio_test_swapbacked(folio)) {
>>>> if (unmap_huge_pmd_locked(vma, pvmw.address, pvmw.pmd, folio))
>>>> goto walk_done;
>>>> @@ -2325,6 +2386,27 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
>>>> mmu_notifier_invalidate_range_start(&range);
>>>>
>>>> while (page_vma_mapped_walk(&pvmw)) {
>>>> + /* Handle PUD-mapped THP first */
>>>
>>> How did/will this interact with DAX, VFIO PUD THP?
>>
>> It won't interact with DAX. try_to_migrate() does the below and just returns:
>>
>> if (folio_is_zone_device(folio) &&
>> (!folio_is_device_private(folio) && !folio_is_device_coherent(folio)))
>> return;
>>
>> so DAX would never reach here.
>
> Hmm folio_is_zone_device() always returns true for DAX?
>
Yes, that is my understanding. Both fsdax and devdax call into
devm_memremap_pages() -> memremap_pages() in mm/memremap.c, which
unconditionally places all pages in ZONE_DEVICE.
> Also that's just one rmap call right?
>
Yes.
>>
>> I think VFIO pages are pinned and therefore can't be migrated? (I have
>> not looked at the VFIO code, I will try to get a better understanding tomorrow,
>> but please let me know if that sounds wrong.)
>
> OK I've not dug into this either please do check, and be good really to test
> this code vs. actual DAX/VFIO scenarios if you can find a way to test that, thanks!
I think DAX is ok, will check more into VFIO. I will also CC the people who added
DAX and VFIO PUD support in the next RFC.
>
>>
>>
>>>
>>>> + if (!pvmw.pte && !pvmw.pmd) {
>>>> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>>>
>>> Won't pud_trans_huge() imply this...
>>>
>>
>> Agreed, I think it should cover it.
> Thanks!
>
>>
>>>> + /*
>>>> + * PUD-mapped THP: skip migration to preserve the huge
>>>> + * page. Splitting would defeat the purpose of PUD THPs.
>>>> + * Return false to indicate migration failure, which
>>>> + * will cause alloc_contig_range() to try a different
>>>> + * memory region.
>>>> + */
>>>> + if (pvmw.pud && pud_trans_huge(*pvmw.pud)) {
>>>> + page_vma_mapped_walk_done(&pvmw);
>>>> + ret = false;
>>>> + break;
>>>> + }
>>>> +#endif
>>>> + /* Unexpected state: !pte && !pmd but not a PUD THP */
>>>> + page_vma_mapped_walk_done(&pvmw);
>>>> + break;
>>>> + }
>>>> +
>>>> /* PMD-mapped THP migration entry */
>>>> if (!pvmw.pte) {
>>>> __maybe_unused unsigned long pfn;
>>>> @@ -2607,10 +2689,10 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
>>>>
>>>> /*
>>>> * Migration always ignores mlock and only supports TTU_RMAP_LOCKED and
>>>> - * TTU_SPLIT_HUGE_PMD, TTU_SYNC, and TTU_BATCH_FLUSH flags.
>>>> + * TTU_SPLIT_HUGE_PMD, TTU_SPLIT_HUGE_PUD, TTU_SYNC, and TTU_BATCH_FLUSH flags.
>>>> */
>>>> if (WARN_ON_ONCE(flags & ~(TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
>>>> - TTU_SYNC | TTU_BATCH_FLUSH)))
>>>> + TTU_SPLIT_HUGE_PUD | TTU_SYNC | TTU_BATCH_FLUSH)))
>>>> return;
>>>>
>>>> if (folio_is_zone_device(folio) &&
>>>> --
>>>> 2.47.3
>>>>
>>>
>>> This isn't a final review, I'll have to look more thoroughly through here
>>> over time and you're going to have to be patient in general :)
>>>
>>> Cheers, Lorenzo
>>
>>
>> Thanks for the review, this is awesome!
>
> Ack, will do more when I have time, and obviously you're getting a lot of input
> from others too.
>
> Be good to get a summary at next THP cabal ;)
>
>>
>>
>> [1] https://lore.kernel.org/all/20f92576-e932-435f-bb7b-de49eb84b012@gmail.com/
>> [2] https://lore.kernel.org/all/05d5918f-b61b-4091-b8c6-20eebfffc3c4@gmail.com/
>> [3] https://lore.kernel.org/all/2efaa5ed-bd09-41f0-9c07-5cd6cccc4595@gmail.com/
>>
>>
>>
>
> cheers, Lorenzo
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 00/12] mm: PUD (1GB) THP implementation
2026-02-02 15:50 ` Zi Yan
2026-02-04 10:56 ` Lorenzo Stoakes
@ 2026-02-05 11:22 ` David Hildenbrand (arm)
1 sibling, 0 replies; 49+ messages in thread
From: David Hildenbrand (arm) @ 2026-02-05 11:22 UTC (permalink / raw)
To: Zi Yan, Lorenzo Stoakes
Cc: Rik van Riel, Usama Arif, Andrew Morton, linux-mm, hannes,
shakeel.butt, kas, baohua, dev.jain, baolin.wang, npache,
Liam.Howlett, ryan.roberts, vbabka, lance.yang, linux-kernel,
kernel-team, Frank van der Linden
On 2/2/26 16:50, Zi Yan wrote:
> On 2 Feb 2026, at 6:30, Lorenzo Stoakes wrote:
>
>> On Sun, Feb 01, 2026 at 09:44:12PM -0500, Rik van Riel wrote:
>>> To address the obvious objection "but how could we
>>> possibly allocate 1GB huge pages while the workload
>>> is running?", I am planning to pick up the CMA balancing
>>> patch series (thank you, Frank) and get that in an
>>> upstream ready shape soon.
>>>
>>> https://lkml.org/2025/9/15/1735
>>
>> That link doesn't work?
>>
>> Did a quick search for CMA balancing on lore, couldn't find anything, could you
>> provide a lore link?
>
> https://lwn.net/Articles/1038263/
>
>>
>>>
>>> That patch set looks like another case where no
>>> amount of internal testing will find every single
>>> corner case, and we'll probably just want to
>>> merge it upstream, deploy it experimentally, and
>>> aggressively deal with anything that might pop up.
>>
>> I'm not really in favour of this kind of approach. There's plenty of things that
>> were considered 'temporary' upstream that became rather permanent :)
>>
>> Maybe we can't cover all corner-cases, but we need to make sure whatever we do
>> send upstream is maintainable, conceptually sensible and doesn't paint us into
>> any corners, etc.
>>
>>>
>>> With CMA balancing, it would be possibly to just
>>> have half (or even more) of system memory for
>>> movable allocations only, which would make it possible
>>> to allocate 1GB huge pages dynamically.
>>
>> Could you expand on that?
>
> I also would like to hear David’s opinion on using CMA for 1GB THP.
> He did not like it[1] when I posted my patch back in 2020, but it has
> been more than 5 years. :)
Hehe, not particularly excited about that.
We really have to avoid short-term hacks by any means. We have enough of
that in THP land already.
We talked about challenges in the past like:
* Controlling who gets to allocate them.
* Having a reasonable swap/migration mechanism
* Reliably allocating them without hacks, while being future-proof
* Long-term pinning them when they are actually on ZONE_MOVABLE or CMA
(the latter could be made working but requires thought)
I agree with Lorenzo that this RFC is a bit surprising, because I assume
none of the real challenges were tackled.
Having that said, it will take me some time to come back to this RFC
here, other stuff that piled up is more urgent and more important.
But I'll note that we really have to clean up the THP mess before we add
more stuff on top of it.
For example, I still wonder whether we can just stop pre-allocating page
tables for THPs and instead let code fail+retry in case we cannot remap
the page. I wanted to look into the details a long time ago but never
got to it.
Avoiding that would make the remapping much easier; and we should then
remap from PUD->PMD->PTEs.
Implementing 1 GiB support for shmem might be a reasonable first step,
before we start digging into the anonymous memory land with all these
nasty things involved.
--
Cheers,
David
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 00/12] mm: PUD (1GB) THP implementation
2026-02-04 10:56 ` Lorenzo Stoakes
@ 2026-02-05 11:29 ` David Hildenbrand (arm)
0 siblings, 0 replies; 49+ messages in thread
From: David Hildenbrand (arm) @ 2026-02-05 11:29 UTC (permalink / raw)
To: Lorenzo Stoakes, Zi Yan
Cc: Rik van Riel, Usama Arif, Andrew Morton, linux-mm, hannes,
shakeel.butt, kas, baohua, dev.jain, baolin.wang, npache,
Liam.Howlett, ryan.roberts, vbabka, lance.yang, linux-kernel,
kernel-team, Frank van der Linden
On 2/4/26 11:56, Lorenzo Stoakes wrote:
> On Mon, Feb 02, 2026 at 10:50:35AM -0500, Zi Yan wrote:
>> On 2 Feb 2026, at 6:30, Lorenzo Stoakes wrote:
>>
>>>
>>> That link doesn't work?
>>>
>>> Did a quick search for CMA balancing on lore, couldn't find anything, could you
>>> provide a lore link?
>>
>> https://lwn.net/Articles/1038263/
>>
>>>
>>>
>>> I'm not really in favour of this kind of approach. There's plenty of things that
>>> were considered 'temporary' upstream that became rather permanent :)
>>>
>>> Maybe we can't cover all corner-cases, but we need to make sure whatever we do
>>> send upstream is maintainable, conceptually sensible and doesn't paint us into
>>> any corners, etc.
>>>
>>>
>>> Could you expand on that?
>>
>> I also would like to hear David’s opinion on using CMA for 1GB THP.
>> He did not like it[1] when I posted my patch back in 2020, but it has
>> been more than 5 years. :)
>
> Yes please David :)
Heh, read Zi's mail first :)
>
> I find the idea of using the CMA for this a bit gross. And I fear we're
> essentially expanding the hacks for DAX to everyone.
Jup.
>
> Again I really feel that we should be tackling technical debt here, rather
> than adding features on shaky foundations and just making things worse.
>
Jup.
> We are inundated with series-after-series for THP trying to add features
> but really not very many that are tackling this debt, and I think it's time
> to get firmer about that.
Almost nobody wants to do cleanups because there is the belief that only
features are important; and some companies seem to value features more
than cleanups when it comes to promotions etc.
And cleanups in that area are hard, because you'll very likely just
break stuff because it's all so weirdly interconnected.
See max_ptes_none discussion ...
>
>>
>> The other direction I explored is to get 1GB THP from buddy allocator.
>> That means we need to:
>> 1. bump MAX_PAGE_ORDER to 18 or make it a runtime variable so that only 1GB
>> THP users need to bump it,
>
> Would we need to bump the page block size too to stand more of a chance of
> avoiding fragmentation?
We discussed one idea of another level of anti-fragmentation on top (I
forgot what we called it, essentially bigger blocks that group pages in
the buddy). But implementing that is non-trivial.
But long-term we really need something better than pageblocks and using
hacky CMA reservations for anything larger.
--
Cheers,
David
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 01/12] mm: add PUD THP ptdesc and rmap support
2026-02-05 5:13 ` Usama Arif
@ 2026-02-05 17:40 ` David Hildenbrand (Arm)
2026-02-05 18:05 ` Usama Arif
0 siblings, 1 reply; 49+ messages in thread
From: David Hildenbrand (Arm) @ 2026-02-05 17:40 UTC (permalink / raw)
To: Usama Arif, Matthew Wilcox
Cc: Zi Yan, Kiryl Shutsemau, lorenzo.stoakes, Andrew Morton,
linux-mm, hannes, riel, shakeel.butt, baohua, dev.jain,
baolin.wang, npache, Liam.Howlett, ryan.roberts, vbabka,
lance.yang, linux-kernel, kernel-team
On 2/5/26 06:13, Usama Arif wrote:
>
>
> On 04/02/2026 20:21, Matthew Wilcox wrote:
>> On Thu, Feb 05, 2026 at 04:17:19AM +0000, Matthew Wilcox wrote:
>>> Why are you even talking about "the next series"? The approach is
>>> wrong. You need to put this POC aside and solve the problems that
>>> you've bypassed to create this POC.
>
>
> Ah, is the issue the code duplication that Lorenzo has raised (of course
> completely agree that there is quite a bit), the lru.next patch I did
> which hopefully [1] makes better, or investigating if it might be
> interfering with DAX/VFIO that Lorenzo pointed out (will of course
> investigate before sending the next revision)? The mapcount work
> (I think David is working on this?) that is needed to allow splitting
> PUDs to PMD is completely a separate issue and can be tackled in parallel
> to this.
I would enjoy seeing an investigation where we see what might have to be
done to avoid preallocating page tables for anonymous memory THPs, and
instead, try allocating them on demand when remapping. If allocation
fails, it's just another -ENOMEM or -EAGAIN.
That would not only reduce the page table overhead when using THPs, it
would also avoid the preallocation of two levels like you need here.
Maybe it's doable, maybe not.
Last time I looked into it I was like "there must be a better way to
achieve that" :)
Spinlocks might require preallocating etc.
(as raised elsewhere, starting with shmem support avoids the page table
problem)
>
>>
>> ... and gmail is rejecting this email as being spam. You need to stop
>> using gmail for kernel development work.
>
> I asked a couple of folks now and it seems they got it without any issue.
> I have used it for a long time. I will try and see if something has changed.
Gmail is absolutely horrible for upstream development. For example,
linux-mm recently un-subscribed all gmail addresses.
When I moved to my kernel.org address I thought using gmail as a backend
would be a great choice. I was wrong, and kept getting daily bounce
notifications from MLs (even though my spam filter rules essentially
allowed everything). So I moved to something else (I now pay 3 Euro a
month, omg! :) ).
--
Cheers,
David
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 01/12] mm: add PUD THP ptdesc and rmap support
2026-02-05 17:40 ` David Hildenbrand (Arm)
@ 2026-02-05 18:05 ` Usama Arif
2026-02-05 18:11 ` Usama Arif
0 siblings, 1 reply; 49+ messages in thread
From: Usama Arif @ 2026-02-05 18:05 UTC (permalink / raw)
To: David Hildenbrand (Arm), Matthew Wilcox
Cc: Zi Yan, Kiryl Shutsemau, lorenzo.stoakes, Andrew Morton,
linux-mm, hannes, riel, shakeel.butt, baohua, dev.jain,
baolin.wang, npache, Liam.Howlett, ryan.roberts, vbabka,
lance.yang, linux-kernel, kernel-team
On 05/02/2026 09:40, David Hildenbrand (Arm) wrote:
> On 2/5/26 06:13, Usama Arif wrote:
>>
>>
>> On 04/02/2026 20:21, Matthew Wilcox wrote:
>>> On Thu, Feb 05, 2026 at 04:17:19AM +0000, Matthew Wilcox wrote:
>>>> Why are you even talking about "the next series"? The approach is
>>>> wrong. You need to put this POC aside and solve the problems that
>>>> you've bypassed to create this POC.
>>
>>
>> Ah, is the issue the code duplication that Lorenzo has raised (of course
>> completely agree that there is quite a bit), the lru.next patch I did
>> which hopefully [1] makes better, or investigating if it might be
>> interfering with DAX/VFIO that Lorenzo pointed out (will of course
>> investigate before sending the next revision)? The mapcount work
>> (I think David is working on this?) that is needed to allow splitting
>> PUDs to PMD is completely a separate issue and can be tackled in parallel
>> to this.
>
> I would enjoy seeing an investigation where we see what might have to be done to avoid preallocating page tables for anonymous memory THPs, and instead, try allocating them on demand when remapping. If allocation fails, it's just another -ENOMEM or -EAGAIN.
>
> That would not only reduce the page table overhead when using THPs, it would also avoid the preallocation of two levels like you need here.
>
> Maybe it's doable, maybe not.
>
> Last time I looked into it I was like "there must be a better way to achieve that" :)
>
> Spinlocks might require preallocating etc.
Thanks for this! I am going to try to implement this now and stress test it for 2M THPs as well.
I have access to some production workloads that use a lot of THPs, and I can add
counters to see how often this even happens in prod workloads, i.e. how often page table
allocation actually fails for 2M THPs if it is done on demand instead of preallocated.
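Something like this is what I have in mind for the counter (sketch only; the
event name is made up and would need a new vm_event_item entry):

	/* hypothetical helper, counting on-demand PTE table allocation failures */
	static pgtable_t pte_alloc_counted(struct mm_struct *mm)
	{
		pgtable_t pgtable = pte_alloc_one(mm);

		if (unlikely(!pgtable))
			count_vm_event(THP_SPLIT_PTE_ALLOC_FAILED);	/* made-up event */
		return pgtable;
	}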
>
> (as raised elsewhere, staring with shmem support avoid the page table problem)
>
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 00/12] mm: PUD (1GB) THP implementation
2026-02-03 23:29 ` Usama Arif
2026-02-04 0:08 ` Frank van der Linden
@ 2026-02-05 18:07 ` Zi Yan
2026-02-07 23:22 ` Usama Arif
1 sibling, 1 reply; 49+ messages in thread
From: Zi Yan @ 2026-02-05 18:07 UTC (permalink / raw)
To: Usama Arif
Cc: Andrew Morton, David Hildenbrand, lorenzo.stoakes, linux-mm,
hannes, riel, shakeel.butt, kas, baohua, dev.jain, baolin.wang,
npache, Liam.Howlett, ryan.roberts, vbabka, lance.yang,
linux-kernel, kernel-team
On 3 Feb 2026, at 18:29, Usama Arif wrote:
> On 02/02/2026 08:24, Zi Yan wrote:
>> On 1 Feb 2026, at 19:50, Usama Arif wrote:
>>
>>> This is an RFC series to implement 1GB PUD-level THPs, allowing
>>> applications to benefit from reduced TLB pressure without requiring
>>> hugetlbfs. The patches are based on top of
>>> f9b74c13b773b7c7e4920d7bc214ea3d5f37b422 from mm-stable (6.19-rc6).
>>
>> It is nice to see you are working on 1GB THP.
>>
>>>
>>> Motivation: Why 1GB THP over hugetlbfs?
>>> =======================================
>>>
>>> While hugetlbfs provides 1GB huge pages today, it has significant limitations
>>> that make it unsuitable for many workloads:
>>>
>>> 1. Static Reservation: hugetlbfs requires pre-allocating huge pages at boot
>>> or runtime, taking memory away. This requires capacity planning,
>>> administrative overhead, and makes workload orchastration much much more
>>> complex, especially colocating with workloads that don't use hugetlbfs.
>>
>> But you are using CMA, the same allocation mechanism as hugetlb_cma. What
>> is the difference?
>>
>
> So we dont really need to use CMA. CMA can help a lot ofcourse, but we dont *need* it.
> For e.g. I can run the very simple case [1] of trying to get 1G pages in the upstream
> kernel without CMA on my server and it works. The server has been up for more than a week
> (so pretty fragmented), is running a bunch of stuff in the background, uses 0 CMA memory,
> and I tried to get 20x1G pages on it and it worked.
> It uses folio_alloc_gigantic, which is exactly what this series uses:
>
> $ uptime -p
> up 1 week, 3 days, 5 hours, 7 minutes
> $ cat /proc/meminfo | grep -i cma
> CmaTotal: 0 kB
> CmaFree: 0 kB
> $ echo 20 | sudo tee /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
> 20
> $ cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
> 20
> $ free -h
> total used free shared buff/cache available
> Mem: 1.0Ti 142Gi 292Gi 143Mi 583Gi 868Gi
> Swap: 129Gi 3.5Gi 126Gi
> $ ./map_1g_hugepages
> Mapping 20 x 1GB huge pages (20 GB total)
> Mapped at 0x7f43c0000000
> Touched page 0 at 0x7f43c0000000
> Touched page 1 at 0x7f4400000000
> Touched page 2 at 0x7f4440000000
> Touched page 3 at 0x7f4480000000
> Touched page 4 at 0x7f44c0000000
> Touched page 5 at 0x7f4500000000
> Touched page 6 at 0x7f4540000000
> Touched page 7 at 0x7f4580000000
> Touched page 8 at 0x7f45c0000000
> Touched page 9 at 0x7f4600000000
> Touched page 10 at 0x7f4640000000
> Touched page 11 at 0x7f4680000000
> Touched page 12 at 0x7f46c0000000
> Touched page 13 at 0x7f4700000000
> Touched page 14 at 0x7f4740000000
> Touched page 15 at 0x7f4780000000
> Touched page 16 at 0x7f47c0000000
> Touched page 17 at 0x7f4800000000
> Touched page 18 at 0x7f4840000000
> Touched page 19 at 0x7f4880000000
> Unmapped successfully
>
OK, I see the subtle difference among CMA, hugetlb_cma, and alloc_contig_pages(),
although CMA and hugetlb_cma use alloc_contig_pages() behind the scenes:
1. CMA and hugetlb_cma reserve some amount of memory at boot as MIGRATE_CMA,
and only CMA allocations are allowed there. It is a carveout.
2. alloc_contig_pages() without CMA needs to look for a contiguous physical
range without any unmovable pages or pinned movable pages, so that the allocation
can succeed.
Your example is quite optimistic, since the free memory is much bigger than
the requested 1GB pages, 292GB vs 20GB. Unless the worst-case scenario happens,
where each 1GB of the free memory contains one unmovable page, alloc_contig_pages()
will succeed. But does it represent the production environment, where free memory
is scarce? And in that case, how long does alloc_contig_pages() take to get
1GB of memory? Is that delay tolerable?
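(For concreteness, the non-CMA path being discussed boils down to something
like the sketch below; the gfp flags and node choice are illustrative, and
the series reportedly goes through folio_alloc_gigantic(), which ends up in
the same machinery:)

	/*
	 * Isolate and allocate 1GB worth of physically contiguous pages.
	 * Every page in the chosen range must be free, or movable and not
	 * pinned, otherwise the attempt fails.
	 */
	static struct page *try_alloc_1g(int nid)
	{
		return alloc_contig_pages(1UL << (PUD_SHIFT - PAGE_SHIFT),
					  GFP_KERNEL | __GFP_NOWARN,
					  nid, NULL);
	}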
This discussion all comes back to
“should we have a dedicated source for 1GB folios?” Yu Zhao’s TAO[1] was
interesting, since it has a dedicated zone for large folios, and split is
replaced by migrating after-split folios to a different zone. But how to
size that dedicated zone is still not determined. Lots of ideas,
but no conclusion yet.
[1] https://lwn.net/Articles/964097/
>
>
>
>>>
>>> 4. No Fallback: If a 1GB huge page cannot be allocated, hugetlbfs fails
>>> rather than falling back to smaller pages. This makes it fragile under
>>> memory pressure.
>>
>> True.
>>
>>>
>>> 4. No Splitting: hugetlbfs pages cannot be split when only partial access
>>> is needed, leading to memory waste and preventing partial reclaim.
>>
>> Since you have PUD THP implementation, have you run any workload on it?
>> How often you see a PUD THP split?
>>
>
> Ah so running non upstream kernels in production is a bit more difficult
> (and also risky). I was trying to use the 512M experiment on arm as a comparison,
> although I know its not the same thing with PAGE_SIZE and pageblock order.
>
> I can try some other upstream benchmarks if it helps? Although will need to find
> ones that create VMA > 1G.
I think getting split stats from ARM 512MB PMD THP can give some clues about
1GB THP, since the THP sizes are similar (yeah, the base page to THP size ratios
differ by 32x, but the gap between base page size and THP size is still
much bigger than 4KB vs 2MB).
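(For the ratios: 512MB / 64KB = 8,192 base pages per ARM PMD THP, while
1GB / 4KB = 262,144 base pages per x86 PUD THP; hence the 32x.)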
>
>> Oh, you actually ran 512MB THP on ARM64 (I saw it below), do you have
>> any split stats to show the necessity of THP split?
>>
>>>
>>> 5. Memory Accounting: hugetlbfs memory is accounted separately and cannot
>>> be easily shared with regular memory pools.
>>
>> True.
>>
>>>
>>> PUD THP solves these limitations by integrating 1GB pages into the existing
>>> THP infrastructure.
>>
>> The main advantage of PUD THP over hugetlb is that it can be split and mapped
>> at sub-folio level. Do you have any data to support the necessity of them?
>> I wonder if it would be easier to just support 1GB folio in core-mm first
>> and we can add 1GB THP split and sub-folio mapping later. With that, we
>> can move hugetlb users to 1GB folio.
>>
>
> I would say its not the main advantage? But its definitely one of them.
> The 2 main areas where split would be helpful is munmap partial
> range and reclaim (MADV_PAGEOUT). For e.g. jemalloc/tcmalloc can now start
> taking advantge of 1G pages. My knowledge is not that great when it comes
> to memory allocators, but I believe they track for how long certain areas
> have been cold and can trigger reclaim as an example. Then split will be useful.
> Having memory allocators use hugetlb is probably going to be a no?
To take advantage of 1GB pages, memory allocators would want to keep that
whole GB mapped by a PUD, otherwise TLB-wise there is no difference from
using 2MB pages, right? I guess memory allocators would want to promote
a set of stable memory objects to 1GB and demote them from 1GB if any of them
is gone (promote by migrating them into a 1GB folio, demote by migrating
them out of a 1GB folio), and this can avoid splits.
>
>
>> BTW, without split support, you can apply HVO to 1GB folio to save memory.
>> That is a disadvantage of PUD THP. Have you taken that into consideration?
>> Basically, switching from hugetlb to PUD THP, you will lose memory due
>> to vmemmap usage.
>>
>
> Yeah so HVO saves 16M per 1G, and the page depost mechanism adds ~2M as per 1G.
> We have HVO enabled in the meta fleet. I think we should not only think of PUD THP
> as a replacement for hugetlb, but to also enable further usescases where hugetlb
> would not be feasible.
>
> Ater the basic infrastructure for 1G is there, we can work on optimizing, I think
> there would be a a lot of interesting work we can do. HVO for 1G THP would be one
> of them?
HVO would prevent folio split, right? Since most of the struct pages are mapped
to the same memory area, you would need to allocate more memory, 16MB, to split
1GB. That further decreases the motivation for splitting 1GB.
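(For the numbers: 1GB / 4KB = 262,144 struct pages, and at 64 bytes each on
x86-64 that is 16MB of vmemmap per 1GB folio, most of which HVO remaps onto
shared pages. The deposited page tables, 512 PTE tables plus one PMD table at
4KB each, are the ~2MB mentioned above.)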
>
>>>
>>> Performance Results
>>> ===================
>>>
>>> Benchmark results of these patches on Intel Xeon Platinum 8321HC:
>>>
>>> Test: True Random Memory Access [1] test of 4GB memory region with pointer
>>> chasing workload (4M random pointer dereferences through memory):
>>>
>>> | Metric | PUD THP (1GB) | PMD THP (2MB) | Change |
>>> |-------------------|---------------|---------------|--------------|
>>> | Memory access | 88 ms | 134 ms | 34% faster |
>>> | Page fault time | 898 ms | 331 ms | 2.7x slower |
>>>
>>> Page faulting 1G pages is 2.7x slower (Allocating 1G pages is hard :)).
>>> For long-running workloads this will be a one-off cost, and the 34%
>>> improvement in access latency provides significant benefit.
>>>
>>> ARM with 64K PAGE_SZIE supports 512M PMD THPs. In meta, we have a CPU
>>> bound workload running on a large number of ARM servers (256G). I enabled
>>> the 512M THP settings to always for a 100 servers in production (didn't
>>> really have high expectations :)). The average memory used for the workload
>>> increased from 217G to 233G. The amount of memory backed by 512M pages was
>>> 68G! The dTLB misses went down by 26% and the PID multiplier increased input
>>> by 5.9% (This is a very significant improvment in workload performance).
>>> A significant number of these THPs were faulted in at application start when
>>> were present across different VMAs. Ofcourse getting these 512M pages is
>>> easier on ARM due to bigger PAGE_SIZE and pageblock order.
>>>
>>> I am hoping that these patches for 1G THP can be used to provide similar
>>> benefits for x86. I expect workloads to fault them in at start time when there
>>> is plenty of free memory available.
>>>
>>>
>>> Previous attempt by Zi Yan
>>> ==========================
>>>
>>> Zi Yan attempted 1G THPs [2] in kernel version 5.11. There have been
>>> significant changes in kernel since then, including folio conversion, mTHP
>>> framework, ptdesc, rmap changes, etc. I found it easier to use the current PMD
>>> code as reference for making 1G PUD THP work. I am hoping Zi can provide
>>> guidance on these patches!
>>
>> I am more than happy to help you. :)
>>
>
> Thanks!!!
>
>>>
>>> Major Design Decisions
>>> ======================
>>>
>>> 1. No shared 1G zero page: The memory cost would be quite significant!
>>>
>>> 2. Page Table Pre-deposit Strategy
>>> PMD THP deposits a single PTE page table. PUD THP deposits 512 PTE
>>> page tables (one for each potential PMD entry after split).
>>> We allocate a PMD page table and use its pmd_huge_pte list to store
>>> the deposited PTE tables. This ensures split operations don't fail due
>>> to page table allocation failures (at the cost of 2M per PUD THP)
>>>
>>> 3. Split to Base Pages
>>> When a PUD THP must be split (COW, partial unmap, mprotect), we split
>>> directly to base pages (262,144 PTEs). The ideal thing would be to split
>>> to 2M pages and then to 4K pages if needed. However, this would require
>>> significant rmap and mapcount tracking changes.
>>>
>>> 4. COW and fork handling via split
>>> Copy-on-write and fork for PUD THP triggers a split to base pages, then
>>> uses existing PTE-level COW infrastructure. Getting another 1G region is
>>> hard and could fail. If only a 4K is written, copying 1G is a waste.
>>> Probably this should only be done on CoW and not fork?
>>>
>>> 5. Migration via split
>>> Split PUD to PTEs and migrate individual pages. It is going to be difficult
>>> to find a 1G continguous memory to migrate to. Maybe its better to not
>>> allow migration of PUDs at all? I am more tempted to not allow migration,
>>> but have kept splitting in this RFC.
>>
>> Without migration, PUD THP loses its flexibility and transparency. But with
>> its 1GB size, I also wonder what the purpose of PUD THP migration can be.
>> It does not create memory fragmentation, since it is the largest folio size
>> we have and contiguous. NUMA balancing 1GB THP seems too much work.
>
> Yeah this is exactly what I was thinking as well. It is going to be expensive
> and difficult to migrate 1G pages, and I am not sure if what we get out of it
> is worth it? I kept the splitting code in this RFC as I wanted to show that
> its possible to split and migrate and the rejecting migration code is a lot easier.
Got it. Maybe reframing this patchset as 1GB folio support without split or
migration is better?
>
>>
>> BTW, I posted many questions, but that does not mean I object the patchset.
>> I just want to understand your use case better, reduce unnecessary
>> code changes, and hopefully get it upstreamed this time. :)
>>
>> Thank you for the work.
>>
>
> Ah no this is awesome! Thanks for the questions! Its basically the discussion I
> wanted to start with the RFC.
>
>
> [1] https://gist.github.com/uarif1/35dcd63f9d76048b07eb5c16ace85991
Best Regards,
Yan, Zi
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 01/12] mm: add PUD THP ptdesc and rmap support
2026-02-05 18:05 ` Usama Arif
@ 2026-02-05 18:11 ` Usama Arif
0 siblings, 0 replies; 49+ messages in thread
From: Usama Arif @ 2026-02-05 18:11 UTC (permalink / raw)
To: David Hildenbrand (Arm), Matthew Wilcox
Cc: Zi Yan, Kiryl Shutsemau, lorenzo.stoakes, Andrew Morton,
linux-mm, hannes, riel, shakeel.butt, baohua, dev.jain,
baolin.wang, npache, Liam.Howlett, ryan.roberts, vbabka,
lance.yang, linux-kernel, kernel-team
>>
>> (as raised elsewhere, staring with shmem support avoid the page table problem)
>>
>
Also forgot to add here: I will look into this first, before anon PUD THPs.
^ permalink raw reply [flat|nested] 49+ messages in thread
* Re: [RFC 00/12] mm: PUD (1GB) THP implementation
2026-02-05 18:07 ` Zi Yan
@ 2026-02-07 23:22 ` Usama Arif
0 siblings, 0 replies; 49+ messages in thread
From: Usama Arif @ 2026-02-07 23:22 UTC (permalink / raw)
To: Zi Yan
Cc: Andrew Morton, David Hildenbrand, lorenzo.stoakes, linux-mm,
hannes, riel, shakeel.butt, kas, baohua, dev.jain, baolin.wang,
npache, Liam.Howlett, ryan.roberts, vbabka, lance.yang,
linux-kernel, kernel-team
On 05/02/2026 18:07, Zi Yan wrote:
> On 3 Feb 2026, at 18:29, Usama Arif wrote:
>
>> On 02/02/2026 08:24, Zi Yan wrote:
>>> On 1 Feb 2026, at 19:50, Usama Arif wrote:
>>>
>>>> This is an RFC series to implement 1GB PUD-level THPs, allowing
>>>> applications to benefit from reduced TLB pressure without requiring
>>>> hugetlbfs. The patches are based on top of
>>>> f9b74c13b773b7c7e4920d7bc214ea3d5f37b422 from mm-stable (6.19-rc6).
>>>
>>> It is nice to see you are working on 1GB THP.
>>>
>>>>
>>>> Motivation: Why 1GB THP over hugetlbfs?
>>>> =======================================
>>>>
>>>> While hugetlbfs provides 1GB huge pages today, it has significant limitations
>>>> that make it unsuitable for many workloads:
>>>>
>>>> 1. Static Reservation: hugetlbfs requires pre-allocating huge pages at boot
>>>> or runtime, taking memory away. This requires capacity planning,
>>>> administrative overhead, and makes workload orchastration much much more
>>>> complex, especially colocating with workloads that don't use hugetlbfs.
>>>
>>> But you are using CMA, the same allocation mechanism as hugetlb_cma. What
>>> is the difference?
>>>
>>
>> So we dont really need to use CMA. CMA can help a lot ofcourse, but we dont *need* it.
>> For e.g. I can run the very simple case [1] of trying to get 1G pages in the upstream
>> kernel without CMA on my server and it works. The server has been up for more than a week
>> (so pretty fragmented), is running a bunch of stuff in the background, uses 0 CMA memory,
>> and I tried to get 20x1G pages on it and it worked.
>> It uses folio_alloc_gigantic, which is exactly what this series uses:
>>
>> $ uptime -p
>> up 1 week, 3 days, 5 hours, 7 minutes
>> $ cat /proc/meminfo | grep -i cma
>> CmaTotal: 0 kB
>> CmaFree: 0 kB
>> $ echo 20 | sudo tee /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
>> 20
>> $ cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
>> 20
>> $ free -h
>> total used free shared buff/cache available
>> Mem: 1.0Ti 142Gi 292Gi 143Mi 583Gi 868Gi
>> Swap: 129Gi 3.5Gi 126Gi
>> $ ./map_1g_hugepages
>> Mapping 20 x 1GB huge pages (20 GB total)
>> Mapped at 0x7f43c0000000
>> Touched page 0 at 0x7f43c0000000
>> Touched page 1 at 0x7f4400000000
>> Touched page 2 at 0x7f4440000000
>> Touched page 3 at 0x7f4480000000
>> Touched page 4 at 0x7f44c0000000
>> Touched page 5 at 0x7f4500000000
>> Touched page 6 at 0x7f4540000000
>> Touched page 7 at 0x7f4580000000
>> Touched page 8 at 0x7f45c0000000
>> Touched page 9 at 0x7f4600000000
>> Touched page 10 at 0x7f4640000000
>> Touched page 11 at 0x7f4680000000
>> Touched page 12 at 0x7f46c0000000
>> Touched page 13 at 0x7f4700000000
>> Touched page 14 at 0x7f4740000000
>> Touched page 15 at 0x7f4780000000
>> Touched page 16 at 0x7f47c0000000
>> Touched page 17 at 0x7f4800000000
>> Touched page 18 at 0x7f4840000000
>> Touched page 19 at 0x7f4880000000
>> Unmapped successfully
>>
>
> OK, I see the subtle difference among CMA, hugetlb_cma, alloc_contig_pages(),
> although CMA and hugetlb_cma use alloc_contig_pages() behind the scenes:
>
> 1. CMA and hugetlb_cma reserves some amount of memory at boot as MIGRATE_CMA
> and only CMA allocations are allowed. It is a carveout.
Yes, also there is always going to be some amount of movable, non-pinned memory in
the system. So it is ok to have a certain percentage of memory dedicated to
CMA even if we never make 1G allocations, as we aren't really taking it away from
the system. When it's needed for 1G allocations, the memory will just be migrated out.
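For example (size purely illustrative), booting with something like cma=16G
reserves such a carveout, and that memory keeps serving movable allocations
until a contiguous request actually needs it.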
>
> 2. alloc_contig_pages() without CMA needs to look for a contiguous physical
> range without any unmovable page or pinned movable pages, so that the allocation
> can succeeds.
>
> Your example is quite optimistic, since the free memory is much bigger than
> the requested 1GB pages, 292GB vs 20GB. Unless the worst scenario, where
> each 1GB of the free memory has 1 unmovable pages, happens, alloc_contig_pages()
> will succeed. But does it represent the product environment, where free memory
> is scarce? And in that case, how long does alloc_contig_pages() take to get
> 1GB memory? Is that delay tolerable?
So this was my personal server, which had been up for more than a week. I was
expecting the worst case as you described, but it seems that doesn't really happen.
I will also try requesting a larger number of 1G pages.
The majority of use cases for this would be applications getting the 1G pages
when they are started (when there is plenty of free memory) and holding them
for a long time. The delay is large (as I showed in the numbers below), but if
the application gets the 1G pages at the start and keeps them for a long time, it's
a one-off cost.
>
> This discussion all comes back to
> “should we have a dedicated source for 1GB folio?” Yu Zhao’s TAO[1] was
> interesting, since it has a dedicated zone for large folios and split is
> replaced by migrating after-split folios to a different zone. But how to
> adjust that dedicated zone size is still not determined. Lots of ideas,
> but no conclusion yet.
>
> [1] https://lwn.net/Articles/964097/
>
Actually I wasn't a big fan of TAO. I would rather have CMA than TAO, as at least
you wouldn't make the memory unusable if there are no 1G allocations. But as can
be seen, neither is actually needed.
>>
>>
>>
>>>>
>>>> 4. No Fallback: If a 1GB huge page cannot be allocated, hugetlbfs fails
>>>> rather than falling back to smaller pages. This makes it fragile under
>>>> memory pressure.
>>>
>>> True.
>>>
>>>>
>>>> 4. No Splitting: hugetlbfs pages cannot be split when only partial access
>>>> is needed, leading to memory waste and preventing partial reclaim.
>>>
>>> Since you have PUD THP implementation, have you run any workload on it?
>>> How often you see a PUD THP split?
>>>
>>
>> Ah so running non upstream kernels in production is a bit more difficult
>> (and also risky). I was trying to use the 512M experiment on arm as a comparison,
>> although I know its not the same thing with PAGE_SIZE and pageblock order.
>>
>> I can try some other upstream benchmarks if it helps? Although will need to find
>> ones that create VMA > 1G.
>
> I think getting split stats from ARM 512MB PMD THP can give some clue about
> 1GB THP, since the THP sizes are similar (yeah, base page to THP size ratios
> are 32x different but the gap between base page size and THP size is still
> much bigger than 4KB vs 2MB).
>
There were splits. I was running with max_ptes_none = 0, as I didn't want jobs
to OOM, and the THP shrinker was kicking in. I don't have the numbers on hand, but
I can try to run the job again next week (it takes some time and effort to
set things up).
>>
>>> Oh, you actually ran 512MB THP on ARM64 (I saw it below), do you have
>>> any split stats to show the necessity of THP split?
>>>
>>>>
>>>> 5. Memory Accounting: hugetlbfs memory is accounted separately and cannot
>>>> be easily shared with regular memory pools.
>>>
>>> True.
>>>
>>>>
>>>> PUD THP solves these limitations by integrating 1GB pages into the existing
>>>> THP infrastructure.
>>>
>>> The main advantage of PUD THP over hugetlb is that it can be split and mapped
>>> at sub-folio level. Do you have any data to support the necessity of them?
>>> I wonder if it would be easier to just support 1GB folio in core-mm first
>>> and we can add 1GB THP split and sub-folio mapping later. With that, we
>>> can move hugetlb users to 1GB folio.
>>>
>>
>> I would say its not the main advantage? But its definitely one of them.
>> The 2 main areas where split would be helpful is munmap partial
>> range and reclaim (MADV_PAGEOUT). For e.g. jemalloc/tcmalloc can now start
>> taking advantge of 1G pages. My knowledge is not that great when it comes
>> to memory allocators, but I believe they track for how long certain areas
>> have been cold and can trigger reclaim as an example. Then split will be useful.
>> Having memory allocators use hugetlb is probably going to be a no?
>
> To take advantage of 1GB pages, memory allocators would want to keep that
> whole GB mapped by PUD, otherwise TLB wise there is no difference from
> using 2MB pages, right?
Yes
> I guess memory allocators would want to promote
> a set of stable memory objects to 1GB and demote them from 1GB if any
> is gone (promote by migrating them into a 1GB folio, demote by migrating
> them out of a 1GB folio) and this can avoid split.
>
>>
>>
>>> BTW, without split support, you can apply HVO to 1GB folio to save memory.
>>> That is a disadvantage of PUD THP. Have you taken that into consideration?
>>> Basically, switching from hugetlb to PUD THP, you will lose memory due
>>> to vmemmap usage.
>>>
>>
>> Yeah so HVO saves 16M per 1G, and the page depost mechanism adds ~2M as per 1G.
>> We have HVO enabled in the meta fleet. I think we should not only think of PUD THP
>> as a replacement for hugetlb, but to also enable further usescases where hugetlb
>> would not be feasible.
>>
>> Ater the basic infrastructure for 1G is there, we can work on optimizing, I think
>> there would be a a lot of interesting work we can do. HVO for 1G THP would be one
>> of them?
>
> HVO would prevent folio split, right? Since most of struct pages are mapped
> to the same memory area. You will need to allocate more memory, 16MB, to split
> 1GB. That further decreases the motivation of splitting 1GB.
Yes, that's right.
>>
>>>>
>>>> Performance Results
>>>> ===================
>>>>
>>>> Benchmark results of these patches on Intel Xeon Platinum 8321HC:
>>>>
>>>> Test: True Random Memory Access [1] test of 4GB memory region with pointer
>>>> chasing workload (4M random pointer dereferences through memory):
>>>>
>>>> | Metric | PUD THP (1GB) | PMD THP (2MB) | Change |
>>>> |-------------------|---------------|---------------|--------------|
>>>> | Memory access | 88 ms | 134 ms | 34% faster |
>>>> | Page fault time | 898 ms | 331 ms | 2.7x slower |
>>>>
>>>> Page faulting 1G pages is 2.7x slower (Allocating 1G pages is hard :)).
>>>> For long-running workloads this will be a one-off cost, and the 34%
>>>> improvement in access latency provides significant benefit.
>>>>
>>>> ARM with 64K PAGE_SZIE supports 512M PMD THPs. In meta, we have a CPU
>>>> bound workload running on a large number of ARM servers (256G). I enabled
>>>> the 512M THP settings to always for a 100 servers in production (didn't
>>>> really have high expectations :)). The average memory used for the workload
>>>> increased from 217G to 233G. The amount of memory backed by 512M pages was
>>>> 68G! The dTLB misses went down by 26% and the PID multiplier increased input
>>>> by 5.9% (This is a very significant improvment in workload performance).
>>>> A significant number of these THPs were faulted in at application start when
>>>> were present across different VMAs. Ofcourse getting these 512M pages is
>>>> easier on ARM due to bigger PAGE_SIZE and pageblock order.
>>>>
>>>> I am hoping that these patches for 1G THP can be used to provide similar
>>>> benefits for x86. I expect workloads to fault them in at start time when there
>>>> is plenty of free memory available.
>>>>
>>>>
>>>> Previous attempt by Zi Yan
>>>> ==========================
>>>>
>>>> Zi Yan attempted 1G THPs [2] in kernel version 5.11. There have been
>>>> significant changes in kernel since then, including folio conversion, mTHP
>>>> framework, ptdesc, rmap changes, etc. I found it easier to use the current PMD
>>>> code as reference for making 1G PUD THP work. I am hoping Zi can provide
>>>> guidance on these patches!
>>>
>>> I am more than happy to help you. :)
>>>
>>
>> Thanks!!!
>>
>>>>
>>>> Major Design Decisions
>>>> ======================
>>>>
>>>> 1. No shared 1G zero page: The memory cost would be quite significant!
>>>>
>>>> 2. Page Table Pre-deposit Strategy
>>>> PMD THP deposits a single PTE page table. PUD THP deposits 512 PTE
>>>> page tables (one for each potential PMD entry after split).
>>>> We allocate a PMD page table and use its pmd_huge_pte list to store
>>>> the deposited PTE tables. This ensures split operations don't fail due
>>>> to page table allocation failures (at the cost of 2M per PUD THP)
>>>>
>>>> 3. Split to Base Pages
>>>> When a PUD THP must be split (COW, partial unmap, mprotect), we split
>>>> directly to base pages (262,144 PTEs). The ideal thing would be to split
>>>> to 2M pages and then to 4K pages if needed. However, this would require
>>>> significant rmap and mapcount tracking changes.
>>>>
>>>> 4. COW and fork handling via split
>>>> Copy-on-write and fork for PUD THP triggers a split to base pages, then
>>>> uses existing PTE-level COW infrastructure. Getting another 1G region is
>>>> hard and could fail. If only a 4K is written, copying 1G is a waste.
>>>> Probably this should only be done on CoW and not fork?
>>>>
>>>> 5. Migration via split
>>>> Split PUD to PTEs and migrate individual pages. It is going to be difficult
>>>> to find a 1G continguous memory to migrate to. Maybe its better to not
>>>> allow migration of PUDs at all? I am more tempted to not allow migration,
>>>> but have kept splitting in this RFC.
>>>
>>> Without migration, PUD THP loses its flexibility and transparency. But with
>>> its 1GB size, I also wonder what the purpose of PUD THP migration can be.
>>> It does not create memory fragmentation, since it is the largest folio size
>>> we have and contiguous. NUMA balancing 1GB THP seems too much work.
>>
>> Yeah this is exactly what I was thinking as well. It is going to be expensive
>> and difficult to migrate 1G pages, and I am not sure if what we get out of it
>> is worth it? I kept the splitting code in this RFC as I wanted to show that
>> its possible to split and migrate and the rejecting migration code is a lot easier.
>
> Got it. Maybe reframing this patchset as 1GB folio support without split or
> migration is better?
I think split support is good to have, e.g. for CoW, partial unmap, and mprotect.
I do agree that migration support seems to have little benefit at a high cost, so
it is simplest not to have it.
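A trivial illustration of the partial-unmap case (userspace sketch, assuming
the region actually got backed by a PUD THP; offsets are illustrative):

	#include <sys/mman.h>
	#include <string.h>

	int main(void)
	{
		size_t len = 2UL << 30;		/* 2GB, so a 1GB-aligned region fits */
		char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (p == MAP_FAILED)
			return 1;
		madvise(p, len, MADV_HUGEPAGE);
		memset(p, 1, len);		/* fault the region in */
		/* unmapping a single 4K page forces the PUD mapping to split */
		munmap(p + (1UL << 30), 4096);
		return 0;
	}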
>
>>
>>>
>>> BTW, I posted many questions, but that does not mean I object the patchset.
>>> I just want to understand your use case better, reduce unnecessary
>>> code changes, and hopefully get it upstreamed this time. :)
>>>
>>> Thank you for the work.
>>>
>>
>> Ah no this is awesome! Thanks for the questions! Its basically the discussion I
>> wanted to start with the RFC.
>>
>>
>> [1] https://gist.github.com/uarif1/35dcd63f9d76048b07eb5c16ace85991
>
>
> Best Regards,
> Yan, Zi
^ permalink raw reply [flat|nested] 49+ messages in thread
end of thread, other threads:[~2026-02-07 23:22 UTC | newest]
Thread overview: 49+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-02 0:50 [RFC 00/12] mm: PUD (1GB) THP implementation Usama Arif
2026-02-02 0:50 ` [RFC 01/12] mm: add PUD THP ptdesc and rmap support Usama Arif
2026-02-02 10:44 ` Kiryl Shutsemau
2026-02-02 16:01 ` Zi Yan
2026-02-03 22:07 ` Usama Arif
2026-02-05 4:17 ` Matthew Wilcox
2026-02-05 4:21 ` Matthew Wilcox
2026-02-05 5:13 ` Usama Arif
2026-02-05 17:40 ` David Hildenbrand (Arm)
2026-02-05 18:05 ` Usama Arif
2026-02-05 18:11 ` Usama Arif
2026-02-02 12:15 ` Lorenzo Stoakes
2026-02-04 7:38 ` Usama Arif
2026-02-04 12:55 ` Lorenzo Stoakes
2026-02-05 6:40 ` Usama Arif
2026-02-02 0:50 ` [RFC 02/12] mm/thp: add mTHP stats infrastructure for PUD THP Usama Arif
2026-02-02 11:56 ` Lorenzo Stoakes
2026-02-05 5:53 ` Usama Arif
2026-02-02 0:50 ` [RFC 03/12] mm: thp: add PUD THP allocation and fault handling Usama Arif
2026-02-02 0:50 ` [RFC 04/12] mm: thp: implement PUD THP split to PTE level Usama Arif
2026-02-02 0:50 ` [RFC 05/12] mm: thp: add reclaim and migration support for PUD THP Usama Arif
2026-02-02 0:50 ` [RFC 06/12] selftests/mm: add PUD THP basic allocation test Usama Arif
2026-02-02 0:50 ` [RFC 07/12] selftests/mm: add PUD THP read/write access test Usama Arif
2026-02-02 0:50 ` [RFC 08/12] selftests/mm: add PUD THP fork COW test Usama Arif
2026-02-02 0:50 ` [RFC 09/12] selftests/mm: add PUD THP partial munmap test Usama Arif
2026-02-02 0:50 ` [RFC 10/12] selftests/mm: add PUD THP mprotect split test Usama Arif
2026-02-02 0:50 ` [RFC 11/12] selftests/mm: add PUD THP reclaim test Usama Arif
2026-02-02 0:50 ` [RFC 12/12] selftests/mm: add PUD THP migration test Usama Arif
2026-02-02 2:44 ` [RFC 00/12] mm: PUD (1GB) THP implementation Rik van Riel
2026-02-02 11:30 ` Lorenzo Stoakes
2026-02-02 15:50 ` Zi Yan
2026-02-04 10:56 ` Lorenzo Stoakes
2026-02-05 11:29 ` David Hildenbrand (arm)
2026-02-05 11:22 ` David Hildenbrand (arm)
2026-02-02 4:00 ` Matthew Wilcox
2026-02-02 9:06 ` David Hildenbrand (arm)
2026-02-03 21:11 ` Usama Arif
2026-02-02 11:20 ` Lorenzo Stoakes
2026-02-04 1:00 ` Usama Arif
2026-02-04 11:08 ` Lorenzo Stoakes
2026-02-04 11:50 ` Dev Jain
2026-02-04 12:01 ` Dev Jain
2026-02-05 6:08 ` Usama Arif
2026-02-02 16:24 ` Zi Yan
2026-02-03 23:29 ` Usama Arif
2026-02-04 0:08 ` Frank van der Linden
2026-02-05 5:46 ` Usama Arif
2026-02-05 18:07 ` Zi Yan
2026-02-07 23:22 ` Usama Arif
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox