linux-mm.kvack.org archive mirror
From: "Kiryl Shutsemau (Meta)" <kas@kernel.org>
To: Andrew Morton <akpm@linux-foundation.org>,
	Muchun Song <muchun.song@linux.dev>,
	David Hildenbrand <david@redhat.com>,
	Matthew Wilcox <willy@infradead.org>,
	Usama Arif <usamaarif642@gmail.com>,
	Frank van der Linden <fvdl@google.com>
Cc: Oscar Salvador <osalvador@suse.de>,
	Mike Rapoport <rppt@kernel.org>, Vlastimil Babka <vbabka@suse.cz>,
	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
	Zi Yan <ziy@nvidia.com>, Baoquan He <bhe@redhat.com>,
	Michal Hocko <mhocko@suse.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Jonathan Corbet <corbet@lwn.net>,
	Huacai Chen <chenhuacai@kernel.org>,
	WANG Xuerui <kernel@xen0n.name>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Albert Ou <aou@eecs.berkeley.edu>,
	Alexandre Ghiti <alex@ghiti.fr>,
	kernel-team@meta.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	loongarch@lists.linux.dev, linux-riscv@lists.infradead.org,
	"Kiryl Shutsemau (Meta)" <kas@kernel.org>
Subject: [PATCHv7 09/18] mm/hugetlb: Defer vmemmap population for bootmem hugepages
Date: Fri, 27 Feb 2026 19:30:10 +0000	[thread overview]
Message-ID: <20260227193030.272078-9-kas@kernel.org> (raw)
In-Reply-To: <20260202155634.650837-1-kas@kernel.org>

Currently, the vmemmap for bootmem-allocated gigantic pages is populated
early in hugetlb_vmemmap_init_early(). However, the zone information is
only available after zones are initialized. If it is later discovered
that a page spans multiple zones, the HVO mapping must be undone and
replaced with a normal mapping using vmemmap_undo_hvo().

Defer the actual vmemmap population to hugetlb_vmemmap_init_late(). At
this stage, zones are already initialized, so we can check whether the
page is eligible for HVO before deciding how to populate the vmemmap.

This allows us to remove vmemmap_undo_hvo() and the complex logic
required to roll back HVO mappings.

In hugetlb_vmemmap_init_late(), if HVO population fails or if the zones
are invalid, fall back to a normal vmemmap population.

Postponing population until hugetlb_vmemmap_init_late() also makes zone
information available from within vmemmap_populate_hvo().
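
The resulting flow in hugetlb_vmemmap_init_late() is roughly the
following (simplified sketch; list management, accounting, and the
memblock free on the multi-zone path are omitted):

	if (!hugetlb_bootmem_page_zones_valid(nid, m)) {
		/* Page spans multiple zones: no HVO, map it normally */
		vmemmap_populate(start, end, nid, NULL);
	} else if (vmemmap_populate_hvo(start, end, nid,
					HUGETLB_VMEMMAP_RESERVE_SIZE) < 0) {
		/* HVO population failed: fall back to a normal mapping */
		vmemmap_populate(start, end, nid, NULL);
	} else {
		m->flags |= HUGE_BOOTMEM_ZONES_VALID;
	}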

Signed-off-by: Kiryl Shutsemau (Meta) <kas@kernel.org>
---
 include/linux/mm.h   |  2 --
 mm/hugetlb_vmemmap.c | 37 +++++++++++++++----------------
 mm/sparse-vmemmap.c  | 53 --------------------------------------------
 3 files changed, 18 insertions(+), 74 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7f4dbbb9d783..0e2d45008ff4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4484,8 +4484,6 @@ int vmemmap_populate(unsigned long start, unsigned long end, int node,
 		struct vmem_altmap *altmap);
 int vmemmap_populate_hvo(unsigned long start, unsigned long end, int node,
 			 unsigned long headsize);
-int vmemmap_undo_hvo(unsigned long start, unsigned long end, int node,
-		     unsigned long headsize);
 void vmemmap_wrprotect_hvo(unsigned long start, unsigned long end, int node,
 			  unsigned long headsize);
 void vmemmap_populate_print_last(void);
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index a9280259e12a..935ec5829be9 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -790,7 +790,6 @@ void __init hugetlb_vmemmap_init_early(int nid)
 {
 	unsigned long psize, paddr, section_size;
 	unsigned long ns, i, pnum, pfn, nr_pages;
-	unsigned long start, end;
 	struct huge_bootmem_page *m = NULL;
 	void *map;
 
@@ -808,14 +807,6 @@ void __init hugetlb_vmemmap_init_early(int nid)
 		paddr = virt_to_phys(m);
 		pfn = PHYS_PFN(paddr);
 		map = pfn_to_page(pfn);
-		start = (unsigned long)map;
-		end = start + nr_pages * sizeof(struct page);
-
-		if (vmemmap_populate_hvo(start, end, nid,
-					HUGETLB_VMEMMAP_RESERVE_SIZE) < 0)
-			continue;
-
-		memmap_boot_pages_add(HUGETLB_VMEMMAP_RESERVE_SIZE / PAGE_SIZE);
 
 		pnum = pfn_to_section_nr(pfn);
 		ns = psize / section_size;
@@ -850,28 +841,36 @@ void __init hugetlb_vmemmap_init_late(int nid)
 		h = m->hstate;
 		pfn = PHYS_PFN(phys);
 		nr_pages = pages_per_huge_page(h);
+		map = pfn_to_page(pfn);
+		start = (unsigned long)map;
+		end = start + nr_pages * sizeof(struct page);
 
 		if (!hugetlb_bootmem_page_zones_valid(nid, m)) {
 			/*
 			 * Oops, the hugetlb page spans multiple zones.
-			 * Remove it from the list, and undo HVO.
+			 * Remove it from the list, and populate it normally.
 			 */
 			list_del(&m->list);
 
-			map = pfn_to_page(pfn);
-
-			start = (unsigned long)map;
-			end = start + nr_pages * sizeof(struct page);
-
-			vmemmap_undo_hvo(start, end, nid,
-					 HUGETLB_VMEMMAP_RESERVE_SIZE);
-			nr_mmap = end - start - HUGETLB_VMEMMAP_RESERVE_SIZE;
+			vmemmap_populate(start, end, nid, NULL);
+			nr_mmap = end - start;
 			memmap_boot_pages_add(DIV_ROUND_UP(nr_mmap, PAGE_SIZE));
 
 			memblock_phys_free(phys, huge_page_size(h));
 			continue;
-		} else
+		}
+
+		if (vmemmap_populate_hvo(start, end, nid,
+					 HUGETLB_VMEMMAP_RESERVE_SIZE) < 0) {
+			/* Fallback if HVO population fails */
+			vmemmap_populate(start, end, nid, NULL);
+			nr_mmap = end - start;
+		} else {
 			m->flags |= HUGE_BOOTMEM_ZONES_VALID;
+			nr_mmap = HUGETLB_VMEMMAP_RESERVE_SIZE;
+		}
+
+		memmap_boot_pages_add(DIV_ROUND_UP(nr_mmap, PAGE_SIZE));
 	}
 }
 #endif
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 37522d6cb398..032a81450838 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -302,59 +302,6 @@ int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
 	return vmemmap_populate_range(start, end, node, altmap, -1, 0);
 }
 
-/*
- * Undo populate_hvo, and replace it with a normal base page mapping.
- * Used in memory init in case a HVO mapping needs to be undone.
- *
- * This can happen when it is discovered that a memblock allocated
- * hugetlb page spans multiple zones, which can only be verified
- * after zones have been initialized.
- *
- * We know that:
- * 1) The first @headsize / PAGE_SIZE vmemmap pages were individually
- *    allocated through memblock, and mapped.
- *
- * 2) The rest of the vmemmap pages are mirrors of the last head page.
- */
-int __meminit vmemmap_undo_hvo(unsigned long addr, unsigned long end,
-				      int node, unsigned long headsize)
-{
-	unsigned long maddr, pfn;
-	pte_t *pte;
-	int headpages;
-
-	/*
-	 * Should only be called early in boot, so nothing will
-	 * be accessing these page structures.
-	 */
-	WARN_ON(!early_boot_irqs_disabled);
-
-	headpages = headsize >> PAGE_SHIFT;
-
-	/*
-	 * Clear mirrored mappings for tail page structs.
-	 */
-	for (maddr = addr + headsize; maddr < end; maddr += PAGE_SIZE) {
-		pte = virt_to_kpte(maddr);
-		pte_clear(&init_mm, maddr, pte);
-	}
-
-	/*
-	 * Clear and free mappings for head page and first tail page
-	 * structs.
-	 */
-	for (maddr = addr; headpages-- > 0; maddr += PAGE_SIZE) {
-		pte = virt_to_kpte(maddr);
-		pfn = pte_pfn(ptep_get(pte));
-		pte_clear(&init_mm, maddr, pte);
-		memblock_phys_free(PFN_PHYS(pfn), PAGE_SIZE);
-	}
-
-	flush_tlb_kernel_range(addr, end);
-
-	return vmemmap_populate(addr, end, node, NULL);
-}
-
 /*
  * Write protect the mirrored tail page structs for HVO. This will be
  * called from the hugetlb code when gathering and initializing the
-- 
2.51.2


