* [PATCH RESEND v1] mm/memory_hotplug: move debug_pagealloc_map_pages() into online_pages_range()
@ 2024-12-03 10:20 David Hildenbrand
From: David Hildenbrand @ 2024-12-03 10:20 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-mm, David Hildenbrand, Andrew Morton, Oscar Salvador

In the near future, we want to have a single way to hand over PageOffline
pages to the buddy, whereby they could have:

(a) Never been exposed to the buddy before: kept PageOffline when onlining
    the memory block.
(b) Been allocated from the buddy, for example using
    alloc_contig_range(), to then be set PageOffline.

Let's start by making generic_online_page() less special compared to
ordinary page freeing (e.g., free_contig_range()), and by performing the
debug_pagealloc_map_pages() call unconditionally, even when the online
callback might decide to keep the pages offline.

All pages are already initialized with PageOffline, so nobody touches
them either way.
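
For context, debug_pagealloc_map_pages() essentially boils down to the
following (a simplified sketch of the CONFIG_DEBUG_PAGEALLOC variant of
the helper in include/linux/mm.h; without CONFIG_DEBUG_PAGEALLOC it is a
no-op stub):

    static inline void debug_pagealloc_map_pages(struct page *page,
                                                 int numpages)
    {
            /*
             * Map the pages back into the kernel direct map, so that
             * the unmap performed when they are freed later does not
             * become a double-unmap, which some architectures don't
             * tolerate.
             */
            if (debug_pagealloc_enabled_static())
                    __kernel_map_pages(page, numpages, 1);
    }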

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Oscar Salvador <osalvador@suse.de>
Signed-off-by: David Hildenbrand <david@redhat.com>
---

SMTP server issues, resending so it reaches linux-mm as well.

---
 mm/memory_hotplug.c | 10 +++++++++-
 mm/page_alloc.c     |  6 ------
 2 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index c43b4e7fb298..20af14e695c7 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -650,6 +650,7 @@ static void online_pages_range(unsigned long start_pfn, unsigned long nr_pages)
 	 * this and the first chunk to online will be pageblock_nr_pages.
 	 */
 	for (pfn = start_pfn; pfn < end_pfn;) {
+		struct page *page = pfn_to_page(pfn);
 		int order;
 
 		/*
@@ -664,7 +665,14 @@ static void online_pages_range(unsigned long start_pfn, unsigned long nr_pages)
 		else
 			order = MAX_PAGE_ORDER;
 
-		(*online_page_callback)(pfn_to_page(pfn), order);
+		/*
+		 * Exposing the page to the buddy by freeing can cause
+		 * issues with debug_pagealloc enabled: some archs don't
+		 * like double-unmappings. So treat them like any pages that
+		 * were allocated from the buddy.
+		 */
+		debug_pagealloc_map_pages(page, 1 << order);
+		(*online_page_callback)(page, order);
 		pfn += (1UL << order);
 	}
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index cc3296cf8c95..01927f03af0b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1293,12 +1293,6 @@ void __meminit __free_pages_core(struct page *page, unsigned int order,
 			set_page_count(p, 0);
 		}
 
-		/*
-		 * Freeing the page with debug_pagealloc enabled will try to
-		 * unmap it; some archs don't like double-unmappings, so
-		 * map it first.
-		 */
-		debug_pagealloc_map_pages(page, nr_pages);
 		adjust_managed_page_count(page, nr_pages);
 	} else {
 		for (loop = 0; loop < nr_pages; loop++, p++) {
-- 
2.47.1


