linux-mm.kvack.org archive mirror
* [PATCH 1/3] mm/memblock: Modify the default failure behavior of memblock_alloc to panic
@ 2025-01-03 10:51 Guo Weikang
  2025-01-03 10:51 ` [PATCH 2/3] mm/memblock: Modify the default failure behavior of memblock_alloc_raw " Guo Weikang
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Guo Weikang @ 2025-01-03 10:51 UTC (permalink / raw)
  To: Mike Rapoport, Andrew Morton; +Cc: linux-mm, linux-kernel, Guo Weikang

An analysis of memblock_alloc usage found that roughly 4/5 (120/155) of
its callers expect a panic on allocation failure. To reflect this common
usage pattern, memblock_alloc now panics by default when the allocation
fails.

Additionally, a new interface, memblock_alloc_no_panic, has been introduced
to handle cases where panic behavior is not desired.

Signed-off-by: Guo Weikang <guoweikang.kernel@gmail.com>
---
 arch/alpha/kernel/core_cia.c            |  2 +-
 arch/alpha/kernel/core_marvel.c         |  6 ++---
 arch/alpha/kernel/pci.c                 |  4 ++--
 arch/alpha/kernel/pci_iommu.c           |  4 ++--
 arch/alpha/kernel/setup.c               |  2 +-
 arch/arm/kernel/setup.c                 |  4 ++--
 arch/arm/mach-omap2/omap_hwmod.c        | 16 +++++++++----
 arch/arm/mm/mmu.c                       |  6 ++---
 arch/arm/mm/nommu.c                     |  2 +-
 arch/arm64/kernel/setup.c               |  2 +-
 arch/loongarch/include/asm/dmi.h        |  2 +-
 arch/loongarch/kernel/setup.c           |  2 +-
 arch/loongarch/mm/init.c                |  6 ++---
 arch/m68k/mm/init.c                     |  2 +-
 arch/m68k/mm/mcfmmu.c                   |  4 ++--
 arch/m68k/mm/motorola.c                 |  2 +-
 arch/m68k/mm/sun3mmu.c                  |  4 ++--
 arch/m68k/sun3/sun3dvma.c               |  2 +-
 arch/mips/kernel/setup.c                |  2 +-
 arch/openrisc/mm/ioremap.c              |  2 +-
 arch/parisc/mm/init.c                   | 10 ++++----
 arch/powerpc/kernel/dt_cpu_ftrs.c       |  2 +-
 arch/powerpc/kernel/pci_32.c            |  2 +-
 arch/powerpc/kernel/setup-common.c      |  2 +-
 arch/powerpc/kernel/setup_32.c          |  2 +-
 arch/powerpc/mm/book3s32/mmu.c          |  2 +-
 arch/powerpc/mm/book3s64/pgtable.c      |  2 +-
 arch/powerpc/mm/kasan/8xx.c             |  5 ++--
 arch/powerpc/mm/kasan/init_32.c         |  7 +++---
 arch/powerpc/mm/kasan/init_book3e_64.c  |  8 +++----
 arch/powerpc/mm/kasan/init_book3s_64.c  |  2 +-
 arch/powerpc/mm/nohash/mmu_context.c    |  6 ++---
 arch/powerpc/mm/pgtable_32.c            |  2 +-
 arch/powerpc/platforms/powermac/nvram.c |  2 +-
 arch/powerpc/platforms/powernv/opal.c   |  2 +-
 arch/powerpc/platforms/ps3/setup.c      |  2 +-
 arch/powerpc/sysdev/msi_bitmap.c        |  2 +-
 arch/riscv/kernel/setup.c               |  2 +-
 arch/riscv/mm/kasan_init.c              | 14 +++++------
 arch/s390/kernel/crash_dump.c           |  2 +-
 arch/s390/kernel/numa.c                 |  2 +-
 arch/s390/kernel/setup.c                |  8 +++----
 arch/s390/kernel/smp.c                  |  4 ++--
 arch/s390/kernel/topology.c             |  4 ++--
 arch/s390/mm/vmem.c                     |  4 ++--
 arch/sh/mm/init.c                       |  4 ++--
 arch/sparc/kernel/prom_32.c             |  2 +-
 arch/sparc/kernel/prom_64.c             |  2 +-
 arch/sparc/mm/init_32.c                 |  2 +-
 arch/sparc/mm/srmmu.c                   |  6 ++---
 arch/um/drivers/net_kern.c              |  2 +-
 arch/um/drivers/vector_kern.c           |  2 +-
 arch/um/kernel/load_file.c              |  2 +-
 arch/x86/coco/sev/core.c                |  2 +-
 arch/x86/kernel/acpi/boot.c             |  2 +-
 arch/x86/kernel/acpi/madt_wakeup.c      |  2 +-
 arch/x86/kernel/apic/io_apic.c          |  4 ++--
 arch/x86/kernel/e820.c                  |  2 +-
 arch/x86/platform/olpc/olpc_dt.c        |  2 +-
 arch/x86/xen/p2m.c                      |  2 +-
 arch/xtensa/mm/kasan_init.c             |  2 +-
 arch/xtensa/platforms/iss/network.c     |  2 +-
 drivers/clk/ti/clk.c                    |  2 +-
 drivers/firmware/memmap.c               |  2 +-
 drivers/macintosh/smu.c                 |  2 +-
 drivers/of/fdt.c                        |  2 +-
 drivers/of/of_reserved_mem.c            |  2 +-
 drivers/of/unittest.c                   |  2 +-
 drivers/usb/early/xhci-dbc.c            |  2 +-
 include/linux/memblock.h                | 10 ++++----
 init/main.c                             | 12 +++++-----
 kernel/dma/swiotlb.c                    |  6 ++---
 kernel/power/snapshot.c                 |  2 +-
 kernel/printk/printk.c                  |  6 ++---
 lib/cpumask.c                           |  2 +-
 mm/kasan/tags.c                         |  2 +-
 mm/kfence/core.c                        |  4 ++--
 mm/kmsan/shadow.c                       |  4 ++--
 mm/memblock.c                           | 18 +++++++-------
 mm/numa.c                               |  2 +-
 mm/numa_emulation.c                     |  2 +-
 mm/numa_memblks.c                       |  2 +-
 mm/percpu.c                             | 32 ++++++++++++-------------
 mm/sparse.c                             |  2 +-
 84 files changed, 173 insertions(+), 165 deletions(-)

diff --git a/arch/alpha/kernel/core_cia.c b/arch/alpha/kernel/core_cia.c
index 6e577228e175..05f80b4bbf12 100644
--- a/arch/alpha/kernel/core_cia.c
+++ b/arch/alpha/kernel/core_cia.c
@@ -331,7 +331,7 @@ cia_prepare_tbia_workaround(int window)
 	long i;
 
 	/* Use minimal 1K map. */
-	ppte = memblock_alloc_or_panic(CIA_BROKEN_TBIA_SIZE, 32768);
+	ppte = memblock_alloc(CIA_BROKEN_TBIA_SIZE, 32768);
 	pte = (virt_to_phys(ppte) >> (PAGE_SHIFT - 1)) | 1;
 
 	for (i = 0; i < CIA_BROKEN_TBIA_SIZE / sizeof(unsigned long); ++i)
diff --git a/arch/alpha/kernel/core_marvel.c b/arch/alpha/kernel/core_marvel.c
index b1bfbd11980d..716ed3197f72 100644
--- a/arch/alpha/kernel/core_marvel.c
+++ b/arch/alpha/kernel/core_marvel.c
@@ -79,9 +79,9 @@ mk_resource_name(int pe, int port, char *str)
 {
 	char tmp[80];
 	char *name;
-	
+
 	sprintf(tmp, "PCI %s PE %d PORT %d", str, pe, port);
-	name = memblock_alloc_or_panic(strlen(tmp) + 1, SMP_CACHE_BYTES);
+	name = memblock_alloc(strlen(tmp) + 1, SMP_CACHE_BYTES);
 	strcpy(name, tmp);
 
 	return name;
@@ -116,7 +116,7 @@ alloc_io7(unsigned int pe)
 		return NULL;
 	}
 
-	io7 = memblock_alloc_or_panic(sizeof(*io7), SMP_CACHE_BYTES);
+	io7 = memblock_alloc(sizeof(*io7), SMP_CACHE_BYTES);
 	io7->pe = pe;
 	raw_spin_lock_init(&io7->irq_lock);
 
diff --git a/arch/alpha/kernel/pci.c b/arch/alpha/kernel/pci.c
index 8e9b4ac86b7e..d359ebaf6de7 100644
--- a/arch/alpha/kernel/pci.c
+++ b/arch/alpha/kernel/pci.c
@@ -391,7 +391,7 @@ alloc_pci_controller(void)
 {
 	struct pci_controller *hose;
 
-	hose = memblock_alloc_or_panic(sizeof(*hose), SMP_CACHE_BYTES);
+	hose = memblock_alloc(sizeof(*hose), SMP_CACHE_BYTES);
 
 	*hose_tail = hose;
 	hose_tail = &hose->next;
@@ -402,7 +402,7 @@ alloc_pci_controller(void)
 struct resource * __init
 alloc_resource(void)
 {
-	return memblock_alloc_or_panic(sizeof(struct resource), SMP_CACHE_BYTES);
+	return memblock_alloc(sizeof(struct resource), SMP_CACHE_BYTES);
 }
 
 
diff --git a/arch/alpha/kernel/pci_iommu.c b/arch/alpha/kernel/pci_iommu.c
index 681f56089d9c..7a465c207684 100644
--- a/arch/alpha/kernel/pci_iommu.c
+++ b/arch/alpha/kernel/pci_iommu.c
@@ -71,8 +71,8 @@ iommu_arena_new_node(int nid, struct pci_controller *hose, dma_addr_t base,
 	if (align < mem_size)
 		align = mem_size;
 
-	arena = memblock_alloc_or_panic(sizeof(*arena), SMP_CACHE_BYTES);
-	arena->ptes = memblock_alloc_or_panic(mem_size, align);
+	arena = memblock_alloc(sizeof(*arena), SMP_CACHE_BYTES);
+	arena->ptes = memblock_alloc(mem_size, align);
 
 	spin_lock_init(&arena->lock);
 	arena->hose = hose;
diff --git a/arch/alpha/kernel/setup.c b/arch/alpha/kernel/setup.c
index bebdffafaee8..6de866a62bd9 100644
--- a/arch/alpha/kernel/setup.c
+++ b/arch/alpha/kernel/setup.c
@@ -269,7 +269,7 @@ move_initrd(unsigned long mem_limit)
 	unsigned long size;
 
 	size = initrd_end - initrd_start;
-	start = memblock_alloc(PAGE_ALIGN(size), PAGE_SIZE);
+	start = memblock_alloc_no_panic(PAGE_ALIGN(size), PAGE_SIZE);
 	if (!start || __pa(start) + size > mem_limit) {
 		initrd_start = initrd_end = 0;
 		return NULL;
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index a41c93988d2c..b36498c0bedd 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -880,7 +880,7 @@ static void __init request_standard_resources(const struct machine_desc *mdesc)
 		 */
 		boot_alias_start = phys_to_idmap(start);
 		if (arm_has_idmap_alias() && boot_alias_start != IDMAP_INVALID_ADDR) {
-			res = memblock_alloc_or_panic(sizeof(*res), SMP_CACHE_BYTES);
+			res = memblock_alloc(sizeof(*res), SMP_CACHE_BYTES);
 			res->name = "System RAM (boot alias)";
 			res->start = boot_alias_start;
 			res->end = phys_to_idmap(res_end);
@@ -888,7 +888,7 @@ static void __init request_standard_resources(const struct machine_desc *mdesc)
 			request_resource(&iomem_resource, res);
 		}
 
-		res = memblock_alloc_or_panic(sizeof(*res), SMP_CACHE_BYTES);
+		res = memblock_alloc(sizeof(*res), SMP_CACHE_BYTES);
 		res->name  = "System RAM";
 		res->start = start;
 		res->end = res_end;
diff --git a/arch/arm/mach-omap2/omap_hwmod.c b/arch/arm/mach-omap2/omap_hwmod.c
index 111677878d9c..30e4f1279cdb 100644
--- a/arch/arm/mach-omap2/omap_hwmod.c
+++ b/arch/arm/mach-omap2/omap_hwmod.c
@@ -709,7 +709,7 @@ static int __init _setup_clkctrl_provider(struct device_node *np)
 	struct clkctrl_provider *provider;
 	int i;
 
-	provider = memblock_alloc(sizeof(*provider), SMP_CACHE_BYTES);
+	provider = memblock_alloc_no_panic(sizeof(*provider), SMP_CACHE_BYTES);
 	if (!provider)
 		return -ENOMEM;
 
@@ -718,16 +718,16 @@ static int __init _setup_clkctrl_provider(struct device_node *np)
 	provider->num_addrs = of_address_count(np);
 
 	provider->addr =
-		memblock_alloc(sizeof(void *) * provider->num_addrs,
+		memblock_alloc_no_panic(sizeof(void *) * provider->num_addrs,
 			       SMP_CACHE_BYTES);
 	if (!provider->addr)
-		return -ENOMEM;
+		goto err_free_provider;
 
 	provider->size =
-		memblock_alloc(sizeof(u32) * provider->num_addrs,
+		memblock_alloc_no_panic(sizeof(u32) * provider->num_addrs,
 			       SMP_CACHE_BYTES);
 	if (!provider->size)
-		return -ENOMEM;
+		goto err_free_addr;
 
 	for (i = 0; i < provider->num_addrs; i++) {
 		struct resource res;
@@ -740,6 +740,12 @@ static int __init _setup_clkctrl_provider(struct device_node *np)
 	list_add(&provider->link, &clkctrl_providers);
 
 	return 0;
+
+err_free_addr:
+	memblock_free(provider->addr, sizeof(void *) * provider->num_addrs);
+err_free_provider:
+	memblock_free(provider, sizeof(*provider));
+	return -ENOMEM;
 }
 
 static int __init _init_clkctrl_providers(void)
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index f02f872ea8a9..3d788304839e 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -726,7 +726,7 @@ EXPORT_SYMBOL(phys_mem_access_prot);
 
 static void __init *early_alloc(unsigned long sz)
 {
-	return memblock_alloc_or_panic(sz, sz);
+	return memblock_alloc(sz, sz);
 
 }
 
@@ -1022,7 +1022,7 @@ void __init iotable_init(struct map_desc *io_desc, int nr)
 	if (!nr)
 		return;
 
-	svm = memblock_alloc_or_panic(sizeof(*svm) * nr, __alignof__(*svm));
+	svm = memblock_alloc(sizeof(*svm) * nr, __alignof__(*svm));
 
 	for (md = io_desc; nr; md++, nr--) {
 		create_mapping(md);
@@ -1044,7 +1044,7 @@ void __init vm_reserve_area_early(unsigned long addr, unsigned long size,
 	struct vm_struct *vm;
 	struct static_vm *svm;
 
-	svm = memblock_alloc_or_panic(sizeof(*svm), __alignof__(*svm));
+	svm = memblock_alloc(sizeof(*svm), __alignof__(*svm));
 
 	vm = &svm->vm;
 	vm->addr = (void *)addr;
diff --git a/arch/arm/mm/nommu.c b/arch/arm/mm/nommu.c
index 1a8f6914ee59..079b4d4acd29 100644
--- a/arch/arm/mm/nommu.c
+++ b/arch/arm/mm/nommu.c
@@ -162,7 +162,7 @@ void __init paging_init(const struct machine_desc *mdesc)
 	mpu_setup();
 
 	/* allocate the zero page. */
-	zero_page = (void *)memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+	zero_page = (void *)memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 
 	bootmem_init();
 
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 85104587f849..3012cf9b0f9b 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -223,7 +223,7 @@ static void __init request_standard_resources(void)
 
 	num_standard_resources = memblock.memory.cnt;
 	res_size = num_standard_resources * sizeof(*standard_resources);
-	standard_resources = memblock_alloc_or_panic(res_size, SMP_CACHE_BYTES);
+	standard_resources = memblock_alloc(res_size, SMP_CACHE_BYTES);
 
 	for_each_mem_region(region) {
 		res = &standard_resources[i++];
diff --git a/arch/loongarch/include/asm/dmi.h b/arch/loongarch/include/asm/dmi.h
index 605493417753..6305bc3ba15b 100644
--- a/arch/loongarch/include/asm/dmi.h
+++ b/arch/loongarch/include/asm/dmi.h
@@ -10,7 +10,7 @@
 
 #define dmi_early_remap(x, l)	dmi_remap(x, l)
 #define dmi_early_unmap(x, l)	dmi_unmap(x)
-#define dmi_alloc(l)		memblock_alloc(l, PAGE_SIZE)
+#define dmi_alloc(l)		memblock_alloc_no_panic(l, PAGE_SIZE)
 
 static inline void *dmi_remap(u64 phys_addr, unsigned long size)
 {
diff --git a/arch/loongarch/kernel/setup.c b/arch/loongarch/kernel/setup.c
index edcfdfcad7d2..56934fe58170 100644
--- a/arch/loongarch/kernel/setup.c
+++ b/arch/loongarch/kernel/setup.c
@@ -431,7 +431,7 @@ static void __init resource_init(void)
 
 	num_standard_resources = memblock.memory.cnt;
 	res_size = num_standard_resources * sizeof(*standard_resources);
-	standard_resources = memblock_alloc_or_panic(res_size, SMP_CACHE_BYTES);
+	standard_resources = memblock_alloc(res_size, SMP_CACHE_BYTES);
 
 	for_each_mem_region(region) {
 		res = &standard_resources[i++];
diff --git a/arch/loongarch/mm/init.c b/arch/loongarch/mm/init.c
index ca5aa5f46a9f..99b4d5cf3e9c 100644
--- a/arch/loongarch/mm/init.c
+++ b/arch/loongarch/mm/init.c
@@ -174,7 +174,7 @@ pte_t * __init populate_kernel_pte(unsigned long addr)
 	pmd_t *pmd;
 
 	if (p4d_none(p4dp_get(p4d))) {
-		pud = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+		pud = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 		p4d_populate(&init_mm, p4d, pud);
 #ifndef __PAGETABLE_PUD_FOLDED
 		pud_init(pud);
@@ -183,7 +183,7 @@ pte_t * __init populate_kernel_pte(unsigned long addr)
 
 	pud = pud_offset(p4d, addr);
 	if (pud_none(pudp_get(pud))) {
-		pmd = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+		pmd = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 		pud_populate(&init_mm, pud, pmd);
 #ifndef __PAGETABLE_PMD_FOLDED
 		pmd_init(pmd);
@@ -194,7 +194,7 @@ pte_t * __init populate_kernel_pte(unsigned long addr)
 	if (!pmd_present(pmdp_get(pmd))) {
 		pte_t *pte;
 
-		pte = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+		pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 		pmd_populate_kernel(&init_mm, pmd, pte);
 		kernel_pte_init(pte);
 	}
diff --git a/arch/m68k/mm/init.c b/arch/m68k/mm/init.c
index 8b11d0d545aa..1ccc238f33d9 100644
--- a/arch/m68k/mm/init.c
+++ b/arch/m68k/mm/init.c
@@ -68,7 +68,7 @@ void __init paging_init(void)
 
 	high_memory = (void *) end_mem;
 
-	empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+	empty_zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 	max_zone_pfn[ZONE_DMA] = end_mem >> PAGE_SHIFT;
 	free_area_init(max_zone_pfn);
 }
diff --git a/arch/m68k/mm/mcfmmu.c b/arch/m68k/mm/mcfmmu.c
index 19a75029036c..26bac0984964 100644
--- a/arch/m68k/mm/mcfmmu.c
+++ b/arch/m68k/mm/mcfmmu.c
@@ -42,14 +42,14 @@ void __init paging_init(void)
 	unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
 	int i;
 
-	empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+	empty_zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 
 	pg_dir = swapper_pg_dir;
 	memset(swapper_pg_dir, 0, sizeof(swapper_pg_dir));
 
 	size = num_pages * sizeof(pte_t);
 	size = (size + PAGE_SIZE) & ~(PAGE_SIZE-1);
-	next_pgtable = (unsigned long) memblock_alloc_or_panic(size, PAGE_SIZE);
+	next_pgtable = (unsigned long) memblock_alloc(size, PAGE_SIZE);
 
 	pg_dir += PAGE_OFFSET >> PGDIR_SHIFT;
 
diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index eab50dda14ee..ce016ae8c972 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -491,7 +491,7 @@ void __init paging_init(void)
 	 * initialize the bad page table and bad page to point
 	 * to a couple of allocated pages
 	 */
-	empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+	empty_zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 
 	/*
 	 * Set up SFC/DFC registers
diff --git a/arch/m68k/mm/sun3mmu.c b/arch/m68k/mm/sun3mmu.c
index 1ecf6bdd08bf..748645ac8cda 100644
--- a/arch/m68k/mm/sun3mmu.c
+++ b/arch/m68k/mm/sun3mmu.c
@@ -44,7 +44,7 @@ void __init paging_init(void)
 	unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0, };
 	unsigned long size;
 
-	empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+	empty_zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 
 	address = PAGE_OFFSET;
 	pg_dir = swapper_pg_dir;
@@ -54,7 +54,7 @@ void __init paging_init(void)
 	size = num_pages * sizeof(pte_t);
 	size = (size + PAGE_SIZE) & ~(PAGE_SIZE-1);
 
-	next_pgtable = (unsigned long)memblock_alloc_or_panic(size, PAGE_SIZE);
+	next_pgtable = (unsigned long)memblock_alloc(size, PAGE_SIZE);
 	bootmem_end = (next_pgtable + size + PAGE_SIZE) & PAGE_MASK;
 
 	/* Map whole memory from PAGE_OFFSET (0x0E000000) */
diff --git a/arch/m68k/sun3/sun3dvma.c b/arch/m68k/sun3/sun3dvma.c
index 225fc735e466..681fcf83caa2 100644
--- a/arch/m68k/sun3/sun3dvma.c
+++ b/arch/m68k/sun3/sun3dvma.c
@@ -252,7 +252,7 @@ void __init dvma_init(void)
 
 	list_add(&(hole->list), &hole_list);
 
-	iommu_use = memblock_alloc_or_panic(IOMMU_TOTAL_ENTRIES * sizeof(unsigned long),
+	iommu_use = memblock_alloc(IOMMU_TOTAL_ENTRIES * sizeof(unsigned long),
 				   SMP_CACHE_BYTES);
 	dvma_unmap_iommu(DVMA_START, DVMA_SIZE);
 
diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
index fbfe0771317e..fcccff55dc77 100644
--- a/arch/mips/kernel/setup.c
+++ b/arch/mips/kernel/setup.c
@@ -704,7 +704,7 @@ static void __init resource_init(void)
 	for_each_mem_range(i, &start, &end) {
 		struct resource *res;
 
-		res = memblock_alloc_or_panic(sizeof(struct resource), SMP_CACHE_BYTES);
+		res = memblock_alloc(sizeof(struct resource), SMP_CACHE_BYTES);
 
 		res->start = start;
 		/*
diff --git a/arch/openrisc/mm/ioremap.c b/arch/openrisc/mm/ioremap.c
index 8e63e86251ca..e0f58f40c0ab 100644
--- a/arch/openrisc/mm/ioremap.c
+++ b/arch/openrisc/mm/ioremap.c
@@ -38,7 +38,7 @@ pte_t __ref *pte_alloc_one_kernel(struct mm_struct *mm)
 	if (likely(mem_init_done)) {
 		pte = (pte_t *)get_zeroed_page(GFP_KERNEL);
 	} else {
-		pte = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+		pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 	}
 
 	return pte;
diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
index 61c0a2477072..d587a7cf7fdb 100644
--- a/arch/parisc/mm/init.c
+++ b/arch/parisc/mm/init.c
@@ -377,7 +377,7 @@ static void __ref map_pages(unsigned long start_vaddr,
 
 #if CONFIG_PGTABLE_LEVELS == 3
 		if (pud_none(*pud)) {
-			pmd = memblock_alloc_or_panic(PAGE_SIZE << PMD_TABLE_ORDER,
+			pmd = memblock_alloc(PAGE_SIZE << PMD_TABLE_ORDER,
 					     PAGE_SIZE << PMD_TABLE_ORDER);
 			pud_populate(NULL, pud, pmd);
 		}
@@ -386,7 +386,7 @@ static void __ref map_pages(unsigned long start_vaddr,
 		pmd = pmd_offset(pud, vaddr);
 		for (tmp1 = start_pmd; tmp1 < PTRS_PER_PMD; tmp1++, pmd++) {
 			if (pmd_none(*pmd)) {
-				pg_table = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+				pg_table = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 				pmd_populate_kernel(NULL, pmd, pg_table);
 			}
 
@@ -644,7 +644,7 @@ static void __init pagetable_init(void)
 	}
 #endif
 
-	empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+	empty_zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 
 }
 
@@ -681,7 +681,7 @@ static void __init fixmap_init(void)
 
 #if CONFIG_PGTABLE_LEVELS == 3
 	if (pud_none(*pud)) {
-		pmd = memblock_alloc_or_panic(PAGE_SIZE << PMD_TABLE_ORDER,
+		pmd = memblock_alloc(PAGE_SIZE << PMD_TABLE_ORDER,
 				     PAGE_SIZE << PMD_TABLE_ORDER);
 		pud_populate(NULL, pud, pmd);
 	}
@@ -689,7 +689,7 @@ static void __init fixmap_init(void)
 
 	pmd = pmd_offset(pud, addr);
 	do {
-		pte_t *pte = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+		pte_t *pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 
 		pmd_populate_kernel(&init_mm, pmd, pte);
 
diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
index 3af6c06af02f..f00a3b607e06 100644
--- a/arch/powerpc/kernel/dt_cpu_ftrs.c
+++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
@@ -1088,7 +1088,7 @@ static int __init dt_cpu_ftrs_scan_callback(unsigned long node, const char
 	of_scan_flat_dt_subnodes(node, count_cpufeatures_subnodes,
 						&nr_dt_cpu_features);
 	dt_cpu_features =
-		memblock_alloc_or_panic(
+		memblock_alloc(
 			sizeof(struct dt_cpu_feature) * nr_dt_cpu_features,
 			PAGE_SIZE);
 
diff --git a/arch/powerpc/kernel/pci_32.c b/arch/powerpc/kernel/pci_32.c
index f8a3bd8cfae4..b56c853fc8be 100644
--- a/arch/powerpc/kernel/pci_32.c
+++ b/arch/powerpc/kernel/pci_32.c
@@ -213,7 +213,7 @@ pci_create_OF_bus_map(void)
 	struct property* of_prop;
 	struct device_node *dn;
 
-	of_prop = memblock_alloc_or_panic(sizeof(struct property) + 256,
+	of_prop = memblock_alloc(sizeof(struct property) + 256,
 				 SMP_CACHE_BYTES);
 	dn = of_find_node_by_path("/");
 	if (dn) {
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index f3ea1329c566..9c8bf12fdf3a 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -458,7 +458,7 @@ void __init smp_setup_cpu_maps(void)
 
 	DBG("smp_setup_cpu_maps()\n");
 
-	cpu_to_phys_id = memblock_alloc_or_panic(nr_cpu_ids * sizeof(u32),
+	cpu_to_phys_id = memblock_alloc(nr_cpu_ids * sizeof(u32),
 					__alignof__(u32));
 
 	for_each_node_by_type(dn, "cpu") {
diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index 5a1bf501fbe1..ec440aa52fde 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -140,7 +140,7 @@ arch_initcall(ppc_init);
 
 static void *__init alloc_stack(void)
 {
-	return memblock_alloc_or_panic(THREAD_SIZE, THREAD_ALIGN);
+	return memblock_alloc(THREAD_SIZE, THREAD_ALIGN);
 }
 
 void __init irqstack_early_init(void)
diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
index be9c4106e22f..f18d2a1e0df6 100644
--- a/arch/powerpc/mm/book3s32/mmu.c
+++ b/arch/powerpc/mm/book3s32/mmu.c
@@ -377,7 +377,7 @@ void __init MMU_init_hw(void)
 	 * Find some memory for the hash table.
 	 */
 	if ( ppc_md.progress ) ppc_md.progress("hash:find piece", 0x322);
-	Hash = memblock_alloc_or_panic(Hash_size, Hash_size);
+	Hash = memblock_alloc(Hash_size, Hash_size);
 	_SDR1 = __pa(Hash) | SDR1_LOW_BITS;
 
 	pr_info("Total memory = %lldMB; using %ldkB for hash table\n",
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index ce64abea9e3e..21bf84a134c3 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -330,7 +330,7 @@ void __init mmu_partition_table_init(void)
 	unsigned long ptcr;
 
 	/* Initialize the Partition Table with no entries */
-	partition_tb = memblock_alloc_or_panic(patb_size, patb_size);
+	partition_tb = memblock_alloc(patb_size, patb_size);
 	ptcr = __pa(partition_tb) | (PATB_SIZE_SHIFT - 12);
 	set_ptcr_when_no_uv(ptcr);
 	powernv_set_nmmu_ptcr(ptcr);
diff --git a/arch/powerpc/mm/kasan/8xx.c b/arch/powerpc/mm/kasan/8xx.c
index 989d6cdf4141..c43b1b3bcaac 100644
--- a/arch/powerpc/mm/kasan/8xx.c
+++ b/arch/powerpc/mm/kasan/8xx.c
@@ -22,10 +22,9 @@ kasan_init_shadow_8M(unsigned long k_start, unsigned long k_end, void *block)
 		if ((void *)pmd_page_vaddr(*pmd) != kasan_early_shadow_pte)
 			continue;
 
-		ptep = memblock_alloc(PTE_FRAG_SIZE, PTE_FRAG_SIZE);
+		ptep = memblock_alloc_no_panic(PTE_FRAG_SIZE, PTE_FRAG_SIZE);
 		if (!ptep)
 			return -ENOMEM;
-
 		for (i = 0; i < PTRS_PER_PTE; i++) {
 			pte_t pte = pte_mkhuge(pfn_pte(PHYS_PFN(__pa(block + i * PAGE_SIZE)), PAGE_KERNEL));
 
@@ -45,7 +44,7 @@ int __init kasan_init_region(void *start, size_t size)
 	int ret;
 	void *block;
 
-	block = memblock_alloc(k_end - k_start, SZ_8M);
+	block = memblock_alloc_no_panic(k_end - k_start, SZ_8M);
 	if (!block)
 		return -ENOMEM;
 
diff --git a/arch/powerpc/mm/kasan/init_32.c b/arch/powerpc/mm/kasan/init_32.c
index 03666d790a53..226b9bfbb784 100644
--- a/arch/powerpc/mm/kasan/init_32.c
+++ b/arch/powerpc/mm/kasan/init_32.c
@@ -42,10 +42,10 @@ int __init kasan_init_shadow_page_tables(unsigned long k_start, unsigned long k_
 		if ((void *)pmd_page_vaddr(*pmd) != kasan_early_shadow_pte)
 			continue;
 
-		new = memblock_alloc(PTE_FRAG_SIZE, PTE_FRAG_SIZE);
-
+		new = memblock_alloc_no_panic(PTE_FRAG_SIZE, PTE_FRAG_SIZE);
 		if (!new)
 			return -ENOMEM;
+
 		kasan_populate_pte(new, PAGE_KERNEL);
 		pmd_populate_kernel(&init_mm, pmd, new);
 	}
@@ -65,7 +65,7 @@ int __init __weak kasan_init_region(void *start, size_t size)
 		return ret;
 
 	k_start = k_start & PAGE_MASK;
-	block = memblock_alloc(k_end - k_start, PAGE_SIZE);
+	block = memblock_alloc_no_panic(k_end - k_start, PAGE_SIZE);
 	if (!block)
 		return -ENOMEM;
 
@@ -129,7 +129,6 @@ void __init kasan_mmu_init(void)
 
 	if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE)) {
 		ret = kasan_init_shadow_page_tables(KASAN_SHADOW_START, KASAN_SHADOW_END);
-
 		if (ret)
 			panic("kasan: kasan_init_shadow_page_tables() failed");
 	}
diff --git a/arch/powerpc/mm/kasan/init_book3e_64.c b/arch/powerpc/mm/kasan/init_book3e_64.c
index 60c78aac0f63..43c03b84ff32 100644
--- a/arch/powerpc/mm/kasan/init_book3e_64.c
+++ b/arch/powerpc/mm/kasan/init_book3e_64.c
@@ -40,19 +40,19 @@ static int __init kasan_map_kernel_page(unsigned long ea, unsigned long pa, pgpr
 	pgdp = pgd_offset_k(ea);
 	p4dp = p4d_offset(pgdp, ea);
 	if (kasan_pud_table(*p4dp)) {
-		pudp = memblock_alloc_or_panic(PUD_TABLE_SIZE, PUD_TABLE_SIZE);
+		pudp = memblock_alloc(PUD_TABLE_SIZE, PUD_TABLE_SIZE);
 		memcpy(pudp, kasan_early_shadow_pud, PUD_TABLE_SIZE);
 		p4d_populate(&init_mm, p4dp, pudp);
 	}
 	pudp = pud_offset(p4dp, ea);
 	if (kasan_pmd_table(*pudp)) {
-		pmdp = memblock_alloc_or_panic(PMD_TABLE_SIZE, PMD_TABLE_SIZE);
+		pmdp = memblock_alloc(PMD_TABLE_SIZE, PMD_TABLE_SIZE);
 		memcpy(pmdp, kasan_early_shadow_pmd, PMD_TABLE_SIZE);
 		pud_populate(&init_mm, pudp, pmdp);
 	}
 	pmdp = pmd_offset(pudp, ea);
 	if (kasan_pte_table(*pmdp)) {
-		ptep = memblock_alloc_or_panic(PTE_TABLE_SIZE, PTE_TABLE_SIZE);
+		ptep = memblock_alloc(PTE_TABLE_SIZE, PTE_TABLE_SIZE);
 		memcpy(ptep, kasan_early_shadow_pte, PTE_TABLE_SIZE);
 		pmd_populate_kernel(&init_mm, pmdp, ptep);
 	}
@@ -74,7 +74,7 @@ static void __init kasan_init_phys_region(void *start, void *end)
 	k_start = ALIGN_DOWN((unsigned long)kasan_mem_to_shadow(start), PAGE_SIZE);
 	k_end = ALIGN((unsigned long)kasan_mem_to_shadow(end), PAGE_SIZE);
 
-	va = memblock_alloc_or_panic(k_end - k_start, PAGE_SIZE);
+	va = memblock_alloc(k_end - k_start, PAGE_SIZE);
 	for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE, va += PAGE_SIZE)
 		kasan_map_kernel_page(k_cur, __pa(va), PAGE_KERNEL);
 }
diff --git a/arch/powerpc/mm/kasan/init_book3s_64.c b/arch/powerpc/mm/kasan/init_book3s_64.c
index 7d959544c077..3fb5ce4f48f4 100644
--- a/arch/powerpc/mm/kasan/init_book3s_64.c
+++ b/arch/powerpc/mm/kasan/init_book3s_64.c
@@ -32,7 +32,7 @@ static void __init kasan_init_phys_region(void *start, void *end)
 	k_start = ALIGN_DOWN((unsigned long)kasan_mem_to_shadow(start), PAGE_SIZE);
 	k_end = ALIGN((unsigned long)kasan_mem_to_shadow(end), PAGE_SIZE);
 
-	va = memblock_alloc_or_panic(k_end - k_start, PAGE_SIZE);
+	va = memblock_alloc(k_end - k_start, PAGE_SIZE);
 	for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE, va += PAGE_SIZE)
 		map_kernel_page(k_cur, __pa(va), PAGE_KERNEL);
 }
diff --git a/arch/powerpc/mm/nohash/mmu_context.c b/arch/powerpc/mm/nohash/mmu_context.c
index a1a4e697251a..eb9ea3e88a10 100644
--- a/arch/powerpc/mm/nohash/mmu_context.c
+++ b/arch/powerpc/mm/nohash/mmu_context.c
@@ -385,11 +385,11 @@ void __init mmu_context_init(void)
 	/*
 	 * Allocate the maps used by context management
 	 */
-	context_map = memblock_alloc_or_panic(CTX_MAP_SIZE, SMP_CACHE_BYTES);
-	context_mm = memblock_alloc_or_panic(sizeof(void *) * (LAST_CONTEXT + 1),
+	context_map = memblock_alloc(CTX_MAP_SIZE, SMP_CACHE_BYTES);
+	context_mm = memblock_alloc(sizeof(void *) * (LAST_CONTEXT + 1),
 				    SMP_CACHE_BYTES);
 	if (IS_ENABLED(CONFIG_SMP)) {
-		stale_map[boot_cpuid] = memblock_alloc_or_panic(CTX_MAP_SIZE, SMP_CACHE_BYTES);
+		stale_map[boot_cpuid] = memblock_alloc(CTX_MAP_SIZE, SMP_CACHE_BYTES);
 		cpuhp_setup_state_nocalls(CPUHP_POWERPC_MMU_CTX_PREPARE,
 					  "powerpc/mmu/ctx:prepare",
 					  mmu_ctx_cpu_prepare, mmu_ctx_cpu_dead);
diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index 15276068f657..8a523d91512f 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -50,7 +50,7 @@ notrace void __init early_ioremap_init(void)
 
 void __init *early_alloc_pgtable(unsigned long size)
 {
-	return memblock_alloc_or_panic(size, size);
+	return memblock_alloc(size, size);
 
 }
 
diff --git a/arch/powerpc/platforms/powermac/nvram.c b/arch/powerpc/platforms/powermac/nvram.c
index a112d26185a0..e4fec71444cf 100644
--- a/arch/powerpc/platforms/powermac/nvram.c
+++ b/arch/powerpc/platforms/powermac/nvram.c
@@ -514,7 +514,7 @@ static int __init core99_nvram_setup(struct device_node *dp, unsigned long addr)
 		printk(KERN_ERR "nvram: no address\n");
 		return -EINVAL;
 	}
-	nvram_image = memblock_alloc_or_panic(NVRAM_SIZE, SMP_CACHE_BYTES);
+	nvram_image = memblock_alloc(NVRAM_SIZE, SMP_CACHE_BYTES);
 	nvram_data = ioremap(addr, NVRAM_SIZE*2);
 	nvram_naddrs = 1; /* Make sure we get the correct case */
 
diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c
index 09bd93464b4f..5763f6e6eb1c 100644
--- a/arch/powerpc/platforms/powernv/opal.c
+++ b/arch/powerpc/platforms/powernv/opal.c
@@ -180,7 +180,7 @@ int __init early_init_dt_scan_recoverable_ranges(unsigned long node,
 	/*
 	 * Allocate a buffer to hold the MC recoverable ranges.
 	 */
-	mc_recoverable_range = memblock_alloc_or_panic(size, __alignof__(u64));
+	mc_recoverable_range = memblock_alloc(size, __alignof__(u64));
 
 	for (i = 0; i < mc_recoverable_range_len; i++) {
 		mc_recoverable_range[i].start_addr =
diff --git a/arch/powerpc/platforms/ps3/setup.c b/arch/powerpc/platforms/ps3/setup.c
index 150c09b58ae8..082935871b6d 100644
--- a/arch/powerpc/platforms/ps3/setup.c
+++ b/arch/powerpc/platforms/ps3/setup.c
@@ -115,7 +115,7 @@ static void __init prealloc(struct ps3_prealloc *p)
 	if (!p->size)
 		return;
 
-	p->address = memblock_alloc_or_panic(p->size, p->align);
+	p->address = memblock_alloc(p->size, p->align);
 
 	printk(KERN_INFO "%s: %lu bytes at %p\n", p->name, p->size,
 	       p->address);
diff --git a/arch/powerpc/sysdev/msi_bitmap.c b/arch/powerpc/sysdev/msi_bitmap.c
index 456a4f64ae0a..87ec0dc8db3b 100644
--- a/arch/powerpc/sysdev/msi_bitmap.c
+++ b/arch/powerpc/sysdev/msi_bitmap.c
@@ -124,7 +124,7 @@ int __ref msi_bitmap_alloc(struct msi_bitmap *bmp, unsigned int irq_count,
 	if (bmp->bitmap_from_slab)
 		bmp->bitmap = kzalloc(size, GFP_KERNEL);
 	else {
-		bmp->bitmap = memblock_alloc_or_panic(size, SMP_CACHE_BYTES);
+		bmp->bitmap = memblock_alloc(size, SMP_CACHE_BYTES);
 		/* the bitmap won't be freed from memblock allocator */
 		kmemleak_not_leak(bmp->bitmap);
 	}
diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
index f1793630fc51..3087810c29ca 100644
--- a/arch/riscv/kernel/setup.c
+++ b/arch/riscv/kernel/setup.c
@@ -147,7 +147,7 @@ static void __init init_resources(void)
 	res_idx = num_resources - 1;
 
 	mem_res_sz = num_resources * sizeof(*mem_res);
-	mem_res = memblock_alloc_or_panic(mem_res_sz, SMP_CACHE_BYTES);
+	mem_res = memblock_alloc(mem_res_sz, SMP_CACHE_BYTES);
 
 	/*
 	 * Start by adding the reserved regions, if they overlap
diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
index 41c635d6aca4..c301c8d291d2 100644
--- a/arch/riscv/mm/kasan_init.c
+++ b/arch/riscv/mm/kasan_init.c
@@ -32,7 +32,7 @@ static void __init kasan_populate_pte(pmd_t *pmd, unsigned long vaddr, unsigned
 	pte_t *ptep, *p;
 
 	if (pmd_none(pmdp_get(pmd))) {
-		p = memblock_alloc_or_panic(PTRS_PER_PTE * sizeof(pte_t), PAGE_SIZE);
+		p = memblock_alloc(PTRS_PER_PTE * sizeof(pte_t), PAGE_SIZE);
 		set_pmd(pmd, pfn_pmd(PFN_DOWN(__pa(p)), PAGE_TABLE));
 	}
 
@@ -54,7 +54,7 @@ static void __init kasan_populate_pmd(pud_t *pud, unsigned long vaddr, unsigned
 	unsigned long next;
 
 	if (pud_none(pudp_get(pud))) {
-		p = memblock_alloc_or_panic(PTRS_PER_PMD * sizeof(pmd_t), PAGE_SIZE);
+		p = memblock_alloc(PTRS_PER_PMD * sizeof(pmd_t), PAGE_SIZE);
 		set_pud(pud, pfn_pud(PFN_DOWN(__pa(p)), PAGE_TABLE));
 	}
 
@@ -85,7 +85,7 @@ static void __init kasan_populate_pud(p4d_t *p4d,
 	unsigned long next;
 
 	if (p4d_none(p4dp_get(p4d))) {
-		p = memblock_alloc_or_panic(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE);
+		p = memblock_alloc(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE);
 		set_p4d(p4d, pfn_p4d(PFN_DOWN(__pa(p)), PAGE_TABLE));
 	}
 
@@ -116,7 +116,7 @@ static void __init kasan_populate_p4d(pgd_t *pgd,
 	unsigned long next;
 
 	if (pgd_none(pgdp_get(pgd))) {
-		p = memblock_alloc_or_panic(PTRS_PER_P4D * sizeof(p4d_t), PAGE_SIZE);
+		p = memblock_alloc(PTRS_PER_P4D * sizeof(p4d_t), PAGE_SIZE);
 		set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(p)), PAGE_TABLE));
 	}
 
@@ -385,7 +385,7 @@ static void __init kasan_shallow_populate_pud(p4d_t *p4d,
 		next = pud_addr_end(vaddr, end);
 
 		if (pud_none(pudp_get(pud_k))) {
-			p = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+			p = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 			set_pud(pud_k, pfn_pud(PFN_DOWN(__pa(p)), PAGE_TABLE));
 			continue;
 		}
@@ -405,7 +405,7 @@ static void __init kasan_shallow_populate_p4d(pgd_t *pgd,
 		next = p4d_addr_end(vaddr, end);
 
 		if (p4d_none(p4dp_get(p4d_k))) {
-			p = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+			p = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 			set_p4d(p4d_k, pfn_p4d(PFN_DOWN(__pa(p)), PAGE_TABLE));
 			continue;
 		}
@@ -424,7 +424,7 @@ static void __init kasan_shallow_populate_pgd(unsigned long vaddr, unsigned long
 		next = pgd_addr_end(vaddr, end);
 
 		if (pgd_none(pgdp_get(pgd_k))) {
-			p = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+			p = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 			set_pgd(pgd_k, pfn_pgd(PFN_DOWN(__pa(p)), PAGE_TABLE));
 			continue;
 		}
diff --git a/arch/s390/kernel/crash_dump.c b/arch/s390/kernel/crash_dump.c
index f699df2a2b11..53cdfec9398c 100644
--- a/arch/s390/kernel/crash_dump.c
+++ b/arch/s390/kernel/crash_dump.c
@@ -63,7 +63,7 @@ struct save_area * __init save_area_alloc(bool is_boot_cpu)
 {
 	struct save_area *sa;
 
-	sa = memblock_alloc(sizeof(*sa), 8);
+	sa = memblock_alloc_no_panic(sizeof(*sa), 8);
 	if (!sa)
 		return NULL;
 
diff --git a/arch/s390/kernel/numa.c b/arch/s390/kernel/numa.c
index a33e20f73330..1b589d575567 100644
--- a/arch/s390/kernel/numa.c
+++ b/arch/s390/kernel/numa.c
@@ -22,7 +22,7 @@ void __init numa_setup(void)
 	node_set(0, node_possible_map);
 	node_set_online(0);
 	for (nid = 0; nid < MAX_NUMNODES; nid++) {
-		NODE_DATA(nid) = memblock_alloc_or_panic(sizeof(pg_data_t), 8);
+		NODE_DATA(nid) = memblock_alloc(sizeof(pg_data_t), 8);
 	}
 	NODE_DATA(0)->node_spanned_pages = memblock_end_of_DRAM() >> PAGE_SHIFT;
 	NODE_DATA(0)->node_id = 0;
diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
index f873535eddd2..e51426113f26 100644
--- a/arch/s390/kernel/setup.c
+++ b/arch/s390/kernel/setup.c
@@ -384,7 +384,7 @@ static unsigned long __init stack_alloc_early(void)
 {
 	unsigned long stack;
 
-	stack = (unsigned long)memblock_alloc_or_panic(THREAD_SIZE, THREAD_SIZE);
+	stack = (unsigned long)memblock_alloc(THREAD_SIZE, THREAD_SIZE);
 	return stack;
 }
 
@@ -508,7 +508,7 @@ static void __init setup_resources(void)
 	bss_resource.end = __pa_symbol(__bss_stop) - 1;
 
 	for_each_mem_range(i, &start, &end) {
-		res = memblock_alloc_or_panic(sizeof(*res), 8);
+		res = memblock_alloc(sizeof(*res), 8);
 		res->flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM;
 
 		res->name = "System RAM";
@@ -527,7 +527,7 @@ static void __init setup_resources(void)
 			    std_res->start > res->end)
 				continue;
 			if (std_res->end > res->end) {
-				sub_res = memblock_alloc_or_panic(sizeof(*sub_res), 8);
+				sub_res = memblock_alloc(sizeof(*sub_res), 8);
 				*sub_res = *std_res;
 				sub_res->end = res->end;
 				std_res->start = res->end + 1;
@@ -814,7 +814,7 @@ static void __init setup_randomness(void)
 {
 	struct sysinfo_3_2_2 *vmms;
 
-	vmms = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+	vmms = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 	if (stsi(vmms, 3, 2, 2) == 0 && vmms->count)
 		add_device_randomness(&vmms->vm, sizeof(vmms->vm[0]) * vmms->count);
 	memblock_free(vmms, PAGE_SIZE);
diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
index d77aaefb59bd..9eb4508b4ca4 100644
--- a/arch/s390/kernel/smp.c
+++ b/arch/s390/kernel/smp.c
@@ -613,7 +613,7 @@ void __init smp_save_dump_ipl_cpu(void)
 	sa = save_area_alloc(true);
 	if (!sa)
 		panic("could not allocate memory for boot CPU save area\n");
-	regs = memblock_alloc_or_panic(512, 8);
+	regs = memblock_alloc(512, 8);
 	copy_oldmem_kernel(regs, __LC_FPREGS_SAVE_AREA, 512);
 	save_area_add_regs(sa, regs);
 	memblock_free(regs, 512);
@@ -792,7 +792,7 @@ void __init smp_detect_cpus(void)
 	u16 address;
 
 	/* Get CPU information */
-	info = memblock_alloc_or_panic(sizeof(*info), 8);
+	info = memblock_alloc(sizeof(*info), 8);
 	smp_get_core_info(info, 1);
 	/* Find boot CPU type */
 	if (sclp.has_core_type) {
diff --git a/arch/s390/kernel/topology.c b/arch/s390/kernel/topology.c
index cf5ee6032c0b..fef1c7b4951d 100644
--- a/arch/s390/kernel/topology.c
+++ b/arch/s390/kernel/topology.c
@@ -548,7 +548,7 @@ static void __init alloc_masks(struct sysinfo_15_1_x *info,
 		nr_masks *= info->mag[TOPOLOGY_NR_MAG - offset - 1 - i];
 	nr_masks = max(nr_masks, 1);
 	for (i = 0; i < nr_masks; i++) {
-		mask->next = memblock_alloc_or_panic(sizeof(*mask->next), 8);
+		mask->next = memblock_alloc(sizeof(*mask->next), 8);
 		mask = mask->next;
 	}
 }
@@ -566,7 +566,7 @@ void __init topology_init_early(void)
 	}
 	if (!MACHINE_HAS_TOPOLOGY)
 		goto out;
-	tl_info = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+	tl_info = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 	info = tl_info;
 	store_topology(info);
 	pr_info("The CPU configuration topology of the machine is: %d %d %d %d %d %d / %d\n",
diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
index 665b8228afeb..df43575564a3 100644
--- a/arch/s390/mm/vmem.c
+++ b/arch/s390/mm/vmem.c
@@ -33,7 +33,7 @@ static void __ref *vmem_alloc_pages(unsigned int order)
 
 	if (slab_is_available())
 		return (void *)__get_free_pages(GFP_KERNEL, order);
-	return memblock_alloc(size, size);
+	return memblock_alloc_no_panic(size, size);
 }
 
 static void vmem_free_pages(unsigned long addr, int order, struct vmem_altmap *altmap)
@@ -69,7 +69,7 @@ pte_t __ref *vmem_pte_alloc(void)
 	if (slab_is_available())
 		pte = (pte_t *) page_table_alloc(&init_mm);
 	else
-		pte = (pte_t *) memblock_alloc(size, size);
+		pte = (pte_t *) memblock_alloc_no_panic(size, size);
 	if (!pte)
 		return NULL;
 	memset64((u64 *)pte, _PAGE_INVALID, PTRS_PER_PTE);
diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c
index 289a2fecebef..d64c4b54e289 100644
--- a/arch/sh/mm/init.c
+++ b/arch/sh/mm/init.c
@@ -137,7 +137,7 @@ static pmd_t * __init one_md_table_init(pud_t *pud)
 	if (pud_none(*pud)) {
 		pmd_t *pmd;
 
-		pmd = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+		pmd = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 		pud_populate(&init_mm, pud, pmd);
 		BUG_ON(pmd != pmd_offset(pud, 0));
 	}
@@ -150,7 +150,7 @@ static pte_t * __init one_page_table_init(pmd_t *pmd)
 	if (pmd_none(*pmd)) {
 		pte_t *pte;
 
-		pte = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+		pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 		pmd_populate_kernel(&init_mm, pmd, pte);
 		BUG_ON(pte != pte_offset_kernel(pmd, 0));
 	}
diff --git a/arch/sparc/kernel/prom_32.c b/arch/sparc/kernel/prom_32.c
index a67dd67f10c8..e6dfa3895bb5 100644
--- a/arch/sparc/kernel/prom_32.c
+++ b/arch/sparc/kernel/prom_32.c
@@ -28,7 +28,7 @@ void * __init prom_early_alloc(unsigned long size)
 {
 	void *ret;
 
-	ret = memblock_alloc_or_panic(size, SMP_CACHE_BYTES);
+	ret = memblock_alloc(size, SMP_CACHE_BYTES);
 
 	prom_early_allocated += size;
 
diff --git a/arch/sparc/kernel/prom_64.c b/arch/sparc/kernel/prom_64.c
index ba82884cb92a..197771fdf8cc 100644
--- a/arch/sparc/kernel/prom_64.c
+++ b/arch/sparc/kernel/prom_64.c
@@ -30,7 +30,7 @@
 
 void * __init prom_early_alloc(unsigned long size)
 {
-	void *ret = memblock_alloc(size, SMP_CACHE_BYTES);
+	void *ret = memblock_alloc_no_panic(size, SMP_CACHE_BYTES);
 
 	if (!ret) {
 		prom_printf("prom_early_alloc(%lu) failed\n", size);
diff --git a/arch/sparc/mm/init_32.c b/arch/sparc/mm/init_32.c
index d96a14ffceeb..65a4d8ec3972 100644
--- a/arch/sparc/mm/init_32.c
+++ b/arch/sparc/mm/init_32.c
@@ -265,7 +265,7 @@ void __init mem_init(void)
 	i = last_valid_pfn >> ((20 - PAGE_SHIFT) + 5);
 	i += 1;
 	sparc_valid_addr_bitmap = (unsigned long *)
-		memblock_alloc(i << 2, SMP_CACHE_BYTES);
+		memblock_alloc_no_panic(i << 2, SMP_CACHE_BYTES);
 
 	if (sparc_valid_addr_bitmap == NULL) {
 		prom_printf("mem_init: Cannot alloc valid_addr_bitmap.\n");
diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
index dd32711022f5..4a7d558ed0c9 100644
--- a/arch/sparc/mm/srmmu.c
+++ b/arch/sparc/mm/srmmu.c
@@ -277,12 +277,12 @@ static void __init srmmu_nocache_init(void)
 
 	bitmap_bits = srmmu_nocache_size >> SRMMU_NOCACHE_BITMAP_SHIFT;
 
-	srmmu_nocache_pool = memblock_alloc_or_panic(srmmu_nocache_size,
+	srmmu_nocache_pool = memblock_alloc(srmmu_nocache_size,
 					    SRMMU_NOCACHE_ALIGN_MAX);
 	memset(srmmu_nocache_pool, 0, srmmu_nocache_size);
 
 	srmmu_nocache_bitmap =
-		memblock_alloc_or_panic(BITS_TO_LONGS(bitmap_bits) * sizeof(long),
+		memblock_alloc(BITS_TO_LONGS(bitmap_bits) * sizeof(long),
 			       SMP_CACHE_BYTES);
 	bit_map_init(&srmmu_nocache_map, srmmu_nocache_bitmap, bitmap_bits);
 
@@ -446,7 +446,7 @@ static void __init sparc_context_init(int numctx)
 	unsigned long size;
 
 	size = numctx * sizeof(struct ctx_list);
-	ctx_list_pool = memblock_alloc_or_panic(size, SMP_CACHE_BYTES);
+	ctx_list_pool = memblock_alloc(size, SMP_CACHE_BYTES);
 
 	for (ctx = 0; ctx < numctx; ctx++) {
 		struct ctx_list *clist;
diff --git a/arch/um/drivers/net_kern.c b/arch/um/drivers/net_kern.c
index d5a9c5aabaec..cf3a20440293 100644
--- a/arch/um/drivers/net_kern.c
+++ b/arch/um/drivers/net_kern.c
@@ -636,7 +636,7 @@ static int __init eth_setup(char *str)
 		return 1;
 	}
 
-	new = memblock_alloc_or_panic(sizeof(*new), SMP_CACHE_BYTES);
+	new = memblock_alloc(sizeof(*new), SMP_CACHE_BYTES);
 
 	INIT_LIST_HEAD(&new->list);
 	new->index = n;
diff --git a/arch/um/drivers/vector_kern.c b/arch/um/drivers/vector_kern.c
index 85b129e2b70b..096fefb73e09 100644
--- a/arch/um/drivers/vector_kern.c
+++ b/arch/um/drivers/vector_kern.c
@@ -1694,7 +1694,7 @@ static int __init vector_setup(char *str)
 				 str, error);
 		return 1;
 	}
-	new = memblock_alloc_or_panic(sizeof(*new), SMP_CACHE_BYTES);
+	new = memblock_alloc(sizeof(*new), SMP_CACHE_BYTES);
 	INIT_LIST_HEAD(&new->list);
 	new->unit = n;
 	new->arguments = str;
diff --git a/arch/um/kernel/load_file.c b/arch/um/kernel/load_file.c
index cb9d178ab7d8..00e0b789e5ab 100644
--- a/arch/um/kernel/load_file.c
+++ b/arch/um/kernel/load_file.c
@@ -48,7 +48,7 @@ void *uml_load_file(const char *filename, unsigned long long *size)
 		return NULL;
 	}
 
-	area = memblock_alloc_or_panic(*size, SMP_CACHE_BYTES);
+	area = memblock_alloc(*size, SMP_CACHE_BYTES);
 
 	if (__uml_load_file(filename, area, *size)) {
 		memblock_free(area, *size);
diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
index a3c9b7c67640..6dde4ebc7b8e 100644
--- a/arch/x86/coco/sev/core.c
+++ b/arch/x86/coco/sev/core.c
@@ -1572,7 +1572,7 @@ static void __init alloc_runtime_data(int cpu)
 		struct svsm_ca *caa;
 
 		/* Allocate the SVSM CA page if an SVSM is present */
-		caa = memblock_alloc_or_panic(sizeof(*caa), PAGE_SIZE);
+		caa = memblock_alloc(sizeof(*caa), PAGE_SIZE);
 
 		per_cpu(svsm_caa, cpu) = caa;
 		per_cpu(svsm_caa_pa, cpu) = __pa(caa);
diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
index 7c15d6e83c37..a54cdd5be071 100644
--- a/arch/x86/kernel/acpi/boot.c
+++ b/arch/x86/kernel/acpi/boot.c
@@ -911,7 +911,7 @@ static int __init acpi_parse_hpet(struct acpi_table_header *table)
 	 * the resource tree during the lateinit timeframe.
 	 */
 #define HPET_RESOURCE_NAME_SIZE 9
-	hpet_res = memblock_alloc_or_panic(sizeof(*hpet_res) + HPET_RESOURCE_NAME_SIZE,
+	hpet_res = memblock_alloc(sizeof(*hpet_res) + HPET_RESOURCE_NAME_SIZE,
 				  SMP_CACHE_BYTES);
 
 	hpet_res->name = (void *)&hpet_res[1];
diff --git a/arch/x86/kernel/acpi/madt_wakeup.c b/arch/x86/kernel/acpi/madt_wakeup.c
index d5ef6215583b..bae8b3452834 100644
--- a/arch/x86/kernel/acpi/madt_wakeup.c
+++ b/arch/x86/kernel/acpi/madt_wakeup.c
@@ -62,7 +62,7 @@ static void acpi_mp_cpu_die(unsigned int cpu)
 /* The argument is required to match type of x86_mapping_info::alloc_pgt_page */
 static void __init *alloc_pgt_page(void *dummy)
 {
-	return memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+	return memblock_alloc_no_panic(PAGE_SIZE, PAGE_SIZE);
 }
 
 static void __init free_pgt_page(void *pgt, void *dummy)
diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
index a57d3fa7c6b6..ebb00747f135 100644
--- a/arch/x86/kernel/apic/io_apic.c
+++ b/arch/x86/kernel/apic/io_apic.c
@@ -2503,7 +2503,7 @@ static struct resource * __init ioapic_setup_resources(void)
 	n = IOAPIC_RESOURCE_NAME_SIZE + sizeof(struct resource);
 	n *= nr_ioapics;
 
-	mem = memblock_alloc_or_panic(n, SMP_CACHE_BYTES);
+	mem = memblock_alloc(n, SMP_CACHE_BYTES);
 	res = (void *)mem;
 
 	mem += sizeof(struct resource) * nr_ioapics;
@@ -2562,7 +2562,7 @@ void __init io_apic_init_mappings(void)
 #ifdef CONFIG_X86_32
 fake_ioapic_page:
 #endif
-			ioapic_phys = (unsigned long)memblock_alloc_or_panic(PAGE_SIZE,
+			ioapic_phys = (unsigned long)memblock_alloc(PAGE_SIZE,
 								    PAGE_SIZE);
 			ioapic_phys = __pa(ioapic_phys);
 		}
diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
index 82b96ed9890a..7c9b25c5f209 100644
--- a/arch/x86/kernel/e820.c
+++ b/arch/x86/kernel/e820.c
@@ -1146,7 +1146,7 @@ void __init e820__reserve_resources(void)
 	struct resource *res;
 	u64 end;
 
-	res = memblock_alloc_or_panic(sizeof(*res) * e820_table->nr_entries,
+	res = memblock_alloc(sizeof(*res) * e820_table->nr_entries,
 			     SMP_CACHE_BYTES);
 	e820_res = res;
 
diff --git a/arch/x86/platform/olpc/olpc_dt.c b/arch/x86/platform/olpc/olpc_dt.c
index cf5dca2dbb91..90be2eef3910 100644
--- a/arch/x86/platform/olpc/olpc_dt.c
+++ b/arch/x86/platform/olpc/olpc_dt.c
@@ -136,7 +136,7 @@ void * __init prom_early_alloc(unsigned long size)
 		 * fast enough on the platforms we care about while minimizing
 		 * wasted bootmem) and hand off chunks of it to callers.
 		 */
-		res = memblock_alloc_or_panic(chunk_size, SMP_CACHE_BYTES);
+		res = memblock_alloc(chunk_size, SMP_CACHE_BYTES);
 		prom_early_allocated += chunk_size;
 		memset(res, 0, chunk_size);
 		free_mem = chunk_size;
diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 56914e21e303..468cfdcf9147 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -178,7 +178,7 @@ static void p2m_init_identity(unsigned long *p2m, unsigned long pfn)
 static void * __ref alloc_p2m_page(void)
 {
 	if (unlikely(!slab_is_available())) {
-		return memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
+		return memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 	}
 
 	return (void *)__get_free_page(GFP_KERNEL);
diff --git a/arch/xtensa/mm/kasan_init.c b/arch/xtensa/mm/kasan_init.c
index f39c4d83173a..50ee88d9f2cc 100644
--- a/arch/xtensa/mm/kasan_init.c
+++ b/arch/xtensa/mm/kasan_init.c
@@ -39,7 +39,7 @@ static void __init populate(void *start, void *end)
 	unsigned long i, j;
 	unsigned long vaddr = (unsigned long)start;
 	pmd_t *pmd = pmd_off_k(vaddr);
-	pte_t *pte = memblock_alloc_or_panic(n_pages * sizeof(pte_t), PAGE_SIZE);
+	pte_t *pte = memblock_alloc(n_pages * sizeof(pte_t), PAGE_SIZE);
 
 	pr_debug("%s: %p - %p\n", __func__, start, end);
 
diff --git a/arch/xtensa/platforms/iss/network.c b/arch/xtensa/platforms/iss/network.c
index e89f27f2bb18..74e2a149f35f 100644
--- a/arch/xtensa/platforms/iss/network.c
+++ b/arch/xtensa/platforms/iss/network.c
@@ -604,7 +604,7 @@ static int __init iss_net_setup(char *str)
 		return 1;
 	}
 
-	new = memblock_alloc(sizeof(*new), SMP_CACHE_BYTES);
+	new = memblock_alloc_no_panic(sizeof(*new), SMP_CACHE_BYTES);
 	if (new == NULL) {
 		pr_err("Alloc_bootmem failed\n");
 		return 1;
diff --git a/drivers/clk/ti/clk.c b/drivers/clk/ti/clk.c
index 9c75dcc9a534..7ef6f6e1d063 100644
--- a/drivers/clk/ti/clk.c
+++ b/drivers/clk/ti/clk.c
@@ -449,7 +449,7 @@ void __init omap2_clk_legacy_provider_init(int index, void __iomem *mem)
 {
 	struct clk_iomap *io;
 
-	io = memblock_alloc_or_panic(sizeof(*io), SMP_CACHE_BYTES);
+	io = memblock_alloc(sizeof(*io), SMP_CACHE_BYTES);
 
 	io->mem = mem;
 
diff --git a/drivers/firmware/memmap.c b/drivers/firmware/memmap.c
index 55b9cfad8a04..4cef459855c2 100644
--- a/drivers/firmware/memmap.c
+++ b/drivers/firmware/memmap.c
@@ -325,7 +325,7 @@ int __init firmware_map_add_early(u64 start, u64 end, const char *type)
 {
 	struct firmware_map_entry *entry;
 
-	entry = memblock_alloc(sizeof(struct firmware_map_entry),
+	entry = memblock_alloc_no_panic(sizeof(struct firmware_map_entry),
 			       SMP_CACHE_BYTES);
 	if (WARN_ON(!entry))
 		return -ENOMEM;
diff --git a/drivers/macintosh/smu.c b/drivers/macintosh/smu.c
index a1534cc6c641..e93fbe71ed90 100644
--- a/drivers/macintosh/smu.c
+++ b/drivers/macintosh/smu.c
@@ -492,7 +492,7 @@ int __init smu_init (void)
 		goto fail_np;
 	}
 
-	smu = memblock_alloc_or_panic(sizeof(struct smu_device), SMP_CACHE_BYTES);
+	smu = memblock_alloc(sizeof(struct smu_device), SMP_CACHE_BYTES);
 	spin_lock_init(&smu->lock);
 	INIT_LIST_HEAD(&smu->cmd_list);
 	INIT_LIST_HEAD(&smu->cmd_i2c_list);
diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index 2eb718fbeffd..d1f33510db67 100644
--- a/drivers/of/fdt.c
+++ b/drivers/of/fdt.c
@@ -1126,7 +1126,7 @@ void __init __weak early_init_dt_add_memory_arch(u64 base, u64 size)
 
 static void * __init early_init_dt_alloc_memory_arch(u64 size, u64 align)
 {
-	return memblock_alloc_or_panic(size, align);
+	return memblock_alloc(size, align);
 }
 
 bool __init early_init_dt_verify(void *dt_virt, phys_addr_t dt_phys)
diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
index 45517b9e57b1..f46c8639b535 100644
--- a/drivers/of/of_reserved_mem.c
+++ b/drivers/of/of_reserved_mem.c
@@ -79,7 +79,7 @@ static void __init alloc_reserved_mem_array(void)
 		return;
 	}
 
-	new_array = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
+	new_array = memblock_alloc_no_panic(alloc_size, SMP_CACHE_BYTES);
 	if (!new_array) {
 		pr_err("Failed to allocate memory for reserved_mem array with err: %d", -ENOMEM);
 		return;
diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
index 6e8561dba537..e8b0c8d430c2 100644
--- a/drivers/of/unittest.c
+++ b/drivers/of/unittest.c
@@ -3666,7 +3666,7 @@ static struct device_node *overlay_base_root;
 
 static void * __init dt_alloc_memory(u64 size, u64 align)
 {
-	return memblock_alloc_or_panic(size, align);
+	return memblock_alloc(size, align);
 }
 
 /*
diff --git a/drivers/usb/early/xhci-dbc.c b/drivers/usb/early/xhci-dbc.c
index 341408410ed9..2f4172c98eaf 100644
--- a/drivers/usb/early/xhci-dbc.c
+++ b/drivers/usb/early/xhci-dbc.c
@@ -94,7 +94,7 @@ static void * __init xdbc_get_page(dma_addr_t *dma_addr)
 {
 	void *virt;
 
-	virt = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+	virt = memblock_alloc_no_panic(PAGE_SIZE, PAGE_SIZE);
 	if (!virt)
 		return NULL;
 
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index dee628350cd1..6b21a3834225 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -417,11 +417,13 @@ static __always_inline void *memblock_alloc(phys_addr_t size, phys_addr_t align)
 				      MEMBLOCK_ALLOC_ACCESSIBLE, NUMA_NO_NODE);
 }
 
-void *__memblock_alloc_or_panic(phys_addr_t size, phys_addr_t align,
-				const char *func);
+void *__memblock_alloc_panic(phys_addr_t size, phys_addr_t align,
+				const char *func, bool should_panic);
 
-#define memblock_alloc_or_panic(size, align)    \
-	 __memblock_alloc_or_panic(size, align, __func__)
+#define memblock_alloc(size, align)    \
+	 __memblock_alloc_panic(size, align, __func__, true)
+#define memblock_alloc_no_panic(size, align)    \
+	 __memblock_alloc_panic(size, align, __func__, false)
 
 static inline void *memblock_alloc_raw(phys_addr_t size,
 					       phys_addr_t align)
diff --git a/init/main.c b/init/main.c
index 4bae539ebc05..302f85078e2b 100644
--- a/init/main.c
+++ b/init/main.c
@@ -379,7 +379,7 @@ static char * __init xbc_make_cmdline(const char *key)
 	if (len <= 0)
 		return NULL;
 
-	new_cmdline = memblock_alloc(len + 1, SMP_CACHE_BYTES);
+	new_cmdline = memblock_alloc_no_panic(len + 1, SMP_CACHE_BYTES);
 	if (!new_cmdline) {
 		pr_err("Failed to allocate memory for extra kernel cmdline.\n");
 		return NULL;
@@ -640,11 +640,11 @@ static void __init setup_command_line(char *command_line)
 
 	len = xlen + strlen(boot_command_line) + ilen + 1;
 
-	saved_command_line = memblock_alloc_or_panic(len, SMP_CACHE_BYTES);
+	saved_command_line = memblock_alloc(len, SMP_CACHE_BYTES);
 
 	len = xlen + strlen(command_line) + 1;
 
-	static_command_line = memblock_alloc_or_panic(len, SMP_CACHE_BYTES);
+	static_command_line = memblock_alloc(len, SMP_CACHE_BYTES);
 
 	if (xlen) {
 		/*
@@ -860,7 +860,7 @@ static void __init print_unknown_bootoptions(void)
 		len += strlen(*p);
 	}
 
-	unknown_options = memblock_alloc(len, SMP_CACHE_BYTES);
+	unknown_options = memblock_alloc_no_panic(len, SMP_CACHE_BYTES);
 	if (!unknown_options) {
 		pr_err("%s: Failed to allocate %zu bytes\n",
 			__func__, len);
@@ -1141,9 +1141,9 @@ static int __init initcall_blacklist(char *str)
 		str_entry = strsep(&str, ",");
 		if (str_entry) {
 			pr_debug("blacklisting initcall %s\n", str_entry);
-			entry = memblock_alloc_or_panic(sizeof(*entry),
+			entry = memblock_alloc(sizeof(*entry),
 					       SMP_CACHE_BYTES);
-			entry->buf = memblock_alloc_or_panic(strlen(str_entry) + 1,
+			entry->buf = memblock_alloc(strlen(str_entry) + 1,
 						    SMP_CACHE_BYTES);
 			strcpy(entry->buf, str_entry);
 			list_add(&entry->next, &blacklisted_initcalls);
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index abcf3fa63a56..85381f2b8ab3 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -328,7 +328,7 @@ static void __init *swiotlb_memblock_alloc(unsigned long nslabs,
 	 * memory encryption.
 	 */
 	if (flags & SWIOTLB_ANY)
-		tlb = memblock_alloc(bytes, PAGE_SIZE);
+		tlb = memblock_alloc_no_panic(bytes, PAGE_SIZE);
 	else
 		tlb = memblock_alloc_low(bytes, PAGE_SIZE);
 
@@ -396,14 +396,14 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
 	}
 
 	alloc_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), nslabs));
-	mem->slots = memblock_alloc(alloc_size, PAGE_SIZE);
+	mem->slots = memblock_alloc_no_panic(alloc_size, PAGE_SIZE);
 	if (!mem->slots) {
 		pr_warn("%s: Failed to allocate %zu bytes align=0x%lx\n",
 			__func__, alloc_size, PAGE_SIZE);
 		return;
 	}
 
-	mem->areas = memblock_alloc(array_size(sizeof(struct io_tlb_area),
+	mem->areas = memblock_alloc_no_panic(array_size(sizeof(struct io_tlb_area),
 		nareas), SMP_CACHE_BYTES);
 	if (!mem->areas) {
 		pr_warn("%s: Failed to allocate mem->areas.\n", __func__);
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index c9fb559a6399..18604fc4103d 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -1011,7 +1011,7 @@ void __init register_nosave_region(unsigned long start_pfn, unsigned long end_pf
 		}
 	}
 	/* This allocation cannot fail */
-	region = memblock_alloc_or_panic(sizeof(struct nosave_region),
+	region = memblock_alloc(sizeof(struct nosave_region),
 				SMP_CACHE_BYTES);
 	region->start_pfn = start_pfn;
 	region->end_pfn = end_pfn;
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index 80910bc3470c..6a7801b1d283 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -1211,7 +1211,7 @@ void __init setup_log_buf(int early)
 		goto out;
 	}
 
-	new_log_buf = memblock_alloc(new_log_buf_len, LOG_ALIGN);
+	new_log_buf = memblock_alloc_no_panic(new_log_buf_len, LOG_ALIGN);
 	if (unlikely(!new_log_buf)) {
 		pr_err("log_buf_len: %lu text bytes not available\n",
 		       new_log_buf_len);
@@ -1219,7 +1219,7 @@ void __init setup_log_buf(int early)
 	}
 
 	new_descs_size = new_descs_count * sizeof(struct prb_desc);
-	new_descs = memblock_alloc(new_descs_size, LOG_ALIGN);
+	new_descs = memblock_alloc_no_panic(new_descs_size, LOG_ALIGN);
 	if (unlikely(!new_descs)) {
 		pr_err("log_buf_len: %zu desc bytes not available\n",
 		       new_descs_size);
@@ -1227,7 +1227,7 @@ void __init setup_log_buf(int early)
 	}
 
 	new_infos_size = new_descs_count * sizeof(struct printk_info);
-	new_infos = memblock_alloc(new_infos_size, LOG_ALIGN);
+	new_infos = memblock_alloc_no_panic(new_infos_size, LOG_ALIGN);
 	if (unlikely(!new_infos)) {
 		pr_err("log_buf_len: %zu info bytes not available\n",
 		       new_infos_size);
diff --git a/lib/cpumask.c b/lib/cpumask.c
index 57274ba8b6d9..d638587f97df 100644
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -83,7 +83,7 @@ EXPORT_SYMBOL(alloc_cpumask_var_node);
  */
 void __init alloc_bootmem_cpumask_var(cpumask_var_t *mask)
 {
-	*mask = memblock_alloc_or_panic(cpumask_size(), SMP_CACHE_BYTES);
+	*mask = memblock_alloc(cpumask_size(), SMP_CACHE_BYTES);
 }
 
 /**
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index d65d48b85f90..129acb1d2fe1 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -86,7 +86,7 @@ void __init kasan_init_tags(void)
 	if (kasan_stack_collection_enabled()) {
 		if (!stack_ring.size)
 			stack_ring.size = KASAN_STACK_RING_SIZE_DEFAULT;
-		stack_ring.entries = memblock_alloc(
+		stack_ring.entries = memblock_alloc_no_panic(
 			sizeof(stack_ring.entries[0]) * stack_ring.size,
 			SMP_CACHE_BYTES);
 		if (WARN_ON(!stack_ring.entries))
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 67fc321db79b..4676a5557e60 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -869,7 +869,7 @@ void __init kfence_alloc_pool_and_metadata(void)
 	 * re-allocate the memory pool.
 	 */
 	if (!__kfence_pool)
-		__kfence_pool = memblock_alloc(KFENCE_POOL_SIZE, PAGE_SIZE);
+		__kfence_pool = memblock_alloc_no_panic(KFENCE_POOL_SIZE, PAGE_SIZE);
 
 	if (!__kfence_pool) {
 		pr_err("failed to allocate pool\n");
@@ -877,7 +877,7 @@ void __init kfence_alloc_pool_and_metadata(void)
 	}
 
 	/* The memory allocated by memblock has been zeroed out. */
-	kfence_metadata_init = memblock_alloc(KFENCE_METADATA_SIZE, PAGE_SIZE);
+	kfence_metadata_init = memblock_alloc_no_panic(KFENCE_METADATA_SIZE, PAGE_SIZE);
 	if (!kfence_metadata_init) {
 		pr_err("failed to allocate metadata\n");
 		memblock_free(__kfence_pool, KFENCE_POOL_SIZE);
diff --git a/mm/kmsan/shadow.c b/mm/kmsan/shadow.c
index 1bb505a08415..938ca5eb6df7 100644
--- a/mm/kmsan/shadow.c
+++ b/mm/kmsan/shadow.c
@@ -280,8 +280,8 @@ void __init kmsan_init_alloc_meta_for_range(void *start, void *end)
 
 	start = (void *)PAGE_ALIGN_DOWN((u64)start);
 	size = PAGE_ALIGN((u64)end - (u64)start);
-	shadow = memblock_alloc_or_panic(size, PAGE_SIZE);
-	origin = memblock_alloc_or_panic(size, PAGE_SIZE);
+	shadow = memblock_alloc(size, PAGE_SIZE);
+	origin = memblock_alloc(size, PAGE_SIZE);
 
 	for (u64 addr = 0; addr < size; addr += PAGE_SIZE) {
 		page = virt_to_page_or_null((char *)start + addr);
diff --git a/mm/memblock.c b/mm/memblock.c
index 95af35fd1389..901da45ecf8b 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1692,21 +1692,23 @@ void * __init memblock_alloc_try_nid(
 }
 
 /**
- * __memblock_alloc_or_panic - Try to allocate memory and panic on failure
+ * __memblock_alloc_panic - Try to allocate memory, optionally panicking on failure
  * @size: size of memory block to be allocated in bytes
  * @align: alignment of the region and block's size
  * @func: caller func name
+ * @should_panic: whether to panic on allocation failure
  *
- * This function attempts to allocate memory using memblock_alloc,
- * and in case of failure, it calls panic with the formatted message.
- * This function should not be used directly, please use the macro memblock_alloc_or_panic.
+ * In case of failure, it calls panic() with a formatted message if
+ * @should_panic is true. This function should not be used directly;
+ * use the memblock_alloc() or memblock_alloc_no_panic() macros instead.
  */
-void *__init __memblock_alloc_or_panic(phys_addr_t size, phys_addr_t align,
-				       const char *func)
+void *__init __memblock_alloc_panic(phys_addr_t size, phys_addr_t align,
+				    const char *func, bool should_panic)
 {
-	void *addr = memblock_alloc(size, align);
+	void *addr = memblock_alloc_try_nid(size, align, MEMBLOCK_LOW_LIMIT,
+				      MEMBLOCK_ALLOC_ACCESSIBLE, NUMA_NO_NODE);
 
-	if (unlikely(!addr))
+	if (unlikely(!addr && should_panic))
 		panic("%s: Failed to allocate %pap bytes\n", func, &size);
 	return addr;
 }
diff --git a/mm/numa.c b/mm/numa.c
index f1787d7713a6..9442448dc74f 100644
--- a/mm/numa.c
+++ b/mm/numa.c
@@ -37,7 +37,7 @@ void __init alloc_node_data(int nid)
 void __init alloc_offline_node_data(int nid)
 {
 	pg_data_t *pgdat;
-	node_data[nid] = memblock_alloc_or_panic(sizeof(*pgdat), SMP_CACHE_BYTES);
+	node_data[nid] = memblock_alloc(sizeof(*pgdat), SMP_CACHE_BYTES);
 }
 
 /* Stub functions: */
diff --git a/mm/numa_emulation.c b/mm/numa_emulation.c
index 031fb9961bf7..958dc5a1715c 100644
--- a/mm/numa_emulation.c
+++ b/mm/numa_emulation.c
@@ -447,7 +447,7 @@ void __init numa_emulation(struct numa_meminfo *numa_meminfo, int numa_dist_cnt)
 
 	/* copy the physical distance table */
 	if (numa_dist_cnt) {
-		phys_dist = memblock_alloc(phys_size, PAGE_SIZE);
+		phys_dist = memblock_alloc_no_panic(phys_size, PAGE_SIZE);
 		if (!phys_dist) {
 			pr_warn("NUMA: Warning: can't allocate copy of distance table, disabling emulation\n");
 			goto no_emu;
diff --git a/mm/numa_memblks.c b/mm/numa_memblks.c
index a3877e9bc878..549a6d6607c6 100644
--- a/mm/numa_memblks.c
+++ b/mm/numa_memblks.c
@@ -61,7 +61,7 @@ static int __init numa_alloc_distance(void)
 	cnt++;
 	size = cnt * cnt * sizeof(numa_distance[0]);
 
-	numa_distance = memblock_alloc(size, PAGE_SIZE);
+	numa_distance = memblock_alloc_no_panic(size, PAGE_SIZE);
 	if (!numa_distance) {
 		pr_warn("Warning: can't allocate distance table!\n");
 		/* don't retry until explicitly reset */
diff --git a/mm/percpu.c b/mm/percpu.c
index ac61e3fc5f15..a381d626ed32 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -1359,7 +1359,7 @@ static struct pcpu_chunk * __init pcpu_alloc_first_chunk(unsigned long tmp_addr,
 	/* allocate chunk */
 	alloc_size = struct_size(chunk, populated,
 				 BITS_TO_LONGS(region_size >> PAGE_SHIFT));
-	chunk = memblock_alloc_or_panic(alloc_size, SMP_CACHE_BYTES);
+	chunk = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
 
 	INIT_LIST_HEAD(&chunk->list);
 
@@ -1371,14 +1371,14 @@ static struct pcpu_chunk * __init pcpu_alloc_first_chunk(unsigned long tmp_addr,
 	region_bits = pcpu_chunk_map_bits(chunk);
 
 	alloc_size = BITS_TO_LONGS(region_bits) * sizeof(chunk->alloc_map[0]);
-	chunk->alloc_map = memblock_alloc_or_panic(alloc_size, SMP_CACHE_BYTES);
+	chunk->alloc_map = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
 
 	alloc_size =
 		BITS_TO_LONGS(region_bits + 1) * sizeof(chunk->bound_map[0]);
-	chunk->bound_map = memblock_alloc_or_panic(alloc_size, SMP_CACHE_BYTES);
+	chunk->bound_map = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
 
 	alloc_size = pcpu_chunk_nr_blocks(chunk) * sizeof(chunk->md_blocks[0]);
-	chunk->md_blocks = memblock_alloc_or_panic(alloc_size, SMP_CACHE_BYTES);
+	chunk->md_blocks = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
 #ifdef NEED_PCPUOBJ_EXT
 	/* first chunk is free to use */
 	chunk->obj_exts = NULL;
@@ -2399,7 +2399,7 @@ struct pcpu_alloc_info * __init pcpu_alloc_alloc_info(int nr_groups,
 			  __alignof__(ai->groups[0].cpu_map[0]));
 	ai_size = base_size + nr_units * sizeof(ai->groups[0].cpu_map[0]);
 
-	ptr = memblock_alloc(PFN_ALIGN(ai_size), PAGE_SIZE);
+	ptr = memblock_alloc_no_panic(PFN_ALIGN(ai_size), PAGE_SIZE);
 	if (!ptr)
 		return NULL;
 	ai = ptr;
@@ -2582,16 +2582,16 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
 
 	/* process group information and build config tables accordingly */
 	alloc_size = ai->nr_groups * sizeof(group_offsets[0]);
-	group_offsets = memblock_alloc_or_panic(alloc_size, SMP_CACHE_BYTES);
+	group_offsets = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
 
 	alloc_size = ai->nr_groups * sizeof(group_sizes[0]);
-	group_sizes = memblock_alloc_or_panic(alloc_size, SMP_CACHE_BYTES);
+	group_sizes = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
 
 	alloc_size = nr_cpu_ids * sizeof(unit_map[0]);
-	unit_map = memblock_alloc_or_panic(alloc_size, SMP_CACHE_BYTES);
+	unit_map = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
 
 	alloc_size = nr_cpu_ids * sizeof(unit_off[0]);
-	unit_off = memblock_alloc_or_panic(alloc_size, SMP_CACHE_BYTES);
+	unit_off = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
 
 	for (cpu = 0; cpu < nr_cpu_ids; cpu++)
 		unit_map[cpu] = UINT_MAX;
@@ -2660,7 +2660,7 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
 	pcpu_free_slot = pcpu_sidelined_slot + 1;
 	pcpu_to_depopulate_slot = pcpu_free_slot + 1;
 	pcpu_nr_slots = pcpu_to_depopulate_slot + 1;
-	pcpu_chunk_lists = memblock_alloc_or_panic(pcpu_nr_slots *
+	pcpu_chunk_lists = memblock_alloc(pcpu_nr_slots *
 					  sizeof(pcpu_chunk_lists[0]),
 					  SMP_CACHE_BYTES);
 
@@ -3010,7 +3010,7 @@ int __init pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size,
 	size_sum = ai->static_size + ai->reserved_size + ai->dyn_size;
 	areas_size = PFN_ALIGN(ai->nr_groups * sizeof(void *));
 
-	areas = memblock_alloc(areas_size, SMP_CACHE_BYTES);
+	areas = memblock_alloc_no_panic(areas_size, SMP_CACHE_BYTES);
 	if (!areas) {
 		rc = -ENOMEM;
 		goto out_free;
@@ -3127,19 +3127,19 @@ void __init __weak pcpu_populate_pte(unsigned long addr)
 	pmd_t *pmd;
 
 	if (pgd_none(*pgd)) {
-		p4d = memblock_alloc_or_panic(P4D_TABLE_SIZE, P4D_TABLE_SIZE);
+		p4d = memblock_alloc(P4D_TABLE_SIZE, P4D_TABLE_SIZE);
 		pgd_populate(&init_mm, pgd, p4d);
 	}
 
 	p4d = p4d_offset(pgd, addr);
 	if (p4d_none(*p4d)) {
-		pud = memblock_alloc_or_panic(PUD_TABLE_SIZE, PUD_TABLE_SIZE);
+		pud = memblock_alloc(PUD_TABLE_SIZE, PUD_TABLE_SIZE);
 		p4d_populate(&init_mm, p4d, pud);
 	}
 
 	pud = pud_offset(p4d, addr);
 	if (pud_none(*pud)) {
-		pmd = memblock_alloc_or_panic(PMD_TABLE_SIZE, PMD_TABLE_SIZE);
+		pmd = memblock_alloc(PMD_TABLE_SIZE, PMD_TABLE_SIZE);
 		pud_populate(&init_mm, pud, pmd);
 	}
 
@@ -3147,7 +3147,7 @@ void __init __weak pcpu_populate_pte(unsigned long addr)
 	if (!pmd_present(*pmd)) {
 		pte_t *new;
 
-		new = memblock_alloc_or_panic(PTE_TABLE_SIZE, PTE_TABLE_SIZE);
+		new = memblock_alloc(PTE_TABLE_SIZE, PTE_TABLE_SIZE);
 		pmd_populate_kernel(&init_mm, pmd, new);
 	}
 
@@ -3198,7 +3198,7 @@ int __init pcpu_page_first_chunk(size_t reserved_size, pcpu_fc_cpu_to_node_fn_t
 	/* unaligned allocations can't be freed, round up to page size */
 	pages_size = PFN_ALIGN(unit_pages * num_possible_cpus() *
 			       sizeof(pages[0]));
-	pages = memblock_alloc_or_panic(pages_size, SMP_CACHE_BYTES);
+	pages = memblock_alloc(pages_size, SMP_CACHE_BYTES);
 
 	/* allocate pages */
 	j = 0;
diff --git a/mm/sparse.c b/mm/sparse.c
index 133b033d0cba..56191a32e6c5 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -257,7 +257,7 @@ static void __init memblocks_present(void)
 
 		size = sizeof(struct mem_section *) * NR_SECTION_ROOTS;
 		align = 1 << (INTERNODE_CACHE_SHIFT);
-		mem_section = memblock_alloc_or_panic(size, align);
+		mem_section = memblock_alloc(size, align);
 	}
 #endif
 
-- 
2.25.1



^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH 2/3] mm/memblock: Modify the default failure behavior of memblock_alloc_raw to panic
  2025-01-03 10:51 [PATCH 1/3] mm/memblock: Modify the default failure behavior of memblock_alloc to panic Guo Weikang
@ 2025-01-03 10:51 ` Guo Weikang
  2025-01-03 10:51 ` [PATCH 3/3] mm/memblock: Modify the default failure behavior of memblock_alloc_low(from) Guo Weikang
  2025-01-03 19:58 ` [PATCH 1/3] mm/memblock: Modify the default failure behavior of memblock_alloc to panic Christophe Leroy
  2 siblings, 0 replies; 8+ messages in thread
From: Guo Weikang @ 2025-01-03 10:51 UTC (permalink / raw)
  To: Mike Rapoport, Andrew Morton; +Cc: linux-mm, linux-kernel, Guo Weikang

Just like memblock_alloc, the default failure behavior of memblock_alloc_raw
is now modified to trigger a panic when allocation fails.

A new interface, memblock_alloc_raw_no_panic, has been introduced to handle
cases where panic behavior is not desired.

Signed-off-by: Guo Weikang <guoweikang.kernel@gmail.com>
---
 arch/openrisc/mm/init.c                |  3 ---
 arch/powerpc/kernel/paca.c             |  4 ----
 arch/powerpc/kernel/prom.c             |  3 ---
 arch/powerpc/platforms/pseries/plpks.c |  2 +-
 include/linux/memblock.h               | 17 +++++++----------
 mm/memblock.c                          | 13 +++++++++++--
 6 files changed, 19 insertions(+), 23 deletions(-)

diff --git a/arch/openrisc/mm/init.c b/arch/openrisc/mm/init.c
index d0cb1a0126f9..9e0047764f54 100644
--- a/arch/openrisc/mm/init.c
+++ b/arch/openrisc/mm/init.c
@@ -96,9 +96,6 @@ static void __init map_ram(void)
 
 			/* Alloc one page for holding PTE's... */
 			pte = memblock_alloc_raw(PAGE_SIZE, PAGE_SIZE);
-			if (!pte)
-				panic("%s: Failed to allocate page for PTEs\n",
-				      __func__);
 			set_pmd(pme, __pmd(_KERNPG_TABLE + __pa(pte)));
 
 			/* Fill the newly allocated page with PTE'S */
diff --git a/arch/powerpc/kernel/paca.c b/arch/powerpc/kernel/paca.c
index 7502066c3c53..9d15799e97d4 100644
--- a/arch/powerpc/kernel/paca.c
+++ b/arch/powerpc/kernel/paca.c
@@ -246,10 +246,6 @@ void __init allocate_paca_ptrs(void)
 
 	paca_ptrs_size = sizeof(struct paca_struct *) * nr_cpu_ids;
 	paca_ptrs = memblock_alloc_raw(paca_ptrs_size, SMP_CACHE_BYTES);
-	if (!paca_ptrs)
-		panic("Failed to allocate %d bytes for paca pointers\n",
-		      paca_ptrs_size);
-
 	memset(paca_ptrs, 0x88, paca_ptrs_size);
 }
 
diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index e0059842a1c6..3aba66ddd2c8 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -128,9 +128,6 @@ static void __init move_device_tree(void)
 	    !memblock_is_memory(start + size - 1) ||
 	    overlaps_crashkernel(start, size) || overlaps_initrd(start, size)) {
 		p = memblock_alloc_raw(size, PAGE_SIZE);
-		if (!p)
-			panic("Failed to allocate %lu bytes to move device tree\n",
-			      size);
 		memcpy(p, initial_boot_params, size);
 		initial_boot_params = p;
 		DBG("Moved device tree to 0x%px\n", p);
diff --git a/arch/powerpc/platforms/pseries/plpks.c b/arch/powerpc/platforms/pseries/plpks.c
index b1667ed05f98..1bcbed41ce44 100644
--- a/arch/powerpc/platforms/pseries/plpks.c
+++ b/arch/powerpc/platforms/pseries/plpks.c
@@ -671,7 +671,7 @@ void __init plpks_early_init_devtree(void)
 		return;
 	}
 
-	ospassword = memblock_alloc_raw(len, SMP_CACHE_BYTES);
+	ospassword = memblock_alloc_raw_no_panic(len, SMP_CACHE_BYTES);
 	if (!ospassword) {
 		pr_err("Error allocating memory for password.\n");
 		goto out;
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 6b21a3834225..b68c141ebc44 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -418,20 +418,17 @@ static __always_inline void *memblock_alloc(phys_addr_t size, phys_addr_t align)
 }
 
 void *__memblock_alloc_panic(phys_addr_t size, phys_addr_t align,
-				const char *func, bool should_panic);
+			     const char *func, bool should_panic, bool raw);
 
 #define memblock_alloc(size, align)    \
-	 __memblock_alloc_panic(size, align, __func__, true)
+	 __memblock_alloc_panic(size, align, __func__, true, false)
 #define memblock_alloc_no_panic(size, align)    \
-	 __memblock_alloc_panic(size, align, __func__, false)
+	 __memblock_alloc_panic(size, align, __func__, false, false)
 
-static inline void *memblock_alloc_raw(phys_addr_t size,
-					       phys_addr_t align)
-{
-	return memblock_alloc_try_nid_raw(size, align, MEMBLOCK_LOW_LIMIT,
-					  MEMBLOCK_ALLOC_ACCESSIBLE,
-					  NUMA_NO_NODE);
-}
+#define memblock_alloc_raw(size, align)    \
+	 __memblock_alloc_panic(size, align, __func__, true, true)
+#define memblock_alloc_raw_no_panic(size, align)    \
+	 __memblock_alloc_panic(size, align, __func__, false, true)
 
 static inline void *memblock_alloc_from(phys_addr_t size,
 						phys_addr_t align,
diff --git a/mm/memblock.c b/mm/memblock.c
index 901da45ecf8b..4974ae2ee5ec 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1697,15 +1697,24 @@ void * __init memblock_alloc_try_nid(
  * @align: alignment of the region and block's size
  * @func: caller func name
  * @should_panic: whether failed panic
+ * @raw: whether to skip zeroing the allocated memory
  *
  * In case of failure, it calls panic with the formatted message.
  * This function should not be used directly, please use the macro
  * memblock_alloc and memblock_alloc_no_panic.
+ * The same applies to memblock_alloc_raw and memblock_alloc_raw_no_panic.
  */
 void *__init __memblock_alloc_panic(phys_addr_t size, phys_addr_t align,
-				    const char *func, bool should_panic)
+				    const char *func, bool should_panic,
+				    bool raw)
 {
-	void *addr = memblock_alloc_try_nid(size, align, MEMBLOCK_LOW_LIMIT,
+	void *addr;
+
+	if (unlikely(raw))
+		addr = memblock_alloc_try_nid_raw(size, align, MEMBLOCK_LOW_LIMIT,
+				      MEMBLOCK_ALLOC_ACCESSIBLE, NUMA_NO_NODE);
+	else
+		addr = memblock_alloc_try_nid(size, align, MEMBLOCK_LOW_LIMIT,
 				      MEMBLOCK_ALLOC_ACCESSIBLE, NUMA_NO_NODE);
 
 	if (unlikely(!addr && should_panic))
-- 
2.25.1




* [PATCH 3/3] mm/memblock: Modify the default failure behavior of memblock_alloc_low(from)
  2025-01-03 10:51 [PATCH 1/3] mm/memblock: Modify the default failure behavior of memblock_alloc to panic Guo Weikang
  2025-01-03 10:51 ` [PATCH 2/3] mm/memblock: Modify the default failure behavior of memblock_alloc_raw " Guo Weikang
@ 2025-01-03 10:51 ` Guo Weikang
  2025-01-04  8:38   ` kernel test robot
  2025-01-03 19:58 ` [PATCH 1/3] mm/memblock: Modify the default failure behavior of memblock_alloc to panic Christophe Leroy
  2 siblings, 1 reply; 8+ messages in thread
From: Guo Weikang @ 2025-01-03 10:51 UTC (permalink / raw)
  To: Mike Rapoport, Andrew Morton; +Cc: linux-mm, linux-kernel, Guo Weikang

Just like memblock_alloc, the default failure behavior of memblock_alloc_low
and memblock_alloc_from is now modified to trigger a panic when allocation
fails. New interfaces, memblock_alloc_low_no_panic and
memblock_alloc_from_no_panic, have been introduced to handle cases where
panic behavior is not desired.

Signed-off-by: Guo Weikang <guoweikang.kernel@gmail.com>
---
 arch/arc/mm/highmem.c       |  4 ----
 arch/csky/mm/init.c         |  5 ----
 arch/m68k/atari/stram.c     |  4 ----
 arch/m68k/mm/motorola.c     |  9 -------
 arch/mips/include/asm/dmi.h |  2 +-
 arch/mips/mm/init.c         |  5 ----
 arch/s390/kernel/setup.c    |  4 ----
 arch/s390/kernel/smp.c      |  3 ---
 arch/sparc/mm/init_64.c     | 13 ----------
 arch/um/kernel/mem.c        | 20 ----------------
 arch/xtensa/mm/mmu.c        |  4 ----
 include/linux/memblock.h    | 30 ++++++++++++-----------
 mm/memblock.c               | 47 +++++++++++++++++++++++++++++++++++++
 mm/percpu.c                 |  6 ++---
 14 files changed, 67 insertions(+), 89 deletions(-)

diff --git a/arch/arc/mm/highmem.c b/arch/arc/mm/highmem.c
index c79912a6b196..4ed597b19388 100644
--- a/arch/arc/mm/highmem.c
+++ b/arch/arc/mm/highmem.c
@@ -53,10 +53,6 @@ static noinline pte_t * __init alloc_kmap_pgtable(unsigned long kvaddr)
 	pte_t *pte_k;
 
 	pte_k = (pte_t *)memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
-	if (!pte_k)
-		panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
-		      __func__, PAGE_SIZE, PAGE_SIZE);
-
 	pmd_populate_kernel(&init_mm, pmd_k, pte_k);
 	return pte_k;
 }
diff --git a/arch/csky/mm/init.c b/arch/csky/mm/init.c
index bde7cabd23df..04de02a83564 100644
--- a/arch/csky/mm/init.c
+++ b/arch/csky/mm/init.c
@@ -174,11 +174,6 @@ void __init fixrange_init(unsigned long start, unsigned long end,
 			for (; (k < PTRS_PER_PMD) && (vaddr != end); pmd++, k++) {
 				if (pmd_none(*pmd)) {
 					pte = (pte_t *) memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
-					if (!pte)
-						panic("%s: Failed to allocate %lu bytes align=%lx\n",
-						      __func__, PAGE_SIZE,
-						      PAGE_SIZE);
-
 					set_pmd(pmd, __pmd(__pa(pte)));
 					BUG_ON(pte != pte_offset_kernel(pmd, 0));
 				}
diff --git a/arch/m68k/atari/stram.c b/arch/m68k/atari/stram.c
index 922e53bcb853..14f761330b29 100644
--- a/arch/m68k/atari/stram.c
+++ b/arch/m68k/atari/stram.c
@@ -96,10 +96,6 @@ void __init atari_stram_reserve_pages(void *start_mem)
 		pr_debug("atari_stram pool: kernel in ST-RAM, using alloc_bootmem!\n");
 		stram_pool.start = (resource_size_t)memblock_alloc_low(pool_size,
 								       PAGE_SIZE);
-		if (!stram_pool.start)
-			panic("%s: Failed to allocate %lu bytes align=%lx\n",
-			      __func__, pool_size, PAGE_SIZE);
-
 		stram_pool.end = stram_pool.start + pool_size - 1;
 		request_resource(&iomem_resource, &stram_pool);
 		stram_virt_offset = 0;
diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index ce016ae8c972..83bbada15be2 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -227,11 +227,6 @@ static pte_t * __init kernel_page_table(void)
 
 	if (PAGE_ALIGNED(last_pte_table)) {
 		pte_table = memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
-		if (!pte_table) {
-			panic("%s: Failed to allocate %lu bytes align=%lx\n",
-					__func__, PAGE_SIZE, PAGE_SIZE);
-		}
-
 		clear_page(pte_table);
 		mmu_page_ctor(pte_table);
 
@@ -275,10 +270,6 @@ static pmd_t * __init kernel_ptr_table(void)
 	last_pmd_table += PTRS_PER_PMD;
 	if (PAGE_ALIGNED(last_pmd_table)) {
 		last_pmd_table = memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
-		if (!last_pmd_table)
-			panic("%s: Failed to allocate %lu bytes align=%lx\n",
-			      __func__, PAGE_SIZE, PAGE_SIZE);
-
 		clear_page(last_pmd_table);
 		mmu_page_ctor(last_pmd_table);
 	}
diff --git a/arch/mips/include/asm/dmi.h b/arch/mips/include/asm/dmi.h
index dc397f630c66..9698d072cc4d 100644
--- a/arch/mips/include/asm/dmi.h
+++ b/arch/mips/include/asm/dmi.h
@@ -11,7 +11,7 @@
 #define dmi_unmap(x)			iounmap(x)
 
 /* MIPS initialize DMI scan before SLAB is ready, so we use memblock here */
-#define dmi_alloc(l)			memblock_alloc_low(l, PAGE_SIZE)
+#define dmi_alloc(l)			memblock_alloc_low_no_panic(l, PAGE_SIZE)
 
 #if defined(CONFIG_MACH_LOONGSON64)
 #define SMBIOS_ENTRY_POINT_SCAN_START	0xFFFE000
diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
index 4583d1a2a73e..cca62f23769f 100644
--- a/arch/mips/mm/init.c
+++ b/arch/mips/mm/init.c
@@ -257,11 +257,6 @@ void __init fixrange_init(unsigned long start, unsigned long end,
 				if (pmd_none(*pmd)) {
 					pte = (pte_t *) memblock_alloc_low(PAGE_SIZE,
 									   PAGE_SIZE);
-					if (!pte)
-						panic("%s: Failed to allocate %lu bytes align=%lx\n",
-						      __func__, PAGE_SIZE,
-						      PAGE_SIZE);
-
 					set_pmd(pmd, __pmd((unsigned long)pte));
 					BUG_ON(pte != pte_offset_kernel(pmd, 0));
 				}
diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
index e51426113f26..854d3744dacf 100644
--- a/arch/s390/kernel/setup.c
+++ b/arch/s390/kernel/setup.c
@@ -397,10 +397,6 @@ static void __init setup_lowcore(void)
 	 */
 	BUILD_BUG_ON(sizeof(struct lowcore) != LC_PAGES * PAGE_SIZE);
 	lc = memblock_alloc_low(sizeof(*lc), sizeof(*lc));
-	if (!lc)
-		panic("%s: Failed to allocate %zu bytes align=%zx\n",
-		      __func__, sizeof(*lc), sizeof(*lc));
-
 	lc->pcpu = (unsigned long)per_cpu_ptr(&pcpu_devices, 0);
 	lc->restart_psw.mask = PSW_KERNEL_BITS & ~PSW_MASK_DAT;
 	lc->restart_psw.addr = __pa(restart_int_handler);
diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
index 9eb4508b4ca4..467d4f390837 100644
--- a/arch/s390/kernel/smp.c
+++ b/arch/s390/kernel/smp.c
@@ -631,9 +631,6 @@ void __init smp_save_dump_secondary_cpus(void)
 		return;
 	/* Allocate a page as dumping area for the store status sigps */
 	page = memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
-	if (!page)
-		panic("ERROR: Failed to allocate %lx bytes below %lx\n",
-		      PAGE_SIZE, 1UL << 31);
 
 	/* Set multi-threading state to the previous system. */
 	pcpu_set_smt(sclp.mtid_prev);
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 05882bca5b73..8c813c755eb8 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -1789,8 +1789,6 @@ static unsigned long __ref kernel_map_range(unsigned long pstart,
 
 			new = memblock_alloc_from(PAGE_SIZE, PAGE_SIZE,
 						  PAGE_SIZE);
-			if (!new)
-				goto err_alloc;
 			alloc_bytes += PAGE_SIZE;
 			pgd_populate(&init_mm, pgd, new);
 		}
@@ -1801,8 +1799,6 @@ static unsigned long __ref kernel_map_range(unsigned long pstart,
 
 			new = memblock_alloc_from(PAGE_SIZE, PAGE_SIZE,
 						  PAGE_SIZE);
-			if (!new)
-				goto err_alloc;
 			alloc_bytes += PAGE_SIZE;
 			p4d_populate(&init_mm, p4d, new);
 		}
@@ -1817,8 +1813,6 @@ static unsigned long __ref kernel_map_range(unsigned long pstart,
 			}
 			new = memblock_alloc_from(PAGE_SIZE, PAGE_SIZE,
 						  PAGE_SIZE);
-			if (!new)
-				goto err_alloc;
 			alloc_bytes += PAGE_SIZE;
 			pud_populate(&init_mm, pud, new);
 		}
@@ -1833,8 +1827,6 @@ static unsigned long __ref kernel_map_range(unsigned long pstart,
 			}
 			new = memblock_alloc_from(PAGE_SIZE, PAGE_SIZE,
 						  PAGE_SIZE);
-			if (!new)
-				goto err_alloc;
 			alloc_bytes += PAGE_SIZE;
 			pmd_populate_kernel(&init_mm, pmd, new);
 		}
@@ -1854,11 +1846,6 @@ static unsigned long __ref kernel_map_range(unsigned long pstart,
 	}
 
 	return alloc_bytes;
-
-err_alloc:
-	panic("%s: Failed to allocate %lu bytes align=%lx from=%lx\n",
-	      __func__, PAGE_SIZE, PAGE_SIZE, PAGE_SIZE);
-	return -ENOMEM;
 }
 
 static void __init flush_all_kernel_tsbs(void)
diff --git a/arch/um/kernel/mem.c b/arch/um/kernel/mem.c
index 53248ed04771..9c161fb4ed3a 100644
--- a/arch/um/kernel/mem.c
+++ b/arch/um/kernel/mem.c
@@ -83,10 +83,6 @@ static void __init one_page_table_init(pmd_t *pmd)
 	if (pmd_none(*pmd)) {
 		pte_t *pte = (pte_t *) memblock_alloc_low(PAGE_SIZE,
 							  PAGE_SIZE);
-		if (!pte)
-			panic("%s: Failed to allocate %lu bytes align=%lx\n",
-			      __func__, PAGE_SIZE, PAGE_SIZE);
-
 		set_pmd(pmd, __pmd(_KERNPG_TABLE +
 					   (unsigned long) __pa(pte)));
 		BUG_ON(pte != pte_offset_kernel(pmd, 0));
@@ -97,10 +93,6 @@ static void __init one_md_table_init(pud_t *pud)
 {
 #if CONFIG_PGTABLE_LEVELS > 2
 	pmd_t *pmd_table = (pmd_t *) memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
-	if (!pmd_table)
-		panic("%s: Failed to allocate %lu bytes align=%lx\n",
-		      __func__, PAGE_SIZE, PAGE_SIZE);
-
 	set_pud(pud, __pud(_KERNPG_TABLE + (unsigned long) __pa(pmd_table)));
 	BUG_ON(pmd_table != pmd_offset(pud, 0));
 #endif
@@ -110,10 +102,6 @@ static void __init one_ud_table_init(p4d_t *p4d)
 {
 #if CONFIG_PGTABLE_LEVELS > 3
 	pud_t *pud_table = (pud_t *) memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
-	if (!pud_table)
-		panic("%s: Failed to allocate %lu bytes align=%lx\n",
-		      __func__, PAGE_SIZE, PAGE_SIZE);
-
 	set_p4d(p4d, __p4d(_KERNPG_TABLE + (unsigned long) __pa(pud_table)));
 	BUG_ON(pud_table != pud_offset(p4d, 0));
 #endif
@@ -163,10 +151,6 @@ static void __init fixaddr_user_init( void)
 
 	fixrange_init( FIXADDR_USER_START, FIXADDR_USER_END, swapper_pg_dir);
 	v = (unsigned long) memblock_alloc_low(size, PAGE_SIZE);
-	if (!v)
-		panic("%s: Failed to allocate %lu bytes align=%lx\n",
-		      __func__, size, PAGE_SIZE);
-
 	memcpy((void *) v , (void *) FIXADDR_USER_START, size);
 	p = __pa(v);
 	for ( ; size > 0; size -= PAGE_SIZE, vaddr += PAGE_SIZE,
@@ -184,10 +168,6 @@ void __init paging_init(void)
 
 	empty_zero_page = (unsigned long *) memblock_alloc_low(PAGE_SIZE,
 							       PAGE_SIZE);
-	if (!empty_zero_page)
-		panic("%s: Failed to allocate %lu bytes align=%lx\n",
-		      __func__, PAGE_SIZE, PAGE_SIZE);
-
 	max_zone_pfn[ZONE_NORMAL] = end_iomem >> PAGE_SHIFT;
 	free_area_init(max_zone_pfn);
 
diff --git a/arch/xtensa/mm/mmu.c b/arch/xtensa/mm/mmu.c
index 92e158c69c10..aee020c986a3 100644
--- a/arch/xtensa/mm/mmu.c
+++ b/arch/xtensa/mm/mmu.c
@@ -33,10 +33,6 @@ static void * __init init_pmd(unsigned long vaddr, unsigned long n_pages)
 		 __func__, vaddr, n_pages);
 
 	pte = memblock_alloc_low(n_pages * sizeof(pte_t), PAGE_SIZE);
-	if (!pte)
-		panic("%s: Failed to allocate %lu bytes align=%lx\n",
-		      __func__, n_pages * sizeof(pte_t), PAGE_SIZE);
-
 	for (i = 0; i < n_pages; ++i)
 		pte_clear(NULL, 0, pte + i);
 
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index b68c141ebc44..3f940bf628a9 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -430,20 +430,22 @@ void *__memblock_alloc_panic(phys_addr_t size, phys_addr_t align,
 #define memblock_alloc_raw_no_panic(size, align)    \
 	 __memblock_alloc_panic(size, align, __func__, false, true)
 
-static inline void *memblock_alloc_from(phys_addr_t size,
-						phys_addr_t align,
-						phys_addr_t min_addr)
-{
-	return memblock_alloc_try_nid(size, align, min_addr,
-				      MEMBLOCK_ALLOC_ACCESSIBLE, NUMA_NO_NODE);
-}
-
-static inline void *memblock_alloc_low(phys_addr_t size,
-					       phys_addr_t align)
-{
-	return memblock_alloc_try_nid(size, align, MEMBLOCK_LOW_LIMIT,
-				      ARCH_LOW_ADDRESS_LIMIT, NUMA_NO_NODE);
-}
+void *__memblock_alloc_from_panic(phys_addr_t size, phys_addr_t align,
+				  phys_addr_t min_addr, const char *func,
+				  bool should_panic);
+
+#define memblock_alloc_from(size, align, min_addr)    \
+	 __memblock_alloc_from_panic(size, align, min_addr,  __func__, true)
+#define memblock_alloc_from_no_panic(size, align, min_addr)    \
+	 __memblock_alloc_from_panic(size, align, min_addr, __func__, false)
+
+void *__memblock_alloc_low_panic(phys_addr_t size, phys_addr_t align,
+				 const char *func, bool should_panic);
+
+#define memblock_alloc_low(size, align)    \
+	 __memblock_alloc_low_panic(size, align, __func__, true)
+#define memblock_alloc_low_no_panic(size, align)    \
+	 __memblock_alloc_low_panic(size, align, __func__, false)
 
 static inline void *memblock_alloc_node(phys_addr_t size,
 						phys_addr_t align, int nid)
diff --git a/mm/memblock.c b/mm/memblock.c
index 4974ae2ee5ec..22922c81ff77 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1722,6 +1722,53 @@ void *__init __memblock_alloc_panic(phys_addr_t size, phys_addr_t align,
 	return addr;
 }
 
+/**
+ * __memblock_alloc_from_panic - Try to allocate memory and panic on failure
+ * @size: size of memory block to be allocated in bytes
+ * @align: alignment of the region and block's size
+ * @min_addr: the lower bound of the memory region from where the allocation
+ *	  is preferred (phys address)
+ * @func: caller func name
+ * @should_panic: whether to panic on allocation failure
+ *
+ * If allocation fails and @should_panic is set, it calls panic() with a
+ * formatted message. This function should not be used directly; use the
+ * memblock_alloc_from and memblock_alloc_from_no_panic macros instead.
+ */
+void *__init __memblock_alloc_from_panic(phys_addr_t size, phys_addr_t align,
+				    phys_addr_t min_addr, const char *func,
+				    bool should_panic)
+{
+	void *addr = memblock_alloc_try_nid(size, align, min_addr,
+				      MEMBLOCK_ALLOC_ACCESSIBLE, NUMA_NO_NODE);
+
+	if (unlikely(!addr && should_panic))
+		panic("%s: Failed to allocate %pap bytes\n", func, &size);
+	return addr;
+}
+
+/**
+ * __memblock_alloc_low_panic - Try to allocate memory and panic on failure
+ * @size: size of memory block to be allocated in bytes
+ * @align: alignment of the region and block's size
+ * @func: caller func name
+ * @should_panic: whether to panic on allocation failure
+ *
+ * If allocation fails and @should_panic is set, it calls panic() with a
+ * formatted message. This function should not be used directly; use the
+ * memblock_alloc_low and memblock_alloc_low_no_panic macros instead.
+ */
+void *__init __memblock_alloc_low_panic(phys_addr_t size, phys_addr_t align,
+					const char *func, bool should_panic)
+{
+	void *addr = memblock_alloc_try_nid(size, align, MEMBLOCK_LOW_LIMIT,
+				      ARCH_LOW_ADDRESS_LIMIT, NUMA_NO_NODE);
+
+	if (unlikely(!addr && should_panic))
+		panic("%s: Failed to allocate %pap bytes\n", func, &size);
+	return addr;
+}
+
 /**
  * memblock_free_late - free pages directly to buddy allocator
  * @base: phys starting address of the  boot memory block
diff --git a/mm/percpu.c b/mm/percpu.c
index a381d626ed32..980fba4292be 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -2933,7 +2933,7 @@ static void * __init pcpu_fc_alloc(unsigned int cpu, size_t size, size_t align,
 		node = cpu_to_nd_fn(cpu);
 
 	if (node == NUMA_NO_NODE || !node_online(node) || !NODE_DATA(node)) {
-		ptr = memblock_alloc_from(size, align, goal);
+		ptr = memblock_alloc_from_no_panic(size, align, goal);
 		pr_info("cpu %d has no node %d or node-local memory\n",
 			cpu, node);
 		pr_debug("per cpu data for cpu%d %zu bytes at 0x%llx\n",
@@ -2948,7 +2948,7 @@ static void * __init pcpu_fc_alloc(unsigned int cpu, size_t size, size_t align,
 	}
 	return ptr;
 #else
-	return memblock_alloc_from(size, align, goal);
+	return memblock_alloc_from_no_panic(size, align, goal);
 #endif
 }
 
@@ -3318,7 +3318,7 @@ void __init setup_per_cpu_areas(void)
 
 	ai = pcpu_alloc_alloc_info(1, 1);
 	fc = memblock_alloc_from(unit_size, PAGE_SIZE, __pa(MAX_DMA_ADDRESS));
-	if (!ai || !fc)
+	if (!ai)
 		panic("Failed to allocate memory for percpu areas.");
 	/* kmemleak tracks the percpu allocations separately */
 	kmemleak_ignore_phys(__pa(fc));
-- 
2.25.1




* Re: [PATCH 1/3] mm/memblock: Modify the default failure behavior of memblock_alloc to panic
  2025-01-03 10:51 [PATCH 1/3] mm/memblock: Modify the default failure behavior of memblock_alloc to panic Guo Weikang
  2025-01-03 10:51 ` [PATCH 2/3] mm/memblock: Modify the default failure behavior of memblock_alloc_raw " Guo Weikang
  2025-01-03 10:51 ` [PATCH 3/3] mm/memblock: Modify the default failure behavior of memblock_alloc_low(from) Guo Weikang
@ 2025-01-03 19:58 ` Christophe Leroy
  2025-01-06  2:17   ` Weikang Guo
  2 siblings, 1 reply; 8+ messages in thread
From: Christophe Leroy @ 2025-01-03 19:58 UTC (permalink / raw)
  To: Guo Weikang, Mike Rapoport, Andrew Morton; +Cc: linux-mm, linux-kernel



On 03/01/2025 at 11:51, Guo Weikang wrote:
> After analyzing the usage of memblock_alloc, it was found that approximately
> 4/5 (120/155) of the calls expect a panic behavior on allocation failure.
> To reflect this common usage pattern, the default failure behavior of
> memblock_alloc is now modified to trigger a panic when allocation fails.
> 
> Additionally, a new interface, memblock_alloc_no_panic, has been introduced
> to handle cases where panic behavior is not desired.

Isn't that going in the opposite direction?

5 years ago we did the exact reverse, see commit c0dbe825a9f1 
("memblock: memblock_alloc_try_nid: don't panic")

Christophe

> 
> Signed-off-by: Guo Weikang <guoweikang.kernel@gmail.com>
> ---
>   arch/alpha/kernel/core_cia.c            |  2 +-
>   arch/alpha/kernel/core_marvel.c         |  6 ++---
>   arch/alpha/kernel/pci.c                 |  4 ++--
>   arch/alpha/kernel/pci_iommu.c           |  4 ++--
>   arch/alpha/kernel/setup.c               |  2 +-
>   arch/arm/kernel/setup.c                 |  4 ++--
>   arch/arm/mach-omap2/omap_hwmod.c        | 16 +++++++++----
>   arch/arm/mm/mmu.c                       |  6 ++---
>   arch/arm/mm/nommu.c                     |  2 +-
>   arch/arm64/kernel/setup.c               |  2 +-
>   arch/loongarch/include/asm/dmi.h        |  2 +-
>   arch/loongarch/kernel/setup.c           |  2 +-
>   arch/loongarch/mm/init.c                |  6 ++---
>   arch/m68k/mm/init.c                     |  2 +-
>   arch/m68k/mm/mcfmmu.c                   |  4 ++--
>   arch/m68k/mm/motorola.c                 |  2 +-
>   arch/m68k/mm/sun3mmu.c                  |  4 ++--
>   arch/m68k/sun3/sun3dvma.c               |  2 +-
>   arch/mips/kernel/setup.c                |  2 +-
>   arch/openrisc/mm/ioremap.c              |  2 +-
>   arch/parisc/mm/init.c                   | 10 ++++----
>   arch/powerpc/kernel/dt_cpu_ftrs.c       |  2 +-
>   arch/powerpc/kernel/pci_32.c            |  2 +-
>   arch/powerpc/kernel/setup-common.c      |  2 +-
>   arch/powerpc/kernel/setup_32.c          |  2 +-
>   arch/powerpc/mm/book3s32/mmu.c          |  2 +-
>   arch/powerpc/mm/book3s64/pgtable.c      |  2 +-
>   arch/powerpc/mm/kasan/8xx.c             |  5 ++--
>   arch/powerpc/mm/kasan/init_32.c         |  7 +++---
>   arch/powerpc/mm/kasan/init_book3e_64.c  |  8 +++----
>   arch/powerpc/mm/kasan/init_book3s_64.c  |  2 +-
>   arch/powerpc/mm/nohash/mmu_context.c    |  6 ++---
>   arch/powerpc/mm/pgtable_32.c            |  2 +-
>   arch/powerpc/platforms/powermac/nvram.c |  2 +-
>   arch/powerpc/platforms/powernv/opal.c   |  2 +-
>   arch/powerpc/platforms/ps3/setup.c      |  2 +-
>   arch/powerpc/sysdev/msi_bitmap.c        |  2 +-
>   arch/riscv/kernel/setup.c               |  2 +-
>   arch/riscv/mm/kasan_init.c              | 14 +++++------
>   arch/s390/kernel/crash_dump.c           |  2 +-
>   arch/s390/kernel/numa.c                 |  2 +-
>   arch/s390/kernel/setup.c                |  8 +++----
>   arch/s390/kernel/smp.c                  |  4 ++--
>   arch/s390/kernel/topology.c             |  4 ++--
>   arch/s390/mm/vmem.c                     |  4 ++--
>   arch/sh/mm/init.c                       |  4 ++--
>   arch/sparc/kernel/prom_32.c             |  2 +-
>   arch/sparc/kernel/prom_64.c             |  2 +-
>   arch/sparc/mm/init_32.c                 |  2 +-
>   arch/sparc/mm/srmmu.c                   |  6 ++---
>   arch/um/drivers/net_kern.c              |  2 +-
>   arch/um/drivers/vector_kern.c           |  2 +-
>   arch/um/kernel/load_file.c              |  2 +-
>   arch/x86/coco/sev/core.c                |  2 +-
>   arch/x86/kernel/acpi/boot.c             |  2 +-
>   arch/x86/kernel/acpi/madt_wakeup.c      |  2 +-
>   arch/x86/kernel/apic/io_apic.c          |  4 ++--
>   arch/x86/kernel/e820.c                  |  2 +-
>   arch/x86/platform/olpc/olpc_dt.c        |  2 +-
>   arch/x86/xen/p2m.c                      |  2 +-
>   arch/xtensa/mm/kasan_init.c             |  2 +-
>   arch/xtensa/platforms/iss/network.c     |  2 +-
>   drivers/clk/ti/clk.c                    |  2 +-
>   drivers/firmware/memmap.c               |  2 +-
>   drivers/macintosh/smu.c                 |  2 +-
>   drivers/of/fdt.c                        |  2 +-
>   drivers/of/of_reserved_mem.c            |  2 +-
>   drivers/of/unittest.c                   |  2 +-
>   drivers/usb/early/xhci-dbc.c            |  2 +-
>   include/linux/memblock.h                | 10 ++++----
>   init/main.c                             | 12 +++++-----
>   kernel/dma/swiotlb.c                    |  6 ++---
>   kernel/power/snapshot.c                 |  2 +-
>   kernel/printk/printk.c                  |  6 ++---
>   lib/cpumask.c                           |  2 +-
>   mm/kasan/tags.c                         |  2 +-
>   mm/kfence/core.c                        |  4 ++--
>   mm/kmsan/shadow.c                       |  4 ++--
>   mm/memblock.c                           | 18 +++++++-------
>   mm/numa.c                               |  2 +-
>   mm/numa_emulation.c                     |  2 +-
>   mm/numa_memblks.c                       |  2 +-
>   mm/percpu.c                             | 32 ++++++++++++-------------
>   mm/sparse.c                             |  2 +-
>   84 files changed, 173 insertions(+), 165 deletions(-)
> 
> diff --git a/arch/alpha/kernel/core_cia.c b/arch/alpha/kernel/core_cia.c
> index 6e577228e175..05f80b4bbf12 100644
> --- a/arch/alpha/kernel/core_cia.c
> +++ b/arch/alpha/kernel/core_cia.c
> @@ -331,7 +331,7 @@ cia_prepare_tbia_workaround(int window)
>   	long i;
>   
>   	/* Use minimal 1K map. */
> -	ppte = memblock_alloc_or_panic(CIA_BROKEN_TBIA_SIZE, 32768);
> +	ppte = memblock_alloc(CIA_BROKEN_TBIA_SIZE, 32768);
>   	pte = (virt_to_phys(ppte) >> (PAGE_SHIFT - 1)) | 1;
>   
>   	for (i = 0; i < CIA_BROKEN_TBIA_SIZE / sizeof(unsigned long); ++i)
> diff --git a/arch/alpha/kernel/core_marvel.c b/arch/alpha/kernel/core_marvel.c
> index b1bfbd11980d..716ed3197f72 100644
> --- a/arch/alpha/kernel/core_marvel.c
> +++ b/arch/alpha/kernel/core_marvel.c
> @@ -79,9 +79,9 @@ mk_resource_name(int pe, int port, char *str)
>   {
>   	char tmp[80];
>   	char *name;
> -	
> +
>   	sprintf(tmp, "PCI %s PE %d PORT %d", str, pe, port);
> -	name = memblock_alloc_or_panic(strlen(tmp) + 1, SMP_CACHE_BYTES);
> +	name = memblock_alloc(strlen(tmp) + 1, SMP_CACHE_BYTES);
>   	strcpy(name, tmp);
>   
>   	return name;
> @@ -116,7 +116,7 @@ alloc_io7(unsigned int pe)
>   		return NULL;
>   	}
>   
> -	io7 = memblock_alloc_or_panic(sizeof(*io7), SMP_CACHE_BYTES);
> +	io7 = memblock_alloc(sizeof(*io7), SMP_CACHE_BYTES);
>   	io7->pe = pe;
>   	raw_spin_lock_init(&io7->irq_lock);
>   
> diff --git a/arch/alpha/kernel/pci.c b/arch/alpha/kernel/pci.c
> index 8e9b4ac86b7e..d359ebaf6de7 100644
> --- a/arch/alpha/kernel/pci.c
> +++ b/arch/alpha/kernel/pci.c
> @@ -391,7 +391,7 @@ alloc_pci_controller(void)
>   {
>   	struct pci_controller *hose;
>   
> -	hose = memblock_alloc_or_panic(sizeof(*hose), SMP_CACHE_BYTES);
> +	hose = memblock_alloc(sizeof(*hose), SMP_CACHE_BYTES);
>   
>   	*hose_tail = hose;
>   	hose_tail = &hose->next;
> @@ -402,7 +402,7 @@ alloc_pci_controller(void)
>   struct resource * __init
>   alloc_resource(void)
>   {
> -	return memblock_alloc_or_panic(sizeof(struct resource), SMP_CACHE_BYTES);
> +	return memblock_alloc(sizeof(struct resource), SMP_CACHE_BYTES);
>   }
>   
>   
> diff --git a/arch/alpha/kernel/pci_iommu.c b/arch/alpha/kernel/pci_iommu.c
> index 681f56089d9c..7a465c207684 100644
> --- a/arch/alpha/kernel/pci_iommu.c
> +++ b/arch/alpha/kernel/pci_iommu.c
> @@ -71,8 +71,8 @@ iommu_arena_new_node(int nid, struct pci_controller *hose, dma_addr_t base,
>   	if (align < mem_size)
>   		align = mem_size;
>   
> -	arena = memblock_alloc_or_panic(sizeof(*arena), SMP_CACHE_BYTES);
> -	arena->ptes = memblock_alloc_or_panic(mem_size, align);
> +	arena = memblock_alloc(sizeof(*arena), SMP_CACHE_BYTES);
> +	arena->ptes = memblock_alloc(mem_size, align);
>   
>   	spin_lock_init(&arena->lock);
>   	arena->hose = hose;
> diff --git a/arch/alpha/kernel/setup.c b/arch/alpha/kernel/setup.c
> index bebdffafaee8..6de866a62bd9 100644
> --- a/arch/alpha/kernel/setup.c
> +++ b/arch/alpha/kernel/setup.c
> @@ -269,7 +269,7 @@ move_initrd(unsigned long mem_limit)
>   	unsigned long size;
>   
>   	size = initrd_end - initrd_start;
> -	start = memblock_alloc(PAGE_ALIGN(size), PAGE_SIZE);
> +	start = memblock_alloc_no_panic(PAGE_ALIGN(size), PAGE_SIZE);
>   	if (!start || __pa(start) + size > mem_limit) {
>   		initrd_start = initrd_end = 0;
>   		return NULL;
> diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
> index a41c93988d2c..b36498c0bedd 100644
> --- a/arch/arm/kernel/setup.c
> +++ b/arch/arm/kernel/setup.c
> @@ -880,7 +880,7 @@ static void __init request_standard_resources(const struct machine_desc *mdesc)
>   		 */
>   		boot_alias_start = phys_to_idmap(start);
>   		if (arm_has_idmap_alias() && boot_alias_start != IDMAP_INVALID_ADDR) {
> -			res = memblock_alloc_or_panic(sizeof(*res), SMP_CACHE_BYTES);
> +			res = memblock_alloc(sizeof(*res), SMP_CACHE_BYTES);
>   			res->name = "System RAM (boot alias)";
>   			res->start = boot_alias_start;
>   			res->end = phys_to_idmap(res_end);
> @@ -888,7 +888,7 @@ static void __init request_standard_resources(const struct machine_desc *mdesc)
>   			request_resource(&iomem_resource, res);
>   		}
>   
> -		res = memblock_alloc_or_panic(sizeof(*res), SMP_CACHE_BYTES);
> +		res = memblock_alloc(sizeof(*res), SMP_CACHE_BYTES);
>   		res->name  = "System RAM";
>   		res->start = start;
>   		res->end = res_end;
> diff --git a/arch/arm/mach-omap2/omap_hwmod.c b/arch/arm/mach-omap2/omap_hwmod.c
> index 111677878d9c..30e4f1279cdb 100644
> --- a/arch/arm/mach-omap2/omap_hwmod.c
> +++ b/arch/arm/mach-omap2/omap_hwmod.c
> @@ -709,7 +709,7 @@ static int __init _setup_clkctrl_provider(struct device_node *np)
>   	struct clkctrl_provider *provider;
>   	int i;
>   
> -	provider = memblock_alloc(sizeof(*provider), SMP_CACHE_BYTES);
> +	provider = memblock_alloc_no_panic(sizeof(*provider), SMP_CACHE_BYTES);
>   	if (!provider)
>   		return -ENOMEM;
>   
> @@ -718,16 +718,16 @@ static int __init _setup_clkctrl_provider(struct device_node *np)
>   	provider->num_addrs = of_address_count(np);
>   
>   	provider->addr =
> -		memblock_alloc(sizeof(void *) * provider->num_addrs,
> +		memblock_alloc_no_panic(sizeof(void *) * provider->num_addrs,
>   			       SMP_CACHE_BYTES);
>   	if (!provider->addr)
> -		return -ENOMEM;
> +		goto err_free_provider;
>   
>   	provider->size =
> -		memblock_alloc(sizeof(u32) * provider->num_addrs,
> +		memblock_alloc_no_panic(sizeof(u32) * provider->num_addrs,
>   			       SMP_CACHE_BYTES);
>   	if (!provider->size)
> -		return -ENOMEM;
> +		goto err_free_addr;
>   
>   	for (i = 0; i < provider->num_addrs; i++) {
>   		struct resource res;
> @@ -740,6 +740,12 @@ static int __init _setup_clkctrl_provider(struct device_node *np)
>   	list_add(&provider->link, &clkctrl_providers);
>   
>   	return 0;
> +
> +err_free_addr:
> +	memblock_free(provider->addr, sizeof(void *) * provider->num_addrs);
> +err_free_provider:
> +	memblock_free(provider, sizeof(*provider));
> +	return -ENOMEM;
>   }
>   
>   static int __init _init_clkctrl_providers(void)
> diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
> index f02f872ea8a9..3d788304839e 100644
> --- a/arch/arm/mm/mmu.c
> +++ b/arch/arm/mm/mmu.c
> @@ -726,7 +726,7 @@ EXPORT_SYMBOL(phys_mem_access_prot);
>   
>   static void __init *early_alloc(unsigned long sz)
>   {
> -	return memblock_alloc_or_panic(sz, sz);
> +	return memblock_alloc(sz, sz);
>   
>   }
>   
> @@ -1022,7 +1022,7 @@ void __init iotable_init(struct map_desc *io_desc, int nr)
>   	if (!nr)
>   		return;
>   
> -	svm = memblock_alloc_or_panic(sizeof(*svm) * nr, __alignof__(*svm));
> +	svm = memblock_alloc(sizeof(*svm) * nr, __alignof__(*svm));
>   
>   	for (md = io_desc; nr; md++, nr--) {
>   		create_mapping(md);
> @@ -1044,7 +1044,7 @@ void __init vm_reserve_area_early(unsigned long addr, unsigned long size,
>   	struct vm_struct *vm;
>   	struct static_vm *svm;
>   
> -	svm = memblock_alloc_or_panic(sizeof(*svm), __alignof__(*svm));
> +	svm = memblock_alloc(sizeof(*svm), __alignof__(*svm));
>   
>   	vm = &svm->vm;
>   	vm->addr = (void *)addr;
> diff --git a/arch/arm/mm/nommu.c b/arch/arm/mm/nommu.c
> index 1a8f6914ee59..079b4d4acd29 100644
> --- a/arch/arm/mm/nommu.c
> +++ b/arch/arm/mm/nommu.c
> @@ -162,7 +162,7 @@ void __init paging_init(const struct machine_desc *mdesc)
>   	mpu_setup();
>   
>   	/* allocate the zero page. */
> -	zero_page = (void *)memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
> +	zero_page = (void *)memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>   
>   	bootmem_init();
>   
> diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
> index 85104587f849..3012cf9b0f9b 100644
> --- a/arch/arm64/kernel/setup.c
> +++ b/arch/arm64/kernel/setup.c
> @@ -223,7 +223,7 @@ static void __init request_standard_resources(void)
>   
>   	num_standard_resources = memblock.memory.cnt;
>   	res_size = num_standard_resources * sizeof(*standard_resources);
> -	standard_resources = memblock_alloc_or_panic(res_size, SMP_CACHE_BYTES);
> +	standard_resources = memblock_alloc(res_size, SMP_CACHE_BYTES);
>   
>   	for_each_mem_region(region) {
>   		res = &standard_resources[i++];
> diff --git a/arch/loongarch/include/asm/dmi.h b/arch/loongarch/include/asm/dmi.h
> index 605493417753..6305bc3ba15b 100644
> --- a/arch/loongarch/include/asm/dmi.h
> +++ b/arch/loongarch/include/asm/dmi.h
> @@ -10,7 +10,7 @@
>   
>   #define dmi_early_remap(x, l)	dmi_remap(x, l)
>   #define dmi_early_unmap(x, l)	dmi_unmap(x)
> -#define dmi_alloc(l)		memblock_alloc(l, PAGE_SIZE)
> +#define dmi_alloc(l)		memblock_alloc_no_panic(l, PAGE_SIZE)
>   
>   static inline void *dmi_remap(u64 phys_addr, unsigned long size)
>   {
> diff --git a/arch/loongarch/kernel/setup.c b/arch/loongarch/kernel/setup.c
> index edcfdfcad7d2..56934fe58170 100644
> --- a/arch/loongarch/kernel/setup.c
> +++ b/arch/loongarch/kernel/setup.c
> @@ -431,7 +431,7 @@ static void __init resource_init(void)
>   
>   	num_standard_resources = memblock.memory.cnt;
>   	res_size = num_standard_resources * sizeof(*standard_resources);
> -	standard_resources = memblock_alloc_or_panic(res_size, SMP_CACHE_BYTES);
> +	standard_resources = memblock_alloc(res_size, SMP_CACHE_BYTES);
>   
>   	for_each_mem_region(region) {
>   		res = &standard_resources[i++];
> diff --git a/arch/loongarch/mm/init.c b/arch/loongarch/mm/init.c
> index ca5aa5f46a9f..99b4d5cf3e9c 100644
> --- a/arch/loongarch/mm/init.c
> +++ b/arch/loongarch/mm/init.c
> @@ -174,7 +174,7 @@ pte_t * __init populate_kernel_pte(unsigned long addr)
>   	pmd_t *pmd;
>   
>   	if (p4d_none(p4dp_get(p4d))) {
> -		pud = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
> +		pud = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>   		p4d_populate(&init_mm, p4d, pud);
>   #ifndef __PAGETABLE_PUD_FOLDED
>   		pud_init(pud);
> @@ -183,7 +183,7 @@ pte_t * __init populate_kernel_pte(unsigned long addr)
>   
>   	pud = pud_offset(p4d, addr);
>   	if (pud_none(pudp_get(pud))) {
> -		pmd = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
> +		pmd = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>   		pud_populate(&init_mm, pud, pmd);
>   #ifndef __PAGETABLE_PMD_FOLDED
>   		pmd_init(pmd);
> @@ -194,7 +194,7 @@ pte_t * __init populate_kernel_pte(unsigned long addr)
>   	if (!pmd_present(pmdp_get(pmd))) {
>   		pte_t *pte;
>   
> -		pte = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
> +		pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>   		pmd_populate_kernel(&init_mm, pmd, pte);
>   		kernel_pte_init(pte);
>   	}
> diff --git a/arch/m68k/mm/init.c b/arch/m68k/mm/init.c
> index 8b11d0d545aa..1ccc238f33d9 100644
> --- a/arch/m68k/mm/init.c
> +++ b/arch/m68k/mm/init.c
> @@ -68,7 +68,7 @@ void __init paging_init(void)
>   
>   	high_memory = (void *) end_mem;
>   
> -	empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
> +	empty_zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>   	max_zone_pfn[ZONE_DMA] = end_mem >> PAGE_SHIFT;
>   	free_area_init(max_zone_pfn);
>   }
> diff --git a/arch/m68k/mm/mcfmmu.c b/arch/m68k/mm/mcfmmu.c
> index 19a75029036c..26bac0984964 100644
> --- a/arch/m68k/mm/mcfmmu.c
> +++ b/arch/m68k/mm/mcfmmu.c
> @@ -42,14 +42,14 @@ void __init paging_init(void)
>   	unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
>   	int i;
>   
> -	empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
> +	empty_zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>   
>   	pg_dir = swapper_pg_dir;
>   	memset(swapper_pg_dir, 0, sizeof(swapper_pg_dir));
>   
>   	size = num_pages * sizeof(pte_t);
>   	size = (size + PAGE_SIZE) & ~(PAGE_SIZE-1);
> -	next_pgtable = (unsigned long) memblock_alloc_or_panic(size, PAGE_SIZE);
> +	next_pgtable = (unsigned long) memblock_alloc(size, PAGE_SIZE);
>   
>   	pg_dir += PAGE_OFFSET >> PGDIR_SHIFT;
>   
> diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
> index eab50dda14ee..ce016ae8c972 100644
> --- a/arch/m68k/mm/motorola.c
> +++ b/arch/m68k/mm/motorola.c
> @@ -491,7 +491,7 @@ void __init paging_init(void)
>   	 * initialize the bad page table and bad page to point
>   	 * to a couple of allocated pages
>   	 */
> -	empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
> +	empty_zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>   
>   	/*
>   	 * Set up SFC/DFC registers
> diff --git a/arch/m68k/mm/sun3mmu.c b/arch/m68k/mm/sun3mmu.c
> index 1ecf6bdd08bf..748645ac8cda 100644
> --- a/arch/m68k/mm/sun3mmu.c
> +++ b/arch/m68k/mm/sun3mmu.c
> @@ -44,7 +44,7 @@ void __init paging_init(void)
>   	unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0, };
>   	unsigned long size;
>   
> -	empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
> +	empty_zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>   
>   	address = PAGE_OFFSET;
>   	pg_dir = swapper_pg_dir;
> @@ -54,7 +54,7 @@ void __init paging_init(void)
>   	size = num_pages * sizeof(pte_t);
>   	size = (size + PAGE_SIZE) & ~(PAGE_SIZE-1);
>   
> -	next_pgtable = (unsigned long)memblock_alloc_or_panic(size, PAGE_SIZE);
> +	next_pgtable = (unsigned long)memblock_alloc(size, PAGE_SIZE);
>   	bootmem_end = (next_pgtable + size + PAGE_SIZE) & PAGE_MASK;
>   
>   	/* Map whole memory from PAGE_OFFSET (0x0E000000) */
> diff --git a/arch/m68k/sun3/sun3dvma.c b/arch/m68k/sun3/sun3dvma.c
> index 225fc735e466..681fcf83caa2 100644
> --- a/arch/m68k/sun3/sun3dvma.c
> +++ b/arch/m68k/sun3/sun3dvma.c
> @@ -252,7 +252,7 @@ void __init dvma_init(void)
>   
>   	list_add(&(hole->list), &hole_list);
>   
> -	iommu_use = memblock_alloc_or_panic(IOMMU_TOTAL_ENTRIES * sizeof(unsigned long),
> +	iommu_use = memblock_alloc(IOMMU_TOTAL_ENTRIES * sizeof(unsigned long),
>   				   SMP_CACHE_BYTES);
>   	dvma_unmap_iommu(DVMA_START, DVMA_SIZE);
>   
> diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
> index fbfe0771317e..fcccff55dc77 100644
> --- a/arch/mips/kernel/setup.c
> +++ b/arch/mips/kernel/setup.c
> @@ -704,7 +704,7 @@ static void __init resource_init(void)
>   	for_each_mem_range(i, &start, &end) {
>   		struct resource *res;
>   
> -		res = memblock_alloc_or_panic(sizeof(struct resource), SMP_CACHE_BYTES);
> +		res = memblock_alloc(sizeof(struct resource), SMP_CACHE_BYTES);
>   
>   		res->start = start;
>   		/*
> diff --git a/arch/openrisc/mm/ioremap.c b/arch/openrisc/mm/ioremap.c
> index 8e63e86251ca..e0f58f40c0ab 100644
> --- a/arch/openrisc/mm/ioremap.c
> +++ b/arch/openrisc/mm/ioremap.c
> @@ -38,7 +38,7 @@ pte_t __ref *pte_alloc_one_kernel(struct mm_struct *mm)
>   	if (likely(mem_init_done)) {
>   		pte = (pte_t *)get_zeroed_page(GFP_KERNEL);
>   	} else {
> -		pte = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
> +		pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>   	}
>   
>   	return pte;
> diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
> index 61c0a2477072..d587a7cf7fdb 100644
> --- a/arch/parisc/mm/init.c
> +++ b/arch/parisc/mm/init.c
> @@ -377,7 +377,7 @@ static void __ref map_pages(unsigned long start_vaddr,
>   
>   #if CONFIG_PGTABLE_LEVELS == 3
>   		if (pud_none(*pud)) {
> -			pmd = memblock_alloc_or_panic(PAGE_SIZE << PMD_TABLE_ORDER,
> +			pmd = memblock_alloc(PAGE_SIZE << PMD_TABLE_ORDER,
>   					     PAGE_SIZE << PMD_TABLE_ORDER);
>   			pud_populate(NULL, pud, pmd);
>   		}
> @@ -386,7 +386,7 @@ static void __ref map_pages(unsigned long start_vaddr,
>   		pmd = pmd_offset(pud, vaddr);
>   		for (tmp1 = start_pmd; tmp1 < PTRS_PER_PMD; tmp1++, pmd++) {
>   			if (pmd_none(*pmd)) {
> -				pg_table = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
> +				pg_table = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>   				pmd_populate_kernel(NULL, pmd, pg_table);
>   			}
>   
> @@ -644,7 +644,7 @@ static void __init pagetable_init(void)
>   	}
>   #endif
>   
> -	empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
> +	empty_zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>   
>   }
>   
> @@ -681,7 +681,7 @@ static void __init fixmap_init(void)
>   
>   #if CONFIG_PGTABLE_LEVELS == 3
>   	if (pud_none(*pud)) {
> -		pmd = memblock_alloc_or_panic(PAGE_SIZE << PMD_TABLE_ORDER,
> +		pmd = memblock_alloc(PAGE_SIZE << PMD_TABLE_ORDER,
>   				     PAGE_SIZE << PMD_TABLE_ORDER);
>   		pud_populate(NULL, pud, pmd);
>   	}
> @@ -689,7 +689,7 @@ static void __init fixmap_init(void)
>   
>   	pmd = pmd_offset(pud, addr);
>   	do {
> -		pte_t *pte = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
> +		pte_t *pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>   
>   		pmd_populate_kernel(&init_mm, pmd, pte);
>   
> diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
> index 3af6c06af02f..f00a3b607e06 100644
> --- a/arch/powerpc/kernel/dt_cpu_ftrs.c
> +++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
> @@ -1088,7 +1088,7 @@ static int __init dt_cpu_ftrs_scan_callback(unsigned long node, const char
>   	of_scan_flat_dt_subnodes(node, count_cpufeatures_subnodes,
>   						&nr_dt_cpu_features);
>   	dt_cpu_features =
> -		memblock_alloc_or_panic(
> +		memblock_alloc(
>   			sizeof(struct dt_cpu_feature) * nr_dt_cpu_features,
>   			PAGE_SIZE);
>   
> diff --git a/arch/powerpc/kernel/pci_32.c b/arch/powerpc/kernel/pci_32.c
> index f8a3bd8cfae4..b56c853fc8be 100644
> --- a/arch/powerpc/kernel/pci_32.c
> +++ b/arch/powerpc/kernel/pci_32.c
> @@ -213,7 +213,7 @@ pci_create_OF_bus_map(void)
>   	struct property* of_prop;
>   	struct device_node *dn;
>   
> -	of_prop = memblock_alloc_or_panic(sizeof(struct property) + 256,
> +	of_prop = memblock_alloc(sizeof(struct property) + 256,
>   				 SMP_CACHE_BYTES);
>   	dn = of_find_node_by_path("/");
>   	if (dn) {
> diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
> index f3ea1329c566..9c8bf12fdf3a 100644
> --- a/arch/powerpc/kernel/setup-common.c
> +++ b/arch/powerpc/kernel/setup-common.c
> @@ -458,7 +458,7 @@ void __init smp_setup_cpu_maps(void)
>   
>   	DBG("smp_setup_cpu_maps()\n");
>   
> -	cpu_to_phys_id = memblock_alloc_or_panic(nr_cpu_ids * sizeof(u32),
> +	cpu_to_phys_id = memblock_alloc(nr_cpu_ids * sizeof(u32),
>   					__alignof__(u32));
>   
>   	for_each_node_by_type(dn, "cpu") {
> diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
> index 5a1bf501fbe1..ec440aa52fde 100644
> --- a/arch/powerpc/kernel/setup_32.c
> +++ b/arch/powerpc/kernel/setup_32.c
> @@ -140,7 +140,7 @@ arch_initcall(ppc_init);
>   
>   static void *__init alloc_stack(void)
>   {
> -	return memblock_alloc_or_panic(THREAD_SIZE, THREAD_ALIGN);
> +	return memblock_alloc(THREAD_SIZE, THREAD_ALIGN);
>   }
>   
>   void __init irqstack_early_init(void)
> diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
> index be9c4106e22f..f18d2a1e0df6 100644
> --- a/arch/powerpc/mm/book3s32/mmu.c
> +++ b/arch/powerpc/mm/book3s32/mmu.c
> @@ -377,7 +377,7 @@ void __init MMU_init_hw(void)
>   	 * Find some memory for the hash table.
>   	 */
>   	if ( ppc_md.progress ) ppc_md.progress("hash:find piece", 0x322);
> -	Hash = memblock_alloc_or_panic(Hash_size, Hash_size);
> +	Hash = memblock_alloc(Hash_size, Hash_size);
>   	_SDR1 = __pa(Hash) | SDR1_LOW_BITS;
>   
>   	pr_info("Total memory = %lldMB; using %ldkB for hash table\n",
> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
> index ce64abea9e3e..21bf84a134c3 100644
> --- a/arch/powerpc/mm/book3s64/pgtable.c
> +++ b/arch/powerpc/mm/book3s64/pgtable.c
> @@ -330,7 +330,7 @@ void __init mmu_partition_table_init(void)
>   	unsigned long ptcr;
>   
>   	/* Initialize the Partition Table with no entries */
> -	partition_tb = memblock_alloc_or_panic(patb_size, patb_size);
> +	partition_tb = memblock_alloc(patb_size, patb_size);
>   	ptcr = __pa(partition_tb) | (PATB_SIZE_SHIFT - 12);
>   	set_ptcr_when_no_uv(ptcr);
>   	powernv_set_nmmu_ptcr(ptcr);
> diff --git a/arch/powerpc/mm/kasan/8xx.c b/arch/powerpc/mm/kasan/8xx.c
> index 989d6cdf4141..c43b1b3bcaac 100644
> --- a/arch/powerpc/mm/kasan/8xx.c
> +++ b/arch/powerpc/mm/kasan/8xx.c
> @@ -22,10 +22,9 @@ kasan_init_shadow_8M(unsigned long k_start, unsigned long k_end, void *block)
>   		if ((void *)pmd_page_vaddr(*pmd) != kasan_early_shadow_pte)
>   			continue;
>   
> -		ptep = memblock_alloc(PTE_FRAG_SIZE, PTE_FRAG_SIZE);
> +		ptep = memblock_alloc_no_panic(PTE_FRAG_SIZE, PTE_FRAG_SIZE);
>   		if (!ptep)
>   			return -ENOMEM;
> -
>   		for (i = 0; i < PTRS_PER_PTE; i++) {
>   			pte_t pte = pte_mkhuge(pfn_pte(PHYS_PFN(__pa(block + i * PAGE_SIZE)), PAGE_KERNEL));
>   
> @@ -45,7 +44,7 @@ int __init kasan_init_region(void *start, size_t size)
>   	int ret;
>   	void *block;
>   
> -	block = memblock_alloc(k_end - k_start, SZ_8M);
> +	block = memblock_alloc_no_panic(k_end - k_start, SZ_8M);
>   	if (!block)
>   		return -ENOMEM;
>   
> diff --git a/arch/powerpc/mm/kasan/init_32.c b/arch/powerpc/mm/kasan/init_32.c
> index 03666d790a53..226b9bfbb784 100644
> --- a/arch/powerpc/mm/kasan/init_32.c
> +++ b/arch/powerpc/mm/kasan/init_32.c
> @@ -42,10 +42,10 @@ int __init kasan_init_shadow_page_tables(unsigned long k_start, unsigned long k_
>   		if ((void *)pmd_page_vaddr(*pmd) != kasan_early_shadow_pte)
>   			continue;
>   
> -		new = memblock_alloc(PTE_FRAG_SIZE, PTE_FRAG_SIZE);
> -
> +		new = memblock_alloc_no_panic(PTE_FRAG_SIZE, PTE_FRAG_SIZE);
>   		if (!new)
>   			return -ENOMEM;
> +
>   		kasan_populate_pte(new, PAGE_KERNEL);
>   		pmd_populate_kernel(&init_mm, pmd, new);
>   	}
> @@ -65,7 +65,7 @@ int __init __weak kasan_init_region(void *start, size_t size)
>   		return ret;
>   
>   	k_start = k_start & PAGE_MASK;
> -	block = memblock_alloc(k_end - k_start, PAGE_SIZE);
> +	block = memblock_alloc_no_panic(k_end - k_start, PAGE_SIZE);
>   	if (!block)
>   		return -ENOMEM;
>   
> @@ -129,7 +129,6 @@ void __init kasan_mmu_init(void)
>   
>   	if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE)) {
>   		ret = kasan_init_shadow_page_tables(KASAN_SHADOW_START, KASAN_SHADOW_END);
> -
>   		if (ret)
>   			panic("kasan: kasan_init_shadow_page_tables() failed");
>   	}
> diff --git a/arch/powerpc/mm/kasan/init_book3e_64.c b/arch/powerpc/mm/kasan/init_book3e_64.c
> index 60c78aac0f63..43c03b84ff32 100644
> --- a/arch/powerpc/mm/kasan/init_book3e_64.c
> +++ b/arch/powerpc/mm/kasan/init_book3e_64.c
> @@ -40,19 +40,19 @@ static int __init kasan_map_kernel_page(unsigned long ea, unsigned long pa, pgpr
>   	pgdp = pgd_offset_k(ea);
>   	p4dp = p4d_offset(pgdp, ea);
>   	if (kasan_pud_table(*p4dp)) {
> -		pudp = memblock_alloc_or_panic(PUD_TABLE_SIZE, PUD_TABLE_SIZE);
> +		pudp = memblock_alloc(PUD_TABLE_SIZE, PUD_TABLE_SIZE);
>   		memcpy(pudp, kasan_early_shadow_pud, PUD_TABLE_SIZE);
>   		p4d_populate(&init_mm, p4dp, pudp);
>   	}
>   	pudp = pud_offset(p4dp, ea);
>   	if (kasan_pmd_table(*pudp)) {
> -		pmdp = memblock_alloc_or_panic(PMD_TABLE_SIZE, PMD_TABLE_SIZE);
> +		pmdp = memblock_alloc(PMD_TABLE_SIZE, PMD_TABLE_SIZE);
>   		memcpy(pmdp, kasan_early_shadow_pmd, PMD_TABLE_SIZE);
>   		pud_populate(&init_mm, pudp, pmdp);
>   	}
>   	pmdp = pmd_offset(pudp, ea);
>   	if (kasan_pte_table(*pmdp)) {
> -		ptep = memblock_alloc_or_panic(PTE_TABLE_SIZE, PTE_TABLE_SIZE);
> +		ptep = memblock_alloc(PTE_TABLE_SIZE, PTE_TABLE_SIZE);
>   		memcpy(ptep, kasan_early_shadow_pte, PTE_TABLE_SIZE);
>   		pmd_populate_kernel(&init_mm, pmdp, ptep);
>   	}
> @@ -74,7 +74,7 @@ static void __init kasan_init_phys_region(void *start, void *end)
>   	k_start = ALIGN_DOWN((unsigned long)kasan_mem_to_shadow(start), PAGE_SIZE);
>   	k_end = ALIGN((unsigned long)kasan_mem_to_shadow(end), PAGE_SIZE);
>   
> -	va = memblock_alloc_or_panic(k_end - k_start, PAGE_SIZE);
> +	va = memblock_alloc(k_end - k_start, PAGE_SIZE);
>   	for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE, va += PAGE_SIZE)
>   		kasan_map_kernel_page(k_cur, __pa(va), PAGE_KERNEL);
>   }
> diff --git a/arch/powerpc/mm/kasan/init_book3s_64.c b/arch/powerpc/mm/kasan/init_book3s_64.c
> index 7d959544c077..3fb5ce4f48f4 100644
> --- a/arch/powerpc/mm/kasan/init_book3s_64.c
> +++ b/arch/powerpc/mm/kasan/init_book3s_64.c
> @@ -32,7 +32,7 @@ static void __init kasan_init_phys_region(void *start, void *end)
>   	k_start = ALIGN_DOWN((unsigned long)kasan_mem_to_shadow(start), PAGE_SIZE);
>   	k_end = ALIGN((unsigned long)kasan_mem_to_shadow(end), PAGE_SIZE);
>   
> -	va = memblock_alloc_or_panic(k_end - k_start, PAGE_SIZE);
> +	va = memblock_alloc(k_end - k_start, PAGE_SIZE);
>   	for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE, va += PAGE_SIZE)
>   		map_kernel_page(k_cur, __pa(va), PAGE_KERNEL);
>   }
> diff --git a/arch/powerpc/mm/nohash/mmu_context.c b/arch/powerpc/mm/nohash/mmu_context.c
> index a1a4e697251a..eb9ea3e88a10 100644
> --- a/arch/powerpc/mm/nohash/mmu_context.c
> +++ b/arch/powerpc/mm/nohash/mmu_context.c
> @@ -385,11 +385,11 @@ void __init mmu_context_init(void)
>   	/*
>   	 * Allocate the maps used by context management
>   	 */
> -	context_map = memblock_alloc_or_panic(CTX_MAP_SIZE, SMP_CACHE_BYTES);
> -	context_mm = memblock_alloc_or_panic(sizeof(void *) * (LAST_CONTEXT + 1),
> +	context_map = memblock_alloc(CTX_MAP_SIZE, SMP_CACHE_BYTES);
> +	context_mm = memblock_alloc(sizeof(void *) * (LAST_CONTEXT + 1),
>   				    SMP_CACHE_BYTES);
>   	if (IS_ENABLED(CONFIG_SMP)) {
> -		stale_map[boot_cpuid] = memblock_alloc_or_panic(CTX_MAP_SIZE, SMP_CACHE_BYTES);
> +		stale_map[boot_cpuid] = memblock_alloc(CTX_MAP_SIZE, SMP_CACHE_BYTES);
>   		cpuhp_setup_state_nocalls(CPUHP_POWERPC_MMU_CTX_PREPARE,
>   					  "powerpc/mmu/ctx:prepare",
>   					  mmu_ctx_cpu_prepare, mmu_ctx_cpu_dead);
> diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
> index 15276068f657..8a523d91512f 100644
> --- a/arch/powerpc/mm/pgtable_32.c
> +++ b/arch/powerpc/mm/pgtable_32.c
> @@ -50,7 +50,7 @@ notrace void __init early_ioremap_init(void)
>   
>   void __init *early_alloc_pgtable(unsigned long size)
>   {
> -	return memblock_alloc_or_panic(size, size);
> +	return memblock_alloc(size, size);
>   
>   }
>   
> diff --git a/arch/powerpc/platforms/powermac/nvram.c b/arch/powerpc/platforms/powermac/nvram.c
> index a112d26185a0..e4fec71444cf 100644
> --- a/arch/powerpc/platforms/powermac/nvram.c
> +++ b/arch/powerpc/platforms/powermac/nvram.c
> @@ -514,7 +514,7 @@ static int __init core99_nvram_setup(struct device_node *dp, unsigned long addr)
>   		printk(KERN_ERR "nvram: no address\n");
>   		return -EINVAL;
>   	}
> -	nvram_image = memblock_alloc_or_panic(NVRAM_SIZE, SMP_CACHE_BYTES);
> +	nvram_image = memblock_alloc(NVRAM_SIZE, SMP_CACHE_BYTES);
>   	nvram_data = ioremap(addr, NVRAM_SIZE*2);
>   	nvram_naddrs = 1; /* Make sure we get the correct case */
>   
> diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c
> index 09bd93464b4f..5763f6e6eb1c 100644
> --- a/arch/powerpc/platforms/powernv/opal.c
> +++ b/arch/powerpc/platforms/powernv/opal.c
> @@ -180,7 +180,7 @@ int __init early_init_dt_scan_recoverable_ranges(unsigned long node,
>   	/*
>   	 * Allocate a buffer to hold the MC recoverable ranges.
>   	 */
> -	mc_recoverable_range = memblock_alloc_or_panic(size, __alignof__(u64));
> +	mc_recoverable_range = memblock_alloc(size, __alignof__(u64));
>   
>   	for (i = 0; i < mc_recoverable_range_len; i++) {
>   		mc_recoverable_range[i].start_addr =
> diff --git a/arch/powerpc/platforms/ps3/setup.c b/arch/powerpc/platforms/ps3/setup.c
> index 150c09b58ae8..082935871b6d 100644
> --- a/arch/powerpc/platforms/ps3/setup.c
> +++ b/arch/powerpc/platforms/ps3/setup.c
> @@ -115,7 +115,7 @@ static void __init prealloc(struct ps3_prealloc *p)
>   	if (!p->size)
>   		return;
>   
> -	p->address = memblock_alloc_or_panic(p->size, p->align);
> +	p->address = memblock_alloc(p->size, p->align);
>   
>   	printk(KERN_INFO "%s: %lu bytes at %p\n", p->name, p->size,
>   	       p->address);
> diff --git a/arch/powerpc/sysdev/msi_bitmap.c b/arch/powerpc/sysdev/msi_bitmap.c
> index 456a4f64ae0a..87ec0dc8db3b 100644
> --- a/arch/powerpc/sysdev/msi_bitmap.c
> +++ b/arch/powerpc/sysdev/msi_bitmap.c
> @@ -124,7 +124,7 @@ int __ref msi_bitmap_alloc(struct msi_bitmap *bmp, unsigned int irq_count,
>   	if (bmp->bitmap_from_slab)
>   		bmp->bitmap = kzalloc(size, GFP_KERNEL);
>   	else {
> -		bmp->bitmap = memblock_alloc_or_panic(size, SMP_CACHE_BYTES);
> +		bmp->bitmap = memblock_alloc(size, SMP_CACHE_BYTES);
>   		/* the bitmap won't be freed from memblock allocator */
>   		kmemleak_not_leak(bmp->bitmap);
>   	}
> diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
> index f1793630fc51..3087810c29ca 100644
> --- a/arch/riscv/kernel/setup.c
> +++ b/arch/riscv/kernel/setup.c
> @@ -147,7 +147,7 @@ static void __init init_resources(void)
>   	res_idx = num_resources - 1;
>   
>   	mem_res_sz = num_resources * sizeof(*mem_res);
> -	mem_res = memblock_alloc_or_panic(mem_res_sz, SMP_CACHE_BYTES);
> +	mem_res = memblock_alloc(mem_res_sz, SMP_CACHE_BYTES);
>   
>   	/*
>   	 * Start by adding the reserved regions, if they overlap
> diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
> index 41c635d6aca4..c301c8d291d2 100644
> --- a/arch/riscv/mm/kasan_init.c
> +++ b/arch/riscv/mm/kasan_init.c
> @@ -32,7 +32,7 @@ static void __init kasan_populate_pte(pmd_t *pmd, unsigned long vaddr, unsigned
>   	pte_t *ptep, *p;
>   
>   	if (pmd_none(pmdp_get(pmd))) {
> -		p = memblock_alloc_or_panic(PTRS_PER_PTE * sizeof(pte_t), PAGE_SIZE);
> +		p = memblock_alloc(PTRS_PER_PTE * sizeof(pte_t), PAGE_SIZE);
>   		set_pmd(pmd, pfn_pmd(PFN_DOWN(__pa(p)), PAGE_TABLE));
>   	}
>   
> @@ -54,7 +54,7 @@ static void __init kasan_populate_pmd(pud_t *pud, unsigned long vaddr, unsigned
>   	unsigned long next;
>   
>   	if (pud_none(pudp_get(pud))) {
> -		p = memblock_alloc_or_panic(PTRS_PER_PMD * sizeof(pmd_t), PAGE_SIZE);
> +		p = memblock_alloc(PTRS_PER_PMD * sizeof(pmd_t), PAGE_SIZE);
>   		set_pud(pud, pfn_pud(PFN_DOWN(__pa(p)), PAGE_TABLE));
>   	}
>   
> @@ -85,7 +85,7 @@ static void __init kasan_populate_pud(p4d_t *p4d,
>   	unsigned long next;
>   
>   	if (p4d_none(p4dp_get(p4d))) {
> -		p = memblock_alloc_or_panic(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE);
> +		p = memblock_alloc(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE);
>   		set_p4d(p4d, pfn_p4d(PFN_DOWN(__pa(p)), PAGE_TABLE));
>   	}
>   
> @@ -116,7 +116,7 @@ static void __init kasan_populate_p4d(pgd_t *pgd,
>   	unsigned long next;
>   
>   	if (pgd_none(pgdp_get(pgd))) {
> -		p = memblock_alloc_or_panic(PTRS_PER_P4D * sizeof(p4d_t), PAGE_SIZE);
> +		p = memblock_alloc(PTRS_PER_P4D * sizeof(p4d_t), PAGE_SIZE);
>   		set_pgd(pgd, pfn_pgd(PFN_DOWN(__pa(p)), PAGE_TABLE));
>   	}
>   
> @@ -385,7 +385,7 @@ static void __init kasan_shallow_populate_pud(p4d_t *p4d,
>   		next = pud_addr_end(vaddr, end);
>   
>   		if (pud_none(pudp_get(pud_k))) {
> -			p = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
> +			p = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>   			set_pud(pud_k, pfn_pud(PFN_DOWN(__pa(p)), PAGE_TABLE));
>   			continue;
>   		}
> @@ -405,7 +405,7 @@ static void __init kasan_shallow_populate_p4d(pgd_t *pgd,
>   		next = p4d_addr_end(vaddr, end);
>   
>   		if (p4d_none(p4dp_get(p4d_k))) {
> -			p = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
> +			p = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>   			set_p4d(p4d_k, pfn_p4d(PFN_DOWN(__pa(p)), PAGE_TABLE));
>   			continue;
>   		}
> @@ -424,7 +424,7 @@ static void __init kasan_shallow_populate_pgd(unsigned long vaddr, unsigned long
>   		next = pgd_addr_end(vaddr, end);
>   
>   		if (pgd_none(pgdp_get(pgd_k))) {
> -			p = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
> +			p = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>   			set_pgd(pgd_k, pfn_pgd(PFN_DOWN(__pa(p)), PAGE_TABLE));
>   			continue;
>   		}
> diff --git a/arch/s390/kernel/crash_dump.c b/arch/s390/kernel/crash_dump.c
> index f699df2a2b11..53cdfec9398c 100644
> --- a/arch/s390/kernel/crash_dump.c
> +++ b/arch/s390/kernel/crash_dump.c
> @@ -63,7 +63,7 @@ struct save_area * __init save_area_alloc(bool is_boot_cpu)
>   {
>   	struct save_area *sa;
>   
> -	sa = memblock_alloc(sizeof(*sa), 8);
> +	sa = memblock_alloc_no_panic(sizeof(*sa), 8);
>   	if (!sa)
>   		return NULL;
>   
> diff --git a/arch/s390/kernel/numa.c b/arch/s390/kernel/numa.c
> index a33e20f73330..1b589d575567 100644
> --- a/arch/s390/kernel/numa.c
> +++ b/arch/s390/kernel/numa.c
> @@ -22,7 +22,7 @@ void __init numa_setup(void)
>   	node_set(0, node_possible_map);
>   	node_set_online(0);
>   	for (nid = 0; nid < MAX_NUMNODES; nid++) {
> -		NODE_DATA(nid) = memblock_alloc_or_panic(sizeof(pg_data_t), 8);
> +		NODE_DATA(nid) = memblock_alloc(sizeof(pg_data_t), 8);
>   	}
>   	NODE_DATA(0)->node_spanned_pages = memblock_end_of_DRAM() >> PAGE_SHIFT;
>   	NODE_DATA(0)->node_id = 0;
> diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
> index f873535eddd2..e51426113f26 100644
> --- a/arch/s390/kernel/setup.c
> +++ b/arch/s390/kernel/setup.c
> @@ -384,7 +384,7 @@ static unsigned long __init stack_alloc_early(void)
>   {
>   	unsigned long stack;
>   
> -	stack = (unsigned long)memblock_alloc_or_panic(THREAD_SIZE, THREAD_SIZE);
> +	stack = (unsigned long)memblock_alloc(THREAD_SIZE, THREAD_SIZE);
>   	return stack;
>   }
>   
> @@ -508,7 +508,7 @@ static void __init setup_resources(void)
>   	bss_resource.end = __pa_symbol(__bss_stop) - 1;
>   
>   	for_each_mem_range(i, &start, &end) {
> -		res = memblock_alloc_or_panic(sizeof(*res), 8);
> +		res = memblock_alloc(sizeof(*res), 8);
>   		res->flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM;
>   
>   		res->name = "System RAM";
> @@ -527,7 +527,7 @@ static void __init setup_resources(void)
>   			    std_res->start > res->end)
>   				continue;
>   			if (std_res->end > res->end) {
> -				sub_res = memblock_alloc_or_panic(sizeof(*sub_res), 8);
> +				sub_res = memblock_alloc(sizeof(*sub_res), 8);
>   				*sub_res = *std_res;
>   				sub_res->end = res->end;
>   				std_res->start = res->end + 1;
> @@ -814,7 +814,7 @@ static void __init setup_randomness(void)
>   {
>   	struct sysinfo_3_2_2 *vmms;
>   
> -	vmms = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
> +	vmms = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>   	if (stsi(vmms, 3, 2, 2) == 0 && vmms->count)
>   		add_device_randomness(&vmms->vm, sizeof(vmms->vm[0]) * vmms->count);
>   	memblock_free(vmms, PAGE_SIZE);
> diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
> index d77aaefb59bd..9eb4508b4ca4 100644
> --- a/arch/s390/kernel/smp.c
> +++ b/arch/s390/kernel/smp.c
> @@ -613,7 +613,7 @@ void __init smp_save_dump_ipl_cpu(void)
>   	sa = save_area_alloc(true);
>   	if (!sa)
>   		panic("could not allocate memory for boot CPU save area\n");
> -	regs = memblock_alloc_or_panic(512, 8);
> +	regs = memblock_alloc(512, 8);
>   	copy_oldmem_kernel(regs, __LC_FPREGS_SAVE_AREA, 512);
>   	save_area_add_regs(sa, regs);
>   	memblock_free(regs, 512);
> @@ -792,7 +792,7 @@ void __init smp_detect_cpus(void)
>   	u16 address;
>   
>   	/* Get CPU information */
> -	info = memblock_alloc_or_panic(sizeof(*info), 8);
> +	info = memblock_alloc(sizeof(*info), 8);
>   	smp_get_core_info(info, 1);
>   	/* Find boot CPU type */
>   	if (sclp.has_core_type) {
> diff --git a/arch/s390/kernel/topology.c b/arch/s390/kernel/topology.c
> index cf5ee6032c0b..fef1c7b4951d 100644
> --- a/arch/s390/kernel/topology.c
> +++ b/arch/s390/kernel/topology.c
> @@ -548,7 +548,7 @@ static void __init alloc_masks(struct sysinfo_15_1_x *info,
>   		nr_masks *= info->mag[TOPOLOGY_NR_MAG - offset - 1 - i];
>   	nr_masks = max(nr_masks, 1);
>   	for (i = 0; i < nr_masks; i++) {
> -		mask->next = memblock_alloc_or_panic(sizeof(*mask->next), 8);
> +		mask->next = memblock_alloc(sizeof(*mask->next), 8);
>   		mask = mask->next;
>   	}
>   }
> @@ -566,7 +566,7 @@ void __init topology_init_early(void)
>   	}
>   	if (!MACHINE_HAS_TOPOLOGY)
>   		goto out;
> -	tl_info = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
> +	tl_info = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>   	info = tl_info;
>   	store_topology(info);
>   	pr_info("The CPU configuration topology of the machine is: %d %d %d %d %d %d / %d\n",
> diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
> index 665b8228afeb..df43575564a3 100644
> --- a/arch/s390/mm/vmem.c
> +++ b/arch/s390/mm/vmem.c
> @@ -33,7 +33,7 @@ static void __ref *vmem_alloc_pages(unsigned int order)
>   
>   	if (slab_is_available())
>   		return (void *)__get_free_pages(GFP_KERNEL, order);
> -	return memblock_alloc(size, size);
> +	return memblock_alloc_no_panic(size, size);
>   }
>   
>   static void vmem_free_pages(unsigned long addr, int order, struct vmem_altmap *altmap)
> @@ -69,7 +69,7 @@ pte_t __ref *vmem_pte_alloc(void)
>   	if (slab_is_available())
>   		pte = (pte_t *) page_table_alloc(&init_mm);
>   	else
> -		pte = (pte_t *) memblock_alloc(size, size);
> +		pte = (pte_t *) memblock_alloc_no_panic(size, size);
>   	if (!pte)
>   		return NULL;
>   	memset64((u64 *)pte, _PAGE_INVALID, PTRS_PER_PTE);
> diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c
> index 289a2fecebef..d64c4b54e289 100644
> --- a/arch/sh/mm/init.c
> +++ b/arch/sh/mm/init.c
> @@ -137,7 +137,7 @@ static pmd_t * __init one_md_table_init(pud_t *pud)
>   	if (pud_none(*pud)) {
>   		pmd_t *pmd;
>   
> -		pmd = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
> +		pmd = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>   		pud_populate(&init_mm, pud, pmd);
>   		BUG_ON(pmd != pmd_offset(pud, 0));
>   	}
> @@ -150,7 +150,7 @@ static pte_t * __init one_page_table_init(pmd_t *pmd)
>   	if (pmd_none(*pmd)) {
>   		pte_t *pte;
>   
> -		pte = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
> +		pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>   		pmd_populate_kernel(&init_mm, pmd, pte);
>   		BUG_ON(pte != pte_offset_kernel(pmd, 0));
>   	}
> diff --git a/arch/sparc/kernel/prom_32.c b/arch/sparc/kernel/prom_32.c
> index a67dd67f10c8..e6dfa3895bb5 100644
> --- a/arch/sparc/kernel/prom_32.c
> +++ b/arch/sparc/kernel/prom_32.c
> @@ -28,7 +28,7 @@ void * __init prom_early_alloc(unsigned long size)
>   {
>   	void *ret;
>   
> -	ret = memblock_alloc_or_panic(size, SMP_CACHE_BYTES);
> +	ret = memblock_alloc(size, SMP_CACHE_BYTES);
>   
>   	prom_early_allocated += size;
>   
> diff --git a/arch/sparc/kernel/prom_64.c b/arch/sparc/kernel/prom_64.c
> index ba82884cb92a..197771fdf8cc 100644
> --- a/arch/sparc/kernel/prom_64.c
> +++ b/arch/sparc/kernel/prom_64.c
> @@ -30,7 +30,7 @@
>   
>   void * __init prom_early_alloc(unsigned long size)
>   {
> -	void *ret = memblock_alloc(size, SMP_CACHE_BYTES);
> +	void *ret = memblock_alloc_no_panic(size, SMP_CACHE_BYTES);
>   
>   	if (!ret) {
>   		prom_printf("prom_early_alloc(%lu) failed\n", size);
> diff --git a/arch/sparc/mm/init_32.c b/arch/sparc/mm/init_32.c
> index d96a14ffceeb..65a4d8ec3972 100644
> --- a/arch/sparc/mm/init_32.c
> +++ b/arch/sparc/mm/init_32.c
> @@ -265,7 +265,7 @@ void __init mem_init(void)
>   	i = last_valid_pfn >> ((20 - PAGE_SHIFT) + 5);
>   	i += 1;
>   	sparc_valid_addr_bitmap = (unsigned long *)
> -		memblock_alloc(i << 2, SMP_CACHE_BYTES);
> +		memblock_alloc_no_panic(i << 2, SMP_CACHE_BYTES);
>   
>   	if (sparc_valid_addr_bitmap == NULL) {
>   		prom_printf("mem_init: Cannot alloc valid_addr_bitmap.\n");
> diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
> index dd32711022f5..4a7d558ed0c9 100644
> --- a/arch/sparc/mm/srmmu.c
> +++ b/arch/sparc/mm/srmmu.c
> @@ -277,12 +277,12 @@ static void __init srmmu_nocache_init(void)
>   
>   	bitmap_bits = srmmu_nocache_size >> SRMMU_NOCACHE_BITMAP_SHIFT;
>   
> -	srmmu_nocache_pool = memblock_alloc_or_panic(srmmu_nocache_size,
> +	srmmu_nocache_pool = memblock_alloc(srmmu_nocache_size,
>   					    SRMMU_NOCACHE_ALIGN_MAX);
>   	memset(srmmu_nocache_pool, 0, srmmu_nocache_size);
>   
>   	srmmu_nocache_bitmap =
> -		memblock_alloc_or_panic(BITS_TO_LONGS(bitmap_bits) * sizeof(long),
> +		memblock_alloc(BITS_TO_LONGS(bitmap_bits) * sizeof(long),
>   			       SMP_CACHE_BYTES);
>   	bit_map_init(&srmmu_nocache_map, srmmu_nocache_bitmap, bitmap_bits);
>   
> @@ -446,7 +446,7 @@ static void __init sparc_context_init(int numctx)
>   	unsigned long size;
>   
>   	size = numctx * sizeof(struct ctx_list);
> -	ctx_list_pool = memblock_alloc_or_panic(size, SMP_CACHE_BYTES);
> +	ctx_list_pool = memblock_alloc(size, SMP_CACHE_BYTES);
>   
>   	for (ctx = 0; ctx < numctx; ctx++) {
>   		struct ctx_list *clist;
> diff --git a/arch/um/drivers/net_kern.c b/arch/um/drivers/net_kern.c
> index d5a9c5aabaec..cf3a20440293 100644
> --- a/arch/um/drivers/net_kern.c
> +++ b/arch/um/drivers/net_kern.c
> @@ -636,7 +636,7 @@ static int __init eth_setup(char *str)
>   		return 1;
>   	}
>   
> -	new = memblock_alloc_or_panic(sizeof(*new), SMP_CACHE_BYTES);
> +	new = memblock_alloc(sizeof(*new), SMP_CACHE_BYTES);
>   
>   	INIT_LIST_HEAD(&new->list);
>   	new->index = n;
> diff --git a/arch/um/drivers/vector_kern.c b/arch/um/drivers/vector_kern.c
> index 85b129e2b70b..096fefb73e09 100644
> --- a/arch/um/drivers/vector_kern.c
> +++ b/arch/um/drivers/vector_kern.c
> @@ -1694,7 +1694,7 @@ static int __init vector_setup(char *str)
>   				 str, error);
>   		return 1;
>   	}
> -	new = memblock_alloc_or_panic(sizeof(*new), SMP_CACHE_BYTES);
> +	new = memblock_alloc(sizeof(*new), SMP_CACHE_BYTES);
>   	INIT_LIST_HEAD(&new->list);
>   	new->unit = n;
>   	new->arguments = str;
> diff --git a/arch/um/kernel/load_file.c b/arch/um/kernel/load_file.c
> index cb9d178ab7d8..00e0b789e5ab 100644
> --- a/arch/um/kernel/load_file.c
> +++ b/arch/um/kernel/load_file.c
> @@ -48,7 +48,7 @@ void *uml_load_file(const char *filename, unsigned long long *size)
>   		return NULL;
>   	}
>   
> -	area = memblock_alloc_or_panic(*size, SMP_CACHE_BYTES);
> +	area = memblock_alloc(*size, SMP_CACHE_BYTES);
>   
>   	if (__uml_load_file(filename, area, *size)) {
>   		memblock_free(area, *size);
> diff --git a/arch/x86/coco/sev/core.c b/arch/x86/coco/sev/core.c
> index a3c9b7c67640..6dde4ebc7b8e 100644
> --- a/arch/x86/coco/sev/core.c
> +++ b/arch/x86/coco/sev/core.c
> @@ -1572,7 +1572,7 @@ static void __init alloc_runtime_data(int cpu)
>   		struct svsm_ca *caa;
>   
>   		/* Allocate the SVSM CA page if an SVSM is present */
> -		caa = memblock_alloc_or_panic(sizeof(*caa), PAGE_SIZE);
> +		caa = memblock_alloc(sizeof(*caa), PAGE_SIZE);
>   
>   		per_cpu(svsm_caa, cpu) = caa;
>   		per_cpu(svsm_caa_pa, cpu) = __pa(caa);
> diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
> index 7c15d6e83c37..a54cdd5be071 100644
> --- a/arch/x86/kernel/acpi/boot.c
> +++ b/arch/x86/kernel/acpi/boot.c
> @@ -911,7 +911,7 @@ static int __init acpi_parse_hpet(struct acpi_table_header *table)
>   	 * the resource tree during the lateinit timeframe.
>   	 */
>   #define HPET_RESOURCE_NAME_SIZE 9
> -	hpet_res = memblock_alloc_or_panic(sizeof(*hpet_res) + HPET_RESOURCE_NAME_SIZE,
> +	hpet_res = memblock_alloc(sizeof(*hpet_res) + HPET_RESOURCE_NAME_SIZE,
>   				  SMP_CACHE_BYTES);
>   
>   	hpet_res->name = (void *)&hpet_res[1];
> diff --git a/arch/x86/kernel/acpi/madt_wakeup.c b/arch/x86/kernel/acpi/madt_wakeup.c
> index d5ef6215583b..bae8b3452834 100644
> --- a/arch/x86/kernel/acpi/madt_wakeup.c
> +++ b/arch/x86/kernel/acpi/madt_wakeup.c
> @@ -62,7 +62,7 @@ static void acpi_mp_cpu_die(unsigned int cpu)
>   /* The argument is required to match type of x86_mapping_info::alloc_pgt_page */
>   static void __init *alloc_pgt_page(void *dummy)
>   {
> -	return memblock_alloc(PAGE_SIZE, PAGE_SIZE);
> +	return memblock_alloc_no_panic(PAGE_SIZE, PAGE_SIZE);
>   }
>   
>   static void __init free_pgt_page(void *pgt, void *dummy)
> diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
> index a57d3fa7c6b6..ebb00747f135 100644
> --- a/arch/x86/kernel/apic/io_apic.c
> +++ b/arch/x86/kernel/apic/io_apic.c
> @@ -2503,7 +2503,7 @@ static struct resource * __init ioapic_setup_resources(void)
>   	n = IOAPIC_RESOURCE_NAME_SIZE + sizeof(struct resource);
>   	n *= nr_ioapics;
>   
> -	mem = memblock_alloc_or_panic(n, SMP_CACHE_BYTES);
> +	mem = memblock_alloc(n, SMP_CACHE_BYTES);
>   	res = (void *)mem;
>   
>   	mem += sizeof(struct resource) * nr_ioapics;
> @@ -2562,7 +2562,7 @@ void __init io_apic_init_mappings(void)
>   #ifdef CONFIG_X86_32
>   fake_ioapic_page:
>   #endif
> -			ioapic_phys = (unsigned long)memblock_alloc_or_panic(PAGE_SIZE,
> +			ioapic_phys = (unsigned long)memblock_alloc(PAGE_SIZE,
>   								    PAGE_SIZE);
>   			ioapic_phys = __pa(ioapic_phys);
>   		}
> diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
> index 82b96ed9890a..7c9b25c5f209 100644
> --- a/arch/x86/kernel/e820.c
> +++ b/arch/x86/kernel/e820.c
> @@ -1146,7 +1146,7 @@ void __init e820__reserve_resources(void)
>   	struct resource *res;
>   	u64 end;
>   
> -	res = memblock_alloc_or_panic(sizeof(*res) * e820_table->nr_entries,
> +	res = memblock_alloc(sizeof(*res) * e820_table->nr_entries,
>   			     SMP_CACHE_BYTES);
>   	e820_res = res;
>   
> diff --git a/arch/x86/platform/olpc/olpc_dt.c b/arch/x86/platform/olpc/olpc_dt.c
> index cf5dca2dbb91..90be2eef3910 100644
> --- a/arch/x86/platform/olpc/olpc_dt.c
> +++ b/arch/x86/platform/olpc/olpc_dt.c
> @@ -136,7 +136,7 @@ void * __init prom_early_alloc(unsigned long size)
>   		 * fast enough on the platforms we care about while minimizing
>   		 * wasted bootmem) and hand off chunks of it to callers.
>   		 */
> -		res = memblock_alloc_or_panic(chunk_size, SMP_CACHE_BYTES);
> +		res = memblock_alloc(chunk_size, SMP_CACHE_BYTES);
>   		prom_early_allocated += chunk_size;
>   		memset(res, 0, chunk_size);
>   		free_mem = chunk_size;
> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> index 56914e21e303..468cfdcf9147 100644
> --- a/arch/x86/xen/p2m.c
> +++ b/arch/x86/xen/p2m.c
> @@ -178,7 +178,7 @@ static void p2m_init_identity(unsigned long *p2m, unsigned long pfn)
>   static void * __ref alloc_p2m_page(void)
>   {
>   	if (unlikely(!slab_is_available())) {
> -		return memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
> +		return memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>   	}
>   
>   	return (void *)__get_free_page(GFP_KERNEL);
> diff --git a/arch/xtensa/mm/kasan_init.c b/arch/xtensa/mm/kasan_init.c
> index f39c4d83173a..50ee88d9f2cc 100644
> --- a/arch/xtensa/mm/kasan_init.c
> +++ b/arch/xtensa/mm/kasan_init.c
> @@ -39,7 +39,7 @@ static void __init populate(void *start, void *end)
>   	unsigned long i, j;
>   	unsigned long vaddr = (unsigned long)start;
>   	pmd_t *pmd = pmd_off_k(vaddr);
> -	pte_t *pte = memblock_alloc_or_panic(n_pages * sizeof(pte_t), PAGE_SIZE);
> +	pte_t *pte = memblock_alloc(n_pages * sizeof(pte_t), PAGE_SIZE);
>   
>   	pr_debug("%s: %p - %p\n", __func__, start, end);
>   
> diff --git a/arch/xtensa/platforms/iss/network.c b/arch/xtensa/platforms/iss/network.c
> index e89f27f2bb18..74e2a149f35f 100644
> --- a/arch/xtensa/platforms/iss/network.c
> +++ b/arch/xtensa/platforms/iss/network.c
> @@ -604,7 +604,7 @@ static int __init iss_net_setup(char *str)
>   		return 1;
>   	}
>   
> -	new = memblock_alloc(sizeof(*new), SMP_CACHE_BYTES);
> +	new = memblock_alloc_no_panic(sizeof(*new), SMP_CACHE_BYTES);
>   	if (new == NULL) {
>   		pr_err("Alloc_bootmem failed\n");
>   		return 1;
> diff --git a/drivers/clk/ti/clk.c b/drivers/clk/ti/clk.c
> index 9c75dcc9a534..7ef6f6e1d063 100644
> --- a/drivers/clk/ti/clk.c
> +++ b/drivers/clk/ti/clk.c
> @@ -449,7 +449,7 @@ void __init omap2_clk_legacy_provider_init(int index, void __iomem *mem)
>   {
>   	struct clk_iomap *io;
>   
> -	io = memblock_alloc_or_panic(sizeof(*io), SMP_CACHE_BYTES);
> +	io = memblock_alloc(sizeof(*io), SMP_CACHE_BYTES);
>   
>   	io->mem = mem;
>   
> diff --git a/drivers/firmware/memmap.c b/drivers/firmware/memmap.c
> index 55b9cfad8a04..4cef459855c2 100644
> --- a/drivers/firmware/memmap.c
> +++ b/drivers/firmware/memmap.c
> @@ -325,7 +325,7 @@ int __init firmware_map_add_early(u64 start, u64 end, const char *type)
>   {
>   	struct firmware_map_entry *entry;
>   
> -	entry = memblock_alloc(sizeof(struct firmware_map_entry),
> +	entry = memblock_alloc_no_panic(sizeof(struct firmware_map_entry),
>   			       SMP_CACHE_BYTES);
>   	if (WARN_ON(!entry))
>   		return -ENOMEM;
> diff --git a/drivers/macintosh/smu.c b/drivers/macintosh/smu.c
> index a1534cc6c641..e93fbe71ed90 100644
> --- a/drivers/macintosh/smu.c
> +++ b/drivers/macintosh/smu.c
> @@ -492,7 +492,7 @@ int __init smu_init (void)
>   		goto fail_np;
>   	}
>   
> -	smu = memblock_alloc_or_panic(sizeof(struct smu_device), SMP_CACHE_BYTES);
> +	smu = memblock_alloc(sizeof(struct smu_device), SMP_CACHE_BYTES);
>   	spin_lock_init(&smu->lock);
>   	INIT_LIST_HEAD(&smu->cmd_list);
>   	INIT_LIST_HEAD(&smu->cmd_i2c_list);
> diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
> index 2eb718fbeffd..d1f33510db67 100644
> --- a/drivers/of/fdt.c
> +++ b/drivers/of/fdt.c
> @@ -1126,7 +1126,7 @@ void __init __weak early_init_dt_add_memory_arch(u64 base, u64 size)
>   
>   static void * __init early_init_dt_alloc_memory_arch(u64 size, u64 align)
>   {
> -	return memblock_alloc_or_panic(size, align);
> +	return memblock_alloc(size, align);
>   }
>   
>   bool __init early_init_dt_verify(void *dt_virt, phys_addr_t dt_phys)
> diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
> index 45517b9e57b1..f46c8639b535 100644
> --- a/drivers/of/of_reserved_mem.c
> +++ b/drivers/of/of_reserved_mem.c
> @@ -79,7 +79,7 @@ static void __init alloc_reserved_mem_array(void)
>   		return;
>   	}
>   
> -	new_array = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
> +	new_array = memblock_alloc_no_panic(alloc_size, SMP_CACHE_BYTES);
>   	if (!new_array) {
>   		pr_err("Failed to allocate memory for reserved_mem array with err: %d", -ENOMEM);
>   		return;
> diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
> index 6e8561dba537..e8b0c8d430c2 100644
> --- a/drivers/of/unittest.c
> +++ b/drivers/of/unittest.c
> @@ -3666,7 +3666,7 @@ static struct device_node *overlay_base_root;
>   
>   static void * __init dt_alloc_memory(u64 size, u64 align)
>   {
> -	return memblock_alloc_or_panic(size, align);
> +	return memblock_alloc(size, align);
>   }
>   
>   /*
> diff --git a/drivers/usb/early/xhci-dbc.c b/drivers/usb/early/xhci-dbc.c
> index 341408410ed9..2f4172c98eaf 100644
> --- a/drivers/usb/early/xhci-dbc.c
> +++ b/drivers/usb/early/xhci-dbc.c
> @@ -94,7 +94,7 @@ static void * __init xdbc_get_page(dma_addr_t *dma_addr)
>   {
>   	void *virt;
>   
> -	virt = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
> +	virt = memblock_alloc_no_panic(PAGE_SIZE, PAGE_SIZE);
>   	if (!virt)
>   		return NULL;
>   
> diff --git a/include/linux/memblock.h b/include/linux/memblock.h
> index dee628350cd1..6b21a3834225 100644
> --- a/include/linux/memblock.h
> +++ b/include/linux/memblock.h
> @@ -417,11 +417,13 @@ static __always_inline void *memblock_alloc(phys_addr_t size, phys_addr_t align)
>   				      MEMBLOCK_ALLOC_ACCESSIBLE, NUMA_NO_NODE);
>   }
>   
> -void *__memblock_alloc_or_panic(phys_addr_t size, phys_addr_t align,
> -				const char *func);
> +void *__memblock_alloc_panic(phys_addr_t size, phys_addr_t align,
> +				const char *func, bool should_panic);
>   
> -#define memblock_alloc_or_panic(size, align)    \
> -	 __memblock_alloc_or_panic(size, align, __func__)
> +#define memblock_alloc(size, align)    \
> +	 __memblock_alloc_panic(size, align, __func__, true)
> +#define memblock_alloc_no_panic(size, align)    \
> +	 __memblock_alloc_panic(size, align, __func__, false)
>   
>   static inline void *memblock_alloc_raw(phys_addr_t size,
>   					       phys_addr_t align)
> diff --git a/init/main.c b/init/main.c
> index 4bae539ebc05..302f85078e2b 100644
> --- a/init/main.c
> +++ b/init/main.c
> @@ -379,7 +379,7 @@ static char * __init xbc_make_cmdline(const char *key)
>   	if (len <= 0)
>   		return NULL;
>   
> -	new_cmdline = memblock_alloc(len + 1, SMP_CACHE_BYTES);
> +	new_cmdline = memblock_alloc_no_panic(len + 1, SMP_CACHE_BYTES);
>   	if (!new_cmdline) {
>   		pr_err("Failed to allocate memory for extra kernel cmdline.\n");
>   		return NULL;
> @@ -640,11 +640,11 @@ static void __init setup_command_line(char *command_line)
>   
>   	len = xlen + strlen(boot_command_line) + ilen + 1;
>   
> -	saved_command_line = memblock_alloc_or_panic(len, SMP_CACHE_BYTES);
> +	saved_command_line = memblock_alloc(len, SMP_CACHE_BYTES);
>   
>   	len = xlen + strlen(command_line) + 1;
>   
> -	static_command_line = memblock_alloc_or_panic(len, SMP_CACHE_BYTES);
> +	static_command_line = memblock_alloc(len, SMP_CACHE_BYTES);
>   
>   	if (xlen) {
>   		/*
> @@ -860,7 +860,7 @@ static void __init print_unknown_bootoptions(void)
>   		len += strlen(*p);
>   	}
>   
> -	unknown_options = memblock_alloc(len, SMP_CACHE_BYTES);
> +	unknown_options = memblock_alloc_no_panic(len, SMP_CACHE_BYTES);
>   	if (!unknown_options) {
>   		pr_err("%s: Failed to allocate %zu bytes\n",
>   			__func__, len);
> @@ -1141,9 +1141,9 @@ static int __init initcall_blacklist(char *str)
>   		str_entry = strsep(&str, ",");
>   		if (str_entry) {
>   			pr_debug("blacklisting initcall %s\n", str_entry);
> -			entry = memblock_alloc_or_panic(sizeof(*entry),
> +			entry = memblock_alloc(sizeof(*entry),
>   					       SMP_CACHE_BYTES);
> -			entry->buf = memblock_alloc_or_panic(strlen(str_entry) + 1,
> +			entry->buf = memblock_alloc(strlen(str_entry) + 1,
>   						    SMP_CACHE_BYTES);
>   			strcpy(entry->buf, str_entry);
>   			list_add(&entry->next, &blacklisted_initcalls);
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index abcf3fa63a56..85381f2b8ab3 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -328,7 +328,7 @@ static void __init *swiotlb_memblock_alloc(unsigned long nslabs,
>   	 * memory encryption.
>   	 */
>   	if (flags & SWIOTLB_ANY)
> -		tlb = memblock_alloc(bytes, PAGE_SIZE);
> +		tlb = memblock_alloc_no_panic(bytes, PAGE_SIZE);
>   	else
>   		tlb = memblock_alloc_low(bytes, PAGE_SIZE);
>   
> @@ -396,14 +396,14 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
>   	}
>   
>   	alloc_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), nslabs));
> -	mem->slots = memblock_alloc(alloc_size, PAGE_SIZE);
> +	mem->slots = memblock_alloc_no_panic(alloc_size, PAGE_SIZE);
>   	if (!mem->slots) {
>   		pr_warn("%s: Failed to allocate %zu bytes align=0x%lx\n",
>   			__func__, alloc_size, PAGE_SIZE);
>   		return;
>   	}
>   
> -	mem->areas = memblock_alloc(array_size(sizeof(struct io_tlb_area),
> +	mem->areas = memblock_alloc_no_panic(array_size(sizeof(struct io_tlb_area),
>   		nareas), SMP_CACHE_BYTES);
>   	if (!mem->areas) {
>   		pr_warn("%s: Failed to allocate mem->areas.\n", __func__);
> diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
> index c9fb559a6399..18604fc4103d 100644
> --- a/kernel/power/snapshot.c
> +++ b/kernel/power/snapshot.c
> @@ -1011,7 +1011,7 @@ void __init register_nosave_region(unsigned long start_pfn, unsigned long end_pf
>   		}
>   	}
>   	/* This allocation cannot fail */
> -	region = memblock_alloc_or_panic(sizeof(struct nosave_region),
> +	region = memblock_alloc(sizeof(struct nosave_region),
>   				SMP_CACHE_BYTES);
>   	region->start_pfn = start_pfn;
>   	region->end_pfn = end_pfn;
> diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
> index 80910bc3470c..6a7801b1d283 100644
> --- a/kernel/printk/printk.c
> +++ b/kernel/printk/printk.c
> @@ -1211,7 +1211,7 @@ void __init setup_log_buf(int early)
>   		goto out;
>   	}
>   
> -	new_log_buf = memblock_alloc(new_log_buf_len, LOG_ALIGN);
> +	new_log_buf = memblock_alloc_no_panic(new_log_buf_len, LOG_ALIGN);
>   	if (unlikely(!new_log_buf)) {
>   		pr_err("log_buf_len: %lu text bytes not available\n",
>   		       new_log_buf_len);
> @@ -1219,7 +1219,7 @@ void __init setup_log_buf(int early)
>   	}
>   
>   	new_descs_size = new_descs_count * sizeof(struct prb_desc);
> -	new_descs = memblock_alloc(new_descs_size, LOG_ALIGN);
> +	new_descs = memblock_alloc_no_panic(new_descs_size, LOG_ALIGN);
>   	if (unlikely(!new_descs)) {
>   		pr_err("log_buf_len: %zu desc bytes not available\n",
>   		       new_descs_size);
> @@ -1227,7 +1227,7 @@ void __init setup_log_buf(int early)
>   	}
>   
>   	new_infos_size = new_descs_count * sizeof(struct printk_info);
> -	new_infos = memblock_alloc(new_infos_size, LOG_ALIGN);
> +	new_infos = memblock_alloc_no_panic(new_infos_size, LOG_ALIGN);
>   	if (unlikely(!new_infos)) {
>   		pr_err("log_buf_len: %zu info bytes not available\n",
>   		       new_infos_size);
> diff --git a/lib/cpumask.c b/lib/cpumask.c
> index 57274ba8b6d9..d638587f97df 100644
> --- a/lib/cpumask.c
> +++ b/lib/cpumask.c
> @@ -83,7 +83,7 @@ EXPORT_SYMBOL(alloc_cpumask_var_node);
>    */
>   void __init alloc_bootmem_cpumask_var(cpumask_var_t *mask)
>   {
> -	*mask = memblock_alloc_or_panic(cpumask_size(), SMP_CACHE_BYTES);
> +	*mask = memblock_alloc(cpumask_size(), SMP_CACHE_BYTES);
>   }
>   
>   /**
> diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
> index d65d48b85f90..129acb1d2fe1 100644
> --- a/mm/kasan/tags.c
> +++ b/mm/kasan/tags.c
> @@ -86,7 +86,7 @@ void __init kasan_init_tags(void)
>   	if (kasan_stack_collection_enabled()) {
>   		if (!stack_ring.size)
>   			stack_ring.size = KASAN_STACK_RING_SIZE_DEFAULT;
> -		stack_ring.entries = memblock_alloc(
> +		stack_ring.entries = memblock_alloc_no_panic(
>   			sizeof(stack_ring.entries[0]) * stack_ring.size,
>   			SMP_CACHE_BYTES);
>   		if (WARN_ON(!stack_ring.entries))
> diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> index 67fc321db79b..4676a5557e60 100644
> --- a/mm/kfence/core.c
> +++ b/mm/kfence/core.c
> @@ -869,7 +869,7 @@ void __init kfence_alloc_pool_and_metadata(void)
>   	 * re-allocate the memory pool.
>   	 */
>   	if (!__kfence_pool)
> -		__kfence_pool = memblock_alloc(KFENCE_POOL_SIZE, PAGE_SIZE);
> +		__kfence_pool = memblock_alloc_no_panic(KFENCE_POOL_SIZE, PAGE_SIZE);
>   
>   	if (!__kfence_pool) {
>   		pr_err("failed to allocate pool\n");
> @@ -877,7 +877,7 @@ void __init kfence_alloc_pool_and_metadata(void)
>   	}
>   
>   	/* The memory allocated by memblock has been zeroed out. */
> -	kfence_metadata_init = memblock_alloc(KFENCE_METADATA_SIZE, PAGE_SIZE);
> +	kfence_metadata_init = memblock_alloc_no_panic(KFENCE_METADATA_SIZE, PAGE_SIZE);
>   	if (!kfence_metadata_init) {
>   		pr_err("failed to allocate metadata\n");
>   		memblock_free(__kfence_pool, KFENCE_POOL_SIZE);
> diff --git a/mm/kmsan/shadow.c b/mm/kmsan/shadow.c
> index 1bb505a08415..938ca5eb6df7 100644
> --- a/mm/kmsan/shadow.c
> +++ b/mm/kmsan/shadow.c
> @@ -280,8 +280,8 @@ void __init kmsan_init_alloc_meta_for_range(void *start, void *end)
>   
>   	start = (void *)PAGE_ALIGN_DOWN((u64)start);
>   	size = PAGE_ALIGN((u64)end - (u64)start);
> -	shadow = memblock_alloc_or_panic(size, PAGE_SIZE);
> -	origin = memblock_alloc_or_panic(size, PAGE_SIZE);
> +	shadow = memblock_alloc(size, PAGE_SIZE);
> +	origin = memblock_alloc(size, PAGE_SIZE);
>   
>   	for (u64 addr = 0; addr < size; addr += PAGE_SIZE) {
>   		page = virt_to_page_or_null((char *)start + addr);
> diff --git a/mm/memblock.c b/mm/memblock.c
> index 95af35fd1389..901da45ecf8b 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -1692,21 +1692,23 @@ void * __init memblock_alloc_try_nid(
>   }
>   
>   /**
> - * __memblock_alloc_or_panic - Try to allocate memory and panic on failure
> + * __memblock_alloc_panic - Try to allocate memory, optionally panicking on failure
>    * @size: size of memory block to be allocated in bytes
>    * @align: alignment of the region and block's size
>    * @func: caller func name
> + * @should_panic: whether to panic on allocation failure
>    *
> - * This function attempts to allocate memory using memblock_alloc,
> - * and in case of failure, it calls panic with the formatted message.
> - * This function should not be used directly, please use the macro memblock_alloc_or_panic.
> + * If @should_panic is true, allocation failure triggers a panic with a
> + * formatted message. This function should not be used directly; use the
> + * memblock_alloc() and memblock_alloc_no_panic() macros instead.
>    */
> -void *__init __memblock_alloc_or_panic(phys_addr_t size, phys_addr_t align,
> -				       const char *func)
> +void *__init __memblock_alloc_panic(phys_addr_t size, phys_addr_t align,
> +				    const char *func, bool should_panic)
>   {
> -	void *addr = memblock_alloc(size, align);
> +	void *addr = memblock_alloc_try_nid(size, align, MEMBLOCK_LOW_LIMIT,
> +				      MEMBLOCK_ALLOC_ACCESSIBLE, NUMA_NO_NODE);
>   
> -	if (unlikely(!addr))
> +	if (unlikely(!addr && should_panic))
>   		panic("%s: Failed to allocate %pap bytes\n", func, &size);
>   	return addr;
>   }
> diff --git a/mm/numa.c b/mm/numa.c
> index f1787d7713a6..9442448dc74f 100644
> --- a/mm/numa.c
> +++ b/mm/numa.c
> @@ -37,7 +37,7 @@ void __init alloc_node_data(int nid)
>   void __init alloc_offline_node_data(int nid)
>   {
>   	pg_data_t *pgdat;
> -	node_data[nid] = memblock_alloc_or_panic(sizeof(*pgdat), SMP_CACHE_BYTES);
> +	node_data[nid] = memblock_alloc(sizeof(*pgdat), SMP_CACHE_BYTES);
>   }
>   
>   /* Stub functions: */
> diff --git a/mm/numa_emulation.c b/mm/numa_emulation.c
> index 031fb9961bf7..958dc5a1715c 100644
> --- a/mm/numa_emulation.c
> +++ b/mm/numa_emulation.c
> @@ -447,7 +447,7 @@ void __init numa_emulation(struct numa_meminfo *numa_meminfo, int numa_dist_cnt)
>   
>   	/* copy the physical distance table */
>   	if (numa_dist_cnt) {
> -		phys_dist = memblock_alloc(phys_size, PAGE_SIZE);
> +		phys_dist = memblock_alloc_no_panic(phys_size, PAGE_SIZE);
>   		if (!phys_dist) {
>   			pr_warn("NUMA: Warning: can't allocate copy of distance table, disabling emulation\n");
>   			goto no_emu;
> diff --git a/mm/numa_memblks.c b/mm/numa_memblks.c
> index a3877e9bc878..549a6d6607c6 100644
> --- a/mm/numa_memblks.c
> +++ b/mm/numa_memblks.c
> @@ -61,7 +61,7 @@ static int __init numa_alloc_distance(void)
>   	cnt++;
>   	size = cnt * cnt * sizeof(numa_distance[0]);
>   
> -	numa_distance = memblock_alloc(size, PAGE_SIZE);
> +	numa_distance = memblock_alloc_no_panic(size, PAGE_SIZE);
>   	if (!numa_distance) {
>   		pr_warn("Warning: can't allocate distance table!\n");
>   		/* don't retry until explicitly reset */
> diff --git a/mm/percpu.c b/mm/percpu.c
> index ac61e3fc5f15..a381d626ed32 100644
> --- a/mm/percpu.c
> +++ b/mm/percpu.c
> @@ -1359,7 +1359,7 @@ static struct pcpu_chunk * __init pcpu_alloc_first_chunk(unsigned long tmp_addr,
>   	/* allocate chunk */
>   	alloc_size = struct_size(chunk, populated,
>   				 BITS_TO_LONGS(region_size >> PAGE_SHIFT));
> -	chunk = memblock_alloc_or_panic(alloc_size, SMP_CACHE_BYTES);
> +	chunk = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
>   
>   	INIT_LIST_HEAD(&chunk->list);
>   
> @@ -1371,14 +1371,14 @@ static struct pcpu_chunk * __init pcpu_alloc_first_chunk(unsigned long tmp_addr,
>   	region_bits = pcpu_chunk_map_bits(chunk);
>   
>   	alloc_size = BITS_TO_LONGS(region_bits) * sizeof(chunk->alloc_map[0]);
> -	chunk->alloc_map = memblock_alloc_or_panic(alloc_size, SMP_CACHE_BYTES);
> +	chunk->alloc_map = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
>   
>   	alloc_size =
>   		BITS_TO_LONGS(region_bits + 1) * sizeof(chunk->bound_map[0]);
> -	chunk->bound_map = memblock_alloc_or_panic(alloc_size, SMP_CACHE_BYTES);
> +	chunk->bound_map = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
>   
>   	alloc_size = pcpu_chunk_nr_blocks(chunk) * sizeof(chunk->md_blocks[0]);
> -	chunk->md_blocks = memblock_alloc_or_panic(alloc_size, SMP_CACHE_BYTES);
> +	chunk->md_blocks = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
>   #ifdef NEED_PCPUOBJ_EXT
>   	/* first chunk is free to use */
>   	chunk->obj_exts = NULL;
> @@ -2399,7 +2399,7 @@ struct pcpu_alloc_info * __init pcpu_alloc_alloc_info(int nr_groups,
>   			  __alignof__(ai->groups[0].cpu_map[0]));
>   	ai_size = base_size + nr_units * sizeof(ai->groups[0].cpu_map[0]);
>   
> -	ptr = memblock_alloc(PFN_ALIGN(ai_size), PAGE_SIZE);
> +	ptr = memblock_alloc_no_panic(PFN_ALIGN(ai_size), PAGE_SIZE);
>   	if (!ptr)
>   		return NULL;
>   	ai = ptr;
> @@ -2582,16 +2582,16 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
>   
>   	/* process group information and build config tables accordingly */
>   	alloc_size = ai->nr_groups * sizeof(group_offsets[0]);
> -	group_offsets = memblock_alloc_or_panic(alloc_size, SMP_CACHE_BYTES);
> +	group_offsets = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
>   
>   	alloc_size = ai->nr_groups * sizeof(group_sizes[0]);
> -	group_sizes = memblock_alloc_or_panic(alloc_size, SMP_CACHE_BYTES);
> +	group_sizes = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
>   
>   	alloc_size = nr_cpu_ids * sizeof(unit_map[0]);
> -	unit_map = memblock_alloc_or_panic(alloc_size, SMP_CACHE_BYTES);
> +	unit_map = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
>   
>   	alloc_size = nr_cpu_ids * sizeof(unit_off[0]);
> -	unit_off = memblock_alloc_or_panic(alloc_size, SMP_CACHE_BYTES);
> +	unit_off = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
>   
>   	for (cpu = 0; cpu < nr_cpu_ids; cpu++)
>   		unit_map[cpu] = UINT_MAX;
> @@ -2660,7 +2660,7 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
>   	pcpu_free_slot = pcpu_sidelined_slot + 1;
>   	pcpu_to_depopulate_slot = pcpu_free_slot + 1;
>   	pcpu_nr_slots = pcpu_to_depopulate_slot + 1;
> -	pcpu_chunk_lists = memblock_alloc_or_panic(pcpu_nr_slots *
> +	pcpu_chunk_lists = memblock_alloc(pcpu_nr_slots *
>   					  sizeof(pcpu_chunk_lists[0]),
>   					  SMP_CACHE_BYTES);
>   
> @@ -3010,7 +3010,7 @@ int __init pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size,
>   	size_sum = ai->static_size + ai->reserved_size + ai->dyn_size;
>   	areas_size = PFN_ALIGN(ai->nr_groups * sizeof(void *));
>   
> -	areas = memblock_alloc(areas_size, SMP_CACHE_BYTES);
> +	areas = memblock_alloc_no_panic(areas_size, SMP_CACHE_BYTES);
>   	if (!areas) {
>   		rc = -ENOMEM;
>   		goto out_free;
> @@ -3127,19 +3127,19 @@ void __init __weak pcpu_populate_pte(unsigned long addr)
>   	pmd_t *pmd;
>   
>   	if (pgd_none(*pgd)) {
> -		p4d = memblock_alloc_or_panic(P4D_TABLE_SIZE, P4D_TABLE_SIZE);
> +		p4d = memblock_alloc(P4D_TABLE_SIZE, P4D_TABLE_SIZE);
>   		pgd_populate(&init_mm, pgd, p4d);
>   	}
>   
>   	p4d = p4d_offset(pgd, addr);
>   	if (p4d_none(*p4d)) {
> -		pud = memblock_alloc_or_panic(PUD_TABLE_SIZE, PUD_TABLE_SIZE);
> +		pud = memblock_alloc(PUD_TABLE_SIZE, PUD_TABLE_SIZE);
>   		p4d_populate(&init_mm, p4d, pud);
>   	}
>   
>   	pud = pud_offset(p4d, addr);
>   	if (pud_none(*pud)) {
> -		pmd = memblock_alloc_or_panic(PMD_TABLE_SIZE, PMD_TABLE_SIZE);
> +		pmd = memblock_alloc(PMD_TABLE_SIZE, PMD_TABLE_SIZE);
>   		pud_populate(&init_mm, pud, pmd);
>   	}
>   
> @@ -3147,7 +3147,7 @@ void __init __weak pcpu_populate_pte(unsigned long addr)
>   	if (!pmd_present(*pmd)) {
>   		pte_t *new;
>   
> -		new = memblock_alloc_or_panic(PTE_TABLE_SIZE, PTE_TABLE_SIZE);
> +		new = memblock_alloc(PTE_TABLE_SIZE, PTE_TABLE_SIZE);
>   		pmd_populate_kernel(&init_mm, pmd, new);
>   	}
>   
> @@ -3198,7 +3198,7 @@ int __init pcpu_page_first_chunk(size_t reserved_size, pcpu_fc_cpu_to_node_fn_t
>   	/* unaligned allocations can't be freed, round up to page size */
>   	pages_size = PFN_ALIGN(unit_pages * num_possible_cpus() *
>   			       sizeof(pages[0]));
> -	pages = memblock_alloc_or_panic(pages_size, SMP_CACHE_BYTES);
> +	pages = memblock_alloc(pages_size, SMP_CACHE_BYTES);
>   
>   	/* allocate pages */
>   	j = 0;
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 133b033d0cba..56191a32e6c5 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -257,7 +257,7 @@ static void __init memblocks_present(void)
>   
>   		size = sizeof(struct mem_section *) * NR_SECTION_ROOTS;
>   		align = 1 << (INTERNODE_CACHE_SHIFT);
> -		mem_section = memblock_alloc_or_panic(size, align);
> +		mem_section = memblock_alloc(size, align);
>   	}
>   #endif
>   



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH 3/3] mm/memblock: Modify the default failure behavior of memblock_alloc_low(from)
  2025-01-03 10:51 ` [PATCH 3/3] mm/memblock: Modify the default failure behavior of memblock_alloc_low(from) Guo Weikang
@ 2025-01-04  8:38   ` kernel test robot
  0 siblings, 0 replies; 8+ messages in thread
From: kernel test robot @ 2025-01-04  8:38 UTC (permalink / raw)
  To: Guo Weikang, Mike Rapoport, Andrew Morton
  Cc: oe-kbuild-all, Linux Memory Management List, linux-kernel, Guo Weikang

Hi Guo,

kernel test robot noticed the following build warnings:

[auto build test WARNING on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Guo-Weikang/mm-memblock-Modify-the-default-failure-behavior-of-memblock_alloc_raw-to-panic/20250103-185401
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20250103105158.1350689-3-guoweikang.kernel%40gmail.com
patch subject: [PATCH 3/3] mm/memblock: Modify the default failure behavior of memblock_alloc_low(from)
config: sparc-randconfig-001-20250104 (https://download.01.org/0day-ci/archive/20250104/202501041603.0c8v8Wbr-lkp@intel.com/config)
compiler: sparc64-linux-gcc (GCC) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250104/202501041603.0c8v8Wbr-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202501041603.0c8v8Wbr-lkp@intel.com/

All warnings (new ones prefixed by >>):

   arch/sparc/mm/init_64.c: In function 'arch_hugetlb_valid_size':
   arch/sparc/mm/init_64.c:361:24: warning: variable 'hv_pgsz_idx' set but not used [-Wunused-but-set-variable]
     361 |         unsigned short hv_pgsz_idx;
         |                        ^~~~~~~~~~~
   arch/sparc/mm/init_64.c: In function 'kernel_map_range':
>> arch/sparc/mm/init_64.c:1788:32: warning: variable 'new' set but not used [-Wunused-but-set-variable]
    1788 |                         pud_t *new;
         |                                ^~~
   arch/sparc/mm/init_64.c: In function 'sun4v_linear_pte_xor_finalize':
   arch/sparc/mm/init_64.c:2200:23: warning: variable 'pagecv_flag' set but not used [-Wunused-but-set-variable]
    2200 |         unsigned long pagecv_flag;
         |                       ^~~~~~~~~~~


vim +/new +1788 arch/sparc/mm/init_64.c

0dd5b7b09e13da arch/sparc/mm/init_64.c David S. Miller 2014-09-24  1764  
896aef430e5afb arch/sparc64/mm/init.c  Sam Ravnborg    2008-02-24  1765  static unsigned long __ref kernel_map_range(unsigned long pstart,
0dd5b7b09e13da arch/sparc/mm/init_64.c David S. Miller 2014-09-24  1766  					    unsigned long pend, pgprot_t prot,
0dd5b7b09e13da arch/sparc/mm/init_64.c David S. Miller 2014-09-24  1767  					    bool use_huge)
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1768  {
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1769  	unsigned long vstart = PAGE_OFFSET + pstart;
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1770  	unsigned long vend = PAGE_OFFSET + pend;
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1771  	unsigned long alloc_bytes = 0UL;
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1772  
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1773  	if ((vstart & ~PAGE_MASK) || (vend & ~PAGE_MASK)) {
13edad7a5cef1c arch/sparc64/mm/init.c  David S. Miller 2005-09-29  1774  		prom_printf("kernel_map: Unaligned physmem[%lx:%lx]\n",
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1775  			    vstart, vend);
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1776  		prom_halt();
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1777  	}
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1778  
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1779  	while (vstart < vend) {
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1780  		unsigned long this_end, paddr = __pa(vstart);
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1781  		pgd_t *pgd = pgd_offset_k(vstart);
5637bc50483404 arch/sparc/mm/init_64.c Mike Rapoport   2019-11-24  1782  		p4d_t *p4d;
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1783  		pud_t *pud;
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1784  		pmd_t *pmd;
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1785  		pte_t *pte;
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1786  
ac55c768143aa3 arch/sparc/mm/init_64.c David S. Miller 2014-09-26  1787  		if (pgd_none(*pgd)) {
ac55c768143aa3 arch/sparc/mm/init_64.c David S. Miller 2014-09-26 @1788  			pud_t *new;
ac55c768143aa3 arch/sparc/mm/init_64.c David S. Miller 2014-09-26  1789  
4fc4a09e4cc112 arch/sparc/mm/init_64.c Mike Rapoport   2018-10-30  1790  			new = memblock_alloc_from(PAGE_SIZE, PAGE_SIZE,
4fc4a09e4cc112 arch/sparc/mm/init_64.c Mike Rapoport   2018-10-30  1791  						  PAGE_SIZE);
ac55c768143aa3 arch/sparc/mm/init_64.c David S. Miller 2014-09-26  1792  			alloc_bytes += PAGE_SIZE;
ac55c768143aa3 arch/sparc/mm/init_64.c David S. Miller 2014-09-26  1793  			pgd_populate(&init_mm, pgd, new);
ac55c768143aa3 arch/sparc/mm/init_64.c David S. Miller 2014-09-26  1794  		}
5637bc50483404 arch/sparc/mm/init_64.c Mike Rapoport   2019-11-24  1795  
5637bc50483404 arch/sparc/mm/init_64.c Mike Rapoport   2019-11-24  1796  		p4d = p4d_offset(pgd, vstart);
5637bc50483404 arch/sparc/mm/init_64.c Mike Rapoport   2019-11-24  1797  		if (p4d_none(*p4d)) {
5637bc50483404 arch/sparc/mm/init_64.c Mike Rapoport   2019-11-24  1798  			pud_t *new;
5637bc50483404 arch/sparc/mm/init_64.c Mike Rapoport   2019-11-24  1799  
5637bc50483404 arch/sparc/mm/init_64.c Mike Rapoport   2019-11-24  1800  			new = memblock_alloc_from(PAGE_SIZE, PAGE_SIZE,
5637bc50483404 arch/sparc/mm/init_64.c Mike Rapoport   2019-11-24  1801  						  PAGE_SIZE);
5637bc50483404 arch/sparc/mm/init_64.c Mike Rapoport   2019-11-24  1802  			alloc_bytes += PAGE_SIZE;
5637bc50483404 arch/sparc/mm/init_64.c Mike Rapoport   2019-11-24  1803  			p4d_populate(&init_mm, p4d, new);
5637bc50483404 arch/sparc/mm/init_64.c Mike Rapoport   2019-11-24  1804  		}
5637bc50483404 arch/sparc/mm/init_64.c Mike Rapoport   2019-11-24  1805  
5637bc50483404 arch/sparc/mm/init_64.c Mike Rapoport   2019-11-24  1806  		pud = pud_offset(p4d, vstart);
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1807  		if (pud_none(*pud)) {
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1808  			pmd_t *new;
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1809  
0dd5b7b09e13da arch/sparc/mm/init_64.c David S. Miller 2014-09-24  1810  			if (kernel_can_map_hugepud(vstart, vend, use_huge)) {
0dd5b7b09e13da arch/sparc/mm/init_64.c David S. Miller 2014-09-24  1811  				vstart = kernel_map_hugepud(vstart, vend, pud);
0dd5b7b09e13da arch/sparc/mm/init_64.c David S. Miller 2014-09-24  1812  				continue;
0dd5b7b09e13da arch/sparc/mm/init_64.c David S. Miller 2014-09-24  1813  			}
4fc4a09e4cc112 arch/sparc/mm/init_64.c Mike Rapoport   2018-10-30  1814  			new = memblock_alloc_from(PAGE_SIZE, PAGE_SIZE,
4fc4a09e4cc112 arch/sparc/mm/init_64.c Mike Rapoport   2018-10-30  1815  						  PAGE_SIZE);
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1816  			alloc_bytes += PAGE_SIZE;
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1817  			pud_populate(&init_mm, pud, new);
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1818  		}
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1819  
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1820  		pmd = pmd_offset(pud, vstart);
0dd5b7b09e13da arch/sparc/mm/init_64.c David S. Miller 2014-09-24  1821  		if (pmd_none(*pmd)) {
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1822  			pte_t *new;
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1823  
0dd5b7b09e13da arch/sparc/mm/init_64.c David S. Miller 2014-09-24  1824  			if (kernel_can_map_hugepmd(vstart, vend, use_huge)) {
0dd5b7b09e13da arch/sparc/mm/init_64.c David S. Miller 2014-09-24  1825  				vstart = kernel_map_hugepmd(vstart, vend, pmd);
0dd5b7b09e13da arch/sparc/mm/init_64.c David S. Miller 2014-09-24  1826  				continue;
0dd5b7b09e13da arch/sparc/mm/init_64.c David S. Miller 2014-09-24  1827  			}
4fc4a09e4cc112 arch/sparc/mm/init_64.c Mike Rapoport   2018-10-30  1828  			new = memblock_alloc_from(PAGE_SIZE, PAGE_SIZE,
4fc4a09e4cc112 arch/sparc/mm/init_64.c Mike Rapoport   2018-10-30  1829  						  PAGE_SIZE);
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1830  			alloc_bytes += PAGE_SIZE;
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1831  			pmd_populate_kernel(&init_mm, pmd, new);
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1832  		}
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1833  
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1834  		pte = pte_offset_kernel(pmd, vstart);
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1835  		this_end = (vstart + PMD_SIZE) & PMD_MASK;
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1836  		if (this_end > vend)
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1837  			this_end = vend;
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1838  
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1839  		while (vstart < this_end) {
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1840  			pte_val(*pte) = (paddr | pgprot_val(prot));
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1841  
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1842  			vstart += PAGE_SIZE;
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1843  			paddr += PAGE_SIZE;
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1844  			pte++;
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1845  		}
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1846  	}
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1847  
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1848  	return alloc_bytes;
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1849  }
56425306517ef2 arch/sparc64/mm/init.c  David S. Miller 2005-09-25  1850  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH 1/3] mm/memblock: Modify the default failure behavior of memblock_alloc to panic
  2025-01-03 19:58 ` [PATCH 1/3] mm/memblock: Modify the default failure behavior of memblock_alloc to panic Christophe Leroy
@ 2025-01-06  2:17   ` Weikang Guo
  2025-01-06  3:03     ` Weikang Guo
  0 siblings, 1 reply; 8+ messages in thread
From: Weikang Guo @ 2025-01-06  2:17 UTC (permalink / raw)
  To: Christophe Leroy; +Cc: Mike Rapoport, Andrew Morton, linux-mm, linux-kernel

Hi, Christophe

Christophe Leroy <christophe.leroy@csgroup.eu> wrote on Saturday, 4
January 2025 03:58:
>
>
>
> > On 03/01/2025 at 11:51, Guo Weikang wrote:
> > After analyzing the usage of memblock_alloc, it was found that approximately
> > 4/5 (120/155) of the calls expect a panic behavior on allocation failure.
> > To reflect this common usage pattern, the default failure behavior of
> > memblock_alloc is now modified to trigger a panic when allocation fails.
> >
> > Additionally, a new interface, memblock_alloc_no_panic, has been introduced
> > to handle cases where panic behavior is not desired.
>
> Isn't that going in the opposite direction ?
>
> 5 years ago we did the exact reverse, see commit c0dbe825a9f1
> ("memblock: memblock_alloc_try_nid: don't panic")
>
> Christophe
>
> >


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH 1/3] mm/memblock: Modify the default failure behavior of memblock_alloc to panic
  2025-01-06  2:17   ` Weikang Guo
@ 2025-01-06  3:03     ` Weikang Guo
  2025-01-10 10:17       ` Mike Rapoport
  0 siblings, 1 reply; 8+ messages in thread
From: Weikang Guo @ 2025-01-06  3:03 UTC (permalink / raw)
  To: Christophe Leroy, Andrew Morton; +Cc: Mike Rapoport, linux-mm, linux-kernel

Hi Christophe, Andrew,

Weikang Guo <guoweikang.kernel@gmail.com> wrote on Monday, 6 January 2025 at 10:17:
>
> Hi, Christophe
>
> Christophe Leroy <christophe.leroy@csgroup.eu> wrote on Saturday, 4
> January 2025 03:58:
> >
> >
> >
> > On 03/01/2025 at 11:51, Guo Weikang wrote:
> > > After analyzing the usage of memblock_alloc, it was found that approximately
> > > 4/5 (120/155) of the calls expect a panic behavior on allocation failure.
> > > To reflect this common usage pattern, the default failure behavior of
> > > memblock_alloc is now modified to trigger a panic when allocation fails.
> > >
> > > Additionally, a new interface, memblock_alloc_no_panic, has been introduced
> > > to handle cases where panic behavior is not desired.
> >
> > Isn't that going in the opposite direction ?
> >
> > 5 years ago we did the exact reverse, see commit c0dbe825a9f1
> > ("memblock: memblock_alloc_try_nid: don't panic")

Thank you for providing the historical context. I did notice the
existence of a nopanic version before. In my earlier patch, I
introduced memblock_alloc_or_panic, which offers a more explicit
interface to clearly indicate to callers that they don't need to
handle allocation failure separately.

Andrew pointed out that in most scenarios, panic is the expected
behavior, while no_panic represents the exceptional case. This
feedback led to the current patch, which aims to change the default
behavior and open it up for discussion within the community.

However, after reviewing Mike's previous changes, I now believe that
changing the default behavior is not necessary, as it could confuse
many users. In fact, the interface that is widely used externally is
memblock_alloc(), and providing memblock_alloc_or_panic explicitly
might already be sufficient.

- memblock_alloc_or_panic:
https://lore.kernel.org/lkml/20250102150835.776fe72f565cc3232d83e6a7@linux-foundation.org/
- Drop memblock_alloc_nopanic:
https://lore.kernel.org/lkml/1548057848-15136-1-git-send-email-rppt@linux.ibm.com/

> >
> > Christophe
> >
> > >


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH 1/3] mm/memblock: Modify the default failure behavior of memblock_alloc to panic
  2025-01-06  3:03     ` Weikang Guo
@ 2025-01-10 10:17       ` Mike Rapoport
  0 siblings, 0 replies; 8+ messages in thread
From: Mike Rapoport @ 2025-01-10 10:17 UTC (permalink / raw)
  To: Weikang Guo; +Cc: Christophe Leroy, Andrew Morton, linux-mm, linux-kernel

On Mon, Jan 06, 2025 at 11:03:38AM +0800, Weikang Guo wrote:
> Hi Christophe, Andrew,
> 
> Weikang Guo <guoweikang.kernel@gmail.com> wrote on Monday, 6 January 2025 at 10:17:
> >
> > Hi, Christophe
> >
> > Christophe Leroy <christophe.leroy@csgroup.eu> wrote on Saturday, 4
> > January 2025 03:58:
> > >
> > >
> > >
> > > On 03/01/2025 at 11:51, Guo Weikang wrote:
> > > > After analyzing the usage of memblock_alloc, it was found that approximately
> > > > 4/5 (120/155) of the calls expect a panic behavior on allocation failure.
> > > > To reflect this common usage pattern, the default failure behavior of
> > > > memblock_alloc is now modified to trigger a panic when allocation fails.
> > > >
> > > > Additionally, a new interface, memblock_alloc_no_panic, has been introduced
> > > > to handle cases where panic behavior is not desired.
> > >
> > > Isn't that going in the opposite direction ?
> > >
> > > 5 years ago we did the exact reverse, see commit c0dbe825a9f1
> > > ("memblock: memblock_alloc_try_nid: don't panic")
> 
> Thank you for providing the historical context. I did notice the
> existence of a nopanic version before. In my earlier patch, I
> introduced memblock_alloc_or_panic, which offers a more explicit
> interface to clearly indicate to callers that they don't need to
> handle allocation failure separately.
> 
> Andrew pointed out that in most scenarios, panic is the expected
> behavior, while no_panic represents the exceptional case. This
> feedback led to the current patch, which aims to change the default
> behavior and open it up for discussion within the community.
> 
> However, after reviewing Mike's previous changes, I now believe that
> changing the default behavior is not necessary, as it could confuse
> many users. In fact, the interface that is widely used externally is
> memblock_alloc(), and providing memblock_alloc_or_panic explicitly
> might already be sufficient.

Agree
 
> - memblock_alloc_or_panic:
> https://lore.kernel.org/lkml/20250102150835.776fe72f565cc3232d83e6a7@linux-foundation.org/
> - Drop memblock_alloc_nopanic:
> https://lore.kernel.org/lkml/1548057848-15136-1-git-send-email-rppt@linux.ibm.com/
> 
> > >
> > > Christophe
> > >
> > > >

-- 
Sincerely yours,
Mike.


^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2025-01-10 10:17 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-01-03 10:51 [PATCH 1/3] mm/memblock: Modify the default failure behavior of memblock_alloc to panic Guo Weikang
2025-01-03 10:51 ` [PATCH 2/3] mm/memblock: Modify the default failure behavior of memblock_alloc_raw " Guo Weikang
2025-01-03 10:51 ` [PATCH 3/3] mm/memblock: Modify the default failure behavior of memblock_alloc_low(from) Guo Weikang
2025-01-04  8:38   ` kernel test robot
2025-01-03 19:58 ` [PATCH 1/3] mm/memblock: Modify the default failure behavior of memblock_alloc to panic Christophe Leroy
2025-01-06  2:17   ` Weikang Guo
2025-01-06  3:03     ` Weikang Guo
2025-01-10 10:17       ` Mike Rapoport

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox