From: Mike Rapoport
To: Andrew Morton
Cc: Andreas Larsson, Borislav Petkov, Brian Cain, Catalin Marinas,
	"Christophe Leroy (CS GROUP)", "David S. Miller", Dave Hansen,
	David Hildenbrand, Dinh Nguyen, Geert Uytterhoeven, Guo Ren,
	Helge Deller, Huacai Chen, Ingo Molnar, Johannes Berg,
	John Paul Adrian Glaubitz, "Liam R. Howlett", Lorenzo Stoakes,
	Madhavan Srinivasan, Magnus Lindholm, Matt Turner, Max Filippov,
	Michael Ellerman, Michal Hocko, Michal Simek, Mike Rapoport,
	Palmer Dabbelt, Richard Weinberger, Russell King, Stafford Horne,
	Suren Baghdasaryan, Thomas Gleixner, Vineet Gupta, Vlastimil Babka,
	Will Deacon, linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
	linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
	linux-openrisc@vger.kernel.org, linux-parisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, linux-mm@kvack.org, x86@kernel.org
Subject: [PATCH v2 4/4] mm: cache struct page for empty_zero_page and return it from ZERO_PAGE()
Date: Mon, 9 Feb 2026 16:40:57 +0200
Message-ID: <20260209144058.2092871-5-rppt@kernel.org>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20260209144058.2092871-1-rppt@kernel.org>
References: <20260209144058.2092871-1-rppt@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: "Mike Rapoport (Microsoft)"

On most architectures every invocation of ZERO_PAGE() performs
virt_to_page(empty_zero_page). But empty_zero_page lives in BSS, so it is
enough to look up its struct page once at initialization time and then
reuse it whenever a zero page should be accessed.

Add yet another __zero_page variable that is initialized to
virt_to_page(empty_zero_page) in a weak arch_setup_zero_pages() function
that covers most architectures.
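In effect (a condensed sketch of the diff below, showing only the common
!__HAVE_COLOR_ZERO_PAGE case), ZERO_PAGE() goes from translating the BSS
address on every call to returning a pointer that was cached once early
during mm_core_init():

	/* before: virt_to_page() runs on every invocation */
	#define ZERO_PAGE(vaddr) ((void)(vaddr),virt_to_page(empty_zero_page))

	/* after: the struct page is looked up once at init time ... */
	struct page *__zero_page __ro_after_init;

	void __init __weak arch_setup_zero_pages(void)
	{
		__zero_page = virt_to_page(empty_zero_page);
	}

	/* ... and ZERO_PAGE() simply returns the cached pointer */
	static inline struct page *_zero_page(unsigned long addr)
	{
		return __zero_page;
	}
	#define ZERO_PAGE(vaddr) _zero_page(vaddr)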
For architectures that use colored zero pages (MIPS and s390), rename
their setup_zero_pages() to arch_setup_zero_pages() and make it global
rather than static. For architectures that cannot use virt_to_page() on
BSS (arm64 and sparc64), add an override of arch_setup_zero_pages().

Signed-off-by: Mike Rapoport (Microsoft)
---
 arch/arm64/include/asm/pgtable.h    |  6 ------
 arch/arm64/mm/init.c                |  5 +++++
 arch/mips/mm/init.c                 | 11 +----------
 arch/s390/mm/init.c                 |  4 +---
 arch/sparc/include/asm/pgtable_64.h |  3 ---
 arch/sparc/mm/init_64.c             | 17 +++++++----------
 include/linux/pgtable.h             | 11 ++++++++---
 mm/mm_init.c                        | 21 +++++++++++++++++----
 8 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 63da07398a30..2c1ec7cc8612 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -106,12 +106,6 @@ static inline void arch_leave_lazy_mmu_mode(void)
 #define flush_tlb_fix_spurious_fault_pmd(vma, address, pmdp)	\
 	local_flush_tlb_page_nonotify(vma, address)
 
-/*
- * ZERO_PAGE is a global shared page that is always zero: used
- * for zero-mapped memory areas etc..
- */
-#define ZERO_PAGE(vaddr)	phys_to_page(__pa_symbol(empty_zero_page))
-
 #define pte_ERROR(e)	\
 	pr_err("%s:%d: bad pte %016llx.\n", __FILE__, __LINE__, pte_val(e))
 
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 96711b8578fd..417ec7efe569 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -328,6 +328,11 @@ void __init bootmem_init(void)
 	memblock_dump_all();
 }
 
+void __init arch_setup_zero_pages(void)
+{
+	__zero_page = phys_to_page(__pa_symbol(empty_zero_page));
+}
+
 void __init arch_mm_preinit(void)
 {
 	unsigned int flags = SWIOTLB_VERBOSE;
diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
index 4f6449ad02ca..55b25e85122a 100644
--- a/arch/mips/mm/init.c
+++ b/arch/mips/mm/init.c
@@ -56,10 +56,7 @@ unsigned long empty_zero_page, zero_page_mask;
 EXPORT_SYMBOL_GPL(empty_zero_page);
 EXPORT_SYMBOL(zero_page_mask);
 
-/*
- * Not static inline because used by IP27 special magic initialization code
- */
-static void __init setup_zero_pages(void)
+void __init arch_setup_zero_pages(void)
 {
 	unsigned int order;
 
@@ -450,7 +447,6 @@ void __init arch_mm_preinit(void)
 	BUILD_BUG_ON(IS_ENABLED(CONFIG_32BIT) && (PFN_PTE_SHIFT > PAGE_SHIFT));
 
 	maar_init();
-	setup_zero_pages();	/* Setup zeroed pages. */
 	highmem_init();
 
 #ifdef CONFIG_64BIT
@@ -461,11 +457,6 @@ void __init arch_mm_preinit(void)
 			0x80000000 - 4, KCORE_TEXT);
 #endif
 }
-#else	/* CONFIG_NUMA */
-void __init arch_mm_preinit(void)
-{
-	setup_zero_pages();	/* This comes from node 0 */
-}
 #endif /* !CONFIG_NUMA */
 
 void free_init_pages(const char *what, unsigned long begin, unsigned long end)
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 3c20475cbee2..1f72efc2a579 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -69,7 +69,7 @@ unsigned long empty_zero_page, zero_page_mask;
 EXPORT_SYMBOL(empty_zero_page);
 EXPORT_SYMBOL(zero_page_mask);
 
-static void __init setup_zero_pages(void)
+void __init arch_setup_zero_pages(void)
 {
 	unsigned long total_pages = memblock_estimated_nr_free_pages();
 	unsigned int order;
@@ -159,8 +159,6 @@ void __init arch_mm_preinit(void)
 	cpumask_set_cpu(0, mm_cpumask(&init_mm));
 
 	pv_init();
-
-	setup_zero_pages();	/* Setup zeroed pages. */
 }
 
 unsigned long memory_block_size_bytes(void)
diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index 615f460c50af..74ede706fb32 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -210,9 +210,6 @@ extern unsigned long _PAGE_CACHE;
 extern unsigned long pg_iobits;
 extern unsigned long _PAGE_ALL_SZ_BITS;
 
-extern struct page *mem_map_zero;
-#define ZERO_PAGE(vaddr)	(mem_map_zero)
-
 /* PFNs are real physical page numbers. However, mem_map only begins to record
  * per-page information starting at pfn_base. This is to handle systems where
  * the first physical page in the machine is at some huge physical address,
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 0cc8de2fea90..707c1df67d79 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -177,9 +177,6 @@ extern unsigned long sparc_ramdisk_image64;
 extern unsigned int sparc_ramdisk_image;
 extern unsigned int sparc_ramdisk_size;
 
-struct page *mem_map_zero __read_mostly;
-EXPORT_SYMBOL(mem_map_zero);
-
 unsigned int sparc64_highest_unlocked_tlb_ent __read_mostly;
 
 unsigned long sparc64_kern_pri_context __read_mostly;
@@ -2496,11 +2493,17 @@ static void __init register_page_bootmem_info(void)
 		register_page_bootmem_info_node(NODE_DATA(i));
 #endif
 }
-void __init mem_init(void)
+
+void __init arch_setup_zero_pages(void)
 {
 	phys_addr_t zero_page_pa = kern_base +
 		((unsigned long)&empty_zero_page[0] - KERNBASE);
 
+	__zero_page = phys_to_page(zero_page_pa);
+}
+
+void __init mem_init(void)
+{
 	/*
 	 * Must be done after boot memory is put on freelist, because here we
 	 * might set fields in deferred struct pages that have not yet been
@@ -2509,12 +2512,6 @@ void __init mem_init(void)
 	 */
 	register_page_bootmem_info();
 
-	/*
-	 * Set up the zero page, mark it reserved, so that page count
-	 * is not manipulated when freeing the page from user ptes.
-	 */
-	mem_map_zero = pfn_to_page(PHYS_PFN(zero_page_pa));
-
 	if (tlb_type == cheetah || tlb_type == cheetah_plus)
 		cheetah_ecache_flush_init();
 }
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 9ba1f03fca54..722df2149d58 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1894,6 +1894,8 @@ static inline void pfnmap_setup_cachemode_pfn(unsigned long pfn, pgprot_t *prot
  * For architectures that don't __HAVE_COLOR_ZERO_PAGE the zero page lives in
  * empty_zero_page in BSS.
  */
+void arch_setup_zero_pages(void);
+
 extern unsigned long zero_page_pfn;
 
 #ifdef __HAVE_COLOR_ZERO_PAGE
@@ -1918,10 +1920,13 @@ static inline unsigned long zero_pfn(unsigned long addr)
 }
 
 extern uint8_t empty_zero_page[PAGE_SIZE];
+extern struct page *__zero_page;
 
-#ifndef ZERO_PAGE
-#define ZERO_PAGE(vaddr) ((void)(vaddr),virt_to_page(empty_zero_page))
-#endif
+static inline struct page *_zero_page(unsigned long addr)
+{
+	return __zero_page;
+}
+#define ZERO_PAGE(vaddr) _zero_page(vaddr)
 
 #endif /* __HAVE_COLOR_ZERO_PAGE */
 
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 1eac634ece1a..b08608c1b71d 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -59,7 +59,10 @@ EXPORT_SYMBOL(zero_page_pfn);
 #ifndef __HAVE_COLOR_ZERO_PAGE
 uint8_t empty_zero_page[PAGE_SIZE] __page_aligned_bss;
 EXPORT_SYMBOL(empty_zero_page);
-#endif
+
+struct page *__zero_page __ro_after_init;
+EXPORT_SYMBOL(__zero_page);
+#endif /* __HAVE_COLOR_ZERO_PAGE */
 
 #ifdef CONFIG_DEBUG_MEMORY_INIT
 int __meminitdata mminit_loglevel;
@@ -2675,12 +2678,21 @@ static void __init mem_init_print_info(void)
 	);
 }
 
-static int __init init_zero_page_pfn(void)
+#ifndef __HAVE_COLOR_ZERO_PAGE
+/*
+ * architectures that __HAVE_COLOR_ZERO_PAGE must define this function
+ */
+void __init __weak arch_setup_zero_pages(void)
+{
+	__zero_page = virt_to_page(empty_zero_page);
+}
+#endif
+
+static void __init init_zero_page_pfn(void)
 {
+	arch_setup_zero_pages();
 	zero_page_pfn = page_to_pfn(ZERO_PAGE(0));
-	return 0;
 }
-early_initcall(init_zero_page_pfn);
 
 void __init __weak arch_mm_preinit(void)
 {
@@ -2704,6 +2716,7 @@ void __init mm_core_init_early(void)
 void __init mm_core_init(void)
 {
 	arch_mm_preinit();
+	init_zero_page_pfn();
 
 	/* Initializations relying on SMP setup */
 	BUILD_BUG_ON(MAX_ZONELISTS > 2);
-- 
2.51.0