From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 24 Apr 2025 19:29:19 +0300
From: Mike Rapoport <rppt@kernel.org>
To: Arnd Bergmann
Cc: Andrew Morton, Changyuan Lyu, Arnd Bergmann, David Hildenbrand,
	Vlastimil Babka, "Matthew Wilcox (Oracle)", Lorenzo Stoakes,
	Kefeng Wang, Ryan Roberts, Barry Song, Jeff Xu, Wei Yang,
	Baoquan He, Suren Baghdasaryan, Frank van der Linden,
	York Jasper Niebuhr, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] memblock: mark init_deferred_page as __init_memblock
In-Reply-To: <20250423160824.1498493-1-arnd@kernel.org>
References: <20250423160824.1498493-1-arnd@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
On Wed, Apr 23, 2025 at 06:08:08PM +0200, Arnd Bergmann wrote:
> From: Arnd Bergmann
> 
> On architectures that set CONFIG_ARCH_KEEP_MEMBLOCK, memmap_init_kho_scratch_pages
> is not discarded but calls a function that is:
> 
> WARNING: modpost: vmlinux: section mismatch in reference: memmap_init_kho_scratch_pages+0x120 (section: .text) -> init_deferred_page (section: .init.text)
> ERROR: modpost: Section mismatches detected.
> Set CONFIG_SECTION_MISMATCH_WARN_ONLY=y to allow them.
> 
> Mark init_deferred_page the same way as memmap_init_kho_scratch_pages
> to avoid that warning. Unfortunately this requires marking additional
> functions the same way to have them stay around as well.
> 
> Ideally memmap_init_kho_scratch_pages would become __meminit instead
> of __init_memblock, but I could not convince myself that this is safe.

It should be __init even, as well as a few other kho-memblock functions.
I'll run some builds to make sure I'm not missing anything.

> Fixes: 1b7936623970 ("memblock: introduce memmap_init_kho_scratch()")
> Signed-off-by: Arnd Bergmann
> ---
>  mm/internal.h | 7 ++++---
>  mm/mm_init.c  | 8 ++++----
>  2 files changed, 8 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/internal.h b/mm/internal.h
> index 838f840ded83..40464f755092 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -9,6 +9,7 @@
>  
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  #include 
> @@ -543,7 +544,7 @@ extern int defrag_mode;
>  
>  void setup_per_zone_wmarks(void);
>  void calculate_min_free_kbytes(void);
> -int __meminit init_per_zone_wmark_min(void);
> +int __init_memblock init_per_zone_wmark_min(void);
>  void page_alloc_sysctl_init(void);
>  
>  /*
> @@ -1532,9 +1533,9 @@ static inline bool pte_needs_soft_dirty_wp(struct vm_area_struct *vma, pte_t pte
>  	return vma_soft_dirty_enabled(vma) && !pte_soft_dirty(pte);
>  }
>  
> -void __meminit __init_single_page(struct page *page, unsigned long pfn,
> +void __init_memblock __init_single_page(struct page *page, unsigned long pfn,
>  				  unsigned long zone, int nid);
> -void __meminit __init_page_from_nid(unsigned long pfn, int nid);
> +void __init_memblock __init_page_from_nid(unsigned long pfn, int nid);
>  
>  /* shrinker related functions */
>  unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg,
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 7bb5f77cf195..31cf8bc31cc2 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -578,7 +578,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
>  	node_states[N_MEMORY] = saved_node_state;
>  }
>  
> -void __meminit __init_single_page(struct page *page, unsigned long pfn,
> +void __init_memblock __init_single_page(struct page *page, unsigned long pfn,
>  				  unsigned long zone, int nid)
>  {
>  	mm_zero_struct_page(page);
> @@ -669,7 +669,7 @@ static inline void fixup_hashdist(void) {}
>  
>  /*
>   * Initialize a reserved page unconditionally, finding its zone first.
>   */
> -void __meminit __init_page_from_nid(unsigned long pfn, int nid)
> +void __init_memblock __init_page_from_nid(unsigned long pfn, int nid)
>  {
>  	pg_data_t *pgdat;
>  	int zid;
> @@ -744,7 +744,7 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
>  	return false;
>  }
>  
> -static void __meminit __init_deferred_page(unsigned long pfn, int nid)
> +static void __init_memblock __init_deferred_page(unsigned long pfn, int nid)
>  {
>  	if (early_page_initialised(pfn, nid))
>  		return;
> @@ -769,7 +769,7 @@ static inline void __init_deferred_page(unsigned long pfn, int nid)
>  }
>  #endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */
>  
> -void __meminit init_deferred_page(unsigned long pfn, int nid)
> +void __init_memblock init_deferred_page(unsigned long pfn, int nid)
>  {
>  	__init_deferred_page(pfn, nid);
>  }
> -- 
> 2.39.5

-- 
Sincerely yours,
Mike.