Date: Thu, 24 Apr 2025 22:21:38 +0300
From: Mike Rapoport
To: Arnd Bergmann
Cc: Andrew Morton, Changyuan Lyu, Arnd Bergmann, David Hildenbrand,
	Vlastimil Babka, "Matthew Wilcox (Oracle)", Lorenzo Stoakes,
	Kefeng Wang, Ryan Roberts, Barry Song, Jeff Xu, Wei Yang,
	Baoquan He, Suren Baghdasaryan, Frank van der Linden,
	York Jasper Niebuhr, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] memblock: mark init_deferred_page as __init_memblock
References: <20250423160824.1498493-1-arnd@kernel.org>

On Thu, Apr 24, 2025 at
07:29:29PM +0300, Mike Rapoport wrote:
> On Wed, Apr 23, 2025 at 06:08:08PM +0200, Arnd Bergmann wrote:
> > From: Arnd Bergmann
> > 
> > On architectures that set CONFIG_ARCH_KEEP_MEMBLOCK, memmap_init_kho_scratch_pages
> > is not discarded but calls a function that is:
> > 
> > WARNING: modpost: vmlinux: section mismatch in reference: memmap_init_kho_scratch_pages+0x120 (section: .text) -> init_deferred_page (section: .init.text)
> > ERROR: modpost: Section mismatches detected.
> > Set CONFIG_SECTION_MISMATCH_WARN_ONLY=y to allow them.
> > 
> > Mark init_deferred_page the same way as memmap_init_kho_scratch_pages
> > to avoid that warning. Unfortunately this requires marking additional
> > functions the same way to have them stay around as well.
> > 
> > Ideally memmap_init_kho_scratch_pages would become __meminit instead
> > of __init_memblock, but I could not convince myself that this is safe.
> 
> It should be __init even, as well as a few other kho-memblock
> functions.
> I'll run some builds to make sure I'm not missing anything.
Yeah, it looks like everything inside CONFIG_MEMBLOCK_KHO_SCRATCH can be
just __init unconditionally:

diff --git a/mm/memblock.c b/mm/memblock.c
index 44d3bacf86a0..994792829ebe 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -942,17 +942,17 @@ int __init_memblock memblock_physmem_add(phys_addr_t base, phys_addr_t size)
 #endif
 
 #ifdef CONFIG_MEMBLOCK_KHO_SCRATCH
-__init_memblock void memblock_set_kho_scratch_only(void)
+__init void memblock_set_kho_scratch_only(void)
 {
 	kho_scratch_only = true;
 }
 
-__init_memblock void memblock_clear_kho_scratch_only(void)
+__init void memblock_clear_kho_scratch_only(void)
 {
 	kho_scratch_only = false;
 }
 
-void __init_memblock memmap_init_kho_scratch_pages(void)
+__init void memmap_init_kho_scratch_pages(void)
 {
 	phys_addr_t start, end;
 	unsigned long pfn;

> > Fixes: 1b7936623970 ("memblock: introduce memmap_init_kho_scratch()")
> > Signed-off-by: Arnd Bergmann
> > ---
> >  mm/internal.h | 7 ++++---
> >  mm/mm_init.c  | 8 ++++----
> >  2 files changed, 8 insertions(+), 7 deletions(-)
> > 
> > diff --git a/mm/internal.h b/mm/internal.h
> > index 838f840ded83..40464f755092 100644
> > --- a/mm/internal.h
> > +++ b/mm/internal.h
> > @@ -9,6 +9,7 @@
> > 
> >  #include 
> >  #include 
> > +#include 
> >  #include 
> >  #include 
> >  #include 
> > @@ -543,7 +544,7 @@ extern int defrag_mode;
> > 
> >  void setup_per_zone_wmarks(void);
> >  void calculate_min_free_kbytes(void);
> > -int __meminit init_per_zone_wmark_min(void);
> > +int __init_memblock init_per_zone_wmark_min(void);
> >  void page_alloc_sysctl_init(void);
> > 
> >  /*
> > @@ -1532,9 +1533,9 @@ static inline bool pte_needs_soft_dirty_wp(struct vm_area_struct *vma, pte_t pte
> >  	return vma_soft_dirty_enabled(vma) && !pte_soft_dirty(pte);
> >  }
> > 
> > -void __meminit __init_single_page(struct page *page, unsigned long pfn,
> > +void __init_memblock __init_single_page(struct page *page, unsigned long pfn,
> >  		unsigned long zone, int nid);
> > -void __meminit __init_page_from_nid(unsigned long pfn, int nid);
> > +void __init_memblock __init_page_from_nid(unsigned long pfn, int nid);
> > 
> >  /* shrinker related functions */
> >  unsigned long shrink_slab(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg,
> > 
> > diff --git a/mm/mm_init.c b/mm/mm_init.c
> > index 7bb5f77cf195..31cf8bc31cc2 100644
> > --- a/mm/mm_init.c
> > +++ b/mm/mm_init.c
> > @@ -578,7 +578,7 @@ static void __init find_zone_movable_pfns_for_nodes(void)
> >  	node_states[N_MEMORY] = saved_node_state;
> >  }
> > 
> > -void __meminit __init_single_page(struct page *page, unsigned long pfn,
> > +void __init_memblock __init_single_page(struct page *page, unsigned long pfn,
> >  		unsigned long zone, int nid)
> >  {
> >  	mm_zero_struct_page(page);
> > @@ -669,7 +669,7 @@ static inline void fixup_hashdist(void) {}
> > 
> >  /*
> >   * Initialize a reserved page unconditionally, finding its zone first.
> >   */
> > -void __meminit __init_page_from_nid(unsigned long pfn, int nid)
> > +void __init_memblock __init_page_from_nid(unsigned long pfn, int nid)
> >  {
> >  	pg_data_t *pgdat;
> >  	int zid;
> > @@ -744,7 +744,7 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
> >  	return false;
> >  }
> > 
> > -static void __meminit __init_deferred_page(unsigned long pfn, int nid)
> > +static void __init_memblock __init_deferred_page(unsigned long pfn, int nid)
> >  {
> >  	if (early_page_initialised(pfn, nid))
> >  		return;
> > @@ -769,7 +769,7 @@ static inline void __init_deferred_page(unsigned long pfn, int nid)
> >  }
> >  #endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */
> > 
> > -void __meminit init_deferred_page(unsigned long pfn, int nid)
> > +void __init_memblock init_deferred_page(unsigned long pfn, int nid)
> >  {
> >  	__init_deferred_page(pfn, nid);
> >  }
> > -- 
> > 2.39.5
> > 
> 
> -- 
> Sincerely yours,
> Mike.

-- 
Sincerely yours,
Mike.