References: <20240514140446.538622-1-bjorn@kernel.org> <20240514140446.538622-3-bjorn@kernel.org>
In-Reply-To: <20240514140446.538622-3-bjorn@kernel.org>
From: Alexandre Ghiti <alexghiti@rivosinc.com>
Date: Tue, 14 May 2024 19:17:04 +0200
Subject: Re: [PATCH v2 2/8] riscv: mm: Change attribute from __init to __meminit for page functions
To: Björn Töpel
Cc: Albert Ou, David Hildenbrand, Palmer Dabbelt, Paul Walmsley,
    linux-riscv@lists.infradead.org, Björn Töpel, Andrew Bresticker,
    Chethan Seshadri, Lorenzo Stoakes, Oscar Salvador, Santosh Mamila,
    Sivakumar Munnangi, Sunil V L, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, virtualization@lists.linux-foundation.org
Content-Type: text/plain; charset="UTF-8"
On Tue, May 14, 2024 at 4:05 PM Björn Töpel wrote:
>
> From: Björn Töpel
>
> Prepare for memory hotplug support by changing from __init to
> __meminit for the page table functions that are used by the upcoming
> architecture-specific callbacks.
>
> Changing the __init attribute to __meminit prevents the functions
> from being removed after init: __meminit keeps them in the kernel
> text post-init, but only if memory hotplug is enabled for the build.
>
> Also, make sure that the altmap parameter is properly passed on to
> vmemmap_populate_hugepages().
>
> Signed-off-by: Björn Töpel
> ---
>  arch/riscv/include/asm/mmu.h     |  4 +--
>  arch/riscv/include/asm/pgtable.h |  2 +-
>  arch/riscv/mm/init.c             | 58 ++++++++++++++------------------
>  3 files changed, 29 insertions(+), 35 deletions(-)
>
> diff --git a/arch/riscv/include/asm/mmu.h b/arch/riscv/include/asm/mmu.h
> index 60be458e94da..c09c3c79f496 100644
> --- a/arch/riscv/include/asm/mmu.h
> +++ b/arch/riscv/include/asm/mmu.h
> @@ -28,8 +28,8 @@ typedef struct {
>  #endif
>  } mm_context_t;
>
> -void __init create_pgd_mapping(pgd_t *pgdp, uintptr_t va, phys_addr_t pa,
> -                              phys_addr_t sz, pgprot_t prot);
> +void __meminit create_pgd_mapping(pgd_t *pgdp, uintptr_t va, phys_addr_t pa, phys_addr_t sz,
> +                                 pgprot_t prot);
>  #endif /* __ASSEMBLY__ */
>
>  #endif /* _ASM_RISCV_MMU_H */
> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> index 58fd7b70b903..7933f493db71 100644
> --- a/arch/riscv/include/asm/pgtable.h
> +++ b/arch/riscv/include/asm/pgtable.h
> @@ -162,7 +162,7 @@ struct pt_alloc_ops {
>  #endif
>  };
>
> -extern struct pt_alloc_ops pt_ops __initdata;
> +extern struct pt_alloc_ops pt_ops __meminitdata;
>
>  #ifdef CONFIG_MMU
>  /* Number of PGD entries that a user-mode program can use */
> diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
> index 5b8cdfafb52a..c969427eab88 100644
> --- a/arch/riscv/mm/init.c
> +++ b/arch/riscv/mm/init.c
> @@ -295,7 +295,7 @@ static void __init setup_bootmem(void)
>  }
>
>  #ifdef CONFIG_MMU
> -struct pt_alloc_ops pt_ops __initdata;
> +struct pt_alloc_ops pt_ops __meminitdata;
>
>  pgd_t swapper_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
>  pgd_t trampoline_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
> @@ -357,7 +357,7 @@ static inline pte_t *__init get_pte_virt_fixmap(phys_addr_t pa)
>         return (pte_t *)set_fixmap_offset(FIX_PTE, pa);
>  }
>
> -static inline pte_t *__init get_pte_virt_late(phys_addr_t pa)
> +static inline pte_t *__meminit get_pte_virt_late(phys_addr_t pa)
>  {
>         return (pte_t *) __va(pa);
>  }
> @@ -376,7 +376,7 @@ static inline phys_addr_t __init alloc_pte_fixmap(uintptr_t va)
>         return memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
>  }
>
> -static phys_addr_t __init alloc_pte_late(uintptr_t va)
> +static phys_addr_t __meminit alloc_pte_late(uintptr_t va)
>  {
>         struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, 0);
>
> @@ -384,9 +384,8 @@ static phys_addr_t __init alloc_pte_late(uintptr_t va)
>         return __pa((pte_t *)ptdesc_address(ptdesc));
>  }
>
> -static void __init create_pte_mapping(pte_t *ptep,
> -                                     uintptr_t va, phys_addr_t pa,
> -                                     phys_addr_t sz, pgprot_t prot)
> +static void __meminit create_pte_mapping(pte_t *ptep, uintptr_t va, phys_addr_t pa, phys_addr_t sz,
> +                                        pgprot_t prot)
>  {
>         uintptr_t pte_idx = pte_index(va);
>
> @@ -440,7 +439,7 @@ static pmd_t *__init get_pmd_virt_fixmap(phys_addr_t pa)
>         return (pmd_t *)set_fixmap_offset(FIX_PMD, pa);
>  }
>
> -static pmd_t *__init get_pmd_virt_late(phys_addr_t pa)
> +static pmd_t *__meminit get_pmd_virt_late(phys_addr_t pa)
>  {
>         return (pmd_t *) __va(pa);
>  }
> @@ -457,7 +456,7 @@ static phys_addr_t __init alloc_pmd_fixmap(uintptr_t va)
>         return memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
>  }
>
> -static phys_addr_t __init alloc_pmd_late(uintptr_t va)
> +static phys_addr_t __meminit alloc_pmd_late(uintptr_t va)
>  {
>         struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, 0);
>
> @@ -465,9 +464,9 @@ static phys_addr_t __init alloc_pmd_late(uintptr_t va)
>         return __pa((pmd_t *)ptdesc_address(ptdesc));
>  }
>
> -static void __init create_pmd_mapping(pmd_t *pmdp,
> -                                     uintptr_t va, phys_addr_t pa,
> -                                     phys_addr_t sz, pgprot_t prot)
> +static void __meminit create_pmd_mapping(pmd_t *pmdp,
> +                                        uintptr_t va, phys_addr_t pa,
> +                                        phys_addr_t sz, pgprot_t prot)
>  {
>         pte_t *ptep;
>         phys_addr_t pte_phys;
> @@ -503,7 +502,7 @@ static pud_t *__init get_pud_virt_fixmap(phys_addr_t pa)
>         return (pud_t *)set_fixmap_offset(FIX_PUD, pa);
>  }
>
> -static pud_t *__init get_pud_virt_late(phys_addr_t pa)
> +static pud_t *__meminit get_pud_virt_late(phys_addr_t pa)
>  {
>         return (pud_t *)__va(pa);
>  }
> @@ -521,7 +520,7 @@ static phys_addr_t __init alloc_pud_fixmap(uintptr_t va)
>         return memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
>  }
>
> -static phys_addr_t alloc_pud_late(uintptr_t va)
> +static phys_addr_t __meminit alloc_pud_late(uintptr_t va)
>  {
>         unsigned long vaddr;
>
> @@ -541,7 +540,7 @@ static p4d_t *__init get_p4d_virt_fixmap(phys_addr_t pa)
>         return (p4d_t *)set_fixmap_offset(FIX_P4D, pa);
>  }
>
> -static p4d_t *__init get_p4d_virt_late(phys_addr_t pa)
> +static p4d_t *__meminit get_p4d_virt_late(phys_addr_t pa)
>  {
>         return (p4d_t *)__va(pa);
>  }
> @@ -559,7 +558,7 @@ static phys_addr_t __init alloc_p4d_fixmap(uintptr_t va)
>         return memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
>  }
>
> -static phys_addr_t alloc_p4d_late(uintptr_t va)
> +static phys_addr_t __meminit alloc_p4d_late(uintptr_t va)
>  {
>         unsigned long vaddr;
>
> @@ -568,9 +567,8 @@ static phys_addr_t alloc_p4d_late(uintptr_t va)
>         return __pa(vaddr);
>  }
>
> -static void __init create_pud_mapping(pud_t *pudp,
> -                                     uintptr_t va, phys_addr_t pa,
> -                                     phys_addr_t sz, pgprot_t prot)
> +static void __meminit create_pud_mapping(pud_t *pudp, uintptr_t va, phys_addr_t pa, phys_addr_t sz,
> +                                        pgprot_t prot)
>  {
>         pmd_t *nextp;
>         phys_addr_t next_phys;
> @@ -595,9 +593,8 @@ static void __init create_pud_mapping(pud_t *pudp,
>         create_pmd_mapping(nextp, va, pa, sz, prot);
>  }
>
> -static void __init create_p4d_mapping(p4d_t *p4dp,
> -                                     uintptr_t va, phys_addr_t pa,
> -                                     phys_addr_t sz, pgprot_t prot)
> +static void __meminit create_p4d_mapping(p4d_t *p4dp, uintptr_t va, phys_addr_t pa, phys_addr_t sz,
> +                                        pgprot_t prot)
>  {
>         pud_t *nextp;
>         phys_addr_t next_phys;
> @@ -653,9 +650,8 @@ static void __init create_p4d_mapping(p4d_t *p4dp,
>  #define create_pmd_mapping(__pmdp, __va, __pa, __sz, __prot) do {} while (0)
>  #endif /* __PAGETABLE_PMD_FOLDED */
>
> -void __init create_pgd_mapping(pgd_t *pgdp,
> -                              uintptr_t va, phys_addr_t pa,
> -                              phys_addr_t sz, pgprot_t prot)
> +void __meminit create_pgd_mapping(pgd_t *pgdp, uintptr_t va, phys_addr_t pa, phys_addr_t sz,
> +                                 pgprot_t prot)
>  {
>         pgd_next_t *nextp;
>         phys_addr_t next_phys;
> @@ -680,8 +676,7 @@ void __init create_pgd_mapping(pgd_t *pgdp,
>         create_pgd_next_mapping(nextp, va, pa, sz, prot);
>  }
>
> -static uintptr_t __init best_map_size(phys_addr_t pa, uintptr_t va,
> -                                     phys_addr_t size)
> +static uintptr_t __meminit best_map_size(phys_addr_t pa, uintptr_t va, phys_addr_t size)
>  {
>         if (pgtable_l5_enabled &&
>             !(pa & (P4D_SIZE - 1)) && !(va & (P4D_SIZE - 1)) && size >= P4D_SIZE)
> @@ -714,7 +709,7 @@ asmlinkage void __init __copy_data(void)
>  #endif
>
>  #ifdef CONFIG_STRICT_KERNEL_RWX
> -static __init pgprot_t pgprot_from_va(uintptr_t va)
> +static __meminit pgprot_t pgprot_from_va(uintptr_t va)
>  {
>         if (is_va_kernel_text(va))
>                 return PAGE_KERNEL_READ_EXEC;
> @@ -739,7 +734,7 @@ void mark_rodata_ro(void)
>                                   set_memory_ro);
>  }
>  #else
> -static __init pgprot_t pgprot_from_va(uintptr_t va)
> +static __meminit pgprot_t pgprot_from_va(uintptr_t va)
>  {
>         if (IS_ENABLED(CONFIG_64BIT) && !is_kernel_mapping(va))
>                 return PAGE_KERNEL;
> @@ -1231,9 +1226,8 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
>         pt_ops_set_fixmap();
>  }
>
> -static void __init create_linear_mapping_range(phys_addr_t start,
> -                                              phys_addr_t end,
> -                                              uintptr_t fixed_map_size)
> +static void __meminit create_linear_mapping_range(phys_addr_t start, phys_addr_t end,
> +                                                  uintptr_t fixed_map_size)
>  {
>         phys_addr_t pa;
>         uintptr_t va, map_size;
> @@ -1435,7 +1429,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>          * memory hotplug, we are not able to update all the page tables with
>          * the new PMDs.
>          */
> -       return vmemmap_populate_hugepages(start, end, node, NULL);
> +       return vmemmap_populate_hugepages(start, end, node, altmap);

Is this a fix? If so, does it deserve to be split into a separate patch?

>  }
>  #endif
>
> --
> 2.40.1
>