Subject: Re: [PATCH v2 1/3] LoongArch: Set initial pte entry with PAGE_GLOBAL for kernel space
From: maobibo
To: Huacai Chen
Cc: wuruiyang@loongson.cn, Andrey Ryabinin, Andrew Morton, David Hildenbrand, Barry Song, loongarch@lists.linux.dev, linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, linux-mm@kvack.org
Date: Mon, 21 Oct 2024 09:22:59 +0800
References: <20241014035855.1119220-1-maobibo@loongson.cn> <20241014035855.1119220-2-maobibo@loongson.cn> <5f76ede6-e8be-c7a9-f957-479afa2fb828@loongson.cn>

On 2024/10/18 2:32 PM, Huacai Chen wrote:
> On Fri, Oct 18, 2024 at 2:23 PM maobibo wrote:
>>
>> On 2024/10/18 12:23 PM, Huacai Chen wrote:
>>> On Fri, Oct 18, 2024 at 12:16 PM maobibo wrote:
>>>>
>>>> On 2024/10/18 12:11 PM, Huacai Chen wrote:
>>>>> On Fri, Oct 18, 2024 at 11:44 AM maobibo wrote:
>>>>>>
>>>>>> On 2024/10/18 11:14 AM, Huacai Chen wrote:
>>>>>>> Hi, Bibo,
>>>>>>>
>>>>>>> I applied this patch but dropped the part for arch/loongarch/mm/kasan_init.c:
>>>>>>>
>>>>>>> https://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson.git/commit/?h=loongarch-next&id=15832255e84494853f543b4c70ced50afc403067
>>>>>>>
>>>>>>> Because kernel_pte_init() should operate on page-table pages, not on
>>>>>>> data pages. You have already handled page-table pages in
>>>>>>> mm/kasan/init.c, and if we don't drop the modification on data pages
>>>>>>> in arch/loongarch/mm/kasan_init.c, the kernel fails to boot when KASAN
>>>>>>> is enabled.
>>>>>>>
>>>>>> static inline void set_pte(pte_t *ptep, pte_t pteval)
>>>>>> {
>>>>>>         WRITE_ONCE(*ptep, pteval);
>>>>>> -
>>>>>> -       if (pte_val(pteval) & _PAGE_GLOBAL) {
>>>>>> -               pte_t *buddy = ptep_buddy(ptep);
>>>>>> -               /*
>>>>>> -                * Make sure the buddy is global too (if it's !none,
>>>>>> -                * it better already be global)
>>>>>> -                */
>>>>>> -               if (pte_none(ptep_get(buddy))) {
>>>>>> -#ifdef CONFIG_SMP
>>>>>> -                       /*
>>>>>> -                        * For SMP, multiple CPUs can race, so we need
>>>>>> -                        * to do this atomically.
>>>>>> -                        */
>>>>>> -                       __asm__ __volatile__(
>>>>>> -                       __AMOR "$zero, %[global], %[buddy] \n"
>>>>>> -                       : [buddy] "+ZB" (buddy->pte)
>>>>>> -                       : [global] "r" (_PAGE_GLOBAL)
>>>>>> -                       : "memory");
>>>>>> -
>>>>>> -                       DBAR(0b11000); /* o_wrw = 0b11000 */
>>>>>> -#else /* !CONFIG_SMP */
>>>>>> -                       WRITE_ONCE(*buddy, __pte(pte_val(ptep_get(buddy)) | _PAGE_GLOBAL));
>>>>>> -#endif /* CONFIG_SMP */
>>>>>> -               }
>>>>>> -       }
>>>>>> +       DBAR(0b11000); /* o_wrw = 0b11000 */
>>>>>> }
>>>>>>
>>>>>> No, please hold on. This issue has existed for about twenty years; do
>>>>>> we need to be in such a hurry now?
>>>>>>
>>>>>> Why is DBAR(0b11000) added in set_pte()?
>>>>> It existed before; it is not added by this patch. The reason is explained in
>>>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?h=v6.12-rc3&id=f93f67d06b1023313ef1662eac490e29c025c030
>>>> Why would speculative accesses cause spurious page faults in kernel
>>>> space with hardware PTW enabled? Speculative accesses exist everywhere;
>>>> they do not cause spurious page faults.
>>> Confirmed by Ruiyang Wu, and even if DBAR(0b11000) is wrong, that
>>> is another patch's mistake, not this one's. This one just keeps the
>>> old behavior.
>>> +CC Ruiyang Wu here.
>> Also from Ruiyang Wu, the information is that speculative accesses may
>> install stale TLB entries, but without a page fault exception.
>>
>> So adding a barrier in set_pte() does not prevent speculative accesses.
>> And you wrote the patch here without knowing the actual reason?
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?h=v6.12-rc3&id=f93f67d06b1023313ef1662eac490e29c025c030
> I have CCed Ruiyang; whether the description is correct can be judged by him.

There are some problems with adding a barrier in set_pte():

1. This issue exists only with hardware PTW enabled and only for kernel
address space, is that right? Also, adding a barrier in set_pte() may be
too heavy, compared to doing this in flush_cache_vmap().

2. LoongArch is different from other architectures: two pages are covered
by one TLB entry. If two consecutive pages are mapped and accessed one
after the other, there will be a page fault on the second access. Such as:

    addr1 = percpu_alloc(pagesize);
    val1 = *(int *)addr1;
    // With a page table walk, addr1 is present and addr2 is pte_none.
    // The TLB entry holds a valid pte for addr1, an invalid pte for addr2.

    addr2 = percpu_alloc(pagesize);  // does not flush the TLB the first time
    val2 = *(int *)addr2;
    // With a page table walk, addr1 is present and addr2 is now present too,
    // but the stale TLB entry still holds a valid pte for addr1 and an
    // invalid pte for addr2.

So there will be a page fault when accessing addr2, and the same problem
exists for user address space.

By the way, there is hardware prefetching; a negative effect of hardware
prefetching is that TLB entries may be added. So there is a potential page
fault when memory is allocated and accessed for the first time.

3. For speculative execution, if it is a user address, there is an eret
from the syscall, and eret rolls back all speculatively executed
instructions.
So it is only a problem for speculative execution on kernel addresses. And
how can we verify whether it is a problem of speculative execution or a
problem of point 2 above?

Regards
Bibo Mao

>
> Huacai
>
>>
>> Bibo Mao
>>>
>>> Huacai
>>>
>>>>
>>>> Obviously you do not know it, and you wrote a wrong patch.
>>>>
>>>>>
>>>>> Huacai
>>>>>
>>>>>>
>>>>>> Regards
>>>>>> Bibo Mao
>>>>>>> Huacai
>>>>>>>
>>>>>>> On Mon, Oct 14, 2024 at 11:59 AM Bibo Mao wrote:
>>>>>>>>
>>>>>>>> Unlike general architectures, there are two pages in one TLB entry
>>>>>>>> on LoongArch systems. For kernel space, both pte entries must have
>>>>>>>> the PAGE_GLOBAL bit set, else HW treats the entry as a non-global
>>>>>>>> TLB entry. There will be potential problems if a TLB entry for
>>>>>>>> kernel space is not global, such as failing to flush kernel TLB
>>>>>>>> entries with local_flush_tlb_kernel_range(), which only flushes
>>>>>>>> TLB entries with the global bit set.
>>>>>>>>
>>>>>>>> With the function kernel_pte_init() added, a pte table can be
>>>>>>>> initialized when it is created for kernel address space, and the
>>>>>>>> default initial pte value is PAGE_GLOBAL rather than zero.
>>>>>>>>
>>>>>>>> The kernel address space areas, including the fixmap, percpu,
>>>>>>>> vmalloc, kasan and vmemmap areas, set the default pte entry with
>>>>>>>> PAGE_GLOBAL set.
>>>>>>>>
>>>>>>>> Signed-off-by: Bibo Mao
>>>>>>>> ---
>>>>>>>>  arch/loongarch/include/asm/pgalloc.h | 13 +++++++++++++
>>>>>>>>  arch/loongarch/include/asm/pgtable.h |  1 +
>>>>>>>>  arch/loongarch/mm/init.c             |  4 +++-
>>>>>>>>  arch/loongarch/mm/kasan_init.c       |  4 +++-
>>>>>>>>  arch/loongarch/mm/pgtable.c          | 22 ++++++++++++++++++++++
>>>>>>>>  include/linux/mm.h                   |  1 +
>>>>>>>>  mm/kasan/init.c                      |  8 +++++++-
>>>>>>>>  mm/sparse-vmemmap.c                  |  5 +++++
>>>>>>>>  8 files changed, 55 insertions(+), 3 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/arch/loongarch/include/asm/pgalloc.h b/arch/loongarch/include/asm/pgalloc.h
>>>>>>>> index 4e2d6b7ca2ee..b2698c03dc2c 100644
>>>>>>>> --- a/arch/loongarch/include/asm/pgalloc.h
>>>>>>>> +++ b/arch/loongarch/include/asm/pgalloc.h
>>>>>>>> @@ -10,8 +10,21 @@
>>>>>>>>
>>>>>>>>  #define __HAVE_ARCH_PMD_ALLOC_ONE
>>>>>>>>  #define __HAVE_ARCH_PUD_ALLOC_ONE
>>>>>>>> +#define __HAVE_ARCH_PTE_ALLOC_ONE_KERNEL
>>>>>>>>  #include
>>>>>>>>
>>>>>>>> +static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
>>>>>>>> +{
>>>>>>>> +       pte_t *pte;
>>>>>>>> +
>>>>>>>> +       pte = (pte_t *) __get_free_page(GFP_KERNEL);
>>>>>>>> +       if (!pte)
>>>>>>>> +               return NULL;
>>>>>>>> +
>>>>>>>> +       kernel_pte_init(pte);
>>>>>>>> +       return pte;
>>>>>>>> +}
>>>>>>>> +
>>>>>>>>  static inline void pmd_populate_kernel(struct mm_struct *mm,
>>>>>>>>                                        pmd_t *pmd, pte_t *pte)
>>>>>>>>  {
>>>>>>>> diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
>>>>>>>> index 9965f52ef65b..22e3a8f96213 100644
>>>>>>>> --- a/arch/loongarch/include/asm/pgtable.h
>>>>>>>> +++ b/arch/loongarch/include/asm/pgtable.h
>>>>>>>> @@ -269,6 +269,7 @@ extern void set_pmd_at(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp, pm
>>>>>>>>  extern void pgd_init(void *addr);
>>>>>>>>  extern void pud_init(void *addr);
>>>>>>>>  extern void pmd_init(void *addr);
>>>>>>>> +extern void kernel_pte_init(void *addr);
>>>>>>>>
>>>>>>>>  /*
>>>>>>>>   * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that
>>>>>>>> diff --git a/arch/loongarch/mm/init.c b/arch/loongarch/mm/init.c
>>>>>>>> index 8a87a482c8f4..9f26e933a8a3 100644
>>>>>>>> --- a/arch/loongarch/mm/init.c
>>>>>>>> +++ b/arch/loongarch/mm/init.c
>>>>>>>> @@ -198,9 +198,11 @@ pte_t * __init populate_kernel_pte(unsigned long addr)
>>>>>>>>         if (!pmd_present(pmdp_get(pmd))) {
>>>>>>>>                 pte_t *pte;
>>>>>>>>
>>>>>>>> -               pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>>>>>>>> +               pte = memblock_alloc_raw(PAGE_SIZE, PAGE_SIZE);
>>>>>>>>                 if (!pte)
>>>>>>>>                         panic("%s: Failed to allocate memory\n", __func__);
>>>>>>>> +
>>>>>>>> +               kernel_pte_init(pte);
>>>>>>>>                 pmd_populate_kernel(&init_mm, pmd, pte);
>>>>>>>>         }
>>>>>>>>
>>>>>>>> diff --git a/arch/loongarch/mm/kasan_init.c b/arch/loongarch/mm/kasan_init.c
>>>>>>>> index 427d6b1aec09..34988573b0d5 100644
>>>>>>>> --- a/arch/loongarch/mm/kasan_init.c
>>>>>>>> +++ b/arch/loongarch/mm/kasan_init.c
>>>>>>>> @@ -152,6 +152,8 @@ static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
>>>>>>>>                 phys_addr_t page_phys = early ? __pa_symbol(kasan_early_shadow_page)
>>>>>>>>                                               : kasan_alloc_zeroed_page(node);
>>>>>>>> +               if (!early)
>>>>>>>> +                       kernel_pte_init(__va(page_phys));
>>>>>>>>                 next = addr + PAGE_SIZE;
>>>>>>>>                 set_pte(ptep, pfn_pte(__phys_to_pfn(page_phys), PAGE_KERNEL));
>>>>>>>>         } while (ptep++, addr = next, addr != end && __pte_none(early, ptep_get(ptep)));
>>>>>>>> @@ -287,7 +289,7 @@ void __init kasan_init(void)
>>>>>>>>                 set_pte(&kasan_early_shadow_pte[i],
>>>>>>>>                         pfn_pte(__phys_to_pfn(__pa_symbol(kasan_early_shadow_page)), PAGE_KERNEL_RO));
>>>>>>>>
>>>>>>>> -       memset(kasan_early_shadow_page, 0, PAGE_SIZE);
>>>>>>>> +       kernel_pte_init(kasan_early_shadow_page);
>>>>>>>>         csr_write64(__pa_symbol(swapper_pg_dir), LOONGARCH_CSR_PGDH);
>>>>>>>>         local_flush_tlb_all();
>>>>>>>>
>>>>>>>> diff --git a/arch/loongarch/mm/pgtable.c b/arch/loongarch/mm/pgtable.c
>>>>>>>> index eb6a29b491a7..228ffc1db0a3 100644
>>>>>>>> --- a/arch/loongarch/mm/pgtable.c
>>>>>>>> +++ b/arch/loongarch/mm/pgtable.c
>>>>>>>> @@ -38,6 +38,28 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
>>>>>>>>  }
>>>>>>>>  EXPORT_SYMBOL_GPL(pgd_alloc);
>>>>>>>>
>>>>>>>> +void kernel_pte_init(void *addr)
>>>>>>>> +{
>>>>>>>> +       unsigned long *p, *end;
>>>>>>>> +       unsigned long entry;
>>>>>>>> +
>>>>>>>> +       entry = (unsigned long)_PAGE_GLOBAL;
>>>>>>>> +       p = (unsigned long *)addr;
>>>>>>>> +       end = p + PTRS_PER_PTE;
>>>>>>>> +
>>>>>>>> +       do {
>>>>>>>> +               p[0] = entry;
>>>>>>>> +               p[1] = entry;
>>>>>>>> +               p[2] = entry;
>>>>>>>> +               p[3] = entry;
>>>>>>>> +               p[4] = entry;
>>>>>>>> +               p += 8;
>>>>>>>> +               p[-3] = entry;
>>>>>>>> +               p[-2] = entry;
>>>>>>>> +               p[-1] = entry;
>>>>>>>> +       } while (p != end);
>>>>>>>> +}
>>>>>>>> +
>>>>>>>>  void pgd_init(void *addr)
>>>>>>>>  {
>>>>>>>>         unsigned long *p, *end;
>>>>>>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>>>>>>> index ecf63d2b0582..6909fe059a2c 100644
>>>>>>>> --- a/include/linux/mm.h
>>>>>>>> +++ b/include/linux/mm.h
>>>>>>>> @@ -3818,6 +3818,7 @@ void *sparse_buffer_alloc(unsigned long size);
>>>>>>>>  struct page * __populate_section_memmap(unsigned long pfn,
>>>>>>>>                 unsigned long nr_pages, int nid, struct vmem_altmap *altmap,
>>>>>>>>                 struct dev_pagemap *pgmap);
>>>>>>>> +void kernel_pte_init(void *addr);
>>>>>>>>  void pmd_init(void *addr);
>>>>>>>>  void pud_init(void *addr);
>>>>>>>>  pgd_t *vmemmap_pgd_populate(unsigned long addr, int node);
>>>>>>>> diff --git a/mm/kasan/init.c b/mm/kasan/init.c
>>>>>>>> index 89895f38f722..ac607c306292 100644
>>>>>>>> --- a/mm/kasan/init.c
>>>>>>>> +++ b/mm/kasan/init.c
>>>>>>>> @@ -106,6 +106,10 @@ static void __ref zero_pte_populate(pmd_t *pmd, unsigned long addr,
>>>>>>>>         }
>>>>>>>>  }
>>>>>>>>
>>>>>>>> +void __weak __meminit kernel_pte_init(void *addr)
>>>>>>>> +{
>>>>>>>> +}
>>>>>>>> +
>>>>>>>>  static int __ref zero_pmd_populate(pud_t *pud, unsigned long addr,
>>>>>>>>                                 unsigned long end)
>>>>>>>>  {
>>>>>>>> @@ -126,8 +130,10 @@ static int __ref zero_pmd_populate(pud_t *pud, unsigned long addr,
>>>>>>>>
>>>>>>>>                 if (slab_is_available())
>>>>>>>>                         p = pte_alloc_one_kernel(&init_mm);
>>>>>>>> -               else
>>>>>>>> +               else {
>>>>>>>>                         p = early_alloc(PAGE_SIZE, NUMA_NO_NODE);
>>>>>>>> +                       kernel_pte_init(p);
>>>>>>>> +               }
>>>>>>>>                 if (!p)
>>>>>>>>                         return -ENOMEM;
>>>>>>>>
>>>>>>>> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
>>>>>>>> index edcc7a6b0f6f..c0388b2e959d 100644
>>>>>>>> --- a/mm/sparse-vmemmap.c
>>>>>>>> +++ b/mm/sparse-vmemmap.c
>>>>>>>> @@ -184,6 +184,10 @@ static void * __meminit vmemmap_alloc_block_zero(unsigned long size, int node)
>>>>>>>>         return p;
>>>>>>>>  }
>>>>>>>>
>>>>>>>> +void __weak __meminit kernel_pte_init(void *addr)
>>>>>>>> +{
>>>>>>>> +}
>>>>>>>> +
>>>>>>>>  pmd_t * __meminit vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node)
>>>>>>>>  {
>>>>>>>>         pmd_t *pmd = pmd_offset(pud, addr);
>>>>>>>> @@ -191,6 +195,7 @@ pmd_t * __meminit vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node)
>>>>>>>>                 void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
>>>>>>>>                 if (!p)
>>>>>>>>                         return NULL;
>>>>>>>> +               kernel_pte_init(p);
>>>>>>>>                 pmd_populate_kernel(&init_mm, pmd, p);
>>>>>>>>         }
>>>>>>>>         return pmd;
>>>>>>>> --
>>>>>>>> 2.39.3