From: Alexandre Ghiti
Date: Tue, 4 Jun 2024 09:15:00 +0200
Subject: Re: [External] [PATCH RFC/RFT v2 3/4] riscv: Stop emitting preventive sfence.vma for new vmalloc mappings
To: yunhui cui
Cc: Catalin Marinas, Will Deacon, Thomas Bogendoerfer, Michael Ellerman,
 Nicholas Piggin, Christophe Leroy, Paul Walmsley, Palmer Dabbelt,
 Albert Ou, Andrew Morton, Ved Shanbhogue, Matt Evans, Dylan Jhong,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org, linux-mm@kvack.org
References: <20240131155929.169961-1-alexghiti@rivosinc.com> <20240131155929.169961-4-alexghiti@rivosinc.com>

Hi Yunhui,

On Tue, Jun 4, 2024 at 8:21 AM yunhui cui wrote:
>
> Hi Alexandre,
>
> On Mon, Jun 3, 2024 at 8:02 PM Alexandre Ghiti wrote:
> >
> > Hi Yunhui,
> >
> > On Mon, Jun 3, 2024 at 4:26 AM yunhui cui wrote:
> > >
> > > Hi Alexandre,
> > >
> > > On Thu, Feb 1, 2024 at 12:03 AM Alexandre Ghiti wrote:
> > > >
> > > > In 6.5, we removed the vmalloc fault path because that can't work (see
> > > > [1] [2]). Then in order to make sure that new page table entries were
> > > > seen by the page table walker, we had to preventively emit a sfence.vma
> > > > on all harts [3] but this solution is very costly since it relies on IPI.
> > > >
> > > > And even there, we could end up in a loop of vmalloc faults if a vmalloc
> > > > allocation is done in the IPI path (for example if it is traced, see
> > > > [4]), which could result in a kernel stack overflow.
> > > >
> > > > Those preventive sfence.vma needed to be emitted because:
> > > >
> > > > - if the uarch caches invalid entries, the new mapping may not be
> > > >   observed by the page table walker and an invalidation may be needed.
> > > > - if the uarch does not cache invalid entries, a reordered access
> > > >   could "miss" the new mapping and traps: in that case, we would actually
> > > >   only need to retry the access, no sfence.vma is required.
> > > >
> > > > So this patch removes those preventive sfence.vma and actually handles
> > > > the possible (and unlikely) exceptions. And since the kernel stacks
> > > > mappings lie in the vmalloc area, this handling must be done very early
> > > > when the trap is taken, at the very beginning of handle_exception: this
> > > > also rules out the vmalloc allocations in the fault path.
> > > >
> > > > Link: https://lore.kernel.org/linux-riscv/20230531093817.665799-1-bjorn@kernel.org/ [1]
> > > > Link: https://lore.kernel.org/linux-riscv/20230801090927.2018653-1-dylan@andestech.com [2]
> > > > Link: https://lore.kernel.org/linux-riscv/20230725132246.817726-1-alexghiti@rivosinc.com/ [3]
> > > > Link: https://lore.kernel.org/lkml/20200508144043.13893-1-joro@8bytes.org/ [4]
> > > > Signed-off-by: Alexandre Ghiti
> > > > ---
> > > >  arch/riscv/include/asm/cacheflush.h  | 18 +++++-
> > > >  arch/riscv/include/asm/thread_info.h |  5 ++
> > > >  arch/riscv/kernel/asm-offsets.c      |  5 ++
> > > >  arch/riscv/kernel/entry.S            | 84 ++++++++++++++++++++++++++++
> > > >  arch/riscv/mm/init.c                 |  2 +
> > > >  5 files changed, 113 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
> > > > index a129dac4521d..b0d631701757 100644
> > > > --- a/arch/riscv/include/asm/cacheflush.h
> > > > +++ b/arch/riscv/include/asm/cacheflush.h
> > > > @@ -37,7 +37,23 @@ static inline void flush_dcache_page(struct page *page)
> > > >          flush_icache_mm(vma->vm_mm, 0)
> > > >
> > > >  #ifdef CONFIG_64BIT
> > > > -#define flush_cache_vmap(start, end)            flush_tlb_kernel_range(start, end)
> > > > +extern u64 new_vmalloc[NR_CPUS / sizeof(u64) + 1];
> > > > +extern char _end[];
> > > > +#define flush_cache_vmap flush_cache_vmap
> > > > +static inline void flush_cache_vmap(unsigned long start, unsigned long end)
> > > > +{
> > > > +        if (is_vmalloc_or_module_addr((void *)start)) {
> > > > +                int i;
> > > > +
> > > > +                /*
> > > > +                 * We don't care if concurrently a cpu resets this value since
> > > > +                 * the only place this can happen is in handle_exception() where
> > > > +                 * an sfence.vma is emitted.
> > > > +                 */
> > > > +                for (i = 0; i < ARRAY_SIZE(new_vmalloc); ++i)
> > > > +                        new_vmalloc[i] = -1ULL;
> > > > +        }
> > > > +}
> > > >  #define flush_cache_vmap_early(start, end)      local_flush_tlb_kernel_range(start, end)
> > > >  #endif
> > > >
> > > > diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
> > > > index 5d473343634b..32631acdcdd4 100644
> > > > --- a/arch/riscv/include/asm/thread_info.h
> > > > +++ b/arch/riscv/include/asm/thread_info.h
> > > > @@ -60,6 +60,11 @@ struct thread_info {
> > > >          void    *scs_base;
> > > >          void    *scs_sp;
> > > >  #endif
> > > > +        /*
> > > > +         * Used in handle_exception() to save a0, a1 and a2 before knowing if we
> > > > +         * can access the kernel stack.
> > > > +         */
> > > > +        unsigned long a0, a1, a2;
> > > >  };
> > > >
> > > >  #ifdef CONFIG_SHADOW_CALL_STACK
> > > > diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
> > > > index a03129f40c46..939ddc0e3c6e 100644
> > > > --- a/arch/riscv/kernel/asm-offsets.c
> > > > +++ b/arch/riscv/kernel/asm-offsets.c
> > > > @@ -35,6 +35,8 @@ void asm_offsets(void)
> > > >          OFFSET(TASK_THREAD_S9, task_struct, thread.s[9]);
> > > >          OFFSET(TASK_THREAD_S10, task_struct, thread.s[10]);
> > > >          OFFSET(TASK_THREAD_S11, task_struct, thread.s[11]);
> > > > +
> > > > +        OFFSET(TASK_TI_CPU, task_struct, thread_info.cpu);
> > > >          OFFSET(TASK_TI_FLAGS, task_struct, thread_info.flags);
> > > >          OFFSET(TASK_TI_PREEMPT_COUNT, task_struct, thread_info.preempt_count);
> > > >          OFFSET(TASK_TI_KERNEL_SP, task_struct, thread_info.kernel_sp);
> > > > @@ -42,6 +44,9 @@ void asm_offsets(void)
> > > >  #ifdef CONFIG_SHADOW_CALL_STACK
> > > >          OFFSET(TASK_TI_SCS_SP, task_struct, thread_info.scs_sp);
> > > >  #endif
> > > > +        OFFSET(TASK_TI_A0, task_struct, thread_info.a0);
> > > > +        OFFSET(TASK_TI_A1, task_struct, thread_info.a1);
> > > > +        OFFSET(TASK_TI_A2, task_struct, thread_info.a2);
> > > >
> > > >          OFFSET(TASK_TI_CPU_NUM, task_struct, thread_info.cpu);
> > > >          OFFSET(TASK_THREAD_F0, task_struct, thread.fstate.f[0]);
> > > > diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
> > > > index 9d1a305d5508..c1ffaeaba7aa 100644
> > > > --- a/arch/riscv/kernel/entry.S
> > > > +++ b/arch/riscv/kernel/entry.S
> > > > @@ -19,6 +19,78 @@
> > > >
> > > >          .section .irqentry.text, "ax"
> > > >
> > > > +.macro new_vmalloc_check
> > > > +        REG_S   a0, TASK_TI_A0(tp)
> > > > +        REG_S   a1, TASK_TI_A1(tp)
> > > > +        REG_S   a2, TASK_TI_A2(tp)
> > > > +
> > > > +        csrr    a0, CSR_CAUSE
> > > > +        /* Exclude IRQs */
> > > > +        blt     a0, zero, _new_vmalloc_restore_context
> > > > +        /* Only check new_vmalloc if we are in page/protection fault */
> > > > +        li      a1, EXC_LOAD_PAGE_FAULT
> > > > +        beq     a0, a1, _new_vmalloc_kernel_address
> > > > +        li      a1, EXC_STORE_PAGE_FAULT
> > > > +        beq     a0, a1, _new_vmalloc_kernel_address
> > > > +        li      a1, EXC_INST_PAGE_FAULT
> > > > +        bne     a0, a1, _new_vmalloc_restore_context
> > > > +
> > > > +_new_vmalloc_kernel_address:
> > > > +        /* Is it a kernel address? */
> > > > +        csrr    a0, CSR_TVAL
> > > > +        bge     a0, zero, _new_vmalloc_restore_context
> > > > +
> > > > +        /* Check if a new vmalloc mapping appeared that could explain the trap */
> > > > +
> > > > +        /*
> > > > +         * Computes:
> > > > +         * a0 = &new_vmalloc[BIT_WORD(cpu)]
> > > > +         * a1 = BIT_MASK(cpu)
> > > > +         */
> > > > +        REG_L   a2, TASK_TI_CPU(tp)
> > > > +        /*
> > > > +         * Compute the new_vmalloc element position:
> > > > +         * (cpu / 64) * 8 = (cpu >> 6) << 3
> > > > +         */
> > > > +        srli    a1, a2, 6
> > > > +        slli    a1, a1, 3
> > > > +        la      a0, new_vmalloc
> > > > +        add     a0, a0, a1
> > > > +        /*
> > > > +         * Compute the bit position in the new_vmalloc element:
> > > > +         * bit_pos = cpu % 64 = cpu - (cpu / 64) * 64 = cpu - (cpu >> 6) << 6
> > > > +         *         = cpu - ((cpu >> 6) << 3) << 3
> > > > +         */
> > > > +        slli    a1, a1, 3
> > > > +        sub     a1, a2, a1
> > > > +        /* Compute the "get mask": 1 << bit_pos */
> > > > +        li      a2, 1
> > > > +        sll     a1, a2, a1
> > > > +
> > > > +        /* Check the value of new_vmalloc for this cpu */
> > > > +        REG_L   a2, 0(a0)
> > > > +        and     a2, a2, a1
> > > > +        beq     a2, zero, _new_vmalloc_restore_context
> > > > +
> > > > +        /* Atomically reset the current cpu bit in new_vmalloc */
> > > > +        amoxor.w        a0, a1, (a0)
> > > > +
> > > > +        /* Only emit a sfence.vma if the uarch caches invalid entries */
> > > > +        ALTERNATIVE("sfence.vma", "nop", 0, RISCV_ISA_EXT_SVVPTC, 1)
> > > > +
> > > > +        REG_L   a0, TASK_TI_A0(tp)
> > > > +        REG_L   a1, TASK_TI_A1(tp)
> > > > +        REG_L   a2, TASK_TI_A2(tp)
> > > > +        csrw    CSR_SCRATCH, x0
> > > > +        sret
> > > > +
> > > > +_new_vmalloc_restore_context:
> > > > +        REG_L   a0, TASK_TI_A0(tp)
> > > > +        REG_L   a1, TASK_TI_A1(tp)
> > > > +        REG_L   a2, TASK_TI_A2(tp)
> > > > +.endm
> > > > +
> > > > +
> > > >  SYM_CODE_START(handle_exception)
> > > >          /*
> > > >           * If coming from userspace, preserve the user thread pointer and load
> > > > @@ -30,6 +102,18 @@ SYM_CODE_START(handle_exception)
> > > >
> > > >  .Lrestore_kernel_tpsp:
> > > >          csrr    tp, CSR_SCRATCH
> > > > +
> > > > +        /*
> > > > +         * The RISC-V kernel does not eagerly emit a sfence.vma after each
> > > > +         * new vmalloc mapping, which may result in exceptions:
> > > > +         * - if the uarch caches invalid entries, the new mapping would not be
> > > > +         *   observed by the page table walker and an invalidation is needed.
> > > > +         * - if the uarch does not cache invalid entries, a reordered access
> > > > +         *   could "miss" the new mapping and traps: in that case, we only need
> > > > +         *   to retry the access, no sfence.vma is required.
> > > > +         */
> > > > +        new_vmalloc_check
> > > > +
> > > >          REG_S   sp, TASK_TI_KERNEL_SP(tp)
> > > >
> > > >  #ifdef CONFIG_VMAP_STACK
> > > > diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
> > > > index eafc4c2200f2..54c9fdeda11e 100644
> > > > --- a/arch/riscv/mm/init.c
> > > > +++ b/arch/riscv/mm/init.c
> > > > @@ -36,6 +36,8 @@
> > > >
> > > >  #include "../kernel/head.h"
> > > >
> > > > +u64 new_vmalloc[NR_CPUS / sizeof(u64) + 1];
> > > > +
> > > >  struct kernel_mapping kernel_map __ro_after_init;
> > > >  EXPORT_SYMBOL(kernel_map);
> > > >  #ifdef CONFIG_XIP_KERNEL
> > > > --
> > > > 2.39.2
> > > >
> > > >
> > > Can we consider using new_vmalloc as a percpu variable, so that we
> > > don't need to add a0/1/2 in thread_info?
> > At first, I used percpu variables. But then I realized that percpu
> > areas are allocated in the vmalloc area, so if somehow we take a trap
> > when accessing the new_vmalloc percpu variable, we could not recover
> > from this as we would trap forever in new_vmalloc_check. But
> > admittedly, not sure that can happen.
> >
> > And how would that remove a0, a1 and a2 from thread_info? We'd still
> > need to save some registers somewhere to access the percpu variable
> > right?
> >
> > > Also, try not to do too much
> > > calculation logic in new_vmalloc_check, after all, handle_exception is
> > > a high-frequency path. In this case, can we consider writing
> > > new_vmalloc_check in C language to increase readability?
> >
> > If we write that in C, we don't have the control over the allocated
> > registers and then we can't correctly save the context.
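
(A side note for readers: even though the check itself has to stay in
assembly for the register-allocation reason above, what it computes can
be sketched in C. The snippet below is purely illustrative, it is not
part of the patch, and the function name, array size and atomic builtin
were chosen for this example only.)

#include <stdbool.h>
#include <stdint.h>

#define EXAMPLE_NR_CPUS 128
static uint64_t new_vmalloc[EXAMPLE_NR_CPUS / 64];      /* one bit per cpu */

static bool new_vmalloc_check_model(unsigned int cpu)
{
        uint64_t *word = &new_vmalloc[cpu / 64];  /* byte offset (cpu >> 6) << 3 */
        uint64_t mask = 1ULL << (cpu % 64);       /* "get mask": 1 << bit_pos */

        /* No new vmalloc mapping was published for this cpu: nothing to do */
        if (!(*word & mask))
                return false;

        /* Atomically clear this cpu's bit, as the macro does with amoxor */
        __atomic_fetch_xor(word, mask, __ATOMIC_RELAXED);

        /* At this point the assembly emits sfence.vma (skipped with Svvptc) */
        return true;
}

int main(void)
{
        new_vmalloc[0] = ~0ULL;   /* flush_cache_vmap() sets every cpu's bit */
        return new_vmalloc_check_model(3) ? 0 : 1;
}
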
>
> If we use C language, new_vmalloc_check is written just like do_irq(),
> then we need _save_context, but for new_vmalloc_check, it is not worth
> the loss, because exceptions from user mode do not need
> new_vmalloc_check, which also shows that it is reasonable to put
> new_vmalloc_check after _restore_kernel_tpsp.
>
> Saving is necessary. We can save a0, a1, a2 without using thread_info.
> We can choose to save on the kernel stack of the current tp, but we
> need to add the following instructions:
> REG_S sp, TASK_TI_USER_SP(tp)
> REG_L sp, TASK_TI_KERNEL_SP(tp)
> addi sp, sp, -(PT_SIZE_ON_STACK)
> It seems that saving directly on thread_info is more direct, but
> saving on the kernel stack is more logically consistent, and there is
> no need to increase the size of thread_info.

You can't save on the kernel stack since kernel stacks are allocated in
the vmalloc area.

>
> As for the current status of the patch, there are two points that can
> be optimized:
> 1. Some chip hardware implementations may not cache TLB invalid
> entries, so it doesn't matter whether svvptc is available or not. Can
> we consider adding a CONFIG_RISCV_SVVPTC to control it?
>
> 2. .macro new_vmalloc_check
> REG_S a0, TASK_TI_A0(tp)
> REG_S a1, TASK_TI_A1(tp)
> REG_S a2, TASK_TI_A2(tp)
> When executing blt a0, zero, _new_vmalloc_restore_context, you can not
> save a1, a2 first

Ok, I can do that :)

Thanks again for your inputs,

Alex

> >
> > Thanks for your interest in this patchset :)
> > Alex
> > >
> > > Thanks,
> > > Yunhui
>
> Thanks,
> Yunhui
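
(One more illustrative aside: the shift identities used in the comments
of new_vmalloc_check can be sanity-checked with a few lines of
standalone C; nothing below comes from the patch itself, it only
restates the arithmetic in those comments.)

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        for (uint64_t cpu = 0; cpu < 1024; cpu++) {
                /* word byte offset: (cpu / 64) * 8 == (cpu >> 6) << 3 */
                assert((cpu / 64) * 8 == ((cpu >> 6) << 3));
                /* bit position: cpu % 64 == cpu - (cpu >> 6) << 6 */
                assert(cpu % 64 == cpu - ((cpu >> 6) << 6));
                /* (cpu >> 6) << 6 is the already computed offset shifted again by 3 */
                assert((((cpu >> 6) << 3) << 3) == ((cpu >> 6) << 6));
        }
        printf("shift identities from the new_vmalloc_check comments hold\n");
        return 0;
}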