From: Alexandre Ghiti <alexghiti@rivosinc.com>
Date: Tue, 4 Jun 2024 09:17:26 +0200
Subject: Re: [External] [PATCH RFC/RFT v2 3/4] riscv: Stop emitting preventive sfence.vma for new vmalloc mappings
To: yunhui cui, Conor Dooley
Cc: Catalin Marinas, Will Deacon, Thomas Bogendoerfer, Michael Ellerman, Nicholas Piggin, Christophe Leroy, Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrew Morton, Ved Shanbhogue, Matt Evans, Dylan Jhong, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-mm@kvack.org
On Tue, Jun 4, 2024 at 9:15 AM Alexandre Ghiti wrote:
>
> Hi Yunhui,
>
> On Tue, Jun 4, 2024 at 8:21 AM yunhui cui wrote:
> >
> > Hi Alexandre,
> >
> > On Mon, Jun 3, 2024 at 8:02 PM Alexandre Ghiti wrote:
> > >
> > > Hi Yunhui,
> > >
> > > On Mon, Jun 3, 2024 at 4:26 AM yunhui cui wrote:
> > > >
> > > > Hi Alexandre,
> > > >
> > > > On Thu, Feb 1, 2024 at 12:03 AM Alexandre Ghiti wrote:
> > > > >
> > > > > In 6.5, we removed the vmalloc fault path because that can't work (see
> > > > > [1] [2]). Then, in order to make sure that new page table entries were
> > > > > seen by the page table walker, we had to preventively emit a sfence.vma
> > > > > on all harts [3], but this solution is very costly since it relies on IPIs.
> > > > >
> > > > > And even then, we could end up in a loop of vmalloc faults if a vmalloc
> > > > > allocation is done in the IPI path (for example if it is traced, see
> > > > > [4]), which could result in a kernel stack overflow.
> > > > >
> > > > > Those preventive sfence.vma needed to be emitted because:
> > > > >
> > > > > - if the uarch caches invalid entries, the new mapping may not be
> > > > >   observed by the page table walker and an invalidation may be needed.
> > > > > - if the uarch does not cache invalid entries, a reordered access
> > > > >   could "miss" the new mapping and trap: in that case, we would actually
> > > > >   only need to retry the access; no sfence.vma is required.
> > > > >
> > > > > So this patch removes those preventive sfence.vma and actually handles
> > > > > the possible (and unlikely) exceptions. And since the kernel stack
> > > > > mappings lie in the vmalloc area, this handling must be done very early,
> > > > > when the trap is taken, at the very beginning of handle_exception: this
> > > > > also rules out vmalloc allocations in the fault path.
> > > > >
> > > > > Link: https://lore.kernel.org/linux-riscv/20230531093817.665799-1-bjorn@kernel.org/ [1]
> > > > > Link: https://lore.kernel.org/linux-riscv/20230801090927.2018653-1-dylan@andestech.com [2]
> > > > > Link: https://lore.kernel.org/linux-riscv/20230725132246.817726-1-alexghiti@rivosinc.com/ [3]
> > > > > Link: https://lore.kernel.org/lkml/20200508144043.13893-1-joro@8bytes.org/ [4]
> > > > > Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
> > > > > ---
> > > > >  arch/riscv/include/asm/cacheflush.h  | 18 +++++-
> > > > >  arch/riscv/include/asm/thread_info.h |  5 ++
> > > > >  arch/riscv/kernel/asm-offsets.c      |  5 ++
> > > > >  arch/riscv/kernel/entry.S            | 84 ++++++++++++++++++++++++++++
> > > > >  arch/riscv/mm/init.c                 |  2 +
> > > > >  5 files changed, 113 insertions(+), 1 deletion(-)
> > > > >
> > > > > diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
> > > > > index a129dac4521d..b0d631701757 100644
> > > > > --- a/arch/riscv/include/asm/cacheflush.h
> > > > > +++ b/arch/riscv/include/asm/cacheflush.h
> > > > > @@ -37,7 +37,23 @@ static inline void flush_dcache_page(struct page *page)
> > > > >  	flush_icache_mm(vma->vm_mm, 0)
> > > > >
> > > > >  #ifdef CONFIG_64BIT
> > > > > -#define flush_cache_vmap(start, end)		flush_tlb_kernel_range(start, end)
> > > > > +extern u64 new_vmalloc[NR_CPUS / sizeof(u64) + 1];
> > > > > +extern char _end[];
> > > > > +#define flush_cache_vmap flush_cache_vmap
> > > > > +static inline void flush_cache_vmap(unsigned long start, unsigned long end)
> > > > > +{
> > > > > +	if (is_vmalloc_or_module_addr((void *)start)) {
> > > > > +		int i;
> > > > > +
> > > > > +		/*
> > > > > +		 * We don't care if concurrently a cpu resets this value since
> > > > > +		 * the only place this can happen is in handle_exception() where
> > > > > +		 * an sfence.vma is emitted.
> > > > > +		 */
> > > > > +		for (i = 0; i < ARRAY_SIZE(new_vmalloc); ++i)
> > > > > +			new_vmalloc[i] = -1ULL;
> > > > > +	}
> > > > > +}
> > > > >  #define flush_cache_vmap_early(start, end)	local_flush_tlb_kernel_range(start, end)
> > > > >  #endif
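
[Editorial aside: the hunk above is the writer side of a simple hand-off
protocol: flush_cache_vmap() sets every cpu's bit in new_vmalloc, and each
cpu later clears only its own bit in the trap path before retrying. A
minimal user-space C model of that protocol (hypothetical names, simplified
atomics, not code from the patch) could look like:

	#include <stdatomic.h>
	#include <stdint.h>

	#define NR_CPUS 64
	static _Atomic uint64_t new_vmalloc[NR_CPUS / 64 + 1];

	/* Writer: a new vmalloc mapping appeared, mark it pending for every cpu. */
	static void mark_new_vmalloc(void)
	{
		for (unsigned int i = 0; i < NR_CPUS / 64 + 1; i++)
			atomic_store(&new_vmalloc[i], ~0ULL);
	}

	/* Reader (trap path): did this cpu have a pending mapping? Clear it if so. */
	static int test_and_clear_pending(unsigned int cpu)
	{
		uint64_t mask = 1ULL << (cpu % 64);

		if (!(atomic_load(&new_vmalloc[cpu / 64]) & mask))
			return 0;	/* nothing pending: a genuine fault */
		/* clear only our bit, like the amoxor in new_vmalloc_check */
		atomic_fetch_xor(&new_vmalloc[cpu / 64], mask);
		return 1;	/* fence if the uarch needs it, then retry */
	}
]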
> > > > >
> > > > > diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
> > > > > index 5d473343634b..32631acdcdd4 100644
> > > > > --- a/arch/riscv/include/asm/thread_info.h
> > > > > +++ b/arch/riscv/include/asm/thread_info.h
> > > > > @@ -60,6 +60,11 @@ struct thread_info {
> > > > >  	void *scs_base;
> > > > >  	void *scs_sp;
> > > > >  #endif
> > > > > +	/*
> > > > > +	 * Used in handle_exception() to save a0, a1 and a2 before knowing if we
> > > > > +	 * can access the kernel stack.
> > > > > +	 */
> > > > > +	unsigned long a0, a1, a2;
> > > > >  };
> > > > >
> > > > >  #ifdef CONFIG_SHADOW_CALL_STACK
> > > > > diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
> > > > > index a03129f40c46..939ddc0e3c6e 100644
> > > > > --- a/arch/riscv/kernel/asm-offsets.c
> > > > > +++ b/arch/riscv/kernel/asm-offsets.c
> > > > > @@ -35,6 +35,8 @@ void asm_offsets(void)
> > > > >  	OFFSET(TASK_THREAD_S9, task_struct, thread.s[9]);
> > > > >  	OFFSET(TASK_THREAD_S10, task_struct, thread.s[10]);
> > > > >  	OFFSET(TASK_THREAD_S11, task_struct, thread.s[11]);
> > > > > +
> > > > > +	OFFSET(TASK_TI_CPU, task_struct, thread_info.cpu);
> > > > >  	OFFSET(TASK_TI_FLAGS, task_struct, thread_info.flags);
> > > > >  	OFFSET(TASK_TI_PREEMPT_COUNT, task_struct, thread_info.preempt_count);
> > > > >  	OFFSET(TASK_TI_KERNEL_SP, task_struct, thread_info.kernel_sp);
> > > > > @@ -42,6 +44,9 @@ void asm_offsets(void)
> > > > >  #ifdef CONFIG_SHADOW_CALL_STACK
> > > > >  	OFFSET(TASK_TI_SCS_SP, task_struct, thread_info.scs_sp);
> > > > >  #endif
> > > > > +	OFFSET(TASK_TI_A0, task_struct, thread_info.a0);
> > > > > +	OFFSET(TASK_TI_A1, task_struct, thread_info.a1);
> > > > > +	OFFSET(TASK_TI_A2, task_struct, thread_info.a2);
> > > > >
> > > > >  	OFFSET(TASK_TI_CPU_NUM, task_struct, thread_info.cpu);
> > > > >  	OFFSET(TASK_THREAD_F0, task_struct, thread.fstate.f[0]);
> > > > > diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
> > > > > index 9d1a305d5508..c1ffaeaba7aa 100644
> > > > > --- a/arch/riscv/kernel/entry.S
> > > > > +++ b/arch/riscv/kernel/entry.S
> > > > > @@ -19,6 +19,78 @@
> > > > >
> > > > >  .section .irqentry.text, "ax"
> > > > >
> > > > > +.macro new_vmalloc_check
> > > > > +	REG_S	a0, TASK_TI_A0(tp)
> > > > > +	REG_S	a1, TASK_TI_A1(tp)
> > > > > +	REG_S	a2, TASK_TI_A2(tp)
> > > > > +
> > > > > +	csrr	a0, CSR_CAUSE
> > > > > +	/* Exclude IRQs */
> > > > > +	blt	a0, zero, _new_vmalloc_restore_context
> > > > > +	/* Only check new_vmalloc if we are in page/protection fault */
> > > > > +	li	a1, EXC_LOAD_PAGE_FAULT
> > > > > +	beq	a0, a1, _new_vmalloc_kernel_address
> > > > > +	li	a1, EXC_STORE_PAGE_FAULT
> > > > > +	beq	a0, a1, _new_vmalloc_kernel_address
> > > > > +	li	a1, EXC_INST_PAGE_FAULT
> > > > > +	bne	a0, a1, _new_vmalloc_restore_context
> > > > > +
> > > > > +_new_vmalloc_kernel_address:
> > > > > +	/* Is it a kernel address? */
> > > > > +	csrr	a0, CSR_TVAL
> > > > > +	bge	a0, zero, _new_vmalloc_restore_context
> > > > > +
> > > > > +	/* Check if a new vmalloc mapping appeared that could explain the trap */
> > > > > +
> > > > > +	/*
> > > > > +	 * Computes:
> > > > > +	 * a0 = &new_vmalloc[BIT_WORD(cpu)]
> > > > > +	 * a1 = BIT_MASK(cpu)
> > > > > +	 */
> > > > > +	REG_L	a2, TASK_TI_CPU(tp)
> > > > > +	/*
> > > > > +	 * Compute the new_vmalloc element position:
> > > > > +	 * (cpu / 64) * 8 = (cpu >> 6) << 3
> > > > > +	 */
> > > > > +	srli	a1, a2, 6
> > > > > +	slli	a1, a1, 3
> > > > > +	la	a0, new_vmalloc
> > > > > +	add	a0, a0, a1
> > > > > +	/*
> > > > > +	 * Compute the bit position in the new_vmalloc element:
> > > > > +	 * bit_pos = cpu % 64 = cpu - (cpu / 64) * 64 = cpu - (cpu >> 6) << 6
> > > > > +	 *         = cpu - ((cpu >> 6) << 3) << 3
> > > > > +	 */
> > > > > +	slli	a1, a1, 3
> > > > > +	sub	a1, a2, a1
> > > > > +	/* Compute the "get mask": 1 << bit_pos */
> > > > > +	li	a2, 1
> > > > > +	sll	a1, a2, a1
> > > > > +
> > > > > +	/* Check the value of new_vmalloc for this cpu */
> > > > > +	REG_L	a2, 0(a0)
> > > > > +	and	a2, a2, a1
> > > > > +	beq	a2, zero, _new_vmalloc_restore_context
> > > > > +
> > > > > +	/* Atomically reset the current cpu bit in new_vmalloc */
> > > > > +	amoxor.w	a0, a1, (a0)
> > > > > +
> > > > > +	/* Only emit a sfence.vma if the uarch caches invalid entries */
> > > > > +	ALTERNATIVE("sfence.vma", "nop", 0, RISCV_ISA_EXT_SVVPTC, 1)
> > > > > +
> > > > > +	REG_L	a0, TASK_TI_A0(tp)
> > > > > +	REG_L	a1, TASK_TI_A1(tp)
> > > > > +	REG_L	a2, TASK_TI_A2(tp)
> > > > > +	csrw	CSR_SCRATCH, x0
> > > > > +	sret
> > > > > +
> > > > > +_new_vmalloc_restore_context:
> > > > > +	REG_L	a0, TASK_TI_A0(tp)
> > > > > +	REG_L	a1, TASK_TI_A1(tp)
> > > > > +	REG_L	a2, TASK_TI_A2(tp)
> > > > > +.endm
> > > > > +
> > > > > +
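
[Editorial aside: the address/mask arithmetic in the macro above is
BIT_WORD()/BIT_MASK() open-coded with only three scratch registers. The
equivalent C, mirroring the comments in the asm (an illustrative sketch,
not code from the patch), is:

	#include <stdint.h>

	/* a0 = &new_vmalloc[cpu / 64]; the asm scales the word index by
	 * 8 bytes to form a byte offset: (cpu / 64) * 8 = (cpu >> 6) << 3. */
	static inline uint64_t *new_vmalloc_word(uint64_t *new_vmalloc,
						 unsigned int cpu)
	{
		return new_vmalloc + (cpu >> 6);
	}

	/* a1 = BIT_MASK(cpu) = 1 << (cpu % 64), where
	 * cpu % 64 = cpu - (cpu / 64) * 64 = cpu - ((cpu >> 6) << 6). */
	static inline uint64_t new_vmalloc_mask(unsigned int cpu)
	{
		return 1ULL << (cpu - ((cpu >> 6) << 6));
	}
]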
> > > > >  SYM_CODE_START(handle_exception)
> > > > >  	/*
> > > > >  	 * If coming from userspace, preserve the user thread pointer and load
> > > > > @@ -30,6 +102,18 @@ SYM_CODE_START(handle_exception)
> > > > >
> > > > >  .Lrestore_kernel_tpsp:
> > > > >  	csrr	tp, CSR_SCRATCH
> > > > > +
> > > > > +	/*
> > > > > +	 * The RISC-V kernel does not eagerly emit a sfence.vma after each
> > > > > +	 * new vmalloc mapping, which may result in exceptions:
> > > > > +	 * - if the uarch caches invalid entries, the new mapping would not be
> > > > > +	 *   observed by the page table walker and an invalidation is needed.
> > > > > +	 * - if the uarch does not cache invalid entries, a reordered access
> > > > > +	 *   could "miss" the new mapping and trap: in that case, we only need
> > > > > +	 *   to retry the access, no sfence.vma is required.
> > > > > +	 */
> > > > > +	new_vmalloc_check
> > > > > +
> > > > >  	REG_S	sp, TASK_TI_KERNEL_SP(tp)
> > > > >
> > > > >  #ifdef CONFIG_VMAP_STACK
> > > > > diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
> > > > > index eafc4c2200f2..54c9fdeda11e 100644
> > > > > --- a/arch/riscv/mm/init.c
> > > > > +++ b/arch/riscv/mm/init.c
> > > > > @@ -36,6 +36,8 @@
> > > > >
> > > > >  #include "../kernel/head.h"
> > > > >
> > > > > +u64 new_vmalloc[NR_CPUS / sizeof(u64) + 1];
> > > > > +
> > > > >  struct kernel_mapping kernel_map __ro_after_init;
> > > > >  EXPORT_SYMBOL(kernel_map);
> > > > >  #ifdef CONFIG_XIP_KERNEL
> > > > > --
> > > > > 2.39.2
> > > > >
> > > >
> > > > Can we consider using new_vmalloc as a percpu variable, so that we
> > > > don't need to add a0/a1/a2 in thread_info?
> > >
> > > At first, I used percpu variables. But then I realized that percpu
> > > areas are allocated in the vmalloc area, so if somehow we took a trap
> > > when accessing the new_vmalloc percpu variable, we could not recover
> > > from this as we would trap forever in new_vmalloc_check. But
> > > admittedly, I'm not sure that can happen.
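
[Editorial aside: a hypothetical sketch of the recursion hazard Alex
describes; names and shape are made up, not code from either mail:

	#include <linux/percpu.h>
	#include <asm/tlbflush.h>

	static DEFINE_PER_CPU(unsigned long, new_vmalloc_pending);

	static void check_new_vmalloc(void)
	{
		/*
		 * this_cpu_xchg() dereferences the percpu area, which itself
		 * lives in the vmalloc region: if the faulting page is the
		 * percpu page, this very access re-faults and we re-enter
		 * the trap handler forever.
		 */
		if (this_cpu_xchg(new_vmalloc_pending, 0))
			local_flush_tlb_all();
	}
]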
> > >
> > > And how would that remove a0, a1 and a2 from thread_info? We'd still
> > > need to save some registers somewhere to access the percpu variable,
> > > right?
> > >
> > > > Also, try not to do too much calculation logic in
> > > > new_vmalloc_check; after all, handle_exception is a high-frequency
> > > > path. In this case, can we consider writing new_vmalloc_check in C
> > > > to increase readability?
> > >
> > > If we write that in C, we don't have control over the allocated
> > > registers and then we can't correctly save the context.
> >
> > If we use C, new_vmalloc_check is written just like do_irq(), so we
> > would need _save_context; but for new_vmalloc_check that is not worth
> > it, because exceptions from user mode do not need new_vmalloc_check,
> > which also shows that it is reasonable to put new_vmalloc_check after
> > _restore_kernel_tpsp.
> >
> > Saving is necessary. We can save a0, a1, a2 without using thread_info:
> > we can choose to save on the kernel stack of the current tp, but that
> > requires adding the following instructions:
> > 	REG_S	sp, TASK_TI_USER_SP(tp)
> > 	REG_L	sp, TASK_TI_KERNEL_SP(tp)
> > 	addi	sp, sp, -(PT_SIZE_ON_STACK)
> > Saving directly in thread_info is more direct, but saving on the
> > kernel stack is more logically consistent, and there is no need to
> > increase the size of thread_info.
>
> You can't save on the kernel stack since kernel stacks are allocated
> in the vmalloc area.
>
> > As for the current status of the patch, there are two points that
> > could be optimized:
> >
> > 1. Some hardware implementations may not cache invalid TLB entries,
> > so it doesn't matter whether Svvptc is available or not. Can we
> > consider adding a CONFIG_RISCV_SVVPTC to control it?

That would produce a non-portable kernel. But I'm not opposed to that at
all; let me check how we handle other extensions. Maybe @Conor Dooley
has some feedback here?

> > 2. .macro new_vmalloc_check
> > 	REG_S	a0, TASK_TI_A0(tp)
> > 	REG_S	a1, TASK_TI_A1(tp)
> > 	REG_S	a2, TASK_TI_A2(tp)
> > When executing "blt a0, zero, _new_vmalloc_restore_context", we do not
> > need to have saved a1 and a2 yet.
>
> Ok, I can do that :)
>
> Thanks again for your inputs,
>
> Alex
>
> > >
> > > Thanks for your interest in this patchset :)
> > >
> > > Alex
> > >
> > > >
> > > > Thanks,
> > > > Yunhui
> >
> > Thanks,
> > Yunhui