From: Alexandre Ghiti <alexghiti@rivosinc.com>
Date: Fri, 8 Dec 2023 15:28:06 +0100
Subject: Re: [PATCH RFC/RFT 1/4] riscv: Stop emitting preventive sfence.vma for new vmalloc mappings
To: Christophe Leroy
Cc: Catalin Marinas, Will Deacon, Thomas Bogendoerfer, Michael Ellerman, Nicholas Piggin, Paul Walmsley, Palmer Dabbelt, Albert Ou, Andrew Morton, Ved Shanbhogue, Matt Evans, Dylan Jhong, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-mm@kvack.org
In-Reply-To: <27d8dffc-cfd8-4f07-9c0a-b7101963c2ae@csgroup.eu>
References: <20231207150348.82096-1-alexghiti@rivosinc.com> <20231207150348.82096-2-alexghiti@rivosinc.com> <27d8dffc-cfd8-4f07-9c0a-b7101963c2ae@csgroup.eu>

Hi Christophe,

On Thu, Dec 7, 2023 at 4:52 PM Christophe Leroy wrote:
>
>
>
> On 07/12/2023 at 16:03, Alexandre Ghiti wrote:
> > In 6.5, we removed the vmalloc fault path because that can't work (see
> > [1] [2]). Then in order to make sure that new page table entries were
> > seen by the page table walker, we had to preventively emit a sfence.vma
> > on all harts [3] but this solution is very costly since it relies on IPI.
> >
> > And even there, we could end up in a loop of vmalloc faults if a vmalloc
> > allocation is done in the IPI path (for example if it is traced, see
> > [4]), which could result in a kernel stack overflow.
> >
> > Those preventive sfence.vma needed to be emitted because:
> >
> > - if the uarch caches invalid entries, the new mapping may not be
> >   observed by the page table walker and an invalidation may be needed.
> > - if the uarch does not cache invalid entries, a reordered access
> >   could "miss" the new mapping and traps: in that case, we would actually
> >   only need to retry the access, no sfence.vma is required.
> >
> > So this patch removes those preventive sfence.vma and actually handles
> > the possible (and unlikely) exceptions. And since the kernel stacks
> > mappings lie in the vmalloc area, this handling must be done very early
> > when the trap is taken, at the very beginning of handle_exception: this
> > also rules out the vmalloc allocations in the fault path.
> >
> > Note that for now, we emit a sfence.vma even for uarchs that do not
> > cache invalid entries as we have no means to know that: that will be
> > fixed in the next patch.
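The two cases above boil down to a small trap-time decision; here is a plain-C sketch of it (illustrative only — `needs_sfence` and its parameters are my names, not the patch's code):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Sketch of the policy described above: a pending new vmalloc mapping
 * only requires an sfence.vma on a uarch that caches invalid entries;
 * on other uarchs, returning from the trap and retrying the access is
 * enough. Without a pending mapping, it is a genuine fault and we take
 * the normal fault path.
 */
static bool needs_sfence(bool new_mapping_pending, bool uarch_caches_invalid)
{
	if (!new_mapping_pending)
		return false;			/* genuine fault */
	return uarch_caches_invalid;		/* flush stale invalid entry */
}
```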
> >
> > Link: https://lore.kernel.org/linux-riscv/20230531093817.665799-1-bjorn@kernel.org/ [1]
> > Link: https://lore.kernel.org/linux-riscv/20230801090927.2018653-1-dylan@andestech.com [2]
> > Link: https://lore.kernel.org/linux-riscv/20230725132246.817726-1-alexghiti@rivosinc.com/ [3]
> > Link: https://lore.kernel.org/lkml/20200508144043.13893-1-joro@8bytes.org/ [4]
> > Signed-off-by: Alexandre Ghiti
> > ---
> >   arch/riscv/include/asm/cacheflush.h  | 19 +++++-
> >   arch/riscv/include/asm/thread_info.h |  5 ++
> >   arch/riscv/kernel/asm-offsets.c      |  5 ++
> >   arch/riscv/kernel/entry.S            | 94 ++++++++++++++++++++++++++++
> >   arch/riscv/mm/init.c                 |  2 +
> >   5 files changed, 124 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
> > index 3cb53c4df27c..a916cbc69d47 100644
> > --- a/arch/riscv/include/asm/cacheflush.h
> > +++ b/arch/riscv/include/asm/cacheflush.h
> > @@ -37,7 +37,24 @@ static inline void flush_dcache_page(struct page *page)
> >       flush_icache_mm(vma->vm_mm, 0)
> >
> >   #ifdef CONFIG_64BIT
> > -#define flush_cache_vmap(start, end)         flush_tlb_kernel_range(start, end)
> > +extern u64 new_vmalloc[];
>
> Can you have the table size here ? Would help GCC static analysis for
> boundary checking.

Yes, I'll do

>
> > +extern char _end[];
> > +#define flush_cache_vmap flush_cache_vmap
> > +static inline void flush_cache_vmap(unsigned long start, unsigned long end)
> > +{
> > +     if ((start < VMALLOC_END && end > VMALLOC_START) ||
> > +         (start < MODULES_END && end > MODULES_VADDR)) {
>
> Can you use is_vmalloc_or_module_addr() instead ?

Yes, I'll do

>
> > +             int i;
> > +
> > +             /*
> > +              * We don't care if concurrently a cpu resets this value since
> > +              * the only place this can happen is in handle_exception() where
> > +              * an sfence.vma is emitted.
> > +              */
> > +             for (i = 0; i < NR_CPUS / sizeof(u64) + 1; ++i)
>
> Use ARRAY_SIZE() ?
And that too :)

Thanks for the review,

Alex

>
> > +                     new_vmalloc[i] = -1ULL;
> > +     }
> > +}
> >   #endif
> >
> >   #ifndef CONFIG_SMP
> > diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
> > index 1833beb00489..8fe12fa6c329 100644
> > --- a/arch/riscv/include/asm/thread_info.h
> > +++ b/arch/riscv/include/asm/thread_info.h
> > @@ -60,6 +60,11 @@ struct thread_info {
> >       long              user_sp;        /* User stack pointer */
> >       int               cpu;
> >       unsigned long     syscall_work;   /* SYSCALL_WORK_ flags */
> > +     /*
> > +      * Used in handle_exception() to save a0, a1 and a2 before knowing if we
> > +      * can access the kernel stack.
> > +      */
> > +     unsigned long a0, a1, a2;
> >   };
> >
> >   /*
> > diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
> > index d6a75aac1d27..340c1c84560d 100644
> > --- a/arch/riscv/kernel/asm-offsets.c
> > +++ b/arch/riscv/kernel/asm-offsets.c
> > @@ -34,10 +34,15 @@ void asm_offsets(void)
> >       OFFSET(TASK_THREAD_S9, task_struct, thread.s[9]);
> >       OFFSET(TASK_THREAD_S10, task_struct, thread.s[10]);
> >       OFFSET(TASK_THREAD_S11, task_struct, thread.s[11]);
> > +
> > +     OFFSET(TASK_TI_CPU, task_struct, thread_info.cpu);
> >       OFFSET(TASK_TI_FLAGS, task_struct, thread_info.flags);
> >       OFFSET(TASK_TI_PREEMPT_COUNT, task_struct, thread_info.preempt_count);
> >       OFFSET(TASK_TI_KERNEL_SP, task_struct, thread_info.kernel_sp);
> >       OFFSET(TASK_TI_USER_SP, task_struct, thread_info.user_sp);
> > +     OFFSET(TASK_TI_A0, task_struct, thread_info.a0);
> > +     OFFSET(TASK_TI_A1, task_struct, thread_info.a1);
> > +     OFFSET(TASK_TI_A2, task_struct, thread_info.a2);
> >
> >       OFFSET(TASK_THREAD_F0, task_struct, thread.fstate.f[0]);
> >       OFFSET(TASK_THREAD_F1, task_struct, thread.fstate.f[1]);
> > diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
> > index 143a2bb3e697..3a3c7b563816 100644
> > --- a/arch/riscv/kernel/entry.S
> > +++ b/arch/riscv/kernel/entry.S
> > @@ -14,6 +14,88 @@
> >   #include
> >   #include
> >
> > +.macro new_vmalloc_check
> > +     REG_S   a0, TASK_TI_A0(tp)
> > +     REG_S   a1, TASK_TI_A1(tp)
> > +     REG_S   a2, TASK_TI_A2(tp)
> > +
> > +     csrr    a0, CSR_CAUSE
> > +     /* Exclude IRQs */
> > +     blt     a0, zero, _new_vmalloc_restore_context
> > +     /* Only check new_vmalloc if we are in page/protection fault */
> > +     li      a1, EXC_LOAD_PAGE_FAULT
> > +     beq     a0, a1, _new_vmalloc_kernel_address
> > +     li      a1, EXC_STORE_PAGE_FAULT
> > +     beq     a0, a1, _new_vmalloc_kernel_address
> > +     li      a1, EXC_INST_PAGE_FAULT
> > +     bne     a0, a1, _new_vmalloc_restore_context
> > +
> > +_new_vmalloc_kernel_address:
> > +     /* Is it a kernel address? */
> > +     csrr    a0, CSR_TVAL
> > +     bge     a0, zero, _new_vmalloc_restore_context
> > +
> > +     /* Check if a new vmalloc mapping appeared that could explain the trap */
> > +
> > +     /*
> > +      * Computes:
> > +      * a0 = &new_vmalloc[BIT_WORD(cpu)]
> > +      * a1 = BIT_MASK(cpu)
> > +      */
> > +     REG_L   a2, TASK_TI_CPU(tp)
> > +     /*
> > +      * Compute the new_vmalloc element position:
> > +      * (cpu / 64) * 8 = (cpu >> 6) << 3
> > +      */
> > +     srli    a1, a2, 6
> > +     slli    a1, a1, 3
> > +     la      a0, new_vmalloc
> > +     add     a0, a0, a1
> > +     /*
> > +      * Compute the bit position in the new_vmalloc element:
> > +      * bit_pos = cpu % 64 = cpu - (cpu / 64) * 64 = cpu - ((cpu >> 6) << 6)
> > +      *         = cpu - (((cpu >> 6) << 3) << 3)
> > +      */
> > +     slli    a1, a1, 3
> > +     sub     a1, a2, a1
> > +     /* Compute the "get mask": 1 << bit_pos */
> > +     li      a2, 1
> > +     sll     a1, a2, a1
> > +
> > +     /* Check the value of new_vmalloc for this cpu */
> > +     ld      a2, 0(a0)
> > +     and     a2, a2, a1
> > +     beq     a2, zero, _new_vmalloc_restore_context
> > +
> > +     ld      a2, 0(a0)
> > +     not     a1, a1
> > +     and     a1, a2, a1
> > +     sd      a1, 0(a0)
> > +
> > +     /* Only emit a sfence.vma if the uarch caches invalid entries */
> > +     la      a0, tlb_caching_invalid_entries
> > +     lb      a0, 0(a0)
> > +     beqz    a0, _new_vmalloc_no_caching_invalid_entries
> > +     sfence.vma
> > +_new_vmalloc_no_caching_invalid_entries:
> > +     // debug
> > +     la      a0, nr_sfence_vma_handle_exception
> > +     li      a1, 1
> > +     amoadd.w a0, a1, (a0)
> > +     // end debug
> > +     REG_L   a0, TASK_TI_A0(tp)
> > +     REG_L   a1, TASK_TI_A1(tp)
> > +     REG_L   a2, TASK_TI_A2(tp)
> > +     csrw    CSR_SCRATCH, x0
> > +     sret
> > +
> > +_new_vmalloc_restore_context:
> > +     REG_L   a0, TASK_TI_A0(tp)
> > +     REG_L   a1, TASK_TI_A1(tp)
> > +     REG_L   a2, TASK_TI_A2(tp)
> > +.endm
> > +
> > +
> >   SYM_CODE_START(handle_exception)
> >       /*
> >        * If coming from userspace, preserve the user thread pointer and load
> > @@ -25,6 +107,18 @@ SYM_CODE_START(handle_exception)
> >
> >   _restore_kernel_tpsp:
> >       csrr    tp, CSR_SCRATCH
> > +
> > +     /*
> > +      * The RISC-V kernel does not eagerly emit a sfence.vma after each
> > +      * new vmalloc mapping, which may result in exceptions:
> > +      * - if the uarch caches invalid entries, the new mapping would not be
> > +      *   observed by the page table walker and an invalidation is needed.
> > +      * - if the uarch does not cache invalid entries, a reordered access
> > +      *   could "miss" the new mapping and traps: in that case, we only need
> > +      *   to retry the access, no sfence.vma is required.
> > +      */
> > +     new_vmalloc_check
> > +
> >       REG_S   sp, TASK_TI_KERNEL_SP(tp)
> >
> >   #ifdef CONFIG_VMAP_STACK
> > diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
> > index 0798bd861dcb..379403de6c6f 100644
> > --- a/arch/riscv/mm/init.c
> > +++ b/arch/riscv/mm/init.c
> > @@ -36,6 +36,8 @@
> >
> >   #include "../kernel/head.h"
> >
> > +u64 new_vmalloc[NR_CPUS / sizeof(u64) + 1];
> > +
> >   struct kernel_mapping kernel_map __ro_after_init;
> >   EXPORT_SYMBOL(kernel_map);
> >   #ifdef CONFIG_XIP_KERNEL
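FWIW, the word/offset/mask arithmetic in new_vmalloc_check can be double-checked in plain C (helper names are mine, mirroring what BIT_WORD()/BIT_MASK() would compute for 64-bit words — this is not kernel code):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Each u64 of new_vmalloc[] holds 64 cpu bits, so the asm derives:
 * - a word index cpu / 64, i.e. cpu >> 6
 * - a byte offset into the array, word index * sizeof(u64), i.e.
 *   (cpu >> 6) << 3
 * - a mask 1 << (cpu % 64)
 */
static inline uint64_t nv_word(unsigned int cpu)
{
	return cpu >> 6;		/* BIT_WORD(cpu) for 64-bit words */
}

static inline uint64_t nv_byte_offset(unsigned int cpu)
{
	return (cpu >> 6) << 3;		/* word index scaled by sizeof(u64) */
}

static inline uint64_t nv_mask(unsigned int cpu)
{
	return 1ULL << (cpu & 63);	/* BIT_MASK(cpu) */
}
```

Note that the asm's `sll` would reduce the shift amount mod 64 on RV64 anyway, so the explicit bit_pos computation matches `nv_mask()` exactly.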