Subject: Re: [PATCH v12 05/28] riscv: usercfi state for task and save/restore of CSR_SSP on trap entry/exit
From: Alexandre Ghiti <alex@ghiti.fr>
Date: Tue, 8 Apr 2025 10:05:48 +0200
To: Deepak Gupta
Cc: linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org
In-Reply-To: <20250314-v5_user_cfi_series-v12-5-e51202b53138@rivosinc.com>

On 14/03/2025 22:39, Deepak Gupta wrote:
> Carves out space in arch specific thread struct for cfi status and shadow
> stack in usermode on riscv.
>
> This patch does the following
> - defines a new structure cfi_status with status bit for cfi feature
> - defines shadow stack pointer, base and size in cfi_status structure
> - defines offsets to new member fields in thread in asm-offsets.c
> - Saves and restores shadow stack pointer on trap entry (U --> S) and exit
>   (S --> U)
>
> Shadow stack save/restore is gated on feature availability and implemented
> using alternative. CSR can be context switched in `switch_to` as well but
> soon as kernel shadow stack support gets rolled in, shadow stack pointer
> will need to be switched at trap entry/exit point (much like `sp`). It can
> be argued that kernel using shadow stack deployment scenario may not be as
> prevalent as user mode using this feature. But even if there is some
> minimal deployment of kernel shadow stack, that means that it needs to be
> supported. And thus save/restore of shadow stack pointer in entry.S instead
> of in `switch_to.h`.
>
> Reviewed-by: Charlie Jenkins
> Reviewed-by: Zong Li
> Signed-off-by: Deepak Gupta
> ---
>  arch/riscv/include/asm/processor.h   |  1 +
>  arch/riscv/include/asm/thread_info.h |  3 +++
>  arch/riscv/include/asm/usercfi.h     | 24 ++++++++++++++++++++++++
>  arch/riscv/kernel/asm-offsets.c      |  4 ++++
>  arch/riscv/kernel/entry.S            | 26 ++++++++++++++++++++++++++
>  5 files changed, 58 insertions(+)
>
> diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
> index e3aba3336e63..d851bb5c6da0 100644
> --- a/arch/riscv/include/asm/processor.h
> +++ b/arch/riscv/include/asm/processor.h
> @@ -14,6 +14,7 @@
>
>  #include
>  #include
> +#include
>
>  #define arch_get_mmap_end(addr, len, flags) \
>  ({ \
> diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
> index f5916a70879a..a0cfe00c2ca6 100644
> --- a/arch/riscv/include/asm/thread_info.h
> +++ b/arch/riscv/include/asm/thread_info.h
> @@ -62,6 +62,9 @@ struct thread_info {
>  	long		user_sp;	/* User stack pointer */
>  	int		cpu;
>  	unsigned long	syscall_work;	/* SYSCALL_WORK_ flags */
> +#ifdef CONFIG_RISCV_USER_CFI
> +	struct cfi_status	user_cfi_state;
> +#endif
>  #ifdef CONFIG_SHADOW_CALL_STACK
>  	void	*scs_base;
>  	void	*scs_sp;
> diff --git a/arch/riscv/include/asm/usercfi.h b/arch/riscv/include/asm/usercfi.h
> new file mode 100644
> index 000000000000..5f2027c51917
> --- /dev/null
> +++ b/arch/riscv/include/asm/usercfi.h
> @@ -0,0 +1,24 @@
> +/* SPDX-License-Identifier: GPL-2.0
> + * Copyright (C) 2024 Rivos, Inc.
> + * Deepak Gupta
> + */
> +#ifndef _ASM_RISCV_USERCFI_H
> +#define _ASM_RISCV_USERCFI_H
> +
> +#ifndef __ASSEMBLY__
> +#include
> +
> +#ifdef CONFIG_RISCV_USER_CFI
> +struct cfi_status {
> +	unsigned long ubcfi_en : 1; /* Enable for backward cfi. */
> +	unsigned long rsvd : ((sizeof(unsigned long) * 8) - 1);
> +	unsigned long user_shdw_stk; /* Current user shadow stack pointer */
> +	unsigned long shdw_stk_base; /* Base address of shadow stack */
> +	unsigned long shdw_stk_size; /* size of shadow stack */
> +};
> +
> +#endif /* CONFIG_RISCV_USER_CFI */
> +
> +#endif /* __ASSEMBLY__ */
> +
> +#endif /* _ASM_RISCV_USERCFI_H */
> diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
> index e89455a6a0e5..0c188aaf3925 100644
> --- a/arch/riscv/kernel/asm-offsets.c
> +++ b/arch/riscv/kernel/asm-offsets.c
> @@ -50,6 +50,10 @@ void asm_offsets(void)
>  #endif
>
>  	OFFSET(TASK_TI_CPU_NUM, task_struct, thread_info.cpu);
> +#ifdef CONFIG_RISCV_USER_CFI
> +	OFFSET(TASK_TI_CFI_STATUS, task_struct, thread_info.user_cfi_state);
> +	OFFSET(TASK_TI_USER_SSP, task_struct, thread_info.user_cfi_state.user_shdw_stk);
> +#endif
>  	OFFSET(TASK_THREAD_F0,  task_struct, thread.fstate.f[0]);
>  	OFFSET(TASK_THREAD_F1,  task_struct, thread.fstate.f[1]);
>  	OFFSET(TASK_THREAD_F2,  task_struct, thread.fstate.f[2]);
> diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
> index 33a5a9f2a0d4..68c99124ea55 100644
> --- a/arch/riscv/kernel/entry.S
> +++ b/arch/riscv/kernel/entry.S
> @@ -147,6 +147,20 @@ SYM_CODE_START(handle_exception)
>
>  	REG_L s0, TASK_TI_USER_SP(tp)
>  	csrrc s1, CSR_STATUS, t0
> +	/*
> +	 * If previous mode was U, capture shadow stack pointer and save it away
> +	 * Zero CSR_SSP at the same time for sanitization.
> +	 */
> +	ALTERNATIVE("nop; nop; nop; nop",

You could use __nops(4) here instead.

> +		    __stringify(			\
> +		    andi s2, s1, SR_SPP;		\
> +		    bnez s2, skip_ssp_save;		\
> +		    csrrw s2, CSR_SSP, x0;		\
> +		    REG_S s2, TASK_TI_USER_SSP(tp);	\
> +		    skip_ssp_save:),
> +		    0,
> +		    RISCV_ISA_EXT_ZICFISS,
> +		    CONFIG_RISCV_USER_CFI)
>  	csrr s2, CSR_EPC
>  	csrr s3, CSR_TVAL
>  	csrr s4, CSR_CAUSE
> @@ -236,6 +250,18 @@ SYM_CODE_START_NOALIGN(ret_from_exception)
>  	 * structures again.
>  	 */
>  	csrw CSR_SCRATCH, tp
> +
> +	/*
> +	 * Going back to U mode, restore shadow stack pointer
> +	 */
> +	ALTERNATIVE("nop; nop",

Ditto

> +		    __stringify(			\
> +		    REG_L s3, TASK_TI_USER_SSP(tp);	\
> +		    csrw CSR_SSP, s3),
> +		    0,
> +		    RISCV_ISA_EXT_ZICFISS,
> +		    CONFIG_RISCV_USER_CFI)
> +
>  1:
>  #ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE
>  	move a0, sp
>

Apart from the nits above, you can add:

Reviewed-by: Alexandre Ghiti

Thanks,

Alex