From mboxrd@z Thu Jan  1 00:00:00 1970
From: Deepak Gupta <debug@rivosinc.com>
Date: Thu, 24 Apr 2025 00:20:27 -0700
Subject: [PATCH v13 12/28] riscv: Implement arch agnostic shadow stack prctls
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Message-Id: <20250424-v5_user_cfi_series-v13-12-971437de586a@rivosinc.com>
References: <20250424-v5_user_cfi_series-v13-0-971437de586a@rivosinc.com>
In-Reply-To: <20250424-v5_user_cfi_series-v13-0-971437de586a@rivosinc.com>
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
 "H. Peter Anvin", Andrew Morton, "Liam R. Howlett", Vlastimil Babka,
 Lorenzo Stoakes, Paul Walmsley, Palmer Dabbelt, Albert Ou, Conor Dooley,
 Rob Herring, Krzysztof Kozlowski, Arnd Bergmann, Christian Brauner,
 Peter Zijlstra, Oleg Nesterov, Eric Biederman, Kees Cook, Jonathan Corbet,
 Shuah Khan, Jann Horn, Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo,
 Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-riscv@lists.infradead.org,
 devicetree@vger.kernel.org, linux-arch@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-kselftest@vger.kernel.org,
 alistair.francis@wdc.com, richard.henderson@linaro.org, jim.shu@sifive.com,
 andybnac@gmail.com, kito.cheng@sifive.com, charlie@rivosinc.com,
 atishp@rivosinc.com, evan@rivosinc.com, cleger@rivosinc.com,
 alexghiti@rivosinc.com, samitolvanen@google.com, broonie@kernel.org,
 rick.p.edgecombe@intel.com, rust-for-linux@vger.kernel.org,
 Zong Li, Deepak Gupta
X-Mailer: b4 0.13.0
Implement the architecture agnostic prctl() interface for setting and getting
shadow stack status. The prctls implemented are PR_GET_SHADOW_STACK_STATUS,
PR_SET_SHADOW_STACK_STATUS and PR_LOCK_SHADOW_STACK_STATUS.
As part of PR_SET_SHADOW_STACK_STATUS/PR_GET_SHADOW_STACK_STATUS, only
PR_SHADOW_STACK_ENABLE is implemented, because RISC-V allows each mode to
write to its own shadow stack using `sspush` or `ssamoswap`.

PR_LOCK_SHADOW_STACK_STATUS locks the current configuration of shadow stack
enabling.

Reviewed-by: Zong Li
Signed-off-by: Deepak Gupta
---
 arch/riscv/include/asm/usercfi.h |  18 ++++++-
 arch/riscv/kernel/process.c      |   8 +++
 arch/riscv/kernel/usercfi.c      | 110 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 135 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/include/asm/usercfi.h b/arch/riscv/include/asm/usercfi.h
index 82d28ac98d76..c4dcd256f19a 100644
--- a/arch/riscv/include/asm/usercfi.h
+++ b/arch/riscv/include/asm/usercfi.h
@@ -7,6 +7,7 @@
 #ifndef __ASSEMBLY__
 
 #include <linux/types.h>
+#include <linux/prctl.h>
 
 struct task_struct;
 struct kernel_clone_args;
@@ -14,7 +15,8 @@ struct kernel_clone_args;
 #ifdef CONFIG_RISCV_USER_CFI
 struct cfi_status {
 	unsigned long ubcfi_en : 1; /* Enable for backward cfi. */
-	unsigned long rsvd : ((sizeof(unsigned long) * 8) - 1);
+	unsigned long ubcfi_locked : 1;
+	unsigned long rsvd : ((sizeof(unsigned long) * 8) - 2);
 	unsigned long user_shdw_stk; /* Current user shadow stack pointer */
 	unsigned long shdw_stk_base; /* Base address of shadow stack */
 	unsigned long shdw_stk_size; /* size of shadow stack */
@@ -27,6 +29,12 @@ void set_shstk_base(struct task_struct *task, unsigned long shstk_addr, unsigned
 unsigned long get_shstk_base(struct task_struct *task, unsigned long *size);
 void set_active_shstk(struct task_struct *task, unsigned long shstk_addr);
 bool is_shstk_enabled(struct task_struct *task);
+bool is_shstk_locked(struct task_struct *task);
+bool is_shstk_allocated(struct task_struct *task);
+void set_shstk_lock(struct task_struct *task);
+void set_shstk_status(struct task_struct *task, bool enable);
+
+#define PR_SHADOW_STACK_SUPPORTED_STATUS_MASK (PR_SHADOW_STACK_ENABLE)
 
 #else
 
@@ -42,6 +50,14 @@ bool is_shstk_enabled(struct task_struct *task);
 
 #define is_shstk_enabled(task) false
 
+#define is_shstk_locked(task) false
+
+#define is_shstk_allocated(task) false
+
+#define set_shstk_lock(task)
+
+#define set_shstk_status(task, enable)
+
 #endif /* CONFIG_RISCV_USER_CFI */
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
index 99acb6342a37..cd11667593fe 100644
--- a/arch/riscv/kernel/process.c
+++ b/arch/riscv/kernel/process.c
@@ -153,6 +153,14 @@ void start_thread(struct pt_regs *regs, unsigned long pc,
 	regs->epc = pc;
 	regs->sp = sp;
 
+	/*
+	 * Clear shadow stack state on exec.
+	 * libc will set it up later via prctl.
+	 */
+	set_shstk_status(current, false);
+	set_shstk_base(current, 0, 0);
+	set_active_shstk(current, 0);
+
 #ifdef CONFIG_64BIT
 	regs->status &= ~SR_UXL;
 
diff --git a/arch/riscv/kernel/usercfi.c b/arch/riscv/kernel/usercfi.c
index ec3d78efd6f3..08620bdae696 100644
--- a/arch/riscv/kernel/usercfi.c
+++ b/arch/riscv/kernel/usercfi.c
@@ -24,6 +24,16 @@ bool is_shstk_enabled(struct task_struct *task)
 	return task->thread_info.user_cfi_state.ubcfi_en;
 }
 
+bool is_shstk_allocated(struct task_struct *task)
+{
+	return task->thread_info.user_cfi_state.shdw_stk_base;
+}
+
+bool is_shstk_locked(struct task_struct *task)
+{
+	return task->thread_info.user_cfi_state.ubcfi_locked;
+}
+
 void set_shstk_base(struct task_struct *task, unsigned long shstk_addr, unsigned long size)
 {
 	task->thread_info.user_cfi_state.shdw_stk_base = shstk_addr;
@@ -42,6 +52,26 @@ void set_active_shstk(struct task_struct *task, unsigned long shstk_addr)
 	task->thread_info.user_cfi_state.user_shdw_stk = shstk_addr;
 }
 
+void set_shstk_status(struct task_struct *task, bool enable)
+{
+	if (!cpu_supports_shadow_stack())
+		return;
+
+	task->thread_info.user_cfi_state.ubcfi_en = enable ? 1 : 0;
+
+	if (enable)
+		task->thread.envcfg |= ENVCFG_SSE;
+	else
+		task->thread.envcfg &= ~ENVCFG_SSE;
+
+	csr_write(CSR_ENVCFG, task->thread.envcfg);
+}
+
+void set_shstk_lock(struct task_struct *task)
+{
+	task->thread_info.user_cfi_state.ubcfi_locked = 1;
+}
+
 /*
  * If size is 0, then to be compatible with regular stack we want it to be as big as
  * regular stack. Else PAGE_ALIGN it and return back
@@ -261,3 +291,83 @@ void shstk_release(struct task_struct *tsk)
 	vm_munmap(base, size);
 	set_shstk_base(tsk, 0, 0);
 }
+
+int arch_get_shadow_stack_status(struct task_struct *t, unsigned long __user *status)
+{
+	unsigned long bcfi_status = 0;
+
+	if (!cpu_supports_shadow_stack())
+		return -EINVAL;
+
+	/* This means shadow stack is enabled on the task */
+	bcfi_status |= (is_shstk_enabled(t) ? PR_SHADOW_STACK_ENABLE : 0);
+
+	return copy_to_user(status, &bcfi_status, sizeof(bcfi_status)) ? -EFAULT : 0;
+}
+
+int arch_set_shadow_stack_status(struct task_struct *t, unsigned long status)
+{
+	unsigned long size = 0, addr = 0;
+	bool enable_shstk = false;
+
+	if (!cpu_supports_shadow_stack())
+		return -EINVAL;
+
+	/* Reject unknown flags */
+	if (status & ~PR_SHADOW_STACK_SUPPORTED_STATUS_MASK)
+		return -EINVAL;
+
+	/* bcfi status is locked and can no longer be modified by the user */
+	if (is_shstk_locked(t))
+		return -EINVAL;
+
+	enable_shstk = status & PR_SHADOW_STACK_ENABLE;
+	/* Request is to enable shadow stack and it is not enabled already */
+	if (enable_shstk && !is_shstk_enabled(t)) {
+		/*
+		 * A shadow stack was already allocated and an enable request
+		 * came in again; no need to support such a usecase, return
+		 * EINVAL.
+		 */
+		if (is_shstk_allocated(t))
+			return -EINVAL;
+
+		size = calc_shstk_size(0);
+		addr = allocate_shadow_stack(0, size, 0, false);
+		if (IS_ERR_VALUE(addr))
+			return -ENOMEM;
+		set_shstk_base(t, addr, size);
+		set_active_shstk(t, addr + size);
+	}
+
+	/*
+	 * If a request to disable shadow stack comes in, release it. If a
+	 * CLONE_VFORKed child did this, we end up not releasing the shadow
+	 * stack (because the parent might still need it), but we do disable
+	 * it for the vforked child. If the vforked child then enables it
+	 * again, it gets an entirely new shadow stack, because at that point
+	 * the following conditions are true:
+	 *   - shadow stack was not enabled for the vforked child
+	 *   - shadow stack base was pointing to 0
+	 * This shouldn't be a big issue: the parent keeps its shadow stack
+	 * available when the vforked child releases resources via exit or
+	 * exec, while the child can still break away and establish a new
+	 * shadow stack if it desires.
+	 */
+	if (!enable_shstk)
+		shstk_release(t);
+
+	set_shstk_status(t, enable_shstk);
+	return 0;
+}
+
+int arch_lock_shadow_stack_status(struct task_struct *task,
+				  unsigned long arg)
+{
+	/* If shadow stack is not supported or not enabled on the task, nothing to lock */
+	if (!cpu_supports_shadow_stack() ||
+	    !is_shstk_enabled(task) || arg != 0)
+		return -EINVAL;
+
+	set_shstk_lock(task);
+
+	return 0;
+}
-- 
2.43.0