Date: Thu, 14 Apr 2022 13:46:01 +0100
From: Mark Rutland <mark.rutland@arm.com>
To: andrey.konovalov@linux.dev
Cc: Marco Elver, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
 Andrey Ryabinin, kasan-dev@googlegroups.com, Catalin Marinas, Will Deacon,
 Vincenzo Frascino, Sami Tolvanen, linux-arm-kernel@lists.infradead.org,
 Peter Collingbourne, Evgenii Stepanov, Florian Mayer, Andrew Morton,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Konovalov
Subject: Re: [PATCH v3 2/3] kasan, arm64: implement stack_trace_save_shadow
In-Reply-To: <78cd352296ceb14da1d0136ff7d0a6818e594ab7.1649877511.git.andreyknvl@google.com>

On Wed, Apr 13, 2022 at 09:26:45PM +0200, andrey.konovalov@linux.dev wrote:
> From: Andrey Konovalov
>
> Implement stack_trace_save_shadow(), which collects stack traces based on
> the Shadow Call Stack (SCS) for arm64 by copying the frames from the SCS.
>
> The implementation is best-effort and thus has limitations.
>
> stack_trace_save_shadow() fully handles task and softirq contexts, which
> are both processed on the per-task SCS.
>
> For hardirqs, the support is limited: stack_trace_save_shadow() does not
> collect the task part of the stack trace. For KASAN, this is not a
> problem, as stack depot only saves the interrupt part of the stack anyway.
>
> Otherwise, stack_trace_save_shadow() takes a best-effort approach with a
> focus on performance. Thus, it:
>
> - Does not try to collect stack traces from other exceptions like SDEI.
> - Does not try to recover frames modified by KRETPROBES or by FTRACE.
>
> However, stack_trace_save_shadow() does strip PTR_AUTH tags to avoid
> leaking them in stack traces.
>
> The -ENOSYS return value is deliberately used to match
> stack_trace_save_tsk_reliable().
>
> Signed-off-by: Andrey Konovalov
> ---
>  mm/kasan/common.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 62 insertions(+)

As things stand, NAK to this patch, for the reasons I have laid out in my
replies to earlier postings and in my reply to the cover letter of this
posting. To be clear, that NAK applies regardless of where this is placed
within the kernel tree. If we *really* need to have a special unwinder,
that should live under arch/arm64/, but my first objection is that it is
not necessary.

I am more than happy to extend the existing unwinder with some options to
minimize overhead (e.g. to stop dumping at an exception boundary), since
that sounds useful to you, and I know it is relatively simple to implement.

Thanks,
Mark.

> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index d9079ec11f31..23b30fa6e270 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -30,6 +30,68 @@
>  #include "kasan.h"
>  #include "../slab.h"
>
> +#ifdef CONFIG_SHADOW_CALL_STACK
> +#include
> +#include
> +
> +/*
> + * Collect the stack trace from the Shadow Call Stack in a best-effort
> + * manner:
> + *
> + * - Do not collect the task part of the stack trace when in a hardirq.
> + * - Do not collect stack traces from other exception levels like SDEI.
> + * - Do not recover frames modified by KRETPROBES or by FTRACE.
> + *
> + * Note that marking the function with __noscs leads to unacceptable
> + * performance impact, as helper functions stop being inlined.
> + */
> +static inline int stack_trace_save_shadow(unsigned long *store,
> +					  unsigned int size)
> +{
> +	unsigned long *scs_top, *scs_base, *frame;
> +	unsigned int len = 0;
> +
> +	/* Get the SCS base. */
> +	if (in_task() || in_serving_softirq()) {
> +		/* Softirqs reuse the task SCS area. */
> +		scs_base = task_scs(current);
> +	} else if (in_hardirq()) {
> +		/* Hardirqs use a per-CPU SCS area. */
> +		scs_base = *this_cpu_ptr(&irq_shadow_call_stack_ptr);
> +	} else {
> +		/* Ignore other exception levels. */
> +		return 0;
> +	}
> +
> +	/*
> +	 * Get the SCS pointer.
> +	 *
> +	 * Note that this assembly might be placed before the function's
> +	 * prologue. In this case, the last stack frame will be lost. This is
> +	 * acceptable: the lost frame will correspond to an internal KASAN
> +	 * function, which is not relevant to identify the external call site.
> +	 */
> +	asm volatile("mov %0, x18" : "=&r" (scs_top));
> +
> +	/* The top SCS slot is empty. */
> +	scs_top -= 1;
> +
> +	for (frame = scs_top; frame >= scs_base; frame--) {
> +		if (len >= size)
> +			break;
> +		/* Do not leak PTR_AUTH tags in stack traces. */
> +		store[len++] = ptrauth_strip_insn_pac(*frame);
> +	}
> +
> +	return len;
> +}
> +#else /* CONFIG_SHADOW_CALL_STACK */
> +static inline int stack_trace_save_shadow(unsigned long *store,
> +					  unsigned int size)
> +{
> +	return -ENOSYS;
> +}
> +#endif /* CONFIG_SHADOW_CALL_STACK */
> +
>  depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc)
>  {
>  	unsigned long entries[KASAN_STACK_DEPTH];
> --
> 2.25.1
>