From: Andrey Konovalov
Date: Tue, 5 Apr 2022 17:38:07 +0200
Subject: Re: [PATCH v2 3/4] arm64: implement stack_trace_save_shadow
To: Mark Rutland
Cc: andrey.konovalov@linux.dev, Marco Elver, Alexander Potapenko, Catalin Marinas, Will Deacon, Andrew Morton, Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Vincenzo Frascino, Sami Tolvanen, Peter Collingbourne, Evgenii Stepanov, Florian Mayer, Linux Memory Management List, LKML, Andrey Konovalov
On Thu, Mar 31, 2022 at 11:32 AM Mark Rutland wrote:
>
> This doesn't do any of the trampoline repainting (e.g. for kretprobes or
> ftrace graph caller) that the regular unwinder does, so if either of those
> are in use this is going to produce bogus results.

I responded to this on the cover letter.

> > +noinline notrace int arch_stack_walk_shadow(unsigned long *store,
> > +					    unsigned int size,
> > +					    unsigned int skipnr)
> > +{
> > +	unsigned long *scs_top, *scs_base, *scs_next;
> > +	unsigned int len = 0, part;
> > +
> > +	preempt_disable();
>
> This doesn't look necessary; it's certainly not needed for the regular
> unwinder.
>
> Critically, in the common case of unwinding just the task stack, we don't
> need to look at any of the per-cpu stacks, and so there's no need to
> disable preemption. See the stack nesting logic in the regular unwinder.

The regular unwinder doesn't access per-CPU variables at all, which is
why it doesn't need preempt_disable(). Here the per-CPU variables are
accessed, but only for reading, so preempt_disable() is probably not
required either, unless LOCKDEP or some other tool complains about it
(see the first sketch at the end of this mail).

> If we *do* need to unwind per-cpu stacks, we figure that out and verify
> our context *at* the transition point.

I'm not sure I understand this statement. Do you mean that we need to
keep track of the currently relevant SCS base and update it in the
interrupt handlers? That would require modifying the entry code.

> > +
> > +	/* Get the SCS pointer. */
> > +	asm volatile("mov %0, x18" : "=&r" (scs_top));
>
> Does the compiler guarantee where this happens relative to any prologue
> manipulation of x18?
>
> This seems like something we should be using a compiler intrinsic for, or
> have a wrapper that passes this in if necessary.

This is a good point, I'll investigate it (one idea is sketched at the
end of this mail).

> > +
> > +	/* The top SCS slot is empty. */
> > +	scs_top -= 1;
> > +
> > +	/* Handle SDEI and hardirq frames. */
> > +	for (part = 0; part < ARRAY_SIZE(scs_parts); part++) {
> > +		scs_next = *this_cpu_ptr(scs_parts[part].saved);
> > +		if (scs_next) {
> > +			scs_base = *this_cpu_ptr(scs_parts[part].base);
> > +			if (walk_shadow_stack_part(scs_top, scs_base, store,
> > +						   size, &skipnr, &len))
> > +				goto out;
> > +			scs_top = scs_next;
> > +		}
> > +	}
>
> We have a number of potential stack nesting orders (and may need to
> introduce more stacks in future), so I think we need to be more careful
> with this. The regular unwinder handles that dynamically.

I'll rewrite this part based on the other comments, so let's discuss it
then (a simplified sketch of the per-segment walk is appended below for
reference).

Thanks!
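P.S. To make the points above more concrete, a few rough sketches follow.
None of this is tested; take it as illustration only.

First, on the preemption question: plain this_cpu_ptr() warns under
CONFIG_DEBUG_PREEMPT when called with preemption enabled, so a read-only
walk without the preempt_disable()/preempt_enable() pair would have to go
through raw_cpu_ptr(). The per-CPU variable name below is made up for the
example:

/* Hypothetical per-CPU variable; the real ones live in the patch. */
DECLARE_PER_CPU(unsigned long *, scs_irq_saved);

static unsigned long *read_scs_irq_saved(void)
{
	/*
	 * raw_cpu_ptr() skips the CONFIG_DEBUG_PREEMPT check that
	 * this_cpu_ptr() performs, so this can run with preemption
	 * enabled; READ_ONCE() makes the single load explicit. If we
	 * migrate to another CPU mid-walk, we read that CPU's slot
	 * instead, which is exactly the trade-off being discussed.
	 */
	return READ_ONCE(*raw_cpu_ptr(&scs_irq_saved));
}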
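Second, on reading x18: one alternative to the bare mov is a named register
variable used as an asm operand, similar in spirit to how arm64 defines
current_stack_pointer. This is only a sketch, and Mark's question about
ordering relative to prologue manipulation of x18 still applies to it:

#ifdef CONFIG_SHADOW_CALL_STACK
static __always_inline unsigned long *get_scs_sp(void)
{
	/*
	 * With -ffixed-x18 in force, tie a local register variable to
	 * x18 and read it via an empty asm, so the read is visible to
	 * the compiler rather than hidden in a volatile template.
	 */
	register unsigned long *scs_sp asm("x18");

	asm("" : "=r" (scs_sp));
	return scs_sp;
}
#endif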
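Finally, for reference during the nesting discussion, here is a simplified
sketch of what a single-segment walker along the lines of the quoted
walk_shadow_stack_part() call could look like (the actual helper in the
patch differs in details):

static bool walk_shadow_stack_part(unsigned long *scs_top,
				   unsigned long *scs_base,
				   unsigned long *store, unsigned int size,
				   unsigned int *skipnr, unsigned int *len)
{
	unsigned long *frame;

	/* The SCS grows up, so walk from the newest frame to the base. */
	for (frame = scs_top; frame >= scs_base; frame--) {
		if (*skipnr > 0) {
			(*skipnr)--;
			continue;
		}
		if (*len >= size)
			return true;	/* out of space: stop the walk */
		store[(*len)++] = *frame;
	}

	return false;	/* segment done: continue with the next one */
}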