From: Thomas Gleixner <tglx@linutronix.de>
To: Alexander Potapenko
Cc: Alexander Viro, Andrew Morton, Andrey Konovalov, Andy Lutomirski,
 Arnd Bergmann, Borislav Petkov, Christoph Hellwig, Christoph Lameter,
 David Rientjes, Dmitry Vyukov, Eric Dumazet, Greg Kroah-Hartman,
 Herbert Xu, Ilya Leoshkevich, Ingo Molnar, Jens Axboe, Joonsoo Kim,
 Kees Cook, Marco Elver, Mark Rutland, Matthew Wilcox,
 "Michael S. Tsirkin", Pekka Enberg, Peter Zijlstra, Petr Mladek,
 Steven Rostedt, Vasily Gorbik, Vegard Nossum, Vlastimil Babka,
 kasan-dev, Linux Memory Management List, Linux-Arch, LKML
Subject: Re: [PATCH v3 28/46] kmsan: entry: handle register passing from uninstrumented code
References: <20220426164315.625149-1-glider@google.com>
 <20220426164315.625149-29-glider@google.com> <87a6c6y7mg.ffs@tglx>
 <87y1zjlhmj.ffs@tglx> <878rrfiqyr.ffs@tglx> <87k0ayhc43.ffs@tglx>
 <87h762h5c2.ffs@tglx> <871qx2r09k.ffs@tglx>
Date: Thu, 12 May 2022 18:17:11 +0200
Message-ID: <87h75uvi7s.ffs@tglx>
On Thu, May 12 2022 at 14:24, Alexander Potapenko wrote:
> On Mon, May 9, 2022 at 9:09 PM Thomas Gleixner wrote:
>> > So in the case when `hardirq_count() >> HARDIRQ_SHIFT` is greater
>> > than 1, kmsan_in_runtime() becomes a no-op, which leads to false
>> > positives.
>>
>> But that'd only be > 1 when there is a nested interrupt, which is not
>> the case. Interrupt handlers keep interrupts disabled. The last
>> exception to that rule was some legacy IDE driver, which is gone by
>> now.
>
> That's good to know; then we probably don't need this hardirq_count()
> check anymore.
>
>> So no, not a good explanation either.
>
> After looking deeper I see that unpoisoning was indeed skipped because
> kmsan_in_runtime() returned true, but I was wrong about the root
> cause. The problem was not caused by a nested hardirq, but rather by
> the fact that the KMSAN hook in irqentry_enter() was called with
> in_task() == 1.

Argh, the preempt counter increment happens _after_ irqentry_enter().

> I think the best that can be done here is (as suggested above) to
> provide some kmsan_unpoison_pt_regs() function that will only be
> called from the entry points and won't be doing reentrancy checks.
> It should be safe, because unpoisoning boils down to calculating
> shadow/origin addresses and calling memset() on them; no instrumented
> code will be involved.

If you keep them where I placed them, then there is no need for a
noinstr function. It's already instrumentable.

> We could try to figure out the places in idtentry code where the
> normal kmsan_unpoison_memory() can be called in IRQ context, but as
> far as I can see it will depend on the type of the entry point.

NMI is covered, as it increments the preempt count before it invokes
the unpoison(). Let me figure out why we increment the preempt count
late for interrupts.
IIRC it's for symmetry reasons related to softirq processing on return,
but let me double-check.

> Another way to deal with the problem is to not rely on in_task(), but
> rather use some per-cpu counter in irqentry_enter()/irqentry_exit()
> to figure out whether we are in IRQ code already.

Well, if you have an irqentry()-specific unpoison, then you know the
context, right?

> However, this is only possible if irqentry_enter() itself guarantees
> that the execution cannot be rescheduled to another CPU - is that the
> case?

Obviously. It runs with interrupts disabled and eventually on a
separate interrupt stack.

Thanks,

        tglx