From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Sun, 24 Nov 2024 22:46:01 +0100
From: Frederic Weisbecker <frederic@kernel.org>
To: Valentin Schneider
Cc: linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    kvm@vger.kernel.org, linux-mm@kvack.org, bpf@vger.kernel.org,
    x86@kernel.org, rcu@vger.kernel.org, linux-kselftest@vger.kernel.org,
    Nicolas Saenz Julienne, Steven Rostedt, Masami Hiramatsu,
    Jonathan Corbet, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Dave Hansen, "H. Peter Anvin", Paolo Bonzini, Wanpeng Li,
    Vitaly Kuznetsov, Andy Lutomirski, Peter Zijlstra,
    "Paul E. McKenney", Neeraj Upadhyay, Joel Fernandes, Josh Triplett,
    Boqun Feng, Mathieu Desnoyers, Lai Jiangshan, Zqiang, Andrew Morton,
    Uladzislau Rezki, Christoph Hellwig, Lorenzo Stoakes, Josh Poimboeuf,
    Jason Baron, Kees Cook, Sami Tolvanen, Ard Biesheuvel,
    Nicholas Piggin, Juerg Haefliger, "Kirill A. Shutemov", Nadav Amit,
    Dan Carpenter, Chuang Wang, Yang Jihong, Petr Mladek,
    "Jason A. Donenfeld", Song Liu, Julian Pidancet, Tom Lendacky,
    Dionna Glaze, Thomas Weißschuh, Juri Lelli, Marcelo Tosatti,
    Yair Podemsky, Daniel Wagner, Petr Tesarik
Subject: Re: [RFC PATCH v3 11/15] context-tracking: Introduce work deferral infrastructure
Message-ID:
References: <20241119153502.41361-1-vschneid@redhat.com>
 <20241119153502.41361-12-vschneid@redhat.com>
In-Reply-To:
On Fri, Nov 22, 2024 at 03:56:59PM +0100, Valentin Schneider wrote:
> On 20/11/24 18:30, Frederic Weisbecker wrote:
> > On Wed, Nov 20, 2024 at 06:10:43PM +0100, Valentin Schneider wrote:
> >> On 20/11/24 15:23, Frederic Weisbecker wrote:
> >>
> >> > Ah but there is CT_STATE_GUEST and I see the last patch also applies that to
> >> > CT_STATE_IDLE.
> >> >
> >> > So that could be:
> >> >
> >> > bool ct_set_cpu_work(unsigned int cpu, unsigned int work)
> >> > {
> >> >         struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu);
> >> >         unsigned int old;
> >> >         bool ret = false;
> >> >
> >> >         preempt_disable();
> >> >
> >> >         old = atomic_read(&ct->state);
> >> >
> >> >         /* CT_STATE_IDLE can be added to last patch here */
> >> >         if (!(old & (CT_STATE_USER | CT_STATE_GUEST))) {
> >> >                 old &= ~CT_STATE_MASK;
> >> >                 old |= CT_STATE_USER;
> >> >         }
> >>
> >> Hmph, so that lets us leverage the cmpxchg for a !CT_STATE_KERNEL check,
> >> but we get an extra loop if the target CPU exits kernelspace not to
> >> userspace (e.g. vcpu or idle) in the meantime - not great, not terrible.
> >
> > The thing is, what you read with atomic_read() should be close to reality.
> > If it already is != CT_STATE_KERNEL then you're good (minus racy changes).
> > If it is CT_STATE_KERNEL then you still must do a failing cmpxchg() in any
> > case, at least to make sure you didn't miss a context tracking change. So
> > the best you can do is a bet.
> >
> >> At the cost of one extra bit for the CT_STATE area, with CT_STATE_KERNEL=1
> >> we could do:
> >>
> >>     old = atomic_read(&ct->state);
> >>     old &= ~CT_STATE_KERNEL;
> >
> > And perhaps also old |= CT_STATE_IDLE (I'm seeing the last patch now),
> > so you at least get a chance of making it right (only ~CT_STATE_KERNEL
> > will always fail) and CPUs usually spend most of their time idle.
> >
>
> I'm thinking with:
>
>       CT_STATE_IDLE   = 0,
>       CT_STATE_USER   = 1,
>       CT_STATE_GUEST  = 2,
>       CT_STATE_KERNEL = 4, /* Keep that as a standalone bit */

Right!

> we can stick with old &= ~CT_STATE_KERNEL; and that'll let the cmpxchg
> succeed for any of IDLE/USER/GUEST.

Sure but if (old & CT_STATE_KERNEL), cmpxchg() will consistently fail. But
you can make a bet that it has switched to CT_STATE_IDLE between the
atomic_read() and the first atomic_cmpxchg().
This way you still have a tiny chance to succeed. That is:

	old = atomic_read(&ct->state);
	if (old & CT_STATE_KERNEL)
		old |= CT_STATE_IDLE;
	old &= ~CT_STATE_KERNEL;
	do {
		atomic_try_cmpxchg(...)

Hmm?