Date: Wed, 20 Nov 2024 15:23:14 +0100
From: Frederic Weisbecker
To: Valentin Schneider
Cc: linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, kvm@vger.kernel.org,
    linux-mm@kvack.org, bpf@vger.kernel.org, x86@kernel.org, rcu@vger.kernel.org,
    linux-kselftest@vger.kernel.org, Nicolas Saenz Julienne, Steven Rostedt,
    Masami Hiramatsu, Jonathan Corbet, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, Dave Hansen, "H. Peter Anvin", Paolo Bonzini, Wanpeng Li,
    Vitaly Kuznetsov, Andy Lutomirski, Peter Zijlstra, "Paul E. McKenney",
    Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Boqun Feng,
    Mathieu Desnoyers, Lai Jiangshan, Zqiang, Andrew Morton, Uladzislau Rezki,
    Christoph Hellwig, Lorenzo Stoakes, Josh Poimboeuf, Jason Baron, Kees Cook,
    Sami Tolvanen, Ard Biesheuvel, Nicholas Piggin, Juerg Haefliger,
    Nicolas Saenz Julienne, "Kirill A. Shutemov", Nadav Amit, Dan Carpenter,
    Chuang Wang, Yang Jihong, Petr Mladek, "Jason A. Donenfeld", Song Liu,
    Julian Pidancet, Tom Lendacky, Dionna Glaze, Thomas Weißschuh, Juri Lelli,
    Marcelo Tosatti, Yair Podemsky, Daniel Wagner, Petr Tesarik
Subject: Re: [RFC PATCH v3 11/15] context-tracking: Introduce work deferral infrastructure
References: <20241119153502.41361-1-vschneid@redhat.com>
 <20241119153502.41361-12-vschneid@redhat.com>

On Wed, Nov 20, 2024 at 11:54:36AM +0100, Frederic Weisbecker wrote:
> On Tue, Nov 19, 2024 at 04:34:58PM +0100, Valentin Schneider wrote:
> > +bool ct_set_cpu_work(unsigned int cpu, unsigned int work)
> > +{
> > +	struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu);
> > +	unsigned int old;
> > +	bool ret = false;
> > +
> > +	preempt_disable();
> > +
> > +	old = atomic_read(&ct->state);
> > +	/*
> > +	 * Try setting the work until either
> > +	 * - the target CPU has entered kernelspace
> > +	 * - the work has been set
> > +	 */
> > +	do {
> > +		ret = atomic_try_cmpxchg(&ct->state, &old, old | (work << CT_WORK_START));
> > +	} while (!ret && ((old & CT_STATE_MASK) != CT_STATE_KERNEL));
> > +
> > +	preempt_enable();
> > +	return ret;
>
> Does it ignore the IPI even if:
>
>     (ret && (old & CT_STATE_MASK) == CT_STATE_KERNEL)
>
> ?
>
> And what about CT_STATE_IDLE?
>
> Is the work ignored in those two cases?
>
> But wouldn't it be cleaner to never set the work if the target is anywhere
> other than CT_STATE_USER? Then you would clear the work on kernel entry
> rather than on kernel exit.
>
> That is:
>
> bool ct_set_cpu_work(unsigned int cpu, unsigned int work)
> {
> 	struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu);
> 	unsigned int old;
> 	bool ret = false;
>
> 	preempt_disable();
>
> 	old = atomic_read(&ct->state);
>
> 	/* Start with our best wishes */
> 	old &= ~CT_STATE_MASK;
> 	old |= CT_STATE_USER;
>
> 	/*
> 	 * Try setting the work until either
> 	 * - the target CPU has exited userspace
> 	 * - the work has been set
> 	 */
> 	do {
> 		ret = atomic_try_cmpxchg(&ct->state, &old, old | (work << CT_WORK_START));
> 	} while (!ret && ((old & CT_STATE_MASK) == CT_STATE_USER));
>
> 	preempt_enable();
>
> 	return ret;
> }

Ah, but there is CT_STATE_GUEST, and I see the last patch also applies that
to CT_STATE_IDLE. So that could be:

bool ct_set_cpu_work(unsigned int cpu, unsigned int work)
{
	struct context_tracking *ct = per_cpu_ptr(&context_tracking, cpu);
	unsigned int old;
	bool ret = false;

	preempt_disable();

	old = atomic_read(&ct->state);

	/* CT_STATE_IDLE can be added here by the last patch */
	if (!(old & (CT_STATE_USER | CT_STATE_GUEST))) {
		old &= ~CT_STATE_MASK;
		old |= CT_STATE_USER;
	}

	/*
	 * Try setting the work until either
	 * - the target CPU has exited userspace / guest
	 * - the work has been set
	 */
	do {
		ret = atomic_try_cmpxchg(&ct->state, &old, old | (work << CT_WORK_START));
	} while (!ret && (old & (CT_STATE_USER | CT_STATE_GUEST)));

	preempt_enable();

	return ret;
}

Thanks.
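
[For illustration, a minimal sketch of the kernel-entry counterpart implied by the
"clear the work on kernel entry rather than on kernel exit" suggestion above. It
assumes the same atomic ct->state layout and CT_WORK_START shift as the quoted
code; the function name, CT_WORK_MASK and do_deferred_work() are made-up
placeholders, not identifiers taken from the patch series.]

static void ct_flush_work_on_kernel_entry(struct context_tracking *ct)
{
	unsigned int old;
	unsigned int work;

	/*
	 * Atomically fetch the state and clear all deferred work bits in a
	 * single step, so each bit queued by ct_set_cpu_work() while this
	 * CPU was in userspace/guest is picked up here.
	 */
	old = atomic_fetch_and(~(CT_WORK_MASK << CT_WORK_START), &ct->state);
	work = (old >> CT_WORK_START) & CT_WORK_MASK;

	if (work)
		do_deferred_work(work);	/* hypothetical per-bit dispatcher */
}

[A caller would presumably pair this with the cmpxchg loop above: when
ct_set_cpu_work() returns false because the target has already left
userspace/guest, it falls back to sending an IPI; otherwise the queued work is
flushed by the target on its next kernel entry.]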