Date: Mon, 24 Jan 2022 10:27:32 +0000
From: Mark Rutland <mark.rutland@arm.com>
To: Peter Zijlstra
Cc: mingo@redhat.com, tglx@linutronix.de, juri.lelli@redhat.com,
	vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
	rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de,
	bristot@redhat.com, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, linux-api@vger.kernel.org, x86@kernel.org,
	pjt@google.com, posk@google.com, avagin@google.com,
	jannh@google.com, tdelisle@uwaterloo.ca, posk@posk.io
Subject: Re: [RFC][PATCH v2 5/5] sched: User Mode Concurency Groups
References: <20220120155517.066795336@infradead.org>
 <20220120160822.914418096@infradead.org>
 <20220124100306.GO20638@worktop.programming.kicks-ass.net>
 <20220124100704.GC22849@worktop.programming.kicks-ass.net>
In-Reply-To: <20220124100704.GC22849@worktop.programming.kicks-ass.net>

On Mon, Jan 24, 2022 at 11:07:04AM +0100, Peter Zijlstra wrote:
> On Mon, Jan 24, 2022 at 11:03:06AM +0100, Peter Zijlstra wrote:
>
> > > Either way, it looks like we'd need helpers along the lines of:
> > >
> > > | static __always_inline void umcg_enter_from_user(struct pt_regs *regs)
> > > | {
> > > | 	if (current->flags & PF_UMCG_WORKER)
> > > | 		umcg_sys_enter(regs, -1);
> > > | }
> > > |
> > > | static __always_inline void umcg_exit_to_user(struct pt_regs *regs)
> > > | {
> > > | 	if (current->flags & PF_UMCG_WORKER)
> > > | 		umcg_sys_exit(regs);
> > > | }
> >
> > Would something like:
> >
> > #ifndef arch_irqentry_irq_enter
> > static __always_inline bool arch_irqentry_irq_enter(struct pt_regs *regs)
> > {
> > 	if (!regs_irqs_disabled(regs)) {
> > 		local_irq_enable();
> > 		return true;
> > 	}
> > 	return false;
> > }
> > #endif
> >
> > static __always_inline void
> > irqentry_irq_enter(struct pt_regs *regs)
> > {
> > 	if (arch_irqentry_irq_enter(regs)) {
> > 		if (user_mode(regs) && (current->flags & PF_UMCG_WORKER))
> > 			umcg_sys_enter(regs, -1);
> > 	}
> > }
> >
> > Work? Then arm64 can do:
> >
> > static __always_inline bool arch_irqentry_irq_enter(struct pt_regs *regs)
> > {
> > 	local_daif_inherit(regs);
> > 	return interrupts_enabled(regs);
> > }
> >
> > or somesuch...
>
> Ah,.. just read your other email, so your concern is about the
> user_mode() thing due to ARM64 taking a different exception path for
> from-user vs from-kernel ?

Yup; it's two-fold:

1) We have separate vectors for entry from-user and from-kernel, and I'd
   like to avoid the conditionality (e.g. the user_mode(regs) checks)
   where possible. Having that unconditional and explicit in the
   from-user code avoids redundant work and makes it much easier to see
   that it's correct and balanced.

   We have separate irqentry_from_user() and irqentry_from_kernel()
   helpers today for this.

2) Due to the way we nest classes of exception, on the entry path we
   manipulate the flags differently depending on which specific
   exception we've taken. On the return path we always mask everything
   (necessary due to the way exception return works architecturally).

   Luckily, exceptions from-user don't nest, so those cases are simpler
   than exceptions from-kernel.

> I don't mind too much if arm64 decides to open-code the umcg hooks, but
> please do it such that it's hard to forget a spot.

I'll see what I can do. :)

Thanks,
Mark.