From: Hillf Danton <hdanton@sina.com>
To: John Stultz
Cc: LKML, linux-mm@kvack.org, "Connor O'Brien", Qais Yousef, Peter Zijlstra, Steven Rostedt
Subject: Re: [RFC PATCH v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs
Date: Tue, 4 Oct 2022 09:36:11 +0800
Message-Id: <20221004013611.1822-1-hdanton@sina.com>
In-Reply-To: <20221003232033.3404802-3-jstultz@google.com>
On 3 Oct 2022 23:20:32 +0000 John Stultz wrote:
> +#ifdef CONFIG_RT_SOFTIRQ_OPTIMIZATION
> +#define __use_softirq_opt 1
> +/*
> + * Return whether the given cpu is currently non-preemptible
> + * while handling a potentially long softirq, or if the current
> + * task is likely to block preemptions soon because it is a
> + * ksoftirq thread that is handling slow softirq.
> + */
> +static bool cpu_busy_with_softirqs(int cpu)
> +{
> +	u32 softirqs = per_cpu(active_softirqs, cpu) |
> +		       __cpu_softirq_pending(cpu);
> +	struct task_struct *cpu_ksoftirqd = per_cpu(ksoftirqd, cpu);
> +	struct task_struct *curr;
> +	struct rq *rq = cpu_rq(cpu);
> +	int ret;
> +
> +	rcu_read_lock();
> +	curr = READ_ONCE(rq->curr); /* unlocked access */
> +	ret = (softirqs & LONG_SOFTIRQ_MASK) &&
> +	      (curr == cpu_ksoftirqd ||
> +	       preempt_count() & SOFTIRQ_MASK);
> +	rcu_read_unlock();
> +	return ret;
> +}
> +#else
> +#define __use_softirq_opt 0
> +static bool cpu_busy_with_softirqs(int cpu)
> +{
> +	return false;
> +}
> +#endif /* CONFIG_RT_SOFTIRQ_OPTIMIZATION */
> +
> +static bool rt_task_fits_cpu(struct task_struct *p, int cpu)
> +{
> +	return !cpu_busy_with_softirqs(cpu) && rt_task_fits_capacity(p, cpu);
> +}

On one hand, an RT task is not latency sensitive enough if it fails to preempt ksoftirqd. On the other hand, deferring softirqs to ksoftirqd barely makes sense in 3/3 if ksoftirqd then preempts the current RT task.