From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mail-qk0-f200.google.com (mail-qk0-f200.google.com [209.85.220.200])
	by kanga.kvack.org (Postfix) with ESMTP id AE8296B0010
	for ; Wed, 1 Aug 2018 11:10:42 -0400 (EDT)
Received: by mail-qk0-f200.google.com with SMTP id w126-v6so16927650qka.11
	for ; Wed, 01 Aug 2018 08:10:42 -0700 (PDT)
Received: from mail-sor-f65.google.com (mail-sor-f65.google.com. [209.85.220.65])
	by mx.google.com with SMTPS id t22-v6sor9168141qtj.100.2018.08.01.08.10.41
	for (Google Transport Security);
	Wed, 01 Aug 2018 08:10:41 -0700 (PDT)
From: Johannes Weiner
Subject: [PATCH 7/9] sched: introduce this_rq_lock_irq()
Date: Wed, 1 Aug 2018 11:13:06 -0400
Message-Id: <20180801151308.32234-8-hannes@cmpxchg.org>
In-Reply-To: <20180801151308.32234-1-hannes@cmpxchg.org>
References: <20180801151308.32234-1-hannes@cmpxchg.org>
Sender: owner-linux-mm@kvack.org
List-ID:
To: Ingo Molnar, Peter Zijlstra, Andrew Morton, Linus Torvalds
Cc: Tejun Heo, Suren Baghdasaryan, Daniel Drake, Vinayak Menon,
	Christopher Lameter, Mike Galbraith, Shakeel Butt, Peter Enderborg,
	linux-mm@kvack.org, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org, kernel-team@fb.com

do_sched_yield() disables IRQs, looks up this_rq() and locks it. The
next patch adds another site with the same pattern, so provide a
convenience function for it.
Signed-off-by: Johannes Weiner
---
 kernel/sched/core.c  |  4 +---
 kernel/sched/sched.h | 12 ++++++++++++
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 211890edf37e..9586a8141f16 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4960,9 +4960,7 @@ static void do_sched_yield(void)
 	struct rq_flags rf;
 	struct rq *rq;
 
-	local_irq_disable();
-	rq = this_rq();
-	rq_lock(rq, &rf);
+	rq = this_rq_lock_irq(&rf);
 
 	schedstat_inc(rq->yld_count);
 	current->sched_class->yield_task(rq);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index b8f038497240..bc798c7cb4d4 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1119,6 +1119,18 @@ rq_unlock(struct rq *rq, struct rq_flags *rf)
 	raw_spin_unlock(&rq->lock);
 }
 
+static inline struct rq *
+this_rq_lock_irq(struct rq_flags *rf)
+	__acquires(rq->lock)
+{
+	struct rq *rq;
+
+	local_irq_disable();
+	rq = this_rq();
+	rq_lock(rq, rf);
+	return rq;
+}
+
 #ifdef CONFIG_NUMA
 enum numa_topology_type {
 	NUMA_DIRECT,
-- 
2.18.0