From: Bo Li
To: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, luto@kernel.org,
	kees@kernel.org, akpm@linux-foundation.org, david@redhat.com,
	juri.lelli@redhat.com, vincent.guittot@linaro.org, peterz@infradead.org
Cc: dietmar.eggemann@arm.com, hpa@zytor.com, acme@kernel.org,
	namhyung@kernel.org, mark.rutland@arm.com,
	alexander.shishkin@linux.intel.com, jolsa@kernel.org,
	irogers@google.com, adrian.hunter@intel.com, kan.liang@linux.intel.com,
	viro@zeniv.linux.org.uk, brauner@kernel.org, jack@suse.cz,
	lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, vbabka@suse.cz,
	rppt@kernel.org, surenb@google.com, mhocko@suse.com,
	rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de,
	vschneid@redhat.com, jannh@google.com, pfalcato@suse.de,
	riel@surriel.com, harry.yoo@oracle.com, linux-kernel@vger.kernel.org,
	linux-perf-users@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, duanxiongchun@bytedance.com,
	yinhongbo@bytedance.com, dengliang.1214@bytedance.com,
	xieyongji@bytedance.com, chaiwen.cc@bytedance.com,
	songmuchun@bytedance.com, yuanzhu@bytedance.com,
	chengguozhu@bytedance.com, sunjiadong.lff@bytedance.com, Bo Li
Subject: [RFC v2 18/35] sched: pick a specified task
Date: Fri, 30 May 2025 17:27:46 +0800
Message-Id: <6e785c48ed266694748e0e71e264b94b27d9fa7b.1748594841.git.libo.gcs85@bytedance.com>
X-Mailer: git-send-email 2.39.5 (Apple Git-154)
In-Reply-To: 
References: 
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When a lazy switch occurs, the kernel already holds the task_struct of
the next task to switch to. However, CFS does not provide an interface
to explicitly specify the next task, so RPAL must implement its own
mechanism to pick a specified task.

This patch introduces two interfaces, rpal_pick_next_task_fair() and
rpal_pick_task_fair(), to provide this functionality. Both interfaces
use the sched_entity of the target task to update the CFS data
structures directly. Additionally, the patch adapts to the SCHED_CORE
feature by temporarily setting the highest weight for the specified
task, so that the core preferentially selects it during core-wide
scheduling decisions.
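
To illustrate the intended use, a lazy-switch path could hand both tasks
to the new interface roughly as follows. This is a sketch only:
rpal_do_lazy_switch() is a hypothetical caller named for illustration
and is not part of this series.

	/* Hypothetical caller -- illustration only, not part of this patch. */
	static struct task_struct *
	rpal_do_lazy_switch(struct rq *rq, struct task_struct *prev,
			    struct task_struct *next, struct rq_flags *rf)
	{
		/*
		 * Unlike pick_next_task_fair(), which selects the next task
		 * from the CFS tree, the RPAL variant is handed the target
		 * task explicitly. Both prev and next are assumed to be
		 * fair-class tasks; the core.c wrappers in this patch BUG()
		 * otherwise.
		 */
		return rpal_pick_next_task_fair(prev, next, rq, rf);
	}
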
Signed-off-by: Bo Li
---
 kernel/sched/core.c  | 212 +++++++++++++++++++++++++++++++++++++++++++
 kernel/sched/fair.c  | 109 ++++++++++++++++++++++
 kernel/sched/sched.h |   8 ++
 3 files changed, 329 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a862bf4a0161..2e76376c5172 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -11003,3 +11003,215 @@ void sched_enq_and_set_task(struct sched_enq_and_set_ctx *ctx)
 	set_next_task(rq, ctx->p);
 }
 #endif /* CONFIG_SCHED_CLASS_EXT */
+
+#ifdef CONFIG_RPAL
+#ifdef CONFIG_SCHED_CORE
+static inline struct task_struct *
+__rpal_pick_next_task(struct rq *rq, struct task_struct *prev,
+		      struct task_struct *next, struct rq_flags *rf)
+{
+	struct task_struct *p;
+
+	if (likely(prev->sched_class == &fair_sched_class &&
+		   next->sched_class == &fair_sched_class)) {
+		p = rpal_pick_next_task_fair(prev, next, rq, rf);
+		return p;
+	}
+
+	BUG();
+}
+
+static struct task_struct *rpal_pick_next_task(struct rq *rq,
+					       struct task_struct *prev,
+					       struct task_struct *next,
+					       struct rq_flags *rf)
+{
+	struct task_struct *p;
+	const struct cpumask *smt_mask;
+	bool fi_before = false;
+	bool core_clock_updated = (rq == rq->core);
+	unsigned long cookie;
+	int i, cpu, occ = 0;
+	struct rq *rq_i;
+	bool need_sync;
+
+	if (!sched_core_enabled(rq))
+		return __rpal_pick_next_task(rq, prev, next, rf);
+
+	cpu = cpu_of(rq);
+
+	/* Stopper task is switching into idle, no need core-wide selection. */
+	if (cpu_is_offline(cpu)) {
+		rq->core_pick = NULL;
+		return __rpal_pick_next_task(rq, prev, next, rf);
+	}
+
+	if (rq->core->core_pick_seq == rq->core->core_task_seq &&
+	    rq->core->core_pick_seq != rq->core_sched_seq &&
+	    rq->core_pick) {
+		WRITE_ONCE(rq->core_sched_seq, rq->core->core_pick_seq);
+
+		/* ignore rq->core_pick, always pick next */
+		if (rq->core_pick == next) {
+			put_prev_task(rq, prev);
+			set_next_task(rq, next);
+
+			rq->core_pick = NULL;
+			goto out;
+		}
+	}
+
+	put_prev_task_balance(rq, prev, rf);
+
+	smt_mask = cpu_smt_mask(cpu);
+	need_sync = !!rq->core->core_cookie;
+
+	/* reset state */
+	rq->core->core_cookie = 0UL;
+	if (rq->core->core_forceidle_count) {
+		if (!core_clock_updated) {
+			update_rq_clock(rq->core);
+			core_clock_updated = true;
+		}
+		sched_core_account_forceidle(rq);
+		/* reset after accounting force idle */
+		rq->core->core_forceidle_start = 0;
+		rq->core->core_forceidle_count = 0;
+		rq->core->core_forceidle_occupation = 0;
+		need_sync = true;
+		fi_before = true;
+	}
+
+	rq->core->core_task_seq++;
+
+	if (!need_sync) {
+		next = rpal_pick_task_fair(rq, next);
+		if (!next->core_cookie) {
+			rq->core_pick = NULL;
+			/*
+			 * For robustness, update the min_vruntime_fi for
+			 * unconstrained picks as well.
+			 */
+			WARN_ON_ONCE(fi_before);
+			task_vruntime_update(rq, next, false);
+			goto out_set_next;
+		}
+	}
+
+	for_each_cpu_wrap(i, smt_mask, cpu) {
+		rq_i = cpu_rq(i);
+
+		if (i != cpu && (rq_i != rq->core || !core_clock_updated))
+			update_rq_clock(rq_i);
+
+		/* ignore prio, always pick next */
+		if (i == cpu)
+			rq_i->core_pick = rpal_pick_task_fair(rq, next);
+		else
+			rq_i->core_pick = pick_task(rq_i);
+	}
+
+	cookie = rq->core->core_cookie = next->core_cookie;
+
+	for_each_cpu(i, smt_mask) {
+		rq_i = cpu_rq(i);
+		p = rq_i->core_pick;
+
+		if (!cookie_equals(p, cookie)) {
+			p = NULL;
+			if (cookie)
+				p = sched_core_find(rq_i, cookie);
+			if (!p)
+				p = idle_sched_class.pick_task(rq_i);
+		}
+
+		rq_i->core_pick = p;
+
+		if (p == rq_i->idle) {
+			if (rq_i->nr_running) {
+				rq->core->core_forceidle_count++;
+				if (!fi_before)
+					rq->core->core_forceidle_seq++;
+			}
+		} else {
+			occ++;
+		}
+	}
+
+	if (schedstat_enabled() && rq->core->core_forceidle_count) {
+		rq->core->core_forceidle_start = rq_clock(rq->core);
+		rq->core->core_forceidle_occupation = occ;
+	}
+
+	rq->core->core_pick_seq = rq->core->core_task_seq;
+	WARN_ON_ONCE(next != rq->core_pick);
+	rq->core_sched_seq = rq->core->core_pick_seq;
+
+	for_each_cpu(i, smt_mask) {
+		rq_i = cpu_rq(i);
+
+		/*
+		 * An online sibling might have gone offline before a task
+		 * could be picked for it, or it might be offline but later
+		 * happen to come online, but its too late and nothing was
+		 * picked for it. That's Ok - it will pick tasks for itself,
+		 * so ignore it.
+		 */
+		if (!rq_i->core_pick)
+			continue;
+
+		/*
+		 * Update for new !FI->FI transitions, or if continuing to be in !FI:
+		 * fi_before        fi      update?
+		 *  0                0       1
+		 *  0                1       1
+		 *  1                0       1
+		 *  1                1       0
+		 */
+		if (!(fi_before && rq->core->core_forceidle_count))
+			task_vruntime_update(rq_i, rq_i->core_pick,
+					     !!rq->core->core_forceidle_count);
+
+		rq_i->core_pick->core_occupation = occ;
+
+		if (i == cpu) {
+			rq_i->core_pick = NULL;
+			continue;
+		}
+
+		/* Did we break L1TF mitigation requirements? */
+		WARN_ON_ONCE(!cookie_match(next, rq_i->core_pick));
+
+		if (rq_i->curr == rq_i->core_pick) {
+			rq_i->core_pick = NULL;
+			continue;
+		}
+
+		resched_curr(rq_i);
+	}
+
+out_set_next:
+	set_next_task(rq, next);
+out:
+	if (rq->core->core_forceidle_count && next == rq->idle)
+		queue_core_balance(rq);
+
+	return next;
+}
+#else
+static inline struct task_struct *
+rpal_pick_next_task(struct rq *rq, struct task_struct *prev,
+		    struct task_struct *next, struct rq_flags *rf)
+{
+	struct task_struct *p;
+
+	if (likely(prev->sched_class == &fair_sched_class &&
+		   next->sched_class == &fair_sched_class)) {
+		p = rpal_pick_next_task_fair(prev, next, rq, rf);
+		return p;
+	}
+
+	BUG();
+}
+#endif
+#endif
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 125912c0e9dd..d9c16d974a47 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8983,6 +8983,115 @@ void fair_server_init(struct rq *rq)
 	dl_server_init(dl_se, rq, fair_server_has_tasks, fair_server_pick_task);
 }
 
+#ifdef CONFIG_RPAL
+/* if the next is throttled, unthrottle it */
+static void rpal_unthrottle(struct rq *rq, struct task_struct *next)
+{
+	struct sched_entity *se;
+	struct cfs_rq *cfs_rq;
+
+	se = &next->se;
+	for_each_sched_entity(se) {
+		cfs_rq = cfs_rq_of(se);
+		if (cfs_rq_throttled(cfs_rq))
+			unthrottle_cfs_rq(cfs_rq);
+
+		if (cfs_rq == &rq->cfs)
+			break;
+	}
+}
+
+struct task_struct *rpal_pick_task_fair(struct rq *rq, struct task_struct *next)
+{
+	struct sched_entity *se;
+	struct cfs_rq *cfs_rq;
+
+	rpal_unthrottle(rq, next);
+
+	se = &next->se;
+	for_each_sched_entity(se) {
+		cfs_rq = cfs_rq_of(se);
+
+		if (cfs_rq->curr && cfs_rq->curr->on_rq)
+			update_curr(cfs_rq);
+
+		if (unlikely(check_cfs_rq_runtime(cfs_rq)))
+			continue;
+
+		clear_buddies(cfs_rq, se);
+	}
+
+	return next;
+}
+
+struct task_struct *rpal_pick_next_task_fair(struct task_struct *prev,
+					     struct task_struct *next,
+					     struct rq *rq, struct rq_flags *rf)
+{
+	struct cfs_rq *cfs_rq;
+	struct sched_entity *se;
+	struct task_struct *p;
+
+	rpal_unthrottle(rq, next);
+
+	p = rpal_pick_task_fair(rq, next);
+
+	if (!sched_fair_runnable(rq))
+		panic("rpal error: !sched_fair_runnable\n");
+
+#ifdef CONFIG_FAIR_GROUP_SCHED
+	__put_prev_set_next_dl_server(rq, prev, next);
+
+	se = &next->se;
+	p = task_of(se);
+
+	/*
+	 * Since we haven't yet done put_prev_entity and if the selected task
+	 * is a different task than we started out with, try and touch the
+	 * least amount of cfs_rqs.
+	 */
+	if (prev != p) {
+		struct sched_entity *pse = &prev->se;
+
+		while (!(cfs_rq = is_same_group(se, pse))) {
+			int se_depth = se->depth;
+			int pse_depth = pse->depth;
+
+			if (se_depth <= pse_depth) {
+				put_prev_entity(cfs_rq_of(pse), pse);
+				pse = parent_entity(pse);
+			}
+			if (se_depth >= pse_depth) {
+				set_next_entity(cfs_rq_of(se), se);
+				se = parent_entity(se);
+			}
+		}
+
+		put_prev_entity(cfs_rq, pse);
+		set_next_entity(cfs_rq, se);
+	}
+#endif
+#ifdef CONFIG_SMP
+	/*
+	 * Move the next running task to the front of
+	 * the list, so our cfs_tasks list becomes MRU
+	 * one.
+	 */
+	list_move(&p->se.group_node, &rq->cfs_tasks);
+#endif
+
+	WARN_ON_ONCE(se->sched_delayed);
+
+	if (hrtick_enabled_fair(rq))
+		hrtick_start_fair(rq, p);
+
+	update_misfit_status(p, rq);
+	sched_fair_update_stop_tick(rq, p);
+
+	return p;
+}
+#endif
+
 /*
  * Account for a descheduled task:
  */
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index c5a6a503eb6d..f8fd26b584c9 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2575,6 +2575,14 @@ static inline bool sched_fair_runnable(struct rq *rq)
 extern struct task_struct *pick_next_task_fair(struct rq *rq, struct task_struct *prev,
 					       struct rq_flags *rf);
 extern struct task_struct *pick_task_idle(struct rq *rq);
+#ifdef CONFIG_RPAL
+extern struct task_struct *rpal_pick_task_fair(struct rq *rq,
+					       struct task_struct *next);
+extern struct task_struct *rpal_pick_next_task_fair(struct task_struct *prev,
+						    struct task_struct *next,
+						    struct rq *rq,
+						    struct rq_flags *rf);
+#endif
 
 #define SCA_CHECK		0x01
 #define SCA_MIGRATE_DISABLE	0x02
-- 
2.20.1