From: Frederic Weisbecker <frederic@kernel.org>
To: LKML
Cc: Frederic Weisbecker <frederic@kernel.org>,
	Andrew Morton,
	Ingo Molnar,
	Marcelo Tosatti,
	Michal Hocko,
	Oleg Nesterov,
	Peter Zijlstra,
	Thomas Gleixner,
	Valentin Schneider,
	Vlastimil Babka,
	linux-mm@kvack.org
Subject: [PATCH 6/6] mm: Drain LRUs upon resume to userspace on nohz_full CPUs
Date: Thu, 3 Jul 2025 16:07:17 +0200
Message-ID: <20250703140717.25703-7-frederic@kernel.org>
X-Mailer: git-send-email 2.48.1
In-Reply-To: <20250703140717.25703-1-frederic@kernel.org>
References: <20250703140717.25703-1-frederic@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

LRU batching can be a source of disturbance for isolated workloads running in
userspace, because draining the per-CPU batches requires a kernel worker, and
that worker preempts the isolated task. The primary source of such disruption
is __lru_add_drain_all(), which can be triggered from non-isolated CPUs.

Why would an isolated CPU have anything in its per-CPU cache? Many syscalls
allocate pages that may end up there. A typical and unavoidable example is
fork/exec, which leaves pages behind in the cache, just waiting for somebody
to drain them.

Address the problem by noting when a batch has been added to the cache and
scheduling the drain upon return to userspace, so the work is done while the
syscall is still executing and there are no surprises while the task runs in
userspace, where it does not want to be preempted.
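For reviewers less familiar with the deferral mechanism: the patch relies on
isolated_task_work_queue(), introduced earlier in this series, which builds on
the kernel's task_work infrastructure. The sketch below is illustrative only
(it is not part of this patch, and the callback and function names are made
up); it shows the general pattern of queueing a callback that runs when the
current task returns to userspace:

	#include <linux/sched.h>
	#include <linux/task_work.h>

	/* Illustrative sketch only, not part of this patch. */
	static struct callback_head resume_work;

	static void resume_cb(struct callback_head *head)
	{
		/* Runs on this CPU right before the task re-enters userspace */
	}

	static int queue_resume_work(void)
	{
		init_task_work(&resume_work, resume_cb);
		/* TWA_RESUME: run the callback on the next return to userspace */
		return task_work_add(current, &resume_work, TWA_RESUME);
	}

In this patch the callback is isolated_task_work(), queued from
folio_batch_add() while the batching syscall is still running, so the drain
completes before the task is back in userspace.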
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 include/linux/pagevec.h  | 18 ++----------------
 include/linux/swap.h     |  1 +
 kernel/sched/isolation.c |  3 +++
 mm/swap.c                | 30 +++++++++++++++++++++++++++++-
 4 files changed, 35 insertions(+), 17 deletions(-)

diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index 5d3a0cccc6bf..7e647b8df4c7 100644
--- a/include/linux/pagevec.h
+++ b/include/linux/pagevec.h
@@ -61,22 +61,8 @@ static inline unsigned int folio_batch_space(struct folio_batch *fbatch)
 	return PAGEVEC_SIZE - fbatch->nr;
 }
 
-/**
- * folio_batch_add() - Add a folio to a batch.
- * @fbatch: The folio batch.
- * @folio: The folio to add.
- *
- * The folio is added to the end of the batch.
- * The batch must have previously been initialised using folio_batch_init().
- *
- * Return: The number of slots still available.
- */
-static inline unsigned folio_batch_add(struct folio_batch *fbatch,
-		struct folio *folio)
-{
-	fbatch->folios[fbatch->nr++] = folio;
-	return folio_batch_space(fbatch);
-}
+unsigned int folio_batch_add(struct folio_batch *fbatch,
+		struct folio *folio);
 
 /**
  * folio_batch_next - Return the next folio to process.
diff --git a/include/linux/swap.h b/include/linux/swap.h
index bc0e1c275fc0..d74ad6c893a1 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -401,6 +401,7 @@ extern void lru_add_drain(void);
 extern void lru_add_drain_cpu(int cpu);
 extern void lru_add_drain_cpu_zone(struct zone *zone);
 extern void lru_add_drain_all(void);
+extern void lru_add_and_bh_lrus_drain(void);
 void folio_deactivate(struct folio *folio);
 void folio_mark_lazyfree(struct folio *folio);
 extern void swap_setup(void);
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index d74c4ef91ce2..06882916c24f 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -8,6 +8,8 @@
  *
  */
 
+#include <linux/swap.h>
+
 enum hk_flags {
 	HK_FLAG_DOMAIN		= BIT(HK_TYPE_DOMAIN),
 	HK_FLAG_MANAGED_IRQ	= BIT(HK_TYPE_MANAGED_IRQ),
@@ -253,6 +255,7 @@ __setup("isolcpus=", housekeeping_isolcpus_setup);
 #ifdef CONFIG_NO_HZ_FULL_WORK
 static void isolated_task_work(struct callback_head *head)
 {
+	lru_add_and_bh_lrus_drain();
 }
 
 int __isolated_task_work_queue(void)
diff --git a/mm/swap.c b/mm/swap.c
index 4fc322f7111a..da08c918cef4 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -37,6 +37,7 @@
 #include <linux/page_idle.h>
 #include <linux/local_lock.h>
 #include <linux/buffer_head.h>
+#include <linux/sched/isolation.h>
 
 #include "internal.h"
 
@@ -155,6 +156,29 @@ static void lru_add(struct lruvec *lruvec, struct folio *folio)
 	trace_mm_lru_insertion(folio);
 }
 
+/**
+ * folio_batch_add() - Add a folio to a batch.
+ * @fbatch: The folio batch.
+ * @folio: The folio to add.
+ *
+ * The folio is added to the end of the batch.
+ * The batch must have previously been initialised using folio_batch_init().
+ *
+ * Return: The number of slots still available.
+ */
+unsigned int folio_batch_add(struct folio_batch *fbatch,
+		struct folio *folio)
+{
+	unsigned int ret;
+
+	fbatch->folios[fbatch->nr++] = folio;
+	ret = folio_batch_space(fbatch);
+	isolated_task_work_queue();
+
+	return ret;
+}
+EXPORT_SYMBOL(folio_batch_add);
+
 static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
 {
 	int i;
@@ -738,7 +762,7 @@ void lru_add_drain(void)
  * the same cpu. It shouldn't be a problem in !SMP case since
  * the core is only one and the locks will disable preemption.
  */
-static void lru_add_and_bh_lrus_drain(void)
+void lru_add_and_bh_lrus_drain(void)
 {
 	local_lock(&cpu_fbatches.lock);
 	lru_add_drain_cpu(smp_processor_id());
@@ -864,6 +888,10 @@ static inline void __lru_add_drain_all(bool force_all_cpus)
 	for_each_online_cpu(cpu) {
 		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
 
+		/* Isolated CPUs handle their cache upon return to userspace */
+		if (IS_ENABLED(CONFIG_NO_HZ_FULL_WORK) && !housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE))
+			continue;
+
 		if (cpu_needs_drain(cpu)) {
 			INIT_WORK(work, lru_add_drain_per_cpu);
 			queue_work_on(cpu, mm_percpu_wq, work);
-- 
2.48.1