From: Michal Hocko <mhocko@suse.com>
To: Frederic Weisbecker <frederic@kernel.org>
Cc: LKML <linux-kernel@vger.kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>,
Valentin Schneider <vschneid@redhat.com>,
Marcelo Tosatti <mtosatti@redhat.com>,
Vlastimil Babka <vbabka@suse.cz>,
Andrew Morton <akpm@linux-foundation.org>,
Thomas Gleixner <tglx@linutronix.de>,
Oleg Nesterov <oleg@redhat.com>,
linux-mm@kvack.org
Subject: Re: [PATCH 6/6 v2] mm: Drain LRUs upon resume to userspace on nohz_full CPUs
Date: Tue, 11 Feb 2025 12:42:51 +0100 [thread overview]
Message-ID: <Z6s3u6Uu8I7r2Jox@tiehlicka> (raw)
In-Reply-To: <20250209223005.11519-7-frederic@kernel.org>
On Sun 09-02-25 23:30:04, Frederic Weisbecker wrote:
> LRUs can be drained through several ways. One of them may add disturbances
> to isolated workloads while queuing a work at any time to any target,
> whether running in nohz_full mode or not.
>
> Prevent from that on isolated tasks with defering LRUs drains upon
> resuming to userspace using the isolated task work framework.
I have to say this is a rather cryptic description of the underlying
problem. What do you think about the following:
LRU batching can be a source of disturbances for isolated workloads
running in userspace, because it requires a kernel worker to handle the
flushing and that worker would preempt the said task. The primary
source of such disruption is __lru_add_drain_all, which can be
triggered from non-isolated CPUs.
Why would an isolated CPU have anything on the pcp cache? Many syscalls
allocate pages that might end up there. A typical and unavoidable one
would be fork/exec leaving pages on the cache behind, just waiting for
somebody to drain them.
This patch addresses the problem by noting that a page has been added
to the cache and scheduling the draining on the return path to
userspace, so the work is done while the syscall is still executing and
there are no surprises while the task runs in userspace, where it
doesn't want to be preempted.
>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> ---
> include/linux/swap.h | 1 +
> kernel/sched/isolation.c | 3 +++
> mm/swap.c | 8 +++++++-
> 3 files changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index b13b72645db3..a6fdcc04403e 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -406,6 +406,7 @@ extern void lru_add_drain(void);
> extern void lru_add_drain_cpu(int cpu);
> extern void lru_add_drain_cpu_zone(struct zone *zone);
> extern void lru_add_drain_all(void);
> +extern void lru_add_and_bh_lrus_drain(void);
> void folio_deactivate(struct folio *folio);
> void folio_mark_lazyfree(struct folio *folio);
> extern void swap_setup(void);
> diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
> index f25a5cb33c0d..1f9ec201864c 100644
> --- a/kernel/sched/isolation.c
> +++ b/kernel/sched/isolation.c
> @@ -8,6 +8,8 @@
> *
> */
>
> +#include <linux/swap.h>
> +
> enum hk_flags {
> HK_FLAG_DOMAIN = BIT(HK_TYPE_DOMAIN),
> HK_FLAG_MANAGED_IRQ = BIT(HK_TYPE_MANAGED_IRQ),
> @@ -253,6 +255,7 @@ __setup("isolcpus=", housekeeping_isolcpus_setup);
> #if defined(CONFIG_NO_HZ_FULL)
> static void isolated_task_work(struct callback_head *head)
> {
> + lru_add_and_bh_lrus_drain();
> }
>
> int __isolated_task_work_queue(void)
> diff --git a/mm/swap.c b/mm/swap.c
> index fc8281ef4241..da1e569ee3ce 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -37,6 +37,7 @@
> #include <linux/page_idle.h>
> #include <linux/local_lock.h>
> #include <linux/buffer_head.h>
> +#include <linux/sched/isolation.h>
>
> #include "internal.h"
>
> @@ -376,6 +377,8 @@ static void __lru_cache_activate_folio(struct folio *folio)
> }
>
> local_unlock(&cpu_fbatches.lock);
> +
> + isolated_task_work_queue();
> }
This placement doesn't make much sense to me. I would call
isolated_task_work_queue() at the point where we actually queue
something up. That would be folio_batch_add when
folio_batch_space(fbatch) > 0.
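I.e. something like this (completely untested sketch; folio_batch_add()
itself lives in pagevec.h, so the mm/swap.c caller that fills the
per-cpu batch might be the better spot in practice):

```c
	/* untested: note the pending work at the point the folio is queued */
	if (folio_batch_add(fbatch, folio)) {
		/*
		 * The folio now sits in this CPU's batch; make sure an
		 * isolated task drains it on its way back to userspace.
		 */
		isolated_task_work_queue();
	}
```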
>
> #ifdef CONFIG_LRU_GEN
> @@ -738,7 +741,7 @@ void lru_add_drain(void)
> * the same cpu. It shouldn't be a problem in !SMP case since
> * the core is only one and the locks will disable preemption.
> */
> -static void lru_add_and_bh_lrus_drain(void)
> +void lru_add_and_bh_lrus_drain(void)
> {
> local_lock(&cpu_fbatches.lock);
> lru_add_drain_cpu(smp_processor_id());
> @@ -769,6 +772,9 @@ static bool cpu_needs_drain(unsigned int cpu)
> {
> struct cpu_fbatches *fbatches = &per_cpu(cpu_fbatches, cpu);
>
> + if (!housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE))
> + return false;
> +
Would it make more sense to use cpu_is_isolated() and check it
explicitly in __lru_add_drain_all so that it is clearly visible - with
a comment that isolated workloads deal with the cache on their return
to userspace?
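Roughly along these lines (untested sketch of the __lru_add_drain_all
loop, assuming the current per-cpu work queueing stays as is):

```c
	for_each_online_cpu(cpu) {
		/*
		 * Isolated workloads drain their LRU caches themselves
		 * on return to userspace, do not disturb them from here.
		 */
		if (cpu_is_isolated(cpu))
			continue;

		if (cpu_needs_drain(cpu)) {
			struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);

			INIT_WORK(work, lru_add_drain_per_cpu);
			queue_work_on(cpu, mm_percpu_wq, work);
			__cpumask_set_cpu(cpu, &has_work);
		}
	}
```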
> /* Check these in order of likelihood that they're not zero */
> return folio_batch_count(&fbatches->lru_add) ||
> folio_batch_count(&fbatches->lru_move_tail) ||
> --
> 2.46.0
--
Michal Hocko
SUSE Labs