From: Gabriele Monaco <gmonaco@redhat.com>
To: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Ingo Molnar <mingo@redhat.org>,
	linux-kernel@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	David Hildenbrand <david@redhat.com>,
	Ingo Molnar <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	linux-mm@kvack.org, Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [PATCH v2 3/4] sched: Compact RSEQ concurrency IDs in batches
Date: Thu, 28 Aug 2025 10:36:45 +0200
Message-ID: <10f2d5094a6c2dae1bcbf7d7f8198c11c6fce4c1.camel@redhat.com>
In-Reply-To: <7fddf82f-e85e-42c5-90f3-9cfca4d8756a@efficios.com>

On Tue, 2025-08-26 at 14:10 -0400, Mathieu Desnoyers wrote:
> On 2025-07-16 12:06, Gabriele Monaco wrote:
> > Currently, task_mm_cid_work() is called from
> > resume_user_mode_work().
> > This can delay the execution of the corresponding thread for the
> > entire duration of the function, negatively affecting response
> > times for real-time tasks.
> > In practice, we observe task_mm_cid_work increasing the latency
> > by 30-35us on a 128-core system; this order of magnitude is
> > meaningful under PREEMPT_RT.
> > 
> > Run task_mm_cid_work in batches of up to
> > CONFIG_RSEQ_CID_SCAN_BATCH CPUs; this reduces the duration of the
> > delay for each scan.
> > 
> > task_mm_cid_work contains a mechanism to avoid running more
> > frequently than every 100ms. Keep this pseudo-periodicity only on
> > complete scans.
> > This means each call to task_mm_cid_work returns prematurely if
> > the period did not elapse and no scan is ongoing (i.e. the next
> > batch to scan is the first).
> > This way full scans are not excessively delayed, while each run,
> > and the latency it introduces, stays short.
> 
> With your test hardware/workload as reference, do you have an idea of
> how many CPUs would be needed to require more than 100ms to iterate
> over all CPUs with the default scan batch size (8)?

As you guessed, this is strongly dependent on the workload: workloads
with fewer threads are more likely to take longer.
I used cyclictest (threads with a 100us period) and hackbench
(processes) on a 128-CPU machine and measured the time to complete the
scan (16 iterations) as well as the time between non-complete scans
(not delayed by 100ms):

cyclictest: delay 0-400 us, complete scan 1.5-2 ms
hackbench:  delay 5 us - 3 ms, complete scan 1.5-15 ms

So to answer your question, in the observed worst case for hackbench,
it would take more than 800 CPUs to reach the 100ms limit.
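
(Back-of-the-envelope, assuming the scan cost scales linearly with the
CPU count: 128 CPUs * 100 ms / 15 ms ≈ 850 CPUs before the worst-case
hackbench scan no longer fits within the period.)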

That said, the problematic latency was observed on a full scan (128
CPUs), so perhaps the default of 8 is a bit too conservative and could
easily be doubled.

Measurements showed these durations for each call to task_mm_cid_scan:

batch size  8:  1-11 us (majority below 10)
batch size 16:  3-16 us (majority below 10)
batch size 32: 10-21 us (majority above 15)

20 us is considered a relevant latency on this machine, so 16 seems a
good trade-off for the batch size to me.
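
For reference, the knob is a plain integer Kconfig option along these
lines (only the name and the default of 8 are from the series; the
range and depends lines are my assumption):

config RSEQ_CID_SCAN_BATCH
	int "Number of CPUs to scan per mm_cid compaction batch"
	range 1 NR_CPUS
	default 8
	depends on SCHED_MM_CID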


I'm going to include those numbers in the next iteration of the series.
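
For context, the early return described in the changelog boils down to
something like this sketch (mm_cid_scan_batch is from the patch; the
deadline check is illustrative, not the literal code):

	this_batch = READ_ONCE(mm->mm_cid_scan_batch);
	/* Throttle only complete scans: batch 0 starts a new scan. */
	if (this_batch == 0 &&
	    time_before(jiffies, READ_ONCE(mm->mm_cid_next_scan)))
		return;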

...
> > +cid_compact:
> > +	if (!try_cmpxchg(&mm->mm_cid_scan_batch, &this_batch, next_batch))
> > +		return;
> >   	cidmask = mm_cidmask(mm);
> >   	/* Clear cids that were not recently used. */
> > -	for_each_possible_cpu(cpu)
> > +	idx = 0;
> > +	cpu = from_cpu;
> > +	for_each_cpu_from(cpu, cpu_possible_mask) {
> > +		if (idx == CONFIG_RSEQ_CID_SCAN_BATCH)
> 
> could do "if (idx++ == CONFIG_RSEQ_CID_SCAN_BATCH)"
> 
> > +			break;
> >   		sched_mm_cid_remote_clear_old(mm, cpu);
> > +		++idx;
> 
> and remove this ^
> 
> > +	}
> >   	weight = cpumask_weight(cidmask);
> >   	/*
> >   	 * Clear cids that are greater or equal to the cidmask weight to
> >   	 * recompact it.
> >   	 */
> > -	for_each_possible_cpu(cpu)
> > +	idx = 0;
> > +	cpu = from_cpu;
> > +	for_each_cpu_from(cpu, cpu_possible_mask) {
> > +		if (idx == CONFIG_RSEQ_CID_SCAN_BATCH)
> 
> Likewise.
> 
> > +			break;
> >   		sched_mm_cid_remote_clear_weight(mm, cpu, weight);
> > +		++idx;
> 
> Likewise.

Sure, will do.
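
For the record, the first loop would then collapse to something like:

	idx = 0;
	cpu = from_cpu;
	for_each_cpu_from(cpu, cpu_possible_mask) {
		if (idx++ == CONFIG_RSEQ_CID_SCAN_BATCH)
			break;
		sched_mm_cid_remote_clear_old(mm, cpu);
	}

idx ends up one past the batch size on exit, but it is reinitialised
before the second loop, so the behaviour is unchanged.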

Thanks,
Gabriele



Thread overview: 10+ messages
     [not found] <20250716160603.138385-6-gmonaco@redhat.com>
2025-07-16 16:06 ` [PATCH v2 2/4] rseq: Run the mm_cid_compaction from rseq_handle_notify_resume() Gabriele Monaco
2025-08-26 18:01   ` Mathieu Desnoyers
2025-08-27  6:55     ` Gabriele Monaco
2025-09-24 15:22     ` Gabriele Monaco
2025-09-29 22:01       ` Thomas Gleixner
2025-09-30 10:18         ` Gabriele Monaco
2025-10-02  1:22         ` Thomas Gleixner
2025-07-16 16:06 ` [PATCH v2 3/4] sched: Compact RSEQ concurrency IDs in batches Gabriele Monaco
2025-08-26 18:10   ` Mathieu Desnoyers
2025-08-28  8:36     ` Gabriele Monaco [this message]
