From: Marcelo Tosatti <mtosatti@redhat.com>
To: Leonardo Bras <leobras.c@gmail.com>
Cc: Michal Hocko <mhocko@suse.com>,
	linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
	linux-mm@kvack.org, Johannes Weiner <hannes@cmpxchg.org>,
	Roman Gushchin <roman.gushchin@linux.dev>,
	Shakeel Butt <shakeel.butt@linux.dev>,
	Muchun Song <muchun.song@linux.dev>,
	Andrew Morton <akpm@linux-foundation.org>,
	Christoph Lameter <cl@linux.com>,
	Pekka Enberg <penberg@kernel.org>,
	David Rientjes <rientjes@google.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	Leonardo Bras <leobras@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Waiman Long <longman@redhat.com>,
	Boqun Feng <boqun.feng@gmail.com>,
	Frederic Weisbecker <fweisbecker@suse.de>
Subject: Re: [PATCH 0/4] Introduce QPW for per-cpu operations
Date: Mon, 2 Mar 2026 21:19:44 -0300
Message-ID: <aaYpICV55B70U1I2@tpad>
In-Reply-To: <aaJDjmnfuo8AM6J9@WindFlash>

On Fri, Feb 27, 2026 at 10:23:27PM -0300, Leonardo Bras wrote:
> On Mon, Feb 23, 2026 at 10:06:32AM +0100, Michal Hocko wrote:
> > On Fri 20-02-26 18:58:14, Leonardo Bras wrote:
> > > On Mon, Feb 16, 2026 at 12:00:55PM +0100, Michal Hocko wrote:
> > > > On Sat 14-02-26 19:02:19, Leonardo Bras wrote:
> > > > > On Wed, Feb 11, 2026 at 05:38:47PM +0100, Michal Hocko wrote:
> > > > > > On Wed 11-02-26 09:01:12, Marcelo Tosatti wrote:
> > > > > > > On Tue, Feb 10, 2026 at 03:01:10PM +0100, Michal Hocko wrote:
> > > > > > [...]
> > > > > > > > What about !PREEMPT_RT? We have people running isolated workloads and
> > > > > > > > these sorts of pcp disruptions are really unwelcome as well. They do not
> > > > > > > > have requirements as strong as RT workloads but the underlying
> > > > > > > > fundamental problem is the same. Frederic (now CCed) is working on
> > > > > > > > moving those pcp bookkeeping activities to be executed on return to
> > > > > > > > userspace, which should take care of both RT and non-RT
> > > > > > > > configurations AFAICS.
> > > > > > > 
> > > > > > > Michal,
> > > > > > > 
> > > > > > > For !PREEMPT_RT, _if_ you select CONFIG_QPW=y, then there is a kernel
> > > > > > > boot option qpw=y/n, which controls whether the behaviour mirrors
> > > > > > > PREEMPT_RT (i.e. a per-cpu spinlock is taken where the local_lock
> > > > > > > would be).
> > > > > > 
> > > > > > My bad. I've misread the config space of this.
> > > > > > 
> > > > > > > If CONFIG_QPW=n, or with the kernel boot option qpw=n, then only the
> > > > > > > local_lock (and remote work via a workqueue) is used.
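
(For illustration only: a minimal sketch of that selection logic. The
qpw_lock() name comes from the patchset, but the static key, structure
and exact code below are my assumptions, not the actual patch:)

#include <linux/jump_label.h>
#include <linux/local_lock.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>

DEFINE_STATIC_KEY_FALSE(qpw_key);	/* flipped by the qpw= option */

struct qpw_lock {
	spinlock_t slock;	/* qpw=y: lockable from remote CPUs */
	local_lock_t llock;	/* qpw=n: local-only, as today */
};
static DEFINE_PER_CPU(struct qpw_lock, qpw_locks);

static inline void qpw_lock(void)
{
	if (static_branch_unlikely(&qpw_key)) {
		migrate_disable();	/* stay on this CPU's lock */
		spin_lock(this_cpu_ptr(&qpw_locks.slock));
	} else {
		local_lock(&qpw_locks.llock);
	}
}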
> > > > > > > 
> > > > > > > What "pcp bookkeeping activities" do you refer to? I don't see how
> > > > > > > moving certain activities that happen under SLUB or LRU spinlocks
> > > > > > > to just before return to userspace changes anything with respect
> > > > > > > to avoiding CPU interruptions?
> > > > > > 
> > > > > > Essentially, delayed operations like pcp state flushing happen on
> > > > > > return to userspace on isolated CPUs. No locking changes are required
> > > > > > as the work is still per-cpu.
> > > > > > 
> > > > > > In other words, the approach Frederic is working on is not to change
> > > > > > the locking of the pcp delayed work but instead to move that work to a
> > > > > > well defined place - i.e. return to userspace.
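
(A rough sketch of that idea using the existing task_work API, where
TWA_RESUME runs the callback on the next return to userspace. The
pcp_flush names are hypothetical and this is not Frederic's actual
series:)

#include <linux/mm.h>
#include <linux/task_work.h>

/* Runs in task context, right before returning to userspace. */
static void pcp_flush_fn(struct callback_head *head)
{
	drain_local_pages(NULL);	/* flush this CPU's pcp lists */
}

static struct callback_head pcp_flush_work = {
	.func = pcp_flush_fn,
};

/* Instead of queueing work on the isolated CPU right away, ask the
 * task running there to flush on its way back to userspace. */
static int defer_pcp_flush(struct task_struct *tsk)
{
	return task_work_add(tsk, &pcp_flush_work, TWA_RESUME);
}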
> > > > > > 
> > > > > > Btw. have you measured the impact of the preempt_disable -> spinlock
> > > > > > conversion on hot paths like SLUB sheaves?
> > > > > 
> > > > > Hi Michal,
> > > > > 
> > > > > I have done some study on this (which I presented at Plumbers 2023):
> > > > > https://lpc.events/event/17/contributions/1484/ 
> > > > > 
> > > > > Since they are per-cpu spinlocks, and the remote operations are not that
> > > > > frequent by design of the current approach, we are not supposed to see
> > > > > contention (I was not able to detect contention even after stress testing
> > > > > for weeks), nor relevant cacheline bouncing.
> > > > > 
> > > > > That being said, for RT local_locks are already per-cpu spinlocks, so
> > > > > there is only a difference for !RT, which, as you mention, does
> > > > > preempt_disable():
> > > > > 
> > > > > The performance impact noticed was mostly about jumping around in
> > > > > executable code: inlining the spinlocks (test #2 in the presentation)
> > > > > took care of most of the extra cycles, leaving about 4-14 extra cycles
> > > > > per lock/unlock pair (tested on memcg with a kmalloc test).
> > > > > 
> > > > > Yeah, as expected there are some extra cycles, since we are doing extra
> > > > > atomic operations (even if on a local cacheline) in the !RT case, but
> > > > > this could be enabled only if the user thinks it is an acceptable cost
> > > > > for reducing interruptions.
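
(For reference, a simplified sketch of what the two !RT fastpaths
cost; names are illustrative, not from the patchset:)

#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/spinlock.h>

static DEFINE_PER_CPU(spinlock_t, pcp_slock);

/* Today's local_lock on !RT: no atomic operations at all. */
static void local_lock_path(void)
{
	preempt_disable();
	/* ... critical section, this CPU only ... */
	preempt_enable();
}

/* qpw per-cpu spinlock: similar pinning, plus one atomic
 * acquire/release on a normally local, uncontended cacheline --
 * the 4-14 extra cycles mentioned above. */
static void qpw_spinlock_path(void)
{
	spinlock_t *lock;

	migrate_disable();	/* don't change CPUs under our feet */
	lock = this_cpu_ptr(&pcp_slock);
	spin_lock(lock);
	/* ... critical section, also lockable from remote CPUs ... */
	spin_unlock(lock);
	migrate_enable();
}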
> > > > > 
> > > > > What do you think?
> > > > 
> > > > The fact that the behavior is opt-in for !RT is certainly a plus. I also
> > > > do not expect the overhead to be really big. 
> > > 
> > > Awesome! Thanks for reviewing!
> > > 
> > > > To me, a much
> > > > more important question is which of the two approaches is easier to
> > > > maintain long term. The pcp work needs to be done one way or the other.
> > > > Whether we want to tweak locking or do it at a very well defined time is
> > > > the bigger question.
> > > 
> > > That crossed my mind as well, and I went with the idea of changing the
> > > locking because I was working on workloads in which deferring work to a
> > > kernel re-entry would cause deadline misses as well. Or, more critically,
> > > the drains could take forever, as some of those tasks avoid returning to
> > > the kernel as much as possible.
> > 
> > Could you be more specific please?
> 
> Hi Michal,
> Sorry for the delay
> 
> I think Marcelo covered some of the main topics earlier in this 
> thread:
> 
> https://lore.kernel.org/all/aZ3ejedS7nE5mnva@tpad/
> 
> But in summary:
> - There are workloads that are designed to avoid returning to kernelspace
> as much as possible, as they are either cpu intensive or latency
> sensitive (RT workloads), such as low-latency automation.
> 
> There are scenarios, such as industrial automation, in which the
> application is supposed to reply to a request less than 50us after it
> was generated (IIRC), so being scheduled out, dealing with interruptions,
> or issuing syscalls are a no-go. In those cases, cpu isolation is a must,
> and since the task can run in userspace for a really long time, it may
> take a very long time before any syscall actually performs the scheduled
> flush.
> 
> - Other workloads, such as HPC, may need to use syscalls or rely on
> interrupts, but it's also undesirable for those to take long, as the
> time spent there is time not used for processing the required data.
> 
> Let's say that, for the sake of cpu isolation, a lot of different
> requests made to a given isolated cpu are batched to run on syscall
> entry/exit. This means the next syscall may take much longer than
> usual.
> - This may break other RT workloads, such as sensor/sound/image sampling,
> which would generally be fine with some of the faster syscalls for their
> application, but may now perceive an error because one of those syscalls
> took too long.
> 
> While the qpw approach may cost a few extra cycles, it operates remotely
> and makes the system a bit more predictable.
> 
> Also, when I was planning the mechanism, I remember it was meant to add
> zero overhead in the case of CONFIG_QPW=n, very little overhead in the
> case of CONFIG_QPW=y + qpw=0 (a couple of static branches, possibly with
> the cost hidden by the cpu branch predictor), and only a few extra
> cycles in the case of qpw=1 + !RT. Which means we may be missing just a
> few adjustments to get there.

Leo,

v2 of the patchset adds only 2 cycles in the CONFIG_QPW=y + qpw=0 case.
The larger overhead was due to migrate_disable(), which is now (in v2)
hidden inside the static branch.
My bad.
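
Illustratively (a sketch under my assumptions about the code, reusing
the names from the sketch above; not the actual v1/v2 diff), the
change amounts to:

/* v1: migrate_disable() was paid even with qpw=0 */
static inline void qpw_lock_v1(void)
{
	migrate_disable();
	if (static_branch_unlikely(&qpw_key))
		spin_lock(this_cpu_ptr(&qpw_locks.slock));
	else
		local_lock(&qpw_locks.llock);
}

/* v2: only the qpw=1 path pays for migrate_disable(), so qpw=0 adds
 * just a patched-out jump on top of the plain local_lock. */
static inline void qpw_lock_v2(void)
{
	if (static_branch_unlikely(&qpw_key)) {
		migrate_disable();
		spin_lock(this_cpu_ptr(&qpw_locks.slock));
	} else {
		local_lock(&qpw_locks.llock);
	}
}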

> BTW, if the numbers are not that great for your workloads, we could
> look at adding an extra QPW mode in which local_locks are taken in the
> fastpath and the flush wq is postponed to the point on return from
> syscall that you mentioned. What I mean is that we don't need to be
> limited to choosing between solutions, but can instead allow the user
> (or distro) to choose the desired behavior.
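
(A sketch of what such a third mode could look like; the enum and its
names are purely hypothetical:)

/* Hypothetical third mode: keep the local_lock fastpath and defer
 * remote flush requests to the task's next return to userspace. */
enum qpw_mode {
	QPW_OFF,	/* local_lock only, remote work via workqueue */
	QPW_SPINLOCK,	/* per-cpu spinlock, direct remote draining */
	QPW_DEFERRED,	/* local_lock + flush on return to userspace */
};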
> 
> Thanks!
> Leo 

I think 2 cycles is acceptable.


