From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
To: Mateusz Guzik <mjguzik@gmail.com>, Harry Yoo <harry.yoo@oracle.com>
Cc: Jan Kara <jack@suse.cz>,
Gabriel Krisman Bertazi <krisman@suse.de>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Shakeel Butt <shakeel.butt@linux.dev>,
Michal Hocko <mhocko@kernel.org>, Dennis Zhou <dennis@kernel.org>,
Tejun Heo <tj@kernel.org>, Christoph Lameter <cl@gentwo.org>,
Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@redhat.com>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Vlastimil Babka <vbabka@suse.cz>, Mike Rapoport <rppt@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [RFC PATCH 0/4] Optimize rss_stat initialization/teardown for single-threaded tasks
Date: Mon, 1 Dec 2025 09:47:57 -0500
Message-ID: <355143c9-78c7-4da1-9033-5ae6fa50efad@efficios.com>
In-Reply-To: <CAGudoHHeqvtK83ub1FPuc4JYk4XMxPM+Es=Pp3V0Zq-EKGBs5A@mail.gmail.com>
On 2025-12-01 06:31, Mateusz Guzik wrote:
> On Mon, Dec 1, 2025 at 11:39 AM Harry Yoo <harry.yoo@oracle.com> wrote:
>> Apologies for not reposting it for a while. I have limited capacity to push
>> this forward right now, but FYI... I just pushed slab-destructor-rfc-v2r2-wip
>> branch after rebasing it onto the latest slab/for-next.
>>
>> https://gitlab.com/hyeyoo/linux/-/commits/slab-destructor-rfc-v2r2-wip?ref_type=heads
>>
>
> Nice, thanks. This takes care of the majority of the needful(tm).
>
> To reiterate, should something like this land, it is going to address
> the multicore scalability concern for single-threaded processes better
> than the patchset by Gabriel thanks to also taking care of cid. Bonus
> points for handling creation and teardown of multi-threaded processes.
>
> However, this is still going to suffer from doing a full cpu walk on
> process exit. As I described earlier the current handling can be
> massively depessimized by reimplementing this to take care of all 4
> counters in each iteration, instead of walking everything 4 times.
> This is still going to be slower than not doing the walk at all, but
> it may be fast enough that Gabriel's patchset is no longer
> justifiable.
>
> But then the test box is "only" 256 hw threads, what about bigger boxes?
>
> Given my previous note about increased use of multithreading in
> userspace, the more concerned you happen to be about such a walk, the
> more you want an actual solution which takes care of multithreaded
> processes.
>
> Additionally one has to assume per-cpu memory will be useful for other
> facilities down the line, making such a walk into an even bigger
> problem.
>
> Thus ultimately *some* tracking of whether given mm was ever active on
> a given cpu is needed, preferably cheaply implemented at least for the
> context switch code. Per what I described in another e-mail, one way
> to do it would be to coalesce it with tlb handling by changing how the
> bitmap tracking is handled -- having 2 adjacent bits denote cpu usage
> + tlb separately. For the common case this should cost almost the same
> to set the two. Iteration for tlb shootdowns would be less efficient
> but that's probably tolerable. Maybe there is a better way, I did not
> put much thought into it. I just claim sooner or later this will need
> to get solved. At the same time would be a bummer to add stopgaps
> without even trying.
>
> With the cpu tracking problem solved, check_mm would visit only a few
> cpus in the benchmark (probably just 1). It would be faster
> single-threaded than the proposed patch *and* would retain that for
> processes which went multithreaded.
Looking at this problem, I think it is a good fit for rseq mm_cid
(per-mm concurrency IDs). Let me explain.
I originally implemented the rseq mm_cid for userspace. It keeps track
of max_mm_cid = min(nr_threads, nr_allowed_cpus) for each mm, and lets
the scheduler select a current mm_cid value within the range
[0 .. max_mm_cid - 1]. With Thomas Gleixner's rewrite (currently in
tip), we even have hooks in thread clone/exit where we know when
max_mm_cid is increased/decreased for a mm. So we could keep track of
the maximum value of max_mm_cid over the lifetime of a mm.
So using mm_cid for the per-mm rss counters would involve:
- Still allocating memory per-cpu on mm allocation (nr_cpu_ids entries),
  but without zeroing all that memory (we eliminate a walk over all
  possible cpus on allocation).
- Initializing CPU counters on thread clone when max_mm_cid is increased,
  and keeping track of the max value of max_mm_cid over the mm lifetime.
- Rather than using the per-cpu accessors to access the counters, we
  would have to load the per-task mm_cid field to get the counter index.
  This adds slight overhead on the fast path, because a single
  segment-selector-prefixed access is replaced by an access that depends
  on first loading the current task's mm_cid index.
- Iteration over all possible cpus at process exit is replaced by an
  iteration up to the mm's maximum max_mm_cid, which is bounded by the
  maximum value of min(nr_threads, nr_allowed_cpus) over the mm
  lifetime. This iteration should be done with the new mm_cid mutex
  held across thread clone/exit.
One more downside to consider is loss of NUMA locality, because the
index used to access the per-cpu memory would not take the hardware
topology into account. The index-to-topology mapping should stay stable
for a given mm, but if we mix the per-cpu data allocations of different
mms, NUMA locality would be degraded. Ideally we'd have a per-cpu
allocator with per-mm arenas for mm_cid indexing if we care about NUMA
locality.
So let's say you have a 256-core machine, where cpu numbers go from 0
to 255: with a 4-thread process, mm_cid will be limited to the range
[0..3]. Likewise, if a process has tons of threads but is limited to a
few cores (e.g. pinned on cores 10 to 19), the range is limited to
[0..9].
This approach solves the runtime overhead issue of zeroing per-cpu
memory for all scenarios:
* single-threaded:                  index = 0
* nr_threads < nr_cpu_ids:
  * nr_threads < nr_allowed_cpus:   index = [0 .. nr_threads - 1]
  * nr_threads >= nr_allowed_cpus:  index = [0 .. nr_allowed_cpus - 1]
* nr_threads >= nr_cpu_ids:         index = [0 .. nr_cpu_ids - 1]
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
Thread overview: 19+ messages
2025-11-27 23:36 Gabriel Krisman Bertazi
2025-11-27 23:36 ` [RFC PATCH 1/4] lib/percpu_counter: Split out a helper to insert into hotplug list Gabriel Krisman Bertazi
2025-11-27 23:36 ` [RFC PATCH 2/4] lib: Support lazy initialization of per-cpu counters Gabriel Krisman Bertazi
2025-11-27 23:36 ` [RFC PATCH 3/4] mm: Avoid percpu MM counters on single-threaded tasks Gabriel Krisman Bertazi
2025-11-27 23:36 ` [RFC PATCH 4/4] mm: Split a slow path for updating mm counters Gabriel Krisman Bertazi
2025-12-01 10:19 ` David Hildenbrand (Red Hat)
2025-11-28 13:30 ` [RFC PATCH 0/4] Optimize rss_stat initialization/teardown for single-threaded tasks Mathieu Desnoyers
2025-11-28 20:10 ` Jan Kara
2025-11-28 20:12 ` Mathieu Desnoyers
2025-11-29 5:57 ` Mateusz Guzik
2025-11-29 7:50 ` Mateusz Guzik
2025-12-01 10:38 ` Harry Yoo
2025-12-01 11:31 ` Mateusz Guzik
2025-12-01 14:47 ` Mathieu Desnoyers [this message]
2025-12-01 15:23 ` Gabriel Krisman Bertazi
2025-12-01 19:16 ` Harry Yoo
2025-12-03 11:02 ` Mateusz Guzik
2025-12-03 11:54 ` Mateusz Guzik
2025-12-03 14:36 ` Mateusz Guzik