From: Vlastimil Babka <vbabka@suse.cz>
To: Harry Yoo <harry.yoo@oracle.com>, Mateusz Guzik <mjguzik@gmail.com>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>,
linux-mm <linux-mm@kvack.org>, Dennis Zhou <dennis@kernel.org>
Subject: Re: a case for a destructor for slub: mm_struct
Date: Mon, 17 Mar 2025 10:02:48 +0100
Message-ID: <4aa49f49-baec-4ef6-87c7-effd5a1dc5eb@suse.cz>
In-Reply-To: <Z9e2L7TGIvZgwDXB@harry>

On 3/17/25 06:42, Harry Yoo wrote:
> On Fri, Mar 14, 2025 at 01:32:16PM +0100, Mateusz Guzik wrote:
>>
>> It's a spinlock which disables interrupts around itself, so it should
>> not be a problem.
>>
>> > > > > there may be spurious mm_structs hanging out and eating pcpu resources.
>> > > > > Something can be added to reclaim those by the pcpu allocator.
>> > > >
>> > > > Not sure if I follow. What do you mean by spurious mm_struct, and how
>> > > > does the pcpu allocator reclaim that?
>> > > >
>> > >
>> > > Suppose a workload was run which created tons of mm_structs. The
>> > > workload is done and they can be reclaimed, but they hang out just in case.
>> > >
>> > > Another workload showed up, but one which wants to do many percpu
>> > > allocs and is not mm_struct-heavy.
>> > >
>> > > In case of resource shortage it would be good if the percpu allocator
>> > > knew how to reclaim the known cached-but-not-used memory instead of
>> > > grabbing new memory.
>> > >
>> > > As for how to get there, so happens the primary consumer (percpu
>> > > counters) already has a global list of all allocated objects. The
>> > > allocator could walk it and reclaim as needed.
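
For reference, that global list exists today in lib/percpu_counter.c
(percpu_counters plus percpu_counters_lock, compiled in under
CONFIG_HOTPLUG_CPU, and private to that file). A walk could look
roughly like the sketch below; reclaim_one_counter() is entirely made
up, and deciding what counts as "cached-but-not-used" would need
cooperation from each counter's owner:

	/*
	 * Sketch only: the list and lock are real but private to
	 * lib/percpu_counter.c; reclaim_one_counter() is invented.
	 */
	static void percpu_counter_reclaim_walk(void)
	{
		struct percpu_counter *fbc;
		unsigned long flags;

		spin_lock_irqsave(&percpu_counters_lock, flags);
		list_for_each_entry(fbc, &percpu_counters, list)
			reclaim_one_counter(fbc);
		spin_unlock_irqrestore(&percpu_counters_lock, flags);
	}
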
>> >
>> > You mean reclaiming the per-cpu objects along with the slab objects that use them?
>> > That sounds like a new slab shrinker for mm_struct?
>> >
>>
>> at least the per-cpu thing, mm_struct itself optionally
>
> If we allow reclaiming the per-cpu stuff only but do not reclaim
> the slab object that contains it...
>
> Does that mean the users of the cache need to check if the percpu
> memory has been reclaimed and if so, should call init routines (e.g.,
> mm_init())?

That sounds like something we'd better avoid? I think it would require
some locking between the shrinker and the slab allocator so it doesn't
hand out an mm_struct whose percpu memory has been reclaimed.
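
To spell out the burden on cache users: each allocation site would end
up with a check-and-reinit dance, roughly like below. All hypothetical:
mm_percpu_reclaimed() does not exist, and allocate_mm()/mm_init() are
kernel/fork.c internals standing in for whatever re-creates the percpu
state:

	mm = allocate_mm();
	if (!mm)
		goto fail_nomem;
	/* mm_percpu_reclaimed() is made up for illustration */
	if (mm_percpu_reclaimed(mm)) {
		/* racy unless the shrinker is locked out here */
		if (!mm_init(mm, current, current_user_ns()))
			goto fail_nomem;
	}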

I hope it's enough if we're able to shrink what the slab allocator has
cached in per-cpu (partial) slabs; that can already be flushed
explicitly, e.g. via sysfs, but I can't recall if there's a shrinker
for it. Of course there will always be free mm_struct objects in
partially full slabs due to fragmentation, but I doubt we'd need to
worry specifically about the percpu memory those "own".
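
FWIW the explicit flush is a write to /sys/kernel/slab/<cache>/shrink,
which ends up in kmem_cache_shrink(). A shrinker on top of that could
be as minimal as the sketch below. Not a proposal: mm_cachep is
kernel/fork.c's cache, the count is a placeholder because there's no
cheap way to count objects sitting in per-cpu slabs, and whether
kmem_cache_shrink() is light enough for reclaim context is an open
question:

	static unsigned long mm_cache_count(struct shrinker *shrink,
					    struct shrink_control *sc)
	{
		return 1;	/* placeholder: "might have something" */
	}

	static unsigned long mm_cache_scan(struct shrinker *shrink,
					   struct shrink_control *sc)
	{
		/* flushes per-cpu partials and frees empty slabs */
		kmem_cache_shrink(mm_cachep);
		return SHRINK_STOP;
	}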