From: David Hildenbrand <david@redhat.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH] mm: tag kernel stack pages
Date: Thu, 4 Sep 2025 12:31:34 +0200
Message-ID: <0ba60468-fc6d-4f07-a9ea-e16b8bcd5575@redhat.com>
In-Reply-To: <aLdLDEW2d3hK4gUV@casper.infradead.org>

On 02.09.25 21:52, Matthew Wilcox wrote:
> On Thu, Aug 21, 2025 at 02:44:31PM +0200, David Hildenbrand wrote:
>> On 20.08.25 22:20, Vishal Moola (Oracle) wrote:
>>> Currently, we have no way to distinguish a kernel stack page from an
>>> unidentified page. Being able to track this information can be
>>> beneficial for optimizing kernel memory usage (e.g. analyzing
>>> fragmentation, location, etc.). Knowing a page is being used for a kernel
>>> stack gives us more insight about pages that are certainly immovable and
>>> important to kernel functionality.
>>
>> It's a very niche use case. Anything that's not clearly a folio or a special
>> movable_ops page is certainly immovable. So we can identify pretty reliably
>> what's movable and what's not.
>>
>> Happy to learn how you would want to use that knowledge to reduce
>> fragmentation. :)
>>
>> So this reads a bit hand-wavy.
>
> I have a theory that we should always be attempting to do aligned
> allocations if we can, falling back to individual allocations if
> we can't. This is an attempt to gather some data to inform us whether
> that theory is true, and to help us measure whether any effort we
> take to improve that situation is effective.
>
> Eyeballing the output of tools/testing/page-types certainly lends
> some credence to this. On x86-64 with its 16KiB stacks and 4KiB
> page size, we often see four consecutive pages allocated as type
> KernelStack, and as you'd expect only about 25% of the time are they
> aligned to a 16KiB boundary. That is, at least 75% of the time they
> prevent _two_ order-2 pages from being available.
I assume that, ideally, you'd also want to know whether all of these stack
pages belong to the same thread rather than to different ones, right?
("context" can matter as well)
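
(Side note on the arithmetic: only one of the four possible start pfns of a
4-page run is order-2 aligned, so with random placement a stack straddles two
order-2 blocks roughly 75% of the time. The check itself is trivial; purely
for illustration, the helper name is made up:

	#include <linux/align.h>

	/*
	 * Illustration only: a 4-page (order-2) kernel stack starting at
	 * @start_pfn fits into a single order-2 block iff the start pfn is
	 * order-2 aligned; any other placement fragments two such blocks.
	 */
	static inline bool kstack_run_is_aligned(unsigned long start_pfn)
	{
		return IS_ALIGNED(start_pfn, 1UL << 2); /* 4 pages == 16KiB */
	}
)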
>
> As you say, they're not movable. I'm not sure if it makes sense to
> go to the effort of making them movable; it'd require interacting
> with the scheduler (to prevent the task we're relocating from
> being scheduled), and I don't think the realtime people would be
> terribly keen on that idea. So that isn't one of the ideas we
> have on the table for improving matters.
Yeah, while it's possible, I am also not sure we would always want that.
>
> Ideas we have been batting around:
>
> - Have kernel stacks try to do an order-N allocation and vmap()
> the result, fall back to current implementation
> - Have vmalloc try to do an order-N allocation, fall back down the
> orders on failure to allocate
> - Change the alloc_bulk implementation to do the order-N allocation
> and fall back
>
> I'm sure other possibilities also exist.
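For the first of those, the rough shape I have in mind (untested sketch only,
reusing helpers we already have; the function name is made up and the gfp
details are hand-waved) would be something like:

	#include <linux/gfp.h>
	#include <linux/mm.h>
	#include <linux/thread_info.h>
	#include <linux/vmalloc.h>

	/*
	 * Untested sketch: try a single physically contiguous order-N
	 * allocation and vmap() it, so we keep vmalloc addressing; if that
	 * fails, fall back to letting vmalloc assemble order-0 pages, which
	 * is roughly what happens today.
	 */
	static void *try_contig_thread_stack(int node)
	{
		struct page *pages[1 << THREAD_SIZE_ORDER];
		struct page *page;
		void *stack;
		int i;

		page = alloc_pages_node(node, THREADINFO_GFP | __GFP_NOWARN,
					THREAD_SIZE_ORDER);
		if (page) {
			for (i = 0; i < (1 << THREAD_SIZE_ORDER); i++)
				pages[i] = page + i;

			stack = vmap(pages, 1 << THREAD_SIZE_ORDER, VM_MAP,
				     PAGE_KERNEL);
			if (stack)
				return stack;
			__free_pages(page, THREAD_SIZE_ORDER);
		}

		/* Fallback: individual pages via vmalloc, as today. */
		return __vmalloc_node(THREAD_SIZE, THREAD_ALIGN, THREADINFO_GFP,
				      node, __builtin_return_address(0));
	}

The other two options would push the same try-contiguous-then-fall-back
logic down into vmalloc / the bulk allocator instead.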
>
>> staring at [1], we allocate from vmalloc, so I would assume that these will
>> be vmalloc-typed pages in the future and we cannot change the type later.
>>
>> [1] https://kernelnewbies.org/MatthewWilcox/Memdescs
>
> I see the vmalloc subtype as being a "we don't know any better" type.
I guess this could get nasty once we have metadata assigned to the
vmalloc allocations (struct vmdesc).
> We could allocate another subtype of type 0 to mean "kernel stacks"
> and have it be implicit that kernel stacks are allocated from vmalloc.
Yes, that would work.
> This would probably require that we have a vmalloc interface that lets us
> specify a subtype, which I think is probably something we'd want anyway.
vmalloc subtypes don't sound like a bad idea.
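Something along these lines is what I'd picture (purely hypothetical, the
names are made up and nothing like this exists today):

	#include <linux/vmalloc.h>

	/* Hypothetical: a place to record "what is this vmalloc area for". */
	enum vmalloc_subtype {
		VMALLOC_SUBTYPE_NONE,
		VMALLOC_SUBTYPE_KSTACK,
	};

	static void *vmalloc_node_subtyped(unsigned long size, int node,
					   enum vmalloc_subtype subtype)
	{
		/*
		 * Until vmalloc has per-allocation metadata (vmdesc?) to
		 * store @subtype in, this would simply forward to
		 * vmalloc_node().
		 */
		return vmalloc_node(size, node);
	}

The stack allocation would then just pass VMALLOC_SUBTYPE_KSTACK.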
>
> I think it's fine to say "This doesn't add enough value to merge it
> upstream". I will note one minor advantage which is that typing these
> pages as PGTY_kstack today prevents them from being inadvertently mapped
> to userspace (whether by malicious code or innocent bug).
Yes, as raised elsewhere: if we can do this consistently today (stack ->
PGTY_kstack), that's fine with me.
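
(IIRC the mechanism for that is that the generic insertion path, vm_insert_page()
and friends, already rejects any page with a page type set via page_has_type(),
so once the stack pages carry PGTY_kstack they are covered for free. Spelled out,
the invariant is just the following; the helper name is made up:

	#include <linux/page-flags.h>

	/*
	 * Sketch of the invariant only (the real check lives in the generic
	 * insertion path): a page carrying any page type, PGTY_kstack
	 * included, must never be mapped into userspace.
	 */
	static bool page_safe_for_user_mapping(struct page *page)
	{
		return !page_has_type(page);
	}
)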
--
Cheers
David / dhildenb