From: "David Hildenbrand (Arm)" <david@kernel.org>
To: Rik van Riel <riel@surriel.com>,
Usama Arif <usama.arif@linux.dev>,
"willy@infradead.org" <willy@infradead.org>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Zi Yan <ziy@nvidia.com>,
Andrew Morton <akpm@linux-foundation.org>,
lsf-pc@lists.linux-foundation.org,
"linux-mm@kvack.org" <linux-mm@kvack.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
Shakeel Butt <shakeel.butt@linux.dev>,
Kiryl Shutsemau <kas@kernel.org>, Barry Song <baohua@kernel.org>,
Dev Jain <dev.jain@arm.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
Nico Pache <npache@redhat.com>,
"Liam R . Howlett" <Liam.Howlett@oracle.com>,
Ryan Roberts <ryan.roberts@arm.com>,
Vlastimil Babka <vbabka@suse.cz>,
Lance Yang <lance.yang@linux.dev>,
Frank van der Linden <fvdl@google.com>
Subject: Re: [LSF/MM/BPF TOPIC] Beyond 2MB: Why Terabyte-Scale Machines Need 1GB Transparent Huge Pages
Date: Fri, 20 Feb 2026 11:00:09 +0100
Message-ID: <8b675db3-530a-4b60-96c3-cff936ed764f@kernel.org>
In-Reply-To: <0c81121c23a9b1016425da100f11cb31feddd7ad.camel@surriel.com>

On 2/19/26 20:02, Rik van Riel wrote:
> On Thu, 2026-02-19 at 15:53 +0000, Usama Arif wrote:
>>
>> Is CMA needed to make this work?
>> ================================
>>
>> The short answer is no. 1G THPs can be obtained without it. CMA can
>> help a lot of course, but we don't *need* it. For example, I can run
>> the very simple case of trying to get 1G pages in the upstream kernel
>> without CMA on my server via hugetlb and it works. The server has
>> been up for more than 2 weeks (so pretty fragmented), is running a
>> bunch of stuff in the background, uses 0 CMA memory, and I tried to
>> get 100x1G pages on it and it worked.
>> It uses folio_alloc_gigantic, which is exactly what this RFC uses:
>
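
[ Purely illustrative aside: a minimal sketch of what such a CMA-less
  allocation looks like when it goes through folio_alloc_gigantic();
  the helper name, order and gfp flags are made up for the example and
  not taken from the RFC:

	#include <linux/gfp.h>	/* folio_alloc_gigantic(), needs CONFIG_CONTIG_ALLOC */

	static struct folio *alloc_1g_folio_sketch(int nid)
	{
		const int order = 30 - PAGE_SHIFT;	/* 1G with 4K base pages */
		gfp_t gfp = GFP_HIGHUSER_MOVABLE | __GFP_COMP | __GFP_NOWARN;

		/*
		 * folio_alloc_gigantic() wraps alloc_contig_pages(): it
		 * isolates a suitably aligned, physically contiguous range
		 * and migrates movable pages out of the way. No CMA
		 * reservation is involved; the folio is freed later with a
		 * plain folio_put().
		 */
		return folio_alloc_gigantic(order, gfp, nid, NULL);
	}
]
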
> While I agree with the idea of starting simple, I think
> we should ask the question of what we want physical memory
> handling to look like if 1TB pages become more common,
> and applications start to rely on them to meet their
> performance goals.
>
> We have CMA balancing code today. It seems to work, but
> it likely is not the long term direction we want to go,
> mostly due to the way CMA does allocations.
>
> It seems clear that in order to prevent memory fragmentation,
> we need to split up system memory in some way between an area
> that is used only for movable allocations, and an area where
> any kind of allocation can go.
>
> This would need something similar to CMA balancing to prevent
> false OOMs for non-movable allocations.
>
> However, beyond that I really do not have any idea of what
> things should look like.
>
> What do we want the kernel to do here?

This subtopic is certainly worth a separate session as it's quite
involved, but I assume the right (tm) thing to do will be:

(a) Teaching the buddy to manage pages larger than the current maximum
    buddy order. There will certainly be some work required to get to
    that point (and Zi Yan already did some work). It might also be
    fair to say that orders above the current maximum buddy order might
    behave differently, at least to some degree (thinking about the
    relation to zone alignment, section sizes, etc.; see the
    back-of-the-envelope numbers after (b) below).

    If we require vmemmap for these larger orders, maybe the buddy order
    could more easily exceed the section size; I don't remember all of
    the details of why that limitation was in place (but one of them was
    memmap contiguity within a high-order buddy page, which is only
    guaranteed within a memory section with CONFIG_SPARSEMEM).
(b) Teaching compaction etc. to *also* compact/group at a larger
    granularity (in addition to the current pageblock size). When we
    discussed that in the past we used the term "superblock", which
    Zi Yan just brought up again in another thread [1].
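
To make the section-size point in (a) a bit more concrete, some
back-of-the-envelope numbers (x86-64 defaults with 4K base pages,
purely illustrative):

	#define PAGE_SHIFT		12
	#define MAX_PAGE_ORDER		10	/* 2^10 * 4K = 4M, largest buddy block today */
	#define SECTION_SIZE_BITS	27	/* 2^27 = 128M per memory section */
	/* a 1G page is order 30 - PAGE_SHIFT = 18, i.e. 1G / 128M = 8 sections */

So a single order-18 buddy page would span 8 sections, and its memmap is
only guaranteed to be virtually contiguous across them if something like
CONFIG_SPARSEMEM_VMEMMAP provides that -- which is the "require vmemmap
for these larger orders" angle in (a).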

There was a proposal a while ago to internally separate zones into
chunks of memory (I think the proposal used DRAM banks, such that you
could more easily power down unused DRAM banks). I'm not saying we
should do that, but maybe something like sub-zones would be worth
exploring. Maybe not.

Big, more complex topic :)

[1] https://lore.kernel.org/r/34730030-48F6-4D0C-91EA-998A5AF93F5F@nvidia.com
--
Cheers,
David