linux-mm.kvack.org archive mirror
From: Muchun Song <muchun.song@linux.dev>
To: Mike Rapoport <rppt@kernel.org>,
	"David Hildenbrand (Arm)" <david@kernel.org>
Cc: Muchun Song <songmuchun@bytedance.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	yinghai@kernel.org, Lorenzo Stoakes <ljs@kernel.org>,
	"Liam R. Howlett" <Liam.Howlett@oracle.com>,
	Vlastimil Babka <vbabka@kernel.org>,
	Suren Baghdasaryan <surenb@google.com>,
	Michal Hocko <mhocko@suse.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH] mm/sparse: remove sparse_buffer
Date: Fri, 10 Apr 2026 11:07:21 +0800
Message-ID: <BD30B00B-76F4-4984-9B29-9668896E761F@linux.dev>
In-Reply-To: <adfBVc8ohLrtIe3j@kernel.org>



> On Apr 9, 2026, at 23:10, Mike Rapoport <rppt@kernel.org> wrote:
> 
> Hi,
> 
> On Thu, Apr 09, 2026 at 02:29:38PM +0200, David Hildenbrand (Arm) wrote:
>> On 4/9/26 13:40, Muchun Song wrote:
>>> 
>>> 
>>>> On Apr 8, 2026, at 21:40, David Hildenbrand (Arm) <david@kernel.org> wrote:
>>>> 
>>>> On 4/7/26 10:39, Muchun Song wrote:
>>>>> The sparse_buffer was originally introduced in commit 9bdac9142407
>>>>> ("sparsemem: Put mem map for one node together.") to allocate a
>>>>> contiguous block of memory for all memmaps of a NUMA node.
>>>>> 
>>>>> However, the original commit message did not clearly state the actual
>>>>> benefits or the necessity of keeping all memmap areas strictly
>>>>> contiguous for a given node.
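
For context, sparse_buffer is essentially a bump allocator over one big
per-node memblock allocation. A simplified sketch of the mm/sparse.c logic
being removed (the alignment/free bookkeeping is omitted here):

	static void *sparsemap_buf __meminitdata;
	static void *sparsemap_buf_end __meminitdata;

	static void __init sparse_buffer_init(unsigned long size, int nid)
	{
		/* One large allocation covering all memmaps of this node. */
		sparsemap_buf = memmap_alloc(size, section_map_size(),
					     __pa(MAX_DMA_ADDRESS), nid, true);
		sparsemap_buf_end = sparsemap_buf + size;
	}

	void * __meminit sparse_buffer_alloc(unsigned long size)
	{
		void *ptr = NULL;

		if (sparsemap_buf) {
			ptr = PTR_ALIGN(sparsemap_buf, size);
			if (ptr + size > sparsemap_buf_end)
				ptr = NULL;	/* caller falls back to memblock */
			else
				sparsemap_buf = ptr + size;	/* pointer bump */
		}
		return ptr;
	}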
>>>> 
>>>> We don't want the memmap to be scattered around, given that it is one of
>>>> the biggest allocations during boot.
>>>> 
>>>> I think it's related to not turning too many memory blocks/sections
>>>> un-offlinable.
>>>> 
>>>> I always imagined that memblock would still keep these allocations close
>>>> to each other. Can you verify if that is indeed true?
>>> 
>>> You raised a very interesting point about whether memblock keeps
>>> these allocations close to each other. I've done a thorough test
>>> on a 16GB VM by printing the actual physical allocations.
> 
> memblock always allocates in order, so if there are no other memblock
> allocations between the calls to memmap_alloc(), all these allocations will
> be together and they all will be coalesced to a single region in
> memblock.reserved.
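
A hypothetical early-boot snippet (illustration only, not from the patch)
shows both effects at once: successive allocations land directly below one
another, and memblock merges the adjacent reservations into a single entry
in memblock.reserved:

	#include <linux/memblock.h>
	#include <linux/sizes.h>

	void __init demo_memblock_coalescing(void)
	{
		unsigned long nr_before = memblock.reserved.cnt;
		int i;

		for (i = 0; i < 4; i++) {
			/* Top-down by default: each 2MB chunk lands
			 * directly below the previous one. */
			void *p = memblock_alloc(SZ_2M, SZ_2M);

			pr_info("chunk %d at phys 0x%llx\n",
				i, (unsigned long long)__pa(p));
		}

		/* Adjacent reservations are merged, so the region count
		 * grows by at most one, not four. */
		pr_info("reserved regions: %lu -> %lu\n",
			nr_before, (unsigned long)memblock.reserved.cnt);
	}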
> 
>>> I enabled the existing debug logs in arch/x86/mm/init_64.c to
>>> trace the vmemmap_set_pmd allocations. Here is what really happens:
>>> 
>>> When using vmemmap_alloc_block() without the sparse_buffer, the
>>> memblock allocator allocates 2MB chunks. Because memblock
>>> allocates top-down by default, the physical allocations look
>>> like this:
>>> 
>>> [ffe6475cc0000000-ffe6475cc01fffff] PMD -> [ff3cb082bfc00000-ff3cb082bfdfffff] on node 0
>>> [ffe6475cc0200000-ffe6475cc03fffff] PMD -> [ff3cb082bfa00000-ff3cb082bfbfffff] on node 0
>>> [ffe6475cc0400000-ffe6475cc05fffff] PMD -> [ff3cb082bf800000-ff3cb082bf9fffff] on node 0
> 
> ...
> 
>>> Notice that the physical chunks are strictly adjacent to each
>>> other, but in descending order!
>>> 
>>> So, they are NOT "scattered around" the whole node randomly.
>>> Instead, they are packed densely back-to-back in a single
>>> contiguous physical range (just allocated top-down in 2MB pieces).
>>> 
>>> Because they are packed tightly together within the same
>>> contiguous physical memory range, they consume (or pollute) at
>>> most the same number of memory blocks as a single contiguous
>>> allocation (which is what sparse_buffer did) would. Therefore,
>>> this will NOT turn any additional memory blocks/sections into an
>>> "un-offlinable" state.
>>> 
>>> It seems we can safely remove the sparse buffer preallocation
>>> mechanism, don't you think?
>> 
>> Yes, that's what I suspected. Is there a performance implication when
>> doing many individual memmap_alloc() calls, for example, on a larger
>> system with many sections?
> 
> memmap_alloc() will be slower than sparse_buffer_alloc(): allocating from
> memblock is more involved than sparse_buffer_alloc(), but without
> measurements it's hard to tell how much it'll affect overall sparse_init().
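
The cost difference is plausible: with the patch, every section pays for a
full free-range search inside memblock, whereas sparse_buffer_alloc() was
usually just a bounds check plus a pointer bump. Schematically (arguments
shown for illustration only):

	/* Per-section path with the patch: each call walks memblock's
	 * region lists to find a free range. */
	map = memmap_alloc(section_map_size(), section_map_size(),
			   __pa(MAX_DMA_ADDRESS), nid, false);

	/* Per-section fast path before the patch. */
	map = sparse_buffer_alloc(section_map_size());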

I ran a test on a 256GB VM, and the results are as follows:

	With patch:    741,292 ns
	Without patch: 199,555 ns

That is approximately 3.7x slower with the patch applied.
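
(One way to reproduce such a measurement is to bracket sparse_init() with
local_clock(); this instrumentation is hypothetical, not from the patch:)

	#include <linux/sched/clock.h>

	u64 t0 = local_clock();	/* ns resolution is fine for early boot */

	sparse_init();

	pr_info("sparse_init() took %llu ns\n", local_clock() - t0);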

Thanks,
Muchun

> 
>> -- 
>> Cheers,
>> 
>> David
> 
> -- 
> Sincerely yours,
> Mike.




Thread overview: 7+ messages
2026-04-07  8:39 Muchun Song
2026-04-08 13:40 ` David Hildenbrand (Arm)
2026-04-09 11:40   ` Muchun Song
2026-04-09 12:29     ` David Hildenbrand (Arm)
2026-04-09 15:10       ` Mike Rapoport
2026-04-10  3:07         ` Muchun Song [this message]
2026-04-10  6:05           ` Muchun Song
