From: Muchun Song <muchun.song@linux.dev>
To: Mike Rapoport <rppt@kernel.org>
Cc: Muchun Song <songmuchun@bytedance.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	David Hildenbrand <david@kernel.org>,
	linux-mm@kvack.org, Lorenzo Stoakes <ljs@kernel.org>,
	"Liam R. Howlett" <Liam.Howlett@oracle.com>,
	Vlastimil Babka <vbabka@kernel.org>,
	Suren Baghdasaryan <surenb@google.com>,
	Michal Hocko <mhocko@suse.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 1/5] mm/sparse-vmemmap: provide generic vmemmap_set_pmd() and vmemmap_check_pmd()
Date: Sun, 5 Apr 2026 22:07:36 +0800	[thread overview]
Message-ID: <D0D4B0DA-56F3-4ECD-8855-96C21A6DE1F0@linux.dev> (raw)
In-Reply-To: <adIKIhRL6q2NIgnj@kernel.org>



> On Apr 5, 2026, at 15:07, Mike Rapoport <rppt@kernel.org> wrote:
> 
> Hi,
> 
> On Sat, Apr 04, 2026 at 08:20:54PM +0800, Muchun Song wrote:
>> The two weak functions are currently no-ops on every architecture,
>> forcing each platform that needs them to duplicate the same handful
>> of lines.  Provide a generic implementation:
>> 
>> - vmemmap_set_pmd() simply sets a huge PMD with PAGE_KERNEL protection.
>> 
>> - vmemmap_check_pmd() verifies that the PMD is present and leaf,
>>  then calls the existing vmemmap_verify() helper.
>> 
>> Architectures that need special handling can continue to override the
>> weak symbols; everyone else gets the standard version for free.
>> 
>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>> ---
>> mm/sparse-vmemmap.c | 7 ++++++-
>> 1 file changed, 6 insertions(+), 1 deletion(-)
>> 
>> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
>> index 6eadb9d116e4..1eb990610d50 100644
>> --- a/mm/sparse-vmemmap.c
>> +++ b/mm/sparse-vmemmap.c
>> @@ -391,12 +391,17 @@ int __meminit vmemmap_populate_hvo(unsigned long addr, unsigned long end,
>> void __weak __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
>>       unsigned long addr, unsigned long next)
>> {
>> +	BUG_ON(!pmd_set_huge(pmd, virt_to_phys(p), PAGE_KERNEL));
> 
> Do we have to crash the kernel here?
> Wouldn't it be better to make vmemmap_set_pmd() return an error and make
> vmemmap_populate_hugepages() fall back to base pages in case
> vmemmap_set_pmd() fails?

Hi Mike,

Thanks for the review. Let me explain my original thought process here.

My assumption was that pmd_set_huge() for the kernel virtual address space
should rarely, if ever, fail in this context. Furthermore, the architectures
whose code this patch replaces (e.g., arm64 and riscv) either ignore the
return value of pmd_set_huge() entirely or lack any graceful fallback
mechanism anyway.

So, to keep the initial generic implementation as simple as possible, I used
BUG_ON() as a strict assertion.

Do you think we really need to introduce a more flexible, fallback-capable
solution at this stage? Based on the current architecture implementations, it
might not be strictly necessary right now. We could keep it simple and add the
error handling/fallback logic in the future if more architectures start using
this generic code and actually require error handling.

However, I am completely open to your suggestion. If you feel it's better to be
proactive and make the generic vmemmap_set_pmd() return an error code, allowing
vmemmap_populate_hugepages() to gracefully fall back to base pages right from
the start, I totally agree and will be happy to update it in v3.

Please let me know your thoughts.

Thanks,
Muchun

> 
>> }
>> 
>> int __weak __meminit vmemmap_check_pmd(pmd_t *pmd, int node,
>>        unsigned long addr, unsigned long next)
>> {
>> -	return 0;
>> +	if (!pmd_leaf(pmdp_get(pmd)))
>> +		return 0;
>> +	vmemmap_verify((pte_t *)pmd, node, addr, next);
>> +
>> +	return 1;
>> }
>> 
>> int __meminit vmemmap_populate_hugepages(unsigned long start, unsigned long end,
>> -- 
>> 2.20.1
>> 
> 
> -- 
> Sincerely yours,
> Mike.





Thread overview: 8+ messages
2026-04-04 12:20 [PATCH v2 0/5] " Muchun Song
2026-04-04 12:20 ` [PATCH v2 1/5] " Muchun Song
2026-04-05  7:07   ` Mike Rapoport
2026-04-05 14:07     ` Muchun Song [this message]
2026-04-04 12:20 ` [PATCH v2 2/5] arm64/mm: drop vmemmap_pmd helpers and use generic code Muchun Song
2026-04-04 12:20 ` [PATCH v2 3/5] riscv/mm: " Muchun Song
2026-04-04 12:20 ` [PATCH v2 4/5] loongarch/mm: drop vmemmap_check_pmd helper " Muchun Song
2026-04-04 12:20 ` [PATCH v2 5/5] sparc/mm: " Muchun Song
