* [PATCH v8] arm64: mm: Populate vmemmap at the page level if not section aligned
@ 2025-02-19 8:40 Zhenhua Huang
2025-03-03 10:01 ` David Hildenbrand
0 siblings, 1 reply; 3+ messages in thread
From: Zhenhua Huang @ 2025-02-19 8:40 UTC (permalink / raw)
To: anshuman.khandual, catalin.marinas, david
Cc: will, ardb, ryan.roberts, mark.rutland, joey.gouly, dave.hansen,
akpm, chenfeiyang, chenhuacai, linux-mm, linux-arm-kernel,
linux-kernel, quic_tingweiz, Zhenhua Huang, stable
On the arm64 platform with the 4K base page config, SECTION_SIZE_BITS is
set to 27, making one section 128M. The struct page area that vmemmap
maps for one section is then 2M.
Commit c1cc1552616d ("arm64: MMU initialisation") optimized vmemmap
population to the PMD section level, which was suitable initially since
the hotplug granule was always one section (128M). However,
commit ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug")
introduced a 2M (SUBSECTION_SIZE) hotplug granule, which broke the
existing arm64 assumptions.
The first problem is that if start or end is not aligned to a section
boundary, such as when a subsection is hot added, populating the entire
section is wasteful.
The next problem is that if we hotplug something spanning part of a
128 MiB section (a set of subsections, call it memblock1), then hotplug
something spanning another part of the same 128 MiB section (call it
memblock2), and subsequently unplug memblock1, vmemmap_free() will clear
the entire PMD entry that also backs memblock2, even though memblock2
is still active.
Hotplug and unplug sizes are assumed to be symmetric. Apply a fix
similar to x86-64: populate at the page level if start/end is not
aligned to a section boundary.
Cc: <stable@vger.kernel.org> # v5.4+
Fixes: ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug")
Signed-off-by: Zhenhua Huang <quic_zhenhuah@quicinc.com>
---
Hi Catalin and David,
Following our latest discussion, I've updated the patch for your review.
I also removed Catalin's review tag since I've made significant modifications.
arch/arm64/mm/mmu.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index b4df5bc5b1b8..de05ccf47f21 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1177,8 +1177,11 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 		struct vmem_altmap *altmap)
 {
 	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
+	/* [start, end] should be within one section */
+	WARN_ON(end - start > PAGES_PER_SECTION * sizeof(struct page));
 
-	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
+	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) ||
+		(end - start < PAGES_PER_SECTION * sizeof(struct page)))
 		return vmemmap_populate_basepages(start, end, node, altmap);
 	else
 		return vmemmap_populate_hugepages(start, end, node, altmap);
--
2.25.1
* Re: [PATCH v8] arm64: mm: Populate vmemmap at the page level if not section aligned
2025-02-19 8:40 [PATCH v8] arm64: mm: Populate vmemmap at the page level if not section aligned Zhenhua Huang
@ 2025-03-03 10:01 ` David Hildenbrand
2025-03-04 7:25 ` Zhenhua Huang
0 siblings, 1 reply; 3+ messages in thread
From: David Hildenbrand @ 2025-03-03 10:01 UTC (permalink / raw)
To: Zhenhua Huang, anshuman.khandual, catalin.marinas
Cc: will, ardb, ryan.roberts, mark.rutland, joey.gouly, dave.hansen,
akpm, chenfeiyang, chenhuacai, linux-mm, linux-arm-kernel,
linux-kernel, quic_tingweiz, stable
On 19.02.25 09:40, Zhenhua Huang wrote:
> On the arm64 platform with 4K base page config, SECTION_SIZE_BITS is set
> to 27, making one section 128M. The related page struct which vmemmap
> points to is 2M then.
> Commit c1cc1552616d ("arm64: MMU initialisation") optimizes the
> vmemmap to populate at the PMD section level which was suitable
> initially since hot plug granule is always one section(128M). However,
> commit ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug")
> introduced a 2M(SUBSECTION_SIZE) hot plug granule, which disrupted the
> existing arm64 assumptions.
>
> The first problem is that if start or end is not aligned to a section
> boundary, such as when a subsection is hot added, populating the entire
> section is wasteful.
>
> The next problem is if we hotplug something that spans part of 128 MiB
> section (subsections, let's call it memblock1), and then hotplug something
> that spans another part of a 128 MiB section(subsections, let's call it
> memblock2), and subsequently unplug memblock1, vmemmap_free() will clear
> the entire PMD entry which also supports memblock2 even though memblock2
> is still active.
>
> Assuming hotplug/unplug sizes are guaranteed to be symmetric. Do the
> fix similar to x86-64: populate to pages levels if start/end is not aligned
> with section boundary.
>
> Cc: <stable@vger.kernel.org> # v5.4+
> Fixes: ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug")
> Signed-off-by: Zhenhua Huang <quic_zhenhuah@quicinc.com>
> ---
> Hi Catalin and David,
> Following our latest discussion, I've updated the patch for your review.
> I also removed Catalin's review tag since I've made significant modifications.
> arch/arm64/mm/mmu.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index b4df5bc5b1b8..de05ccf47f21 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1177,8 +1177,11 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>  		struct vmem_altmap *altmap)
>  {
>  	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
> +	/* [start, end] should be within one section */
> +	WARN_ON(end - start > PAGES_PER_SECTION * sizeof(struct page));
>  
> -	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
> +	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) ||
> +		(end - start < PAGES_PER_SECTION * sizeof(struct page)))
Indentation should be

	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) ||
	    (end - start < PAGES_PER_SECTION * sizeof(struct page)))
Acked-by: David Hildenbrand <david@redhat.com>
Thanks!
--
Cheers,
David / dhildenb
* Re: [PATCH v8] arm64: mm: Populate vmemmap at the page level if not section aligned
2025-03-03 10:01 ` David Hildenbrand
@ 2025-03-04 7:25 ` Zhenhua Huang
0 siblings, 0 replies; 3+ messages in thread
From: Zhenhua Huang @ 2025-03-04 7:25 UTC (permalink / raw)
To: David Hildenbrand, anshuman.khandual, catalin.marinas
Cc: will, ardb, ryan.roberts, mark.rutland, joey.gouly, dave.hansen,
akpm, chenfeiyang, chenhuacai, linux-mm, linux-arm-kernel,
linux-kernel, quic_tingweiz, stable
On 2025/3/3 18:01, David Hildenbrand wrote:
> On 19.02.25 09:40, Zhenhua Huang wrote:
>> On the arm64 platform with 4K base page config, SECTION_SIZE_BITS is set
>> to 27, making one section 128M. The related page struct which vmemmap
>> points to is 2M then.
>> Commit c1cc1552616d ("arm64: MMU initialisation") optimizes the
>> vmemmap to populate at the PMD section level which was suitable
>> initially since hot plug granule is always one section(128M). However,
>> commit ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug")
>> introduced a 2M(SUBSECTION_SIZE) hot plug granule, which disrupted the
>> existing arm64 assumptions.
>>
>> The first problem is that if start or end is not aligned to a section
>> boundary, such as when a subsection is hot added, populating the entire
>> section is wasteful.
>>
>> The next problem is if we hotplug something that spans part of 128 MiB
>> section (subsections, let's call it memblock1), and then hotplug
>> something
>> that spans another part of a 128 MiB section(subsections, let's call it
>> memblock2), and subsequently unplug memblock1, vmemmap_free() will clear
>> the entire PMD entry which also supports memblock2 even though memblock2
>> is still active.
>>
>> Assuming hotplug/unplug sizes are guaranteed to be symmetric. Do the
>> fix similar to x86-64: populate to pages levels if start/end is not
>> aligned
>> with section boundary.
>>
>> Cc: <stable@vger.kernel.org> # v5.4+
>> Fixes: ba72b4c8cf60 ("mm/sparsemem: support sub-section hotplug")
>> Signed-off-by: Zhenhua Huang <quic_zhenhuah@quicinc.com>
>> ---
>> Hi Catalin and David,
>> Following our latest discussion, I've updated the patch for your review.
>> I also removed Catalin's review tag since I've made significant
>> modifications.
>> arch/arm64/mm/mmu.c | 5 ++++-
>> 1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index b4df5bc5b1b8..de05ccf47f21 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -1177,8 +1177,11 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>>  		struct vmem_altmap *altmap)
>>  {
>>  	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
>> +	/* [start, end] should be within one section */
>> +	WARN_ON(end - start > PAGES_PER_SECTION * sizeof(struct page));
>>  
>> -	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
>> +	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) ||
>> +		(end - start < PAGES_PER_SECTION * sizeof(struct page)))
>
> Indentation should be
> 
> 	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) ||
> 	    (end - start < PAGES_PER_SECTION * sizeof(struct page)))
>
Thanks, I will repost with the above fix and the WARN_ON_ONCE change you
preferred in v7.
>
> Acked-by: David Hildenbrand <david@redhat.com>
>
>
> Thanks!
>