* [PATCH v2] mm/sparse: fix comment for section map alignment
@ 2026-04-02 10:23 Muchun Song
2026-04-02 10:33 ` David Hildenbrand (Arm)
0 siblings, 1 reply; 2+ messages in thread
From: Muchun Song @ 2026-04-02 10:23 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand
Cc: Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Petr Tesarik, linux-mm,
linux-kernel, Muchun Song, muchun.song
The comment in mmzone.h currently details exhaustive per-architecture
bit-width lists and explains alignment using min(PAGE_SHIFT,
PFN_SECTION_SHIFT). Such details risk going stale and being
overlooked when architectures change.
We always expect a single section to cover full pages. Therefore,
we can safely assume that PFN_SECTION_SHIFT is large enough to
accommodate SECTION_MAP_LAST_BIT. We use BUILD_BUG_ON() to ensure this.
Update the comment to accurately reflect this consensus, making it
clear that we rely on a single section covering full pages.
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
v1 -> v2:
- Drop the actual BUILD_BUG_ON logic modification (keeping the simple
comparison) and only simplify/clarify the mmzone.h comment.
- Add explanation explicitly noting that a single section is always
expected to cover full pages, per discussions with David Hildenbrand
and Andrew Morton.
---
include/linux/mmzone.h | 25 ++++++++++---------------
1 file changed, 10 insertions(+), 15 deletions(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 7de42be81d4b..a071f1a0e242 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -2056,21 +2056,16 @@ static inline struct mem_section *__nr_to_section(unsigned long nr)
extern size_t mem_section_usage_size(void);
/*
- * We use the lower bits of the mem_map pointer to store
- * a little bit of information. The pointer is calculated
- * as mem_map - section_nr_to_pfn(pnum). The result is
- * aligned to the minimum alignment of the two values:
- * 1. All mem_map arrays are page-aligned.
- * 2. section_nr_to_pfn() always clears PFN_SECTION_SHIFT
- * lowest bits. PFN_SECTION_SHIFT is arch-specific
- * (equal SECTION_SIZE_BITS - PAGE_SHIFT), and the
- * worst combination is powerpc with 256k pages,
- * which results in PFN_SECTION_SHIFT equal 6.
- * To sum it up, at least 6 bits are available on all architectures.
- * However, we can exceed 6 bits on some other architectures except
- * powerpc (e.g. 15 bits are available on x86_64, 13 bits are available
- * with the worst case of 64K pages on arm64) if we make sure the
- * exceeded bit is not applicable to powerpc.
+ * We use the lower bits of the mem_map pointer to store a little bit of
+ * information. The pointer is calculated as mem_map - section_nr_to_pfn().
+ * The result is aligned to the minimum alignment of the two values:
+ *
+ * 1. All mem_map arrays are page-aligned.
+ * 2. section_nr_to_pfn() always clears PFN_SECTION_SHIFT lowest bits.
+ *
+ * We always expect a single section to cover full pages. Therefore,
+ * we can safely assume that PFN_SECTION_SHIFT is large enough to
+ * accommodate SECTION_MAP_LAST_BIT. We use BUILD_BUG_ON() to ensure this.
*/
enum {
SECTION_MARKED_PRESENT_BIT,
--
2.20.1
* Re: [PATCH v2] mm/sparse: fix comment for section map alignment
2026-04-02 10:23 [PATCH v2] mm/sparse: fix comment for section map alignment Muchun Song
@ 2026-04-02 10:33 ` David Hildenbrand (Arm)
0 siblings, 0 replies; 2+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-02 10:33 UTC (permalink / raw)
To: Muchun Song, Andrew Morton
Cc: Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Petr Tesarik, linux-mm,
linux-kernel, muchun.song
On 4/2/26 12:23, Muchun Song wrote:
> The comment in mmzone.h currently details exhaustive per-architecture
> bit-width lists and explains alignment using min(PAGE_SHIFT,
> PFN_SECTION_SHIFT). Such details risk going stale and being
> overlooked when architectures change.
>
> We always expect a single section to cover full pages. Therefore,
> we can safely assume that PFN_SECTION_SHIFT is large enough to
> accommodate SECTION_MAP_LAST_BIT. We use BUILD_BUG_ON() to ensure this.
>
> Update the comment to accurately reflect this consensus, making it
> clear that we rely on a single section covering full pages.
>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
> v1 -> v2:
> - Drop the actual BUILD_BUG_ON logic modification (keeping the simple
> comparison) and only simplify/clarify the mmzone.h comment.
> - Add explanation explicitly noting that a single section is always
> expected to cover full pages, per discussions with David Hildenbrand
> and Andrew Morton.
> ---
> include/linux/mmzone.h | 25 ++++++++++---------------
> 1 file changed, 10 insertions(+), 15 deletions(-)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 7de42be81d4b..a071f1a0e242 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -2056,21 +2056,16 @@ static inline struct mem_section *__nr_to_section(unsigned long nr)
> extern size_t mem_section_usage_size(void);
>
> /*
> - * We use the lower bits of the mem_map pointer to store
> - * a little bit of information. The pointer is calculated
> - * as mem_map - section_nr_to_pfn(pnum). The result is
> - * aligned to the minimum alignment of the two values:
> - * 1. All mem_map arrays are page-aligned.
> - * 2. section_nr_to_pfn() always clears PFN_SECTION_SHIFT
> - * lowest bits. PFN_SECTION_SHIFT is arch-specific
> - * (equal SECTION_SIZE_BITS - PAGE_SHIFT), and the
> - * worst combination is powerpc with 256k pages,
> - * which results in PFN_SECTION_SHIFT equal 6.
> - * To sum it up, at least 6 bits are available on all architectures.
> - * However, we can exceed 6 bits on some other architectures except
> - * powerpc (e.g. 15 bits are available on x86_64, 13 bits are available
> - * with the worst case of 64K pages on arm64) if we make sure the
> - * exceeded bit is not applicable to powerpc.
> + * We use the lower bits of the mem_map pointer to store a little bit of
> + * information. The pointer is calculated as mem_map - section_nr_to_pfn().
> + * The result is aligned to the minimum alignment of the two values:
> + *
> + * 1. All mem_map arrays are page-aligned.
> + * 2. section_nr_to_pfn() always clears PFN_SECTION_SHIFT lowest bits.
> + *
> + * We always expect a single section to cover full pages. Therefore,
> + * we can safely assume that PFN_SECTION_SHIFT is large enough to
> + * accommodate SECTION_MAP_LAST_BIT. We use BUILD_BUG_ON() to ensure this.
> */
> enum {
> SECTION_MARKED_PRESENT_BIT,
Thanks!
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
--
Cheers,
David