* Re: [PATCH v1] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps
2026-03-04 15:56 [PATCH v1] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps David Hildenbrand (Arm)
@ 2026-03-04 16:04 ` Zi Yan
2026-03-04 20:22 ` Andi Kleen
` (3 subsequent siblings)
4 siblings, 0 replies; 9+ messages in thread
From: Zi Yan @ 2026-03-04 16:04 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: linux-kernel, linux-fsdevel, linux-doc, linux-mm, Andrew Morton,
Lorenzo Stoakes, Baolin Wang, Liam R . Howlett, Nico Pache,
Dev Jain, Barry Song, Lance Yang, Jonathan Corbet, Shuah Khan,
Usama Arif, Andi Kleen
On 4 Mar 2026, at 10:56, David Hildenbrand (Arm) wrote:
> There was recently some confusion around THPs and the interaction with
> KernelPageSize / MMUPageSize. Historically, these entries always
> correspond to the smallest size we could encounter, not any current
> usage of transparent huge pages or larger sizes used by the MMU.
>
> Ever since we added THP support many, many years ago, these entries
> would keep reporting the smallest (fallback) granularity in a VMA.
>
> For this reason, they default to PAGE_SIZE for all VMAs except for
> VMAs where we have the guarantee that the system and the MMU will
> always use larger page sizes. hugetlb, for example, exposes a custom
> vm_ops->pagesize callback to handle that. Similarly, dax/device
> exposes a custom vm_ops->pagesize callback and provides similar
> guarantees.
>
> Let's clarify the historical meaning of KernelPageSize / MMUPageSize,
> and point at "AnonHugePages", "ShmemPmdMapped" and "FilePmdMapped"
> regarding PMD entries.
>
> While at it, document "FilePmdMapped", clarify what the "AnonHugePages"
> and "ShmemPmdMapped" entries really mean, and make it clear that there
> are no other entries for other THP/folio sizes or mappings.
>
> Link: https://lore.kernel.org/all/20260225232708.87833-1-ak@linux.intel.com/
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: Liam R. Howlett <Liam.Howlett@oracle.com>
> Cc: Nico Pache <npache@redhat.com>
> Cc: Ryan Roberts <ryan.roberts@arm.com>
> Cc: Dev Jain <dev.jain@arm.com>
> Cc: Barry Song <baohua@kernel.org>
> Cc: Lance Yang <lance.yang@linux.dev>
> Cc: Jonathan Corbet <corbet@lwn.net>
> Cc: Shuah Khan <skhan@linuxfoundation.org>
> Cc: Usama Arif <usamaarif642@gmail.com>
> Cc: Andi Kleen <ak@linux.intel.com>
> Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
> ---
> Documentation/filesystems/proc.rst | 37 ++++++++++++++++++++++--------
> 1 file changed, 27 insertions(+), 10 deletions(-)
>
LGTM.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Best Regards,
Yan, Zi
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH v1] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps
2026-03-04 15:56 [PATCH v1] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps David Hildenbrand (Arm)
2026-03-04 16:04 ` Zi Yan
@ 2026-03-04 20:22 ` Andi Kleen
2026-03-05 8:45 ` David Hildenbrand (Arm)
2026-03-05 3:21 ` Lance Yang
` (2 subsequent siblings)
4 siblings, 1 reply; 9+ messages in thread
From: Andi Kleen @ 2026-03-04 20:22 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: linux-kernel, linux-fsdevel, linux-doc, linux-mm, Andrew Morton,
Lorenzo Stoakes, Zi Yan, Baolin Wang, Liam R . Howlett,
Nico Pache, Dev Jain, Barry Song, Lance Yang, Jonathan Corbet,
Shuah Khan, Usama Arif
On Wed, Mar 04, 2026 at 04:56:36PM +0100, David Hildenbrand (Arm) wrote:
> There was recently some confusion around THPs and the interaction with
> KernelPageSize / MMUPageSize. Historically, these entries always
> correspond to the smallest size we could encounter, not any current
> usage of transparent huge pages or larger sizes used by the MMU.
It still seems like a bug to me, only documented now, but it seems
I'm in the minority on that.
But anyway, if you change this file you should probably remove
the duplicated KernelPageSize/MMUPageSize entries in the example
too. That tripped me up last time.
-Andi
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH v1] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps
2026-03-04 20:22 ` Andi Kleen
@ 2026-03-05 8:45 ` David Hildenbrand (Arm)
0 siblings, 0 replies; 9+ messages in thread
From: David Hildenbrand (Arm) @ 2026-03-05 8:45 UTC (permalink / raw)
To: Andi Kleen
Cc: linux-kernel, linux-fsdevel, linux-doc, linux-mm, Andrew Morton,
Lorenzo Stoakes, Zi Yan, Baolin Wang, Liam R . Howlett,
Nico Pache, Dev Jain, Barry Song, Lance Yang, Jonathan Corbet,
Shuah Khan, Usama Arif
On 3/4/26 21:22, Andi Kleen wrote:
> On Wed, Mar 04, 2026 at 04:56:36PM +0100, David Hildenbrand (Arm) wrote:
>> There was recently some confusion around THPs and the interaction with
>> KernelPageSize / MMUPageSize. Historically, these entries always
>> correspond to the smallest size we could encounter, not any current
>> usage of transparent huge pages or larger sizes used by the MMU.
>
> It still seems like a bug to me, only documented now, but seems
> I'm in the minority on that.
I'd agree that something smarter and more future-proof should have been
done, if THPs hadn't been added (checks notes) 15 years ago and the
behavior and meaning of KernelPageSize / MMUPageSize weren't already
set in stone for a long time.
People decided to add things like "AnonHugePages" (which I also detest,
but here we are) instead.
I wish we had never exposed MMUPageSize (in 2009) to handle some
ppc oddity.
While digging through some history, I found earlier conclusions
(around 10 years ago) that MMUPageSize [1] is not a good fit; people
even tried exposing actual amounts (Ptes@4kB, Ptes@2MB). I don't think
that's the right approach either, for various reasons already raised
(e.g., it doesn't really tell you the full TLB/MMU story).
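FWIW, the current semantics are easy to observe from userspace. The sketch
below (illustrative only, not from the patch; just the documented smaps field
names are real) dumps KernelPageSize / MMUPageSize per VMA, which stay at the
fallback granularity regardless of THP usage:

```python
# Illustrative sketch: dump KernelPageSize/MMUPageSize per VMA from
# /proc/self/smaps. Only the field names are kernel ABI; the parsing
# helper itself is made up for this example.

def smaps_page_sizes(path="/proc/self/smaps"):
    """Yield (vma_range, kernel_kb, mmu_kb) for each VMA in an smaps file."""
    header, kernel, mmu = None, None, None
    with open(path) as f:
        for line in f:
            first = line.split()[0]
            if "-" in first and line[0].isalnum():
                # A "start-end perms offset ..." VMA header line starts a
                # new entry; flush the previous one.
                if header is not None:
                    yield header, kernel, mmu
                header, kernel, mmu = first, None, None
            elif line.startswith("KernelPageSize:"):
                kernel = int(line.split()[1])  # value is reported in kB
            elif line.startswith("MMUPageSize:"):
                mmu = int(line.split()[1])
    if header is not None:
        yield header, kernel, mmu

if __name__ == "__main__":
    for vma, kernel_kb, mmu_kb in smaps_page_sizes():
        print(f"{vma}: KernelPageSize={kernel_kb} kB, MMUPageSize={mmu_kb} kB")
```

On a typical 4k x86-64 system every VMA except hugetlb/dax mappings reports
4 kB for both, even while THPs are in use.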
>
> But anyway, if you change this file you should probably remove
> the duplicated KernelPageSize/MMUPageSize entries in the example
> too. That tripped me up last time.
Good point. I also discovered that the PPC64 thing is still in place
(maybe it's dead code, but at least code-wise, it's not a thing of the
past).
[1] https://lkml.org/lkml/2016/11/29/882
--
Cheers,
David
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH v1] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps
2026-03-04 15:56 [PATCH v1] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps David Hildenbrand (Arm)
2026-03-04 16:04 ` Zi Yan
2026-03-04 20:22 ` Andi Kleen
@ 2026-03-05 3:21 ` Lance Yang
2026-03-05 9:03 ` Vlastimil Babka
2026-03-05 10:46 ` Lorenzo Stoakes (Oracle)
4 siblings, 0 replies; 9+ messages in thread
From: Lance Yang @ 2026-03-05 3:21 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: linux-fsdevel, linux-doc, linux-mm, Andrew Morton,
Lorenzo Stoakes, Zi Yan, Baolin Wang, Liam R . Howlett,
Nico Pache, Dev Jain, Barry Song, Jonathan Corbet, linux-kernel,
Shuah Khan, Usama Arif, Andi Kleen
On 2026/3/4 23:56, David Hildenbrand (Arm) wrote:
Makes sense to me. Feel free to add:
Reviewed-by: Lance Yang <lance.yang@linux.dev>
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH v1] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps
2026-03-04 15:56 [PATCH v1] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps David Hildenbrand (Arm)
` (2 preceding siblings ...)
2026-03-05 3:21 ` Lance Yang
@ 2026-03-05 9:03 ` Vlastimil Babka
2026-03-05 10:46 ` Lorenzo Stoakes (Oracle)
4 siblings, 0 replies; 9+ messages in thread
From: Vlastimil Babka @ 2026-03-05 9:03 UTC (permalink / raw)
To: David Hildenbrand (Arm), linux-kernel
Cc: linux-fsdevel, linux-doc, linux-mm, Andrew Morton,
Lorenzo Stoakes, Zi Yan, Baolin Wang, Liam R . Howlett,
Nico Pache, Dev Jain, Barry Song, Lance Yang, Jonathan Corbet,
Shuah Khan, Usama Arif, Andi Kleen
On 3/4/26 16:56, David Hildenbrand (Arm) wrote:
Acked-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
Thanks.
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH v1] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps
2026-03-04 15:56 [PATCH v1] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps David Hildenbrand (Arm)
` (3 preceding siblings ...)
2026-03-05 9:03 ` Vlastimil Babka
@ 2026-03-05 10:46 ` Lorenzo Stoakes (Oracle)
2026-03-05 13:32 ` David Hildenbrand (Arm)
4 siblings, 1 reply; 9+ messages in thread
From: Lorenzo Stoakes (Oracle) @ 2026-03-05 10:46 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: linux-kernel, linux-fsdevel, linux-doc, linux-mm, Andrew Morton,
Lorenzo Stoakes, Zi Yan, Baolin Wang, Liam R . Howlett,
Nico Pache, Dev Jain, Barry Song, Lance Yang, Jonathan Corbet,
Shuah Khan, Usama Arif, Andi Kleen
On Wed, Mar 04, 2026 at 04:56:36PM +0100, David Hildenbrand (Arm) wrote:
Overall this is great; various nits and comments below so we can tweak it.
Cheers, Lorenzo
> ---
> Documentation/filesystems/proc.rst | 37 ++++++++++++++++++++++--------
> 1 file changed, 27 insertions(+), 10 deletions(-)
>
> diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst
> index b0c0d1b45b99..0f67e47528fc 100644
> --- a/Documentation/filesystems/proc.rst
> +++ b/Documentation/filesystems/proc.rst
> @@ -464,6 +464,7 @@ Memory Area, or VMA) there is a series of lines such as the following::
> KSM: 0 kB
> LazyFree: 0 kB
> AnonHugePages: 0 kB
> + FilePmdMapped: 0 kB
> ShmemPmdMapped: 0 kB
> Shared_Hugetlb: 0 kB
> Private_Hugetlb: 0 kB
> @@ -477,13 +478,25 @@ Memory Area, or VMA) there is a series of lines such as the following::
>
> The first of these lines shows the same information as is displayed for
> the mapping in /proc/PID/maps. Following lines show the size of the
> -mapping (size); the size of each page allocated when backing a VMA
> -(KernelPageSize), which is usually the same as the size in the page table
> -entries; the page size used by the MMU when backing a VMA (in most cases,
> -the same as KernelPageSize); the amount of the mapping that is currently
> -resident in RAM (RSS); the process's proportional share of this mapping
> -(PSS); and the number of clean and dirty shared and private pages in the
> -mapping.
> +mapping (size); the smallest possible page size allocated when
> +backing a VMA (KernelPageSize), which is the granularity in which VMA
> +modifications can be performed; the smallest possible page size that could
> +be used by the MMU (MMUPageSize) when backing a VMA; the amount of the
Is it worth retaining 'in most cases the same as KernelPageSize' here?
Ah wait, you dedicate a whole paragraph after this to that :)
> +mapping that is currently resident in RAM (RSS); the process's proportional
> +share of this mapping (PSS); and the number of clean and dirty shared and
> +private pages in the mapping.
> +
> +Historically, the "KernelPageSize" always corresponds to the "MMUPageSize",
> +except when a larger kernel page size is emulated on a system with a smaller
NIT: is -> was, as historically implies past tense.
But it's maybe better to say:
+Historically, the "KernelPageSize" has always corresponded to the "MMUPageSize",
And:
+except when a larger kernel page size is being emulated on a system with a smaller
> +page size used by the MMU, which was the case for PPC64 in the past.
> +Further, "KernelPageSize" and "MMUPageSize" always correspond to the
NIT: Further -> Furthermore
> +smallest possible granularity (fallback) that could be encountered in a
could be -> can be
Since we are really talking about the current situation, even if this is,
in effect, a legacy thing.
> +VMA throughout its lifetime. These values are not affected by any current
> +transparent grouping of pages by Linux (Transparent Huge Pages) or any
'transparent grouping of pages' reads a bit weirdly.
Maybe simplify to:
+These values are not affected by Transparent Huge Pages being in effect, or any...
> +current usage of larger MMU page sizes (either through architectural
NIT: current usage -> usage
> +huge-page mappings or other transparent groupings done by the MMU).
Again I think 'transparent groupings' is a bit unclear. Perhaps instead:
+huge-page mappings or other explicit or implicit coalescing of virtual ranges
+performed by the MMU).
?
> +"AnonHugePages", "ShmemPmdMapped" and "FilePmdMapped" provide insight into
> +the usage of some architectural huge-page mappings.
Is 'some' necessary here? Seems to make it a bit vague.
>
> The "proportional set size" (PSS) of a process is the count of pages it has
> in memory, where each page is divided by the number of processes sharing it.
> @@ -528,10 +541,14 @@ pressure if the memory is clean. Please note that the printed value might
> be lower than the real value due to optimizations used in the current
> implementation. If this is not desirable please file a bug report.
>
> -"AnonHugePages" shows the amount of memory backed by transparent hugepage.
> +"AnonHugePages", "ShmemPmdMapped" and "FilePmdMapped" show the amount of
> +memory backed by transparent hugepages that are currently mapped through
> +architectural huge-page mappings (PMD). "AnonHugePages" corresponds to memory
'mapped through architectural huge-page mappings (PMD)' reads a bit strangely
to me.
Perhaps 'mapped by transparent huge pages at a PMD page table level' instead?
> +that does not belong to a file, "ShmemPmdMapped" to shared memory (shmem/tmpfs)
> +and "FilePmdMapped" to file-backed memory (excluding shmem/tmpfs).
>
> -"ShmemPmdMapped" shows the amount of shared (shmem/tmpfs) memory backed by
> -huge pages.
> +There are no dedicated entries for transparent huge pages (or similar concepts)
> +that are not mapped through architectural huge-page mappings (PMD).
similarly, perhaps better as 'are not mapped by transparent huge pages at a PMD
page table level'?
>
> "Shared_Hugetlb" and "Private_Hugetlb" show the amounts of memory backed by
> hugetlbfs page which is *not* counted in "RSS" or "PSS" field for historical
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH v1] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps
2026-03-05 10:46 ` Lorenzo Stoakes (Oracle)
@ 2026-03-05 13:32 ` David Hildenbrand (Arm)
2026-03-05 14:57 ` Lorenzo Stoakes (Oracle)
0 siblings, 1 reply; 9+ messages in thread
From: David Hildenbrand (Arm) @ 2026-03-05 13:32 UTC (permalink / raw)
To: Lorenzo Stoakes (Oracle)
Cc: linux-kernel, linux-fsdevel, linux-doc, linux-mm, Andrew Morton,
Lorenzo Stoakes, Zi Yan, Baolin Wang, Liam R . Howlett,
Nico Pache, Dev Jain, Barry Song, Lance Yang, Jonathan Corbet,
Shuah Khan, Usama Arif, Andi Kleen
>
> Ah wait, you dedicate a whole paragraph after this to that :)
Correct :)
>
>> +mapping that is currently resident in RAM (RSS); the process's proportional
>> +share of this mapping (PSS); and the number of clean and dirty shared and
>> +private pages in the mapping.
>> +
>> +Historically, the "KernelPageSize" always corresponds to the "MMUPageSize",
>> +except when a larger kernel page size is emulated on a system with a smaller
>
> NIT: is -> was, as historically implies past tense.
>
> But it's maybe better to say:
>
> +Historically, the "KernelPageSize" has always corresponded to the "MMUPageSize",
>
> And:
>
> +except when a larger kernel page size is being emulated on a system with a smaller
>
Given that the PPC64 thingy still exists in the tree, I'll probably do:
"KernelPageSize" always corresponds to "MMUPageSize", except when a
larger kernel page size is emulated on a system with a smaller page size
used by the MMU, which is the case for some PPC64 setups with hugetlb.
>> +page size used by the MMU, which was the case for PPC64 in the past.
>> +Further, "KernelPageSize" and "MMUPageSize" always correspond to the
>
> NIT: Further -> Furthermore
>
Helpful.
>> +smallest possible granularity (fallback) that could be encountered in a
>
> could be -> can be
>
> Since we are really talking about the current situation, even if this is,
> in effect, a legacy thing.
>
>> +VMA throughout its lifetime. These values are not affected by any current
>> +transparent grouping of pages by Linux (Transparent Huge Pages) or any
>
> 'transparent grouping of pages' reads a bit weirdly.
>
> Maybe simplify to:
>
> +These values are not affected by Transparent Huge Pages being in effect, or any...
Works for me.
>
>> +current usage of larger MMU page sizes (either through architectural
>
> NIT: current usage -> usage
Ack.
>
>> +huge-page mappings or other transparent groupings done by the MMU).
>
> Again I think 'transparent groupings' is a bit unclear. Perhaps instead:
>
> +huge-page mappings or other explicit or implicit coalescing of virtual ranges
> +performed by the MMU).
I'd assume the educated reader does not know what "explicit/implicit
coalescing" even means, but it works for me. :)
>
> ?
>
>> +"AnonHugePages", "ShmemPmdMapped" and "FilePmdMapped" provide insight into
>> +the usage of some architectural huge-page mappings.
>
> Is 'some' necessary here? Seems to make it a bit vague.
I had PUDs in mind. I can just call it
"PMD-level architectural ..."
>
>>
>> The "proportional set size" (PSS) of a process is the count of pages it has
>> in memory, where each page is divided by the number of processes sharing it.
>> @@ -528,10 +541,14 @@ pressure if the memory is clean. Please note that the printed value might
>> be lower than the real value due to optimizations used in the current
>> implementation. If this is not desirable please file a bug report.
>>
>> -"AnonHugePages" shows the amount of memory backed by transparent hugepage.
>> +"AnonHugePages", "ShmemPmdMapped" and "FilePmdMapped" show the amount of
>> +memory backed by transparent hugepages that are currently mapped through
>> +architectural huge-page mappings (PMD). "AnonHugePages" corresponds to memory
>
> 'mapped through architectural huge-page mappings (PMD)' reads a bit strangely
> to me.
>
> Perhaps 'mapped by transparent huge pages at a PMD page table level' instead?
>
I'll simplify to
"mapped by architectural huge-page mappings at the PMD level"
>> +that does not belong to a file, "ShmemPmdMapped" to shared memory (shmem/tmpfs)
>> +and "FilePmdMapped" to file-backed memory (excluding shmem/tmpfs).
>>
>> -"ShmemPmdMapped" shows the amount of shared (shmem/tmpfs) memory backed by
>> -huge pages.
>> +There are no dedicated entries for transparent huge pages (or similar concepts)
>> +that are not mapped through architectural huge-page mappings (PMD).
>
> similarly, perhaps better as 'are not mapped by transparent huge pages at a PMD
> page table level'?
I'll similarly call it "mapped by architectural huge-page mappings at
the PMD level"
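FWIW, for anyone who wants the per-process totals those three counters
describe, a quick illustrative sketch (not part of the patch; only the smaps
field names are real, the helper is made up):

```python
# Sum the PMD-level THP counters over every VMA of a process. These are
# exactly the three fields the documentation groups together; mappings by
# smaller THP/mTHP folio sizes have no dedicated counter and won't show here.

PMD_FIELDS = ("AnonHugePages", "ShmemPmdMapped", "FilePmdMapped")

def pmd_mapped_totals(pid="self"):
    """Return {field: total_kB} summed over /proc/<pid>/smaps."""
    totals = dict.fromkeys(PMD_FIELDS, 0)
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            key, _, rest = line.partition(":")
            if key in totals:
                totals[key] += int(rest.split()[0])  # "   2048 kB" -> 2048
    return totals

if __name__ == "__main__":
    for field, kb in pmd_mapped_totals().items():
        print(f"{field}: {kb} kB")
```

(/proc/PID/smaps_rollup reports the same kind of whole-process sums directly.)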
Thanks a bunch!
--
Cheers,
David
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH v1] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps
2026-03-05 13:32 ` David Hildenbrand (Arm)
@ 2026-03-05 14:57 ` Lorenzo Stoakes (Oracle)
0 siblings, 0 replies; 9+ messages in thread
From: Lorenzo Stoakes (Oracle) @ 2026-03-05 14:57 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: linux-kernel, linux-fsdevel, linux-doc, linux-mm, Andrew Morton,
Lorenzo Stoakes, Zi Yan, Baolin Wang, Liam R . Howlett,
Nico Pache, Dev Jain, Barry Song, Lance Yang, Jonathan Corbet,
Shuah Khan, Usama Arif, Andi Kleen
On Thu, Mar 05, 2026 at 02:32:49PM +0100, David Hildenbrand (Arm) wrote:
Thanks on all!
^ permalink raw reply [flat|nested] 9+ messages in thread