* [PATCH v1] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps
@ 2026-03-04 15:56 David Hildenbrand (Arm)
2026-03-04 16:04 ` Zi Yan
0 siblings, 1 reply; 2+ messages in thread
From: David Hildenbrand (Arm) @ 2026-03-04 15:56 UTC (permalink / raw)
To: linux-kernel
Cc: linux-fsdevel, linux-doc, linux-mm, David Hildenbrand (Arm),
Andrew Morton, Lorenzo Stoakes, Zi Yan, Baolin Wang,
Liam R. Howlett, Nico Pache, Dev Jain, Barry Song, Lance Yang,
Jonathan Corbet, Shuah Khan, Usama Arif, Andi Kleen
There was recently some confusion around THPs and the interaction with
KernelPageSize / MMUPageSize. Historically, these entries have always
corresponded to the smallest size we could encounter, not any current
usage of transparent huge pages or larger sizes used by the MMU.
Ever since we added THP support many, many years ago, these entries
would keep reporting the smallest (fallback) granularity in a VMA.
For this reason, they default to PAGE_SIZE for all VMAs except for
VMAs where we have the guarantee that the system and the MMU will
always use larger page sizes. hugetlb, for example, exposes a custom
vm_ops->pagesize callback to handle that; dax/device likewise
exposes a custom vm_ops->pagesize callback with a similar
guarantee.
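
For illustration (not part of the patch itself), this fallback behavior can
be cross-checked from a Linux shell; the commands below only assume /proc is
mounted and inspect the shell's own mappings:

```shell
# Base page size the kernel falls back to (e.g. 4096 bytes).
getconf PAGESIZE

# For ordinary (non-hugetlb, non-dax) VMAs, KernelPageSize and
# MMUPageSize report exactly this fallback size in kB, regardless
# of any transparent huge pages currently in use in the VMA.
grep -m1 -E '^(KernelPageSize|MMUPageSize):' /proc/self/smaps
```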
Let's clarify the historical meaning of KernelPageSize / MMUPageSize,
and point at "AnonHugePages", "ShmemPmdMapped" and "FilePmdMapped"
regarding PMD entries.
While at it, document "FilePmdMapped", clarify what the "AnonHugePages"
and "ShmemPmdMapped" entries really mean, and make it clear that there
are no other entries for other THP/folio sizes or mappings.
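
To make the contrast concrete, here is a small example (again not part of
the patch) that tallies the relevant smaps fields for the current shell;
actual values are system-dependent:

```shell
# Tally the page-size and PMD-mapping fields across all VMAs.
# KernelPageSize/MMUPageSize stay at the base page size even for
# VMAs where AnonHugePages/ShmemPmdMapped/FilePmdMapped are non-zero.
grep -E '^(KernelPageSize|MMUPageSize|AnonHugePages|ShmemPmdMapped|FilePmdMapped):' \
    /proc/self/smaps | sort | uniq -c
```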
Link: https://lore.kernel.org/all/20260225232708.87833-1-ak@linux.intel.com/
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Nico Pache <npache@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Lance Yang <lance.yang@linux.dev>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Shuah Khan <skhan@linuxfoundation.org>
Cc: Usama Arif <usamaarif642@gmail.com>
Cc: Andi Kleen <ak@linux.intel.com>
Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
---
Documentation/filesystems/proc.rst | 37 ++++++++++++++++++++++--------
1 file changed, 27 insertions(+), 10 deletions(-)
diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst
index b0c0d1b45b99..0f67e47528fc 100644
--- a/Documentation/filesystems/proc.rst
+++ b/Documentation/filesystems/proc.rst
@@ -464,6 +464,7 @@ Memory Area, or VMA) there is a series of lines such as the following::
KSM: 0 kB
LazyFree: 0 kB
AnonHugePages: 0 kB
+ FilePmdMapped: 0 kB
ShmemPmdMapped: 0 kB
Shared_Hugetlb: 0 kB
Private_Hugetlb: 0 kB
@@ -477,13 +478,25 @@ Memory Area, or VMA) there is a series of lines such as the following::
The first of these lines shows the same information as is displayed for
the mapping in /proc/PID/maps. Following lines show the size of the
-mapping (size); the size of each page allocated when backing a VMA
-(KernelPageSize), which is usually the same as the size in the page table
-entries; the page size used by the MMU when backing a VMA (in most cases,
-the same as KernelPageSize); the amount of the mapping that is currently
-resident in RAM (RSS); the process's proportional share of this mapping
-(PSS); and the number of clean and dirty shared and private pages in the
-mapping.
+mapping (size); the smallest possible page size allocated when
+backing a VMA (KernelPageSize), which is the granularity at which VMA
+modifications can be performed; the smallest possible page size that could
+be used by the MMU (MMUPageSize) when backing a VMA; the amount of the
+mapping that is currently resident in RAM (RSS); the process's proportional
+share of this mapping (PSS); and the number of clean and dirty shared and
+private pages in the mapping.
+
+Historically, "KernelPageSize" has always corresponded to "MMUPageSize",
+except when a larger kernel page size was emulated on a system with a
+smaller page size used by the MMU, as was the case for PPC64 in the past.
+Further, "KernelPageSize" and "MMUPageSize" always correspond to the
+smallest possible granularity (fallback) that could be encountered in a
+VMA throughout its lifetime. These values are not affected by any current
+transparent grouping of pages by Linux (Transparent Huge Pages) or any
+current usage of larger MMU page sizes (either through architectural
+huge-page mappings or other transparent groupings done by the MMU).
+"AnonHugePages", "ShmemPmdMapped" and "FilePmdMapped" provide insight into
+the usage of some architectural huge-page mappings.
The "proportional set size" (PSS) of a process is the count of pages it has
in memory, where each page is divided by the number of processes sharing it.
@@ -528,10 +541,14 @@ pressure if the memory is clean. Please note that the printed value might
be lower than the real value due to optimizations used in the current
implementation. If this is not desirable please file a bug report.
-"AnonHugePages" shows the amount of memory backed by transparent hugepage.
+"AnonHugePages", "ShmemPmdMapped" and "FilePmdMapped" show the amount of
+memory backed by transparent hugepages that are currently mapped through
+architectural huge-page mappings (PMD). "AnonHugePages" corresponds to memory
+that does not belong to a file, "ShmemPmdMapped" to shared memory (shmem/tmpfs)
+and "FilePmdMapped" to file-backed memory (excluding shmem/tmpfs).
-"ShmemPmdMapped" shows the amount of shared (shmem/tmpfs) memory backed by
-huge pages.
+There are no dedicated entries for transparent huge pages (or similar concepts)
+that are not mapped through architectural huge-page mappings (PMD).
"Shared_Hugetlb" and "Private_Hugetlb" show the amounts of memory backed by
hugetlbfs page which is *not* counted in "RSS" or "PSS" field for historical
--
2.43.0
* Re: [PATCH v1] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps
2026-03-04 15:56 [PATCH v1] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps David Hildenbrand (Arm)
@ 2026-03-04 16:04 ` Zi Yan
0 siblings, 0 replies; 2+ messages in thread
From: Zi Yan @ 2026-03-04 16:04 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: linux-kernel, linux-fsdevel, linux-doc, linux-mm, Andrew Morton,
Lorenzo Stoakes, Baolin Wang, Liam R. Howlett, Nico Pache,
Dev Jain, Barry Song, Lance Yang, Jonathan Corbet, Shuah Khan,
Usama Arif, Andi Kleen
On 4 Mar 2026, at 10:56, David Hildenbrand (Arm) wrote:
> There was recently some confusion around THPs and the interaction with
> KernelPageSize / MMUPageSize. Historically, these entries have always
> corresponded to the smallest size we could encounter, not any current
> usage of transparent huge pages or larger sizes used by the MMU.
>
> Ever since we added THP support many, many years ago, these entries
> would keep reporting the smallest (fallback) granularity in a VMA.
>
> For this reason, they default to PAGE_SIZE for all VMAs except for
> VMAs where we have the guarantee that the system and the MMU will
> always use larger page sizes. hugetlb, for example, exposes a custom
> vm_ops->pagesize callback to handle that; dax/device likewise
> exposes a custom vm_ops->pagesize callback with a similar
> guarantee.
>
> Let's clarify the historical meaning of KernelPageSize / MMUPageSize,
> and point at "AnonHugePages", "ShmemPmdMapped" and "FilePmdMapped"
> regarding PMD entries.
>
> While at it, document "FilePmdMapped", clarify what the "AnonHugePages"
> and "ShmemPmdMapped" entries really mean, and make it clear that there
> are no other entries for other THP/folio sizes or mappings.
>
> Link: https://lore.kernel.org/all/20260225232708.87833-1-ak@linux.intel.com/
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: Liam R. Howlett <Liam.Howlett@oracle.com>
> Cc: Nico Pache <npache@redhat.com>
> Cc: Ryan Roberts <ryan.roberts@arm.com>
> Cc: Dev Jain <dev.jain@arm.com>
> Cc: Barry Song <baohua@kernel.org>
> Cc: Lance Yang <lance.yang@linux.dev>
> Cc: Jonathan Corbet <corbet@lwn.net>
> Cc: Shuah Khan <skhan@linuxfoundation.org>
> Cc: Usama Arif <usamaarif642@gmail.com>
> Cc: Andi Kleen <ak@linux.intel.com>
> Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
> ---
> Documentation/filesystems/proc.rst | 37 ++++++++++++++++++++++--------
> 1 file changed, 27 insertions(+), 10 deletions(-)
>
LGTM.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Best Regards,
Yan, Zi