linux-mm.kvack.org archive mirror
From: "Lorenzo Stoakes (Oracle)" <ljs@kernel.org>
To: "David Hildenbrand (Arm)" <david@kernel.org>
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	 linux-doc@vger.kernel.org, linux-mm@kvack.org,
	Andrew Morton <akpm@linux-foundation.org>,
	 Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
	Zi Yan <ziy@nvidia.com>,
	 Baolin Wang <baolin.wang@linux.alibaba.com>,
	"Liam R . Howlett" <Liam.Howlett@oracle.com>,
	 Nico Pache <npache@redhat.com>, Dev Jain <dev.jain@arm.com>,
	Barry Song <baohua@kernel.org>,
	 Lance Yang <lance.yang@linux.dev>,
	Jonathan Corbet <corbet@lwn.net>,
	 Shuah Khan <skhan@linuxfoundation.org>,
	Usama Arif <usamaarif642@gmail.com>,
	 Andi Kleen <ak@linux.intel.com>
Subject: Re: [PATCH v1] docs: filesystems: clarify KernelPageSize vs. MMUPageSize in smaps
Date: Thu, 5 Mar 2026 14:57:15 +0000	[thread overview]
Message-ID: <ce2f4343-8bc4-4edd-a922-dd71a09a34e3@lucifer.local> (raw)
In-Reply-To: <c5330f9e-db41-496b-b580-73ebec9cd811@kernel.org>

On Thu, Mar 05, 2026 at 02:32:49PM +0100, David Hildenbrand (Arm) wrote:
>
> >
> > Ah wait, you dedicate a whole paragraph after this to that :)
>
> Correct :)
>
> >
> >> +mapping that is currently resident in RAM (RSS); the process's proportional
> >> +share of this mapping (PSS); and the number of clean and dirty shared and
> >> +private pages in the mapping.
> >> +
> >> +Historically, the "KernelPageSize" always corresponds to the "MMUPageSize",
> >> +except when a larger kernel page size is emulated on a system with a smaller
> >
> > NIT: is -> was, as historically implies past tense.
> >
> > But it's maybe better to say:
> >
> > +Historically, the "KernelPageSize" has always corresponded to the "MMUPageSize",
> >
> > And:
> >
> > +except when a larger kernel page size is being emulated on a system with a smaller
> >
>
> Given that the PPC64 thingy still exists in the tree, I'll probably do:
>
> "KernelPageSize" always corresponds to "MMUPageSize", except when a
> larger kernel page size is emulated on a system with a smaller page size
> used by the MMU, which is the case for some PPC64 setups with hugetlb.
>
> >> +page size used by the MMU, which was the case for PPC64 in the past.
> >> +Further, "KernelPageSize" and "MMUPageSize" always correspond to the
> >
> > NIT: Further -> Furthermore
> >
>
> Helpful.
>
> >> +smallest possible granularity (fallback) that could be encountered in a
> >
> > could be -> can be
> >
> > Since we are really talking about the current situation, even if this, is
> > effect, a legacy thing.
> >
> >> +VMA throughout its lifetime.  These values are not affected by any current
> >> +transparent grouping of pages by Linux (Transparent Huge Pages) or any
> >
> > 'transparent grouping of pages' reads a bit weirdly.
> >
> > Maybe simplify to:
> >
> > +These values are not affected by Transparent Huge Pages being in effect, or any...
>
> Works for me.
>
> >
> >> +current usage of larger MMU page sizes (either through architectural
> >
> > NIT: current usage -> usage
>
> Ack.
>
> >
> >> +huge-page mappings or other transparent groupings done by the MMU).
> >
> > Again I think 'transparent groupings' is a bit unclear. Perhaps instead:
> >
> > +huge-page mappings or other explicit or implicit coalescing of virtual ranges
> > +performed by the MMU).
>
> I'd assume the educated reader does not know what "explicit/implicit
> coalescing" even means, but works for me. :)
>
> >
> > ?
> >
> >> +"AnonHugePages", "ShmemPmdMapped" and "FilePmdMapped" provide insight into
> >> +the usage of some architectural huge-page mappings.
> >
> > Is 'some' necessary here? Seems to make it a bit vague.
>
> I had PUDs in mind. I can just call it
>
> "PMD-level architectural ..."
>
> >
> >>
> >>  The "proportional set size" (PSS) of a process is the count of pages it has
> >>  in memory, where each page is divided by the number of processes sharing it.
> >> @@ -528,10 +541,14 @@ pressure if the memory is clean. Please note that the printed value might
> >>  be lower than the real value due to optimizations used in the current
> >>  implementation. If this is not desirable please file a bug report.
> >>
> >> -"AnonHugePages" shows the amount of memory backed by transparent hugepage.
> >> +"AnonHugePages", "ShmemPmdMapped" and "FilePmdMapped" show the amount of
> >> +memory backed by transparent hugepages that are currently mapped through
> >> +architectural huge-page mappings (PMD). "AnonHugePages" corresponds to memory
> >
> > 'mapped through architectural huge-page mappings (PMD)' reads a bit strangely to
> > me,
> >
> > Perhaps 'mapped by transparent huge pages at a PMD page table level' instead?
> >
>
> I'll simplify to
>
> "mapped by architectural huge-page mappings at the PMD level"
>
>
> >> +that does not belong to a file, "ShmemPmdMapped" to shared memory (shmem/tmpfs)
> >> +and "FilePmdMapped" to file-backed memory (excluding shmem/tmpfs).
> >>
> >> -"ShmemPmdMapped" shows the amount of shared (shmem/tmpfs) memory backed by
> >> -huge pages.
> >> +There are no dedicated entries for transparent huge pages (or similar concepts)
> >> +that are not mapped through architectural huge-page mappings (PMD).
> >
> > similarly, perhaps better as 'are not mapped by transparent huge pages at a PMD
> > page table level'?
>
> I'll similarly call it "mapped by architectural huge-page mappings at
> the PMD level"
>
> Thanks a bunch!
>
> --
> Cheers,
>
> David

Thanks for all!
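For anyone following along: a quick sketch of pulling the fields under
discussion out of /proc/<pid>/smaps. This is just an illustration, not part
of the patch; the helper name is mine, and it assumes a Linux /proc with the
field semantics described in Documentation/filesystems/proc.rst (sizes in kB,
KernelPageSize/MMUPageSize reported per VMA, the PMD counters summable).

```python
# Sketch: summarize the smaps fields discussed above for the current process.
# KernelPageSize/MMUPageSize are per-VMA properties, so we collect the
# distinct values seen; the PMD-mapping counters are amounts, so we sum them.
from collections import Counter

PER_VMA = ("KernelPageSize", "MMUPageSize")
SUMMED = ("AnonHugePages", "ShmemPmdMapped", "FilePmdMapped")

def smaps_summary(pid="self"):
    sizes = {key: set() for key in PER_VMA}
    totals = Counter({key: 0 for key in SUMMED})
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            key, _, rest = line.partition(":")
            if key in PER_VMA:
                sizes[key].add(int(rest.split()[0]))   # kB, per VMA
            elif key in SUMMED:
                totals[key] += int(rest.split()[0])    # kB, summed
    return sizes, totals

if __name__ == "__main__":
    sizes, totals = smaps_summary()
    for key in PER_VMA:
        print(f"{key}: {sorted(sizes[key])} kB (distinct per-VMA values)")
    for key in SUMMED:
        print(f"{key}: {totals[key]} kB total")
```

On a typical x86-64 box without hugetlb mappings this prints a single
distinct value (4 kB) for both page-size fields, matching the "KernelPageSize
always corresponds to MMUPageSize" case above, while AnonHugePages reflects
any PMD-mapped THP.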


Thread overview: 9+ messages
2026-03-04 15:56 David Hildenbrand (Arm)
2026-03-04 16:04 ` Zi Yan
2026-03-04 20:22 ` Andi Kleen
2026-03-05  8:45   ` David Hildenbrand (Arm)
2026-03-05  3:21 ` Lance Yang
2026-03-05  9:03 ` Vlastimil Babka
2026-03-05 10:46 ` Lorenzo Stoakes (Oracle)
2026-03-05 13:32   ` David Hildenbrand (Arm)
2026-03-05 14:57     ` Lorenzo Stoakes (Oracle) [this message]
