From: Pasha Tatashin <pasha.tatashin@soleen.com>
To: Alison Schofield <alison.schofield@intel.com>
Cc: Sourav Panda <souravpanda@google.com>,
corbet@lwn.net, gregkh@linuxfoundation.org, rafael@kernel.org,
akpm@linux-foundation.org, mike.kravetz@oracle.com,
muchun.song@linux.dev, rppt@kernel.org, david@redhat.com,
rdunlap@infradead.org, chenlinxuan@uniontech.com,
yang.yang29@zte.com.cn, tomas.mudrunka@gmail.com,
bhelgaas@google.com, ivan@cloudflare.com, yosryahmed@google.com,
hannes@cmpxchg.org, shakeelb@google.com,
kirill.shutemov@linux.intel.com, wangkefeng.wang@huawei.com,
adobriyan@gmail.com, vbabka@suse.cz, Liam.Howlett@oracle.com,
surenb@google.com, linux-kernel@vger.kernel.org,
linux-fsdevel@vger.kernel.org, linux-doc@vger.kernel.org,
linux-mm@kvack.org, willy@infradead.org, weixugc@google.com,
David Rientjes <rientjes@google.com>,
nvdimm@lists.linux.dev, linux-cxl@vger.kernel.org,
yi.zhang@redhat.com
Subject: Re: [PATCH v13] mm: report per-page metadata information
Date: Mon, 5 Aug 2024 14:40:48 -0400
Message-ID: <CA+CK2bAfgamzFos1M-6AtozEDwRPJzARJOmccfZ=uzKyJ7w=kQ@mail.gmail.com>
In-Reply-To: <Zq0tPd2h6alFz8XF@aschofie-mobl2>

On Fri, Aug 2, 2024 at 3:02 PM Alison Schofield
<alison.schofield@intel.com> wrote:
>
> ++ nvdimm, linux-cxl, Yu Zhang
>
> On Wed, Jun 05, 2024 at 10:27:51PM +0000, Sourav Panda wrote:
> > Today, we do not have any observability of per-page metadata
> > and how much it takes away from the machine capacity. Thus,
> > we want to describe the amount of memory that is going towards
> > per-page metadata, which can vary depending on build
> > configuration, machine architecture, and system use.
> >
> > This patch adds 2 fields to /proc/vmstat that can be used as shown
> > below:
> >
> > Accounting per-page metadata allocated by boot-allocator:
> > /proc/vmstat:nr_memmap_boot * PAGE_SIZE
> >
> > Accounting per-page metadata allocated by buddy-allocator:
> > /proc/vmstat:nr_memmap * PAGE_SIZE
> >
> > Accounting total per-page metadata allocated on the machine:
> > (/proc/vmstat:nr_memmap_boot +
> > /proc/vmstat:nr_memmap) * PAGE_SIZE
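
A minimal sketch of how userspace might consume these counters
(assuming Python; the page size is taken from sysconf rather than
from the kernel's PAGE_SIZE constant):

    import os

    def vmstat(name):
        # Look up a single counter in /proc/vmstat; return 0 if absent.
        with open("/proc/vmstat") as f:
            for line in f:
                key, _, value = line.partition(" ")
                if key == name:
                    return int(value)
        return 0

    page_size = os.sysconf("SC_PAGE_SIZE")
    boot = vmstat("nr_memmap_boot") * page_size   # boot-allocator memmap
    buddy = vmstat("nr_memmap") * page_size       # buddy-allocator memmap
    print("per-page metadata: boot=%d buddy=%d total=%d bytes"
          % (boot, buddy, boot + buddy))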
> >
> > Utility for userspace:
> >
> > Observability: Describe the amount of memory overhead that is
> > going toward per-page metadata on the system at any given time,
> > since this overhead is not currently observable.
> >
> > Debugging: Tracking the changes or the absolute value of struct
> > page memory can help detect anomalies, as they can be correlated
> > with other metrics on the machine (e.g., MemTotal, number of huge
> > pages, etc.).
> >
> > page_ext overheads: Some kernel features, such as page_owner and
> > page_table_check, that use page_ext can be optionally enabled via
> > kernel parameters. Having the total per-page metadata information
> > helps users precisely measure the impact. Furthermore, page-metadata
> > metrics will reflect the number of struct pages relinquished
> > (or overhead reduced) when hugetlbfs pages are reserved, which
> > will vary depending on whether hugetlb vmemmap optimization is
> > enabled or not.
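
For example, page_owner and page_table_check can be enabled at boot
with the documented page_owner=on and page_table_check=on parameters
(when the corresponding configs are built in); comparing the counters
above with and without them gives a measure of the added page_ext
overhead, assuming page_ext allocations are included in this
accounting as the paragraph implies.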
> >
> > For background and results see:
> > lore.kernel.org/all/20240220214558.3377482-1-souravpanda@google.com
> >
> > Acked-by: David Rientjes <rientjes@google.com>
> > Signed-off-by: Sourav Panda <souravpanda@google.com>
> > Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
>
> This patch is leading to an Oops in 6.11-rc1 when CONFIG_MEMORY_HOTPLUG
> is enabled. Folks hitting it have had success with reverting this patch.
> Disabling CONFIG_MEMORY_HOTPLUG is not a long-term solution.
>
> Reported here:
> https://lore.kernel.org/linux-cxl/CAHj4cs9Ax1=CoJkgBGP_+sNu6-6=6v=_L-ZBZY0bVLD3wUWZQg@mail.gmail.com/

Thank you for the heads-up. Can you please attach the full config file?
Also, was anyone able to reproduce this problem in qemu with an
emulated nvdimm?

Pasha