From: Mike Rapoport <rppt@kernel.org>
To: Muchun Song <songmuchun@bytedance.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@kernel.org>,
Muchun Song <muchun.song@linux.dev>,
Oscar Salvador <osalvador@suse.de>,
Michael Ellerman <mpe@ellerman.id.au>,
Madhavan Srinivasan <maddy@linux.ibm.com>,
Lorenzo Stoakes <ljs@kernel.org>,
"Liam R . Howlett" <Liam.Howlett@oracle.com>,
Vlastimil Babka <vbabka@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>,
Nicholas Piggin <npiggin@gmail.com>,
Christophe Leroy <chleroy@kernel.org>,
aneesh.kumar@linux.ibm.com, joao.m.martins@oracle.com,
linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 3/6] mm/sparse-vmemmap: Fix DAX vmemmap accounting with optimization
Date: Wed, 15 Apr 2026 18:58:47 +0300
Message-ID: <ad-1t9sMWBqmgJ3g@kernel.org>
In-Reply-To: <20260415111412.1003526-4-songmuchun@bytedance.com>
On Wed, Apr 15, 2026 at 07:14:09PM +0800, Muchun Song wrote:
> When vmemmap optimization is enabled for DAX, the nr_memmap_pages
> counter in /proc/vmstat is incorrect. The current code always accounts
> for the full, non-optimized vmemmap size, but vmemmap optimization
> reduces the actual number of vmemmap pages by reusing tail pages. This
> causes the system to overcount vmemmap usage, making the reported
> per-page metadata statistics inaccurate.
>
> Fix this by introducing section_vmemmap_pages(), which returns the exact
> vmemmap page count for a given pfn range based on whether optimization
> is in effect.
>
> Fixes: 15995a352474 ("mm: report per-page metadata information")
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
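
FWIW, for anyone wondering how large the error is: with the common x86-64
defaults (4K pages, 64-byte struct page) a 128M section is 32768 pages, so
the old code always accounted 32768 * 64 / 4096 = 512 vmemmap pages for it.
With 2M DAX compounds fully optimized the real cost is only 128 pages (see
the sketch further down), i.e. nr_memmap_pages drifted by 384 pages (1.5M)
per section.
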
> ---
> mm/sparse-vmemmap.c | 32 ++++++++++++++++++++++++++++----
> 1 file changed, 28 insertions(+), 4 deletions(-)
>
> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index 40290fbc1db4..05e3e2b94e32 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -652,6 +652,29 @@ void offline_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
> }
> }
>
> +static int __meminit section_vmemmap_pages(unsigned long pfn, unsigned long nr_pages,
> + struct vmem_altmap *altmap,
> + struct dev_pagemap *pgmap)
> +{
> + unsigned int order = pgmap ? pgmap->vmemmap_shift : 0;
> + unsigned long pages_per_compound = 1L << order;
> +
> + VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, min(pages_per_compound,
> + PAGES_PER_SECTION)));
> + VM_WARN_ON_ONCE(pfn_to_section_nr(pfn) != pfn_to_section_nr(pfn + nr_pages - 1));
> +
> + if (!vmemmap_can_optimize(altmap, pgmap))
> + return DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE);
> +
> + if (order < PFN_SECTION_SHIFT)
> + return VMEMMAP_RESERVE_NR * nr_pages / pages_per_compound;
> +
> + if (IS_ALIGNED(pfn, pages_per_compound))
> + return VMEMMAP_RESERVE_NR;
> +
> + return 0;
> +}
> +
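
This hunk is the heart of the fix, so for the archives: below is the tiny
userspace model I used to convince myself the three return paths are right.
It is only a sketch, not kernel code: the constants are hardcoded x86-64
defaults, vmemmap_can_optimize() is assumed to return true, and model() is
my stand-in for section_vmemmap_pages().

#include <assert.h>
#include <stdio.h>

#define PAGE_SIZE		4096UL
#define PFN_SECTION_SHIFT	15		/* 128M sections */
#define PAGES_PER_SECTION	(1UL << PFN_SECTION_SHIFT)
#define STRUCT_PAGE_SIZE	64UL
#define VMEMMAP_RESERVE_NR	2UL

/* stand-in for section_vmemmap_pages(), optimization assumed enabled */
static unsigned long model(unsigned long pfn, unsigned long nr_pages,
			   unsigned int vmemmap_shift)
{
	unsigned long pages_per_compound = 1UL << vmemmap_shift;

	if (vmemmap_shift < PFN_SECTION_SHIFT)
		return VMEMMAP_RESERVE_NR * nr_pages / pages_per_compound;

	/* compound spans whole sections: only the head section keeps any */
	return pfn % pages_per_compound == 0 ? VMEMMAP_RESERVE_NR : 0;
}

int main(void)
{
	/* 2M compounds (shift 9): 2 * 32768 / 512 = 128 pages per section */
	assert(model(0, PAGES_PER_SECTION, 9) == 128);
	/* 1G compounds (shift 18): 2 for the head section, 0 for the rest */
	assert(model(0, PAGES_PER_SECTION, 18) == 2);
	assert(model(PAGES_PER_SECTION, PAGES_PER_SECTION, 18) == 0);
	/* unoptimized baseline: 32768 * 64 / 4096 = 512 pages per section */
	printf("%lu\n", PAGES_PER_SECTION * STRUCT_PAGE_SIZE / PAGE_SIZE);
	return 0;
}

The numbers agree with my reading of vmemmap_populate_compound_pages().
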
> static struct page * __meminit populate_section_memmap(unsigned long pfn,
> unsigned long nr_pages, int nid, struct vmem_altmap *altmap,
> struct dev_pagemap *pgmap)
> @@ -659,7 +682,7 @@ static struct page * __meminit populate_section_memmap(unsigned long pfn,
> struct page *page = __populate_section_memmap(pfn, nr_pages, nid, altmap,
> pgmap);
>
> - memmap_pages_add(DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE));
> + memmap_pages_add(section_vmemmap_pages(pfn, nr_pages, altmap, pgmap));
>
> return page;
> }
> @@ -670,7 +693,7 @@ static void depopulate_section_memmap(unsigned long pfn, unsigned long nr_pages,
> unsigned long start = (unsigned long) pfn_to_page(pfn);
> unsigned long end = start + nr_pages * sizeof(struct page);
>
> - memmap_pages_add(-1L * (DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE)));
> + memmap_pages_add(-section_vmemmap_pages(pfn, nr_pages, altmap, pgmap));
> vmemmap_free(start, end, altmap);
> }
>
> @@ -679,9 +702,10 @@ static void free_map_bootmem(struct page *memmap, struct vmem_altmap *altmap,
> {
> unsigned long start = (unsigned long)memmap;
> unsigned long end = (unsigned long)(memmap + PAGES_PER_SECTION);
> + unsigned long pfn = page_to_pfn(memmap);
>
> - memmap_boot_pages_add(-1L * (DIV_ROUND_UP(PAGES_PER_SECTION * sizeof(struct page),
> - PAGE_SIZE)));
> + memmap_boot_pages_add(-section_vmemmap_pages(pfn, PAGES_PER_SECTION,
> + altmap, pgmap));
> vmemmap_free(start, end, NULL);
> }
>
> --
> 2.20.1
>
--
Sincerely yours,
Mike.
Thread overview: 14+ messages
2026-04-15 11:14 [PATCH v2 0/6] mm: Fix vmemmap optimization accounting and initialization Muchun Song
2026-04-15 11:14 ` [PATCH v2 1/6] mm/sparse-vmemmap: Fix vmemmap accounting underflow Muchun Song
2026-04-15 11:26 ` Muchun Song
2026-04-15 15:53 ` Mike Rapoport
2026-04-15 11:14 ` [PATCH v2 2/6] mm/sparse-vmemmap: Pass @pgmap argument to memory deactivation paths Muchun Song
2026-04-15 15:55 ` Mike Rapoport
2026-04-15 11:14 ` [PATCH v2 3/6] mm/sparse-vmemmap: Fix DAX vmemmap accounting with optimization Muchun Song
2026-04-15 15:58 ` Mike Rapoport [this message]
2026-04-15 11:14 ` [PATCH v2 4/6] mm/sparse-vmemmap: Pass @pgmap argument to arch vmemmap_populate() Muchun Song
2026-04-15 12:13 ` Joao Martins
2026-04-15 12:21 ` Muchun Song
2026-04-15 11:14 ` [PATCH v2 5/6] mm/sparse-vmemmap: Fix missing architecture-specific page table sync Muchun Song
2026-04-15 11:14 ` [PATCH v2 6/6] mm/mm_init: Fix pageblock migratetype for ZONE_DEVICE compound pages Muchun Song
2026-04-15 17:03 ` Mike Rapoport