From: "Pankaj Raghav (Samsung)" <kernel@pankajraghav.com>
To: David Hildenbrand <david@redhat.com>
Cc: Dave Hansen <dave.hansen@intel.com>,
Pankaj Raghav <p.raghav@samsung.com>,
Suren Baghdasaryan <surenb@google.com>,
Ryan Roberts <ryan.roberts@arm.com>,
Mike Rapoport <rppt@kernel.org>, Michal Hocko <mhocko@suse.com>,
Thomas Gleixner <tglx@linutronix.de>,
Nico Pache <npache@redhat.com>, Dev Jain <dev.jain@arm.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
Borislav Petkov <bp@alien8.de>, Ingo Molnar <mingo@redhat.com>,
"H . Peter Anvin" <hpa@zytor.com>,
Vlastimil Babka <vbabka@suse.cz>, Zi Yan <ziy@nvidia.com>,
Dave Hansen <dave.hansen@linux.intel.com>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Andrew Morton <akpm@linux-foundation.org>,
"Liam R . Howlett" <Liam.Howlett@oracle.com>,
Jens Axboe <axboe@kernel.dk>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
willy@infradead.org, x86@kernel.org,
linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
"Darrick J . Wong" <djwong@kernel.org>,
mcgrof@kernel.org, gost.dev@samsung.com, hch@lst.de
Subject: Re: [PATCH 0/5] add STATIC_PMD_ZERO_PAGE config option
Date: Mon, 16 Jun 2025 12:49:27 +0200 [thread overview]
Message-ID: <vmc7bu6muygheuepfltjvbbio6gvjemxostq4rjum66s4ok2f7@x7l3y7ot7mf4> (raw)
In-Reply-To: <b128d1de-9ad5-4de7-8cd7-1490ae31d20f@redhat.com>
> > >
> > > The mm is a nice convenient place to stick an mm but there are other
> > > ways to keep an efficient refcount around. For instance, you could just
> > > bump a per-cpu refcount and then have the shrinker sum up all the
> > > refcounts to see if there are any outstanding on the system as a whole.
> > >
> > > I understand that the current refcounts are tied to an mm, but you could
> > > either replace the mm-specific ones or add something in parallel for
> > > when there's no mm.
> >
> > But the whole idea of allocating a static PMD page for sane
> > architectures like x86 started with the intent of avoiding the refcounts and
> > shrinker.
> >
> > This was the initial feedback I got[2]:
> >
> > I mean, the whole thing about dynamically allocating/freeing it was for
> > memory-constrained systems. For large systems, we just don't care.
>
> For non-mm usage we can just use the folio refcount. The per-mm refcounts
> are all combined into a single folio refcount. The way the global variable
> is managed based on per-mm refcounts is the weird thing.
>
> In some corner cases we might end up having multiple instances of huge zero
> folios right now. Just imagine:
>
> 1) Allocate huge zero folio during read fault
> 2) vmsplice() it
> 3) Unmap the huge zero folio
> 4) Shrinker runs and frees it
> 5) Repeat with 1)
>
> As long as the folio is vmspliced(), it will not get actually freed ...
>
> I would hope that we could remove the shrinker completely, and simply never
> free the huge zero folio once allocated. Or at least, only free it once it
> is actually no longer used.
>
Thanks for the explanation, David.
But I am still a bit confused about how to proceed with these patches.
So IIUC, our eventual goal is to get rid of the shrinker. But do we
still want to add a static PMD page in the .bss, or should we take an
alternate approach here?
--
Pankaj
Thread overview: 19+ messages (2025-06-16 10:49 UTC)
2025-06-12 10:50 Pankaj Raghav
2025-06-12 10:50 ` [PATCH 1/5] mm: move huge_zero_page declaration from huge_mm.h to mm.h Pankaj Raghav
2025-06-12 10:50 ` [PATCH 2/5] huge_memory: add huge_zero_page_shrinker_(init|exit) function Pankaj Raghav
2025-06-12 10:50 ` [PATCH 3/5] mm: add static PMD zero page Pankaj Raghav
2025-06-24 8:51 ` kernel test robot
2025-06-12 10:50 ` [PATCH 4/5] mm: add mm_get_static_huge_zero_folio() routine Pankaj Raghav
2025-06-12 14:09 ` Dave Hansen
2025-06-12 20:54 ` Pankaj Raghav (Samsung)
2025-06-16 9:14 ` David Hildenbrand
2025-06-16 10:41 ` Pankaj Raghav (Samsung)
2025-06-12 10:51 ` [PATCH 5/5] block: use mm_huge_zero_folio in __blkdev_issue_zero_pages() Pankaj Raghav
2025-06-12 13:50 ` [PATCH 0/5] add STATIC_PMD_ZERO_PAGE config option Dave Hansen
2025-06-12 20:36 ` Pankaj Raghav (Samsung)
2025-06-12 21:46 ` Dave Hansen
2025-06-13 8:58 ` Pankaj Raghav (Samsung)
2025-06-16 9:12 ` David Hildenbrand
2025-06-16 10:49 ` Pankaj Raghav (Samsung) [this message]
2025-06-16 5:40 ` Christoph Hellwig
2025-06-16 9:00 ` Pankaj Raghav