From: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
To: "Pankaj Raghav (Samsung)" <kernel@pankajraghav.com>
Cc: Suren Baghdasaryan <surenb@google.com>,
Ryan Roberts <ryan.roberts@arm.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
Vlastimil Babka <vbabka@suse.cz>, Zi Yan <ziy@nvidia.com>,
Mike Rapoport <rppt@kernel.org>,
Dave Hansen <dave.hansen@linux.intel.com>,
Michal Hocko <mhocko@suse.com>,
David Hildenbrand <david@redhat.com>,
Andrew Morton <akpm@linux-foundation.org>,
Thomas Gleixner <tglx@linutronix.de>,
Nico Pache <npache@redhat.com>, Dev Jain <dev.jain@arm.com>,
"Liam R . Howlett" <Liam.Howlett@oracle.com>,
Jens Axboe <axboe@kernel.dk>,
linux-kernel@vger.kernel.org, willy@infradead.org,
linux-mm@kvack.org, Ritesh Harjani <ritesh.list@gmail.com>,
linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
"Darrick J . Wong" <djwong@kernel.org>,
mcgrof@kernel.org, gost.dev@samsung.com, hch@lst.de,
Pankaj Raghav <p.raghav@samsung.com>
Subject: Re: [PATCH v2 3/5] mm: add persistent huge zero folio
Date: Fri, 8 Aug 2025 16:47:38 +0100 [thread overview]
Message-ID: <731d8b44-1a45-40bc-a274-8f39a7ae0f7f@lucifer.local> (raw)
In-Reply-To: <20250808121141.624469-4-kernel@pankajraghav.com>
On Fri, Aug 08, 2025 at 02:11:39PM +0200, Pankaj Raghav (Samsung) wrote:
> From: Pankaj Raghav <p.raghav@samsung.com>
>
> Many places in the kernel need to zero out larger chunks, but the
> maximum segment that can be zeroed out at a time using ZERO_PAGE is
> limited to PAGE_SIZE.
>
> This is especially annoying in block devices and filesystems, where
> multiple ZERO_PAGEs are attached to the bio in different bvecs. With
> multipage bvec support in the block layer, it is much more efficient
> to send out larger zero pages as part of a single bvec.
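To make the bvec point concrete, here is a rough sketch (purely
hypothetical, not code from this series) of what a block layer caller
could do once a PMD-sized zero folio is available. bio_add_folio() is
the existing block helper; "zero_folio" stands in for whatever this
series ends up exposing:

	/*
	 * Hypothetical sketch only: describe up to folio_size(zero_folio)
	 * bytes of zeroes with a single bvec, instead of one PAGE_SIZE
	 * ZERO_PAGE bvec per page.
	 */
	static bool sketch_add_zero_bvec(struct bio *bio,
					 struct folio *zero_folio, size_t len)
	{
		/* One call, one bvec; the caller loops for anything larger. */
		return bio_add_folio(bio, zero_folio,
				     min(len, folio_size(zero_folio)), 0);
	}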
>
> This concern was raised during the review of adding Large Block Size
> support to XFS[1][2].
>
> Usually the huge_zero_folio is allocated on demand, and it is
> deallocated by the shrinker once there are no users left. At the
> moment, the huge_zero_folio refcount is tied to the lifetime of the
> process that created it. This might not work for the bio layer, as
> completions can be asynchronous and the process that created the
> huge_zero_folio might no longer be alive. One of the main points that
> came up during the discussion was to have something bigger than the
> zero page as a drop-in replacement.
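To spell out the lifetime problem in (made-up) code - bio_add_folio()
and submit_bio() are the existing block helpers, and the hazard is the
bit in the comment:

	/* Purely illustrative, not from this series. */
	static void sketch_async_zero_write(struct bio *bio, struct mm_struct *mm)
	{
		struct folio *zf = mm_get_huge_zero_folio(mm);

		if (!zf || !bio_add_folio(bio, zf, folio_size(zf), 0))
			return;

		/*
		 * The folio reference is tied to @mm: once the submitting
		 * process exits and @mm is torn down, the shrinker may free
		 * the folio before this bio completes.
		 */
		submit_bio(bio);
	}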
>
> Add a config option PERSISTENT_HUGE_ZERO_FOLIO that allocates the huge
> zero folio during early init and never frees it, by disabling the
> shrinker. This allows the huge_zero_folio to be used without passing
> any mm struct, and it does not tie the lifetime of the zero folio to
> anything, making it a drop-in replacement for ZERO_PAGE.
>
> If the PERSISTENT_HUGE_ZERO_FOLIO config option is enabled,
> mm_get_huge_zero_folio() simply returns the already-allocated folio
> instead of dynamically allocating a new PMD page.
>
> Use this option carefully on resource-constrained systems, as it uses
> one full PMD-sized page for zeroing purposes.
>
> [1] https://lore.kernel.org/linux-xfs/20231027051847.GA7885@lst.de/
> [2] https://lore.kernel.org/linux-xfs/ZitIK5OnR7ZNY0IG@infradead.org/
>
> Co-developed-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Pankaj Raghav <p.raghav@samsung.com>
This is much nicer and now _super_ simple, I like it.
A few nits below but generally:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> include/linux/huge_mm.h | 16 ++++++++++++++++
> mm/Kconfig | 16 ++++++++++++++++
> mm/huge_memory.c | 40 ++++++++++++++++++++++++++++++----------
> 3 files changed, 62 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 7748489fde1b..bd547857c6c1 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -495,6 +495,17 @@ static inline bool is_huge_zero_pmd(pmd_t pmd)
> struct folio *mm_get_huge_zero_folio(struct mm_struct *mm);
> void mm_put_huge_zero_folio(struct mm_struct *mm);
>
> +static inline struct folio *get_persistent_huge_zero_folio(void)
> +{
> + if (!IS_ENABLED(CONFIG_PERSISTENT_HUGE_ZERO_FOLIO))
> + return NULL;
> +
> + if (unlikely(!huge_zero_folio))
> + return NULL;
> +
> + return huge_zero_folio;
> +}
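Just to sanity-check my understanding: a caller with no mm_struct to
hand (e.g. an async completion path) could then do something like the
below (sketch only, the helper name is made up, not part of the patch),
with the fallback keeping behaviour identical when the option is off or
the early allocation failed:

	static struct page *sketch_pick_zero_page(unsigned int *len)
	{
		struct folio *zf = get_persistent_huge_zero_folio();

		if (zf) {
			*len = folio_size(zf);	/* PMD-sized run of zeroes */
			return folio_page(zf, 0);
		}

		*len = PAGE_SIZE;	/* fall back to the single zero page */
		return ZERO_PAGE(0);
	}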
> +
> static inline bool thp_migration_supported(void)
> {
> return IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION);
> @@ -685,6 +696,11 @@ static inline int change_huge_pud(struct mmu_gather *tlb,
> {
> return 0;
> }
> +
> +static inline struct folio *get_persistent_huge_zero_folio(void)
> +{
> + return NULL;
> +}
> #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
> static inline int split_folio_to_list_to_order(struct folio *folio,
> diff --git a/mm/Kconfig b/mm/Kconfig
> index e443fe8cd6cf..fbe86ef97fd0 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -823,6 +823,22 @@ config ARCH_WANT_GENERAL_HUGETLB
> config ARCH_WANTS_THP_SWAP
> def_bool n
>
> +config PERSISTENT_HUGE_ZERO_FOLIO
> + bool "Allocate a PMD sized folio for zeroing"
> + depends on TRANSPARENT_HUGEPAGE
I feel like we really need to sort out what is/isn't predicated on THP... it
seems like THP is sort of shorthand for 'any large folio stuff' but not
always...
But this is a more general point :)
> + help
> + Enable this option to reduce the runtime refcounting overhead
> + of the huge zero folio and expand the places in the kernel
> + that can use huge zero folios. This can potentially improve
> + the performance while performing an I/O.
NIT: I think we can drop 'an', and probably refactor this sentence to something
like 'For instance, block I/O benefits from access to large folios for zeroing
memory'.
> +
> + With this option enabled, the huge zero folio is allocated
> + once and never freed. One full huge page worth of memory shall
> + be used.
NIT: huge page worth -> huge page's worth
> +
> + Say Y if your system has lots of memory. Say N if you are
> + memory constrained.
> +
> config MM_ID
> def_bool n
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index ff06dee213eb..bedda9640936 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -248,6 +248,9 @@ static void put_huge_zero_folio(void)
>
> struct folio *mm_get_huge_zero_folio(struct mm_struct *mm)
> {
> + if (IS_ENABLED(CONFIG_PERSISTENT_HUGE_ZERO_FOLIO))
> + return huge_zero_folio;
> +
> if (test_bit(MMF_HUGE_ZERO_FOLIO, &mm->flags))
> return READ_ONCE(huge_zero_folio);
>
> @@ -262,6 +265,9 @@ struct folio *mm_get_huge_zero_folio(struct mm_struct *mm)
>
> void mm_put_huge_zero_folio(struct mm_struct *mm)
> {
> + if (IS_ENABLED(CONFIG_PERSISTENT_HUGE_ZERO_FOLIO))
> + return;
> +
> if (test_bit(MMF_HUGE_ZERO_FOLIO, &mm->flags))
> put_huge_zero_folio();
> }
> @@ -849,16 +855,34 @@ static inline void hugepage_exit_sysfs(struct kobject *hugepage_kobj)
>
> static int __init thp_shrinker_init(void)
> {
> - huge_zero_folio_shrinker = shrinker_alloc(0, "thp-zero");
> - if (!huge_zero_folio_shrinker)
> - return -ENOMEM;
> -
> deferred_split_shrinker = shrinker_alloc(SHRINKER_NUMA_AWARE |
> SHRINKER_MEMCG_AWARE |
> SHRINKER_NONSLAB,
> "thp-deferred_split");
> - if (!deferred_split_shrinker) {
> - shrinker_free(huge_zero_folio_shrinker);
> + if (!deferred_split_shrinker)
> + return -ENOMEM;
> +
> + deferred_split_shrinker->count_objects = deferred_split_count;
> + deferred_split_shrinker->scan_objects = deferred_split_scan;
> + shrinker_register(deferred_split_shrinker);
> +
> + if (IS_ENABLED(CONFIG_PERSISTENT_HUGE_ZERO_FOLIO)) {
> + /*
> + * Bump the reference of the huge_zero_folio and do not
> + * initialize the shrinker.
> + *
> + * huge_zero_folio will always be NULL on failure. We assume
> + * that get_huge_zero_folio() will most likely not fail as
> + * thp_shrinker_init() is invoked early on during boot.
> + */
> + if (!get_huge_zero_folio())
> + pr_warn("Allocating static huge zero folio failed\n");
> + return 0;
> + }
> +
> + huge_zero_folio_shrinker = shrinker_alloc(0, "thp-zero");
> + if (!huge_zero_folio_shrinker) {
> + shrinker_free(deferred_split_shrinker);
> return -ENOMEM;
> }
>
> @@ -866,10 +890,6 @@ static int __init thp_shrinker_init(void)
> huge_zero_folio_shrinker->scan_objects = shrink_huge_zero_folio_scan;
> shrinker_register(huge_zero_folio_shrinker);
>
> - deferred_split_shrinker->count_objects = deferred_split_count;
> - deferred_split_shrinker->scan_objects = deferred_split_scan;
> - shrinker_register(deferred_split_shrinker);
> -
> return 0;
> }
>
> --
> 2.49.0
>