From: David Hildenbrand <david@redhat.com>
To: "Pankaj Raghav (Samsung)" <kernel@pankajraghav.com>,
Suren Baghdasaryan <surenb@google.com>,
Ryan Roberts <ryan.roberts@arm.com>,
Mike Rapoport <rppt@kernel.org>, Michal Hocko <mhocko@suse.com>,
Thomas Gleixner <tglx@linutronix.de>,
Nico Pache <npache@redhat.com>, Dev Jain <dev.jain@arm.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
Borislav Petkov <bp@alien8.de>, Ingo Molnar <mingo@redhat.com>,
"H . Peter Anvin" <hpa@zytor.com>,
Vlastimil Babka <vbabka@suse.cz>, Zi Yan <ziy@nvidia.com>,
Dave Hansen <dave.hansen@linux.intel.com>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Andrew Morton <akpm@linux-foundation.org>,
"Liam R . Howlett" <Liam.Howlett@oracle.com>,
Jens Axboe <axboe@kernel.dk>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
willy@infradead.org, x86@kernel.org, linux-block@vger.kernel.org,
linux-fsdevel@vger.kernel.org,
"Darrick J . Wong" <djwong@kernel.org>,
mcgrof@kernel.org, gost.dev@samsung.com, hch@lst.de,
Pankaj Raghav <p.raghav@samsung.com>
Subject: Re: [RFC 0/4] add static huge zero folio support
Date: Wed, 23 Jul 2025 10:45:57 +0200
Message-ID: <e6648680-da88-4f01-9811-00229da858e6@redhat.com>
In-Reply-To: <20250722094215.448132-1-kernel@pankajraghav.com>
On 22.07.25 11:42, Pankaj Raghav (Samsung) wrote:
> From: Pankaj Raghav <p.raghav@samsung.com>
>
> NOTE: I am resending as an RFC again based on Lorenzo's feedback. The
> old series can be found here [1].
>
> There are many places in the kernel where we need to zero out larger
> chunks, but the maximum segment we can zero out at a time with
> ZERO_PAGE is limited to PAGE_SIZE.
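> (For scale: with 4 KiB pages on x86-64, zeroing a 2 MiB extent takes
> 512 separate ZERO_PAGE segments, versus a single segment backed by a
> PMD-sized, i.e. 2 MiB, zero folio.)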
>
> This concern was raised during the review of adding Large Block Size support
> to XFS[2][3].
>
> This is especially annoying for block devices and filesystems, where
> we attach multiple ZERO_PAGEs to the bio in different bvecs. With
> multipage bvec support in the block layer, it is much more efficient
> to send out a larger zero page as part of a single bvec.
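>
> A minimal sketch of the difference (illustrative only, not the diff
> from this series; __bio_add_page() and bio_add_folio() are existing
> block-layer helpers, error handling elided):
>
> 	/*
> 	 * Today: repeated ZERO_PAGE segments cannot merge into one
> 	 * multipage bvec, so every PAGE_SIZE chunk costs a bvec.
> 	 */
> 	while (len >= PAGE_SIZE) {
> 		__bio_add_page(bio, ZERO_PAGE(0), PAGE_SIZE, 0);
> 		len -= PAGE_SIZE;
> 	}
>
> 	/* With a PMD-sized zero folio, one bvec covers up to 2 MiB: */
> 	bio_add_folio(bio, huge_zero_folio, len, 0);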
>
> Some examples of places in the kernel where this could be useful
> (zero_iter() is sketched after the list):
> - blkdev_issue_zero_pages()
> - iomap_dio_zero()
> - vmalloc.c:zero_iter()
> - rxperf_process_call()
> - fscrypt_zeroout_range_inline_crypt()
> - bch2_checksum_update()
> ...
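>
> For instance, vmalloc.c's zero_iter() currently loops over ZERO_PAGE
> one page at a time (lightly condensed from the upstream code):
>
> 	static size_t zero_iter(struct iov_iter *iter, size_t count)
> 	{
> 		size_t remains = count;
>
> 		while (remains > 0) {
> 			size_t num, copied;
>
> 			num = min_t(size_t, remains, PAGE_SIZE);
> 			copied = copy_page_to_iter_nofault(ZERO_PAGE(0),
> 							   0, num, iter);
> 			remains -= copied;
>
> 			if (copied < num)
> 				break;
> 		}
>
> 		return count - remains;
> 	}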
>
> Usually the huge_zero_folio is allocated on demand, and it is
> deallocated by the shrinker once it has no users left. At the moment,
> the huge_zero_folio infrastructure's refcount is tied to the lifetime
> of the process that created it. This does not work for the bio layer,
> as completions can be asynchronous and the process that created the
> huge_zero_folio might no longer be alive. And one of the main points
> that came up during the discussion is to have something bigger than
> the zero page as a drop-in replacement.
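>
> Sketch of today's mm-tied usage, with mm_get_huge_zero_folio() and
> mm_put_huge_zero_folio() being the existing helpers from
> <linux/huge_mm.h> (error handling abbreviated):
>
> 	struct folio *zf = mm_get_huge_zero_folio(current->mm);
>
> 	if (!zf)
> 		return -ENOMEM;	/* on-demand allocation can fail */
> 	/* ... use zf; the reference is accounted to this mm ... */
> 	mm_put_huge_zero_folio(current->mm);
> 	/* once the last mm drops it, the shrinker may free the folio */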
>
> Add a config option STATIC_HUGE_ZERO_FOLIO that always allocates the
> huge_zero_folio and never drops its reference. This allows the
> huge_zero_folio to be used without passing any mm struct, and does not
> tie the lifetime of the zero folio to anything, making it a drop-in
> replacement for ZERO_PAGE.
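>
> A rough sketch of the resulting helper shape (patch 3/4 adds the real
> largest_zero_folio(); the exact implementation in the series may
> differ):
>
> 	static inline struct folio *largest_zero_folio(void)
> 	{
> 		/* Statically pinned, never freed by the shrinker. */
> 		if (IS_ENABLED(CONFIG_STATIC_HUGE_ZERO_FOLIO) &&
> 		    huge_zero_folio)
> 			return huge_zero_folio;
>
> 		/* Fall back to the single shared zero page. */
> 		return page_folio(ZERO_PAGE(0));
> 	}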
>
> I have converted blkdev_issue_zero_pages() as an example as part of
> this series. I also noticed a close to 4% performance improvement just
> by replacing ZERO_PAGE with the static huge_zero_folio.
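>
> Roughly, the conversion boils down to (illustrative, not the exact
> diff from patch 4/4):
>
> 	struct folio *zf = largest_zero_folio();
> 	size_t len;
>
> 	while (nr_sects) {
> 		len = min_t(size_t, folio_size(zf),
> 			    nr_sects << SECTOR_SHIFT);
> 		if (!bio_add_folio(bio, zf, len, 0))
> 			break;
> 		nr_sects -= len >> SECTOR_SHIFT;
> 	}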
>
> I will send patches to individual subsystems using the huge_zero_folio
> once this gets upstreamed.
>
> Looking forward to some feedback.
Please run scripts/checkpatch.pl on your patches.
There are quite a few warnings for patches #2 and #3, in particular
around the use of spaces vs. tabs.
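For reference, running something like

  ./scripts/checkpatch.pl 0002-*.patch 0003-*.patch

from the top of the tree (patch file names illustrative) will list
them all.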
--
Cheers,
David / dhildenb