From: Barry Song <21cnbao@gmail.com>
To: Ryan Roberts <ryan.roberts@arm.com>
Cc: lsf-pc@lists.linux-foundation.org, Linux-MM <linux-mm@kvack.org>,
Matthew Wilcox <willy@infradead.org>,
Dave Chinner <david@fromorbit.com>
Subject: Re: [LSF/MM/BPF TOPIC] Mapping text with large folios
Date: Sun, 30 Mar 2025 12:46:00 +0800
Message-ID: <CAGsJ_4zETytN_6pzNQjnt3Jc1ubAjCr3toHf0LcRA_7hmMMuxg@mail.gmail.com>
In-Reply-To: <3e3a2d12-efcb-44e7-bd03-e8211161f3a4@arm.com>
On Thu, Mar 20, 2025 at 10:57 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> On 19/03/2025 20:47, Barry Song wrote:
> > On Thu, Mar 20, 2025 at 4:38 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
> >>
> >> Hi All,
> >>
> >> I know this is very last minute, but I was hoping that it might be possible to
> >> squeeze in a session to discuss the following?
> >>
> >> Summary/Background:
> >>
> >> On arm64, physically contiguous and naturally aligned regions can take advantage
> >> of contpte mappings (e.g. 64 KB) to reduce iTLB pressure. However, for file
> >> regions containing text, current readahead behaviour often yields small,
> >> misaligned folios, preventing this optimization. This proposal introduces a
> >> special-case path for executable mappings, performing synchronous reads of an
> >> architecture-chosen size into large folios (64 KB on arm64). Early performance
> >> tests on real-world workloads (e.g. nginx, redis, kernel compilation) show ~2-9%
> >> gains.
> >>
> >> I’ve previously posted attempts to enable this performance improvement ([1],
> >> [2]), but there were objections and conversation fizzled out. Now that I have
> >> more compelling performance data, I’m hoping there is now stronger
> >> justification, and we can find a path forwards.
> >>
> >> What I’d Like to Cover:
> >>
> >> - Describe how text memory should ideally be mapped and why it benefits
> >> performance.
> >>
> >> - Brief review of performance data.
> >>
> >> - Discuss options for the best way to encourage text into large folios:
> >> - Let the architecture request a preferred size
> >> - Extend VMA attributes to include preferred THP size hint
> >
> > We might need this for a couple of other cases.
> >
> > 1. The native heap—for example, a native heap like jemalloc—can configure
> > the base "granularity" and then use MADV_DONTNEED/FREE at that granularity
> > to manage memory. Currently, the default granularity is PAGE_SIZE, which can
> > lead to excessive folio splitting. For instance, if we set jemalloc's
> > granularity to 16KB while sysfs supports 16KB, 32KB, 64KB, etc.,
> > splitting can still occur.
> > Therefore, in some cases, I believe the kernel should be aware of how
> > userspace is managing memory.
> >
> > 2. Java heap GC compaction - userfaultfd_move() things. I am
> > considering adding support for batched PTE/folio moves in
> > userfaultfd_move(). If sysfs enables 16KB, 32KB, 64KB, 128KB, etc.,
> > but the userspace Java heap moves memory at a 16KB granularity, it
> > could lead to excessive folio splitting.
>
> Would these heaps ever use a 64K granule or is that too big? If they can use
> 64K, then one simple solution would be to only enable mTHP sizes up to 64K (which
> is the magic size for arm64).
>
I'm not sure how Lokesh plans to implement mTHP support in
userfaultfd_move(), or what granularity he'll use for Java heap GC.
However, for jemalloc I've found that 64KB is actually too large: it
ends up increasing memory usage, because we need at least 64KB of freed
small objects before we can usefully issue MADV_DONTNEED. Perhaps we
could try 16KB instead.

The key requirement is that the kernel's maximum large folio size must
not exceed the memory-management granularity used by the userspace heap
implementation.

Before implementing madvise-based per-VMA large folios for the Java
heap, I plan to first propose a large-folio-aware userfaultfd_move()
and discuss that approach with Lokesh.
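To make the jemalloc case concrete, here is a minimal C sketch (not
jemalloc's real code; EXTENT_GRANULE and purge_extent() are names made
up for this example). If the purge granule is 16KB but the kernel has
backed the range with a 32KB or 64KB folio, every purge of a
partially-covered folio forces a split:

#include <sys/mman.h>

#define EXTENT_GRANULE	(16 * 1024)	/* hypothetical heap purge granularity */

/*
 * Return one fully-free 16KB extent to the kernel. If this range is
 * backed by a 32KB or 64KB folio, the partial zap forces the large
 * folio to be split.
 */
static int purge_extent(void *extent_base)
{
	return madvise(extent_base, EXTENT_GRANULE, MADV_DONTNEED);
}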
> Alternatively they could use MADV_NOHUGEPAGE today and be guaranteed that
> memory would remain mapped as small folios.
Right. I'm already using MADV_NOHUGEPAGE for jemalloc's small size
classes, since large folios would soon be split by unaligned userspace
heap management anyway.
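For reference, that opt-out amounts to a single madvise() on the arena
backing the small size classes (a sketch; arena_disable_thp() and the
arena layout are assumptions for illustration, not jemalloc's API):

#include <stddef.h>
#include <sys/mman.h>

/*
 * Keep an arena carved into small size classes mapped with small
 * folios only, so sub-64KB MADV_DONTNEED/MADV_FREE calls never have to
 * split a large folio.
 */
static int arena_disable_thp(void *arena_base, size_t arena_len)
{
	return madvise(arena_base, arena_len, MADV_NOHUGEPAGE);
}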
>
> But I see the potential problem if you want to benefit from HPA with 16K granule
> there but still enable 64K globally. We have briefly discussed the idea of
> supporting MADV_HUGEPAGE via process_madvise() in the past; that has an extra
> param that could encode the size hint(s).
>
I'm not sure what granularity Lokesh plans to support for moving large
folios in Java GC. But first we need kernel support for mTHP in
userfaultfd_move(); maybe that could serve as a use case to justify a
size hint for MADV_HUGEPAGE.
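For context, today's UFFDIO_MOVE interface is purely range-based, so a
compacting GC moving at 16KB granularity would issue something like the
sketch below per chunk (this assumes uffd is an already-registered
userfaultfd descriptor and headers from a kernel that has UFFDIO_MOVE;
GC_CHUNK and gc_move_chunk() are hypothetical names):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/userfaultfd.h>

#define GC_CHUNK	(16 * 1024)	/* hypothetical Java-heap move granularity */

/* Move one 16KB chunk from the GC from-space to the to-space. */
static int gc_move_chunk(int uffd, uint64_t dst, uint64_t src)
{
	struct uffdio_move mv = {
		.dst	= dst,
		.src	= src,
		.len	= GC_CHUNK,
		.mode	= 0,
	};

	if (ioctl(uffd, UFFDIO_MOVE, &mv) < 0)
		return -1;

	/* mv.move reports how many bytes were actually moved. */
	return mv.move == GC_CHUNK ? 0 : -1;
}

Whether such a chunk can be moved as a whole large folio, or must be
split first when the src/dst alignment doesn't allow it, is exactly
what the mTHP-aware userfaultfd_move() work would need to sort out.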
> >
> > For exec, it seems we need a userspace-transparent approach. Asking each
> > application to modify its code to madvise the kernel on its preferred exec folio
> > size seems cumbersome.
>
> I would much prefer a transparent approach. If we did take the approach of using
> a per-VMA size hint, I was thinking that could be handled by the dynamic linker.
> Then it's only one place to update.
The dynamic linker (ld.so) primarily handles runtime linking of shared
libraries. But the initial mapping of the executable itself (the binary
file, e.g. a.out) is performed by the kernel during exec, isn't it?
>
> >
> > I mean, we could whitelist all execs by default unless an application explicitly
> > requests to disable it?
>
> I guess the explicit disable would be MADV_NOHUGEPAGE. But I don't believe the
> pagecache honours this right now; presumably because the memory is shared. What
> would you do if one process disabled and another didn't?
Correct. My earlier concern was that memory-constrained devices could
see increased memory pressure from mandatory 64KB reads. A particular
worry is that a 64KB folio stays on the LRU as long as any single
subpage is active, whereas smaller folios would have been reclaimable
once inactive. However, that appears unrelated to your patch [1].
Perhaps such systems should simply disable large folios for file-backed
memory entirely?
[1] https://lore.kernel.org/all/20240215154059.2863126-1-ryan.roberts@arm.com/
>
> Thanks,
> Ryan
>
> >
> >> - Provide a sysfs knob
> >> - Plug into the “mapping min folio order” infrastructure
> >> - Other approaches?
> >>
> >> [1] https://lore.kernel.org/all/20240215154059.2863126-1-ryan.roberts@arm.com/
> >> [2] https://lore.kernel.org/all/20240717071257.4141363-1-ryan.roberts@arm.com/
> >>
> >> Thanks,
> >> Ryan
> >
Thanks
Barry