From: David Hildenbrand <david@redhat.com>
To: Barry Song <21cnbao@gmail.com>,
Matthew Wilcox <willy@infradead.org>,
ryan.roberts@arm.com, yuzhao@google.com
Cc: akpm@linux-foundation.org, linux-mm@kvack.org,
kasong@tencent.com, yosryahmed@google.com,
cerasuolodomenico@gmail.com, surenb@google.com,
Barry Song <v-songbaohua@oppo.com>,
Johannes Weiner <hannes@cmpxchg.org>
Subject: Re: [RFC PATCH] mm: show mthp_fault_alloc and mthp_fault_fallback of multi-size THPs
Date: Wed, 27 Mar 2024 12:35:39 +0100
Message-ID: <330c90c6-87ef-4497-a9c7-739dcd686ca2@redhat.com>
In-Reply-To: <CAGsJ_4xxsVhfexnjby+CoVLu4ujTEsBh7k_xx+QiwH85NskS9Q@mail.gmail.com>
On 26.03.24 23:19, Barry Song wrote:
> On Tue, Mar 26, 2024 at 4:40 PM Barry Song <21cnbao@gmail.com> wrote:
>>
>> On Tue, Mar 26, 2024 at 4:25 PM Matthew Wilcox <willy@infradead.org> wrote:
>>>
>>> On Tue, Mar 26, 2024 at 04:01:03PM +1300, Barry Song wrote:
>>>> Profiling a system that uses mTHP is currently done blindly due
>>>> to the lack of visibility into its operations. While exposing
>>>> additional statistics such as partial map/unmap actions may
>>>> spark debate, presenting the success rate of mTHP allocations
>>>> appears to be a straightforward and pressing need.
>>>
>>> Ummm ... no? Not like this anyway. It has the bad assumption that
>>> "mTHP" only comes in one size.
>>
>>
>> I had initially considered per-size allocation and fallback counters
>> before sending the RFC. However, to prompt discussion and exploration
>> of the profiling possibilities, I opted to send the simplest code instead.
>>
>> We could consider two options for displaying per-size statistics.
>>
>> 1. A single file could be used to display data for all sizes.
>> 1024KiB fault allocation:
>> 1024KiB fault fallback:
>> 512KiB fault allocation:
>> 512KiB fault fallback:
>> ....
>> 64KiB fault allocation:
>> 64KiB fault fallback:
>>
>> 2. A separate file for each size
>> For example,
>>
>> /sys/kernel/debug/transparent_hugepage/hugepages-1024kB/vmstat
>> /sys/kernel/debug/transparent_hugepage/hugepages-512kB/vmstat
>> ...
>> /sys/kernel/debug/transparent_hugepage/hugepages-64kB/vmstat
>>
>
> Hi Ryan, David, Willy, Yu,
Hi!
>
> I'm collecting feedback on whether you'd prefer access to something similar
> to /sys/kernel/debug/transparent_hugepage/hugepages-<size>/stat to help
> determine the direction to take for this patch.
I mentioned in the past that we might want to place these statistics
into sysfs. The idea was to put them into our new hierarchy:

/sys/kernel/mm/transparent_hugepage/hugepages-1024kB/...

following the "one value per file" sysfs design principle.

We could have a new "stats" directory in there that contains one file
per statistic we care about.
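
A minimal sketch of how that could be wired up (the counter and the
order-lookup helper here are hypothetical, not an actual
implementation):

static atomic_long_t mthp_fault_alloc[PMD_ORDER + 1];

static ssize_t anon_fault_alloc_show(struct kobject *kobj,
                                     struct kobj_attribute *attr, char *buf)
{
        /* Hypothetical helper mapping a stats kobject back to its order. */
        int order = stats_kobj_to_order(kobj);

        return sysfs_emit(buf, "%ld\n",
                          atomic_long_read(&mthp_fault_alloc[order]));
}
static struct kobj_attribute anon_fault_alloc_attr =
        __ATTR_RO(anon_fault_alloc);

static struct attribute *stats_attrs[] = {
        &anon_fault_alloc_attr.attr,
        /* ... one attribute per statistic ... */
        NULL,
};

/* .name creates the "stats" subdirectory under each hugepages-<size> dir. */
static const struct attribute_group stats_attr_group = {
        .name = "stats",
        .attrs = stats_attrs,
};

Registering stats_attr_group against each hugepages-<size> kobject would
then yield e.g. .../hugepages-64kB/stats/anon_fault_alloc, one value per
file.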
Of course, we could also place that initially into debugfs in a similar
fashion, and move it over once the interface is considered good and
stable.

My 2 cents would be to avoid a "single file".
>
> This is important to us because we're keen on understanding how often
> folio allocations fail on a system with limited memory, such as a phone.
>
> Presently, I've observed a success rate of under 8% for 64KiB allocations.
> Yet after integrating Yu's TAO optimization [1] and establishing an 800MiB
> nomerge zone on a phone with 8GiB of memory, the success rate improves
> substantially, reaching approximately 40%. I'm still fine-tuning the
> optimal size for the zone.
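
For reference, the success rate you mention falls out of the proposed
per-size counters as alloc / (alloc + fallback); a trivial sketch
(counter names purely illustrative):

/* Illustrative only: success rate derived from per-size counters. */
static inline unsigned int mthp_success_pct(unsigned long alloc,
                                            unsigned long fallback)
{
        unsigned long attempts = alloc + fallback;

        /* e.g. 8 successful 64KiB faults out of 100 attempts -> 8% */
        return attempts ? (alloc * 100) / attempts : 0;
}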
Just as a side note:

I haven't had the capacity to comment in depth on the "new zones"
proposal so far (I'm hoping / assuming there will be discussions at
LSF/MM), but I'm hoping we can avoid it for now and instead improve our
pageblock infrastructure, as Johannes is trying to do, to achieve
similar gains.
I suspect "some things we can do with new zones we can also do with
pageblocks inside a zone". For example, there were discussions in the
past to have "sticky movable" pageblocks: pageblocks that may only
contain movable data. One could do the same with "pageblocks may not
contain allocations < order X" etc. So one could similarly optimize the
memmap to some degree for these pageblocks.
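
Conceptually (a hand-wavy sketch, not an actual proposal;
MIGRATE_STICKY_MOVABLE is made up here):

/*
 * A sticky-movable pageblock must never be converted to another
 * migratetype by the fallback path, so it only ever ends up holding
 * movable (and thus compactable) data.
 */
static bool pageblock_allows_fallback(struct page *page, int migratetype)
{
        if (get_pageblock_migratetype(page) == MIGRATE_STICKY_MOVABLE)
                return migratetype == MIGRATE_MOVABLE;

        return true;
}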
IMHO we should first try to make THP allocations <= pageblock size more
reliable without resorting to new zones, and I'm happy that Johannes et
al. are doing work in that direction. But that's a longer discussion to
be had at LSF/MM.
--
Cheers,
David / dhildenb