From: Wenchao Hao <haowenchao22@gmail.com>
To: David Hildenbrand <david@redhat.com>,
Andrew Morton <akpm@linux-foundation.org>,
Matthew Wilcox <willy@infradead.org>,
Oscar Salvador <osalvador@suse.de>,
Muhammad Usama Anjum <usama.anjum@collabora.com>,
Andrii Nakryiko <andrii@kernel.org>,
Ryan Roberts <ryan.roberts@arm.com>, Peter Xu <peterx@redhat.com>,
Barry Song <21cnbao@gmail.com>,
linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org
Subject: Re: [PATCH] smaps: count large pages smaller than PMD size to anonymous_thp
Date: Wed, 4 Dec 2024 22:47:04 +0800 [thread overview]
Message-ID: <605e5e98-863f-41fe-9a84-071c1843d684@gmail.com> (raw)
In-Reply-To: <f002188e-8990-4c72-ad84-966518279dce@redhat.com>
On 2024/12/4 22:37, David Hildenbrand wrote:
> On 04.12.24 15:30, Wenchao Hao wrote:
>> On 2024/12/3 22:17, David Hildenbrand wrote:
>>> On 03.12.24 14:49, Wenchao Hao wrote:
>>>> Currently, /proc/xxx/smaps reports the size of anonymous huge pages for
>>>> each VMA, but it does not include large pages smaller than PMD size.
>>>>
>>>> This patch adds the statistics of anonymous huge pages allocated by
>>>> mTHP which is smaller than PMD size to AnonHugePages field in smaps.
>>>>
>>>> Signed-off-by: Wenchao Hao <haowenchao22@gmail.com>
>>>> ---
>>>> fs/proc/task_mmu.c | 6 ++++++
>>>> 1 file changed, 6 insertions(+)
>>>>
>>>> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
>>>> index 38a5a3e9cba2..b655011627d8 100644
>>>> --- a/fs/proc/task_mmu.c
>>>> +++ b/fs/proc/task_mmu.c
>>>> @@ -717,6 +717,12 @@ static void smaps_account(struct mem_size_stats *mss, struct page *page,
>>>> if (!folio_test_swapbacked(folio) && !dirty &&
>>>> !folio_test_dirty(folio))
>>>> mss->lazyfree += size;
>>>> +
>>>> + /*
>>>> + * Count large pages smaller than PMD size to anonymous_thp
>>>> + */
>>>> + if (!compound && PageHead(page) && folio_order(folio))
>>>> + mss->anonymous_thp += folio_size(folio);
>>>> }
>>>> if (folio_test_ksm(folio))
>>>
>>>
>>> I think we decided to leave this (and /proc/meminfo) be one of the last
>>> interfaces where this is only concerned with PMD-sized ones:
>>>
>>
>> Could you explain why?
>>
>> When analyzing the impact of mTHP on performance, we need to understand
>> how many pages in the process are actually present as large pages.
>> By comparing this value with the actual memory usage of the process,
>> we can analyze the large page allocation success rate of the process,
>> and further investigate the situation of khugepaged. If the actual
>> proportion of large pages is low, the performance of the process may
>> be affected, which could be directly reflected in the high number of
>> TLB misses and page faults.
>>
>> However, currently, only PMD-sized large pages are being counted,
>> which is insufficient.
>
> As Ryan said, we have scripts to analyze that. We did not come to a conclusion yet how to handle smaps stats differently -- and whether we want to at all.
>
Hi David,
I replied to Ryan about a few disadvantages of the scripts; they are not
helpful for my scenario.
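To make the use case concrete, here is a rough userspace sketch (illustrative
only, not an existing tool) of the accounting I mean: it sums the existing
Anonymous and AnonHugePages fields over all VMAs in /proc/PID/smaps and prints
what fraction of the process's anonymous memory is THP-backed. With current
kernels that fraction can only reflect PMD-sized THP, which is exactly the gap.

/*
 * Rough sketch: sum the Anonymous and AnonHugePages fields over all VMAs
 * of a process and print what fraction of its anonymous memory is
 * THP-backed.  Today AnonHugePages only counts PMD-sized THP, so any
 * mTHP-backed memory is invisible here.
 */
#include <stdio.h>

int main(int argc, char **argv)
{
	char path[64], line[256];
	unsigned long long anon = 0, anon_thp = 0, kb;
	FILE *f;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
		return 1;
	}

	snprintf(path, sizeof(path), "/proc/%s/smaps", argv[1]);
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return 1;
	}

	while (fgets(line, sizeof(line), f)) {
		/* field names as they appear in smaps today */
		if (sscanf(line, "Anonymous: %llu kB", &kb) == 1)
			anon += kb;
		else if (sscanf(line, "AnonHugePages: %llu kB", &kb) == 1)
			anon_thp += kb;
	}
	fclose(f);

	printf("Anonymous:     %llu kB\n", anon);
	printf("AnonHugePages: %llu kB\n", anon_thp);
	if (anon)
		printf("THP-backed:    %.1f%%\n", 100.0 * anon_thp / anon);
	return 0;
}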
>>
>>> Documentation/admin-guide/mm/transhuge.rst:
>>>
>>> The number of PMD-sized anonymous transparent huge pages currently used by the
>>> system is available by reading the AnonHugePages field in ``/proc/meminfo``.
>>> To identify what applications are using PMD-sized anonymous transparent huge
>>> pages, it is necessary to read ``/proc/PID/smaps`` and count the AnonHugePages
>>> fields for each mapping. (Note that AnonHugePages only applies to traditional
>>> PMD-sized THP for historical reasons and should have been called
>>> AnonHugePmdMapped).
>>>
>>
>> Maybe rename this field, then AnonHugePages contains huge page of mTHP?
>
> It has the potential of breaking existing user space, which is why we didn't look into that yet.
>
Got it.
> AnonHugePmdMapped would be a lot cleaner, and could be added independently. It would be required as a first step.
>
However, if the meaning of AnonHugePages remains unchanged, simply adding a
new field doesn't seem to have any practical significance.
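In other words, the layout that would actually be useful here would look
something like this (purely hypothetical output, with the new field name taken
from the documentation quoted above):

AnonHugePages:      3072 kB    (all anonymous large folios, mTHP included)
AnonHugePmdMapped:  2048 kB    (PMD-sized THP only, today's AnonHugePages)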
Thanks
Thread overview: 15+ messages
2024-12-03 13:49 Wenchao Hao
2024-12-03 14:17 ` David Hildenbrand
2024-12-03 14:42 ` Ryan Roberts
2024-12-04 14:40 ` Wenchao Hao
2024-12-04 17:05 ` Ryan Roberts
2024-12-16 15:58 ` Wenchao Hao
2024-12-20 6:48 ` Dev Jain
2024-12-04 14:30 ` Wenchao Hao
2024-12-04 14:37 ` David Hildenbrand
2024-12-04 14:47 ` Wenchao Hao [this message]
2024-12-04 17:07 ` Ryan Roberts
2024-12-06 11:16 ` Lance Yang
2024-12-08 6:06 ` Barry Song
2024-12-09 10:07 ` Ryan Roberts
2024-12-16 16:03 ` Wenchao Hao