From: Ryan Roberts <ryan.roberts@arm.com>
To: Wenchao Hao <haowenchao22@gmail.com>,
	David Hildenbrand <david@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>,
	Oscar Salvador <osalvador@suse.de>,
	Muhammad Usama Anjum <usama.anjum@collabora.com>,
	Andrii Nakryiko <andrii@kernel.org>, Peter Xu <peterx@redhat.com>,
	Barry Song <21cnbao@gmail.com>,
	linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, Dev Jain <dev.jain@arm.com>
Subject: Re: [PATCH] smaps: count large pages smaller than PMD size to anonymous_thp
Date: Wed, 4 Dec 2024 17:07:53 +0000	[thread overview]
Message-ID: <b4205df7-e15f-4daf-bf12-720c73e15fa2@arm.com> (raw)
In-Reply-To: <e6199ca4-1f87-4ec5-b886-11482b082931@gmail.com>

+ Dev Jain

On 04/12/2024 14:30, Wenchao Hao wrote:
> On 2024/12/3 22:17, David Hildenbrand wrote:
>> On 03.12.24 14:49, Wenchao Hao wrote:
>>> Currently, /proc/xxx/smaps reports the size of anonymous huge pages for
>>> each VMA, but it does not include large pages smaller than PMD size.
>>>
>>> This patch adds anonymous large pages allocated as mTHP (smaller than
>>> PMD size) to the AnonHugePages field in smaps.
>>>
>>> Signed-off-by: Wenchao Hao <haowenchao22@gmail.com>
>>> ---
>>>   fs/proc/task_mmu.c | 6 ++++++
>>>   1 file changed, 6 insertions(+)
>>>
>>> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
>>> index 38a5a3e9cba2..b655011627d8 100644
>>> --- a/fs/proc/task_mmu.c
>>> +++ b/fs/proc/task_mmu.c
>>> @@ -717,6 +717,12 @@ static void smaps_account(struct mem_size_stats *mss, struct page *page,
>>>           if (!folio_test_swapbacked(folio) && !dirty &&
>>>               !folio_test_dirty(folio))
>>>               mss->lazyfree += size;
>>> +
>>> +        /*
>>> +         * Count large pages smaller than PMD size to anonymous_thp
>>> +         */
>>> +        if (!compound && PageHead(page) && folio_order(folio))
>>> +            mss->anonymous_thp += folio_size(folio);
>>>       }
>>>         if (folio_test_ksm(folio))
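
(Side note on the proposed check, not a functional concern: since
folio_order(folio) != 0 is equivalent to folio_test_large(folio), the
same accounting could be written along these lines, which might read a
little more clearly:

    /* PTE-mapped large folio: account it once, via its head page */
    if (!compound && PageHead(page) && folio_test_large(folio))
        mss->anonymous_thp += folio_size(folio);

Purely a readability suggestion; the behaviour is the same either way.)
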
>>
>>
>> I think we decided to leave this (and /proc/meminfo) as one of the last
>> interfaces that remain concerned only with PMD-sized THPs:
>>
> 
> Could you explain why?
> 
> When analyzing the impact of mTHP on performance, we need to understand
> how much of a process's memory is actually backed by large pages.
> By comparing this value with the process's total memory usage, we can
> estimate its large page allocation success rate and then look further
> into how khugepaged is behaving. If the actual

Note that khugepaged does not yet support collapse to mTHP sizes. Dev Jain
(CCed) is working on an implementation of that now. If you are planning to look
at this area, you might want to chat with him first.

> proportion of large pages is low, the performance of the process may
> suffer, which shows up directly as a high number of TLB misses and
> page faults.
> 
> However, only PMD-sized large pages are currently counted, which is
> insufficient.
> 
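FWIW, the kind of per-process analysis you describe can already be
scripted against smaps; here is a rough sketch in C (the field names are
as they appear in /proc/<pid>/smaps, both reported in kB; with the
current kernel AnonHugePages only covers PMD-mapped THP, which is
exactly the gap you are pointing at):

/*
 * Rough sketch: estimate what fraction of a process's anonymous memory
 * smaps reports as THP-backed, by summing the per-mapping fields.
 *
 * Build: cc -o thp-ratio thp-ratio.c; run: ./thp-ratio <pid>
 */
#include <stdio.h>

int main(int argc, char **argv)
{
	char path[64], line[256];
	unsigned long long anon_kb = 0, anon_thp_kb = 0, val;
	FILE *f;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
		return 1;
	}

	snprintf(path, sizeof(path), "/proc/%s/smaps", argv[1]);
	f = fopen(path, "r");
	if (!f) {
		perror("fopen");
		return 1;
	}

	while (fgets(line, sizeof(line), f)) {
		/* One "Anonymous:"/"AnonHugePages:" line per mapping, in kB. */
		if (sscanf(line, "Anonymous: %llu kB", &val) == 1)
			anon_kb += val;
		else if (sscanf(line, "AnonHugePages: %llu kB", &val) == 1)
			anon_thp_kb += val;
	}
	fclose(f);

	printf("Anonymous:     %llu kB\n", anon_kb);
	printf("AnonHugePages: %llu kB\n", anon_thp_kb);
	if (anon_kb)
		printf("THP-backed:    %.1f%%\n",
		       100.0 * anon_thp_kb / anon_kb);
	return 0;
}
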
>> Documentation/admin-guide/mm/transhuge.rst:
>>
>> The number of PMD-sized anonymous transparent huge pages currently used by the
>> system is available by reading the AnonHugePages field in ``/proc/meminfo``.
>> To identify what applications are using PMD-sized anonymous transparent huge
>> pages, it is necessary to read ``/proc/PID/smaps`` and count the AnonHugePages
>> fields for each mapping. (Note that AnonHugePages only applies to traditional
>> PMD-sized THP for historical reasons and should have been called
>> AnonHugePmdMapped).
>>
> 
> Maybe rename this field, so that AnonHugePages can also include mTHP pages?
> 
> Thanks,
> wenchao
> 
>>
>>
> 



Thread overview: 15+ messages
2024-12-03 13:49 Wenchao Hao
2024-12-03 14:17 ` David Hildenbrand
2024-12-03 14:42   ` Ryan Roberts
2024-12-04 14:40     ` Wenchao Hao
2024-12-04 17:05       ` Ryan Roberts
2024-12-16 15:58         ` Wenchao Hao
2024-12-20  6:48           ` Dev Jain
2024-12-04 14:30   ` Wenchao Hao
2024-12-04 14:37     ` David Hildenbrand
2024-12-04 14:47       ` Wenchao Hao
2024-12-04 17:07     ` Ryan Roberts [this message]
2024-12-06 11:16   ` Lance Yang
2024-12-08  6:06     ` Barry Song
2024-12-09 10:07       ` Ryan Roberts
2024-12-16 16:03       ` Wenchao Hao
