From: Ryan Roberts <ryan.roberts@arm.com>
To: Barry Song <21cnbao@gmail.com>
Cc: John Hubbard <jhubbard@nvidia.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Zenghui Yu <yuzenghui@huawei.com>,
	Matthew Wilcox <willy@infradead.org>,
	David Hildenbrand <david@redhat.com>,
	Kefeng Wang <wangkefeng.wang@huawei.com>, Zi Yan <ziy@nvidia.com>,
	Alistair Popple <apopple@nvidia.com>,
	linux-mm@kvack.org
Subject: Re: [RFC PATCH v1] tools/mm: Add thpmaps script to dump THP usage info
Date: Wed, 10 Jan 2024 09:20:48 +0000
Message-ID: <4b38cf27-9aee-468c-bdfb-12637b59b600@arm.com>
In-Reply-To: <CAGsJ_4wZGEMgVygmEzHs3LLHfUcEN2oqzuHTSsTk5y1wAhxO7g@mail.gmail.com>

On 10/01/2024 09:09, Barry Song wrote:
> On Wed, Jan 10, 2024 at 4:58 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>
>> On 10/01/2024 08:02, Barry Song wrote:
>>> On Wed, Jan 10, 2024 at 12:16 PM John Hubbard <jhubbard@nvidia.com> wrote:
>>>>
>>>> On 1/9/24 19:51, Barry Song wrote:
>>>>> On Wed, Jan 10, 2024 at 11:35 AM John Hubbard <jhubbard@nvidia.com> wrote:
>>>> ...
>>>>>> Hi Ryan,
>>>>>>
>>>>>> One thing that immediately came up during some recent testing of mTHP
>>>>>> on arm64: the pid requirement is sometimes a little awkward. I'm running
>>>>>> tests on one machine at a time for now, inside various containers and
>>>>>> such, and it would be nice if there were an easy way to get some numbers
>>>>>> for the mTHPs across the whole machine.
>>
>> Just to confirm, you're expecting these "global" stats to be truly global and
>> not per-container? (asking because you explicitly mentioned being in a container).
>> If you want per-container, then you can probably just create the container in a
>> cgroup?
>>
>>>>>>
>>>>>> I'm not sure if that changes anything about thpmaps here. Probably
>>>>>> this is fine as-is. But I wanted to give some initial reactions from
>>>>>> just some quick runs: the global stats would be convenient.
>>
>> Thanks for taking this for a spin! Appreciate the feedback.
>>
>>>>>
>>>>> +1. But this seems impossible to achieve by scanning pagemap?
>>>>> So could we add this statistics information in the kernel, just like
>>>>> /proc/meminfo, or in a separate /proc/mthp_info?
>>>>>
>>>>
>>>> Yes. From my perspective, it looks like the global stats are more useful
>>>> initially, and the more detailed per-pid or per-cgroup stats are the
>>>> next level of investigation. So it feels odd to start with the more
>>>> detailed stats.
>>>>
>>>
>>> probably because this can be done without modifying the kernel.
>>
>> Yes indeed, as John said in an earlier thread, my previous attempts to add stats
>> directly in the kernel got pushback; DavidH was concerned that we don't really
>> know exactly how to account mTHPs yet
>> (whole/partial/aligned/unaligned/per-size/etc), so he didn't want to end up adding
>> the wrong ABI and having to maintain it forever. There has also been some
>> pushback regarding adding more values to multi-value files in sysfs, so David
>> was suggesting coming up with a whole new scheme at some point (I know
>> /proc/meminfo isn't sysfs, but the equivalent files for NUMA nodes and cgroups
>> do live in sysfs).
>>
>> Anyway, this script was my attempt to 1) provide a short term solution to the
>> "we need some stats" request and 2) provide a context in which to explore what
>> the right stats are - this script can evolve without the ABI problem.
>>
>>> The detailed per-pid or per-cgroup info is still quite useful in my case,
>>> where we set mTHP enabled/disabled and allowed sizes according to VMA
>>> types, e.g. libc_malloc, java heaps etc.
>>>
>>> Different VMA types can have different anon_names, so I can use the
>>> detailed info to find out whether specific VMAs have gotten mTHP
>>> properly and how many they have gotten.
>>>
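
For reference, a minimal sketch of the naming mechanism Barry describes: anon
VMAs get their names via prctl(PR_SET_VMA, PR_SET_VMA_ANON_NAME, ...),
available since 5.17 with CONFIG_ANON_VMA_NAME. The "java_heap" name and the
16M size below are made up for illustration:

import ctypes, mmap, os

libc = ctypes.CDLL(None, use_errno=True)
PR_SET_VMA = 0x53564d41             # constants from linux/prctl.h
PR_SET_VMA_ANON_NAME = 0

# Create an anonymous private mapping to stand in for e.g. a java heap.
length = 16 * 1024 * 1024
buf = mmap.mmap(-1, length, flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
name = ctypes.create_string_buffer(b'java_heap')

# The kernel copies the string; the VMA then shows up in
# /proc/self/maps as "[anon:java_heap]".
if libc.prctl(PR_SET_VMA, PR_SET_VMA_ANON_NAME,
              ctypes.c_ulong(addr), ctypes.c_ulong(length), name) != 0:
    err = ctypes.get_errno()
    raise OSError(err, os.strerror(err))

The script can then match on these names when deciding which VMAs to report.
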
>>>> However, Ryan did clearly say, above, "In future we may wish to
>>>> introduce stats directly into the kernel (e.g. smaps or similar)". And
>>>> earlier he ran into some pushback on trying to set up /proc or /sys
>>>> values because this is still such an early feature.
>>>>
>>>> I wonder if we could put the global stats in debugfs for now? That's
>>>> specifically supposed to be a "we promise *not* to keep this ABI stable"
>>>> location.
>>
>> Now that I think about it, I wonder if we can add a --global mode to the script
>> (or just infer global when neither --pid nor --cgroup is provided). I think I
>> should be able to determine all the physical memory ranges from /proc/iomem,
>> then grab all the info we need from /proc/kpageflags. We should then be able to
>> process it all in much the same way as for --pid/--cgroup and provide the same
>> stats, but it will apply globally. What do you think?
> 
> for debug purposes, it should be good. But imagine there is a health
> monitor which needs to sample the stats of large folios online and
> periodically - this might be too expensive.

Yes, understood - the long-term aim needs to be to get stats into the kernel.
This is intended as a step to help make that happen.
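
For concreteness, here is a rough, untested sketch of the sort of thing I have
in mind for the global mode. The flag bit numbers come from
include/uapi/linux/kernel-page-flags.h; 4K base pages and root access are
assumed, and a real version would read kpageflags in chunks rather than one
buffer per range:

import struct

PAGE_SIZE = 4096            # assumption: 4K base pages
KPF_COMPOUND_HEAD = 15      # bit numbers from linux/kernel-page-flags.h
KPF_THP = 22

def system_ram_pfn_ranges():
    # Parse the "System RAM" spans out of /proc/iomem (root is needed to
    # see real addresses) and convert them to PFN ranges.
    ranges = []
    with open('/proc/iomem') as f:
        for line in f:
            if line.strip().endswith('System RAM'):
                span = line.strip().split(' ', 1)[0]
                start, end = (int(x, 16) for x in span.split('-'))
                ranges.append((start // PAGE_SIZE, (end + 1) // PAGE_SIZE))
    return ranges

def global_thp_counts():
    # /proc/kpageflags holds one native-endian u64 of flags per PFN. KPF_THP
    # is set on every page of a THP; KPF_COMPOUND_HEAD only on the first.
    nr_thp_pages = nr_thp_folios = 0
    with open('/proc/kpageflags', 'rb') as f:
        for start_pfn, end_pfn in system_ram_pfn_ranges():
            f.seek(start_pfn * 8)
            buf = f.read((end_pfn - start_pfn) * 8)
            for (flags,) in struct.iter_unpack('=Q', buf):
                if flags & (1 << KPF_THP):
                    nr_thp_pages += 1
                    if flags & (1 << KPF_COMPOUND_HEAD):
                        nr_thp_folios += 1
    return nr_thp_pages, nr_thp_folios

pages, folios = global_thp_counts()
print(f'THP pages: {pages}, THP folios: {folios}')

Per-size buckets would need a bit more work (e.g. counting the run of
KPF_COMPOUND_TAIL pages after each head), but the processing should otherwise
mirror the existing --pid path.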

> 
>>
>> If we can possibly avoid sysfs/debugfs I would prefer to keep it all in a script
>> for now.
>>
>>>
>>> +1.
>>>
>>>>
>>>>
>>>> thanks,
>>>> --
>>>> John Hubbard
>>>> NVIDIA
>>>>
>>>
> 
> Thanks
> Barry


