From: David Hildenbrand <david@redhat.com>
To: Ryan Roberts <ryan.roberts@arm.com>, Barry Song <21cnbao@gmail.com>
Cc: John Hubbard <jhubbard@nvidia.com>,
Andrew Morton <akpm@linux-foundation.org>,
Zenghui Yu <yuzenghui@huawei.com>,
Matthew Wilcox <willy@infradead.org>,
Kefeng Wang <wangkefeng.wang@huawei.com>, Zi Yan <ziy@nvidia.com>,
Alistair Popple <apopple@nvidia.com>,
linux-mm@kvack.org
Subject: Re: [RFC PATCH v1] tools/mm: Add thpmaps script to dump THP usage info
Date: Wed, 10 Jan 2024 12:02:45 +0100
Message-ID: <70c30979-e793-4fcb-99b3-e9f2e8fbbf3b@redhat.com>
In-Reply-To: <8070266d-7cff-40c9-892c-2c7d1ca2e85d@arm.com>

On 10.01.24 11:58, Ryan Roberts wrote:
> On 10/01/2024 10:54, David Hildenbrand wrote:
>> On 10.01.24 11:48, Barry Song wrote:
>>> On Wed, Jan 10, 2024 at 6:38 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>>>
>>>> On 10/01/2024 10:30, Barry Song wrote:
>>>>> On Wed, Jan 10, 2024 at 6:23 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>>>>>
>>>>>> On 10/01/2024 09:09, Barry Song wrote:
>>>>>>> On Wed, Jan 10, 2024 at 4:58 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>>>>>>>
>>>>>>>> On 10/01/2024 08:02, Barry Song wrote:
>>>>>>>>> On Wed, Jan 10, 2024 at 12:16 PM John Hubbard <jhubbard@nvidia.com> wrote:
>>>>>>>>>>
>>>>>>>>>> On 1/9/24 19:51, Barry Song wrote:
>>>>>>>>>>> On Wed, Jan 10, 2024 at 11:35 AM John Hubbard <jhubbard@nvidia.com>
>>>>>>>>>>> wrote:
>>>>>>>>>> ...
>>>>>>>>>>>> Hi Ryan,
>>>>>>>>>>>>
>>>>>>>>>>>> One thing that immediately came up during some recent testing of mTHP
>>>>>>>>>>>> on arm64: the pid requirement is sometimes a little awkward. I'm running
>>>>>>>>>>>> tests on one machine at a time for now, inside various containers and
>>>>>>>>>>>> such, and it would be nice if there were an easy way to get some numbers
>>>>>>>>>>>> for the mTHPs across the whole machine.
>>>>>>>>
>>>>>>>> Just to confirm, you're expecting these "global" stats to be truly
>>>>>>>> global and not per-container? (asking because you explicitly
>>>>>>>> mentioned being in a container). If you want per-container, then you
>>>>>>>> can probably just create the container in a cgroup?
>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> I'm not sure if that changes anything about thpmaps here. Probably
>>>>>>>>>>>> this is fine as-is. But I wanted to give some initial reactions from
>>>>>>>>>>>> just some quick runs: the global stats would be convenient.
>>>>>>>>
>>>>>>>> Thanks for taking this for a spin! Appreciate the feedback.
>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> +1. But this seems impossible to get just by scanning pagemap?
>>>>>>>>>>> So may we add these statistics in the kernel, just like
>>>>>>>>>>> /proc/meminfo, or in a separate /proc/mthp_info?
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Yes. From my perspective, it looks like the global stats are more useful
>>>>>>>>>> initially, and the more detailed per-pid or per-cgroup stats are the
>>>>>>>>>> next level of investigation. So it feels odd to start with the more
>>>>>>>>>> detailed stats.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Probably because this can be done without modifying the kernel.
>>>>>>>>
>>>>>>>> Yes indeed, as John said in an earlier thread, my previous attempts
>>>>>>>> to add stats directly in the kernel got pushback; DavidH was
>>>>>>>> concerned that we don't really know exactly how to account mTHPs yet
>>>>>>>> (whole/partial/aligned/unaligned/per-size/etc) so didn't want to end
>>>>>>>> up adding the wrong ABI and having to maintain it forever. There has
>>>>>>>> also been some pushback regarding adding more values to multi-value
>>>>>>>> files in sysfs, so David was suggesting coming up with a whole new
>>>>>>>> scheme at some point (I know /proc/meminfo isn't sysfs, but the
>>>>>>>> equivalent files for NUMA nodes and cgroups do live in sysfs).
>>>>>>>>
>>>>>>>> Anyway, this script was my attempt to 1) provide a short-term
>>>>>>>> solution to the "we need some stats" request and 2) provide a context
>>>>>>>> in which to explore what the right stats are - this script can evolve
>>>>>>>> without the ABI problem.
>>>>>>>>
>>>>>>>>> The detailed per-pid or per-cgroup stats are still quite useful for
>>>>>>>>> my case, in which we set mTHP enabled/disabled and allowed sizes
>>>>>>>>> according to VMA types, e.g. libc_malloc, Java heaps, etc.
>>>>>>>>>
>>>>>>>>> Different VMA types can have different anon_name values, so I can
>>>>>>>>> use the detailed info to find out whether specific VMAs have gotten
>>>>>>>>> mTHP properly and how many they have gotten.
>>>>>>>>>
>>>>>>>>>> However, Ryan did clearly say, above, "In future we may wish to
>>>>>>>>>> introduce stats directly into the kernel (e.g. smaps or similar)". And
>>>>>>>>>> earlier he ran into some pushback on trying to set up /proc or /sys
>>>>>>>>>> values because this is still such an early feature.
>>>>>>>>>>
>>>>>>>>>> I wonder if we could put the global stats in debugfs for now? That's
>>>>>>>>>> specifically supposed to be a "we promise *not* to keep this ABI stable"
>>>>>>>>>> location.
>>>>>>>>
>>>>>>>> Now that I think about it, I wonder if we can add a --global mode to
>>>>>>>> the script (or just infer global when neither --pid nor --cgroup is
>>>>>>>> provided). I think I should be able to determine all the physical
>>>>>>>> memory ranges from /proc/iomem, then grab all the info we need from
>>>>>>>> /proc/kpageflags. We should then be able to process it all in much
>>>>>>>> the same way as for --pid/--cgroup and provide the same stats, but it
>>>>>>>> will apply globally. What do you think?
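
A rough sketch (Python, illustration only; not part of thpmaps, the constants
and names are assumptions, and it needs root) of the kind of global scan
proposed above: walk the "System RAM" ranges in /proc/iomem and read the
per-PFN flag words from /proc/kpageflags. Note it only counts allocated THP
head pages and says nothing about how they are mapped:

#!/usr/bin/env python3
# Sketch: count THP head pages system-wide from /proc/iomem + /proc/kpageflags.
# Assumptions: 4K base pages, flag bit numbers per
# include/uapi/linux/kernel-page-flags.h, CONFIG_PROC_PAGE_MONITOR enabled.
import struct

PAGE_SIZE = 4096
KPF_COMPOUND_HEAD = 15
KPF_THP = 22

def system_ram_pfn_ranges():
    """Yield (first_pfn, end_pfn) for each top-level "System RAM" range."""
    with open('/proc/iomem') as f:
        for line in f:
            if line.strip().endswith('System RAM'):
                span = line.split(':', 1)[0].strip()
                start, end = (int(x, 16) for x in span.split('-'))
                yield start // PAGE_SIZE, (end + 1) // PAGE_SIZE

def count_thp_heads():
    heads = 0
    with open('/proc/kpageflags', 'rb') as f:
        for pfn, end_pfn in system_ram_pfn_ranges():
            f.seek(pfn * 8)                      # one u64 of flags per PFN
            buf = f.read((end_pfn - pfn) * 8)
            for (flags,) in struct.iter_unpack('<Q', buf):
                if flags & (1 << KPF_THP) and flags & (1 << KPF_COMPOUND_HEAD):
                    heads += 1
    return heads

if __name__ == '__main__':
    print(f'THP head pages (all sizes): {count_thp_heads()}')
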
>>>>>>
>>>>>> Having now thought about this for a few mins (in the shower, if anyone
>>>>>> wants the complete picture :) ), this won't quite work. This approach
>>>>>> doesn't have the virtual mapping information so the best it can do is
>>>>>> tell us "how many of each size of THP are allocated?" - it doesn't tell
>>>>>> us anything about whether they are fully or partially mapped or what
>>>>>> their alignment is (all necessary if we want to know if they are
>>>>>> contpte-mapped). So I don't think this approach is going to be
>>>>>> particularly useful.
>>>>>>
>>>>>> And this is also the big problem if we want to gather stats inside the
>>>>>> kernel; if we want something equivalent to /proc/meminfo's
>>>>>> AnonHugePages/ShmemPmdMapped/FilePmdMapped, we need to consider not
>>>>>> just the allocation of the THP but also whether it is mapped. That's
>>>>>> easy for PMD-mappings, because there is only one entry to consider -
>>>>>> when you set it, you increment the number of PMD-mapped THPs; when you
>>>>>> clear it, you decrement. But for PTE-mappings it's harder; you know the
>>>>>> size when you are mapping, so it's easy to increment, but you can do a
>>>>>> partial unmap, so you would need to scan the PTEs to figure out if we
>>>>>> are unmapping the first page of a previously fully-PTE-mapped THP,
>>>>>> which is expensive. We would need a cheap mechanism to determine "is
>>>>>> this folio fully and contiguously mapped in at least one process?".
>>>>>
>>>>> As I shared with you before, OPPO's approach maintains two mapcounts:
>>>>> 1. entire map
>>>>> 2. subpage's map
>>>>> 3. if both 1 and 2 exist, it is DoubleMapped.
>>>>>
>>>>> This isn't a problem for us. Every time we do a partial unmap, we have
>>>>> an explicit cont_pte split which decreases the entire mapcount and
>>>>> increases the subpages' mapcounts.
>>>>>
>>>>> But its downside is that we expose this info to mm-core.
>>>>
>>>> OK, but I think we have a slightly more generic situation going on with
>>>> the upstream; if I've understood correctly, you are using the PTE_CONT
>>>> bit in the PTE to determine if it's fully mapped? That works for your
>>>> case where you only have 1 size of THP that you care about
>>>> (contpte-size). But for the upstream, we have multi-size THP so we can't
>>>> use the PTE_CONT bit to determine if it's fully mapped, because we can
>>>> only use that bit if the THP is at least 64K and aligned, and only on
>>>> arm64. We would need a SW bit for this purpose, and the mm would need to
>>>> update that SW bit for every PTE on the full -> partial map transition.
>>>
>>> My current implementation does use cont_pte, but I don't think it is a
>>> must-have. We don't need a bit in the PTE to know if we are partially
>>> unmapping a large folio at all.
>>>
>>> As long as we are unmapping a part of a large folio, we do know what we
>>> are doing. If a large folio is mapped entirely in a process, we get only
>>> entire_map +1; if we are unmapping a subpage of it, we get entire_map -1
>>> and the remaining subpages' mapcounts +1. If we are only mapping a part
>>> of this large folio, we only increase its subpages' mapcounts.
>>
>> That doesn't work as soon as you unmap a second subpage. Not to mention that
>> people ( :) ) are working on removing the subpage mapcounts.
>
> Yes, that was my point - Oppo's implementation relies on the bit in the PTE to
> tell the difference between unmapping the first subpage and unmapping the
> others. We don't have that luxury here.
Yes, and once we're thinking of bigger folios that eventually span
multiple page tables, these PTE-bit games won't scale.
--
Cheers,
David / dhildenb
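
To make the accounting problem above concrete, here is a toy sketch (Python,
illustration only; the names are invented and this is nothing like the real
kernel data structures). The per-PTE 'cont' flag plays the role of PTE_CONT
or the proposed SW bit: it records that the folio was still fully mapped, so
the first partial unmap can do the "split" cheaply. Without such a marker,
every unmap of a subpage would need a scan of the folio's PTEs to decide
whether an "entirely mapped" counter must be decremented.

NR_SUBPAGES = 4  # pretend the large folio has 4 subpages

class Folio:
    def __init__(self):
        self.entire_mapcount = 0                    # whole-folio mappings
        self.subpage_mapcount = [0] * NR_SUBPAGES   # per-subpage mappings

class Pte:
    def __init__(self, folio, idx, cont):
        self.folio = folio   # which folio this PTE maps
        self.idx = idx       # which subpage of the folio
        self.cont = cont     # "folio fully/contiguously mapped here" marker

def map_whole_folio(folio, ptes):
    for i in range(NR_SUBPAGES):
        ptes[i] = Pte(folio, i, cont=True)
    folio.entire_mapcount += 1

def unmap_one(folio, ptes, i):
    if ptes[i].cont:
        # First partial unmap of a fully mapped folio: "split" - drop the
        # entire mapcount, clear the marker on the siblings and give the
        # still-mapped subpages individual mapcounts.
        folio.entire_mapcount -= 1
        for j in range(NR_SUBPAGES):
            if j != i and ptes[j] is not None:
                ptes[j].cont = False
                folio.subpage_mapcount[j] += 1
    else:
        # Folio already known to be partially mapped: cheap path.
        folio.subpage_mapcount[i] -= 1
    ptes[i] = None
    # Without the per-PTE marker, telling these two cases apart would
    # require scanning every PTE of the folio on each unmap.

folio, ptes = Folio(), [None] * NR_SUBPAGES
map_whole_folio(folio, ptes)
unmap_one(folio, ptes, 0)   # first partial unmap: entire -> subpage counts
unmap_one(folio, ptes, 1)   # second unmap: only subpage 1's count drops
print(folio.entire_mapcount, folio.subpage_mapcount)   # 0 [0, 0, 1, 1]
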
Thread overview: 53+ messages
2024-01-02 15:38 Ryan Roberts
2024-01-03 6:44 ` Barry Song
2024-01-03 8:07 ` William Kucharski
2024-01-03 8:24 ` Ryan Roberts
2024-01-03 9:16 ` Barry Song
2024-01-03 9:35 ` Ryan Roberts
2024-01-03 10:09 ` William Kucharski
2024-01-03 10:20 ` Ryan Roberts
2024-01-04 22:48 ` John Hubbard
2024-01-05 8:35 ` Ryan Roberts
2024-01-05 11:30 ` William Kucharski
2024-01-05 23:07 ` John Hubbard
2024-01-05 23:18 ` John Hubbard
2024-01-10 8:43 ` Ryan Roberts
2024-01-05 8:40 ` Ryan Roberts
2024-01-10 3:34 ` John Hubbard
2024-01-10 3:51 ` Barry Song
2024-01-10 4:15 ` John Hubbard
2024-01-10 8:02 ` Barry Song
2024-01-10 8:58 ` Ryan Roberts
2024-01-10 9:09 ` Barry Song
2024-01-10 9:20 ` Ryan Roberts
2024-01-10 10:23 ` Ryan Roberts
2024-01-10 10:30 ` Barry Song
2024-01-10 10:38 ` Ryan Roberts
2024-01-10 10:42 ` David Hildenbrand
2024-01-10 10:55 ` Ryan Roberts
2024-01-10 11:00 ` David Hildenbrand
2024-01-10 11:20 ` Ryan Roberts
2024-01-10 11:24 ` David Hildenbrand
2024-01-10 11:38 ` Barry Song
2024-01-10 11:59 ` Ryan Roberts
2024-01-10 12:05 ` Barry Song
2024-01-10 12:12 ` David Hildenbrand
2024-01-10 15:19 ` Zi Yan
2024-01-10 15:27 ` David Hildenbrand
2024-01-10 22:14 ` Barry Song
2024-01-11 12:25 ` Ryan Roberts
2024-01-11 13:18 ` David Hildenbrand
2024-01-11 20:21 ` Barry Song
2024-01-11 20:28 ` David Hildenbrand
2024-01-12 6:03 ` Barry Song
2024-01-12 10:44 ` Ryan Roberts
2024-01-12 10:18 ` Ryan Roberts
2024-01-17 15:49 ` David Hildenbrand
2024-01-11 20:45 ` Barry Song
2024-01-12 10:25 ` Ryan Roberts
2024-01-10 23:34 ` Barry Song
2024-01-10 10:48 ` Barry Song
2024-01-10 10:54 ` David Hildenbrand
2024-01-10 10:58 ` Ryan Roberts
2024-01-10 11:02 ` David Hildenbrand [this message]
2024-01-10 11:07 ` Barry Song