From: Ryan Roberts <ryan.roberts@arm.com>
To: David Hildenbrand <david@redhat.com>,
Lance Yang <ioworker0@gmail.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Hugh Dickins <hughd@google.com>, Jonathan Corbet <corbet@lwn.net>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
Barry Song <baohua@kernel.org>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v1 2/2] mm: mTHP stats for pagecache folio allocations
Date: Wed, 17 Jul 2024 11:48:43 +0100
Message-ID: <d727ceb1-8396-4303-b8c1-dbaf75f760fc@arm.com>
In-Reply-To: <f021fb0f-fe2b-4ba3-abc1-d649967ebeb4@redhat.com>

On 17/07/2024 11:25, David Hildenbrand wrote:
> On 17.07.24 12:18, Ryan Roberts wrote:
>> On 17/07/2024 11:03, David Hildenbrand wrote:
>>>>>>>>
>>>>>>>> But today, controls and stats are exposed for:
>>>>>>>>
>>>>>>>> anon:
>>>>>>>>   min order: 2
>>>>>>>>   max order: PMD_ORDER
>>>>>>>> anon-shmem:
>>>>>>>>   min order: 2
>>>>>>>>   max order: PMD_ORDER
>>>>>>>> tmpfs-shmem:
>>>>>>>>   min order: PMD_ORDER
>>>>>>>>   max order: PMD_ORDER
>>>>>>>> file:
>>>>>>>>   min order: Nothing yet (this patch proposes 1)
>>>>>>>>   max order: Nothing yet (this patch proposes MAX_PAGECACHE_ORDER)
>>>>>>>>
>>>>>>>> So I think there is definitely a bug for shmem where the minimum order
>>>>>>>> control should be order-1 but it's currently order-2.
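
As a point of reference for the orders in the table above: an order-n folio
covers PAGE_SIZE << n bytes, so with 4 KiB base pages order-1 is 8 KiB,
order-2 is 16 KiB and PMD_ORDER (typically 9 with 4 KiB pages) is 2 MiB. A
minimal userspace sketch of that mapping follows; the constants are assumed
for illustration rather than taken from any particular kernel config:

    #include <stdio.h>

    #define PAGE_SIZE 4096UL  /* assumed 4 KiB base pages */
    #define PMD_ORDER 9       /* typical for 4 KiB pages; config-dependent */

    int main(void)
    {
            /* Print the folio size in KiB for each order up to PMD_ORDER. */
            for (unsigned int order = 1; order <= PMD_ORDER; order++)
                    printf("order-%u folio: %lu KiB\n",
                           order, (PAGE_SIZE << order) / 1024);
            return 0;
    }
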
>>>>>>>
>>>>>>> Maybe, did not play with that yet. Likely order-1 will work. (although
>>>>>>> probably of questionable use :) )
>>>>>>
>>>>>> You might have to expand on why it's of "questionable use". I'd assume it
>>>>>> has the same amount of value as using order-1 for regular page cache pages?
>>>>>> i.e. half the number of objects to manage for the same amount of memory.
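
(Rough arithmetic to put numbers on that, assuming 4 KiB base pages: 1 GiB of
pagecache is 262144 order-0 folios but only 131072 order-1 folios to allocate,
track and reclaim.)
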
>>>>>
>>>>> order-1 was recently added for the pagecache to get some device setups running
>>>>> (IIRC, where we cannot use order-0, because device blocksize > PAGE_SIZE).
>>>>>
>>>>> You might be right about "half the number of objects", but likely just going
>>>>> for order-2, order-3, order-4 ... for shmem might be even better. And simply
>>>>> falling back to order-0 when you cannot get the larger orders.
>>>>
>>>> Sure, but then you're into the territory of baking in policy. Remember that
>>>> originally I was only interested in 64K but the consensus was to expose all the
>>>> sizes. Same argument applies to 8K; expose it and let others decide policy.
>>>
>>> I don't disagree. The point I'm trying to make is that so far there was no
>>> strong evidence that it is really required. Support for the pagecache had a
>>> different motivation: these special devices.
>>
>> Sure, but there was no clear need for anon mTHP orders other than order-2 and
>> order-4 (for arm64's HPA and contpte, respectively), yet we still chose to
>> expose all the others.
>
> order-2 and order-3 are valuable for AMD EPYC (depending on the generation,
> 16 vs. 32 KiB coalescing).
>
> But in general, at least for me, it's easier to argue why larger orders make
> more sense than very tiny ones.
>
> For example, order-5 can be mapped using cont-pte as well and you get roughly
> half the memory allocation+page fault overhead compared to order-4.
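
(For concreteness, assuming arm64 with 4 KiB base pages, where a contpte block
spans 16 PTEs / 64 KiB: an order-5 folio (128 KiB) still maps as two contpte
blocks, but takes one allocation and one fault where two order-4 folios would
take two of each.)
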
>
> order-1? No TLB optimization, at least on any current HW I know of.
I believe there are some variants of HPA that coalesce "up to" 4 pages, meaning
2 pages (or 3 or 4) could be coalesced into a single TLB entry. But I'm not 100%
sure on that.
>
> But I believe we're in violent agreement here :)
>
Thread overview: 28+ messages in thread
2024-07-11 7:29 [PATCH v1 0/2] mTHP allocation stats for file-backed memory Ryan Roberts
2024-07-11 7:29 ` [PATCH v1 1/2] mm: Cleanup count_mthp_stat() definition Ryan Roberts
2024-07-11 8:20 ` Barry Song
2024-07-12 2:31 ` Baolin Wang
2024-07-12 11:57 ` Lance Yang
2024-07-11 7:29 ` [PATCH v1 2/2] mm: mTHP stats for pagecache folio allocations Ryan Roberts
2024-07-12 3:00 ` Baolin Wang
2024-07-12 12:22 ` Lance Yang
2024-07-13 1:08 ` David Hildenbrand
2024-07-13 10:45 ` Ryan Roberts
2024-07-16 8:31 ` Ryan Roberts
2024-07-16 10:19 ` David Hildenbrand
2024-07-16 11:14 ` Ryan Roberts
2024-07-17 8:02 ` David Hildenbrand
2024-07-17 8:29 ` Ryan Roberts
2024-07-17 8:44 ` David Hildenbrand
2024-07-17 9:50 ` Ryan Roberts
2024-07-17 10:03 ` David Hildenbrand
2024-07-17 10:18 ` Ryan Roberts
2024-07-17 10:25 ` David Hildenbrand
2024-07-17 10:48 ` Ryan Roberts [this message]
2024-07-13 11:00 ` Ryan Roberts
2024-07-13 12:54 ` Baolin Wang
2024-07-14 9:05 ` Ryan Roberts
2024-07-22 3:52 ` Baolin Wang
2024-07-22 7:36 ` Ryan Roberts
2024-07-12 22:44 ` kernel test robot
2024-07-15 13:55 ` Ryan Roberts