From: David Hildenbrand <david@redhat.com>
To: Ryan Roberts <ryan.roberts@arm.com>,
	Lance Yang <ioworker0@gmail.com>,
	Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Hugh Dickins <hughd@google.com>, Jonathan Corbet <corbet@lwn.net>,
	"Matthew Wilcox (Oracle)" <willy@infradead.org>,
	Barry Song <baohua@kernel.org>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v1 2/2] mm: mTHP stats for pagecache folio allocations
Date: Wed, 17 Jul 2024 12:25:56 +0200	[thread overview]
Message-ID: <f021fb0f-fe2b-4ba3-abc1-d649967ebeb4@redhat.com> (raw)
In-Reply-To: <41831175-6ea4-4e0b-8588-e51e5ee87f19@arm.com>

On 17.07.24 12:18, Ryan Roberts wrote:
> On 17/07/2024 11:03, David Hildenbrand wrote:
>>>>>>>
>>>>>>> But today, controls and stats are exposed for:
>>>>>>>
>>>>>>>       anon:
>>>>>>>         min order: 2
>>>>>>>         max order: PMD_ORDER
>>>>>>>       anon-shmem:
>>>>>>>         min order: 2
>>>>>>>         max order: PMD_ORDER
>>>>>>>       tmpfs-shmem:
>>>>>>>         min order: PMD_ORDER
>>>>>>>         max order: PMD_ORDER
>>>>>>>       file:
>>>>>>>         min order: Nothing yet (this patch proposes 1)
>>>>>>>         max order: Nothing yet (this patch proposes MAX_PAGECACHE_ORDER)
>>>>>>>
>>>>>>> So I think there is definitely a bug for shmem where the minimum order
>>>>>>> control should be order-1 but it's currently order-2.
>>>>>>
>>>>>> Maybe, I did not play with that yet. Likely order-1 will work (although
>>>>>> probably of questionable use :) ).
>>>>>
>>>>> You might have to expand on why it's of "questionable use". I'd assume it has
>>>>> the same amount of value as using order-1 for regular page cache pages? i.e.,
>>>>> half the number of objects to manage for the same amount of memory.
>>>>
>>>> order-1 was recently added for the pagecache to get some device setups running
>>>> (IIRC, where we cannot use order-0, because device blocksize > PAGE_SIZE).
>>>>
>>>> You might be right about "half the number of objects", but likely just going for
>>>> order-2, order-3, order-4 ... for shmem might be even better. And simply falling
>>>> back to order-0 when you cannot get the larger orders.
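
(Illustration added here for clarity; it is not part of the original mail. A rough
userspace C sketch of that "try the enabled larger orders, fall back to order-0"
idea. The helper names, the order bitmask, and the PAGE_SIZE constant are made up
for this sketch; this is not the actual shmem/pagecache allocation code.)

/*
 * Hypothetical sketch only: try the largest enabled order first and step
 * down, finally falling back to a single order-0 page.
 */
#include <stdlib.h>

#define PAGE_SIZE 4096UL			/* assumption: 4 KiB base pages */

/* Stand-in allocator: pretend orders above 2 fail under fragmentation. */
static void *try_alloc_folio(unsigned int order)
{
	return order <= 2 ? malloc(PAGE_SIZE << order) : NULL;
}

/* enabled_orders: bit N set means order-N allocations are allowed. */
static void *alloc_with_fallback(unsigned long enabled_orders,
				 unsigned int max_order)
{
	for (unsigned int order = max_order; order > 0; order--) {
		if (!(enabled_orders & (1UL << order)))
			continue;
		void *folio = try_alloc_folio(order);
		if (folio)
			return folio;	/* largest order we could get */
	}
	return try_alloc_folio(0);		/* final fallback: order-0 */
}

int main(void)
{
	/* e.g. orders 4, 3 and 2 enabled */
	void *folio = alloc_with_fallback((1UL << 4) | (1UL << 3) | (1UL << 2), 4);
	free(folio);
	return 0;
}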
>>>
>>> Sure, but then you're into the territory of baking in policy. Remember that
>>> originally I was only interested in 64K but the consensus was to expose all the
>>> sizes. Same argument applies to 8K; expose it and let others decide policy.
>>
>> I don't disagree. The point I'm trying to make is that, so far, there has been
>> no strong evidence that it is really required. Support for the pagecache had a
>> different motivation: these special devices.
> 
> Sure, but there was no clear need for anon mTHP orders other than order-2 and
> order-4 (for arm64's HPA and contpte, respectively), yet we still chose to
> expose all the others.

order-2 and order-3 are valuable for AMD EPYC (depending on the
generation, 16 vs. 32 KiB coalescing).

But in general, at least for me, it's easier to argue why larger orders 
make more sense than very tiny ones.

For example, order-5 can be mapped using cont-pte as well, and you get
roughly half the memory allocation + page fault overhead compared to order-4.
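
(Added worked example, assuming 4 KiB base pages; not part of the original mail.)
An order-N folio is PAGE_SIZE << N bytes, so order-2 is 16 KiB and order-3 is
32 KiB (the two EPYC coalescing sizes mentioned above), order-4 is 64 KiB (the 16
contiguous PTEs covered by arm64's cont-pte), and order-5 is 128 KiB, i.e. half
as many allocations and faults as order-4 for the same amount of memory:

/* Illustration only: print folio size and base-page count per order,
 * assuming 4 KiB base pages. */
#include <stdio.h>

#define PAGE_SIZE 4096UL	/* assumption: 4 KiB base pages */

int main(void)
{
	for (unsigned int order = 1; order <= 5; order++)
		printf("order-%u: %3lu KiB (%2lu base pages)\n",
		       order, (PAGE_SIZE << order) >> 10, 1UL << order);
	return 0;
}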

order-1? No TLB optimization, at least on any current HW I know of.

But I believe we're in violent agreement here :)

-- 
Cheers,

David / dhildenb




Thread overview: 28+ messages
2024-07-11  7:29 [PATCH v1 0/2] mTHP allocation stats for file-backed memory Ryan Roberts
2024-07-11  7:29 ` [PATCH v1 1/2] mm: Cleanup count_mthp_stat() definition Ryan Roberts
2024-07-11  8:20   ` Barry Song
2024-07-12  2:31   ` Baolin Wang
2024-07-12 11:57   ` Lance Yang
2024-07-11  7:29 ` [PATCH v1 2/2] mm: mTHP stats for pagecache folio allocations Ryan Roberts
2024-07-12  3:00   ` Baolin Wang
2024-07-12 12:22     ` Lance Yang
2024-07-13  1:08       ` David Hildenbrand
2024-07-13 10:45         ` Ryan Roberts
2024-07-16  8:31           ` Ryan Roberts
2024-07-16 10:19             ` David Hildenbrand
2024-07-16 11:14               ` Ryan Roberts
2024-07-17  8:02                 ` David Hildenbrand
2024-07-17  8:29                   ` Ryan Roberts
2024-07-17  8:44                     ` David Hildenbrand
2024-07-17  9:50                       ` Ryan Roberts
2024-07-17 10:03                         ` David Hildenbrand
2024-07-17 10:18                           ` Ryan Roberts
2024-07-17 10:25                             ` David Hildenbrand [this message]
2024-07-17 10:48                               ` Ryan Roberts
2024-07-13 11:00     ` Ryan Roberts
2024-07-13 12:54       ` Baolin Wang
2024-07-14  9:05         ` Ryan Roberts
2024-07-22  3:52           ` Baolin Wang
2024-07-22  7:36             ` Ryan Roberts
2024-07-12 22:44   ` kernel test robot
2024-07-15 13:55     ` Ryan Roberts
