From: Dev Jain <dev.jain@arm.com>
To: Nico Pache <npache@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
	linux-mm@kvack.org, ryan.roberts@arm.com,
	anshuman.khandual@arm.com, catalin.marinas@arm.com,
	cl@gentwo.org, vbabka@suse.cz, mhocko@suse.com,
	apopple@nvidia.com, dave.hansen@linux.intel.com, will@kernel.org,
	baohua@kernel.org, jack@suse.cz, srivatsa@csail.mit.edu,
	haowenchao22@gmail.com, hughd@google.com,
	aneesh.kumar@kernel.org, yang@os.amperecomputing.com,
	peterx@redhat.com, ioworker0@gmail.com,
	wangkefeng.wang@huawei.com, ziy@nvidia.com, jglisse@google.com,
	surenb@google.com, vishal.moola@gmail.com, zokeefe@google.com,
	zhengqi.arch@bytedance.com, jhubbard@nvidia.com,
	21cnbao@gmail.com, willy@infradead.org,
	kirill.shutemov@linux.intel.com, david@redhat.com,
	aarcange@redhat.com, raquini@redhat.com, sunnanyong@huawei.com,
	usamaarif642@gmail.com, audra@redhat.com,
	akpm@linux-foundation.org, rostedt@goodmis.org,
	mathieu.desnoyers@efficios.com, tiwai@suse.de
Subject: Re: [RFC v2 0/9] khugepaged: mTHP support
Date: Thu, 13 Feb 2025 16:51:19 +0530	[thread overview]
Message-ID: <0909ff71-ab15-495d-bb49-a432295c6e92@arm.com> (raw)
In-Reply-To: <4ba52062-1bd3-4d53-aa28-fcbbd4913801@arm.com>



On 13/02/25 1:56 pm, Dev Jain wrote:
> 
> 
> On 12/02/25 10:19 pm, Nico Pache wrote:
>> On Tue, Feb 11, 2025 at 5:50 AM Dev Jain <dev.jain@arm.com> wrote:
>>>
>>>
>>>
>>> On 11/02/25 6:00 am, Nico Pache wrote:
>>>> The following series provides khugepaged and madvise collapse with the
>>>> capability to collapse regions to mTHPs.
>>>>
>>>> To achieve this we generalize the khugepaged functions to no longer
>>>> depend on PMD_ORDER. Then during the PMD scan, we keep track of
>>>> chunks of pages (defined by MTHP_MIN_ORDER) that are utilized. This
>>>> info is tracked using a bitmap. After the PMD scan is done, we do
>>>> binary recursion on the bitmap to find the optimal mTHP sizes for
>>>> the PMD range. The restriction on max_ptes_none is removed during
>>>> the scan, to make sure we account for the whole PMD range.
>>>> max_ptes_none will be scaled by the attempted collapse order to
>>>> determine how full a THP must be to be eligible. If an mTHP collapse
>>>> is attempted, but contains swapped-out or shared pages, we don't
>>>> perform the collapse.
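
For concreteness (this is my reading of the above; the names below are
mine, not necessarily what the patches use), the scaling amounts to
shrinking the empty-PTE budget by the same power of two as the region:

    /*
     * Illustrative sketch only. An order-`order` region spans
     * (1 << order) PTEs, so the PMD-level max_ptes_none budget is
     * scaled down by the difference in orders.
     */
    static bool region_eligible(int nr_present, int order)
    {
        int scaled_none = khugepaged_max_ptes_none >>
                                (HPAGE_PMD_ORDER - order);

        return nr_present >= (1 << order) - scaled_none;
    }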
>>>>
>>>> With the default max_ptes_none=511, the code should keep most of
>>>> its original behavior. To exercise mTHP collapse we need to set
>>>> max_ptes_none<=255. With max_ptes_none > HPAGE_PMD_NR/2 you will
>>>> experience collapse "creep" and constantly promote mTHPs to the
>>>> next available size.
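
To make the "creep" concrete, assuming the proportional scaling
sketched above (the numbers are mine, purely illustrative): take
max_ptes_none = 300, which is > HPAGE_PMD_NR/2 = 256 on x86-64 with 4K
pages (HPAGE_PMD_ORDER = 9):

    order  PTEs  300 >> (9-order)  present needed  present after prior collapse
      4      16          9                7          -
      5      32         18               14          16  (eligible)
      6      64         37               27          32  (eligible)
      7     128         75               53          64  (eligible)
      8     256        150              106         128  (eligible)
      9     512        300              212         256  (eligible)

Because each collapse fully populates its region, and the threshold
tolerates more than half of the next-larger region being empty, every
collapse makes the next order eligible, all the way up to the PMD.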
>>>>
>>>> Patch 1:     Some refactoring to combine madvise_collapse and khugepaged
>>>> Patch 2:     Refactor/rename hpage_collapse
>>>> Patch 3-5:   Generalize khugepaged functions for arbitrary orders
>>>> Patch 6-9:   The mTHP patches
>>>>
>>>> ---------
>>>>    Testing
>>>> ---------
>>>> - Built for x86_64, aarch64, ppc64le, and s390x
>>>> - selftests mm
>>>> - I created a test script that I used to push khugepaged to its
>>>>     limits while monitoring a number of stats and tracepoints. The
>>>>     code is available here[1] (run in legacy mode for these changes
>>>>     and set mTHP sizes to inherit). The summary from my testing was
>>>>     that there was no significant regression noticed through this
>>>>     test. In some cases my changes had better collapse latencies,
>>>>     and were able to scan more pages in the same amount of
>>>>     time/work, but for the most part the results were consistent.
>>>> - redis testing. I tested these changes along with my defer changes
>>>>     (see follow-up post for more details).
>>>> - some basic testing on 64k page size.
>>>> - lots of general use. These changes have been running in my VM for
>>>>     some time.
>>>>
>>>> Changes since V1 [2]:
>>>> - Minor bug fixes discovered during review and testing
>>>> - Removed dynamic allocations for bitmaps, and made them stack based
>>>> - Adjusted bitmap offset from u8 to u16 to support 64k page size
>>>> - Updated trace events to include collapsing order info
>>>> - Scaled max_ptes_none by order rather than scaling to a 0-100 scale
>>>> - No longer require a chunk to be fully utilized before setting the
>>>>     bit. Use the same max_ptes_none scaling principle to achieve
>>>>     this.
>>>> - Skip mTHP collapse that requires swapin or shared handling. This
>>>>     helps prevent some of the "creep" that was discovered in v1.
>>>>
>>>> [1] - https://gitlab.com/npache/khugepaged_mthp_test
>>>> [2] - https://lore.kernel.org/lkml/20250108233128.14484-1-npache@redhat.com/
>>>>
>>>> Nico Pache (9):
>>>>     introduce khugepaged_collapse_single_pmd to unify khugepaged and
>>>>       madvise_collapse
>>>>     khugepaged: rename hpage_collapse_* to khugepaged_*
>>>>     khugepaged: generalize hugepage_vma_revalidate for mTHP support
>>>>     khugepaged: generalize alloc_charge_folio for mTHP support
>>>>     khugepaged: generalize __collapse_huge_page_* for mTHP support
>>>>     khugepaged: introduce khugepaged_scan_bitmap for mTHP support
>>>>     khugepaged: add mTHP support
>>>>     khugepaged: improve tracepoints for mTHP orders
>>>>     khugepaged: skip collapsing mTHP to smaller orders
>>>>
>>>>    include/linux/khugepaged.h         |   4 +
>>>>    include/trace/events/huge_memory.h |  34 ++-
>>>>    mm/khugepaged.c                    | 422 +++++++++++++++++++----------
>>>>    3 files changed, 306 insertions(+), 154 deletions(-)
>>>>
>>>
>>> Does this patchset suffer from the problem described here:
>>> https://lore.kernel.org/all/8abd99d5-329f-4f8d-8680-c2d48d4963b6@arm.com/
>> Hi Dev,
>>
>> Sorry I meant to get back to you about that.
>>
>> I understand your concern, but like I've mentioned before, the scan
>> with the read lock was done so we don't have to do the more expensive
>> locking, and could still gain insight into the state. You are right
>> that this info could become stale if the state changes dramatically,
>> but the collapse_isolate function will verify it and not collapse.
> 
> If the state changes dramatically, the _isolate function will verify it, 
> and fall back. And this fallback happens after following this costly 
> path: retrieve a large folio from the buddy allocator -> swapin pages 
> from the disk -> mmap_write_lock() -> anon_vma_lock_write() -> TLB flush 
> on all CPUs -> fall back in _isolate().
> If you do fail in _isolate(), doesn't it make sense to get the updated 
> state for the next fallback order immediately, because we have prior 
> information that we failed because of PTE state? What your algorithm 
> will do is *still* follow the costly path described above, and again 
> fail in _isolate(), instead of failing in hpage_collapse_scan_pmd() like 
> mine would.
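
Spelling out the two control flows I am contrasting (hypothetical
structure and naming on my part, not literal code from either series):

    /* Verify-late: a stale scan is only discovered in _isolate(). */
    for (order = HPAGE_PMD_ORDER; order >= min_order; order--) {
        folio = allocate_folio(order);     /* buddy allocator   */
        swap_in_range(range);              /* possible disk I/O */
        mmap_write_lock(mm);               /* heavy lock        */
        flush_tlb_for_range(range);        /* IPIs to all CPUs  */
        if (isolate_and_collapse(range, order))
            break;                         /* success           */
        mmap_write_unlock(mm);             /* stale PTEs: all of the
                                              above was wasted work */
    }

    /* Verify-early: refresh the cheap read-locked scan per order. */
    for (order = HPAGE_PMD_ORDER; order >= min_order; order--) {
        if (!rescan_ptes_read_locked(range, order))
            continue;                      /* fail cheaply, here */
        /* ...the costly path, keeping _isolate() as the final recheck */
    }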
> 
> The verification of the PTE state by the _isolate() function is the "no 
> turning back" point of the algorithm. The verification by 
> hpage_collapse_scan_pmd() is the "let us see if proceeding is even worth 
> it, before we do costly operations" point of the algorithm.
> 
>> From my testing I found this to rarely happen.
> 
> Unfortunately, I am not very familiar with performance/load testing; I 
> am fairly new to kernel programming, so I am getting there. But it 
> really depends on the type of test you are running, what actually runs 
> on memory-intensive systems, and so on. In fact, on loaded systems I 
> would expect the PTE state to change dramatically. But still, no 
> opinion here.
> 
>>
>> Also, khugepaged, my changes, and your changes are all victims of
>> this. Once we drop the read lock (to either allocate the folio, or
>> right before acquiring the write_lock), the state can change. In your
>> case, yes, you are gathering more up-to-date information, but is it
>> really that important/worth it to retake locks and rescan for each
>> instance if we are about to reverify with the write lock taken?
> 
> You said "reverify": you are removing the verification, so this step 
> won't be reverification, it will be verification. We do not want to 
> verify *after* we have already done 95% of the latency-heavy work, only 
> to learn that we are going to fail.
> 
> Algorithms in the kernel, in general, are of the following form: 1) 
> verify that a condition is true, resulting in taking a control path -> 
> 2) do a lot of stuff -> 3) the "no turning back" step, wherein before 
> committing (by taking locks, say), reverify that this is the control 
> path we should be in. You are eliminating step 1).
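
As a toy illustration of that shape (all helpers here are hypothetical
stand-ins):

    static void collapse_one(struct range *r)
    {
        /* 1) Cheap check: bail before any expensive work. */
        if (!quick_scan_says_collapsible(r))
            return;

        /* 2) The latency-heavy part (allocation, swap-in, ...). */
        do_expensive_setup(r);

        /* 3) "No turning back": reverify under the heavy lock. */
        heavy_lock();
        if (recheck_says_collapsible(r))
            commit(r);
        heavy_unlock();
    }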
> 
> Therefore, I will have to say that I disagree with your approach.
> 
> On top of this, in the subjective analysis in [1], point number 7 
> (along with point number 1) remains, as does point number 4.
> 
> [1] https://lore.kernel.org/all/23023f48-95c6-4a24-ac8b-aba4b1a441b4@arm.com/
> 
>>
>> So in my eyes, this is not a "problem".
> 
> Looks like the kernel scheduled us for a high-priority debate; I hope 
> there's no deadlock :)
> 
>>
>> Cheers,
>> -- Nico
>>
>>

In any case, as Andrew notes, the first step is to justify that this 
functionality is beneficial enough to merge. The second step would be 
to compare the implementations. It would be great if someone from the 
community could jump in to test these things out :) but in the meantime 
I will begin working on the first step.
