linux-mm.kvack.org archive mirror
 help / color / mirror / Atom feed
From: Dev Jain <dev.jain@arm.com>
To: David Hildenbrand <david@redhat.com>,
	Ryan Roberts <ryan.roberts@arm.com>,
	Nico Pache <npache@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	anshuman.khandual@arm.com, catalin.marinas@arm.com,
	cl@gentwo.org, vbabka@suse.cz, mhocko@suse.com,
	apopple@nvidia.com, dave.hansen@linux.intel.com, will@kernel.org,
	baohua@kernel.org, jack@suse.cz, srivatsa@csail.mit.edu,
	haowenchao22@gmail.com, hughd@google.com,
	aneesh.kumar@kernel.org, yang@os.amperecomputing.com,
	peterx@redhat.com, ioworker0@gmail.com,
	wangkefeng.wang@huawei.com, ziy@nvidia.com, jglisse@google.com,
	surenb@google.com, vishal.moola@gmail.com, zokeefe@google.com,
	zhengqi.arch@bytedance.com, jhubbard@nvidia.com,
	21cnbao@gmail.com, willy@infradead.org,
	kirill.shutemov@linux.intel.com, aarcange@redhat.com,
	raquini@redhat.com, sunnanyong@huawei.com,
	usamaarif642@gmail.com, audra@redhat.com,
	akpm@linux-foundation.org
Subject: Re: [RFC 00/11] khugepaged: mTHP support
Date: Mon, 27 Jan 2025 15:01:44 +0530	[thread overview]
Message-ID: <41d85d62-3234-478e-8cd7-571a49cfc031@arm.com> (raw)
In-Reply-To: <e40a4097-b921-4af7-8c52-550c515ec7cd@redhat.com>



On 21/01/25 3:49 pm, David Hildenbrand wrote:
>> Hmm that's an interesting idea; If I've understood, we would 
>> effectively test
>> the PMD for collapse as if we were collapsing to PMD-size, but then do 
>> the
>> actual collapse to the "highest allowed order" (dictated by what's 
>> enabled +
>> MADV_HUGEPAGE config).
>>
>> I'm not so sure this is a good way to go; there would be no way to 
>> support VMAs
>> (or parts of VMAs) that don't span a full PMD. 
> 
> 
> In Nicos approach to locking, we temporarily have to remove the PTE 
> table either way. While holding the mmap lock in write mode, the VMAs 
> cannot go away, so we could scan the whole PTE table to figure it out.
> 
> To just figure out "none" vs. "non-none" vs. "swap PTE", we'd probably 
> don't need the other VMA information. Figuring out "shared" is trickier, 
> because we have to obtain the folio and would have to walk the other VMAs.
> 
> It's a good question if we would have to VMA-write-lock the other 
> affected VMAs as well in order to temporarily remove the PTE table that 
> crosses multiple VMAs, or if we'd need something different (collapse PMD 
> marker) so the page table walkers could handle that case properly -- 
> keep retrying or fallback to the mmap lock.

I missed this reply, could have saved me some trouble :) When collapsing 
for VMAs < PMD, we *will* have to write lock the VMAs, write lock the 
anon_vma's, and write lock vma->vm_file->f_mapping for file VMAs, 
otherwise someone may fault on another VMA mapping the same PTE table. I 
was trying to implement this, but cannot find a clean way: we will have 
to implement it like mm_take_all_locks(), with a similar bit like 
AS_MM_ALL_LOCKS, because, suppose we need to lock all anon_vma's, then 
two VMAs may have the same anon_vma, and we cannot get away with the 
following check:

lock only if !rwsem_is_locked(&vma->anon_vma->root->rwsem)

since I need to skip the lock only when it is khugepaged which has taken 
the lock.

I guess the way to go about this then is the PMD-marker thingy, which I 
am not very familiar with.

> 
>> And I can imagine we might see
>> memory bloat; imagine you have 2M=madvise, 64K=always, 
>> max_ptes_none=511, and
>> let's say we have a 2M (aligned portion of a) VMA that does NOT have
>> MADV_HUGEPAGE set and has a single page populated. It passes the PMD- 
>> size test,
>> but we opt to collapse to 64K (since 2M=madvise). So now we end up 
>> with 32x 64K
>> folios, 31 of which are all zeros. We have spent the same amount of 
>> memory as if
>> 2M=always. Perhaps that's a detail that could be solved by ignoring 
>> fully none
>> 64K blocks when collapsing to 64K...
> 
> Yes, that's what I had in mind. No need to collapse where there is 
> nothing at all ...
> 
>>
>> Personally, I think your "enforce simplicifation of the tunables for mTHP
>> collapse" idea is the best we have so far.
> 
> Right.
> 
>>
>> But I'll just push against your pushback of the per-VMA cursor idea 
>> briefly. It
>> strikes me that this could be useful for khugepaged regardless of mTHP 
>> support.
> 
> Not a clear pushback, as you say to me this is a different optimization 
> and I am missing how it could really solve the problem at hand here.
> 
> Note that we're already fighting with not growing VMAs (see the VMA 
> locking changes under review), but maybe we could still squeeze it in 
> there without requiring a bigger slab.
> 
>> Today, it starts scanning a VMA, collapses the first PMD it finds that 
>> meets the
>> requirements, then switches to scanning another VMA. When it 
>> eventually gets
>> back to scanning the first VMA, it starts from the beginning again. 
>> Wouldn't a
>> cursor help reduce the amount of scanning it has to do?
> 
> Yes, that whole scanning approach sound weird. I would have assumed that 
> it might nowdays be smarter to just scan the MM sequentially, and not 
> jump between VMAs.
> 
> Assume you only have a handfull of large VMAs (like in a VMM), you'd end 
> up scanning the same handful of VMAs over and over again.
> 
> I think a lot of the khugepaged codebase is just full with historical 
> baggage that must be cleaned up and re-validated if it still required ...
> 



  reply	other threads:[~2025-01-27  9:32 UTC|newest]

Thread overview: 53+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2025-01-08 23:31 Nico Pache
2025-01-08 23:31 ` [RFC 01/11] introduce khugepaged_collapse_single_pmd to collapse a single pmd Nico Pache
2025-01-10  6:25   ` Dev Jain
2025-01-08 23:31 ` [RFC 02/11] khugepaged: refactor madvise_collapse and khugepaged_scan_mm_slot Nico Pache
2025-01-08 23:31 ` [RFC 03/11] khugepaged: Don't allocate khugepaged mm_slot early Nico Pache
2025-01-10  6:11   ` Dev Jain
2025-01-10 19:37     ` Nico Pache
2025-01-08 23:31 ` [RFC 04/11] khugepaged: rename hpage_collapse_* to khugepaged_* Nico Pache
2025-01-08 23:31 ` [RFC 05/11] khugepaged: generalize hugepage_vma_revalidate for mTHP support Nico Pache
2025-01-08 23:31 ` [RFC 06/11] khugepaged: generalize alloc_charge_folio " Nico Pache
2025-01-10  6:23   ` Dev Jain
2025-01-10 19:41     ` Nico Pache
2025-01-08 23:31 ` [RFC 07/11] khugepaged: generalize __collapse_huge_page_* " Nico Pache
2025-01-10  6:38   ` Dev Jain
2025-01-08 23:31 ` [RFC 08/11] khugepaged: introduce khugepaged_scan_bitmap " Nico Pache
2025-01-10  9:05   ` Dev Jain
2025-01-10 21:48     ` Nico Pache
2025-01-12 11:23       ` Dev Jain
2025-01-13 22:25         ` Nico Pache
2025-01-10 14:54   ` Dev Jain
2025-01-10 21:48     ` Nico Pache
2025-01-12 15:13   ` Dev Jain
2025-01-12 16:41     ` Dev Jain
2025-01-08 23:31 ` [RFC 09/11] khugepaged: add " Nico Pache
2025-01-10  9:20   ` Dev Jain
2025-01-10 13:36   ` Dev Jain
2025-01-08 23:31 ` [RFC 10/11] khugepaged: remove max_ptes_none restriction on the pmd scan Nico Pache
2025-01-08 23:31 ` [RFC 11/11] khugepaged: skip collapsing mTHP to smaller orders Nico Pache
2025-01-09  6:22 ` [RFC 00/11] khugepaged: mTHP support Dev Jain
2025-01-10  2:27   ` Nico Pache
2025-01-10  4:56     ` Dev Jain
2025-01-10 22:01       ` Nico Pache
2025-01-12 14:11         ` Dev Jain
2025-01-13 23:00           ` Nico Pache
2025-01-09  6:27 ` Dev Jain
2025-01-10  1:28   ` Nico Pache
2025-01-16  9:47 ` Ryan Roberts
2025-01-16 20:53   ` Nico Pache
2025-01-20  5:17     ` Dev Jain
2025-01-23 20:24       ` Nico Pache
2025-01-24  7:13         ` Dev Jain
2025-01-24  7:38           ` Dev Jain
2025-01-20 12:49     ` Ryan Roberts
2025-01-23 20:42       ` Nico Pache
2025-01-20 12:54     ` David Hildenbrand
2025-01-20 13:37       ` Ryan Roberts
2025-01-20 13:56         ` David Hildenbrand
2025-01-20 16:27           ` Ryan Roberts
2025-01-20 18:39             ` David Hildenbrand
2025-01-21  9:48               ` Ryan Roberts
2025-01-21 10:19                 ` David Hildenbrand
2025-01-27  9:31                   ` Dev Jain [this message]
2025-01-22  5:18                 ` Dev Jain

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=41d85d62-3234-478e-8cd7-571a49cfc031@arm.com \
    --to=dev.jain@arm.com \
    --cc=21cnbao@gmail.com \
    --cc=aarcange@redhat.com \
    --cc=akpm@linux-foundation.org \
    --cc=aneesh.kumar@kernel.org \
    --cc=anshuman.khandual@arm.com \
    --cc=apopple@nvidia.com \
    --cc=audra@redhat.com \
    --cc=baohua@kernel.org \
    --cc=catalin.marinas@arm.com \
    --cc=cl@gentwo.org \
    --cc=dave.hansen@linux.intel.com \
    --cc=david@redhat.com \
    --cc=haowenchao22@gmail.com \
    --cc=hughd@google.com \
    --cc=ioworker0@gmail.com \
    --cc=jack@suse.cz \
    --cc=jglisse@google.com \
    --cc=jhubbard@nvidia.com \
    --cc=kirill.shutemov@linux.intel.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=mhocko@suse.com \
    --cc=npache@redhat.com \
    --cc=peterx@redhat.com \
    --cc=raquini@redhat.com \
    --cc=ryan.roberts@arm.com \
    --cc=srivatsa@csail.mit.edu \
    --cc=sunnanyong@huawei.com \
    --cc=surenb@google.com \
    --cc=usamaarif642@gmail.com \
    --cc=vbabka@suse.cz \
    --cc=vishal.moola@gmail.com \
    --cc=wangkefeng.wang@huawei.com \
    --cc=will@kernel.org \
    --cc=willy@infradead.org \
    --cc=yang@os.amperecomputing.com \
    --cc=zhengqi.arch@bytedance.com \
    --cc=ziy@nvidia.com \
    --cc=zokeefe@google.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox