From: Matthew Wilcox <willy@infradead.org>
To: Peter Xu <peterx@redhat.com>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
Andrew Morton <akpm@linux-foundation.org>,
Suren Baghdasaryan <surenb@google.com>,
Lokesh Gidra <lokeshgidra@google.com>,
Alistair Popple <apopple@nvidia.com>
Subject: Re: [PATCH] mm: Always sanity check anon_vma first for per-vma locks
Date: Fri, 26 Apr 2024 15:00:42 +0100 [thread overview]
Message-ID: <ZiuzikG6-jDpbitv@casper.infradead.org> (raw)
In-Reply-To: <ZhinCD-PoblxGFm0@casper.infradead.org>
On Fri, Apr 12, 2024 at 04:14:16AM +0100, Matthew Wilcox wrote:
> Suren, what would you think to this?
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 6e2fe960473d..e495adcbe968 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -5821,15 +5821,6 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
> if (!vma_start_read(vma))
> goto inval;
>
> - /*
> - * find_mergeable_anon_vma uses adjacent vmas which are not locked.
> - * This check must happen after vma_start_read(); otherwise, a
> - * concurrent mremap() with MREMAP_DONTUNMAP could dissociate the VMA
> - * from its anon_vma.
> - */
> - if (unlikely(vma_is_anonymous(vma) && !vma->anon_vma))
> - goto inval_end_read;
> -
> /* Check since vm_start/vm_end might change before we lock the VMA */
> if (unlikely(address < vma->vm_start || address >= vma->vm_end))
> goto inval_end_read;
>
> That takes a few insns out of the page fault path (good!) at the cost
> of one extra trip around the fault handler for the first fault on an
> anon vma. It makes the file & anon paths more similar to each other
> (good!)
>
> We'd need some data to be sure it's really a win, but less code is
> always good.
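For reference, the mechanism behind that "extra trip": with the check gone, an
anonymous fault proceeds under the per-VMA lock until it actually needs an
anon_vma, at which point the anon-prepare helper drops the VMA lock and
returns VM_FAULT_RETRY, so the retry goes through the mmap_lock path, which
can set the anon_vma up.  A simplified sketch of roughly what
vmf_anon_prepare() in mm/memory.c does in this era (written from memory, so
treat the exact name and shape as an assumption):

static vm_fault_t vmf_anon_prepare(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;

	if (likely(vma->anon_vma))
		return 0;
	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
		/* Can't walk adjacent VMAs under the per-VMA lock;
		 * drop it and let the caller retry under mmap_lock. */
		vma_end_read(vma);
		return VM_FAULT_RETRY;
	}
	if (__anon_vma_prepare(vma))
		return VM_FAULT_OOM;
	return 0;
}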
Intel's 0day got back to me with data and it's ridiculously good.
Headline figure: over 3x throughput improvement with vm-scalability
https://lore.kernel.org/all/202404261055.c5e24608-oliver.sang@intel.com/
I can't see why it's that good. It shouldn't be that good. I'm
seeing big numbers here:
4366 ± 2% +565.6% 29061 perf-stat.overall.cycles-between-cache-misses
and the code being deleted is only checking vma->vm_ops and
vma->anon_vma. Surely that cache line is referenced so frequently
during pagefault that deleting a reference here will make no difference
at all?
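For the record, vma_is_anonymous() is just a NULL test on ->vm_ops (as
defined in include/linux/mm.h), so the deleted check really does touch
nothing beyond the fields mentioned above:

static inline bool vma_is_anonymous(struct vm_area_struct *vma)
{
	return !vma->vm_ops;
}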
We've clearly got an inlining change, viz:
72.57 -72.6 0.00 perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.do_access
73.28 -72.6 0.70 perf-profile.calltrace.cycles-pp.asm_exc_page_fault.do_access
72.55 -72.5 0.00 perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
69.93 -69.9 0.00 perf-profile.calltrace.cycles-pp.lock_mm_and_find_vma.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
69.12 -69.1 0.00 perf-profile.calltrace.cycles-pp.down_read_killable.lock_mm_and_find_vma.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
68.78 -68.8 0.00 perf-profile.calltrace.cycles-pp.rwsem_down_read_slowpath.down_read_killable.lock_mm_and_find_vma.do_user_addr_fault.exc_page_fault
65.78 -65.8 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.rwsem_down_read_slowpath.down_read_killable.lock_mm_and_find_vma.do_user_addr_fault
65.43 -65.4 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.rwsem_down_read_slowpath.down_read_killable.lock_mm_and_find_vma
11.22 +86.5 97.68 perf-profile.calltrace.cycles-pp.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.14 +86.5 97.66 perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
3.17 ± 2% +94.0 97.12 perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.down_write_killable.vm_mmap_pgoff
3.45 ± 2% +94.1 97.59 perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff
0.00 +98.2 98.15 perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +98.2 98.16 perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
so maybe the compiler has been able to eliminate some loads from
contended cachelines?
703147 -87.6% 87147 ± 2% perf-stat.ps.context-switches
663.67 ± 5% +7551.9% 50783 vm-scalability.time.involuntary_context_switches
1.105e+08 -86.7% 14697764 ± 2% vm-scalability.time.voluntary_context_switches
indicates to me that we're taking the mmap rwsem far less often (those
would be accounted as voluntary context switches).
So maybe the cache miss reduction is a consequence of just running for
longer before being preempted.
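To spell out where those mmap_lock acquisitions come from: the x86 fault
handler only falls back to lock_mm_and_find_vma() when the lockless lookup
fails or the per-VMA-lock attempt asks for a retry.  Simplified from
arch/x86/mm/fault.c (error handling and accounting elided):

	vma = lock_vma_under_rcu(mm, address);
	if (!vma)
		goto lock_mmap;
	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
		vma_end_read(vma);
	if (!(fault & VM_FAULT_RETRY))
		goto done;
	/* lockless path failed or asked to retry: take mmap_lock */
lock_mmap:
	vma = lock_mm_and_find_vma(mm, address, regs);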
I still don't understand why we have to take the mmap_sem less often.
Is there perhaps a VMA for which we have a NULL vm_ops, but don't set
an anon_vma on a page fault?