linux-mm.kvack.org archive mirror
* [PATCH v2] mm: memory: check userfaultfd_wp() in vmf_orig_pte_uffd_wp()
@ 2024-04-18 12:06 Kefeng Wang
  2024-04-18 16:32 ` Peter Xu
  0 siblings, 1 reply; 7+ messages in thread
From: Kefeng Wang @ 2024-04-18 12:06 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Peter Xu, linux-mm, Kefeng Wang

Add a userfaultfd_wp() check in vmf_orig_pte_uffd_wp() to avoid the
unnecessary pte_marker_entry_uffd_wp() call in most page faults. The
difference is shown below in perf data from lat_pagefault; note that
vmf_orig_pte_uffd_wp() is not inlined in either kernel version, and
after the change it no longer shows up in the profile.

  perf report -i perf.data.before | grep vmf
     0.17%     0.13%  lat_pagefault  [kernel.kallsyms]      [k] vmf_orig_pte_uffd_wp.part.0.isra.0
  perf report -i perf.data.after  | grep vmf
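
For context, userfaultfd_wp() boils down to a cheap VM_UFFD_WP flag test
on the VMA (when CONFIG_USERFAULTFD is enabled), so the new check lets
the helper return early for the vast majority of faults, before decoding
the swap marker held in orig_pte. A simplified sketch of the helper as it
reads after this patch (not the literal upstream source):

  /* Roughly what include/linux/userfaultfd_k.h provides. */
  static inline bool userfaultfd_wp(struct vm_area_struct *vma)
  {
  	return vma->vm_flags & VM_UFFD_WP;
  }

  static bool vmf_orig_pte_uffd_wp(struct vm_fault *vmf)
  {
  	if (!(vmf->flags & FAULT_FLAG_ORIG_PTE_VALID))
  		return false;
  	/* New: bail out unless the VMA is registered for uffd-wp at all. */
  	if (!userfaultfd_wp(vmf->vma))
  		return false;

  	/* Only uffd-wp VMAs pay for decoding the pte marker. */
  	return pte_marker_uffd_wp(vmf->orig_pte);
  }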

In addition, call vmf_orig_pte_uffd_wp() directly in do_anonymous_page()
and set_pte_range(), which saves an uffd_wp local variable in each.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
v2: update changelog

 mm/memory.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 5ae2409d3cb9..2cf54def3995 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -116,6 +116,8 @@ static bool vmf_orig_pte_uffd_wp(struct vm_fault *vmf)
 {
 	if (!(vmf->flags & FAULT_FLAG_ORIG_PTE_VALID))
 		return false;
+	if (!userfaultfd_wp(vmf->vma))
+		return false;
 
 	return pte_marker_uffd_wp(vmf->orig_pte);
 }
@@ -4388,7 +4390,6 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
  */
 static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 {
-	bool uffd_wp = vmf_orig_pte_uffd_wp(vmf);
 	struct vm_area_struct *vma = vmf->vma;
 	unsigned long addr = vmf->address;
 	struct folio *folio;
@@ -4488,7 +4489,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	folio_add_new_anon_rmap(folio, vma, addr);
 	folio_add_lru_vma(folio, vma);
 setpte:
-	if (uffd_wp)
+	if (vmf_orig_pte_uffd_wp(vmf))
 		entry = pte_mkuffd_wp(entry);
 	set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr_pages);
 
@@ -4663,7 +4664,6 @@ void set_pte_range(struct vm_fault *vmf, struct folio *folio,
 		struct page *page, unsigned int nr, unsigned long addr)
 {
 	struct vm_area_struct *vma = vmf->vma;
-	bool uffd_wp = vmf_orig_pte_uffd_wp(vmf);
 	bool write = vmf->flags & FAULT_FLAG_WRITE;
 	bool prefault = in_range(vmf->address, addr, nr * PAGE_SIZE);
 	pte_t entry;
@@ -4678,7 +4678,7 @@ void set_pte_range(struct vm_fault *vmf, struct folio *folio,
 
 	if (write)
 		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
-	if (unlikely(uffd_wp))
+	if (unlikely(vmf_orig_pte_uffd_wp(vmf)))
 		entry = pte_mkuffd_wp(entry);
 	/* copy-on-write page */
 	if (write && !(vma->vm_flags & VM_SHARED)) {
-- 
2.27.0




Thread overview: 7+ messages
2024-04-18 12:06 [PATCH v2] mm: memory: check userfaultfd_wp() in vmf_orig_pte_uffd_wp() Kefeng Wang
2024-04-18 16:32 ` Peter Xu
2024-04-19  3:00   ` Kefeng Wang
2024-04-19 15:17     ` Peter Xu
2024-04-20  4:05       ` Kefeng Wang
2024-04-21 13:53         ` Peter Xu
2024-04-22  2:13           ` Kefeng Wang
