From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Yin Fengwei <fengwei.yin@intel.com>, linux-mm@kvack.org
Cc: david@redhat.com, dave.hansen@intel.com, tim.c.chen@intel.com,
ying.huang@intel.com, Matthew Wilcox <willy@infradead.org>
Subject: [PATCH v5 4/5] mm: Convert do_set_pte() to set_pte_range()
Date: Tue, 7 Feb 2023 19:49:34 +0000
Message-ID: <20230207194937.122543-5-willy@infradead.org>
In-Reply-To: <20230207194937.122543-1-willy@infradead.org>
From: Yin Fengwei <fengwei.yin@intel.com>
set_pte_range() allows the caller to set up page table entries for a
specific range of pages.  It takes advantage of the batched rmap update
for large folios, and it now takes care of calling update_mmu_cache()
itself, so callers no longer need to.
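
For callers the conversion looks roughly like this (an illustrative
sketch, not code from this patch; folio and nr here stand for whatever
range of pages the caller is mapping):

	/* Before: one page per call, and the caller flushed the MMU cache. */
	do_set_pte(vmf, page, addr);
	update_mmu_cache(vma, addr, vmf->pte);

	/* After: nr consecutive pages of folio in a single call;
	 * set_pte_range() calls update_mmu_cache() itself.
	 */
	set_pte_range(vmf, folio, page, nr, addr);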
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 Documentation/filesystems/locking.rst |  2 +-
 include/linux/mm.h                    |  3 ++-
 mm/filemap.c                          |  3 +--
 mm/memory.c                           | 31 ++++++++++++++++-----------
 4 files changed, 23 insertions(+), 16 deletions(-)
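
A note on the underlying mechanism: set_pte_range() lands the entries
with set_ptes(), the generic helper added in patch 2/5 of this series.
Its shape is roughly the following (a simplified sketch, not the exact
code from that patch; how the PFN is advanced portably is
architecture-sensitive, which is part of the discussion later in this
thread):

	static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
			pte_t *ptep, pte_t pte, unsigned int nr)
	{
		for (;;) {
			set_pte_at(mm, addr, ptep, pte);
			if (--nr == 0)
				break;
			ptep++;
			addr += PAGE_SIZE;
			/* Assumes the PFN sits in the PTE such that adding
			 * PAGE_SIZE moves it to the next page; architectures
			 * can override this.
			 */
			pte = __pte(pte_val(pte) + PAGE_SIZE);
		}
	}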
diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
index 7de7a7272a5e..922886fefb7f 100644
--- a/Documentation/filesystems/locking.rst
+++ b/Documentation/filesystems/locking.rst
@@ -663,7 +663,7 @@ locked. The VM will unlock the page.
 Filesystem should find and map pages associated with offsets from "start_pgoff"
 till "end_pgoff". ->map_pages() is called with page table locked and must
 not block. If it's not possible to reach a page without blocking,
-filesystem should skip it. Filesystem should use do_set_pte() to setup
+filesystem should skip it. Filesystem should use set_pte_range() to setup
 page table entry. Pointer to entry associated with the page is passed in
 "pte" field in vm_fault structure. Pointers to entries for other offsets
 should be calculated relative to "pte".
diff --git a/include/linux/mm.h b/include/linux/mm.h
index d6f8f41514cc..b39493b5c49e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1161,7 +1161,8 @@ static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
 }
 
 vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page);
-void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr);
+void set_pte_range(struct vm_fault *vmf, struct folio *folio,
+		struct page *page, unsigned int nr, unsigned long addr);
 
 vm_fault_t finish_fault(struct vm_fault *vmf);
 vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf);
diff --git a/mm/filemap.c b/mm/filemap.c
index 9aa7a4fdc374..745f1eb2a87f 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3375,8 +3375,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 			ret = VM_FAULT_NOPAGE;
 
 		ref_count++;
-		do_set_pte(vmf, page, addr);
-		update_mmu_cache(vma, addr, vmf->pte);
+		set_pte_range(vmf, folio, page, 1, addr);
 	} while (vmf->pte++, page++, addr += PAGE_SIZE, ++count < nr_pages);
 
 	/* Restore the vmf->pte */
diff --git a/mm/memory.c b/mm/memory.c
index 7a04a1130ec1..f1cf47b7e3bb 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4257,15 +4257,18 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 }
 #endif
 
-void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
+void set_pte_range(struct vm_fault *vmf, struct folio *folio,
+		struct page *page, unsigned int nr, unsigned long addr)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	bool uffd_wp = pte_marker_uffd_wp(vmf->orig_pte);
 	bool write = vmf->flags & FAULT_FLAG_WRITE;
 	bool prefault = vmf->address != addr;
 	pte_t entry;
+	unsigned int i;
 
-	flush_icache_page(vma, page);
+	for (i = 0; i < nr; i++)
+		flush_icache_page(vma, page + i);
 	entry = mk_pte(page, vma->vm_page_prot);
 
 	if (prefault && arch_wants_old_prefaulted_pte())
@@ -4279,14 +4282,20 @@ void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
 		entry = pte_mkuffd_wp(entry);
 	/* copy-on-write page */
 	if (write && !(vma->vm_flags & VM_SHARED)) {
-		inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
-		page_add_new_anon_rmap(page, vma, addr);
-		lru_cache_add_inactive_or_unevictable(page, vma);
+		add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr);
+		VM_BUG_ON_FOLIO(nr != 1, folio);
+		folio_add_new_anon_rmap(folio, vma, addr);
+		folio_add_lru_vma(folio, vma);
 	} else {
-		inc_mm_counter(vma->vm_mm, mm_counter_file(page));
-		page_add_file_rmap(page, vma, false);
+		add_mm_counter(vma->vm_mm, mm_counter_file(page), nr);
+		folio_add_file_rmap_range(folio, page, nr, vma, false);
 	}
-	set_pte_at(vma->vm_mm, addr, vmf->pte, entry);
+	set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr);
+
+	/* no need to invalidate: a not-present page won't be cached */
+	for (i = 0; i < nr; i++)
+		update_mmu_cache(vma, addr + i * PAGE_SIZE, vmf->pte + i);
+
 }
 
 static bool vmf_pte_changed(struct vm_fault *vmf)
@@ -4359,11 +4368,9 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 
 	/* Re-check under ptl */
 	if (likely(!vmf_pte_changed(vmf))) {
-		do_set_pte(vmf, page, vmf->address);
-
-		/* no need to invalidate: a not-present page won't be cached */
-		update_mmu_cache(vma, vmf->address, vmf->pte);
+		struct folio *folio = page_folio(page);
 
+		set_pte_range(vmf, folio, page, 1, vmf->address);
 		ret = 0;
 	} else {
 		update_mmu_tlb(vma, vmf->address, vmf->pte);
-- 
2.35.1
Thread overview: 12+ messages
2023-02-07 19:49 [PATCH v5 0/5] Batched page table updates for file-backed large folios Matthew Wilcox (Oracle)
2023-02-07 19:49 ` [PATCH v5 1/5] filemap: Add filemap_map_folio_range() Matthew Wilcox (Oracle)
2023-02-07 19:49 ` [PATCH v5 2/5] mm: Add generic set_ptes() Matthew Wilcox (Oracle)
2023-02-08  0:48   ` kernel test robot
2023-02-08  2:20   ` kernel test robot
2023-02-08  4:57     ` Yin Fengwei
2023-02-08  8:38   ` Kirill A. Shutemov
2023-02-08 18:14     ` Matthew Wilcox
2023-02-09  8:11       ` Kirill A. Shutemov
2023-02-07 19:49 ` [PATCH v5 3/5] rmap: add folio_add_file_rmap_range() Matthew Wilcox (Oracle)
2023-02-07 19:49 ` Matthew Wilcox (Oracle) [this message]
2023-02-07 19:49 ` [PATCH v5 5/5] filemap: Batch PTE mappings Matthew Wilcox (Oracle)