linux-mm.kvack.org archive mirror
* [PATCH 1/1] mm: handle swap page faults if the faulting page can be locked
@ 2023-04-14 18:00 Suren Baghdasaryan
  2023-04-14 18:32 ` Suren Baghdasaryan
  2023-04-14 18:43 ` Matthew Wilcox
  0 siblings, 2 replies; 14+ messages in thread
From: Suren Baghdasaryan @ 2023-04-14 18:00 UTC (permalink / raw)
  To: akpm
  Cc: willy, hannes, mhocko, josef, jack, ldufour, laurent.dufour,
	michel, liam.howlett, jglisse, vbabka, minchan, dave,
	punit.agrawal, lstoakes, surenb, linux-mm, linux-fsdevel,
	linux-kernel, kernel-team

When a page fault is handled under VMA lock protection, all swap page
faults are retried under mmap_lock because the folio_lock_or_retry
implementation has to drop and reacquire mmap_lock if the folio could
not be locked immediately.
Instead of retrying all swap page faults, retry only when folio
locking fails.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
Patch applies cleanly over linux-next and mm-unstable

 mm/filemap.c | 6 ++++++
 mm/memory.c  | 5 -----
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 6f3a7e53fccf..67b937b0f436 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1706,6 +1706,8 @@ static int __folio_lock_async(struct folio *folio, struct wait_page_queue *wait)
  *     mmap_lock has been released (mmap_read_unlock(), unless flags had both
  *     FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_RETRY_NOWAIT set, in
  *     which case mmap_lock is still held.
+ *     If flags had FAULT_FLAG_VMA_LOCK set, meaning the operation is performed
+ *     with VMA lock only, the VMA lock is still held.
  *
  * If neither ALLOW_RETRY nor KILLABLE are set, will always return true
  * with the folio locked and the mmap_lock unperturbed.
@@ -1713,6 +1715,10 @@ static int __folio_lock_async(struct folio *folio, struct wait_page_queue *wait)
 bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
 			 unsigned int flags)
 {
+	/* Can't do this if not holding mmap_lock */
+	if (flags & FAULT_FLAG_VMA_LOCK)
+		return false;
+
 	if (fault_flag_allow_retry_first(flags)) {
 		/*
 		 * CAUTION! In this case, mmap_lock is not released
diff --git a/mm/memory.c b/mm/memory.c
index d88f370eacd1..3301a8d01820 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3715,11 +3715,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (!pte_unmap_same(vmf))
 		goto out;
 
-	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
-		ret = VM_FAULT_RETRY;
-		goto out;
-	}
-
 	entry = pte_to_swp_entry(vmf->orig_pte);
 	if (unlikely(non_swap_entry(entry))) {
 		if (is_migration_entry(entry)) {
-- 
2.40.0.634.g4ca3ef3211-goog





Thread overview: 14+ messages
2023-04-14 18:00 [PATCH 1/1] mm: handle swap page faults if the faulting page can be locked Suren Baghdasaryan
2023-04-14 18:32 ` Suren Baghdasaryan
2023-04-14 18:43 ` Matthew Wilcox
2023-04-14 19:48   ` Suren Baghdasaryan
2023-04-14 20:31     ` Matthew Wilcox
2023-04-14 21:51       ` Suren Baghdasaryan
2023-04-15  0:34         ` Hillf Danton
2023-04-15  2:15         ` Matthew Wilcox
2023-04-17  0:49   ` Alistair Popple
2023-04-17 18:13     ` Suren Baghdasaryan
2023-04-17 23:33       ` Alistair Popple
2023-04-17 23:50         ` Suren Baghdasaryan
2023-04-18  1:07           ` Suren Baghdasaryan
2023-05-01 17:54             ` Suren Baghdasaryan
