* [PATCH v2] mm/shmem: Fix race in shmem_undo_range w/THP
@ 2023-04-18 8:40 David Stevens
From: David Stevens @ 2023-04-18 8:40 UTC (permalink / raw)
To: linux-mm
Cc: Andrew Morton, Matthew Wilcox (Oracle),
Suleiman Souhlal, linux-kernel, David Stevens, stable
From: David Stevens <stevensd@chromium.org>

Split folios during the second loop of shmem_undo_range(). It is not
sufficient to split folios only when dealing with partial pages, since
it is possible for a THP to be faulted in after that point. Calling
truncate_inode_folio() in that situation can result in throwing away
data outside of the range being targeted.

Fixes: b9a8a4195c7d ("truncate,shmem: Handle truncates that split large folios")
Cc: stable@vger.kernel.org
Signed-off-by: David Stevens <stevensd@chromium.org>
---
v1 -> v2:
 - Actually drop pages after splitting a THP

 mm/shmem.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 9218c955f482..226c94a257b1 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1033,7 +1033,22 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 				}
 				VM_BUG_ON_FOLIO(folio_test_writeback(folio),
 						folio);
-				truncate_inode_folio(mapping, folio);
+
+				if (!folio_test_large(folio)) {
+					truncate_inode_folio(mapping, folio);
+				} else if (truncate_inode_partial_folio(folio, lstart, lend)) {
+					/*
+					 * If we split a page, reset the loop so that we
+					 * pick up the new sub pages. Otherwise the THP
+					 * was entirely dropped or the target range was
+					 * zeroed, so just continue the loop as is.
+					 */
+					if (!folio_test_large(folio)) {
+						folio_unlock(folio);
+						index = start;
+						break;
+					}
+				}
 			}
 			folio_unlock(folio);
 		}
--
2.40.0.634.g4ca3ef3211-goog