linux-mm.kvack.org archive mirror
* [PATCH V2] mm/gup: folio_split_user_page_pin
@ 2024-09-24 15:05 Steve Sistare
  2024-09-24 16:55 ` Jason Gunthorpe
  2024-09-27 15:44 ` David Hildenbrand
  0 siblings, 2 replies; 8+ messages in thread
From: Steve Sistare @ 2024-09-24 15:05 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, David Hildenbrand, Jason Gunthorpe,
	Matthew Wilcox, Steve Sistare

Export a function that repins a high-order folio at small-page granularity.
This allows any range of small pages within the folio to be unpinned later.
For example, pages pinned via memfd_pin_folios and whose pin is then split
by folio_split_user_page_pin could be unpinned via unpin_user_page(s).

Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Steve Sistare <steven.sistare@oracle.com>

---
In V2 this has been renamed from repin_folio_unhugely, but is
otherwise unchanged from V1.
---
 include/linux/mm.h |  1 +
 mm/gup.c           | 20 ++++++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 13bff7c..b0b572d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2521,6 +2521,7 @@ long pin_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 long memfd_pin_folios(struct file *memfd, loff_t start, loff_t end,
 		      struct folio **folios, unsigned int max_folios,
 		      pgoff_t *offset);
+void folio_split_user_page_pin(struct folio *folio, unsigned long npages);
 
 int get_user_pages_fast(unsigned long start, int nr_pages,
 			unsigned int gup_flags, struct page **pages);
diff --git a/mm/gup.c b/mm/gup.c
index fcd602b..94ee79dd 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -3733,3 +3733,23 @@ long memfd_pin_folios(struct file *memfd, loff_t start, loff_t end,
 	return ret;
 }
 EXPORT_SYMBOL_GPL(memfd_pin_folios);
+
+/**
+ * folio_split_user_page_pin() - split the pin on a high order folio
+ * @folio: the folio to split
+ * @npages: the new number of pages the folio pin reference should hold
+ *
+ * Given a high order folio that is already pinned, adjust the reference
+ * count to allow unpin_user_page_range() and related functions to be
+ * called on the folio. npages is the number of pages that will be
+ * passed to a future unpin_user_page_range() call.
+ */
+void folio_split_user_page_pin(struct folio *folio, unsigned long npages)
+{
+	if (!folio_test_large(folio) || is_huge_zero_folio(folio) ||
+	    npages == 1)
+		return;
+	atomic_add(npages - 1, &folio->_refcount);
+	atomic_add(npages - 1, &folio->_pincount);
+}
+EXPORT_SYMBOL_GPL(folio_split_user_page_pin);
-- 
1.8.3.1





Thread overview: 8+ messages
2024-09-24 15:05 [PATCH V2] mm/gup: folio_split_user_page_pin Steve Sistare
2024-09-24 16:55 ` Jason Gunthorpe
2024-09-27 15:44 ` David Hildenbrand
2024-09-27 15:58   ` Jason Gunthorpe
2024-10-01 17:17     ` Steven Sistare
2024-10-04 10:04       ` David Hildenbrand
2024-10-04 17:20         ` Steven Sistare
2024-10-04 20:19           ` David Hildenbrand
