linux-mm.kvack.org archive mirror
* [RFC PATCH 0/8] support large folio swap-out and swap-in for shmem
@ 2024-05-21 11:03 Baolin Wang
  2024-05-21 11:03 ` [RFC PATCH 1/8] mm: fix shmem swapout statistic Baolin Wang
                   ` (7 more replies)
  0 siblings, 8 replies; 11+ messages in thread
From: Baolin Wang @ 2024-05-21 11:03 UTC (permalink / raw)
  To: akpm, hughd
  Cc: willy, david, ioworker0, hrisl, p.raghav, da.gomez,
	wangkefeng.wang, ying.huang, 21cnbao, ryan.roberts, shy828301,
	ziy, baolin.wang, linux-mm, linux-kernel

Shmem will support large folio allocation [1] [2] to get better performance;
however, memory reclaim still splits these precious large folios when trying
to swap out shmem, which may lead to memory fragmentation and fails to take
advantage of large folios for shmem.

Moreover, the swap code already supports swapping out large folios without
splitting them, and large folio swap-in [3] is under review. Hence this patch
set also supports large folio swap-out and swap-in for shmem.

Note: this patch set currently just shows some thoughts and gathers
suggestions; it is based on Barry's large folio swap-in patch set [3] and
my anon shmem mTHP patch set [1].

[1] https://lore.kernel.org/all/cover.1715571279.git.baolin.wang@linux.alibaba.com/
[2] https://lore.kernel.org/all/20240515055719.32577-1-da.gomez@samsung.com/
[3] https://lore.kernel.org/all/20240508224040.190469-6-21cnbao@gmail.com/T/

Baolin Wang (8):
  mm: fix shmem swapout statistic
  mm: vmscan: add validation before splitting shmem large folio
  mm: swap: extend swap_shmem_alloc() to support batch SWAP_MAP_SHMEM
    flag setting
  mm: shmem: support large folio allocation for shmem_replace_folio()
  mm: shmem: extend shmem_partial_swap_usage() to support large folio
    swap
  mm: add new 'orders' parameter for find_get_entries() and
    find_lock_entries()
  mm: shmem: use swap_free_nr() to free shmem swap entries
  mm: shmem: support large folio swap out

 drivers/gpu/drm/i915/gem/i915_gem_shmem.c |  1 +
 include/linux/swap.h                      |  4 +-
 include/linux/writeback.h                 |  1 +
 mm/filemap.c                              | 27 ++++++-
 mm/internal.h                             |  4 +-
 mm/page_io.c                              |  4 +-
 mm/shmem.c                                | 59 ++++++++------
 mm/swapfile.c                             | 98 ++++++++++++-----------
 mm/truncate.c                             |  8 +-
 mm/vmscan.c                               | 22 ++++-
 10 files changed, 143 insertions(+), 85 deletions(-)

-- 
2.39.3



^ permalink raw reply	[flat|nested] 11+ messages in thread

* [RFC PATCH 1/8] mm: fix shmem swapout statistic
  2024-05-21 11:03 [RFC PATCH 0/8] support large folio swap-out and swap-in for shmem Baolin Wang
@ 2024-05-21 11:03 ` Baolin Wang
  2024-05-22  7:16   ` Huang, Ying
  2024-05-21 11:03 ` [RFC PATCH 2/8] mm: vmscan: add validation before splitting shmem large folio Baolin Wang
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 11+ messages in thread
From: Baolin Wang @ 2024-05-21 11:03 UTC (permalink / raw)
  To: akpm, hughd
  Cc: willy, david, ioworker0, hrisl, p.raghav, da.gomez,
	wangkefeng.wang, ying.huang, 21cnbao, ryan.roberts, shy828301,
	ziy, baolin.wang, linux-mm, linux-kernel

Shmem not only supports the sharing of anonymous pages, but also provides a
RAM-based temporary filesystem. Therefore, shmem swap-outs should not be
counted in the anonymous swap-out statistics. Fix this by adding a
folio_test_anon() check.
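
A minimal userspace sketch of the fixed accounting logic (the folio struct
and the counter array are simplified stand-ins for the kernel's folio and
count_mthp_stat() APIs):

```c
#include <assert.h>
#include <stdbool.h>

#define MTHP_MAX_ORDER 10

/* stand-in for the kernel's per-order MTHP_STAT_ANON_SWPOUT counters */
static long anon_swpout[MTHP_MAX_ORDER + 1];

struct folio {
	int order;	/* folio_order() */
	bool anon;	/* folio_test_anon() */
};

/* After the fix, only anonymous folios bump the anon swap-out counter;
 * shmem (swap-backed file) folios are skipped. */
static void count_swpout_vm_event(const struct folio *folio)
{
	if (folio->anon)
		anon_swpout[folio->order]++;
}
```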

Fixes: d0f048ac39f6 ("mm: add per-order mTHP anon_swpout and anon_swpout_fallback counters")
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/page_io.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/page_io.c b/mm/page_io.c
index 46c603dddf04..b181b81f39e3 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -217,7 +217,9 @@ static inline void count_swpout_vm_event(struct folio *folio)
 		count_memcg_folio_events(folio, THP_SWPOUT, 1);
 		count_vm_event(THP_SWPOUT);
 	}
-	count_mthp_stat(folio_order(folio), MTHP_STAT_ANON_SWPOUT);
+
+	if (folio_test_anon(folio))
+		count_mthp_stat(folio_order(folio), MTHP_STAT_ANON_SWPOUT);
 #endif
 	count_vm_events(PSWPOUT, folio_nr_pages(folio));
 }
-- 
2.39.3




* [RFC PATCH 2/8] mm: vmscan: add validation before splitting shmem large folio
  2024-05-21 11:03 [RFC PATCH 0/8] support large folio swap-out and swap-in for shmem Baolin Wang
  2024-05-21 11:03 ` [RFC PATCH 1/8] mm: fix shmem swapout statistic Baolin Wang
@ 2024-05-21 11:03 ` Baolin Wang
  2024-05-21 11:03 ` [RFC PATCH 3/8] mm: swap: extend swap_shmem_alloc() to support batch SWAP_MAP_SHMEM flag setting Baolin Wang
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Baolin Wang @ 2024-05-21 11:03 UTC (permalink / raw)
  To: akpm, hughd
  Cc: willy, david, ioworker0, hrisl, p.raghav, da.gomez,
	wangkefeng.wang, ying.huang, 21cnbao, ryan.roberts, shy828301,
	ziy, baolin.wang, linux-mm, linux-kernel

Validate that swap space is available before splitting a shmem large folio,
to avoid a redundant split, since the shmem folio cannot be written to the
swap device in that case.
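
The decision can be sketched in userspace like this (total_swap_pages and
the action labels are simplified stand-ins for the kernel state inside
shrink_folio_list()):

```c
#include <assert.h>

/* stand-in for the kernel's global count of available swap pages */
static long total_swap_pages;

enum reclaim_action {
	ACTIVATE_LOCKED,	/* keep the folio whole, do not split */
	TRY_SPLIT,		/* split the shmem large folio for swap-out */
};

/* With no swap space, splitting a swap-backed large folio is pure
 * overhead: the pieces could never be written out anyway. */
static enum reclaim_action shmem_large_folio_action(void)
{
	if (!total_swap_pages)
		return ACTIVATE_LOCKED;
	return TRY_SPLIT;
}
```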

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/vmscan.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6981a71c8ef0..bf11c0cbf12e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1252,6 +1252,14 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 			}
 		} else if (folio_test_swapbacked(folio) &&
 			   folio_test_large(folio)) {
+
+			/*
+			 * Do not split shmem folio if no swap memory
+			 * available.
+			 */
+			if (!total_swap_pages)
+				goto activate_locked;
+
 			/* Split shmem folio */
 			if (split_folio_to_list(folio, folio_list))
 				goto keep_locked;
-- 
2.39.3




* [RFC PATCH 3/8] mm: swap: extend swap_shmem_alloc() to support batch SWAP_MAP_SHMEM flag setting
  2024-05-21 11:03 [RFC PATCH 0/8] support large folio swap-out and swap-in for shmem Baolin Wang
  2024-05-21 11:03 ` [RFC PATCH 1/8] mm: fix shmem swapout statistic Baolin Wang
  2024-05-21 11:03 ` [RFC PATCH 2/8] mm: vmscan: add validation before splitting shmem large folio Baolin Wang
@ 2024-05-21 11:03 ` Baolin Wang
  2024-05-21 11:03 ` [RFC PATCH 4/8] mm: shmem: support large folio allocation for shmem_replace_folio() Baolin Wang
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Baolin Wang @ 2024-05-21 11:03 UTC (permalink / raw)
  To: akpm, hughd
  Cc: willy, david, ioworker0, hrisl, p.raghav, da.gomez,
	wangkefeng.wang, ying.huang, 21cnbao, ryan.roberts, shy828301,
	ziy, baolin.wang, linux-mm, linux-kernel

To support shmem large folio swap operations, add a new parameter to
swap_shmem_alloc() that allows batch SWAP_MAP_SHMEM flag setting for
shmem swap entries.

While at it, use folio_nr_pages() to get the number of pages in the folio,
as a preparation for the batched flag setting.
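
The batching can be sketched as a userspace model (the swap map, the
SWAP_MAP_SHMEM value and the error handling are simplified stand-ins for
the kernel's __swap_duplicate(), which also folds in the existing count
and cache bits):

```c
#include <assert.h>

#define SWAP_MAP_SHMEM	0xbf	/* stand-in for the kernel flag value */
#define NR_SLOTS	16

/* stand-in for swap_info_struct->swap_map; nonzero means the slot is
 * in use (e.g. it carries SWAP_HAS_CACHE after add_to_swap_cache()) */
static unsigned char swap_map[NR_SLOTS];

/* Mark nr consecutive entries as shmem-owned in one pass, instead of
 * nr separate swap_shmem_alloc() calls each taking the cluster lock. */
static int swap_shmem_alloc_batch(unsigned long offset, int nr)
{
	for (int i = 0; i < nr; i++) {
		if (!swap_map[offset + i])
			return -1;	/* unused entry: kernel returns -ENOENT */
		swap_map[offset + i] = SWAP_MAP_SHMEM;
	}
	return 0;
}
```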

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 include/linux/swap.h |  4 +-
 mm/shmem.c           |  6 ++-
 mm/swapfile.c        | 98 +++++++++++++++++++++++---------------------
 3 files changed, 57 insertions(+), 51 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 48131b869a4d..78922922abbd 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -479,7 +479,7 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry);
 extern swp_entry_t get_swap_page_of_type(int);
 extern int get_swap_pages(int n, swp_entry_t swp_entries[], int order);
 extern int add_swap_count_continuation(swp_entry_t, gfp_t);
-extern void swap_shmem_alloc(swp_entry_t);
+extern void swap_shmem_alloc(swp_entry_t, int);
 extern int swap_duplicate(swp_entry_t);
 extern int swapcache_prepare(swp_entry_t);
 extern void swap_free_nr(swp_entry_t entry, int nr_pages);
@@ -546,7 +546,7 @@ static inline int add_swap_count_continuation(swp_entry_t swp, gfp_t gfp_mask)
 	return 0;
 }
 
-static inline void swap_shmem_alloc(swp_entry_t swp)
+static inline void swap_shmem_alloc(swp_entry_t swp, int nr)
 {
 }
 
diff --git a/mm/shmem.c b/mm/shmem.c
index fd2cb2e73a21..daab124c3e61 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1433,6 +1433,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 	struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
 	swp_entry_t swap;
 	pgoff_t index;
+	int nr_pages;
 
 	/*
 	 * Our capabilities prevent regular writeback or sync from ever calling
@@ -1465,6 +1466,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 	}
 
 	index = folio->index;
+	nr_pages = folio_nr_pages(folio);
 
 	/*
 	 * This is somewhat ridiculous, but without plumbing a SWAP_MAP_FALLOC
@@ -1517,8 +1519,8 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 	if (add_to_swap_cache(folio, swap,
 			__GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN,
 			NULL) == 0) {
-		shmem_recalc_inode(inode, 0, 1);
-		swap_shmem_alloc(swap);
+		shmem_recalc_inode(inode, 0, nr_pages);
+		swap_shmem_alloc(swap, nr_pages);
 		shmem_delete_from_page_cache(folio, swp_to_radix_entry(swap));
 
 		mutex_unlock(&shmem_swaplist_mutex);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 99e701620562..2f23b87ddcb3 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3387,62 +3387,58 @@ void si_swapinfo(struct sysinfo *val)
  * - swap-cache reference is requested but the entry is not used. -> ENOENT
  * - swap-mapped reference requested but needs continued swap count. -> ENOMEM
  */
-static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
+static int __swap_duplicate(struct swap_info_struct *p, unsigned long offset,
+			    int nr, unsigned char usage)
 {
-	struct swap_info_struct *p;
 	struct swap_cluster_info *ci;
-	unsigned long offset;
 	unsigned char count;
 	unsigned char has_cache;
-	int err;
+	int err, i;
 
-	p = swp_swap_info(entry);
-
-	offset = swp_offset(entry);
 	ci = lock_cluster_or_swap_info(p, offset);
 
-	count = p->swap_map[offset];
-
-	/*
-	 * swapin_readahead() doesn't check if a swap entry is valid, so the
-	 * swap entry could be SWAP_MAP_BAD. Check here with lock held.
-	 */
-	if (unlikely(swap_count(count) == SWAP_MAP_BAD)) {
-		err = -ENOENT;
-		goto unlock_out;
-	}
-
-	has_cache = count & SWAP_HAS_CACHE;
-	count &= ~SWAP_HAS_CACHE;
-	err = 0;
-
-	if (usage == SWAP_HAS_CACHE) {
+	for (i = 0; i < nr; i++) {
+		count = p->swap_map[offset + i];
 
-		/* set SWAP_HAS_CACHE if there is no cache and entry is used */
-		if (!has_cache && count)
-			has_cache = SWAP_HAS_CACHE;
-		else if (has_cache)		/* someone else added cache */
-			err = -EEXIST;
-		else				/* no users remaining */
+		/*
+		 * swapin_readahead() doesn't check if a swap entry is valid, so the
+		 * swap entry could be SWAP_MAP_BAD. Check here with lock held.
+		 */
+		if (unlikely(swap_count(count) == SWAP_MAP_BAD)) {
 			err = -ENOENT;
+			break;
+		}
 
-	} else if (count || has_cache) {
+		has_cache = count & SWAP_HAS_CACHE;
+		count &= ~SWAP_HAS_CACHE;
+		err = 0;
+
+		if (usage == SWAP_HAS_CACHE) {
+			/* set SWAP_HAS_CACHE if there is no cache and entry is used */
+			if (!has_cache && count)
+				has_cache = SWAP_HAS_CACHE;
+			else if (has_cache)		/* someone else added cache */
+				err = -EEXIST;
+			else				/* no users remaining */
+				err = -ENOENT;
+		} else if (count || has_cache) {
+			if ((count & ~COUNT_CONTINUED) < SWAP_MAP_MAX)
+				count += usage;
+			else if ((count & ~COUNT_CONTINUED) > SWAP_MAP_MAX)
+				err = -EINVAL;
+			else if (swap_count_continued(p, offset + i, count))
+				count = COUNT_CONTINUED;
+			else
+				err = -ENOMEM;
+		} else
+			err = -ENOENT;			/* unused swap entry */
 
-		if ((count & ~COUNT_CONTINUED) < SWAP_MAP_MAX)
-			count += usage;
-		else if ((count & ~COUNT_CONTINUED) > SWAP_MAP_MAX)
-			err = -EINVAL;
-		else if (swap_count_continued(p, offset, count))
-			count = COUNT_CONTINUED;
-		else
-			err = -ENOMEM;
-	} else
-		err = -ENOENT;			/* unused swap entry */
+		if (err)
+			break;
 
-	if (!err)
-		WRITE_ONCE(p->swap_map[offset], count | has_cache);
+		WRITE_ONCE(p->swap_map[offset + i], count | has_cache);
+	}
 
-unlock_out:
 	unlock_cluster_or_swap_info(p, ci);
 	return err;
 }
@@ -3451,9 +3447,12 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
  * Help swapoff by noting that swap entry belongs to shmem/tmpfs
  * (in which case its reference count is never incremented).
  */
-void swap_shmem_alloc(swp_entry_t entry)
+void swap_shmem_alloc(swp_entry_t entry, int nr)
 {
-	__swap_duplicate(entry, SWAP_MAP_SHMEM);
+	struct swap_info_struct *p = swp_swap_info(entry);
+	unsigned long offset = swp_offset(entry);
+
+	__swap_duplicate(p, offset, nr, SWAP_MAP_SHMEM);
 }
 
 /*
@@ -3465,9 +3464,11 @@ void swap_shmem_alloc(swp_entry_t entry)
  */
 int swap_duplicate(swp_entry_t entry)
 {
+	struct swap_info_struct *p = swp_swap_info(entry);
+	unsigned long offset = swp_offset(entry);
 	int err = 0;
 
-	while (!err && __swap_duplicate(entry, 1) == -ENOMEM)
+	while (!err && __swap_duplicate(p, offset, 1, 1) == -ENOMEM)
 		err = add_swap_count_continuation(entry, GFP_ATOMIC);
 	return err;
 }
@@ -3482,7 +3483,10 @@ int swap_duplicate(swp_entry_t entry)
  */
 int swapcache_prepare(swp_entry_t entry)
 {
-	return __swap_duplicate(entry, SWAP_HAS_CACHE);
+	struct swap_info_struct *p = swp_swap_info(entry);
+	unsigned long offset = swp_offset(entry);
+
+	return __swap_duplicate(p, offset, 1, SWAP_HAS_CACHE);
 }
 
 void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry)
-- 
2.39.3




* [RFC PATCH 4/8] mm: shmem: support large folio allocation for shmem_replace_folio()
  2024-05-21 11:03 [RFC PATCH 0/8] support large folio swap-out and swap-in for shmem Baolin Wang
                   ` (2 preceding siblings ...)
  2024-05-21 11:03 ` [RFC PATCH 3/8] mm: swap: extend swap_shmem_alloc() to support batch SWAP_MAP_SHMEM flag setting Baolin Wang
@ 2024-05-21 11:03 ` Baolin Wang
  2024-05-21 11:03 ` [RFC PATCH 5/8] mm: shmem: extend shmem_partial_swap_usage() to support large folio swap Baolin Wang
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Baolin Wang @ 2024-05-21 11:03 UTC (permalink / raw)
  To: akpm, hughd
  Cc: willy, david, ioworker0, hrisl, p.raghav, da.gomez,
	wangkefeng.wang, ying.huang, 21cnbao, ryan.roberts, shy828301,
	ziy, baolin.wang, linux-mm, linux-kernel

To support large folio swap-in for shmem in the following patches, allow
shmem_replace_folio() to allocate a large replacement folio, and update the
statistics using the number of pages in the folio.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/shmem.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index daab124c3e61..74821a7031b8 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1901,8 +1901,7 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
 	 * limit chance of success by further cpuset and node constraints.
 	 */
 	gfp &= ~GFP_CONSTRAINT_MASK;
-	VM_BUG_ON_FOLIO(folio_test_large(old), old);
-	new = shmem_alloc_folio(gfp, info, index);
+	new = shmem_alloc_hugefolio(gfp, info, index, folio_order(old));
 	if (!new)
 		return -ENOMEM;
 
@@ -1923,11 +1922,13 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
 	xa_lock_irq(&swap_mapping->i_pages);
 	error = shmem_replace_entry(swap_mapping, swap_index, old, new);
 	if (!error) {
+		int nr_pages = folio_nr_pages(old);
+
 		mem_cgroup_migrate(old, new);
-		__lruvec_stat_mod_folio(new, NR_FILE_PAGES, 1);
-		__lruvec_stat_mod_folio(new, NR_SHMEM, 1);
-		__lruvec_stat_mod_folio(old, NR_FILE_PAGES, -1);
-		__lruvec_stat_mod_folio(old, NR_SHMEM, -1);
+		__lruvec_stat_mod_folio(new, NR_FILE_PAGES, nr_pages);
+		__lruvec_stat_mod_folio(new, NR_SHMEM, nr_pages);
+		__lruvec_stat_mod_folio(old, NR_FILE_PAGES, -nr_pages);
+		__lruvec_stat_mod_folio(old, NR_SHMEM, -nr_pages);
 	}
 	xa_unlock_irq(&swap_mapping->i_pages);
 
-- 
2.39.3




* [RFC PATCH 5/8] mm: shmem: extend shmem_partial_swap_usage() to support large folio swap
  2024-05-21 11:03 [RFC PATCH 0/8] support large folio swap-out and swap-in for shmem Baolin Wang
                   ` (3 preceding siblings ...)
  2024-05-21 11:03 ` [RFC PATCH 4/8] mm: shmem: support large folio allocation for shmem_replace_folio() Baolin Wang
@ 2024-05-21 11:03 ` Baolin Wang
  2024-05-21 11:03 ` [RFC PATCH 6/8] mm: add new 'orders' parameter for find_get_entries() and find_lock_entries() Baolin Wang
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 11+ messages in thread
From: Baolin Wang @ 2024-05-21 11:03 UTC (permalink / raw)
  To: akpm, hughd
  Cc: willy, david, ioworker0, hrisl, p.raghav, da.gomez,
	wangkefeng.wang, ying.huang, 21cnbao, ryan.roberts, shy828301,
	ziy, baolin.wang, linux-mm, linux-kernel

To support shmem large folio swap-out in the following patches, use
xa_get_order() to get the order of a swap entry when calculating the swap
usage of shmem.
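
The accounting change amounts to: a value (swap) entry of order N stands for
1 << N swapped pages. A userspace sketch (the entry array stands in for the
xarray, and the fields for xa_is_value()/xa_get_order()):

```c
#include <assert.h>

struct entry {
	int is_swap;	/* xa_is_value() */
	int order;	/* xa_get_order() for swap entries */
};

/* Sum swapped pages across a range: a large swap entry counts for its
 * full 1 << order pages instead of 1. */
static unsigned long partial_swap_usage(const struct entry *entries, int n)
{
	unsigned long swapped = 0;

	for (int i = 0; i < n; i++)
		if (entries[i].is_swap)
			swapped += 1UL << entries[i].order;
	return swapped;
}
```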

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/shmem.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 74821a7031b8..bc099e8b9952 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -865,13 +865,16 @@ unsigned long shmem_partial_swap_usage(struct address_space *mapping,
 	struct page *page;
 	unsigned long swapped = 0;
 	unsigned long max = end - 1;
+	int order;
 
 	rcu_read_lock();
 	xas_for_each(&xas, page, max) {
 		if (xas_retry(&xas, page))
 			continue;
-		if (xa_is_value(page))
-			swapped++;
+		if (xa_is_value(page)) {
+			order = xa_get_order(xas.xa, xas.xa_index);
+			swapped += 1 << order;
+		}
 		if (xas.xa_index == max)
 			break;
 		if (need_resched()) {
-- 
2.39.3




* [RFC PATCH 6/8] mm: add new 'orders' parameter for find_get_entries() and find_lock_entries()
  2024-05-21 11:03 [RFC PATCH 0/8] support large folio swap-out and swap-in for shmem Baolin Wang
                   ` (4 preceding siblings ...)
  2024-05-21 11:03 ` [RFC PATCH 5/8] mm: shmem: extend shmem_partial_swap_usage() to support large folio swap Baolin Wang
@ 2024-05-21 11:03 ` Baolin Wang
  2024-05-21 11:03 ` [RFC PATCH 7/8] mm: shmem: use swap_free_nr() to free shmem swap entries Baolin Wang
  2024-05-21 11:03 ` [RFC PATCH 8/8] mm: shmem: support large folio swap out Baolin Wang
  7 siblings, 0 replies; 11+ messages in thread
From: Baolin Wang @ 2024-05-21 11:03 UTC (permalink / raw)
  To: akpm, hughd
  Cc: willy, david, ioworker0, hrisl, p.raghav, da.gomez,
	wangkefeng.wang, ying.huang, 21cnbao, ryan.roberts, shy828301,
	ziy, baolin.wang, linux-mm, linux-kernel

In the following patches, shmem will support swapping out large folios,
which means shmem mappings may contain large-order swap entries. Add an
'orders' array parameter to find_get_entries() and find_lock_entries() so
that callers can obtain the order of shmem swap entries, which helps when
releasing shmem large folio swap entries.
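
A key consumer of the new 'orders' array is the scan-start bookkeeping:
after filling a batch, the next index must step over the whole large swap
entry, not a single page. A sketch with simplified stand-in names:

```c
#include <assert.h>

/* Compute the next scan start after the last batch slot: a present folio
 * advances by folio_nr_pages(), a value (swap) entry by 1 << order. */
static unsigned long next_scan_start(unsigned long index, int is_value,
				     int order, unsigned long folio_pages)
{
	unsigned long nr = is_value ? (1UL << order) : folio_pages;

	return index + nr;
}
```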

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/filemap.c  | 27 +++++++++++++++++++++++++--
 mm/internal.h |  4 ++--
 mm/shmem.c    | 17 +++++++++--------
 mm/truncate.c |  8 ++++----
 4 files changed, 40 insertions(+), 16 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index ec273b00ce5f..9d8544df5e4a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2036,14 +2036,24 @@ static inline struct folio *find_get_entry(struct xa_state *xas, pgoff_t max,
  * Return: The number of entries which were found.
  */
 unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
-		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices)
+		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices,
+		int *orders)
 {
 	XA_STATE(xas, &mapping->i_pages, *start);
 	struct folio *folio;
+	int order;
 
 	rcu_read_lock();
 	while ((folio = find_get_entry(&xas, end, XA_PRESENT)) != NULL) {
 		indices[fbatch->nr] = xas.xa_index;
+		if (orders) {
+			if (!xa_is_value(folio))
+				order = folio_order(folio);
+			else
+				order = xa_get_order(xas.xa, xas.xa_index);
+
+			orders[fbatch->nr] = order;
+		}
 		if (!folio_batch_add(fbatch, folio))
 			break;
 	}
@@ -2056,6 +2066,8 @@ unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
 		folio = fbatch->folios[idx];
 		if (!xa_is_value(folio))
 			nr = folio_nr_pages(folio);
+		else if (orders)
+			nr = 1 << orders[idx];
 		*start = indices[idx] + nr;
 	}
 	return folio_batch_count(fbatch);
@@ -2082,10 +2094,12 @@ unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
  * Return: The number of entries which were found.
  */
 unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
-		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices)
+		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices,
+		int *orders)
 {
 	XA_STATE(xas, &mapping->i_pages, *start);
 	struct folio *folio;
+	int order;
 
 	rcu_read_lock();
 	while ((folio = find_get_entry(&xas, end, XA_PRESENT))) {
@@ -2099,9 +2113,16 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
 			if (folio->mapping != mapping ||
 			    folio_test_writeback(folio))
 				goto unlock;
+			if (orders)
+				order = folio_order(folio);
 			VM_BUG_ON_FOLIO(!folio_contains(folio, xas.xa_index),
 					folio);
+		} else if (orders) {
+			order = xa_get_order(xas.xa, xas.xa_index);
 		}
+
+		if (orders)
+			orders[fbatch->nr] = order;
 		indices[fbatch->nr] = xas.xa_index;
 		if (!folio_batch_add(fbatch, folio))
 			break;
@@ -2120,6 +2141,8 @@ unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
 		folio = fbatch->folios[idx];
 		if (!xa_is_value(folio))
 			nr = folio_nr_pages(folio);
+		else if (orders)
+			nr = 1 << orders[idx];
 		*start = indices[idx] + nr;
 	}
 	return folio_batch_count(fbatch);
diff --git a/mm/internal.h b/mm/internal.h
index 17b0a1824948..755df223cd3a 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -339,9 +339,9 @@ static inline void force_page_cache_readahead(struct address_space *mapping,
 }
 
 unsigned find_lock_entries(struct address_space *mapping, pgoff_t *start,
-		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices);
+		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices, int *orders);
 unsigned find_get_entries(struct address_space *mapping, pgoff_t *start,
-		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices);
+		pgoff_t end, struct folio_batch *fbatch, pgoff_t *indices, int *orders);
 void filemap_free_folio(struct address_space *mapping, struct folio *folio);
 int truncate_inode_folio(struct address_space *mapping, struct folio *folio);
 bool truncate_inode_partial_folio(struct folio *folio, loff_t start,
diff --git a/mm/shmem.c b/mm/shmem.c
index bc099e8b9952..b3e39d9cf42c 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -840,14 +840,14 @@ static void shmem_delete_from_page_cache(struct folio *folio, void *radswap)
  * Remove swap entry from page cache, free the swap and its page cache.
  */
 static int shmem_free_swap(struct address_space *mapping,
-			   pgoff_t index, void *radswap)
+			   pgoff_t index, void *radswap, int order)
 {
 	void *old;
 
 	old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
 	if (old != radswap)
 		return -ENOENT;
-	free_swap_and_cache(radix_to_swp_entry(radswap));
+	free_swap_and_cache_nr(radix_to_swp_entry(radswap), 1 << order);
 	return 0;
 }
 
@@ -981,6 +981,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 	pgoff_t end = (lend + 1) >> PAGE_SHIFT;
 	struct folio_batch fbatch;
 	pgoff_t indices[PAGEVEC_SIZE];
+	int orders[PAGEVEC_SIZE];
 	struct folio *folio;
 	bool same_folio;
 	long nr_swaps_freed = 0;
@@ -996,15 +997,15 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 	folio_batch_init(&fbatch);
 	index = start;
 	while (index < end && find_lock_entries(mapping, &index, end - 1,
-			&fbatch, indices)) {
+			&fbatch, indices, orders)) {
 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
 			folio = fbatch.folios[i];
 
 			if (xa_is_value(folio)) {
 				if (unfalloc)
 					continue;
-				nr_swaps_freed += !shmem_free_swap(mapping,
-							indices[i], folio);
+				if (!shmem_free_swap(mapping, indices[i], folio, orders[i]))
+					nr_swaps_freed += 1 << orders[i];
 				continue;
 			}
 
@@ -1058,7 +1059,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 		cond_resched();
 
 		if (!find_get_entries(mapping, &index, end - 1, &fbatch,
-				indices)) {
+				indices, orders)) {
 			/* If all gone or hole-punch or unfalloc, we're done */
 			if (index == start || end != -1)
 				break;
@@ -1072,12 +1073,12 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 			if (xa_is_value(folio)) {
 				if (unfalloc)
 					continue;
-				if (shmem_free_swap(mapping, indices[i], folio)) {
+				if (shmem_free_swap(mapping, indices[i], folio, orders[i])) {
 					/* Swap was replaced by page: retry */
 					index = indices[i];
 					break;
 				}
-				nr_swaps_freed++;
+				nr_swaps_freed += 1 << orders[i];
 				continue;
 			}
 
diff --git a/mm/truncate.c b/mm/truncate.c
index e99085bf3d34..514834045bc8 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -352,7 +352,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	folio_batch_init(&fbatch);
 	index = start;
 	while (index < end && find_lock_entries(mapping, &index, end - 1,
-			&fbatch, indices)) {
+			&fbatch, indices, NULL)) {
 		truncate_folio_batch_exceptionals(mapping, &fbatch, indices);
 		for (i = 0; i < folio_batch_count(&fbatch); i++)
 			truncate_cleanup_folio(fbatch.folios[i]);
@@ -392,7 +392,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	while (index < end) {
 		cond_resched();
 		if (!find_get_entries(mapping, &index, end - 1, &fbatch,
-				indices)) {
+				indices, NULL)) {
 			/* If all gone from start onwards, we're done */
 			if (index == start)
 				break;
@@ -496,7 +496,7 @@ unsigned long mapping_try_invalidate(struct address_space *mapping,
 	int i;
 
 	folio_batch_init(&fbatch);
-	while (find_lock_entries(mapping, &index, end, &fbatch, indices)) {
+	while (find_lock_entries(mapping, &index, end, &fbatch, indices, NULL)) {
 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
 			struct folio *folio = fbatch.folios[i];
 
@@ -622,7 +622,7 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 
 	folio_batch_init(&fbatch);
 	index = start;
-	while (find_get_entries(mapping, &index, end, &fbatch, indices)) {
+	while (find_get_entries(mapping, &index, end, &fbatch, indices, NULL)) {
 		for (i = 0; i < folio_batch_count(&fbatch); i++) {
 			struct folio *folio = fbatch.folios[i];
 
-- 
2.39.3




* [RFC PATCH 7/8] mm: shmem: use swap_free_nr() to free shmem swap entries
  2024-05-21 11:03 [RFC PATCH 0/8] support large folio swap-out and swap-in for shmem Baolin Wang
                   ` (5 preceding siblings ...)
  2024-05-21 11:03 ` [RFC PATCH 6/8] mm: add new 'orders' parameter for find_get_entries() and find_lock_entries() Baolin Wang
@ 2024-05-21 11:03 ` Baolin Wang
  2024-05-21 11:03 ` [RFC PATCH 8/8] mm: shmem: support large folio swap out Baolin Wang
  7 siblings, 0 replies; 11+ messages in thread
From: Baolin Wang @ 2024-05-21 11:03 UTC (permalink / raw)
  To: akpm, hughd
  Cc: willy, david, ioworker0, hrisl, p.raghav, da.gomez,
	wangkefeng.wang, ying.huang, 21cnbao, ryan.roberts, shy828301,
	ziy, baolin.wang, linux-mm, linux-kernel

As a preparation for supporting shmem large folio swap-out, use
swap_free_nr() to free the contiguous swap entries of a shmem large folio
when the large folio is swapped in from the swap cache.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/shmem.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index b3e39d9cf42c..fdc71e14916c 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1962,6 +1962,7 @@ static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
 	struct address_space *mapping = inode->i_mapping;
 	swp_entry_t swapin_error;
 	void *old;
+	int nr_pages;
 
 	swapin_error = make_poisoned_swp_entry();
 	old = xa_cmpxchg_irq(&mapping->i_pages, index,
@@ -1970,6 +1971,7 @@ static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
 	if (old != swp_to_radix_entry(swap))
 		return;
 
+	nr_pages = folio_nr_pages(folio);
 	folio_wait_writeback(folio);
 	delete_from_swap_cache(folio);
 	/*
@@ -1977,8 +1979,8 @@ static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
 	 * won't be 0 when inode is released and thus trigger WARN_ON(i_blocks)
 	 * in shmem_evict_inode().
 	 */
-	shmem_recalc_inode(inode, -1, -1);
-	swap_free(swap);
+	shmem_recalc_inode(inode, -nr_pages, -nr_pages);
+	swap_free_nr(swap, nr_pages);
 }
 
 /*
@@ -1997,7 +1999,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	struct swap_info_struct *si;
 	struct folio *folio = NULL;
 	swp_entry_t swap;
-	int error;
+	int error, nr_pages;
 
 	VM_BUG_ON(!*foliop || !xa_is_value(*foliop));
 	swap = radix_to_swp_entry(*foliop);
@@ -2044,6 +2046,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 		goto failed;
 	}
 	folio_wait_writeback(folio);
+	nr_pages = folio_nr_pages(folio);
 
 	/*
 	 * Some architectures may have to restore extra metadata to the
@@ -2062,14 +2065,14 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 	if (error)
 		goto failed;
 
-	shmem_recalc_inode(inode, 0, -1);
+	shmem_recalc_inode(inode, 0, -nr_pages);
 
 	if (sgp == SGP_WRITE)
 		folio_mark_accessed(folio);
 
 	delete_from_swap_cache(folio);
 	folio_mark_dirty(folio);
-	swap_free(swap);
+	swap_free_nr(swap, nr_pages);
 	put_swap_device(si);
 
 	*foliop = folio;
-- 
2.39.3




* [RFC PATCH 8/8] mm: shmem: support large folio swap out
  2024-05-21 11:03 [RFC PATCH 0/8] support large folio swap-out and swap-in for shmem Baolin Wang
                   ` (6 preceding siblings ...)
  2024-05-21 11:03 ` [RFC PATCH 7/8] mm: shmem: use swap_free_nr() to free shmem swap entries Baolin Wang
@ 2024-05-21 11:03 ` Baolin Wang
  7 siblings, 0 replies; 11+ messages in thread
From: Baolin Wang @ 2024-05-21 11:03 UTC (permalink / raw)
  To: akpm, hughd
  Cc: willy, david, ioworker0, hrisl, p.raghav, da.gomez,
	wangkefeng.wang, ying.huang, 21cnbao, ryan.roberts, shy828301,
	ziy, baolin.wang, linux-mm, linux-kernel

Shmem will support large folio allocation [1] [2] to get better performance;
however, memory reclaim still splits these precious large folios when trying
to swap out shmem, which may lead to memory fragmentation and fails to take
advantage of large folios for shmem.

Moreover, the swap code already supports swapping out large folios without
splitting them; hence this patch supports large folio swap-out for shmem.

Note that the i915_gem_shmem driver still needs its folios to be split when
swapping out, so add a new 'split_large_folio' flag to writeback_control to
indicate that the large folio should be split.
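
The writeback-side behavior can be sketched as follows (the wbc struct is a
simplified stand-in for writeback_control):

```c
#include <assert.h>
#include <stdbool.h>

struct wbc {
	unsigned split_large_folio : 1;	/* set by __shmem_writeback() */
};

/* Only callers that cannot handle large folios in swap (the i915 shmem
 * writeback path) request the split; regular reclaim leaves the folio
 * whole so it can be swapped out as one unit. */
static bool should_split(const struct wbc *wbc, bool folio_is_large)
{
	return wbc->split_large_folio && folio_is_large;
}
```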

[1] https://lore.kernel.org/all/cover.1715571279.git.baolin.wang@linux.alibaba.com/
[2] https://lore.kernel.org/all/20240515055719.32577-1-da.gomez@samsung.com/
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c |  1 +
 include/linux/writeback.h                 |  1 +
 mm/shmem.c                                |  3 +--
 mm/vmscan.c                               | 14 ++++++++++++--
 4 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 38b72d86560f..968274be14ef 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -308,6 +308,7 @@ void __shmem_writeback(size_t size, struct address_space *mapping)
 		.range_start = 0,
 		.range_end = LLONG_MAX,
 		.for_reclaim = 1,
+		.split_large_folio = 1,
 	};
 	unsigned long i;
 
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 112d806ddbe4..6f2599244ae0 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -63,6 +63,7 @@ struct writeback_control {
 	unsigned range_cyclic:1;	/* range_start is cyclic */
 	unsigned for_sync:1;		/* sync(2) WB_SYNC_ALL writeback */
 	unsigned unpinned_netfs_wb:1;	/* Cleared I_PINNING_NETFS_WB */
+	unsigned split_large_folio:1;	/* Split large folio for shmem writeback */
 
 	/*
 	 * When writeback IOs are bounced through async layers, only the
diff --git a/mm/shmem.c b/mm/shmem.c
index fdc71e14916c..6645169aa913 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -776,7 +776,6 @@ static int shmem_add_to_page_cache(struct folio *folio,
 	VM_BUG_ON_FOLIO(index != round_down(index, nr), folio);
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 	VM_BUG_ON_FOLIO(!folio_test_swapbacked(folio), folio);
-	VM_BUG_ON(expected && folio_test_large(folio));
 
 	folio_ref_add(folio, nr);
 	folio->mapping = mapping;
@@ -1460,7 +1459,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 	 * "force", drivers/gpu/drm/i915/gem/i915_gem_shmem.c gets huge pages,
 	 * and its shmem_writeback() needs them to be split when swapping.
 	 */
-	if (folio_test_large(folio)) {
+	if (wbc->split_large_folio && folio_test_large(folio)) {
 		/* Ensure the subpages are still dirty */
 		folio_test_set_dirty(folio);
 		if (split_huge_page(page) < 0)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index bf11c0cbf12e..856286e84d62 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1260,8 +1260,12 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 			if (!total_swap_pages)
 				goto activate_locked;
 
-			/* Split shmem folio */
-			if (split_folio_to_list(folio, folio_list))
+			/*
+			 * Only split shmem folio when CONFIG_THP_SWAP
+			 * is not enabled.
+			 */
+			if (!IS_ENABLED(CONFIG_THP_SWAP) &&
+			    split_folio_to_list(folio, folio_list))
 				goto keep_locked;
 		}
 
@@ -1363,10 +1367,16 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 			 * starts and then write it out here.
 			 */
 			try_to_unmap_flush_dirty();
+try_pageout:
 			switch (pageout(folio, mapping, &plug)) {
 			case PAGE_KEEP:
 				goto keep_locked;
 			case PAGE_ACTIVATE:
+				if (shmem_mapping(mapping) && folio_test_large(folio) &&
+				    !split_folio_to_list(folio, folio_list)) {
+					nr_pages = 1;
+					goto try_pageout;
+				}
 				goto activate_locked;
 			case PAGE_SUCCESS:
 				stat->nr_pageout += nr_pages;
-- 
2.39.3




* Re: [RFC PATCH 1/8] mm: fix shmem swapout statistic
  2024-05-21 11:03 ` [RFC PATCH 1/8] mm: fix shmem swapout statistic Baolin Wang
@ 2024-05-22  7:16   ` Huang, Ying
  2024-05-22  7:54     ` Baolin Wang
  0 siblings, 1 reply; 11+ messages in thread
From: Huang, Ying @ 2024-05-22  7:16 UTC (permalink / raw)
  To: Baolin Wang
  Cc: akpm, hughd, willy, david, ioworker0, hrisl, p.raghav, da.gomez,
	wangkefeng.wang, 21cnbao, ryan.roberts, shy828301, ziy, linux-mm,
	linux-kernel

Baolin Wang <baolin.wang@linux.alibaba.com> writes:

> As we know, shmem not only supports the sharing of anonymous pages, but also
> the RAM-based temporary filesystem. Therefore, shmem swapouts should not be
> counted in the anonymous swapout statistics. Fix it by adding folio_test_anon().
>
> Fixes: d0f048ac39f6 ("mm: add per-order mTHP anon_swpout and anon_swpout_fallback counters")
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
>  mm/page_io.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/mm/page_io.c b/mm/page_io.c
> index 46c603dddf04..b181b81f39e3 100644
> --- a/mm/page_io.c
> +++ b/mm/page_io.c
> @@ -217,7 +217,9 @@ static inline void count_swpout_vm_event(struct folio *folio)
>  		count_memcg_folio_events(folio, THP_SWPOUT, 1);
>  		count_vm_event(THP_SWPOUT);
>  	}
> -	count_mthp_stat(folio_order(folio), MTHP_STAT_ANON_SWPOUT);
> +
> +	if (folio_test_anon(folio))
> +		count_mthp_stat(folio_order(folio), MTHP_STAT_ANON_SWPOUT);

Do we need to distinguish between anonymous and non-anonymous swapout?
IMHO, we don't, just as we don't for small folios and THP.

If so, how about fix this in another direction?  That is, remove "ANON"
from mTHP swapout statistics?

>  #endif
>  	count_vm_events(PSWPOUT, folio_nr_pages(folio));
>  }

--
Best Regards,
Huang, Ying



* Re: [RFC PATCH 1/8] mm: fix shmem swapout statistic
  2024-05-22  7:16   ` Huang, Ying
@ 2024-05-22  7:54     ` Baolin Wang
  0 siblings, 0 replies; 11+ messages in thread
From: Baolin Wang @ 2024-05-22  7:54 UTC (permalink / raw)
  To: Huang, Ying
  Cc: akpm, hughd, willy, david, ioworker0, hrisl, p.raghav, da.gomez,
	wangkefeng.wang, 21cnbao, ryan.roberts, shy828301, ziy, linux-mm,
	linux-kernel



On 2024/5/22 15:16, Huang, Ying wrote:
> Baolin Wang <baolin.wang@linux.alibaba.com> writes:
> 
>> As we know, shmem not only supports the sharing of anonymous pages, but also
>> the RAM-based temporary filesystem. Therefore, shmem swapouts should not be
>> counted in the anonymous swapout statistics. Fix it by adding folio_test_anon().
>>
>> Fixes: d0f048ac39f6 ("mm: add per-order mTHP anon_swpout and anon_swpout_fallback counters")
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> ---
>>   mm/page_io.c | 4 +++-
>>   1 file changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/page_io.c b/mm/page_io.c
>> index 46c603dddf04..b181b81f39e3 100644
>> --- a/mm/page_io.c
>> +++ b/mm/page_io.c
>> @@ -217,7 +217,9 @@ static inline void count_swpout_vm_event(struct folio *folio)
>>   		count_memcg_folio_events(folio, THP_SWPOUT, 1);
>>   		count_vm_event(THP_SWPOUT);
>>   	}
>> -	count_mthp_stat(folio_order(folio), MTHP_STAT_ANON_SWPOUT);
>> +
>> +	if (folio_test_anon(folio))
>> +		count_mthp_stat(folio_order(folio), MTHP_STAT_ANON_SWPOUT);
> 
> Do we need to distinguish between anonymous and non-anonymous swapout?
> IMHO, we don't, just as we don't for small folios and THP.

Yes, the old counters did not carry an 'anon_' prefix.

> If so, how about fix this in another direction?  That is, remove "ANON"
> from mTHP swapout statistics?

This sounds good to me. I will split this out as an individual patch 
following your suggestion. Thanks.



end of thread, other threads:[~2024-05-22  7:54 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-05-21 11:03 [RFC PATCH 0/8] support large folio swap-out and swap-in for shmem Baolin Wang
2024-05-21 11:03 ` [RFC PATCH 1/8] mm: fix shmem swapout statistic Baolin Wang
2024-05-22  7:16   ` Huang, Ying
2024-05-22  7:54     ` Baolin Wang
2024-05-21 11:03 ` [RFC PATCH 2/8] mm: vmscan: add validation before spliting shmem large folio Baolin Wang
2024-05-21 11:03 ` [RFC PATCH 3/8] mm: swap: extend swap_shmem_alloc() to support batch SWAP_MAP_SHMEM flag setting Baolin Wang
2024-05-21 11:03 ` [RFC PATCH 4/8] mm: shmem: support large folio allocation for shmem_replace_folio() Baolin Wang
2024-05-21 11:03 ` [RFC PATCH 5/8] mm: shmem: extend shmem_partial_swap_usage() to support large folio swap Baolin Wang
2024-05-21 11:03 ` [RFC PATCH 6/8] mm: add new 'orders' parameter for find_get_entries() and find_lock_entries() Baolin Wang
2024-05-21 11:03 ` [RFC PATCH 7/8] mm: shmem: use swap_free_nr() to free shmem swap entries Baolin Wang
2024-05-21 11:03 ` [RFC PATCH 8/8] mm: shmem: support large folio swap out Baolin Wang
