linux-mm.kvack.org archive mirror
* [RFC PATCH] mm/shmem: add mTHP swpout fallback statistics in shmem_writeout()
@ 2025-12-12  3:12 Weilin Tong
  2025-12-12  5:48 ` Baolin Wang
  0 siblings, 1 reply; 2+ messages in thread
From: Weilin Tong @ 2025-12-12  3:12 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Baolin Wang, Andrew Morton, linux-mm, linux-kernel, Weilin Tong

Currently, when shmem mTHPs are split and swapped out via shmem_writeout(),
there are no unified statistics to trace these mTHP swpout fallback events.
This makes it difficult to analyze the prevalence of mTHP splitting and
fallback during swap operations, which is important for memory diagnostics.

Add counting of these mTHP swpout fallback events when a large shmem folio
is split to small pages and swapped out in shmem_writeout(). The folio order
is recorded before the split, since a successful split_folio_to_list() leaves
only order-0 folios behind.
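
With this change, the fallback events should show up in the existing THP/mTHP
statistics interfaces, roughly as follows (an illustrative sketch, assuming the
current sysfs layout for per-size mTHP stats; <size> stands for the mTHP size
in question):

  /proc/vmstat:
    thp_swpout_fallback    (PMD-sized folios only)
  /sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/stats:
    swpout_fallback        (counted per mTHP size)

For PMD-sized folios, the count_memcg_folio_events() call should likewise bump
the per-memcg thp_swpout_fallback counter exposed in memory.stat on cgroup v2.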

Signed-off-by: Weilin Tong <tongweilin@linux.alibaba.com>
---
 mm/shmem.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/mm/shmem.c b/mm/shmem.c
index 3f194c9842a8..aa624c447358 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1593,11 +1593,23 @@ int shmem_writeout(struct folio *folio, struct swap_iocb **plug,
 	}
 
 	if (split) {
+		int order;
+
 try_split:
+		order = folio_order(folio);
 		/* Ensure the subpages are still dirty */
 		folio_test_set_dirty(folio);
 		if (split_folio_to_list(folio, folio_list))
 			goto redirty;
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+		if (order >= HPAGE_PMD_ORDER) {
+			count_memcg_folio_events(folio, THP_SWPOUT_FALLBACK, 1);
+			count_vm_event(THP_SWPOUT_FALLBACK);
+		}
+#endif
+		count_mthp_stat(order, MTHP_STAT_SWPOUT_FALLBACK);
+
 		folio_clear_dirty(folio);
 	}
 
-- 
2.43.5




* Re: [RFC PATCH] mm/shmem: add mTHP swpout fallback statistics in shmem_writeout()
  2025-12-12  3:12 [RFC PATCH] mm/shmem: add mTHP swpout fallback statistics in shmem_writeout() Weilin Tong
@ 2025-12-12  5:48 ` Baolin Wang
  0 siblings, 0 replies; 2+ messages in thread
From: Baolin Wang @ 2025-12-12  5:48 UTC (permalink / raw)
  To: Weilin Tong, Hugh Dickins; +Cc: Andrew Morton, linux-mm, linux-kernel



On 2025/12/12 11:12, Weilin Tong wrote:
> Currently, when shmem mTHPs are split and swapped out via shmem_writeout(),
> there are no unified statistics to trace these mTHP swpout fallback events.
> This makes it difficult to analyze the prevalence of mTHP splitting and
> fallback during swap operations, which is important for memory diagnostics.
> 
> Add counting of these mTHP swpout fallback events when a large shmem folio
> is split to small pages and swapped out in shmem_writeout(). The folio order
> is recorded before the split, since a successful split_folio_to_list() leaves
> only order-0 folios behind.
> 
> Signed-off-by: Weilin Tong <tongweilin@linux.alibaba.com>
> ---

Looks reasonable to me. Thanks.
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>

>   mm/shmem.c | 12 ++++++++++++
>   1 file changed, 12 insertions(+)
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 3f194c9842a8..aa624c447358 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1593,11 +1593,23 @@ int shmem_writeout(struct folio *folio, struct swap_iocb **plug,
>   	}
>   
>   	if (split) {
> +		int order;
> +
>   try_split:
> +		order = folio_order(folio);
>   		/* Ensure the subpages are still dirty */
>   		folio_test_set_dirty(folio);
>   		if (split_folio_to_list(folio, folio_list))
>   			goto redirty;
> +
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +		if (order >= HPAGE_PMD_ORDER) {
> +			count_memcg_folio_events(folio, THP_SWPOUT_FALLBACK, 1);
> +			count_vm_event(THP_SWPOUT_FALLBACK);
> +		}
> +#endif
> +		count_mthp_stat(order, MTHP_STAT_SWPOUT_FALLBACK);
> +
>   		folio_clear_dirty(folio);
>   	}
>   


