* [PATCH] mm: add per-order mTHP swap-in fallback counters
@ 2024-11-21 16:27 Wenchao Hao
  2024-11-21 22:32 ` Barry Song
From: Wenchao Hao @ 2024-11-21 16:27 UTC (permalink / raw)
  To: Jonathan Corbet, Andrew Morton, David Hildenbrand, Barry Song,
	Ryan Roberts, Baolin Wang, Usama Arif, Lance Yang,
	Matthew Wilcox, Peter Xu, linux-doc, linux-kernel, linux-mm
  Cc: Wenchao Hao, Chuanhua Han

Large folio swap-in is now supported, but there is no way to measure
how often it succeeds. Similar to anon_fault_fallback, add a per-order
mTHP swpin_fallback counter so the success ratio can be calculated. The
new counter is located at:

/sys/kernel/mm/transparent_hugepage/hugepages-<size>/stats/swpin_fallback
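
The success ratio for a given order can then be derived as
swpin / (swpin + swpin_fallback). For illustration only (not part of
this patch), a minimal userspace sketch that reads the two counters and
prints the ratio; the hugepages-64kB directory is just an example size:

/*
 * Hypothetical example, not part of this patch: read the per-order
 * swpin and swpin_fallback counters and print the swap-in success
 * ratio. Only the sysfs path and counter names come from the patch;
 * the "64kB" size and the program itself are illustrative.
 */
#include <stdio.h>

static long read_stat(const char *size, const char *name)
{
	char path[256];
	long val;
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/kernel/mm/transparent_hugepage/hugepages-%s/stats/%s",
		 size, name);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	long swpin = read_stat("64kB", "swpin");
	long fallback = read_stat("64kB", "swpin_fallback");

	if (swpin < 0 || fallback < 0) {
		fprintf(stderr, "counters not available\n");
		return 1;
	}
	if (swpin + fallback)
		printf("swap-in success ratio: %.2f%%\n",
		       100.0 * swpin / (swpin + fallback));
	return 0;
}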

Signed-off-by: Wenchao Hao <haowenchao22@gmail.com>
CC: Chuanhua Han <hanchuanhua@oppo.com>
---
 Documentation/admin-guide/mm/transhuge.rst | 5 +++++
 include/linux/huge_mm.h                    | 1 +
 mm/huge_memory.c                           | 3 +++
 mm/memory.c                                | 1 +
 4 files changed, 10 insertions(+)

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index 5034915f4e8e..f5c775457913 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -561,6 +561,11 @@ swpin
 	is incremented every time a huge page is swapped in from a non-zswap
 	swap device in one piece.
 
+swpin_fallback
+	is incremented if swapin fails to allocate a huge page and instead
+	falls back to using huge pages with lower orders or small pages.
+
 swpout
 	is incremented every time a huge page is swapped out to a non-zswap
 	swap device in one piece without splitting.
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index b94c2e8ee918..dcf08f8fdf52 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -121,6 +121,7 @@ enum mthp_stat_item {
 	MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
 	MTHP_STAT_ZSWPOUT,
 	MTHP_STAT_SWPIN,
+	MTHP_STAT_SWPIN_FALLBACK,
 	MTHP_STAT_SWPOUT,
 	MTHP_STAT_SWPOUT_FALLBACK,
 	MTHP_STAT_SHMEM_ALLOC,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ee335d96fc39..6b089a41acef 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -617,6 +617,7 @@ DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
 DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
 DEFINE_MTHP_STAT_ATTR(zswpout, MTHP_STAT_ZSWPOUT);
 DEFINE_MTHP_STAT_ATTR(swpin, MTHP_STAT_SWPIN);
+DEFINE_MTHP_STAT_ATTR(swpin_fallback, MTHP_STAT_SWPIN_FALLBACK);
 DEFINE_MTHP_STAT_ATTR(swpout, MTHP_STAT_SWPOUT);
 DEFINE_MTHP_STAT_ATTR(swpout_fallback, MTHP_STAT_SWPOUT_FALLBACK);
 #ifdef CONFIG_SHMEM
@@ -637,6 +638,7 @@ static struct attribute *anon_stats_attrs[] = {
 #ifndef CONFIG_SHMEM
 	&zswpout_attr.attr,
 	&swpin_attr.attr,
+	&swpin_fallback_attr.attr,
 	&swpout_attr.attr,
 	&swpout_fallback_attr.attr,
 #endif
@@ -669,6 +671,7 @@ static struct attribute *any_stats_attrs[] = {
 #ifdef CONFIG_SHMEM
 	&zswpout_attr.attr,
 	&swpin_attr.attr,
+	&swpin_fallback_attr.attr,
 	&swpout_attr.attr,
 	&swpout_fallback_attr.attr,
 #endif
diff --git a/mm/memory.c b/mm/memory.c
index 209885a4134f..7cda8b65e0c9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4191,6 +4191,7 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
 				return folio;
 			folio_put(folio);
 		}
+		count_mthp_stat(order, MTHP_STAT_SWPIN_FALLBACK);
 		order = next_order(&orders, order);
 	}
 
-- 
2.45.0




* Re: [PATCH] mm: add per-order mTHP swap-in fallback counters
  2024-11-21 16:27 [PATCH] mm: add per-order mTHP swap-in fallback counters Wenchao Hao
@ 2024-11-21 22:32 ` Barry Song
  2024-11-22  2:25   ` Wenchao Hao
From: Barry Song @ 2024-11-21 22:32 UTC (permalink / raw)
  To: Wenchao Hao
  Cc: Jonathan Corbet, Andrew Morton, David Hildenbrand, Ryan Roberts,
	Baolin Wang, Usama Arif, Lance Yang, Matthew Wilcox, Peter Xu,
	linux-doc, linux-kernel, linux-mm, Chuanhua Han

On Fri, Nov 22, 2024 at 5:28 AM Wenchao Hao <haowenchao22@gmail.com> wrote:
>
> Large folio swap-in is now supported, but there is no way to measure
> how often it succeeds. Similar to anon_fault_fallback, add a per-order
> mTHP swpin_fallback counter so the success ratio can be calculated. The
> new counter is located at:
>
> /sys/kernel/mm/transparent_hugepage/hugepages-<size>/stats/swpin_fallback

Well, this could be useful for profiling, but why not also add
MTHP_STAT_SWPIN_FALLBACK_CHARGE?

>
> Signed-off-by: Wenchao Hao <haowenchao22@gmail.com>
> CC: Chuanhua Han <hanchuanhua@oppo.com>
> ---
>  Documentation/admin-guide/mm/transhuge.rst | 5 +++++
>  include/linux/huge_mm.h                    | 1 +
>  mm/huge_memory.c                           | 3 +++
>  mm/memory.c                                | 1 +
>  4 files changed, 10 insertions(+)
>
> diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
> index 5034915f4e8e..f5c775457913 100644
> --- a/Documentation/admin-guide/mm/transhuge.rst
> +++ b/Documentation/admin-guide/mm/transhuge.rst
> @@ -561,6 +561,11 @@ swpin
>         is incremented every time a huge page is swapped in from a non-zswap
>         swap device in one piece.
>
> +swpin_fallback
> +       is incremented if swapin fails to allocate a huge page and instead
> +       falls back to using huge pages with lower orders or small pages.
> +
>  swpout
>         is incremented every time a huge page is swapped out to a non-zswap
>         swap device in one piece without splitting.
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index b94c2e8ee918..dcf08f8fdf52 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -121,6 +121,7 @@ enum mthp_stat_item {
>         MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
>         MTHP_STAT_ZSWPOUT,
>         MTHP_STAT_SWPIN,
> +       MTHP_STAT_SWPIN_FALLBACK,
>         MTHP_STAT_SWPOUT,
>         MTHP_STAT_SWPOUT_FALLBACK,
>         MTHP_STAT_SHMEM_ALLOC,
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index ee335d96fc39..6b089a41acef 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -617,6 +617,7 @@ DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
>  DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
>  DEFINE_MTHP_STAT_ATTR(zswpout, MTHP_STAT_ZSWPOUT);
>  DEFINE_MTHP_STAT_ATTR(swpin, MTHP_STAT_SWPIN);
> +DEFINE_MTHP_STAT_ATTR(swpin_fallback, MTHP_STAT_SWPIN_FALLBACK);
>  DEFINE_MTHP_STAT_ATTR(swpout, MTHP_STAT_SWPOUT);
>  DEFINE_MTHP_STAT_ATTR(swpout_fallback, MTHP_STAT_SWPOUT_FALLBACK);
>  #ifdef CONFIG_SHMEM
> @@ -637,6 +638,7 @@ static struct attribute *anon_stats_attrs[] = {
>  #ifndef CONFIG_SHMEM
>         &zswpout_attr.attr,
>         &swpin_attr.attr,
> +       &swpin_fallback_attr.attr,
>         &swpout_attr.attr,
>         &swpout_fallback_attr.attr,
>  #endif
> @@ -669,6 +671,7 @@ static struct attribute *any_stats_attrs[] = {
>  #ifdef CONFIG_SHMEM
>         &zswpout_attr.attr,
>         &swpin_attr.attr,
> +       &swpin_fallback_attr.attr,
>         &swpout_attr.attr,
>         &swpout_fallback_attr.attr,
>  #endif
> diff --git a/mm/memory.c b/mm/memory.c
> index 209885a4134f..7cda8b65e0c9 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4191,6 +4191,7 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
>                                 return folio;
>                         folio_put(folio);
>                 }
> +               count_mthp_stat(order, MTHP_STAT_SWPIN_FALLBACK);
>                 order = next_order(&orders, order);
>         }
>
> --
> 2.45.0
>

Thanks
Barry



* Re: [PATCH] mm: add per-order mTHP swap-in fallback counters
  2024-11-21 22:32 ` Barry Song
@ 2024-11-22  2:25   ` Wenchao Hao
From: Wenchao Hao @ 2024-11-22  2:25 UTC (permalink / raw)
  To: Barry Song
  Cc: Jonathan Corbet, Andrew Morton, David Hildenbrand, Ryan Roberts,
	Baolin Wang, Usama Arif, Lance Yang, Matthew Wilcox, Peter Xu,
	linux-doc, linux-kernel, linux-mm

On 2024/11/22 06:32, Barry Song wrote:
> On Fri, Nov 22, 2024 at 5:28 AM Wenchao Hao <haowenchao22@gmail.com> wrote:
>>
>> Large folio swap-in is now supported, but there is no way to measure
>> how often it succeeds. Similar to anon_fault_fallback, add a per-order
>> mTHP swpin_fallback counter so the success ratio can be calculated. The
>> new counter is located at:
>>
>> /sys/kernel/mm/transparent_hugepage/hugepages-<size>/stats/swpin_fallback
> 
> Well, this could be useful for profiling, but why not also add
> MTHP_STAT_SWPIN_FALLBACK_CHARGE?
> 

Hi, my current testing scenario does not involve
MTHP_STAT_SWPIN_FALLBACK_CHARGE, but I will soon resend a V2 that
adds it.
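
For reference, the charge-side accounting would likely mirror the
existing anon_fault_fallback / anon_fault_fallback_charge split in the
same allocation loop of alloc_swap_folio(). A rough sketch; the counter
name and exact placement are assumptions until V2 is posted:

		if (folio) {
			if (!mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
							    gfp, entry))
				return folio;	/* charge succeeded, use this folio */
			/* allocation succeeded but the memcg charge failed */
			count_mthp_stat(order, MTHP_STAT_SWPIN_FALLBACK_CHARGE);
			folio_put(folio);
		}
		/* this order was abandoned, whether alloc or charge failed */
		count_mthp_stat(order, MTHP_STAT_SWPIN_FALLBACK);
		order = next_order(&orders, order);

That would let allocation failures and memcg charge failures be told
apart per order.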

Thanks.


