* [FIX PATCH] mm: pcp: fix pcp->free_count reduction on page allocation
@ 2025-01-07  9:17 Nikhil Dhama
From: Nikhil Dhama @ 2025-01-07  9:17 UTC (permalink / raw)
  To: akpm
  Cc: Nikhil Dhama, Ying Huang, linux-mm, linux-kernel, Bharata B Rao,
	Raghavendra

In the current PCP auto-tuning design, free_count was introduced to
track consecutive page freeing: the counter is incremented by the exact
number of pages freed, but halved on every allocation. This causes the
network bandwidth of a 2-node iperf3 client-to-server setup to drop by
30% when the number of client-server pairs is scaled from 32 (where
peak network bandwidth is reached) to 64.

To fix this issue, on allocation, reduce free_count by the exact number
of pages that are allocated instead of halving it.
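
To illustrate the difference, here is a rough userspace sketch (not
kernel code; the burst size of 8 freed pages per allocation is made up
for the example) comparing the old halving policy with the exact
decrement proposed here:

	#include <stdio.h>

	int main(void)
	{
		long halved = 0, exact = 0;
		int order = 0;	/* order-0 allocations, 1 page each */

		for (int i = 0; i < 64; i++) {
			/* a burst of 8 freed pages ... */
			halved += 8;
			exact += 8;
			/* ... followed by a single allocation */
			halved >>= 1;		/* old behaviour */
			exact -= 1 << order;	/* patched behaviour */
			if (exact < 0)
				exact = 0;
		}
		printf("halved=%ld exact=%ld\n", halved, exact);
		return 0;
	}

With the halving policy the counter stays pinned near the per-burst
free size (it converges to 7 here) even though frees heavily outnumber
allocations, while the exact decrement lets the counter grow with the
net number of freed pages, so consecutive freeing is still detected.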

On a 2-node AMD server, with one node running the iperf3 clients and
the other the iperf3 server, this patch restores the lost bandwidth.

Fixes: 6ccdcb6d3a74 ("mm, pcp: reduce detecting time of consecutive high order page freeing")

Signed-off-by: Nikhil Dhama <nikhil.dhama@amd.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ying Huang <huang.ying.caritas@gmail.com>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Cc: Bharata B Rao <bharata@amd.com>
Cc: Raghavendra <raghavendra.kodsarathimmappa@amd.com>
---
 mm/page_alloc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index cae7b93864c2..e2a8ec5584f8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3037,10 +3037,10 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
 
 	/*
 	 * On allocation, reduce the number of pages that are batch freed.
-	 * See nr_pcp_free() where free_factor is increased for subsequent
+	 * See free_unref_page_commit() where free_count is increased for subsequent
 	 * frees.
 	 */
-	pcp->free_count >>= 1;
+	pcp->free_count -= (1 << order);
 	list = &pcp->lists[order_to_pindex(migratetype, order)];
 	page = __rmqueue_pcplist(zone, order, migratetype, alloc_flags, pcp, list);
 	pcp_spin_unlock(pcp);
-- 
2.25.1


