From: Nikhil Dhama <nikhil.dhama@amd.com>
To: <akpm@linux-foundation.org>
Cc: Nikhil Dhama <nikhil.dhama@amd.com>,
Ying Huang <huang.ying.caritas@gmail.com>, <linux-mm@kvack.org>,
<linux-kernel@vger.kernel.org>, Bharata B Rao <bharata@amd.com>,
Raghavendra <raghavendra.kodsarathimmappa@amd.com>
Subject: [FIX PATCH] mm: pcp: fix pcp->free_count reduction on page allocation
Date: Tue, 7 Jan 2025 14:47:24 +0530 [thread overview]
Message-ID: <20250107091724.35287-1-nikhil.dhama@amd.com> (raw)
In the current PCP auto-tuning design, free_count was introduced to track
consecutive page freeing with a counter. The counter is incremented by the
exact number of pages that are freed, but halved on allocation. This causes
the network bandwidth of a 2-node iperf3 client-to-server setup to drop by
30% when the number of client-server pairs is scaled from 32 (where peak
network bandwidth is achieved) to 64.
To fix this issue, on allocation, reduce free_count by the exact number
of pages that are allocated instead of halving it.
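
For illustration only, here is a minimal user-space sketch (hypothetical
names, not kernel code) of the difference between halving free_count and
subtracting the pages actually allocated:

    #include <stdio.h>

    static long free_count;          /* stands in for pcp->free_count */

    /* like free_unref_page_commit(): grow by the number of pages freed */
    static void on_free(unsigned int order)      { free_count += 1L << order; }
    /* pre-patch rmqueue_pcplist(): halve on every allocation */
    static void on_alloc_old(void)               { free_count >>= 1; }
    /* patched rmqueue_pcplist(): subtract exactly the pages allocated */
    static void on_alloc_new(unsigned int order) { free_count -= 1L << order; }

    int main(void)
    {
            free_count = 0;
            on_free(3);         /* free 8 pages:  free_count = 8 */
            on_alloc_old();     /* alloc 1 page:  free_count = 4 (half the history lost) */
            printf("old behaviour: %ld\n", free_count);

            free_count = 8;
            on_alloc_new(0);    /* alloc 1 page:  free_count = 7 (exact) */
            printf("new behaviour: %ld\n", free_count);
            return 0;
    }

With the old behaviour a single order-0 allocation discards half of the
accumulated freeing history, so interleaved allocations keep free_count from
reflecting a recent burst of frees; with the new behaviour only the pages
actually allocated are subtracted.
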
On a 2-node AMD server, with one node running the iperf3 clients and the
other the iperf3 servers, this patch restores the lost network bandwidth.
Fixes: 6ccdcb6d3a74 ("mm, pcp: reduce detecting time of consecutive high order page freeing")
Signed-off-by: Nikhil Dhama <nikhil.dhama@amd.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ying Huang <huang.ying.caritas@gmail.com>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Cc: Bharata B Rao <bharata@amd.com>
Cc: Raghavendra <raghavendra.kodsarathimmappa@amd.com>
---
mm/page_alloc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index cae7b93864c2..e2a8ec5584f8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3037,10 +3037,10 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
 	/*
 	 * On allocation, reduce the number of pages that are batch freed.
-	 * See nr_pcp_free() where free_factor is increased for subsequent
+	 * See free_unref_page_commit() where free_count is increased for subsequent
 	 * frees.
 	 */
-	pcp->free_count >>= 1;
+	pcp->free_count -= (1 << order);
 	list = &pcp->lists[order_to_pindex(migratetype, order)];
 	page = __rmqueue_pcplist(zone, order, migratetype, alloc_flags, pcp, list);
 	pcp_spin_unlock(pcp);
--
2.25.1
Thread overview: 12+ messages
2025-01-07 9:17 Nikhil Dhama [this message]
2025-01-08 5:05 ` Andrew Morton
2025-01-09 11:42 ` Nikhil Dhama
2025-01-15 11:06 ` Huang, Ying
2025-01-15 11:19 ` [FIX PATCH] mm: pcp: fix pcp->free_count reduction on page allocation, Huang, Ying
2025-01-29 4:31 ` Andrew Morton
2025-02-12 5:04 ` [FIX PATCH] mm: pcp: fix pcp->free_count reduction on page allocation Nikhil Dhama
2025-02-12 8:40 ` Huang, Ying
2025-02-12 10:06 ` Nikhil Dhama
2025-03-19 8:14 ` [PATCH -V2] mm: pcp: scale batch to reduce number of high order pcp flushes on deallocation Nikhil Dhama
2025-03-25 8:00 ` Raghavendra K T
2025-03-25 17:23 ` Nikhil Dhama