From: "Huang, Ying" <ying.huang@linux.alibaba.com>
To: Nikhil Dhama <nikhil.dhama@amd.com>
Cc: Ying Huang, Bharata B Rao, Raghavendra, Mel Gorman,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: pcp: scale batch to reduce number of high order pcp flushes on deallocation
In-Reply-To: <20250325171915.14384-1-nikhil.dhama@amd.com> (Nikhil Dhama's
	message of "Tue, 25 Mar 2025 22:49:15 +0530")
References: <20250325171915.14384-1-nikhil.dhama@amd.com>
Date: Sun, 30 Mar 2025 14:52:46 +0800
Message-ID: <8734evf1s1.fsf@DESKTOP-5N7EMDA>

Hi, Nikhil,

Nikhil Dhama <nikhil.dhama@amd.com> writes:

> In the old pcp design, pcp->free_factor was incremented in nr_pcp_free(),
> which is invoked by free_pcppages_bulk(). So free_factor was increased
> by 1 only when we tried to reduce the size of the pcp list or flush for
> high order. free_high used to trigger only for order > 0, order <
> costly_order, and free_factor > 0.
>
> free_factor used to scale down by a factor of 2 on every successful
> allocation.
>
> For iperf3 I noticed that with the older design in kernel v6.6, the pcp
> list was drained mostly when pcp->count > high (more often when count
> went above 530), and most of the time free_factor was 0, triggering
> very few high-order flushes.
>
> In the current design, free_factor has been changed to free_count to
> keep track of the number of pages freed contiguously. With this design,
> for iperf3, the pcp list is flushed more frequently because the
> free_high heuristic is triggered more often.
>
> In the current design, free_count is incremented on every deallocation,
> irrespective of whether the pcp list was reduced or not. The logic to
> trigger free_high is: free_count goes above batch (which is 63) and
> there are two contiguous page frees without any allocation in between
> (plus the cache slice optimisation).
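To make the numbers concrete, here is a minimal user-space sketch of that
trigger under an uninterrupted stream of high-order frees. This is a
sketch, not the kernel code: the names and types are invented for
illustration, batch = 63 follows the description above,
PCPF_PREV_FREE_HIGH_ORDER is modelled as a plain bool, and the cache-slice
and pcp->count checks are omitted.

/*
 * User-space model of the free_high trigger described above.
 * Not the kernel code: the flag is a plain bool, and the cache
 * slice and pcp->count conditions are left out.
 */
#include <stdbool.h>
#include <stdio.h>

#define BATCH 63	/* typical pcp batch, per the discussion above */

struct pcp_model {
	int free_count;			/* pages freed with no alloc in between */
	bool prev_free_high_order;	/* stand-in for PCPF_PREV_FREE_HIGH_ORDER */
};

/* Free one order-sized block; return true if free_high would trigger. */
static bool model_free(struct pcp_model *p, int order, int scale)
{
	bool free_high;

	p->free_count += 1 << order;
	free_high = p->free_count >= BATCH * scale &&
		    p->prev_free_high_order;
	p->prev_free_high_order = true;
	return free_high;
}

int main(void)
{
	/* scale == 1 is the current threshold, scale == 5 the patched one */
	for (int scale = 1; scale <= 5; scale += 4) {
		struct pcp_model p = { 0, false };
		int frees = 1;

		/* stream of contiguous order-2 (4-page) frees */
		while (!model_free(&p, 2, scale))
			frees++;
		printf("scale %d: free_high fires on free #%d (free_count = %d)\n",
		       scale, frees, p.free_count);
	}
	return 0;
}

Under this model, a stream of order-2 (4-page) frees first triggers
free_high on the 16th free with the current threshold, but only on the
79th with batch*5, which is the delaying effect being tuned here.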
>
> With this design, I observed that the high-order pcp list is drained as
> soon as both count and free_count go above 63.
>
> Due to this more aggressive high-order flushing, applications doing
> contiguous high-order allocations need to go to the global list more
> frequently.
>
> On a 2-node AMD machine with 384 vCPUs on each node, connected via
> Mellanox ConnectX-7, I am seeing a ~30% performance reduction when
> scaling the number of iperf3 client/server pairs from 32 to 64.
>
> So, although this new design reduces the time needed to detect
> high-order flushes, for applications that allocate high-order pages
> frequently it may be flushing the high-order list prematurely. This
> motivates tuning how late or early we should flush high-order lists.
>
> For the free_high heuristic, I tried scaling batch and tuning it, which
> delays the free_high flushes.
>
>                      score   # free_high
>     ---------------  -----   -----------
>     v6.6 (base)        100             4
>     v6.12 (batch*1)     69           170
>     batch*2             69           150
>     batch*4             74           101
>     batch*5            100            53
>     batch*6            100            36
>     batch*8            100             3
>
> Scaling batch for the free_high heuristic by a factor of 5 or above
> restores the performance, as it reduces the number of high-order
> flushes.
>
> On the 2-node AMD server with 384 vCPUs on each node, scores for other
> benchmarks with patch v2, along with iperf3, are as follows:

Em..., IIUC, this may disable the free_high optimization. The free_high
optimization was introduced by Mel Gorman in commit f26b3fa04611
("mm/page_alloc: limit number of high-order pages on PCP during bulk
free"). So, this may trigger regressions for the workloads in that
commit. Can you try those too?

>                      iperf3   lmbench3    netperf             kbuild
>                               (AF_UNIX)   (SCTP_STREAM_MANY)
>                      ------   ---------   ------------------  ------
>   v6.6 (base)           100         100                100       100
>   v6.12                  69         113               98.5      98.8
>   v6.12 with patch      100       112.5              100.1      99.6
>
> For the network workloads, clients and servers run on different
> machines connected via a Mellanox ConnectX-7 NIC.
>
> Number of free_high:
>
>                      iperf3   lmbench3    netperf             kbuild
>                               (AF_UNIX)   (SCTP_STREAM_MANY)
>                      ------   ---------   ------------------  ------
>   v6.6 (base)             5          12                  6         2
>   v6.12                 170          11                 92         2
>   v6.12 with patch       58          11                 34         2
>
> Signed-off-by: Nikhil Dhama <nikhil.dhama@amd.com>
> Cc: Andrew Morton
> Cc: Ying Huang
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> Cc: Bharata B Rao
> Cc: Raghavendra
>
> ---
>  mm/page_alloc.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index b6958333054d..326d5fbae353 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2617,7 +2617,7 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
>  	 * stops will be drained from vmstat refresh context.
>  	 */
>  	if (order && order <= PAGE_ALLOC_COSTLY_ORDER) {
> -		free_high = (pcp->free_count >= batch &&
> +		free_high = (pcp->free_count >= (batch*5) &&
>  			     (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER) &&
>  			     (!(pcp->flags & PCPF_FREE_HIGH_BATCH) ||
>  			      pcp->count >= READ_ONCE(batch)));

---
Best Regards,
Huang, Ying