From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Huang, Ying" <ying.huang@linux.alibaba.com>
To: Nikhil Dhama
Cc: linux-mm@kvack.org
Subject: Re: [PATCH] mm: pcp: scale batch to reduce number of high order pcp flushes on deallocation
In-Reply-To: <20250401135638.25436-1-nikhil.dhama@amd.com> (Nikhil Dhama's message of "Tue, 1 Apr 2025 19:26:38 +0530")
References: <202503312148.c74b0351-lkp@intel.com> <20250401135638.25436-1-nikhil.dhama@amd.com>
Date: Thu, 03 Apr 2025 09:36:14 +0800
Message-ID: <875xjmuiup.fsf@DESKTOP-5N7EMDA>

Nikhil Dhama writes:

> On 3/30/2025 12:22 PM, Huang, Ying wrote:
>
>> Hi, Nikhil,
>>
>> Nikhil Dhama writes:
>>
>>> In the old pcp design, pcp->free_factor was incremented in
>>> nr_pcp_free(), which is invoked by free_pcppages_bulk(). So it
>>> increased free_factor by 1 only when we tried to reduce the size of
>>> the pcp list or flush it for high order, and free_high used to
>>> trigger only for order > 0, order < costly_order, and
>>> free_factor > 0.
>>>
>>> free_factor was scaled down by a factor of 2 on every successful
>>> allocation.
>>>
>>> For iperf3 I noticed that with the older design in kernel v6.6, the
>>> pcp list was drained mostly when pcp->count > high (more often when
>>> count went above 530), and most of the time free_factor was 0,
>>> triggering very few high order flushes.
>>>
>>> In the current design, free_factor has been changed to free_count,
>>> which keeps track of the number of pages freed contiguously. With
>>> this design, for iperf3, the pcp list is flushed more frequently
>>> because the free_high heuristic is triggered more often.
>>>
>>> In the current design, free_count is incremented on every
>>> deallocation, irrespective of whether the pcp list was reduced or
>>> not, and free_high is triggered if free_count goes above batch
>>> (which is 63) and there are two contiguous page frees without any
>>> allocation (and with the cache slice optimisation).
>>>
>>> With this design, I observed that the high order pcp list is drained
>>> as soon as both count and free_count go above 63.
>>>
>>> Due to this more aggressive high order flushing, applications doing
>>> contiguous high order allocations need to go to the global list more
>>> frequently.
>>>
>>> On a 2-node AMD machine with 384 vCPUs on each node, connected via
>>> Mellanox ConnectX-7, I am seeing a ~30% performance reduction if we
>>> scale the number of iperf3 client/server pairs from 32 to 64.
>>>
>>> So although this new design reduces the time needed to detect high
>>> order flushes, for applications which allocate high order pages more
>>> frequently it may flush the high order list prematurely. This
>>> motivates tuning how late or early we should flush the high order
>>> lists for the free_high heuristic. I tried scaling batch and tuning
>>> it, which delays the free_high flushes:
>>>
>>>                      score   # free_high
>>>   ---------------    -----   -----------
>>>   v6.6 (base)          100             4
>>>   v6.12 (batch*1)       69           170
>>>   batch*2               69           150
>>>   batch*4               74           101
>>>   batch*5              100            53
>>>   batch*6              100            36
>>>   batch*8              100             3
>>>
>>> Scaling batch for the free_high heuristic by a factor of 5 or above
>>> restores the performance, as it reduces the number of high order
>>> flushes.
>>>
>>> On a 2-node AMD server with 384 vCPUs each, scores for other
>>> benchmarks with patch v2, along with iperf3, are as follows:
>>
>> Em..., IIUC, this may disable the free_high optimization. The
>> free_high optimization was introduced by Mel Gorman in commit
>> f26b3fa04611 ("mm/page_alloc: limit number of high-order pages on PCP
>> during bulk free"). So, this may trigger a regression for the
>> workloads in that commit. Can you try it too?
>>
>
> Hi, I ran netperf-tcp as in commit f26b3fa04611 ("mm/page_alloc: limit
> number of high-order pages on PCP during bulk free").
>
> On a 2-node AMD server with 384 vCPUs, the results I observed are as
> follows:
>
>                        6.12                   6.12
>                        vanilla                freehigh-heuristicsopt
> Hmean     64          732.14 (  0.00%)        736.90 (  0.65%)
> Hmean    128         1417.46 (  0.00%)       1421.54 (  0.29%)
> Hmean    256         2679.67 (  0.00%)       2689.68 (  0.37%)
> Hmean   1024         8328.52 (  0.00%)       8413.94 (  1.03%)
> Hmean   2048        12716.98 (  0.00%)      12838.94 (  0.96%)
> Hmean   3312        15787.79 (  0.00%)      15822.40 (  0.22%)
> Hmean   4096        17311.91 (  0.00%)      17328.74 (  0.10%)
> Hmean   8192        20310.73 (  0.00%)      20447.12 (  0.67%)
>
> It is not regressing for netperf-tcp.

Thanks a lot for your data!

Thinking about this again: compared with the pcp->free_factor solution,
the pcp->free_count solution triggers the free_high heuristic earlier,
and this causes the performance regression in your workloads. So it's
reasonable to raise the bar for triggering free_high, and it's also
reasonable to use a stricter threshold, as you have done in this patch.
However, "5 * batch" appears too magic and adapted to one type of
machine.

Let's step back and do some analysis. In the original pcp->free_factor
solution, free_high is triggered for contiguous freeing with a size
ranging from "batch" to "pcp->high + batch", so the average value is
about "batch + pcp->high / 2". In the pcp->free_count solution,
free_high is triggered for contiguous freeing with size "batch". So, to
restore the original behavior, it seems that we can use the threshold
"batch + pcp->high_min / 2".

Do you think that this is reasonable? If so, can you give it a try? A
rough sketch of the two thresholds, for illustration only, is appended
below the signature.

---
Best Regards,
Huang, Ying
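For illustration only: a minimal sketch of the two trigger conditions
compared above. The struct and helpers below are simplified stand-ins,
not the exact mm/page_alloc.c code; only the threshold arithmetic is
the point.

#include <stdbool.h>

/* Simplified stand-in for the relevant struct per_cpu_pages fields. */
struct pcp_sketch {
	int free_count;  /* pages freed consecutively, no intervening alloc */
	int high_min;    /* lower bound of the auto-tuned pcp->high */
	int batch;       /* pcp refill/drain chunk size (63 in the report above) */
};

/* Current rule (simplified): trigger free_high once a full batch has
 * been freed consecutively. */
static bool free_high_current(const struct pcp_sketch *pcp)
{
	return pcp->free_count >= pcp->batch;
}

/* Suggested stricter rule: "batch + high_min / 2", approximating the
 * old free_factor scheme's average trigger point of
 * "batch + pcp->high / 2". */
static bool free_high_suggested(const struct pcp_sketch *pcp)
{
	return pcp->free_count >= pcp->batch + pcp->high_min / 2;
}

The second form makes the threshold scale with the zone's configured
high mark rather than being a fixed multiple of batch tuned for one
machine.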