From: Nikhil Dhama
Subject: Re: [PATCH] mm: pcp: scale batch to reduce number of high order pcp flushes on deallocation
Date: Mon, 7 Apr 2025 12:02:59 +0530
Message-ID: <20250407063259.49271-1-nikhil.dhama@amd.com>
In-Reply-To: <875xjmuiup.fsf@DESKTOP-5N7EMDA>
References: <875xjmuiup.fsf@DESKTOP-5N7EMDA>
MIME-Version: 1.0
Content-Type: text/plain

On 4/3/2025 7:06 AM, Huang, Ying wrote:
>
> Nikhil Dhama writes:
>
>> On 3/30/2025 12:22 PM, Huang, Ying wrote:
>>
>>> Hi, Nikhil,
>>>
>>> Nikhil Dhama writes:
>>>
>>>> In the old pcp design, pcp->free_factor was incremented in nr_pcp_free(),
>>>> which is invoked by free_pcppages_bulk(). So free_factor increased by 1
>>>> only when we tried to reduce the size of the pcp list or flush it for
>>>> high order, and free_high triggered only for order > 0, order <
>>>> costly_order, and free_factor > 0.
>>>>
>>>> free_factor was also scaled down by a factor of 2 on every successful
>>>> allocation.
>>>>
>>>> For iperf3 I noticed that with the older design in kernel v6.6, the pcp
>>>> list was drained mostly when pcp->count > high (more often when count
>>>> went above 530), and most of the time free_factor was 0, triggering very
>>>> few high-order flushes.
>>>>
>>>> In the current design, free_factor has been replaced by free_count, which
>>>> tracks the number of pages freed contiguously. With this design, for
>>>> iperf3 the pcp list is flushed more frequently because the free_high
>>>> heuristic is triggered more often.
>>>>
>>>> free_count is now incremented on every deallocation, irrespective of
>>>> whether the pcp list was reduced or not, and free_high triggers once
>>>> free_count goes above batch (which is 63) and there are two contiguous
>>>> page frees without any allocation (together with the cache slice
>>>> optimisation).
>>>>
>>>> With this design, I observed that the high-order pcp list is drained as
>>>> soon as both count and free_count go above 63.
>>>>
>>>> Due to this more aggressive high-order flushing, applications doing
>>>> contiguous high-order allocations have to go to the global list more
>>>> frequently.
>>>>
>>>> On a 2-node AMD machine with 384 vCPUs on each node, connected via
>>>> Mellanox ConnectX-7, I am seeing a ~30% performance reduction when the
>>>> number of iperf3 client/server pairs is scaled from 32 to 64.
>>>>
>>>> So, although the new design reduced the time to detect high-order
>>>> flushes, for applications that allocate high-order pages more frequently
>>>> it may flush the high-order list prematurely. This motivates tuning how
>>>> late or early we should flush the high-order lists for the free_high
>>>> heuristic. I tried scaling batch and tuning it, which delays the
>>>> free_high flushes:
>>>>
>>>>                     score   # free_high
>>>> ---------------     -----   -----------
>>>> v6.6 (base)           100             4
>>>> v6.12 (batch*1)        69           170
>>>> batch*2                69           150
>>>> batch*4                74           101
>>>> batch*5               100            53
>>>> batch*6               100            36
>>>> batch*8               100             3
>>>>
>>>> Scaling batch for the free_high heuristic by a factor of 5 or above
>>>> restores the performance, as it reduces the number of high-order flushes.
>>>>
>>>> On the 2-node AMD server with 384 vCPUs each, scores for other benchmarks
>>>> with patch v2 along with iperf3 are as follows:
>>>
>>> Em..., IIUC, this may disable the free_high optimization. The free_high
>>> optimization was introduced by Mel Gorman in commit f26b3fa04611
>>> ("mm/page_alloc: limit number of high-order pages on PCP during bulk
>>> free"). So, this may trigger a regression for the workloads in that
>>> commit. Can you try them too?
>>>
>>
>> Hi, I ran netperf-tcp as in commit f26b3fa04611 ("mm/page_alloc: limit
>> number of high-order pages on PCP during bulk free").
>>
>> On a 2-node AMD server with 384 vCPUs, the results I observed are as
>> follows:
>>
>>                       6.12                   6.12
>>                    vanilla                   freehigh-heuristicsopt
>> Hmean     64       732.14 (  0.00%)         736.90 (  0.65%)
>> Hmean    128      1417.46 (  0.00%)        1421.54 (  0.29%)
>> Hmean    256      2679.67 (  0.00%)        2689.68 (  0.37%)
>> Hmean   1024      8328.52 (  0.00%)        8413.94 (  1.03%)
>> Hmean   2048     12716.98 (  0.00%)       12838.94 (  0.96%)
>> Hmean   3312     15787.79 (  0.00%)       15822.40 (  0.22%)
>> Hmean   4096     17311.91 (  0.00%)       17328.74 (  0.10%)
>> Hmean   8192     20310.73 (  0.00%)       20447.12 (  0.67%)
>>
>> It is not regressing for netperf-tcp.
>
> Thanks a lot for your data!
>
> Thinking about this again: compared with the pcp->free_factor solution,
> the pcp->free_count solution triggers the free_high heuristic earlier,
> and this causes the performance regression in your workloads. So it is
> reasonable to raise the bar for triggering free_high, and it is also
> reasonable to use a stricter threshold, as you have done in this patch.
> However, "5 * batch" appears too magic and adapted to one type of
> machine.
>
> Let's step back and do some analysis. In the original pcp->free_factor
> solution, free_high is triggered for contiguous freeing with size
> ranging from "batch" to "pcp->high + batch", so the average value is
> about "batch + pcp->high / 2". In the pcp->free_count solution,
> free_high is triggered for contiguous freeing with size "batch". So, to
> restore the original behavior, it seems that we can use the threshold
> "batch + pcp->high_min / 2". Do you think that this is reasonable? If
> so, can you give it a try?

Hi,

I have tried your suggestion of setting the threshold to
"batch + pcp->high_min / 2". Scores for different benchmarks on the same
machine (2-node AMD server with 384 vCPUs each) are as follows:

                        iperf3   lmbench3          netperf        kbuild
                                 (AF_UNIX)   (SCTP_STREAM_MANY)
                        ------   ---------   -----------------   ------
v6.6  vanilla (base)       100       100            100             100
v6.12 vanilla               69       113             98.5            98.8
v6.12 avg_threshold        100       110.3          100.2            99.3

and for netperf-tcp, it is as follows:

                      6.12                   6.12
                   vanilla                   avg_free_high_threshold
Hmean     64       732.14 (  0.00%)         730.45 ( -0.23%)
Hmean    128      1417.46 (  0.00%)        1419.44 (  0.14%)
Hmean    256      2679.67 (  0.00%)        2676.45 ( -0.12%)
Hmean   1024      8328.52 (  0.00%)        8339.34 (  0.13%)
Hmean   2048     12716.98 (  0.00%)       12743.68 (  0.21%)
Hmean   3312     15787.79 (  0.00%)       15887.25 (  0.63%)
Hmean   4096     17311.91 (  0.00%)       17332.68 (  0.12%)
Hmean   8192     20310.73 (  0.00%)       20465.09 (  0.76%)

A rough sketch of the check this threshold feeds into is appended after
the sign-off, for reference.

Thanks,
Nikhil Dhama
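---
For reference, a minimal sketch of where the tested threshold lands,
paraphrased from my reading of the v6.12 free path (the function name
free_unref_page_commit() and the PCPF_* flag names come from that reading;
the surrounding structure is illustrative only, not the final patch):

	/* Sketch: free_high detection in free_unref_page_commit() */
	if (order && order <= PAGE_ALLOC_COSTLY_ORDER) {
		/*
		 * Old trigger: pcp->free_count >= batch.
		 * Tested trigger: batch + pcp->high_min / 2, i.e. roughly the
		 * average contiguous-free size at which the old free_factor
		 * design ended up flushing, per the analysis above.
		 */
		free_high = (pcp->free_count >= (batch + pcp->high_min / 2) &&
			     (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER) &&
			     (!(pcp->flags & PCPF_FREE_HIGH_BATCH) ||
			      pcp->count >= READ_ONCE(batch)));
		pcp->flags |= PCPF_PREV_FREE_HIGH_ORDER;
	} else if (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER) {
		pcp->flags &= ~PCPF_PREV_FREE_HIGH_ORDER;
	}

As a rough sanity check on the numbers: with the batch of 63 seen here,
and assuming pcp->high_min is near the ~530 at which this machine's pcp
lists were observed to drain, the new threshold is about 63 + 530/2 ~= 328,
which sits close to the batch*5 = 315 point where the iperf3 score
recovered in the earlier table.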