From: "Huang, Ying"
To: Yafang Shao
Cc: akpm@linux-foundation.org, mgorman@techsingularity.net, linux-mm@kvack.org
Subject: Re: [PATCH 2/3] mm/page_alloc: Avoid changing pcp->high decaying when adjusting CONFIG_PCP_BATCH_SCALE_MAX
In-Reply-To: <20240707094956.94654-3-laoar.shao@gmail.com> (Yafang Shao's message of "Sun, 7 Jul 2024 17:49:55 +0800")
References: <20240707094956.94654-1-laoar.shao@gmail.com> <20240707094956.94654-3-laoar.shao@gmail.com>
Date: Wed, 10 Jul 2024 09:51:55 +0800
Message-ID: <87h6cyau9w.fsf@yhuang6-desk2.ccr.corp.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13)
MIME-Version: 1.0
Content-Type: text/plain; charset=ascii
Yafang Shao writes:

> When adjusting the CONFIG_PCP_BATCH_SCALE_MAX configuration from its
> default value of 5 to a lower value, such as 0, it's important to ensure
> that the pcp->high decaying is not inadvertently slowed down. Similarly,
> when increasing CONFIG_PCP_BATCH_SCALE_MAX to a larger value, like 6, we
> must avoid inadvertently increasing the number of pages freed in
> free_pcppages_bulk() as a result of this change.
>
> So below improvements are made:
> - hardcode the default value of 5 to avoiding modifying the pcp->high
> - refactore free_pcppages_bulk() into multiple steps, with each step
>   processing a fixed batch size of pages

This is confusing. You don't change free_pcppages_bulk() itself. I guess
what you mean is "call free_pcppages_bulk() in multiple steps".
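Read that way, the split is purely on the caller side. A minimal
user-space sketch of the before/after shapes (names like bulk_free and
drain_stepwise are illustrative stand-ins, not the kernel API; locking
is elided):

```c
#include <assert.h>

/* Illustrative stand-in for free_pcppages_bulk(): the function
 * itself is untouched; only how it is called changes. */
static int freed_total;

static void bulk_free(int n)
{
	freed_total += n;
}

/* Before: one call drains everything at once. */
static void drain_once(int to_drain)
{
	bulk_free(to_drain);
}

/* After: the same drain is performed in fixed batch-size steps,
 * with the pcp lock dropped between steps (locking elided here). */
static void drain_stepwise(int to_drain, int batch)
{
	for (int count = 0; count < to_drain; count += batch)
		bulk_free(batch);
}
```

Note that this naive step loop still requests a full batch on the final
partial step, so it can free more pages in total than to_drain when
to_drain is not a multiple of batch.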
>
> Suggested-by: "Huang, Ying"
> Signed-off-by: Yafang Shao
> ---
>  mm/page_alloc.c | 15 +++++++++++----
>  1 file changed, 11 insertions(+), 4 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 8e2f4e1ab4f2..2b76754a48e0 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2247,7 +2247,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
>  int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
>  {
>  	int high_min, to_drain, batch;
> -	int todo = 0;
> +	int todo = 0, count = 0;
>
>  	high_min = READ_ONCE(pcp->high_min);
>  	batch = READ_ONCE(pcp->batch);
> @@ -2257,18 +2257,25 @@ int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
>  	 * control latency. This caps pcp->high decrement too.
>  	 */
>  	if (pcp->high > high_min) {
> -		pcp->high = max3(pcp->count - (batch << CONFIG_PCP_BATCH_SCALE_MAX),
> +		/* When tuning the pcp batch scale value, we want to ensure that
> +		 * the pcp->high decay rate is not slowed down. Therefore, we
> +		 * hard-code the historical default scale value of 5 here to
> +		 * prevent any unintended effects.
> +		 */

This is a good description of the history. But in the resulting code,
it's not easy for people to connect the comment with the pcp batch scale
directly. How about something like the following instead?

  We will decay 1/8 of pcp->high each time in general, so that the
  idle PCP pages can be returned to the buddy system timely. To
  control the max latency of the decay, we also constrain the number
  of pages freed each time.

> +		pcp->high = max3(pcp->count - (batch << 5),
>  				 pcp->high - (pcp->high >> 3), high_min);
>  		if (pcp->high > high_min)
>  			todo++;
>  	}
>
>  	to_drain = pcp->count - pcp->high;
> -	if (to_drain > 0) {
> +	while (count < to_drain) {
>  		spin_lock(&pcp->lock);
> -		free_pcppages_bulk(zone, to_drain, pcp, 0);
> +		free_pcppages_bulk(zone, batch, pcp, 0);

"to_drain - count" may be less than batch here, so the last step can
request more pages than intended.

>  		spin_unlock(&pcp->lock);
> +		count += batch;
>  		todo++;
> +		cond_resched();
>  	}
>
>  	return todo;

--
Best Regards,
Huang, Ying
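To make that last point concrete, here is a user-space sketch
(free_pages_mock and drain_in_batches are hypothetical names, not kernel
code) of clamping each step so the final call never requests more than
what remains:

```c
#include <assert.h>

/* Hypothetical stand-in for free_pcppages_bulk(); returns the
 * number of pages freed by this call. */
static int free_pages_mock(int requested)
{
	return requested;
}

/* Drain to_drain pages in steps of at most batch pages, clamping
 * the final step to the remainder so the total freed is exact. */
static int drain_in_batches(int to_drain, int batch, int *steps)
{
	int count = 0;

	*steps = 0;
	while (count < to_drain) {
		int n = to_drain - count;

		if (n > batch)
			n = batch;
		count += free_pages_mock(n);
		(*steps)++;
	}
	return count;
}
```

With to_drain = 10 and batch = 4 this makes three calls (4 + 4 + 2) and
frees exactly 10 pages, whereas passing a full batch on every iteration
would request 12.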