From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 11 Oct 2023 15:09:49 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Huang Ying
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Arjan Van De Ven, Andrew Morton, Vlastimil Babka,
	David Hildenbrand, Johannes Weiner, Dave Hansen,
	Michal Hocko, Pavel Tatashin, Matthew Wilcox,
	Christoph Lameter
Subject: Re: [PATCH 09/10] mm, pcp: avoid to reduce PCP high unnecessarily
Message-ID: <20231011140949.rwsqfb57vyuub6va@techsingularity.net>
References: <20230920061856.257597-1-ying.huang@intel.com>
 <20230920061856.257597-10-ying.huang@intel.com>
In-Reply-To: <20230920061856.257597-10-ying.huang@intel.com>

On Wed, Sep 20, 2023 at 02:18:55PM +0800, Huang Ying wrote:
> In the PCP high auto-tuning algorithm, to minimize idle pages in the
> PCP, the periodic vmstat updating kworker (via
> refresh_cpu_vm_stats()) decreases PCP high to try to free possible
> idle PCP pages. One issue is that even if the page allocating/freeing
> depth is larger than the maximal PCP high, we may reduce PCP high
> unnecessarily.
>
> To avoid this, this patch tracks the minimal PCP page count seen
> recently. The periodic PCP high decrement is then capped at that
> recent minimum, so only pages detected to be idle are freed.
>
> On a 2-socket Intel server with 224 logical CPUs, we tested kbuild on
> one socket with `make -j 112`. With the patch, the number of pages
> allocated from the zone (instead of from the PCP) decreases by 25.8%.
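
To check I've followed the invariant, the tracking above reduces to
something like this untested userspace model (hypothetical names
throughout; the batch-scaled lower bound in the real patch is omitted
for brevity):

/*
 * Untested userspace sketch of the count_min idea, not the kernel
 * code: the decayer may only free pages that stayed in the pool for
 * the entire interval, i.e. the interval's minimum occupancy.
 */
#include <stdio.h>

struct pcp_model {
	int count;	/* pages currently in the pool */
	int count_min;	/* minimum occupancy since the last decay */
	int high;	/* current high watermark */
	int high_min;	/* floor for high */
};

static int min_int(int a, int b) { return a < b ? a : b; }
static int max_int(int a, int b) { return a > b ? a : b; }

/* Allocation side: take pages and record the low-water mark. */
static void model_alloc(struct pcp_model *p, int pages)
{
	p->count -= pages;
	if (p->count < p->count_min)
		p->count_min = p->count;
}

static void model_free(struct pcp_model *p, int pages)
{
	p->count += pages;
}

/*
 * Periodic decay: high drops by at most the interval's minimum
 * occupancy, further capped at 20% of high, floored at high_min.
 */
static void model_decay(struct pcp_model *p)
{
	int decrease = min_int(p->count_min, p->high / 5);

	p->high = max_int(p->high - decrease, p->high_min);
	p->count_min = p->count;	/* restart tracking */
}

int main(void)
{
	struct pcp_model p = {
		.count = 100, .count_min = 100, .high = 200, .high_min = 20,
	};

	/* Deep churn: occupancy touches 0, so nothing was idle ... */
	model_alloc(&p, 100);
	model_free(&p, 100);
	model_decay(&p);
	printf("after churn: high=%d\n", p.high);	/* still 200 */

	/* Quiet interval: occupancy never dips below 100. */
	model_decay(&p);
	printf("after quiet: high=%d\n", p.high);	/* 160 = 200 - 200/5 */

	return 0;
}

If I have that right, churn alone never lowers high because the
interval minimum hits zero; only sustained occupancy does.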
>
> Signed-off-by: "Huang, Ying"
> Cc: Andrew Morton
> Cc: Mel Gorman
> Cc: Vlastimil Babka
> Cc: David Hildenbrand
> Cc: Johannes Weiner
> Cc: Dave Hansen
> Cc: Michal Hocko
> Cc: Pavel Tatashin
> Cc: Matthew Wilcox
> Cc: Christoph Lameter
> ---
>  include/linux/mmzone.h |  1 +
>  mm/page_alloc.c        | 15 ++++++++++-----
>  2 files changed, 11 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 8a19e2af89df..35b78c7522a7 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -682,6 +682,7 @@ enum zone_watermarks {
>  struct per_cpu_pages {
>  	spinlock_t lock;	/* Protects lists field */
>  	int count;		/* number of pages in the list */
> +	int count_min;		/* minimal number of pages in the list recently */
>  	int high;		/* high watermark, emptying needed */
>  	int high_min;		/* min high watermark */
>  	int high_max;		/* max high watermark */
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3f8c7dfeed23..77e9b7b51688 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2166,19 +2166,20 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
>   */
>  int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
>  {
> -	int high_min, to_drain, batch;
> +	int high_min, decrease, to_drain, batch;
>  	int todo = 0;
>
>  	high_min = READ_ONCE(pcp->high_min);
>  	batch = READ_ONCE(pcp->batch);
>  	/*
> -	 * Decrease pcp->high periodically to try to free possible
> -	 * idle PCP pages. And, avoid to free too many pages to
> -	 * control latency.
> +	 * Decrease pcp->high periodically to free idle PCP pages counted
> +	 * via pcp->count_min. And, avoid to free too many pages to
> +	 * control latency. This caps pcp->high decrement too.
>  	 */
>  	if (pcp->high > high_min) {
> +		decrease = min(pcp->count_min, pcp->high / 5);

Not directly related to this patch, but why 20%? It seems a bit
arbitrary. While this is not a fast path, using a divide rather than a
shift seems unnecessarily expensive (an untested shift-based
illustration is at the end of this mail).

>  		pcp->high = max3(pcp->count - (batch << PCP_BATCH_SCALE_MAX),
> -				 pcp->high * 4 / 5, high_min);
> +				 pcp->high - decrease, high_min);
>  		if (pcp->high > high_min)
>  			todo++;
>  	}
> @@ -2191,6 +2192,8 @@ int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
>  		todo++;
>  	}
>
> +	pcp->count_min = pcp->count;
> +
>  	return todo;
>  }
>
> @@ -2828,6 +2831,8 @@ struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
>  	page = list_first_entry(list, struct page, pcp_list);
>  	list_del(&page->pcp_list);
>  	pcp->count -= 1 << order;
> +	if (pcp->count < pcp->count_min)
> +		pcp->count_min = pcp->count;

The accounting for this is in a relatively fast path, though. At the
moment I don't have a better suggestion, but I'm not as keen on this
patch. It seems like it would have been more appropriate to decay only
if there was no recent allocation activity, tracked via pcp->flags.
The major caveat there is that tracking and clearing a bit may very
well be in a fast path, unless it was tied to refills, but that is
subject to timing issues and the allocation request stream :(

While you noted the difference in buddy allocations, which may tie
into lock contention issues, how much difference does it make to the
actual performance of the workload?

-- 
Mel Gorman
SUSE Labs
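
A concrete illustration of the shift alternative mentioned above.
Untested and purely illustrative: it changes the step from 1/5 to 1/8,
and whether that is still an acceptable decay rate is exactly the open
question:

#include <stdio.h>

static int min_int(int a, int b) { return a < b ? a : b; }

int main(void)
{
	int count_min = 150, high = 200;

	/* As in the patch: a 20% step computed with a divide. */
	int div_step = min_int(count_min, high / 5);

	/* Shift-based alternative: a ~12.5% step, no divide at all. */
	int shift_step = min_int(count_min, high >> 3);

	printf("divide: %d, shift: %d\n", div_step, shift_step);	/* 40, 25 */
	return 0;
}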