Date: Mon, 21 Nov 2022 16:03:24 +0000
From: Mel Gorman <mgorman@techsingularity.net>
To: Vlastimil Babka
Cc: Andrew Morton, Hugh Dickins, Yu Zhao, Marcelo Tosatti, Michal Hocko,
	Marek Szyprowski, LKML, Linux-MM
Subject: Re: [PATCH 2/2] mm/page_alloc: Leave IRQs enabled for per-cpu page allocations
Message-ID: <20221121160324.4q7clvqdqohgycqh@techsingularity.net>
References: <20221118101714.19590-1-mgorman@techsingularity.net>
 <20221118101714.19590-3-mgorman@techsingularity.net>
 <20221121120121.djgvgm5bsklgfx7c@techsingularity.net>
In-Reply-To: <20221121120121.djgvgm5bsklgfx7c@techsingularity.net>
On Mon, Nov 21, 2022 at 12:01:23PM +0000, Mel Gorman wrote:
> On Fri, Nov 18, 2022 at 03:30:57PM +0100, Vlastimil Babka wrote:
> > On 11/18/22 11:17, Mel Gorman wrote:
> > AFAICS if this block was just "locked_zone = NULL;" then the existing
> > code would do the right thing.
> > Or maybe to have simpler code, just do batch_count++ here and
> > make the relocking check do
> > if (zone != locked_zone || batch_count == SWAP_CLUSTER_MAX)
> 
> While I think you're right, it's a bit subtle: the batch reset would
> need to move and be rechecked within the "Different zone, different pcp
> lock." block, and it would be easy to forget exactly why it's structured
> like that in the future. Rather than being a fix, it could be a
> standalone patch so it would be obvious in git blame, but I don't feel
> particularly strongly about it.
> 

Ok, less subtle than I initially thought but still deserving of a separate
patch instead of being a fix. This?

--8<--
mm/page_alloc: Simplify locking during free_unref_page_list

While freeing a large list, the zone lock will be released and reacquired
to avoid long hold times since commit c24ad77d962c ("mm/page_alloc.c:
avoid excessive IRQ disabled times in free_unref_page_list()"). As
suggested by Vlastimil Babka, the lock release/reacquire logic can be
simplified by reusing the logic that acquires a different lock when
changing zones.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 mm/page_alloc.c | 25 +++++++++----------------
 1 file changed, 9 insertions(+), 16 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 445066617204..08e32daf0918 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3518,13 +3518,19 @@ void free_unref_page_list(struct list_head *list)
 		list_del(&page->lru);
 		migratetype = get_pcppage_migratetype(page);
 
-		/* Different zone, different pcp lock. */
-		if (zone != locked_zone) {
+		/*
+		 * Either different zone requiring a different pcp lock or
+		 * excessive lock hold times when freeing a large list of
+		 * pages.
+		 */
+		if (zone != locked_zone || batch_count == SWAP_CLUSTER_MAX) {
 			if (pcp) {
 				pcp_spin_unlock(pcp);
 				pcp_trylock_finish(UP_flags);
 			}
 
+			batch_count = 0;
+
 			/*
 			 * trylock is necessary as pages may be getting freed
 			 * from IRQ or SoftIRQ context after an IO completion.
@@ -3539,7 +3545,6 @@
 				continue;
 			}
 			locked_zone = zone;
-			batch_count = 0;
 		}
 
 		/*
@@ -3551,19 +3556,7 @@
 
 		trace_mm_page_free_batched(page);
 		free_unref_page_commit(zone, pcp, page, migratetype, 0);
-
-		/*
-		 * Guard against excessive lock hold times when freeing
-		 * a large list of pages. Lock will be reacquired if
-		 * necessary on the next iteration.
-		 */
-		if (++batch_count == SWAP_CLUSTER_MAX) {
-			pcp_spin_unlock(pcp);
-			pcp_trylock_finish(UP_flags);
-			batch_count = 0;
-			pcp = NULL;
-			locked_zone = NULL;
-		}
+		batch_count++;
 	}
 
 	if (pcp) {
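
Stripped of the kernel specifics, the shape the loop converges on can be
illustrated with a minimal stand-alone C sketch. Everything below is
hypothetical scaffolding rather than kernel API: fake_zone, fake_lock()
and fake_unlock() stand in for the pcp-lock machinery, BATCH_MAX for
SWAP_CLUSTER_MAX, NPAGES is made up, and the trylock/fallback path of the
real function is omitted.

#include <stdio.h>

#define BATCH_MAX	16	/* stand-in for SWAP_CLUSTER_MAX */
#define NPAGES		40	/* arbitrary number of "pages" to free */

struct fake_zone { int id; };

static void fake_lock(struct fake_zone *z)
{
	printf("lock   zone %d\n", z->id);
}

static void fake_unlock(struct fake_zone *z)
{
	printf("unlock zone %d\n", z->id);
}

int main(void)
{
	struct fake_zone zones[2] = { { 0 }, { 1 } };
	struct fake_zone *locked_zone = NULL;
	int batch_count = 0;

	for (int i = 0; i < NPAGES; i++) {
		/* Pretend pages arrive in runs of 25 per zone. */
		struct fake_zone *zone = &zones[(i / 25) % 2];

		/*
		 * One condition covers both reasons to drop the lock:
		 * a zone change or a full batch.
		 */
		if (zone != locked_zone || batch_count == BATCH_MAX) {
			if (locked_zone)
				fake_unlock(locked_zone);
			batch_count = 0;
			fake_lock(zone);
			locked_zone = zone;
		}

		batch_count++;	/* one "page" freed under the lock */
	}

	if (locked_zone)
		fake_unlock(locked_zone);
	return 0;
}

The design point the sketch shows: the loop ends up with a single
unlock/relock site guarded by one condition, instead of a second ad-hoc
release at the bottom of the loop that has to keep pcp, locked_zone and
batch_count consistent by hand.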