From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 10 Apr 2025 13:55:27 +0000
From: Brendan Jackman
To: Johannes Weiner, Andrew Morton
Cc: Vlastimil Babka, Brendan Jackman, Mel Gorman, Carlos Song, linux-mm@kvack.org
Subject: Re: [PATCH 2/2] mm: page_alloc: tighten up find_suitable_fallback()
In-Reply-To: <20250407180154.63348-2-hannes@cmpxchg.org>
References: <20250407180154.63348-1-hannes@cmpxchg.org>
 <20250407180154.63348-2-hannes@cmpxchg.org>
X-Mailer: aerc 0.20.1-64-g7cb8e0e7ce24
On Mon Apr 7, 2025 at 6:01 PM UTC, Johannes Weiner wrote:
> find_suitable_fallback() is not as efficient as it could be, and
> somewhat difficult to follow.
>
> 1. should_try_claim_block() is a loop invariant. There is no point in
>    checking fallback areas if the caller is interested in claimable
>    blocks but the order and the migratetype don't allow for that.
>
> 2. __rmqueue_steal() doesn't care about claimability, so it shouldn't
>    have to run those tests.
>
> Different callers want different things from this helper:
>
> 1. __compact_finished() scans orders up until it finds a claimable block
> 2. __rmqueue_claim() scans orders down as long as blocks are claimable
> 3. __rmqueue_steal() doesn't care about claimability at all
>
> Move should_try_claim_block() out of the loop. Only test it for the
> two callers who care in the first place. Distinguish "no blocks" from
> "order + mt are not claimable" in the return value; __rmqueue_claim()
> can stop once order becomes unclaimable, __compact_finished() can keep
> advancing until order becomes claimable.

Nice! My initial thought was: now we can drop the boolean arg and have
the callers who care about claimability call should_try_claim_block()
themselves. Then we can also get rid of the magic -2 return value, and
find_suitable_fallback() becomes a pretty obvious function.

I think it's a win on balance, but it does add some verbosity at the
callsites, plus an extra interface between page_alloc.c and
compaction.c. So maybe it's a wash, or maybe you already considered it
and decided you don't prefer it.

So, LGTM either way. I will attempt to attach the optional additional
patch for your perusal, hopefully without molesting the mail encoding
this time...
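For anyone following along: the shape this starts from, i.e.
find_suitable_fallback() after your patch 2/2, is roughly the below.
This is a sketch reconstructed from the removed lines in the hunks
further down; the fallback-scan loop body and the MIGRATE_PCPTYPES - 1
bound are from memory, not copied from the tree:

	int find_suitable_fallback(struct free_area *area, unsigned int order,
				   int migratetype, bool claimable)
	{
		int i;

		/* Loop-invariant claimability check, hoisted by patch 2/2 */
		if (claimable && !should_try_claim_block(order, migratetype))
			return -2;

		if (area->nr_free == 0)
			return -1;

		/* Scan the fallback list for this migratetype (bounds from memory) */
		for (i = 0; i < MIGRATE_PCPTYPES - 1; i++) {
			int fallback_mt = fallbacks[migratetype][i];

			if (!free_area_empty(area, fallback_mt))
				return fallback_mt;
		}

		return -1;
	}

The patch below just pulls the claimable/-2 part out into the two
callers that want it.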
Reviewed-by: Brendan Jackman

---
From 25f77012674db95354fb2496bc89954b8f8b4e6c Mon Sep 17 00:00:00 2001
From: Brendan Jackman
Date: Thu, 10 Apr 2025 13:22:58 +0000
Subject: [PATCH] mm: page_alloc: Split up find_suitable_fallback()

Now that it's been simplified, it's clear that the bool arg isn't
needed: callers can just use should_try_claim_block(). Once that logic
is stripped out, the function becomes very obvious and can get a more
straightforward name and comment.

Signed-off-by: Brendan Jackman
---
 mm/compaction.c |  3 ++-
 mm/internal.h   |  5 +++--
 mm/page_alloc.c | 33 +++++++++++++--------------------
 3 files changed, 18 insertions(+), 23 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 39a4d178dff3c..d735c22e71029 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2363,7 +2363,8 @@ static enum compact_result __compact_finished(struct compact_control *cc)
 		 * Job done if allocation would steal freepages from
 		 * other migratetype buddy lists.
 		 */
-		if (find_suitable_fallback(area, order, migratetype, true) >= 0)
+		if (should_try_claim_block(order, migratetype) &&
+		    find_fallback_migratetype(area, order, migratetype) >= 0)
 			/*
 			 * Movable pages are OK in any pageblock. If we are
 			 * stealing for a non-movable allocation, make sure
diff --git a/mm/internal.h b/mm/internal.h
index 4e0ea83aaf1c8..93a8be54924f4 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -914,8 +914,9 @@ static inline void init_cma_pageblock(struct page *page)
 
 #endif
 
-int find_suitable_fallback(struct free_area *area, unsigned int order,
-			   int migratetype, bool claimable);
+int find_fallback_migratetype(struct free_area *area, unsigned int order,
+			      int migratetype);
+bool should_try_claim_block(unsigned int order, int start_mt);
 
 static inline bool free_area_empty(struct free_area *area, int migratetype)
 {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0a1f28bf5255c..604cad16e1b5a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2034,7 +2034,7 @@ static inline bool boost_watermark(struct zone *zone)
  * try to claim an entire block to satisfy further allocations, instead of
  * polluting multiple pageblocks?
  */
-static bool should_try_claim_block(unsigned int order, int start_mt)
+bool should_try_claim_block(unsigned int order, int start_mt)
 {
 	/*
 	 * Leaving this order check is intended, although there is
@@ -2076,20 +2076,12 @@ static bool should_try_claim_block(unsigned int order, int start_mt)
 	return false;
 }
 
-/*
- * Check whether there is a suitable fallback freepage with requested order.
- * If claimable is true, this function returns fallback_mt only if
- * we would do this whole-block claiming. This would help to reduce
- * fragmentation due to mixed migratetype pages in one pageblock.
- */
-int find_suitable_fallback(struct free_area *area, unsigned int order,
-			   int migratetype, bool claimable)
+/* Find a fallback migratetype with at least one page of the given order. */
+int find_fallback_migratetype(struct free_area *area, unsigned int order,
+			      int migratetype)
 {
 	int i;
 
-	if (claimable && !should_try_claim_block(order, migratetype))
-		return -2;
-
 	if (area->nr_free == 0)
 		return -1;
 
@@ -2209,18 +2201,19 @@ __rmqueue_claim(struct zone *zone, int order, int start_migratetype,
 	 */
 	for (current_order = MAX_PAGE_ORDER; current_order >= min_order;
 				--current_order) {
+
+		/* Advanced into orders too low to claim, abort */
+		if (!should_try_claim_block(order, start_migratetype))
+			break;
+
 		area = &(zone->free_area[current_order]);
-		fallback_mt = find_suitable_fallback(area, current_order,
-						     start_migratetype, true);
+		fallback_mt = find_fallback_migratetype(area, current_order,
+							start_migratetype);
 
 		/* No block in that order */
 		if (fallback_mt == -1)
 			continue;
 
-		/* Advanced into orders too low to claim, abort */
-		if (fallback_mt == -2)
-			break;
-
 		page = get_page_from_free_area(area, fallback_mt);
 		page = try_to_claim_block(zone, page, current_order, order,
 					  start_migratetype, fallback_mt,
@@ -2249,8 +2242,8 @@ __rmqueue_steal(struct zone *zone, int order, int start_migratetype)
 
 	for (current_order = order; current_order < NR_PAGE_ORDERS; current_order++) {
 		area = &(zone->free_area[current_order]);
-		fallback_mt = find_suitable_fallback(area, current_order,
-						     start_migratetype, false);
+		fallback_mt = find_fallback_migratetype(area, current_order,
+							start_migratetype);
 		if (fallback_mt == -1)
 			continue;
 

base-commit: 0e56a6f04d3b06eafe0000d2e3c3d7c2d554366a
-- 
2.49.0.504.g3bcea36a83-goog