Date: Fri, 11 Apr 2025 15:07:01 +0000
In-Reply-To: <20250411134550.GB366747@cmpxchg.org>
Mime-Version: 1.0
References: <20250407180154.63348-1-hannes@cmpxchg.org> <20250407180154.63348-2-hannes@cmpxchg.org> <20250411134550.GB366747@cmpxchg.org>
X-Mailer: aerc 0.20.0
Message-ID:
Subject: Re: [PATCH 2/2] mm: page_alloc: tighten up find_suitable_fallback()
From: Brendan Jackman
To: Johannes Weiner
Cc: Andrew Morton, Vlastimil Babka, Mel Gorman, Carlos Song, linux-mm@kvack.org
Content-Type: text/plain; charset="UTF-8"
On Fri Apr 11, 2025 at 1:45 PM UTC, Johannes Weiner wrote:
>> -		if (find_suitable_fallback(area, order, migratetype, true) >= 0)
>> +		if (should_try_claim_block(order, migratetype) &&
>> +		    find_fallback_migratetype(area, order, migratetype) >= 0)
>
> So I agree with pushing the test into the callers. However, I think
> the name "should_try_claim_block()" is not great for this. It makes
> sense in the alloc/fallback path, but compaction here doesn't claim
> anything. It just wants to know if this order + migratetype is
> eligible under block claiming rules.
>
> IMO this would be more readable with the old terminology:
>
>	if (can_claim_block(order, migratetype) &&
>	    find_fallback_migratetype(area, order, migratetype) >= 0)

Sure, that makes sense, here's a modified version of the patch:

---
>From 85be0fca4627c5b832a3382c92b6310609e14ca4 Mon Sep 17 00:00:00 2001
From: Brendan Jackman
Date: Thu, 10 Apr 2025 13:22:58 +0000
Subject: [PATCH] mm: page_alloc: Split up find_suitable_fallback()

Now that it's been simplified, it's clear that the bool arg isn't
needed, callers can just use should_try_claim_block(). Once that logic
is stripped out, the function becomes very obvious and can get a more
straightforward name and comment.

Since should_try_claim_block() is now exported to compaction.c, give it
a name that makes more sense outside the context of allocation -
should_claim_block() seems confusing in code that has no interest in
actually claiming a block.

Signed-off-by: Brendan Jackman
---
 mm/compaction.c |  3 ++-
 mm/internal.h   |  5 +++--
 mm/page_alloc.c | 33 +++++++++++++--------------------
 3 files changed, 18 insertions(+), 23 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 39a4d178dff3c..0528996c40507 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2363,7 +2363,8 @@ static enum compact_result __compact_finished(struct compact_control *cc)
 		 * Job done if allocation would steal freepages from
 		 * other migratetype buddy lists.
 		 */
-		if (find_suitable_fallback(area, order, migratetype, true) >= 0)
+		if (can_claim_block(order, migratetype) &&
+		    find_fallback_migratetype(area, order, migratetype) >= 0)
 			/*
 			 * Movable pages are OK in any pageblock. If we are
 			 * stealing for a non-movable allocation, make sure
diff --git a/mm/internal.h b/mm/internal.h
index 4e0ea83aaf1c8..5450ea7f5b1ec 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -914,8 +914,9 @@ static inline void init_cma_pageblock(struct page *page)
 #endif
 
 
-int find_suitable_fallback(struct free_area *area, unsigned int order,
-			   int migratetype, bool claimable);
+int find_fallback_migratetype(struct free_area *area, unsigned int order,
+			      int migratetype);
+bool can_claim_block(unsigned int order, int start_mt);
 
 static inline bool free_area_empty(struct free_area *area, int migratetype)
 {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0a1f28bf5255c..c27a106ec5985 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2034,7 +2034,7 @@ static inline bool boost_watermark(struct zone *zone)
  * try to claim an entire block to satisfy further allocations, instead of
  * polluting multiple pageblocks?
  */
-static bool should_try_claim_block(unsigned int order, int start_mt)
+bool can_claim_block(unsigned int order, int start_mt)
 {
 	/*
 	 * Leaving this order check is intended, although there is
@@ -2076,20 +2076,12 @@ static bool should_try_claim_block(unsigned int order, int start_mt)
 	return false;
 }
 
-/*
- * Check whether there is a suitable fallback freepage with requested order.
- * If claimable is true, this function returns fallback_mt only if
- * we would do this whole-block claiming. This would help to reduce
- * fragmentation due to mixed migratetype pages in one pageblock.
- */
-int find_suitable_fallback(struct free_area *area, unsigned int order,
-			   int migratetype, bool claimable)
+/* Find a fallback migratetype with at least one page of the given order. */
+int find_fallback_migratetype(struct free_area *area, unsigned int order,
+			      int migratetype)
 {
 	int i;
 
-	if (claimable && !should_try_claim_block(order, migratetype))
-		return -2;
-
 	if (area->nr_free == 0)
 		return -1;
 
@@ -2209,18 +2201,19 @@ __rmqueue_claim(struct zone *zone, int order, int start_migratetype,
 	 */
 	for (current_order = MAX_PAGE_ORDER; current_order >= min_order;
 				--current_order) {
+
+		/* Advanced into orders too low to claim, abort */
+		if (!can_claim_block(order, start_migratetype))
+			break;
+
 		area = &(zone->free_area[current_order]);
-		fallback_mt = find_suitable_fallback(area, current_order,
-				start_migratetype, true);
+		fallback_mt = find_fallback_migratetype(area, current_order,
+				start_migratetype);
 
 		/* No block in that order */
 		if (fallback_mt == -1)
 			continue;
 
-		/* Advanced into orders too low to claim, abort */
-		if (fallback_mt == -2)
-			break;
-
 		page = get_page_from_free_area(area, fallback_mt);
 		page = try_to_claim_block(zone, page, current_order, order,
 					  start_migratetype, fallback_mt,
@@ -2249,8 +2242,8 @@ __rmqueue_steal(struct zone *zone, int order, int start_migratetype)
 
 	for (current_order = order; current_order < NR_PAGE_ORDERS; current_order++) {
 		area = &(zone->free_area[current_order]);
-		fallback_mt = find_suitable_fallback(area, current_order,
-				start_migratetype, false);
+		fallback_mt = find_fallback_migratetype(area, current_order,
+				start_migratetype);
 		if (fallback_mt == -1)
 			continue;
 
-- 
2.49.0.604.gff1f9ca942-goog
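
[Editor's note, not part of the patch: the following is a minimal standalone
sketch of the caller-side shape the split produces, for readers following the
thread without the surrounding mm/ code. The policy question (can_claim_block())
is asked explicitly by the caller, and find_fallback_migratetype() only answers
whether a fallback page of the requested order exists. NR_MT, the fallbacks[]
table, the free_area layout and the "order >= 2" threshold are made-up
stand-ins, not the real mm/page_alloc.c definitions.]

/*
 * Standalone sketch, not kernel code: models the compaction-style caller of
 * the two functions introduced by the patch above.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_MT 3

struct free_area {
	unsigned long nr_free;			/* total free pages of this order */
	unsigned long nr_free_of[NR_MT];	/* per-migratetype free pages */
};

/* Simplified fallback preference table: who may borrow from whom. */
static const int fallbacks[NR_MT][NR_MT - 1] = {
	{ 1, 2 },
	{ 0, 2 },
	{ 0, 1 },
};

/* Policy stand-in: is this order/migratetype eligible for block claiming? */
static bool can_claim_block(unsigned int order, int start_mt)
{
	(void)start_mt;
	return order >= 2;
}

/*
 * After the split: purely "is there a fallback page of this order?" -
 * no -2 sentinel mixing the claim policy into the search.
 */
static int find_fallback_migratetype(const struct free_area *area,
				     unsigned int order, int migratetype)
{
	(void)order;
	if (area->nr_free == 0)
		return -1;

	for (int i = 0; i < NR_MT - 1; i++) {
		int fallback_mt = fallbacks[migratetype][i];

		if (area->nr_free_of[fallback_mt])
			return fallback_mt;
	}
	return -1;
}

int main(void)
{
	/* One free_area per order; only order 2 has anything free. */
	struct free_area areas[4] = {
		[2] = { .nr_free = 4, .nr_free_of = { 0, 4, 0 } },
	};

	for (unsigned int order = 3; order >= 1; order--) {
		int mt;

		/*
		 * Caller-side pattern after the patch: ask the policy question
		 * explicitly, then do the plain search, instead of passing a
		 * bool and decoding a -2 return value.
		 */
		if (can_claim_block(order, 0) &&
		    (mt = find_fallback_migratetype(&areas[order], order, 0)) >= 0)
			printf("order %u: fallback migratetype %d is claimable\n",
			       order, mt);
	}
	return 0;
}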