From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 8 Apr 2025 14:50:09 -0400
From: Johannes Weiner <hannes@cmpxchg.org>
To: Brendan Jackman
Cc: Andrew Morton, Vlastimil Babka, Mel Gorman, Carlos Song,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	kernel test robot, stable@vger.kernel.org
Subject: Re: [PATCH 1/2] mm: page_alloc: speed up fallbacks in rmqueue_bulk()
Message-ID: <20250408185009.GF816@cmpxchg.org>
References: <20250407180154.63348-1-hannes@cmpxchg.org>

On Tue, Apr 08, 2025 at 05:22:00PM +0000, Brendan Jackman wrote:
> On Mon Apr 7, 2025 at 6:01 PM UTC, Johannes Weiner wrote:
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -2194,11 +2194,11 @@ try_to_claim_block(struct zone *zone, struct page *page,
> >   * The use of signed ints for order and current_order is a deliberate
> >   * deviation from the rest of this file, to make the for loop
> >   * condition simpler.
> > - *
> > - * Return the stolen page, or NULL if none can be found.
> >   */
> 
> This commentary is pretty confusing now; there's a block of text that
> vaguely applies to the aggregate of __rmqueue_steal(),
> __rmqueue_fallback() and half of __rmqueue(). I think this new code
> does a better job of speaking for itself, so I think we should just
> delete this block comment and replace it with some more verbosity
> elsewhere.

I'm glad you think so, let's remove it then!

> > +/* Try to claim a whole foreign block, take a page, expand the remainder */
> 
> Also on the commentary front, I am not a fan of "foreign" and "native":
> 
> - "Foreign" is already used in this file to mean NUMA-nonlocal.
> 
> - We already have "start" and "fallback" being used in identifiers
>   as adjectives to describe the migratetype concept.
> 
> I wouldn't say those are _better_; "native" and "foreign" might be
> clearer, but it's not worth introducing inconsistency IMO.

That's a fair point, no objection to renaming them.

> >  static __always_inline struct page *
> > -__rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
> > +__rmqueue_claim(struct zone *zone, int order, int start_migratetype,
> >  		   unsigned int alloc_flags)
> >  {
> >  	struct free_area *area;
> 
> [pasting in more context that wasn't in the original diff..]
> 
> >	/*
> >	 * Find the largest available free page in the other list. This roughly
> >	 * approximates finding the pageblock with the most free pages, which
> >	 * would be too costly to do exactly.
> >	 */
> >	for (current_order = MAX_PAGE_ORDER; current_order >= min_order;
> >				--current_order) {
> 
> IIUC we could go one step further here and also avoid repeating this
> iteration? Maybe something for a separate patch though?

That might be worth a test, but I agree this should be a separate
patch.

AFAICS, in the most common configurations MAX_PAGE_ORDER is only one
step above pageblock_order, or even the same. It might not be worth
the complication.

> Anyway, the approach seems like a clear improvement, thanks. I will
> need to take a closer look at it tomorrow; I've run out of brain
> juice today.

I appreciate you taking a look, thanks.
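
For anyone skimming the thread without the full patch in front of
them, here is the shape of the new __rmqueue() dispatch, heavily
condensed and paraphrased (the CMA leg and the ALLOC_NOFRAGMENT
checks are elided; names follow the series as posted):

	enum rmqueue_mode {
		RMQUEUE_NORMAL,
		RMQUEUE_CMA,
		RMQUEUE_CLAIM,
		RMQUEUE_STEAL,
	};

	switch (*mode) {
	case RMQUEUE_NORMAL:
		/* Preferred freelists first: no fragmentation risk */
		page = __rmqueue_smallest(zone, order, migratetype);
		if (page)
			return page;
		fallthrough;
	case RMQUEUE_CMA:
		/* ...CMA fallback elided... */
		fallthrough;
	case RMQUEUE_CLAIM:
		/* Claim a whole fallback block for start_migratetype */
		page = __rmqueue_claim(zone, order, migratetype, alloc_flags);
		if (page) {
			*mode = RMQUEUE_NORMAL;
			return page;
		}
		fallthrough;
	case RMQUEUE_STEAL:
		/* Last resort: single pages, block keeps its migratetype */
		page = __rmqueue_steal(zone, order, migratetype);
		if (page) {
			*mode = RMQUEUE_STEAL;
			return page;
		}
	}
	return NULL;

The point is that *mode persists across the rmqueue_bulk() loop: once
a cheaper stage has come up empty under the zone->lock, subsequent
iterations jump straight to the stage that last succeeded instead of
re-walking known-empty freelists.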
> Here's what I got from redistributing the block comment and flipping
> the terminology:
> 
> diff --git i/mm/page_alloc.c w/mm/page_alloc.c
> index dfb2b3f508af..b8142d605691 100644
> --- i/mm/page_alloc.c
> +++ w/mm/page_alloc.c
> @@ -2183,21 +2183,13 @@ try_to_claim_block(struct zone *zone, struct page *page,
>  }
>  
>  /*
> - * Try finding a free buddy page on the fallback list.
> - *
> - * This will attempt to claim a whole pageblock for the requested type
> - * to ensure grouping of such requests in the future.
> - *
> - * If a whole block cannot be claimed, steal an individual page, regressing to
> - * __rmqueue_smallest() logic to at least break up as little contiguity as
> - * possible.
> + * Try to allocate from some fallback migratetype by claiming the entire block,
> + * i.e. converting it to the allocation's start migratetype.
>   *
>   * The use of signed ints for order and current_order is a deliberate
>   * deviation from the rest of this file, to make the for loop
>   * condition simpler.
>   */
> -
> -/* Try to claim a whole foreign block, take a page, expand the remainder */
>  static __always_inline struct page *
>  __rmqueue_claim(struct zone *zone, int order, int start_migratetype,
>  		   unsigned int alloc_flags)
> @@ -2247,7 +2239,10 @@ __rmqueue_claim(struct zone *zone, int order, int start_migratetype,
>  	return NULL;
>  }
>  
> -/* Try to steal a single page from a foreign block */
> +/*
> + * Try to steal a single page from some fallback migratetype. Leave the rest of
> + * the block as its current migratetype, potentially causing fragmentation.
> + */
>  static __always_inline struct page *
>  __rmqueue_steal(struct zone *zone, int order, int start_migratetype)
>  {
> @@ -2307,7 +2302,9 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
>  }
>  
>  /*
> - * Try the different freelists, native then foreign.
> + * First try the freelists of the requested migratetype, then try
> + * fallbacks. Roughly, each fallback stage poses more of a fragmentation
> + * risk.

How about "then try fallback modes with increasing levels of
fragmentation risk"?

>   * The fallback logic is expensive and rmqueue_bulk() calls in
>   * a loop with the zone->lock held, meaning the freelists are
> @@ -2332,7 +2329,7 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
>  	case RMQUEUE_CLAIM:
>  		page = __rmqueue_claim(zone, order, migratetype, alloc_flags);
>  		if (page) {
> -			/* Replenished native freelist, back to normal mode */
> +			/* Replenished requested migratetype's freelist, back to normal mode */
>  			*mode = RMQUEUE_NORMAL;

This line is kind of long now. How about:

	/* Replenished preferred freelist, back to normal mode */

But yeah, I like your proposed changes. Would you care to send a
proper patch?

Acked-by: Johannes Weiner <hannes@cmpxchg.org>
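
P.S. To put rough numbers on the MAX_PAGE_ORDER point above, assuming
the common x86-64 configuration with 4K pages:

	MAX_PAGE_ORDER  == 10	/* largest buddy: 1024 pages, 4M */
	pageblock_order ==  9	/* PMD hugepage:   512 pages, 2M */

With min_order clamped to pageblock_order under ALLOC_NOFRAGMENT, the
search loop

	for (current_order = MAX_PAGE_ORDER; current_order >= min_order;
				--current_order) {

visits at most two orders before giving up, so there isn't much
repeated iteration to save in the first place.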