Subject: Re: [PATCH v3 1/2] mm/page_alloc: Clarify terminology in migratetype fallback code
From: Vlastimil Babka
To: Brendan Jackman, Andrew Morton
Cc: Mel Gorman, Michal Hocko, Johannes Weiner, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yosry Ahmed
Date: Wed, 26 Feb 2025 16:08:09 +0100
In-Reply-To: <20250225-clarify-steal-v3-1-f2550ead0139@google.com>
References: <20250225-clarify-steal-v3-0-f2550ead0139@google.com> <20250225-clarify-steal-v3-1-f2550ead0139@google.com>
On 2/25/25 4:29 PM, Brendan Jackman wrote:
> This code is rather confusing because:
>
> 1. "Steal" is sometimes used to refer to the general concept of
>    allocating from a block of a fallback migratetype
>    (steal_suitable_fallback()) but sometimes it refers specifically to
>    converting a whole block's migratetype (can_steal_fallback()).
>
> 2. can_steal_fallback() sounds as though it's answering the question "am
>    I functionally permitted to allocate from that other type" but in
>    fact it is encoding a heuristic preference.
>
> 3. The same piece of data has different names in different places:
>    can_steal vs whole_block. This reinforces point 2 because it looks
>    like the different names reflect a shift in intent from "am I
>    allowed to steal" to "do I want to steal", but no such shift exists.
>
> Fix 1. by avoiding the term "steal" in ambiguous contexts. Start using
> the term "claim" to refer to the special case of stealing the entire
> block.
>
> Fix 2. by using "should" instead of "can", and also rename its
> parameters and add some commentary to make it more explicit what they
> mean.
>
> Fix 3. by adopting the new "claim" terminology universally for this
> set of variables.
>
> Signed-off-by: Brendan Jackman

Reviewed-by: Vlastimil Babka

Some nits:

> ---
>  mm/compaction.c |  4 ++--
>  mm/internal.h   |  2 +-
>  mm/page_alloc.c | 72 ++++++++++++++++++++++++++++-----------------------------
>  3 files changed, 39 insertions(+), 39 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 0992106d4ea751f7f1f8ebf7c75cd433d676cbe0..550ce50218075509ccb5f9485fd84f5d1f3d23a7 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -2333,7 +2333,7 @@ static enum compact_result __compact_finished(struct compact_control *cc)
>  		ret = COMPACT_NO_SUITABLE_PAGE;
>  	for (order = cc->order; order < NR_PAGE_ORDERS; order++) {
>  		struct free_area *area = &cc->zone->free_area[order];
> -		bool can_steal;
> +		bool claim_block;
>
>  		/* Job done if page is free of the right migratetype */
>  		if (!free_area_empty(area, migratetype))
> @@ -2350,7 +2350,7 @@ static enum compact_result __compact_finished(struct compact_control *cc)
>  		 * other migratetype buddy lists.
>  		 */
>  		if (find_suitable_fallback(area, order, migratetype,
> -					   true, &can_steal) != -1)
> +					   true, &claim_block) != -1)
>  			/*
>  			 * Movable pages are OK in any pageblock. If we are
>  			 * stealing for a non-movable allocation, make sure
> diff --git a/mm/internal.h b/mm/internal.h
> index b07550db2bfd1d152fa90f91b3687b0fa1a9f653..aa30282a774ae26349944a75da854ae6a3da2a98 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -863,7 +863,7 @@ static inline void init_cma_pageblock(struct page *page)
>
>
>  int find_suitable_fallback(struct free_area *area, unsigned int order,
> -			   int migratetype, bool only_stealable, bool *can_steal);
> +			   int migratetype, bool claim_only, bool *claim_block);
>
>  static inline bool free_area_empty(struct free_area *area, int migratetype)
>  {
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 5d8e274c8b1d500d263a17ef36fe190f60b88196..5e694046ef92965b34d4831e96d92f02681a8b45 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1942,22 +1942,22 @@ static inline bool boost_watermark(struct zone *zone)
>
>  /*
>   * When we are falling back to another migratetype during allocation, try to
> - * steal extra free pages from the same pageblocks to satisfy further
> - * allocations, instead of polluting multiple pageblocks.
> + * claim entire blocks to satisfy further allocations, instead of polluting
> + * multiple pageblocks.
>   *
> - * If we are stealing a relatively large buddy page, it is likely there will
> - * be more free pages in the pageblock, so try to steal them all. For
> - * reclaimable and unmovable allocations, we steal regardless of page size,
> - * as fragmentation caused by those allocations polluting movable pageblocks
> - * is worse than movable allocations stealing from unmovable and reclaimable
> - * pageblocks.
> + * If we are stealing a relatively large buddy page, it is likely there will be
> + * more free pages in the pageblock, so try to claim the whole block. For
> + * reclaimable and unmovable allocations, we claim the whole block regardless of

It's also "try to claim" here, as it may still fail when there are not
enough free/compatible pages, even for those migratetypes.
Maybe the question (out of scope of this patch) is whether they should
get a lower threshold than half. Before the migratetype hygiene series,
"we steal regardless" meant that we really would steal all free pages
even if not claiming the pageblock.

> + * page size, as fragmentation caused by those allocations polluting movable
> + * pageblocks is worse than movable allocations stealing from unmovable and
> + * reclaimable pageblocks.
>   */
> -static bool can_steal_fallback(unsigned int order, int start_mt)
> +static bool should_claim_block(unsigned int order, int start_mt)

So technically it's should_try_claim_block() if we want to be precise
(but longer).

> {
>  	/*
>  	 * Leaving this order check is intended, although there is
>  	 * relaxed order check in next check. The reason is that
> -	 * we can actually steal whole pageblock if this condition met,
> +	 * we can actually claim the whole pageblock if this condition met,

"try claiming" here as well.

>  	 * but, below check doesn't guarantee it and that is just heuristic
>  	 * so could be changed anytime.
>  	 */
> @@ -1970,7 +1970,7 @@ static bool can_steal_fallback(unsigned int order, int start_mt)
>  	 * reclaimable pages that are closest to the request size. After a
>  	 * while, memory compaction may occur to form large contiguous pages,
>  	 * and the next movable allocation may not need to steal. Unmovable and
> -	 * reclaimable allocations need to actually steal pages.
> +	 * reclaimable allocations need to actually claim the whole block.

Also "try to" here.

>  	 */
>  	if (order >= pageblock_order / 2 ||
>  	    start_mt == MIGRATE_RECLAIMABLE ||