Date: Tue, 25 Apr 2023 10:39:12 -0400
From: Johannes Weiner <hannes@cmpxchg.org>
To: Mel Gorman
Cc: linux-mm@kvack.org, Kaiyang Zhao, Vlastimil Babka, David Rientjes,
	linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: Re: [RFC PATCH 08/26] mm: page_alloc: claim blocks during compaction capturing
Message-ID: <20230425143912.GB17132@cmpxchg.org>
References: <20230418191313.268131-1-hannes@cmpxchg.org>
	<20230418191313.268131-9-hannes@cmpxchg.org>
	<20230421131227.k2afmhb6kejdbhui@techsingularity.net>
In-Reply-To: <20230421131227.k2afmhb6kejdbhui@techsingularity.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
On Fri, Apr 21, 2023 at 02:12:27PM +0100, Mel Gorman wrote:
> On Tue, Apr 18, 2023 at 03:12:55PM -0400, Johannes Weiner wrote:
> > When capturing a whole block, update the migratetype accordingly. For
> > example, a THP allocation might capture an unmovable block. If the THP
> > gets split and partially freed later, the remainder should group up
> > with movable allocations.
> > 
> > Signed-off-by: Johannes Weiner
> > ---
> >  mm/internal.h   |  1 +
> >  mm/page_alloc.c | 42 ++++++++++++++++++++++++------------------
> >  2 files changed, 25 insertions(+), 18 deletions(-)
> > 
> > diff --git a/mm/internal.h b/mm/internal.h
> > index 024affd4e4b5..39f65a463631 100644
> > --- a/mm/internal.h
> > +++ b/mm/internal.h
> > @@ -432,6 +432,7 @@ struct compact_control {
> >   */
> >  struct capture_control {
> >  	struct compact_control *cc;
> > +	int migratetype;
> >  	struct page *page;
> >  };
> > 
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 4d20513c83be..8e5996f8b4b4 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -615,6 +615,17 @@ void set_pageblock_migratetype(struct page *page, int migratetype)
> >  			page_to_pfn(page), MIGRATETYPE_MASK);
> >  }
> > 
> > +static void change_pageblock_range(struct page *pageblock_page,
> > +					int start_order, int migratetype)
> > +{
> > +	int nr_pageblocks = 1 << (start_order - pageblock_order);
> > +
> > +	while (nr_pageblocks--) {
> > +		set_pageblock_migratetype(pageblock_page, migratetype);
> > +		pageblock_page += pageblock_nr_pages;
> > +	}
> > +}
> > +
> >  #ifdef CONFIG_DEBUG_VM
> >  static int page_outside_zone_boundaries(struct zone *zone, struct page *page)
> >  {
> > @@ -962,14 +973,19 @@ compaction_capture(struct capture_control *capc, struct page *page,
> >  	    is_migrate_isolate(migratetype))
> >  		return false;
> > 
> > -	/*
> > -	 * Do not let lower order allocations pollute a movable pageblock.
> > -	 * This might let an unmovable request use a reclaimable pageblock
> > -	 * and vice-versa but no more than normal fallback logic which can
> > -	 * have trouble finding a high-order free page.
> > -	 */
> > -	if (order < pageblock_order && migratetype == MIGRATE_MOVABLE)
> > +	if (order >= pageblock_order) {
> > +		migratetype = capc->migratetype;
> > +		change_pageblock_range(page, order, migratetype);
> > +	} else if (migratetype == MIGRATE_MOVABLE) {
> > +		/*
> > +		 * Do not let lower order allocations pollute a
> > +		 * movable pageblock. This might let an unmovable
> > +		 * request use a reclaimable pageblock and vice-versa
> > +		 * but no more than normal fallback logic which can
> > +		 * have trouble finding a high-order free page.
> > +		 */
> >  		return false;
> > +	}
> > 
> 
> For capturing pageblock order or larger, why not unconditionally make
> the block MOVABLE? Even if it's a zero page allocation, it would be nice
> to keep the pageblock for movable pages after the split as long as possible.

The zero page isn't split, but if some other unmovable allocation does
a split and free later on, I want to avoid filling a block that already
holds an unmovable allocation with movables. That block is already lost
to compaction, and this way future unmovable allocations are more
likely to group into that block rather than claim an additional
unmovable block.

I had to do a double take for block merges beyond pageblock order,
wondering if we can claim multiple blocks for requests (capc->order)
smaller than a block. But that can't happen: once we reach
pageblock_order during merging, we claim, capture and exit. That means
order > pageblock_order can only happen if capc->order is actually
larger than a pageblock as well. I'll add a comment.
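
In case it helps to see the retag arithmetic from change_pageblock_range()
in isolation, here is a small userspace model (assumed pageblock_order of 9
and plain ints instead of struct page; this is an illustration, not the
kernel code itself):

/*
 * Userspace model of the retagging arithmetic in the hunk above.
 * PAGEBLOCK_ORDER is an assumed value (typical with THP on x86-64);
 * the kernel derives pageblock_order from the architecture and config.
 */
#include <stdio.h>

#define PAGEBLOCK_ORDER 9

/* Mirrors change_pageblock_range(): one retag per block the capture spans. */
static int blocks_retagged(int captured_order)
{
	/* the capture path only retags for order >= pageblock_order */
	if (captured_order < PAGEBLOCK_ORDER)
		return 0;
	return 1 << (captured_order - PAGEBLOCK_ORDER);
}

int main(void)
{
	for (int order = 8; order <= 11; order++)
		printf("capture at order %2d -> %d pageblock(s) retagged\n",
		       order, blocks_retagged(order));
	return 0;
}

So a capture at exactly pageblock_order retags the one block it sits in,
and each order above that doubles the number of blocks whose migratetype
follows the capturing allocation.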