From mboxrd@z Thu Jan 1 00:00:00 1970
From: Baolin Wang <baolin.wang@linux.alibaba.com>
Date: Mon, 20 Nov 2023 17:18:29 +0800
Subject: Re: [PATCH v1 1/4] mm/compaction: enable compacting >0 order folios.
To: Zi Yan <zi.yan@sent.com>, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: "Huang, Ying", Ryan Roberts, Andrew Morton, "Matthew Wilcox (Oracle)",
 David Hildenbrand, "Yin, Fengwei", Yu Zhao, Vlastimil Babka, "Kirill A.
Shutemov" , Johannes Weiner , Kemeng Shi , Mel Gorman , Rohan Puri , Mcgrof Chamberlain , Adam Manzanares , "Vishal Moola (Oracle)" References: <20231113170157.280181-1-zi.yan@sent.com> <20231113170157.280181-2-zi.yan@sent.com> From: Baolin Wang In-Reply-To: <20231113170157.280181-2-zi.yan@sent.com> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-Rspam-User: X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 45EA1100017 X-Stat-Signature: xa3fa7m8e747kdbgi7u7cteonuqqqh1i X-HE-Tag: 1700471901-952184 X-HE-Meta: U2FsdGVkX1/0238NgynJ1c7CxLmXpMxwa5CVsFyN0/6KZFHpAUV8wnY3T/CcGjdzdIpY54Sqa1VMpMrmVOGAkb0YmJ6fc5qmj5VLFg7fJuEqbmStJHO9HaAWiuDJGynQlMT9OrxdPXCn1EA7x1/jRDdtvOrr+k4qw34m8F4Oc/i3IsOZb7MYCpKk7PnjGJ8pKxwFNKl7AZWYm9FT0TonysBBcFgMfSW1FnO/KCrxjwOBDJEoj/V3DoBJKk8kVycYg1/YF2U7OIS6ZYU3hN2+DjDZMn0yj2ueTA4b8c7DkFXTCvMIcr526eZTDcktMiJRDZDm3qHZJ9eYDgjmGGNZn/xDRAjQdA1yL3Bg2V/zsOsvb8yed6G+866XCNLNFJPEYPD7KbXFM6aUfjgBe2ttJ/AeEptWcRroIUZI7eZXteHTsDXzpe6r8oZh8SXoyHy5WJnNsZba+nIScL0DudvXlg8gl3FoPkzljC33ULrRELdlvofC554hNgUicoe/netnnN1n9NkMJaOsYL5kbDqXCdkrEo7O337xEYrCNrVEpxb65rkvIGbXhOKLyimPqced3N1ZqAjqDLAJ/eQi94BltcFhYf8ELmUYihKTas/Y0XNAlHZv5BkQGeqp8pmXTiBc2BZuwtgwDvW0KeYX8MsoySBXUQ6SrNIZP5LkEky7pmEKOwPxyyu+ulet1ho6afgbnzkxkWb9MN4psr46RI0cWGkqRaqlpeAHEVaJyJq06v2GVYeSF8vL0TwI4B2/G/jID2Z9XMl/HHa5uLNLXo/Z/yJ8nls7LDCbvI0AL0u1Fq2NZtCwmZpaNt1CvxwgriQtc+4q6+lW+UBE1MS6O7CDnNZRERvvURsmV8w+1PufkXherUb2reRLCY9AF7mUomXi0/lyrlQEWMq0SsLBEOMWqdhfzGoNNCxh/wFAq4AB3EuIaZO43BSzRJWA7rMEDPvkSPT+HUCWBZqUUu2I2XQ ZZVgAOHw CbYJ9sD/pINFDHWTd9CWjBnDWPj55rfJvW2Zhv1ELnEgv02giOxJ6DGA9z08L0MQP6PWtjdBpiw3STvgRNxO3M6GI/eSqiehpHk45CPCrx5ZG90jgbY0VLBNbxqZb39obcrxeGQLLAsOdq3Ov5ngb5QESqtdpeQdwm1QbUsS46CJP71qa1PJmoGIPyeIY4caUIAKe+1EkosNbZbfFgTBPiZElmkzBHrKoeDPn X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: On 11/14/2023 1:01 AM, Zi Yan wrote: > From: Zi Yan > > migrate_pages() supports >0 order folio migration and during compaction, > even if compaction_alloc() cannot provide >0 order free pages, > migrate_pages() can split the source page and try to migrate the base pages > from the split. It can be a baseline and start point for adding support for > compacting >0 order folios. > > Suggested-by: Huang Ying > Signed-off-by: Zi Yan > --- > mm/compaction.c | 57 ++++++++++++++++++++++++++++++++++++------------- > 1 file changed, 42 insertions(+), 15 deletions(-) > > diff --git a/mm/compaction.c b/mm/compaction.c > index 01ba298739dd..5217dd35b493 100644 > --- a/mm/compaction.c > +++ b/mm/compaction.c > @@ -816,6 +816,21 @@ static bool too_many_isolated(struct compact_control *cc) > return too_many; > } > > +/* > + * 1. if the page order is larger than or equal to target_order (i.e., > + * cc->order and when it is not -1 for global compaction), skip it since > + * target_order already indicates no free page with larger than target_order > + * exists and later migrating it will most likely fail; > + * > + * 2. 
> + * skip them;
> + */
> +static bool skip_isolation_on_order(int order, int target_order)
> +{
> +	return (target_order != -1 && order >= target_order) ||
> +		order >= pageblock_order;
> +}
> +
>   /**
>    * isolate_migratepages_block() - isolate all migrate-able pages within
>    *				  a single pageblock
> @@ -1009,7 +1024,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>   		/*
>   		 * Regardless of being on LRU, compound pages such as THP and
>   		 * hugetlbfs are not to be compacted unless we are attempting
> -		 * an allocation much larger than the huge page size (eg CMA).
> +		 * an allocation larger than the compound page size.
>   		 * We can potentially save a lot of iterations if we skip them
>   		 * at once. The check is racy, but we can consider only valid
>   		 * values and the only danger is skipping too much.
> @@ -1017,11 +1032,18 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>   		if (PageCompound(page) && !cc->alloc_contig) {
>   			const unsigned int order = compound_order(page);
>   
> -			if (likely(order <= MAX_ORDER)) {
> -				low_pfn += (1UL << order) - 1;
> -				nr_scanned += (1UL << order) - 1;
> +			/*
> +			 * Skip based on page order and compaction target order
> +			 * and skip hugetlbfs pages.
> +			 */
> +			if (skip_isolation_on_order(order, cc->order) ||
> +			    PageHuge(page)) {
> +				if (order <= MAX_ORDER) {
> +					low_pfn += (1UL << order) - 1;
> +					nr_scanned += (1UL << order) - 1;
> +				}
> +				goto isolate_fail;
>   			}
> -			goto isolate_fail;
>   		}
>   
>   		/*
> @@ -1144,17 +1166,18 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>   					goto isolate_abort;
>   				}
>   			}
> +		}
>   
> -			/*
> -			 * folio become large since the non-locked check,
> -			 * and it's on LRU.
> -			 */
> -			if (unlikely(folio_test_large(folio) && !cc->alloc_contig)) {
> -				low_pfn += folio_nr_pages(folio) - 1;
> -				nr_scanned += folio_nr_pages(folio) - 1;
> -				folio_set_lru(folio);
> -				goto isolate_fail_put;
> -			}
> +		/*
> +		 * Check LRU folio order under the lock
> +		 */
> +		if (unlikely(skip_isolation_on_order(folio_order(folio),
> +						     cc->order) &&
> +			     !cc->alloc_contig)) {
> +			low_pfn += folio_nr_pages(folio) - 1;
> +			nr_scanned += folio_nr_pages(folio) - 1;
> +			folio_set_lru(folio);
> +			goto isolate_fail_put;
>   		}

Why was this part moved out of the 'if (lruvec != locked)' block? If we
hold the lru lock, then we do not need to check again, right?
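
For anyone skimming the thread: below is a minimal userspace sketch of the
skip decision the new helper encodes. It hard-codes pageblock_order (often
9 on x86-64 with 4KB pages) and drops the kernel types, so it is only an
illustration of the logic, not kernel code.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the kernel's pageblock_order; the real value is per-arch. */
#define PAGEBLOCK_ORDER 9

/*
 * Skip a folio when its order already satisfies the compaction target
 * (target_order == -1 means global compaction with no specific target),
 * or when it spans at least a whole pageblock, since migrating it cannot
 * reduce fragmentation.
 */
static bool skip_isolation_on_order(int order, int target_order)
{
	return (target_order != -1 && order >= target_order) ||
		order >= PAGEBLOCK_ORDER;
}

int main(void)
{
	/* order-4 folio while compacting for order 6: worth isolating -> 0 */
	printf("%d\n", skip_isolation_on_order(4, 6));
	/* order-6 folio, order-6 target: migrating it cannot help -> 1 */
	printf("%d\n", skip_isolation_on_order(6, 6));
	/* pageblock-sized folio under global compaction (-1) -> 1 */
	printf("%d\n", skip_isolation_on_order(9, -1));
	return 0;
}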