From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sat, 28 Jan 2023 17:29:30 +0800
Subject: Re: [PATCH -v3 2/9] migrate_pages: separate hugetlb folios migration
From: Chen Wandun <chenwandun@huawei.com>
To: Huang Ying, Andrew Morton
CC: Baolin Wang, Zi Yan, Yang Shi, Oscar Salvador, Matthew Wilcox, Bharata B Rao, Alistair Popple, haoxin, Minchan Kim
References: <20230116063057.653862-1-ying.huang@intel.com> <20230116063057.653862-3-ying.huang@intel.com>
In-Reply-To: <20230116063057.653862-3-ying.huang@intel.com>
Content-Type: text/plain; charset="UTF-8"; format=flowed
On 2023/1/16 14:30, Huang Ying wrote:
> This is a preparation patch to batch the folio unmapping and moving
> for the non-hugetlb folios. Based on that we can batch the TLB
> shootdown during the folio migration and make it possible to use some
> hardware accelerator for the folio copying.
>
> In this patch the hugetlb folios and non-hugetlb folios migration is
> separated in migrate_pages() to make it easy to change the non-hugetlb
> folios migration implementation.
>
> Signed-off-by: "Huang, Ying"
> Reviewed-by: Baolin Wang
> Cc: Zi Yan
> Cc: Yang Shi
> Cc: Oscar Salvador
> Cc: Matthew Wilcox
> Cc: Bharata B Rao
> Cc: Alistair Popple
> Cc: haoxin
> Cc: Minchan Kim
> ---
>  mm/migrate.c | 141 +++++++++++++++++++++++++++++++++++++++++++--------
>  1 file changed, 119 insertions(+), 22 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index ef388a9e4747..be7f37523463 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1396,6 +1396,8 @@ static inline int try_split_folio(struct folio *folio, struct list_head *split_f
>  	return rc;
>  }
>
> +#define NR_MAX_MIGRATE_PAGES_RETRY	10
> +
>  struct migrate_pages_stats {
>  	int nr_succeeded;	/* Normal and large folios migrated successfully, in
>  				   units of base pages */
> @@ -1406,6 +1408,95 @@ struct migrate_pages_stats {
>  	int nr_thp_split;	/* THP split before migrating */
>  };
>
> +/*
> + * Returns the number of hugetlb folios that were not migrated, or an error code
> + * after NR_MAX_MIGRATE_PAGES_RETRY attempts or if no hugetlb folios are movable
> + * any more because the list has become empty or no retryable hugetlb folios
> + * exist any more. It is caller's responsibility to call putback_movable_pages()
> + * only if ret != 0.
> + */
> +static int migrate_hugetlbs(struct list_head *from, new_page_t get_new_page,
> +			    free_page_t put_new_page, unsigned long private,
> +			    enum migrate_mode mode, int reason,
> +			    struct migrate_pages_stats *stats,
> +			    struct list_head *ret_folios)
> +{
> +	int retry = 1;
> +	int nr_failed = 0;
> +	int nr_retry_pages = 0;
> +	int pass = 0;
> +	struct folio *folio, *folio2;
> +	int rc, nr_pages;
> +
> +	for (pass = 0; pass < NR_MAX_MIGRATE_PAGES_RETRY && retry; pass++) {
> +		retry = 0;
> +		nr_retry_pages = 0;
> +
> +		list_for_each_entry_safe(folio, folio2, from, lru) {
> +			if (!folio_test_hugetlb(folio))
> +				continue;
> +
> +			nr_pages = folio_nr_pages(folio);
> +
> +			cond_resched();
> +
> +			rc = unmap_and_move_huge_page(get_new_page,
> +						      put_new_page, private,
> +						      &folio->page, pass > 2, mode,
> +						      reason, ret_folios);
> +			/*
> +			 * The rules are:
> +			 *	Success: hugetlb folio will be put back
> +			 *	-EAGAIN: stay on the from list
> +			 *	-ENOMEM: stay on the from list
> +			 *	-ENOSYS: stay on the from list
> +			 *	Other errno: put on ret_folios list
> +			 */
> +			switch(rc) {
> +			case -ENOSYS:
> +				/* Hugetlb migration is unsupported */
> +				nr_failed++;
> +				stats->nr_failed_pages += nr_pages;
> +				list_move_tail(&folio->lru, ret_folios);
> +				break;
> +			case -ENOMEM:
> +				/*
> +				 * When memory is low, don't bother to try to migrate
> +				 * other folios, just exit.
> +				 */
> +				stats->nr_failed_pages += nr_pages + nr_retry_pages;
> +				return -ENOMEM;
> +			case -EAGAIN:
> +				retry++;
> +				nr_retry_pages += nr_pages;
> +				break;
> +			case MIGRATEPAGE_SUCCESS:
> +				stats->nr_succeeded += nr_pages;
> +				break;
> +			default:
> +				/*
> +				 * Permanent failure (-EBUSY, etc.):
> +				 *	unlike -EAGAIN case, the failed folio is
> +				 *	removed from migration folio list and not
> +				 *	retried in the next outer loop.
> +				 */
> +				nr_failed++;
> +				stats->nr_failed_pages += nr_pages;
> +				break;
> +			}
> +		}
> +	}
> +	/*
> +	 * nr_failed is number of hugetlb folios failed to be migrated. After
> +	 * NR_MAX_MIGRATE_PAGES_RETRY attempts, give up and count retried hugetlb
> +	 * folios as failed.
> +	 */
> +	nr_failed += retry;
> +	stats->nr_failed_pages += nr_retry_pages;
> +
> +	return nr_failed;
> +}
> +
>  /*
>   * migrate_pages - migrate the folios specified in a list, to the free folios
>   * supplied as the target for the page migration
> @@ -1422,10 +1513,10 @@ struct migrate_pages_stats {
>   * @ret_succeeded:	Set to the number of folios migrated successfully if
>   *			the caller passes a non-NULL pointer.
>   *
> - * The function returns after 10 attempts or if no folios are movable any more
> - * because the list has become empty or no retryable folios exist any more.
> - * It is caller's responsibility to call putback_movable_pages() to return folios
> - * to the LRU or free list only if ret != 0.
> + * The function returns after NR_MAX_MIGRATE_PAGES_RETRY attempts or if no folios
> + * are movable any more because the list has become empty or no retryable folios
> + * exist any more. It is caller's responsibility to call putback_movable_pages()
> + * only if ret != 0.
>   *
>   * Returns the number of {normal folio, large folio, hugetlb} that were not
>   * migrated, or an error code. The number of large folio splits will be
> @@ -1439,7 +1530,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  	int retry = 1;
>  	int large_retry = 1;
>  	int thp_retry = 1;
> -	int nr_failed = 0;
> +	int nr_failed;
>  	int nr_retry_pages = 0;
>  	int nr_large_failed = 0;
>  	int pass = 0;
> @@ -1456,38 +1547,45 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  	trace_mm_migrate_pages_start(mode, reason);
>
>  	memset(&stats, 0, sizeof(stats));
> +	rc = migrate_hugetlbs(from, get_new_page, put_new_page, private, mode, reason,
> +			      &stats, &ret_folios);
> +	if (rc < 0)
> +		goto out;

How about continuing to migrate the small pages in the -ENOMEM case? Maybe there are still free small pages.
> +	nr_failed = rc;
> +
>  split_folio_migration:
> -	for (pass = 0; pass < 10 && (retry || large_retry); pass++) {
> +	for (pass = 0;
> +	     pass < NR_MAX_MIGRATE_PAGES_RETRY && (retry || large_retry);
> +	     pass++) {
>  		retry = 0;
>  		large_retry = 0;
>  		thp_retry = 0;
>  		nr_retry_pages = 0;
>
>  		list_for_each_entry_safe(folio, folio2, from, lru) {
> +			/* Retried hugetlb folios will be kept in list */
> +			if (folio_test_hugetlb(folio)) {
> +				list_move_tail(&folio->lru, &ret_folios);
> +				continue;
> +			}
> +
>  			/*
>  			 * Large folio statistics is based on the source large
>  			 * folio. Capture required information that might get
>  			 * lost during migration.
>  			 */
> -			is_large = folio_test_large(folio) && !folio_test_hugetlb(folio);
> +			is_large = folio_test_large(folio);
>  			is_thp = is_large && folio_test_pmd_mappable(folio);
>  			nr_pages = folio_nr_pages(folio);
> +
>  			cond_resched();
>
> -			if (folio_test_hugetlb(folio))
> -				rc = unmap_and_move_huge_page(get_new_page,
> -						put_new_page, private,
> -						&folio->page, pass > 2, mode,
> -						reason,
> -						&ret_folios);
> -			else
> -				rc = unmap_and_move(get_new_page, put_new_page,
> -						private, folio, pass > 2, mode,
> -						reason, &ret_folios);
> +			rc = unmap_and_move(get_new_page, put_new_page,
> +					    private, folio, pass > 2, mode,
> +					    reason, &ret_folios);
>  			/*
>  			 * The rules are:
> -			 *	Success: non hugetlb folio will be freed, hugetlb
> -			 *		folio will be put back
> +			 *	Success: folio will be freed
>  			 *	-EAGAIN: stay on the from list
>  			 *	-ENOMEM: stay on the from list
>  			 *	-ENOSYS: stay on the from list
> @@ -1514,7 +1612,6 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  						stats.nr_thp_split += is_thp;
>  						break;
>  					}
> -					/* Hugetlb migration is unsupported */
>  				} else if (!no_split_folio_counting) {
>  					nr_failed++;
>  				}
> @@ -1608,8 +1705,8 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>  	 */
>  	if (!list_empty(&split_folios)) {
>  		/*
> -		 * Move non-migrated folios (after 10 retries) to ret_folios
> -		 * to avoid migrating them again.
> +		 * Move non-migrated folios (after NR_MAX_MIGRATE_PAGES_RETRY
> +		 * retries) to ret_folios to avoid migrating them again.
>  		 */
>  		list_splice_init(from, &ret_folios);
>  		list_splice_init(&split_folios, from);