From: "Huang, Ying" <ying.huang@intel.com>
To: Chen Wandun <chenwandun@huawei.com>
Cc: Andrew Morton <akpm@linux-foundation.org>, <linux-mm@kvack.org>,
<linux-kernel@vger.kernel.org>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
Zi Yan <ziy@nvidia.com>, Yang Shi <shy828301@gmail.com>,
Oscar Salvador <osalvador@suse.de>,
Matthew Wilcox <willy@infradead.org>,
Bharata B Rao <bharata@amd.com>,
Alistair Popple <apopple@nvidia.com>,
haoxin <xhao@linux.alibaba.com>,
Minchan Kim <minchan@kernel.org>
Subject: Re: [PATCH -v3 2/9] migrate_pages: separate hugetlb folios migration
Date: Sun, 29 Jan 2023 08:36:54 +0800
Message-ID: <87edrech21.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: <c1f1e516-308b-3ec1-72b4-018d42def2d0@huawei.com> (Chen Wandun's message of "Sat, 28 Jan 2023 17:29:30 +0800")
Chen Wandun <chenwandun@huawei.com> writes:
> On 2023/1/16 14:30, Huang Ying wrote:
>> This is a preparation patch to batch the folio unmapping and moving
>> for the non-hugetlb folios. Based on that we can batch the TLB
>> shootdown during the folio migration and make it possible to use some
>> hardware accelerator for the folio copying.
>>
>> In this patch the hugetlb folios and non-hugetlb folios migration is
>> separated in migrate_pages() to make it easy to change the non-hugetlb
>> folios migration implementation.
>>
>> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
>> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: Yang Shi <shy828301@gmail.com>
>> Cc: Oscar Salvador <osalvador@suse.de>
>> Cc: Matthew Wilcox <willy@infradead.org>
>> Cc: Bharata B Rao <bharata@amd.com>
>> Cc: Alistair Popple <apopple@nvidia.com>
>> Cc: haoxin <xhao@linux.alibaba.com>
>> Cc: Minchan Kim <minchan@kernel.org>
>> ---
>> mm/migrate.c | 141 +++++++++++++++++++++++++++++++++++++++++++--------
>> 1 file changed, 119 insertions(+), 22 deletions(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index ef388a9e4747..be7f37523463 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -1396,6 +1396,8 @@ static inline int try_split_folio(struct folio *folio, struct list_head *split_f
>> return rc;
>> }
>> +#define NR_MAX_MIGRATE_PAGES_RETRY 10
>> +
>> struct migrate_pages_stats {
>> int nr_succeeded; /* Normal and large folios migrated successfully, in
>> units of base pages */
>> @@ -1406,6 +1408,95 @@ struct migrate_pages_stats {
>> int nr_thp_split; /* THP split before migrating */
>> };
>> +/*
>> + * Returns the number of hugetlb folios that were not migrated, or an error code
>> + * after NR_MAX_MIGRATE_PAGES_RETRY attempts or if no hugetlb folios are movable
>> + * any more because the list has become empty or no retryable hugetlb folios
>> + * exist any more. It is caller's responsibility to call putback_movable_pages()
>> + * only if ret != 0.
>> + */
>> +static int migrate_hugetlbs(struct list_head *from, new_page_t get_new_page,
>> + free_page_t put_new_page, unsigned long private,
>> + enum migrate_mode mode, int reason,
>> + struct migrate_pages_stats *stats,
>> + struct list_head *ret_folios)
>> +{
>> + int retry = 1;
>> + int nr_failed = 0;
>> + int nr_retry_pages = 0;
>> + int pass = 0;
>> + struct folio *folio, *folio2;
>> + int rc, nr_pages;
>> +
>> + for (pass = 0; pass < NR_MAX_MIGRATE_PAGES_RETRY && retry; pass++) {
>> + retry = 0;
>> + nr_retry_pages = 0;
>> +
>> + list_for_each_entry_safe(folio, folio2, from, lru) {
>> + if (!folio_test_hugetlb(folio))
>> + continue;
>> +
>> + nr_pages = folio_nr_pages(folio);
>> +
>> + cond_resched();
>> +
>> + rc = unmap_and_move_huge_page(get_new_page,
>> + put_new_page, private,
>> + &folio->page, pass > 2, mode,
>> + reason, ret_folios);
>> + /*
>> + * The rules are:
>> + * Success: hugetlb folio will be put back
>> + * -EAGAIN: stay on the from list
>> + * -ENOMEM: stay on the from list
>> + * -ENOSYS: stay on the from list
>> + * Other errno: put on ret_folios list
>> + */
>> + switch(rc) {
>> + case -ENOSYS:
>> + /* Hugetlb migration is unsupported */
>> + nr_failed++;
>> + stats->nr_failed_pages += nr_pages;
>> + list_move_tail(&folio->lru, ret_folios);
>> + break;
>> + case -ENOMEM:
>> + /*
>> + * When memory is low, don't bother to try to migrate
>> + * other folios, just exit.
>> + */
>> + stats->nr_failed_pages += nr_pages + nr_retry_pages;
>> + return -ENOMEM;
>> + case -EAGAIN:
>> + retry++;
>> + nr_retry_pages += nr_pages;
>> + break;
>> + case MIGRATEPAGE_SUCCESS:
>> + stats->nr_succeeded += nr_pages;
>> + break;
>> + default:
>> + /*
>> + * Permanent failure (-EBUSY, etc.):
>> + * unlike -EAGAIN case, the failed folio is
>> + * removed from migration folio list and not
>> + * retried in the next outer loop.
>> + */
>> + nr_failed++;
>> + stats->nr_failed_pages += nr_pages;
>> + break;
>> + }
>> + }
>> + }
>> + /*
>> + * nr_failed is number of hugetlb folios failed to be migrated. After
>> + * NR_MAX_MIGRATE_PAGES_RETRY attempts, give up and count retried hugetlb
>> + * folios as failed.
>> + */
>> + nr_failed += retry;
>> + stats->nr_failed_pages += nr_retry_pages;
>> +
>> + return nr_failed;
>> +}
>> +
>> /*
>> * migrate_pages - migrate the folios specified in a list, to the free folios
>> * supplied as the target for the page migration
>> @@ -1422,10 +1513,10 @@ struct migrate_pages_stats {
>> * @ret_succeeded: Set to the number of folios migrated successfully if
>> * the caller passes a non-NULL pointer.
>> *
>> - * The function returns after 10 attempts or if no folios are movable any more
>> - * because the list has become empty or no retryable folios exist any more.
>> - * It is caller's responsibility to call putback_movable_pages() to return folios
>> - * to the LRU or free list only if ret != 0.
>> + * The function returns after NR_MAX_MIGRATE_PAGES_RETRY attempts or if no folios
>> + * are movable any more because the list has become empty or no retryable folios
>> + * exist any more. It is caller's responsibility to call putback_movable_pages()
>> + * only if ret != 0.
>> *
>> * Returns the number of {normal folio, large folio, hugetlb} that were not
>> * migrated, or an error code. The number of large folio splits will be
>> @@ -1439,7 +1530,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>> int retry = 1;
>> int large_retry = 1;
>> int thp_retry = 1;
>> - int nr_failed = 0;
>> + int nr_failed;
>> int nr_retry_pages = 0;
>> int nr_large_failed = 0;
>> int pass = 0;
>> @@ -1456,38 +1547,45 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>> trace_mm_migrate_pages_start(mode, reason);
>> memset(&stats, 0, sizeof(stats));
>> + rc = migrate_hugetlbs(from, get_new_page, put_new_page, private, mode, reason,
>> + &stats, &ret_folios);
>> + if (rc < 0)
>> + goto out;
> How about continuing to migrate small pages in the -ENOMEM case?
> There may still be free small pages.
Sounds reasonable to me. How about doing that on top of this series?
Would you be interested in doing that?
Best Regards,
Huang, Ying
Thread overview: 13+ messages
2023-01-16 6:30 [PATCH -v3 0/9] migrate_pages(): batch TLB flushing Huang Ying
2023-01-16 6:30 ` [PATCH -v3 1/9] migrate_pages: organize stats with struct migrate_pages_stats Huang Ying
2023-01-16 6:30 ` [PATCH -v3 2/9] migrate_pages: separate hugetlb folios migration Huang Ying
2023-01-28 9:29 ` Chen Wandun
2023-01-29 0:36 ` Huang, Ying [this message]
2023-01-29 1:47 ` Chen Wandun
2023-01-16 6:30 ` [PATCH -v3 3/9] migrate_pages: restrict number of pages to migrate in batch Huang Ying
2023-01-16 6:30 ` [PATCH -v3 4/9] migrate_pages: split unmap_and_move() to _unmap() and _move() Huang Ying
2023-01-16 6:30 ` [PATCH -v3 5/9] migrate_pages: batch _unmap and _move Huang Ying
2023-01-16 6:30 ` [PATCH -v3 6/9] migrate_pages: move migrate_folio_unmap() Huang Ying
2023-01-16 6:30 ` [PATCH -v3 7/9] migrate_pages: share more code between _unmap and _move Huang Ying
2023-01-16 6:30 ` [PATCH -v3 8/9] migrate_pages: batch flushing TLB Huang Ying
2023-01-16 6:30 ` [PATCH -v3 9/9] migrate_pages: move THP/hugetlb migration support check to simplify code Huang Ying