From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Hugh Dickins <hughd@google.com>,
"Xu, Pengfei" <pengfei.xu@intel.com>,
Christoph Hellwig <hch@lst.de>, Stefan Roesch <shr@devkernel.io>,
Tejun Heo <tj@kernel.org>, Xin Hao <xhao@linux.alibaba.com>,
Zi Yan <ziy@nvidia.com>, Yang Shi <shy828301@gmail.com>,
Matthew Wilcox <willy@infradead.org>,
Mike Kravetz <mike.kravetz@oracle.com>
Subject: Re: [PATCH 3/3] migrate_pages: try migrate in batch asynchronously firstly
Date: Wed, 1 Mar 2023 19:03:20 +0800
Message-ID: <b43a37f0-a869-7ef5-0a65-2d581ca031a3@linux.alibaba.com>
In-Reply-To: <87zg8x9epg.fsf@yhuang6-desk2.ccr.corp.intel.com>
On 3/1/2023 2:18 PM, Huang, Ying wrote:
> Baolin Wang <baolin.wang@linux.alibaba.com> writes:
>
>> On 2/24/2023 10:11 PM, Huang Ying wrote:
>>> When we have locked more than one folio, we cannot wait for a lock or
>>> bit (e.g., page lock, buffer head lock, writeback bit) synchronously;
>>> otherwise a deadlock may be triggered. This makes it hard to batch
>>> synchronous migration directly.
>>>
>>> This patch re-enables batched synchronous migration by first trying to
>>> migrate in batch asynchronously. Any folios that fail to migrate
>>> asynchronously are then migrated synchronously, one by one.
>>>
>>> Tests show that this effectively restores the batched TLB flushing
>>> performance for synchronous migration.
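
A minimal sketch of the deadlock being avoided, with two hypothetical
folios A and B and the locking heavily simplified (not the actual code
paths):

	/*
	 * Two tasks batch-migrating overlapping folios in synchronous
	 * mode, each sleeping on a lock the other holds -- the classic
	 * ABBA deadlock:
	 *
	 *   Task 1              Task 2
	 *   folio_lock(A);      folio_lock(B);
	 *   folio_lock(B);      folio_lock(A);   <- both block forever
	 *
	 * In MIGRATE_ASYNC mode the migration code uses folio_trylock()
	 * instead, so a contended lock fails that folio's migration
	 * rather than sleeping while other folios remain locked.
	 */
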
>>> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
>>> Cc: Hugh Dickins <hughd@google.com>
>>> Cc: "Xu, Pengfei" <pengfei.xu@intel.com>
>>> Cc: Christoph Hellwig <hch@lst.de>
>>> Cc: Stefan Roesch <shr@devkernel.io>
>>> Cc: Tejun Heo <tj@kernel.org>
>>> Cc: Xin Hao <xhao@linux.alibaba.com>
>>> Cc: Zi Yan <ziy@nvidia.com>
>>> Cc: Yang Shi <shy828301@gmail.com>
>>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>>> Cc: Matthew Wilcox <willy@infradead.org>
>>> Cc: Mike Kravetz <mike.kravetz@oracle.com>
>>> ---
>>> mm/migrate.c | 65 ++++++++++++++++++++++++++++++++++++++++++++--------
>>> 1 file changed, 55 insertions(+), 10 deletions(-)
>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>> index 91198b487e49..c17ce5ee8d92 100644
>>> --- a/mm/migrate.c
>>> +++ b/mm/migrate.c
>>> @@ -1843,6 +1843,51 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
>>> return rc;
>>> }
>>> +static int migrate_pages_sync(struct list_head *from, new_page_t get_new_page,
>>> + free_page_t put_new_page, unsigned long private,
>>> + enum migrate_mode mode, int reason, struct list_head *ret_folios,
>>> + struct list_head *split_folios, struct migrate_pages_stats *stats)
>>> +{
>>> + int rc, nr_failed = 0;
>>> + LIST_HEAD(folios);
>>> + struct migrate_pages_stats astats;
>>> +
>>> + memset(&astats, 0, sizeof(astats));
>>> + /* Try to migrate in batch with MIGRATE_ASYNC mode firstly */
>>> + rc = migrate_pages_batch(from, get_new_page, put_new_page, private, MIGRATE_ASYNC,
>>> + reason, &folios, split_folios, &astats,
>>> + NR_MAX_MIGRATE_PAGES_RETRY);
>>> + stats->nr_succeeded += astats.nr_succeeded;
>>> + stats->nr_thp_succeeded += astats.nr_thp_succeeded;
>>> + stats->nr_thp_split += astats.nr_thp_split;
>>> + if (rc < 0) {
>>> + stats->nr_failed_pages += astats.nr_failed_pages;
>>> + stats->nr_thp_failed += astats.nr_thp_failed;
>>> + list_splice_tail(&folios, ret_folios);
>>> + return rc;
>>> + }
>>> + stats->nr_thp_failed += astats.nr_thp_split;
>>> + nr_failed += astats.nr_thp_split;
>>> + /*
>>> + * Fall back to migrate all failed folios one by one synchronously. All
>>> + * failed folios except split THPs will be retried, so their failure
>>> + * isn't counted
>>> + */
>>> + list_splice_tail_init(&folios, from);
>>> + while (!list_empty(from)) {
>>> + list_move(from->next, &folios);
>>> + rc = migrate_pages_batch(&folios, get_new_page, put_new_page,
>>> + private, mode, reason, ret_folios,
>>> + split_folios, stats, NR_MAX_MIGRATE_PAGES_RETRY);
>>> + list_splice_tail_init(&folios, ret_folios);
>>> + if (rc < 0)
>>> + return rc;
>>> + nr_failed += rc;
>>> + }
>>> +
>>> + return nr_failed;
>>> +}
>>> +
>>> /*
>>> * migrate_pages - migrate the folios specified in a list, to the free folios
>>> * supplied as the target for the page migration
>>> @@ -1874,7 +1919,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>> enum migrate_mode mode, int reason, unsigned int *ret_succeeded)
>>> {
>>> int rc, rc_gather;
>>> - int nr_pages, batch;
>>> + int nr_pages;
>>> struct folio *folio, *folio2;
>>> LIST_HEAD(folios);
>>> LIST_HEAD(ret_folios);
>>> @@ -1890,10 +1935,6 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>> if (rc_gather < 0)
>>> goto out;
>>> - if (mode == MIGRATE_ASYNC)
>>> - batch = NR_MAX_BATCHED_MIGRATION;
>>> - else
>>> - batch = 1;
>>> again:
>>> nr_pages = 0;
>>> list_for_each_entry_safe(folio, folio2, from, lru) {
>>> @@ -1904,16 +1945,20 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
>>> }
>>> nr_pages += folio_nr_pages(folio);
>>> - if (nr_pages >= batch)
>>> + if (nr_pages >= NR_MAX_BATCHED_MIGRATION)
>>> break;
>>> }
>>> - if (nr_pages >= batch)
>>> + if (nr_pages >= NR_MAX_BATCHED_MIGRATION)
>>> list_cut_before(&folios, from, &folio2->lru);
>>> else
>>> list_splice_init(from, &folios);
>>> - rc = migrate_pages_batch(&folios, get_new_page, put_new_page, private,
>>> - mode, reason, &ret_folios, &split_folios, &stats,
>>> - NR_MAX_MIGRATE_PAGES_RETRY);
>>> + if (mode == MIGRATE_ASYNC)
>>> + rc = migrate_pages_batch(&folios, get_new_page, put_new_page, private,
>>> + mode, reason, &ret_folios, &split_folios, &stats,
>>> + NR_MAX_MIGRATE_PAGES_RETRY);
>>> + else
>>> + rc = migrate_pages_sync(&folios, get_new_page, put_new_page, private,
>>> + mode, reason, &ret_folios, &split_folios, &stats);
>>
>> For split folios, wouldn't it also be reasonable to use
>> migrate_pages_sync() instead of always using the fixed MIGRATE_ASYNC
>> mode?
>
> For split folios, we only try to migrate them with minimal effort.
> Previously, we decreased the retry number from 10 to 1. Now, I think
> it's reasonable to change the migration mode to MIGRATE_ASYNC to
> reduce latency. They have been counted as failures anyway.
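
In other words, the split-folio pass then boils down to roughly the
following (a simplified sketch of the call in migrate_pages(), not the
exact hunk):

	/*
	 * Split folios of fail-to-migrate THPs get one light-weight pass:
	 * MIGRATE_ASYNC mode and a single retry, since the original large
	 * folio has already been counted as a failure.
	 */
	migrate_pages_batch(&split_folios, get_new_page, put_new_page,
			    private, MIGRATE_ASYNC, reason,
			    &ret_folios, NULL, &stats, 1);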

Sounds reasonable, thanks for the explanation. Please feel free to add:

Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>