From: "Huang, Ying" <ying.huang@intel.com>
To: Hugh Dickins <hughd@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Xu,
Pengfei" <pengfei.xu@intel.com>, Christoph Hellwig <hch@lst.de>,
Stefan Roesch <shr@devkernel.io>, Tejun Heo <tj@kernel.org>,
Xin Hao <xhao@linux.alibaba.com>, Zi Yan <ziy@nvidia.com>,
Yang Shi <shy828301@gmail.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
Matthew Wilcox <willy@infradead.org>,
Mike Kravetz <mike.kravetz@oracle.com>
Subject: Re: [PATCH 1/3] migrate_pages: fix deadlock in batched migration
Date: Wed, 01 Mar 2023 09:17:50 +0800 [thread overview]
Message-ID: <878rghb77l.fsf@yhuang6-desk2.ccr.corp.intel.com> (raw)
In-Reply-To: <da5ba36a-dba-f44-926a-c5c912148b@google.com> (Hugh Dickins's message of "Tue, 28 Feb 2023 13:07:41 -0800 (PST)")

Hugh Dickins <hughd@google.com> writes:

> On Tue, 28 Feb 2023, Huang, Ying wrote:
>> Hugh Dickins <hughd@google.com> writes:
>> > On Fri, 24 Feb 2023, Huang Ying wrote:
>> >> @@ -1247,7 +1236,7 @@ static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page
>> >> /* Establish migration ptes */
>> >> VM_BUG_ON_FOLIO(folio_test_anon(src) &&
>> >> !folio_test_ksm(src) && !anon_vma, src);
>> >> - try_to_migrate(src, TTU_BATCH_FLUSH);
>> >> + try_to_migrate(src, mode == MIGRATE_ASYNC ? TTU_BATCH_FLUSH : 0);
>> >
>> > Why that change, I wonder? The TTU_BATCH_FLUSH can still be useful for
>> > gathering multiple cross-CPU TLB flushes into one, even when it's only
>> > a single page in the batch.
>>
>> Firstly, I had thought that there would be no opportunity to batch the
>> TLB flushing now. But as you pointed out, batching is still possible
>> when mapcount > 1. Secondly, without TTU_BATCH_FLUSH, we may flush the
>> TLB for just the single page (with the invlpg instruction); otherwise,
>> we will flush the TLB for all pages. The former is faster and does not
>> disturb the other TLB entries of the process.
>>
>> Or we use TTU_BATCH_FLUSH only if mapcount > 1?
>
> I had not thought at all of the "invlpg" advantage (which I imagine
> some architectures other than x86 share) of not delaying the TLB flush
> of a single PTE.
>
> Frankly, I just don't have any feeling for the tradeoff between
> multiple remote invlpgs versus one remote batched TLB flush of all.
> Which presumably depends on number of CPUs, size of TLBs, etc etc.
>
> Your "mapcount > 1" idea might be good, but I cannot tell: I'd say for
> now that there's no reason to change your "mode == MIGRATE_ASYNC ?
> TTU_BATCH_FLUSH : 0" without much more thought, or a quick insight
> from someone else. Some other time maybe.

Yes. I think that this is reasonable. We can revisit this later.
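
For the record, here is an untested sketch of the "mapcount > 1" variant,
just to note the idea down for later (the folio_mapcount() check and its
placement inside migrate_folio_unmap() are only illustrative, not a
proposal):

	enum ttu_flags ttu_flags = 0;

	/*
	 * Delay the TLB flush (TTU_BATCH_FLUSH) only when batching can
	 * actually help: async batched migration, or a folio mapped more
	 * than once.  Otherwise let try_to_migrate() flush the single
	 * PTE immediately (e.g. with invlpg on x86).
	 */
	if (mode == MIGRATE_ASYNC || folio_mapcount(src) > 1)
		ttu_flags = TTU_BATCH_FLUSH;

	try_to_migrate(src, ttu_flags);
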
Best Regards,
Huang, Ying

Thread overview: 22+ messages
2023-02-24 14:11 [PATCH 0/3] migrate_pages: fix deadlock in batched synchronous migration Huang Ying
2023-02-24 14:11 ` [PATCH 1/3] migrate_pages: fix deadlock in batched migration Huang Ying
2023-02-28 6:13 ` Hugh Dickins
2023-02-28 7:22 ` Huang, Ying
2023-02-28 21:07 ` Hugh Dickins
2023-03-01 1:17 ` Huang, Ying [this message]
2023-02-24 14:11 ` [PATCH 2/3] migrate_pages: move split folios processing out of migrate_pages_batch() Huang Ying
2023-03-01 2:23 ` Baolin Wang
2023-03-01 6:35 ` Huang, Ying
2023-03-01 11:07 ` Baolin Wang
2023-02-24 14:11 ` [PATCH 3/3] migrate_pages: try migrate in batch asynchronously firstly Huang Ying
2023-02-28 6:36 ` Hugh Dickins
2023-02-28 7:45 ` Huang, Ying
2023-02-28 21:22 ` Hugh Dickins
2023-03-01 6:08 ` Huang, Ying
2023-03-01 6:46 ` Hugh Dickins
2023-03-01 7:10 ` Huang, Ying
2023-03-01 3:08 ` Baolin Wang
2023-03-01 6:18 ` Huang, Ying
2023-03-01 11:03 ` Baolin Wang
2023-02-26 4:55 ` [PATCH 0/3] migrate_pages: fix deadlock in batched synchronous migration Andrew Morton
2023-02-27 1:25 ` Huang, Ying