linux-mm.kvack.org archive mirror
From: "Huang, Ying" <ying.huang@intel.com>
To: Alistair Popple <apopple@nvidia.com>
Cc: Peter Xu <peterx@redhat.com>,  <linux-mm@kvack.org>,
	<akpm@linux-foundation.org>,  Nadav Amit <nadav.amit@gmail.com>,
	 huang ying <huang.ying.caritas@gmail.com>,
	 LKML <linux-kernel@vger.kernel.org>,
	"Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>,
	 Felix Kuehling <Felix.Kuehling@amd.com>,
	 "Jason Gunthorpe" <jgg@nvidia.com>,
	 John Hubbard <jhubbard@nvidia.com>,
	 "David Hildenbrand" <david@redhat.com>,
	Ralph Campbell <rcampbell@nvidia.com>,
	 Matthew Wilcox <willy@infradead.org>,
	 Karol Herbst <kherbst@redhat.com>,
	 Lyude Paul <lyude@redhat.com>,  Ben Skeggs <bskeggs@redhat.com>,
	 "Logan Gunthorpe" <logang@deltatee.com>,  <paulus@ozlabs.org>,
	<linuxppc-dev@lists.ozlabs.org>,  <stable@vger.kernel.org>
Subject: Re: [PATCH v3 2/3] mm/migrate_device.c: Copy pte dirty bit to page
Date: Fri, 26 Aug 2022 09:14:31 +0800
Message-ID: <87bks73ix4.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: <8735dj7qwb.fsf@nvdebian.thelocal> (Alistair Popple's message of "Fri, 26 Aug 2022 11:02:58 +1000")

Alistair Popple <apopple@nvidia.com> writes:

> Peter Xu <peterx@redhat.com> writes:
>
>> On Fri, Aug 26, 2022 at 08:21:44AM +1000, Alistair Popple wrote:
>>>
>>> Peter Xu <peterx@redhat.com> writes:
>>>
>>> > On Wed, Aug 24, 2022 at 01:03:38PM +1000, Alistair Popple wrote:
>>> >> migrate_vma_setup() has a fast path in migrate_vma_collect_pmd() that
>>> >> installs migration entries directly if it can lock the migrating page.
>>> >> When removing a dirty pte, the dirty bit is supposed to be carried
>>> >> over to the underlying page to prevent it from being lost.
>>> >>
>>> >> Currently migrate_vma_*() can only be used for private anonymous
>>> >> mappings. That means loss of the dirty bit usually doesn't result in
>>> >> data loss, because these pages are typically not file-backed. However,
>>> >> pages may be backed by swap storage, which can result in data loss if
>>> >> an attempt is made to migrate a dirty page that doesn't yet have the
>>> >> PageDirty flag set.
>>> >>
>>> >> In this case migration will fail due to unexpected references, but the
>>> >> dirty pte bit will be lost. If the page is subsequently reclaimed, data
>>> >> won't be written back to swap storage because it is considered uptodate,
>>> >> resulting in data loss if the page is later accessed.
>>> >>
>>> >> Prevent this by copying the dirty bit to the page when removing the pte
>>> >> to match what try_to_migrate_one() does.
>>> >>
>>> >> Signed-off-by: Alistair Popple <apopple@nvidia.com>
>>> >> Acked-by: Peter Xu <peterx@redhat.com>
>>> >> Reported-by: Huang Ying <ying.huang@intel.com>
>>> >> Fixes: 8c3328f1f36a ("mm/migrate: migrate_vma() unmap page from vma while collecting pages")
>>> >> Cc: stable@vger.kernel.org
>>> >>
>>> >> ---
>>> >>
>>> >> Changes for v3:
>>> >>
>>> >>  - Defer TLB flushing
>>> >>  - Split a TLB flushing fix into a separate change.
>>> >>
>>> >> Changes for v2:
>>> >>
>>> >>  - Fixed up Reported-by tag.
>>> >>  - Added Peter's Acked-by.
>>> >>  - Atomically read and clear the pte to prevent the dirty bit getting
>>> >>    set after reading it.
>>> >>  - Added fixes tag
>>> >> ---
>>> >>  mm/migrate_device.c |  9 +++++++--
>>> >>  1 file changed, 7 insertions(+), 2 deletions(-)
>>> >>
>>> >> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
>>> >> index 6a5ef9f..51d9afa 100644
>>> >> --- a/mm/migrate_device.c
>>> >> +++ b/mm/migrate_device.c
>>> >> @@ -7,6 +7,7 @@
>>> >>  #include <linux/export.h>
>>> >>  #include <linux/memremap.h>
>>> >>  #include <linux/migrate.h>
>>> >> +#include <linux/mm.h>
>>> >>  #include <linux/mm_inline.h>
>>> >>  #include <linux/mmu_notifier.h>
>>> >>  #include <linux/oom.h>
>>> >> @@ -196,7 +197,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>>> >>  			anon_exclusive = PageAnon(page) && PageAnonExclusive(page);
>>> >>  			if (anon_exclusive) {
>>> >>  				flush_cache_page(vma, addr, pte_pfn(*ptep));
>>> >> -				ptep_clear_flush(vma, addr, ptep);
>>> >> +				pte = ptep_clear_flush(vma, addr, ptep);
>>> >>
>>> >>  				if (page_try_share_anon_rmap(page)) {
>>> >>  					set_pte_at(mm, addr, ptep, pte);
>>> >> @@ -206,11 +207,15 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>>> >>  					goto next;
>>> >>  				}
>>> >>  			} else {
>>> >> -				ptep_get_and_clear(mm, addr, ptep);
>>> >> +				pte = ptep_get_and_clear(mm, addr, ptep);
>>> >>  			}
>>> >
>>> > I remember that in v2 both flush_cache_page() and ptep_get_and_clear() were
>>> > moved above the condition check so they're called unconditionally.  Could
>>> > you explain the rationale for changing it back (since I think v2 was the
>>> > correct approach)?
>>>
>>> Mainly because I agree with your original comments that it would be
>>> better to keep the batching of TLB flushing if possible. After the
>>> discussion I don't think there are any issues with HW pte dirty bits
>>> here. There are already other cases where HW needs to get that right
>>> anyway (e.g. zap_pte_range).
>>
>> Yes, TLB batching was kept, thanks for doing it that way.  Though if we
>> only apply patch 1 we'll have both ptep_clear_flush() and the batched
>> flush, which seems redundant.
>>
>>>
>>> > The other question is if we want to split the patch, would it be better to
>>> > move the tlb changes to patch 1, and leave the dirty bit fix in patch 2?
>>>
>>> Isn't that already the case? Patch 1 moves the TLB flush before the PTL
>>> as suggested, patch 2 atomically copies the dirty bit without changing
>>> any TLB flushing.
>>
>> IMHO it's cleaner to have patch 1 fix the batched flush, replace
>> ptep_clear_flush() with ptep_get_and_clear(), and update the pte properly.
>
> Which ptep_clear_flush() are you referring to? This one?
>
> 			if (anon_exclusive) {
> 				flush_cache_page(vma, addr, pte_pfn(*ptep));
> 				ptep_clear_flush(vma, addr, ptep);
>
> My understanding is that we need to do a flush for anon_exclusive.
>
>> No strong opinions on the layout, but I still think we should drop the
>> redundant ptep_clear_flush() above and meanwhile add flush_cache_page()
>> properly for the !exclusive case too.
>
> Good point, we need flush_cache_page() for !exclusive. Will add.
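
A minimal sketch of what that could look like, hoisting flush_cache_page()
above the anon_exclusive check so both branches flush the cache before the
pte is cleared (illustrative only, built from the code quoted above, not the
actual follow-up patch):

	/*
	 * Sketch only: flush the cache for both the exclusive and
	 * !exclusive cases before the pte is cleared.
	 */
	flush_cache_page(vma, addr, pte_pfn(*ptep));
	anon_exclusive = PageAnon(page) && PageAnonExclusive(page);
	if (anon_exclusive) {
		pte = ptep_clear_flush(vma, addr, ptep);

		if (page_try_share_anon_rmap(page)) {
			set_pte_at(mm, addr, ptep, pte);
			/* ... unlock, put_page(), goto next ... */
		}
	} else {
		pte = ptep_get_and_clear(mm, addr, ptep);
	}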

That can be in another patch.  For the patch itself, it looks good to
me.  Feel free to add,

Reviewed-by: "Huang, Ying" <ying.huang@intel.com>

Best Regards,
Huang, Ying
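
The dirty-bit copy itself is not visible in the hunk quoted above. A minimal
sketch of the pattern the commit message describes (mirroring what
try_to_migrate_one() does); the helpers used here are an assumption for
illustration, not the literal content of the patch:

	/*
	 * Sketch only: once the pte has been cleared, propagate a
	 * hardware dirty bit to the page so that reclaim still writes
	 * the data back to swap.
	 */
	if (pte_dirty(pte))
		folio_mark_dirty(page_folio(page));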


Thread overview: 20+ messages
2022-08-24  3:03 [PATCH v3 1/3] mm/migrate_device.c: Flush TLB while holding PTL Alistair Popple
2022-08-24  3:03 ` [PATCH v3 2/3] mm/migrate_device.c: Copy pte dirty bit to page Alistair Popple
2022-08-24 15:39   ` Peter Xu
2022-08-25 22:21     ` Alistair Popple
2022-08-25 23:27       ` Peter Xu
2022-08-26  1:02         ` Alistair Popple
2022-08-26  1:14           ` Huang, Ying [this message]
2022-08-26 14:32           ` Peter Xu
2022-08-26 14:47             ` David Hildenbrand
2022-08-26 15:55               ` Peter Xu
2022-08-26 16:46                 ` David Hildenbrand
2022-08-26 21:37                   ` Peter Xu
2022-08-26 22:19                     ` David Hildenbrand
2022-08-24  3:03 ` [PATCH v3 3/3] selftests/hmm-tests: Add test for dirty bits Alistair Popple
2022-08-24  8:21 ` [PATCH v3 1/3] mm/migrate_device.c: Flush TLB while holding PTL David Hildenbrand
2022-08-24 12:26   ` Alistair Popple
2022-08-24 12:35     ` David Hildenbrand
2022-08-25  1:36 ` Huang, Ying
2022-08-25 22:35   ` Alistair Popple
2022-08-26  0:56     ` Huang, Ying
