linux-mm.kvack.org archive mirror
From: Barry Song <21cnbao@gmail.com>
To: David Hildenbrand <david@redhat.com>
Cc: Peter Xu <peterx@redhat.com>,
	Suren Baghdasaryan <surenb@google.com>,
	 Lokesh Gidra <lokeshgidra@google.com>,
	Andrea Arcangeli <aarcange@redhat.com>,
	 Andrew Morton <akpm@linux-foundation.org>,
	Linux-MM <linux-mm@kvack.org>,  Kairui Song <ryncsn@gmail.com>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [BUG]userfaultfd_move fails to move a folio when swap-in occurs concurrently with swap-out
Date: Tue, 27 May 2025 16:17:12 +1200	[thread overview]
Message-ID: <CAGsJ_4zOhNBe9b1m1LYaJbFur3TdLma+2EXbc=BhAToDeLfvAg@mail.gmail.com> (raw)
In-Reply-To: <5abe8b0c-2354-4107-9004-ccf86cf90d25@redhat.com>

On Tue, May 27, 2025 at 12:39 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 23.05.25 01:23, Barry Song wrote:
> > Hi All,
>
> Hi!
>
> >
> > I'm encountering another bug that can be easily reproduced using the small
> > program below[1], which performs swap-out and swap-in in parallel.
> >
> > The issue occurs when a folio is being swapped out while it is accessed
> > concurrently. In this case, do_swap_page() handles the access. However,
> > because the folio is under writeback, do_swap_page() completely removes
> > its exclusive attribute.
> >
> > do_swap_page:
> >                 } else if (exclusive && folio_test_writeback(folio) &&
> >                            data_race(si->flags & SWP_STABLE_WRITES)) {
> >                          ...
> >                          exclusive = false;
> >
> > As a result, userfaultfd_move() will return -EBUSY, even though the
> > folio is not shared and is in fact exclusively owned.
> >
> >                          folio = vm_normal_folio(src_vma, src_addr, orig_src_pte);
> >                          if (!folio || !PageAnonExclusive(&folio->page)) {
> >                                  spin_unlock(src_ptl);
> > +                               pr_err("%s %d folio:%lx exclusive:%d swapcache:%d\n",
> > +                                       __func__, __LINE__, folio,
> > +                                       PageAnonExclusive(&folio->page),
> > +                                       folio_test_swapcache(folio));
> >                                  err = -EBUSY;
> >                                  goto out;
> >                          }
> >
> > I understand that shared folios should not be moved. However, in this
> > case, the folio is not shared, yet its exclusive flag is not set.
> >
> > Therefore, I believe PageAnonExclusive is not a reliable indicator of
> > whether a folio is truly exclusive to a process.
>
> It is. The flag *not* being set is not a reliable indicator whether it
> is really shared. ;)
>
> The reason why we have this PAE workaround (dropping the flag) in place
> is because the page must not be written to (SWP_STABLE_WRITES). CoW
> reuse is not possible.
>
> uffd moving that page -- and in that same process setting it writable,
> see move_present_pte()->pte_mkwrite() -- would be very bad.

An alternative approach is to make the folio writable only when we are
reasonably certain it is exclusive; otherwise, it remains read-only. If the
destination is later written to and the folio has become exclusive, it can
be reused directly. If not, a copy-on-write will occur on the destination
address, transparently to userspace. This avoids Lokesh’s userspace-based
strategy, which requires forcing a write to the source address.

>
> >
> > The kernel log output is shown below:
> > [   23.009516] move_pages_pte 1285 folio:fffffdffc01bba40 exclusive:0 swapcache:1
> >
> > I'm still struggling to find a real fix; it seems quite challenging.
>
> PAE tells you that you can immediately write to that page without going
> through CoW. However, here, CoW is required.
>
> > Please let me know if you have any ideas. In any case, it seems
> > userspace should fall back to userfaultfd_copy.
>
> We could try detecting whether the page is now exclusive, to reset PAE.
> That will only be possible after writeback completed, so it adds
> complexity without being able to move the page in all cases (during
> writeback).
>
> Letting userspace deal with that in these rare scenarios is
> significantly easier.

Right, this appears to introduce the least change—essentially none—to the
kernel, while shifting more noise to userspace :-)

>
> --
> Cheers,
>
> David / dhildenb
>

Thanks
Barry



Thread overview: 10+ messages
2025-05-22 23:23 Barry Song
2025-05-22 23:43 ` Lokesh Gidra
2025-05-22 23:53   ` Barry Song
2025-05-23  0:03     ` Lokesh Gidra
2025-05-26 12:39 ` David Hildenbrand
2025-05-27  4:17   ` Barry Song [this message]
2025-05-27  8:37     ` Barry Song
2025-05-27  9:00       ` David Hildenbrand
2025-05-27  9:31         ` Barry Song
2025-05-27 11:06           ` David Hildenbrand
