From: Muhammad Usama Anjum <usama.anjum@collabora.com>
To: Peter Xu <peterx@redhat.com>
Cc: Muhammad Usama Anjum <usama.anjum@collabora.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Andrea Arcangeli <aarcange@redhat.com>,
Andrew Morton <akpm@linux-foundation.org>,
Axel Rasmussen <axelrasmussen@google.com>,
Mike Rapoport <rppt@linux.vnet.ibm.com>,
Nadav Amit <nadav.amit@gmail.com>,
linux-stable <stable@vger.kernel.org>,
David Hildenbrand <david@redhat.com>
Subject: Re: [PATCH] mm/hugetlb: Fix uffd wr-protection for CoW optimization path
Date: Fri, 24 Mar 2023 11:32:27 +0500 [thread overview]
Message-ID: <a9277ba3-f963-4fe3-a06f-d76e5deccb3b@collabora.com> (raw)
In-Reply-To: <ZBzOqwF2wrHgBVZb@x1n>
On 3/24/23 3:11 AM, Peter Xu wrote:
> On Thu, Mar 23, 2023 at 08:33:07PM +0500, Muhammad Usama Anjum wrote:
>> Hi Peter,
>>
>> Sorry for the late reply.
>>
>> On 3/22/23 12:50 AM, Peter Xu wrote:
>>> On Tue, Mar 21, 2023 at 08:36:35PM +0100, David Hildenbrand wrote:
>>>> On 21.03.23 20:18, Peter Xu wrote:
>>>>> This patch fixes an issue where a hugetlb uffd-wr-protected mapping can
>>>>> become writable even with the uffd-wp bit set. It only happens when all of
>>>>> these conditions are met: (1) hugetlb memory, (2) private mapping, (3) the
>>>>> original mapping was missing, then (4) it was wr-protected (IOW, a pte
>>>>> marker was installed). A write to the page then triggers the bug.
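
For illustration only, a minimal hypothetical sketch of the trigger sequence
described above could look like the following C program. This is not the
reproducer used in this thread: the 2MB hugepage size is an assumption, error
handling is omitted, and no uffd monitor thread (nor the out-of-tree async
mode mentioned later) is included, so it only sets up the kernel-side trigger.

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
        size_t hpage_size = 2UL << 20;  /* assumption: 2MB default hugepages */
        int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);

        struct uffdio_api api = {
                .api = UFFD_API,
                .features = UFFD_FEATURE_WP_HUGETLBFS_SHMEM,
        };
        ioctl(uffd, UFFDIO_API, &api);

        /* (1) hugetlb memory, (2) private mapping */
        char *mem = mmap(NULL, hpage_size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

        struct uffdio_register reg = {
                .range = { .start = (unsigned long)mem, .len = hpage_size },
                .mode = UFFDIO_REGISTER_MODE_WP,
        };
        ioctl(uffd, UFFDIO_REGISTER, &reg);

        /* (3)+(4) wr-protect while the page is still missing, so a pte
         * marker is installed instead of a present pte. */
        struct uffdio_writeprotect wp = {
                .range = { .start = (unsigned long)mem, .len = hpage_size },
                .mode = UFFDIO_WRITEPROTECT_MODE_WP,
        };
        ioctl(uffd, UFFDIO_WRITEPROTECT, &wp);

        /* Write to trigger: with the bug, the CoW optimization path can map
         * the new page writable and silently drop the protection; with the
         * fix it should instead raise a uffd-wp fault (and block here, since
         * no monitor thread is running). */
        mem[0] = 1;

        return 0;
}
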
>>>>>
>>>>> The userfaultfd-wp trap for hugetlb was implemented in hugetlb_fault(),
>>>>> before hugetlb_wp() is even reached, to avoid taking locks that userfault
>>>>> won't need. However, there is one CoW optimization path for a missing
>>>>> hugetlb page that can trigger hugetlb_wp() from inside hugetlb_no_page(),
>>>>> and that path can bypass the userfaultfd-wp trap.
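
For context, the optimization path in question is the write-fault shortcut
near the end of hugetlb_no_page(); paraphrased rather than quoted verbatim
(argument names may differ across kernel versions), it is roughly:

        /*
         * Paraphrased: after the missing hugetlb page has been allocated and
         * mapped, do the CoW right away instead of taking a second fault.
         * This is the call into hugetlb_wp() that does not pass through the
         * uffd-wp trap in hugetlb_fault().
         */
        if ((flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED))
                ret = hugetlb_wp(mm, vma, address, ptep, flags, folio, ptl);
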
>>>>>
>>>>> A few ways to resolve this:
>>>>>
>>>>> (1) Skip the CoW optimization for hugetlb private mappings, considering
>>>>> that private hugetlb mappings should be very rare, so the optimization may
>>>>> not really help major workloads. At worst, we only skip the optimization
>>>>> when userfaultfd_wp(vma)==true, because uffd-wp needs another fault
>>>>> anyway.
>>>>>
>>>>> (2) Move the userfaultfd-wp handling for hugetlb from hugetlb_fault()
>>>>> into hugetlb_wp(). The major con is that a bunch of locks have been taken
>>>>> by the time hugetlb_wp() is called, which would make the changeset
>>>>> unnecessarily complicated due to the lock operations.
>>>>>
>>>>> (3) Carry over the uffd-wp bit in hugetlb_wp(), so uffd-wp privately
>>>>> mapped pages will need to fault again.
>>>>>
>>>>> This patch chose option (3), which has the minimum changeset (simplest
>>>>> to backport) and also makes sure hugetlb_wp() itself will always be safe
>>>>> with uffd-wp ptes even if called from elsewhere in the future.
>>>>>
>>>>> This patch will be needed for v5.19+, hence copying stable.
>>>>>
>>>>> Reported-by: Muhammad Usama Anjum <usama.anjum@collabora.com>
>>>>> Cc: linux-stable <stable@vger.kernel.org>
>>>>> Fixes: 166f3ecc0daf ("mm/hugetlb: hook page faults for uffd write protection")
>>>>> Signed-off-by: Peter Xu <peterx@redhat.com>
>>>>> ---
>>>>> mm/hugetlb.c | 8 +++++---
>>>>> 1 file changed, 5 insertions(+), 3 deletions(-)
>>>>>
>>>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>>>>> index 8bfd07f4c143..22337b191eae 100644
>>>>> --- a/mm/hugetlb.c
>>>>> +++ b/mm/hugetlb.c
>>>>> @@ -5478,7 +5478,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
>>>>>  			 struct folio *pagecache_folio, spinlock_t *ptl)
>>>>>  {
>>>>>  	const bool unshare = flags & FAULT_FLAG_UNSHARE;
>>>>> -	pte_t pte;
>>>>> +	pte_t pte, newpte;
>>>>>  	struct hstate *h = hstate_vma(vma);
>>>>>  	struct page *old_page;
>>>>>  	struct folio *new_folio;
>>>>> @@ -5622,8 +5622,10 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
>>>>>  		mmu_notifier_invalidate_range(mm, range.start, range.end);
>>>>>  		page_remove_rmap(old_page, vma, true);
>>>>>  		hugepage_add_new_anon_rmap(new_folio, vma, haddr);
>>>>> -		set_huge_pte_at(mm, haddr, ptep,
>>>>> -				make_huge_pte(vma, &new_folio->page, !unshare));
>>>>> +		newpte = make_huge_pte(vma, &new_folio->page, !unshare);
>>>>> +		if (huge_pte_uffd_wp(pte))
>>>>> +			newpte = huge_pte_mkuffd_wp(newpte);
>>>>> +		set_huge_pte_at(mm, haddr, ptep, newpte);
>>>>>  		folio_set_hugetlb_migratable(new_folio);
>>>>>  		/* Make the old page be freed below */
>>>>>  		new_folio = page_folio(old_page);
>>>>
>>>> Looks correct to me. Do we have a reproducer?
>>>
>>> I used a reproducer for the async mode that I wrote (patch 2 attached
>>> there; it needs to be changed to VM_PRIVATE):
>>>
>>> https://lore.kernel.org/all/ZBNr4nohj%2FTw4Zhw@x1n/
>>>
>>> I don't think the kernel kselftests can trigger it, because we don't do
>>> strict checks on the uffd-wp bits yet. I've already started looking into
>>> cleaning up the test cases, and I do plan to add new tests to cover this.
>>>
>>> Meanwhile, let's also wait for an ack from Muhammad. Even though the async
>>> mode is not part of the code base, it'll be a good test for verifying that
>>> every single uffd-wp bit is set or cleared as expected.
>> I've tested by applying this patch, but the bug is still there. Just like
>> Peter mentioned, we are using our in-progress pagemap_scan ioctl and
>> userfaultfd wp async patches to reproduce it.
>>
>> To reproduce, please build the kernel and run the pagemap_ioctl test in mm
>> from the hugetlb_mem_reproducer branch:
>> https://gitlab.collabora.com/usama.anjum/linux-mainline/-/tree/hugetlb_mem_reproducer
>>
>> In case you have any questions on how to reproduce, please let me know.
>> I'll try to provide a cleaner alternative.
>
> Hmm, I think my current fix is incomplete, if not wrong. The root cause
> analysis should still be valid; however, I overlooked another path:
> 	if (page_mapcount(old_page) == 1 && PageAnon(old_page)) {
> 		if (!PageAnonExclusive(old_page))
> 			page_move_anon_rmap(old_page, vma);
> 		if (likely(!unshare))
> 			set_huge_ptep_writable(vma, haddr, ptep);
> 
> 		delayacct_wpcopy_end();
> 		return 0;
> 	}
>
> We should bail out early in this path, and it'll be even easier to always
> bail out of hugetlb_wp() as long as uffd-wp is detected, because userfault
> should always be handled before any decision to CoW.
>
> v2 attached.. Please give it another shot.
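
The attached v2 is not included in the archived message body. A rough sketch
of the early bail-out Peter describes, assuming the check sits near the top of
hugetlb_wp() after pte has been read, might look like:

        /*
         * Sketch only: never handle CoW for a uffd-wp-protected pte.  The
         * write protection should be resolved through userfaultfd first,
         * so bail out and let the fault be trapped or retried.
         */
        if (!unshare && huge_pte_uffd_wp(pte))
                return 0;
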
The attached v2 works. Please add:
Tested-by: Muhammad Usama Anjum <usama.anjum@collabora.com>
>
> Thanks,
>
--
BR,
Muhammad Usama Anjum