From: Qi Zheng <zhengqi.arch@bytedance.com>
To: Jann Horn <jannh@google.com>
Cc: david@redhat.com, hughd@google.com, willy@infradead.org,
mgorman@suse.de, muchun.song@linux.dev, vbabka@kernel.org,
akpm@linux-foundation.org, zokeefe@google.com,
rientjes@google.com, peterx@redhat.com, catalin.marinas@arm.com,
linux-mm@kvack.org, linux-kernel@vger.kernel.org, x86@kernel.org
Subject: Re: [PATCH v2 5/7] mm: pgtable: try to reclaim empty PTE page in madvise(MADV_DONTNEED)
Date: Sat, 9 Nov 2024 11:07:56 +0800
Message-ID: <cb59ffb5-b7ce-4ed8-a241-70917166be42@bytedance.com>
In-Reply-To: <CAG48ez3sG_=G6gttsEZnvUE4J-yHEUqyaNQfsdXR-LT-EqY2Yw@mail.gmail.com>
On 2024/11/9 02:04, Jann Horn wrote:
> On Fri, Nov 8, 2024 at 8:13 AM Qi Zheng <zhengqi.arch@bytedance.com> wrote:
>> On 2024/11/8 07:35, Jann Horn wrote:
>>> On Thu, Oct 31, 2024 at 9:14 AM Qi Zheng <zhengqi.arch@bytedance.com> wrote:
>>>> As a first step, this commit aims to synchronously free the empty PTE
>>>> pages in madvise(MADV_DONTNEED) case. We will detect and free empty PTE
>>>> pages in zap_pte_range(), and will add zap_details.reclaim_pt to exclude
>>>> cases other than madvise(MADV_DONTNEED).
>>>>
>>>> Once an empty PTE page is detected, we first try to take the pmd lock
>>>> while still holding the pte lock. If that succeeds, we clear the pmd
>>>> entry directly (fast path). Otherwise, we wait until the pte lock is
>>>> released, then re-take the pmd and pte locks and loop PTRS_PER_PTE
>>>> times checking pte_none() to re-verify that the PTE page is empty
>>>> before freeing it (slow path).
>>>
>>> How does this interact with move_pages_pte()? I am looking at your
>>> series applied on top of next-20241106, and it looks to me like
>>> move_pages_pte() uses pte_offset_map_rw_nolock() and assumes that the
>>> PMD entry can't change. You can clearly see this assumption at the
>>> WARN_ON_ONCE(pmd_none(*dst_pmd)). And if we race the wrong way, I
>>
>> In move_pages_pte(), the following conditions may indeed be triggered:
>>
>> /* Sanity checks before the operation */
>> if (WARN_ON_ONCE(pmd_none(*dst_pmd)) || WARN_ON_ONCE(pmd_none(*src_pmd)) ||
>>     WARN_ON_ONCE(pmd_trans_huge(*dst_pmd)) ||
>>     WARN_ON_ONCE(pmd_trans_huge(*src_pmd))) {
>> 	err = -EINVAL;
>> 	goto out;
>> }
>>
>> But maybe we can just remove the WARN_ON_ONCE(), because...
>>
>>> think for example move_present_pte() can end up moving a present PTE
>>> into a page table that has already been scheduled for RCU freeing.
>>
>> ...this situation cannot happen. Before performing the move operation,
>> a pte_same() check is done after the pte lock is taken, which ensures
>> that the PTE page is stable:
>>
>> CPU 0                                         CPU 1
>>
>>                                               zap_pte_range
>> orig_src_pte = ptep_get(src_pte);
>>                                               pmd_lock
>>                                               pte_lock
>>                                               check if all PTEs are pte_none()
>>                                               --> clear pmd entry
>>                                               unlock pte
>>                                               unlock pmd
>> src_pte_lock
>> pte_same(orig_src_pte, ptep_get(src_pte))
>> --> return false and will skip the move op
>
> Yes, that works for the source PTE. But what about the destination?
>
> Operations on the destination PTE in move_pages_pte() are, when moving
> a normal present source PTE pointing to an order-0 page, and assuming
> that the optimistic folio_trylock(src_folio) and
> anon_vma_trylock_write(src_anon_vma) succeed:
>
> dst_pte = pte_offset_map_rw_nolock(mm, dst_pmd, dst_addr,
> &dummy_pmdval, &dst_ptl)
> [check that dst_pte is non-NULL]
> some racy WARN_ON_ONCE() checks
> spin_lock(dst_ptl);
> orig_dst_pte = ptep_get(dst_pte);
> spin_unlock(dst_ptl);
> [bail if orig_dst_pte isn't none]
> double_pt_lock(dst_ptl, src_ptl)
> [check pte_same(ptep_get(dst_pte), orig_dst_pte)]
>
> and then we're past the point of no return. Note that there is a
> pte_same() check against orig_dst_pte, but orig_dst_pte is
> intentionally pte_none(), so the pte_same() check does not guarantee
> that the destination page table is still linked in.
OK, now I see what you mean. This is indeed a problem. In this case,
it is still necessary to recheck pmd_same() to ensure the stability
of the dst_pte page. Will fix it.
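
Something like the following sketch (illustrative only, the final fix may
differ; it reuses the pmdval that pte_offset_map_rw_nolock() filled in
earlier, here called dummy_pmdval) would recheck the pmd entry under the
pte locks:

	double_pt_lock(dst_ptl, src_ptl);

	if (!pte_same(ptep_get(src_pte), orig_src_pte) ||
	    !pte_same(ptep_get(dst_pte), orig_dst_pte) ||
	    /*
	     * orig_dst_pte is pte_none(), so pte_same() alone cannot
	     * tell whether the dst PTE page was unlinked and scheduled
	     * for RCU freeing in the meantime; recheck the pmd entry
	     * as well.
	     */
	    !pmd_same(dummy_pmdval, pmdp_get_lockless(dst_pmd))) {
		double_pt_unlock(dst_ptl, src_ptl);
		err = -EAGAIN;
		goto out;
	}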
>
>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>> index 002aa4f454fa0..c4a8c18fbcfd7 100644
>>>> --- a/mm/memory.c
>>>> +++ b/mm/memory.c
>>>> @@ -1436,7 +1436,7 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
>>>> static inline bool should_zap_cows(struct zap_details *details)
>>>> {
>>>> /* By default, zap all pages */
>>>> - if (!details)
>>>> + if (!details || details->reclaim_pt)
>>>> return true;
>>>>
>>>> /* Or, we zap COWed pages only if the caller wants to */
>>>
>>> This looks hacky - when we have a "details" object, its ->even_cows
>>> member is supposed to indicate whether COW pages should be zapped. So
>>> please instead set .even_cows=true in madvise_dontneed_single_vma().
>>
>> But details->reclaim_pt should still be set, right? Because we need
>> .reclaim_pt to indicate whether the empty PTE page should be
>> reclaimed.
>
> Yeah, you should set both.
OK.
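
So madvise_dontneed_single_vma() would end up with something like this
(a sketch only; apart from the new .reclaim_pt field, the names follow
the existing mm/madvise.c helper):

	static void madvise_dontneed_single_vma(struct vm_area_struct *vma,
						unsigned long start,
						unsigned long end)
	{
		struct zap_details details = {
			.reclaim_pt = true,	/* try to free empty PTE pages */
			.even_cows = true,	/* MADV_DONTNEED zaps COWed pages too */
		};

		zap_page_range_single(vma, start, end - start, &details);
	}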
Thanks!