From: Barry Song <21cnbao@gmail.com>
To: Ryan Roberts <ryan.roberts@arm.com>
Cc: akpm@linux-foundation.org, andreyknvl@gmail.com,
anshuman.khandual@arm.com, ardb@kernel.org,
catalin.marinas@arm.com, david@redhat.com, dvyukov@google.com,
glider@google.com, james.morse@arm.com, jhubbard@nvidia.com,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
mark.rutland@arm.com, maz@kernel.org, oliver.upton@linux.dev,
ryabinin.a.a@gmail.com, suzuki.poulose@arm.com,
vincenzo.frascino@arm.com, wangkefeng.wang@huawei.com,
will@kernel.org, willy@infradead.org, yuzenghui@huawei.com,
yuzhao@google.com, ziy@nvidia.com
Subject: Re: [PATCH v2 00/14] Transparent Contiguous PTEs for User Mappings
Date: Wed, 29 Nov 2023 08:37:47 +1300
Message-ID: <CAGsJ_4yLNEDtq5iv0BD96LfdbKzB5KqL0xDcuLGSXBKT02tk+g@mail.gmail.com>
In-Reply-To: <3e61d181-5e8d-4103-8dee-e18e493bc125@arm.com>
On Wed, Nov 29, 2023 at 1:08 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> On 28/11/2023 05:49, Barry Song wrote:
> > On Mon, Nov 27, 2023 at 5:15 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
> >>
> >> On 27/11/2023 03:18, Barry Song wrote:
> >>>> Ryan Roberts (14):
> >>>> mm: Batch-copy PTE ranges during fork()
> >>>> arm64/mm: set_pte(): New layer to manage contig bit
> >>>> arm64/mm: set_ptes()/set_pte_at(): New layer to manage contig bit
> >>>> arm64/mm: pte_clear(): New layer to manage contig bit
> >>>> arm64/mm: ptep_get_and_clear(): New layer to manage contig bit
> >>>> arm64/mm: ptep_test_and_clear_young(): New layer to manage contig bit
> >>>> arm64/mm: ptep_clear_flush_young(): New layer to manage contig bit
> >>>> arm64/mm: ptep_set_wrprotect(): New layer to manage contig bit
> >>>> arm64/mm: ptep_set_access_flags(): New layer to manage contig bit
> >>>> arm64/mm: ptep_get(): New layer to manage contig bit
> >>>> arm64/mm: Split __flush_tlb_range() to elide trailing DSB
> >>>> arm64/mm: Wire up PTE_CONT for user mappings
> >>>> arm64/mm: Implement ptep_set_wrprotects() to optimize fork()
> >>>> arm64/mm: Add ptep_get_and_clear_full() to optimize process teardown
> >>>
> >>> Hi Ryan,
> >>> Not quite sure if I missed something - are we splitting/unfolding CONTPTEs
> >>> in the below cases?
> >>
> >> The general idea is that the core-mm sets the individual ptes (one at a time if
> >> it likes with set_pte_at(), or in a block with set_ptes()), modifies their
> >> permissions (ptep_set_wrprotect(), ptep_set_access_flags()) and clears them
> >> (ptep_clear(), etc). This is exactly the same interface as previously.
> >>
> >> BUT, the arm64 implementation of those interfaces will now detect when a set of
> >> adjacent PTEs (a contpte block - so 16 naturally aligned entries when using 4K
> >> base pages) are all appropriate for having the CONT_PTE bit set; in this case
> >> the block is "folded". And it will detect when the first PTE in the block
> >> changes such that the CONT_PTE bit must now be unset ("unfolded"). One of the
> >> requirements for folding a contpte block is that all the pages must belong to
> >> the *same* folio (that means it's safe to only track access/dirty for the contpte
> >> block as a whole rather than for each individual pte).
> >>
> >> (there are a couple of optimizations that make the reality slightly more
> >> complicated than what I've just explained, but you get the idea).
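[Editorial note: to make the folding rule described above concrete, here is a simplified, self-contained C sketch. This is NOT the actual arm64 contpte code - the pte, folio and pfn machinery is modelled with plain integers, and only the alignment/contiguity part of the eligibility test is shown.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define CONT_PTES 16 /* 16 x 4K base pages -> one 64K contpte block */

/* Toy PTE: just a physical frame number and a validity flag. */
struct toy_pte {
	uint64_t pfn;
	bool valid;
};

/*
 * Model of the fold eligibility test: all CONT_PTES entries of the
 * naturally aligned block must be valid and must map physically
 * contiguous, naturally aligned frames. (The real implementation
 * additionally requires all pages to belong to the same folio, so
 * that access/dirty can be tracked per-block.)
 */
static bool can_fold(const struct toy_pte *block_start)
{
	uint64_t first_pfn = block_start[0].pfn;
	int i;

	/* The physical range must be naturally aligned too. */
	if (first_pfn % CONT_PTES != 0)
		return false;

	for (i = 0; i < CONT_PTES; i++) {
		if (!block_start[i].valid)
			return false;
		if (block_start[i].pfn != first_pfn + i)
			return false;
	}
	return true;
}
```

Clearing any single entry makes can_fold() false, which corresponds to the point at which the real implementation must unfold the block before the change becomes visible to hardware.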
> >>
> >> On that basis, I believe all the specific cases you describe below are all
> >> covered and safe - please let me know if you think there is a hole here!
> >>
> >>>
> >>> 1. madvise(MADV_DONTNEED) on a part of basepages on a CONTPTE large folio
> >>
> >> The page will first be unmapped (e.g. ptep_clear() or ptep_get_and_clear(), or
> >> whatever). The implementation of that will cause an unfold and the CONT_PTE bit
> >> is removed from the whole contpte block. If there is then a subsequent
> >> set_pte_at() to set a swap entry, the implementation will see that it's not
> >> appropriate to re-fold, so the range will remain unfolded.
> >>
> >>>
> >>> 2. vma split in a large folio due to various reasons such as mprotect,
> >>> munmap, mlock etc.
> >>
> >> I'm not sure if PTEs are explicitly unmapped/remapped when splitting a VMA? I
> >> suspect not, so if the VMA is split in the middle of a currently folded contpte
> >> block, it will remain folded. But this is safe and continues to work correctly.
> >> The VMA arrangement is not important; it is just important that a single folio
> >> is mapped contiguously across the whole block.
> >>
> >>>
> >>> 3. try_to_unmap_one() to reclaim a folio, ptes are scanned one by one
> >>> rather than being as a whole.
> >>
> >> Yes, as per 1; the arm64 implementation will notice when the first entry is
> >> cleared and unfold the contpte block.
> >>
> >>>
> >>> In hardware, we need to make sure CONTPTEs follow the rule - always 16
> >>> contiguous physical addresses with CONT_PTE set. If one of them strays
> >>> from the 16-pte group and the PTEs become inconsistent, some terrible
> >>> errors/faults can happen in HW, for example
> >>
> >> Yes, the implementation obeys all these rules; see contpte_try_fold() and
> >> contpte_try_unfold(). The fold/unfold operation is only done when all
> >> requirements are met, and we perform it in a manner that is conformant to the
> >> architecture requirements (see contpte_fold() - being renamed to
> >> contpte_convert() in the next version).
> >
> > Hi Ryan,
> >
> > sorry for too many comments, I remembered another case
> >
> > 4. mremap
> >
> > a contpte block might be remapped to another address which might not be
> > aligned with 16*basepage. Thus, in move_ptes(), we are copying CONTPTEs
> > from src to dst.
> > static int move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
> >                 unsigned long old_addr, unsigned long old_end,
> >                 struct vm_area_struct *new_vma, pmd_t *new_pmd,
> >                 unsigned long new_addr, bool need_rmap_locks)
> > {
> >         struct mm_struct *mm = vma->vm_mm;
> >         pte_t *old_pte, *new_pte, pte;
> >         ...
> >
> >         /*
> >          * We don't have to worry about the ordering of src and dst
> >          * pte locks because exclusive mmap_lock prevents deadlock.
> >          */
> >         old_pte = pte_offset_map_lock(mm, old_pmd, old_addr, &old_ptl);
> >         if (!old_pte) {
> >                 err = -EAGAIN;
> >                 goto out;
> >         }
> >         new_pte = pte_offset_map_nolock(mm, new_pmd, new_addr, &new_ptl);
> >         if (!new_pte) {
> >                 pte_unmap_unlock(old_pte, old_ptl);
> >                 err = -EAGAIN;
> >                 goto out;
> >         }
> >         if (new_ptl != old_ptl)
> >                 spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
> >         flush_tlb_batched_pending(vma->vm_mm);
> >         arch_enter_lazy_mmu_mode();
> >
> >         for (; old_addr < old_end; old_pte++, old_addr += PAGE_SIZE,
> >                                    new_pte++, new_addr += PAGE_SIZE) {
> >                 if (pte_none(ptep_get(old_pte)))
> >                         continue;
> >
> >                 pte = ptep_get_and_clear(mm, old_addr, old_pte);
> >                 ...
> >         }
> >
> > This has two possibilities
> > 1. new_pte is aligned with CONT_PTES, we can still keep CONTPTE;
> > 2. new_pte is not aligned with CONT_PTES, we should drop CONTPTE
> > while copying.
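[Editorial note: the two possibilities above reduce to a simple alignment check. The sketch below is illustrative only, not kernel code; CONT_PTES and PAGE_SIZE are hard-coded for the 4K base-page case.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE	4096UL
#define CONT_PTES	16UL
#define CONT_PTE_SIZE	(CONT_PTES * PAGE_SIZE) /* 64K block */

/*
 * A folded block can only be preserved across mremap if the old and
 * new virtual addresses place the pages at the same offset within a
 * naturally aligned 64K contpte block; otherwise the CONT_PTE bit
 * must be dropped while copying.
 */
static bool contpte_alignment_preserved(uint64_t old_addr,
					uint64_t new_addr)
{
	return (old_addr % CONT_PTE_SIZE) == (new_addr % CONT_PTE_SIZE);
}
```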
> >
> > does your code also handle this properly?
>
> Yes; same mechanism - the arm64 arch code does the CONT_PTE bit management and
> folds/unfolds as necessary.
>
> Admittedly this may be non-optimal because we are iterating a single PTE at a
> time. When we clear the first pte of a contpte block in the source, the block
> will be unfolded. When we set the last pte of the contpte block in the dest, the
> block will be folded. If we had a batching mechanism, we could just clear the
> whole source contpte block in one hit (no need to unfold first) and we could
> just set the dest contpte block in one hit (no need to fold at the end).
>
> I haven't personally seen this as a hotspot though; I don't know if you have any
> data to the contrary? I've followed this type of batching technique for the fork
> case (patch 13). We could do a similar thing in theory, but it's a bit more
> complex because of the ptep_get_and_clear() return value; you would need to
> return all ptes for the cleared range, or somehow collapse the actual info that
> the caller requires (presumably access/dirty info).

In my previous testing, I didn't see mremap very often, so no worries.
As long as it is bug-free, it is fine to me, though an mremap
microbenchmark will definitely lose :-)

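[Editorial note: the ptep_get_and_clear() return-value problem mentioned above can be sketched as follows. This is a hypothetical toy model, not a proposed kernel API: a batched clear would have to collapse the per-PTE information - here just access/dirty bits - into one summary value for the caller.]

```c
#include <assert.h>
#include <stdbool.h>

#define CONT_PTES 16

struct toy_pte {
	bool valid;
	bool young; /* accessed */
	bool dirty;
};

/*
 * Hypothetical batched clear: wipe CONT_PTES entries in one pass and
 * return a single pte-like summary that ORs together the access/dirty
 * state - presumably the info a caller such as move_ptes() needs.
 */
static struct toy_pte clear_block_collapse(struct toy_pte *ptes)
{
	struct toy_pte summary = { .valid = true };
	int i;

	for (i = 0; i < CONT_PTES; i++) {
		summary.young |= ptes[i].young;
		summary.dirty |= ptes[i].dirty;
		ptes[i] = (struct toy_pte){ 0 }; /* pte_clear() equivalent */
	}
	return summary;
}
```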
Thanks
Barry