From: Barry Song <21cnbao@gmail.com>
To: Ryan Roberts <ryan.roberts@arm.com>
Cc: akpm@linux-foundation.org, andreyknvl@gmail.com,
	anshuman.khandual@arm.com,  ardb@kernel.org,
	catalin.marinas@arm.com, david@redhat.com,  dvyukov@google.com,
	glider@google.com, james.morse@arm.com,  jhubbard@nvidia.com,
	linux-arm-kernel@lists.infradead.org,
	 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	mark.rutland@arm.com,  maz@kernel.org, oliver.upton@linux.dev,
	ryabinin.a.a@gmail.com,  suzuki.poulose@arm.com,
	vincenzo.frascino@arm.com, wangkefeng.wang@huawei.com,
	 will@kernel.org, willy@infradead.org, yuzenghui@huawei.com,
	yuzhao@google.com,  ziy@nvidia.com
Subject: Re: [PATCH v2 00/14] Transparent Contiguous PTEs for User Mappings
Date: Tue, 28 Nov 2023 11:53:50 +1300	[thread overview]
Message-ID: <CAGsJ_4yhHuT1Sra+vEzfFykYM3Jdm85q6fRydX_3QjwHL38UMA@mail.gmail.com> (raw)
In-Reply-To: <d11d713c-9c4e-4d26-9888-8cc843861785@arm.com>

On Tue, Nov 28, 2023 at 12:11 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> On 27/11/2023 10:35, Barry Song wrote:
> > On Mon, Nov 27, 2023 at 10:15 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
> >>
> >> On 27/11/2023 03:18, Barry Song wrote:
> >>>> Ryan Roberts (14):
> >>>>   mm: Batch-copy PTE ranges during fork()
> >>>>   arm64/mm: set_pte(): New layer to manage contig bit
> >>>>   arm64/mm: set_ptes()/set_pte_at(): New layer to manage contig bit
> >>>>   arm64/mm: pte_clear(): New layer to manage contig bit
> >>>>   arm64/mm: ptep_get_and_clear(): New layer to manage contig bit
> >>>>   arm64/mm: ptep_test_and_clear_young(): New layer to manage contig bit
> >>>>   arm64/mm: ptep_clear_flush_young(): New layer to manage contig bit
> >>>>   arm64/mm: ptep_set_wrprotect(): New layer to manage contig bit
> >>>>   arm64/mm: ptep_set_access_flags(): New layer to manage contig bit
> >>>>   arm64/mm: ptep_get(): New layer to manage contig bit
> >>>>   arm64/mm: Split __flush_tlb_range() to elide trailing DSB
> >>>>   arm64/mm: Wire up PTE_CONT for user mappings
> >>>>   arm64/mm: Implement ptep_set_wrprotects() to optimize fork()
> >>>>   arm64/mm: Add ptep_get_and_clear_full() to optimize process teardown
> >>>
> >>> Hi Ryan,
> >>> Not quite sure if I missed something, but are we splitting/unfolding CONTPTEs
> >>> in the cases below?
> >>
> >> The general idea is that the core-mm sets the individual ptes (one at a time
> >> with set_pte_at() if it likes, or in a block with set_ptes()), modifies their
> >> permissions (ptep_set_wrprotect(), ptep_set_access_flags()) and clears them
> >> (ptep_clear(), etc.). This is exactly the same interface as before.
> >>
> >> BUT, the arm64 implementation of those interfaces will now detect when a set of
> >> adjacent PTEs (a contpte block - so 16 naturally aligned entries when using 4K
> >> base pages) are all appropriate for having the CONT_PTE bit set; in this case
> >> the block is "folded". And it will detect when the first PTE in the block
> >> changes such that the CONT_PTE bit must now be unset ("unfolded"). One of the
> >> requirements for folding a contpte block is that all the pages must belong to
> >> the *same* folio (that means it's safe to only track access/dirty for the
> >> contpte block as a whole rather than for each individual pte).
> >>
> >> (there are a couple of optimizations that make the reality slightly more
> >> complicated than what I've just explained, but you get the idea).
> >>
> >> On that basis, I believe all the specific cases you describe below are all
> >> covered and safe - please let me know if you think there is a hole here!
> >>
> >>>
> >>> 1. madvise(MADV_DONTNEED) on a part of basepages on a CONTPTE large folio
> >>
> >> The page will first be unmapped (e.g. ptep_clear() or ptep_get_and_clear(), or
> >> whatever). The implementation of that will cause an unfold and the CONT_PTE bit
> >> is removed from the whole contpte block. If there is then a subsequent
> >> set_pte_at() to set a swap entry, the implementation will see that it's not
> >> appropriate to re-fold, so the range will remain unfolded.
> >>
> >>>
> >>> 2. vma split in a large folio due to various reasons such as mprotect,
> >>> munmap, mlock etc.
> >>
> >> I'm not sure whether PTEs are explicitly unmapped/remapped when splitting a
> >> VMA; I suspect not, so if the VMA is split in the middle of a currently folded
> >> contpte block, it will remain folded. But this is safe and continues to work
> >> correctly.
> >> The VMA arrangement is not important; it is just important that a single folio
> >> is mapped contiguously across the whole block.
> >
> > I don't think it is safe to keep a CONTPTE block folded in a split_vma case,
> > as otherwise copy_ptes in your other patch might only copy part of the
> > CONTPTEs.
> > For example, if page0-page4 and page5-page15 are split in split_vma, then
> > in fork, while copying ptes for the first VMA, we are copying page0-page4,
> > which will immediately produce an inconsistent CONTPTE block, as we have to
> > make sure all CONTPTEs are atomically mapped under a PTL.
>
> No that's not how it works. The CONT_PTE bit is not blindly copied from parent
> to child. It is explicitly managed by the arch code and set when appropriate. In
> the case above, we will end up calling set_ptes() for page0-page4 in the child.
> set_ptes() will notice that there are only 5 contiguous pages so it will map
> without the CONT_PTE bit.

OK, cool. Alternatively, in the code I shared with you, we do an unfold
immediately when split_vma happens within a large anon folio, so we disallow
a CONTPTE block from crossing two VMAs and avoid all kinds of complexity
afterwards.

https://github.com/OnePlusOSS/android_kernel_oneplus_sm8550/blob/oneplus/sm8550_u_14.0.0_oneplus11/mm/huge_memory.c

#ifdef CONFIG_CONT_PTE_HUGEPAGE
void vma_adjust_cont_pte_trans_huge(struct vm_area_struct *vma,
				    unsigned long start,
				    unsigned long end,
				    long adjust_next)
{
	/*
	 * If the new start address isn't hpage aligned and it could
	 * previously contain a hugepage: check if we need to split
	 * a huge pmd.
	 */
	if (start & ~HPAGE_CONT_PTE_MASK &&
	    (start & HPAGE_CONT_PTE_MASK) >= vma->vm_start &&
	    (start & HPAGE_CONT_PTE_MASK) + HPAGE_CONT_PTE_SIZE <= vma->vm_end)
		split_huge_cont_pte_address(vma, start, false, NULL);

	....
}
#endif

In your approach, you still keep CONTPTE blocks that cross two VMAs, but it
seems OK. I can't come up with a case that might fail in my head right now;
only running the code on a large amount of real hardware will tell :-)

>
> >
> >>
> >>>
> >>> 3. try_to_unmap_one() to reclaim a folio, where ptes are scanned one by one
> >>> rather than as a whole.
> >>
> >> Yes, as per 1; the arm64 implementation will notice when the first entry is
> >> cleared and unfold the contpte block.
> >>
> >>>
> >>> In hardware, we need to make sure CONTPTEs follow the rule - always 16
> >>> contiguous physical addresses with CONT_PTE set. If one of them runs away
> >>> from the group of 16 ptes and the PTEs become inconsistent, some terrible
> >>> errors/faults can happen in HW. For example:
> >>
> >> Yes, the implementation obeys all these rules; see contpte_try_fold() and
> >> contpte_try_unfold(). The fold/unfold operation is only done when all
> >> requirements are met, and we perform it in a manner that is conformant to the
> >> architecture requirements (see contpte_fold() - being renamed to
> >> contpte_convert() in the next version).
> >>
> >> Thanks for the review!
> >>
> >> Thanks,
> >> Ryan
> >>
> >>>
> >>> case 0:
> >>> addr0 PTE - has no CONT_PTE
> >>> addr0+4kb PTE - has CONT_PTE
> >>> ....
> >>> addr0+60kb PTE - has CONT_PTE
> >>>
> >>> case 1:
> >>> addr0 PTE - has no CONT_PTE
> >>> addr0+4kb PTE - has CONT_PTE
> >>> ....
> >>> addr0+60kb PTE - has swap
> >>>
> >>> Inconsistent 16 PTEs will lead to crashes even in the firmware, based on
> >>> our observation.
> >>>
> >

Thanks
Barry

