linux-mm.kvack.org archive mirror
From: Barry Song <21cnbao@gmail.com>
To: Ryan Roberts <ryan.roberts@arm.com>
Cc: akpm@linux-foundation.org, andreyknvl@gmail.com,
	anshuman.khandual@arm.com,  ardb@kernel.org,
	catalin.marinas@arm.com, david@redhat.com,  dvyukov@google.com,
	glider@google.com, james.morse@arm.com,  jhubbard@nvidia.com,
	linux-arm-kernel@lists.infradead.org,
	 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	mark.rutland@arm.com,  maz@kernel.org, oliver.upton@linux.dev,
	ryabinin.a.a@gmail.com,  suzuki.poulose@arm.com,
	vincenzo.frascino@arm.com, wangkefeng.wang@huawei.com,
	 will@kernel.org, willy@infradead.org, yuzenghui@huawei.com,
	yuzhao@google.com,  ziy@nvidia.com
Subject: Re: [PATCH v2 14/14] arm64/mm: Add ptep_get_and_clear_full() to optimize process teardown
Date: Mon, 4 Dec 2023 05:41:27 +0800	[thread overview]
Message-ID: <CAGsJ_4xJ=VibK3FFGr5UHDQn+HW_o1_ZMiUjhgo8+i=UWa3UGQ@mail.gmail.com> (raw)
In-Reply-To: <1d2f8e43-447e-4af4-96ac-1eefea7d6747@arm.com>

On Thu, Nov 30, 2023 at 8:00 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> >> Just because you found a pte that maps a page from a large folio, that doesn't
> >> mean that all pages from the folio are mapped, and it doesn't mean they are
> >> mapped contiguously. We have to deal with partial munmap(), partial mremap()
> >> etc. We could split in these cases (and in future it might be sensible to try),
> >> but that can fail (due to GUP). So we still have to handle the corner case.
> >>
> >> But I can imagine doing a batched version of ptep_get_and_clear(), like I did
> >> for ptep_set_wrprotects(). And I think this would be an improvement.
> >>
> >> The reason I haven't done that so far, is because ptep_get_and_clear() returns
> >> the pte value when it was cleared and that's hard to do if batching due to the
> >> storage requirement. But perhaps you could just return the logical OR of the
> >> dirty and young bits across all ptes in the batch. The caller should be able to
> >> reconstitute the rest if it needs it?
> >>
> >> What do you think?
> >
> > I really don't know why we care about the return value of ptep_get_and_clear()
> > as zap_pte_range() doesn't ask for any return value at all. So why not give up
> > entirely on this complex logical OR of the dirty and young bits, since they are
> > useless in this case?
>
> That's not the case in v6.7-rc1:
>
>
> static unsigned long zap_pte_range(struct mmu_gather *tlb,
>                                 struct vm_area_struct *vma, pmd_t *pmd,
>                                 unsigned long addr, unsigned long end,
>                                 struct zap_details *details)
> {
>         ...
>
>         do {
>                 pte_t ptent = ptep_get(pte);
>
>                 ...
>
>                 if (pte_present(ptent)) {
>                         ...
>
>                         ptent = ptep_get_and_clear_full(mm, addr, pte,
>                                                         tlb->fullmm);
>                         arch_check_zapped_pte(vma, ptent);
>                         tlb_remove_tlb_entry(tlb, pte, addr);
>                         zap_install_uffd_wp_if_needed(vma, addr, pte, details,
>                                                       ptent);
>                         if (unlikely(!page)) {
>                                 ksm_might_unmap_zero_page(mm, ptent);
>                                 continue;
>                         }
>
>                         delay_rmap = 0;
>                         if (!PageAnon(page)) {
>                                 if (pte_dirty(ptent)) {
>                                         set_page_dirty(page);
>                                         if (tlb_delay_rmap(tlb)) {
>                                                 delay_rmap = 1;
>                                                 force_flush = 1;
>                                         }
>                                 }
>                                 if (pte_young(ptent) && likely(vma_has_recency(vma)))
>                                         mark_page_accessed(page);
>                         }
>
>                         ...
>                 }
>
>                 ...
>         } while (pte++, addr += PAGE_SIZE, addr != end);
>
>         ...
> }
>
> Most importantly, file-backed mappings need the access/dirty bits to propagate that information back to the folio, so it will be written back to disk. x86 is also looking at the dirty bit in arch_check_zapped_pte(), and ksm is using it in ksm_might_unmap_zero_page().
>
> Probably for your use case of anon memory on arm64 on a phone, you don't need the return value. But my solution is also setting contpte for file-backed memory, and there are performance wins to be had there, especially for executable mappings where contpte reduces iTLB pressure. (I have other work which ensures these file-backed mappings are in correctly-sized large folios).
>
> So I don't think we can just do a clear without the get part. But I think I have a solution in the shape of clear_ptes(), as described in the other thread, which gives the characteristics you suggest.

Understood. I realize OPPO's code actually also returns the logical OR of the
dirty and access bits while it exposes CONTPTE to mm-core [1]:

static pte_t get_clear_flush(struct mm_struct *mm,
                             unsigned long addr,
                             pte_t *ptep,
                             unsigned long pgsize,
                             unsigned long ncontig,
                             bool flush)
{
        pte_t orig_pte = ptep_get(ptep);
        bool valid = pte_valid(orig_pte);
        unsigned long i, saddr = addr;

        for (i = 0; i < ncontig; i++, addr += pgsize, ptep++) {
                pte_t pte = ptep_get_and_clear(mm, addr, ptep);

                if (pte_dirty(pte))
                        orig_pte = pte_mkdirty(orig_pte);

                if (pte_young(pte))
                        orig_pte = pte_mkyoung(orig_pte);
        }

        if (valid && flush) {
                struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);

                flush_tlb_range(&vma, saddr, addr);
        }
        return orig_pte;
}

static inline pte_t __cont_pte_huge_ptep_get_and_clear_flush(struct mm_struct *mm,
                                       unsigned long addr,
                                       pte_t *ptep,
                                       bool flush)
{
        pte_t orig_pte = ptep_get(ptep);

        CHP_BUG_ON(!pte_cont(orig_pte));
        CHP_BUG_ON(!IS_ALIGNED(addr, HPAGE_CONT_PTE_SIZE));
        CHP_BUG_ON(!IS_ALIGNED(pte_pfn(orig_pte), HPAGE_CONT_PTE_NR));

        return get_clear_flush(mm, addr, ptep, PAGE_SIZE, CONT_PTES, flush);
}

[1] https://github.com/OnePlusOSS/android_kernel_oneplus_sm8550/blob/oneplus/sm8550_u_14.0.0_oneplus11/mm/cont_pte_hugepage.c#L1421

>
>
> >
> > Is it possible for us to introduce a new api like?
> >
> > bool clear_folio_ptes(folio, ptep)
> > {
> >     if (ptes are contiguously mapped) {
> >            clear all ptes together    // this also clears all CONTPTEs
> >            return true;
> >     }
> >     return false;
> > }
> >
> > in zap_pte_range():
> >
> > if (large_folio(folio) && clear_folio_ptes(folio, ptep)) {
> >          addr += nr - 1;
> >          pte += nr - 1;
> > } else
> >          old path.
> >
> >
> >>
> >>>
> >>> zap_pte_range() is the most frequent path from the userspace libc heap,
> >>> as I explained before; libc calls madvise(MADV_DONTNEED) very often, so
> >>> this is crucial to performance.
> >>>
> >>> And could this approach also let you drop the _full variant by always
> >>> flushing whole large folios, so we no longer need to depend on fullmm?
> >>>
> >>>>
> >>>> I don't think there is any correctness issue here. But there is a problem with
> >>>> fragility, as raised by Alistair. I have some ideas on potentially how to solve
> >>>> that. I'm going to try to work on it this afternoon and will post if I get some
> >>>> confidence that it is a real solution.
> >>>>
> >>>> Thanks,
> >>>> Ryan
> >>>>
> >>>>>
> >>>>> static inline pte_t __cont_pte_huge_ptep_get_and_clear_flush(struct mm_struct *mm,
> >>>>>                                      unsigned long addr,
> >>>>>                                      pte_t *ptep,
> >>>>>                                      bool flush)
> >>>>> {
> >>>>>       pte_t orig_pte = ptep_get(ptep);
> >>>>>
> >>>>>       CHP_BUG_ON(!pte_cont(orig_pte));
> >>>>>       CHP_BUG_ON(!IS_ALIGNED(addr, HPAGE_CONT_PTE_SIZE));
> >>>>>       CHP_BUG_ON(!IS_ALIGNED(pte_pfn(orig_pte), HPAGE_CONT_PTE_NR));
> >>>>>
> >>>>>       return get_clear_flush(mm, addr, ptep, PAGE_SIZE, CONT_PTES, flush);
> >>>>> }
> >>>>>
> >>>>> [1] https://github.com/OnePlusOSS/android_kernel_oneplus_sm8550/blob/oneplus/sm8550_u_14.0.0_oneplus11/mm/memory.c#L1539
> >>>>>
> >>>>>> +     */
> >>>>>> +
> >>>>>> +    return __ptep_get_and_clear(mm, addr, ptep);
> >>>>>> +}
> >>>>>> +EXPORT_SYMBOL(contpte_ptep_get_and_clear_full);
> >>>>>> +
> >>>>>
> >>>
> >  Thanks
> >  Barry
>


