From: David Hildenbrand <david@redhat.com>
To: Barry Song <21cnbao@gmail.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>,
	Lance Yang <ioworker0@gmail.com>,
	Vishal Moola <vishal.moola@gmail.com>,
	akpm@linux-foundation.org, zokeefe@google.com,
	shy828301@gmail.com, mhocko@suse.com, fengwei.yin@intel.com,
	xiehuan09@gmail.com, wangkefeng.wang@huawei.com,
	songmuchun@bytedance.com, peterx@redhat.com, minchan@kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 1/1] mm/madvise: enhance lazyfreeing with mTHP in madvise_free
Date: Thu, 7 Mar 2024 13:04:29 +0100	[thread overview]
Message-ID: <a895d678-ca65-4bd9-bf2f-21029adc63ce@redhat.com> (raw)
In-Reply-To: <CAGsJ_4xXS0MsxRVTbf74DY_boQVUE2oP=AP6JmdXZSqsAOZzRQ@mail.gmail.com>

On 07.03.24 13:01, Barry Song wrote:
> On Thu, Mar 7, 2024 at 7:45 PM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 07.03.24 12:42, Ryan Roberts wrote:
>>> On 07/03/2024 11:31, David Hildenbrand wrote:
>>>> On 07.03.24 12:26, Barry Song wrote:
>>>>> On Thu, Mar 7, 2024 at 7:13 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>>>>>
>>>>>> On 07/03/2024 10:54, David Hildenbrand wrote:
>>>>>>> On 07.03.24 11:54, David Hildenbrand wrote:
>>>>>>>> On 07.03.24 11:50, Ryan Roberts wrote:
>>>>>>>>> On 07/03/2024 09:33, Barry Song wrote:
>>>>>>>>>> On Thu, Mar 7, 2024 at 10:07 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>> On 07/03/2024 08:10, Barry Song wrote:
>>>>>>>>>>>> On Thu, Mar 7, 2024 at 9:00 PM Lance Yang <ioworker0@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Hey Barry,
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks for taking time to review!
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Thu, Mar 7, 2024 at 3:00 PM Barry Song <21cnbao@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Thu, Mar 7, 2024 at 7:15 PM Lance Yang <ioworker0@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>> [...]
>>>>>>>>>>>>>>> +static inline bool can_mark_large_folio_lazyfree(unsigned long addr,
>>>>>>>>>>>>>>> +                                                struct folio *folio, pte_t *start_pte)
>>>>>>>>>>>>>>> +{
>>>>>>>>>>>>>>> +       int nr_pages = folio_nr_pages(folio);
>>>>>>>>>>>>>>> +       fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>> +       for (int i = 0; i < nr_pages; i++)
>>>>>>>>>>>>>>> +               if (page_mapcount(folio_page(folio, i)) != 1)
>>>>>>>>>>>>>>> +                       return false;
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> we have moved to folio_estimated_sharers(), even though it is not precise,
>>>>>>>>>>>>>> so we don't do this check with a loop over every subpage's mapcount.
>>>>>>>>>>>>>
>>>>>>>>>>>>> If we don't check each subpage's mapcount, and there is a CoW folio
>>>>>>>>>>>>> associated with this folio that is smaller than this folio, should we
>>>>>>>>>>>>> still mark this folio as lazyfree?
>>>>>>>>>>>>
>>>>>>>>>>>> I agree, this is true. However, we've somehow accepted the fact that
>>>>>>>>>>>> folio_likely_mapped_shared() can result in false negatives or false
>>>>>>>>>>>> positives to balance the overhead.  So I really don't know :-)
>>>>>>>>>>>>
>>>>>>>>>>>> Maybe David and Vishal can give some comments here.
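
For context, a rough sketch of the two checks being weighed above: the exact
per-subpage loop from the patch versus the cheap first-subpage estimate. This
is illustrative only; the folio_estimated_sharers() body shown reflects the
kernels of that era and may differ from current mainline, and the helper name
all_subpages_exclusively_mapped() is made up for this sketch.

	/* Exact, but O(nr_pages): every subpage must be mapped exactly once. */
	static inline bool all_subpages_exclusively_mapped(struct folio *folio)
	{
		for (int i = 0; i < folio_nr_pages(folio); i++)
			if (page_mapcount(folio_page(folio, i)) != 1)
				return false;
		return true;
	}

	/*
	 * Cheap estimate: only the first subpage's mapcount is consulted, so a
	 * partially mapped or partially CoWed large folio can be misjudged in
	 * either direction.
	 */
	static inline int folio_estimated_sharers(struct folio *folio)
	{
		return page_mapcount(folio_page(folio, 0));
	}
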
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>> BTW, do we need to rebase our work against David's changes[1]?
>>>>>>>>>>>>>> [1]
>>>>>>>>>>>>>> https://lore.kernel.org/linux-mm/20240227201548.857831-1-david@redhat.com/
>>>>>>>>>>>>>
>>>>>>>>>>>>> Yes, we should rebase our work against David’s changes.
>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>> +       return nr_pages == folio_pte_batch(folio, addr, start_pte,
>>>>>>>>>>>>>>> +                                        ptep_get(start_pte), nr_pages, flags, NULL);
>>>>>>>>>>>>>>> +}
>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>       static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>>>>>>>>>>>>>>>                                      unsigned long end, struct mm_walk *walk)
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> @@ -676,11 +690,45 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>>>>>>>>>>>>>>>                       */
>>>>>>>>>>>>>>>                      if (folio_test_large(folio)) {
>>>>>>>>>>>>>>>                              int err;
>>>>>>>>>>>>>>> +                       unsigned long next_addr, align;
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> -                       if (folio_estimated_sharers(folio) != 1)
>>>>>>>>>>>>>>> -                               break;
>>>>>>>>>>>>>>> -                       if (!folio_trylock(folio))
>>>>>>>>>>>>>>> -                               break;
>>>>>>>>>>>>>>> +                       if (folio_estimated_sharers(folio) != 1 ||
>>>>>>>>>>>>>>> +                           !folio_trylock(folio))
>>>>>>>>>>>>>>> +                               goto skip_large_folio;
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I don't think we can skip all the PTEs for nr_pages, as some of them
>>>>>>>>>>>>>> might be pointing to other folios.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> For example, for a large folio mapped by 16 PTEs, if you do MADV_DONTNEED
>>>>>>>>>>>>>> on PTE15-PTE16 and then write the memory behind PTE15 and PTE16, you get
>>>>>>>>>>>>>> page faults, so PTE15 and PTE16 will point to two different small folios.
>>>>>>>>>>>>>> We can only skip the whole range when we are sure that
>>>>>>>>>>>>>> nr_pages == folio_pte_batch().
>>>>>>>>>>>>>
>>>>>>>>>>>>> Agreed. Thanks for pointing that out.
>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>> +                       align = folio_nr_pages(folio) * PAGE_SIZE;
>>>>>>>>>>>>>>> +                       next_addr = ALIGN_DOWN(addr + align, align);
>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>> +                       /*
>>>>>>>>>>>>>>> +                        * If we mark only the subpages as lazyfree, or
>>>>>>>>>>>>>>> +                        * cannot mark the entire large folio as lazyfree,
>>>>>>>>>>>>>>> +                        * then just split it.
>>>>>>>>>>>>>>> +                        */
>>>>>>>>>>>>>>> +                       if (next_addr > end || next_addr - addr != align ||
>>>>>>>>>>>>>>> +                           !can_mark_large_folio_lazyfree(addr, folio, pte))
>>>>>>>>>>>>>>> +                               goto split_large_folio;
>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>> +                       /*
>>>>>>>>>>>>>>> +                        * Avoid unnecessary folio splitting if the large
>>>>>>>>>>>>>>> +                        * folio is entirely within the given range.
>>>>>>>>>>>>>>> +                        */
>>>>>>>>>>>>>>> +                       folio_clear_dirty(folio);
>>>>>>>>>>>>>>> +                       folio_unlock(folio);
>>>>>>>>>>>>>>> +                       for (; addr != next_addr; pte++, addr += PAGE_SIZE) {
>>>>>>>>>>>>>>> +                               ptent = ptep_get(pte);
>>>>>>>>>>>>>>> +                               if (pte_young(ptent) || pte_dirty(ptent)) {
>>>>>>>>>>>>>>> +                                       ptent = ptep_get_and_clear_full(
>>>>>>>>>>>>>>> +                                               mm, addr, pte, tlb->fullmm);
>>>>>>>>>>>>>>> +                                       ptent = pte_mkold(ptent);
>>>>>>>>>>>>>>> +                                       ptent = pte_mkclean(ptent);
>>>>>>>>>>>>>>> +                                       set_pte_at(mm, addr, pte, ptent);
>>>>>>>>>>>>>>> +                                       tlb_remove_tlb_entry(tlb, pte, addr);
>>>>>>>>>>>>>>> +                               }
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Can we do this in batches? For a CONT-PTE mapped large folio, you are
>>>>>>>>>>>>>> unfolding and folding again, which seems quite expensive.
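
As an illustration of what "in batches" could mean here, below is a rough
sketch of the quoted loop folded into a single per-batch operation. It is an
assumption, not mainline code: the helpers mkold_clean_ptes() and
tlb_remove_tlb_entries() are hypothetical names standing in for "clear
young/dirty across nr PTEs" and "flush nr TLB entries".

	/* Only if the whole range maps this folio can we treat it as one unit. */
	nr = folio_pte_batch(folio, addr, pte, ptep_get(pte), nr_pages, flags, NULL);
	if (nr == nr_pages) {
		/* One pass over the batch instead of get/clear/set per PTE. */
		mkold_clean_ptes(mm, addr, pte, nr, tlb->fullmm);	/* hypothetical */
		tlb_remove_tlb_entries(tlb, pte, nr, addr);		/* hypothetical */
	}
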
>>>>>>>>>>>
>>>>>>>>>>> I'm not convinced we should be doing this in batches. We want the initial
>>>>>>>>>>> folio_pte_batch() to be as loose as possible regarding permissions so that
>>>>>>>>>>> we reduce our chances of splitting folios to the minimum (e.g. ignore SW
>>>>>>>>>>> bits like soft-dirty, etc.). I think it might be possible that some PTEs
>>>>>>>>>>> are RO and others RW too (e.g. due to CoW - although with the current CoW
>>>>>>>>>>> implementation, probably not, but it's fragile to assume that). Anyway, if
>>>>>>>>>>> we do an initial batch that ignores all
>>>>>>>>>>
>>>>>>>>>> You are correct. I believe this scenario could indeed occur. For instance,
>>>>>>>>>> if process A forks process B and then unmaps itself, leaving B as the
>>>>>>>>>> sole process owning the large folio.  The current wp_page_reuse() function
>>>>>>>>>> will reuse PTEs one by one as each specific subpage is written.
>>>>>>>>>
>>>>>>>>> Hmm - I thought it would only reuse if the total mapcount for the folio
>>>>>>>>> was 1.
>>>>>>>>> And since it is a large folio with each page mapped once in proc B, I thought
>>>>>>>>> every subpage write would cause a copy except the last one? I haven't
>>>>>>>>> looked at
>>>>>>>>> the code for a while. But I had it in my head that this is an area we need to
>>>>>>>>> improve for mTHP.
>>>>>
>>>>> So sad I am wrong again 😢
>>>>>
>>>>>>>>
>>>>>>>> wp_page_reuse() will currently reuse a PTE that is part of a large folio
>>>>>>>> only if a single PTE remains mapped (refcount == 0).
>>>>>>>
>>>>>>> ^ == 1
>>>>>
>>>>> Seems this needs improvement. It is a waste, since the last subpage could
>>>>
>>>> My take that is WIP:
>>>>
>>>> https://lore.kernel.org/all/20231124132626.235350-1-david@redhat.com/T/#u
>>>>
>>>>> reuse the whole large folio. I was doing it in quite a different way:
>>>>> if the large folio had only one subpage left, I would do the copy and
>>>>> release the large folio[1]; and if I could reuse the whole large folio
>>>>> with CONT-PTE, I would reuse the whole large folio[2]. In mainline,
>>>>> we don't have this CONT-PTE luxury exposed to mm, so I guess we cannot
>>>>> do [2] easily, but [1] seems to be an optimization.
>>>>
>>>> Yeah, I had essentially the same idea: just free up the large folio if most of
>>>> the stuff is unmapped. But that's rather a corner-case optimization, so I did
>>>> not proceed with that.
>>>>
>>>
>>> I'm not sure it's a corner case, really? - process forks, then both parent and
>>> child write to all pages in what was previously a fully & contiguously
>>> mapped large folio?
>>
>> Well, with 2 MiB my assumption was that while it can happen, it's rather
>> rare. With smaller THP it might get more likely, agreed.
>>
>>>
>>> Regardless, why is it an optimization to do the copy for the last subpage and
>>> synchronously free the large folio? It's already partially mapped so it is on
>>> the deferred split list and can be split if memory is tight.
> 
> We don't want reclamation overhead later, and we want the memory immediately
> available to others. Reclamation will always cause latency and affect user
> experience, and split_folio() is not cheap :-) If the number of this kind of
> large folio is huge, the waste can be huge for a while.
> 
> It is not a corner case for large folio swap-in: while someone writes
> one subpage, I swap in a large folio, and wp_reuse will immediately
> be called. This can cause waste quite often. One outcome of this
> discussion is that I realize I should investigate this issue immediately
> in the swap-in series, as my off-tree code has optimized reuse but
> mainline hasn't.

Note that if the swp entry was exclusive, the subpage will be marked 
PAE (PageAnonExclusive), so wp_reuse() will (and must!) reuse it.

We fall back to the refcount==1 scheme only if PAE is not set for that 
subpage.
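
In other words, roughly the following decision order (a condensed sketch of
the idea, not the exact do_wp_page() code):

	if (PageAnonExclusive(page)) {
		/* Exclusive swap-in already marked it PAE: always reuse. */
		reuse = true;
	} else if (folio_ref_count(folio) == 1) {
		/* Sole reference left: safe to turn it exclusive and reuse. */
		SetPageAnonExclusive(page);
		reuse = true;
	} else {
		/* Possibly shared: fall back to a CoW copy. */
		reuse = false;
	}
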

-- 
Cheers,

David / dhildenb



Thread overview: 31+ messages
2024-03-07  6:14 Lance Yang
2024-03-07  7:00 ` Barry Song
2024-03-07  8:00   ` Lance Yang
2024-03-07  8:10     ` Barry Song
2024-03-07  9:07       ` Ryan Roberts
2024-03-07  9:33         ` Barry Song
2024-03-07 10:50           ` Ryan Roberts
2024-03-07 10:54             ` David Hildenbrand
2024-03-07 10:54               ` David Hildenbrand
2024-03-07 11:13                 ` Ryan Roberts
2024-03-07 11:17                   ` David Hildenbrand
2024-03-07 14:41                     ` Lance Yang
2024-03-07 14:58                       ` David Hildenbrand
2024-03-07 15:08                         ` Lance Yang
2024-03-07 11:26                   ` Barry Song
2024-03-07 11:31                     ` David Hildenbrand
2024-03-07 11:42                       ` Ryan Roberts
2024-03-07 11:45                         ` David Hildenbrand
2024-03-07 12:01                           ` Barry Song
2024-03-07 12:04                             ` David Hildenbrand [this message]
2024-03-07 16:31                             ` Ryan Roberts
2024-03-07 18:54                               ` Barry Song
2024-03-07 19:48                                 ` David Hildenbrand
2024-03-08 13:05                                 ` Ryan Roberts
2024-03-08 13:27                                   ` David Hildenbrand
2024-03-08 13:48                                     ` Ryan Roberts
2024-03-08 18:01                                   ` Barry Song
2024-03-11  9:55                                     ` Ryan Roberts
2024-03-11 10:01                                       ` Barry Song
2024-03-11 15:07         ` Ryan Roberts
2024-03-12 10:20           ` Lance Yang
