From: Barry Song <21cnbao@gmail.com>
To: Ryan Roberts <ryan.roberts@arm.com>
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, david@redhat.com,
	 chrisl@kernel.org, yuzhao@google.com, hanchuanhua@oppo.com,
	 linux-kernel@vger.kernel.org, willy@infradead.org,
	ying.huang@intel.com,  xiang@kernel.org, mhocko@suse.com,
	shy828301@gmail.com,  wangkefeng.wang@huawei.com,
	Barry Song <v-songbaohua@oppo.com>,
	 Hugh Dickins <hughd@google.com>
Subject: Re: [RFC PATCH] mm: hold PTL from the first PTE while reclaiming a large folio
Date: Tue, 5 Mar 2024 22:15:05 +1300
Message-ID: <CAGsJ_4woFHT3eLzQ+Dg2dAUMve=wd=0SEZfZ4NqLyBVqeskkVg@mail.gmail.com>
In-Reply-To: <0a644230-f7a8-4091-9d00-ded6c8c3fc19@arm.com>

On Tue, Mar 5, 2024 at 10:11 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> On 05/03/2024 09:08, Barry Song wrote:
> > On Tue, Mar 5, 2024 at 9:54 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
> >>
> >> On 04/03/2024 21:57, Barry Song wrote:
> >>> On Tue, Mar 5, 2024 at 1:21 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
> >>>>
> >>>> Hi Barry,
> >>>>
> >>>> On 04/03/2024 10:37, Barry Song wrote:
> >>>>> From: Barry Song <v-songbaohua@oppo.com>
> >>>>>
> >>>>> page_vma_mapped_walk() within try_to_unmap_one() races with other
> >>>>> PTE modifications such as break-before-make. While iterating the PTEs
> >>>>> of a large folio, it only begins to acquire the PTL after it finds a
> >>>>> valid (present) PTE. break-before-make transiently sets PTEs to
> >>>>> pte_none, so some of a large folio's PTEs might be skipped in
> >>>>> try_to_unmap_one().
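
To make the race and the fix named in the subject line easier to picture,
here is a minimal, illustrative sketch of holding the PTL from the folio's
first PTE. mm, pmd and folio_start_addr stand in for context the reclaim
path already has; this is not the exact patch:

        pte_t *start_pte;
        spinlock_t *ptl;
        unsigned long i;

        /* map and lock the PTE of the folio's first subpage up front */
        start_pte = pte_offset_map_lock(mm, pmd, folio_start_addr, &ptl);
        if (!start_pte)
                return false;

        for (i = 0; i < folio_nr_pages(folio); i++) {
                pte_t pte = ptep_get(start_pte + i);

                /*
                 * With the PTL held for the whole walk, a concurrent
                 * break-before-make cannot transiently turn an earlier
                 * entry into pte_none() under us, so no PTE of the large
                 * folio is silently skipped; inspect/unmap pte here.
                 */
        }
        pte_unmap_unlock(start_pte, ptl);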
> >>>>
> >>>> I just want to check my understanding here - I think the problem occurs for
> >>>> PTE-mapped, PMD-sized folios as well as smaller-than-PMD-size large folios? Now
> >>>> that I've had a look at the code and have a better understanding, I think that
> >>>> must be the case? And therefore this problem exists independently of my work to
> >>>> support swap-out of mTHP? (From your previous report I was under the impression
> >>>> that it only affected mTHP).
> >>>
> >>> I think this affects all large folios mapped by more than one PTE. But
> >>> hugeTLB is handled as a whole in try_to_unmap_one() and its rmap is
> >>> removed all together, so I feel hugeTLB doesn't have this problem.
> >>>
> >>>>
> >>>> It's just that the problem is becoming more pronounced because, with mTHP,
> >>>> PTE-mapped large folios are much more common?
> >>>
> >>> Right. Large folios have become a much more common case now, and this is
> >>> exactly the case running on millions of phones.
> >>>
> >>> BTW, I feel we can learn something from hugeTLB here: for example, we could
> >>> reclaim all PTEs together rather than iterating over them one by one, which
> >>> would improve performance. For example, with a batched
> >>> set_ptes_to_swap_entries()
> >>> {
> >>> }
> >>> we would only need to loop once for a large folio; right now we are looping
> >>> nr_pages times.
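
A minimal sketch of what such a batched helper could look like; the signature
and the assumption that the caller already holds the PTL are illustrative,
this is not existing kernel API:

        static void set_ptes_to_swap_entries(struct vm_area_struct *vma,
                                             unsigned long addr, pte_t *ptep,
                                             swp_entry_t entry, int nr_pages)
        {
                int i;

                /* caller has done all checks once and holds the PTL */
                for (i = 0; i < nr_pages; i++) {
                        swp_entry_t e = swp_entry(swp_type(entry),
                                                  swp_offset(entry) + i);

                        ptep_get_and_clear(vma->vm_mm, addr, ptep);
                        set_pte_at(vma->vm_mm, addr, ptep, swp_entry_to_pte(e));

                        addr += PAGE_SIZE;
                        ptep++;
                }
                /* a single deferred TLB flush can then cover the whole range */
        }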
> >>
> >> You still need a pte-pte loop somewhere. In hugetlb's case it's in the arch
> >> implementation. HugeTLB ptes are all a fixed size for a given VMA, which makes
> >> things a bit easier too, whereas in the regular mm, they are now a variable size.
> >>
> >> David and I introduced folio_pte_batch() to help gather batches of ptes, and it
> >> uses the contpte bit to avoid iterating over intermediate ptes. And I'm adding
> >> swap_pte_batch() which does a similar thing for swap entry batching in v4 of my
> >> swap-out series.
> >>
> >> For your set_ptes_to_swap_entries() example, I'm not sure what it would do other
> >> than loop over the PTEs setting an incremented swap entry to each one? How is
> >> that more performant?
> >
> > Right now, while (page_vma_mapped_walk(&pvmw)) loops nr_pages times, once
> > per PTE, and we do lots of checks within the loop for each PTE.
> >
> > By implementing set_ptes_to_swap_entries(), we could iterate once through
> > page_vma_mapped_walk(): after folio_pte_batch() has confirmed that the
> > large folio is completely mapped, we set all nr_pages swap entries together.
> >
> > we are replacing
> >
> > for(i=0;i<nr_pages;i++)     /* page_vma_mapped_walk */
> > {
> >         lots of checks;
> >         clear PTEn
> >         set PTEn to swap
> > }
>
> OK so you are effectively hoisting "lots of checks" out of the loop?

No. page_vma_mapped_walk() returns nr_pages times, and we do the same
checks each time. Each time, we also do a tlbi and set one PTE.

>
> >
> > by
> >
> > if (large folio && folio_pte_batch() == nr_pages)
> >     set_ptes_to_swap_entries().

For this, we do the checks only once, and we do far fewer tlbi operations.
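
And the caller side, under the same assumptions (nr_batched would come from
something like folio_pte_batch(); the exact calling convention is elided):

        if (folio_test_large(folio) &&
            nr_batched == folio_nr_pages(folio)) {
                /* checks done once for the whole folio */
                set_ptes_to_swap_entries(vma, addr, pvmw.pte, entry, nr_batched);
        } else {
                /* fall back to the existing per-PTE path */
        }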

> >
> >>
> >
> > Thanks,
> > Ryan

Thanks
Barry


Thread overview: 28+ messages
2024-03-04 10:37 Barry Song
2024-03-04 12:20 ` Ryan Roberts
2024-03-04 12:41   ` David Hildenbrand
2024-03-04 13:03     ` Ryan Roberts
2024-03-04 14:27       ` David Hildenbrand
2024-03-04 20:42         ` Barry Song
2024-03-04 21:02           ` David Hildenbrand
2024-03-04 21:41             ` Barry Song
2024-03-04 21:04     ` Barry Song
2024-03-04 21:15       ` David Hildenbrand
2024-03-04 22:29         ` Barry Song
2024-03-05  7:53           ` Huang, Ying
2024-03-05  9:02             ` Barry Song
2024-03-05  9:10               ` Huang, Ying
2024-03-05  9:21                 ` Barry Song
2024-03-05 10:28                   ` Barry Song
2024-03-04 22:02       ` Ryan Roberts
2024-03-05  7:50     ` Huang, Ying
2024-03-04 21:57   ` Barry Song
2024-03-05  8:54     ` Ryan Roberts
2024-03-05  9:08       ` Barry Song
2024-03-05  9:11         ` Ryan Roberts
2024-03-05  9:15           ` Barry Song [this message]
2024-03-05  7:28 ` Huang, Ying
2024-03-05  8:56   ` Barry Song
2024-03-05  9:04     ` Huang, Ying
2024-03-05  9:08     ` Ryan Roberts
2024-03-05  9:11       ` Barry Song
