From: Ryan Roberts <ryan.roberts@arm.com>
To: David Hildenbrand <david@redhat.com>, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>,
	Russell King <linux@armlinux.org.uk>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Dinh Nguyen <dinguyen@kernel.org>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Nicholas Piggin <npiggin@gmail.com>,
	Christophe Leroy <christophe.leroy@csgroup.eu>,
	"Aneesh Kumar K.V" <aneesh.kumar@kernel.org>,
	"Naveen N. Rao" <naveen.n.rao@linux.ibm.com>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	Albert Ou <aou@eecs.berkeley.edu>,
	Alexander Gordeev <agordeev@linux.ibm.com>,
	Gerald Schaefer <gerald.schaefer@linux.ibm.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@linux.ibm.com>,
	Sven Schnelle <svens@linux.ibm.com>,
	"David S. Miller" <davem@davemloft.net>,
	linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, sparclinux@vger.kernel.org
Subject: Re: [PATCH v3 00/15] mm/memory: optimize fork() with PTE-mapped THP
Date: Wed, 31 Jan 2024 11:49:25 +0000	[thread overview]
Message-ID: <714d0930-2202-48b6-9728-d248f820325e@arm.com> (raw)
In-Reply-To: <e6eaba5b-f290-4d1f-990b-a47d89f56ee4@redhat.com>

On 31/01/2024 11:28, David Hildenbrand wrote:
> On 31.01.24 12:16, Ryan Roberts wrote:
>> On 31/01/2024 11:06, David Hildenbrand wrote:
>>> On 31.01.24 11:43, Ryan Roberts wrote:
>>>> On 29/01/2024 12:46, David Hildenbrand wrote:
>>>>> Now that the rmap overhaul[1] is upstream that provides a clean interface
>>>>> for rmap batching, let's implement PTE batching during fork when processing
>>>>> PTE-mapped THPs.
>>>>>
>>>>> This series is partially based on Ryan's previous work[2] to implement
>>>>> cont-pte support on arm64, but it's a complete rewrite based on [1] to
>>>>> optimize all architectures independent of any such PTE bits, and to
>>>>> use the new rmap batching functions that simplify the code and prepare
>>>>> for further rmap accounting changes.
>>>>>
>>>>> We collect consecutive PTEs that map consecutive pages of the same large
>>>>> folio, making sure that the other PTE bits are compatible, and (a) adjust
>>>>> the refcount only once per batch, (b) call rmap handling functions only
>>>>> once per batch and (c) perform batch PTE setting/updates.
>>>>>
>>>>> While this series should be beneficial for adding cont-pte support on
>>>>> ARM64[2], it's one of the requirements for maintaining a total mapcount[3]
>>>>> for large folios with minimal added overhead and further changes[4] that
>>>>> build up on top of the total mapcount.
>>>>>
>>>>> Independent of all that, this series results in a speedup during fork with
>>>>> PTE-mapped THP, which is the default with THPs that are smaller than a PMD
>>>>> (for example, 16KiB to 1024KiB mTHPs for anonymous memory[5]).
>>>>>
>>>>> On an Intel Xeon Silver 4210R CPU, fork'ing with 1GiB of PTE-mapped folios
>>>>> of the same size (stddev < 1%) results in the following runtimes
>>>>> for fork() (shorter is better):
>>>>>
>>>>> Folio Size | v6.8-rc1 |      New | Change
>>>>> ------------------------------------------
>>>>>         4KiB | 0.014328 | 0.014035 |   - 2%
>>>>>        16KiB | 0.014263 | 0.01196  |   -16%
>>>>>        32KiB | 0.014334 | 0.01094  |   -24%
>>>>>        64KiB | 0.014046 | 0.010444 |   -26%
>>>>>       128KiB | 0.014011 | 0.010063 |   -28%
>>>>>       256KiB | 0.013993 | 0.009938 |   -29%
>>>>>       512KiB | 0.013983 | 0.00985  |   -30%
>>>>>      1024KiB | 0.013986 | 0.00982  |   -30%
>>>>>      2048KiB | 0.014305 | 0.010076 |   -30%
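
(For readers following along: the batching described in the cover letter
above amounts to roughly the user-space sketch below. The PTE layout and the
pte_batch_len() helper are made up purely for illustration - this is not the
kernel implementation, just the shape of the idea: find a run of consecutive
PTEs of one large folio up front, then do the refcount, rmap and PTE updates
once per run instead of once per page.)

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Toy PTE model: bits 12+ hold the PFN, bits 0-11 hold flag bits.
 * pte_batch_len() counts how many entries starting at ptep map
 * consecutive PFNs with identical flag bits, capped at max_nr (in the
 * kernel the cap would be "pages left in the folio / page table").
 */
static size_t pte_batch_len(const uint64_t *ptep, size_t max_nr)
{
    uint64_t expected = ptep[0];
    size_t nr = 1;

    while (nr < max_nr) {
        expected += 1ull << 12; /* next consecutive PFN, same flags */
        if (ptep[nr] != expected)
            break;
        nr++;
    }
    return nr;
}

int main(void)
{
    /* Four consecutive pages of one folio, then an unrelated mapping. */
    uint64_t ptes[] = {
        (100ull << 12) | 0x3, (101ull << 12) | 0x3,
        (102ull << 12) | 0x3, (103ull << 12) | 0x3,
        (999ull << 12) | 0x3,
    };
    size_t nr = pte_batch_len(ptes, sizeof(ptes) / sizeof(ptes[0]));

    /* With the run length known, the refcount adjustment, the rmap call
     * and the PTE writes would each happen once for all 'nr' pages.    */
    printf("batch of %zu consecutive PTEs\n", nr);
    return 0;
}
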
>>>>
>>>> Just a heads up that I'm seeing some strange results on Apple M2. Fork for
>>>> order-0 is seemingly costing ~17% more. I'm using GCC 13.2 and was pretty
>>>> sure I
>>>> didn't see this problem with version 1; although that was on a different
>>>> baseline and I've thrown the numbers away so will rerun and try to debug this.

Numbers for v1 of the series, both on top of v6.8-rc1 and rebased to the same
mm-unstable base as v3 of the series (first 2 rows are from what I just posted
for context):

| kernel             |   mean_rel |   std_rel |
|:-------------------|-----------:|----------:|
| mm-unstable (base) |       0.0% |      1.1% |
| mm-unstable + v3   |      16.7% |      0.8% |
| mm-unstable + v1   |      -2.5% |      1.7% |
| v6.8-rc1 + v1      |      -6.6% |      1.1% |

So all looks good with v1. And it seems to suggest mm-unstable has regressed
by ~4% vs v6.8-rc1. Is this really a useful benchmark? Does the raw
performance of the fork() syscall really matter? Evidence suggests it's
moving all over the place - breathe on the code and it changes - not a great
place to be when using the test for gating purposes!
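
(For reference, the kind of measurement in question is roughly the loop
below. This is a minimal sketch of the idea, not the actual test harness;
the buffer size, iteration count and use of MADV_HUGEPAGE are assumptions,
and selecting a particular mTHP size would be done via the sysfs knobs
outside this snippet.)

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define BUF_SIZE (1ull << 30)   /* 1 GiB of anonymous memory */
#define ITERS    100

int main(void)
{
    char *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    /* Ask for (m)THP and fault everything in so fork() has page tables
     * worth copying; an order-0 baseline would skip the madvise().     */
    madvise(buf, BUF_SIZE, MADV_HUGEPAGE);
    memset(buf, 1, BUF_SIZE);

    struct timespec t0, t1;
    double total = 0.0;

    for (int i = 0; i < ITERS; i++) {
        clock_gettime(CLOCK_MONOTONIC, &t0);
        pid_t pid = fork();
        if (pid == 0)
            _exit(0);           /* child: page tables copied, do nothing */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        waitpid(pid, NULL, 0);
        total += (t1.tv_sec - t0.tv_sec) +
                 (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }
    printf("mean fork() time: %f s\n", total / ITERS);
    return 0;
}

A number measured like this is dominated by page-table copy cost, which is
what the series targets, but it is also exactly the kind of number that
wobbles with code layout and branch prediction.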

Still with the old tests - I'll move to the new ones now.


>>>>
>>>
>>> So far, on my x86 tests (Intel, AMD EPYC), I was not able to observe this.
>>> fork() for order-0 was consistently effectively unchanged. Do you observe that
>>> on other ARM systems as well?
>>
>> Nope; running the exact same kernel binary and user space on Altra, I see
>> sensible numbers:
>>
>> fork order-0: -1.3%
>> fork order-9: -7.6%
>> dontneed order-0: -0.5%
>> dontneed order-9: 0.1%
>> munmap order-0: 0.0%
>> munmap order-9: -67.9%
>>
>> So I guess some pipelining issue that causes the M2 to stall more?
> 
> With one effective added folio_test_large(), it could only be a code layout
> problem? Or the compiler does something stupid, but you say that you run the
> exact same kernel binary, so that doesn't make sense.

Yup, same binary. We know this code is very sensitive - 1 cycle makes a big
difference. So it could easily be code layout, branch prediction, etc...

> 
> I'm also surprised about the dontneed vs. munmap numbers.

You mean the ones for Altra that I posted? (I didn't post any for M2.) The
Altra numbers look OK to me; dontneed has no change, and munmap has no change
for order-0 and is massively improved for order-9.

> Doesn't make any sense
> (again, there was this VMA merging problem but it would still allow for batching
> within a single VMA that spans exactly one large folio).
> 
> What are you using as baseline? Really just mm-unstable vs. mm-unstable+patches?

Yes, except for "v6.8-rc1 + v1" above.

> 
> Let's see if the new test changes the numbers you measure.
> 




Thread overview: 48+ messages
2024-01-29 12:46 David Hildenbrand
2024-01-29 12:46 ` [PATCH v3 01/15] arm64/mm: Make set_ptes() robust when OAs cross 48-bit boundary David Hildenbrand
2024-02-08  6:10   ` Mike Rapoport
2024-02-09 22:36     ` David Hildenbrand
2024-01-29 12:46 ` [PATCH v3 02/15] arm/pgtable: define PFN_PTE_SHIFT David Hildenbrand
2024-02-08  6:11   ` Mike Rapoport
2024-01-29 12:46 ` [PATCH v3 03/15] nios2/pgtable: " David Hildenbrand
2024-02-08  6:12   ` Mike Rapoport
2024-01-29 12:46 ` [PATCH v3 04/15] powerpc/pgtable: " David Hildenbrand
2024-02-08  6:13   ` Mike Rapoport
2024-01-29 12:46 ` [PATCH v3 05/15] riscv/pgtable: " David Hildenbrand
2024-02-08  6:14   ` Mike Rapoport
2024-01-29 12:46 ` [PATCH v3 06/15] s390/pgtable: " David Hildenbrand
2024-02-08  6:15   ` Mike Rapoport
2024-01-29 12:46 ` [PATCH v3 07/15] sparc/pgtable: " David Hildenbrand
2024-02-08  6:18   ` Mike Rapoport
2024-01-29 12:46 ` [PATCH v3 08/15] mm/pgtable: make pte_next_pfn() independent of set_ptes() David Hildenbrand
2024-02-08  6:19   ` Mike Rapoport
2024-01-29 12:46 ` [PATCH v3 09/15] arm/mm: use pte_next_pfn() in set_ptes() David Hildenbrand
2024-02-08  6:20   ` Mike Rapoport
2024-01-29 12:46 ` [PATCH v3 10/15] powerpc/mm: " David Hildenbrand
2024-02-08  6:20   ` Mike Rapoport
2024-01-29 12:46 ` [PATCH v3 11/15] mm/memory: factor out copying the actual PTE in copy_present_pte() David Hildenbrand
2024-02-08  6:29   ` Mike Rapoport
2024-01-29 12:46 ` [PATCH v3 12/15] mm/memory: pass PTE to copy_present_pte() David Hildenbrand
2024-02-08  6:27   ` Mike Rapoport
2024-02-14 22:40   ` David Hildenbrand
2024-01-29 12:46 ` [PATCH v3 13/15] mm/memory: optimize fork() with PTE-mapped THP David Hildenbrand
2024-02-08  6:41   ` Mike Rapoport
2024-01-29 12:46 ` [PATCH v3 14/15] mm/memory: ignore dirty/accessed/soft-dirty bits in folio_pte_batch() David Hildenbrand
2024-01-29 12:46 ` [PATCH v3 15/15] mm/memory: ignore writable bit " David Hildenbrand
2024-01-31 10:43 ` [PATCH v3 00/15] mm/memory: optimize fork() with PTE-mapped THP Ryan Roberts
2024-01-31 11:06   ` David Hildenbrand
2024-01-31 11:16     ` Ryan Roberts
2024-01-31 11:28       ` David Hildenbrand
2024-01-31 11:49         ` Ryan Roberts [this message]
2024-01-31 12:37           ` Ryan Roberts
2024-01-31 12:56             ` David Hildenbrand
2024-01-31 13:16               ` Ryan Roberts
2024-01-31 13:38                 ` David Hildenbrand
2024-01-31 13:58                   ` Ryan Roberts
2024-01-31 14:29                     ` David Hildenbrand
2024-01-31 15:02                       ` Ryan Roberts
2024-01-31 15:05                         ` David Hildenbrand
2024-01-31 15:08                           ` Ryan Roberts
2024-01-31 15:11                             ` David Hildenbrand
2024-01-31 12:59           ` David Hildenbrand
2024-03-25  4:42 ` patchwork-bot+linux-riscv
