From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <714d0930-2202-48b6-9728-d248f820325e@arm.com>
Date: Wed, 31 Jan 2024 11:49:25 +0000
From: Ryan Roberts <ryan.roberts@arm.com>
To: David Hildenbrand, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton, Matthew Wilcox, Russell King,
 Catalin Marinas, Will Deacon, Dinh Nguyen, Michael Ellerman,
 Nicholas Piggin, Christophe Leroy, "Aneesh Kumar K.V", "Naveen N. Rao",
 Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexander Gordeev,
 Gerald Schaefer, Heiko Carstens, Vasily Gorbik, Christian Borntraeger,
 Sven Schnelle, "David S. Miller", linux-arm-kernel@lists.infradead.org,
 linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org, sparclinux@vger.kernel.org
Subject: Re: [PATCH v3 00/15] mm/memory: optimize fork() with PTE-mapped THP
References: <20240129124649.189745-1-david@redhat.com>
 <57eb82c7-4816-42a2-b5ab-cc221e289b21@arm.com>
Content-Type: text/plain; charset=UTF-8

On 31/01/2024 11:28, David Hildenbrand wrote:
> On 31.01.24 12:16, Ryan Roberts
> wrote:
>> On 31/01/2024 11:06, David Hildenbrand wrote:
>>> On 31.01.24 11:43, Ryan Roberts wrote:
>>>> On 29/01/2024 12:46, David Hildenbrand wrote:
>>>>> Now that the rmap overhaul[1] is upstream that provides a clean interface
>>>>> for rmap batching, let's implement PTE batching during fork when
>>>>> processing PTE-mapped THPs.
>>>>>
>>>>> This series is partially based on Ryan's previous work[2] to implement
>>>>> cont-pte support on arm64, but it's a complete rewrite based on [1] to
>>>>> optimize all architectures independently of any such PTE bits, and to
>>>>> use the new rmap batching functions that simplify the code and prepare
>>>>> for further rmap accounting changes.
>>>>>
>>>>> We collect consecutive PTEs that map consecutive pages of the same large
>>>>> folio, making sure that the other PTE bits are compatible, and (a) adjust
>>>>> the refcount only once per batch, (b) call rmap handling functions only
>>>>> once per batch and (c) perform batch PTE setting/updates.
>>>>>
>>>>> While this series should be beneficial for adding cont-pte support on
>>>>> ARM64[2], it's one of the requirements for maintaining a total mapcount[3]
>>>>> for large folios with minimal added overhead and further changes[4] that
>>>>> build up on top of the total mapcount.
>>>>>
>>>>> Independent of all that, this series results in a speedup during fork with
>>>>> PTE-mapped THP, which is the default with THPs that are smaller than a PMD
>>>>> (for example, 16KiB to 1024KiB mTHPs for anonymous memory[5]).
>>>>>
>>>>> On an Intel Xeon Silver 4210R CPU, fork'ing with 1GiB of PTE-mapped
>>>>> folios of the same size (stddev < 1%) results in the following runtimes
>>>>> for fork() (shorter is better):
>>>>>
>>>>> Folio Size | v6.8-rc1 |      New | Change
>>>>> ------------------------------------------
>>>>>       4KiB | 0.014328 | 0.014035 |   - 2%
>>>>>      16KiB | 0.014263 | 0.01196  |   -16%
>>>>>      32KiB | 0.014334 | 0.01094  |   -24%
>>>>>      64KiB | 0.014046 | 0.010444 |   -26%
>>>>>     128KiB | 0.014011 | 0.010063 |   -28%
>>>>>     256KiB | 0.013993 | 0.009938 |   -29%
>>>>>     512KiB | 0.013983 | 0.00985  |   -30%
>>>>>    1024KiB | 0.013986 | 0.00982  |   -30%
>>>>>    2048KiB | 0.014305 | 0.010076 |   -30%
>>>>
>>>> Just a heads up that I'm seeing some strange results on Apple M2. Fork
>>>> for order-0 is seemingly costing ~17% more. I'm using GCC 13.2 and was
>>>> pretty sure I didn't see this problem with version 1; although that was
>>>> on a different baseline and I've thrown the numbers away so will rerun
>>>> and try to debug this.

Numbers for v1 of the series, both on top of 6.8-rc1 and rebased to the same
mm-unstable base as v3 of the series (first 2 rows are from what I just
posted for context):

| kernel             | mean_rel | std_rel |
|:-------------------|---------:|--------:|
| mm-unstable (base) |     0.0% |    1.1% |
| mm-unstable + v3   |    16.7% |    0.8% |
| mm-unstable + v1   |    -2.5% |    1.7% |
| v6.8-rc1 + v1      |    -6.6% |    1.1% |

So all looks good with v1. And it seems to suggest mm-unstable has regressed
by ~4% vs v6.8-rc1.

Is this really a useful benchmark? Does the raw performance of the fork()
syscall really matter? Evidence suggests it's moving all over the place -
breathe on the code and it changes - not a great place to be when using the
test for gating purposes!

Still with the old tests - I'll move to the new ones now.

>>>>
>>>
>>> So far, on my x86 tests (Intel, AMD EPYC), I was not able to observe this.
>>> fork() for order-0 was consistently effectively unchanged. Do you observe
>>> that on other ARM systems as well?
>>
>> Nope; running the exact same kernel binary and user space on Altra, I see
>> sensible numbers:
>>
>> fork order-0: -1.3%
>> fork order-9: -7.6%
>> dontneed order-0: -0.5%
>> dontneed order-9: 0.1%
>> munmap order-0: 0.0%
>> munmap order-9: -67.9%
>>
>> So I guess some pipelining issue that causes the M2 to stall more?
>
> With one effective added folio_test_large(), it could only be a code layout
> problem? Or the compiler does something stupid, but you say that you run the
> exact same kernel binary, so that doesn't make sense.

Yup, same binary. We know this code is very sensitive - 1 cycle makes a big
difference. So it could easily be code layout, branch prediction, etc...

> I'm also surprised about the dontneed vs. munmap numbers.

You mean the ones for Altra that I posted? (I didn't post any for M2). The
Altra numbers look ok to me; dontneed has no change, and munmap has no change
for order-0 and is massively improved for order-9.

> Doesn't make any sense
> (again, there was this VMA merging problem but it would still allow for
> batching within a single VMA that spans exactly one large folio).
>
> What are you using as baseline? Really just mm-unstable vs.
> mm-unstable+patches?

Yes, except for "v6.8-rc1 + v1" above.

> Let's see if the new test changes the numbers you measure.