From: Ryan Roberts
Date: Wed, 31 Jan 2024 10:43:22 +0000
Subject: Re: [PATCH v3 00/15] mm/memory: optimize fork() with PTE-mapped THP
To: David Hildenbrand, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton, Matthew Wilcox, Russell King,
    Catalin Marinas, Will Deacon, Dinh Nguyen, Michael Ellerman,
    Nicholas Piggin, Christophe Leroy, "Aneesh Kumar K.V", "Naveen N. Rao",
    Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexander Gordeev,
    Gerald Schaefer, Heiko Carstens, Vasily Gorbik, Christian Borntraeger,
    Sven Schnelle, "David S. Miller", linux-arm-kernel@lists.infradead.org,
    linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
    linux-s390@vger.kernel.org, sparclinux@vger.kernel.org
Rao" , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexander Gordeev , Gerald Schaefer , Heiko Carstens , Vasily Gorbik , Christian Borntraeger , Sven Schnelle , "David S. Miller" , linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org References: <20240129124649.189745-1-david@redhat.com> From: Ryan Roberts In-Reply-To: <20240129124649.189745-1-david@redhat.com> Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit X-Stat-Signature: rpg5wc37wczi3ca5zdo8j7zh4zdukzqj X-Rspamd-Server: rspam10 X-Rspamd-Queue-Id: E3DA320016 X-Rspam-User: X-HE-Tag: 1706697812-555305 X-HE-Meta: U2FsdGVkX1+Xs9gOuk3G2YVdbH3rBgyrnacKeb0JG8Pv9RJl+5CL2TqoDf5/eCSC2hwBbhj45Q+rdnG5zaMDXXu4jUuK4np3dXvEB06MGXS1SFPlEh9JR0yQETzP0BktRWn+LOeIbJAi9EyHv+GQ0TUpgMU5jlsc2OdSFCoN9JhWHBCrso8zlElfD/6ZXm5vVZNjSdLi5zym9oxxDyf4sCS/jgTdFRNZFvRDOWqFfEkl+UHjH30UcIx8kLlq/619qSs4Y/MKK6ksx5GidPvvOLm/XGl5xzcV4AbkgJ1rjmY9en7qOx0SNv1NUwVorGkn/EeltAeVOKgcH9KRMh8E36kQgEQH0fOhGZcjXcny9j6drgXIY/mdYSJKVAsTRNGHFc5tBfCsw4Qb77ex/o/SEVvrPWgSTgezCjE3rAgo9p5OXvm71Nz6dbzFtEJURrz4Gs5LLFS/KRHpoCUD5/hZsCjiIkhBKY/cGmhQnDIWbHM0GQ2jIsbcNMP/rCM4NxBOMb6s2oErcBWlFcvNkUPNZWnw8KPnOxFSBsg9VruI2kShr11jcRTcmoNeU24nO8fQAfmrRIntsJa6JB2XQIzRDqSVChujz1Rx1mK3W2+DOlkmMh8BCTJktaw/bBTh9Zs/pQbEf3QmnzD9YwvR043YLkHgE2dXYhU5Y9i8DV84VBUol3Nm2Gd3FGv0SAix6smLE3MgIkrJ/SyIDaQt3JfIxvIWJmwqGVmW58zCR2MFEVWu9CTCb7/xP185IMiz0Y8Au0S0Kug0d1vWCW7LOJR6AcjW3MainZhv3yvT9MsoRR5iLT4zijCXqnUNi4LMaec8EhJwPSW3ZQynVfr9MxvYgsBgeID46R2Rkze2h613AwV2r2FQqOYHb291K5tubSAnwjeEb2Oxn0ce/P9bO7w+/JVLoGVX7BU46Xi35gs4JzglAgfUCEb0w2b0KC+h7SCw/68ciy1Kq7QNCbq/2qc gAarXLHb TI/N1Jtg6SIUrsYj6IkMV93Vly2U6IuwDwTxzJU+iqgipFv7I4LrortHsi5UsuzwDem5zYxhIQUmfRS6bu6S1rhhfVzNregI/rqamAqT8eWuYFaqDkLN5t5u7k2mC1z4/RRuR9KglNn9zc6UF9v9T/qajpSGbdqsQ6OufhfxDrogNCwZwfUSXLFtrFHABSigv3z1IXIf7uH9CwDRElDPKtzuGqeSosnJkv6cfLDDL/fLmhVy0dTbQUD02br6I58uMZp31003dXE4Wx4l4Lj0Sr5OlULfU3nWQjf1uFgcLzKKA1VxdsExpL052a74xCxsKSwWMTYgMWAfA2gOHHCHsp5LJYIsR4v+sTxfrDHfJqBxsP3uvMEfH1rr0c/ktHiIHTowuKHZGp5tVW4g9Rfe1LAKidi3rjUUgyXvelPtlVKck/FWSU6KTLri6VZlDnLNemDyYMoLg8vDyTZojUvRtYUoz0ez/4kA4mibxCSobYNFYUQM9Zb3yshKG6vOrExLoyZfQI9yfTjrs0ymmTLPPJRhtcAKBKlXxVB2l6MNI7h3hr5gXVFPnN8oJzsSi0opKK2jXwVS6A1G83ZuObrTp0XV0cY7vLGhycHeGqiJMp+IQKgeYNkwx/cDTlMOY77p9VI4FnwNPk51dCRfeEQIFSR9oyIy2Tt9SqyOFFNQuDNnNfwa/VMuuYkGtOBjm4UVgj/d3gJMf/LS+2FucYq9h008nSL91xy1Oexu8GNiiA7dDllj/zQ3EBkazK9ZXcEVQLACGdzN7CAk3ubtihFHOL9CCPSi4CiwGroSjC0ccCqoO6cYMXGb898pyaA== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: On 29/01/2024 12:46, David Hildenbrand wrote: > Now that the rmap overhaul[1] is upstream that provides a clean interface > for rmap batching, let's implement PTE batching during fork when processing > PTE-mapped THPs. > > This series is partially based on Ryan's previous work[2] to implement > cont-pte support on arm64, but its a complete rewrite based on [1] to > optimize all architectures independent of any such PTE bits, and to > use the new rmap batching functions that simplify the code and prepare > for further rmap accounting changes. > > We collect consecutive PTEs that map consecutive pages of the same large > folio, making sure that the other PTE bits are compatible, and (a) adjust > the refcount only once per batch, (b) call rmap handling functions only > once per batch and (c) perform batch PTE setting/updates. 
> While this series should be beneficial for adding cont-pte support on
> ARM64[2], it's one of the requirements for maintaining a total mapcount[3]
> for large folios with minimal added overhead and further changes[4] that
> build up on top of the total mapcount.
>
> Independent of all that, this series results in a speedup during fork with
> PTE-mapped THP, which is the default with THPs that are smaller than a PMD
> (for example, 16KiB to 1024KiB mTHPs for anonymous memory[5]).
>
> On an Intel Xeon Silver 4210R CPU, fork'ing with 1GiB of PTE-mapped folios
> of the same size (stddev < 1%) results in the following runtimes
> for fork() (shorter is better):
>
> Folio Size | v6.8-rc1 | New      | Change
> ------------------------------------------
>       4KiB | 0.014328 | 0.014035 |  - 2%
>      16KiB | 0.014263 | 0.01196  |  -16%
>      32KiB | 0.014334 | 0.01094  |  -24%
>      64KiB | 0.014046 | 0.010444 |  -26%
>     128KiB | 0.014011 | 0.010063 |  -28%
>     256KiB | 0.013993 | 0.009938 |  -29%
>     512KiB | 0.013983 | 0.00985  |  -30%
>    1024KiB | 0.013986 | 0.00982  |  -30%
>    2048KiB | 0.014305 | 0.010076 |  -30%

Just a heads up that I'm seeing some strange results on Apple M2. Fork for
order-0 is seemingly costing ~17% more. I'm using GCC 13.2 and was pretty sure
I didn't see this problem with version 1, although that was on a different
baseline and I've thrown the numbers away, so I will rerun and try to debug
this.

| kernel      |   mean_rel |   std_rel |
|:------------|-----------:|----------:|
| mm-unstable |       0.0% |      1.1% |
| patch 1     |      -2.3% |      1.3% |
| patch 10    |      -2.9% |      2.7% |
| patch 11    |      13.5% |      0.5% |
| patch 12    |      15.2% |      1.2% |
| patch 13    |      18.2% |      0.7% |
| patch 14    |      20.5% |      1.0% |
| patch 15    |      17.1% |      1.6% |
| patch 15    |      16.7% |      0.8% |

Fork for order-9 is looking good (-20%), and for the zap series, munmap is
looking good, but dontneed is looking poor for both order-0 and order-9. But
one thing at a time... let's concentrate on fork order-0 first.

Note that I'm still using the "old" benchmark code. Could you resend me the
link to the new code? Although I don't think there should be any effect for
order-0 anyway, if I understood your changes correctly?
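
For completeness, the measurement is roughly of the following shape. This is a
minimal standalone sketch, not the actual harness either of us is running,
which presumably also pins CPUs, repeats the measurement many times and
selects the mTHP size via sysfs rather than relying on MADV_HUGEPAGE:

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define SZ (1UL << 30)	/* 1 GiB of anonymous memory */

int main(void)
{
	struct timespec t0, t1;
	pid_t pid;
	char *mem;

	mem = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (mem == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	madvise(mem, SZ, MADV_HUGEPAGE);	/* allow (m)THP where configured */
	memset(mem, 1, SZ);			/* populate every page up front */

	clock_gettime(CLOCK_MONOTONIC, &t0);
	pid = fork();
	if (pid == 0)
		_exit(0);			/* child does nothing */
	clock_gettime(CLOCK_MONOTONIC, &t1);
	waitpid(pid, NULL, 0);

	printf("fork() took %.6f s\n", (t1.tv_sec - t0.tv_sec) +
				       (t1.tv_nsec - t0.tv_nsec) / 1e9);
	return 0;
}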
> * "mm/memory: factor out copying the actual PTE in copy_present_pte()" > -> Move common folio_get() out of if/else > * "mm/memory: optimize fork() with PTE-mapped THP" > -> Add doc for wrprotect_ptes > -> Extend description to mention handling of pinned folios > -> Move common folio_ref_add() out of if/else > * "mm/memory: ignore dirty/accessed/soft-dirty bits in folio_pte_batch()" > -> Be more conservative with dirt/soft-dirty, let the caller specify > using flags > > [1] https://lkml.kernel.org/r/20231220224504.646757-1-david@redhat.com > [2] https://lkml.kernel.org/r/20231218105100.172635-1-ryan.roberts@arm.com > [3] https://lkml.kernel.org/r/20230809083256.699513-1-david@redhat.com > [4] https://lkml.kernel.org/r/20231124132626.235350-1-david@redhat.com > [5] https://lkml.kernel.org/r/20231207161211.2374093-1-ryan.roberts@arm.com > > Cc: Andrew Morton > Cc: Matthew Wilcox (Oracle) > Cc: Ryan Roberts > Cc: Russell King > Cc: Catalin Marinas > Cc: Will Deacon > Cc: Dinh Nguyen > Cc: Michael Ellerman > Cc: Nicholas Piggin > Cc: Christophe Leroy > Cc: "Aneesh Kumar K.V" > Cc: "Naveen N. Rao" > Cc: Paul Walmsley > Cc: Palmer Dabbelt > Cc: Albert Ou > Cc: Alexander Gordeev > Cc: Gerald Schaefer > Cc: Heiko Carstens > Cc: Vasily Gorbik > Cc: Christian Borntraeger > Cc: Sven Schnelle > Cc: "David S. Miller" > Cc: linux-arm-kernel@lists.infradead.org > Cc: linuxppc-dev@lists.ozlabs.org > Cc: linux-riscv@lists.infradead.org > Cc: linux-s390@vger.kernel.org > Cc: sparclinux@vger.kernel.org > > --- > > Andrew asked for a resend based on latest mm-unstable. I am sending this > out earlier than I would usually have sent out the next version, so we can > pull it into mm-unstable again now that v1 was dropped. > > David Hildenbrand (14): > arm/pgtable: define PFN_PTE_SHIFT > nios2/pgtable: define PFN_PTE_SHIFT > powerpc/pgtable: define PFN_PTE_SHIFT > riscv/pgtable: define PFN_PTE_SHIFT > s390/pgtable: define PFN_PTE_SHIFT > sparc/pgtable: define PFN_PTE_SHIFT > mm/pgtable: make pte_next_pfn() independent of set_ptes() > arm/mm: use pte_next_pfn() in set_ptes() > powerpc/mm: use pte_next_pfn() in set_ptes() > mm/memory: factor out copying the actual PTE in copy_present_pte() > mm/memory: pass PTE to copy_present_pte() > mm/memory: optimize fork() with PTE-mapped THP > mm/memory: ignore dirty/accessed/soft-dirty bits in folio_pte_batch() > mm/memory: ignore writable bit in folio_pte_batch() > > Ryan Roberts (1): > arm64/mm: Make set_ptes() robust when OAs cross 48-bit boundary > > arch/arm/include/asm/pgtable.h | 2 + > arch/arm/mm/mmu.c | 2 +- > arch/arm64/include/asm/pgtable.h | 28 ++-- > arch/nios2/include/asm/pgtable.h | 2 + > arch/powerpc/include/asm/pgtable.h | 2 + > arch/powerpc/mm/pgtable.c | 5 +- > arch/riscv/include/asm/pgtable.h | 2 + > arch/s390/include/asm/pgtable.h | 2 + > arch/sparc/include/asm/pgtable_64.h | 2 + > include/linux/pgtable.h | 33 ++++- > mm/memory.c | 212 ++++++++++++++++++++++------ > 11 files changed, 229 insertions(+), 63 deletions(-) > > > base-commit: d162e170f1181b4305494843e1976584ddf2b72e