From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <15177acd-8d75-46a8-9460-98ea313a7b2b@arm.com>
Date: Wed, 20 Dec 2023 09:57:32 +0000
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v4 02/16] mm: Batch-copy PTE ranges during fork()
Content-Language: en-GB
From: Ryan Roberts <ryan.roberts@arm.com>
To: David Hildenbrand, Catalin Marinas, Will Deacon, Ard Biesheuvel,
 Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu,
 Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
 Vincenzo Frascino, Andrew Morton, Anshuman Khandual, Matthew Wilcox,
 Yu Zhao,
 Mark Rutland, Kefeng Wang, John Hubbard, Zi Yan, Barry Song <21cnbao@gmail.com>,
 Alistair Popple, Yang Shi
Cc: linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
References: <20231218105100.172635-1-ryan.roberts@arm.com>
 <20231218105100.172635-3-ryan.roberts@arm.com>
 <0bef5423-6eea-446b-8854-980e9c23a948@redhat.com>
 <7c0236ad-01f3-437f-8b04-125d69e90dc0@redhat.com>
 <9a58b1a2-2c13-4fa0-8ffa-2b3d9655f1b6@arm.com>
In-Reply-To: <9a58b1a2-2c13-4fa0-8ffa-2b3d9655f1b6@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 20/12/2023 09:51, Ryan Roberts wrote:
> On 20/12/2023 09:17, David Hildenbrand wrote:
>> On 19.12.23 18:42, Ryan Roberts wrote:
>>> On 19/12/2023 17:22, David Hildenbrand wrote:
>>>> On 19.12.23 09:30, Ryan Roberts wrote:
>>>>> On 18/12/2023 17:47, David Hildenbrand wrote:
>>>>>> On 18.12.23 11:50, Ryan Roberts wrote:
>>>>>>> Convert copy_pte_range() to copy a batch of ptes in one go. A given batch is determined by the architecture with the new helper, pte_batch_remaining(), and maps a physically contiguous block of memory, all belonging to the same folio. A pte batch is then write-protected in one go in the parent using the new helper, ptep_set_wrprotects() and is set in one go in the child using the new helper, set_ptes_full().
>>>>>>>
>>>>>>> The primary motivation for this change is to reduce the number of tlb maintenance operations that the arm64 backend has to perform during fork, as it is about to add transparent support for the "contiguous bit" in its ptes. By write-protecting the parent using the new ptep_set_wrprotects() (note the 's' at the end) function, the backend can avoid having to unfold contig ranges of PTEs, which is expensive, when all ptes in the range are being write-protected.
>>>>>>> Similarly, by using set_ptes_full() rather than set_pte_at() to set up ptes in the child, the backend does not need to fold a contiguous range once they are all populated - they can be initially populated as a contiguous range in the first place.
>>>>>>>
>>>>>>> This code is very performance sensitive, and a significant amount of effort has been put into not regressing performance for the order-0 folio case. By default, pte_batch_remaining() is a compile-time constant 1, which enables the compiler to simplify the extra loops that are added for batching and produce code that is equivalent (and equally performant) to the previous implementation.
>>>>>>>
>>>>>>> This change addresses the core-mm refactoring only; a separate change will implement pte_batch_remaining(), ptep_set_wrprotects() and set_ptes_full() in the arm64 backend to realize the performance improvement as part of the work to enable contpte mappings.
>>>>>>>
>>>>>>> To ensure the arm64 implementation is performant once added, this change is very careful to only call ptep_get() once per pte batch.
>>>>>>>
>>>>>>> The following microbenchmark results demonstrate that there is no significant performance change after this patch. Fork is called in a tight loop in a process with 1G of populated memory and the time for the function to execute is measured. 100 iterations per run, 8 runs performed on both Apple M2 (VM) and Ampere Altra (bare metal). Tests were performed for the case where the 1G of memory is comprised of order-0 folios and the case where it is comprised of pte-mapped order-9 folios. Negative is faster, positive is slower, compared to the baseline upon which the series is based:
>>>>>>>
>>>>>>> | Apple M2 VM   | order-0 (pte-map) | order-9 (pte-map) |
>>>>>>> | fork          |-------------------|-------------------|
>>>>>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>>>>>> |---------------|---------|---------|---------|---------|
>>>>>>> | baseline      |    0.0% |    1.1% |    0.0% |    1.2% |
>>>>>>> | after-change  |   -1.0% |    2.0% |   -0.1% |    1.1% |
>>>>>>>
>>>>>>> | Ampere Altra  | order-0 (pte-map) | order-9 (pte-map) |
>>>>>>> | fork          |-------------------|-------------------|
>>>>>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>>>>>> |---------------|---------|---------|---------|---------|
>>>>>>> | baseline      |    0.0% |    1.0% |    0.0% |    0.1% |
>>>>>>> | after-change  |   -0.1% |    1.2% |   -0.1% |    0.1% |
>>>>>>>
>>>>>>> Tested-by: John Hubbard
>>>>>>> Reviewed-by: Alistair Popple
>>>>>>> Signed-off-by: Ryan Roberts
>>>>>>> ---
>>>>>>>  include/linux/pgtable.h | 80 +++++++++++++++++++++++++++++++++++
>>>>>>>  mm/memory.c             | 92 ++++++++++++++++++++++++++---------------
>>>>>>>  2 files changed, 139 insertions(+), 33 deletions(-)
>>>>>>>
>>>>>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>>>>>> index af7639c3b0a3..db93fb81465a 100644
>>>>>>> --- a/include/linux/pgtable.h
>>>>>>> +++ b/include/linux/pgtable.h
>>>>>>> @@ -205,6 +205,27 @@ static inline int pmd_young(pmd_t pmd)
>>>>>>>  #define arch_flush_lazy_mmu_mode()	do {} while (0)
>>>>>>>  #endif
>>>>>>>
>>>>>>> +#ifndef pte_batch_remaining
>>>>>>> +/**
>>>>>>> + * pte_batch_remaining - Number of pages from addr to next batch boundary.
>>>>>>> + * @pte: Page table entry for the first page.
>>>>>>> + * @addr: Address of the first page.
>>>>>>> + * @end: Batch ceiling (e.g. end of vma).
>>>>>>> + *
>>>>>>> + * Some architectures (arm64) can efficiently modify a contiguous batch of ptes.
>>>>>>> + * In such cases, this function returns the remaining number of pages to the end
>>>>>>> + * of the current batch, as defined by addr. This can be useful when iterating
>>>>>>> + * over ptes.
>>>>>>> + *
>>>>>>> + * May be overridden by the architecture, else batch size is always 1.
>>>>>>> + */
>>>>>>> +static inline unsigned int pte_batch_remaining(pte_t pte, unsigned long addr,
>>>>>>> +						unsigned long end)
>>>>>>> +{
>>>>>>> +	return 1;
>>>>>>> +}
>>>>>>> +#endif
>>>>>>
>>>>>> It's a shame we now lose the optimization for all other architectures.
>>>>>>
>>>>>> Was there no way to have some basic batching mechanism that doesn't require arch specifics?
>>>>>
>>>>> I tried a bunch of things but ultimately the way I've done it was the only way to reduce the order-0 fork regression to 0.
>>>>>
>>>>> My original v3 posting was costing 5% extra and even my first attempt at an arch-specific version that didn't resolve to a compile-time constant 1 still cost an extra 3%.
>>>>>
>>>>>> I'd have thought that something very basic would have worked like:
>>>>>>
>>>>>> * Check if PTE is the same when setting the PFN to 0.
>>>>>> * Check that PFN is consecutive
>>>>>> * Check that all PFNs belong to the same folio
>>>>>
>>>>> I haven't tried this exact approach, but I'd be surprised if I can get the regression under 4% with this. Further along the series I spent a lot of time having to fiddle with the arm64 implementation; every conditional and every memory read (even when in cache) was a problem. There is just so little in the inner loop that every instruction matters. (At least on Ampere Altra and Apple M2).
>>>>>
>>>>> Of course if you're willing to pay that 4-5% for order-0 then the benefit to order-9 is around 10% in my measurements. Personally though, I'd prefer to play safe and ensure the common order-0 case doesn't regress, as you previously suggested.
>>>>
>>>> I just hacked something up, on top of my beloved rmap cleanup/batching series. I implemented very generic and simple batching for large folios (all PTE bits except the PFN have to match).
>>>>
>>>> Some very quick testing (don't trust each last %) on Intel(R) Xeon(R) Silver 4210R CPU:
>>>>
>>>> order-0: 0.014210 -> 0.013969
>>>>
>>>> -> Around 1.7 % faster
>>>>
>>>> order-9: 0.014373 -> 0.009149
>>>>
>>>> -> Around 36.3 % faster
>>>
>>> Well I guess that shows me :)
>>>
>>> I'll do a review and run the tests on my HW to see if it concurs.
>>
>> I pushed a simple compile fixup (we need pte_next_pfn()).

> I've just been trying to compile and noticed this. Will take a look at your update.

Took a look; there will still be arch work needed: arm64 doesn't define PFN_PTE_SHIFT because it defines set_ptes(). I'm not sure if there are other arches that also don't define PFN_PTE_SHIFT (or pte_next_pfn(), if the math is more complex) - it will need an audit.

>
> But upon review, I've noticed the part that I think makes this difficult for arm64 with the contpte optimization: you are calling ptep_get() for every pte in the batch. While this is functionally correct, once arm64 has the contpte changes, its ptep_get() has to read every pte in the contpte block in order to gather the access and dirty bits.
> So if your batching function ends up walking a 16 entry contpte block, that will cause 16 x 16 reads, which kills performance. That's why I added the arch-specific pte_batch_remaining() function; this allows the core-mm to skip to the end of the contpte block and avoid ptep_get() for the 15 tail ptes. So we end up with 16 READ_ONCE()s instead of 256.
>
> I considered making a ptep_get_noyoungdirty() variant, which would avoid the bit gathering. But we have a similar problem in zap_pte_range() and that function needs the dirty bit to update the folio, so it doesn't work there. (See patch 3 in my series.)
>
> I guess you are going to say that we should combine both approaches, so that your batching loop can skip forward an arch-provided number of ptes? That would certainly work, but feels like an orthogonal change to what I'm trying to achieve :). Anyway, I'll spend some time playing with it today.
>
>> Note that we should probably handle "ptep_set_wrprotects" rather like set_ptes:
>>
>> #ifndef wrprotect_ptes
>> static inline void wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
>>                pte_t *ptep, unsigned int nr)
>> {
>>        for (;;) {
>>                ptep_set_wrprotect(mm, addr, ptep);
>>                if (--nr == 0)
>>                        break;
>>                ptep++;
>>                addr += PAGE_SIZE;
>>        }
>> }
>> #endif

Yes, that's a much better name. I've also introduced clear_ptes() (in patch 3) and set_ptes_full(), which takes a flag that allows arm64 to avoid trying to fold a contpte block; this is needed to avoid regressing fork once the contpte changes are present.
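
For reference, a rough, untested sketch of the kind of generic batch detection discussed above (consecutive pfns, same folio, all other pte bits matching) might look something like the below. This is not code from either series; count_pte_batch() is a made-up name, and pte_next_pfn() is assumed to exist per the compile fixup mentioned earlier:

/*
 * Illustrative sketch only: count how many ptes starting at ptep map
 * consecutive pfns of the same folio with otherwise identical pte bits.
 * Note it still performs one ptep_get() per pte, which is exactly the
 * cost being discussed for the arm64 contpte case.
 */
static inline unsigned int count_pte_batch(struct folio *folio, pte_t *ptep,
					   pte_t pte, unsigned int max_nr)
{
	pte_t expected = pte_next_pfn(pte);
	unsigned int nr = 1;

	while (nr < max_nr) {
		pte_t next = ptep_get(ptep + nr);

		/* All bits, including the expected next pfn, must match. */
		if (!pte_same(next, expected))
			break;

		/* The pfn must still land in the same folio. */
		if (pfn_folio(pte_pfn(next)) != folio)
			break;

		expected = pte_next_pfn(expected);
		nr++;
	}

	return nr;
}

Whether something like this can be made cheap enough for the order-0 case is exactly the open question above, since it adds per-pte work to the inner loop.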