From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <3fcfec8f-f8c8-10d3-0b82-9c0835716286@arm.com>
Date: Thu, 18 May 2023 12:23:57 +0100
Subject: Re: [RFC v2 PATCH 00/17] variable-order, large folios for anonymous memory
To: David Hildenbrand, Andrew Morton, "Matthew Wilcox (Oracle)", Yu Zhao,
 "Yin, Fengwei"
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
References: <20230414130303.2345383-1-ryan.roberts@arm.com>
 <13969045-4e47-ae5d-73f4-dad40fe631be@arm.com>
 <568b5b73-f0e9-c385-f628-93e45825fb7b@redhat.com>
 <6857912b-4afd-7fb5-b11b-ebe0e32298c2@arm.com>
 <41549336-e1a5-9929-f3a2-5a2252837679@redhat.com>
From: Ryan Roberts <ryan.roberts@arm.com>
In-Reply-To: <41549336-e1a5-9929-f3a2-5a2252837679@redhat.com>

On 17/05/2023 14:58, David Hildenbrand wrote:
> On 26.04.23 12:41, Ryan Roberts wrote:
>> Hi David,
>>
>> On 17/04/2023 16:44, David Hildenbrand wrote:
>>
>>>>>>> So what should be safe is replacing all sub-pages of a folio that are
>>>>>>> marked "maybe shared" by a new folio under PT lock. However, I wonder
>>>>>>> if it's really worth the complexity. For THP we were happy so far to
>>>>>>> *not* optimize this, implying that maybe we shouldn't worry that
>>>>>>> heavily about optimizing the fork() case for now.
>>>>>>
>>>>>> I don't have the exact numbers to hand, but I'm pretty sure I remember
>>>>>> that enabling large copies contributed a measurable amount to the
>>>>>> performance improvement. (Certainly, the zero-page copy case is
>>>>>> definitely a big contributor.) I don't have access to the HW at the
>>>>>> moment, but I can rerun later with and without to double check.
>>>>>
>>>>> In which test exactly? Some micro-benchmark?
>>>>
>>>> The kernel compile benchmark that I quoted numbers for in the cover
>>>> letter. I have some trace points (not part of the submitted series) that
>>>> tell me how many mappings of each order we get for each code path. I'm
>>>> pretty sure I remember all 4 of these code paths contributing
>>>> non-negligible amounts.
>>>
>>> Interesting! It would be great to see if there is an actual difference
>>> after patch #10 was applied without the other COW replacement.
>>>
>>
>> Sorry about the delay. I now have some numbers for this...
>>
>
> Ditto, I'm swamped :) Thanks for running these benchmarks!
>
> As LSF/MM reminded me again of this topic ...
>
>> I rearranged the patch order so that all the "utility" stuff (new rmap
>> functions, etc.) comes first (1, 2, 3, 4, 5, 8, 9, 11, 12, 13), followed by
>> a couple of general improvements (7, 17), which should be dormant until we
>> have the final patches, then finally (6, 10, 14, 15), which implement large
>> anon folios in the allocate, reuse, copy-non-zero and copy-zero paths
>> respectively. I've dropped patch 16 and fixed the copy-exclusive bug you
>> spotted (by ensuring we never replace an exclusive page).
>>
>> I've measured performance at the following locations in the patch set:
>>
>> - baseline: none of my patches applied
>> - utility: utility and general improvement patches applied
>> - alloc: utility + 6
>> - reuse: utility + 6 + 10
>> - copy: utility + 6 + 10 + 14
>> - zero-alloc: utility + 6 + 10 + 14 + 15
>>
>> The test is `make defconfig && time make -jN Image` for a clean checkout of
>> v6.3-rc3. The first result is thrown away and the next 3 are kept. I saw
>> some per-boot variance (probably down to kaslr, etc.), so I booted each
>> kernel 7 times for a total of 3x7=21 samples per kernel, then took the mean:
>>
>> jobs=8:
>>
>> | label      |   real |   user |   kernel |
>> |:-----------|-------:|-------:|---------:|
>> | baseline   |   0.0% |   0.0% |     0.0% |
>> | utility    |  -2.7% |  -2.8% |    -3.1% |
>> | alloc      |  -6.0% |  -2.3% |   -24.1% |
>> | reuse      |  -9.5% |  -5.8% |   -28.5% |
>> | copy       | -10.6% |  -6.9% |   -29.4% |
>> | zero-alloc |  -9.2% |  -5.1% |   -29.8% |
>>
>> jobs=160:
>>
>> | label      |   real |   user |   kernel |
>> |:-----------|-------:|-------:|---------:|
>> | baseline   |   0.0% |   0.0% |     0.0% |
>> | utility    |  -1.8% |  -0.0% |    -7.7% |
>> | alloc      |  -6.0% |   1.8% |   -20.9% |
>> | reuse      |  -7.8% |  -1.6% |   -24.1% |
>> | copy       |  -7.8% |  -2.5% |   -26.8% |
>> | zero-alloc |  -7.7% |   1.5% |   -29.4% |
>>
>> So it looks like patch 10 (reuse) is making a difference, but copy and
>> zero-alloc are not adding a huge amount, as you hypothesized. Personally I
>> would prefer not to drop those patches though, as they will all help towards
>> utilization of contiguous PTEs on arm64, which is the second part of the
>> change that I'm now working on.
>
> Yes, pretty much what I expected :) I can only suggest to
>
> (1) Make the initial support as simple and minimal as possible. That
>     means, strip anything that is not absolutely required. That is,
>     exclude *at least* copy and zero-alloc. We can always add selected
>     optimizations on top later.
>
>     You'll do yourself a favor to get as much review coverage, faster
>     review for inclusion, and fewer chances of nasty BUGs.

As I'm building out the testing capability, I'm seeing that with different HW
configs and workloads things move a bit, and zero-alloc can account for up to
1% in some cases. Copy is still pretty marginal, but I wonder if we might see
more value from it on Android, where the Zygote is constantly forked?

Regardless, I appreciate your point about making the initial contribution as
small and simple as possible, so as I get closer to posting a v1, I'll keep it
in mind and make sure I follow your advice.

Thanks,
Ryan

> (2) Keep the COW logic simple. We've had too many issues
>     in that area for my taste already. As 09854ba94c6a ("mm:
>     do_wp_page() simplification") from Linus puts it: "Simplify,
>     simplify, simplify.". If it doesn't add significant benefit, rather
>     keep it simple.
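For completeness, the aggregation behind the tables above (per boot, throw away
the first build as a warm-up and keep the next three; seven boots per kernel;
then compare means against baseline) could be scripted roughly as below. This is
only an illustrative sketch: the CSV layout and file names are invented for the
example, not taken from the actual test harness.

#!/usr/bin/env python3
# Sketch: turn per-run kernel-build timings into mean % change vs baseline.
# Assumes one "<label>.csv" per kernel, with a header row and columns
# boot,run,real,user,kernel (times in seconds); per boot, run 0 is the
# warm-up build and is discarded.
import csv
from collections import defaultdict
from statistics import mean

LABELS = ["baseline", "utility", "alloc", "reuse", "copy", "zero-alloc"]

def load(label):
    """Return {metric: [samples]} with the first run of each boot dropped."""
    samples = defaultdict(list)
    with open(f"{label}.csv") as f:
        for row in csv.DictReader(f):
            if int(row["run"]) == 0:          # warm-up build, throw away
                continue
            for metric in ("real", "user", "kernel"):
                samples[metric].append(float(row[metric]))
    return samples

base = {m: mean(v) for m, v in load("baseline").items()}
print(f"| {'label':<10} | {'real':>6} | {'user':>6} | {'kernel':>6} |")
for label in LABELS:
    cur = {m: mean(v) for m, v in load(label).items()}
    print("| {:<10} | {:5.1f}% | {:5.1f}% | {:6.1f}% |".format(
        label,
        *[100.0 * (cur[m] - base[m]) / base[m]
          for m in ("real", "user", "kernel")]))

Fed one CSV per label, it prints a table in the same shape as the ones above.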
>
>> For the final config ("zero-alloc") I also collected stats on how many
>> operations each of the 4 paths was performing, using ftrace and histograms
>> ("pnr" is the number of pages allocated/reused/copied, and "fnr" is the
>> number of pages in the source folio):
>>
>> do_anonymous_page:
>>
>> { pnr:          1 } hitcount:    2749722
>> { pnr:          4 } hitcount:     387832
>> { pnr:          8 } hitcount:     409628
>> { pnr:         16 } hitcount:    4296115
>>
>> pages: 76315914
>> faults: 7843297
>> pages per fault: 9.7
>>
>> wp_page_reuse (anon):
>>
>> { pnr:          1, fnr:          1 } hitcount:      47887
>> { pnr:          3, fnr:          4 } hitcount:          2
>> { pnr:          4, fnr:          4 } hitcount:       6131
>> { pnr:          6, fnr:          8 } hitcount:          1
>> { pnr:          7, fnr:          8 } hitcount:         10
>> { pnr:          8, fnr:          8 } hitcount:       3794
>> { pnr:          1, fnr:         16 } hitcount:         36
>> { pnr:          2, fnr:         16 } hitcount:         23
>> { pnr:          3, fnr:         16 } hitcount:          5
>> { pnr:          4, fnr:         16 } hitcount:          9
>> { pnr:          5, fnr:         16 } hitcount:          8
>> { pnr:          6, fnr:         16 } hitcount:          9
>> { pnr:          7, fnr:         16 } hitcount:          3
>> { pnr:          8, fnr:         16 } hitcount:         24
>> { pnr:          9, fnr:         16 } hitcount:          2
>> { pnr:         10, fnr:         16 } hitcount:          1
>> { pnr:         11, fnr:         16 } hitcount:          9
>> { pnr:         12, fnr:         16 } hitcount:          2
>> { pnr:         13, fnr:         16 } hitcount:         27
>> { pnr:         14, fnr:         16 } hitcount:          2
>> { pnr:         15, fnr:         16 } hitcount:         54
>> { pnr:         16, fnr:         16 } hitcount:       6673
>>
>> pages: 211393
>> faults: 64712
>> pages per fault: 3.3
>>
>> wp_page_copy (anon):
>>
>> { pnr:          1, fnr:          1 } hitcount:      81242
>> { pnr:          4, fnr:          4 } hitcount:       5974
>> { pnr:          1, fnr:          8 } hitcount:          1
>> { pnr:          4, fnr:          8 } hitcount:          1
>> { pnr:          8, fnr:          8 } hitcount:      12933
>> { pnr:          1, fnr:         16 } hitcount:         19
>> { pnr:          4, fnr:         16 } hitcount:          3
>> { pnr:          8, fnr:         16 } hitcount:          7
>> { pnr:         16, fnr:         16 } hitcount:       4106
>>
>> pages: 274390
>> faults: 104286
>> pages per fault: 2.6
>>
>> wp_page_copy (zero):
>>
>> { pnr:          1 } hitcount:     178699
>> { pnr:          4 } hitcount:      14498
>> { pnr:          8 } hitcount:      23644
>> { pnr:         16 } hitcount:     257940
>>
>> pages: 4552883
>> faults: 474781
>> pages per fault: 9.6
>
> I'll have to set aside more time to digest these values :)
>
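In case it helps with digesting them: the per-path summary lines follow
directly from the buckets, with pages = sum(pnr * hitcount) and faults =
sum(hitcount). A minimal Python sketch, plugging in the do_anonymous_page
buckets quoted above:

# Sketch: derive the "pages / faults / pages per fault" summaries from the
# hist-trigger buckets. Keys are pnr values, values are hitcounts; these are
# the do_anonymous_page numbers from the histogram above.
do_anonymous_page = {1: 2749722, 4: 387832, 8: 409628, 16: 4296115}

def summarise(buckets):
    faults = sum(buckets.values())                           # one hit per fault
    pages = sum(pnr * hits for pnr, hits in buckets.items())
    return pages, faults, pages / faults

pages, faults, ppf = summarise(do_anonymous_page)
print(f"pages: {pages}")               # 76315914
print(f"faults: {faults}")             # 7843297
print(f"pages per fault: {ppf:.1f}")   # 9.7

The same arithmetic reproduces the other three paths (e.g. 4552883 pages over
474781 faults gives the 9.6 pages per fault quoted for the zero-copy path).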