From: Ryan Roberts <ryan.roberts@arm.com>
Date: Mon, 17 Apr 2023 16:38:06 +0100
Subject: Re: [RFC v2 PATCH 00/17] variable-order, large folios for anonymous memory
To: David Hildenbrand, Andrew Morton, "Matthew Wilcox (Oracle)", Yu Zhao, "Yin, Fengwei"
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
References: <20230414130303.2345383-1-ryan.roberts@arm.com> <13969045-4e47-ae5d-73f4-dad40fe631be@arm.com> <568b5b73-f0e9-c385-f628-93e45825fb7b@redhat.com>
In-Reply-To: <568b5b73-f0e9-c385-f628-93e45825fb7b@redhat.com>
On 17/04/2023 15:05, David Hildenbrand wrote:
> [...]
>
>>> Just a note (that you maybe already know) that we have to be a bit
>>> careful in the wp_copy path with replacing sub-pages that are marked
>>> exclusive.
>>
>> Ahh, no I wasn't aware of this - thanks for taking the time to explain
>> it. I think I have a bug.
>>
>> (I'm guessing the GUP fast path assumes that if it sees an exclusive
>> page then that page can't go away? And if I then wp_copy it, I'm
>> breaking the assumption? But surely user space can munmap it at any
>> time and the consequences are similar? It's probably clear that I
>> don't know much about the GUP implementation details...)
>
> If GUP finds a read-only PTE pointing at an exclusive subpage, it
> assumes that this page cannot randomly be replaced by core MM due to
> COW. See gup_must_unshare(). So it can go ahead and pin the page. As
> long as user space doesn't do something stupid with the mapping
> (MADV_DONTNEED, munmap()) the pinned page must correspond to the
> mapped page.
>
> If GUP finds a writable PTE, it assumes that this page cannot randomly
> be replaced by core MM due to COW -- because writable implies
> exclusive. See, for example, the VM_BUG_ON_PAGE() in
> follow_page_pte(). So, similarly, GUP can simply go ahead and pin the
> page.
>
> GUP-fast runs lockless, not even taking the PT locks. It syncs against
> concurrent fork() using a special seqlock, and essentially unpins
> whatever it temporarily pinned when it detects that fork() was running
> concurrently. But it might result in some pages temporarily being
> flagged as "maybe pinned".
>
> In other cases (!fork()), GUP-fast synchronizes against concurrent
> sharing (KSM) or unmapping (migration, swapout) that implies clearing
> of the PG_anon_exclusive flag of the subpage by first unmapping the
> PTE and conditionally remapping it. See mm/ksm.c:write_protect_page()
> as an example for the sharing side (especially: if
> page_try_share_anon_rmap() fails because the page may be pinned).
>
> Long story short: replacing a r-o "maybe shared" (!exclusive) PTE is
> easy. Replacing an exclusive PTE (including writable PTEs) requires
> some work to sync with GUP-fast and goes rather in the "maybe just
> don't bother" territory.

Yep agreed. I'll plan to fix this by adding the constraint that all
pages of the copy range (calc_anon_folio_order_copy()) must be "maybe
shared".
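
Roughly, I'm thinking of something like the below (sketch only;
PageAnonExclusive() is the real page flag test, but the helper and its
calling convention are made up):

    /*
     * Sketch of the planned constraint, not the actual series code:
     * refuse to batch-copy a range if any subpage in it is still
     * exclusive, since exclusive subpages may be pinned concurrently by
     * GUP-fast and must go through the usual single-page unshare path.
     * Assumes the PT lock is held and that page .. page + nr - 1 all
     * sit within one folio.
     */
    static bool anon_copy_range_maybe_shared(struct page *page, int nr)
    {
    	int i;

    	for (i = 0; i < nr; i++) {
    		if (PageAnonExclusive(page + i))
    			return false; /* exclusive: fall back to order-0 copy */
    	}
    	return true;
    }

calc_anon_folio_order_copy() would then presumably reduce the order
until this check passes, ultimately falling back to a single-page copy.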
>
>> My current patch always prefers reuse over copy, and expands the size
>> of the reuse to the biggest set of pages that are all exclusive
>> (determined either by the presence of the anon_exclusive flag or from
>> the refcount), and covered by the same folio (and a few other bounds
>> constraints - see calc_anon_folio_range_reuse()).
>>
>> If I determine I must copy (because the "anchor" page is not
>> exclusive), then I determine the size of the copy region based on a
>> few constraints (see calc_anon_folio_order_copy()). But I think you
>> are saying that no pages in that region are allowed to have the
>> anon_exclusive flag set? In which case, this could be fixed by adding
>> that check in the function.
>
> Yes, changing a PTE that points at an anonymous subpage that has the
> "exclusive" flag set requires more care.
>
>>> Currently, we always only replace a single shared anon (sub)page by
>>> a fresh exclusive base-page during a write-fault/unsharing. As the
>>> sub-page is already marked "maybe shared", it cannot get pinned
>>> concurrently and everybody is happy.
>>
>> When you say "maybe shared", is that determined by the absence of the
>> "exclusive" flag?
>
> Yes. Semantics of PG_anon_exclusive are "exclusive" vs. "maybe
> shared". Once "maybe shared", we must only go back to "exclusive" (set
> the flag) if we are sure that there are no other references to the
> page.
>
>>> If you now decide to replace more subpages, you have to be careful
>>> that none of them are still exclusive -- because they could get
>>> pinned concurrently and replacing them would result in memory
>>> corruptions.
>>>
>>> There are scenarios (most prominently: MADV_WIPEONFORK), but also
>>> failed partial fork(), that could result in something like that.
>>
>> Are there any test cases that stress the kernel in this area that I
>> could use to validate my fix?
>
> tools/testing/selftests/mm/cow.c does extensive tests (including some
> MADV_DONTFORK -- that's what I actually meant -- and partial mremap
> tests), but mostly focuses on ordinary base pages (order-0), THP, and
> hugetlb.
>
> We don't have any "GUP-fast racing with fork()" tests or similar yet
> (tests that rely on races are not good candidates for selftests).
>
> We might want to extend tools/testing/selftests/mm/cow.c to test for
> some of the cases you extend.
>
> We may also change the detection of THP (I think, by luck, it would
> currently also test your patches to some degree, the way it tests for
> THP):
>
> if (!pagemap_is_populated(pagemap_fd, mem + pagesize)) {
>     ksft_test_result_skip("Did not get a THP populated\n");
>     goto munmap;
> }
>
> Would have to be, for example,
>
> if (!pagemap_is_populated(pagemap_fd, mem + thpsize - pagesize)) {
>     ksft_test_result_skip("Did not get a THP populated\n");
>     goto munmap;
> }
>
> Because we touch the first PTE in a PMD and want to test if core-mm
> gave us a full THP (last PTE also populated).
>
> Extending the tests to cover other anon THP sizes could work by
> aligning a VMA to THP/2 size (so we're sure we don't get a full THP),
> and then testing if we get more PTEs populated -> your code active.

Thanks. I'll run all these and make sure they pass and look at adding
new variants for the next rev.
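
For the THP/2 variant, the VMA carving might look something like this
(untested sketch; thpsize is assumed to come from
/sys/kernel/mm/transparent_hugepage/hpage_pmd_size, and the helper name
is made up):

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>

    /*
     * Sketch: map a THP/2-sized, THP/2-aligned VMA whose interior
     * crosses no PMD boundary, so the kernel cannot back it with a full
     * THP. Any contiguous PTE batching observed inside it would then
     * come from the new large anon folio path.
     */
    static char *mmap_half_thp(size_t thpsize)
    {
    	size_t half = thpsize / 2;
    	char *map, *keep;

    	/* Over-allocate so a THP-aligned address certainly exists inside. */
    	map = mmap(NULL, 2 * thpsize, PROT_READ | PROT_WRITE,
    		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    	if (map == MAP_FAILED)
    		return NULL;

    	/* Keep the second half of an aligned THP: half-aligned, but
    	 * deliberately not THP-aligned. */
    	keep = (char *)(((uintptr_t)map + thpsize - 1) &
    			~((uintptr_t)thpsize - 1));
    	keep += half;

    	/* Trim everything around the region we want to keep. */
    	munmap(map, keep - map);
    	munmap(keep + half, map + 2 * thpsize - (keep + half));
    	return keep;
    }

Touching the first page of the returned region and then counting
populated PTEs via the pagemap should distinguish the large-folio path
from plain order-0 faults.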
>
>>> Further, we have to be a bit careful regarding replacing ranges that
>>> are backed by different anon pages (for example, due to fork()
>>> deciding to copy some sub-pages of a PTE-mapped folio instead of
>>> sharing all sub-pages).
>>
>> I don't understand this statement; do you mean "different anon
>> _folios_"? I am scanning the page table to expand the region that I
>> reuse/copy and, as part of that scan, make sure that I only cover a
>> single folio. So I think I conform here - the scan would give up once
>> it gets to the hole.
>
> During fork(), what could happen (temporary detection of pinned page
> resulting in a copy) is something weird like:
>
> PTE 0: subpage0 of anon page #1 (maybe shared)
> PTE 1: subpage1 of anon page #1 (maybe shared)
> PTE 2: anon page #2 (exclusive)
> PTE 3: subpage2 of anon page #1 (maybe shared)

Hmm... I can see how this could happen if you mremap PTE2 to PTE3, then
mmap something new in PTE2. But I don't see how it happens at fork. For
PTE3, did you mean subpage _3_?

> Of course, any combination of the above.
>
> Further, with mremap() we might get completely crazy layouts, randomly
> mapping sub-pages of anon pages, mixed with other sub-pages or
> base-page folios.
>
> Maybe it's all handled already by your code, just pointing out which
> kind of mess we might get :)

Yep, this is already handled; the scan to expand the range ensures that
all the PTEs map to the expected contiguous pages in the same folio.
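
In outline, the expansion scan amounts to something like this
(simplified sketch, not the actual series code; assumes the PT lock is
held and that pte points at the first entry of interest):

    /*
     * Sketch: count how many consecutive PTEs map the consecutive
     * subpages we expect from this folio. Holes, non-contiguous PFNs
     * or a different folio simply bound the batch.
     */
    static int anon_folio_batch_len(struct folio *folio, struct page *page,
    				    pte_t *pte, int max_nr)
    {
    	unsigned long pfn = page_to_pfn(page);
    	int i;

    	for (i = 0; i < max_nr; i++, pte++, pfn++) {
    		pte_t entry = ptep_get(pte);

    		if (!pte_present(entry) || pte_pfn(entry) != pfn)
    			break;
    		if (pfn_folio(pfn) != folio)
    			break;
    	}
    	return i;
    }

So a layout like the one above just bounds the batch: the scan stops at
PTE 2 because the PFN is no longer the one expected from the folio.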
>
>>> So what should be safe is replacing all sub-pages of a folio that
>>> are marked "maybe shared" by a new folio under PT lock. However, I
>>> wonder if it's really worth the complexity. For THP we were happy so
>>> far to *not* optimize this, implying that maybe we shouldn't worry
>>> that heavily about optimizing the fork() case for now.
>>
>> I don't have the exact numbers to hand, but I'm pretty sure I
>> remember enabling large copies was contributing a measurable amount
>> to the performance improvement. (Certainly, the zero-page copy case
>> is definitely a big contributor.) I don't have access to the HW at
>> the moment but can rerun later with and without to double check.
>
> In which test exactly? Some micro-benchmark?

The kernel compile benchmark that I quoted numbers for in the cover
letter. I have some trace points (not part of the submitted series)
that tell me how many mappings of each order we get for each code path.
I'm pretty sure I remember all of these 4 code paths contributing
non-negligible amounts.

>
>>> One optimization one could think of instead (that I raised
>>> previously in another context) is the detection of exclusivity after
>>> fork()+exit in the child (IOW, only the parent continues to exist).
>>> Once PG_anon_exclusive was cleared for all sub-pages of the
>>> THP-mapped folio during fork(), we'd always decide to copy instead
>>> of reuse (because page_count() > 1, as the folio is PTE mapped).
>>> Scanning the surrounding page table, if it makes sense (e.g.,
>>> page_count() <= folio_nr_pages()), to test if all page references
>>> are from the current process would allow for reusing the folio
>>> (setting PG_anon_exclusive) for the sub-pages. The smaller the folio
>>> order, the cheaper this "scan surrounding PTEs" scan is. For THP,
>>> which are usually PMD-mapped even after fork()+exit, we didn't add
>>> this optimization.
>>
>> Yes, I have already implemented this in my series; see patch 10.
>
> Oh, good! That's the most important part.
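
For reference, the idea is roughly as follows (illustrative sketch
only, not the actual patch 10 code; assumes the PT lock is held and
that pte points at the first PTE of the candidate range):

    /*
     * Sketch: decide whether a "maybe shared" PTE-mapped folio is in
     * fact exclusive to us again, e.g. after fork() + exit of the
     * child, so the write fault can reuse it instead of copying.
     */
    static bool anon_folio_reusable(struct folio *folio, pte_t *pte)
    {
    	long nr = folio_nr_pages(folio);
    	long our_maps = 0;
    	long i;

    	/* Cheap pre-filter: more references than subpages means that
    	 * someone else (another process, a GUP pin) must hold one. */
    	if (folio_ref_count(folio) > nr)
    		return false;

    	/* Count how many of the surrounding PTEs map this folio. */
    	for (i = 0; i < nr; i++) {
    		pte_t entry = ptep_get(pte + i);

    		if (pte_present(entry) &&
    		    pfn_folio(pte_pfn(entry)) == folio)
    			our_maps++;
    	}

    	/* Every reference accounted for by our own mappings. */
    	return folio_ref_count(folio) == our_maps;
    }

If the check succeeds, the fault handler can set PG_anon_exclusive on
the sub-pages and reuse them in place rather than copying.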