References: <20220315104741.63071-1-david@redhat.com>
 <20220315104741.63071-12-david@redhat.com>
 <2b280ac6-9d39-58c5-b255-f39b1dac607b@redhat.com>
In-Reply-To: <2b280ac6-9d39-58c5-b255-f39b1dac607b@redhat.com>
From: Yang Shi
Date: Fri, 18 Mar 2022 13:29:18 -0700
Subject: Re: [PATCH v2 11/15] mm: remember exclusively mapped anonymous pages with PG_anon_exclusive
To: David Hildenbrand
Cc: linux-kernel@vger.kernel.org, Andrew Morton, Hugh Dickins,
    Linus Torvalds, David Rientjes, Shakeel Butt, John Hubbard,
    Jason Gunthorpe, Mike Kravetz, Mike Rapoport, Kirill A.
    Shutemov, Matthew Wilcox, Vlastimil Babka, Jann Horn, Michal Hocko,
    Nadav Amit, Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu,
    Donald Dutile, Christoph Hellwig, Oleg Nesterov, Jan Kara, Liang Zhang,
    Pedro Gomes, Oded Gabbay, linux-mm@kvack.org

On Thu, Mar 17, 2022 at 2:06 AM David Hildenbrand wrote:
>
> On 16.03.22 22:23, Yang Shi wrote:
> > On Tue, Mar 15, 2022 at 3:52 AM David Hildenbrand wrote:
> >>
> >> Let's mark exclusively mapped anonymous pages with PG_anon_exclusive as
> >> exclusive, and use that information to make GUP pins reliable and stay
> >> consistent with the page mapped into the page table even if the
> >> page table entry gets write-protected.
> >>
> >> With that information at hand, we can extend our COW logic to always
> >> reuse anonymous pages that are exclusive. For anonymous pages that
> >> might be shared, the existing logic applies.
> >>
> >> As already documented, PG_anon_exclusive is usually only expressive in
> >> combination with a page table entry. Especially PTE vs. PMD-mapped
> >> anonymous pages require more thought, some examples: due to mremap() we
> >> can easily have a single compound page PTE-mapped into multiple page
> >> tables exclusively in a single process -- multiple page table locks
> >> apply. Further, due to MADV_WIPEONFORK we might not necessarily
> >> write-protect all PTEs, and only some subpages might be pinned. Long
> >> story short: once PTE-mapped, we have to track information about
> >> exclusivity per sub-page, but until then, we can just track it for the
> >> compound page in the head page, without having to update a whole bunch
> >> of subpages all of the time for a simple PMD mapping of a THP.
> >>
> >> For simplicity, this commit mostly talks about "anonymous pages", while
> >> for THP it's actually "the part of an anonymous folio referenced via
> >> a page table entry".
> >>
> >> To not spill PG_anon_exclusive code all over the mm code-base, we let
> >> the anon rmap code handle all PG_anon_exclusive logic it can easily
> >> handle.
> >>
> >> If a writable, present page table entry points at an anonymous
> >> (sub)page, that (sub)page must be PG_anon_exclusive. If GUP wants to
> >> take a reliable pin (FOLL_PIN) on an anonymous page referenced via a
> >> present page table entry, it must only pin if PG_anon_exclusive is set
> >> for the mapped (sub)page.
> >>
> >> This commit doesn't adjust GUP, so this is only implicitly handled for
> >> FOLL_WRITE; follow-up commits will teach GUP to also respect it for
> >> FOLL_PIN without FOLL_WRITE, to make all GUP pins of anonymous pages
> >> fully reliable.
> >>
> >> Whenever an anonymous page is to be shared (fork(), KSM), or when
> >> temporarily unmapping an anonymous page (swap, migration), the relevant
> >> PG_anon_exclusive bit has to be cleared to mark the anonymous page
> >> possibly shared. Clearing will fail if there are GUP pins on the page:
> >>
> >> * For fork(), this means having to copy the page and not being able to
> >>   share it. fork() protects against concurrent GUP using the PT lock
> >>   and the src_mm->write_protect_seq.
> >> * For KSM, this means sharing will fail. For swap this means unmapping
> >>   will fail. For migration this means migration will fail early. All
> >>   three cases protect against concurrent GUP using the PT lock and a
> >>   proper clear/invalidate+flush of the relevant page table entry.
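
[Aside, just to check my understanding of the "clearing will fail if there
are GUP pins" rule above: conceptually the marking step boils down to
something like the sketch below. This is only an illustration of the idea,
not a hunk from this series; the helper name is made up, and it assumes the
usual Set/Clear-style accessors exist for PG_anon_exclusive plus the
existing page_maybe_dma_pinned() heuristic.]

/*
 * Illustrative sketch only: try to mark an exclusive anonymous (sub)page
 * as possibly shared.  Callers hold the page table lock and have already
 * cleared/invalidated the page table entry, so no new GUP pins can be
 * taken through this mapping while we look at the pin state.
 */
static inline int sketch_try_share_anon_page(struct page *page)
{
	/* A page that may have GUP pins must remain exclusive. */
	if (page_maybe_dma_pinned(page))
		return -EBUSY;

	/* From here on, fork()/KSM/swap/migration may share the page. */
	ClearPageAnonExclusive(page);
	return 0;
}
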
> >>
> >> This fixes memory corruptions reported for FOLL_PIN | FOLL_WRITE, when
> >> a pinned page gets mapped R/O and the successive write fault ends up
> >> replacing the page instead of reusing it. It improves the situation for
> >> O_DIRECT/vmsplice/... that still use FOLL_GET instead of FOLL_PIN,
> >> if fork() is *not* involved; however, swapout and fork() are still
> >> problematic. Properly using FOLL_PIN instead of FOLL_GET for these
> >> GUP users will fix the issue for them.
> >>
> >> I. Details about basic handling
> >>
> >> I.1. Fresh anonymous pages
> >>
> >> page_add_new_anon_rmap() and hugepage_add_new_anon_rmap() will mark the
> >> given page exclusive via __page_set_anon_rmap(exclusive=1). As that is
> >> the mechanism by which fresh anonymous pages come to life (besides
> >> migration code where we copy the page->mapping), all fresh anonymous
> >> pages will start out as exclusive.
> >>
> >> I.2. COW reuse handling of anonymous pages
> >>
> >> When a COW handler stumbles over a (sub)page that's marked exclusive,
> >> it simply reuses it. Otherwise, the handler tries harder under page
> >> lock to detect if the (sub)page is exclusive and can be reused. If
> >> exclusive, page_move_anon_rmap() will mark the given (sub)page
> >> exclusive.
> >>
> >> Note that hugetlb code does not yet check for PageAnonExclusive(), as
> >> it still uses the old COW logic that is prone to the COW security
> >> issue, because hugetlb code cannot really tolerate unnecessary/wrong
> >> COW as huge pages are a scarce resource.
> >>
> >> I.3. Migration handling
> >>
> >> try_to_migrate() has to try marking an exclusive anonymous page shared
> >> via page_try_share_anon_rmap(). If that fails because there are GUP
> >> pins on the page, unmapping fails. migrate_vma_collect_pmd() and
> >> __split_huge_pmd_locked() are handled similarly.
> >>
> >> Writable migration entries implicitly point at shared anonymous pages.
> >> For readable migration entries that information is stored via a new
> >> "readable-exclusive" migration entry, specific to anonymous pages.
> >>
> >> When restoring a migration entry in remove_migration_pte(), information
> >> about exclusivity is detected via the migration entry type, and
> >> RMAP_EXCLUSIVE is set accordingly for
> >> page_add_anon_rmap()/hugepage_add_anon_rmap() to restore that
> >> information.
> >>
> >> I.4. Swapout handling
> >>
> >> try_to_unmap() has to try marking the mapped page possibly shared via
> >> page_try_share_anon_rmap(). If that fails because there are GUP pins
> >> on the page, unmapping fails. For now, information about exclusivity
> >> is lost. In the future, we might want to remember that information in
> >> the swap entry in some cases; however, that requires more thought,
> >> care, and a way to store the information in swap entries.
> >>
> >> I.5. Swapin handling
> >>
> >> do_swap_page() will never stumble over exclusive anonymous pages in
> >> the swap cache, as try_to_migrate() prohibits that. do_swap_page()
> >> always has to detect manually if an anonymous page is exclusive and
> >> has to set RMAP_EXCLUSIVE for page_add_anon_rmap() accordingly.
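
[Another aside on I.3, mostly to make sure I'm reading it right: restoring
the exclusivity information when a migration entry is removed would then
conceptually look like the snippet below. This is just a sketch of the
idea; the entry-type helper name and the exact call site details are my
own assumptions, not copied from the series.]

	/*
	 * Conceptual sketch for remove_migration_pte(): the migration
	 * entry type tells us whether the page was exclusive when it was
	 * unmapped, and the anon rmap code re-establishes that state.
	 */
	rmap_t rmap_flags = RMAP_NONE;

	/* Hypothetical helper name for the new "readable-exclusive" type. */
	if (is_readable_exclusive_migration_entry(entry))
		rmap_flags |= RMAP_EXCLUSIVE;

	if (PageAnon(new))
		page_add_anon_rmap(new, vma, pvmw.address, rmap_flags);
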
> >>
> >> I.6. THP handling
> >>
> >> __split_huge_pmd_locked() has to move the information about exclusivity
> >> from the PMD to the PTEs.
> >>
> >> a) In case we have a readable-exclusive PMD migration entry, simply
> >> insert readable-exclusive PTE migration entries.
> >>
> >> b) In case we have a present PMD entry and we don't want to freeze
> >> ("convert to migration entries"), simply forward PG_anon_exclusive to
> >> all sub-pages; no need to temporarily clear the bit.
> >>
> >> c) In case we have a present PMD entry and want to freeze, handle it
> >> similarly to try_to_migrate(): try marking the page shared first. In
> >> case we fail, we ignore the "freeze" instruction and simply split
> >> ordinarily. try_to_migrate() will properly fail because the THP is
> >> still mapped via PTEs.
>
> Hi,
>
> thanks for the review!
>
> >
> > How come try_to_migrate() will fail? The pvmw walk afterwards will find
> > those PTEs and convert them to migration entries anyway, IIUC.
> >
>
> It will run into that code:
>
> >> @@ -1903,6 +1938,15 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
> >>                         page_vma_mapped_walk_done(&pvmw);
> >>                         break;
> >>                 }
> >> +               VM_BUG_ON_PAGE(pte_write(pteval) && PageAnon(page) &&
> >> +                              !anon_exclusive, page);
> >> +               if (anon_exclusive &&
> >> +                   page_try_share_anon_rmap(subpage)) {
> >> +                       set_pte_at(mm, address, pvmw.pte, pteval);
> >> +                       ret = false;
> >> +                       page_vma_mapped_walk_done(&pvmw);
> >> +                       break;
> >> +               }
>
> and similarly fail the page_try_share_anon_rmap(), at which point
> try_to_migrate() stops and the caller will still observe a
> "page_mapped() == true".

Thanks, I missed that. Yes, the page will still be mapped. This should
trigger the VM_WARN_ON_ONCE in unmap_page(). If this change makes that
happen more often, we may want to consider removing the warning even
though it is "once", since seeing a still-mapped page may become a normal
case (and once DIO is switched to FOLL_PIN, it may happen even more
often). Anyway, we don't have to remove it right now.

> --
> Thanks,
>
> David / dhildenb
>
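
P.S. For anyone else reading along, the warning I'm referring to is the
sanity check at the end of unmap_page() during THP split, which IIRC is
essentially the line below (quoted from memory, so treat it as approximate
rather than the exact current code):

	/*
	 * mm/huge_memory.c, unmap_page(): by this point try_to_migrate()
	 * (or try_to_unmap()) should have removed every mapping.
	 */
	VM_WARN_ON_ONCE_PAGE(page_mapped(page), page);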