Subject: Re: [PATCH v4 12/17] mm: remember exclusively mapped anonymous pages with PG_anon_exclusive
To: David Hildenbrand
Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes, Shakeel Butt,
	John Hubbard, Jason Gunthorpe, Mike Kravetz, Mike Rapoport, Yang Shi,
Shutemov" , Matthew Wilcox , Vlastimil Babka , Jann Horn , Michal Hocko , Nadav Amit , Rik van Riel , Roman Gushchin , Andrea Arcangeli , Peter Xu , Donald Dutile , Christoph Hellwig , Oleg Nesterov , Jan Kara , Liang Zhang , Pedro Gomes , Oded Gabbay , References: <20220428083441.37290-1-david@redhat.com> <20220428083441.37290-13-david@redhat.com> From: Miaohe Lin Message-ID: <90dd6a93-4500-e0de-2bf0-bf522c311b0c@huawei.com> Date: Tue, 6 Dec 2022 11:05:14 +0800 User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101 Thunderbird/78.6.0 MIME-Version: 1.0 In-Reply-To: <20220428083441.37290-13-david@redhat.com> Content-Type: text/plain; charset="utf-8" Content-Language: en-US Content-Transfer-Encoding: 7bit X-Originating-IP: [10.174.151.185] X-ClientProxiedBy: dggems705-chm.china.huawei.com (10.3.19.182) To canpemm500002.china.huawei.com (7.192.104.244) X-CFilter-Loop: Reflected X-Rspam-User: X-Spamd-Result: default: False [-4.20 / 9.00]; BAYES_HAM(-6.00)[100.00%]; SUSPICIOUS_RECIPS(1.50)[]; SUBJECT_HAS_UNDERSCORES(1.00)[]; DMARC_POLICY_ALLOW(-0.50)[huawei.com,quarantine]; R_SPF_ALLOW(-0.20)[+ip4:45.249.212.187/29]; MIME_GOOD(-0.10)[text/plain]; RCVD_NO_TLS_LAST(0.10)[]; RCPT_COUNT_TWELVE(0.00)[30]; FROM_EQ_ENVFROM(0.00)[]; R_DKIM_NA(0.00)[]; MIME_TRACE(0.00)[0:+]; TO_MATCH_ENVRCPT_SOME(0.00)[]; ARC_SIGNED(0.00)[hostedemail.com:s=arc-20220608:i=1]; MID_RHS_MATCH_FROM(0.00)[]; HAS_XOIP(0.00)[]; RCVD_COUNT_THREE(0.00)[3]; FROM_HAS_DN(0.00)[]; TAGGED_RCPT(0.00)[]; TO_DN_SOME(0.00)[]; ARC_NA(0.00)[] X-Rspamd-Queue-Id: 941B8A0003 X-Rspamd-Server: rspam01 X-Stat-Signature: jgigabdccsdk65wgdey39jzbqhszx4zy X-HE-Tag: 1670295951-594752 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On 2022/4/28 16:34, David Hildenbrand wrote: > Let's mark exclusively mapped anonymous pages with PG_anon_exclusive as > exclusive, and use that information to make GUP pins reliable and stay > consistent with the page mapped into the page table even if the > page table entry gets write-protected. > > With that information at hand, we can extend our COW logic to always > reuse anonymous pages that are exclusive. For anonymous pages that > might be shared, the existing logic applies. > > As already documented, PG_anon_exclusive is usually only expressive in > combination with a page table entry. Especially PTE vs. PMD-mapped > anonymous pages require more thought, some examples: due to mremap() we > can easily have a single compound page PTE-mapped into multiple page tables > exclusively in a single process -- multiple page table locks apply. > Further, due to MADV_WIPEONFORK we might not necessarily write-protect > all PTEs, and only some subpages might be pinned. Long story short: once > PTE-mapped, we have to track information about exclusivity per sub-page, > but until then, we can just track it for the compound page in the head > page and not having to update a whole bunch of subpages all of the time > for a simple PMD mapping of a THP. > > For simplicity, this commit mostly talks about "anonymous pages", while > it's for THP actually "the part of an anonymous folio referenced via > a page table entry". > > To not spill PG_anon_exclusive code all over the mm code-base, we let > the anon rmap code to handle all PG_anon_exclusive logic it can easily > handle. > > If a writable, present page table entry points at an anonymous (sub)page, > that (sub)page must be PG_anon_exclusive. 
> 
> This commit doesn't adjust GUP, so this is only implicitly handled for
> FOLL_WRITE; follow-up commits will teach GUP to also respect it for
> FOLL_PIN without FOLL_WRITE, to make all GUP pins of anonymous pages
> fully reliable.
> 
> Whenever an anonymous page is to be shared (fork(), KSM), or when
> temporarily unmapping an anonymous page (swap, migration), the relevant
> PG_anon_exclusive bit has to be cleared to mark the anonymous page
> possibly shared. Clearing will fail if there are GUP pins on the page:
> * For fork(), this means having to copy the page and not being able to
>   share it. fork() protects against concurrent GUP using the PT lock and
>   the src_mm->write_protect_seq.
> * For KSM, this means sharing will fail. For swap, this means unmapping
>   will fail. For migration, this means migration will fail early. All
>   three cases protect against concurrent GUP using the PT lock and a
>   proper clear/invalidate+flush of the relevant page table entry.
> 
> This fixes memory corruptions reported for FOLL_PIN | FOLL_WRITE, when a
> pinned page gets mapped R/O and the subsequent write fault ends up
> replacing the page instead of reusing it. It improves the situation for
> O_DIRECT/vmsplice/... that still use FOLL_GET instead of FOLL_PIN,
> if fork() is *not* involved; however, swapout and fork() are still
> problematic. Properly using FOLL_PIN instead of FOLL_GET for these
> GUP users will fix the issue for them.
> 

Hi David, sorry for the late response and a possibly inconsequential question. :)

> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 7a71ed679853..5add8bbd47cd 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -4772,7 +4772,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
>  			    is_hugetlb_entry_hwpoisoned(entry))) {
>  			swp_entry_t swp_entry = pte_to_swp_entry(entry);
> 
> -			if (is_writable_migration_entry(swp_entry) && cow) {
> +			if (!is_readable_migration_entry(swp_entry) && cow) {
>  				/*
>  				 * COW mappings require pages in both
>  				 * parent and child to be set to read.
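(For context on the hunk above: after this patch an anonymous migration
entry can be one of three kinds -- writable, readable-exclusive, or plain
readable -- so is_writable_migration_entry() and
!is_readable_migration_entry() are no longer equivalent. A sketch of how I
read the fork-side check, with the real helpers from
include/linux/swapops.h; needs_cow_downgrade() is a made-up name:)

#include <linux/swapops.h>

/*
 * Sketch only: on fork() of a COW mapping, both writable and
 * readable-exclusive entries must be downgraded to plain readable
 * entries (clearing exclusivity), so checking
 * is_writable_migration_entry() alone would miss the second kind.
 */
static bool needs_cow_downgrade(swp_entry_t entry, bool cow)
{
	return cow && !is_readable_migration_entry(entry);
}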
> @@ -5172,6 +5172,8 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
>  		set_huge_ptep_writable(vma, haddr, ptep);
>  		return 0;
>  	}
> +	VM_BUG_ON_PAGE(PageAnon(old_page) && PageAnonExclusive(old_page),
> +		       old_page);
> 
>  	/*
>  	 * If the process that created a MAP_PRIVATE mapping is about to
> @@ -6169,12 +6171,17 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
>  		}
>  		if (unlikely(is_hugetlb_entry_migration(pte))) {
>  			swp_entry_t entry = pte_to_swp_entry(pte);
> +			struct page *page = pfn_swap_entry_to_page(entry);
> 
> -			if (is_writable_migration_entry(entry)) {
> +			if (!is_readable_migration_entry(entry)) {

In hugetlb_change_protection(), is_writable_migration_entry() is changed to
!is_readable_migration_entry(), but

>  				pte_t newpte;
> 
> -				entry = make_readable_migration_entry(
> -							swp_offset(entry));
> +				if (PageAnon(page))
> +					entry = make_readable_exclusive_migration_entry(
> +								swp_offset(entry));
> +				else
> +					entry = make_readable_migration_entry(
> +								swp_offset(entry));
>  				newpte = swp_entry_to_pte(entry);
>  				set_huge_swap_pte_at(mm, address, ptep,
>  						     newpte, huge_page_size(h));
> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index b69ce7a7b2b7..56060acdabd3 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -152,6 +152,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
>  				pages++;
>  		} else if (is_swap_pte(oldpte)) {
>  			swp_entry_t entry = pte_to_swp_entry(oldpte);
> +			struct page *page = pfn_swap_entry_to_page(entry);
>  			pte_t newpte;
> 
>  			if (is_writable_migration_entry(entry)) {

In change_pte_range(), is_writable_migration_entry() is not changed to
!is_readable_migration_entry(). Is this intentional? Could you tell me why
there is such a difference? I'm confused. It would be very kind of you if
you could answer my question. Thanks!

Miaohe Lin

> @@ -159,8 +160,11 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
>  				 * A protection check is difficult so
>  				 * just be safe and disable write
>  				 */
> -				entry = make_readable_migration_entry(
> -							swp_offset(entry));
> +				if (PageAnon(page))
> +					entry = make_readable_exclusive_migration_entry(
> +								swp_offset(entry));
> +				else
> +					entry = make_readable_migration_entry(swp_offset(entry));
>  				newpte = swp_entry_to_pte(entry);
>  				if (pte_swp_soft_dirty(oldpte))
>  					newpte = pte_swp_mksoft_dirty(newpte);
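P.S.: to make the difference I am asking about concrete, here is how the two
checks classify the three migration entry kinds (the predicates are the real
helpers from include/linux/swapops.h; the two wrapper functions are made up
purely for illustration):

#include <linux/swapops.h>

/*
 * Entry kind                     is_writable_...()   !is_readable_...()
 * SWP_MIGRATION_WRITE                  true                true
 * SWP_MIGRATION_READ_EXCLUSIVE         false               true
 * SWP_MIGRATION_READ                   false               false
 *
 * So the hugetlb path above also enters the branch for readable-exclusive
 * entries, while change_pte_range() enters it only for writable ones.
 */
static bool hugetlb_style_check(swp_entry_t entry)	/* hypothetical */
{
	return !is_readable_migration_entry(entry);
}

static bool mprotect_style_check(swp_entry_t entry)	/* hypothetical */
{
	return is_writable_migration_entry(entry);
}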