From: Chih-En Lin <shiyn.lin@gmail.com>
To: Andrew Morton, Qi Zheng, David Hildenbrand, Matthew Wilcox, Christophe Leroy, John Hubbard, Nadav Amit
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Steven Rostedt, Masami Hiramatsu, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Yang Shi, Peter Xu, Zach O'Keefe, Liam R. Howlett, Alex Sierra, Xianting Tian, Colin Cross, Suren Baghdasaryan, Barry Song, Pasha Tatashin, Suleiman Souhlal, Brian Geffon, Yu Zhao, Tong Tiangen, Liu Shixin, Li kunyu, Anshuman Khandual, Vlastimil Babka, Hugh Dickins, Minchan Kim, Miaohe Lin, Gautam Menghani, Catalin Marinas, Mark Brown, Will Deacon, Eric W. Biederman, Thomas Gleixner, Sebastian Andrzej Siewior, Andy Lutomirski, Fenghua Yu, Barret Rhoden, Davidlohr Bueso, Jason A. Donenfeld, Dinglan Peng, Pedro Fonseca, Jim Huang, Huichun Feng, Chih-En Lin
Subject: [PATCH v3 03/14] mm: Add break COW PTE fault and helper functions
Date: Tue, 20 Dec 2022 15:27:32 +0800
Message-Id: <20221220072743.3039060-4-shiyn.lin@gmail.com>
X-Mailer: git-send-email 2.37.3
In-Reply-To: <20221220072743.3039060-1-shiyn.lin@gmail.com>
References: <20221220072743.3039060-1-shiyn.lin@gmail.com>
Add the function handle_cow_pte_fault() to break (unshare) a COW-ed
PTE table on page faults that will modify the PTE table or a mapped
page residing in it (i.e., write, unshare, and file-read faults).
When breaking COW PTE, first check the COW-ed PTE table's refcount to
try to reuse it. If the COW-ed PTE table cannot be reused, allocate a
new PTE table and duplicate all the pte entries of the COW-ed one.
Moreover, flush the TLB when we change the write protection of a PTE.

In addition, provide the helper functions break_cow_pte{,_range}() for
other features (remap, THP, migration, swapfile, etc.) to use.

Signed-off-by: Chih-En Lin <shiyn.lin@gmail.com>
---
 include/linux/mm.h      |   4 +
 include/linux/pgtable.h |   6 +
 mm/memory.c             | 319 +++++++++++++++++++++++++++++++++++++++-
 mm/mmap.c               |   4 +
 mm/mremap.c             |   2 +
 mm/swapfile.c           |   2 +
 6 files changed, 331 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8c6ec1da2336f..6a0eb01ee6f7e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1894,6 +1894,10 @@ void pagecache_isize_extended(struct inode *inode, loff_t from, loff_t to);
 void truncate_pagecache_range(struct inode *inode, loff_t offset, loff_t end);
 int generic_error_remove_page(struct address_space *mapping, struct page *page);
 
+int break_cow_pte(struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr);
+int break_cow_pte_range(struct vm_area_struct *vma, unsigned long start,
+			unsigned long end);
+
 #ifdef CONFIG_MMU
 extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
 			unsigned long address, unsigned int flags,
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index a108b60a6962b..895fa18e3b011 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1395,6 +1395,12 @@ static inline int pmd_none_or_trans_huge_or_clear_bad(pmd_t *pmd)
 	if (pmd_none(pmdval) || pmd_trans_huge(pmdval) ||
 		(IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION) && !pmd_present(pmdval)))
 		return 1;
+	/*
+	 * A COW-ed PTE table is write-protected, which can trigger pmd_bad().
+	 * To avoid this, return here if the entry is write-protected.
+	 */
+	if (!pmd_write(pmdval))
+		return 0;
 	if (unlikely(pmd_bad(pmdval))) {
 		pmd_clear_bad(pmd);
 		return 1;
diff --git a/mm/memory.c b/mm/memory.c
index 5b474d14a5411..8ebff4cac2191 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -239,6 +239,35 @@ static inline void free_pmd_range(struct mmu_gather *tlb, pud_t *pud,
 	pmd = pmd_offset(pud, addr);
 	do {
 		next = pmd_addr_end(addr, end);
+		/*
+		 * In a COW-ed PTE table, the pte entries still map pages,
+		 * but the de-accounting has already been done for all of
+		 * them. So, even if the refcount is not the same as when
+		 * zapping, we can still fall back to a normal PTE table
+		 * and handle it without traversing the entries to do the
+		 * de-accounting.
+		 */
+		if (test_bit(MMF_COW_PTE, &tlb->mm->flags)) {
+			if (!pmd_none(*pmd) && !pmd_write(*pmd)) {
+				spinlock_t *ptl = pte_lockptr(tlb->mm, pmd);
+
+				spin_lock(ptl);
+				if (!pmd_put_pte(pmd)) {
+					pmd_t new = pmd_mkwrite(*pmd);
+
+					set_pmd_at(tlb->mm, addr, pmd, new);
+					spin_unlock(ptl);
+					free_pte_range(tlb, pmd, addr);
+					continue;
+				}
+				spin_unlock(ptl);
+
+				pmd_clear(pmd);
+				mm_dec_nr_ptes(tlb->mm);
+				flush_tlb_mm_range(tlb->mm, addr, next,
+						   PAGE_SHIFT, false);
+			} else
+				VM_WARN_ON(cow_pte_count(pmd) != 1);
+		}
 		if (pmd_none_or_clear_bad(pmd))
 			continue;
 		free_pte_range(tlb, pmd, addr);
@@ -1676,12 +1705,34 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	pte_t *start_pte;
 	pte_t *pte;
 	swp_entry_t entry;
+	bool pte_is_shared = false;
+
+	if (test_bit(MMF_COW_PTE, &mm->flags) && !pmd_write(*pmd)) {
+		if (!range_in_vma(vma, addr & PMD_MASK,
+				  (addr + PMD_SIZE) & PMD_MASK)) {
+			/*
+			 * We cannot promise that this COW-ed PTE table will
+			 * also be zapped with the rest of the VMAs, so
+			 * break COW PTE here.
+			 */
+			break_cow_pte(vma, pmd, addr);
+		} else {
+			start_pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
+			if (cow_pte_count(pmd) == 1) {
+				/* Reuse the COW-ed PTE table. */
+				pmd_t new = pmd_mkwrite(*pmd);
+				set_pmd_at(tlb->mm, addr, pmd, new);
+			} else
+				pte_is_shared = true;
+			pte_unmap_unlock(start_pte, ptl);
+		}
+	}
 
 	tlb_change_page_size(tlb, PAGE_SIZE);
 again:
 	init_rss_vec(rss);
 	start_pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
 	pte = start_pte;
+
 	flush_tlb_batched_pending(mm);
 	arch_enter_lazy_mmu_mode();
 	do {
@@ -1698,11 +1749,15 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			page = vm_normal_page(vma, addr, ptent);
 			if (unlikely(!should_zap_page(details, page)))
 				continue;
-			ptent = ptep_get_and_clear_full(mm, addr, pte,
-							tlb->fullmm);
+			if (pte_is_shared)
+				ptent = *pte;
+			else
+				ptent = ptep_get_and_clear_full(mm, addr, pte,
+								tlb->fullmm);
 			tlb_remove_tlb_entry(tlb, pte, addr);
-			zap_install_uffd_wp_if_needed(vma, addr, pte, details,
-						      ptent);
+			if (!pte_is_shared)
+				zap_install_uffd_wp_if_needed(vma, addr, pte,
+							      details, ptent);
 			if (unlikely(!page))
 				continue;
@@ -1768,8 +1823,12 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			/* We should have covered all the swap entry types */
 			WARN_ON_ONCE(1);
 		}
-		pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
-		zap_install_uffd_wp_if_needed(vma, addr, pte, details, ptent);
+
+		if (!pte_is_shared) {
+			pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
+			zap_install_uffd_wp_if_needed(vma, addr, pte,
+						      details, ptent);
+		}
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 
 	add_mm_rss_vec(mm, rss);
@@ -2147,6 +2206,8 @@ static int insert_page(struct vm_area_struct *vma, unsigned long addr,
 	if (retval)
 		goto out;
 	retval = -ENOMEM;
+	if (break_cow_pte(vma, NULL, addr) < 0)
+		goto out;
 	pte = get_locked_pte(vma->vm_mm, addr, &ptl);
 	if (!pte)
 		goto out;
@@ -2406,6 +2467,9 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr,
 	pte_t *pte, entry;
 	spinlock_t *ptl;
 
+	if (break_cow_pte(vma, NULL, addr) < 0)
+		return VM_FAULT_OOM;
+
 	pte = get_locked_pte(mm, addr, &ptl);
 	if (!pte)
 		return VM_FAULT_OOM;
@@ -2783,6 +2847,10 @@ int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long addr,
 	BUG_ON(addr >= end);
 	pfn -= addr >> PAGE_SHIFT;
 	pgd = pgd_offset(mm, addr);
+
+	if (break_cow_pte_range(vma, addr, end))
+		return -ENOMEM;
+
 	flush_cache_range(vma, addr, end);
 	do {
 		next = pgd_addr_end(addr, end);
@@ -5143,6 +5211,226 @@ static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud)
 	return VM_FAULT_FALLBACK;
 }
 
+/* Break (unshare) COW PTE */
+static vm_fault_t handle_cow_pte_fault(struct vm_fault *vmf)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	struct mm_struct *mm = vma->vm_mm;
+	pmd_t *pmd = vmf->pmd;
+	unsigned long start, end, addr = vmf->address;
+	struct mmu_notifier_range range;
+	pmd_t cowed_entry;
+	pte_t *orig_dst_pte, *orig_src_pte;
+	pte_t *dst_pte, *src_pte;
+	spinlock_t *dst_ptl, *src_ptl;
+	int ret = 0;
+
+	/*
+	 * Do nothing with a fault that doesn't have a PTE table yet
+	 * (from lazy fork).
+	 */
+	if (pmd_none(*pmd) || pmd_write(*pmd))
+		return 0;
+	/* COW PTE doesn't handle huge pages. */
+	if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd))
+		return 0;
+
+	mmap_assert_write_locked(mm);
+
+	start = addr & PMD_MASK;
+	end = (addr + PMD_SIZE) & PMD_MASK;
+	addr = start;
+
+	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
+				0, vma, mm, start, end);
+	/*
+	 * Because the address range covers the whole PTE table, not only
+	 * the faulted vma, there might be some mismatches, since the mmu
+	 * notifier will only register the faulted vma.
+	 * Do we really need to care about this kind of mismatch?
+	 */
+	mmu_notifier_invalidate_range_start(&range);
+	raw_write_seqcount_begin(&mm->write_protect_seq);
+
+	/*
+	 * Fast path: if we are the only faulting task that references
+	 * this COW-ed PTE table, reuse it.
+	 */
+	src_pte = pte_offset_map_lock(mm, pmd, addr, &src_ptl);
+	if (cow_pte_count(pmd) == 1) {
+		pmd_t new = pmd_mkwrite(*pmd);
+		set_pmd_at(mm, addr, pmd, new);
+		pte_unmap_unlock(src_pte, src_ptl);
+		goto flush_tlb;
+	}
+	pte_unmap_unlock(src_pte, src_ptl);
+
+	/*
+	 * Slow path. Since we already did the accounting and are still
+	 * sharing the mapped pages, we can just clone the PTE table.
+	 */
+
+	cowed_entry = READ_ONCE(*pmd);
+	/* Decrease the pgtable_bytes of the COW-ed PTE table. */
+	mm_dec_nr_ptes(mm);
+	pmd_clear(pmd);
+	orig_dst_pte = dst_pte = pte_alloc_map_lock(mm, pmd, addr, &dst_ptl);
+	if (unlikely(!dst_pte)) {
+		/* If the allocation failed, restore the COW-ed PTE table. */
+		set_pmd_at(mm, addr, pmd, cowed_entry);
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	/*
+	 * We should hold the lock of the COW-ed PTE table until all the
+	 * operations have been done, including duplicating, flushing the
+	 * TLB, and decreasing the refcount.
+	 */
+	src_pte = pte_offset_map_lock(mm, &cowed_entry, addr, &src_ptl);
+	orig_src_pte = src_pte;
+	arch_enter_lazy_mmu_mode();
+
+	do {
+		if (pte_none(*src_pte))
+			continue;
+		/*
+		 * Most of the cases should have been handled in
+		 * copy_cow_pte_range(), but we cannot distinguish whether
+		 * the vma belongs to the parent or the child, so we need
+		 * to take care of it here.
+		 */
+		set_pte_at(mm, addr, dst_pte, *src_pte);
+	} while (dst_pte++, src_pte++, addr += PAGE_SIZE, addr != end);
+
+	arch_leave_lazy_mmu_mode();
+	pte_unmap_unlock(orig_dst_pte, dst_ptl);
+
+	/* Decrease the refcount of the COW-ed PTE table. */
+	if (!pmd_put_pte(&cowed_entry)) {
+		/* The COW-ed (old) PTE table's refcount is 1, reuse it. */
+		pgtable_t token = pmd_pgtable(*pmd);
+		/* Reuse the COW-ed PTE table. */
+		pmd_t new = pmd_mkwrite(cowed_entry);
+
+		/* Clear all the entries of the new PTE table. */
+		addr = start;
+		dst_pte = pte_offset_map_lock(mm, pmd, addr, &dst_ptl);
+		orig_dst_pte = dst_pte;
+		do {
+			if (pte_none(*dst_pte))
+				continue;
+			if (pte_present(*dst_pte))
+				page_table_check_pte_clear(mm, addr, *dst_pte);
+			pte_clear(mm, addr, dst_pte);
+		} while (dst_pte++, addr += PAGE_SIZE, addr != end);
+		pte_unmap_unlock(orig_dst_pte, dst_ptl);
+		/* Now, we can safely free the new PTE table. */
+		pmd_clear(pmd);
+		pte_free(mm, token);
+		/* Reuse the COW-ed PTE table. */
+		set_pmd_at(mm, start, pmd, new);
+	}
+
+	pte_unmap_unlock(orig_src_pte, src_ptl);
+
+flush_tlb:
+	/*
+	 * If we changed the protection, flush the TLB.
+	 * flush_tlb_range() will only use vma to get the mm, so we don't
+	 * need to consider the mismatched-address-range-vs-vma problem
+	 * here.
+	 */
+	flush_tlb_range(vma, start, end);
+out:
+	raw_write_seqcount_end(&mm->write_protect_seq);
+	mmu_notifier_invalidate_range_end(&range);
+
+	return ret;
+}
+
+static inline int __break_cow_pte(struct vm_area_struct *vma, pmd_t *pmd,
+				  unsigned long addr)
+{
+	struct vm_fault vmf = {
+		.vma = vma,
+		.address = addr & PAGE_MASK,
+		.pmd = pmd,
+	};
+
+	return handle_cow_pte_fault(&vmf);
+}
+
+/**
+ * break_cow_pte - duplicate/reuse a shared, write-protected (COW-ed) PTE table
+ * @vma: target vma that wants to break COW
+ * @pmd: pmd index that maps to the shared PTE table
+ * @addr: the address that triggered break COW PTE
+ *
+ * The address needs to be in the range of the shared and write-protected
+ * PTE table that the pmd index maps. If pmd is NULL, it will be looked up
+ * from vma. Duplicate the COW-ed PTE table when others still map it;
+ * otherwise, reuse it.
+ */
+int break_cow_pte(struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr)
+{
+	struct mm_struct *mm;
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+
+	if (!vma)
+		return -EINVAL;
+	mm = vma->vm_mm;
+
+	if (!test_bit(MMF_COW_PTE, &mm->flags))
+		return 0;
+
+	if (!pmd) {
+		pgd = pgd_offset(mm, addr);
+		if (pgd_none_or_clear_bad(pgd))
+			return 0;
+		p4d = p4d_offset(pgd, addr);
+		if (p4d_none_or_clear_bad(p4d))
+			return 0;
+		pud = pud_offset(p4d, addr);
+		if (pud_none_or_clear_bad(pud))
+			return 0;
+		pmd = pmd_offset(pud, addr);
+	}
+
+	/* We will check the type of the pmd entry later. */
+
+	return __break_cow_pte(vma, pmd, addr);
+}
+
+/**
+ * break_cow_pte_range - duplicate/reuse COW-ed PTE tables in a given range
+ * @vma: target vma that wants to break COW
+ * @start: the start address of the range to break
+ * @end: the end address of the range to break
+ *
+ * Return: zero on success, the number of failures otherwise.
+ */
+int break_cow_pte_range(struct vm_area_struct *vma, unsigned long start,
+			unsigned long end)
+{
+	unsigned long addr, next;
+	int nr_failed = 0;
+
+	if (!vma)
+		return -EINVAL;
+	if (!range_in_vma(vma, start, end))
+		return -EINVAL;
+
+	addr = start;
+	do {
+		next = pmd_addr_end(addr, end);
+		if (break_cow_pte(vma, NULL, addr) < 0)
+			nr_failed++;
+	} while (addr = next, addr != end);
+
+	return nr_failed;
+}
+
 /*
  * These routines also need to handle stuff like marking pages dirty
  * and/or accessed for architectures that don't do it in hardware (most
@@ -5355,8 +5643,27 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 			return 0;
 		}
 	}
+	/*
+	 * Duplicate the COW-ed PTE table when the page fault will change
+	 * the mapped pages (write or unshare fault) or the COW-ed PTE
+	 * table itself (file-mapped read fault, see do_read_fault()).
+	 */
+	if ((flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE) ||
+	     vma->vm_ops) && test_bit(MMF_COW_PTE, &mm->flags)) {
+		ret = handle_cow_pte_fault(&vmf);
+		if (unlikely(ret == -ENOMEM))
+			return VM_FAULT_OOM;
+	}
 	}
 
+	/*
+	 * It will definitely break the kernel if the refcount of a PTE
+	 * table is higher than 1 while it is writable in the pmd entry.
+	 * But we want to see more information, so just warn here.
+	 */
+	if (likely(!pmd_none(*vmf.pmd)))
+		VM_WARN_ON(cow_pte_count(vmf.pmd) > 1 && pmd_write(*vmf.pmd));
+
 	return handle_pte_fault(&vmf);
 }
diff --git a/mm/mmap.c b/mm/mmap.c
index 74a84eb33b904..3eb9b852adc3b 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2208,6 +2208,10 @@ int __split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
 		return err;
 	}
 
+	err = break_cow_pte(vma, NULL, addr);
+	if (err)
+		return err;
+
 	new = vm_area_dup(vma);
 	if (!new)
 		return -ENOMEM;
diff --git a/mm/mremap.c b/mm/mremap.c
index e465ffe279bb0..b4136b12f24b6 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -534,6 +534,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 		old_pmd = get_old_pmd(vma->vm_mm, old_addr);
 		if (!old_pmd)
 			continue;
+		/* Is the TLB flushed twice here? */
+		break_cow_pte(vma, old_pmd, old_addr);
 		new_pmd = alloc_new_pmd(vma->vm_mm, vma, new_addr);
 		if (!new_pmd)
 			break;
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 72e481aacd5df..10af3e0a2eb5d 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1911,6 +1911,8 @@ static inline int unuse_pmd_range(struct vm_area_struct *vma, pud_t *pud,
 		next = pmd_addr_end(addr, end);
 		if (pmd_none_or_trans_huge_or_clear_bad(pmd))
 			continue;
+		if (break_cow_pte(vma, pmd, addr) < 0)
+			return -ENOMEM;
 		ret = unuse_pte_range(vma, pmd, addr, next, type);
 		if (ret)
 			return ret;
-- 
2.37.3