From mboxrd@z Thu Jan  1 00:00:00 1970
From: Chih-En Lin <shiyn.lin@gmail.com>
To: Andrew Morton, Qi Zheng, David Hildenbrand, Matthew Wilcox, Christophe Leroy
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Luis Chamberlain, Kees Cook, Iurii Zaikin, Vlastimil Babka, William Kucharski, "Kirill A. Shutemov", Peter Xu, Suren Baghdasaryan, Arnd Bergmann, Tong Tiangen, Pasha Tatashin, Li kunyu, Nadav Amit, Anshuman Khandual, Minchan Kim, Yang Shi, Song Liu, Miaohe Lin, Thomas Gleixner, Sebastian Andrzej Siewior, Andy Lutomirski, Fenghua Yu, Dinglan Peng, Pedro Fonseca, Jim Huang, Huichun Feng, Chih-En Lin
Subject: [RFC PATCH v2 7/9] mm: Add the break COW PTE handler
Date: Wed, 28 Sep 2022 00:29:55 +0800
Message-Id: <20220927162957.270460-8-shiyn.lin@gmail.com>
X-Mailer: git-send-email 2.37.3
In-Reply-To: <20220927162957.270460-1-shiyn.lin@gmail.com>
References: <20220927162957.270460-1-shiyn.lin@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
To handle the COW PTE with a write fault, introduce the helper function
handle_cow_pte(). The function provides two behaviors. One breaks COW by
decreasing the refcount, pgtable_bytes, and RSS. The other copies all the
information in the shared PTE table by using copy_pte_range() through a
wrapper.

Also, add wrapper functions to help us find out whether a PTE table is
COWed or COW-available.

Signed-off-by: Chih-En Lin <shiyn.lin@gmail.com>
---
 include/linux/pgtable.h |  75 +++++++++++++++++
 mm/memory.c             | 179 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 254 insertions(+)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 9b08a3361d490..85255f5223ae3 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -10,6 +10,7 @@
 #include
 #include
+#include /* For MMF_COW_PTE flag */
 #include
 #include
 #include
@@ -674,6 +675,42 @@ static inline void pmd_cow_pte_clear_mkexclusive(pmd_t *pmd)
 	set_cow_pte_owner(pmd, NULL);
 }
 
+static inline unsigned long get_pmd_start_edge(struct vm_area_struct *vma,
+					       unsigned long addr)
+{
+	unsigned long start = addr & PMD_MASK;
+
+	if (start < vma->vm_start)
+		start = vma->vm_start;
+
+	return start;
+}
+
+static inline unsigned long get_pmd_end_edge(struct vm_area_struct *vma,
+					     unsigned long addr)
+{
+	unsigned long end = (addr + PMD_SIZE) & PMD_MASK;
+
+	if (end > vma->vm_end)
+		end = vma->vm_end;
+
+	return end;
+}
+
+static inline bool is_cow_pte_available(struct vm_area_struct *vma, pmd_t *pmd)
+{
+	if (!vma || !pmd)
+		return false;
+	if (!test_bit(MMF_COW_PTE, &vma->vm_mm->flags))
+		return false;
+	if (pmd_cow_pte_exclusive(pmd))
+		return false;
+	return true;
+}
+
+int handle_cow_pte(struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
+		   bool alloc);
+
 #ifndef pte_access_permitted
 #define pte_access_permitted(pte, write) \
 	(pte_present(pte) && (!(write) || pte_write(pte)))
@@ -1002,6 +1039,44 @@ int cow_pte_handler(struct ctl_table *table, int write, void *buffer,
 
 extern int sysctl_cow_pte_pid;
 
+static inline bool __is_pte_table_cowing(struct vm_area_struct *vma, pmd_t *pmd,
+					 unsigned long addr)
+{
+	if (!vma)
+		return false;
+	if (!pmd) {
+		pgd_t *pgd;
+		p4d_t *p4d;
+		pud_t *pud;
+
+		if (addr == 0)
+			return false;
+
+		pgd = pgd_offset(vma->vm_mm, addr);
+		if (pgd_none_or_clear_bad(pgd))
+			return false;
+		p4d = p4d_offset(pgd, addr);
+		if (p4d_none_or_clear_bad(p4d))
+			return false;
+		pud = pud_offset(p4d, addr);
+		if (pud_none_or_clear_bad(pud))
+			return false;
+		pmd = pmd_offset(pud, addr);
+	}
+	if (!test_bit(MMF_COW_PTE, &vma->vm_mm->flags))
+		return false;
+	if (pmd_none(*pmd) || pmd_write(*pmd))
+		return false;
+	if (pmd_cow_pte_exclusive(pmd))
+		return false;
+	return true;
+}
+
+static inline bool is_pte_table_cowing(struct vm_area_struct *vma, pmd_t *pmd)
+{
+	return __is_pte_table_cowing(vma, pmd, 0UL);
+}
+
 #endif /* CONFIG_MMU */
 
 /*
diff --git a/mm/memory.c b/mm/memory.c
index 3e66e229f4169..4cf3f74fb183f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2911,6 +2911,185 @@ void cow_pte_fallback(struct vm_area_struct *vma, pmd_t *pmd,
 	set_pmd_at(mm, addr, pmd, new);
 }
 
+static inline int copy_cow_pte_range(struct vm_area_struct *vma,
+				     pmd_t *dst_pmd, pmd_t *src_pmd,
+				     unsigned long start, unsigned long end)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	struct mmu_notifier_range range;
+	int ret;
+	bool is_cow;
+
+	is_cow = is_cow_mapping(vma->vm_flags);
+	if (is_cow) {
+		mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
+					0, vma, mm, start, end);
+		mmu_notifier_invalidate_range_start(&range);
+		mmap_assert_write_locked(mm);
+		raw_write_seqcount_begin(&mm->write_protect_seq);
+	}
+
+	ret = copy_pte_range(vma, vma, dst_pmd, src_pmd, start, end);
+
+	if (is_cow) {
+		raw_write_seqcount_end(&mm->write_protect_seq);
+		mmu_notifier_invalidate_range_end(&range);
+	}
+
+	return ret;
+}
+
+/*
+ * Break COW PTE, two states here:
+ * - After fork : [parent, rss=1, ref=2, write=NO , owner=parent]
+ *             to [parent, rss=1, ref=1, write=YES, owner=NULL  ]
+ *   COW PTE becomes [ref=1, write=NO , owner=NULL  ]
+ *              [child , rss=0, ref=2, write=NO , owner=parent]
+ *             to [child , rss=1, ref=1, write=YES, owner=NULL  ]
+ *   COW PTE becomes [ref=1, write=NO , owner=parent]
+ * NOTE
+ * - Copy the COW PTE to a new PTE.
+ * - Clear the owner of the COW PTE and set the PMD entry writable when it
+ *   is the owner.
+ * - Increase RSS if it is not the owner.
+ */
+static int break_cow_pte(struct vm_area_struct *vma, pmd_t *pmd,
+			 unsigned long addr)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned long pte_start, pte_end;
+	unsigned long start, end;
+	struct vm_area_struct *prev = vma->vm_prev;
+	struct vm_area_struct *next = vma->vm_next;
+	pmd_t cowed_entry = *pmd;
+
+	if (cow_pte_count(&cowed_entry) == 1) {
+		cow_pte_fallback(vma, pmd, addr);
+		return 1;
+	}
+
+	pte_start = start = addr & PMD_MASK;
+	pte_end = end = (addr + PMD_SIZE) & PMD_MASK;
+
+	pmd_clear(pmd);
+	/*
+	 * If the vma does not cover the entire address range of the PTE
+	 * table, check the previous and next vmas.
+	 */
+	if (start < vma->vm_start && prev) {
+		/* Part of the address range is covered by the previous vma. */
+		if (start < prev->vm_end)
+			copy_cow_pte_range(prev, pmd, &cowed_entry,
+					   start, prev->vm_end);
+		start = vma->vm_start;
+	}
+	if (end > vma->vm_end && next) {
+		/* Part of the address range is covered by the next vma. */
+		if (end > next->vm_start)
+			copy_cow_pte_range(next, pmd, &cowed_entry,
+					   next->vm_start, end);
+		end = vma->vm_end;
+	}
+	if (copy_cow_pte_range(vma, pmd, &cowed_entry, start, end))
+		return -ENOMEM;
+
+	/*
+	 * Here, it is the owner, so clear the ownership. To keep the RSS
+	 * state and page table bytes correct, decrease them.
+	 * Also, handle the address range issue here.
+	 */
+	if (cow_pte_owner_is_same(&cowed_entry, pmd)) {
+		set_cow_pte_owner(&cowed_entry, NULL);
+		if (pte_start < vma->vm_start && prev &&
+		    pte_start < prev->vm_end)
+			cow_pte_rss(mm, vma->vm_prev, pmd,
+				    pte_start, prev->vm_end, false /* dec */);
+		if (pte_end > vma->vm_end && next &&
+		    pte_end > next->vm_start)
+			cow_pte_rss(mm, vma->vm_next, pmd,
+				    next->vm_start, pte_end, false /* dec */);
+		cow_pte_rss(mm, vma, pmd, start, end, false /* dec */);
+		mm_dec_nr_ptes(mm);
+	}
+
+	/* Already handled it, don't reuse the cowed table. */
+	pmd_put_pte(vma, &cowed_entry, addr, false);
+
+	VM_BUG_ON(cow_pte_count(pmd) != 1);
+
+	return 0;
+}
+
+static int zap_cow_pte(struct vm_area_struct *vma, pmd_t *pmd,
+		       unsigned long addr)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned long start, end;
+
+	if (pmd_put_pte(vma, pmd, addr, true)) {
+		/* fallback, reuse pgtable */
+		return 1;
+	}
+
+	start = addr & PMD_MASK;
+	end = (addr + PMD_SIZE) & PMD_MASK;
+
+	/*
+	 * If the PMD entry is the owner, clear the ownership,
+	 * and decrease the RSS state and pgtable_bytes.
+	 */
+	if (cow_pte_owner_is_same(pmd, pmd)) {
+		set_cow_pte_owner(pmd, NULL);
+		cow_pte_rss(mm, vma, pmd, start, end, false /* dec */);
+		mm_dec_nr_ptes(mm);
+	}
+
+	pmd_clear(pmd);
+	return 0;
+}
+
+/**
+ * handle_cow_pte - Break COW PTE, copy/dereference the shared PTE table
+ * @vma: target vma that wants to break COW
+ * @pmd: pmd index that maps to the shared PTE table
+ * @addr: the address that triggered the COW break
+ * @alloc: copy the PTE table if alloc is true, otherwise dereference it
+ *
+ * The address needs to be in the range of the PTE table that the pmd index
+ * maps. If pmd is NULL, it will be looked up from vma and checked for COWing.
+ */
+int handle_cow_pte(struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
+		   bool alloc)
+{
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	struct mm_struct *mm = vma->vm_mm;
+	int ret = 0;
+
+	if (!pmd) {
+		pgd = pgd_offset(mm, addr);
+		if (pgd_none_or_clear_bad(pgd))
+			return 0;
+		p4d = p4d_offset(pgd, addr);
+		if (p4d_none_or_clear_bad(p4d))
+			return 0;
+		pud = pud_offset(p4d, addr);
+		if (pud_none_or_clear_bad(pud))
+			return 0;
+		pmd = pmd_offset(pud, addr);
+	}
+
+	if (!is_pte_table_cowing(vma, pmd))
+		return 0;
+
+	if (alloc)
+		ret = break_cow_pte(vma, pmd, addr);
+	else
+		ret = zap_cow_pte(vma, pmd, addr);
+
+	return ret;
+}
+
 /*
  * handle_pte_fault chooses page fault handler according to an entry which was
  * read non-atomically. Before making any commitment, on those architectures
-- 
2.37.3