From mboxrd@z Thu Jan  1 00:00:00 1970
From: Adalbert Lazăr
To: linux-mm@kvack.org
Cc: linux-api@vger.kernel.org, Andrew Morton, Alexander Graf,
	Stefan Hajnoczi, Jerome Glisse, Paolo Bonzini, Mihai Donțu,
	Mircea Cirjaliu, Andy Lutomirski, Arnd Bergmann, Sargun Dhillon,
	Aleksa Sarai, Oleg Nesterov, Jann Horn, Kees Cook, Matthew Wilcox,
	Christian Brauner, Adalbert Lazăr
Subject: [RESEND RFC PATCH 2/5] mm: let the VMA decide how zap_pte_range() acts on mapped pages
Date: Fri, 4 Sep 2020 14:31:13 +0300
Message-Id: <20200904113116.20648-3-alazar@bitdefender.com>
In-Reply-To: <20200904113116.20648-1-alazar@bitdefender.com>
References: <20200904113116.20648-1-alazar@bitdefender.com>

From: Mircea Cirjaliu

Instead of having one big function that handles all cases of page
unmapping, provide multiple implementation-defined callbacks, each for
its own VMA type. In the future, exotic VMA implementations won't have
to bloat the common zapping function with yet another special case of
mappings.

Signed-off-by: Mircea Cirjaliu
Signed-off-by: Adalbert Lazăr
---
 include/linux/mm.h |  16 ++++
 mm/memory.c        | 182 +++++++++++++++++++++++++--------------------
 2 files changed, 116 insertions(+), 82 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1be4482a7b81..39e55467aa49 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -36,6 +36,7 @@ struct file_ra_state;
 struct user_struct;
 struct writeback_control;
 struct bdi_writeback;
+struct zap_details;
 
 void init_mm_internals(void);
 
@@ -601,6 +602,14 @@ struct vm_operations_struct {
 	 */
 	struct page *(*find_special_page)(struct vm_area_struct *vma,
					  unsigned long addr);
+
+	/*
+	 * Called by zap_pte_range() for use by special VMAs that implement
+	 * custom zapping behavior.
+	 */
+	int (*zap_pte)(struct vm_area_struct *vma, unsigned long addr,
+		       pte_t *pte, int rss[], struct mmu_gather *tlb,
+		       struct zap_details *details);
 };
 
 static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
@@ -1594,6 +1603,13 @@ static inline bool can_do_mlock(void) { return false; }
 extern int user_shm_lock(size_t, struct user_struct *);
 extern void user_shm_unlock(size_t, struct user_struct *);
 
+/*
+ * Flags returned by zap_pte implementations
+ */
+#define ZAP_PTE_CONTINUE	0
+#define ZAP_PTE_FLUSH		(1 << 0)	/* Ask for TLB flush. */
+#define ZAP_PTE_BREAK		(1 << 1)	/* Break PTE iteration. */
+
 /*
  * Parameter block passed down to zap_pte_range in exceptional cases.
  */
diff --git a/mm/memory.c b/mm/memory.c
index 8e78fb151f8f..a225bfd01417 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1031,18 +1031,109 @@ int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	return ret;
 }
 
+static int zap_pte_common(struct vm_area_struct *vma, unsigned long addr,
+			  pte_t *pte, int rss[], struct mmu_gather *tlb,
+			  struct zap_details *details)
+{
+	struct mm_struct *mm = tlb->mm;
+	pte_t ptent = *pte;
+	swp_entry_t entry;
+	int flags = 0;
+
+	if (pte_present(ptent)) {
+		struct page *page;
+
+		page = vm_normal_page(vma, addr, ptent);
+		if (unlikely(details) && page) {
+			/*
+			 * unmap_shared_mapping_pages() wants to
+			 * invalidate cache without truncating:
+			 * unmap shared but keep private pages.
+			 */
+			if (details->check_mapping &&
+			    details->check_mapping != page_rmapping(page))
+				return 0;
+		}
+		ptent = ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm);
+		tlb_remove_tlb_entry(tlb, pte, addr);
+		if (unlikely(!page))
+			return 0;
+
+		if (!PageAnon(page)) {
+			if (pte_dirty(ptent)) {
+				flags |= ZAP_PTE_FLUSH;
+				set_page_dirty(page);
+			}
+			if (pte_young(ptent) &&
+			    likely(!(vma->vm_flags & VM_SEQ_READ)))
+				mark_page_accessed(page);
+		}
+		rss[mm_counter(page)]--;
+		page_remove_rmap(page, false);
+		if (unlikely(page_mapcount(page) < 0))
+			print_bad_pte(vma, addr, ptent, page);
+		if (unlikely(__tlb_remove_page(tlb, page)))
+			flags |= ZAP_PTE_FLUSH | ZAP_PTE_BREAK;
+		return flags;
+	}
+
+	entry = pte_to_swp_entry(ptent);
+	if (non_swap_entry(entry) && is_device_private_entry(entry)) {
+		struct page *page = device_private_entry_to_page(entry);
+
+		if (unlikely(details && details->check_mapping)) {
+			/*
+			 * unmap_shared_mapping_pages() wants to
+			 * invalidate cache without truncating:
+			 * unmap shared but keep private pages.
+			 */
+			if (details->check_mapping != page_rmapping(page))
+				return 0;
+		}
+
+		pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
+		rss[mm_counter(page)]--;
+		page_remove_rmap(page, false);
+		put_page(page);
+		return 0;
+	}
+
+	/* If details->check_mapping, we leave swap entries. */
+	if (unlikely(details))
+		return 0;
+
+	if (!non_swap_entry(entry))
+		rss[MM_SWAPENTS]--;
+	else if (is_migration_entry(entry)) {
+		struct page *page;
+
+		page = migration_entry_to_page(entry);
+		rss[mm_counter(page)]--;
+	}
+	if (unlikely(!free_swap_and_cache(entry)))
+		print_bad_pte(vma, addr, ptent, NULL);
+	pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
+
+	return flags;
+}
+
 static unsigned long zap_pte_range(struct mmu_gather *tlb,
 				struct vm_area_struct *vma, pmd_t *pmd,
 				unsigned long addr, unsigned long end,
 				struct zap_details *details)
 {
 	struct mm_struct *mm = tlb->mm;
-	int force_flush = 0;
+	int flags = 0;
 	int rss[NR_MM_COUNTERS];
 	spinlock_t *ptl;
 	pte_t *start_pte;
 	pte_t *pte;
-	swp_entry_t entry;
+
+	int (*zap_pte)(struct vm_area_struct *vma, unsigned long addr,
+		       pte_t *pte, int rss[], struct mmu_gather *tlb,
+		       struct zap_details *details) = zap_pte_common;
+	if (vma->vm_ops && vma->vm_ops->zap_pte)
+		zap_pte = vma->vm_ops->zap_pte;
 
 	tlb_change_page_size(tlb, PAGE_SIZE);
 again:
@@ -1058,92 +1149,19 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 
 		if (!zap_is_atomic(details) && need_resched())
 			break;
-
-		if (pte_present(ptent)) {
-			struct page *page;
-
-			page = vm_normal_page(vma, addr, ptent);
-			if (unlikely(details) && page) {
-				/*
-				 * unmap_shared_mapping_pages() wants to
-				 * invalidate cache without truncating:
-				 * unmap shared but keep private pages.
-				 */
-				if (details->check_mapping &&
-				    details->check_mapping != page_rmapping(page))
-					continue;
-			}
-			ptent = ptep_get_and_clear_full(mm, addr, pte,
-							tlb->fullmm);
-			tlb_remove_tlb_entry(tlb, pte, addr);
-			if (unlikely(!page))
-				continue;
-
-			if (!PageAnon(page)) {
-				if (pte_dirty(ptent)) {
-					force_flush = 1;
-					set_page_dirty(page);
-				}
-				if (pte_young(ptent) &&
-				    likely(!(vma->vm_flags & VM_SEQ_READ)))
-					mark_page_accessed(page);
-			}
-			rss[mm_counter(page)]--;
-			page_remove_rmap(page, false);
-			if (unlikely(page_mapcount(page) < 0))
-				print_bad_pte(vma, addr, ptent, page);
-			if (unlikely(__tlb_remove_page(tlb, page))) {
-				force_flush = 1;
-				addr += PAGE_SIZE;
-				break;
-			}
-			continue;
-		}
-
-		entry = pte_to_swp_entry(ptent);
-		if (non_swap_entry(entry) && is_device_private_entry(entry)) {
-			struct page *page = device_private_entry_to_page(entry);
-
-			if (unlikely(details && details->check_mapping)) {
-				/*
-				 * unmap_shared_mapping_pages() wants to
-				 * invalidate cache without truncating:
-				 * unmap shared but keep private pages.
-				 */
-				if (details->check_mapping !=
-				    page_rmapping(page))
-					continue;
-			}
-
-			pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
-			rss[mm_counter(page)]--;
-			page_remove_rmap(page, false);
-			put_page(page);
-			continue;
+		if (flags & ZAP_PTE_BREAK) {
+			flags &= ~ZAP_PTE_BREAK;
+			break;
 		}
 
-		/* If details->check_mapping, we leave swap entries. */
-		if (unlikely(details))
-			continue;
-
-		if (!non_swap_entry(entry))
-			rss[MM_SWAPENTS]--;
-		else if (is_migration_entry(entry)) {
-			struct page *page;
-
-			page = migration_entry_to_page(entry);
-			rss[mm_counter(page)]--;
-		}
-		if (unlikely(!free_swap_and_cache(entry)))
-			print_bad_pte(vma, addr, ptent, NULL);
-		pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
+		flags |= zap_pte(vma, addr, pte, rss, tlb, details);
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 
 	add_mm_rss_vec(mm, rss);
 	arch_leave_lazy_mmu_mode();
 
 	/* Do the actual TLB flush before dropping ptl */
-	if (force_flush)
+	if (flags & ZAP_PTE_FLUSH)
 		tlb_flush_mmu_tlbonly(tlb);
 	pte_unmap_unlock(start_pte, ptl);
 
@@ -1153,8 +1171,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	 * entries before releasing the ptl), free the batched
 	 * memory too. Restart if we didn't do everything.
 	 */
-	if (force_flush) {
-		force_flush = 0;
+	if (flags & ZAP_PTE_FLUSH) {
+		flags &= ~ZAP_PTE_FLUSH;
 		tlb_flush_mmu(tlb);
 	}
 
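
For readers less familiar with the new hook, here is a minimal sketch of
how a special VMA implementation could plug into it. It is illustrative
only and not part of this patch: the names example_zap_pte and
example_vm_ops, and the assumption that the backing pages are owned and
freed by the driver rather than accounted in rss/rmap, are made up for
the example.

static int example_zap_pte(struct vm_area_struct *vma, unsigned long addr,
			   pte_t *pte, int rss[], struct mmu_gather *tlb,
			   struct zap_details *details)
{
	pte_t ptent = *pte;

	/* Nothing mapped at this address: let the loop move on. */
	if (pte_none(ptent) || !pte_present(ptent))
		return ZAP_PTE_CONTINUE;

	/* Clear the mapping; the backing page is managed by the driver. */
	ptep_get_and_clear_full(tlb->mm, addr, pte, tlb->fullmm);
	tlb_remove_tlb_entry(tlb, pte, addr);

	/* Ask zap_pte_range() to flush the TLB before dropping the ptl. */
	return ZAP_PTE_FLUSH;
}

static const struct vm_operations_struct example_vm_ops = {
	.zap_pte = example_zap_pte,
};

Returning ZAP_PTE_CONTINUE (0) adds nothing to the accumulated flags,
while ZAP_PTE_FLUSH and ZAP_PTE_BREAK correspond to the force_flush and
early-break paths of the previously open-coded loop.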