From: Zhenyu Ye
Subject: [PATCH v1 5/6] mm: tlb: Provide flush_*_tlb_range wrappers
Date: Fri, 3 Apr 2020 17:00:47 +0800
Message-ID: <20200403090048.938-6-yezhenyu2@huawei.com>
In-Reply-To: <20200403090048.938-1-yezhenyu2@huawei.com>
References: <20200403090048.938-1-yezhenyu2@huawei.com>

This patch provides flush_{pte|pmd|pud|p4d}_tlb_range() in generic code,
expressed through the mmu_gather APIs. These interfaces set tlb->cleared_*
and finally call tlb_flush(), so the TLB invalidation can be done according
to the information in struct mmu_gather.

Signed-off-by: Zhenyu Ye
---
 include/asm-generic/pgtable.h | 12 +++++++--
 mm/pgtable-generic.c          | 50 +++++++++++++++++++++++++++++++++++
 2 files changed, 60 insertions(+), 2 deletions(-)

diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index e2e2bef07dd2..2bedeee94131 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -1160,11 +1160,19 @@ static inline int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
  * invalidate the entire TLB which is not desitable.
  * e.g.
see arch/arc: flush_pmd_tlb_range
  */
-#define flush_pmd_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
-#define flush_pud_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
+extern void flush_pte_tlb_range(struct vm_area_struct *vma,
+				unsigned long addr, unsigned long end);
+extern void flush_pmd_tlb_range(struct vm_area_struct *vma,
+				unsigned long addr, unsigned long end);
+extern void flush_pud_tlb_range(struct vm_area_struct *vma,
+				unsigned long addr, unsigned long end);
+extern void flush_p4d_tlb_range(struct vm_area_struct *vma,
+				unsigned long addr, unsigned long end);
 #else
+#define flush_pte_tlb_range(vma, addr, end)	BUILD_BUG()
 #define flush_pmd_tlb_range(vma, addr, end)	BUILD_BUG()
 #define flush_pud_tlb_range(vma, addr, end)	BUILD_BUG()
+#define flush_p4d_tlb_range(vma, addr, end)	BUILD_BUG()
 #endif
 #endif
 
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 3d7c01e76efc..0f5414a4a2ec 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -101,6 +101,56 @@ pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address,
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 
+#ifndef __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
+void flush_pte_tlb_range(struct vm_area_struct *vma,
+			 unsigned long addr, unsigned long end)
+{
+	struct mmu_gather tlb;
+
+	tlb_gather_mmu(&tlb, vma->vm_mm, addr, end);
+	tlb_start_vma(&tlb, vma);
+	tlb_set_pte_range(&tlb, addr, end - addr);
+	tlb_end_vma(&tlb, vma);
+	tlb_finish_mmu(&tlb, addr, end);
+}
+
+void flush_pmd_tlb_range(struct vm_area_struct *vma,
+			 unsigned long addr, unsigned long end)
+{
+	struct mmu_gather tlb;
+
+	tlb_gather_mmu(&tlb, vma->vm_mm, addr, end);
+	tlb_start_vma(&tlb, vma);
+	tlb_set_pmd_range(&tlb, addr, end - addr);
+	tlb_end_vma(&tlb, vma);
+	tlb_finish_mmu(&tlb, addr, end);
+}
+
+void flush_pud_tlb_range(struct vm_area_struct *vma,
+			 unsigned long addr, unsigned long end)
+{
+	struct mmu_gather tlb;
+
+	tlb_gather_mmu(&tlb, vma->vm_mm, addr, end);
+	tlb_start_vma(&tlb, vma);
+	tlb_set_pud_range(&tlb, addr, end - addr);
+	tlb_end_vma(&tlb, vma);
+	tlb_finish_mmu(&tlb, addr, end);
+}
+
+void flush_p4d_tlb_range(struct vm_area_struct *vma,
+			 unsigned long addr, unsigned long end)
+{
+	struct mmu_gather tlb;
+
+	tlb_gather_mmu(&tlb, vma->vm_mm, addr, end);
+	tlb_start_vma(&tlb, vma);
+	tlb_set_p4d_range(&tlb, addr, end - addr);
+	tlb_end_vma(&tlb, vma);
+	tlb_finish_mmu(&tlb, addr, end);
+}
+#endif /* __HAVE_ARCH_FLUSH_PMD_TLB_RANGE */
+
 #ifndef __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
 int pmdp_set_access_flags(struct vm_area_struct *vma,
 			  unsigned long address, pmd_t *pmdp,
-- 
2.19.1