From mboxrd@z Thu Jan 1 00:00:00 1970
From: Zhenyu Ye <yezhenyu2@huawei.com>
Subject: [RFC PATCH v5 8/8] arm64: tlb: Set the TTL field in flush_tlb_range
Date: Tue, 31 Mar 2020 22:29:27 +0800
Message-ID: <20200331142927.1237-9-yezhenyu2@huawei.com>
In-Reply-To: <20200331142927.1237-1-yezhenyu2@huawei.com>
References: <20200331142927.1237-1-yezhenyu2@huawei.com>

This patch uses the cleared_* fields in struct mmu_gather to set the
TTL field in flush_tlb_range().
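For background: the ARMv8.4-TTL extension lets software pass a
translation-table-level hint in bits [47:44] of the TLBI VA operand,
where TTL[3:2] encodes the translation granule (0b01 for 4KB) and
TTL[1:0] the level, with level 0 meaning "unknown". The real encoding
is done by __tlbi_level() from earlier in this series; what follows is
only a rough standalone sketch of the idea, and tlbi_encode_ttl() and
TTL_GRANULE_4K are made-up names, not kernel API:

	#include <stdint.h>

	#define TLBI_TTL_SHIFT	44
	#define TTL_GRANULE_4K	0x1ULL	/* TTL[3:2] = 0b01: 4KB granule */

	/* Fold a level hint into a TLBI VA operand (illustrative only). */
	static inline uint64_t tlbi_encode_ttl(uint64_t arg, int level)
	{
		if (level == 0)
			return arg;	/* no hint: hardware checks all levels */
		return arg |
		       (((TTL_GRANULE_4K << 2) | (level & 0x3)) << TLBI_TTL_SHIFT);
	}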
Signed-off-by: Zhenyu Ye <yezhenyu2@huawei.com>
---
 arch/arm64/include/asm/tlb.h      | 39 ++++++++++++++++++++++++++++++-
 arch/arm64/include/asm/tlbflush.h | 22 +++++------------
 2 files changed, 44 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index b76df828e6b7..72b6e3763df2 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -21,11 +21,34 @@ static void tlb_flush(struct mmu_gather *tlb);
 
 #include <asm-generic/tlb.h>
 
+/*
+ * Get the TLBI level for arm64. The level defaults to 0 if more than
+ * one of the cleared_* fields is set, or if none of them is set.
+ * arm64 doesn't support p4ds yet.
+ */
+static inline int tlb_get_level(struct mmu_gather *tlb)
+{
+	int sum = tlb->cleared_ptes + tlb->cleared_pmds +
+		  tlb->cleared_puds + tlb->cleared_p4ds;
+
+	if (sum != 1)
+		return 0;
+	else if (tlb->cleared_ptes)
+		return 3;
+	else if (tlb->cleared_pmds)
+		return 2;
+	else if (tlb->cleared_puds)
+		return 1;
+
+	return 0;
+}
+
 static inline void tlb_flush(struct mmu_gather *tlb)
 {
 	struct vm_area_struct vma = TLB_FLUSH_VMA(tlb->mm, 0);
 	bool last_level = !tlb->freed_tables;
 	unsigned long stride = tlb_get_unmap_size(tlb);
+	int tlb_level = tlb_get_level(tlb);
 
 	/*
 	 * If we're tearing down the address space then we only care about
@@ -38,7 +61,21 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 		return;
 	}
 
-	__flush_tlb_range(&vma, tlb->start, tlb->end, stride, last_level);
+	__flush_tlb_range(&vma, tlb->start, tlb->end, stride,
+			  last_level, tlb_level);
+}
+
+static inline void flush_tlb_range(struct mmu_gather *tlb,
+				   struct vm_area_struct *vma,
+				   unsigned long start, unsigned long end)
+{
+	/*
+	 * We cannot use leaf-only invalidation here, since we may be invalidating
+	 * table entries as part of collapsing hugepages or moving page tables.
+	 */
+	unsigned long stride = tlb_get_unmap_size(tlb);
+	int tlb_level = tlb_get_level(tlb);
+	__flush_tlb_range(vma, start, end, stride, false, tlb_level);
 }
 
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 0b4d75a2270b..dc8e803692f8 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -215,7 +215,8 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
 
 static inline void __flush_tlb_range(struct vm_area_struct *vma,
 				     unsigned long start, unsigned long end,
-				     unsigned long stride, bool last_level)
+				     unsigned long stride, bool last_level,
+				     int tlb_level)
 {
 	unsigned long asid = ASID(vma->vm_mm);
 	unsigned long addr;
@@ -237,27 +238,16 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 	dsb(ishst);
 	for (addr = start; addr < end; addr += stride) {
 		if (last_level) {
-			__tlbi_level(vale1is, addr, 0);
-			__tlbi_user_level(vale1is, addr, 0);
+			__tlbi_level(vale1is, addr, tlb_level);
+			__tlbi_user_level(vale1is, addr, tlb_level);
 		} else {
-			__tlbi_level(vae1is, addr, 0);
-			__tlbi_user_level(vae1is, addr, 0);
+			__tlbi_level(vae1is, addr, tlb_level);
+			__tlbi_user_level(vae1is, addr, tlb_level);
 		}
 	}
 	dsb(ish);
 }
 
-static inline void flush_tlb_range(struct mmu_gather *tlb,
-				   struct vm_area_struct *vma,
-				   unsigned long start, unsigned long end)
-{
-	/*
-	 * We cannot use leaf-only invalidation here, since we may be invalidating
-	 * table entries as part of collapsing hugepages or moving page tables.
-	 */
-	__flush_tlb_range(vma, start, end, PAGE_SIZE, false);
-}
-
 static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 {
 	unsigned long addr;
-- 
2.19.1
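As a quick sanity check of the level-selection rule in tlb_get_level()
above, here is a standalone sketch that compiles in userspace (the
mmu_gather bitfields are mocked, so this is illustrative only, not
kernel code):

	#include <assert.h>

	struct mmu_gather {
		unsigned int cleared_ptes : 1;
		unsigned int cleared_pmds : 1;
		unsigned int cleared_puds : 1;
		unsigned int cleared_p4ds : 1;
	};

	static int tlb_get_level(struct mmu_gather *tlb)
	{
		int sum = tlb->cleared_ptes + tlb->cleared_pmds +
			  tlb->cleared_puds + tlb->cleared_p4ds;

		if (sum != 1)
			return 0;	/* mixed or unknown: no TTL hint */
		if (tlb->cleared_ptes)
			return 3;	/* only last-level entries cleared */
		if (tlb->cleared_pmds)
			return 2;
		if (tlb->cleared_puds)
			return 1;
		return 0;		/* only cleared_p4ds: not supported */
	}

	int main(void)
	{
		struct mmu_gather only_ptes = { .cleared_ptes = 1 };
		struct mmu_gather mixed = { .cleared_ptes = 1, .cleared_pmds = 1 };

		assert(tlb_get_level(&only_ptes) == 3);	/* precise hint */
		assert(tlb_get_level(&mixed) == 0);	/* falls back to no hint */
		return 0;
	}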