From mboxrd@z Thu Jan 1 00:00:00 1970
From: Zhenyu Ye <yezhenyu2@huawei.com>
Subject: [PATCH v2 2/2] arm64: tlb: Use the TLBI RANGE feature in arm64
Date: Fri, 10 Jul 2020 17:44:20 +0800
Message-ID: <20200710094420.517-3-yezhenyu2@huawei.com>
In-Reply-To: <20200710094420.517-1-yezhenyu2@huawei.com>
References: <20200710094420.517-1-yezhenyu2@huawei.com>

Add __TLBI_VADDR_RANGE macro and rewrite __flush_tlb_range().

When the cpu supports the TLBI RANGE feature, the minimum range
granularity is decided by 'scale', so we can not flush all pages by
one instruction in some cases.

For example, when pages = 0xe81a, let's start 'scale' from the
maximum and find the right 'num' for each 'scale':

1. scale = 3: we can flush no pages, because the minimum range
   is 2^(5*3 + 1) = 0x10000.
2. scale = 2: the minimum range is 2^(5*2 + 1) = 0x800, so we can
   flush 0xe800 pages this time; num = 0xe800/0x800 - 1 = 0x1c.
   0x1a pages remain.
3. scale = 1: the minimum range is 2^(5*1 + 1) = 0x40, so no page
   can be flushed.
4. scale = 0: we flush the remaining 0x1a pages;
   num = 0x1a/0x2 - 1 = 0xc.

However, in most scenarios pages = 1 when flush_tlb_range() is
called, so starting from scale = 3 (or from some other "proper"
value, such as scale = ilog2(pages)) would incur extra overhead.
We therefore increase 'scale' from 0 to the maximum, which makes
the flush order exactly opposite to the example above, as the
sketch below shows.
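To make the walk concrete, here is a stand-alone user-space sketch of
the bottom-up decomposition. This is an illustration, not part of the
patch: the two macros are copied from it, while main(), the printf
reporting, and the open-coded 0x1f mask are assumptions of the demo.

/* tlbi_range_demo.c - walk the scale/num decomposition bottom-up
 * (scale increasing from 0), as the rewritten __flush_tlb_range()
 * does, for the example value pages = 0xe81a.
 */
#include <stdio.h>

/* Copied from the patch (TLBI_RANGE_MASK is GENMASK_ULL(4, 0) there). */
#define __TLBI_RANGE_PAGES(num, scale)	(((num) + 1) << (5 * (scale) + 1))
#define TLBI_RANGE_MASK			0x1fUL
#define __TLBI_RANGE_NUM(range, scale)	\
	(((range) >> (5 * (scale) + 1)) & TLBI_RANGE_MASK)

int main(void)
{
	unsigned long pages = 0xe81a;	/* example from the log above */
	int scale = 0;

	while (pages > 0) {
		/* An odd page count gets one non-RANGE flush first. */
		if (pages % 2 == 1) {
			printf("non-RANGE flush: 1 page\n");
			pages -= 1;
			continue;
		}

		long num = (long)__TLBI_RANGE_NUM(pages, scale) - 1;
		if (num >= 0) {
			printf("scale = %d, num = %#lx -> flush %#lx pages\n",
			       scale, (unsigned long)num,
			       (unsigned long)__TLBI_RANGE_PAGES(num, scale));
			pages -= __TLBI_RANGE_PAGES(num, scale);
		}
		scale++;
	}
	return 0;
}

Run against 0xe81a, this prints the scale = 0 step (num = 0xc,
0x1a pages) before the scale = 2 step (num = 0x1c, 0xe800 pages),
i.e. the reverse of the top-down walk above.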
Signed-off-by: Zhenyu Ye <yezhenyu2@huawei.com>
---
 arch/arm64/include/asm/tlbflush.h | 138 +++++++++++++++++++++++-------
 1 file changed, 109 insertions(+), 29 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 39aed2efd21b..edfec8139ef8 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -60,6 +60,31 @@
 		__ta;						\
 	})
 
+/*
+ * Get translation granule of the system, which is decided by
+ * PAGE_SIZE.  Used by TTL.
+ *  - 4KB	: 1
+ *  - 16KB	: 2
+ *  - 64KB	: 3
+ */
+#define TLBI_TTL_TG_4K		1
+#define TLBI_TTL_TG_16K		2
+#define TLBI_TTL_TG_64K		3
+
+static inline unsigned long get_trans_granule(void)
+{
+	switch (PAGE_SIZE) {
+	case SZ_4K:
+		return TLBI_TTL_TG_4K;
+	case SZ_16K:
+		return TLBI_TTL_TG_16K;
+	case SZ_64K:
+		return TLBI_TTL_TG_64K;
+	default:
+		return 0;
+	}
+}
+
 /*
  * Level-based TLBI operations.
  *
@@ -73,9 +98,6 @@
  * in asm/stage2_pgtable.h.
  */
 #define TLBI_TTL_MASK		GENMASK_ULL(47, 44)
-#define TLBI_TTL_TG_4K		1
-#define TLBI_TTL_TG_16K		2
-#define TLBI_TTL_TG_64K		3
 
 #define __tlbi_level(op, addr, level) do {				\
 	u64 arg = addr;							\
@@ -83,19 +105,7 @@
 	if (cpus_have_const_cap(ARM64_HAS_ARMv8_4_TTL) &&		\
 	    level) {							\
 		u64 ttl = level & 3;					\
-									\
-		switch (PAGE_SIZE) {					\
-		case SZ_4K:						\
-			ttl |= TLBI_TTL_TG_4K << 2;			\
-			break;						\
-		case SZ_16K:						\
-			ttl |= TLBI_TTL_TG_16K << 2;			\
-			break;						\
-		case SZ_64K:						\
-			ttl |= TLBI_TTL_TG_64K << 2;			\
-			break;						\
-		}							\
-									\
+		ttl |= get_trans_granule() << 2;			\
 		arg &= ~TLBI_TTL_MASK;					\
 		arg |= FIELD_PREP(TLBI_TTL_MASK, ttl);			\
 	}								\
@@ -108,6 +118,39 @@
 		__tlbi_level(op, (arg | USER_ASID_FLAG), level);	\
 } while (0)
 
+/*
+ * This macro creates a properly formatted VA operand for the TLBI RANGE.
+ * The value bit assignments are:
+ *
+ * +----------+------+-------+-------+-------+----------------------+
+ * |   ASID   |  TG  | SCALE |  NUM  |  TTL  |        BADDR         |
+ * +----------+------+-------+-------+-------+----------------------+
+ * |63      48|47  46|45   44|43   39|38   37|36                   0|
+ *
+ * The address range is determined by the formula below:
+ * [BADDR, BADDR + (NUM + 1) * 2^(5*SCALE + 1) * PAGESIZE)
+ *
+ */
+#define __TLBI_VADDR_RANGE(addr, asid, scale, num, ttl)		\
+	({							\
+		unsigned long __ta = (addr) >> PAGE_SHIFT;	\
+		__ta &= GENMASK_ULL(36, 0);			\
+		__ta |= (unsigned long)(ttl & 3) << 37;		\
+		__ta |= (unsigned long)(num & 31) << 39;	\
+		__ta |= (unsigned long)(scale & 3) << 44;	\
+		__ta |= (get_trans_granule() & 3) << 46;	\
+		__ta |= (unsigned long)(asid) << 48;		\
+		__ta;						\
+	})
+
+/* These macros are used by the TLBI RANGE feature. */
+#define __TLBI_RANGE_PAGES(num, scale)	(((num) + 1) << (5 * (scale) + 1))
+#define MAX_TLBI_RANGE_PAGES		__TLBI_RANGE_PAGES(31, 3)
+
+#define TLBI_RANGE_MASK			GENMASK_ULL(4, 0)
+#define __TLBI_RANGE_NUM(range, scale)	\
+	(((range) >> (5 * (scale) + 1)) & TLBI_RANGE_MASK)
+
 /*
  * TLB Invalidation
  * ================
@@ -232,32 +275,69 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 				     unsigned long stride, bool last_level,
 				     int tlb_level)
 {
+	int num = 0;
+	int scale = 0;
 	unsigned long asid = ASID(vma->vm_mm);
 	unsigned long addr;
+	unsigned long pages;
 
 	start = round_down(start, stride);
 	end = round_up(end, stride);
+	pages = (end - start) >> PAGE_SHIFT;
 
-	if ((end - start) >= (MAX_TLBI_OPS * stride)) {
+	if ((!cpus_have_const_cap(ARM64_HAS_TLBI_RANGE) &&
+	    (end - start) >= (MAX_TLBI_OPS * stride)) ||
+	    pages >= MAX_TLBI_RANGE_PAGES) {
 		flush_tlb_mm(vma->vm_mm);
 		return;
 	}
 
-	/* Convert the stride into units of 4k */
-	stride >>= 12;
+	dsb(ishst);
 
-	start = __TLBI_VADDR(start, asid);
-	end = __TLBI_VADDR(end, asid);
+	/*
+	 * When the cpu does not support the TLBI RANGE feature, we flush
+	 * the tlb entries one by one at the granularity of 'stride'.
+	 * When the cpu supports the TLBI RANGE feature, then:
+	 * 1. If 'pages' is odd, flush the first page through a non-RANGE
+	 *    instruction;
+	 * 2. For remaining pages: the minimum range granularity is decided
+	 *    by 'scale', so we can not flush all pages by one instruction
+	 *    in some cases.
+	 *    Here, we start from scale = 0, flush the corresponding pages
+	 *    (from 2^(5*scale + 1) to 2^(5*(scale + 1) + 1)), and increase
+	 *    it until no pages are left.
+	 */
+	while (pages > 0) {
+		if (!cpus_have_const_cap(ARM64_HAS_TLBI_RANGE) ||
+		    pages % 2 == 1) {
+			addr = __TLBI_VADDR(start, asid);
+			if (last_level) {
+				__tlbi_level(vale1is, addr, tlb_level);
+				__tlbi_user_level(vale1is, addr, tlb_level);
+			} else {
+				__tlbi_level(vae1is, addr, tlb_level);
+				__tlbi_user_level(vae1is, addr, tlb_level);
+			}
+			start += stride;
+			pages -= stride >> PAGE_SHIFT;
+			continue;
+		}
 
-	dsb(ishst);
-	for (addr = start; addr < end; addr += stride) {
-		if (last_level) {
-			__tlbi_level(vale1is, addr, tlb_level);
-			__tlbi_user_level(vale1is, addr, tlb_level);
-		} else {
-			__tlbi_level(vae1is, addr, tlb_level);
-			__tlbi_user_level(vae1is, addr, tlb_level);
+		num = __TLBI_RANGE_NUM(pages, scale) - 1;
+		if (num >= 0) {
+			addr = __TLBI_VADDR_RANGE(start, asid, scale,
+						  num, tlb_level);
+			if (last_level) {
+				__tlbi(rvale1is, addr);
+				__tlbi_user(rvale1is, addr);
+			} else {
+				__tlbi(rvae1is, addr);
+				__tlbi_user(rvae1is, addr);
+			}
+			start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;
+			pages -= __TLBI_RANGE_PAGES(num, scale);
 		}
+		scale++;
 	}
 	dsb(ish);
 }
-- 
2.19.1
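
As a postscript for reviewers (not part of the patch): the operand
layout documented above __TLBI_VADDR_RANGE can be checked in isolation
with a small user-space harness like the one below. It assumes 4KB
pages (PAGE_SHIFT = 12, so get_trans_granule() folds to TG = 1); the
GENMASK_ULL re-implementation, the function name, and the test values
are illustrative only.

/* tlbi_vaddr_range_check.c - verify each field of the TLBI RANGE
 * operand lands in its documented bit position.
 */
#include <assert.h>
#include <stdio.h>

#define PAGE_SHIFT	12	/* assume 4KB pages */
#define TG_4K		1UL	/* translation granule field for 4KB */
#define GENMASK_ULL(h, l) \
	((~0ULL << (l)) & (~0ULL >> (63 - (h))))

static unsigned long tlbi_vaddr_range(unsigned long addr, unsigned long asid,
				      int scale, int num, int ttl)
{
	unsigned long ta = (addr >> PAGE_SHIFT) & GENMASK_ULL(36, 0);

	ta |= (unsigned long)(ttl & 3) << 37;	/* TTL   : bits [38:37] */
	ta |= (unsigned long)(num & 31) << 39;	/* NUM   : bits [43:39] */
	ta |= (unsigned long)(scale & 3) << 44;	/* SCALE : bits [45:44] */
	ta |= (TG_4K & 3) << 46;		/* TG    : bits [47:46] */
	ta |= asid << 48;			/* ASID  : bits [63:48] */
	return ta;
}

int main(void)
{
	unsigned long op = tlbi_vaddr_range(0x400000, 42, 2, 0x1c, 3);

	assert(((op >> 48) & 0xffff) == 42);	/* ASID  */
	assert(((op >> 46) & 3) == TG_4K);	/* TG    */
	assert(((op >> 44) & 3) == 2);		/* SCALE */
	assert(((op >> 39) & 31) == 0x1c);	/* NUM   */
	assert(((op >> 37) & 3) == 3);		/* TTL   */
	assert((op & GENMASK_ULL(36, 0)) == (0x400000UL >> PAGE_SHIFT));
	printf("operand = %#lx\n", op);
	return 0;
}

With 16KB or 64KB pages only PAGE_SHIFT and the TG value change; the
field positions stay the same.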