Subject: Re: [RFC PATCH v5 2/2] arm64: tlb: Use the TLBI RANGE feature in arm64
From: Zhenyu Ye <yezhenyu2@huawei.com>
To: Catalin Marinas
Date: Thu, 9 Jul 2020 14:51:05 +0800
Message-ID: <27a4d364-d967-c644-83ed-805ba75f13f6@huawei.com>
In-Reply-To: <20200708182451.GF6308@gaia>
References: <20200708124031.1414-1-yezhenyu2@huawei.com> <20200708124031.1414-3-yezhenyu2@huawei.com> <20200708182451.GF6308@gaia>

On 2020/7/9 2:24, Catalin Marinas wrote:
> On Wed, Jul 08, 2020 at 08:40:31PM +0800, Zhenyu Ye wrote:
>> Add the __TLBI_VADDR_RANGE macro and rewrite __flush_tlb_range().
>>
>> In this patch, we only use the TLBI RANGE feature if the stride == PAGE_SIZE,
>> because when stride > PAGE_SIZE, usually only a small number of pages need
>> to be flushed and classic tlbi instructions are more effective.
>
> Why are they more effective? I guess a range op would work on this as
> well, say unmapping a large THP range. If we ignore this stride ==
> PAGE_SIZE check, it could make the code easier to read.
>

OK, I will remove the stride == PAGE_SIZE check here.

>> We can also use 'end - start < threshold number' to decide which way
>> to go; however, different hardware may have different thresholds, so
>> I'm not sure if this is feasible.
>>
>> Signed-off-by: Zhenyu Ye <yezhenyu2@huawei.com>
>> ---
>>  arch/arm64/include/asm/tlbflush.h | 104 ++++++++++++++++++++++++++----
>>  1 file changed, 90 insertions(+), 14 deletions(-)
>
> Could you please rebase these patches on top of the arm64 for-next/tlbi
> branch:
>
>   git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/tlbi
>

OK, I will send a formal version of this patch series soon.

>>
>> -	if ((end - start) >= (MAX_TLBI_OPS * stride)) {
>> +	if ((!cpus_have_const_cap(ARM64_HAS_TLBI_RANGE) &&
>> +	     (end - start) >= (MAX_TLBI_OPS * stride)) ||
>> +	    range_pages >= MAX_TLBI_RANGE_PAGES) {
>>  		flush_tlb_mm(vma->vm_mm);
>>  		return;
>>  	}
>
> Is there any value in this range_pages check here? What's the value of
> MAX_TLBI_RANGE_PAGES? If we have TLBI range ops, we make a decision here
> but without including the stride. Further down we use the stride to skip
> the TLBI range ops.
>

MAX_TLBI_RANGE_PAGES is defined as __TLBI_RANGE_PAGES(31, 3), which is
determined by the ARMv8.4 spec. The address range is given by the formula
below:

	[BADDR, BADDR + (NUM + 1) * 2^(5*SCALE + 1) * PAGESIZE)

This has nothing to do with the stride. After the stride == PAGE_SIZE
check below is removed, this will be clearer.

>>  	}
>
> I think the algorithm is correct, though I need to work it out on a
> piece of paper.
>
> The code could benefit from some comments (above the loop) on how the
> range is built and the right scale found.
>

OK.

Thanks,
Zhenyu
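
P.S. As a quick sanity check, plugging NUM = 31 and SCALE = 3 into the
formula above gives the value of MAX_TLBI_RANGE_PAGES. A minimal
userspace computation (assuming 4KB pages; the macro body just mirrors
the formula quoted above):

#include <stdio.h>

/* (NUM + 1) * 2^(5*SCALE + 1) pages, per the formula above */
#define __TLBI_RANGE_PAGES(num, scale) \
	((unsigned long)((num) + 1) << (5 * (scale) + 1))

int main(void)
{
	unsigned long pages = __TLBI_RANGE_PAGES(31, 3);

	/* 32 * 2^16 = 2097152 pages, i.e. 8GB with 4KB pages */
	printf("MAX_TLBI_RANGE_PAGES = %lu pages (%lu GB)\n",
	       pages, pages * 4096 / (1024UL * 1024 * 1024));
	return 0;
}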
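
And since the comments you ask for are about how the right scale is
found, here is a rough userspace sketch of one way a page count can be
decomposed into (SCALE, NUM) range ops with a classic per-page TLBI as
the fallback. This is illustrative only, not the patch code; NUM is the
5-bit field (0..31) and SCALE the 2-bit field (0..3) from the formula
above:

#include <stdio.h>

static void flush_range_sketch(unsigned long pages)
{
	while (pages > 0) {
		int scale, issued = 0;

		/* prefer the largest scale that still fits */
		for (scale = 3; scale >= 0; scale--) {
			/* pages covered per NUM increment at this scale */
			unsigned long span = 1UL << (5 * scale + 1);

			if (pages >= span) {
				unsigned long n = pages / span;
				unsigned long num = (n > 32 ? 32 : n) - 1;

				printf("TLBI range: scale=%d num=%lu (%lu pages)\n",
				       scale, num, (num + 1) * span);
				pages -= (num + 1) * span;
				issued = 1;
				break;
			}
		}
		if (!issued) {
			/* fewer than 2 pages left: classic single-page TLBI */
			printf("TLBI classic: 1 page\n");
			pages--;
		}
	}
}

int main(void)
{
	flush_range_sketch(2097155);	/* MAX_TLBI_RANGE_PAGES + 3 */
	return 0;
}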