From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH v2 2/2] arm64: tlb: Use the TLBI RANGE feature in arm64
To: Catalin Marinas
References: <20200710094420.517-1-yezhenyu2@huawei.com>
 <20200710094420.517-3-yezhenyu2@huawei.com>
 <20200714103629.GA18793@gaia>
From: Zhenyu Ye
Date: Tue, 14 Jul 2020 21:51:42 +0800
In-Reply-To: <20200714103629.GA18793@gaia>

On 2020/7/14 18:36, Catalin Marinas wrote:
> On Fri, Jul 10, 2020 at 05:44:20PM +0800, Zhenyu Ye wrote:
>> +#define __TLBI_RANGE_PAGES(num, scale)	(((num) + 1) << (5 * (scale) + 1))
>> +#define MAX_TLBI_RANGE_PAGES		__TLBI_RANGE_PAGES(31, 3)
>> +
>> +#define TLBI_RANGE_MASK			GENMASK_ULL(4, 0)
>> +#define __TLBI_RANGE_NUM(range, scale)	\
>> +	(((range) >> (5 * (scale) + 1)) & TLBI_RANGE_MASK)
> [...]
>> +	int num = 0;
>> +	int scale = 0;
> [...]
>> +		start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT;
> [...]
>
> Since num is an int, __TLBI_RANGE_PAGES is still an int.
> Shifting it by PAGE_SHIFT can overflow as the maximum would be 8GB for
> 4K pages (or 128GB for 64K pages). I think we probably get away with
> this because of some implicit type conversion but I'd rather make
> __TLBI_RANGE_PAGES an explicit unsigned long:
>
> #define __TLBI_RANGE_PAGES(num, scale)	((unsigned long)((num) + 1) << (5 * (scale) + 1))
>

This is valuable and I will update it in the next series, together with
the check for binutils (or encoding the instructions by hand), as soon
as possible.

> Without this change, the CBMC check fails (see below for the test). In
> the kernel, we don't have this problem as we encode the address via
> __TLBI_VADDR_RANGE and it doesn't overflow.
>
> The good part is that CBMC reckons the algorithm is correct ;).

Thanks for your test!

Zhenyu