Subject: Re: [PATCH v2 2/2] arm64: hugetlb: enable __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Catalin Marinas
Cc: Andrew Morton, Will Deacon, Mike Kravetz, Muchun Song, Mina Almasry, <21cnbao@gmail.com>
Date: Tue, 1 Aug 2023 19:22:09 +0800
Message-ID: <4b5a3cfb-e13d-4df4-c08a-fb176cc2cbf6@huawei.com>
References: <20230801023145.17026-1-wangkefeng.wang@huawei.com> <20230801023145.17026-3-wangkefeng.wang@huawei.com>
On 2023/8/1 19:09, Catalin Marinas wrote:
> On Tue, Aug 01, 2023 at 10:31:45AM +0800, Kefeng Wang wrote:
>> +#define __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
>> +static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
>> +					   unsigned long start,
>> +					   unsigned long end)
>> +{
>> +	unsigned long stride = huge_page_size(hstate_vma(vma));
>> +
>> +	if (stride != PMD_SIZE && stride != PUD_SIZE)
>> +		stride = PAGE_SIZE;
>> +	__flush_tlb_range(vma, start, end, stride, false, 0);
>
> We could use some hints here for the tlb_level (2 for pmd, 1 for pud).
> Regarding the last_level argument to __flush_tlb_range(), I think it
> needs to stay false since this function is also called on the
> hugetlb_unshare_pmds() path where the pud is cleared and needs
> invalidating.
>
> That said, maybe you can rewrite it as a switch statement and call
> flush_pmd_tlb_range() or flush_pud_tlb_range() (just make sure these are
> defined when CONFIG_HUGETLBFS is enabled).

How about this way, without involving THP?

diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index e5c2e3dd9cf0..a7ce59d3388e 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -66,10 +66,22 @@ static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
 					   unsigned long end)
 {
 	unsigned long stride = huge_page_size(hstate_vma(vma));
+	int tlb_level = 0;
 
-	if (stride != PMD_SIZE && stride != PUD_SIZE)
+	switch (stride) {
+#ifndef __PAGETABLE_PMD_FOLDED
+	case PUD_SIZE:
+		tlb_level = 1;
+		break;
+#endif
+	case PMD_SIZE:
+		tlb_level = 2;
+		break;
+	default:
 		stride = PAGE_SIZE;
-	__flush_tlb_range(vma, start, end, stride, false, 0);
+	}
+
+	__flush_tlb_range(vma, start, end, stride, false, tlb_level);
 }