From: Barry Song <21cnbao@gmail.com>
Date: Mon, 31 Jul 2023 16:33:36 +0800
Subject: Re: [PATCH 4/4] arm64: tlb: set huge page size to stride for hugepage
To: Kefeng Wang
Cc: Andrew Morton, Catalin Marinas, Will Deacon, Mike Kravetz, Muchun Song, Mina Almasry, kirill@shutemov.name, joel@joelfernandes.org, william.kucharski@oracle.com, kaleshsingh@google.com, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
In-Reply-To: <20230731074829.79309-5-wangkefeng.wang@huawei.com>
References: <20230731074829.79309-1-wangkefeng.wang@huawei.com> <20230731074829.79309-5-wangkefeng.wang@huawei.com>

On Mon, Jul 31, 2023 at 4:14 PM Kefeng Wang wrote:
>
> It is better to use huge_page_size() for a hugepage (HugeTLB) than
> PAGE_SIZE as the stride, as is already done in flush_pmd/pud_tlb_range();
> it reduces the number of loop iterations in __flush_tlb_range().
>
> Signed-off-by: Kefeng Wang
> ---
>  arch/arm64/include/asm/tlbflush.h | 21 +++++++++++----------
>  1 file changed, 11 insertions(+), 10 deletions(-)
>
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index 412a3b9a3c25..25e35e6f8093 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -360,16 +360,17 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
>  	dsb(ish);
>  }
>
> -static inline void flush_tlb_range(struct vm_area_struct *vma,
> -				   unsigned long start, unsigned long end)
> -{
> -	/*
> -	 * We cannot use leaf-only invalidation here, since we may be invalidating
> -	 * table entries as part of collapsing hugepages or moving page tables.
> -	 * Set the tlb_level to 0 because we can not get enough information here.
> -	 */
> -	__flush_tlb_range(vma, start, end, PAGE_SIZE, false, 0);
> -}
> +/*
> + * We cannot use leaf-only invalidation here, since we may be invalidating
> + * table entries as part of collapsing hugepages or moving page tables.
> + * Set the tlb_level to 0 because we can not get enough information here.
> + */
> +#define flush_tlb_range(vma, start, end)				\
> +	__flush_tlb_range(vma, start, end,				\
> +			  ((vma)->vm_flags & VM_HUGETLB)		\
> +			  ? huge_page_size(hstate_vma(vma))		\
> +			  : PAGE_SIZE, false, 0)
> +

Seems like a good idea. I wonder if a better implementation would build
on MMU_GATHER_PAGE_SIZE; in that case, we could support a larger stride
for other large folios as well, such as THP.

>
> static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
> {
> --
> 2.41.0
>

Thanks
Barry