From: Barry Song <21cnbao@gmail.com>
Date: Wed, 21 Sep 2022 19:15:10 +1200
Subject: Re: [PATCH v3 4/4] arm64: support batched/deferred tlb shootdown during page reclamation
To: Anshuman Khandual
Cc: Yicong Yang, akpm@linux-foundation.org, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
    x86@kernel.org, catalin.marinas@arm.com, will@kernel.org, linux-doc@vger.kernel.org, corbet@lwn.net, peterz@infradead.org, arnd@arndb.de, linux-kernel@vger.kernel.org, darren@os.amperecomputing.com, yangyicong@hisilicon.com, huzhanyuan@oppo.com, lipeifeng@oppo.com, zhangshiming@oppo.com, guojian@oppo.com, realmz6@gmail.com, linux-mips@vger.kernel.org, openrisc@lists.librecores.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, wangkefeng.wang@huawei.com, xhao@linux.alibaba.com, prime.zeng@hisilicon.com, Barry Song, Nadav Amit, Mel Gorman

On Wed, Sep 21, 2022 at 6:53 PM Anshuman Khandual wrote:
>
>
> On 8/22/22 13:51, Yicong Yang wrote:
> > +static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
> > +                                        struct mm_struct *mm,
> > +                                        unsigned long uaddr)
> > +{
> > +	__flush_tlb_page_nosync(mm, uaddr);
> > +}
> > +
> > +static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
> > +{
> > +	dsb(ish);
> > +}
>
> Just wondering if arch_tlbbatch_add_mm() could also detect contiguous mapping
> TLB invalidation requests on a given mm and try to generate a range based TLB
> invalidation such as flush_tlb_range().
>
> struct arch_tlbflush_unmap_batch via task->tlb_ubc->arch can track contiguous
> ranges while they are being queued up via arch_tlbbatch_add_mm(), and any range
> formed can later be flushed in the subsequent arch_tlbbatch_flush() ?
>
> OR
>
> It might not be worth the effort and complexity, in comparison to the
> performance improvement a TLB range flush would bring in ?
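For concreteness, the range tracking you describe might look roughly like the
sketch below. This is only an illustration, not something from the posted
patch: the fields in struct arch_tlbflush_unmap_batch and the
__flush_tlb_range_nosync() helper are invented here for the sake of the
example.

/*
 * Hypothetical sketch of coalescing contiguous invalidation requests
 * in the batch structure. Neither these fields nor the
 * __flush_tlb_range_nosync() helper exist in the posted patch; they
 * only illustrate the idea.
 */
struct arch_tlbflush_unmap_batch {
	struct mm_struct *mm;	/* mm the pending range belongs to */
	unsigned long start;	/* first address of the pending range */
	unsigned long end;	/* one past the last pending address */
};

static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
					struct mm_struct *mm,
					unsigned long uaddr)
{
	unsigned long addr = uaddr & PAGE_MASK;

	if (batch->mm == mm && addr == batch->end) {
		/* Contiguous with the pending range: just extend it. */
		batch->end += PAGE_SIZE;
		return;
	}

	/* Discontiguous request (or a different mm): issue the pending range... */
	if (batch->mm)
		__flush_tlb_range_nosync(batch->mm, batch->start, batch->end);

	/* ...and start tracking a new one. */
	batch->mm = mm;
	batch->start = addr;
	batch->end = addr + PAGE_SIZE;
}

static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
{
	/* Issue whatever range is still pending, then wait for completion. */
	if (batch->mm) {
		__flush_tlb_range_nosync(batch->mm, batch->start, batch->end);
		batch->mm = NULL;
	}
	dsb(ish);
}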
Probably it is not worth the complexity, though. perf annotate shows that 95%
of the cpu time of ptep_clear_flush() is actually spent in the final dsb(),
waiting for the completion of the TLB flush. So any further optimization
before the dsb(ish) might bring some improvement, but it seems minor.

Thanks
Barry