From: Barry Song <21cnbao@gmail.com>
Date: Wed, 21 Sep 2022 20:54:35 +1200
Subject: Re: [PATCH v4 1/2] mm/tlbbatch: Introduce arch_tlbbatch_should_defer()
In-Reply-To: <20220921084302.43631-2-yangyicong@huawei.com>
References: <20220921084302.43631-1-yangyicong@huawei.com> <20220921084302.43631-2-yangyicong@huawei.com>
To: Yicong Yang
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
    x86@kernel.org, catalin.marinas@arm.com, will@kernel.org, linux-doc@vger.kernel.org,
    corbet@lwn.net, peterz@infradead.org, arnd@arndb.de, linux-kernel@vger.kernel.org,
    darren@os.amperecomputing.com, yangyicong@hisilicon.com, huzhanyuan@oppo.com,
    lipeifeng@oppo.com, zhangshiming@oppo.com, guojian@oppo.com, realmz6@gmail.com,
    linux-mips@vger.kernel.org, openrisc@lists.librecores.org, linuxppc-dev@lists.ozlabs.org,
    linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, wangkefeng.wang@huawei.com,
    xhao@linux.alibaba.com, prime.zeng@hisilicon.com, anshuman.khandual@arm.com,
    Anshuman Khandual
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"

On Wed, Sep 21, 2022 at 8:45 PM Yicong Yang wrote:
>
> From: Anshuman Khandual
>
> The entire scheme of deferred TLB flush in reclaim path rests on the
> fact that the cost to refill TLB entries is less than flushing out
> individual entries by sending IPI to remote CPUs. But architecture
> can have different ways to evaluate that. Hence apart from checking
> TTU_BATCH_FLUSH in the TTU flags, rest of the decision should be
> architecture specific.
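
(Aside, purely illustrative and not part of this series: since the
deciding factor above is the IPI cost of remote flushes, an architecture
whose hardware broadcasts TLB invalidations, and so never sends IPIs for
them, could make the new hook trivially cheap. A minimal sketch follows;
no claim is made that the arm64 patch in this series looks like this.)

static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
{
	/*
	 * Sketch only: with hardware-broadcast invalidation there is no
	 * IPI cost to weigh, so batching the flush is always worthwhile.
	 */
	return true;
}
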
>
> Signed-off-by: Anshuman Khandual
> [https://lore.kernel.org/linuxppc-dev/20171101101735.2318-2-khandual@linux.vnet.ibm.com/]
> Signed-off-by: Yicong Yang
> [Rebase and fix incorrect return value type]
> Reviewed-by: Kefeng Wang
> Reviewed-by: Anshuman Khandual
> ---

Reviewed-by: Barry Song

>  arch/x86/include/asm/tlbflush.h | 12 ++++++++++++
>  mm/rmap.c                       |  9 +--------
>  2 files changed, 13 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> index cda3118f3b27..8a497d902c16 100644
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -240,6 +240,18 @@ static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
>  	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, PAGE_SHIFT, false);
>  }
>
> +static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
> +{
> +	bool should_defer = false;
> +
> +	/* If remote CPUs need to be flushed then defer batch the flush */
> +	if (cpumask_any_but(mm_cpumask(mm), get_cpu()) < nr_cpu_ids)
> +		should_defer = true;
> +	put_cpu();
> +
> +	return should_defer;
> +}
> +
>  static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
>  {
>  	/*
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 93d5a6f793d2..cd8cf5cb0b01 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -690,17 +690,10 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
>   */
>  static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
>  {
> -	bool should_defer = false;
> -
>  	if (!(flags & TTU_BATCH_FLUSH))
>  		return false;
>
> -	/* If remote CPUs need to be flushed then defer batch the flush */
> -	if (cpumask_any_but(mm_cpumask(mm), get_cpu()) < nr_cpu_ids)
> -		should_defer = true;
> -	put_cpu();
> -
> -	return should_defer;
> +	return arch_tlbbatch_should_defer(mm);
>  }
>
>  /*
> --
> 2.24.0
>
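
(A second aside for readers of the x86 hunk above, restating its cpumask
test as a stand-alone helper; the helper name is invented here purely for
illustration and exists nowhere in the kernel:)

static inline bool mm_has_remote_cpus(struct mm_struct *mm)
{
	int cpu = get_cpu();	/* current CPU, preemption disabled */
	bool remote;

	/*
	 * cpumask_any_but() returns a value >= nr_cpu_ids when no CPU other
	 * than 'cpu' is set in mm_cpumask(mm). A smaller value means some
	 * other CPU may still hold TLB entries for this mm, which is exactly
	 * when deferring and batching the flush can pay off.
	 */
	remote = cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids;
	put_cpu();

	return remote;
}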