References: <20221117082648.47526-1-yangyicong@huawei.com> <20221117082648.47526-3-yangyicong@huawei.com>
From: Barry Song <21cnbao@gmail.com>
Date: Sun, 8 Jan 2023 18:48:41 +0800
Subject: Re: [PATCH v7 2/2] arm64: support batched/deferred tlb shootdown during page reclamation
To: Catalin Marinas
Cc: Yicong Yang, akpm@linux-foundation.org, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, x86@kernel.org, will@kernel.org, anshuman.khandual@arm.com, linux-doc@vger.kernel.org, corbet@lwn.net, peterz@infradead.org, arnd@arndb.de, punit.agrawal@bytedance.com, linux-kernel@vger.kernel.org, darren@os.amperecomputing.com, yangyicong@hisilicon.com, huzhanyuan@oppo.com, lipeifeng@oppo.com, zhangshiming@oppo.com, guojian@oppo.com, realmz6@gmail.com, linux-mips@vger.kernel.org, openrisc@lists.librecores.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, wangkefeng.wang@huawei.com, xhao@linux.alibaba.com, prime.zeng@hisilicon.com, Barry Song, Nadav Amit, Mel Gorman
On Fri, Jan 6, 2023 at 2:15 AM Catalin Marinas wrote:
>
> On Thu, Nov 17, 2022 at 04:26:48PM +0800, Yicong Yang wrote:
> > It is tested on 4,8,128 CPU platforms and shows to be beneficial on
> > large systems but may not have improvement on small systems like on
> > a 4 CPU platform. So make ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH depends
> > on CONFIG_EXPERT for this stage and make this disabled on systems
> > with less than 8 CPUs. User can modify this threshold according to
> > their own platforms by CONFIG_NR_CPUS_FOR_BATCHED_TLB.
>
> What's the overhead of such batching on systems with 4 or fewer CPUs? If
> it isn't noticeable, I'd rather have it always on than some number
> chosen on whichever SoC you tested.

On the one hand, a TLB flush is cheap on a small system, so batching
TLB flushes brings only a minor benefit there. On the other hand, once
we have batched the TLB flush, new PTEs might be invisible to other CPUs
before the final broadcast is done and acknowledged. Thus there is a
risk that someone else does mprotect() or something similar on those
deferred pages, which requires a read-modify-write on the deferred PTEs.
In that case, mm does an explicit flush via flush_tlb_batched_pending(),
which would not be required if the TLB flush had not been deferred. The
code is in:

static unsigned long change_pte_range(struct mmu_gather *tlb,
		struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
{
	...
	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
	flush_tlb_batched_pending(vma->vm_mm);
	arch_enter_lazy_mmu_mode();
	do {
		oldpte = *pte;
		if (pte_present(oldpte)) {
			pte_t ptent;
	...
}

Since we have no mechanism to record which pages should be flushed in
flush_tlb_batched_pending(), flush_tlb_batched_pending() flushes the
whole process:

void flush_tlb_batched_pending(struct mm_struct *mm)
{
	int batch = atomic_read(&mm->tlb_flush_batched);
	int pending = batch & TLB_FLUSH_BATCH_PENDING_MASK;
	int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;

	if (pending != flushed) {
		flush_tlb_mm(mm);
		/*
		 * If the new TLB flushing is pending during flushing, leave
		 * mm->tlb_flush_batched as is, to avoid losing flushing.
		 */
		atomic_cmpxchg(&mm->tlb_flush_batched, batch,
			       pending | (pending << TLB_FLUSH_BATCH_FLUSHED_SHIFT));
	}
}

I guess mprotect() and the like won't happen that often in a running
process, especially once the system has begun to reclaim its memory;
they are probably more frequent during process initialization. And x86
has had this feature enabled for a long time, so this concurrency
probably doesn't matter much, but it is still case by case. That is why
we decided to be more conservative about enabling this feature globally,
and why it also depends on CONFIG_EXPERT.
I believe Anshuman also contributed many points on this in the previous
discussions.

Thanks
Barry