From: Punit Agrawal <punit.agrawal@bytedance.com>
To: Yicong Yang
Cc: [...], Barry Song <21cnbao@gmail.com>, [...], Barry Song, Nadav Amit, Mel Gorman
Subject: Re: [PATCH v8 2/2] arm64: support batched/deferred tlb shootdown during page reclamation
References: <20230329035512.57392-1-yangyicong@huawei.com> <20230329035512.57392-3-yangyicong@huawei.com>
Date: Thu, 30 Mar 2023 14:15:18 +0100
In-Reply-To: <20230329035512.57392-3-yangyicong@huawei.com> (Yicong Yang's message of "Wed, 29 Mar 2023 11:55:12 +0800")
Message-ID: <87cz4qwfbt.fsf_-_@stealth>
Hi Yicong,

Yicong Yang writes:

> From: Barry Song
>
> On x86, batched and deferred tlb shootdown has led to a 90%
> performance increase on tlb shootdown. On arm64, the hardware can
> do tlb shootdown without a software IPI, but the synchronous tlbi
> is still quite expensive.
>
> Even running the simplest program that requires swapout shows
> this:
>
> #include <sys/mman.h>
> #include <string.h>
> #include <stdlib.h>
> #include <unistd.h>
>
> int main()
> {
> #define SIZE (1 * 1024 * 1024)
>         volatile unsigned char *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
>                                          MAP_SHARED | MAP_ANONYMOUS, -1, 0);
>
>         memset(p, 0x88, SIZE);
>
>         for (int k = 0; k < 10000; k++) {
>                 /* swap in */
>                 for (int i = 0; i < SIZE; i += 4096) {
>                         (void)p[i];
>                 }
>
>                 /* swap out */
>                 madvise(p, SIZE, MADV_PAGEOUT);
>         }
> }
>
> Perf result on a snapdragon 888 with 8 cores, using zRAM as the
> swap block device:
>
> ~ # perf record taskset -c 4 ./a.out
> [ perf record: Woken up 10 times to write data ]
> [ perf record: Captured and wrote 2.297 MB perf.data (60084 samples) ]
> ~ # perf report
> # To display the perf.data header info, please use --header/--header-only options.
> #
> # Total Lost Samples: 0
> #
> # Samples: 60K of event 'cycles'
> # Event count (approx.): 35706225414
> #
> # Overhead  Command  Shared Object      Symbol
> # ........  .......  .................  ..........................................
> #
>    21.07%  a.out  [kernel.kallsyms]  [k] _raw_spin_unlock_irq
>     8.23%  a.out  [kernel.kallsyms]  [k] _raw_spin_unlock_irqrestore
>     6.67%  a.out  [kernel.kallsyms]  [k] filemap_map_pages
>     6.16%  a.out  [kernel.kallsyms]  [k] __zram_bvec_write
>     5.36%  a.out  [kernel.kallsyms]  [k] ptep_clear_flush
>     3.71%  a.out  [kernel.kallsyms]  [k] _raw_spin_lock
>     3.49%  a.out  [kernel.kallsyms]  [k] memset64
>     1.63%  a.out  [kernel.kallsyms]  [k] clear_page
>     1.42%  a.out  [kernel.kallsyms]  [k] _raw_spin_unlock
>     1.26%  a.out  [kernel.kallsyms]  [k] mod_zone_state.llvm.8525150236079521930
>     1.23%  a.out  [kernel.kallsyms]  [k] xas_load
>     1.15%  a.out  [kernel.kallsyms]  [k] zram_slot_lock
>
> ptep_clear_flush() takes 5.36% of the CPU time in this
> micro-benchmark, which swaps a page mapped by only one process in
> and out. If the page is mapped by multiple processes (typically
> more than 100 on a phone), the overhead is much higher, since the
> tlb flush has to run once per mapping of a single page.
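As an aside for context: the per-mapping flush above comes from the
reclaim path in mm/rmap.c. A simplified sketch of that logic as I
read it (the wrapper clear_pte_and_flush() is my own name for
illustration; the helpers it calls are the existing ones):

  /*
   * Sketch of the choice reclaim makes for every process mapping
   * the page: either flush the tlb synchronously per mapping, or
   * clear the pte and queue the invalidation into a batch.
   */
  static pte_t clear_pte_and_flush(struct vm_area_struct *vma,
                                   struct mm_struct *mm,
                                   unsigned long address, pte_t *pte,
                                   enum ttu_flags flags)
  {
          pte_t pteval;

          if (should_defer_flush(mm, flags)) {
                  /* Clear the pte, but only queue the tlb invalidation. */
                  pteval = ptep_get_and_clear(mm, address, pte);
                  set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
          } else {
                  /* Clear the pte and wait for the flush to complete. */
                  pteval = ptep_clear_flush(vma, address, pte);
          }

          return pteval;
  }

Without the deferred path, the cost is linear in the number of
mappings, which is the scaling problem described above.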
> Plus, the tlb flush overhead increases with the number of CPU
> cores due to the bad scalability of tlb shootdown in hardware, so
> arm64 servers should expect much higher overhead.
>
> Further perf annotate shows that 95% of the cpu time in
> ptep_clear_flush() is actually spent in the final dsb(), waiting
> for the tlb flush to complete. This gives us a very good chance to
> leverage the existing batched tlb infrastructure in the kernel:
> the minimum modification is to send only an async tlbi in the
> first stage, and to send the dsb in the second stage, when we have
> to synchronize.

(I've put a rough sketch of this two-stage split at the end of this
mail for readers new to the series.)

> With the micro-benchmark above, the elapsed time to finish the
> program decreases by around 5%.
>
> Typical elapsed time w/o patch:
> ~ # time taskset -c 4 ./a.out
> 0.21user 14.34system 0:14.69elapsed
> w/ patch:
> ~ # time taskset -c 4 ./a.out
> 0.22user 13.45system 0:13.80elapsed
>
> Also, Yicong Yang added the following observation: tested with the
> benchmark in the commit on a Kunpeng920 arm64 server, and observed
> an improvement of around 12.5% with the command `time ./swap_bench`.
>
>         w/o         w/
> real    0m13.460s   0m11.771s
> user    0m0.248s    0m0.279s
> sys     0m12.039s   0m11.458s
>
> Originally a 16.99% overhead from ptep_clear_flush() was observed,
> which this patch eliminates:
>
> [root@localhost yang]# perf record -- ./swap_bench && perf report
> [...]
> 16.99%  swap_bench  [kernel.kallsyms]  [k] ptep_clear_flush
>
> This was tested on 4, 8 and 128 CPU platforms; it is beneficial on
> large systems but may show no improvement on small systems such as
> a 4 CPU platform. So make ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH depend
> on CONFIG_EXPERT for this stage, and disable it on systems with
> fewer than 8 CPUs. Users can modify this threshold for their own
> platforms via CONFIG_NR_CPUS_FOR_BATCHED_TLB.

The commit log and the patch disagree on the name of the config
option (CONFIG_NR_CPUS_FOR_BATCHED_TLB vs
CONFIG_ARM64_NR_CPUS_FOR_BATCHED_TLB).

But more importantly, I was wondering why this posting doesn't
address Catalin's feedback [a] about using a runtime tunable. Maybe
I missed the follow-up discussion.

Thanks,
Punit

[a] https://lore.kernel.org/linux-mm/Y7xMhPTAwcUT4O6b@arm.com/

> Also, this patch improves the performance of page migration. Using
> pmbench and migrating the pages of pmbench between node 0 and node
> 1 twenty times, this patch decreases the time used by more than
> 50% and saves the time previously spent in ptep_clear_flush().
>
> This patch extends arch_tlbbatch_add_mm() to take the address of
> the target page, to support the feature on arm64. It also renames
> the function to arch_tlbbatch_add_pending() to better match what
> it does, since we don't need to handle the mm on arm64 and
> "add_mm" is no longer apt. "add_pending" makes sense on both
> architectures: on x86 we are pending the tlb flush operations,
> while on arm64 we are pending the synchronization operations.
>
> Cc: Anshuman Khandual
> Cc: Jonathan Corbet
> Cc: Nadav Amit
> Cc: Mel Gorman
> Tested-by: Yicong Yang
> Tested-by: Xin Hao
> Tested-by: Punit Agrawal
> Signed-off-by: Barry Song
> Signed-off-by: Yicong Yang
> Reviewed-by: Kefeng Wang
> Reviewed-by: Xin Hao
> Reviewed-by: Anshuman Khandual
> ---
>  .../features/vm/TLB/arch-support.txt |  2 +-
>  arch/arm64/Kconfig                   |  6 +++
>  arch/arm64/include/asm/tlbbatch.h    | 12 +++++
>  arch/arm64/include/asm/tlbflush.h    | 52 ++++++++++++++++++-
>  arch/x86/include/asm/tlbflush.h      |  5 +-
>  include/linux/mm_types_task.h        |  4 +-
>  mm/rmap.c                            | 12 +++--
>  7 files changed, 81 insertions(+), 12 deletions(-)
>  create mode 100644 arch/arm64/include/asm/tlbbatch.h

[...]
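P.S. The promised sketch of the two-stage split described in the
log, as I understand it. This is a rough illustration rather than
the literal patch: the hook names follow the renamed API mentioned
above, and __flush_tlb_page_nosync() stands in for "per-page tlbi
without the trailing dsb".

  /*
   * Stage 1 (sketch): queueing a pending flush on arm64 issues the
   * tlbi for the page immediately but skips the dsb, so we do not
   * stall for completion on every page.
   */
  static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
                                               struct mm_struct *mm,
                                               unsigned long uaddr)
  {
          __flush_tlb_page_nosync(mm, uaddr);     /* async tlbi, no sync */
  }

  /*
   * Stage 2 (sketch): flushing the batch is then a single barrier
   * that waits for all outstanding invalidations at once.
   */
  static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
  {
          dsb(ish);       /* one wait for the whole batch */
  }

The win is that the dsb(), which dominates ptep_clear_flush() in the
perf numbers above, is paid once per batch instead of once per page.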