From: Punit Agrawal <punit.agrawal@bytedance.com>
To: Yicong Yang <yangyicong@huawei.com>
Cc: <akpm@linux-foundation.org>, <linux-mm@kvack.org>,
<linux-arm-kernel@lists.infradead.org>, <x86@kernel.org>,
<catalin.marinas@arm.com>, <will@kernel.org>,
<anshuman.khandual@arm.com>, <linux-doc@vger.kernel.org>,
<corbet@lwn.net>, <peterz@infradead.org>, <arnd@arndb.de>,
<punit.agrawal@bytedance.com>, <linux-kernel@vger.kernel.org>,
<darren@os.amperecomputing.com>, <yangyicong@hisilicon.com>,
<huzhanyuan@oppo.com>, <lipeifeng@oppo.com>,
<zhangshiming@oppo.com>, <guojian@oppo.com>,
<realmz6@gmail.com>, <linux-mips@vger.kernel.org>,
<openrisc@lists.librecores.org>, <linuxppc-dev@lists.ozlabs.org>,
<linux-riscv@lists.infradead.org>, <linux-s390@vger.kernel.org>,
Barry Song <21cnbao@gmail.com>, <wangkefeng.wang@huawei.com>,
<xhao@linux.alibaba.com>, <prime.zeng@hisilicon.com>,
<Jonathan.Cameron@Huawei.com>,
Barry Song <v-songbaohua@oppo.com>,
Nadav Amit <namit@vmware.com>, Mel Gorman <mgorman@suse.de>
Subject: Re: [PATCH v8 2/2] arm64: support batched/deferred tlb shootdown during page reclamation
Date: Thu, 30 Mar 2023 14:15:18 +0100 [thread overview]
Message-ID: <87cz4qwfbt.fsf_-_@stealth> (raw)
In-Reply-To: <20230329035512.57392-3-yangyicong@huawei.com> (Yicong Yang's message of "Wed, 29 Mar 2023 11:55:12 +0800")
Hi Yicong,
Yicong Yang <yangyicong@huawei.com> writes:
> From: Barry Song <v-songbaohua@oppo.com>
>
> On x86, batched and deferred TLB shootdown has led to a 90%
> performance increase for TLB shootdown. On arm64, the hardware can
> do TLB shootdown without a software IPI, but the synchronous tlbi
> is still quite expensive.
>
> Even running the simplest program that requires swap-out
> demonstrates this:
> #include <sys/types.h>
> #include <unistd.h>
> #include <sys/mman.h>
> #include <string.h>
>
> int main()
> {
> #define SIZE (1 * 1024 * 1024)
>         volatile unsigned char *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
>                                          MAP_SHARED | MAP_ANONYMOUS, -1, 0);
>
>         /* Cast away volatile: memset() and madvise() take plain
>          * void *, and passing a volatile pointer is a constraint
>          * violation. */
>         memset((void *)p, 0x88, SIZE);
>
>         for (int k = 0; k < 10000; k++) {
>                 /* swap in */
>                 for (int i = 0; i < SIZE; i += 4096) {
>                         (void)p[i];
>                 }
>
>                 /* swap out */
>                 madvise((void *)p, SIZE, MADV_PAGEOUT);
>         }
> }
>
> Perf result on a Snapdragon 888 with 8 cores, using zRAM as the
> swap block device:
>
> ~ # perf record taskset -c 4 ./a.out
> [ perf record: Woken up 10 times to write data ]
> [ perf record: Captured and wrote 2.297 MB perf.data (60084 samples) ]
> ~ # perf report
> # To display the perf.data header info, please use --header/--header-only options.
> #
> #
> # Total Lost Samples: 0
> #
> # Samples: 60K of event 'cycles'
> # Event count (approx.): 35706225414
> #
> # Overhead Command Shared Object Symbol
> # ........ ....... ................. .............................................................................
> #
> 21.07% a.out [kernel.kallsyms] [k] _raw_spin_unlock_irq
> 8.23% a.out [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
> 6.67% a.out [kernel.kallsyms] [k] filemap_map_pages
> 6.16% a.out [kernel.kallsyms] [k] __zram_bvec_write
> 5.36% a.out [kernel.kallsyms] [k] ptep_clear_flush
> 3.71% a.out [kernel.kallsyms] [k] _raw_spin_lock
> 3.49% a.out [kernel.kallsyms] [k] memset64
> 1.63% a.out [kernel.kallsyms] [k] clear_page
> 1.42% a.out [kernel.kallsyms] [k] _raw_spin_unlock
> 1.26% a.out [kernel.kallsyms] [k] mod_zone_state.llvm.8525150236079521930
> 1.23% a.out [kernel.kallsyms] [k] xas_load
> 1.15% a.out [kernel.kallsyms] [k] zram_slot_lock
>
> ptep_clear_flush() accounts for 5.36% of CPU time in this
> micro-benchmark, which swaps a page mapped by only one process in
> and out. If the page is mapped by multiple processes (typically
> more than 100 on a phone), the overhead is much higher, since the
> TLB flush must run once per mapping for a single page. TLB flush
> overhead also grows with the number of CPU cores because of the
> poor scalability of TLB shootdown in hardware, so arm64 servers
> should expect much higher overhead.
>
> Further, perf annotate shows that 95% of the CPU time in
> ptep_clear_flush() is actually spent in the final dsb(), waiting
> for the TLB flush to complete. This gives us a very good chance to
> leverage the existing batched TLB machinery in the kernel. The
> minimal modification is to issue only the asynchronous tlbi in the
> first stage, and to issue the dsb in the second stage, when we have
> to synchronize.
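For context, here is a minimal sketch of what this two-stage split
amounts to on arm64. The helper names follow the patch's diffstat, but
the bodies are my reading of the approach rather than the posted diff;
in particular, the nosync per-page flush helper is an assumption:

  /*
   * Sketch only, not the actual patch. Stage one: broadcast a
   * per-page invalidation but do not wait for it to complete
   * (no trailing dsb).
   */
  static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
                                               struct mm_struct *mm,
                                               unsigned long uaddr)
  {
          /* Assumed helper: per-page tlbi without the synchronising dsb. */
          __flush_tlb_page_nosync(mm, uaddr);
  }

  /*
   * Stage two: a single barrier waits for all of the queued
   * invalidations at once, amortising the expensive dsb across
   * the whole batch.
   */
  static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
  {
          dsb(ish);
  }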
>
> With the simple micro-benchmark above, the elapsed time to finish
> the program decreases by around 5%.
>
> Typical elapsed time w/o patch:
> ~ # time taskset -c 4 ./a.out
> 0.21user 14.34system 0:14.69elapsed
> w/ patch:
> ~ # time taskset -c 4 ./a.out
> 0.22user 13.45system 0:13.80elapsed
>
> Yicong Yang added the following observation: tested with the
> benchmark in this commit on a Kunpeng 920 arm64 server, an
> improvement of around 12.5% was observed with `time ./swap_bench`.
>
>          w/o          w/
> real     0m13.460s    0m11.771s
> user     0m0.248s     0m0.279s
> sys      0m12.039s    0m11.458s
>
> Originally, a 16.99% overhead from ptep_clear_flush() was
> observed, which this patch eliminates:
>
> [root@localhost yang]# perf record -- ./swap_bench && perf report
> [...]
> 16.99% swap_bench [kernel.kallsyms] [k] ptep_clear_flush
>
> This was tested on 4-, 8-, and 128-CPU platforms; it is beneficial
> on large systems but may show no improvement on small ones, such as
> a 4-CPU platform. So, for now, make
> ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH depend on CONFIG_EXPERT and
> disable the feature on systems with fewer than 8 CPUs. Users can
> adjust this threshold for their own platforms via
> CONFIG_NR_CPUS_FOR_BATCHED_TLB.
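For reference in the point below, I would expect the threshold to be
consumed by arch_tlbbatch_should_defer() from patch 1/2, roughly along
the following lines. This is a sketch of the shape, not the posted
code, and the config symbol is one of the two variants in question:

  static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
  {
          /*
           * Assumed shape: only defer (batch) TLB flushes on systems
           * large enough for the single amortised dsb to win. The
           * symbol name is illustrative; see the naming mismatch
           * noted below.
           */
          if (num_online_cpus() <= CONFIG_ARM64_NR_CPUS_FOR_BATCHED_TLB)
                  return false;

          return true;
  }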
The commit log and the patch disagree on the name of the config option
(CONFIG_NR_CPUS_FOR_BATCHED_TLB vs CONFIG_ARM64_NR_CPUS_FOR_BATCHED_TLB).
But more importantly, I was wondering why this posting doesn't address
Catalin's feedback [a] about using a runtime tunable. Maybe I missed the
follow-up discussion.
Thanks,
Punit
[a] https://lore.kernel.org/linux-mm/Y7xMhPTAwcUT4O6b@arm.com/
> This patch also improves page migration performance. Migrating
> pmbench's pages between node 0 and node 1 twenty times, the patch
> reduces the time taken by more than 50% by saving the time spent in
> ptep_clear_flush().
>
> To support the feature on arm64, this patch extends
> arch_tlbbatch_add_mm() to take the address of the target page, and
> renames it to arch_tlbbatch_add_pending() to better match its
> function: arm64 does not need to handle the mm, so "add_mm" no
> longer fits. "add_pending" makes sense for both architectures,
> since on x86 we defer the TLB flush operations while on arm64 we
> defer the synchronization.
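In other words, the cross-architecture hook changes shape as below
(signatures paraphrased from the description above; x86 can simply
ignore the new address argument):

  /* Before: mm granularity. */
  void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
                            struct mm_struct *mm);

  /* After: also carries the page address, which arm64 needs for
   * its per-page broadcast invalidation.
   */
  void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
                                 struct mm_struct *mm,
                                 unsigned long uaddr);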
>
> Cc: Anshuman Khandual <anshuman.khandual@arm.com>
> Cc: Jonathan Corbet <corbet@lwn.net>
> Cc: Nadav Amit <namit@vmware.com>
> Cc: Mel Gorman <mgorman@suse.de>
> Tested-by: Yicong Yang <yangyicong@hisilicon.com>
> Tested-by: Xin Hao <xhao@linux.alibaba.com>
> Tested-by: Punit Agrawal <punit.agrawal@bytedance.com>
> Signed-off-by: Barry Song <v-songbaohua@oppo.com>
> Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
> Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> Reviewed-by: Xin Hao <xhao@linux.alibaba.com>
> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
> .../features/vm/TLB/arch-support.txt | 2 +-
> arch/arm64/Kconfig | 6 +++
> arch/arm64/include/asm/tlbbatch.h | 12 +++++
> arch/arm64/include/asm/tlbflush.h | 52 ++++++++++++++++++-
> arch/x86/include/asm/tlbflush.h | 5 +-
> include/linux/mm_types_task.h | 4 +-
> mm/rmap.c | 12 +++--
> 7 files changed, 81 insertions(+), 12 deletions(-)
> create mode 100644 arch/arm64/include/asm/tlbbatch.h
[...]