From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yicong Yang <yangyicong@huawei.com>
To: Punit Agrawal
Cc: Barry Song <21cnbao@gmail.com>, Barry Song, Nadav Amit, Mel Gorman,
 linux-mm@kvack.org
Subject: Re: [PATCH v8 2/2] arm64: support batched/deferred tlb shootdown
 during page reclamation
Date: Thu, 30 Mar 2023 21:45:46 +0800
Message-ID: <2687a998-6dbe-de8f-2f62-1456d2de7940@huawei.com>
In-Reply-To: <87cz4qwfbt.fsf_-_@stealth>
References: <20230329035512.57392-1-yangyicong@huawei.com>
 <20230329035512.57392-3-yangyicong@huawei.com>
 <87cz4qwfbt.fsf_-_@stealth>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Hi Punit,

On 2023/3/30 21:15, Punit Agrawal
wrote:
> Hi Yicong,
>
> Yicong Yang writes:
>
>> From: Barry Song
>>
>> On x86, batched and deferred tlb shootdown has led to a 90%
>> performance increase on tlb shootdown. On arm64, HW can do
>> tlb shootdown without a software IPI, but the sync tlbi is still
>> quite expensive.
>>
>> Even running the simplest program which requires swapout can
>> prove this is true:
>>
>> #include <sys/types.h>
>> #include <unistd.h>
>> #include <sys/mman.h>
>> #include <string.h>
>>
>> int main()
>> {
>> #define SIZE (1 * 1024 * 1024)
>> 	volatile unsigned char *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
>> 					 MAP_SHARED | MAP_ANONYMOUS, -1, 0);
>>
>> 	memset(p, 0x88, SIZE);
>>
>> 	for (int k = 0; k < 10000; k++) {
>> 		/* swap in */
>> 		for (int i = 0; i < SIZE; i += 4096) {
>> 			(void)p[i];
>> 		}
>>
>> 		/* swap out */
>> 		madvise(p, SIZE, MADV_PAGEOUT);
>> 	}
>> }
>>
>> Perf result on snapdragon 888 with 8 cores, using zRAM
>> as the swap block device:
>>
>> ~ # perf record taskset -c 4 ./a.out
>> [ perf record: Woken up 10 times to write data ]
>> [ perf record: Captured and wrote 2.297 MB perf.data (60084 samples) ]
>> ~ # perf report
>> # To display the perf.data header info, please use --header/--header-only options.
>> #
>> # Total Lost Samples: 0
>> #
>> # Samples: 60K of event 'cycles'
>> # Event count (approx.): 35706225414
>> #
>> # Overhead  Command  Shared Object      Symbol
>> # ........  .......  .................  ......................................
>> #
>>  21.07%  a.out  [kernel.kallsyms]  [k] _raw_spin_unlock_irq
>>   8.23%  a.out  [kernel.kallsyms]  [k] _raw_spin_unlock_irqrestore
>>   6.67%  a.out  [kernel.kallsyms]  [k] filemap_map_pages
>>   6.16%  a.out  [kernel.kallsyms]  [k] __zram_bvec_write
>>   5.36%  a.out  [kernel.kallsyms]  [k] ptep_clear_flush
>>   3.71%  a.out  [kernel.kallsyms]  [k] _raw_spin_lock
>>   3.49%  a.out  [kernel.kallsyms]  [k] memset64
>>   1.63%  a.out  [kernel.kallsyms]  [k] clear_page
>>   1.42%  a.out  [kernel.kallsyms]  [k] _raw_spin_unlock
>>   1.26%  a.out  [kernel.kallsyms]  [k] mod_zone_state.llvm.8525150236079521930
>>   1.23%  a.out  [kernel.kallsyms]  [k] xas_load
>>   1.15%  a.out  [kernel.kallsyms]  [k] zram_slot_lock
>>
>> ptep_clear_flush() takes 5.36% of CPU time in this micro-benchmark,
>> swapping in/out a page mapped by only one process. If the
>> page is mapped by multiple processes (typically more than 100
>> on a phone), the overhead would be much higher, as we have to
>> run the tlb flush 100 times for one single page.
>> Plus, tlb flush overhead will increase with the number
>> of CPU cores due to the bad scalability of tlb shootdown
>> in HW, so ARM64 servers should expect much higher
>> overhead.
>>
>> Further perf annotate shows 95% of the cpu time of ptep_clear_flush()
>> is actually spent in the final dsb() waiting for the completion
>> of the tlb flush. This gives us a very good chance to leverage
>> the existing batched tlb support in the kernel. The minimal
>> modification is that we only send an async tlbi in the first stage,
>> and send the dsb when we have to sync in the second stage.
>>
>> With the above micro benchmark, the elapsed time to
>> finish the program decreases by around 5%.
>>
>> Typical elapsed time w/o patch:
>> ~ # time taskset -c 4 ./a.out
>> 0.21user 14.34system 0:14.69elapsed
>> w/ patch:
>> ~ # time taskset -c 4 ./a.out
>> 0.22user 13.45system 0:13.80elapsed
>>
>> Also, Yicong Yang added the following observation.
>> Tested with the benchmark in the commit on a Kunpeng920 arm64 server,
>> an improvement of around 12.5% was observed with the command
>> `time ./swap_bench`.
>>         w/o             w/
>> real    0m13.460s       0m11.771s
>> user    0m0.248s        0m0.279s
>> sys     0m12.039s       0m11.458s
>>
>> Originally a 16.99% overhead of ptep_clear_flush() was noticed,
>> which has been eliminated by this patch:
>>
>> [root@localhost yang]# perf record -- ./swap_bench && perf report
>> [...]
>> 16.99%  swap_bench  [kernel.kallsyms]  [k] ptep_clear_flush
>>
>> It was tested on 4-, 8- and 128-CPU platforms and shown to be
>> beneficial on large systems, but it may bring no improvement on small
>> systems such as a 4-CPU platform. So make
>> ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH depend on CONFIG_EXPERT for this
>> stage and disable it on systems with fewer than 8 CPUs. Users can
>> modify this threshold for their own platforms via
>> CONFIG_NR_CPUS_FOR_BATCHED_TLB.
>
> The commit log and the patch disagree on the name of the config option
> (CONFIG_NR_CPUS_FOR_BATCHED_TLB vs CONFIG_ARM64_NR_CPUS_FOR_BATCHED_TLB).
>

Ah yes, it's a typo and I'll fix it.

> But more importantly, I was wondering why this posting doesn't address
> Catalin's feedback [a] about using a runtime tunable. Maybe I missed the
> follow-up discussion.
>

I must have missed that, terribly sorry about it... Thanks for pointing
it out! Let me try to implement a version using a runtime tunable and
get back with some test results.

Thanks,
Yicong

> Thanks,
> Punit
>
> [a] https://lore.kernel.org/linux-mm/Y7xMhPTAwcUT4O6b@arm.com/
>
>> Also this patch improves the performance of page migration. Using
>> pmbench and migrating the pages of pmbench between node 0 and node 1
>> 20 times, this patch decreases the time used by more than 50% and
>> saves the time used by ptep_clear_flush().
>>
>> This patch extends arch_tlbbatch_add_mm() to take the address of the
>> target page to support the feature on arm64.
>> Also rename it to
>> arch_tlbbatch_add_pending() to better match its function, since we
>> don't need to handle the mm on arm64 and add_mm is no longer apt.
>> add_pending makes sense for both: on x86 we are pending the
>> TLB flush operations, while on arm64 we are pending the synchronize
>> operations.
>>
>> Cc: Anshuman Khandual
>> Cc: Jonathan Corbet
>> Cc: Nadav Amit
>> Cc: Mel Gorman
>> Tested-by: Yicong Yang
>> Tested-by: Xin Hao
>> Tested-by: Punit Agrawal
>> Signed-off-by: Barry Song
>> Signed-off-by: Yicong Yang
>> Reviewed-by: Kefeng Wang
>> Reviewed-by: Xin Hao
>> Reviewed-by: Anshuman Khandual
>> ---
>>  .../features/vm/TLB/arch-support.txt |  2 +-
>>  arch/arm64/Kconfig                   |  6 +++
>>  arch/arm64/include/asm/tlbbatch.h    | 12 +++++
>>  arch/arm64/include/asm/tlbflush.h    | 52 ++++++++++++++++++-
>>  arch/x86/include/asm/tlbflush.h      |  5 +-
>>  include/linux/mm_types_task.h        |  4 +-
>>  mm/rmap.c                            | 12 +++--
>>  7 files changed, 81 insertions(+), 12 deletions(-)
>>  create mode 100644 arch/arm64/include/asm/tlbbatch.h
>
>
> [...]
>
> .
>