Date: Tue, 24 May 2022 09:12:09 +0100
From: Catalin Marinas
To: Barry Song <21cnbao@gmail.com>
Cc: akpm@linux-foundation.org, will@kernel.org, linux-mm@kvack.org,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	hanchuanhua@oppo.com, zhangshiming@oppo.com, guojian@oppo.com,
	Barry Song, "Huang, Ying", Minchan Kim, Johannes Weiner,
	Hugh Dickins, Shaohua Li, Rik van Riel, Andrea Arcangeli,
	Steven Price
Subject: Re: [PATCH] arm64: enable THP_SWAP for arm64
In-Reply-To: <20220524071403.128644-1-21cnbao@gmail.com>

On Tue, May 24, 2022 at 07:14:03PM +1200, Barry Song wrote:
> From: Barry Song
>
> THP_SWAP has been proven to improve swap throughput significantly on
> x86_64, according to commit bd4c82c22c367e ("mm, THP, swap: delay
> splitting THP after swapped out").
> As long as arm64 uses a 4K page size, it is quite similar to x86_64
> in having 2MB PMD THPs, so we expect a similar improvement. For other
> page sizes such as 16KB and 64KB, the PMD might be too large, and
> negative side effects such as IO latency might become a problem.
> Thus, we can only safely mirror x86_64 by enabling this for 4K pages.
>
> Cc: "Huang, Ying"
> Cc: Minchan Kim
> Cc: Johannes Weiner
> Cc: Hugh Dickins
> Cc: Shaohua Li
> Cc: Rik van Riel
> Cc: Andrea Arcangeli
> Signed-off-by: Barry Song
> ---
>  arch/arm64/Kconfig | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index d550f5acfaf3..8e3771c56fbf 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -98,6 +98,7 @@ config ARM64
>  	select ARCH_WANT_HUGE_PMD_SHARE if ARM64_4K_PAGES || (ARM64_16K_PAGES && !ARM64_VA_BITS_36)
>  	select ARCH_WANT_LD_ORPHAN_WARN
>  	select ARCH_WANTS_NO_INSTR
> +	select ARCH_WANTS_THP_SWAP if ARM64_4K_PAGES

I'm not opposed to this, but I think it would break pages mapped with
PROT_MTE. We have an assumption in mte_sync_tags() that compound pages
are not swapped out (or in). With MTE, we store the tags in a slab
object (128 bytes per swapped page) and restore them when pages are
swapped back in. At some point we may teach the core swap code about
such metadata, but in the meantime that was the easiest way.

-- 
Catalin