Date: Mon, 30 May 2022 15:17:30 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: Barry Song <21cnbao@gmail.com>
Cc: catalin.marinas@arm.com, will@kernel.org, linux-mm@kvack.org,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 zhangshiming@oppo.com, guojian@oppo.com, hanchuanhua@oppo.com,
 Barry Song, "Huang, Ying", Minchan Kim, Johannes Weiner, Hugh Dickins,
 Andrea Arcangeli, Anshuman Khandual, Steven Price, Yang Shi
Subject: Re: [PATCH v2] arm64: enable THP_SWAP for arm64
Message-Id: <20220530151730.39596f41e284b5686acba04f@linux-foundation.org>
In-Reply-To: <20220527100644.293717-1-21cnbao@gmail.com>
References: <20220527100644.293717-1-21cnbao@gmail.com>

On Fri, 27 May 2022 22:06:44 +1200 Barry Song <21cnbao@gmail.com> wrote:

> From: Barry Song
>
> THP_SWAP has been proved to improve the swap throughput significantly
> on x86_64 according to commit bd4c82c22c367e ("mm, THP, swap: delay
> splitting THP after swapped out").
> As long as arm64 uses 4K page size, it is quite similar with x86_64
> by having 2MB PMD THP. So we are going to get similar improvement.
> For other page sizes such as 16KB and 64KB, PMD might be too large.
> Negative side effects such as IO latency might be a problem. Thus,
> we can only safely enable the counterpart of X86_64.
> A corner case is that MTE has an assumption that only base pages
> can be swapped. We won't enable THP_SWP for ARM64 hardware with
> MTE support until MTE is re-arched.
>
> ...
>
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -45,6 +45,8 @@
>  		__flush_tlb_range(vma, addr, end, PUD_SIZE, false, 1)
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
> +#define arch_thp_swp_supported !system_supports_mte

Does that even work?

	if (arch_thp_swp_supported())

expands to

	if (!system_supports_mte())

so I guess it does work.

Is this ugly party trick required for some reason?  If so, an
apologetic comment describing why would be helpful.  Otherwise, can we
use a static inline function here, as we do with the stub function?

> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -461,4 +461,16 @@ static inline int split_folio_to_list(struct folio *folio,
>  	return split_huge_page_to_list(&folio->page, list);
>  }
>
> +/*
> + * archs that select ARCH_WANTS_THP_SWAP but don't support THP_SWP due to
> + * limitations in the implementation like arm64 MTE can override this to
> + * false
> + */
> +#ifndef arch_thp_swp_supported
> +static inline bool arch_thp_swp_supported(void)
> +{
> +	return true;
> +}

Missing a

	#define arch_thp_swp_supported arch_thp_swp_supported

here.

> +#endif
> +
>  #endif /* _LINUX_HUGE_MM_H */

Otherwise looks OK to me.  Please include it in the arm64 tree if/when
it's considered ready.