From: Barry Song <21cnbao@gmail.com>
Date: Mon, 30 May 2022 21:53:39 +1200
Subject: Re: [PATCH v2] arm64: enable THP_SWAP for arm64
To: Anshuman Khandual
Cc: Andrew Morton, Catalin Marinas, Will Deacon, Linux-MM, LAK, LKML,
    张诗明(Simon Zhang), 郭健, hanchuanhua, Barry Song, "Huang, Ying",
    Minchan Kim, Johannes Weiner, Hugh Dickins, Andrea Arcangeli,
    Steven Price, Yang Shi
References: <20220527100644.293717-1-21cnbao@gmail.com>
On Mon, May 30, 2022 at 7:07 PM Anshuman Khandual wrote:
>
> Hello Barry,

Hi Anshuman, thanks!

>
> On 5/27/22 15:36, Barry Song wrote:
> > From: Barry Song
> >
> > THP_SWAP has been proved to improve the swap throughput significantly
> > on x86_64 according to commit bd4c82c22c367e ("mm, THP, swap: delay
> > splitting THP after swapped out").
>
> It will be useful to run similar experiments on arm64 platform to
> demonstrate tangible benefit, else we might be just enabling this
> feature just because x86 has it. Do you have some data points ?
>
> > As long as arm64 uses 4K page size, it is quite similar with x86_64
> > by having 2MB PMD THP. So we are going to get similar improvement.
>
> This is an assumption without any data points (until now). Please
> do provide some results.

Fair enough, though I believe THP_SWP is arch-independent. We will post
data from our testing. Plus, we do need it for real use cases, with some
possibly out-of-tree code for the moment, so this patch was not written
just because x86 has it :-)

> > For other page sizes such as 16KB and 64KB, PMD might be too large.
> > Negative side effects such as IO latency might be a problem. Thus,
> > we can only safely enable the counterpart of X86_64.
>
> Incorrect reasoning. Although sometimes it might be okay to enable
> a feature on platforms with possible assumptions about its benefits,
> but to claim 'similar improvement, safely, .. etc' while comparing
> against x86 4K page config without data points, is not very helpful.
>
> > A corner case is that MTE has an assumption that only base pages
> > can be swapped. We won't enable THP_SWP for ARM64 hardware with
> > MTE support until MTE is re-arched.
>
> re-arched ?? Did you imply that MTE is reworked to support THP ?

I think MTE should at least be able to coexist with THP_SWP, though I am
not quite sure whether MTE can be reworked to fully support THP.
> >
> > Cc: "Huang, Ying"
> > Cc: Minchan Kim
> > Cc: Johannes Weiner
> > Cc: Hugh Dickins
> > Cc: Andrea Arcangeli
> > Cc: Anshuman Khandual
> > Cc: Steven Price
> > Cc: Yang Shi
> > Signed-off-by: Barry Song
> > ---
> >  arch/arm64/Kconfig               |  1 +
> >  arch/arm64/include/asm/pgtable.h |  2 ++
> >  include/linux/huge_mm.h          | 12 ++++++++++++
> >  mm/swap_slots.c                  |  2 +-
> >  4 files changed, 16 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > index a4968845e67f..5306009df2dc 100644
> > --- a/arch/arm64/Kconfig
> > +++ b/arch/arm64/Kconfig
> > @@ -101,6 +101,7 @@ config ARM64
> >  	select ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
> >  	select ARCH_WANT_LD_ORPHAN_WARN
> >  	select ARCH_WANTS_NO_INSTR
> > +	select ARCH_WANTS_THP_SWAP if ARM64_4K_PAGES
> >  	select ARCH_HAS_UBSAN_SANITIZE_ALL
> >  	select ARM_AMBA
> >  	select ARM_ARCH_TIMER
> > diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> > index 0b6632f18364..06076139c72c 100644
> > --- a/arch/arm64/include/asm/pgtable.h
> > +++ b/arch/arm64/include/asm/pgtable.h
> > @@ -45,6 +45,8 @@
> >  	__flush_tlb_range(vma, addr, end, PUD_SIZE, false, 1)
> >  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> >
> > +#define arch_thp_swp_supported !system_supports_mte
>
> Does it check for 'system_supports_mte' as a symbol or call system_supports_mte()
> to ascertain runtime MTE support ? It might well be correct, but just does not
> look much intuitive.

Yep, it looks a bit weird. But considering we only need this for arm64,
and arch_thp_swp_supported is a macro, I can't find a way to keep the
code changes in mm core, arm64 and x86 smaller than this. And we will
probably remove it entirely once MTE can coexist with THP_SWP.
Do you have any suggestions for a better solution? (For illustration, a
rough sketch of the inline-helper form is appended at the end of this
mail.)

> > +
> >  /*
> >   * Outside of a few very special situations (e.g. hibernation), we always
> >   * use broadcast TLB invalidation instructions, therefore a spurious page
> > diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> > index de29821231c9..4ddaf6ad73ef 100644
> > --- a/include/linux/huge_mm.h
> > +++ b/include/linux/huge_mm.h
> > @@ -461,4 +461,16 @@ static inline int split_folio_to_list(struct folio *folio,
> >  	return split_huge_page_to_list(&folio->page, list);
> >  }
> >
> > +/*
> > + * archs that select ARCH_WANTS_THP_SWAP but don't support THP_SWP due to
> > + * limitations in the implementation like arm64 MTE can override this to
>
> A small nit.
>
> A 'comma' here will be better.  s/arm64 MTE can/arm64 MTE, can/

Yep.

> > + * false
>
> Similarly a 'full stop' here will be better as well.

Yep.

> > + */
> > +#ifndef arch_thp_swp_supported
> > +static inline bool arch_thp_swp_supported(void)
> > +{
> > +	return true;
> > +}
> > +#endif
> > +
> >  #endif /* _LINUX_HUGE_MM_H */
> > diff --git a/mm/swap_slots.c b/mm/swap_slots.c
> > index 2a65a89b5b4d..10b94d64cc25 100644
> > --- a/mm/swap_slots.c
> > +++ b/mm/swap_slots.c
> > @@ -307,7 +307,7 @@ swp_entry_t folio_alloc_swap(struct folio *folio)
> >  	entry.val = 0;
> >
> >  	if (folio_test_large(folio)) {
> > -		if (IS_ENABLED(CONFIG_THP_SWAP))
> > +		if (IS_ENABLED(CONFIG_THP_SWAP) && arch_thp_swp_supported())
> >  			get_swap_pages(1, &entry, folio_nr_pages(folio));
> >  		goto out;
> >  	}
>
> - Anshuman

Thanks
Barry
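
P.S. For comparison, the inline-helper form would look roughly like the
untested sketch below (not part of the posted patch; it keeps the same
system_supports_mte() check, and the extra self-#define is only there so
the generic #ifndef fallback in huge_mm.h does not kick in):

/* arch/arm64/include/asm/pgtable.h -- sketch only */
static inline bool arch_thp_swp_supported(void)
{
	/* MTE currently saves/restores tags for base pages only */
	return !system_supports_mte();
}
#define arch_thp_swp_supported arch_thp_swp_supported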