From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 6 Nov 2023 10:12:45 +0000
Subject: Re: [PATCH v3 4/4] mm: swap: Swap-out small-sized THP without splitting
From: Steven Price
To: Barry Song <21cnbao@gmail.com>
Cc: akpm@linux-foundation.org, david@redhat.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, mhocko@suse.com, ryan.roberts@arm.com, shy828301@gmail.com, wangkefeng.wang@huawei.com, willy@infradead.org, xiang@kernel.org, ying.huang@intel.com, yuzhao@google.com, Barry Song
References: <2fe5ce7e-9c5c-4df4-b4fc-9fd3d9b2dccb@arm.com>
 <20231104093423.170054-1-v-songbaohua@oppo.com>
In-Reply-To: <20231104093423.170054-1-v-songbaohua@oppo.com>

On 04/11/2023 09:34, Barry Song wrote:
>> Yes that's right. mte_save_tags() needs to allocate memory so it can
>> fail; if it fails, arch_prepare_to_swap() would need to put things
>> back how they were with calls to mte_invalidate_tags() (although I
>> think you'd actually want to refactor to create a function which
>> takes a struct page *).
>>
>> Steve
>
> Thanks, Steve. Combining all the comments from you and Ryan, I made
> a v2. One tricky thing is that we are restoring one page rather than
> a folio in arch_swap_restore(), as we are only swapping in one page
> at this stage.
>
> [RFC v2 PATCH] arm64: mm: swap: save and restore mte tags for large folios
>
> This patch makes MTE tag saving and restoring support large folios,
> so we no longer need to split them into base pages for swapping on
> ARM64 SoCs with MTE.
>
> This patch moves arch_prepare_to_swap() to take a folio rather than
> a page, as we support THP swap-out as a whole. It also drops
> arch_thp_swp_supported(), as ARM64 MTE was its only user.
>
> Signed-off-by: Barry Song
> ---
>  arch/arm64/include/asm/pgtable.h | 21 +++------------
>  arch/arm64/mm/mteswap.c          | 44 ++++++++++++++++++++++++++++++++
>  include/linux/huge_mm.h          | 12 --------
>  include/linux/pgtable.h          |  2 +-
>  mm/page_io.c                     |  2 +-
>  mm/swap_slots.c                  |  2 +-
>  6 files changed, 51 insertions(+), 32 deletions(-)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index b19a8aee684c..d8f523dc41e7 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -45,12 +45,6 @@
>  		__flush_tlb_range(vma, addr, end, PUD_SIZE, false, 1)
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>  
> -static inline bool arch_thp_swp_supported(void)
> -{
> -	return !system_supports_mte();
> -}
> -#define arch_thp_swp_supported arch_thp_swp_supported
> -
>  /*
>   * Outside of a few very special situations (e.g. hibernation), we always
>   * use broadcast TLB invalidation instructions, therefore a spurious page
> @@ -1036,12 +1030,8 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
>  #ifdef CONFIG_ARM64_MTE
>  
>  #define __HAVE_ARCH_PREPARE_TO_SWAP
> -static inline int arch_prepare_to_swap(struct page *page)
> -{
> -	if (system_supports_mte())
> -		return mte_save_tags(page);
> -	return 0;
> -}
> +#define arch_prepare_to_swap arch_prepare_to_swap
> +extern int arch_prepare_to_swap(struct folio *folio);
>  
>  #define __HAVE_ARCH_SWAP_INVALIDATE
>  static inline void arch_swap_invalidate_page(int type, pgoff_t offset)
> @@ -1057,11 +1047,8 @@ static inline void arch_swap_invalidate_area(int type)
>  }
>  
>  #define __HAVE_ARCH_SWAP_RESTORE
> -static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
> -{
> -	if (system_supports_mte())
> -		mte_restore_tags(entry, &folio->page);
> -}
> +#define arch_swap_restore arch_swap_restore
> +extern void arch_swap_restore(swp_entry_t entry, struct folio *folio);
>  
>  #endif /* CONFIG_ARM64_MTE */
>  
> diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
> index a31833e3ddc5..14a479e4ea8e 100644
> --- a/arch/arm64/mm/mteswap.c
> +++ b/arch/arm64/mm/mteswap.c
> @@ -68,6 +68,12 @@ void mte_invalidate_tags(int type, pgoff_t offset)
>  	mte_free_tag_storage(tags);
>  }
>  
> +static inline void __mte_invalidate_tags(struct page *page)
> +{
> +	swp_entry_t entry = page_swap_entry(page);
> +	mte_invalidate_tags(swp_type(entry), swp_offset(entry));
> +}
> +
>  void mte_invalidate_tags_area(int type)
>  {
>  	swp_entry_t entry = swp_entry(type, 0);
> @@ -83,3 +89,41 @@ void mte_invalidate_tags_area(int type)
>  	}
>  	xa_unlock(&mte_pages);
>  }
> +
> +int arch_prepare_to_swap(struct folio *folio)
> +{
> +	int err;
> +	long i;
> +
> +	if (system_supports_mte()) {
> +		long nr = folio_nr_pages(folio);
> +		for (i = 0; i < nr; i++) {
> +			err = mte_save_tags(folio_page(folio, i));
> +			if (err)
> +				goto out;
> +		}
> +	}
> +	return 0;
> +
> +out:
> +	while (i--)
> +		__mte_invalidate_tags(folio_page(folio, i));
> +	return err;
> +}
> +
> +void arch_swap_restore(swp_entry_t entry, struct folio *folio)
> +{
> +	if (system_supports_mte()) {
> +		/*
> +		 * We don't support large folio swap-in as a whole yet, but
> +		 * we can hit a large folio which is still in the swapcache
> +		 * after the related processes' PTEs have been unmapped but
> +		 * before the swapcache folio is dropped. In this case, we
> +		 * need to find the exact page which "entry" is mapping to.
> +		 * If we are not hitting the swapcache, this folio won't be
> +		 * large.
> +		 */

Does it make sense to keep arch_swap_restore() taking a folio? I'm not
sure I understand why the change was made in the first place. It just
seems odd to have a function taking a struct folio but making the
assumption that it's actually only a single page (and having to use
entry to figure out which page). It seems particularly broken in the
case of unuse_pte(), which calls page_folio() to get the folio in the
first place.
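For illustration, a page-based variant could look something like the
below (just a sketch of what I mean, not tested; it assumes
mte_restore_tags() keeps its current signature and that each caller
passes the precise page it is operating on):

void arch_swap_restore(swp_entry_t entry, struct page *page)
{
	/* Tags are only saved/restored on systems with MTE */
	if (system_supports_mte())
		mte_restore_tags(entry, page);
}

That would leave the "which page of the folio does this entry map to?"
question with the callers, which (as in unuse_pte()) typically start
out with the page in hand anyway.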
Other than that it looks correct to me.

Thanks,

Steve

> +		struct page *page = folio_file_page(folio, swp_offset(entry));
> +		mte_restore_tags(entry, page);
> +	}
> +}
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index fa0350b0812a..f83fb8d5241e 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -400,16 +400,4 @@ static inline int split_folio(struct folio *folio)
>  	return split_folio_to_list(folio, NULL);
>  }
>  
> -/*
> - * archs that select ARCH_WANTS_THP_SWAP but don't support THP_SWP due to
> - * limitations in the implementation like arm64 MTE can override this to
> - * false
> - */
> -#ifndef arch_thp_swp_supported
> -static inline bool arch_thp_swp_supported(void)
> -{
> -	return true;
> -}
> -#endif
> -
>  #endif /* _LINUX_HUGE_MM_H */
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index af7639c3b0a3..33ab4ddd91dd 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -897,7 +897,7 @@ static inline int arch_unmap_one(struct mm_struct *mm,
>   * prototypes must be defined in the arch-specific asm/pgtable.h file.
>   */
>  #ifndef __HAVE_ARCH_PREPARE_TO_SWAP
> -static inline int arch_prepare_to_swap(struct page *page)
> +static inline int arch_prepare_to_swap(struct folio *folio)
>  {
>  	return 0;
>  }
> diff --git a/mm/page_io.c b/mm/page_io.c
> index cb559ae324c6..0fd832474c1d 100644
> --- a/mm/page_io.c
> +++ b/mm/page_io.c
> @@ -189,7 +189,7 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
>  	 * Arch code may have to preserve more data than just the page
>  	 * contents, e.g. memory tags.
>  	 */
> -	ret = arch_prepare_to_swap(&folio->page);
> +	ret = arch_prepare_to_swap(folio);
>  	if (ret) {
>  		folio_mark_dirty(folio);
>  		folio_unlock(folio);
> diff --git a/mm/swap_slots.c b/mm/swap_slots.c
> index 0bec1f705f8e..2325adbb1f19 100644
> --- a/mm/swap_slots.c
> +++ b/mm/swap_slots.c
> @@ -307,7 +307,7 @@ swp_entry_t folio_alloc_swap(struct folio *folio)
>  	entry.val = 0;
>  
>  	if (folio_test_large(folio)) {
> -		if (IS_ENABLED(CONFIG_THP_SWAP) && arch_thp_swp_supported())
> +		if (IS_ENABLED(CONFIG_THP_SWAP))
>  			get_swap_pages(1, &entry, folio_nr_pages(folio));
>  		goto out;
>  	}