Message-ID: <0164d185-12db-478a-93fd-b07d5fe00599@arm.com>
Date: Wed, 15 Nov 2023 14:40:11 +0000
From: Steven Price <steven.price@arm.com>
Subject: Re: [RFC V3 PATCH] arm64: mm: swap: save and restore mte tags for
 large folios
To: Barry Song <21cnbao@gmail.com>, akpm@linux-foundation.org,
 ryan.roberts@arm.com, catalin.marinas@arm.com, will@kernel.org
Cc: david@redhat.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 mhocko@suse.com, shy828301@gmail.com, v-songbaohua@oppo.com,
 wangkefeng.wang@huawei.com, willy@infradead.org, xiang@kernel.org,
 ying.huang@intel.com, yuzhao@google.com
In-Reply-To: <20231114014313.67232-1-v-songbaohua@oppo.com>
References: <20231114014313.67232-1-v-songbaohua@oppo.com>
On 14/11/2023 01:43, Barry Song wrote:
> This patch makes MTE tag saving and restoring support large folios,
> so we no longer need to split them into base pages for swapping out
> on ARM64 SoCs with MTE.
>
> arch_prepare_to_swap() should take a folio rather than a page as its
> parameter, because we support THP swap-out as a whole.
>
> Meanwhile, arch_swap_restore() should take a page parameter rather
> than a folio, as swap-in currently always works at the granularity of
> base pages.
>
> arch_thp_swp_supported() is dropped since ARM64 MTE was its only
> user.
>
> Signed-off-by: Barry Song <v-songbaohua@oppo.com>

LGTM!

Reviewed-by: Steven Price <steven.price@arm.com>

Although there's one NIT below that you might want to fix.
> ---
> rfc v3:
>  * move arch_swap_restore() to take page rather than folio according to
>    the discussion with Ryan and Steven;
>  * fix some other issues commented on by Ryan
>
> rfc v2:
> https://lore.kernel.org/lkml/20231104093423.170054-1-v-songbaohua@oppo.com/
> rfc v1:
> https://lore.kernel.org/lkml/20231102223643.7733-1-v-songbaohua@oppo.com/
>
>  arch/arm64/include/asm/pgtable.h | 21 ++++----------------
>  arch/arm64/mm/mteswap.c          | 34 ++++++++++++++++++++++++++++++++
>  include/linux/huge_mm.h          | 12 -----------
>  include/linux/pgtable.h          |  4 ++--
>  mm/memory.c                      |  2 +-
>  mm/page_io.c                     |  2 +-
>  mm/shmem.c                       |  2 +-
>  mm/swap_slots.c                  |  2 +-
>  mm/swapfile.c                    |  2 +-
>  9 files changed, 45 insertions(+), 36 deletions(-)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index b19a8aee684c..c3eef11c1a9e 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -45,12 +45,6 @@
>  		__flush_tlb_range(vma, addr, end, PUD_SIZE, false, 1)
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
> -static inline bool arch_thp_swp_supported(void)
> -{
> -	return !system_supports_mte();
> -}
> -#define arch_thp_swp_supported arch_thp_swp_supported
> -
>  /*
>   * Outside of a few very special situations (e.g. hibernation), we always
>   * use broadcast TLB invalidation instructions, therefore a spurious page
> @@ -1036,12 +1030,8 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
>  #ifdef CONFIG_ARM64_MTE
>
>  #define __HAVE_ARCH_PREPARE_TO_SWAP
> -static inline int arch_prepare_to_swap(struct page *page)
> -{
> -	if (system_supports_mte())
> -		return mte_save_tags(page);
> -	return 0;
> -}
> +#define arch_prepare_to_swap arch_prepare_to_swap
> +extern int arch_prepare_to_swap(struct folio *folio);
>
>  #define __HAVE_ARCH_SWAP_INVALIDATE
>  static inline void arch_swap_invalidate_page(int type, pgoff_t offset)
> @@ -1057,11 +1047,8 @@ static inline void arch_swap_invalidate_area(int type)
>  }
>
>  #define __HAVE_ARCH_SWAP_RESTORE
> -static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
> -{
> -	if (system_supports_mte())
> -		mte_restore_tags(entry, &folio->page);
> -}
> +#define arch_swap_restore arch_swap_restore
> +extern void arch_swap_restore(swp_entry_t entry, struct page *page);
>
>  #endif /* CONFIG_ARM64_MTE */
>
> diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
> index a31833e3ddc5..75c2836e8240 100644
> --- a/arch/arm64/mm/mteswap.c
> +++ b/arch/arm64/mm/mteswap.c
> @@ -68,6 +68,12 @@ void mte_invalidate_tags(int type, pgoff_t offset)
>  	mte_free_tag_storage(tags);
>  }
>
> +static inline void __mte_invalidate_tags(struct page *page)
> +{
> +	swp_entry_t entry = page_swap_entry(page);
> +	mte_invalidate_tags(swp_type(entry), swp_offset(entry));

NIT: checkpatch will complain that there should be a blank line between
these two lines (a declaration is followed by a blank line).
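i.e. something like the following (the same helper from the patch, shown
only to illustrate the blank line after the declaration; untested sketch):

static inline void __mte_invalidate_tags(struct page *page)
{
	/* Look up the page's swap entry, then free its saved MTE tags. */
	swp_entry_t entry = page_swap_entry(page);

	mte_invalidate_tags(swp_type(entry), swp_offset(entry));
}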
Steve

> +}
> +
>  void mte_invalidate_tags_area(int type)
>  {
>  	swp_entry_t entry = swp_entry(type, 0);
> @@ -83,3 +89,31 @@ void mte_invalidate_tags_area(int type)
>  	}
>  	xa_unlock(&mte_pages);
>  }
> +
> +int arch_prepare_to_swap(struct folio *folio)
> +{
> +	int err;
> +	long i;
> +
> +	if (system_supports_mte()) {
> +		long nr = folio_nr_pages(folio);
> +
> +		for (i = 0; i < nr; i++) {
> +			err = mte_save_tags(folio_page(folio, i));
> +			if (err)
> +				goto out;
> +		}
> +	}
> +	return 0;
> +
> +out:
> +	while (i--)
> +		__mte_invalidate_tags(folio_page(folio, i));
> +	return err;
> +}
> +
> +void arch_swap_restore(swp_entry_t entry, struct page *page)
> +{
> +	if (system_supports_mte())
> +		mte_restore_tags(entry, page);
> +}
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index fa0350b0812a..f83fb8d5241e 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -400,16 +400,4 @@ static inline int split_folio(struct folio *folio)
>  	return split_folio_to_list(folio, NULL);
>  }
>
> -/*
> - * archs that select ARCH_WANTS_THP_SWAP but don't support THP_SWP due to
> - * limitations in the implementation like arm64 MTE can override this to
> - * false
> - */
> -#ifndef arch_thp_swp_supported
> -static inline bool arch_thp_swp_supported(void)
> -{
> -	return true;
> -}
> -#endif
> -
>  #endif /* _LINUX_HUGE_MM_H */
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index af7639c3b0a3..87e3140a55ca 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -897,7 +897,7 @@ static inline int arch_unmap_one(struct mm_struct *mm,
>   * prototypes must be defined in the arch-specific asm/pgtable.h file.
>   */
>  #ifndef __HAVE_ARCH_PREPARE_TO_SWAP
> -static inline int arch_prepare_to_swap(struct page *page)
> +static inline int arch_prepare_to_swap(struct folio *folio)
>  {
>  	return 0;
>  }
> @@ -914,7 +914,7 @@ static inline void arch_swap_invalidate_area(int type)
>  #endif
>
>  #ifndef __HAVE_ARCH_SWAP_RESTORE
> -static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
> +static inline void arch_swap_restore(swp_entry_t entry, struct page *page)
>  {
>  }
>  #endif
> diff --git a/mm/memory.c b/mm/memory.c
> index 1f18ed4a5497..fad238dd38e7 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4022,7 +4022,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	 * when reading from swap. This metadata may be indexed by swap entry
>  	 * so this must be called before swap_free().
>  	 */
> -	arch_swap_restore(entry, folio);
> +	arch_swap_restore(entry, page);
>
>  	/*
>  	 * Remove the swap entry and conditionally try to free up the swapcache.
> diff --git a/mm/page_io.c b/mm/page_io.c
> index cb559ae324c6..0fd832474c1d 100644
> --- a/mm/page_io.c
> +++ b/mm/page_io.c
> @@ -189,7 +189,7 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
>  	 * Arch code may have to preserve more data than just the page
>  	 * contents, e.g. memory tags.
>  	 */
> -	ret = arch_prepare_to_swap(&folio->page);
> +	ret = arch_prepare_to_swap(folio);
>  	if (ret) {
>  		folio_mark_dirty(folio);
>  		folio_unlock(folio);
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 91e2620148b2..7d32e50da121 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1892,7 +1892,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>  	 * Some architectures may have to restore extra metadata to the
>  	 * folio after reading from swap.
>  	 */
> -	arch_swap_restore(swap, folio);
> +	arch_swap_restore(swap, &folio->page);
>
>  	if (shmem_should_replace_folio(folio, gfp)) {
>  		error = shmem_replace_folio(&folio, gfp, info, index);
> diff --git a/mm/swap_slots.c b/mm/swap_slots.c
> index 0bec1f705f8e..2325adbb1f19 100644
> --- a/mm/swap_slots.c
> +++ b/mm/swap_slots.c
> @@ -307,7 +307,7 @@ swp_entry_t folio_alloc_swap(struct folio *folio)
>  	entry.val = 0;
>
>  	if (folio_test_large(folio)) {
> -		if (IS_ENABLED(CONFIG_THP_SWAP) && arch_thp_swp_supported())
> +		if (IS_ENABLED(CONFIG_THP_SWAP))
>  			get_swap_pages(1, &entry, folio_nr_pages(folio));
>  		goto out;
>  	}
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 4bc70f459164..6450e0279e35 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1784,7 +1784,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
>  	 * when reading from swap. This metadata may be indexed by swap entry
>  	 * so this must be called before swap_free().
>  	 */
> -	arch_swap_restore(entry, page_folio(page));
> +	arch_swap_restore(entry, page);
>
>  	/* See do_swap_page() */
>  	BUG_ON(!PageAnon(page) && PageMappedToDisk(page));