From: Barry Song <21cnbao@gmail.com>
Date: Wed, 8 Nov 2023 19:23:01 +0800
Subject: Re: [PATCH v3 4/4] mm: swap: Swap-out small-sized THP without splitting
To: Ryan Roberts
Cc: steven.price@arm.com, akpm@linux-foundation.org, david@redhat.com,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, mhocko@suse.com,
 shy828301@gmail.com, wangkefeng.wang@huawei.com, willy@infradead.org,
 xiang@kernel.org, ying.huang@intel.com, yuzhao@google.com,
 Barry Song, nd@arm.com
References: <2fe5ce7e-9c5c-4df4-b4fc-9fd3d9b2dccb@arm.com>
 <20231104093423.170054-1-v-songbaohua@oppo.com>

On Wed, Nov 8, 2023 at 2:05 AM Barry Song <21cnbao@gmail.com> wrote:
>
> On Tue, Nov 7, 2023 at 8:46 PM Ryan Roberts wrote:
> >
> > On 04/11/2023 09:34, Barry Song wrote:
> > >> Yes that's right. mte_save_tags() needs to allocate memory so can fail,
> > >> and if failing then arch_prepare_to_swap() would need to put things back
> > >> how they were with calls to mte_invalidate_tags() (although I think
> > >> you'd actually want to refactor to create a function which takes a
> > >> struct page *).
> > >>
> > >> Steve
> > >
> > > Thanks, Steve. Combining all the comments from you and Ryan, I made a v2.
> > > One tricky thing is that we are restoring one page rather than a folio
> > > in arch_restore_swap(), as we are only swapping in one page at this
> > > stage.
> > >
> > > [RFC v2 PATCH] arm64: mm: swap: save and restore mte tags for large folios
> > >
> > > This patch makes MTE tag saving and restoring support large folios,
> > > so we don't need to split them into base pages for swapping on
> > > ARM64 SoCs with MTE.
> > >
> > > This patch moves arch_prepare_to_swap() to take a folio rather than
> > > a page, as we support THP swap-out as a whole. It also drops
> > > arch_thp_swp_supported(), as ARM64 MTE is its only user.
> > >
> > > Signed-off-by: Barry Song
> > > ---
> > >  arch/arm64/include/asm/pgtable.h | 21 +++------------
> > >  arch/arm64/mm/mteswap.c          | 44 ++++++++++++++++++++++++++++++++
> > >  include/linux/huge_mm.h          | 12 --------
> > >  include/linux/pgtable.h          |  2 +-
> > >  mm/page_io.c                     |  2 +-
> > >  mm/swap_slots.c                  |  2 +-
> > >  6 files changed, 51 insertions(+), 32 deletions(-)
> > >
> > > diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> > > index b19a8aee684c..d8f523dc41e7 100644
> > > --- a/arch/arm64/include/asm/pgtable.h
> > > +++ b/arch/arm64/include/asm/pgtable.h
> > > @@ -45,12 +45,6 @@
> > >               __flush_tlb_range(vma, addr, end, PUD_SIZE, false, 1)
> > >  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> > >
> > > -static inline bool arch_thp_swp_supported(void)
> > > -{
> > > -     return !system_supports_mte();
> > > -}
> > > -#define arch_thp_swp_supported arch_thp_swp_supported
> > > -
> > >  /*
> > >   * Outside of a few very special situations (e.g. hibernation), we always
> > >   * use broadcast TLB invalidation instructions, therefore a spurious page
> > > @@ -1036,12 +1030,8 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
> > >  #ifdef CONFIG_ARM64_MTE
> > >
> > >  #define __HAVE_ARCH_PREPARE_TO_SWAP
> > > -static inline int arch_prepare_to_swap(struct page *page)
> > > -{
> > > -     if (system_supports_mte())
> > > -             return mte_save_tags(page);
> > > -     return 0;
> > > -}
> > > +#define arch_prepare_to_swap arch_prepare_to_swap
> > > +extern int arch_prepare_to_swap(struct folio *folio);
> > >
> > >  #define __HAVE_ARCH_SWAP_INVALIDATE
> > >  static inline void arch_swap_invalidate_page(int type, pgoff_t offset)
> > > @@ -1057,11 +1047,8 @@ static inline void arch_swap_invalidate_area(int type)
> > >  }
> > >
> > >  #define __HAVE_ARCH_SWAP_RESTORE
> > > -static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
> > > -{
> > > -     if (system_supports_mte())
> > > -             mte_restore_tags(entry, &folio->page);
> > > -}
> > > +#define arch_swap_restore arch_swap_restore
> > > +extern void arch_swap_restore(swp_entry_t entry, struct folio *folio);
> > >
> > >  #endif /* CONFIG_ARM64_MTE */
> > >
> > > diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
> > > index a31833e3ddc5..14a479e4ea8e 100644
> > > --- a/arch/arm64/mm/mteswap.c
> > > +++ b/arch/arm64/mm/mteswap.c
> > > @@ -68,6 +68,12 @@ void mte_invalidate_tags(int type, pgoff_t offset)
> > >       mte_free_tag_storage(tags);
> > >  }
> > >
> > > +static inline void __mte_invalidate_tags(struct page *page)
> > > +{
> > > +     swp_entry_t entry = page_swap_entry(page);
> > > +     mte_invalidate_tags(swp_type(entry), swp_offset(entry));
> > > +}
> > > +
> > >  void mte_invalidate_tags_area(int type)
> > >  {
> > >       swp_entry_t entry = swp_entry(type, 0);
> > > @@ -83,3 +89,41 @@ void mte_invalidate_tags_area(int type)
> > >       }
> > >       xa_unlock(&mte_pages);
> > >  }
> > > +
> > > +int arch_prepare_to_swap(struct folio *folio)
> > > +{
> > > +     int err;
> > > +     long i;
> > > +
> > > +     if (system_supports_mte()) {
> > > +             long nr = folio_nr_pages(folio);
> >
> > nit: there should be a clear line between variable declarations and logic.
>
> right.
>
> > > +             for (i = 0; i < nr; i++) {
> > > +                     err = mte_save_tags(folio_page(folio, i));
> > > +                     if (err)
> > > +                             goto out;
> > > +             }
> > > +     }
> > > +     return 0;
> > > +
> > > +out:
> > > +     while (--i)
> >
> > If i is initially > 0, this will fail to invalidate page 0. If i is
> > initially 0 then it will wrap and run ~forever. I think you meant
> > `while (i--)`?
>
> Nope. If i == 0 and we goto out, that means page 0 has failed to save its
> tags, so there is nothing to revert. If i == 3 and we goto out, that means
> pages 0, 1 and 2 have been saved, so we restore 0, 1 and 2 and we don't
> restore 3.

I am terribly sorry for my previous noise. You are right, Ryan. I actually
meant `i--`.
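To make the off-by-one concrete, here is a quick userspace demo of the two
unwind loops (plain C, just an illustration, not kernel code):

#include <stdio.h>

int main(void)
{
        long i;

        /* `while (--i)`: pre-decrement, so index 0 is never visited;
         * worse, starting from i == 0 it would enter the loop body
         * with i == -1. */
        i = 3;
        printf("--i unwinds:");
        while (--i)
                printf(" %ld", i);      /* prints: 2 1 (misses 0) */
        printf("\n");

        /* `while (i--)`: post-decrement, visits 2, 1, 0 and stops
         * safely even when the initial value is 0. */
        i = 3;
        printf("i-- unwinds:");
        while (i--)
                printf(" %ld", i);      /* prints: 2 1 0 */
        printf("\n");
        return 0;
}

With `i--`, the error path rolls back exactly pages 0..i-1, which is what we
want when page i is the one whose tags failed to save.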
> > > +                     __mte_invalidate_tags(folio_page(folio, i));
> > > +     return err;
> > > +}
> > > +
> > > +void arch_swap_restore(swp_entry_t entry, struct folio *folio)
> > > +{
> > > +     if (system_supports_mte()) {
> > > +             /*
> > > +              * We don't support large folio swap-in as a whole yet,
> > > +              * but we can hit a large folio which is still in the
> > > +              * swapcache after the related processes' PTEs have been
> > > +              * unmapped but before the swapcache folio is dropped.
> > > +              * In this case, we need to find the exact page which
> > > +              * "entry" is mapping to. If we are not hitting the
> > > +              * swapcache, this folio won't be large.
> > > +              */
> >
> > So the currently defined API allows a large folio to be passed but the
> > caller is supposed to find the single correct page using the swap entry?
> > That feels quite nasty to me. And that's not what the old version of the
> > function was doing; it always assumed that the folio was small and passed
> > the first page (which also doesn't feel 'nice'). If the old version was
> > wrong, I suggest a separate commit to fix that. If the old version is
> > correct, then I guess this version is wrong.
>
> The original (mainline) version is wrong, but it works, because once we
> find that the SoC supports MTE, we split large folios into small pages, so
> only small pages ever make it into the swapcache.
>
> But now we want to swap out large folios as a whole even on SoCs with MTE;
> we don't split, and that breaks the assumption that do_swap_page() will
> always see small pages.

Let me clarify this some more. The current mainline assumes that
arch_swap_restore() always gets a folio with only one page. This holds
because we split large folios whenever the SoC has MTE. Since we are now
dropping that split, a large folio can reach do_swap_page(): there is a
window where try_to_unmap_one() has run but the folio has not yet been put,
so the PTEs hold swap entries while the folio is still in the swapcache, and
do_swap_page() hits the cache directly before the folio is released.

After getting the large folio, do_swap_page() still takes only the one base
page for the faulted PTE and maps that single 4KB PTE. So it passes the
faulted swap entry together with the whole folio when it calls
arch_swap_restore(), which looks something like:

do_swap_page()
{
        arch_swap_restore(the swap entry for the faulted 4KB PTE, large folio);
}

> >
> > Thanks,
> > Ryan

Thanks
Barry
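P.S. For completeness, here is a rough, untested sketch of the per-page
lookup I have in mind inside arch_swap_restore(). It relies on the
assumption that the swap offsets backing a large folio are contiguous and
naturally aligned, so folio_file_page() can mask the faulted entry's offset
down to the right subpage:

void arch_swap_restore(swp_entry_t entry, struct folio *folio)
{
        if (system_supports_mte()) {
                /*
                 * "entry" belongs to the faulted 4KB PTE; map it back to
                 * the exact subpage of the (possibly large) folio.
                 */
                struct page *page = folio_file_page(folio, swp_offset(entry));

                mte_restore_tags(entry, page);
        }
}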