From: Barry Song <21cnbao@gmail.com>
Date: Fri, 17 Nov 2023 08:15:48 +0800
Subject: Re: [RFC V3 PATCH] arm64: mm: swap: save and restore mte tags for large folios
To: David Hildenbrand
Cc: steven.price@arm.com, akpm@linux-foundation.org, ryan.roberts@arm.com, catalin.marinas@arm.com, will@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, mhocko@suse.com, shy828301@gmail.com, v-songbaohua@oppo.com, wangkefeng.wang@huawei.com, willy@infradead.org, xiang@kernel.org, ying.huang@intel.com, yuzhao@google.com
References: <20231114014313.67232-1-v-songbaohua@oppo.com> <864489b3-5d85-4145-b5bb-5d8a74b9b92d@redhat.com>
On Fri, Nov 17, 2023 at 7:47 AM Barry Song <21cnbao@gmail.com> wrote:
>
> On Thu, Nov 16, 2023 at 5:36 PM David Hildenbrand wrote:
> >
> > On 15.11.23 21:49, Barry Song wrote:
> > > On Wed, Nov 15, 2023 at 11:16 PM David Hildenbrand wrote:
> > >>
> > >> On 14.11.23 02:43, Barry Song wrote:
> > >>> This patch makes MTE tag saving and restoring support large folios,
> > >>> so we don't need to split them into base pages for swapping out
> > >>> on ARM64 SoCs with MTE.
> > >>>
> > >>> arch_prepare_to_swap() should take a folio rather than a page as
> > >>> parameter because we support THP swap-out as a whole.
> > >>>
> > >>> Meanwhile, arch_swap_restore() should use a page rather than a folio
> > >>> as parameter, as swap-in always works at the granularity of base
> > >>> pages right now.
> > >>
> > >> ... but then we always have order-0 folios and can pass a folio, or what
> > >> am I missing?
> > >
> > > Hi David,
> > > you missed the discussion here:
> > >
> > > https://lore.kernel.org/lkml/CAGsJ_4yXjex8txgEGt7+WMKp4uDQTn-fR06ijv4Ac68MkhjMDw@mail.gmail.com/
> > > https://lore.kernel.org/lkml/CAGsJ_4xmBAcApyK8NgVQeX_Znp5e8D4fbbhGguOkNzmh1Veocg@mail.gmail.com/
> >
> > Okay, so you want to handle the refault-from-swapcache case where you get a
> > large folio.
> >
> > I was misled by your "folio as swap-in always works at the granularity of
> > base pages right now" comment.
> >
> > What you actually wanted to say is "While we always swap in small folios, we
> > might refault large folios from the swapcache, and we only want to restore
> > the tags for the page of the large folio we are faulting on."
> >
> > But I wonder if we can't simply restore the tags for the whole thing at once
> > and make the interface page-free?
> >
> > Let me elaborate:
> >
> > IIRC, if we have a large folio in the swapcache, the swap entries/offsets are
> > contiguous. If you know you are faulting on page[1] of the folio with a
> > given swap offset, you can calculate the swap offset for page[0] simply by
> > subtracting from the offset.
> >
> > See page_swap_entry() on how we perform this calculation.
> >
> > So you can simply pass the large folio and the swap entry corresponding
> > to the first page of the large folio, and restore all tags at once.
> >
> > So the interface would be
> >
> > arch_prepare_to_swap(struct folio *folio);
> > void arch_swap_restore(struct folio *folio, swp_entry_t start_entry);
> >
> > I'm sorry if that was also already discussed.
>
> This has been discussed. Steven, Ryan and I all don't think this is a good
> option. If we have a large folio with 16 base pages, do_swap_page() can
> only map one base page per page fault, so we would restore 16 tags (one
> per page of the folio) in each of 16 page faults -- 256 restores in total
> for this one large folio.
>
> And the worst part is that the page fault on the Nth PTE of the large
> folio might free the swap entry, since that entry has already been
> swapped in:
>
> do_swap_page()
> {
>         /*
>          * Remove the swap entry and conditionally try to free up the swapcache.
>          * We're already holding a reference on the page but haven't mapped it
>          * yet.
>          */
>         swap_free(entry);
> }
>
> So in the page faults other than N, I mean 0~N-1 and N+1~15, we might
> access freed tags.
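For anyone following along, the calculation David points at above works
roughly like the sketch below; sketch_page_swap_entry() is a made-up name
standing in for the real helper, and folio->swap is the field introduced by
the commit quoted further down:

static inline swp_entry_t sketch_page_swap_entry(struct page *page)
{
	struct folio *folio = page_folio(page);
	/* swap entry of page[0] while the folio is in the swapcache */
	swp_entry_t entry = folio->swap;

	/* a swapcache folio's entries are contiguous, so just add the index */
	entry.val += folio_page_idx(folio, page);
	return entry;
}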
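Given that swap_free() race, this RFC takes the page-granular form instead.
A minimal sketch, assuming arm64's existing system_supports_mte() and
mte_restore_tags() helpers (an illustration of the direction, not the exact
patch):

void arch_swap_restore(swp_entry_t entry, struct page *page)
{
	/*
	 * Restore tags only for the base page being faulted in, using the
	 * swap entry of exactly that page, so no fault ever reads tags
	 * through an entry that an earlier fault has already freed.
	 */
	if (system_supports_mte())
		mte_restore_tags(entry, page);
}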
And David, one more piece of information: to keep the folio parameter of
arch_swap_restore() unchanged, I actually tried an ugly approach in RFC v2:

+void arch_swap_restore(swp_entry_t entry, struct folio *folio)
+{
+	if (system_supports_mte()) {
+		/*
+		 * We don't support large folio swap-in as a whole yet, but
+		 * we can hit a large folio which is still in the swapcache
+		 * after the related processes' PTEs have been unmapped but
+		 * before the swapcache folio is dropped. In this case, we
+		 * need to find the exact page which "entry" is mapping to.
+		 * If we are not hitting the swapcache, this folio won't be
+		 * large.
+		 */
+		struct page *page = folio_file_page(folio, swp_offset(entry));
+		mte_restore_tags(entry, page);
+	}
+}

And obviously everybody in the discussion hated it :-)

I feel the only way to keep the API unchanged with folio is to support
restoring PTEs for the whole large folio together, i.e. to support swap-in
of large folios. This is on my to-do list; I will send a patchset based on
Ryan's large anon folios series after a while. Until that is really done,
using page rather than folio seems the better choice.

> >
> > BUT, IIRC in the context of
> >
> > commit cfeed8ffe55b37fa10286aaaa1369da00cb88440
> > Author: David Hildenbrand
> > Date:   Mon Aug 21 18:08:46 2023 +0200
> >
> >     mm/swap: stop using page->private on tail pages for THP_SWAP
> >
> >     Patch series "mm/swap: stop using page->private on tail pages for THP_SWAP
> >     + cleanups".
> >
> >     This series stops using page->private on tail pages for THP_SWAP, replaces
> >     folio->private by folio->swap for swapcache folios, and starts using
> >     "new_folio" for tail pages that we are splitting to remove the usage of
> >     page->private for swapcache handling completely.
> >
> > As long as the folio is in the swapcache, we even do have the proper
> > swp_entry_t start_entry available as folio_swap_entry(folio).
> >
> > But now I am confused about when we would actually have to pass
> > "swp_entry_t start_entry". We shouldn't if the folio is in the swapcache ...
>
> Nope, hitting the swapcache doesn't necessarily mean the tags have been
> restored. Say A forks B, C, D, E and F, so A, B, C, D, E and F share the
> swap slot. We then have two chances to hit the swapcache:
> 1. swap-out: unmap has been done but the folio hasn't been dropped yet;
> 2. swap-in: one of the sharing processes allocates a folio and adds it
>    to the swapcache.
>
> For 2, if A faults earlier than B, A will allocate the folio and add it
> to the swapcache, and B will then hit the swapcache. But if B's CPU is
> faster than A's, B might still be the one mapping the PTE before A, even
> though A is the one which added the page to the swapcache. We have to
> make sure the MTE tags are there by the time the mapping is done.
>
> > --
> > Cheers,
> >
> > David / dhildenb

Thanks
Barry