From: Barry Song <21cnbao@gmail.com>
Date: Sat, 25 Nov 2023 07:14:21 +1300
Subject: Re: [RFC V3 PATCH] arm64: mm: swap: save and restore mte tags for large folios
To: Steven Price
Cc: Ryan Roberts, David Hildenbrand, akpm@linux-foundation.org,
	catalin.marinas@arm.com, will@kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, mhocko@suse.com, shy828301@gmail.com,
	v-songbaohua@oppo.com, wangkefeng.wang@huawei.com, willy@infradead.org,
	xiang@kernel.org, ying.huang@intel.com, yuzhao@google.com
In-Reply-To: <71c4b8b2-512a-4e50-9160-6ee77a5ec0a4@arm.com>
References: <20231114014313.67232-1-v-songbaohua@oppo.com>
	<864489b3-5d85-4145-b5bb-5d8a74b9b92d@redhat.com>
	<8c7f1a2f-57d2-4f20-abb2-394c7980008e@redhat.com>
	<5de66ff5-b6c8-4ffc-acd9-59aec4604ca4@redhat.com>
	<71c4b8b2-512a-4e50-9160-6ee77a5ec0a4@arm.com>
On Fri, Nov 24, 2023 at 10:55 PM Steven Price wrote:
>
> On 24/11/2023 09:01, Ryan Roberts wrote:
> > On 24/11/2023 08:55, David Hildenbrand wrote:
> >> On 24.11.23 02:35, Barry Song wrote:
> >>> On Mon, Nov 20, 2023 at 11:57 PM Ryan Roberts wrote:
> >>>>
> >>>> On 20/11/2023 09:11, David Hildenbrand wrote:
> >>>>> On 17.11.23 19:41, Barry Song wrote:
> >>>>>> On Fri, Nov 17, 2023 at 7:28 PM David Hildenbrand wrote:
> >>>>>>>
> >>>>>>> On 17.11.23 01:15, Barry Song wrote:
> >>>>>>>> On Fri, Nov 17, 2023 at 7:47 AM Barry Song <21cnbao@gmail.com> wrote:
> >>>>>>>>>
> >>>>>>>>> On Thu, Nov 16, 2023 at 5:36 PM David Hildenbrand wrote:
> >>>>>>>>>>
> >>>>>>>>>> On 15.11.23 21:49, Barry Song wrote:
> >>>>>>>>>>> On Wed, Nov 15, 2023 at 11:16 PM David Hildenbrand
> >>>>>>>>>>> wrote:
> >>>>>>>>>>>>
> >>>>>>>>>>>> On 14.11.23 02:43, Barry Song wrote:
> >>>>>>>>>>>>> This patch makes MTE tag saving and restoring support large
> >>>>>>>>>>>>> folios, so we don't need to split them into base pages for
> >>>>>>>>>>>>> swapping out on ARM64 SoCs with MTE.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> arch_prepare_to_swap() should take a folio rather than a page
> >>>>>>>>>>>>> as parameter because we support THP swap-out as a whole.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Meanwhile, arch_swap_restore() should use a page parameter
> >>>>>>>>>>>>> rather than a folio, as swap-in always works at the
> >>>>>>>>>>>>> granularity of base pages right now.
> >>>>>>>>>>>>
> >>>>>>>>>>>> ... but then we always have order-0 folios and can pass a
> >>>>>>>>>>>> folio, or what am I missing?
> >>>>>>>>>>>
> >>>>>>>>>>> Hi David,
> >>>>>>>>>>> you missed the discussion here:
> >>>>>>>>>>>
> >>>>>>>>>>> https://lore.kernel.org/lkml/CAGsJ_4yXjex8txgEGt7+WMKp4uDQTn-fR06ijv4Ac68MkhjMDw@mail.gmail.com/
> >>>>>>>>>>> https://lore.kernel.org/lkml/CAGsJ_4xmBAcApyK8NgVQeX_Znp5e8D4fbbhGguOkNzmh1Veocg@mail.gmail.com/
> >>>>>>>>>>
> >>>>>>>>>> Okay, so you want to handle the refault-from-swapcache case where
> >>>>>>>>>> you get a large folio.
> >>>>>>>>>>
> >>>>>>>>>> I was misled by your "folio as swap-in always works at the
> >>>>>>>>>> granularity of base pages right now" comment.
> >>>>>>>>>>
> >>>>>>>>>> What you actually wanted to say is "While we always swap in small
> >>>>>>>>>> folios, we might refault large folios from the swapcache, and we
> >>>>>>>>>> only want to restore the tags for the page of the large folio we
> >>>>>>>>>> are faulting on."
> >>>>>>>>>>
> >>>>>>>>>> But I do wonder if we can't simply restore the tags for the whole
> >>>>>>>>>> thing at once and make the interface page-free.
> >>>>>>>>>>
> >>>>>>>>>> Let me elaborate:
> >>>>>>>>>>
> >>>>>>>>>> IIRC, if we have a large folio in the swapcache, the swap
> >>>>>>>>>> entries/offsets are contiguous. If you know you are faulting on
> >>>>>>>>>> page[1] of the folio with a given swap offset, you can calculate
> >>>>>>>>>> the swap offset for page[0] simply by subtracting from the offset.
> >>>>>>>>>>
> >>>>>>>>>> See page_swap_entry() on how we perform this calculation.
> >>>>>>>>>>
> >>>>>>>>>> So you can simply pass the large folio and the swap entry
> >>>>>>>>>> corresponding to the first page of the large folio, and restore
> >>>>>>>>>> all tags at once.
> >>>>>>>>>>
> >>>>>>>>>> So the interface would be
> >>>>>>>>>>
> >>>>>>>>>> arch_prepare_to_swap(struct folio *folio);
> >>>>>>>>>> void arch_swap_restore(struct folio *folio, swp_entry_t start_entry);
> >>>>>>>>>>
> >>>>>>>>>> I'm sorry if that was also already discussed.
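(To make the shape of that proposal concrete: a minimal sketch of my
understanding of the idea, not code from any posted patch. It assumes the
folio's swap entries are contiguous and reuses the existing
mte_restore_tags() helper; the loop itself is illustrative.)

void arch_swap_restore(struct folio *folio, swp_entry_t start_entry)
{
	if (system_supports_mte()) {
		long i, nr = folio_nr_pages(folio);

		/* contiguous entries: the entry for page[i] is start_entry + i */
		for (i = 0; i < nr; i++) {
			swp_entry_t entry = swp_entry(swp_type(start_entry),
						      swp_offset(start_entry) + i);

			mte_restore_tags(entry, folio_page(folio, i));
		}
	}
}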
> >>>>>>>>>
> >>>>>>>>> This has been discussed. Steven, Ryan and I all don't think this
> >>>>>>>>> is a good option. In case we have a large folio with 16 base
> >>>>>>>>> pages, as do_swap_page() can only map one base page for each page
> >>>>>>>>> fault, that means we have to restore 16 (tags we restore in each
> >>>>>>>>> page fault) * 16 (the number of page faults) tags for this large
> >>>>>>>>> folio.
> >>>>>>>>>
> >>>>>>>>> And still the worst thing is that the page fault on the Nth PTE of
> >>>>>>>>> the large folio might free the swap entry, as that swap has been
> >>>>>>>>> swapped in:
> >>>>>>>>> do_swap_page()
> >>>>>>>>> {
> >>>>>>>>>         /*
> >>>>>>>>>          * Remove the swap entry and conditionally try to free up
> >>>>>>>>>          * the swapcache. We're already holding a reference on the
> >>>>>>>>>          * page but haven't mapped it yet.
> >>>>>>>>>          */
> >>>>>>>>>         swap_free(entry);
> >>>>>>>>> }
> >>>>>>>>>
> >>>>>>>>> So in the page faults other than N, I mean 0 to N-1 and N+1 to 15,
> >>>>>>>>> you might access a freed tag.
> >>>>>>>>
> >>>>>>>> And David, one more piece of information is that, to keep the
> >>>>>>>> parameter of arch_swap_restore() unchanged as folio, I actually
> >>>>>>>> tried an ugly approach in RFC v2:
> >>>>>>>>
> >>>>>>>> +void arch_swap_restore(swp_entry_t entry, struct folio *folio)
> >>>>>>>> +{
> >>>>>>>> +        if (system_supports_mte()) {
> >>>>>>>> +                /*
> >>>>>>>> +                 * We don't support large folio swap-in as a whole
> >>>>>>>> +                 * yet, but we can hit a large folio which is still
> >>>>>>>> +                 * in the swapcache after the related processes'
> >>>>>>>> +                 * PTEs have been unmapped but before the swapcache
> >>>>>>>> +                 * folio is dropped. In this case, we need to find
> >>>>>>>> +                 * the exact page which "entry" is mapping to. If
> >>>>>>>> +                 * we are not hitting the swapcache, this folio
> >>>>>>>> +                 * won't be large.
> >>>>>>>> +                 */
> >>>>>>>> +                struct page *page = folio_file_page(folio, swp_offset(entry));
> >>>>>>>> +                mte_restore_tags(entry, page);
> >>>>>>>> +        }
> >>>>>>>> +}
> >>>>>>>>
> >>>>>>>> And obviously everybody in the discussion hated it :-)
> >>>>>>>>
> >>>>>>>
> >>>>>>> I can relate :D
> >>>>>>>
> >>>>>>>> I feel the only way to keep the API unchanged using folio is that
> >>>>>>>> we support restoring PTEs all together for the whole large folio,
> >>>>>>>> and that we support swap-in of large folios. This is on my list to
> >>>>>>>> do; I will send a patchset based on Ryan's large anon folios series
> >>>>>>>> after a while. Till that is really done, it seems using page rather
> >>>>>>>> than folio is a better choice.
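(For reference, the entry-to-subpage arithmetic that both the proposal
above and the RFC v2 hack rely on is roughly what page_swap_entry()
already does. This is paraphrased from memory, so the details may differ
from what is in the tree:)

static inline swp_entry_t page_swap_entry(struct page *page)
{
	struct folio *folio = page_folio(page);
	swp_entry_t entry = folio->swap;	/* entry of page[0] */

	/* entries are contiguous: step forward by the subpage's index */
	entry.val += folio_page_idx(folio, page);
	return entry;
}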
> >>>>>>>
> >>>>>>> I think just restoring all tags, and remembering for a large folio
> >>>>>>> that they have been restored, might be the low-hanging fruit. But as
> >>>>>>> always, the devil is in the detail :)
> >>>>>>
> >>>>>> Hi David,
> >>>>>> thanks for all your suggestions, though my feeling is this is too
> >>>>>> complex and is not worth it for at least three reasons.
> >>>>>
> >>>>> Fair enough.
> >>>>>
> >>>>>>
> >>>>>> 1. In multi-thread and particularly multi-process cases, we need some
> >>>>>> locks to protect and to know whether one process is the first one to
> >>>>>> restore tags, and whether someone else is restoring tags while one
> >>>>>> process wants to restore. There is no such fine-grained lock at all.
> >>>>>
> >>>>> We surely always hold the folio lock on swapin/swapout, no? So when
> >>>>> these functions are called.
> >>>>>
> >>>>> So that might just work already -- unless I am missing something
> >>>>> important.
> >>>>
> >>>> We already have a page flag that we use to mark the page as having had
> >>>> its MTE state associated: PG_mte_tagged. This is currently per-page
> >>>> (and IIUC, Matthew has been working to remove as many per-page flags
> >>>> as possible). Couldn't we just make arch_swap_restore() take a folio,
> >>>> restore the tags for *all* the pages and repurpose that flag to be
> >>>> per-folio (so head page only)? It looks like the MTE code already
> >>>> manages all the serialization requirements too. Then
> >>>> arch_swap_restore() can just exit early if it sees the flag is already
> >>>> set on the folio.
> >>>>
> >>>> One (probably nonsense) concern that just sprung to mind about having
> >>>> MTE work with large folios in general: is it possible that user space
> >>>> could cause a large anon folio to be allocated (THP), then later mark
> >>>> *part* of it to be tagged with MTE? In this case you would need to
> >>>> apply tags to part of the folio only. Although I have a vague
> >>>> recollection that any MTE areas have to be marked at mmap time and
> >>>> therefore this type of thing is impossible?
> >>>
> >>> Right, we might need to consider that only a part of the folio needs to
> >>> be mapped and have its MTE tags restored.
> >>> do_swap_page() can have a chance to hit a large folio, but it only
> >>> needs to fault in one base page.
> >>>
> >>> A case can be quite simple, as below:
> >>>
> >>> 1. anon folio shared by process A and B;
> >>> 2. add_to_swap() as a large folio;
> >>> 3. try to unmap A and B;
> >>> 4. after A is unmapped (its PTEs become swap entries), we do a
> >>> MADV_DONTNEED on a part of the folio. This can happen very easily, as
> >>> userspace still works at the 4KB level; userspace heap management can
> >>> free a basepage area by MADV_DONTNEED:
> >>> madvise(address, MADV_DONTNEED, 4KB);
> >>> 5. A refaults on address + 8KB; we will hit the large folio in
> >>> do_swap_page() but we will only need to map one basepage, and we will
> >>> never need the DONTNEEDed part in process A.
> >>>
> >>> Another, more complicated case can be mprotect and munmap on a part of
> >>> a large folio. Since userspace has no idea of large folios, it can do
> >>> all sorts of strange things. Are we sure that in all such cases the
> >>> large folios have been split into small folios?
> >
> > I don't think the examples you cite are problematic. Although user space
> > thinks about things in 4K pages, the kernel does things in units of
> > folios. So a folio is either fully swapped out or not swapped out at
> > all. MTE tags can be saved/restored per folio, even if only part of that
> > folio ends up being mapped back into user space.
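(If I read Ryan's suggestion right, the folio-granularity guard would look
roughly like the sketch below. This is only an illustration:
folio_test_set_mte_tagged() is a made-up stand-in for a per-folio
test-and-set of PG_mte_tagged, and mte_restore_folio_tags() stands in for
the restore-all loop sketched earlier.)

void arch_swap_restore(swp_entry_t start_entry, struct folio *folio)
{
	if (!system_supports_mte())
		return;

	/* An earlier fault on any page of this folio restored all the tags. */
	if (folio_test_set_mte_tagged(folio))	/* hypothetical helper */
		return;

	mte_restore_folio_tags(folio, start_entry);	/* hypothetical: loop over subpages */
}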

I am not so optimistic :-)

But zap_pte_range(), due to a DONTNEED on a part of the swapped-out folio,
can free a part of the swap entries, and thus free a part of the MTE tags
of a folio? After a process's large folio is swapped out, all PTEs in the
large folio become swap entries, but a DONTNEED on a part of this area will
set only a part of the swap entries to PTE_NONE, thus decreasing the swap
count of this part:

zap_pte_range
    -> entry = pte_to_swp_entry
        -> free_swap_and_cache(entry)
            -> mte tags invalidated

> >
> > The problem is that MTE tagging could be active only for a selection of
> > pages within the folio; that's where it gets tricky.
> >
> >>
> >> To handle that, we'd have to identify
> >>
> >> a) if a subpage has an MTE tag to save during swapout
> >> b) if a subpage has an MTE tag to restore during swapin
> >>
> >> I suspect b) can be had from whatever data structure we're using to
> >> actually save the tags?
> >>
> >> For a), is there some way to have that information from the HW?
> >
> > Yes, I agree with this approach. I don't know the answer to that
> > question though; I'd assume it must be possible. Steven?
>
> Architecturally 'all' pages have MTE metadata (although see Alex's
> series [1] which would change this).
>
> The question is: could user space have put any data in the tag storage?
> We currently use the PG_mte_tagged page flag to keep track of pages
> which have been mapped (to user space) with MTE enabled. If the page has
> never been exposed to user space with MTE enabled (since being cleared)
> then there's nothing of interest to store.
>
> It would be possible to reverse this scheme - we could drop the page
> flag and just look at the actual tag storage. If it's all zeros then
> obviously there's no point in storing it. Note that originally we had a
> lazy clear of tag storage - i.e. if user space only had mappings without
> MTE enabled then the tag storage could contain junk. I believe that's
> now changed and the tag storage is always cleared at the same time as
> the data storage.
>
> The VMAs (obviously) also carry the information about whether a range is
> MTE-enabled (the VM_MTE flag controlled by PROT_MTE in user space), but
> there could be many VMAs and they may have different permissions, so
> fetching the state from there would be expensive.
>
> Not really relevant here, but the kernel can also use MTE (HW_TAGS
> KASAN) - AFAIK there's no way of identifying if the MTE tag storage is
> used or not for kernel pages. But this only presents an issue for
> hibernation, which uses a different mechanism to normal swap.
>
> Steve
>
> [1]
> https://lore.kernel.org/r/20231119165721.9849-1-alexandru.elisei%40arm.com

Thanks
Barry