From: Barry Song <21cnbao@gmail.com>
Date: Fri, 17 Nov 2023 07:47:00 +0800
Subject: Re: [RFC V3 PATCH] arm64: mm: swap: save and restore mte tags for large folios
To: David Hildenbrand
Cc: steven.price@arm.com, akpm@linux-foundation.org, ryan.roberts@arm.com,
 catalin.marinas@arm.com, will@kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, mhocko@suse.com, shy828301@gmail.com,
 v-songbaohua@oppo.com, wangkefeng.wang@huawei.com, willy@infradead.org,
 xiang@kernel.org, ying.huang@intel.com, yuzhao@google.com
In-Reply-To: <864489b3-5d85-4145-b5bb-5d8a74b9b92d@redhat.com>
References: <20231114014313.67232-1-v-songbaohua@oppo.com> <864489b3-5d85-4145-b5bb-5d8a74b9b92d@redhat.com>
On Thu, Nov 16, 2023 at 5:36 PM David Hildenbrand wrote:
>
> On 15.11.23 21:49, Barry Song wrote:
> > On Wed, Nov 15, 2023 at 11:16 PM David Hildenbrand wrote:
> >>
> >> On 14.11.23 02:43, Barry Song wrote:
> >>> This patch makes MTE tag saving and restoring support large folios,
> >>> so we don't need to split them into base
> >>> pages for swapping out
> >>> on ARM64 SoCs with MTE.
> >>>
> >>> arch_prepare_to_swap() should take a folio rather than a page as
> >>> parameter because we support THP swap-out as a whole.
> >>>
> >>> Meanwhile, arch_swap_restore() should use a page parameter rather
> >>> than a folio, as swap-in always works at the granularity of base
> >>> pages right now.
> >>
> >> ... but then we always have order-0 folios and can pass a folio, or
> >> what am I missing?
> >
> > Hi David,
> > you missed the discussion here:
> >
> > https://lore.kernel.org/lkml/CAGsJ_4yXjex8txgEGt7+WMKp4uDQTn-fR06ijv4Ac68MkhjMDw@mail.gmail.com/
> > https://lore.kernel.org/lkml/CAGsJ_4xmBAcApyK8NgVQeX_Znp5e8D4fbbhGguOkNzmh1Veocg@mail.gmail.com/
>
> Okay, so you want to handle the refault-from-swapcache case where you
> get a large folio.
>
> I was misled by your "folio as swap-in always works at the granularity
> of base pages right now" comment.
>
> What you actually wanted to say is "While we always swap in small
> folios, we might refault large folios from the swapcache, and we only
> want to restore the tags for the page of the large folio we are
> faulting on."
>
> But I wonder if we can't simply restore the tags for the whole thing
> at once and make the interface page-free?
>
> Let me elaborate:
>
> IIRC, if we have a large folio in the swapcache, the swap
> entries/offsets are contiguous. If you know you are faulting on
> page[1] of the folio with a given swap offset, you can calculate the
> swap offset for page[0] simply by subtracting from the offset.
>
> See page_swap_entry() on how we perform this calculation.
>
> So you can simply pass the large folio and the swap entry
> corresponding to the first page of the large folio, and restore all
> tags at once.
>
> So the interface would be
>
> arch_prepare_to_swap(struct folio *folio);
> void arch_swap_restore(struct folio *folio, swp_entry_t start_entry);
>
> I'm sorry if that was also already discussed.

This has been discussed.
Steven, Ryan and I all think this is not a good option. Consider a
large folio with 16 base pages: since do_swap_page() can map only one
base page per page fault, we would have to restore tags
16 (pages whose tags are restored in each fault) * 16 (the number of
page faults) times for this large folio.

Still worse, the page fault on the Nth PTE of the large folio might
free the swap entry, since do_swap_page() frees the entry while
mapping the page:

do_swap_page()
{
        /*
         * Remove the swap entry and conditionally try to free up the
         * swapcache.  We're already holding a reference on the page
         * but haven't mapped it yet.
         */
        swap_free(entry);
}

So in the page faults other than N, I mean 0 to N-1 and N+1 to 15, you
might access a freed tag.

>
> BUT, IIRC in the context of
>
> commit cfeed8ffe55b37fa10286aaaa1369da00cb88440
> Author: David Hildenbrand
> Date:   Mon Aug 21 18:08:46 2023 +0200
>
>     mm/swap: stop using page->private on tail pages for THP_SWAP
>
>     Patch series "mm/swap: stop using page->private on tail pages for
>     THP_SWAP + cleanups".
>
>     This series stops using page->private on tail pages for THP_SWAP,
>     replaces folio->private by folio->swap for swapcache folios, and
>     starts using "new_folio" for tail pages that we are splitting to
>     remove the usage of page->private for swapcache handling
>     completely.
>
> As long as the folio is in the swapcache, we even do have the proper
> swp_entry_t start_entry available as folio_swap_entry(folio).
>
> But now I am confused when we actually would have to pass
> "swp_entry_t start_entry". We shouldn't if the folio is in the
> swapcache ...
>

No, hitting the swapcache doesn't necessarily mean the tags have been
restored. When A forks B, C, D, E and F, then A, B, C, D, E and F all
share the swap slot, and we have two chances to hit the swapcache:

1. swap-out: unmap has been done but the folios haven't been dropped
   yet;
2. swap-in: the sharing processes allocate folios and add them to the
   swapcache.

For case 2, if A faults earlier than B, A will allocate the folio and
add it to the swapcache.
Then B will hit the swapcache. But if B's CPU is faster than A's, B
might still be the one mapping the PTE earlier than A, even though A
is the one which added the page to the swapcache. We have to make sure
the MTE tags are there when the mapping is done.

> --
> Cheers,
>
> David / dhildenb
>

Thanks
Barry
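[Editor's note: for readers following the contiguous-offset argument in
the thread, here is a minimal userspace sketch of the arithmetic that
page_swap_entry() performs in the kernel. The names `folio_sketch`,
`entry_for_page` and `start_entry_from_fault` are invented for
illustration; only the idea — swap offsets of a large folio's pages are
contiguous, so the start entry is recovered by subtracting the page
index — comes from the discussion above.]

```c
#include <assert.h>

/* toy stand-in for the kernel's swp_entry_t */
typedef struct { unsigned long val; } swp_entry_t;

/* hypothetical stand-in for the fields of a swapcache folio we need */
struct folio_sketch {
	unsigned int order;	/* the folio spans 1 << order base pages */
	swp_entry_t swap;	/* entry of page[0]; offsets are contiguous */
};

/* page[i] of a swapcache folio sits at the start offset plus i */
static swp_entry_t entry_for_page(const struct folio_sketch *folio,
				  unsigned long page_index)
{
	return (swp_entry_t){ folio->swap.val + page_index };
}

/* inverse direction: subtract the page index to recover page[0]'s entry */
static swp_entry_t start_entry_from_fault(swp_entry_t faulting_entry,
					  unsigned long page_index)
{
	return (swp_entry_t){ faulting_entry.val - page_index };
}
```

With this, a fault on any page of the folio can hand the architecture
code the folio plus the start entry, which is what the proposed
`arch_swap_restore(folio, start_entry)` interface relies on.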