Subject: Re: [RFC PATCH v2 04/51] KVM: guest_memfd: Introduce KVM_GMEM_CONVERT_SHARED/PRIVATE ioctls
From: Ackerley Tng
To: Yan Zhao, Vishal Annapurve
Cc: Jason Gunthorpe, Alexey Kardashevskiy, Fuad Tabba, kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Wed, 16 Jul 2025 15:22:06 -0700

Yan Zhao writes:

> On Tue, Jun 24, 2025 at 07:10:38AM -0700, Vishal Annapurve wrote:
>> On Tue, Jun 24, 2025 at 6:08 AM Jason Gunthorpe wrote:
>> >
>> > On Tue, Jun 24, 2025 at 06:23:54PM +1000, Alexey Kardashevskiy wrote:
>> >
>> > > Now, I am rebasing my RFC on top of this patchset and it fails in
>> > > kvm_gmem_has_safe_refcount() as IOMMU holds references to all these
>> > > folios in my RFC.
>> > >
>> > > So what is the expected sequence here? The userspace unmaps a DMA
>> > > page and maps it back right away, all from the userspace? The end
>> > > result will be exactly the same, which seems useless.
>> > > And IOMMU TLB
>>
>> As Jason described, ideally the IOMMU, just like KVM, should just:
>> 1) Directly rely on guest_memfd for pinning -> no page refcounts taken
>> by the IOMMU stack
> In TDX Connect, the TDX module and TDs do not trust the VMM, so it's the
> TDs that inform the TDX module about which pages they use for DMA.
> Hence, if a page is regarded as pinned by a TD for DMA, the TDX module
> will fail the unmap of that page from the S-EPT.
>
> If the IOMMU side does not increase the refcount, IMHO, some way to
> indicate that certain PFNs are used by TDs for DMA is still required, so
> that guest_memfd can reject the request before attempting the actual
> unmap. Otherwise, the unmap of TD-DMA-pinned pages will fail.
>
> Upon this kind of unmapping failure, it also doesn't help for the host
> to retry unmapping without unpinning from the TD.
>

Yan, Yilun, would it work if, on conversion,

1. guest_memfd notifies the IOMMU that a conversion is about to happen
   for a PFN range,
2. the IOMMU forwards the notification to the TDX code in the kernel, and
3. the TDX code in the kernel tells the TDX module to stop treating any
   PFNs in the range as pinned for DMA?

If the above is possible, then by the time we get to unmapping from
S-EPTs, the TDX module would already consider the PFNs in the range "not
pinned for DMA". (A rough sketch of the callback chain I have in mind is
at the end of this mail.)

>> 2) Directly query pfns from guest_memfd for both shared/private ranges
>> 3) Implement an invalidation callback that guest_memfd can invoke on
>> conversions.
>>
>> Current flow:
>> Private to Shared conversion via kvm_gmem_convert_range() -
>>     1) guest_memfd invokes kvm_gmem_invalidate_begin() for the ranges
>> on each bound memslot overlapping with the range
>>          -> KVM has the concept of invalidation_begin() and end(),
>> which effectively ensures that between these function calls, no new
>> EPT/NPT entries can be added for the range.
>>     2) guest_memfd invokes kvm_gmem_convert_should_proceed(), which
>> actually unmaps the KVM SEPT/NPT entries.
>>     3) guest_memfd invokes kvm_gmem_execute_work(), which updates the
>> shareability and then splits the folios if needed.
>>
>> Shared to Private conversion via kvm_gmem_convert_range() -
>>     1) guest_memfd invokes kvm_gmem_invalidate_begin() for the ranges
>> on each bound memslot overlapping with the range
>>     2) guest_memfd invokes kvm_gmem_convert_should_proceed(), which
>> actually unmaps the host mappings, which will unmap the KVM non-secure
>> EPT/NPT entries.
>>     3) guest_memfd invokes kvm_gmem_execute_work(), which updates the
>> shareability and then merges the folios if needed.
>>
>> ============================
>>
>> For IOMMU, could something like below work?
>>
>> * A new UAPI to bind IOMMU FDs with guest_memfd ranges
>> * VFIO_DMA_MAP/UNMAP operations modified to directly fetch pfns from
>> guest_memfd ranges using kvm_gmem_get_pfn()
>>      -> KVM invokes kvm_gmem_is_private() to check the range's
>> shareability; the IOMMU could use the same, or we could add an API in
>> gmem that takes in an access type and checks the shareability before
>> returning the pfn.
>> * The IOMMU stack exposes an invalidation callback that can be invoked
>> by guest_memfd.
>>
>> Private to Shared conversion via kvm_gmem_convert_range() -
>>     1) guest_memfd invokes kvm_gmem_invalidate_begin() for the ranges
>> on each bound memslot overlapping with the range
>>     2) guest_memfd invokes kvm_gmem_convert_should_proceed(), which
>> actually unmaps the KVM SEPT/NPT entries.
>>          -> guest_memfd invokes the IOMMU invalidation callback to zap
>> the secure IOMMU entries.
> If guest_memfd could determine whether a page is used for DMA purposes
> before attempting the actual unmaps, it could reject and fail the
> conversion earlier, thereby keeping the IOMMU/S-EPT mappings intact.
>
> This could prevent the conversion from partially failing.
>

If the above suggestion works, then instead of checking whether pages are
allowed to be unmapped, guest_memfd will just force everyone to unmap.
(The second sketch at the end of this mail shows where such an
invalidation callback could slot into the conversion path.)

>>     3) guest_memfd invokes kvm_gmem_execute_work(), which updates the
>> shareability and then splits the folios if needed.
>>     4) Userspace invokes an IOMMU map operation to map the ranges in
>> the non-secure IOMMU.
>>
>> Shared to Private conversion via kvm_gmem_convert_range() -
>>     1) guest_memfd invokes kvm_gmem_invalidate_begin() for the ranges
>> on each bound memslot overlapping with the range
>>     2) guest_memfd invokes kvm_gmem_convert_should_proceed(), which
>> actually unmaps the host mappings, which will unmap the KVM non-secure
>> EPT/NPT entries.
>>          -> guest_memfd invokes the IOMMU invalidation callback to zap
>> the non-secure IOMMU entries.
>>     3) guest_memfd invokes kvm_gmem_execute_work(), which updates the
>> shareability and then merges the folios if needed.
>>     4) Userspace invokes an IOMMU map operation to map the ranges in
>> the secure IOMMU.
>>
>> There should be a way to block external IOMMU page table updates while
>> guest_memfd is performing a conversion, e.g. something like
>> kvm_invalidate_begin()/end().
>>
>> > > is going to be flushed on a page conversion anyway (the RMPUPDATE
>> > > instruction does that). All this is about AMD's x86 though.
>> >
>> > The iommu should not be using the VMA to manage the mapping. It should
>>
>> +1.
>>
>> > be directly linked to the guestmemfd in some way that does not disturb
>> > its operations. I imagine there would be some kind of invalidation
>> > callback directly to the iommu.
>> >
>> > Presumably that invalidation callback can include a reason for the
>> > invalidation (addr change, shared/private conversion, etc.)
>> >
>> > I'm not sure how we will figure out which case is which, but
>> > guestmemfd should allow the iommu to plug in either invalidation
>> > scheme.
>> >
>> > Probably invalidation should be global to the FD; I imagine that once
>> > invalidation is established the iommu will not be incrementing page
>> > refcounts.
>>
>> +1.
>>
>> >
>> > Jason
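
Here is a rough sketch of the callback chain in steps 1-3 above, to make
the proposal concrete. To be clear, every name in it
(gmem_invalidate_ops, gmem_registration, iommu_gmem_pre_convert(),
tdx_release_dma_pin(), ...) is invented for illustration and doesn't
match any existing API:

#include <linux/list.h>
#include <linux/types.h>

struct iommu_gmem_binding;	/* invented: per-binding IOMMU state */

/*
 * Invented: TDX-side helper that asks the TDX module to drop its
 * "pinned for DMA" state for the PFNs backing [start, end).
 */
int tdx_release_dma_pin(struct iommu_gmem_binding *binding,
			pgoff_t start, pgoff_t end);

struct gmem_invalidate_ops {
	/*
	 * Called by guest_memfd before it unmaps [start, end). A non-zero
	 * return rejects the conversion before any unmap is attempted.
	 */
	int (*pre_convert)(void *priv, pgoff_t start, pgoff_t end);
};

struct gmem_registration {
	struct list_head node;
	const struct gmem_invalidate_ops *ops;
	void *priv;
};

/* Step 1: guest_memfd fans the notification out to registered users. */
static int kvm_gmem_notify_pre_convert(struct list_head *registrations,
				       pgoff_t start, pgoff_t end)
{
	struct gmem_registration *reg;
	int ret;

	list_for_each_entry(reg, registrations, node) {
		ret = reg->ops->pre_convert(reg->priv, start, end);
		if (ret)
			return ret;
	}
	return 0;
}

/* Step 2: the IOMMU side forwards the notification to TDX code ... */
static int iommu_gmem_pre_convert(void *priv, pgoff_t start, pgoff_t end)
{
	struct iommu_gmem_binding *binding = priv;

	/*
	 * Step 3: ... which tells the TDX module to stop treating these
	 * PFNs as pinned for DMA, so the later S-EPT unmap can succeed.
	 */
	return tdx_release_dma_pin(binding, start, end);
}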
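
And a sketch of where the IOMMU invalidation callback could slot into the
conversion path Vishal describes. The kvm_gmem_* names are from this
series, but I'm approximating their signatures here;
gmem_iommu_invalidate() is invented for this sketch:

/* From this series; signatures approximated for the sketch. */
void kvm_gmem_invalidate_begin(struct file *file, pgoff_t start, pgoff_t end);
void kvm_gmem_invalidate_end(struct file *file, pgoff_t start, pgoff_t end);
int kvm_gmem_convert_should_proceed(struct file *file, pgoff_t start,
				    pgoff_t end, bool to_private);
int kvm_gmem_execute_work(struct file *file, pgoff_t start, pgoff_t end,
			  bool to_private);

/* Invented for this sketch. */
int gmem_iommu_invalidate(struct file *file, pgoff_t start, pgoff_t end,
			  bool to_private);

static int kvm_gmem_convert_range_sketch(struct file *file, pgoff_t start,
					  pgoff_t end, bool to_private)
{
	int ret;

	/* 1) Fence: no new EPT/NPT (or IOMMU) entries for the range. */
	kvm_gmem_invalidate_begin(file, start, end);

	/*
	 * 2) Unmap existing entries: S-EPT/NPT for private->shared, host
	 *    mappings (and hence non-secure EPT/NPT) for shared->private.
	 */
	ret = kvm_gmem_convert_should_proceed(file, start, end, to_private);
	if (ret)
		goto out;

	/* New: zap the (secure or non-secure) IOMMU entries too. */
	ret = gmem_iommu_invalidate(file, start, end, to_private);
	if (ret)
		goto out;

	/* 3) Update shareability; split or merge folios as needed. */
	ret = kvm_gmem_execute_work(file, start, end, to_private);
out:
	kvm_gmem_invalidate_end(file, start, end);
	return ret;
}

Userspace would still do step 4 (the IOMMU map for the new state) after
the conversion ioctl returns.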