From: Fuad Tabba <tabba@google.com>
Date: Tue, 20 May 2025 15:33:49 +0100
Subject: Re: [RFC PATCH v2 04/51] KVM: guest_memfd: Introduce KVM_GMEM_CONVERT_SHARED/PRIVATE ioctls
To: Vishal Annapurve
Cc: Ackerley Tng, kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, x86@kernel.org, linux-fsdevel@vger.kernel.org, aik@amd.com, ajones@ventanamicro.com, akpm@linux-foundation.org, amoorthy@google.com, anthony.yznaga@oracle.com, anup@brainfault.org, aou@eecs.berkeley.edu, bfoster@redhat.com, binbin.wu@linux.intel.com, brauner@kernel.org, catalin.marinas@arm.com, chao.p.peng@intel.com, chenhuacai@kernel.org, dave.hansen@intel.com, david@redhat.com, dmatlack@google.com, dwmw@amazon.co.uk, erdemaktas@google.com, fan.du@intel.com, fvdl@google.com, graf@amazon.com, haibo1.xu@intel.com, hch@infradead.org, hughd@google.com, ira.weiny@intel.com, isaku.yamahata@intel.com, jack@suse.cz, james.morse@arm.com, jarkko@kernel.org, jgg@ziepe.ca, jgowans@amazon.com, jhubbard@nvidia.com, jroedel@suse.de, jthoughton@google.com, jun.miao@intel.com, kai.huang@intel.com, keirf@google.com, kent.overstreet@linux.dev, kirill.shutemov@intel.com, liam.merwick@oracle.com, maciej.wieczor-retman@intel.com, mail@maciej.szmigiero.name, maz@kernel.org, mic@digikod.net, michael.roth@amd.com, mpe@ellerman.id.au, muchun.song@linux.dev, nikunj@amd.com, nsaenz@amazon.es, oliver.upton@linux.dev, palmer@dabbelt.com, pankaj.gupta@amd.com, paul.walmsley@sifive.com, pbonzini@redhat.com, pdurrant@amazon.co.uk, peterx@redhat.com, pgonda@google.com, pvorel@suse.cz, qperret@google.com, quic_cvanscha@quicinc.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, quic_svaddagi@quicinc.com, quic_tsoni@quicinc.com, richard.weiyang@gmail.com, rick.p.edgecombe@intel.com, rientjes@google.com, roypat@amazon.co.uk, rppt@kernel.org, seanjc@google.com, shuah@kernel.org, steven.price@arm.com, steven.sistare@oracle.com, suzuki.poulose@arm.com, thomas.lendacky@amd.com, usama.arif@bytedance.com, vbabka@suse.cz, viro@zeniv.linux.org.uk, vkuznets@redhat.com, wei.w.wang@intel.com, will@kernel.org, willy@infradead.org, xiaoyao.li@intel.com, yan.y.zhao@intel.com, yilun.xu@intel.com, yuzenghui@huawei.com, zhiquan1.li@intel.com
Hi Vishal,

On Tue, 20 May 2025 at 15:11, Vishal Annapurve wrote:
>
> On Tue, May 20, 2025 at 6:44 AM Fuad Tabba wrote:
> >
> > Hi Vishal,
> >
> > On Tue, 20 May 2025 at 14:02, Vishal Annapurve wrote:
> > >
> > > On Tue, May 20, 2025 at 2:23 AM Fuad Tabba wrote:
> > > >
> > > > Hi Ackerley,
> > > >
> > > > On Thu, 15 May 2025 at 00:43, Ackerley Tng wrote:
> > > > >
> > > > > The two new guest_memfd ioctls KVM_GMEM_CONVERT_SHARED and
> > > > > KVM_GMEM_CONVERT_PRIVATE convert the requested memory ranges to shared
> > > > > and private respectively.
> > > >
> > > > I have a high level question about this particular patch and this
> > > > approach for conversion: why do we need IOCTLs to manage conversion
> > > > between private and shared?
> > > >
> > > > In the presentations I gave at LPC [1, 2], and in my latest patch
> > > > series that performs in-place conversion [3] and the associated (by
> > > > now outdated) state diagram [4], I didn't see the need to have a
> > > > userspace-facing interface to manage that. KVM has all the information
> > > > it needs to handle conversions, which are triggered by the guest. To
> > > > me this seems like it adds additional complexity, as well as a
> > > > user-facing interface that we would need to maintain.
> > > >
> > > > There are various ways we could handle conversion without explicit
> > > > interference from userspace. What I had in mind is the following (as
> > > > an example, details can vary according to VM type). I will use the
> > > > case of conversion from shared to private because that is the more
> > > > complicated (interesting) case:
> > > >
> > > > - Guest issues a hypercall to request that a shared folio become private.
> > > >
> > > > - The hypervisor receives the call, and passes it to KVM.
> > > >
> > > > - KVM unmaps the folio from the guest stage-2 (EPT I think in x86
> > > > parlance), and unmaps it from the host. The host, however, could still
> > > > have references (e.g., GUP).
> > > >
> > > > - KVM exits to the host (hypervisor call exit), with the information
> > > > that the folio has been unshared from it.
> > > >
> > > > - A well behaving host would now get rid of all of its references
> > > > (e.g., release GUPs), perform a VCPU run, and the guest continues
> > > > running as normal. I expect this to be the common case.
> > > >
> > > > But to handle the more interesting situation, let's say that the host
> > > > doesn't do it immediately, and for some reason it holds on to some
> > > > references to that folio.
> > > >
> > > > - Even if that's the case, the guest can still run *. If the guest
> > > > tries to access the folio, KVM detects that access when it tries to
> > > > fault it into the guest, sees that the host still has references to
> > > > that folio, and exits back to the host with a memory fault exit. At
> > > > this point, the VCPU that has tried to fault in that particular folio
> > > > cannot continue running as long as it cannot fault in that folio.
> > >
> > > Are you talking about the following scheme?
> > > 1) guest_memfd checks shareability on each get pfn and if there is a
> > > mismatch exit to the host.
> >
> > I think we are not really on the same page here (no pun intended :) ).
> > I'll try to answer your questions anyway...
> >
> > Which get_pfn? Are you referring to get_pfn when faulting the page
> > into the guest or into the host?
>
> I am referring to guest fault handling in KVM.
>
> > > 2) host user space has to guess whether it's a pending refcount or
> > > whether it's an actual mismatch.
> >
> > No need to guess. VCPU run will let it know exactly why it's exiting.
> >
> > > 3) guest_memfd will maintain a third state
> > > "pending_private_conversion" or equivalent which will transition to
> > > private upon the last refcount drop of each page.
> > >
> > > If conversion is triggered by userspace (in case of pKVM, it will be
> > > triggered from within the KVM (?)):
> >
> > Why would conversion be triggered by userspace? As far as I know, it's
> > the guest that triggers the conversion.
> >
> > > * Conversion will just fail if there are extra refcounts and userspace
> > > can try to get rid of extra refcounts on the range while it has enough
> > > context without hitting any ambiguity with memory fault exit.
> > > * guest_memfd will not have to deal with this extra state from 3 above
> > > and overall guest_memfd conversion handling becomes relatively
> > > simpler.
> >
> > That's not really related. The extra state isn't necessary any more
> > once we agreed in the previous discussion that we will retry instead.
>
> Who is *we* here? Which entity will retry conversion?

Userspace will re-attempt the VCPU run.

> > > Note that for x86 CoCo cases, memory conversion is already triggered
> > > by userspace using a KVM ioctl; this series is proposing to use a
> > > guest_memfd ioctl to do the same.
> >
> > The reason why for x86 CoCo cases conversion is already triggered by
> > userspace using a KVM ioctl is that it has to be, since shared memory
> > and private memory are two separate pages, and userspace needs to
> > manage that. Sharing memory in place removes the need for that.
>
> Userspace still needs to clean up memory usage before conversion is
> successful, e.g. remove IOMMU mappings for shared to private
> conversion. I would think that memory conversion should not succeed
> before all existing users let go of the guest_memfd pages for the
> range being converted.

Yes. Userspace will know that it needs to do that on the VCPU exit,
which informs it of the guest's hypervisor request to unshare (convert
from shared to private) the page.

> In x86 CoCo use cases, userspace can also decide to not allow
> conversion for scenarios where ranges are still under active use by
> the host and the guest is erroneously trying to take away memory. Both
> the SNP and TDX specs allow failure of conversion due to in-use memory.

How can the guest erroneously try to take away memory? If the guest
sends a hypervisor request asking for a conversion of memory that
doesn't belong to it, then I would expect the hypervisor to prevent
that. I don't see how having an IOCTL to trigger the conversion is
needed to allow conversion failure.
How is that different from userspace ignoring or delaying releasing all
references it has for the conversion request?

> > This series isn't using the same ioctl; it's introducing new ones to
> > perform a task that, as far as I can tell so far, KVM can handle by
> > itself.
>
> I would like to understand this better. How will KVM handle the
> conversion process for guest_memfd pages? Can you help walk an example
> sequence for shared to private conversion specifically around
> guest_memfd offset states?

To make sure that we are discussing the same scenario, can you do the
same as well please: walk me through an example sequence for shared to
private conversion, specifically around guest_memfd offset states,
with the IOCTLs involved?

Here is an example that I have implemented and tested with pKVM. Note
that there are alternatives; the flow below is architecture or even
VM-type dependent. None of this is core KVM code, and the behaviour
could vary.

Assuming the folio is shared with the host:

Guest sends unshare hypercall to the hypervisor

Hypervisor forwards request to KVM (gmem) (having done due diligence)

KVM (gmem) performs an unmap_folio(), exits to userspace with
KVM_EXIT_UNSHARE and all the information about the folio being unshared

Case 1:
Userspace removes any remaining references (GUPs, IOMMU mappings, etc.)
Userspace calls vcpu_run():
KVM (gmem) sees that there aren't any references, sets state to PRIVATE

Case 2 (alternative 1):
Userspace doesn't release its references

Userspace calls vcpu_run():
KVM (gmem) sees that there are still references, exits back to
userspace with KVM_EXIT_UNSHARE

Case 2 (alternative 2):
Userspace doesn't release its references

Userspace calls vcpu_run():
KVM (gmem) sees that there are still references, unmaps the folio from
the guest, but allows it to run (until it tries to fault in the folio)

Guest tries to fault in the folio that still has references; KVM does
not allow that (it sees that the folio is shared, and it doesn't fault
in shared folios to confidential guests)

KVM exits back to userspace with KVM_EXIT_UNSHARE

As I mentioned, the alternatives above are _not_ set in core KVM code.
They can vary by architecture or VM type, depending on the policy,
support, etc.

Now for your example please on how this would work with IOCTLs :)

Thanks,
/fuad

> > > - Allows not having to keep track of separate shared/private range
> > > information in KVM.
> >
> > This patch series is already tracking shared/private range
> > information in KVM.
> >
> > > - Simpler handling of the conversion process done per guest_memfd
> > > rather than for full range.
> > > - Userspace can handle the rollback as needed, simplifying error
> > > handling in guest_memfd.
> > > - guest_memfd is single source of truth and notifies the users of
> > > shareability change.
> > >     - e.g. IOMMU, userspace, KVM MMU all can be registered for
> > > getting notifications from guest_memfd directly and will get notified
> > > for invalidation upon shareability attribute updates.
> >
> > All of these can still be done without introducing a new ioctl.
> >
> > Cheers,
> > /fuad