From: Fuad Tabba <tabba@google.com>
Date: Fri, 16 May 2025 16:20:00 +0200
Subject: Re: [PATCH v9 07/17] KVM: guest_memfd: Allow host to map guest_memfd() pages
To: Gavin Shan
Cc: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
    pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
    anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
    aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
    brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
    xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
    jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
    isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz,
    vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name,
    david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com,
    liam.merwick@oracle.com, isaku.yamahata@gmail.com,
    kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
    steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
    quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com,
    quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
    quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
    yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org,
    will@kernel.org, qperret@google.com, keirf@google.com,
    roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com,
    rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com,
    hughd@google.com, jthoughton@google.com, peterx@redhat.com,
    pankaj.gupta@amd.com, ira.weiny@intel.com
References: <20250513163438.3942405-1-tabba@google.com> <20250513163438.3942405-8-tabba@google.com>
Hi Gavin,

On Fri, 16 May 2025 at 13:12, Gavin Shan wrote:
>
> Hi Fuad,
>
> On 5/16/25 5:56 PM, Fuad Tabba wrote:
> > On Fri, 16 May 2025 at 08:09, Gavin Shan wrote:
> >> On 5/14/25 2:34 AM, Fuad Tabba wrote:
> >>> This patch enables support for shared memory in guest_memfd, including
> >>> mapping that memory in host userspace. This support is gated by the
> >>> configuration option KVM_GMEM_SHARED_MEM, and toggled by the guest_memfd
> >>> flag GUEST_MEMFD_FLAG_SUPPORT_SHARED, which can be set when creating a
> >>> guest_memfd instance.
> >>>
> >>> Co-developed-by: Ackerley Tng
> >>> Signed-off-by: Ackerley Tng
> >>> Signed-off-by: Fuad Tabba
> >>> ---
> >>>   arch/x86/include/asm/kvm_host.h | 10 ++++
> >>>   include/linux/kvm_host.h        | 13 +++++
> >>>   include/uapi/linux/kvm.h        |  1 +
> >>>   virt/kvm/Kconfig                |  5 ++
> >>>   virt/kvm/guest_memfd.c          | 88 +++++++++++++++++++++++++++++++++
> >>>   5 files changed, 117 insertions(+)
> >>>
> >>
> >> [...]
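
(Adding a bit of context for anyone reading along in the archive: the
userspace flow this patch enables looks roughly like the sketch below.
Error handling is elided; GUEST_MEMFD_FLAG_SUPPORT_SHARED is the flag
introduced by this series, while the creation ioctl is existing
guest_memfd UAPI.)

    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/kvm.h>

    static int create_shared_gmem(int vm_fd, size_t size)
    {
            struct kvm_create_guest_memfd args = {
                    .size  = size,
                    .flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED,
            };
            int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &args);

            /* New with this series: the host can now mmap() the fd. */
            void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
                             MAP_SHARED, gmem_fd, 0);
            memset(mem, 0, size);   /* host-side access works */

            return gmem_fd;
    }
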
> >>
> >>> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> >>> index 6db515833f61..8e6d1866b55e 100644
> >>> --- a/virt/kvm/guest_memfd.c
> >>> +++ b/virt/kvm/guest_memfd.c
> >>> @@ -312,7 +312,88 @@ static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn)
> >>>       return gfn - slot->base_gfn + slot->gmem.pgoff;
> >>>  }
> >>>
> >>> +#ifdef CONFIG_KVM_GMEM_SHARED_MEM
> >>> +
> >>> +static bool kvm_gmem_supports_shared(struct inode *inode)
> >>> +{
> >>> +     uint64_t flags = (uint64_t)inode->i_private;
> >>> +
> >>> +     return flags & GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> >>> +}
> >>> +
> >>> +static vm_fault_t kvm_gmem_fault_shared(struct vm_fault *vmf)
> >>> +{
> >>> +     struct inode *inode = file_inode(vmf->vma->vm_file);
> >>> +     struct folio *folio;
> >>> +     vm_fault_t ret = VM_FAULT_LOCKED;
> >>> +
> >>> +     filemap_invalidate_lock_shared(inode->i_mapping);
> >>> +
> >>> +     folio = kvm_gmem_get_folio(inode, vmf->pgoff);
> >>> +     if (IS_ERR(folio)) {
> >>> +             int err = PTR_ERR(folio);
> >>> +
> >>> +             if (err == -EAGAIN)
> >>> +                     ret = VM_FAULT_RETRY;
> >>> +             else
> >>> +                     ret = vmf_error(err);
> >>> +
> >>> +             goto out_filemap;
> >>> +     }
> >>> +
> >>> +     if (folio_test_hwpoison(folio)) {
> >>> +             ret = VM_FAULT_HWPOISON;
> >>> +             goto out_folio;
> >>> +     }
> >>> +
> >>> +     if (WARN_ON_ONCE(folio_test_large(folio))) {
> >>> +             ret = VM_FAULT_SIGBUS;
> >>> +             goto out_folio;
> >>> +     }
> >>> +
> >>
> >> I don't think there is a large folio involved, since the max/min folio
> >> order (stored in struct address_space::flags) should have been set to 0,
> >> meaning only order-0 is possible when the folio (page) is allocated and
> >> added to the page cache. More details can be found around
> >> AS_FOLIO_ORDER_MASK. It's an unnecessary check, but not a harmful one.
> >> Maybe a comment is needed to mention that large folios aren't supported
> >> yet, but please double-check.
> >
> > The idea is to document the lack of hugepage support in code, but if
> > you think it's necessary, I could add a comment.
> >
>
> Ok, I was actually being nit-picky, since we're at v9, which is close to
> integration, I guess. If another respin is needed, a comment wouldn't be
> harmful, but it's also perfectly fine without it :)
>
> >
> >>
> >>> +     if (!folio_test_uptodate(folio)) {
> >>> +             clear_highpage(folio_page(folio, 0));
> >>> +             kvm_gmem_mark_prepared(folio);
> >>> +     }
> >>> +
> >>
> >> I must be missing something here. This chunk of code is out of sync with
> >> kvm_gmem_get_pfn(), where kvm_gmem_prepare_folio() and
> >> kvm_arch_gmem_prepare() are executed, and then PG_uptodate is set after
> >> that. In the latest ARM CCA series, kvm_arch_gmem_prepare() isn't used,
> >> but it would delegate the folio (page) with the prerequisite that the
> >> folio belongs to the private address space.
> >>
> >> I guess that kvm_arch_gmem_prepare() is skipped here because we assume
> >> that the folio belongs to the shared address space? However, this
> >> assumption isn't always true. We probably need to ensure that the folio
> >> range really belongs to the shared address space by poking
> >> kvm->mem_attr_array, which can be modified by the VMM through the ioctl
> >> KVM_SET_MEMORY_ATTRIBUTES.
> >
> > This series only supports shared memory, and the idea is not to use
> > the attributes to check. We ensure that only certain VM types can set
> > the flag (e.g., VM_TYPE_DEFAULT and KVM_X86_SW_PROTECTED_VM).
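
(To make the gating concrete: the idea is that the flag is validated when
the guest_memfd is created, based on the VM type. A rough sketch of the
shape follows -- the helper name here is illustrative, not the exact hook
in the series:)

    static int kvm_gmem_check_flags(struct kvm *kvm, u64 flags)
    {
            u64 valid_flags = 0;

            /* Hypothetical per-VM-type opt-in; see the series for
             * the actual hook. */
            if (kvm_arch_supports_gmem_shared_mem(kvm))
                    valid_flags |= GUEST_MEMFD_FLAG_SUPPORT_SHARED;

            if (flags & ~valid_flags)
                    return -EINVAL;

            return 0;
    }
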
> >
> > In the patch series that builds on it, with in-place conversion
> > between private and shared, we do add a check that the memory faulted
> > in is in fact shared.
> >
>
> Ok, thanks for your clarification. I plan to review that series, but
> haven't gotten a chance yet. Right, it's sensible to limit the capability
> of modifying a page's attribute (private vs shared) to particular machine
> types, since the whole feature (restricted mmap and in-place conversion)
> is applicable only to particular machine types. I can understand that
> KVM_X86_SW_PROTECTED_VM (similar to pKVM) needs the feature, but I don't
> understand why VM_TYPE_DEFAULT needs it. I guess we may want to use
> guest-memfd like tmpfs or shmem, meaning all the address space associated
> with a guest-memfd is shared, but without the corresponding private space
> pointed to by struct kvm_userspace_memory_region2::userspace_addr.
> Instead, 'userspace_addr' will be mmap(guest-memfd) from the VMM's
> perspective, if I'm correct.

There are two reasons why we're adding this feature for VM_TYPE_DEFAULT.
The first is for VMMs like Firecracker to be able to run guests backed
completely by guest_memfd [1]. Combined with Patrick's series for direct
map removal in guest_memfd [2], this would allow running VMs that offer
additional hardening against Spectre-like transient-execution attacks.

The other is that, in the long term, the hope is for guest_memfd to become
the main way of backing guests, regardless of the type of guest they
represent.

If you're interested in finding out more, we had a discussion about this a
couple of weeks ago during the bi-weekly guest_memfd upstream call
(May 1) [3].

Cheers,
/fuad

[1] https://github.com/firecracker-microvm/firecracker/tree/feature/secret-hiding
[2] https://lore.kernel.org/all/20250221160728.1584559-1-roypat@amazon.co.uk/
[3] https://docs.google.com/document/d/1M6766BzdY1Lhk7LiR5IqVR8B8mG3cr-cxTxOrAosPOk/edit?tab=t.0#heading=h.jwwteecellpo

> Thanks,
> Gavin
>
> > Thanks,
> > /fuad
> >
> >>> +     vmf->page = folio_file_page(folio, vmf->pgoff);
> >>> +
> >>> +out_folio:
> >>> +     if (ret != VM_FAULT_LOCKED) {
> >>> +             folio_unlock(folio);
> >>> +             folio_put(folio);
> >>> +     }
> >>> +
> >>> +out_filemap:
> >>> +     filemap_invalidate_unlock_shared(inode->i_mapping);
> >>> +
> >>> +     return ret;
> >>> +}
> >>> +
> >>
> >> Thanks,
> >> Gavin
> >>
> >
>
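
P.S. For completeness, the attribute check Gavin suggested would look
roughly like the below in today's KVM_SET_MEMORY_ATTRIBUTES model. Treat
it purely as a sketch: the follow-up series tracks shareability in the
guest_memfd itself rather than consulting kvm->mem_attr_array, and the
fault path would first have to resolve its pgoff to a gfn:

    /* Hypothetical guard in the fault path -- not what the
     * follow-up series actually does: */
    if (kvm_mem_is_private(kvm, gfn)) {
            ret = VM_FAULT_SIGBUS;
            goto out_folio;
    }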